Zip64 Support for Huge Files
First of all, our unit now supports the long-awaited Zip64 extension. In a nutshell, it allows storing files bigger than 4GB, or a total .zip archive bigger than 4GB - which is the maximum size a 32-bit field can store.
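As an illustration (in Python rather than mORMot's Object Pascal), the classic .zip format keeps sizes in 32-bit fields, so 0xFFFFFFFF bytes is the ceiling; Zip64 moves the real 64-bit sizes into an extra field. Python's standard `zipfile` module can be asked to emit those Zip64 structures explicitly:

```python
import io
import zipfile

CLASSIC_LIMIT = 0xFFFFFFFF  # largest value a 32-bit size field can hold (4GiB - 1)

buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w', allowZip64=True) as zf:
    # force_zip64 writes the Zip64 extra field even for this tiny entry,
    # exactly as would be required for any file bigger than CLASSIC_LIMIT
    with zf.open('big.bin', 'w', force_zip64=True) as entry:
        entry.write(b'payload')

with zipfile.ZipFile(buf) as zf:
    data = zf.read('big.bin')  # reads back transparently, Zip64 or not
print(data)
```

A reader of the archive does not need to care: the Zip64 records are detected and used automatically.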
TZipRead / TZipWrite Enhancements
The TZipRead and TZipWrite classes were deeply refactored and enhanced. Not only has Zip64 support been added, but some files can now be ignored and skipped during reading - a very efficient way of deleting files from a .zip. Some additional methods have been introduced, e.g. to quickly validate the integrity of a .zip file. Cross-platform support has been enhanced, too.
TZipRead used to map the whole .zip file into memory. That was convenient for small content, but huge files won't fit into the address space of a Win32 application: you could not use Zip64 in 32-bit executables - not very convenient, for sure! Moreover, the performance of memory-mapped files is typically lower than explicit Seek/Read calls, since the kernel is involved to handle the page faults and read the data from disk. This had to be fixed.
Now a memory buffer size is specified to the TZipRead.Create constructors; this buffer holds the last bytes of the .zip file, where the directory header is located, so it can be parsed very efficiently at opening. Actual content decompression then uses regular Seek/Read calls, only when needed. Of course, if the data is already available in the memory buffer - which is the case for the last files, or for smaller .zip archives - it is taken from there. So the new approach seems a very reasonable implementation - typically faster than any other zip library I have seen, and than our previous code.
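The trick works because a .zip archive keeps its directory at the end of the file. Here is a minimal Python sketch of the same idea (not mORMot's actual code; `TAIL_SIZE` stands in for the buffer size passed to the constructor): one buffered read of the file tail is enough to locate the End Of Central Directory record and the directory offset, after which entry data can be fetched lazily with Seek/Read.

```python
import io
import struct
import zipfile

EOCD_SIG = b'PK\x05\x06'   # End Of Central Directory signature
TAIL_SIZE = 1 << 16        # hypothetical memory buffer size

def central_dir_location(f):
    """Parse the .zip tail from a single buffered read, like TZipRead does."""
    f.seek(0, io.SEEK_END)
    file_size = f.tell()
    f.seek(max(0, file_size - TAIL_SIZE))
    tail = f.read()                         # one sequential read of the tail
    pos = tail.rfind(EOCD_SIG)
    assert pos >= 0, 'not a .zip file'
    # EOCD layout: sig(4), disk numbers(4), entry counts(4), dir size(4), dir offset(4)
    dir_size, dir_offset = struct.unpack_from('<II', tail, pos + 12)
    return dir_offset, dir_size

buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('a.txt', 'hello')
offset, size = central_dir_location(buf)
print(offset, size)
```

Once the directory is parsed, decompressing a given entry only needs one seek to its local header - no full-file mapping involved.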
Perhaps the main change of this refactoring is the integration of the libdeflate library. It is a library for fast, whole-buffer DEFLATE-based compression and decompression. In practice, when working on memory buffers (not streams), it is able to leverage very efficient assembly code for modern CPUs (e.g. using AVX), making it much faster than any other zlib implementation around. If streams are involved - e.g. when decompressing huge files - then we fall back to the regular zlib code.
The libdeflate implementation of the crc or adler checksums is astonishing: on my Intel Core i5, crc() went from 800MB/s to 10GB/s. And this crc is used for .zip file checksums, so it really helps.
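The checksum in question is the standard CRC-32 of the DEFLATE world, the same algorithm zlib exposes - which is why a faster crc implementation speeds up every archive operation. A small Python illustration (again an analogue, not mORMot code) shows that the CRC computed over a buffer matches the one recorded in the archive entry:

```python
import io
import zlib
import zipfile

payload = b'Fast and furious ' * 1024
crc = zlib.crc32(payload)          # whole-buffer CRC-32, as .zip uses

buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('data.bin', payload)
with zipfile.ZipFile(buf) as zf:
    stored = zf.getinfo('data.bin').CRC  # CRC recorded in the archive entry
print(crc == stored)
```

Every entry written or verified therefore pays the crc cost once per buffer - at 10GB/s, that cost becomes negligible.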
Compression and decompression are also almost twice as fast as regular zlib, thanks to a full rewrite of the deflate engine targeting modern CPUs, with tuned assembly for the bottlenecks.
Last but not least, you can use higher compression levels: regular zlib understands levels from 0 (stored) to 9 (slowest), but libdeflate also accepts 10..12 for even higher compression - at the expense of compression speed, which becomes very slow; decompression speed, however, remains on par with the other levels.
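The usual level trade-off is easy to demonstrate with plain zlib (Python's binding stops at level 9; the 10..12 range is a libdeflate extension, so it cannot be shown here):

```python
import zlib

data = b'the quick brown fox jumps over the lazy dog ' * 2000
fast = zlib.compress(data, level=1)   # fastest, largest output
best = zlib.compress(data, level=9)   # slowest, smallest output (zlib's ceiling)
# libdeflate's levels 10..12 would shrink `best` a bit further,
# at a much lower compression speed but unchanged decompression speed
print(len(data), len(fast), len(best))
```

Picking a level is thus a one-time cost paid by the producer of the archive; readers are unaffected.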
We statically linked libdeflate, so you don't need any external library. Sadly, it is currently available for FPC only, since static linking with Delphi is an incompatible mess.
Note that libdeflate will be used anywhere in mORMot where deflate/zip buffer compression is involved: for instance, regular HTTP/HTTPS on-the-fly gzip compression will be much faster, and even some unexpected parts of the framework will benefit from it - e.g. our default RESTful URI authentication uses the zlib crc() for its online checksum, so each REST request is slightly faster.
Integration with Signed Executables
The last enhancement is the ability to append a .zip content to an existing digitally signed executable. Since "mORMot 1", we have allowed to find and read any .zip content appended to an executable. But if you digitally signed this executable, you would need to re-sign it after appending - not very convenient, e.g. when you build a customized executable. We added some functions to include the .zip content within the signature itself, allowing to store some additional data or configuration in a convenient format, without requiring to sign the executable again.
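The plain "appended .zip" case (the signature-embedding variant is mORMot-specific) can be sketched in Python: because .zip parsing starts from the end of the file, an archive glued after any leading payload - here a fake executable - is still perfectly readable.

```python
import io
import zipfile

fake_exe = b'MZ' + b'\x00' * 256      # stand-in for a real executable image

zip_bytes = io.BytesIO()
with zipfile.ZipFile(zip_bytes, 'w') as zf:
    zf.writestr('config.json', '{"mode": "demo"}')

# concatenate: executable first, archive after it
combined = io.BytesIO(fake_exe + zip_bytes.getvalue())
with zipfile.ZipFile(combined) as zf:   # opens fine despite the leading bytes
    config = zf.read('config.json')
print(config)
```

The reader simply locates the directory from the tail and adjusts all offsets by the size of the leading data - which is why appending works at all, and why embedding the same content inside the signature keeps the signed bytes intact.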
Use the Source, Luke!
Check the mormot.core.zip.pas unit in our Open Source repository!