Multithreaded support in 7za
According to "-m (Set compression Method) switch # ZipMultiThread" in the 7-Zip manual & documentation, mt defaults to on, so there's no need to specify it at all.
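If you want to set it explicitly anyway, the switch looks like this (archive and file names are just placeholders):
7za a -mmt=on archive.zip file1 file2
You can also pin the thread count with something like -mmt=4.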
However, 7zip's implementation of the DEFLATE algorithm doesn't support multi-threading!
As you have already discovered,
7za a archive.zip bigfile
only uses one core.
But the .zip format compresses every file individually, so when you archive several files, the multi-threading option assigns one file to each core.
Try it and you'll see that
7za a archive.zip bigfile1 ... bigfileN
will keep all N available cores busy, one file per core.
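To double-check, 7za can print per-entry details, and each file should show up as its own DEFLATE stream (the archive name is a placeholder):
7za l -slt archive.zip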
If you want to speed up the compression of a single file, you have two choices:
1. Split up bigfile into chunks (see the sketch after this list).
2. Use a different compression algorithm.
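A minimal sketch of the chunking approach, assuming bigfile is your single large input (chunk size and names are just placeholders):
split -b 512M bigfile bigfile.part.
7za a archive.zip bigfile.part.*
7za can then compress the chunks in parallel, one per core; to restore the original after extraction, concatenate the pieces again with cat bigfile.part.* > bigfile.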
As for the second option: 7zip's implementation of the BZip2 algorithm supports multi-threading.
The syntax is:
7za a -mm=BZip2 archive.zip bigfile
Also, the syntax error is caused by your attempt to use the LZMA algorithm for a .zip container. That's not possible.
The possible algorithms for .zip containers are DEFLATE(64), BZip2 and no compression.
If you want to use the LZMA algorithm, use a .7z
container. This container also handles the following algorithms: PPMd, BZip2, DEFLATE, BCJ, BCJ2 and no compression.
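For a single big file, the LZMA2 variant in a .7z container is probably the most direct route, since, as far as I know, LZMA2 (unlike the algorithms listed above) can split one stream across several threads (names are placeholders):
7za a -m0=lzma2 -mmt=on archive.7z bigfile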
This is an old question, and not an answer to the specific question, but an answer to the spirit of the question (using all cores to compress to a zip format):
pigz (parallel gzip, with a .zip output option):
pigz -K -k bigfile.txt
This will give you a zip-compatible file about 7x faster at the same compression level.
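The output should be a file named bigfile.txt.zip (pigz appends the .zip suffix), and you can check that a stock unzip accepts it:
unzip -t bigfile.txt.zip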
A quick comparison of zip-compatible and non-zip compressors using single and multiple cores.
Wall times on an i7-2600K to compress a 1.0 GB text file on Fedora 20:
67s (120 MB) 7za (zip, 1 thread)
15s (141 MB) 7za -mx=4 (zip, 1 thread)
17s (132 MB) zip (zip, 1 thread)
5s (131 MB) pigz -K -k (zip, 8 threads)
9s (106 MB) bsc (libbsc.com) (not zip, 8 threads)
5s (130 MB) zhuff -c2 (not zip, 8 threads)
2s (149 MB) zhuff (not zip, 8 threads)
Wall times to decompress:
4.2s unzip -t
2.0s pigz -t
5.1s bsc d
0.5s zhuff -d
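Numbers like these can be reproduced by prefixing each command with the shell's time builtin (file names are placeholders, and your hardware will give different figures):
time pigz -K -k bigfile.txt
time unzip -t bigfile.txt.zip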
Another option to achieve multi-threaded compression on Linux is to use what Facebook uses: Zstandard. On Ubuntu, you install it like this:
sudo apt install zstd
Super fast multi-threaded compression:
tar cf - /folder/you/want/to/compress | zstdmt -o /location/to/output/fileName$(date '+%Y-%m-%d_%H:%M:%S').tar.zst
You can specify compression levels 1-19 (3 is default).
Max compression (slowest):
tar cf - /folder/you/want/to/compress | zstdmt -19 -o /location/to/output/fileName$(date '+%Y-%m-%d_%H:%M:%S').tar.zst
Medium compression (level 10):
tar cf - /folder/you/want/to/compress | zstdmt -10 -o /location/to/output/fileName$(date '+%Y-%m-%d_%H:%M:%S').tar.zst
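To unpack one of these archives later, the stream can be piped back through tar the same way (the file name is a placeholder):
zstdmt -dc fileName.tar.zst | tar xf -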
My overall experience is that Zstandard compression isn't as strong as 7zip, but it is way faster and the zstdmt
command tries to use all cores.
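If your system doesn't have a zstdmt command, zstd -T0 should behave the same way; as far as I can tell, zstdmt is simply zstd with multi-threading enabled by default.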
BTW, on Windows, 7zip uses all processors by default, and I'm very disappointed that this is not the case on Linux. It's been this way for several years at this point, and I wish 7zip were multi-threaded by default on Linux too.