Some nitpicking from my side:
- In the tables, the column "compress size (MB)" is actually in MiB; e.g. the xz level 9 result of ~46.47 MiB corresponds to ~46.47 * 1024 * 1024 ≈ 48.7xx.xxx bytes.
- Automatically determined block sizes destroy multi-threading at higher levels for some algorithms (visible in the "compress cpu %" column). This can be reproduced/improved by explicitly setting block sizes (see the xz results below). So there is some comparing apples to oranges here (e.g. plzip vs. xz at the highest level reach a similar ratio, but at 200% vs. 100% CPU). At least the tables make clear what happens, and it is actually a good test of how well compressors perform when used without any extra command line parameters.
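The MiB-vs-MB point above is just a factor of 1024²/1000²; a couple of lines make the difference concrete (the helper names here are my own, and the byte count is the xz -9 result from the table below):

```python
def bytes_to_mib(n):
    """Binary mebibytes: 1 MiB = 1024 * 1024 bytes."""
    return n / (1024 ** 2)

def bytes_to_mb(n):
    """Decimal megabytes: 1 MB = 1000 * 1000 bytes."""
    return n / (1000 ** 2)

xz9 = 48_765_608  # xz -9 compressed size in bytes
print(f"{bytes_to_mib(xz9):.2f} MiB")  # -> 46.51 MiB (what the column actually shows)
print(f"{bytes_to_mb(xz9):.2f} MB")    # -> 48.77 MB (what a literal "MB" would imply)
```

So labeling a 46.51 MiB result as "46.51 MB" understates it by roughly 5%.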
Results from my quick tests (to verify the numbers above and compare with Precomp):
Code:
Command line                        compressed size          compression time  decompression time  notes
precomp                             46.783.211 (44.62 MiB)   1 min 36 s        6 s                 1
precomp -t+                         48.920.911 (46.65 MiB)   1 min 31 s        5 s                 1, 2
precomp -lm4000                     46.630.387 (44.47 MiB)   2 min 49 s        6 s                 3
xz -9                               48.765.608 (46.51 MiB)   2 min 35 s        4 s
xz -9 -T 0                          48.812.012 (46.55 MiB)   2 min 34 s        4 s
xz -9 -T 0 --block-size=100663296   48.911.256 (46.65 MiB)   1 min 24 s        4 s                 4
notes:
test machine: cloud server, 2 vCPU, 2 threads, Skylake, ~2.1 GHz, 4 GB RAM
1) automatically determined block size: 96 MiB on this machine; can differ on other machines
2) this is pure LZMA2 without recompression; should be very similar to multithreaded xz -9
3) allowing more memory usage improves the compression ratio, but leads to a 192 MiB block size, so no multithreading boost anymore
4) 100.663.296 bytes = 96 MiB, to make it comparable to the first Precomp result
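As a rough sketch of why notes 1, 3 and 4 matter: a multithreaded compressor can use at most one thread per block, so once the block size reaches the input size, only one thread has work. The model below is simplified, and the 150 MiB input size is a made-up illustration, not the actual test file size:

```python
import math

MIB = 1024 * 1024

def usable_threads(input_size, block_size, cpu_threads):
    """At most one worker per block, capped by available CPU threads.
    Simplified model of block-parallel compression (e.g. xz -T)."""
    return min(cpu_threads, math.ceil(input_size / block_size))

assert 96 * MIB == 100_663_296      # the --block-size value from note 4

size = 150 * MIB                    # hypothetical uncompressed input size
print(usable_threads(size, 96 * MIB, 2))   # 96 MiB blocks  -> 2 threads busy
print(usable_threads(size, 192 * MIB, 2))  # 192 MiB blocks -> 1 thread (note 3)
```

With the 192 MiB block size from note 3, any input below 192 MiB fits in a single block, which matches the loss of the multithreading boost on this 2-vCPU machine.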