> That brotli dictionary file will work with Zstd? How?
Code:
zstd.exe --ultra -22 -fo book1.zst book1
zstd.exe -D dictionary.bin --ultra -22 -fo book1d.zst book1
zstd.exe --patch-from=dictionary.bin --ultra -22 -fo book1p.zst book1
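zstd doesn't care where the dictionary file comes from - afaik, if it doesn't start with zstd's dictionary header, -D just loads it as a raw content dictionary, and --patch-from uses it as a reference/prefix.
Decompression has to be given the same file, something like:
Code:
zstd.exe -d -D dictionary.bin -fo book1d.out book1d.zst
zstd.exe -d --patch-from=dictionary.bin -fo book1p.out book1p.zst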
> What do you mean that compression improvement is not significant?
Code:
768,771 book1
768,771 book1r // reorder.exe forward.xlt book1 book1r
256,361 book1.bro // brotli_gc82.exe -q 11 -fo book1.bro book1
263,455 book1r.bro // brotli_gc82.exe -q 11 -fo book1r.bro book1r
261,092 book1.lzma // lzma.exe e book1 book1.lzma
262,109 book1r.lzma // lzma.exe e book1r book1r.lzma
264,731 book1.zst // zstd.exe --ultra -22 -fo book1.zst book1
264,800 book1r.zst // zstd.exe --ultra -22 -fo book1r.zst book1r
I mean that most of brotli's gains here come from its integrated dictionary rather
than from entropy-model improvements.
Well, usually - there could be significant differences if we tried compressing 16-bit WAVs
or Chinese text in UTF-8.
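Btw, a minimal sketch of the kind of byte-remapping filter used above (treating forward.xlt as just a 256-byte translation table; the real reorder.exe may do more):
Code:
/* remap.c - map every input byte through a 256-byte translation table.
   Usage: remap table.xlt infile outfile
   (sketch only; treats the table as a plain byte->byte mapping) */
#include <stdio.h>

int main(int argc, char** argv)
{
    unsigned char xlt[256];
    if (argc != 4) { fprintf(stderr, "remap table.xlt infile outfile\n"); return 1; }

    FILE* ft = fopen(argv[1], "rb");
    FILE* fi = fopen(argv[2], "rb");
    FILE* fo = fopen(argv[3], "wb");
    if (!ft || !fi || !fo) { fprintf(stderr, "file error\n"); return 1; }
    if (fread(xlt, 1, 256, ft) != 256) { fprintf(stderr, "bad table\n"); return 1; }

    int c;
    while ((c = getc(fi)) != EOF) putc(xlt[c], fo);

    fclose(fo); fclose(fi); fclose(ft);
    return 0;
}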
> But it is in my testing. Brotli even beats Pavlov's gzip by a wide margin,
> and Pavlov's gzip has been incredible on other files.
Does it really?
Code:
32,768 book1r
13,753 book1r.bro
13,823 book1r.gz
13,844 book1r.lzma
13,879 book1r.zst
You have to take into account that deflate only supports a 32k window,
so for files longer than 32k it's usually inferior to any newer codec, Pavlov or not -
hence the 32,768-byte sample above, where the whole file fits in the window.
Deflate doesn't have an integrated dictionary either.
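To be clear, there's no built-in dictionary shipped with deflate, but zlib does let you pass your own preset dictionary via deflateSetDictionary() (only the last 32k of it can actually be used, same window limit). Just a sketch, if anyone wants to try dictionary.bin with deflate:
Code:
/* gzdict.c - deflate with a caller-supplied preset dictionary (zlib).
   Usage: gzdict dictionary.bin infile
   Only the most recent 32k of the dictionary is used (deflate window size). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

static unsigned char* slurp(const char* name, long* size) {
    FILE* f = fopen(name, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END); *size = ftell(f); fseek(f, 0, SEEK_SET);
    unsigned char* buf = malloc(*size);
    if (buf && fread(buf, 1, *size, f) != (size_t)*size) { free(buf); buf = NULL; }
    fclose(f);
    return buf;
}

int main(int argc, char** argv) {
    if (argc != 3) { fprintf(stderr, "gzdict dictionary.bin infile\n"); return 1; }
    long dlen, slen;
    unsigned char* dict = slurp(argv[1], &dlen);
    unsigned char* src  = slurp(argv[2], &slen);
    if (!dict || !src) { fprintf(stderr, "file error\n"); return 1; }

    z_stream zs; memset(&zs, 0, sizeof(zs));
    deflateInit(&zs, Z_BEST_COMPRESSION);
    /* must be called after deflateInit() and before the first deflate() */
    deflateSetDictionary(&zs, dict, (uInt)dlen);

    uLong cap = deflateBound(&zs, (uLong)slen);
    unsigned char* dst = malloc(cap);
    zs.next_in  = src;  zs.avail_in  = (uInt)slen;
    zs.next_out = dst;  zs.avail_out = (uInt)cap;
    deflate(&zs, Z_FINISH);

    printf("%ld -> %lu bytes\n", slen, zs.total_out);
    /* the decompressor has to call inflateSetDictionary() with the same data */
    deflateEnd(&zs);
    free(dict); free(src); free(dst);
    return 0;
}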