No source code, just a paper. On the authors' proprietary dataset it outperforms LZ4.
http://csl.skku.edu/papers/icce17.pdf
Bulat Ziganshin (5th June 2017)
all you need to know:
I wonder whether it deserves a paper.

"In contrast to the original LZ4 algorithm, LZ4m scans an input stream and finds the match at a 4-byte granularity. If the hash table indicates no prefix match exists, LZ4m advances the window by 4 bytes and repeats identifying the prefix match."
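The quoted scheme can be sketched in a few lines. This is a toy illustration of 4-byte-granularity match scanning, not the authors' implementation (the paper ships no source code); the function name and the returned tuple layout are my own invention:

```python
def find_matches_4byte(data: bytes):
    """Toy sketch of LZ4m-style 4-byte-granularity scanning:
    positions and match lengths are always multiples of 4."""
    table = {}      # 4-byte word -> earliest aligned position seen
    matches = []    # (position, match_position, length_in_bytes)
    pos = 0
    while pos + 4 <= len(data):
        word = data[pos:pos + 4]
        prev = table.get(word)
        if prev is None:
            # no prefix match in the hash table: remember this word
            # and advance the window by 4 bytes, as the paper describes
            table[word] = pos
            pos += 4
            continue
        # prefix match found: extend it forward in 4-byte steps
        length = 4
        while (pos + length + 4 <= len(data)
               and data[prev + length:prev + length + 4]
                   == data[pos + length:pos + length + 4]):
            length += 4
        matches.append((pos, prev, length))
        pos += length
    return matches

# two aligned repeats of b"ABCD" are found, each 4 bytes long
print(find_matches_4byte(b"ABCDxxxxABCDABCD"))  # [(8, 0, 4), (12, 0, 4)]
```

The coarse granularity is where the claimed speedup comes from: the scanner touches a quarter of the positions a byte-granular LZ4 would, at the cost of missing unaligned or odd-length matches (hence the small compression-ratio loss).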
If it works on groups of 4 bytes then it should be comparable to https://github.com/centaurean/density
density isn't an lz77 compressor
I was thinking about efficiency. If treating data as groups of 4 bytes (vs 1 byte) doesn't hurt compression much (because of specific dataset) then density should fare well.
That speed improvement is supposed to be their main selling point, but somehow they only talk about the slightly worse compression ratio and have just one chart about the (de-)compression speed, without detailed comments.

"Our evaluation results with the data obtained from a running mobile device show that LZ4m outperforms previous compression algorithms in compression and decompression speed by up to 2.1× and 1.8×, respectively, with a marginal loss of less than 3% in compression ratio."