
Thread: LZ4m

  1. #1
    Member
    Join Date
    Nov 2015
    Location
    Ślůnsk, PL
    Posts
    81
    Thanks
    9
    Thanked 13 Times in 11 Posts

    LZ4m

    No source code, just a paper. On the authors' proprietary dataset it outperforms LZ4.
    http://csl.skku.edu/papers/icce17.pdf

  2. Thanks:

    Bulat Ziganshin (5th June 2017)

  3. #2
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,572
    Thanks
    783
    Thanked 687 Times in 372 Posts
    all you need to know:
    In contrast to the original LZ4 algorithm, LZ4m scans an input stream and finds the match in a 4-byte granularity. If the hash table indicates no prefix match exists, LZ4m advances the window by 4 bytes and repeats identifying the prefix match.
    i wonder whether it deserves a paper
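    Since there's no source, here is a minimal sketch (mine, not the authors') of the scan loop that quote describes: hash the 4-byte word at the cursor, probe a hash table for a previous occurrence, and on a miss slide the window forward by 4 bytes. The table size, the hash multiplier and the find_match_4b name are illustrative assumptions, not taken from the paper.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define HASH_LOG  12
    #define HASH_SIZE (1u << HASH_LOG)

    /* Knuth multiplicative hash over the 4-byte prefix; the multiplier is
       the one LZ4 itself uses, the table size is an assumption. */
    static uint32_t hash32(uint32_t v)
    {
        return (v * 2654435761u) >> (32 - HASH_LOG);
    }

    /* Advance through src in 4-byte steps until the hash table yields a
       verified 4-byte prefix match; returns the match offset, or
       (size_t)-1 when the input is exhausted. `table` must be zeroed
       before the first call. */
    static size_t find_match_4b(const uint8_t *src, size_t len,
                                size_t *pos, uint32_t table[HASH_SIZE])
    {
        while (*pos + 4 <= len) {
            uint32_t word, cword;
            memcpy(&word, src + *pos, 4);   /* 4-byte prefix at the cursor */
            uint32_t h = hash32(word);
            size_t cand = table[h];         /* last position with this hash */
            table[h] = (uint32_t)*pos;

            memcpy(&cword, src + cand, 4);  /* verify: hashes can collide */
            if (cand < *pos && cword == word)
                return cand;                /* prefix match found */

            *pos += 4;                      /* miss: advance window by 4 */
        }
        return (size_t)-1;
    }

    Touching only a quarter of the positions is presumably where the claimed speedup comes from; the price is that matches at unaligned offsets are never found, which would explain the reported ~3% ratio loss.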

  4. Thanks (4):

    encode (5th June 2017), RamiroCruzo (5th June 2017), Shelwien (5th June 2017), snowcat (5th June 2017)

  5. #3
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,497
    Thanks
    26
    Thanked 132 Times in 102 Posts
    If it works on groups of 4 bytes then it should be comparable to https://github.com/centaurean/density

  6. #4
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,572
    Thanks
    783
    Thanked 687 Times in 372 Posts
    density isn't an lz77 compressor

  7. #5
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,497
    Thanks
    26
    Thanked 132 Times in 102 Posts
    I was thinking about efficiency. If treating data as groups of 4 bytes (vs. 1 byte) doesn't hurt compression much (because of the specific dataset), then density should fare well.

  8. #6
    Member
    Join Date
    Mar 2013
    Location
    Berlin
    Posts
    47
    Thanks
    14
    Thanked 73 Times in 32 Posts
    Our evaluation results with the data obtained from a running mobile device show that LZ4m outperforms previous compression algorithms in compression and decompression speed by up to 2.1× and 1.8×, respectively, with a marginal loss of less than 3% in compression ratio.
    That speed improvement is supposed to be their main selling point, yet they mostly discuss the slightly worse compression ratio and devote just one chart to (de)compression speed, without detailed commentary.
