View Poll Results: What should I release next?

Voters: 16. You may not vote on this poll.
  • Improved LZ4X: 3 (18.75%)
  • Improved ULZ: 3 (18.75%)
  • Improved ULZ with a large window: 10 (62.50%)
Results 61 to 90 of 101

Thread: LZ4X - An Optimized LZ4 Compressor

  1. #61
    Member
    Join Date
    Dec 2011
    Location
    Cambridge, UK
    Posts
    486
    Thanks
    167
    Thanked 166 Times in 114 Posts
    Rather off topic, but going with the flow.

    For acceptance of file formats, it can be good to make your code as easily adoptable as possible. The last thing you want is someone to reimplement your format in their own slightly buggy or incompatible code, simply because they're trying to avoid any license issues. Rightly or wrongly, this makes GPL and even LGPL a bad move in those cases. Yes it's just fine for companies to be linking in an LGPL library, but far too many companies have idiots dictating "thou shalt not use GPL" out of fear rather than understanding. That's annoying as there is very little you can do to combat idiocy.

    The flip side is that it's nice for improvements to your code to be forced to be public. It depends on what you're trying to achieve: harnessing a larger workforce and getting changes back from others, or achieving maximum adoption and user base. Hence no single license is best.

  2. #62
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    4,000
    Thanks
    387
    Thanked 365 Times in 145 Posts
    For LZ4X I'm choosing "Public Domain".

  3. Thanks (2):

    jibz (23rd April 2016),Mike (20th April 2016)

  4. #63
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    4,000
    Thanks
    387
    Thanked 365 Times in 145 Posts

  5. Thanks (5):

    comp1 (2nd August 2016),Cyan (2nd August 2016),m^3 (3rd August 2016),Paul W. (4th August 2016),Turtle (4th August 2016)

  6. #64
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    4,000
    Thanks
    387
    Thanked 365 Times in 145 Posts
    The open source version of LZ4X will have -1..-9 compression levels:

    -1..-7 - CompressGreedy(), chain_len=1<<level
    -8 - CompressGreedy(), chain_len=8192
    -9 - CompressOptimal()

    -5 is the default (chain_len=32)
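    For illustration, here is a minimal sketch of how a command-line level could be mapped to the parse mode and chain length listed above (the names LevelConfig/level_config are mine, not LZ4X's actual code):
    Code:
    #include <cstdio>

    // Sketch only: maps a level (1..9) to the parse mode and chain length
    // described above. Names are hypothetical, not LZ4X's actual identifiers.
    struct LevelConfig { bool optimal; int chain_len; };

    LevelConfig level_config(int level)
    {
      if (level >= 9) return { true, 0 };       // -9: CompressOptimal()
      if (level == 8) return { false, 8192 };   // -8: CompressGreedy(), chain_len=8192
      return { false, 1 << level };             // -1..-7: CompressGreedy(), chain_len=1<<level
    }

    int main()
    {
      const LevelConfig def = level_config(5);  // -5 is the default
      std::printf("default chain_len=%d\n", def.chain_len); // prints 32
    }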

    Work in progress...
    If you have any suggestions or tips, let me know!

  7. Thanks (2):

    Cyan (11th August 2016),Paul W. (11th August 2016)

  8. #65
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    4,000
    Thanks
    387
    Thanked 365 Times in 145 Posts
    Uploaded the source - you all enjoy!

  9. Thanks (9):

    Bulat Ziganshin (12th August 2016),Cyan (12th August 2016),hexagone (12th August 2016),jibz (12th August 2016),Mike (12th August 2016),RamiroCruzo (12th August 2016),stbrumme (12th August 2016),Turtle (13th August 2016),xezz (12th August 2016)

  10. #66
    Member
    Join Date
    Feb 2016
    Location
    USA
    Posts
    3
    Thanks
    1
    Thanked 0 Times in 0 Posts
    Any chance of a Win x64 build?

  11. #67
    Member
    Join Date
    Jul 2013
    Location
    United States
    Posts
    194
    Thanks
    44
    Thanked 140 Times in 69 Posts
    The compressor (at least) isn't memory-safe:

    Code:
    nemequ@peltast:~/local/src/lz4x/src$ git:(master 1↑) 10A dd if=/dev/urandom of=rand.data bs=$(expr 1024 \* 1024) count=32
    32+0 records in
    32+0 records out
    33554432 bytes (34 MB, 32 MiB) copied, 2.19973 s, 15.3 MB/s
    nemequ@peltast:~/local/src/lz4x/src$ git:(master 1↑) 10A g++ -g -fno-omit-frame-pointer -O3 -fsanitize=address -o lz4x lz4x.cpp && ./lz4x -f -9 rand.data rand.data.lz4 && ./lz4x -f -d rand.data.lz4 rand.data.unlz4
    Compressing rand.data:
    =================================================================
    ==4442==ERROR: AddressSanitizer: global-buffer-overflow on address 0x000006605720 at pc 0x000000403aa3 bp 0x7ffd5c3ceb00 sp 0x7ffd5c3ceaf0
    WRITE of size 4 at 0x000006605720 thread T0
        #0 0x403aa2 in compress_optimal() /home/nemequ/local/src/lz4x/src/lz4x.cpp:318
        #1 0x401835 in main /home/nemequ/local/src/lz4x/src/lz4x.cpp:653
        #2 0x7fdab9b0e730 in __libc_start_main (/lib64/libc.so.6+0x20730)
        #3 0x4019b8 in _start (/home/nemequ/local/src/lz4x/src/lz4x+0x4019b8)
    
    0x000006605720 is located 32 bytes to the left of global variable 'nodes' defined in 'lz4x.cpp:219:14' (0x6605740) of size 524288
    0x000006605720 is located 0 bytes to the right of global variable 'path' defined in 'lz4x.cpp:226:5' (0x605720) of size 100663296
    SUMMARY: AddressSanitizer: global-buffer-overflow /home/nemequ/local/src/lz4x/src/lz4x.cpp:318 in compress_optimal()
    Shadow bytes around the buggy address:
      0x000080cb8a90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x000080cb8aa0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x000080cb8ab0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x000080cb8ac0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x000080cb8ad0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    =>0x000080cb8ae0: 00 00 00 00[f9]f9 f9 f9 00 00 00 00 00 00 00 00
      0x000080cb8af0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x000080cb8b00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x000080cb8b10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x000080cb8b20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x000080cb8b30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    Shadow byte legend (one shadow byte represents 8 application bytes):
      Addressable:           00
      Partially addressable: 01 02 03 04 05 06 07 
      Heap left redzone:       fa
      Heap right redzone:      fb
      Freed heap region:       fd
      Stack left redzone:      f1
      Stack mid redzone:       f2
      Stack right redzone:     f3
      Stack partial redzone:   f4
      Stack after return:      f5
      Stack use after scope:   f8
      Global redzone:          f9
      Global init order:       f6
      Poisoned by user:        f7
      Container overflow:      fc
      Array cookie:            ac
      Intra object redzone:    bb
      ASan internal:           fe
      Left alloca redzone:     ca
      Right alloca redzone:    cb
    ==4442==ABORTING

  12. Thanks:

    encode (12th August 2016)

  13. #68
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    4,000
    Thanks
    387
    Thanked 365 Times in 145 Posts
    Fixed!

  14. #69
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    4,000
    Thanks
    387
    Thanked 365 Times in 145 Posts
    LZ4X v1.12 - made some changes proposed by nemequ


  15. #70
    Member
    Join Date
    Jul 2013
    Location
    United States
    Posts
    194
    Thanks
    44
    Thanked 140 Times in 69 Posts
    Code:
    nemequ@hoplite:~/local/src/lz4x/src$ git:(master) g++ -o lz4x lz4x.cpp 
    lz4x.cpp:33:23: fatal error: sys/utime.h: No such file or directory
     #include <sys/utime.h>
                           ^
    compilation terminated.
    The fix for this (using _WIN32 instead of depending on a non-standard NO_UTIME macro) was included in my pull request. AFAICT that is a Windows-specific header (POSIX has a <utime.h>; no idea if it is compatible with Windows' <sys/utime.h>).
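    For what it's worth, a guard along these lines would keep the include portable (a sketch of the idea, not necessarily the exact code from the pull request):
    Code:
    // Sketch of a portable include: <sys/utime.h> is the Windows location,
    // while POSIX systems provide <utime.h>. Whether the two are fully
    // compatible is a separate question - this only fixes the missing header.
    #ifdef _WIN32
    #  include <sys/utime.h>
    #else
    #  include <utime.h>
    #endif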
    Last edited by nemequ; 12th August 2016 at 17:59. Reason: use [code] not [quote] (oops)

  16. #71
    Member
    Join Date
    Mar 2013
    Location
    Berlin
    Posts
    45
    Thanks
    14
    Thanked 71 Times in 30 Posts
    A few weeks ago I started my own LZ4 engine, called smalLZ4, which is based on optimal parsing, too.
    There is a quick'n'dirty website with full C++/C source code: http://create.stephan-brumme.com/smallz4/

    Code comes with a simple command-line interface and works fine with G++, Clang++ and Visual C++.
    The compressor is a single C++ file (smallz4.cpp) without any external dependencies, while the decompressor is actually plain C code (a single file, too: smallz4cat.c).

    My resulting files are a bit smaller than those produced by encode:

    enwik9:
    LZ4X -9:     372,068,631 bytes
    smalLZ4 -9: 371,690,616 bytes
    Difference: 378,015 bytes

    (For reference: lz4 -9 => 374,905,570 bytes)

    Right now my program needs about 6 minutes to compress enwik9 - about half the speed of LZ4X.
    Performance is my main issue and I still have to run quite a few tests to make sure my program always emits valid LZ4.
    As soon as things have stabilized, I will add a Git repo on my server.

    PS: I have no intention to ever break LZ4 compatibility (like using larger windows) - I just want to create perfect LZ4 files
    Last edited by stbrumme; 12th August 2016 at 20:00. Reason: optimized end-of-block handling which saves a few bytes

  17. Thanks (5):

    comp1 (12th August 2016),Cyan (12th August 2016),encode (12th August 2016),Stephan Busch (17th August 2016),Turtle (13th August 2016)

  18. #72
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    4,000
    Thanks
    387
    Thanked 365 Times in 145 Posts
    LZ4X compresses to the Legacy LZ4 frame format = fixed 8 MB independent blocks. smallz4 compresses to the newer frame format, which allows solid (dependent) blocks and may improve compression a little bit.
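    For reference, the legacy frame is very simple: a magic number followed by size-prefixed blocks. Below is a minimal sketch of the layout as I understand it (compress_block() is a hypothetical placeholder, not LZ4X's code):
    Code:
    #include <cstdio>
    #include <cstdint>
    #include <cstddef>

    // LZ4 legacy ("LZ4Demo") frame, as I read the spec: magic 0x184C2102 in
    // little-endian, then for each block a 4-byte little-endian compressed size
    // followed by the compressed bytes. Blocks hold up to 8 MB of uncompressed
    // data and are compressed independently of each other.
    static void write_u32_le(std::FILE* f, uint32_t v)
    {
      unsigned char b[4] = { (unsigned char)(v), (unsigned char)(v >> 8),
                             (unsigned char)(v >> 16), (unsigned char)(v >> 24) };
      std::fwrite(b, 1, 4, f);
    }

    void write_legacy_frame(std::FILE* out, const unsigned char* data, size_t size)
    {
      const size_t BLOCK_SIZE = 8 * 1024 * 1024;   // fixed 8 MB independent blocks
      write_u32_le(out, 0x184C2102u);              // legacy frame magic
      for (size_t off = 0; off < size; off += BLOCK_SIZE)
      {
        const size_t n = (size - off < BLOCK_SIZE) ? size - off : BLOCK_SIZE;
        // Hypothetical block encoder - the real one is LZ4X's compressor:
        //   std::vector<unsigned char> c = compress_block(data + off, n);
        //   write_u32_le(out, (uint32_t)c.size());
        //   std::fwrite(c.data(), 1, c.size(), out);
        (void)data; (void)n;
      }
    }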

  19. #73
    Member
    Join Date
    Mar 2013
    Location
    Berlin
    Posts
    45
    Thanks
    14
    Thanked 71 Times in 30 Posts
    @encode:
    Is there a special reason behind your decision to use legacy frames (besides slightly lower file header overhead)?

    From https://github.com/Cyan4973/lz4/wiki...rame_format.md :
    Newer compressors should not use this format anymore, as it is too restrictive.

  20. #74
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    4,000
    Thanks
    387
    Thanked 365 Times in 145 Posts
    The main reason is simplicity. The LZ4Demo(Legacy) format was pretty close to what I did with ULZ/CRUSH/LZSS/LZPM. So I just adapted some of my work to be LZ4-compatible!

  21. #75
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    4,000
    Thanks
    387
    Thanked 365 Times in 145 Posts
    Updated to LZ4X v1.15 - improved compiler compatibility (tested with Cygwin), replaced rewind() with fseek(f, 0, SEEK_SET), since with some compilers rewind() does not work properly with large files.



    Quick MSC compile:
    Attached Files

  22. Thanks (2):

    Cyan (16th August 2016),Stephan Busch (17th August 2016)

  23. #76
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    4,000
    Thanks
    387
    Thanked 365 Times in 145 Posts
    smalLZ4's optimal parsing is not 100% accurate, especially on binary files. For example:

    mso97.dll from maximumcompression.com:
    Code:
    SmalLZ4 -9 -> 2,508,956 bytes
    LZ4X    -9 -> 2,508,177 bytes

  24. Thanks:

    stbrumme (17th August 2016)

  25. #77
    Tester
    Stephan Busch's Avatar
    Join Date
    May 2008
    Location
    Bremen, Germany
    Posts
    876
    Thanks
    474
    Thanked 175 Times in 85 Posts
    Can someone please provide a compiled binary of smalLZ4?

  26. #78
    Member
    Join Date
    Jul 2013
    Location
    United States
    Posts
    194
    Thanks
    44
    Thanked 140 Times in 69 Posts
    Compiled binaries really have no business in a repository… perhaps instead of adding the zip to the git repository you could create a release on GitHub? Putting compiled binaries in the repo basically makes everyone using the git repository download the binary for every version you ever create, and people typically use git because they want the source code, not a pre-compiled binary.

    Less importantly, it would also be helpful if you made one commit per change, with a descriptive commit message rather than "Updated to v1.xy". Then, when you want to make a release, create a tag (which will create a release on GitHub), and you can attach binaries to the release if you wish.

  27. Thanks (2):

    Bulat Ziganshin (17th August 2016),jibz (17th August 2016)

  28. #79
    Member
    Join Date
    Mar 2013
    Location
    Berlin
    Posts
    45
    Thanks
    14
    Thanked 71 Times in 30 Posts
    There was a bug in the cost function which overestimated the number of length bytes needed for long sequences of literals.

    mso97.dll:
    Version 0.3: 2,508,956 bytes
    Version 0.4: 2,508,181 bytes

    Legacy frames have less overhead, that's why LZ4X's file is still 4 bytes smaller.

    enwik9:
    Version 0.3: 371,690,616 bytes
    Version 0.4: 371,681,075 bytes
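    For context, this is the quantity such a cost function has to get right: in the LZ4 block format a literal run of up to 14 bytes fits into the token's high nibble, while the value 15 means extra length bytes follow, each covering up to 255 more literals. A small sketch of that cost (my reading of the block format, not smallz4's actual cost function):
    Code:
    #include <cstdint>

    // Extra length bytes needed to encode a run of 'length' literals in the LZ4
    // block format: 0..14 fit into the token's 4-bit field; 15 and above need one
    // extra byte, plus one more per additional 255 literals. (Sketch, not smallz4.)
    uint32_t literal_length_bytes(uint32_t length)
    {
      if (length < 15)
        return 0;                     // fits entirely in the token nibble
      return 1 + (length - 15) / 255; // e.g. 15..269 -> 1 byte, 270..524 -> 2 bytes
    }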

    As a bonus I added:


    Last edited by stbrumme; 17th August 2016 at 22:05. Reason: added link to basic algorithm description

  29. Thanks (4):

    Bulat Ziganshin (17th August 2016),Cyan (17th August 2016),encode (18th August 2016),JamesB (18th August 2016)

  30. #80
    Member
    Join Date
    Aug 2016
    Location
    USA
    Posts
    42
    Thanks
    11
    Thanked 17 Times in 12 Posts
    Hi, I'm a long-time lurker and finally decided to try some things. Thanks for sharing LZ4X and smallz4. I played with smallz4, trying to optimize it a bit further (in optimal mode only); I don't think it can get as fast as LZ4X in that case because of the matcher, which takes > 75% of the run time. I did manage to speed it up slightly, by about 10% as measured on enwik8. There are other opportunities, but in -9 mode it's all about the speed of findLongestMatch and how often it gets called. As always with optimization, YMMV - I used VS 2015 and its profiler to squeeze things a bit.


    Only two changes are immediate wins: drop the default constructor for the Match class, and try this match finder:



    Code:
    bool match4(const unsigned char* ptr, const int off) const
    {
      const auto a = reinterpret_cast<const uint32_t*>(ptr);
      const auto b = reinterpret_cast<const uint32_t*>(ptr + off);
      return *a == *b;
    }

    bool back_check_match(const unsigned char* current, int length, const int off) const
    {
      length -= 4;
      while (length > 0)
      {
        if (match4(current + length, off))
          length -= 4;
        else
          return false;
      }
      // no need to handle remaining bytes, we knew the first 4 were OK anyway
      return true;
    }

    const unsigned char* find_match_end(const unsigned char* current,
                                        const unsigned char* end,
                                        const int off) const
    {
      while (current < end && current[off] == *current)
        ++current;
      return current;
    }

    /// find longest match of data[pos] between data[begin] and data[end],
    /// using the match chain stored in 'previous'
    Match findLongestMatch(const unsigned char* data, const uint32_t pos,
                           const uint32_t begin, const uint32_t end,
                           const Distance* previous)
    {
      Match result;        // no longer default-initialized to anything
      result.length = 1;

      // compression level: look only at the first n entries of the match chain
      int32_t stepsLeft = maxChainLength;

      // pointer to the position that is matched against everything in data
      const auto current = data + pos - begin;
      const auto end_p   = current + end - pos;

      // get distance to previous match, abort if -1 => not existing
      Distance distance = previous[pos % PreviousSize];
      int neg_offset = 0;
      while (distance != NoPrevious)
      {
        // too far back?
        neg_offset -= distance;
        if (neg_offset < -MaxDistance)
          return result;

        // stop searching on lower compression levels
        if (--stepsLeft < 0)
          return result;

        // prepare next position
        distance = previous[(pos + neg_offset) % PreviousSize];

        // go backwards to confirm the beginning of the match; we also know the
        // first 4 bytes are a match, so we can go 4 bytes at a time and stop early
        if (!back_check_match(current, result.length, neg_offset))
          continue;

        // now go forwards to extend the match
        const Length match_len =
          find_match_end(current + result.length, end_p, neg_offset) - current;

        // match longer than before?
        if (match_len > result.length)
        {
          result.distance = -neg_offset;
          result.length   = match_len;
        }
      }
      return result;
    }

  31. Thanks:

    stbrumme (1st September 2016)

  32. #81
    Member
    Join Date
    Oct 2016
    Location
    Berlin
    Posts
    9
    Thanks
    8
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by stbrumme
    Windows executables (compiled with Clang 3.7 x64)
    These aren't stand-alone but need external DLLs - could you please provide static builds?

  33. #82
    Member
    Join Date
    Nov 2015
    Location
    boot ROM
    Posts
    95
    Thanks
    27
    Thanked 17 Times in 15 Posts
    Interesting thing: LZ4X is nearly the first LZ compressor I'm not really able to use for practical purposes, despite being a cool thing.

    Here is the story. It shows some funny properties:
    - On levels 1 to 8 it compresses quite fast, BUT somehow lz4hc (as seen in lzbench's LZ4 version) beats it in ratio on every file I've given it.
    - On level 9 LZ4X wins, sure. BUT I ran into really fancy trouble along the way.

    The problem with level 9:
    * Give it a 1 MiB file - works, beats LZ4HC, cool!
    * Give it a 1.1 MiB file - still works, compression time is small enough.
    * A 1.2 MiB file - works, but now it takes around 10 seconds. The trend looks a bit suspicious, no?
    * A 1.25 MiB file - still works, but now it's about 35 seconds on my hardware. A mere 50 KiB for a 3x slowdown? Seems I've entered a steep curve.
    * A 1.3 MiB file... OK, I can wait, and after roughly ~7 minutes it's done. But hey, another 50 KiB cut the speed by 15x?
    * If I grab the "full" ~2 MiB version of the file, I'm not patient enough to wait for compression to complete.

    So it seems that at about 1.2-1.3 MiB I'm hitting a really sharp, exponential slowdown. Is this part of the plan?

    All the mentioned near-1 MiB files are the very same file (a DDS texture, fairly compressible, ~3x in LZ4X's case), successively truncated to figure out at which point it stops working for me - a kind of binary partitioning of the file size to get a rough idea. On this particular file and hardware it seems I can barely do 1.3 MiB, and I'm not sure how long the whole 2 MiB file would take; the trend does not look promising at all.

    Another interesting observation: GCC 6.1 beats GCC 5.4 into the dust on this particular compressor, speeding compression up by about 2x (it's a rare sight to get a 2x boost from a mere compiler upgrade, lol). But given the overall sharp exponential-style trend, that barely buys, say, 50 KiB extra. The whole 2 MiB file did not finish compressing even after a few hours, so I gave up.

    I used GCC 5.4 and then 6.1 on x86-64 Linux (with the later version considerably faster on this particular algorithm). I usually use -O3, but I gave -O0 and -O2 a shot; they kill speed, but the overall trend stays the same - the sharp exponential spike grossly outweighs anything else.

    The Linux kernel (~20 MiB of highly redundant code + data) did not finish compressing even when I let it run overnight. So my "standard" test of compressing a certain Linux kernel binary (it makes a nice crash-test dummy and could also bring some practical benefits if the crush test goes well) has failed. It seems in this case the guys running the test die of old age and the crash-test dummy gets bored and walks away.

    I still wonder: is this sharp exponential growth of compression time really expected and desirable?

  34. #83
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,531
    Thanks
    755
    Thanked 674 Times in 365 Posts
    1. Optimal parsing engines have O(n^2) worst-case behavior. It's usually fixed by a "fast bytes" optimization; probably lz4x lacks it. The BWT-based lz4a is the only optimal-parsing compressor that can overcome this limit.
    2. I'm pretty sure the behavior you saw was due to the 8 MB L3 cache size. Optimal parsers usually use 8x the input data size to store the search tree.
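    To illustrate the "fast bytes" idea (a sketch with made-up names, not LZ4X's actual code): the match finder walks candidate positions as usual, but accepts a match as soon as it reaches a threshold, which bounds the work per position even on degenerate inputs such as long runs of zeros.
    Code:
    #include <cstddef>
    #include <vector>

    // Length of the match between earlier position 'a' and current position 'b',
    // limited by the end of the buffer.
    static size_t match_length(const unsigned char* data, size_t a, size_t b, size_t end)
    {
      size_t len = 0;
      while (b + len < end && data[a + len] == data[b + len])
        ++len;
      return len;
    }

    // Walk earlier candidate positions (e.g. a hash chain) and stop as soon as a
    // match of at least 'fast_bytes' is found instead of searching exhaustively.
    size_t find_match_with_cutoff(const unsigned char* data, size_t end, size_t pos,
                                  const std::vector<size_t>& candidates, size_t fast_bytes)
    {
      size_t best = 0;
      for (size_t cand : candidates)
      {
        const size_t len = match_length(data, cand, pos, end);
        if (len > best)
          best = len;
        if (best >= fast_bytes)
          break;                      // "good enough" - accept immediately
      }
      return best;
    }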

  35. Thanks:

    joerg (19th October 2016)

  36. #84
    Member
    Join Date
    Nov 2015
    Location
    boot ROM
    Posts
    95
    Thanks
    27
    Thanked 17 Times in 15 Posts
    I think I've mostly solved this fancy puzzle. The mentioned file has a large chunk of zeros (0x00 bytes) at the end, starting roughly at the 1.2 MiB boundary I'd tracked down. The Linux kernel also contains several large areas full of zeros in its uncompressed image. It seems that's what provokes the corner case, and that's what I would call the WORST case :P. So it seems LZ4X (and LZ4a) go nuts when facing a large chunk of zeros. As simple as that.

  37. #85
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    4,000
    Thanks
    387
    Thanked 365 Times in 145 Posts
    Yep, that is correct!
    With the next versions of LZ4X I'll keep -9 as a brute-force level, as it is now, but will change -8 to "practical" optimal parsing by adding a search depth limit. Currently -9 has no search limit of any kind - that's why we're facing these weird corner cases...

  38. Thanks:

    xcrh (20th October 2016)

  39. #86
    Member
    Join Date
    Nov 2015
    Location
    boot ROM
    Posts
    95
    Thanks
    27
    Thanked 17 Times in 15 Posts
    Quote Originally Posted by encode
    Yep, that is correct!
    With the next versions of LZ4X I'll keep -9 as a brute-force level, as it is now, but will change -8 to "practical" optimal parsing by adding a search depth limit.
    Sounds reasonable. I could even suggest several different levels doing this, with different limits (and resulting speeds). I agree it is interesting to see the best one can get within a particular bitstream format, but sometimes one may really want it to be slightly suboptimal. Btw, the LZ5 v2 stream format looks quite interesting - it targets even better ratios without being slow to decompress.

    Quote Originally Posted by encode
    Currently -9 has no search limit of any kind - that's why we're facing these weird corner cases...
    Yeah, it's not very hard to guess after taking a look at the data in a hex editor. Somehow I forgot about this part and got really puzzled by the behavior. Though it seems I'm good at picking redundant files full of zeros.

  40. #87
    Member
    Join Date
    Mar 2013
    Location
    Berlin
    Posts
    45
    Thanks
    14
    Thanked 71 Times in 30 Posts
    Long runs of identical bytes (e.g. lots of zeros) aren't handled well by LZ4X. My implementation smallz4 (see the discussion at http://encode.su/threads/2593-smallz4 or grab the source code at http://create.stephan-brumme.com/smallz4/) contains a small tweak: if the previous byte matched "itself", i.e. the match distance is one, then my program assumes that this match is optimal for the current byte, too, and skips the search for a better match (see lines 505-517).
    In addition, my implementation supports match finding across block boundaries and therefore usually achieves a better compression ratio than LZ4X.
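    In other words, once the parser is inside a run of identical bytes it keeps reusing the distance-1 match instead of searching again. A minimal sketch of that shortcut (names are mine and simplified; the real code is in smallz4.cpp around the lines mentioned above):
    Code:
    #include <cstdint>
    #include <cstddef>

    struct SimpleMatch { uint32_t length; uint32_t distance; };

    // If the previous position already matched at distance 1 (i.e. we are inside a
    // run of identical bytes), assume the same run - one byte shorter - is optimal
    // for the current position and skip the expensive match search entirely.
    // Hypothetical helper, not smallz4's actual function.
    bool reuse_run_match(const unsigned char* data, size_t pos,
                         const SimpleMatch& prev, SimpleMatch* out)
    {
      const uint32_t MIN_MATCH = 4;                 // LZ4's minimum match length
      if (pos == 0 || prev.distance != 1 || prev.length <= MIN_MATCH)
        return false;                               // not inside a usable distance-1 run
      if (data[pos] != data[pos - 1])
        return false;                               // the run has ended
      out->distance = 1;
      out->length   = prev.length - 1;              // same run, shifted by one byte
      return true;
    }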

  41. Thanks (2):

    Marsu42 (1st November 2016),xcrh (3rd November 2016)

  42. #88
    Member
    Join Date
    Nov 2015
    Location
    boot ROM
    Posts
    95
    Thanks
    27
    Thanked 17 Times in 15 Posts
    Quote Originally Posted by stbrumme
    Long runs of identical bytes (e.g. lots of zeros) aren't handled well by LZ4X. My implementation smallz4 (see the discussion at http://encode.su/threads/2593-smallz4 or grab the source code at http://create.stephan-brumme.com/smallz4/) contains a small tweak: if the previous byte matched "itself", i.e. the match distance is one, then my program assumes that this match is optimal for the current byte, too, and skips the search for a better match (see lines 505-517).
    In addition, my implementation supports match finding across block boundaries and therefore usually achieves a better compression ratio than LZ4X.
    So it completes compression in a foreseeable time, indeed! Also, smallz4 at level 9 beats lz4hc for sure. Btw, thanks for the nice description of optimal parsing on your site.

  43. #89
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    4,000
    Thanks
    387
    Thanked 365 Times in 145 Posts
    Well, currently I'm testing LZ4X v1.20...

  44. #90
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    4,000
    Thanks
    387
    Thanked 365 Times in 145 Posts
    LZ4X v1.20 - a quick fix is here:
    https://github.com/encode84/lz4x



