
Thread: new compressor LZNA = "LZ-nibbled-ANS" - "Oodle 1.45"

  1. #1
    Member
    Join Date
    May 2008
    Location
    Germany
    Posts
    412
    Thanks
    38
    Thanked 64 Times in 38 Posts

    new compressor LZNA = "LZ-nibbled-ANS" - "Oodle 1.45"

    http://cbloomrants.blogspot.de/2015/...odle-lzna.html
    ---
    LZNA is a high compression LZ (usually a bit more than 7z/LZMA) with better decode speed. Around 2.5X faster to decode than LZMA.

    Anyone who needs LZMA-level compression and higher decode speeds should consider LZNA. Currently LZNA requires SSE2 to be fast, so it only runs full speed on modern platforms with x86 chips.

    LZNA gets its speed from two primary changes. 1. It uses RANS instead of arithmetic coding. 2. It uses nibble-wise coding instead of bit-wise coding, so it can do 4x fewer coding operations in some cases. The magic sauce that makes these possible is Ryg's realization about mixing cumulative probability distributions. That lets you do the bitwise-style shift update of probabilities (keeping a power of two total), but on larger alphabets.

    LZNA usually beats LZMA compression on binary, slightly worse on text. LZNA is closer to LZHAM decompress speeds.
    ---
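    For illustration only, here is a minimal sketch of my own (not Oodle's actual code) of how such a shift-style update can work on a 16-symbol (nibble) alphabet while keeping the total a power of two, so it could feed a rANS coder directly:
    Code:
    // Adaptive nibble model: 16 symbols, cumulative frequencies summing to 1<<14.
    // After coding symbol s, the CDF is blended toward a distribution concentrated
    // on s (with a floor of 1 count for every other symbol), using only a shift -
    // the same kind of update bit-wise coders do, but on the whole alphabet at once.
    #include <cstdint>

    struct NibbleModel {
        static const int kTotalBits = 14;                 // power-of-two total, rANS-friendly
        static const uint16_t kTotal = 1u << kTotalBits;
        uint16_t cdf[17];                                 // cdf[0] = 0, cdf[16] = kTotal

        NibbleModel() {                                   // start from a flat model
            for (int i = 0; i <= 16; i++) cdf[i] = (uint16_t)((i * kTotal) >> 4);
        }
        // Interval (start, freq) for symbol s, as a rANS/arithmetic coder consumes it.
        void interval(int s, uint32_t &start, uint32_t &freq) const {
            start = cdf[s];
            freq  = cdf[s + 1] - cdf[s];
        }
        // Shift update toward symbol s; larger rate = slower, steadier adaptation.
        void update(int s, int rate = 5) {
            for (int i = 1; i < 16; i++) {
                // Target CDF: nearly all mass on s, one count left for every other
                // symbol, so no frequency ever adapts down to zero.
                int target = (i <= s) ? i : (int)kTotal - (16 - i);
                cdf[i] = (uint16_t)(cdf[i] + ((target - cdf[i]) >> rate));  // arithmetic shift
            }
        }
    };
    One model update per nibble instead of four per-bit updates is where the "4x fewer coding operations" figure comes from.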
    best regards

  2. Thanks (6):

    avitar (27th May 2015),comp1 (27th May 2015),Intrinsic (28th May 2015),Jarek (27th May 2015),lorents17 (27th May 2015),Matt Mahoney (28th May 2015)

  3. #2
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    807
    Thanks
    245
    Thanked 257 Times in 160 Posts
    Any chance for independent benchmarks?

  4. #3
    Member
    Join Date
    May 2015
    Location
    Nowhere
    Posts
    3
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Any executable for LZNA?

  5. #4
    Member
    Join Date
    Oct 2007
    Location
    Germany, Hamburg
    Posts
    409
    Thanks
    0
    Thanked 5 Times in 5 Posts

  6. #5
    Member
    Join Date
    May 2015
    Location
    Nowhere
    Posts
    3
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by Simon Berger View Post
    Is that paid software?!

  7. #6
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,572
    Thanks
    783
    Thanked 687 Times in 372 Posts
    aram: yes

  8. #7
    Member
    Join Date
    Oct 2007
    Location
    Germany, Hamburg
    Posts
    409
    Thanks
    0
    Thanked 5 Times in 5 Posts
    Quote Originally Posted by aram1376 View Post
    Is that paid software?!
    It's part of a compression SDK named Oodle, meant to be used in games.
    Maybe they will release it as (demo) software at some point. LZHAM was released to work side by side with game engines as well, as far as I know.
    But I guess it's not likely they'll do that.

  9. #8
    Member Skymmer's Avatar
    Join Date
    Mar 2009
    Location
    Russia
    Posts
    688
    Thanks
    41
    Thanked 174 Times in 88 Posts
    I don't want to say anything bad about Charles Bloom, but the mentioned post from his blog is quite badly disguised self-advertising. For those who don't know: Charles works for RAD, so that fact alone makes his comparison biased.
    The lzt99 and baby_robot_shell files used in the comparison are not provided, so we can't even check the LZHAM and LZMA results. By the way, about LZMA. Guys, please help me. It seems my myopia has worsened, because I don't see any memory-consumption figures. Do you? And how about compression timings? Ahh, I see: "they're all running different amounts of threading". Sure thing. And that was probably a day when setting affinity to forcibly limit the number of cores used just didn't work. Sometimes it happens. Especially when you test your own product. And how about trying 7z 4.32 -m0=LZMA:a2:mf=pat4h?

    The LZHAM notes are worth mentioning too. Just listen: "LZHAM is run at BETTER because UBER is too slow". Strange. No compression timings are provided, but the author, seemingly gifted with a "time predictor" skill, draws a conclusion anyway. Nice.
    But I should thank Charles. Now I know that LZHAM -m3 -d25 -t7 @ 11 062 kb/s is OK and LZHAM -m4 -d25 -t7 @ 9 438 kb/s is too slow.
    How about trying -m4 -c -x8 -o -e -h1 -b -fb257? Of course, only if you're not afraid of the fucking-too-slow 390 kb/s for your giant <60 MB files.

    Why don't you send a test version to somebody under a non-disclosure agreement?
    Until somebody else does comprehensive tests, all those statements are nothing more than marketing bluff.

  10. #9
    Member
    Join Date
    Oct 2007
    Location
    Germany, Hamburg
    Posts
    409
    Thanks
    0
    Thanked 5 Times in 5 Posts
    I can't evaluate the quality of these benchmarks, but man, with all the offensiveness you put into every single post of yours, don't lose your own objectivity.
    This SDK is used on game data files. It has to work mainly with non-standard model, texture, etc. files, and more importantly, the only real concern is decompression speed (besides file size, of course).
    They advertise no other use cases.

  11. #10
    Member Skymmer's Avatar
    Join Date
    Mar 2009
    Location
    Russia
    Posts
    688
    Thanks
    41
    Thanked 174 Times in 88 Posts
    Quote Originally Posted by Simon Berger View Post
    I can't evaluate the quality of these benchmarks, but man, with all the offensiveness you put into every single post of yours, don't lose your own objectivity.
    I'm not losing it, Simon. I like compression and I respect programmers' work, but I hate clumsy tests which are performed on unknown data, with unknown software, under unknown conditions, in an unknown environment, but with a clear conclusion: "Use our stuff".
    But since you've started to play devil's advocate here, I'll tell you more. The cbloom post we're discussing here contains exaggeration, half-truths, omission and puffery, which are nothing more than types of lying. At least according to this article.
    So correct me if I'm wrong: you blame me for expressing my personal opinion against the lies?
    Also, just for your information: there was no "offensiveness" in my post. It's called sarcasm. By the way, you can look at this post to see an example of offensiveness.

    Quote Originally Posted by Simon Berger View Post
    This SDK is used on game data files. It has to work mainly with non-standard model, texture, etc. files, and more importantly, the only real concern is decompression speed (besides file size, of course).
    They advertise no other use cases.
    I don't know of any Oodle-based title so far, and there is no list on the homepage, so it would be more correct to say: this SDK is supposed to be used on game data files.
    Actually, it's not related to games at all. It's simple general-purpose compression, a set of libraries/headers, probably based on publicly available and maybe even illegally used compression projects like Tornado, LZHAM, etc.
    That quite neatly explains the absence of test versions, by the way.
    And all of this cooked up with multi-platform sauce and a friendly API.

    But anyway - anyone who needs LZMA-level compression and higher decode speeds should consider LZHAM.
    Last edited by Skymmer; 30th May 2015 at 08:16.

  12. #11
    Member
    Join Date
    May 2008
    Location
    Germany
    Posts
    412
    Thanks
    38
    Thanked 64 Times in 38 Posts
    @skymmer: "LZNA is closer to LZHAM decompress speeds." To me this means: LZHAM is a wonderful program too ...

    and LZHAM integrated into 7z is an old dream ..

    http://encode.su/threads/1117-LZHAM?...ll=1#post42373
    http://www.tenacioussoftware.com/7zip_lzham_1_0.zip
    http://www.tenacioussoftware.com/7zi...e_lzham_1_0.7z

    But it is true: in the "world of compression" there are some existing, possibly very good programs which we cannot easily test...

    For example, the new LZO Ultimate 5.0

    http://www.oberhumer.com/products/lzo-ultimate/

    but I think they really do exist in our world

    it would be great if Mr. cbloom could make this wonderful new compressor available

    for example to Matt Mahoney for an independent test ...

    best regards
    Last edited by joerg; 30th May 2015 at 19:36.

  13. #12
    Tester Stephan Busch's Avatar
    Join Date
    May 2008
    Location
    Bremen, Germany
    Posts
    876
    Thanks
    474
    Thanked 175 Times in 85 Posts
    They don't answer my request for a demo executable,
    so I cannot test and verify its performance.

    Did anyone get an executable?

  14. #13
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    807
    Thanks
    245
    Thanked 257 Times in 160 Posts
    More benchmarks ... still not independent ...
    http://www.cbloom.com/rants.html

  15. #14
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    565
    Thanks
    67
    Thanked 199 Times in 147 Posts

    lzna, lzma, lzham

    lzt99 (binary file), size: 24,700,820 bytes (source: http://www.cbloom.com/rants.html)
    single thread. CPU? OS? Compiler?

    Code:
                     size  ratio%   C MB/s     D MB/s  (bold=pareto)    MB=1.000.000
                  9069473   36.7      1.21      77.92  lzna -z7 (Optimal3)
                  9154343   37.1      1.68      78.99  lzna -z6 (Optimal2)
                  9207584   37.3      2.29      77.58  lzna -z5 (Optimal1)
                  9329982   37.8      2.17      32.19  lzma high
                  9938002   40.2      0.13     100.77  lzham uber+extreme  (C-Time=133.37 k/s)
                 10097341   40.9      1.31     103.27  lzham uber
                 10140761   41.1      1.48     102.17  lzham better
                 24700820  100.0         ?         ?   memcpy
    Maybe someone can propose a set of public binary files to cbloom for compression with LZNA, for example from: http://compressionratings.com/download.html
    Last edited by dnd; 10th June 2015 at 13:03.

  16. #15
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    565
    Thanks
    67
    Thanked 199 Times in 147 Posts

    LzTurbo, lzma, lzham, zstd, zlib

    lzt24 (binary game file) from cbloom, see: http://encode.su/threads/1117-LZHAM?...ll=1#post42326
    size: 3,471,552 bytes.
    CPU: Sandy Bridge i7-2600K at 4.5 GHz, all compiled with gcc 5.1

    Code:
                     size  ratio%   C MB/s     D MB/s  (bold=pareto)    MB=1.000.000
                  1260533  36.3       4.04      57.88    lzma 9  v9.38
                  1303842  37.6       0.25     193.59    lzham Extreme  v1.0
                  1355729  39.1       3.18     729.20    lzturbo 39  v1.2
                  1709025  49.2      35.91     827.21    lzturbo 32  v1.2
                  1901869  54.8     198.82     765.53    lzturbo 30  v1.2
                  1999745  57.6     187.07     682.14    zstd  v0.0.2
                  2289878  66.0      15.40     332.38    zlib 9  v1.2.8
                  2338537  67.4      57.38     320.73    zlib 1  v1.2.8
    LzTurbo with TurboANX decodes ~15 times faster than LZMA at a slightly lower compression ratio.
    With current (and future) storage and bandwidth, this difference in compression ratio is negligible.
    LzTurbo 30 with TurboANX compresses better and ~13 times faster than zlib -9.
    Last edited by dnd; 10th June 2015 at 13:48.

  17. #16
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    565
    Thanks
    67
    Thanked 199 Times in 147 Posts
    New benchmark: Seven Test. Unfortunately, most of the test files are again not available for download.

  18. Thanks:

    Cyan (12th March 2016)

  19. #17
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    542
    Thanks
    239
    Thanked 93 Times in 73 Posts
    He said:
    "This is running the competitors via my build of their SDK."

  20. #18
    Member jibz's Avatar
    Join Date
    Jan 2015
    Location
    Denmark
    Posts
    124
    Thanks
    106
    Thanked 71 Times in 51 Posts
    @cbloom: Building the Brotli command-line tool with MSVC only requires a small change (stdin/stdout will still not work, but I think there's a PR on GitHub for that):
    Code:
    diff --git a/tools/bro.cc b/tools/bro.cc
    index 635751c..fd9d624 100644
    --- a/tools/bro.cc
    +++ b/tools/bro.cc
    @@ -10,7 +10,16 @@
     #include <stdio.h>
     #include <sys/stat.h>
     #include <sys/types.h>
    -#include <unistd.h>
    +#if defined(_MSC_VER)
    +#  include <io.h>
    +#  include <fcntl.h>
    +#  if !defined(STDIN_FILENO)
    +#    define STDIN_FILENO 0
    +#    define STDOUT_FILENO 1
    +#  endif
    +#else
    +#  include <unistd.h>
    +#endif
     
     #include <ctime>
     #include <string>
    And then:
    Code:
    cl /O2 /GL /EHsc tools\bro.cc enc\*.cc dec\*.c

  21. #19
    Member
    Join Date
    Sep 2010
    Location
    US
    Posts
    126
    Thanks
    4
    Thanked 69 Times in 29 Posts
    Quote Originally Posted by jibz View Post
    @cbloom: Building the Brotli command-line tool with MSVC only requires a small change (stdin/stdout will still not work, but I think there's a PR on GitHub for that):
    My home machine is VC 2005/2008, which I think requires a lot more changes than that. VC 2010+ is much more compliant but I can't stand the IDE changes.

    So basically the difficulty is my fault, not Brotli's

    (I can't build Density or LZ5 either)
    Last edited by cbloom; 3rd August 2016 at 19:12.

  22. #20
    Member jibz's Avatar
    Join Date
    Jan 2015
    Location
    Denmark
    Posts
    124
    Thanks
    106
    Thanked 71 Times in 51 Posts
    I see. I must admit I rarely use the IDE, so to me they all look pretty much the same.

    Might I suggest considering mingw-w64 or MSYS2? They both provide a command-line GCC 5.3.0 which can build native Windows applications, supports recent standards, and offers what is often just enough POSIX to get by.
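    For Brotli specifically, something along these lines should work from an MSYS2 shell (an untested sketch of mine, mirroring the MSVC command above; I compile the C decoder separately rather than feeding the .c files to g++):
    Code:
    # build the C decoder, then the C++ encoder and command-line tool, and link
    gcc -O2 -c dec/*.c
    g++ -O2 -o bro.exe tools/bro.cc enc/*.cc *.o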

  23. #21
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,610
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by cbloom View Post
    My home machine is VC 2005/2008, which I think requires a lot more changes than that. VC 2010+ is much more compliant but I can't stand the IDE changes.

    So basically the difficulty is my fault, not Brotli's

    (I can't build Density or LZ5 either)
    I'd blame MS. It's reasonable to use an IDE that you can stand. Both the noncompliance of that IDE with the C standard and the incompatibility of newer ones with your needs are their fault.

  24. #22
    Member
    Join Date
    Jan 2017
    Location
    Selo Bliny-S'edeny
    Posts
    24
    Thanks
    7
    Thanked 10 Times in 8 Posts
    According to cbloom rants, LZNA is being deprecated in Oodle.

  25. #23
    Member
    Join Date
    Dec 2015
    Location
    US
    Posts
    57
    Thanks
    2
    Thanked 114 Times in 36 Posts
    Yeah, the next release (coming up soon) will have a replacement.
