
Thread: Tornado 0.4

  1. #1
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,511
    Thanks
    746
    Thanked 668 Times in 361 Posts
    i'm glad to present a new version of my lz77 compressor, Tornado 0.4:
    http://www.haskell.org/bz/tornado04.zip

    improvements are:
    - compression modes -9..-11 are now 1.5-2x faster
    - improved user interface
    - i've included both a full executable (which contains 120 combinations of compression modes) and a small executable (with only 7 practically useful modes). the first is mainly interesting for various experiments

    compression modes of main interest are:
    -5 (default) - deflate compression class
    -11 - rar compression class


    threads about previous Tornado versions:
    http://www.encode.su/forums/index.ph...um=1&topic=650
    http://www.encode.su/forums/index.ph...um=1&topic=408

  2. #2
    Moderator

    Join Date
    May 2008
    Location
    Tristan da Cunha
    Posts
    2,034
    Thanks
    0
    Thanked 4 Times in 4 Posts
    Thanks Bulat!

    Mirror: Download

  3. #3
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts
    Thanks Bulat! Hi! Tornado is a very powerful LZ77 compressor!

  4. #4
    Member
    Join Date
    May 2008
    Location
    France
    Posts
    48
    Thanks
    1
    Thanked 1 Time in 1 Post
    Hi!

    I've made only one test, but the results are amazing:

    Dataset: my school's huge intranet (php, perl, python, ruby, c, c++, cvs and svn repositories with high redundancy).
    Machine: Intel Prescott 3GHz, 1 GiB RAM, Debian GNU/Linux

    uncompressed : 419 MiB (100.0%)   compression / decompression
    gzip -9      : 239 MiB (57.0%)    2:07.26 / 0:23.38
    bzip2 -9     : 227 MiB (54.2%)    4:19.08 / 1:45.87
    7z-lzma      :  19 MiB (4.5%)     3:51.02 / 0:19.32
    Tornado -11  :  11 MiB (2.6%)     1:46.53 / 0:05.56

    For information:
    md5sum: 0:02.79

    Great!
    Jérémie

  5. #5
    Member
    Join Date
    Dec 2006
    Posts
    611
    Thanks
    0
    Thanked 1 Time in 1 Post
    I think this should really mean something for the Linux community, which uses gz/bz2 for packing sources all the time (because they are "fast")

  6. #6
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    Quote Originally Posted by Bulat Ziganshin
    -11 - rar compression class
    Not quite sure about "RAR" class. I've made a quick test with two executables which I usually test with:

    Reaktor.exe (14,446,592 bytes):
    BALZ 1.02: 1,998,932 bytes
    RAR 3.51, Best: 2,033,719 bytes
    TOR 0.4, -11: 2,258,453 bytes

    MPTRACK.EXE (1,159,172 bytes):
    BALZ 1.02: 487,355 bytes
    RAR 3.51, Best: 499,767 bytes
    TOR 0.4, -11: 533,726 bytes


  7. #7
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,511
    Thanks
    746
    Thanked 668 Times in 361 Posts
    tor -11 provides a rar-class compression ratio on large datasets (such as Squeeze Chart) due to its massive dictionary. moreover, in the new tornado version, mode -11 is made much faster, providing rar-comparable speeds

    i've made experiments with files smaller than 4mb and found that the pure lz77 algorithm in rar provides better compression than the one in tor. this makes me hope that tornado may compress 5-10% better - i just need to make further experiments with various algorithm details

    just now tornado is more like an lz77 testbench. it's the only lz77 implementation with 1.5*dictsize memreqs. now i'm incorporating this technology into lzma. lzturbo uses my code to provide 2/4-threaded compression with reasonable memreqs
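
    To illustrate, here is a minimal hypothetical sketch of a match finder with ~1.5*dictsize memreqs: the dictionary itself costs 1.0x, and a single-slot hash table of 32-bit positions with dictsize/8 entries costs another 0.5x. All names are illustrative and boundary checks are simplified - this is not Tornado's actual code:
    Code:
        #include <cstdint>
        #include <cstring>
        #include <vector>

        // Hypothetical HT match finder: dictionary (1.0x) + position table (0.5x).
        struct HtMatchFinder {
            const uint8_t* buf;          // dictionary buffer, dictsize bytes
            std::vector<uint32_t> ht;    // dictsize/8 slots * 4 bytes = dictsize/2

            HtMatchFinder(const uint8_t* b, size_t dictsize)
                : buf(b), ht(dictsize / 8, 0) {}

            static uint32_t hash4(const uint8_t* p) {  // hash of 4 bytes
                uint32_t x; memcpy(&x, p, 4);
                return x * 2654435761u;                // multiplicative hash
            }

            // Report the match (if any) at pos, then overwrite the slot:
            // no chains, so memory stays fixed and each lookup touches a
            // single, linearly readable table slot.
            size_t find(uint32_t pos, size_t avail, uint32_t* match_pos) {
                uint32_t& slot = ht[hash4(buf + pos) % ht.size()];
                uint32_t cand = slot;
                slot = pos;
                if (cand == 0 || cand >= pos) return 0;  // empty slot
                size_t len = 0;
                while (len < avail && buf[cand + len] == buf[pos + len]) len++;
                if (len < 4) return 0;                   // hash collision
                *match_pos = cand;
                return len;
            }
        };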

    Quote Originally Posted by Black_Fox
    I think this should really mean something for the Linux community, which uses gz/bz2 for packing sources all the time (because they are "fast")
    unfortunately, these results are not representative. tornado still provides a better speed/compression ratio than gzip/bzip2, though - while using more memory

    generally speaking, i believe that tornado's default mode may be interesting as a better alternative to gzip. for best compression, lzma or ppmd should be used. the freearc sources include compressor.cpp, a simple compression utility that allows specifying any compression algorithm on its cmdline (such as lzma:fast:32m or tor:11). at this moment it just shows how to work with my library, but a general-purpose compression utility may be written on its basis

  8. #8
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    Quote Originally Posted by Bulat Ziganshin
    just now tornado is more like an lz77 testbench. it's the only lz77 implementation with 1.5*dictsize memreqs.
    It's cool, but I'm afraid that it may lose too many matches - the string search is not so exhaustive. I just compared your latest TORNADO (-11) with a 1 GB buffer to my latest BALZ with a 512 KB window:

    pak0.pak - Quake II game resources (183,997,730 bytes):
    BALZ 1.02: 82,955,270 bytes
    TOR, -11: 84,695,271 bytes


  9. #9
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    More results, on a larger file:

    FEAR.Arch00 - A resource file from F.E.A.R. SP Demo game (939,326,262 bytes)
    BALZ 1.02: 379,498,108 bytes
    TOR, -11: 394,720,589 bytes

    Deep search rules here, even when a 512K dictionary is up against a 1 GB one...

  10. #10
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,511
    Thanks
    746
    Thanked 668 Times in 361 Posts
    that is the point of tornado - while you have optimized balz for maximum compression with a given dictionary, i optimized for the best speed/compression ratio. the maximum compression modes -9..-12 existed mainly to test the limitations of the algorithm and were too slow for real competition

    tor 0.1 in -12 mode used 24k seconds on the SqChart test (slower than LPAQ). tor 0.3 in -11 mode should use 3.5k seconds, which is comparable to RAR. but there is still room for improvement

    -11 mode tries 256 candidates, hashed by 4 bytes. actually, matches with length <= 6 can't have offsets > 1mb, which means that we can find small matches with a small hash and use the main hash only to search for matches of 7+ bytes. this will allow us to significantly decrease the size of the hash line and test only 32-64 candidates even in -12 mode while being much closer to exhaustive search
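
    as an illustration only (hypothetical names, not tornado's actual code, boundary checks simplified), such a two-level search might look like this - a small single-slot hash catches length 4..6 matches within the last 1 MB, while the main hash is keyed on 7 bytes, so its short candidate lines only have to deliver 7+ byte matches:
    Code:
        #include <cstdint>
        #include <cstring>
        #include <vector>

        struct TwoLevelMF {
            const uint8_t* buf;
            std::vector<uint32_t> small_ht;  // short matches, at most 1 MB back
            std::vector<uint32_t> main_ht;   // long (7+) matches, whole dictionary
            static const size_t LINE = 32;   // candidates per main hash line

            TwoLevelMF(const uint8_t* b, size_t small_slots, size_t main_lines)
                : buf(b), small_ht(small_slots, 0), main_ht(main_lines * LINE, 0) {}

            static uint32_t hash(const uint8_t* p, int n) {  // hash of n<=8 bytes
                uint64_t x = 0; memcpy(&x, p, n);
                return uint32_t((x * 0x9E3779B97F4A7C15ull) >> 32);
            }
            size_t mlen(uint32_t a, uint32_t b, size_t lim) const {
                size_t l = 0;
                while (l < lim && buf[a + l] == buf[b + l]) l++;
                return l;
            }

            size_t find(uint32_t pos, size_t avail, uint32_t* mpos) {
                size_t best = 0;
                // 1) matches of length 4..6 never lie farther than 1 MB back,
                //    so a small hash over the recent window suffices for them
                uint32_t& slot = small_ht[hash(buf + pos, 4) % small_ht.size()];
                if (slot && pos - slot <= (1u << 20)) {
                    size_t l = mlen(slot, pos, avail);
                    if (l >= 4) { best = l; *mpos = slot; }
                }
                slot = pos;
                // 2) the main hash is keyed on 7 bytes: surviving candidates
                //    either mismatch quickly or yield a 7+ byte match, so a
                //    32-entry line already comes close to exhaustive search
                uint32_t* line =
                    &main_ht[(hash(buf + pos, 7) % (main_ht.size() / LINE)) * LINE];
                for (size_t i = 0; i < LINE; i++) {
                    if (!line[i]) continue;
                    size_t l = mlen(line[i], pos, avail);
                    if (l > best) { best = l; *mpos = line[i]; }
                }
                line[pos % LINE] = pos;  // crude round-robin insert, for brevity
                return best;
            }
        };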

    of course, a real exhaustive search requires at least 4*dictsize bytes for the hash, but for typical files my proportion (2/3 of memory used for the dictionary, 1/3 for the hash) seems to provide the best results for a given memory budget. you may test it yourself, though, using the -b/-h options to set the dictionary/hash size

    if you use -11 mode on a small file, the hash size will be up to 4*filesize

  11. #11
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    Quote Originally Posted by Bulat Ziganshin
    that is the point of tornado - while you have optimized balz for maximum compression with a given dictionary, i optimized for the best speed/compression ratio.
    I'm hoping that soon I'll have some spare time for BALZ, to add a practical/efficient mode. Here I have a few possibilities: add regular Lazy Matching; add Lazy Matching with a two-byte lookahead; or impose some string-search limitations - for example, instead of searching for all matches at EACH position, after finding a match, search one or two bytes ahead and then skip N bytes (match length minus 1 or 2); after that, we may do the same backward parsing, since it works instantly. In this case, BALZ should have the same or higher speed compared to TORNADO and other competitors.
    Having said that, I already tested BALZ with simple Lazy Matching - I was impressed by its performance - the compression ratio is not that affected, while it is ultimately faster. (The Hash Chains used with BALZ have the worst performance if we search for matches from each byte, but they are fastest with simple parsing strategies.) However, since BALZ was based on my unreleased byte-oriented LZ77 coder from my experimental EXE-packer, whose goal is max. compression at any cost without touching the decompressor, we have BALZ with SS parsing only for now. To be continued...
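
    For reference, a minimal sketch of lazy matching with a one-byte lookahead, as discussed above (the helpers find_match/emit_literal/emit_match are hypothetical declarations, not BALZ's code):
    Code:
        #include <cstddef>
        #include <cstdint>

        // Hypothetical helpers: a match finder and output routines.
        size_t find_match(const uint8_t* buf, uint32_t pos, uint32_t* offset);
        void   emit_literal(uint8_t byte);
        void   emit_match(size_t len, uint32_t offset);
        const size_t MIN_MATCH = 4;

        // After finding a match, peek at the next position; if a strictly
        // longer match starts there, emit one literal and defer, otherwise
        // take the match and skip over all of its bytes without searching.
        void lazy_parse(const uint8_t* buf, size_t size) {
            uint32_t pos = 0;
            while (pos < size) {
                uint32_t off1, off2;
                size_t len1 = find_match(buf, pos, &off1);
                if (len1 < MIN_MATCH) { emit_literal(buf[pos++]); continue; }
                size_t len2 = (pos + 1 < size) ? find_match(buf, pos + 1, &off2) : 0;
                if (len2 > len1) {           // deferring by one byte pays off
                    emit_literal(buf[pos++]);
                    continue;
                }
                emit_match(len1, off1);
                pos += (uint32_t)len1;       // skip the matched bytes entirely
            }
        }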


  12. #12
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,511
    Thanks
    746
    Thanked 668 Times in 361 Posts
    i doubt that balz with lazy parsing will be faster than other lazy-parsing compressors. moreover, hash chains require 2 random memory accesses for each match checked, while hash tables use only 4-8 bytes of linearly accessed data. i've shown some results in the freearc topic. generally speaking, an ht4 matchfinder should be ~1.5x faster than an hc4 MF for entire-buffer hashing (i.e. 5x memreqs for ht4)

    btw, there is one interesting hc trick: when you've found an N-byte match, you can switch to any other hash chain going through this match, i.e. any of the N-4+1 chains. it's better to select the chain with the smallest next element. in particular, if this element is NIL, it's guaranteed that there are no more matches of length >= N

    this trick makes the higher compression modes 1.5-2.5x faster
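
    a sketch of how that trick might look (illustrative names, not tornado's actual code): next[] holds ordinary hash-chain links, and the caller continues the search from the returned element, subtracting *shift to get back to the match start
    Code:
        #include <cstdint>
        #include <cstddef>

        const uint32_t NIL = 0;  // assumed end-of-chain marker

        // After a match of length n was found at candidate position m, any
        // match of length >= n must contain all n-4+1 four-byte windows of
        // that match, so we may continue along whichever of those chains has
        // the smallest (oldest) next element - it has the fewest elements
        // left to visit. If that element is NIL, no match of length >= n
        // remains and the search can stop immediately.
        uint32_t next_candidate(const uint32_t* next, uint32_t m, size_t n,
                                uint32_t* shift) {
            uint32_t best = next[m];  // chain through the match start
            *shift = 0;
            for (uint32_t i = 1; i + 4 <= (uint32_t)n; i++) {
                uint32_t c = next[m + i];  // chain through byte i of the match
                if (c < best) { best = c; *shift = i; }
            }
            return best;  // candidate match start = best - *shift
        }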

  13. #13
    Programmer
    Join Date
    Feb 2007
    Location
    Germany
    Posts
    420
    Thanks
    28
    Thanked 153 Times in 18 Posts
    Imho, hash chains and the like should only be used for fast compression modes, because once you pump up the parsing, your results will suffer.

    When I played with optimal parsing, I first started with hash chains - because they are very easy and quick to implement. I tried several tricks, but in the end hash chains are way too slow for OP. Obviously, hash tables are unqualified, too - because they cut off the search rather quickly.

    For higher compression, things like binary trees should be used. They are not too hard to implement and are much better suited for exhaustive search. E.g., when combining binary trees with ROLZ you can easily reduce memory usage to 2 or 4 bytes per node - depending on the order of offset reduction. I know, memory usage cannot compete with hash tables, but you have to pay for what you get.

    I do not have any experience with lazy parsing. But since it's already a compromise between speed and compression, it's totally valid to use hash chains or hash tables, imho.

    For very large dictionaries, hash tables are the only way to go - memory requirements force you to do so. On the other hand, a hybrid approach might be the best, e.g. using HT for very large distances and a binary tree for smaller distances. This way you can merge optimal parsing with a very, very large dictionary. Sorry for being OT.

    Btw., Tornado is a very solid fast compressor. Great work, Bulat!

  14. #14
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,511
    Thanks
    746
    Thanked 668 Times in 361 Posts
    why OT? people able to discuss the details of compression algorithms are the most exciting feature of this board

    i know that BT is the right way to exhaustive searching, and i'm sure that Ilya knows this too. the problem is only the implementation - i don't understand how to do the rebalancing. its description isn't available. maybe you can write it? btw, do these BTs correspond to some sort of self-balancing trees: red-black or something else?

    HT/HC may still be used with OP if you don't target maximum compression - they are faster with a small search depth, so you get some speed/ratio compromise between usual LP and OP. plus, with HT you can have smaller memreqs

    Quote Originally Posted by Christian
    I do not have any experience with lazy parsing. But since it's already a compromise between speed and compression, it's totally valid to use hash chains or hash tables, imho.
    yes, BT is slow for lazy parsing because it doesn't allow quickly skipping over a found match


    Quote Originally Posted by Christian
    For very large dictionaries, hash tables are the only way to go - memory requirements force you to do so.
    he-he, actually memreqs for BT may be cut down to 3.5x. we only need to find 7+ matches at large distances, and that may be implemented with indexing by 4 bytes at EVERY 4th POSITION: if there is a 7+ byte match, then it includes a 4-byte match with a string at a position divisible by 4. because we index only each 4th position, the bintrees require only dict*8/4 bytes
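
    spelled out under illustrative assumptions (hypothetical hash4/bt_insert declarations, not tornado's code): a match of length >= 7 at position p covers bytes p..p+6; one of p, p+1, p+2, p+3 is divisible by 4, and a 4-byte window starting there ends no later than p+6, i.e. still inside the match. so indexing 4-grams only at positions divisible by 4 never misses a 7+ byte match, and the tree needs 2 links * 4 bytes per indexed position = 8 bytes per 4 input bytes = 2*dictsize; together with the dictionary itself (1x) and hash heads (~0.5x) that gives the ~3.5x total
    Code:
        #include <cstdint>
        #include <cstddef>

        // Hypothetical declarations: a 4-byte hash and a binary-tree insert.
        uint32_t hash4(const uint8_t* p);
        void bt_insert(uint32_t bucket, uint32_t pos);

        // Index only every 4th position: sufficient for all 7+ byte matches.
        void index_every_4th(const uint8_t* buf, size_t dictsize) {
            for (uint32_t p = 0; p + 4 <= dictsize; p += 4)
                bt_insert(hash4(buf + p), p);
        }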

  15. #15
    Member
    Join Date
    Aug 2007
    Posts
    30
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Hi, bulat,
    i don't understand why you are always claiming lzturbo is using your code, and propagating only negative things about lzturbo.
    Lzturbo is programmed from scratch, using old and new ideas and my experience in software design and in information retrieval. I can understand that you are shocked by the performance of lzturbo, but your methods are only helping the competition. From reading your posts one may think you are the inventor of hashing. Until now you have not realised that, for good lz77 compression, you need optimal parsing and not only a match finder which requires a minimum of 2 GB to run. The match finder in lzturbo uses only a maximum of ~230 MB (single core), so i can't see a relation to your implementation. You are claiming that multithreading is trivial: start a thread here and there. Why not implement this in your tornado? We all have good ideas, but bringing them into software - that is performance. Performance is when lzturbo outperforms deflate in rar by 20 times (see http://www.encode.su/forums/index.ph...654&page=9133) using a quad core cpu. Performance is rings, ccm and the others. Performance is not 5% or 10% faster or better.

    The best things are simple, but finding the simple things is not simple.

  16. #16
    Programmer
    Join Date
    Feb 2007
    Location
    Germany
    Posts
    420
    Thanks
    28
    Thanked 153 Times in 18 Posts
    Quote Originally Posted by Bulat Ziganshin
    we only need to find 7+ matches at large distances, and that may be implemented with indexing by 4 bytes at EVERY 4th POSITION
    Actually, you can do this. But if I understand you correctly, you buy the lower memory footprint by increasing the time needed for a search. An example:

    Code:
        0   4   8   12
        aaaabbbba|aaabbbb   (| = current position)

        Now, let's find a match:

        1) look for aaab... -> no
        2) look for aabb... -> no
        3) look for abbb... -> no
        4) look for bbbb... -> yes -> check prefix -> yes


    Quote Originally Posted by Bulat Ziganshin
    i know that BT is the right way to exhaustive searching, and i'm sure that Ilya knows this too. the problem is only the implementation - i don't understand how to do the rebalancing.
    Imho, binary trees in dictionary-based data compression *must* provide the so-called descending-offset property - I just call it like that. You badly need this for cost optimization. It means that a found match of length l has the biggest offset possible.
    Using this idea, you'll come to the conclusion that you want to insert each new node as the root. Then the insertion itself does some sort of self-balancing, because it has to reconstruct the tree. Still, the tree can degenerate - in this rare case you just cut it off. You can do other things to improve the performance of these binary trees, but they are very detailed and out of scope. At least this is what I figured.
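
    A minimal sketch of such an insert-at-root tree, under stated assumptions (illustrative names; position 0 doubles as the NIL sentinel and bounds handling is simplified - this shows the general technique, not any particular program's code). Each new position becomes the root of its bucket's tree, and the search pass simultaneously splits the old tree into the "smaller" and "larger" subtrees of that new root:
    Code:
        #include <cstdint>
        #include <vector>

        struct BtMatchFinder {
            const uint8_t* buf;
            size_t size;
            std::vector<uint32_t> head;         // one tree root per hash bucket
            std::vector<uint32_t> left, right;  // 2 links * 4 bytes per position

            BtMatchFinder(const uint8_t* b, size_t n, size_t buckets)
                : buf(b), size(n), head(buckets, 0), left(n, 0), right(n, 0) {}

            // Insert pos as the new root of its bucket and report the longest
            // match seen while rebuilding (splitting) the old tree.
            size_t insert(uint32_t pos, uint32_t bucket, uint32_t* mpos) {
                uint32_t cur = head[bucket];
                head[bucket] = pos;                    // new node becomes root
                uint32_t* lhook = &left[pos];          // grows "smaller" subtree
                uint32_t* rhook = &right[pos];         // grows "larger" subtree
                size_t best = 0;
                while (cur != 0) {
                    size_t lim = size - pos, l = 0;
                    while (l < lim && buf[cur + l] == buf[pos + l]) l++;
                    if (l > best) { best = l; *mpos = cur; }
                    if (l == lim) {                    // equal as far as seen:
                        *lhook = left[cur];            // adopt cur's subtrees
                        *rhook = right[cur];
                        return best;
                    }
                    if (buf[cur + l] < buf[pos + l]) { // cur sorts before pos
                        *lhook = cur; lhook = &right[cur]; cur = right[cur];
                    } else {                           // cur sorts after pos
                        *rhook = cur; rhook = &left[cur]; cur = left[cur];
                    }
                }
                *lhook = *rhook = 0;                   // close both subtrees
                return best;
            }
        };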

  17. #17
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    Quote Originally Posted by donotdisturb
    The best things are simple, but finding the simple things is not simple.

  18. #18
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,511
    Thanks
    746
    Thanked 668 Times in 361 Posts
    dns,
    1. actually, i've been in this field for around 15 years and have known about OP since Igor cracked cabarc - i.e. 9-10 years ago. moreover, i heard about SS parsing about 15 years ago from the author of BSARC. and you may find the full overview of lz77 parsing methods that i posted here right away.

    2. my problem is that i know much more about compression than i have time to implement. initially tornado was targeted at fast, thor-like compression modes for freearc, and there it does very well (1st place in the MFC efficiency rating). now i'm experimenting with high-compression modes. by implementing overlapped io/OP/multithreading you covered those topics, so i have to experiment in other directions

    3. now about lzturbo. tor 0.1 had switches:
    Code:
       -#    --  select predefined compression profile (1..9) 
       -c#   --  coder (1-bytes,2-bits,4-arith), default 4

    while lzturbo 0.0.1 had
    Code:
    	m:compression method (1,2,4)		1:byte-output fastest compression and decompression 
    						2:bit-output 
    						3:not implemented 
    						4:best compression 
            l:compression level (1..9)


    moreover, their speed/compression was close. moreover, you quickly omitted the "byte/bit" descriptions from your program and declined to describe your program (except for its original points - io/OP/MT). you just don't know how the MF/modeling in your program works, so i have enough facts to be sure that lzturbo was built on a tornado basis

    what's even more exciting is that tornado 0.2 had been available since June as part of the fa sources, but you didn't realize it. a few days after the tor 0.3 release, a new lzturbo version arrived, where -3* used my huffman encoder and -*1..-*8 used my lazy match finder. you didn't use the parts that don't increase ENWIK9 compression, though

    so, at this moment lzturbo is a partial tor 0.3 clone plus the new ideas you've widely advertised. i'm pretty sure that it doesn't contain anything written by you except the multithreading/io/OP. you are probably from an eastern country and don't realize that you're just breaking laws by stealing my code

    technically, i'm not against lzturbo, of course - it implements improvements to tornado that i have described on this forum

  19. #19
    Member
    Join Date
    Aug 2007
    Posts
    30
    Thanks
    0
    Thanked 0 Times in 0 Posts
    One more time, for your understanding:
    - Byte output is not your invention; almost all fast (realtime) lz77-based compressors use this coding. It is simply packing some integers into 2,3,4,... bytes. There is nothing magic about it; it is known as byte-coding in information retrieval (see the sketch below).
    - Bit coding is the same as byte coding using bits instead of bytes, and is called variable-length encoding.
    - Huffman and arithmetic coding are not your inventions. The lzturbo implementation has nothing in common with yours. There are a lot of papers about huffman and arithmetic coding; simply use google search.
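
    For reference, the classic byte-coding (varint) scheme from the information-retrieval literature, shown only to illustrate the term and not taken from either program: 7 payload bits per byte, with the top bit set on every byte except the last.
    Code:
        #include <cstdint>
        #include <vector>

        // E.g. 300 (0x12C) encodes as 0xAC 0x02.
        void put_vbyte(std::vector<uint8_t>& out, uint32_t v) {
            while (v >= 0x80) {
                out.push_back(uint8_t((v & 0x7F) | 0x80));  // more bytes follow
                v >>= 7;
            }
            out.push_back(uint8_t(v));                      // final byte, top bit 0
        }

        uint32_t get_vbyte(const uint8_t*& p) {
            uint32_t v = 0;
            int shift = 0;
            while (*p & 0x80) {
                v |= uint32_t(*p++ & 0x7F) << shift;
                shift += 7;
            }
            return v | (uint32_t(*p++) << shift);
        }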

    I think these algorithms are not patented by you. I cannot see what can be new (as an algorithm) in your tornado. Your match finder is known as "hashing with linear probing", and i've implemented it for years in a cuckoo hashing library. I'm not forced to describe lzturbo internals, as this is not important for users.
    Please don't understand me wrong. I respect your work as well.

  20. #20
    Member
    Join Date
    Dec 2006
    Posts
    611
    Thanks
    0
    Thanked 1 Time in 1 Post
    I know nothing about compression methods compared to you (in comparison with some of you I think I'm even in negative numbers ), but when two compressors get released almost simultaneously, both have 4 coders to choose from, and both have coder #3 unimplemented, it looks like quite a coincidence... Anyway, if somebody is interested, they can reverse-engineer both compressors and compare.

  21. #21
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    To make things clear: I can confirm that releasing a good enough compressor with the same options as some open-source project may raise questions. Many authors on this forum have been well known for years, and users can watch their progress in the data compression area - from simple compressors to innovative, world's-best data compression programs. donotdisturb is new to this scene - that's why Bulat may make such presumptions. For example, I know at least two file compressors which are based on LZPX; these compressors are closed source, and you never read about either LZPX or me in a README or whatsoever. GPL violation, indeed. Anyway, those compressors had no serious performance, and their author has gone from the data compression scene...


  22. #22
    Member
    Join Date
    Jan 2007
    Location
    Moscow
    Posts
    239
    Thanks
    0
    Thanked 3 Times in 1 Post
    Bulat, many people appreciate your knowledge, but you are making a serious charge. I hope you are at least 99% sure of your conclusions. Anyway, i think we'll never get to see the sources of lzturbo to make a correct judgment.

  23. #23
    Member
    Join Date
    Oct 2007
    Location
    Germany, Hamburg
    Posts
    408
    Thanks
    0
    Thanked 5 Times in 5 Posts
    Isn't it always the same with the things Bulat writes? I have started writing a message like this many times and then canceled it, because I thought: is it my problem...?
    I have really much respect for the inventions you made with freearc, and I am happy about every new release. But I think you think too highly of what you do and haven't managed to hide it from others. It can be seen in almost every one of your posts. Often there is a deprecative touch.
    I hope you can handle this sort of criticism and don't react with spite.

  24. #24
    Member
    Join Date
    Dec 2006
    Posts
    611
    Thanks
    0
    Thanked 1 Time in 1 Post
    It could have happened with Christian as well: he became the biggest surprise of the previous year when he released one of the most efficient compressors, CCM_extra 1.02a - how could he produce such a good compressor while being new to the scene? However, this question was answered over the following months, when CCM gradually got better. Let's hope donotdisturb will prove similarly successful...

  25. #25
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    Quote Originally Posted by Black_Fox
    CCM_extra 1.02a - how could he produce such a good compressor while being new to the scene?
    You may not believe it, but this question has been in my head for a long time... Like I said, Chris is a Genius!

  26. #26
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,511
    Thanks
    746
    Thanked 668 Times in 361 Posts
    about freearc - there are two sites, .com and .info, which speculate on its (almost non-existent) popularity

    i'm not surprised by the fact that my code was stolen, i'm just informing you about it. and yes, i'm pretty sure, as is Ilya, who understands how much knowledge and experience is required to develop a modern lzh compressor from scratch

    actually, i don't know of *any* developed from 0: lzma is based on cabarc, cabarc - on rar2, rar2 - on zip2 and so on, up to the first lzss encoder. just look at dns's posts: he doesn't understand this simple fact

    let's look further: all the lzh/lzari compressors use 5*dictsize memory or more, because they use hash chains or binary trees. tornado and lzturbo are the only ones that use a smaller amount of memory. do you believe that dns discovered this technology independently of me, without studying the tornado sources, just a few months later?
    actually, he thinks that this is just linear hash probing!

    next: how many compressors do you know that implement byte, bit and arithmetic coders for lz77 output simultaneously? only two? when i first asked dns about this strange coincidence, he answered in the usual manner - that these technologies are not copyrighted... and removed their names from the next program version

    that is not everything, though. for the next 6 months after the first version he mainly debugged the program, and version 0.9 arrived. but when i published tornado 0.3, the next two versions of lzturbo arrived within a week! they probably include my improvements in the lazy MF and definitely include my new huffman encoder, which is now used in the -3* modes, replacing a much less efficient bit-coder variation

    a few days ago he became so impudent that he released the "lzturbo compression library", i.e. my code that he is trying to sell!

    so i recommend you compare lzturbo and tornado speed/compression in various modes and think twice about why they are so close. my program incorporates 15 years of lz77 compression experience

  27. #27
    Programmer toffer's Avatar
    Join Date
    May 2008
    Location
    Erfurt, Germany
    Posts
    587
    Thanks
    0
    Thanked 0 Times in 0 Posts
    I'm only going to "slightly touch" this clash - just one statement. In my opinion, it's very strange that someone else implements a "very similar" command line interface. If i remember correctly, some strings containing a file extension list from freearc could be found inside the lzturbo executables.
    M1, CMM and other resources - http://sites.google.com/site/toffer86/ or toffer.tk

  28. #28
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    Quote Originally Posted by toffer
    If i remember correctly, some strings containing a file extension list from freearc could be found inside the lzturbo executables.
    Just recall that.

  29. #29
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    Quote Originally Posted by MC Gastenboek
    Bulat Ziganshin
    i need to mention that the so-called "lzturbo compression library" is nothing more than a compilation of the tornado 0.4 sources, which are freely available. it seems that Hamid is just trying to make easy money by selling my work
    Moderator-Comment: Ok, I will look into it. Do you have some sort of "proof"?

    Serial Killer
    >Moderator-Comment: Ok, I will look into it. Do you have some sort of "proof"?

    http://encode.su/forums/index.php?ac...pic=502&page=0

  30. #30
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,511
    Thanks
    746
    Thanked 668 Times in 361 Posts
    i've answered too:

    lzturbo & tornado share too many common ideas to have been created independently. they use exactly the same set of coders - byte, bit, arith, huffman. moreover, the huffman coder was silently (!) added just a week after i published tornado 0.3 containing this coder. they also share a unique match finder which uses only 1.5*dictsize memory, while any other lz77 compressor has 5*dictsize or larger memory requirements. the program speeds are close. moreover, Hamid tries to hide the facts which show the programs' similarities and demonstrates incompetence in the compression algorithms used in his own program. i suspect that he just combined parallel bzip2 for multithreading/background i/o, tornado for the match finder and coders, and lzma for lazy/optimal parsing in his program

    ps: btw, the most irritating dns phrase was about the easiness of developing a fast huffman encoder. actually, it's so "easy" that even rar and cabarc don't have their own huffman encoders and use the good old zip one


