
Thread: RAZOR - strong LZ-based archiver

  1. #181
    Member
    Join Date
    Mar 2012
    Location
    Paris
    Posts
    38
    Thanks
    11
    Thanked 2 Times in 2 Posts
    Quote Originally Posted by diskzip View Post
    Seems Razor is still the reigning leader over PA.
    Not in my everyday work, for now. I've found many cases where PA has a (far) better ratio, thanks to its multimedia filters (jpeg, png, mp3, flash, etc.).
    But RAZOR is definitely promising, and I hope development will reach a 2.0 milestone with filters and multithreading.

  2. Thanks:

    diskzip (4th May 2018)

  3. #182
    Member
    Join Date
    May 2018
    Location
    America
    Posts
    1
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Hi All,

    I'm new to compression and just getting started. How do I use this from the command line (cmd)? I want to compress a PS2 ISO file. Please help.

  4. #183
    Member
    Join Date
    Feb 2015
    Location
    United Kingdom
    Posts
    162
    Thanks
    24
    Thanked 69 Times in 40 Posts
    Quote Originally Posted by hireme View Post
    Hi All,

    I'm new to compression and just getting started. How do I use this from the command line (cmd)? I want to compress a PS2 ISO file. Please help.
    Here's a step-by-step: https://youtu.be/p9akLf3xNpc. Drag rz into the console, choose your options, then drag the files you want to compress into the console and press Enter.
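
    For example, something along these lines (the archive name, ISO path and dictionary size here are just placeholders I made up; the a/-d syntax mirrors the rz command shown later in this thread):
    Code:
    rz.exe a -d 512m game.rk "C:\PS2\game.iso"
    A larger -d dictionary generally helps the ratio, but needs more memory.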

  5. #184
    Programmer
    Join Date
    Feb 2007
    Location
    Germany
    Posts
    420
    Thanks
    28
    Thanked 155 Times in 18 Posts
    Thank you for the great feedback everyone.

    I'm finally able to work a few hours a week on compression again, so here's a small, sketchy update on progress.

    I've completely separated the compression engine from the archiver functionality. At the moment, there's no archiver functionality left. I needed to do this in order to improve the internal APIs. I'm still not completely satisfied - but it's a process (I dislike complex APIs). A better container layout, AES, password support, ... are already done.

    So, currently I'm working on rz's backbone: its LZ engine.

    1) I managed to speed up single-stream decompression by ~10%, but there's more room for future improvement. At the moment, I'm doing quite a bit of underflow checking in the ANS backend.
    2) I've managed to speed up single-stream compression by up to 40%. There's lots of room for future improvement; at the moment, the parser is very brute-force.
    3) I've introduced compression of blocks with (or without) injection of old data. Without injection, you get an n-times speedup of compression and decompression at the cost of some ratio, plus n-times memory usage. With injection, you get an n-times speedup of compression, only a tiny, tiny loss in ratio, and n-times memory usage during compression. (See the sketch at the end of this post.)

    Work on 3) is not finished yet. It's pretty nice to compress with rz at 5 MB/s.
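
    To illustrate how I read point 3) - n independent blocks, each optionally allowed to match against ("inject") all of the data that precedes it - here is a rough sketch with a placeholder compress_block() stub. It's only an interpretation of the idea, not rz's actual code:
    Code:
    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <thread>
    #include <vector>
    
    struct Block {
        const uint8_t* dict;    // injected old data (everything before the block); may be empty
        size_t dict_len;
        const uint8_t* src;     // the block itself
        size_t src_len;
        std::vector<uint8_t> out;
    };
    
    // Placeholder for an LZ back-end that can use a read-only prefix dictionary.
    // Stubbed as a plain copy ("stored" block) so the sketch compiles.
    static std::vector<uint8_t> compress_block(const uint8_t*, size_t,
                                               const uint8_t* src, size_t src_len) {
        return std::vector<uint8_t>(src, src + src_len);
    }
    
    std::vector<Block> compress_parallel(const uint8_t* data, size_t size,
                                         unsigned n, bool inject_old_data) {
        std::vector<Block> blocks(n);
        std::vector<std::thread> workers;
        const size_t chunk = (size + n - 1) / n;
        for (unsigned i = 0; i < n; ++i) {
            const size_t begin = std::min(static_cast<size_t>(i) * chunk, size);
            Block& b = blocks[i];
            b.src = data + begin;
            b.src_len = std::min(chunk, size - begin);
            // With injection, every worker also sees all data before its block, so
            // cross-block matches stay possible and the ratio loss is tiny.
            // Without injection, blocks are fully independent, so decompression can
            // be parallelized too, at a larger cost in ratio.
            b.dict = inject_old_data ? data : nullptr;
            b.dict_len = inject_old_data ? begin : 0;
            workers.emplace_back([&b] {
                b.out = compress_block(b.dict, b.dict_len, b.src, b.src_len);
            });
        }
        for (auto& t : workers) t.join();
        return blocks;
    }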

  6. Thanks (17):

    78372 (8th May 2018),avitar (6th May 2018),Crispin (8th June 2018),diskzip (6th May 2018),ffmla (14th May 2018),Gonzalo (6th May 2018),Hacker (5th May 2018),hunman (6th May 2018),load (6th May 2018),Mike (6th May 2018),oltjon (6th May 2018),ScottX (9th May 2018),Sergey3695 (6th May 2018),Sportman (6th May 2018),Stephan Busch (8th May 2018),WinnieW (13th May 2018),Zeokat (17th August 2018)

  7. #185
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    536
    Thanks
    236
    Thanked 90 Times in 70 Posts
    Well, that's really great news!! Especially the compression speedup part; that's the only drawback of rz, to my taste.

    I compared PA to razor too, and let me say, rz is in a league of its own. The only situation where PA is stronger is on reflate-able data. Now, with the aid of the new precomp + preflate, razor outperforms PA by a huge margin even with PA's very best settings. Just to mention one case: original 121 MB, razor 45 MB, PA 62 MB... about 3/4 of PA's size!!

  8. Thanks (2):

    De_johan (25th November 2018),diskzip (7th May 2018)

  9. #186
    Member
    Join Date
    May 2017
    Location
    Australia
    Posts
    126
    Thanks
    92
    Thanked 32 Times in 21 Posts
    Quote Originally Posted by Gonzalo View Post
    Well, that's really great news!! Especially the compression speedup part; that's the only drawback of rz, to my taste.

    I compared PA to razor too, and let me say, rz is in a league of its own. The only situation where PA is stronger is on reflate-able data. Now, with the aid of the new precomp + preflate, razor outperforms PA by a huge margin even with PA's very best settings. Just to mention one case: original 121 MB, razor 45 MB, PA 62 MB... about 3/4 of PA's size!!
    How does DiskZIP compare with the override command line string specified in the region below (inside File Explorer, right-click the data set, choose "Compress...", and then open the Settings dialog):

    [Screenshot attachment: override.PNG (the Settings dialog with the override command line string)]

    Set it to the first or the second item from the top of the drop-down list.

    It should be smaller than PA unless the dataset has reflate-compatible containers.

    The dictionary enabled by the first two settings is a whopping 1.5 GB, running on two CPU cores.

    It's faster than Razor, but how much worse it does on your data set than Razor is the big question I have for you, @Gonzalo...

  10. #187
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    536
    Thanks
    236
    Thanked 90 Times in 70 Posts
    I'm sorry, I don't have Windows, so I can't use explorer.exe. I will try to install your software under Wine, but I won't be able to use a 1.5 GB dictionary, that much I can tell you right now. I'll post the results when I have them.

  11. Thanks:

    diskzip (7th May 2018)

  12. #188
    Member
    Join Date
    Aug 2008
    Location
    Planet Earth
    Posts
    903
    Thanks
    84
    Thanked 329 Times in 230 Posts
    Quote Originally Posted by diskzip View Post
    How does DiskZIP compare with the override command line string specified in the region below
    DiskZIP - first command line string:

    2,446,786,568 bytes, 465 sec., x sec., chainstate
    55,207,735 bytes, 50 sec., x sec., nowiki
    116,196,417 bytes, 314 sec., x sec., mongo
    23,356,854 bytes, 82 sec., x sec., iis
    72,489,012 bytes, 159 sec., x sec., gaia

  13. Thanks:

    diskzip (7th May 2018)

  14. #189
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    748
    Thanks
    234
    Thanked 238 Times in 146 Posts
    Great work with RAZOR! Finally some hints about its internals ( https://encode.su/threads/2944-Rans-...ll=1#post56691 ) - let me copy them here:

    Quote Originally Posted by Christian View Post
    I can confirm fabian's results.

    During rz's design I had to decide between adaptive and semi-static modeling. I went fully adaptive using rANS. I did lots of experiments but ended up using two alternating states and 16-bit renormalization, too.

    Alternating states are very convenient and provide good performance. I will probably change back to explicit interleaving once the syntax and everything is frozen. Explicit interleaving can be faster, but it is quite an obstacle if you're experimenting with your codec's syntax.

    With 16-bit renormalization you don't need a loop or anything, as long as you keep your probability precision narrow enough. If you're doing pointer arithmetic instead of branches, it doesn't make a difference.
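
    To make the "two alternating states + 16-bit renormalization" part concrete, here is a small generic sketch in the spirit of ryg's public rANS code - my own illustration, not rz's actual coder. With a 32-bit state kept in [2^16, 2^32) and the probability precision at 16 bits or below, renormalization becomes a single branch rather than a loop:
    Code:
    #include <cstdint>
    #include <vector>
    
    static constexpr uint32_t PROB_BITS  = 14;            // probability precision (<= 16)
    static constexpr uint32_t PROB_SCALE = 1u << PROB_BITS;
    static constexpr uint32_t RANS_L     = 1u << 16;      // lower bound of the 32-bit state
    
    // Two alternating encoder states. As usual for rANS, symbols are fed in reverse
    // order and the emitted words are reversed afterwards; the caller alternates
    // 'which' = 0,1,0,1,... between consecutive symbols.
    struct RansEnc {
        uint32_t x[2] = { RANS_L, RANS_L };
        std::vector<uint16_t> words;
    
        void put(int which, uint32_t start, uint32_t freq) {
            uint32_t& s = x[which];
            const uint32_t x_max = ((RANS_L >> PROB_BITS) << 16) * freq;
            if (s >= x_max) {                             // 16-bit renorm: at most once, no loop
                words.push_back(static_cast<uint16_t>(s));
                s >>= 16;
            }
            s = ((s / freq) << PROB_BITS) + (s % freq) + start;
        }
    };
    
    // Matching decoder; its two states are seeded from the stream (omitted here) and
    // it alternates 'which' in the same order the encoder used.
    struct RansDec {
        uint32_t x[2];
        const uint16_t* in;
    
        uint32_t slot(int which) const { return x[which] & (PROB_SCALE - 1); }
    
        // Call after the model has mapped slot() back to the symbol's (start, freq).
        void advance(int which, uint32_t start, uint32_t freq) {
            uint32_t& s = x[which];
            s = freq * (s >> PROB_BITS) + (s & (PROB_SCALE - 1)) - start;
            if (s < RANS_L)                               // again a single branch
                s = (s << 16) | *in++;
        }
    };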

  15. #190
    Member
    Join Date
    May 2017
    Location
    Australia
    Posts
    126
    Thanks
    92
    Thanked 32 Times in 21 Posts
    Quote Originally Posted by Gonzalo View Post
    I'm sorry, I don't have Windows, so I can't use explorer.exe. I will try to install your software under Wine, but I won't be able to use a 1.5 GB dictionary, that much I can tell you right now. I'll post the results when I have them.
    Makes sense. Let me know where to get the dataset, and I'll try myself.

  16. #191
    Member
    Join Date
    May 2017
    Location
    Australia
    Posts
    126
    Thanks
    92
    Thanked 32 Times in 21 Posts
    Quote Originally Posted by Sportman View Post
    DiskZIP - first command line string:

    2,446,786,568 bytes, 465 sec., x sec., chainstate
    55,207,735 bytes, 50 sec., x sec., nowiki
    116,196,417 bytes, 314 sec., x sec., mongo
    23,356,854 bytes, 82 sec., x sec., iis
    72,489,012 bytes, 159 sec., x sec., gaia
    Thank you!

    The results are very far behind 7-Zip, which should definitely not be the case (they should be slightly better than stock 7-Zip).

    Are you using a 64-bit OS? Or maybe the second command line string would make a difference.

  17. #192
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    536
    Thanks
    236
    Thanked 90 Times in 70 Posts
    Quote Originally Posted by diskzip View Post
    Makes sense. Let me know where to get the dataset, and I'll try myself.
    121 MB? That was just a random folder with PDFs and stuff. I don't think there is anything private in there, so I'll just upload it for you in a few hours.

  18. Thanks:

    diskzip (8th May 2018)

  19. #193
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    536
    Thanks
    236
    Thanked 90 Times in 70 Posts
    Ok. Here it is:
    https://drive.google.com/file/d/1u1-...w?usp=drivesdk
    Sorry for the delay. I deleted just a few KB of personal information.
    I'm struggling with my connection these days, so I uploaded it compressed with preflate+razor.

  20. Thanks:

    diskzip (11th May 2018)

  21. #194
    Member
    Join Date
    May 2017
    Location
    Australia
    Posts
    126
    Thanks
    92
    Thanked 32 Times in 21 Posts
    Quote Originally Posted by Gonzalo View Post
    Ok. Here it is:
    https://drive.google.com/file/d/1u1-...w?usp=drivesdk
    Sorry for the delay. I deleted just a few KB of personal information.
    I'm struggling with my connection these days, so I uploaded it compressed with preflate+razor.
    So my results are in: 58.9 MB (second command line parameter override string), 59.0 MB (first override string), and 59.0 MB again (without any command line override at all).

    That's about 3 MB smaller than PA - as I was expecting - but I did use your preflated files (I just extracted the Razor archive and recompressed). Since the dataset itself was under 1 GB in size, the override strings enabling a 1.5 GB dictionary of course did not make any difference.

    And again, as expected, Razor substantially outdid DiskZIP's best available compression as well.

    @sportman - I am also happy to run your own datasets against the 1.5 GB dictionary; the results should again be slightly better than PA/stock 7-Zip. Since your datasets are actually larger than 1.5 GB, we can expect to see a more substantial improvement over PA/stock 7-Zip.

  22. #195
    Member
    Join Date
    Aug 2008
    Location
    Planet Earth
    Posts
    903
    Thanks
    84
    Thanked 329 Times in 230 Posts
    Quote Originally Posted by diskzip View Post
    Are you using a 64-bit OS? Or maybe the second command line string would make a difference.
    Yes, 64-bit OS.

    DiskZIP - second command line string:

    2,061,838,997 bytes, 952 sec., x sec., chainstate
    55,207,671 bytes, 173 sec., x sec., nowiki
    115,967,666 bytes, 318 sec., x sec., mongo
    23,356,791 bytes, 81 sec., x sec., iis
    72,524,140 bytes, 157 sec., x sec., gaia

  23. Thanks:

    diskzip (14th May 2018)

  24. #196
    Member
    Join Date
    May 2017
    Location
    Australia
    Posts
    126
    Thanks
    92
    Thanked 32 Times in 21 Posts
    Quote Originally Posted by Sportman View Post
    Yes, 64-bit OS.

    DiskZIP - second command line string:

    2,061,838,997 bytes, 952 sec., x sec., chainstate
    55,207,671 bytes, 173 sec., x sec., nowiki
    115,967,666 bytes, 318 sec., x sec., mongo
    23,356,791 bytes, 81 sec., x sec., iis
    72,524,140 bytes, 157 sec., x sec., gaia
    Oh nice! It seems DiskZIP has outperformed PA on 3 out of the 5 data sets, by quite a margin:

    PA, Optimize Strong, Extreme:

    2,262,582,680 bytes, 1194 sec., 601 sec., chainstate
    61,655,558 bytes, 34 sec., 25 sec., nowiki
    143,163,454 bytes, 587 sec., 42 sec., mongo
    16,417,450 bytes, 35 sec., 28 sec., iis
    71,540,886 bytes, 180 sec., 16 sec., gaia

    Comparing speed:

    chainstate - faster, smaller
    nowiki - slower, smaller
    mongo - faster, smaller
    iis - slower, bigger
    gaia - faster, bigger

    If only it weren't for the fact that Razor exists, DiskZIP would be the file compression king.

    Thank you for running the tests.

  25. #197
    Member SolidComp's Avatar
    Join Date
    Jun 2015
    Location
    USA
    Posts
    245
    Thanks
    100
    Thanked 48 Times in 32 Posts
    FYI, you might suffer some name collisions with RAZOR:

    1. A "lightweight compression and classification algorithm" called RAZOR was introduced in 2013: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3926547/

    2. Microsoft has had a web templating language called Razor for a few years: https://docs.microsoft.com/en-us/asp...=visual-studio

    Hopefully, these won't cause many problems. But if you Google "razor compressor", the first hit is this thread, and the second is the other RAZOR compression algorithm mentioned above.

  26. Thanks:

    Christian (9th December 2018)

  27. #198
    Member
    Join Date
    Jun 2013
    Location
    Sweden
    Posts
    150
    Thanks
    9
    Thanked 25 Times in 23 Posts
    Code:
    What:
    2018-08-11  20:36      17 038 208  Asus.a55a-sx069v.!.ecc
    2018-08-11  20:17             321  Asus.a55a-sx069v.!.md5
    2018-08-11  20:28             657  Asus.a55a-sx069v.!.sfv
    2012-06-17  15:49   4 155 537 408  Asus.a55a-sx069v.DVD1.iso
    2012-06-17  15:52   3 879 389 184  Asus.a55a-sx069v.DVD2.iso
    2012-06-17  15:54   3 878 299 648  Asus.a55a-sx069v.DVD3.iso
    2012-06-17  15:54     623 130 624  Asus.a55a-sx069v.DVD4.iso
    2013-04-04  15:45  47 717 758 477  Asus.a55a-sx069v.Factory.(D599759F).mrimg
    60 271 154 527 bytes in 8 files and 2 dirs    60 271 493 120 bytes allocated
    
    Compressed to:
    30 819 315 607  ASUS.g4.x0.exdupe
    30 603 707 898  ASUS.g8.x0.exdupe
    30 482 071 410  ASUS.g16.x0.exdupe
    19 357 338 201  A55A.x0.g4.exdupe.7z1805  (lzma2 mx mem=1536m)
    19 225 853 872  ASUS.g16.x0.exdupe.7z (lzma2 mx mem=1536m)
    
    Exdupe took maybe 15 minutes, and 7z needed almost 4 hours on a 3770K with SATA
    Code:
    rz.exe a -d 1023m asus.rk a*
    
     *** RAZOR Archiver 1.01 (2017-09-14) - DEMO/TEST version ***
     *** (c) Christian Martelock (christian.martelock@web.de) ***
    
     Scanning y:\asus\a*
     Found 0 dirs, 8 files, 60271154527 bytes.
    
     Creating archive y:\asus.rk
     Window : 1047552K (4096M..1024G)
     Header : 204
     Size   : 16982840131
    
     Archive ok. Added 0 dirs, 8 files, 60271154527 bytes.
     CPU time = 94408,872s / wall time = 65992,220s   ( on laptop 3610qm with usb2.0-disk)
    Extraction from usb2 through gb-wired-network to sata = 30 minutes, md5 ok.

    Edit 2018-08-15:
    Extraction from sata to another sata + temp on 3rd sata
    Done. Processed 1 archives, checked 0 non-archives.
    CPU time = 428,004s / wall time = 993,443s
    All files md5 ok.

    Edit: 2018-08-16:
    extraction from usb2 to 2x-1tb-sata-raid:
    7z x -so A55A.x0.g4.exdupe.7z | exdupe -R -stdin f:\
    WROTE 60,271,154,527 bytes in 8 file(s) Elapsed: 0:15:08,03

    Edit: 2018-08-23
    From Sata-raid to sata (3770k)
    zpaq v7.15 journaling archiver, compiled Aug 17 2016 Adding 60271.154527 MB in 8 files
    m11.Frag.2.zpaq -> 30209.977793 -> 23558.333930) = 23558.333930 MB 461.404 seconds (all OK)
    m11.Frag.6.zpaq -> 39743.956212 -> 30847.415005) = 30847.415005 MB 473.385 seconds (all OK)
    m14.Frag.2.zpaq -> 30209.977793 -> 23083.491356) = 23083.491356 MB 468.861 seconds (all OK)
    m14.Frag.6.zpaq -> 39743.956212 -> 30238.783424) = 30238.783424 MB 509.374 seconds (all OK)
    m21.Frag.2.zpaq -> 30209.977793 -> 23195.851747) = 23195.851747 MB 796.307 seconds (all OK)
    m21.Frag.6.zpaq -> 39743.956212 -> 30405.249224) = 30405.249224 MB 884.074 seconds (all OK)
    m24.Frag.2.zpaq -> 30209.977793 -> 22752.608201) = 22752.608201 MB 943.151 seconds (all OK)
    m24.Frag.6.zpaq -> 39743.956212 -> 29808.746125) = 29808.746125 MB 1108.792 seconds (all OK)
    m26.Frag.2.zpaq -> 30209.977793 -> 22512.611118) = 22512.611118 MB 1276.197 seconds (all OK)
    m41.Frag.2.zpaq -> 30209.977793 -> 21980.871331) = 21980.871331 MB 2711.126 seconds (all OK)

    usb2.0 on 3610qm @ 1600mhz so fan doesn't spin up
    asus.fragment2.m31.zpaq -> 30209.977793 -> 22398.668780) = 22398.668780 MB 4483.702 seconds (all OK)
    asus.fragment2.m34.zpaq -> 30209.977793 -> 22037.121860) = 22037.121860 MB 4600.375 seconds (all OK)
    asus.fragment2.m36.zpaq -> 30209.977793 -> 21842.826063) = 21842.826063 MB 4886.544 seconds (all OK)
    asus.fragment2.m44.zpaq -> 30209.977793 -> 21620.268033) = 21620.268033 MB 9169.801 seconds (all OK)
    asus.fragment2.m46.zpaq -> 30209.977793 -> 21435.655615) = 21435.655615 MB 9970.118 seconds (all OK)

    Edit: 2018-08-28 added g16-exdupe.7z

  28. #199
    Member
    Join Date
    Aug 2018
    Location
    Spain
    Posts
    3
    Thanks
    4
    Thanked 0 Times in 0 Posts
    Definitely I have to test it and compare it with the others. Amazing job done here with RAZOR - congrats!

  29. #200
    Member
    Join Date
    Sep 2007
    Location
    Denmark
    Posts
    895
    Thanks
    54
    Thanked 109 Times in 86 Posts
    Quote Originally Posted by ScottX View Post
    RZ SFX 1.0.1, changes:
    - SFX module rewritten in VC++ 6.0, so no additional libraries are needed (the old module needed the Visual C++ 2008 redistributable installed).
    - RzSfxCreator - added a progress bar and it is now cancelable.
    - Both are now x86 apps.

    How to use it:

    • Extract included ZIP to a directory.
    • Copy Christian's RZ.exe to the same directory.
      I recommend compressing RZ.exe with UPX (because RZ.exe will be included in every self-extracting EXE).
    • Run RzSfxCreator.exe and create SFXs.

    I just tried this again, but it's not extracting files;
    it just immediately jumps to "Archive was extracted successfuly".

    Also, I think it's supposed to be "successfully".

    It worked previously, but back then I only had a single folder as the root of the archive.
    This time I skipped that folder and have the folder's contents as the root of the archive.
    Not sure if that is the trigger.

    The archive is 873 MB; if you want me to upload it anywhere, let me know.

  30. #201
    Member
    Join Date
    Feb 2018
    Location
    Czech Republic
    Posts
    12
    Thanks
    5
    Thanked 26 Times in 7 Posts
    Quote Originally Posted by SvenBent View Post
    Also, I think it's supposed to be "successfully".
    Ahaa, yes, there is a typo. Sorry for that.

    Quote Originally Posted by SvenBent View Post
    It worked previously, but back then I only had a single folder as the root of the archive.
    This time I skipped that folder and have the folder's contents as the root of the archive.
    Not sure if that is the trigger.

    The archive is 873 MB; if you want me to upload it anywhere, let me know.
    I'm not sure if I understand you correctly.
    Could you please upload the archive somewhere? I will take a look at it. Thanks.

  31. #202
    Member
    Join Date
    Oct 2018
    Location
    Aurangabad
    Posts
    1
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Thanks for sharing.

  32. #203
    Member
    Join Date
    Oct 2018
    Location
    Russia
    Posts
    7
    Thanks
    2
    Thanked 1 Time in 1 Post
    Is there any chance you could build and share a 64-bit Linux binary?

  33. #204
    Member
    Join Date
    Jan 2017
    Location
    Selo Bliny-S'edeny
    Posts
    24
    Thanks
    7
    Thanked 10 Times in 8 Posts
    I was reading about ROLZ the other day. The closest thing I can think of is the hash-chain match finder in lzma/lz4/zstd. Instead of the full match distance, it would encode the number of jumps it takes in the hash chain to get to the match (hence "reduced offset"). This means the decoder must replicate the hash table, which takes 4.5x the window size.

    So how can this ROLZ thing work in practice with much lower memory requirements, specifically in RAZOR? Does it mean that it runs two match finders: the smaller ROLZ one, replicated in the decoder, and another LZ77 one for longer distances? Are they then encoded as different symbols in the same alphabet, ROLZ-match and LZ-match? I have never thought of anything like this!
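
    To make my guess concrete: the textbook ROLZ construction keeps, per small context, a short list of recent positions and codes a match as an index into that list; both sides update the table identically, so the decoder's extra state is small and fixed. A generic sketch (my own guess, not necessarily what RAZOR does):
    Code:
    #include <cstddef>
    #include <cstdint>
    
    static constexpr unsigned CTX_BITS  = 16;   // context = previous 2 bytes -> 64K contexts
    static constexpr unsigned NUM_SLOTS = 16;   // candidate offsets kept per context
    
    // Per-context ring buffers of recent positions. This table is ~4 MB regardless of
    // window size - that's the point of "reduced offsets". Allocate it on the heap
    // (e.g. via new or std::make_unique); it is far too big for the stack.
    struct RolzTable {
        uint32_t pos[1u << CTX_BITS][NUM_SLOTS] = {};
        uint8_t  head[1u << CTX_BITS] = {};
    
        static unsigned context(const uint8_t* data, size_t i) {
            return (static_cast<unsigned>(data[i - 2]) << 8) | data[i - 1];  // needs i >= 2
        }
    
        // Called at every position by BOTH encoder and decoder, so they stay in sync.
        void update(const uint8_t* data, size_t i) {
            const unsigned c = context(data, i);
            head[c] = static_cast<uint8_t>((head[c] + 1) % NUM_SLOTS);
            pos[c][head[c]] = static_cast<uint32_t>(i);
        }
    
        // The k-th most recent position seen in the current context. A ROLZ match is
        // transmitted as k (0..NUM_SLOTS-1) plus a length - much cheaper to code than
        // a full LZ77 distance.
        uint32_t candidate(const uint8_t* data, size_t i, unsigned k) const {
            const unsigned c = context(data, i);
            return pos[c][(head[c] + NUM_SLOTS - k) % NUM_SLOTS];
        }
    };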

  34. #205
    Member
    Join Date
    Apr 2015
    Location
    Greece
    Posts
    84
    Thanks
    34
    Thanked 26 Times in 17 Posts
    Very good point. Maybe it's a ROLZ/LZ77 hybrid, since ROLZ wins over LZ77 only for small-distance matches.
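
    Pictured as a decoder loop, the hybrid could look roughly like this - one token alphabet, where a match is either a cheap ROLZ slot index or a full LZ77 distance. The token classes and the Model interface here are made up purely to illustrate the idea:
    Code:
    #include <cstddef>
    #include <cstdint>
    
    enum class Token { Literal, RolzMatch, LzMatch, End };
    
    // 'Model' stands for the adaptive entropy decoder plus a ROLZ table of recent
    // per-context positions (like the one sketched in the previous post).
    template <class Model>
    size_t decode_stream(Model& m, uint8_t* out) {
        size_t pos = 0;
        for (;;) {
            const Token t = m.decode_token();
            if (t == Token::End)
                break;
            if (t == Token::Literal) {
                out[pos] = m.decode_literal();
                m.rolz_update(out, pos);
                ++pos;
                continue;
            }
            // Both match kinds share the same token alphabet; only the way the
            // source position is recovered differs.
            const size_t src = (t == Token::RolzMatch)
                             ? m.rolz_candidate(out, pos, m.decode_rolz_slot()) // small index
                             : pos - m.decode_lz_distance();                    // full distance
            const size_t len = m.decode_length();
            for (size_t i = 0; i < len; ++i, ++pos) {   // byte-wise copy is overlap-safe
                out[pos] = out[src + i];
                m.rolz_update(out, pos);                // keep the ROLZ table in sync
            }
        }
        return pos;
    }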

  35. #206
    Programmer
    Join Date
    Feb 2007
    Location
    Germany
    Posts
    420
    Thanks
    28
    Thanked 155 Times in 18 Posts
    @SolidComp:
    I released razor somewhat in a hurry, because I knew that having children would make coding time a scarce commodity. The name of the algorithm/software is unfortunate; I'll probably change it for the next iteration.

    @svpv / algorithm:
    You're right. As stated in the first post, razor has an lz/rolz compression engine. The assumption that lz wins out at long distances is not right - enwik9 is a prime example of the opposite. rolz matches are much cheaper than lz matches. Reaching such a low memory footprint for rolz decoding (0.6N) was quite some work, while keeping decompression speed up.

    -----------------------

    At the moment, progress is very slow. Sometimes I'm working on compression-related things, but I don't want to bore you with details. I'll let you know when I have something new.

  36. Thanks (10):

    algorithm (9th December 2018),Amsal (9th March 2020),dado023 (10th December 2018),danlock (11th December 2018),diskzip (24th December 2018),encode (9th December 2018),Godrih (16th April 2019),hunman (9th December 2018),Stephan Busch (10th December 2018),t64 (15th October 2019)

  37. #207
    Member
    Join Date
    Mar 2012
    Location
    Paris
    Posts
    38
    Thanks
    11
    Thanked 2 Times in 2 Posts
    Quote Originally Posted by ScottX View Post
    RZ SFX 1.0.1, changes:
    - SFX module rewritten in VC++ 6.0, so no additional libraries are needed (the old module needed the Visual C++ 2008 redistributable installed).
    - RzSfxCreator - added a progress bar and it is now cancelable.
    - Both are now x86 apps.

    How to use it:

    • Extract included ZIP to a directory.
    • Copy Christian's RZ.exe to the same directory.
      I recommend compressing RZ.exe with UPX (because RZ.exe will be included in every self-extracting EXE).
    • Run RzSfxCreator.exe and create SFXs.
    I am still using the old 1.0 version because 1.0.1 gets removed by my antivirus (detected as Generik.HNLYWOU). Anybody else?

  38. #208
    Member
    Join Date
    Feb 2020
    Location
    Area 51
    Posts
    1
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Somehow RZ's results aren't listed at LTCB, so here are my own:

    ENWIK9 results:
    165 104 322 bytes (157 MB) - drt (ignoring its added size) + rz -d 625m (-d set to the enwik9.drt size) - would be #27, but just for info, since the size of the drt decoder & dictionary isn't included
    165 231 857 bytes (157 MB) - drt* (127 535 bytes) + rz -d 625m (-d set to the enwik9.drt size) - #28, between PAQ9a and UDA
    176 987 808 bytes (168 MB) - rz -d 512m


    ENWIK8 results:
    20 899 534 bytes (19.9 MB) - rz -d 100m
    21 027 069 bytes (20.0 MB) - drt* (127 535 bytes) + rz -d 60m (-d set to the enwik8.drt size)
    21 992 522 bytes (20.9 MB) - drt (ignoring its added size) + rz -d 60m (-d set to the enwik8.drt size)

    Notes:
    * total size of "drt.exe + dict" packed by RZ, added to the compressed size (for enwik9: 165 104 322 + 127 535 = 165 231 857)
    drt's dictionary, unpacked, is 465 210 bytes
    no timings, for some reason


