Hi All,
I'm new to compression, just started. How do I use this in cmd? I want to compress a PS2 ISO file. Please help.
Here's a step-by-step video: https://youtu.be/p9akLf3xNpc. Drag rz into the console and choose your options, then drag the files you want to compress into the console and press Enter.
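For a PS2 ISO, the commands would look roughly like this, going by the rz invocation shown later in this thread. The archive name, the dictionary size, and the `x` extract command are my assumptions, so run rz without arguments to see the real syntax:
Code:
:: create game.rk with a 256 MB dictionary (names/sizes are examples)
rz a -d 256m game.rk game.iso
:: extract it again
rz x game.rk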
Thank you for the great feedback, everyone.
I'm finally able to work a few hours a week on compression again, so here's a small, sketchy update on progress.
I've completely separated the compression engine from the archiver functionality. At the moment, there's no archiver functionality left. I needed to do this in order to improve the internal APIs. I'm still not completely satisfied - but it's a process (I dislike complex APIs). A better container layout, AES, password support, ... are already done.
So, currently I'm working on rz's backbone - its LZ engine.
1) I managed to speed up single-stream decompression by ~10%. But there's more room for future improvement. Atm, I'm doing quite a bit of underflow-checking in the ANS backend (see the sketch after this list).
2) I've managed to speed up single-stream compression by up to 40%. There's lots of room for future improvement. Atm, the parser is very brute-force.
3) I've introduced compression of blocks with (or without) injection of old data. Without injection, you get an n-times speedup of compression and decompression, at the cost of some ratio and n times the memory usage. With injection, you get an n-times speedup of compression with only a tiny, tiny loss in ratio, and n times the memory usage during compression.
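Since point 1 mentions underflow checks, here is what that check typically looks like in a generic rANS decoder - a textbook sketch, not RAZOR's actual backend; the 12-bit scale and the interval bound are my assumptions for illustration:
Code:
/* Generic rANS decoder step (textbook form, not RAZOR's code). */
#include <stdint.h>

#define RANS_LOW (1u << 23)   /* bottom of the normalized interval */

typedef struct { uint32_t x; const uint8_t *in; } RansDec;

/* Advance after decoding a symbol with frequency `freq` and
   cumulative frequency `cum` under a 4096 (12-bit) total. */
static inline void rans_advance(RansDec *d, uint32_t cum, uint32_t freq)
{
    d->x = freq * (d->x >> 12) + (d->x & 0xFFF) - cum;
    /* the underflow check: refill from the stream while the state
       has dropped below the normalized interval */
    while (d->x < RANS_LOW)
        d->x = (d->x << 8) | *d->in++;
}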
Work on point 3 is not finished yet. It's pretty nice to compress using rz at 5 MB/s.
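To make the two block modes of point 3 concrete, here is a hypothetical sketch of how I read the description - none of these names or signatures are rz's real API:
Code:
#include <stddef.h>
#include <string.h>

/* Stub standing in for a real LZ coder: it just stores the bytes here,
   but a real one would also search the `dict_len` bytes immediately
   before src for matches, without emitting them. Hypothetical API. */
static size_t lz_compress(const unsigned char *src, size_t dict_len,
                          size_t src_len, unsigned char *dst)
{
    (void)dict_len;           /* a real coder would use this history */
    memcpy(dst, src, src_len);
    return src_len;
}

/* Compress block i of `total` input bytes split into fixed-size blocks.
   Each of the n blocks can go to its own thread. */
static size_t compress_block(const unsigned char *buf, size_t total,
                             size_t block, size_t i, unsigned char *dst,
                             int inject)
{
    size_t off = i * block;
    size_t len = (off + block > total) ? total - off : block;
    /* without injection: empty history, so blocks are fully independent
       and decompression parallelizes too, at some cost in ratio */
    /* with injection: all preceding input is visible as history, so the
       ratio loss is tiny, but only compression parallelizes (the decoder
       needs the previous blocks' output as history) */
    size_t hist = inject ? off : 0;
    return lz_compress(buf + off, hist, len, dst);
}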
Well, that's really great news!! Especially the compression speedup - speed is the only drawback of rz, to my taste.
I compared PA to razor too, and let me say, rz is in a league of its own. The only situation where PA is stronger is on reflate-able data. Now, with the aid of the new precomp + preflate, razor outperforms PA by a huge margin, even with PA's very best settings. Just to mention one case: original 121 MB, razor 45 MB, PA 62 MB... 3/4!!
How does DiskZIP compare with the override command line string specified in the region below (inside File Explorer, right-click the data set, choose "Compress...", and then open the Settings dialog):
Set to the first or second item from the top of the drop-down list?
It should be smaller than PA, unless the dataset has reflate-compatible containers.
The dictionary enabled by the first two settings is a whopping 1.5 GB running on two CPU cores.
Faster than Razor, but how much worse it is than Razor on your data set is the big question I have for you, @Gonzalo...
I'm sorry, I don't have Windows to use explorer.exe. I will try to install your software under Wine, but I won't be able to use a 1.5 GB dictionary, that I can tell you right now. I'll post the results when I have them.
Great work with RAZOR! Finally some hints about the internals ( https://encode.su/threads/2944-Rans-...ll=1#post56691 ) - let me copy them here:
Ok. Here it is:
https://drive.google.com/file/d/1u1-...w?usp=drivesdk
Sorry for the delay. I deleted just a few kb of personal information.
I'm struggling with my connection these days, so I uploaded it using preflate+razor.
So my results are in: 58.9 MB (second command line parameter override string), 59.0 MB (first override string), and 59.0 MB again (without any command line parameter override at all).
That's about 3 MB smaller than PA - as I was expecting - but I did use your preflated files (I just extracted the Razor archive and recompressed). Since the dataset itself was under 1 GB in size, the override strings enabling a 1.5 GB dictionary of course did not make any difference.
And again as expected, Razor substantially outdid DiskZIP's best available compression as well.
@sportman - I am also happy to run your own dataset against the 1.5 GB dictionary; the results should again be slightly better than PA/stock 7-Zip. Since your datasets are actually larger than 1.5 GB, we can expect to see a more substantial improvement over PA/stock 7-Zip.
Oh nice! It seems DiskZIP has outperformed PA on 3 out of the 5 data sets, by quite a margin:
PA, Optimize Strong, Extreme:
2,262,582,680 bytes, 1194 sec., 601 sec., chainstate
61,655,558 bytes, 34 sec., 25 sec., nowiki
143,163,454 bytes, 587 sec., 42 sec., mongo
16,417,450 bytes, 35 sec., 28 sec., iis
71,540,886 bytes, 180 sec., 16 sec., gaia
Comparing speed:
chainstate - faster, smaller
nowiki - slower, smaller
mongo - faster, smaller
iis - slower, bigger
gaia - faster, bigger
If only it weren't for the fact that Razor exists, DiskZIP would be the file compression king
Thank you for running the tests.
FYI, you might suffer some name collisions with RAZOR:
1. A "lightweight compression and classification algorithm" called RAZOR was introduced in 2013: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3926547/
2. Microsoft has had a web templating language called Razor for a few years: https://docs.microsoft.com/en-us/asp...=visual-studio
Hopefully these won't cause many problems. But if you google "razor compressor", the first hit is this thread, and the second is the other RAZOR compression algorithm mentioned above.
Code:
What:
2018-08-11 20:36        17 038 208  Asus.a55a-sx069v.!.ecc
2018-08-11 20:17               321  Asus.a55a-sx069v.!.md5
2018-08-11 20:28               657  Asus.a55a-sx069v.!.sfv
2012-06-17 15:49     4 155 537 408  Asus.a55a-sx069v.DVD1.iso
2012-06-17 15:52     3 879 389 184  Asus.a55a-sx069v.DVD2.iso
2012-06-17 15:54     3 878 299 648  Asus.a55a-sx069v.DVD3.iso
2012-06-17 15:54       623 130 624  Asus.a55a-sx069v.DVD4.iso
2013-04-04 15:45    47 717 758 477  Asus.a55a-sx069v.Factory.(D599759F).mrimg

60 271 154 527 bytes in 8 files and 2 dirs
60 271 493 120 bytes allocated

Compressed to:
30 819 315 607  ASUS.g4.x0.exdupe
30 603 707 898  ASUS.g8.x0.exdupe
30 482 071 410  ASUS.g16.x0.exdupe
19 357 338 201  A55A.x0.g4.exdupe.7z1805 (lzma2 mx mem=1536m)
19 225 853 872  ASUS.g16.x0.exdupe.7z    (lzma2 mx mem=1536m)
Exdupe: maybe 15 minutes; 7z needed almost 4 hours on a 3770K with SATA. Extraction from USB 2.0 through a gigabit wired network to SATA = 30 minutes, md5 ok.
Code:
rz.exe a -d 1023m asus.rk a*

*** RAZOR Archiver 1.01 (2017-09-14) - DEMO/TEST version ***
*** (c) Christian Martelock (christian.martelock@web.de) ***

Scanning y:\asus\a*
Found 0 dirs, 8 files, 60271154527 bytes.
Creating archive y:\asus.rk

Window : 1047552K (4096M..1024G)
Header : 204
Size   : 16982840131

Archive ok. Added 0 dirs, 8 files, 60271154527 bytes.
CPU time = 94408,872s / wall time = 65992,220s
(on a laptop 3610QM with a USB 2.0 disk)
Edit 2018-08-15:
Extraction from sata to another sata + temp on 3rd sata
Done. Processed 1 archives, checked 0 non-archives.
CPU time = 428,004s / wall time = 993,443s
All files md5 ok.
Edit: 2018-08-16:
extraction from usb2 to 2x-1tb-sata-raid:
7z x -so A55A.x0.g4.exdupe.7z | exdupe -R -stdin f:\
WROTE 60,271,154,527 bytes in 8 file(s) Elapsed: 0:15:08,03
Edit: 2018-08-23
From Sata-raid to sata (3770k)
zpaq v7.15 journaling archiver, compiled Aug 17 2016
Adding 60271.154527 MB in 8 files
m11.Frag.2.zpaq -> 30209.977793 -> 23558.333930) = 23558.333930 MB 461.404 seconds (all OK)
m11.Frag.6.zpaq -> 39743.956212 -> 30847.415005) = 30847.415005 MB 473.385 seconds (all OK)
m14.Frag.2.zpaq -> 30209.977793 -> 23083.491356) = 23083.491356 MB 468.861 seconds (all OK)
m14.Frag.6.zpaq -> 39743.956212 -> 30238.783424) = 30238.783424 MB 509.374 seconds (all OK)
m21.Frag.2.zpaq -> 30209.977793 -> 23195.851747) = 23195.851747 MB 796.307 seconds (all OK)
m21.Frag.6.zpaq -> 39743.956212 -> 30405.249224) = 30405.249224 MB 884.074 seconds (all OK)
m24.Frag.2.zpaq -> 30209.977793 -> 22752.608201) = 22752.608201 MB 943.151 seconds (all OK)
m24.Frag.6.zpaq -> 39743.956212 -> 29808.746125) = 29808.746125 MB 1108.792 seconds (all OK)
m26.Frag.2.zpaq -> 30209.977793 -> 22512.611118) = 22512.611118 MB 1276.197 seconds (all OK)
m41.Frag.2.zpaq -> 30209.977793 -> 21980.871331) = 21980.871331 MB 2711.126 seconds (all OK)
usb2.0 on 3610qm @ 1600mhz so fan doesn't spin up
asus.fragment2.m31.zpaq -> 30209.977793 -> 22398.668780) = 22398.668780 MB 4483.702 seconds (all OK)
asus.fragment2.m34.zpaq -> 30209.977793 -> 22037.121860) = 22037.121860 MB 4600.375 seconds (all OK)
asus.fragment2.m36.zpaq -> 30209.977793 -> 21842.826063) = 21842.826063 MB 4886.544 seconds (all OK)
asus.fragment2.m44.zpaq -> 30209.977793 -> 21620.268033) = 21620.268033 MB 9169.801 seconds (all OK)
asus.fragment2.m46.zpaq -> 30209.977793 -> 21435.655615) = 21435.655615 MB 9970.118 seconds (all OK)
Edit: 2018-08-28 added g16-exdupe.7z
Definitely I have to test it and compare it with others. Amazing job done here with RAZOR, congrats!
I just tried this again, but it's not extracting files.
It just immediately jumps to "Archive was extracted successfuly".
Also, I think that's supposed to be "successfully".
It worked previously, but back then I only had a single folder as the root of the archive.
This time I skipped that folder and have the folder's contents as the root of the archive.
Not sure if that is the trigger.
The archive is 873 MB; if you want me to upload it anywhere, let me know.
Thanks for sharing.
Is there any chance to build and share a 64-bit Linux blob?
I was reading about ROLZ the other day. The closest thing I can think of is the hash-chain match finder in lzma/lz4/zstd. Instead of the full match distance, it would encode the number of jumps it takes along the hash chain to get to the match (hence "reduced offset"). This means the decoder must replicate the hash table, which takes 4.5x the window size.
So how can this ROLZ thing work in practice with much lower memory requirements, specifically in RAZOR? Does it mean that it runs two match finders - the smaller ROLZ one, replicated in the decoder, and another LZ77 one for longer distances? Are they then encoded as different symbols in the same alphabet, ROLZ-match and LZ-match? I have never thought of anything like this!
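For reference, here is a minimal sketch of the classic ROLZ scheme as I understand it - generic textbook ROLZ, not RAZOR's code, and the table size and 1-byte context are made up. Each context keeps a small ring of recent positions, and a match is coded as a slot index plus a length instead of a full offset:
Code:
/* Generic ROLZ match finder sketch - not RAZOR's code. */
#include <stdint.h>

#define SLOTS 16  /* recent offsets kept per context (assumed) */

typedef struct {
    uint32_t pos[256][SLOTS]; /* recent positions, keyed by previous byte */
    uint32_t head[256];       /* ring-buffer cursor per context */
} Rolz;

/* Find the longest match for buf[p..n) among the positions recorded for
   the current context; the coded "offset" is just the slot index. */
static uint32_t rolz_find(const Rolz *r, const uint8_t *buf,
                          uint32_t p, uint32_t n, int *slot)
{
    uint8_t ctx = buf[p - 1];
    uint32_t best = 0;
    for (int i = 0; i < SLOTS; i++) {
        uint32_t cand = r->pos[ctx][i], len = 0;
        if (cand == 0 || cand >= p) continue;  /* empty/invalid slot */
        while (p + len < n && buf[cand + len] == buf[p + len]) len++;
        if (len > best) { best = len; *slot = i; }
    }
    return best;
}

/* Both encoder and decoder call this after every coded position, so the
   tables stay in sync and the slot index is decodable. */
static void rolz_update(Rolz *r, const uint8_t *buf, uint32_t p)
{
    uint8_t ctx = buf[p - 1];
    r->pos[ctx][r->head[ctx] % SLOTS] = p;
    r->head[ctx]++;
}
A table like this is tiny compared to replicating a full hash-chain table, which hints at how a ROLZ decoder can get by with far less memory - the window itself still has to be kept, of course.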
Very good point. Maybe it's a ROLZ/LZ77 hybrid, as ROLZ wins over LZ77 only for small-distance matches.
@SolidComp:
I released razor somewhat in a hurry, because I knew that having children would make coding time a scarce commodity. The name of the algorithm/software is unfortunate. I'll probably change it for the next iteration.
@svpv / algorithm:
You're right. As stated in the first post, razor has an lz/rolz compression engine. The assumption that lz wins at long distances is not right - enwik9 is a prime example of the opposite. rolz matches are much cheaper to encode than lz matches. Reaching such a low memory footprint for rolz decoding (0.6N) was quite some work - while keeping decompression speed up.
-----------------------
At the moment, progress is very slow. Sometimes I'm working on compression-related things - but I don't want to bore you with details. I'll let you know when I have something new.
Somehow RZ's results aren't listed at LTCB, so here are my own:
ENWIK9 results:
165 104 322 (157 MB) - drt (ignoring the added size) + rz -d 625m (enwik9.drt size); would be #27, just for info, since the size of the drt decoder & dict isn't included
165 231 857 (157 MB) - drt* (127 535 bytes) + rz -d 625m (enwik9.drt size); #28, between PAQ9a and UDA
176 987 808 (168 MB) - rz -d 512m
ENWIK8 results:
20 899 534 (19.9 MB) - rz -d 100m
21 027 069 (20.0 MB) - drt* (127 535 bytes) + rz -d 60m (enwik8.drt size)
21 992 522 (20.9 MB) - drt (ignoring the added size) + rz -d 60m (enwik8.drt size)
Notes:
* total size of "drt.exe + dict" packed by RZ
drt's dict unpacks to a 465 210 byte file
no timings for some reason
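For anyone trying to reproduce the rz side of these numbers, the invocation should look roughly like this, going by the syntax shown earlier in the thread (the archive name is mine, and the drt preprocessing step has to be run separately beforehand, with whatever syntax your drt build uses):
Code:
:: enwik9.drt produced beforehand by the drt preprocessor
rz a -d 625m enwik9.rk enwik9.drt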
I'm kinda new here, so can you tell me how to open that rz_1.03.6.zip?