New MOC Update is online
Added:
- Flashzip 0.99c2
- uharc 0.6a
- Freearc 0.666
- CSC Archiver 32f (TAR mode)
- Rar 4.01
Also added: a new Rate Time/Size rank!
http://heartofcomp.altervista.org/MOC/MOCA.htm
Something is wrong with the rar 4.01 "options" row. The Solid row shows "yes", but in the options you disable solid archiving with "-s-".
The option tested is -s, not -s-!
Hi!
New MOC Update is online
09.27.2011
Added:
- LZMAT
- LZ4 1.1
- QUAD 1.12
- BALZ 1.15
- BSC 3.0.0
Updated link:
http://heartofcomp.altervista.org/MOC/MOC.htm
New update is online from:
http://heartofcomp.altervista.org/MOC/MOC.htm
Thanks for the update Nania. Nice presentation, I like it.
While looking at the results, I've been interested in the compression & decompression times.
Let's focus on decompression for now.
Since LZ4's decoding speed is about 1 GB/s at the CPU level, I assume that LZ4's decoding time is almost entirely I/O bound.
Nonetheless, many compressors manage to be faster than LZ4 at decoding, sometimes by a very large margin:
WSL : 29.5s
UltraLZ : 31.7s
Tornado : 32.2s
Thor : 44.1s
LZ4 : 52.7s
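To make the I/O-bound hypothesis concrete, here is a rough back-of-the-envelope check (a Python sketch; the corpus size is an assumption taken from the 2.046.886.013 bytes quoted later in this thread, and the 1 GB/s figure is the CPU-level decoding speed mentioned above):

```python
# Rough sanity check of the I/O-bound hypothesis (sketch).
# Assumption: the test set is the ~2 GB corpus mentioned later in the thread.
corpus_bytes = 2_046_886_013
cpu_decode_rate = 1_000_000_000        # ~1 GB/s, LZ4's claimed CPU-level speed
observed_seconds = 52.7                # LZ4 decoding time from the list above

cpu_only_seconds = corpus_bytes / cpu_decode_rate
effective_rate_mb = corpus_bytes / observed_seconds / 1_000_000

print(f"CPU-only decode time : {cpu_only_seconds:.1f} s")      # ~2.0 s
print(f"observed decode time : {observed_seconds:.1f} s")
print(f"effective throughput : {effective_rate_mb:.1f} MB/s")  # ~38.8 MB/s
# ~2 s of CPU work against ~53 s observed: the run is dominated by disk
# throughput (~39 MB/s, typical HDD territory), not by the decoder itself.
```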
So, either:
1) Am I doing something wrong with my I/O code? Are there any important tricks to implement in order to achieve better I/O throughput?
2) Are the compressors tested with the same settings?
By the same settings I obviously mean the same machine, but also, since the compression is done on a mechanical HDD, the same fragmentation level and the same cache impact.
This second point can be very complex to understand and to control.
To understand fragmentation, maybe re-running the same test on compressors that were tested long ago would help detect any difference. For example, if WSL decoding time was 29.5s back then but is >50s today, then there is a fragmentation effect.
To understand the cache, this is easier: just run the same test several times, and the cache effect should improve the results up to a limit.
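A minimal harness for that repeated-run check might look like this (a sketch; the decompressor command line is hypothetical and must be replaced with the real invocation):

```python
import subprocess
import time

# Hypothetical command line -- substitute the real decompressor call.
CMD = ["decompressor.exe", "d", "archive.bin", "output.dat"]
RUNS = 5

for run in range(1, RUNS + 1):
    start = time.perf_counter()
    subprocess.run(CMD, check=True)
    print(f"run {run}: {time.perf_counter() - start:.1f} s")
# If the OS cache is involved, the first (cold) run should be clearly
# slower than the following (warm) runs, which converge toward a limit.
```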
To implement a stable test using a mechanical HDD, I see no other way than having a dedicated partition for the compression benchmark, which is cleaned (erased or reformatted) before each test. Obviously other methods exist, such as an SSD or a RAM drive. But doing tests on a mechanical HDD has its own merit, since results and rankings can differ on this medium.
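For what it's worth, on Linux there is a lighter-weight alternative to erasing or reformatting the partition: dropping the OS page cache between runs. A sketch, with the caveat that it is Linux-only, needs root, and does not apply to the Windows machine used for the benchmark:

```python
import subprocess

def drop_linux_caches():
    """Start the next run from a cold cache without reformatting.

    Linux-only and requires root: 'sync' flushes dirty pages, then
    writing 3 to /proc/sys/vm/drop_caches drops the page cache plus
    dentries and inodes. On Windows, emptying the dedicated partition
    or rebooting between runs serves the same purpose.
    """
    subprocess.run(["sync"], check=True)
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")
```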
To test my own hypothesis, I did some (light) testing on one of my systems, using WSL & LZ4 on enwik9 for comparison on a mechanical HDD.
WSL decoding time was about 14s.
LZ4 decoding time was about 12s.
So I'm not able to reproduce the same result on this system.
Regards
IMVHO, all compressors that are fast enough should be tested on a RAM disk, using settings that allow them not to swap to the HDD, while slower compressors may be tested on the HDD (using more RAM for the compression itself), since their results don't seriously depend on media speed.
(heh, that's why I bought a box with the maximum RAM possible. With current RAM prices, everyone should do it)
I think they should be tested the way they are supposed to be used. So, for example, a filesystem compressor should be tested on a disk, a network codec should be tested in a transfer, and a memory codec (like the one IBM uses to increase usable RAM size) should be tested in memory. Obviously, some codecs have overlapping uses; those should be tested multiple times.
For a one-test-fits-all comparison... I would do 2 tests, one on a RAM disk and one on a physical disk, choose the better score, and indicate which one was used. That's because I have a suspicion that the additional memory transfers during compression might slow down the (de)compressor. But I know little about it; it may have no effect, and in that case testing everything in RAM seems to provide more information.
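A sketch of that two-run protocol (the RAM-disk and HDD paths, the input file, and the compressor command are all hypothetical placeholders):

```python
import shutil
import subprocess
import time

# Assumed locations -- adjust to the actual mount points.
ENVIRONMENTS = {"ramdisk": "R:/bench", "hdd": "D:/bench"}
SOURCE = "testset.dat"                  # hypothetical benchmark input
CMD = ["compressor.exe", "c"]           # hypothetical compressor call

results = {}
for name, path in ENVIRONMENTS.items():
    staged = f"{path}/testset.dat"
    shutil.copy(SOURCE, staged)         # stage the data on that medium
    start = time.perf_counter()
    subprocess.run(CMD + [staged, staged + ".cmp"], check=True)
    results[name] = time.perf_counter() - start
    print(f"{name}: {results[name]:.1f} s")

best = min(results, key=results.get)
print(f"better score: {best} ({results[best]:.1f} s)")
```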
>I think they should be tested the way they are supposed to be used
This way, the times will depend more on the number of files, their sizes, the model of your HDD, and the fragmentation level, and will give you no information about the real speed.
01 nov. 2011
New update is online from:
http://heartofcomp.altervista.org/MOC/MOC.htm
08 nov. 2011
New update is online from:
http://heartofcomp.altervista.org/MOC/MOC.htm
It seems that for some programs you tweak switches ("cc m1g p2 t0", "m0 b1024 r ca s h28 M255"), but for others not at all. I think it would be best if you made a separate ranking for tweaked results or something.
Hi m^2!
Before starting the tests I naturally run pre-tests to make sure that the memory cache does not end up slowing down the compressor! Option tweaks improve the size... but not the other factors (time, efficiency, etc.)!
Well, "m0 b1024 r ca s h28 M255" is clearly not just a memory optimization. And even where it is, it really looks selective when applied to some, but not to others.
Hi Nania, could you specify what data you are using in the results pages?
The link is:
http://heartofcomp.altervista.org/MOC/info.htm
Yep, I saw that.
But here: http://heartofcomp.altervista.org/MOC/MOCA.htm the original data size is listed as 2.046.886.013 bits(?), but what unit is that?
Bytes, naturally!
I have decided to change the Monster of Compression corpus, reducing it to 1.000.703.703 bytes across 42 file types, as shown in the attached image. I hope to publish the new release within a few days!
Having a smaller amount of data will allow me, I hope, to test more compressors/archivers, and will not give an advantage to one or more programs equipped with specific filters!
New MOC Benchmark release is online!
Link:
http://heartofcomp.altervista.org/MOC/MOCA.htm
MOC Benchmark 2012
New MOC Benchmark release is online!
Added:
- CSC32 Final
- LPAQ8
- CMM4 V.02B
- M1X2
- ZCM V.011
- SBC 0970R2
- BSC 3.1.0
- QUAD 1.12
- GRZIP2
- WINTURTLE 1.6.0
- UHARC 0.6B
- 7zip 9.20 (ppm)
Link:
http://heartofcomp.altervista.org/MOC/MOCA.htm
Unfortunately my old PC has died for good and I have had to change computers (new Core i7 920, 2.67 GHz, 6 GB RAM). Therefore I must redo all the tests. I have decided to create, as always, a new 1 GByte data block for the MOC Benchmark, but this time available online for download (with the links).
The file with the links for the new MOC 2012 has been released for download!
Hi!
New MOC Benchmark R.B - 1st release is online!
Tested on Intel Core i7 920 - 2.67 GHz (4 cores) - 6 GB DDR3 RAM
LINK:
http://heartofcomp.altervista.org/MOC/MOC.htm
New MOC benchmark release is online
Added:
- M1 x2 v.06
- ZCM archiver 0.20B
- LPAQ8
link:
http://heartofcomp.altervista.org/MOC/MOCA.htm
Announcement!
After some years I have decided to close MOC!
However, it does not truly close; it simply becomes WCC (Word Compressor Challenge),
with a test structure of 15 file types (BMP, DAT, DCM, DLL, EXE, ISO, JPG, MOV, PDF, PPM, TIF, TXT, WAV, WMV, XML) totaling 1.024.458.429 bytes, with pages dedicated to the individual file types and, finally, a summary page of the tests!
Regards, Francesco!
You mean, like the Calgary compression challenge?
Are you going to publish the test data so others can continue the benchmark?