No comments!
Enjoy!
I saw a test of version 0.04 compressing a large, already-compressed video file very well, and now I have tested it myself.
It seems that BCM works very well on hardly compressible files, or at least on DivX files.
733,030,400 bytes -> 713,688,280 bytes
That is quite a lot in my view. LZMA had not gained a single MB by the middle of the file (I aborted at that point), and CCM also didn't get this far: 720,320,558 bytes.
EDIT:
It seems to be the BWT. Blizzard got it down to 713,568,995 bytes.
Thanks Ilia!
Great!
Keep up the good work!
Updated results of my test on a video DVD:
2,194,866,176 (2.04 GB) - original ISO
2,012,827,345 (1.87 GB) - WinRAR 3.80b5 (best, all filters)
1,943,719,107 (1.80 GB) - FreeArc 0.50 (-mx -ld=1gb)
1,930,752,565 (1.79 GB) - 7-Zip 4.60 (ultra, 128 MB dict)
1,861,798,652 (1.73 GB) - CCMx 1.30c (model 7)
1,822,283,097 (1.69 GB) - BCM 0.04
1,821,674,227 (1.69 GB) - BCM 0.05 (-b65536)
1,807,878,526 (1.68 GB) - BCM 0.05 (-b131072)
1,792,963,738 (1.66 GB) - BCM 0.05 (-b307200)
1,779,106,230 (1.65 GB) - NanoZip 0.06a (-cc -m64m)
1,756,546,785 (1.63 GB) - NanoZip 0.06a (-cc -m1.5g)
1,756,449,884 (1.63 GB) - NanoZip 0.06a (-cc -m2g)
Very nice!! Especially considering that BCM is about 2-3x faster than nz_cm.
I'm not an expert, but I was wondering... would it be possible to use solid blocks + a dictionary with this algorithm, like 7-Zip's LZMA?
My guess is that it could improve the ratio on big files or on sets of similar files.
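Roughly what I mean, as a sketch (the file list, block limit and helper name are all made up; this is not how BCM or 7-Zip actually implement it): similar files get concatenated into one buffer so a block-wise compressor sees them as a single block.
```cpp
#include <cstddef>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Pack several similar files into one "solid" buffer so a block-based
// compressor (e.g. a BWT stage) processes them as a single block.
std::vector<char> make_solid_block(const std::vector<std::string>& paths,
                                   std::size_t max_block_size)
{
    std::vector<char> block;
    for (const auto& path : paths) {
        std::ifstream in(path, std::ios::binary);
        std::vector<char> data((std::istreambuf_iterator<char>(in)),
                               std::istreambuf_iterator<char>());
        if (block.size() + data.size() > max_block_size)
            break;                        // stop once the block is full
        block.insert(block.end(), data.begin(), data.end());
    }
    return block;                         // compress this buffer as one block
}
```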
Why don't you test the real competitors of BCM? Sure, it is interesting to see programs like CCM and 7z, for example, but it would be better to show results for blizz, bbb and mcomp. I also thought NanoZip has a BWT implementation, but I couldn't find anything like that.
I've got 171,857,720 bytes with bcm -b406991 on enwik9.
Actually, BCM is the strongest of all BWT compressors. Strangely, in some rare cases Blizzard may win, but more often it loses by a huge gap.
A10.jpg
BCM -> 824,920 bytes
BLIZ -> 825,413 bytes
BBB -> 827,834 bytes
NZ -> 832,149 bytes
MCOMP -> 833,195 bytes
Original -> 842,468 bytes
It is very competitive indeed. Though I think nz -cO is still usually better, both in compression and speed... But it's got some clever filters too, so there's room to improve.
Do you plan on transforming it into a full-size archiver with solid archives, recursive directories, etc.?
As a note, as my tests have shown, NanoZip with its optimal methods additionally uses a few algorithms - i.e. not only pure BWT - along with cool filters, of course. And this is the key - we have a bunch of compression methods and filters and we choose the best/right one.
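Just to illustrate that "choose the best/right one" idea - a minimal sketch with a made-up Codec interface, not NanoZip's or BCM's actual code: run each candidate method on the block and keep whichever output is smallest (the chosen method id would also have to be stored so the decoder knows it).
```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Hypothetical codec: a name plus a compress function for one block.
struct Codec {
    std::string name;
    std::function<std::vector<uint8_t>(const std::vector<uint8_t>&)> compress;
};

// Try every candidate on the block and return the smallest result.
std::pair<std::string, std::vector<uint8_t>>
pick_best(const std::vector<uint8_t>& block, const std::vector<Codec>& codecs)
{
    std::pair<std::string, std::vector<uint8_t>> best;
    for (const auto& c : codecs) {
        std::vector<uint8_t> out = c.compress(block);
        if (best.second.empty() || out.size() < best.second.size())
            best = { c.name, std::move(out) };
    }
    return best;   // store best.first in the archive so decompression can match
}
```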
Anyway, I have no such plans for the near future - it's too time consuming and I have almost no spare time currently. I tested a few filters, like delta, with BCM on multimedia data like pictures and audio, and it showed very nice results. So, as to BCM, I do plan a set of good filters, maybe a better CM, and some nice-to-have features...
I asked encode to build a modified version, with VirtualAlloc and different
allocation order, and here're the results (enwik9):
169,396,682 (with -b488282)
169,428,575 (with -b508752)
508752 is the maximum possible, as 508752*4/1024 = 1987M,
and the first setting is near half of the file size.
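A back-of-the-envelope check of that limit (assuming, as the arithmetic above implies, that -b is given in KiB and the sort needs one 4-byte index per input byte - I don't know BCM's exact memory layout):
```cpp
#include <cstdio>

int main() {
    const long long block_kib   = 508752;               // -b508752
    const long long block_bytes = block_kib * 1024;     // block size in bytes
    const long long index_mib   = block_kib * 4 / 1024; // 4-byte indices, in MiB
    std::printf("block = %lld bytes, index array ~ %lld MiB\n",
                block_bytes, index_mib);
    // prints: block = 520962048 bytes, index array ~ 1987 MiB
    return 0;
}
```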
Tested BCM with LZP preprocessing. Well, in some cases, like with "bookstar" and "pht.psd", LZP might help; in others the opposite happens - it seriously hurts compression, as with "world95.txt". Now I understand why Blizzard with its simpler CM model may still beat BCM in some cases - thanks to its LZP preprocessing. This is the key...
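For readers unfamiliar with it, here is roughly what an LZP preprocessing pass looks like - a minimal sketch with made-up parameters (16-bit hash, 4-byte context, 0xFE escape), not Blizzard's or BCM's actual filter: the last position seen after the same context predicts the upcoming bytes, and long correct predictions are replaced by an escape plus a length.
```cpp
#include <cstdint>
#include <cstring>
#include <vector>

std::vector<uint8_t> lzp_encode(const std::vector<uint8_t>& in)
{
    const uint8_t ESC = 0xFE;                  // hypothetical escape byte
    std::vector<uint32_t> table(1 << 16, 0);   // context hash -> last position
    std::vector<uint8_t> out;

    size_t i = 0;
    while (i < in.size()) {
        if (i >= 4) {
            uint32_t ctx;
            std::memcpy(&ctx, &in[i - 4], 4);          // last 4 bytes = context
            uint32_t h = (ctx * 2654435761u) >> 16;    // 16-bit context hash
            size_t p = table[h];
            table[h] = static_cast<uint32_t>(i);
            // Count how many upcoming bytes the predicted position matches.
            size_t len = 0;
            while (p != 0 && i + len < in.size() && len < 255 &&
                   in[p + len] == in[i + len])
                ++len;
            if (len >= 8) {                    // only worthwhile for long matches
                out.push_back(ESC);
                out.push_back(static_cast<uint8_t>(len));
                i += len;
                continue;
            }
        }
        if (in[i] == ESC) {                    // a literal ESC is coded as ESC,0
            out.push_back(ESC);
            out.push_back(0);
        } else {
            out.push_back(in[i]);
        }
        ++i;
    }
    return out;
}
```
The decoder rebuilds the same table from already-decoded data, so no match positions ever need to be transmitted - that is what makes LZP so cheap.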
For me, overall performance matters more than any single file. If you make it a bit faster and keep the compression strength, that will be really good. I'm sure there must be some room for further improvements.
BCM is a very nice compressor! I like it. Congrats to you!
Keep in mind that Bliz is 6M while BCM is only 5M! (Up to 500M blocks - that's the way I like it.)
I wonder what sorting model you have used... something like Dark's?
Do you have an idea of how the time divides between the sorting and the CM
(on text-like files, for example)?
(How much does the CM slow things down?)
If I compare the timings for E9 with the timings from Dark, the CM slows down a bit, but not too much. It seems a good balance to me.
I hope, just like Osman says, you can squeeze out some time without breaking the good ratio.
Ooh, btw, my calendar told me there should be a new BIT around by now... did I miss it?
The next version will have a totally new design, including data fragmentation, deflate recompression, specialized audio/bitmap codecs, filters and a faster LWCX codec... Still working on the fragmentation and deflate recompression. I guess it will be ready by May or June.
I already explained that in my previous thread... The current EXE-filter is bad on big blocks - I need a brand new EXE-filter here.
Also, it is not possible to make BCM's CM part faster at the same complexity. Everything is already optimized as much as possible. Making it less complex but more efficient is probably possible, but I'm not sure. The current CM is a light-weight version of my CM, which means it is already as simple as possible; adding extra stages like SSE2 and dynamic mixing would improve compression even further, at the cost of compression/decompression speed - probably a very noticeable loss...
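For reference, "dynamic mixing" in the generic CM sense could look something like this - a toy two-model logistic mixer, not BCM's actual code: the two bit probabilities are stretched, combined with adaptive weights, and the weights are nudged after every coded bit.
```cpp
#include <cmath>

// Generic logistic mixing of two bit-probability models (illustrative only):
// p = squash(w0*stretch(p0) + w1*stretch(p1)), weights adapted by the error.
struct Mixer2 {
    double w0 = 0.5, w1 = 0.5;           // mixing weights in the logistic domain
    double s0 = 0.0, s1 = 0.0;           // stretched inputs from the last mix

    static double stretch(double p) { return std::log(p / (1.0 - p)); }
    static double squash(double x)  { return 1.0 / (1.0 + std::exp(-x)); }

    // Combine two probabilities of "next bit is 1" into one prediction.
    double mix(double p0, double p1) {
        s0 = stretch(p0);
        s1 = stretch(p1);
        return squash(w0 * s0 + w1 * s1);
    }

    // After the real bit is known, nudge the weights (simple gradient step).
    void update(int bit, double p_mixed, double rate = 0.01) {
        double err = bit - p_mixed;      // positive if a 1 was under-predicted
        w0 += rate * err * s0;
        w1 += rate * err * s1;
    }
};
```
Every extra stage of this kind (more models, SSE passes) adds per-bit work, which is exactly the speed cost mentioned above.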