Okay, a new version featuring MONSTROUS compression speed improvements is here!
Very special thanks to Uwe Herklotz!
Enjoy!
http://quad.sourceforge.net/
Thanks Ilia and Uwe!
Thank you! I dug out a little test I did some time ago and ran it with the new QUAD to show the speed improvements' best case! URL
Very quick test...
Test file: emachines.bmp (800 x 600 x 16 million)
Uncompressed size: 1,440,056 bytes
Compressed size: 6,085 bytes
Compression times:
Quad v1.08 -x > 00:01:22.421
Quad v1.09 -x > 00:00:01.078
Quad v1.08 > 00:00:00.562
Quad v1.09 > 00:00:00.141
Huge improvement on this file with max mode!
That's a very interesting improvement!
I think this version represents a new stage of QUAD's development. From now on, the max mode is actually usable. Thanks to that, QUAD now has room for future compression improvements - adding a new max mode, a slower one but with higher compression (but with the same memory usage and decompression speed, as always).
Today, I tested QUAD 1.09 (version number WITHOUT additional letters like 'a' or 'b'!) against ROLZ2 from mcomp.
Well, the current QUAD is something like a light version of the original ROLZ2 - i.e. QUAD is faster (both compression and decompression) but compresses less. In other words, QUAD 1.09 with max mode is similar to ROLZ2 with fast mode - same compression and speed. However, in normal mode QUAD is notably faster than ROLZ2 with just slightly worse compression.
I guess I must consult with some people about future parsing strategy...
lzpxj-1.2g is tighter and faster than quad-1.09 in most cases (IMHO)
Try to use QUAD with normal (default) mode.
Yes, but at the cost of decompression speed and memory usage. Also note that in normal mode QUAD is faster, providing slightly worse compression compared to the max mode.
QUAD uses ~33 MB for compression/decompression
LZPXJ with 7 uses 300+ MB (with the 9 option it uses 1316 MB)
...end of discussion.
>LZPXJ with 7 uses 300+ MB (with the 9 option it uses 1316 MB)
300 MB+ is not a problem these days...
>...end of discussion
yes
ps
well, you should have written that "quad is speed optimized and with not bad compression"
psps
Have you tested my installer (quad-1.09 or lzpxj-1.2g)?
Maybe it would be better to put "shortcuts" in the "Send to" menu instead of on the Desktop?
I was targeting the ordinary user - not everyone wants to spend 500 MB of RAM on decompression. For many people, fast compressors mean something like LZOP. And in general, since PAQ compressors appeared, any practical program could be described as "fast with poor compression". So QUAD is not for the hardcore crowd. Things like LZPXJ and TC are mainly for experiments, for trying out new ideas - they are not suitable for real use, because each new version is incompatible with the old ones. That is, if you pack all your files with LZPXJ 1.2g, you won't be able to unpack them with 1.2h. Also, there are no guarantees of being bug-free. With QUAD everything is different - the compression format is settled, the decoder is standardized, and now even if a new version compresses better, all versions remain fully compatible with each other. After all, while working on QUAD I knew what I was doing.
I tried the installer, but somehow the setup idea didn't grab me - I'd rather have full-fledged archivers support QUAD compression - for example, like PIM's decompression of QUAD files, with a progress bar and so on. By the way, QUAD may soon be added somewhere - at least there are already talks about it...
P.S.
Sorry for my Russian - I got a C in it at school. In English, though, straight A's!
>I tried the installer, but somehow the setup idea didn't grab me
oh well, then!
I won't post installers any more.
and in general I'm switching to lzpxj*
>you won't be able to unpack them with 1.2h.
no problem - I'll build 1.2h for the recipient
> Also, there are no guarantees of being bug-free.
cmp/sha-1 gives a certain integrity guarantee - I just always verify by decompressing
ps
please delete my posts with the tests
Posted: 16 Mar 2007 18:03
Posted: 16 Mar 2007 18:04
thanks for the work!
ps
lzpxj-1.2g (and probably quad) can be successfully compiled on QNX 6.3.0 SP3 (x86 target) with default compiler flags
psps
>I was targeting the ordinary user - not everyone wants to spend 500 MB of RAM on decompression.
they won't even notice
Encode, I wanted to compliment you indeed - it is fantastic to have a compressor so fast and also so powerful. Development is moving very quickly; it could soon be an alternative to WinRAR.
Well, currently I'm trying to increase compression at any cost. I have already tried different parsing schemes: Storer-Szymanski, something like optimal LZH (collecting all matches within a defined block and making a backwards pass), a brute-force scheme which works at 1 KB/sec, and the current scheme with three-to-four-step lookahead - and the current Flexible Parsing is the fastest one and the only scheme which has given the best results so far.
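For illustration, here is a toy sketch of the greedy-versus-flexible decision. This is not QUAD's parser: the match finder is brute force, the lookahead is a single step, and the 9-bit literal / 17-bit match costs are invented numbers chosen only to make the difference measurable.

```cpp
#include <cassert>
#include <string>

// Longest match for s[pos..] against the already-processed prefix s[0..pos-1]
// (overlapping matches allowed, as in LZ). Brute force, for illustration only.
static int longestMatch(const std::string &s, int pos) {
    int best = 0;
    for (int src = 0; src < pos; src++) {
        int len = 0;
        while (pos + len < (int)s.size() && s[src + len] == s[pos + len]) len++;
        if (len > best) best = len;
    }
    return best;
}

// Returns the total cost of the parse under the toy cost model.
// With flexible = true, a match is deferred whenever emitting one literal
// first would expose a clearly longer match at the next position.
int parse(const std::string &s, bool flexible) {
    const int MIN_MATCH = 2, LIT_COST = 9, MATCH_COST = 17;
    int pos = 0, cost = 0;
    while (pos < (int)s.size()) {
        int here = longestMatch(s, pos);
        bool takeMatch = here >= MIN_MATCH;
        if (takeMatch && flexible && pos + 1 < (int)s.size()) {
            // One-step lookahead: the "flexible" decision.
            if (longestMatch(s, pos + 1) > here + 1) takeMatch = false;
        }
        if (takeMatch) { pos += here; cost += MATCH_COST; }
        else           { pos += 1;    cost += LIT_COST;   }
    }
    return cost;
}
```

On a string like "abbcdefabcdef", the greedy parse grabs a short 2-byte match and then needs a second match, while the flexible parse spends one extra literal to reach a single 5-byte match, ending up cheaper under this cost model.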
Note that I thought 1.07 had the best compression possible, but Uwe gave me some advice and the compression became slightly higher.
However, at the moment it looks like no one knows of a further improvement. I asked Igor Pavlov (the #1 man in LZ algorithms) - he said that he doesn't know much about LZ+PPM hybrids with optimal parsing; Malcolm Taylor, just like everyone on comp.compression, kept silent.
Or is the current scheme perhaps the best one possible? I know that only a few fellas on the planet can answer this question - it's too complex...
Don't worry. Although the fixed decompression scheme puts strong restrictions on changes, I'm quite sure improvements are possible. Still some ideas to check - will come back later...
Thank you Uwe!
Hello Uwe,
Are you still working on UHARC?
Its generic multimedia detection scheme is still unrivaled.
I bet you could make good progress if you combined it with LZMA instead of ALZ.
See:
http://uclc.info/scourge_compression_test.htm
and
http://uclc.info/half-life_mod_compression_test.htm
Still beats 7-Zip
Hi Encode!
I looked through the source code and I found some things which can be improved in order to increase speed.
1)
...
for (int t = TABSIZE - 1; t > 0; t--) // update the table
    tab[x][t] = tab[x][t - 1];
tab[x][0] = i;
...
This update always takes TABSIZE operations. You can reduce this to 1 by using a ring buffer.
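To illustrate the difference, here is a minimal self-contained sketch (the names and TABSIZE value are illustrative, not QUAD's actual code): the shift update touches every slot on each insertion, while the ring buffer just advances a position counter and masks it, so both writes and logical-index reads stay O(1).

```cpp
#include <cassert>

const int TABSIZE = 16;  // must be a power of two for the mask trick; QUAD's value may differ

// Shift-based update: O(TABSIZE) per insertion.
void push_shift(int *tab, int v) {
    for (int t = TABSIZE - 1; t > 0; t--)
        tab[t] = tab[t - 1];
    tab[0] = v;
}

// Ring-buffer update: O(1) per insertion. "pos" tracks the logical head;
// reads translate a logical index into a physical slot with a mask.
struct Ring {
    int tab[TABSIZE] = {0};
    unsigned pos = 0;
    void push(int v) { tab[++pos & (TABSIZE - 1)] = v; }
    int get(int i) const { return tab[(pos - i) & (TABSIZE - 1)]; }  // i = 0 is newest
};
```

Both orderings agree: after pushing the same values, logical index i in the ring buffer returns the same element as slot i of the shifted table.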
2)
The TCounter/TPPM class can be improved somewhat, too.
Encode(...) and Decode(...) are both O(n), where n is the alphabet size. If you modify the TCounter class to maintain cumulative frequencies, both will be O(log(n)) (if you use some kind of binary search). Thus, Add(...) and AddX(...) will be O(log(n)) too. At the moment they are O(1) - but it's always better to have a couple of O(log(n)) operations than O(1) plus O(n), if n is large enough. Still, Set(...) is O(n), but I don't see a quick fix there.
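As a sketch of what such a cumulative-frequency counter could look like (this is a generic Fenwick/binary-indexed tree, not QUAD's TCounter; all names are illustrative): add() updates one symbol's frequency in O(log n), cumul() returns a prefix sum in O(log n), and find() does the decoder's symbol lookup by binary descent, also in O(log n).

```cpp
#include <cassert>
#include <vector>

struct FreqTree {
    std::vector<int> t;  // 1-based Fenwick array of partial sums
    int n;               // alphabet size
    FreqTree(int n) : t(n + 1, 0), n(n) {}

    // freq[sym] += d, updating O(log n) partial sums.
    void add(int sym, int d) {
        for (int i = sym + 1; i <= n; i += i & -i) t[i] += d;
    }

    // Sum of freq[0..sym-1] in O(log n).
    int cumul(int sym) const {
        int s = 0;
        for (int i = sym; i > 0; i -= i & -i) s += t[i];
        return s;
    }

    // Decoder lookup: largest sym with cumul(sym) <= target, by binary descent.
    int find(int target) const {
        int pos = 0, pw = 1;
        while (pw * 2 <= n) pw *= 2;      // highest power of two <= n
        for (; pw > 0; pw >>= 1) {
            if (pos + pw <= n && t[pos + pw] <= target) {
                pos += pw;
                target -= t[pos];
            }
        }
        return pos;  // cumul(pos) <= original target < cumul(pos + 1)
    }
};
```

For example, with frequencies {3, 1, 4} for symbols 0..2, a cumulative count of 3 falls in symbol 1's range and a count of 7 in symbol 2's.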
I think this will give a very nice speed boost for compression and especially for decompression.
I tried many variants, including a circular table (by the way, such a thing is used in the original ROLZ, but there we have 2048 to ~64000 offsets - a different story) and different schemes for frequency storage - in the end I came up with what you see. Again, currently all my attention is on parsing...
Thanks anyway!
By the way, I'll try to implement an improved tab update (circular table), and post results here.
I think I could change only the decoder for the expected faster decompression. Like I said, due to the specific parsing routine, such changes in the encoder are unwelcome.
To be continued...
I just did this for the decompression. It's one more line in the code and some small changes. I'll send you the code.
Result on tarred SFC:
v1.09 takes 12.20 seconds / 9.38 seconds with ring buffer
And we've got an additional 20+% decompression speed. Symbol coder changes should give much more.
Well, now the table update looks like this:
tab[x][(n[x] = (n[x] + 1) & (TABSIZE - 1))] = j;
- i.e. only one line of code.
This works faster. The testing results will be posted here later...
- a compression geek like me.
(To Criss - I was first)
Thank you Criss! Your version is better!
tab[x][--n[x] & (TABSIZE - 1)] = j;
However, I guess you forgot to add the memset for "tab_pos".
Also, it's better to use unsigned char instead of int - it's faster. Anyway, with all the improvements the new QUAD shows a serious decompression speed boost.
Awesome!
This isn't needed because it doesn't matter whether the ring buffer starts at 0 or anywhere else. But since the memset is only done once, it doesn't matter anyway.
Also, naturally, we can swap the + and - to get the following code:
unsigned char n[X];
memset(&n, 0, sizeof(n));
// ...
int p = tab[x][(n[x] - ppm.DecodeIndex()) & (TABSIZE - 1)];
// ...
tab[x][++n[x] & (TABSIZE - 1)] = j;
Well, actually, I don't really understand what you suggest for the PPM encoder. AFAIK, there is no room for improvement (keeping the same functionality and compatibility).
You're right about the memset! I tried initializing "n" (or tab_pos) with random values - everything works fine. So it really doesn't matter...