-
Hello,
First, please forgive me if my questions are useless or have already been discussed.
I recently discovered this very interesting forum and its wonderful programs and hackers. As my topic says, I'm just a user, perhaps a tester, of these tools, and I find their performance astonishing, especially CCM for compression and QuickLZ/LZPM for speed.
My (simple) questions are: can the algorithms behind all these programs be adapted to multiple threads/cores? If so, can we expect big improvements from multi-threading in the near future?
Why do I ask? Because here at work we recently received a new computer dedicated to compiling our software. It's quite a monster, even for our needs: a quad Opteron 8220SE with 8 GB of RAM, running RHEL4. Using pbzip2 with default settings, compressing enwik8 takes less than 4 seconds, and enwik9 less than 50 seconds!
I don't think that QuickLZ, THOR, LZPM, etc. can go much faster, since they're limited by I/O, but imagine CCM or QUAD running at today's QuickLZ speed, for example... It seems the future is massively multi-core oriented.
I don't want to bother you any longer. Have a nice day!
AiZ
Edit: words missing.
-
It may be rather easily accomplished by splitting the data into several chunks according to the number of CPUs and compressing them independently; to be exact, that's what pbzip2 does. Its algorithm can't process data in chunks larger than 900 KB anyway (a limit imposed by backward compatibility), while other compression algorithms may pay some compression penalty, but not much. QUAD has a 16 MB block; I already suggested to encode that he make an MT variant of it, which would compress faster than it decompresses.
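For illustration only, here is a minimal sketch of that chunk-splitting idea in Python, using zlib as a stand-in codec (pbzip2, QUAD, etc. use their own algorithms); the 16 MB chunk size and the length-prefix framing are my own assumptions, not anyone's actual on-disk format:

    # Sketch: parallel compression by independent chunks.
    # zlib is only a placeholder codec; CPython's zlib releases the
    # GIL during (de)compression, so a thread pool can use several cores.
    import os
    import struct
    import zlib
    from concurrent.futures import ThreadPoolExecutor

    CHUNK_SIZE = 16 * 1024 * 1024  # assumed QUAD-like 16 MB blocks

    def compress_parallel(data: bytes, workers: int = os.cpu_count()) -> bytes:
        # Split input into fixed-size chunks, one compression job per chunk.
        chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            compressed = pool.map(zlib.compress, chunks)
        # Prefix each block with its length so decompression can find boundaries.
        return b"".join(struct.pack("<I", len(c)) + c for c in compressed)

    def decompress_parallel(blob: bytes, workers: int = os.cpu_count()) -> bytes:
        # Walk the length-prefixed blocks, then decompress them in parallel.
        blocks, pos = [], 0
        while pos < len(blob):
            (n,) = struct.unpack_from("<I", blob, pos)
            blocks.append(blob[pos + 4:pos + 4 + n])
            pos += 4 + n
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return b"".join(pool.map(zlib.decompress, blocks))

Because each chunk is compressed independently, matches can't cross chunk boundaries, which is exactly the small compression penalty mentioned above.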