Thread: Compression OF WinRar WinZip

  1. #1
    Member A_Better_Vice's Avatar
    Join Date
    Mar 2020
    Location
    Toronto
    Posts
    5
    Thanks
    0
    Thanked 0 Times in 0 Posts

    Compression OF WinRar WinZip

    Currently compressing WinRar, WinZip, and similar archives to ~30% smaller, at ~1 KB per 4 seconds or so on a single 2.2 GHz thread. I can of course increase the thread count to boost throughput.

    Does it make sense to build a CUDA version and stack a server with 6 GPU cards?
    Does anyone have experience with FPGA programming?

    * Interestingly, I am also able to compress the already-compressed files from the Hutter Prize recipients ~30% smaller.
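
    For scale, here is a quick back-of-the-envelope check of what that claimed rate implies (assuming "1kb" means 1 KiB; enwik8, which comes up later in the thread, is exactly 100,000,000 bytes):

```python
# Back-of-the-envelope check of the claimed rate: ~1 KiB every 4 seconds
# on one 2.2 GHz thread (assumption: "1kb" here means 1 KiB = 1024 bytes).
rate_bps = 1024 / 4              # bytes per second, i.e. 256 B/s
enwik8_bytes = 100_000_000       # enwik8 is exactly 100,000,000 bytes
seconds = enwik8_bytes / rate_bps
days = seconds / 86_400
print(f"{days:.1f} days")        # roughly 4.5 days for a single pass
```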

  2. #2
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,843
    Thanks
    288
    Thanked 1,245 Times in 698 Posts
    For .zip files there's precomp: https://github.com/schnaader/precomp-cpp

    For .rar something similar could be written too, so your statement is fairly realistic, in fact.

    For CUDA and FPGA you can try Intel DevCloud; they provide trial access: https://software.intel.com/en-us/dev...oneapi/sign-up

    And the main problem with the Hutter Prize is that you're also required to decompress the archive back to the original file.

    Don't be surprised when this thread is moved to https://encode.su/forums/19-Random-Compression

  3. #3
    Member A_Better_Vice's Avatar
    Join Date
    Mar 2020
    Location
    Toronto
    Posts
    5
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Oh, I am able to compress FLAC and FL4 files ~50% smaller, which is not random data (re: this thread maybe being moved to Random Compression). This is not my site, so I am happy just to be here. Move this thread anywhere.

    The only problem I have with the Hutter Prize is having to release the code at this point. Decompressing to the original file is not an issue .... I think that is part of the entire process of compression/decompression, and yes, it is lossless as well.

    I am currently making optimizations. I should be able to get it down to processing a 1 KB chunk in ~2 seconds in a single-threaded 2.2 GHz Windows environment.

    I used to program in MASM, and I will give that a go to increase speed further. I have not used MASM in about 20 years. I assume I can get sub-1-second performance per chunk, single-threaded, from MASM.

    I ran command-line precomp with default settings, with -intense, and with other optimizations, and was able to chop 30% off the resulting .pcf files at ~3 seconds per 1 KB chunk.

    I did a few optimizations and got it down to ~3 seconds per 1 KB in the last 24 hours.

    Specifically, I used enwik8.rar and enwik8.zip and ran precomp048dev.exe with default settings and with -intense to create the .pcf files for compression.

    With precomp I tried various other settings to optimize, and ran my software to compress the result ~30% >>~30% LESS, NOT DOWN TO ~30%<< at ~3 seconds per 1 KB chunk still.

    Thank you for letting me know about precomp, so that I am able to benchmark my software against it.

    Cheers !

    *** The <<>> note was added to the post afterwards by me, for clarity, on March 19th around 1 pm EST.
    Last edited by A_Better_Vice; 19th March 2020 at 20:19.

  4. #4
    Member
    Join Date
    Sep 2007
    Location
    Denmark
    Posts
    916
    Thanks
    57
    Thanked 113 Times in 90 Posts
    How is decompression coming along?
    I can compress anything to 1 bit if I don't care about decompression.
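
    The joke has a rigorous core: the counting (pigeonhole) argument. A minimal sketch:

```python
# Pigeonhole argument: there are 2**n distinct n-bit files, but only
# 2**n - 1 bit strings of length strictly less than n (summing 2**k
# over k = 0..n-1). So no lossless compressor can shrink every input;
# some inputs must map to outputs of equal or greater length.
n = 16
n_bit_inputs = 2 ** n
shorter_outputs = sum(2 ** k for k in range(n))   # = 2**n - 1
print(n_bit_inputs, shorter_outputs)
```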

  5. #5
    Member A_Better_Vice's Avatar
    Join Date
    Mar 2020
    Location
    Toronto
    Posts
    5
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Hmmmm, well, I can compress it down to 0 bits if it can be lossY.

    Decompression is fine and robust, and much faster than compression.
    I installed a CUDA SDK and am looking into inline assembly with C++.
    Still tweaking the current code to get it to ~1 second per 1 KB; then I will make it multi-threaded,
    and then move on to the inline C++ and CUDA work.
    The CUDA version will rock, I am sure, and then I can rest and count my toilet paper rolls to make sure I am not running out.
    Last edited by A_Better_Vice; 20th March 2020 at 00:39.

  6. #6
    Member A_Better_Vice's Avatar
    Join Date
    Mar 2020
    Location
    Toronto
    Posts
    5
    Thanks
    0
    Thanked 0 Times in 0 Posts
    I have created a C++ version and continued to tweak. I also installed Linux, as I expect my code to run faster on Linux.
    Is there any software out there that compresses precomp04.exe-generated files ~30% smaller or more?
    I have looked and have not found any. Please do share if anyone is familiar with anything else.
    I am not sure, at 51, whether I have another 40 years in my life to improve this immensely more.

  7. #7
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,843
    Thanks
    288
    Thanked 1,245 Times in 698 Posts
    Precomp is commonly used with its internal compression disabled for exactly this reason: its integrated compression is not very good.

  8. #8
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    611
    Thanks
    246
    Thanked 240 Times in 119 Posts
    Have you tried compressing the output files of your program again? Perhaps you can make them 30% smaller, too.
    http://schnaader.info
    Damn kids. They're all alike.
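
    The question above is the standard sanity check: a good compressor's output should look close to random, so a second pass normally gains almost nothing. A quick illustration with zlib (zlib stands in here for any general-purpose compressor):

```python
import random
import zlib

random.seed(0)
# ~1 MB of text-like, statistically redundant (hence compressible) data.
words = [b"the", b"quick", b"brown", b"fox", b"jumps", b"compression"]
data = b" ".join(random.choice(words) for _ in range(150_000))

sizes = [len(data)]
for _ in range(3):
    data = zlib.compress(data, 9)
    sizes.append(len(data))

# The first pass shrinks the data a lot; later passes operate on
# near-random bytes and stop shrinking (they may even grow slightly).
print(sizes)
```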

  9. #9
    Member A_Better_Vice's Avatar
    Join Date
    Mar 2020
    Location
    Toronto
    Posts
    5
    Thanks
    0
    Thanked 0 Times in 0 Posts
    I have not attempted to re-compress the output files, as I do not expect any serious gains.
    I can give it a try for amusement; maybe it will hit 1% smaller, if not negative gains.
    I did, however, compress encrypted files from AxCrypt, again ~30% smaller.
    Only recently, in the last few weeks or so, have I had a serious gain in speed: it went from originally weeks, to days, to hours, to seconds, using various techniques.
    In looking through my code I see that I should still easily be able to chop off some overhead and make it run even faster by removing calls that do not contribute anything. There are calls all over the place, since I was trying so many different techniques until I landed on the much more productive technique in terms of speed.
    Within a few days I should get it to ~1 KB per second (vs. the current ~3 seconds per 1 KB) in a non-parallel, non-Linux, non-CUDA environment, not from genius re-tweaking or a gen-two design, but just from cleaning the crap up and gutting calls that have no positive impact on the technique that works.
    Off to bed soon.
    My best work is from 11 pm to 4 am, but at around 5 am I can start making serious mistakes ...
    I will be posting some specific decompression speeds soon as well, and official benchmarks, I guess more than likely on my website.
    I sent out some feelers to some companies, and a few only accept emails from private domains, not coldmail or warmmail or even hotmail... So it is time to fire up my website.
    So much to do, but happy times are here again.

    Hope all is well with everyone.

    How is Ukraine? My mother-in-law is from there.
    Last edited by A_Better_Vice; 31st March 2020 at 11:42. Reason: wrote 4pm vs 4am

