
Thread: Centmin Mod compression benchmarks

  1. #1
    Member SolidComp's Avatar
    Join Date
    Jun 2015
    Location
    USA
    Posts
    353
    Thanks
    131
    Thanked 54 Times in 38 Posts

    Centmin Mod compression benchmarks

Hi all – The Centmin Mod project ran a very interesting set of benchmarks that reports memory use during compression and decompression, covering brotli, zstd, pigz, gzip, and more.

He also reports CPU use, but it looks like it's 99–100% for all codecs, so that stat must be erroneous.

    I wish every compression benchmark reported memory and CPU...

    (Centmin Mod is a project that bundles an optimized nginx build with PHP, MariaDB, and a few other pieces.)
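
    For anyone who wants to collect those stats themselves, GNU time already reports peak memory and CPU per run. A rough sketch (assuming /usr/bin/time is GNU time rather than the shell builtin, and "testfile" is a placeholder for whatever you compress):

    Code:
    # GNU time writes its report to stderr; -v includes CPU % and peak RSS
    /usr/bin/time -v zstd -19 -k testfile     2>&1 | grep -E 'Elapsed|Percent of CPU|Maximum resident'
    /usr/bin/time -v brotli -q 11 -k testfile 2>&1 | grep -E 'Elapsed|Percent of CPU|Maximum resident'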

  2. #2
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    889
    Thanks
    483
    Thanked 279 Times in 119 Posts

  3. Thanks (3):

Jarek (2nd May 2020), skal (1st May 2020), SolidComp (2nd May 2020)

  4. #3
    Member SolidComp's Avatar
    Join Date
    Jun 2015
    Location
    USA
    Posts
    353
    Thanks
    131
    Thanked 54 Times in 38 Posts
    Oh that looks nice. Zstd is pulling away from brotli for some reason.

  5. #4
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    803
    Thanks
    244
    Thanked 255 Times in 159 Posts
Let's put the plot here:

    [plot attachment]
  6. #5
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    615
    Thanks
    260
    Thanked 242 Times in 121 Posts
    Some nitpicking from my side:

- In the tables, the column "compress size (MB)" is actually in MiB, e.g. the xz level 9 result is around 46.47 MiB, i.e. ~46.47 * 1024 * 1024 ≈ 48.7xx.xxx bytes.
    - Automatically determined block sizes destroy multi-threading at higher levels for some algorithms (visible in the "compress cpu %" table column). This can be reproduced/improved by explicitly setting block sizes (see the xz results below, and the block-size sketch after them). So there is some comparing of apples to oranges here (e.g. plzip vs. xz at the highest level reach a similar ratio, but at 200% vs. 100% CPU), but at least it's clear from the tables what happens - and it's actually a good test of how well compressors perform when used without any extra command line parameters.

    Results from my quick tests (to verify results and compare with Precomp):

    Code:
    Command line                        compressed size          compression time  decompression time  notes
    precomp                             46.783.211 (44.62 MiB)   1 min 36 s        6 s                 1
    precomp -t+                         48.920.911 (46.65 MiB)   1 min 31 s        5 s                 1, 2
    precomp -lm4000                     46.630.387 (44.47 MiB)   2 min 49 s        6 s                 3
    xz -9                               48.765.608 (46.51 MiB)   2 min 35 s        4 s
    xz -9 -T 0                          48.812.012 (46.55 MiB)   2 min 34 s        4 s
    xz -9 -T 0 --block-size=100663296   48.911.256 (46.65 MiB)   1 min 24 s        4 s                 4
    
    notes:
    test machine: cloud server, 2 vCPU, 2 threads, Skylake, ~2.1 GHz, 4 GB RAM
    1) automatically determined block size: 96 MiB on this machine, can be different on others
    2) this is pure LZMA2 without recompression, should be very similar to multithreaded xz -9
    3) allowing more memory usage improves the compression ratio, but leads to a 192 MiB block size, so no multithreading boost anymore
    4) 100.663.296 bytes = 96 MiB, to make it comparable to the first Precomp result
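    By the way, the explicit block size doesn't have to be guessed: one block per hardware thread is enough to keep all cores busy. A rough sketch of that calculation (file name is a placeholder; assumes GNU coreutils for stat/nproc and a threaded xz build):

    Code:
    FILE=testfile                                 # placeholder input file
    THREADS=$(nproc)                              # 2 on the cloud server above
    SIZE=$(stat -c%s "$FILE")                     # input size in bytes
    BLOCK=$(( (SIZE + THREADS - 1) / THREADS ))   # ceil(size / threads) -> one block per thread
    xz -9 -T"$THREADS" --block-size="$BLOCK" -k "$FILE"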
    Last edited by schnaader; 2nd May 2020 at 14:27.
    http://schnaader.info
    Damn kids. They're all alike.

  7. Thanks:

    SolidComp (2nd May 2020)

  8. #6
    Member SolidComp's Avatar
    Join Date
    Jun 2015
    Location
    USA
    Posts
    353
    Thanks
    131
    Thanked 54 Times in 38 Posts
    @schnaader That's a good cloud instance. Where do you get it and how much does it cost?

  9. #7
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,572
    Thanks
    780
    Thanked 687 Times in 372 Posts
    Quote Originally Posted by SolidComp View Post
    @schnaader That's a good cloud instance. Where do you get it and how much does it cost?
    something like that is 5 euro/month at https://www.hetzner.com/cloud

  10. Thanks:

    schnaader (3rd May 2020)

  11. #8
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    615
    Thanks
    260
    Thanked 242 Times in 121 Posts
    Quote Originally Posted by Bulat Ziganshin View Post
    something like that is 5 euro/month at https://www.hetzner.com/cloud
And that's exactly the one I use.
    http://schnaader.info
    Damn kids. They're all alike.

  12. #9
    Member SolidComp's Avatar
    Join Date
    Jun 2015
    Location
    USA
    Posts
    353
    Thanks
    131
    Thanked 54 Times in 38 Posts
    Quote Originally Posted by Bulat Ziganshin View Post
    something like that is 5 euro/month at https://www.hetzner.com/cloud
Interesting. That would cost $20–$40 per month at Vultr or DigitalOcean in the US. And it might not be Skylake, but Haswell or something instead.
