
Thread: Lossless Photo Compression Benchmark

  1. #31
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    700
    Thanks
    210
    Thanked 267 Times in 157 Posts
    Quote Originally Posted by Alexander Rhatushnyak View Post
    1'194'936'668 bytes -- webp -lossless -q 100 -m 6
    1'222'290'070 bytes -- webp -lossless -q 100
    1'128'702'803 bytes -- flif -e -N
    1'283'324'851 bytes -- flif -e -N --effort=0
    1'126'954'659 bytes -- flif -e -N --effort=100
    So for this data (photographs?) WebP lossless is ~5.7 % less dense than FLIF. The latest WebP lossless 0.6.1 seems about 3.0 % denser than the previously tested 0.4.3 on this corpus.

    Did you measure decoding speeds?

    How do they compare with your own closed-source work?
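    For reference, the ~5.7 % figure can be reproduced from the byte counts quoted above; a minimal arithmetic sketch, assuming "less dense" means the best FLIF result is that much smaller relative to the WebP -m 6 result:
    Code:
    # Byte counts quoted above (LPCB corpus)
    webp_m6   = 1_194_936_668   # webp -lossless -q 100 -m 6
    flif_e100 = 1_126_954_659   # flif -e -N --effort=100

    # FLIF's output is ~5.7 % smaller than WebP's on this corpus
    print(100 * (webp_m6 - flif_e100) / webp_m6)   # ~5.69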

  2. Thanks:

    Stephan Busch (27th December 2017)

  3. #32
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    237
    Thanks
    39
    Thanked 92 Times in 48 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    So for this data (photographs?)
    Yes

    Quote Originally Posted by Jyrki Alakuijala View Post
    Did you measure decoding speeds? How do they compare with your own closed-source work?
    Decompression time is approximately 35 seconds on a faster desktop, but a fraction of this time is due to HDD i/o, so I can't tell whether dwebp would be slower or faster than QLIC2 on the original LPCB desktop.

    cwebp and dwebp from libwebp-0.6.1-windows-x86.zip fail to run on 32-bit Windows XP: "dwebp.exe is not a valid Win32 application". Could you please provide WinXP-friendly 32-bit executables?

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  4. #33
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    700
    Thanks
    210
    Thanked 267 Times in 157 Posts
    Quote Originally Posted by Alexander Rhatushnyak View Post
    cwebp and dwebp from libwebp-0.6.1-windows-x86.zip fail to run on 32-bit Windows XP: "dwebp.exe is not a valid Win32 application". Could you please provide WinXP-friendly 32-bit executables?
    The most likely way to change this is to file a bug to the WebP project. https://bugs.chromium.org/p/webp/issues/list

    You may need to be fairly persuasive to explain why an OS that was abandoned by its manufacturer in 2009 should still be supported.

    (((I wholeheartedly recommend moving to Linux -- then a company cannot force you to move to .NET/Vista/Windows 8.0/something else that cannot possibly work well.)))

  5. #34
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    237
    Thanks
    39
    Thanked 92 Times in 48 Posts
    Installed 64-bit Ubuntu on the original LPCB desktop.

    Decoding time for images compressed with -lossless -q 100:
    59.27 seconds with 32-bit dwebp, and 35.79 seconds with 64-bit dwebp.

    For images compressed with -lossless -q 100 -m 6:
    56.78 seconds with 32-bit dwebp, and 39.27 seconds with 64-bit.

    Images were compressed with 32-bit cwebp (compressed sizes are a bit different, e.g. 1'222'302'550 rather than 1'222'290'070).
    Attached thumbnail: Screenshot_from_2017-12-27_15-21-10.png (47.8 KB)

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  6. #35
    Member
    Join Date
    Mar 2011
    Location
    USA
    Posts
    226
    Thanks
    108
    Thanked 106 Times in 65 Posts
    @Alex, will you update the LPCB with the results from paq8px and Emma? I have started running cmix on the LPCB. I am running it on 8 Compute Engine computers simultaneously so that it won't take months to run. Will it be a valid entry for LPCB if it is run on different computers? I am keeping track of the compression/decompression time, but don't really know what hardware the 8 computers have (they are much slower than my home machine). Here are some preliminary results:

    olympus_xz1_01: 7920156
    olympus_xz1_02: 7821592
    olympus_xz1_03: 8713748
    olympus_xz1_14: 11249222
    olympus_xz1_15: 11221854
    olympus_xz1_16: 7364246
    sony_a55_01: 11405379
    sony_a55_02: 11718961
    fujifilm_finepix_x100_01: 9122142
    fujifilm_finepix_x100_02: 9012798
    canon_eos_1100d_01: 9186098
    canon_eos_1100d_02: 9932902
    PIA12811: 221622
    PIA12813: 416880
    PIA13757: 9826864
    STA13452: 11138106
    STA13453: 8658098

  7. #36
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    237
    Thanks
    39
    Thanked 92 Times in 48 Posts
    Updated with the results from paq8px_v125 yesterday; results for WebP and FLIF will follow soon.
    Quote Originally Posted by byronknoll View Post
    Will it be a valid entry for LPCB if it is run on different computers? I am keeping track of the compression/decompression time, but don't really know what hardware the 8 computers have
    Yes, it will be a valid entry. It's no problem if the hardware is unknown; I'll just put "a couple of months" into the Ctime and Dtime columns.

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  8. #37
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    237
    Thanks
    39
    Thanked 92 Times in 48 Posts
    Updated with the results from paq8px_v127.

    If we losslessly compress the only image in the WebP 0.4.4 package:
    49167 test_ref.ppm
    23889 test_ref.png
    17365 test_ref.flif-i
    16435 test_ref.flif-n-a
    16435 test_ref.flif-n
    15054 test_ref.webp044-m6
    14958 test_ref.webp044-m4
    14422 test_ref.flif-n-b
    12860 test_ref.bmf
    12560 test_ref.gralic111d
    14996 webp 0.6.1 -lossless -q 100 -m 6
    11948 FLIF -e -N
    11734 FLIF -e -N --effort=100
    11159 paq8im -7
    10615 paq8px_v127 -8

    This image is still the only test image in the libwebp-0.6.1 package. Guess it will soon be removed: the gap is getting a bit too big...

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  9. Thanks:

    Jyrki Alakuijala (2nd January 2018)

  10. #38
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    700
    Thanks
    210
    Thanked 267 Times in 157 Posts
    Quote Originally Posted by Alexander Rhatushnyak View Post
    This image is still the only test image in the libwebp-0.6.1 package.
    It looks to me like a lossy functionality-test image, i.e., if you can decode that image there is a non-zero likelihood that you managed to compile/install/whatever correctly.

    I optimized WebP's design against a random set of 1000 PNG images with a minimum of 2 % surface transparency, taken from an internet crawl, and used a similar additional set of 12'000 PNG images as a verification set. Still today, if I repeat this with another 1000 PNGs from the internet, WebP lossless beats FLIF by about 2 % in the 16-256 kB compressed-size category. On smaller and larger images FLIF wins; on very big lossless images (in the multi-megabyte range) its win can be around 5 %. The flipside is FLIF's 10-30x slower decoding, possibly seconds for big images. (A rough sketch of such a size-bucket comparison follows at the end of this post.)

    If I need to characterize the performance with a very small number of images, I'd use one for web graphics (like the one below):

    https://www.gstatic.com/webp/gallery2/6.png -- I did the composition myself :-D

    and another for photographs:

    https://www.gstatic.com/webp/gallery2/5.png -- Fire breathing "Jaipur Maharaja Brass Band" Chassepierre Belgium, Author: Luc Viatour, Photo licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license. Author website is https://lucnix.be/

    (I didn't test how these images perform with FLIF and other compressors, I hope this is not going to end up very embarrassing :-P)

    The photograph case seems less important for lossless compression on the internet. Professionally developed websites (like Instagram, Flickr, Pinterest, Amazon, Facebook, etc.) don't use lossless compression for photographs; it is mostly hobbyist websites that either don't understand what they are doing or don't have the time to learn about it and do things correctly.
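    A rough sketch of such a size-bucket comparison, for illustration only (not the actual tooling used; the bucket thresholds, temporary paths, and the plain cwebp/flif command lines are assumptions):
    Code:
    import os, subprocess, tempfile
    from collections import defaultdict

    def compressed_size(cmd, out_path):
        """Run one compression command and return the exact output size in bytes."""
        subprocess.run(cmd, check=True)
        return os.path.getsize(out_path)

    def bucket_comparison(png_paths):
        """Mean FLIF/WebP size ratio per WebP-compressed-size bucket."""
        tmp = tempfile.mkdtemp()
        buckets = defaultdict(list)
        for src in png_paths:
            base = os.path.join(tmp, os.path.basename(src))
            webp = compressed_size(["cwebp", "-lossless", "-q", "100", src, "-o", base + ".webp"],
                                   base + ".webp")
            flif = compressed_size(["flif", "-e", "-N", src, base + ".flif"], base + ".flif")
            key = "<16 kB" if webp < 16_384 else "16-256 kB" if webp < 262_144 else ">256 kB"
            buckets[key].append(flif / webp)
        return {k: sum(v) / len(v) for k, v in buckets.items()}
    A mean ratio below 1.0 in a bucket means FLIF compresses better there on average.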

  11. #39
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    700
    Thanks
    210
    Thanked 267 Times in 157 Posts
    Did you ever consider making another, smaller-resolution lossless benchmark by 4x4-subsampling the source images? That way you'd get a benchmark that is slightly freer from possible lossy-compression artefacts, like JPEG ringing.
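    A minimal sketch of such 4x4 subsampling, assuming a plain box average over non-overlapping 4x4 blocks (Pillow and NumPy are used for I/O and the averaging):
    Code:
    import numpy as np
    from PIL import Image

    def box_subsample_4x4(src_path, dst_path):
        """Downscale 4x in each dimension by averaging non-overlapping 4x4 blocks."""
        img = np.asarray(Image.open(src_path).convert("RGB"), dtype=np.float64)
        h, w, c = img.shape
        h, w = h - h % 4, w - w % 4                  # crop to a multiple of 4
        blocks = img[:h, :w].reshape(h // 4, 4, w // 4, 4, c)
        small = blocks.mean(axis=(1, 3)).round().astype(np.uint8)
        Image.fromarray(small).save(dst_path)
    Averaging (rather than keeping every fourth pixel) also suppresses high-frequency residue such as JPEG ringing.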

  12. #40
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    237
    Thanks
    39
    Thanked 92 Times in 48 Posts
    Quote Originally Posted by byronknoll View Post
    I have started running cmix on the LPCB.
    Will it finish in January?

    Quote Originally Posted by Jyrki Alakuijala View Post
    Did you ever consider making another, smaller-resolution lossless benchmark by 4x4-subsampling the source images? That way you'd get a benchmark that is slightly freer from possible lossy-compression artefacts, like JPEG ringing.
    Ex-JPEGs were excluded, but yes, I did consider making a set of much smaller images...

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  13. Thanks:

    Jyrki Alakuijala (22nd January 2018)

  14. #41
    Member
    Join Date
    Mar 2011
    Location
    USA
    Posts
    226
    Thanks
    108
    Thanked 106 Times in 65 Posts
    Quote Originally Posted by Alexander Rhatushnyak View Post
    Will it finish in January?
    Yeah, it should finish in a few days.

  15. #42
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    700
    Thanks
    210
    Thanked 267 Times in 157 Posts
    Quote Originally Posted by Alexander Rhatushnyak View Post
    Ex-JPEGs were excluded, but yes, I did consider making a set of much smaller images...
    Ex-JPEGs are not the only possible source of artefacts. Image reconstruction from the imaging-sensor data leaves some rather strange correlations that can be tied to a particular acquisition or reconstruction method. Such correlations tend to go away with resampling. I think many of the important pictures on the internet (like sales catalogs, etc.) have been resampled from a higher-resolution source, so it becomes not very relevant how well such image-reconstruction correlations are modeled.

  16. #43
    Member
    Join Date
    Mar 2011
    Location
    USA
    Posts
    226
    Thanks
    108
    Thanked 106 Times in 65 Posts
    The cmix results are finally ready! This was run at revision 80a5f73. Total compressed size is 938042000 bytes. Compression time: 8152355 seconds. Decompression time: 8199660 seconds. All files validated correctly after decompression. TSV file attached with the results.
    Attached Files

  17. Thanks (2):

    Alexander Rhatushnyak (28th January 2018),mpais (28th January 2018)

  18. #44
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    237
    Thanks
    39
    Thanked 92 Times in 48 Posts
    Quote Originally Posted by byronknoll View Post
    Compression time: 8152355 seconds.
    94.356 days...
    Is anything known about the hardware?
    Perhaps types of instances? in which cloud?

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  19. #45
    Member
    Join Date
    Mar 2011
    Location
    USA
    Posts
    226
    Thanks
    108
    Thanked 106 Times in 65 Posts
    Quote Originally Posted by Alexander Rhatushnyak View Post
    94.356 days...
    Is anything known about the hardware?
    Perhaps types of instances? in which cloud?

    This page describes the different machine types: https://cloud.google.com/compute/docs/machine-types
    I used: "custom (4 vCPUs, 30 GB memory)", "Unknown CPU Platform"


    cmix usually runs 2-3x as fast on my home desktop compared to the Compute Engine instances. Also there is a large variance in performance between different VM instances.

  20. #46
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    237
    Thanks
    39
    Thanked 92 Times in 48 Posts
    Thank you, Byron!
    Updated LPCB with the results from cmix.
    Congratulations on the first place!

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  21. Thanks:

    byronknoll (18th February 2018)

  22. #47
    Member
    Join Date
    Feb 2018
    Location
    Germany
    Posts
    2
    Thanks
    1
    Thanked 0 Times in 0 Posts
    Hello fellows, I am working on a variant of JPEG-LS (specifically to increase compression/decompression speeds). Could any of you please guide me on how to generate file sizes in the standard format (i.e. as on http://qlic.altervista.org/)? I have been trying to get the size of files/directories using Linux's 'du -s' command, but that command seems to provide only an estimate.

    Is there some script with which I can get the formatted size and compression time? Thanks in advance.

  23. #48
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    237
    Thanks
    39
    Thanked 92 Times in 48 Posts
    Perhaps you should try ls -g *.jpegls

    Stackoverflow.com is a good place for such questions.
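    For illustration, a minimal measurement sketch in the spirit of the benchmark (not the actual LPCB script; the cwebp command line at the bottom is just a placeholder). It reports exact byte sizes via os.path.getsize, unlike the block-based estimate from du -s, and wall-clock time:
    Code:
    import os, subprocess, time

    def measure(cmd, outputs):
        """Run one compression command; return (wall-clock seconds, total output bytes)."""
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        elapsed = time.perf_counter() - start
        total = sum(os.path.getsize(p) for p in outputs)   # exact bytes, unlike `du -s`
        return elapsed, total

    # Example with a placeholder command line:
    # t, size = measure(["cwebp", "-lossless", "-q", "100", "in.png", "-o", "out.webp"], ["out.webp"])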

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  24. Thanks:

    asifrajput (27th March 2018)

  25. #49
    Member
    Join Date
    Feb 2018
    Location
    Germany
    Posts
    2
    Thanks
    1
    Thanked 0 Times in 0 Posts
    Thank you, Alexander, for the sarcastic reply. As you can see in my question, I asked for a script that can provide me with the time and file sizes.

    The reason why I am so keen on replicating the 'accepted norm' is that I do not want to get my hopes too high (by overlooking some obvious mistake).

    For instance, if I measure the time with the standard 'time' command, should I just pick the system time or the user time, or perform some calculation? If there is an existing script used for generating the benchmark results, it would eliminate the ambiguity.

    I hope I explained the original question.

  26. #50
    Member
    Join Date
    Feb 2016
    Location
    Luxembourg
    Posts
    523
    Thanks
    198
    Thanked 750 Times in 304 Posts
    Here are the results for paq8px_v146: total compressed size is 937.571.800 bytes, compression time: 172.817 seconds, decompression and verification time: 172.832 seconds.
    The machine used was an Intel i7 5820K @ 4.4 GHz with 64 GB of DDR4-2400 CL11 RAM.

    Best regards
    Attached Files

  27. Thanks (2):

    Darek (7th July 2018),Gotty (6th July 2018)

  28. #51
    Member
    Join Date
    Feb 2016
    Location
    Luxembourg
    Posts
    523
    Thanks
    198
    Thanked 750 Times in 304 Posts
    Here are the results for paq8px_v160, option -9s:
    Code:
    Total compressed size: 929.159.073 bytes
    Compression time: 183.493,88 sec
    Decompression time: 193.694,86 sec
    Machine used: Intel i7 5820K @ 4.4 GHz with 64 GB of DDR4-2400 CL11 RAM
    Best regards
    Attached Files

  29. Thanks:

    Gotty (27th August 2018)

  30. #52
    Member
    Join Date
    Mar 2011
    Location
    USA
    Posts
    226
    Thanks
    108
    Thanked 106 Times in 65 Posts
    Here are the results for cmix synced to revision 4f665e7:

    Code:
    Total compressed size: 908365402 bytes
    Compression time: 6591255.64 sec
    Decompression time: 6584262.29 sec
    TSV file attached. This was run on Google Compute Engine with the following VM configuration: "custom (4 vCPUs, 30 GB memory)", "Unknown CPU Platform"
    Attached Files

  31. Thanks (2):

    mpais (15th September 2018),schnaader (15th September 2018)

  32. #53
    Member
    Join Date
    Mar 2011
    Location
    USA
    Posts
    226
    Thanks
    108
    Thanked 106 Times in 65 Posts
    Here are the results for cmix v16:

    Code:
    Total compressed size: 906796260 bytes
    Compression time: 7936821.34 sec
    Decompression time: 7890616.36 sec
    TSV file attached. This was run on Google Compute Engine with the following VM configuration: "custom (4 vCPUs, 30 GB memory)", "Unknown CPU Platform"
    Attached Files

  33. Thanks:

    mpais (7th October 2018)

  34. #54
    Member
    Join Date
    Jun 2018
    Location
    Slovakia
    Posts
    143
    Thanks
    40
    Thanked 8 Times in 8 Posts
    Dear Mr. Rhatushnyak!

    I've seen on LPCB that there are many entries at more than 100% of the "original size" - that's confusing. Why? The original size should be 100%, with only improvements below it. Could you fix that?

    Thanks.
    CompressMaster

  35. #55
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    700
    Thanks
    210
    Thanked 267 Times in 157 Posts
    Dr. Rhatushnyak. Not Mr.

  36. #56
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    237
    Thanks
    39
    Thanked 92 Times in 48 Posts
    Updated LPCB: paq8px_v182fix1 and cmix 16.

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  37. Thanks (4):

    byronknoll (8th October 2019),Gotty (7th October 2019),Hakan Abbas (7th October 2019),Mike (6th October 2019)
