
Thread: Google's compression projects

  1. #91
    Member
    Join Date
    Nov 2019
    Location
    Moon
    Posts
    21
    Thanks
    5
    Thanked 24 Times in 13 Posts
    A comparison of some metrics:
    2048x1320_nitish-kadam-34748


    Butteraugli (AVIF - HEIC - HTJ2K - WebP - JPEG XL - MozJpeg)
    3-norm: (AVIF - HEIC - HTJ2K - JPEG XL - WebP - MozJpeg)
    AVIF 9.2743740082 3-norm: 2.321073
    HEIC 13.8783264160 3-norm: 3.060689
    HTJ2K 17.1095638275 3-norm: 4.099578
    MozJpeg 19.1906070709 3-norm: 8.023822
    JPEG XL 19.0907402039 3-norm: 4.272680
    WebP 17.9173774719 3-norm: 5.007857

    DSSIM
    (AVIF - HEIC - JPEG XL - HTJ2K - WebP - MozJpeg)
    0.007985 AVIF
    0.011601 HEIC
    0.017184 HTJ2K
    0.086505 MozJpeg
    0.015210 JPEG XL
    0.031527 WebP
    SSIMULACRA (AVIF - HEIC - JPEG XL - HTJ2K - WebP - MozJpeg)
    0.02931092493236065 AVIF
    0.038233496248722076 HEIC
    0.05292587727308273 HTJ2K
    0.10630584508180618 MozJpeg
    0.05085379257798195 JPEG XL
    0.06478796154260635 WebP
    VMAF: (AVIF - HEIC - HTJ2K - JPEG XL - WebP - MozJpeg)
    AVIF VMAF: 87.43534236386863, PSNR: 44.77028232276835, SSIM: 0.9923002123832703, MS-SSIM: 0.9906416878743063
    HEIC VMAF: 80.8764654659456, PSNR: 42.94141442285002, SSIM: 0.9905341863632202, MS-SSIM: 0.9884282902889149
    HTJ2K VMAF: 73.2913217099848, PSNR: 40.91610359405568, SSIM: 0.9820453524589539, MS-SSIM: 0.9816313764353033
    MozJpeg VMAF: 54.10929529591918, PSNR: 35.5303285917693, SSIM: 0.9309688806533813, MS-SSIM: 0.936962545830246
    JPEG XL VMAF: 61.59250694065548, PSNR: 39.93377021739749, SSIM: 0.9857387542724609, MS-SSIM: 0.9829265315009978
    WebP VMAF: 55.94460452312931, PSNR: 37.98409101496202, SSIM: 0.969596803188324, MS-SSIM: 0.9705516740404905

    But at a larger bpp (in other examples the results are broadly similar):
    2048x1320_alex-siale-95113
    (~506000 bytes each image)

    Butteraugli (JPEG XL - MozJpeg - HEIC - AVIF - HTJ2K - WebP)
    3-norm: (JPEG XL - MozJpeg - HEIC - WebP - HTJ2K - AVIF)
    AVIF 8.8833646774 3-norm: 4.289337
    HEIC 8.6990165710 3-norm: 3.963638
    HTJ2K 9.4587421417 3-norm: 4.243970
    MozJpeg 7.4712114334 3-norm: 3.394969
    JPEG XL 1.8749873638 3-norm: 1.050182
    WebP 9.5613374710 3-norm: 3.975210
    DSSIM (JPEG XL - HEIC - AVIF - MozJpeg - WebP - HTJ2K)
    0.011497 AVIF
    0.011003 HEIC
    0.017835 HTJ2K
    0.012662 MozJpeg
    0.009563 JPEG XL
    0.014941 WebP
    SSIMULACRA (JPEG XL - HEIC - AVIF - WebP - HTJ2K - MozJpeg)
    0.03911368176341057 AVIF
    0.03868887946009636 HEIC
    0.05217859894037247 HTJ2K
    0.05393524467945099 MozJpeg
    0.035159382969141006 JPEG XL
    0.04311709105968475 WebP
    VMAF: (MozJpeg - HTJ2K - HEIC - AVIF - WebP - JPEG XL)
    SSIM: (MozJpeg - HEIC - AVIF - WebP - HTJ2K - JPEG XL)
    AVIF VMAF: 91.62228049091999, PSNR: 37.9803197776212, SSIM: 0.9977059364318848, MS-SSIM: 0.9922499111306672
    HEIC VMAF: 92.00663265691058, PSNR: 38.64841753304782, SSIM: 0.9986478090286255, MS-SSIM: 0.9948372815741532
    HTJ2K VMAF: 92.19011608093165, PSNR: 36.41784272626444, SSIM: 0.9966325163841248, MS-SSIM: 0.9897441535051162
    MozJpeg VMAF: 92.73296699245252, PSNR: 34.54616417406431, SSIM: 0.9990101456642151, MS-SSIM: 0.9928436494191371
    JPEG XL VMAF: 89.07241730929319, PSNR: 36.8834302952345, SSIM: 0.9963483810424805, MS-SSIM: 0.9906952076528491
    WebP VMAF: 90.0164064491984, PSNR: 36.00095502257268, SSIM: 0.9976305365562439, MS-SSIM: 0.9916054652684111

    2048x1320_andrew-coelho-46449 (~432000 bytes each image)

    Butteraugli, DSSIM, SSIMULACRA (JPEG XL - best ... HTJ2K - worst)
    VMAF, SSIM: (MozJpeg - best ... JPEG XL - worst) - Netflix, for example, uses these metrics in its framework and on its blog (with a note that VMAF is better suited to video).
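    For reference, numbers like these can be gathered with the commonly used standalone metric tools; a minimal sketch, assuming the butteraugli, dssim and ssimulacra command-line tools and an FFmpeg build with libvmaf (exact flags and output formats differ between versions, and these are not necessarily the commands used for the tables above):
    Code:
    # REF = original image, DEC = the codec output decoded back to PNG
    REF=original.png
    DEC=decoded.png

    butteraugli "$REF" "$DEC"     # Butteraugli distance (some builds also report a 3-norm)
    dssim "$REF" "$DEC"           # DSSIM, lower is better
    ssimulacra "$REF" "$DEC"      # SSIMULACRA, lower is better

    # VMAF via FFmpeg's libvmaf filter (distorted input first, reference second);
    # PSNR/SSIM/MS-SSIM need additional feature options depending on the libvmaf version
    ffmpeg -i "$DEC" -i "$REF" -lavfi libvmaf -f null -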
    Attached Thumbnails: WebP.png (581.0 KB)
    Last edited by Scope; 27th February 2020 at 19:28.

  2. Thanks (3):

    Hakan Abbas (29th February 2020),Jyrki Alakuijala (27th February 2020),zubzer0 (27th February 2020)

  3. #92
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    811
    Thanks
    234
    Thanked 291 Times in 173 Posts
    Thank you so much for doing this! This is very useful.

    FYI, the first image is 0.1 BPP, i.e., 1:240 compression. The second image is 0.28 BPP, i.e., 1:86 compression. I consider both of these to be aggressive compression settings; in practical use people would use 1:50 to 1:15 compression.
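    (For anyone re-deriving these figures: BPP is bits per pixel, and the ratio is taken against 24-bit uncompressed RGB, so ratio = 24 / BPP. A minimal sketch; the byte count below is an illustrative value, not one of the files above:)
    Code:
    # bpp = bytes * 8 / (width * height); compression ratio vs. 24-bit RGB = 24 / bpp
    width=2048; height=1320
    bytes=33792                                   # illustrative: yields 0.1 bpp at 2048x1320
    bpp=$(echo "scale=4; $bytes*8/($width*$height)" | bc)
    ratio=$(echo "scale=0; 24/$bpp" | bc)
    echo "bpp=$bpp, compression = 1:$ratio"       # -> bpp=.1000, compression = 1:240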

  4. #93
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    811
    Thanks
    234
    Thanked 291 Times in 173 Posts
    Quote Originally Posted by Piglet View Post
    ​I am curious 10 KB landscape (https://imgsli.com/MTIyMzA/) mathematical numbers
    Mathematical analysis works best on the Y channel, and significantly penalizes proper color modeling. Consider repeating the effort for an image that has been converted to gray after compression (or before compression).

  5. Thanks:

    Piglet (27th February 2020)

  6. #94
    Member
    Join Date
    Jun 2018
    Location
    Yugoslavia
    Posts
    51
    Thanks
    6
    Thanked 2 Times in 2 Posts
    it seems that webp is much better than flif on lossless 'png' compression in some cases. I've tried it with low-res indexed PNGs (screenshots from old computer games):
    28487 files
    png: 375 MB
    flif: 410 MB
    webp: 320 MB

    flif is actually worse than .png on this kind of image.
    settings used:
    flif -e -K -E100 -Q100
    cwebp -preset drawing -lossless -z 9

    webp took several days!
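    (A batch run with those settings might look like the sketch below; only the flif and cwebp flags are taken from the post, the loop and size totals are an assumed wrapper.)
    Code:
    # losslessly re-encode every PNG with the settings above, then compare totals
    mkdir -p out_flif out_webp
    for f in *.png; do
        flif -e -K -E100 -Q100 "$f" "out_flif/${f%.png}.flif"
        cwebp -preset drawing -lossless -z 9 "$f" -o "out_webp/${f%.png}.webp"
    done
    du -ch *.png      | tail -1    # total PNG size
    du -ch out_flif/* | tail -1    # total FLIF size
    du -ch out_webp/* | tail -1    # total WebP size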

  7. #95
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    811
    Thanks
    234
    Thanked 291 Times in 173 Posts
    Quote Originally Posted by pklat View Post
    png: 375Mb
    flif: 410Mb
    webp: 320Mb
    I'm happy to hear this. In my own experiments WebP lossless won against flif in density for PNGs from the internet in the 16-256 kB size range. Overall, flif typically gives slightly better density, but not as much as advertised. Flif decoding is 10-20x slower than WebP, so it's natural that it gives more density.

    Both Jon and I contributed to JPEG XL, but this kind of image had a relatively low priority in testing. We wanted to solve the photography use case. There is at least one place where WebP lossless and FUIF interacted in it, and that was the integration of my select predictor (a further development of Paeth).

  8. #96
    Member
    Join Date
    Nov 2019
    Location
    Moon
    Posts
    21
    Thanks
    5
    Thanked 24 Times in 13 Posts
    I haven't found any publicly available comparisons of lossless image formats that include Jpeg XL and AVIF and contain enough types of images (tests on a small set may not be representative, and it is easier to select such a set for a desired result).
    So I made my own comparison, both on public content from the Internet (Reddit) and on images that I can't distribute (most of these are high-quality digital 2D art at very high resolution, and there is a lot of it, but since computing power and time are limited, I selected random sets, compared WebP and Jpeg XL, and provided results without the source images; AVIF is slow and has memory problems at high resolutions). Public images are divided into different types (including Photo, Pixel Art, Low Poly 3D, Fractals) to make it clearer on which type of content a particular encoder shows better results (it is also possible to select certain images to demonstrate that one of the formats is "better", which makes comparisons on a small type and number of images less meaningful); where possible, each image has a link for viewing.

    I can point out that Jpeg XL (considering speed) shows good results on everything except Pixel Art and images with repeating texture, color or noise. Speed 9 is noticeably slower, but its result is not always better than Speed 8, although in general the gain is 0.4-0.6% (and it is taken as the basis for comparison); sometimes Speed 3 shows better results (the comparison with fast speeds was done specifically to notice ineffectiveness of the algorithms, models, predictors, etc. in the slower modes).

    AVIF
    (YUV444) is the slowest, although the speed is already acceptable for use, and compression is good. The problem for me was Libaom's memory consumption at high resolutions (it doesn't matter whether it is run through Avifenc, Colorist or Aomenc itself); for some images even 32 GB of RAM was not enough and it could not encode them (using tiles did not help either, and in theory tiles can give worse results). For example, at a not particularly high resolution: https://i.imgur.com/S1yCVW2.png. Maybe there are some other options to reduce consumption, or it is a memory issue in Windows. Rav1e also consumes a lot of memory, but it could encode any image, though it was much slower and its lossless results in my tests were worse.

    FLIF
    was selected for comparison to see where the rest of the encoders show underperforming results (including its successor, Jpeg XL). It does not support multithreading, but in single-threaded mode it was not as slow as I expected.

    Lossless WebP is already a good workhorse for many types of images (its overall result is worse, but on individual images it rarely shows a very noticeable drop in performance on any kind of content)

    Jpeg LS part-2 (Photo) - shows better results than WebP on certain photographic images

    BPG, HEIC, Jpeg 2000, Jpeg LS
    - tested on random images; the results were much worse on my content, so I decided not to spend time on these encoders

    BMF
    - a strong lossless image compressor, included to gauge the compressibility potential

    PNG
    Optimizers - Pingo gives a very good result at high speed; ECT (ect -9 --reuse) applied after Pingo reduces the size by another 0.2-1.8%. Perhaps with Zopflipng or other optimizers you could get an even smaller size somewhere, but the difference would not be significant and the speed would be incredibly slow. The main goal was not to compare against non-optimized, bloated PNGs (the actual size of some was 20-90% larger).

    Public Images (Total Size) https://i.imgur.com/CKcpGR6.png
    Public Images + HQ Art (Total Size) https://i.imgur.com/BFJOlC0.png

    Lossless Image Formats Comparison (Jpeg XL, AVIF, WebP, FLIF, PNG, ...)
    https://docs.google.com/spreadsheets/d/1ju4q1WkaXT7WoxZINmQpf4ElgMD2VMlqeDN2DuZ6yJ8/
    Last edited by Scope; 23rd March 2020 at 23:32.

  9. Thanks (3):

    Hakan Abbas (15th March 2020),Jan Wassenberg (15th March 2020),Shelwien (15th March 2020)

  10. #97
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    36
    Thanks
    1
    Thanked 13 Times in 10 Posts
    using each optimizer separately with maximum settings gives a worse result and takes a lot longer
    using stronger deflate on pingo's results could give smaller files, but it should not be slower than what you did, even with higher levels. do you have a (speed) log of your comparison for PNG?

  11. #98
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    811
    Thanks
    234
    Thanked 291 Times in 173 Posts
    Quote Originally Posted by Scope View Post
    Lossless WebP is already a good workhorse for many types of images.
    Again your comparison work is very interesting.

    Did you compile the WebP encoder yourself? There are old binaries around that contain rather embarrassing bugs and are on average 10-15% worse, and even more so on grayscale images.

    You could also try 'near lossless':

    In JPEG XL, you could also try --distance 0.6 to --distance 0.8 with lossy mode, that will be sufficiently good for most purposes.

    In WebP lossless, you apply cwebp -near_lossless 40 (or cwebp -near_lossless 60) for a near-lossless experience. It uses the lossless codec, but allows some very minor deviation with huge savings, particularly so for photographs.
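    (A sketch of what those invocations could look like on a single file; the cjxl binary name and its --distance flag refer to the libjxl reference encoder and are an assumption here, since the encoder binary at the time may have been named differently; the cwebp flags are as given above.)
    Code:
    # JPEG XL, lossy mode at a visually near-lossless distance
    cjxl --distance 0.7 input.png output.jxl

    # WebP near-lossless: uses the lossless codec but allows minor pixel deviation;
    # lower values allow more deviation (and give bigger savings)
    cwebp -near_lossless 60 input.png -o output_nl60.webp
    cwebp -near_lossless 40 input.png -o output_nl40.webp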

  12. #99
    Member
    Join Date
    Nov 2019
    Location
    Moon
    Posts
    21
    Thanks
    5
    Thanked 24 Times in 13 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    Did you compile the WebP encoder yourself? There are old binaries around that contain rather embarrassing bugs and are on average 10-15% worse, and even more so on grayscale images.
    Yes, I used the latest versions of all encoders

    Quote Originally Posted by Jyrki Alakuijala View Post
    You could also try 'near lossless':
    In JPEG XL, you could also try --distance 0.6 to --distance 0.8 with lossy mode, that will be sufficiently good for most purposes.
    In WebP lossless, you apply cwebp -near_lossless 40 (or cwebp -near_lossless 60) for a near-lossless experience. It uses the lossless codec, but allows some very minor deviation with huge savings, particularly so for photographs.
    Then it would be a completely different comparison; in these tests I tried to put every encoder under the same conditions.

    Quote Originally Posted by cssignet View Post

    using stronger deflate on pingo's results could give smaller files, but it should not be slower than what you did, even with higher levels. do you have a (speed) log of your comparison for PNG?
    Hmm, yes, Pingo at maximum settings is not so slow (but the file size is still larger than with Pingo + ECT); apparently I was remembering the testing of other combinations (ECT + ECT, Oxipng + ECT, ...).
    Last edited by Scope; 16th March 2020 at 16:43.

  13. Thanks:

    Jyrki Alakuijala (16th March 2020)

  14. #100
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    36
    Thanks
    1
    Thanked 13 Times in 10 Posts
    Pingo at maximum settings is not so slow (but the file size is still larger than Pingo + ECT)
    pingo is speed over space and more designed to be used on small web graphics (RGBA, PLTE+tRNS). however, on the first 16 files of your comparison from "Anime":

    pingo -sa -strip:
    Code:
    pingo - (306.01s):
    -----------------------------------------------------------------
    16 files => 11821.59 KB - (23.96%) saved
    -----------------------------------------------------------------
    ​
    Kernel  Time =    38.157 =   12%
    User    Time =  1119.322 =  365%
    Process Time =  1157.480 =  378%    Virtual  Memory =   1321 MB
    Global  Time =   306.025 =  100%    Physical Memory =   1198 MB
    ECT -9 --reuse --mt-deflate --mt-file: (on pingo's results)
    Code:
    Processed 16 files
    Saved 48.51KB out of 36.64MB (0.1293%)
    
    Kernel  Time =   322.142 =   30%
    User    Time =  3536.324 =  339%
    Process Time =  3858.466 =  370%    Virtual  Memory =    976 MB
    Global  Time =  1040.886 =  100%    Physical Memory =    865 MB
    if you want to use your combination, i would suggest trying pingo's maximum level and the -nocompression flag: it should speed-up the process significantly and could give better overall compression because it benefits from image data reductions. this could require more disk space though

    You could also try 'near lossless'

    i tried similar approaches in pingo, which are at an early stage of development - this is barely tested, on few samples. perhaps the metrics are a bit too enthusiastic about what it does, but it could be compared on this kind of sample. also, the 'WebP lossless': it is even less tested - probably worse most of the time - but it might be able to select other transformations and thus occasionally create smaller files than the brute force (-m 6 -q 100 from cwebp 1.1.0) [edit: added some results]
    Attached Files
    Last edited by cssignet; 20th March 2020 at 16:36. Reason: add the WebP stuff

  15. Thanks:

    Scope (17th March 2020)

  16. #101
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    767
    Thanks
    236
    Thanked 244 Times in 149 Posts
    Benchmarking JPEG XL image compression (1 April 2020):
    https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11353/113530X/Benchmarking-JPEG-XL-image-compression/10.1117/12.2556264.full?SSO=1

    E.g. speed:
    Codec Encode Decode
    JPEG XL (N=4) 49.753 132.424
    JPEG (libjpeg) 9.013 11.133
    JPEG (libjpeg-turbo) 48.811 107.981
    HEVC-HM-YUV444 0.014 5.257
    HEVC-x265-YUV444 1.031 14.037
    HEVC-x265-YUV444 (N = 4) 3.691 14.100
    HEVC-x265-YUV444 (N = 8) 6.345 13.471

  17. Thanks (4):

    Hakan Abbas (6th April 2020),Jan Wassenberg (5th April 2020),Jyrki Alakuijala (5th April 2020),Piglet (15th April 2020)

  18. #102
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    611
    Thanks
    246
    Thanked 240 Times in 119 Posts
    Quote Originally Posted by Jarek View Post
    E.g. speed:
    Codec Encode Decode
    Thanks for the info, but please don't forget the units. The paper says "throughput in megapixels/second"; I was confused at first because I read it as "seconds".
    http://schnaader.info
    Damn kids. They're all alike.

  19. Thanks (3):

    Hakan Abbas (6th April 2020),JamesB (11th April 2020),Jyrki Alakuijala (5th April 2020)

  20. #103
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    811
    Thanks
    234
    Thanked 291 Times in 173 Posts
    I have to say that I am proud of the JPEG XL team for delivering such coding speeds and the strong guarantees on quality above 0.5 bpp (1:50 compression).

  21. Thanks:

    Jarek (5th April 2020)

  22. #104
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    767
    Thanks
    236
    Thanked 244 Times in 149 Posts
    Quote Originally Posted by Jon Sneyers View Post
    For Squeeze residuals, the best predictor is just the zero predictor
    I have finally looked at that, and for the last scan in a database of 48 grayscale 512x512 images:
    - predicting the value from context led to savings of 0.393 bits/value - blue dots, especially thanks to "C-D" giving the local gradient,
    - additionally predicting the width (error level), the savings grew to 0.645 bits/value - green dots.
    https://arxiv.org/pdf/2004.03391


  23. #105
    Member
    Join Date
    Jan 2015
    Location
    Hungary
    Posts
    12
    Thanks
    21
    Thanked 7 Times in 3 Posts
    There are still some interesting things.

    Attached image: amd.png (266.1 KB)

    Code:
    amd.png        Irfanview png level 9         267 KB
    
    webp :
    
    Irfanview 4.54 - 64 bit lossless             326 KB ???
    
    command line 1.1.0 -lossless
    
    default                                      134 KB
    
    -z 0                                         212 KB
    
    -z 9                                          55 KB

  24. Thanks:

    schnaader (9th April 2020)

  25. #106
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    611
    Thanks
    246
    Thanked 240 Times in 119 Posts
    Interesting test file. I'd guess it could be an ex-jpeg, as the color count is quite high for this content even when considering anti-aliasing (IrfanView counts 5,013 unique colors), but if there are JPG artifacts, they're very subtle. Alex's ex jpeg detector outputs "1.091 Maybe. Try better methods.". On the other hand, the small size from webp suggests that there are blocks with repeating identical content which I think is unlikely for an ex-JPEG if the repeating content isn't aligned at offsets that are a multiple of 8.

    Note that on this type of image (very compressible, repeating content), non-image compressors will often reach similar sizes while compressing faster, e.g. Precomp's LZMA2:

    Code:
    amd.bmp.pcf    56,367
    amd.png.pcf    90,071 (bigger because 8,574 bytes reconstruction data and processing PNG filtered lines)
    Also, if you didn't know, to speed up the long compression time for -z 9, you can enable multithreading using -mt. Unfortunately, this is only available for the slowest modes (-z 9 and the equivalent "-m 6 -q 100")
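    For example (a minimal sketch using only the flags mentioned in this thread):
    Code:
    # slowest lossless mode with multithreading enabled
    cwebp -mt -lossless -z 9 input.png -o output.webp
    # equivalent: cwebp -mt -lossless -m 6 -q 100 input.png -o output.webp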
    http://schnaader.info
    Damn kids. They're all alike.

  26. Thanks:

    Piglet (9th April 2020)

  27. #107
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    811
    Thanks
    234
    Thanked 291 Times in 173 Posts
    The 1.5% of images where PNG wins over WebP lossless have differing row-by-row statistics. WebP lossless defines a predictor for each small square of the image, whereas PNG defines a predictor for each row. The WebP lossless strategy is generally much better, but occasionally PNG does perform better. The wins by PNG are, however, not big when they occur.

  28. #108
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    611
    Thanks
    246
    Thanked 240 Times in 119 Posts
    Seems the high color count comes from some strange (dithering?) patterns in the image. What look like single-color areas are not; e.g., zooming in and using a fill tool with a brown color reveals single pixels and repeating patterns:

    Attached image: l3_cache_fill.png (70.6 KB)

    EDIT: Edge detection and contrast modifications make the pattern visible:

    Attached image: edge_detection_contrast.png (153.4 KB)
    Last edited by schnaader; 9th April 2020 at 23:30.
    http://schnaader.info
    Damn kids. They're all alike.

  29. Thanks:

    Jyrki Alakuijala (24th April 2020)

  30. #109
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    767
    Thanks
    236
    Thanked 244 Times in 149 Posts
    ISO/IEC DIS 18181-1(en)
    Information technology — JPEG XL Image Coding System — Part 1: Core coding system
    Draft international standard, voting begins on 2020-04-14, terminates on 2020-07-07
    https://www.iso.org/obp/ui#iso:std:i...dis:ed-1:v1:en
    paywalled but looks similar to https://arxiv.org/pdf/1908.03565

  31. Thanks:

    Jyrki Alakuijala (24th April 2020)

  32. #110
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    56
    Thanks
    5
    Thanked 32 Times in 23 Posts
    Thanks for putting in the time to do these comparisons.

    I'm curious:

    Quote Originally Posted by Scope View Post

    AVIF
    (YUV444) is the slowest, although the speed is already acceptable for use, and compression is good

    [...]

    Lossless Image Formats Comparison (Jpeg XL, AVIF, WebP, FLIF, PNG, ...)
    https://docs.google.com/spreadsheets/d/1ju4q1WkaXT7WoxZINmQpf4ElgMD2VMlqeDN2DuZ6yJ8/
    Which of these tests are absolutely lossless? (that is: starting from PNG's RGB values, you'd get them back exactly after decompression)

    I'm asking because RGB -> YUV444 conversion would incur some loss already, right?

  33. #111
    Member
    Join Date
    Nov 2019
    Location
    Moon
    Posts
    21
    Thanks
    5
    Thanked 24 Times in 13 Posts
    Quote Originally Posted by skal View Post
    I'm asking because RGB -> YUV444 conversion would incur some loss already, right?
    Yes, there were discussions about YUV444:
    https://github.com/joedrago/colorist/issues/26
    https://www.reddit.com/r/AV1/comment...ally_lossless/

    Lossless AVIF is now supported in Avifenc (Libavif):
    https://github.com/AOMediaCodec/liba...0ab2edfde13fc5
    But as I understand it, this method is not very effective, and I will add truly lossless AVIF to the comparisons later.

    I am currently testing the new Jpeg XL build (compression has been significantly improved on the worst examples from older builds), but full results at different speeds will take a while.

    A comparison of various PNG optimizers has also been added:
    https://docs.google.com/spreadsheets/d/1ju4q1WkaXT7WoxZINmQpf4ElgMD2VMlqeDN2DuZ6yJ8/edit#gid=1741276444

    Pingo shows better results by (losslessly?) converting (or fixing the alpha of?) large 32-bit PNGs to 24-bit.
    ECT is very close, and without those two 32-bit PNGs it even has a slightly better result.
    Both optimizers handle incorrect PNGs well, whereas for other tools those files had to be fixed manually.
    Uncompressed PNG + Brotli was also added for comparison (theoretically there is browser support, but in practice this solution has its drawbacks).

    Quote Originally Posted by dado023 View Post
    Is using ect + pingo on same file yielding higher compression?
    ​If yes, does order of usage matter?
    Yes, I have used Pingo + ECT from the very beginning; as shown in the tests, this gives slightly better optimization (0.27% of the total result; the difference was larger with older versions).
    ECT does not perform color conversion on some 32-bit files as Pingo does (although I'm not sure whether that conversion is lossless).


    Order of usage:
    pingo -sa -nocompression -strip
    ect -9 --reuse
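    (Applied to a folder of PNGs, that order could look like the sketch below; the flags are exactly those listed above, and the assumption that both tools rewrite the files in place is mine.)
    Code:
    for f in *.png; do
        # stage 1: pingo does the image-data reductions but skips its own final compression
        pingo -sa -nocompression -strip "$f"
        # stage 2: ECT recompresses the reduced image data with its stronger deflate
        ect -9 --reuse "$f"
    done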

    Quote Originally Posted by cssignet View Post
    i did not look deeply but IMHO, a benchmark should mention at least time, results (size, memory), version (latest if possible), and the exact command line used for each tool. also, consider adding various PNG/image type which not only result as RGB. FWIW, pingo would convert 16->8 bits/samples (web)
    I used the latest version for the current day; the version and the exact command line are written in the tooltip. Unfortunately, I can't measure time and memory consumption because I don't have a separate computer for tests that take this long (and my main PC is constantly in use and loaded with something else).

    Examples of large PNG size differences:
    Source: https://i.redd.it/02pm5dl1ipc41.png
    Pingo - https://i.slow.pics/72GfqzO4.png
    ECT - https://i.slow.pics/QDbFoDbW.png

    Source: https://i.redd.it/mwvt7z763ol41.png
    Pingo - https://i.slow.pics/7E5NXFPh.png
    ECT - https://i.slow.pics/q6uXS4eb.png
    Last edited by Scope; 4th May 2020 at 22:03.

  34. Thanks:

    dado023 (4th May 2020)

  35. #112
    Member
    Join Date
    Mar 2016
    Location
    Croatia
    Posts
    189
    Thanks
    81
    Thanked 13 Times in 12 Posts
    @Scope

    Is using ect + pingo on same file yielding higher compression?
    ​If yes, does order of usage matter?

  36. #113
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    36
    Thanks
    1
    Thanked 13 Times in 10 Posts
    i did not look deeply but IMHO, a benchmark should mention at least time, results (size, memory), version (latest if possible), and the exact command line used for each tool. also, consider adding various PNG/image type which not only result as RGB. FWIW, pingo would convert 16->8 bits/samples (web)

  37. #114
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    36
    Thanks
    1
    Thanked 13 Times in 10 Posts
    Quote Originally Posted by Scope View Post
    I used the latest version for the current day
    i just looked the "Details" tab (which was probably not updated)

    Quote Originally Posted by Scope View Post
    I can't measure time
    time would be the most valuable factor. people cannot measure efficiency without it, and sadly, that makes this comparison far less relevant. in case you change your mind, i would also suggest adding pingo's lower levels to make it more reliable - do the comparison in the same scope/range as the other tools

    Quote Originally Posted by Scope View Post
    Pingo shows better results by (losslessly?) converting (or fixing the alpha of?) large 32-bit PNGs to 24-bit.
    FYI, there is no alpha. those PNG are neither "32 bit" nor "24 bit". both are truecolor 16 bits/sample (48 bits/pixel). regardless of lossless/lossy, pingo is not a generic optimizer; it optimizes for web context. while they load the whole file, web browsers (or most of viewers) would decode those PNG as 8 bits/sample anyway. pingo lossless would drop those image data, and rendered pixels should be exactly preserved

  38. #115
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    811
    Thanks
    234
    Thanked 291 Times in 173 Posts
    Quote Originally Posted by Scope View Post
    I am currently testing the new Jpeg XL build (compression has been significantly improved on the worst examples from older builds)
    JPEG XL lossless has been designed and optimized for photographs; graphics was more of an afterthought.

    WebP lossless is the opposite of that: I designed it to perform best on PNG-like graphics, and lossless photographs were not a priority at all. In fact, I took a 10% hit on photograph compression to keep decoding fast and the codec simple.

    JPEG XL does not usually do LZ compression in its lossless mode. There is a 2D area copy through sprite modeling. Finding the most efficient sprites is still an open topic, whereas doing efficient LZ compression is relatively well understood.

    If you compress PNGs from the internet with WebP lossless, AVIF and JPEG XL, I would not be surprised if WebP lossless still wins that game. If you do that with photographs, JPEG XL likely wins.

    If we want efficient compression, we don't need PNGs any more. WebP lossless is much more efficient in 99% of cases and doesn't lose by a lot when it does lose. For any other modern codec (including JPEG XL) this ratio is worse - think 90% instead of 99%. Also, occasionally WebP lossless is much better: at its best 1st percentile it produces about 25% of the size of the respective PNG. WebP lossless also decompresses faster when the data includes full-channel RGBA, and at similar speed for indexed images.

  39. #116
    Member
    Join Date
    Nov 2019
    Location
    Moon
    Posts
    21
    Thanks
    5
    Thanked 24 Times in 13 Posts
    Quote Originally Posted by cssignet View Post
    i just looked the "Details" tab (which was probably not updated)
    This information is updated, but it relates to comparing formats and encoders


    Quote Originally Posted by cssignet View Post
    time would be the most valuable factor. people cannot measure efficiency without it, and sadly, that makes this comparison far less relevant. in case you change your mind, i would also suggest adding pingo's lower levels to make it more reliable - do the comparison in the same scope/range as the other tools
    I agree, but the goal was to highlight the best optimizers for the final result in a reasonable time.
    Pingo and ECT at these settings are fast enough (it is not a case where one takes a minute and the other takes months).
    With other optimizers, the time difference can be very large, but because their final result is worse, they can be completely discarded (this is more of a general check that among other popular or older optimizers there are no significantly more effective ones).
    But maybe I'll add a time comparison, even if it's approximate (the tools also differ in multithreading support, from none at all to multithreading even within a single file, and their speed depends on the number of files and on whether other utilities handle running them in parallel).

    I did not want to overload the comparison with faster modes (other optimizers have a lot of modes too, but I can test them if there is a suggestion for the best settings with a good ratio of speed to efficiency).

  40. #117
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    36
    Thanks
    1
    Thanked 13 Times in 10 Posts
    Quote Originally Posted by Scope
    With other optimizers, the time difference can be very large, but because their final result is worse, they can be completely discarded
    if you plan to add "time", it would be fairer to compare tools in the same range too, whatever the result on *those specific samples* - they could be different on others, especially on different colortypes (RGBA, paletted). pingo "-sa" includes some kinds of extreme transformation/compression which are not used by zlib-related optimizers such as TruePNG, OptiPNG, etc. it would be more reliable to use the right level when comparing such tools, because this changes efficiency drastically

    Code:
    >timer pngoptimizer image.png
    Global  Time =     0.933 =  100%    Physical Memory =     10 MB
    out is 151421 bytes
    
    >timer pingo -sa image.png <-- out of range
    Global  Time =     1.842 =  100%    Physical Memory =     29 MB
    out is 133557 bytes
    smaller result but 1.97x slower
    
    >timer pingo -s1 image.png
    Global  Time =     0.265 =  100%    Physical Memory =     14 MB
    out is 138861 bytes
    smaller result and 3.52x faster

  41. #118
    Member
    Join Date
    Nov 2019
    Location
    Moon
    Posts
    21
    Thanks
    5
    Thanked 24 Times in 13 Posts
    Quote Originally Posted by cssignet View Post
    ​if you plan to add "time", it would be more fair to compare tool in the same range too
    I thought about it, but then I decided to make a simpler comparison in order to get the best result in a reasonable time (my original goal was to minimize PNG as much as possible and see how much better the more modern formats would be).
    For example, take ECT: its -9 setting has sufficient speed and produces better results than the faster levels, but with --allfilters-b, or when using the iterations branch with more options, a small improvement in the result comes at a very inefficient cost in time (that may be necessary to get the maximum result by any means, but such an option is much less suitable for ordinary users).


    The ratio of speed to result would also be interesting, but it is hard to decide by what criteria to choose the settings; it is quite difficult to pin many optimizers to the same resulting size or total time, and if I test many different settings it will overload the spreadsheet (this is better suited to graphs).
    Or I can add only a few options for the best optimizers: the fastest, an optimal one, and the maximum (which is already there, without extreme settings).

    Btw, I also tested Pingo -sb and the result was slightly worse than -sa on this set.

    Quote Originally Posted by cssignet
    also, consider adding various PNG/image type which not only result as RGB
    I can add this if there is a suggestion for where to get a sufficiently large set of suitable images.

  42. #119
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    36
    Thanks
    1
    Thanked 13 Times in 10 Posts
    Quote Originally Posted by Scope
    a suggestion for where to get a sufficiently large set of suitable images
    search on png web design keywords. you should probably find a lot of RGBA. then you can transform whatever RGBA to:
    4: grayscale+alpha (pingo -noalpha -nocompression -grayscale -s0 file.png)
    3: paletted (pngquant, pngnq, etc., but not pingo)
    2: truecolor (pingo -notrans -nocompression -s0 file.png)
    0: grayscale (pingo -notrans -nocompression -grayscale -s0 file.png)

    note that pingo could do some optimization during the process, so it could eventually transform the colortype. perhaps you could need some other stuff though: -uncompress or -reset (edit: you have to run those after the transformation). the first one just uncompresses the image data.
    -reset goes further: it encodes the image data unfiltered, uncompressed, at minimal bitdepth, 16->8 bits/sample, changes the palette order (if PLTE), and does 'alpha optimization' (RGBA 0,0,0,0) if there is transparency. so basically, it sets an initial state from which any optimizer should be able to recompress the image data. note that some optimizers could benefit from -reset on RGBA samples (PNGOUT or OptiPNG, which do *not* perform this themselves)

    Quote Originally Posted by Scope
    The ratio of speed to result would also be interesting
    perhaps even more than you think on the samples you would find above. that is another reason why it would be interesting to introduce pingo's lower levels into your comparison


    Quote Originally Posted by Scope
    Pingo -sb and the result was slightly worse than -sa on this set
    as said, pingo is mostly designed to play with small web graphics. -sa/-sb use more transformations, and sometimes one can be better than the other. -sb is generally better on paletted images, while it can give varying results on other colortypes. it does, however, have some potential to find a nearly optimal data transformation within the first second of processing

  43. #120
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    36
    Thanks
    1
    Thanked 13 Times in 10 Posts
    Quote Originally Posted by Jyrki Alakuijala
    WebP lossless is much more efficient in 99% of cases
    perhaps regarding the codec itself, but maybe the implementation could be improved in some cases, in speed and size, where an optimized PNG can still be better atm

    original file (unoptimized): https://n8k6e2y6.ssl.hwcdn.net/images/_newsImages/2018/March/13/ZephyrPrime_Keyart.png

    Code:
    >timer cwebp -mt -lossless -m 6 -q 100 ZephyrPrime_Keyart.png -o cwebp.webp
    Saving file 'cwebp.webp'
    File:      ZephyrPrime_Keyart.png
    Dimension: 1920 x 1080
    Output:    806972 bytes (3.11 bpp)
    Lossless-ARGB compressed size: 806972 bytes
      * Header size: 6120 bytes, image data size: 800826
      * Lossless features used: PALETTE
      * Precision Bits: histogram=5 transform=4 cache=3
      * Palette size:   256
    
    Kernel  Time =     0.109 =    0%
    User    Time =    24.882 =  170%
    Process Time =    24.991 =  170%    Virtual  Memory =    156 MB
    Global  Time =    14.619 =  100%    Physical Memory =    151 MB
    this "cwebp.webp", created by cwebp, could be losslessly optimized further:

    Code:
    >timer pingo -s0 cwebp.webp
    
      pingo - (0.94s):
      -----------------------------------------------------------------
      1 file => 117.55 KB - (14.92%) saved
      -----------------------------------------------------------------
    
    Kernel  Time =     0.031 =    3%
    User    Time =     0.920 =   94%
    Process Time =     0.951 =   98%    Virtual  Memory =     77 MB
    Global  Time =     0.970 =  100%    Physical Memory =     73 MB
    pingo is very experimental on this, and is *not* producing optimal results, whatever the level. a higher level should produce better results though

    this difference comes from the fact that predictors are applied on the optimized one. also, it could be possible to test several methods of sorting the palette entries and probably find an alternative to the current one, which could sometimes improve compression density further, whether or not the predictors are efficient (this is actually done and is effective on this sample, but not always)

    a few more random ideas to improve speed/size:

    - you could preprocess the image data to guess whether the image has a chance of being stored better without transformation, by counting colors (which cwebp -m 5 -q 100 is not able to do; you need the brute force atm, which is very slow)
    - what is done for grayscale may sometimes not be optimal and could be transformed to smaller paletted data
    - sum the header size + image data size and check whether a smaller header with bigger image data could still give a smaller overall size
    - various other transformations which have less impact and could possibly make it incompatible with the decoder anyway (8 bits/pixel encoding, which could be better than a lower bitdepth for some images, alpha optimizations, etc.)

    note that i did not really test any of this stuff in a large benchmark, some of it not at all. these are just ideas and/or probably unoptimized implementations
    Attached Files


