Page 2 of 3 (Results 31 to 60 of 63)

Thread: WebP (lossy image compression)

  1. #31
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    885
    Thanks
    480
    Thanked 278 Times in 118 Posts
    Quite a change from the previous article, though I guess the real difference comes from the size of the images in the set. And I agree that web-centric images are supposed to be bandwidth-friendly.

  2. #32
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by Cyan View Post
    Quite a change from the previous article, though I guess the real difference comes from the size of the images in the set. And I agree that web-centric images are supposed to be bandwidth-friendly.
    Yes, but not only. The images are just different: very few photos, and even those were sometimes of terribly low quality (e.g. blurred) in a way you can't find in benchmarks. 50% of the images larger than 30x30 have text on them. Many are just text. Many are partially transparent, and the transparency patterns are again very different from what you see in usual benchmarks.

  3. #33
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    885
    Thanks
    480
    Thanked 278 Times in 118 Posts
    Did you receive any help in focusing your study in the right direction?
    With all known benchmarks focusing on high-quality photographic sources, I guess it was not so natural to go "against the mainstream" and analyze performance for web images...
    Brilliant move by the way

  4. #34
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Did you receive any help in focusing your study in the right direction?
    With all known benchmarks focusing on high-quality photographic sources, I guess it was not so natural to go "against the mainstream" and analyze performance for web images...
    I just noticed that most images I see on the net are different from what people benchmark, and decided to see how compressors work on such data.
    Though it's not exactly new. The idea to look at Alexa was taken from the PNGWolf author, for example. I guess Google's test was on web data too.
    Also, I thought about establishing a public benchmark dataset, but after noticing the flaw of having too many similar images, I decided the work should be redone. I still think it would be useful, but I don't have the motivation to do it now.
    Brilliant move by the way
    Thank you.
    Last edited by m^2; 28th November 2011 at 22:49.

  5. #35
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    246
    Thanks
    45
    Thanked 103 Times in 53 Posts
    Quote Originally Posted by m^2 View Post
    I tested webpll and some other codecs on web png data.
    https://extrememoderate.wordpress.co...ion-benchmark/
    Why didn't you consider adding ZPAQ with -mbmp_j4 ? Speed is roughly the same as webpll (decompression is N times slower, but on images smaller than ~256 kb the difference is not that big), while compression quality is probably better than webpll ...at least if you remove first ~600 or 700 bytes containing compressed bmp_j4.cfg from each compressed file.

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  6. #36
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by Alexander Rhatushnyak View Post
    Why didn't you consider adding ZPAQ with -mbmp_j4 ? Speed is roughly the same as webpll (decompression is N times slower, but on images smaller than ~256 kb the difference is not that big), while compression quality is probably better than webpll ...at least if you remove first ~600 or 700 bytes containing compressed bmp_j4.cfg from each compressed file.
    Because it's an entirely different game. All these algorithms have far faster decompression and lower memory consumption than ZPAQ. Even though the difference in decompression time might be tiny on your PC, I can assure you it wouldn't be on my cell phone, especially since the largest image is 1.8 MB. I had really mixed feelings about JPEG2000 because it's rather slow...

    I considered using BMF and/or Gralic as a way to estimate the entropy of the dataset, but in the end decided against it. The main reason was that I grew really tired of all these bugs. Most of the tools I used had critical problems, and adding another pair of codecs would have been a very unpleasant exercise.

  7. #37
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,494
    Thanks
    26
    Thanked 131 Times in 101 Posts
    m^2:
    You've mentioned in your article that HTTP overhead would cancel out the benefits of WebP's small header size. I think that's not true, as Google also developed the SPDY protocol, which is much less redundant than HTTP. Google Chrome is the only browser that supports SPDY, and it also supports WebP. The same goes for websites that use WebP and SPDY - AFAIK only some Google services use SPDY, so Google's use of the WebP format would make Google's services appear much faster in Google Chrome than in other browsers.

  8. #38
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    I didn't study the SPDY protocol, but I guess that you still need at least one round trip per file, which is a lot compared to the time needed to transfer even 200 B.
    I'm no expert here, so it's quite likely that I'm wrong. I welcome corrections.

  9. #39
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    63
    Thanks
    7
    Thanked 35 Times in 26 Posts
    Quote Originally Posted by Cyan View Post
    Did you receive any help in focusing your study in the right direction?
    With all known benchmarks focusing on high-quality photographic sources, I guess it was not so natural to go "against the mainstream" and analyze performance for web images...
    Brilliant move by the way
    +1

    Thanks for the effort and the interesting results!
    The choice of test set is really spot-on for the intended use.

  10. #40
    Member
    Join Date
    Feb 2010
    Location
    Nordic
    Posts
    200
    Thanks
    41
    Thanked 36 Times in 12 Posts
    Quote Originally Posted by m^2 View Post
    I didn't study the SPDY protocol, but I guess that you still need at least one round trip per file, which is a lot compared to the time needed to transfer even 200 B.
    I'm no expert here, so it's quite likely that I'm wrong. I welcome corrections.
    HTTP 1.1 keep-alive and SPDY both do pipelining.

  11. #41
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by willvarfar View Post
    HTTP 1.1 keep-alive and SPDY both do pipelining.
    So your estimate is that it would matter?

  12. #42
    Member caveman's Avatar
    Join Date
    Jul 2009
    Location
    Strasbourg, France
    Posts
    190
    Thanks
    8
    Thanked 64 Times in 33 Posts
    Quote Originally Posted by m^2 View Post
    I didn't study the SPDY protocol, but I guess that you still need at least one round trip per file, which is a lot compared to the time needed to transfer even 200 B.
    I'm no expert here, so it's quite likely that I'm wrong. I welcome corrections.
    There's another way to get rid of the HTTP round trip: embed the picture, encoded in Base64, directly into the HTML file; this is called a data URI.

    Apple uses this on icloud.com, and I've also spotted some Google logos embedded this way.
    Of course Base64 encoding expands the file size by 1/3, since every 3 bytes are converted into 4 ASCII chars, but the resulting HTML file can then be compressed on the fly by the server (using Deflate) during transmission.
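    A minimal sketch of how such an embed can be built, in Python; the file name logo.png and the resulting markup are illustrative placeholders, not anything from this thread:

    import base64

    # Read an image and wrap it in a data URI (hypothetical file name).
    with open("logo.png", "rb") as f:
        raw = f.read()

    encoded = base64.b64encode(raw).decode("ascii")   # roughly 4/3 of the original size
    data_uri = f"data:image/png;base64,{encoded}"

    # Inline the image into the HTML instead of referencing a separate file,
    # trading one HTTP round trip for the Base64 expansion.
    html = f'<img src="{data_uri}" alt="logo">'
    print(len(raw), len(encoded))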

  13. #43
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Thank you Piotr, willvarfar, caveman. I corrected the post.

  14. #44
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,494
    Thanks
    26
    Thanked 131 Times in 101 Posts
    If I remember correctly, a SPDY-enabled server can parse the HTML on its side and insert the additional resources (CSS, JS, images, etc.) into the compressed stream without prior requests from the client.
    For extremely small images, say below 200 bytes, it may still be more efficient to include them directly in the HTML in Base64-encoded form. Base64 increases the size by 1/3, but without embedding you need to insert the path to the image file into the HTML, which adds information (so the stream size will grow even after compression).
    Last edited by Piotr Tarsa; 29th November 2011 at 16:58.
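    As a rough illustration of that trade-off, the 150-byte image and the 60-character reference below are made-up numbers, purely for the arithmetic:

    import math

    image_size = 150     # hypothetical tiny image, in bytes
    reference_size = 60  # hypothetical length of the <img src="..."> markup for an external file

    # Base64 maps every 3 input bytes to 4 output characters (plus padding).
    embedded_size = 4 * math.ceil(image_size / 3)

    # Embedding costs ~200 B of inline Base64 but no extra request;
    # referencing costs ~60 B of markup plus a separate round trip for 150 B.
    print(embedded_size, reference_size + image_size)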

  15. #45
    Member caveman's Avatar
    Join Date
    Jul 2009
    Location
    Strasbourg, France
    Posts
    190
    Thanks
    8
    Thanked 64 Times in 33 Posts
    Quote Originally Posted by m^2 View Post
    I tested webpll and some other codecs on web png data.
    https://extrememoderate.wordpress.co...ion-benchmark/

    To give you a sense of what I went through, here's a JPEG2000 transparency story:
    OpenJPEG doesn't support transparency at all.
    Nconvert doesn't support lossless mode at all.
    ImageMagick corrupts data when used this way.
    Jasper accepts only bitmaps. But not transparent ones produced by ImageMagick.
    Kakadu accepts only bitmaps. But not transparent ones produced by ImageMagick.
    Jasper doesn't accept bitmaps created by Nconvert either.
    But Kakadu does!
    Nconvert corrupts data with such translation.
    Get a Mac
    Could you check the JPEG2000 file attached to this message?
    It was produced by GraphicConverter (Mac OS X has native JPEG2000 support, since big-size icons are compressed this way).
    Attached Files

  16. #46
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    My image viewer crashes on it....

    But XnView shows it just fine.

    When it comes to NConvert, it didn't corrupt all images. At least I think it didn't. So actually most of the files were transparent.

  17. #47
    Member caveman's Avatar
    Join Date
    Jul 2009
    Location
    Strasbourg, France
    Posts
    190
    Thanks
    8
    Thanked 64 Times in 33 Posts
    Does anyone know how webpll handles grayscale pictures?
    Does it have a specific mode for grayscale, or does it treat all pictures as if they were RGB?

    Fed with a grayscale PNG and the same image saved as an RGB file, it produces exactly the same webpll file (and webpll2png apparently outputs only 32-bit RGBA PNGs).

    33544 eyeRGB.png
    23829 eyeG.png

    23291 eyeRGB.webpll
    23291 eyeG.webpll

    3537 firewireRGB.png
    2311 firewireG.png

    1885 firewireRGB.webpll
    1885 firewireG.webpll
    Attached Thumbnails: eyeG.png (23.3 KB), firewireG.png (2.3 KB)
    Attached Files: G.zip (111.7 KB)
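    A quick way to check whether an RGB-encoded PNG is actually grayscale (R == G == B for every pixel), which is the property an encoder would have to detect to use a dedicated grayscale mode; this sketch uses Pillow and the hypothetical file name eyeRGB.png:

    from PIL import Image

    # Hypothetical file name; any RGB(A) PNG works here.
    img = Image.open("eyeRGB.png").convert("RGB")
    r, g, b = img.split()

    # The image is effectively grayscale if all three channels are identical.
    is_gray = list(r.getdata()) == list(g.getdata()) == list(b.getdata())
    print("effectively grayscale:", is_gray)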

  18. #48
    Member
    Join Date
    May 2008
    Location
    England
    Posts
    325
    Thanks
    18
    Thanked 6 Times in 5 Posts
    From my tests it just converts everything, including low-bit-depth images (i.e. 3/4-bit), to 24-bit colour, and as mentioned in my post it therefore loses whatever fixed palette you had for an image :/ I guess it's not ultra bad in a way, though: if you were working on an image you'd work/save in something like PSD/TIFF and then publish it as a JPEG. So you'd sort of do the same with this, work in PNG to retain your palette, then publish as a WebPLL file.

  19. #49
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    WebPLL's slowness makes it unsuitable for use in any other way anyway...

  20. #50
    Member
    Join Date
    May 2008
    Location
    England
    Posts
    325
    Thanks
    18
    Thanked 6 Times in 5 Posts
    Yeah, it's hella slow, at least at this time. Maybe they'll optimise it when it's ready for full release, but I wouldn't expect a major speed boost, maybe just more switches to cut out some steps.

  21. #51
    Member
    Join Date
    May 2008
    Location
    England
    Posts
    325
    Thanks
    18
    Thanked 6 Times in 5 Posts
    Just been trying out the new experimental release of WebP lossless and it is significantly faster; it's actually usable now. It's gone from taking several minutes on some of my files to only a few seconds, depending on the file of course. On some files, even though it takes significantly less time to encode, the size is actually smaller; on some files it's bigger, but only by a very small margin. On tiny files (i.e. less than 1 KB) the size can double, and then it's about the same as the original PNG in some cases.

    http://code.google.com/p/webp/downloads/list

    I just used a simple Directory Opus (DOpus) script to test it quickly. The -o on the filenames just indicates it was done by the previous version of the encoder.
    @externalonly 
    echo Start : %time%
    png2webpll-o {file} -o {file|noext}-o.webpll
    echo Finished Old: %time%
    png2webpll {file} -o {file|noext}.webpll
    echo Finished New: %time%
    pause

  22. #52
    Member caveman's Avatar
    Join Date
    Jul 2009
    Location
    Strasbourg, France
    Posts
    190
    Thanks
    8
    Thanked 64 Times in 33 Posts
    Quote Originally Posted by Intrinsic View Post
    On tiny files (i.e. less than 1 KB) the size can double, and then it's about the same as the original PNG in some cases.
    I had a quick look at the specs; I'm not sure the previous version had a proper RIFF image header (VP8L is the webpll chunk tag). If I understand their goal, they would like to offer a single lib that would decode both .webp and .webpll files.

  23. #53
    Member
    Join Date
    May 2008
    Location
    England
    Posts
    325
    Thanks
    18
    Thanked 6 Times in 5 Posts
    Quote Originally Posted by caveman View Post
    I had a quick look at the specs; I'm not sure the previous version had a proper RIFF image header (VP8L is the webpll chunk tag). If I understand their goal, they would like to offer a single lib that would decode both .webp and .webpll files.
    You are right, previous versions did not have a RIFF image header; it now has one: RIFF....WEBPVP8L
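    For reference, a WebP file starts with "RIFF", a 4-byte little-endian size, "WEBP", and then the first chunk tag, which is "VP8L" for the lossless coder. A small Python sketch that checks those container bytes (the file name is just a placeholder):

    import struct

    # Hypothetical file name; any file produced by the new encoder will do.
    with open("image.webpll", "rb") as f:
        header = f.read(16)

    riff, size, webp, chunk = header[0:4], header[4:8], header[8:12], header[12:16]
    assert riff == b"RIFF" and webp == b"WEBP"
    payload = struct.unpack("<I", size)[0]   # RIFF size field, little-endian
    print("first chunk:", chunk.decode("ascii"), "RIFF payload bytes:", payload)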

  24. #54
    Member SolidComp's Avatar
    Join Date
    Jun 2015
    Location
    USA
    Posts
    344
    Thanks
    129
    Thanked 52 Times in 36 Posts
    FYI, the Windows 10 October 2018 Update includes "WebP Image Extensions". I don't know if this means that Edge now supports WebP. CanIUse says that there's a flag for WebP in Edge 18 (which is part of the October Update), but there isn't as far as I can tell (in about:flags).

  25. #55
    Member
    Join Date
    Dec 2016
    Location
    Norway
    Posts
    18
    Thanks
    15
    Thanked 10 Times in 4 Posts
    Quote Originally Posted by SolidComp View Post
    FYI, the Windows 10 October 2018 Update includes "WebP Image Extensions". I don't know if this means that Edge now supports WebP. CanIUse says that there's a flag for WebP in Edge 18 (which is part of the October Update), but there isn't as far as I can tell (in about:flags).
    https://www.cnet.com/news/googles-we...icrosoft-edge/

    "WebP Image Extension" UWP app has been available for install in the Microsoft Store for quite a while: https://www.microsoft.com/store/apps/9pg2dk419drg

  26. Thanks:

    SolidComp (6th October 2018)

  27. #56
    Member SolidComp's Avatar
    Join Date
    Jun 2015
    Location
    USA
    Posts
    344
    Thanks
    129
    Thanked 52 Times in 36 Posts
    And Firefox is going to support webp in early 2019: https://www.cnet.com/news/firefox-to...-a-faster-web/

    Looks like critical mass, all except for Apple/Safari.

    Note that it's not clear that webp is better than JPEG if we assume that we have all our JPEG optimizers and encoders available. webp doesn't have the ecosystem of a dozen third-party optimizers – it just has Google's implementation. It's much slower to decode than JPEG, and probably uses more battery, but Google won't release any data.

    webp seems to do better when compared to PNG, but it's not clear that it's better if compared to the gauntlet of PNG optimizers and reducers, much less good lossy PNG like pngquant. In 2018, there's a lot you can do to optimize and compress JPEG and PNG, with dozens of tools. There are no equivalent tools for webp.

    If people want to replace JPEG, they need to discard the myopic web-centric approach, and build a new image acquisition format – the format that cameras use to encode images when photos are taken. All this webp, HEIC, PIK, and AV1 stuff does nothing for image acquisition, so the cameras are still going to be birthing images as JPEGs. Then we're reduced to reducing the size of an already lossy image format. It would be wise to step back and think deeply about the kinds of problems and challenges faced at image acquisition, bring in Nikon, Canon, Fujifilm, Panasonic, Sony, LEICA, et al to map out what we can do to improve upon JPEG for acquisition, and how that would help downstream file sizes on the web.

    HDR should be baked in, and there might be ways to optimize an image format for machine learning and object recognition. That would be really handy.

  28. #57
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    63
    Thanks
    7
    Thanked 35 Times in 26 Posts
    Some clarifications:

    Quote Originally Posted by SolidComp View Post
    Note that it's not clear that webp is better than JPEG if we assume that we have all our JPEG optimizers and encoders available. webp doesn't have the ecosystem of a dozen third-party optimizers – it just has Google's implementation. It's much slower to decode than JPEG, and probably uses more battery, but Google won't release any data.
    It's mostly a corpus problem: any serious study of this kind involves a massive corpus of third-party images (namely, images crawled from the web) that you can't publish for reproducibility. The reader's (normal) reaction would be: I can't reproduce it, so it must be wrong.

    webp seems to do better when compared to PNG, but it's not clear that it's better if compared to the gauntlet of PNG optimizers and reducers, much less good lossy PNG like pngquant. In 2018, there's a lot you can do to optimize and compress JPEG and PNG, with dozens of tools. There are no equivalent tools for webp.
    Any optimized PNG, using whatever tool, can be further re-compressed losslessly with WebP as a post-processing step. Except in very rare cases, you'll still get extra compression out of it for the same optimized output (see the sketch after this post).

    If people want to replace JPEG, they need to discard the myopic web-centric approach,
    The volume of images consumed on the web is orders of magnitude larger than the number of images captured (which is in itself already a very large number, agreed).

    and build a new image acquisition format – the format that cameras use to encode images when photos are taken. All this webp, HEIC, PIK, and AV1 stuff does nothing for image acquisition, so the cameras are still going to be birthing images as JPEGs. Then we're reduced to reducing the size of an already lossy image format. It would be wise to step back and think deeply about the kinds of problems and challenges faced at image acquisition, bring in Nikon, Canon, Fujifilm, Panasonic, Sony, LEICA, et al to map out what we can do to improve upon JPEG for acquisition, and how that would help downstream file sizes on the web.
    Acquisition and delivery to the end point are two separate processes that target different resolutions, quality ranges, etc. The user scenario where you deliver the exact bytes that were captured is extremely narrow (basically: "I want to look at the photo I just took on my phone right away!"). Most other scenarios of delivery over the web involve re-encoding to a set of quality / resolution / format targets.

    And note that AVIF (based on AV1) is there to address the 'capture' use case.
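    A minimal sketch of that post-processing step, assuming the cwebp tool from libwebp is installed and using a hypothetical optimized.png; the -lossless flag keeps the pixels bit-exact:

    import os
    import subprocess

    # Hypothetical input: a PNG that has already been through a PNG optimizer.
    src, dst = "optimized.png", "optimized.webp"

    # Re-compress losslessly with cwebp; the pixels stay identical,
    # only the container and entropy coding change.
    subprocess.run(["cwebp", "-lossless", src, "-o", dst], check=True)

    print(os.path.getsize(src), "->", os.path.getsize(dst), "bytes")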

  29. #58
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    829
    Thanks
    239
    Thanked 300 Times in 180 Posts
    Quote Originally Posted by SolidComp View Post
    Note that it's not clear that webp is better than JPEG
    WebP lossy is a lot more efficient at low qualities, but reproduces material textures less accurately than JPEG. WebP used to have some remarkable artefacts and got a bad reputation, but these are mostly fixed. Some fixes may require setting flags, but that can probably be automated later. Two problems will however remain: yuv420 kills some colors, and performance on subtle textures is poor. Even if it is slightly worse in some use cases, it allows alpha with lossy coding.

    Quote Originally Posted by SolidComp View Post
    ... JPEG optimizers and encoders available. webp doesn't have the ecosystem of a dozen third-party optimizers – it just has Google's implementation. It's much slower to decode than JPEG, and probably uses more battery, but Google won't release any data.
    WebP lossless was built by the same people that built ZopfliPNG, which is a well-accepted PNG optimizer. Many ideas for Zopfli came from my work on WebP lossless. There are no 3rd-party encoders because the stock encoder is pretty hard to beat.

    I find pngquant adequate, and the probability that something I would build would be significantly better is low. You can run pngquant as a preprocessor for WebP lossless for acquiring a palette (a sketch of that pipeline follows this post). WebP does smart things with the palette: it will even predict palette indices when it makes sense.

    WebP lossy has seen a similar amount of love from Skal. I have stayed away from the lossy coding except for inventing the yuv420 mitigation added to it.

    The camera people are really hardcore technologists. I think they will make the right decisions at the right time.
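    A sketch of that lossy-palette-then-lossless pipeline, assuming pngquant and cwebp are both installed and using a hypothetical photo.png; the exact flags may differ between tool versions:

    import subprocess

    # Hypothetical input file.
    src = "photo.png"

    # Step 1: quantize to a 256-color palette with pngquant (this step is lossy).
    subprocess.run(["pngquant", "256", "--force", "--output", "palette.png", src], check=True)

    # Step 2: encode the palettized PNG with WebP lossless; no further loss from here on.
    subprocess.run(["cwebp", "-lossless", "palette.png", "-o", "palette.webp"], check=True)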

  30. #59
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    The camera people are really hardcore technologists. I think they will make the right decisions at the right time.
    The camera market works differently. I doubt they will go for another non-proprietary lossy format anytime soon. The camera business works by selling people a "low quality JPEG image for all-day purposes" and a "high quality proprietary raw image" which you can "develop" on the PC. "Raw" is used here as a "vendor lock-in" mechanism, so camera vendors have little interest in changing this market. All they need is a format that "works well enough for them" so they do not scare customers away, and JPEG works well enough for this, as everybody knows how to handle it. High compression is not needed, and the improvements (though present) in the visually transparent compression regime are not interesting enough to justify an incompatibility.

  31. #60
    Member
    Join Date
    Jul 2016
    Location
    Russia
    Posts
    23
    Thanks
    14
    Thanked 10 Times in 7 Posts
    Sadly, Lepton (Dropbox) never made it to the masses as a new image format, although it has both good speed and excellent compression, and some backward compatibility...
    If we compare Lepton as a format with other compression formats, it would be worse than WebP, AV1, and H.265-based formats only in the very-lossy range, and in the "visually lossless" range it would be an unattainable leader.
