
Thread: WebP 2: experimental successor of the WebP image format

  1. #31
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    75
    Thanks
    10
    Thanked 25 Times in 21 Posts
    i did some automatic (and unchecked!) trials with WebP v2 lossless (377656e) on only a few samples. it makes sense to me to compare this more 'fairly' (i.e. with a closer lossless 'level') against JPEG XL, so i did an ugly hack of cjxl (739e6cd1), identified as "wjxl" (it just does 16->8 bits/sample, a=0, 'cleaned' lossy, alternative near-lossless, etc.). this test is more about how heuristics and lossless automatic transformations are done for web-context use (and how fast)
    Attached Files

  2. #32
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    103
    Thanks
    13
    Thanked 54 Times in 35 Posts
    Quote Originally Posted by cssignet View Post
    i did some automatic (and unchecked!) trials with WebP v2 lossless (377656e) on only a few samples. it makes sense to me to compare this more 'fairly' (i.e. with a closer lossless 'level') against JPEG XL, so i did an ugly hack of cjxl (739e6cd1), identified as "wjxl" (it just does 16->8 bits/sample, a=0, 'cleaned' lossy, alternative near-lossless, etc.). this test is more about how heuristics and lossless automatic transformations are done for web-context use (and how fast)
    thanks!
    Regarding 44-png.html:
    * making the tile size larger with "-tile_shape 2" is likely to help compression in cwp2. It's not 'on' by default since it has memory implications...
    * could you point me to 43-chunks.png please? It seems to make cwp2 crash...

  3. #33
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    75
    Thanks
    10
    Thanked 25 Times in 21 Posts
    Quote Originally Posted by skal
    making the tile size larger with "-tile_shape 2" is likely to help compression in cwp2. It's not 'on' by default since it has memory implications...
    i did the test again. indeed, it does affect filesize, but it seems to have a huge impact on speed (not multithreaded?)

    Quote Originally Posted by skal
    43-chunks.png
    that would be related to libpng: the IDAT chunks are not consecutive. this PNG can still be decoded, but it is not "valid" - you would find those here
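
    For reference, this is easy to check by walking the PNG chunk list: the spec requires all IDAT chunks to appear consecutively, so any other chunk type sitting between two IDATs makes the file technically invalid. A rough Python sketch (illustration only, unrelated to the libpng or lodepng internals):

    # Minimal PNG chunk walker: flags files whose IDAT chunks are not consecutive.
    # Illustration only; it trusts the chunk lengths and skips CRC validation.
    import struct, sys

    def idat_is_consecutive(path):
        with open(path, "rb") as f:
            if f.read(8) != b"\x89PNG\r\n\x1a\n":
                raise ValueError("not a PNG file")
            types = []
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                length, ctype = struct.unpack(">I4s", header)
                types.append(ctype)
                f.seek(length + 4, 1)  # skip chunk payload + CRC
                if ctype == b"IEND":
                    break
        idat = [i for i, t in enumerate(types) if t == b"IDAT"]
        # a valid PNG has its IDAT indices in one contiguous run
        return bool(idat) and idat == list(range(idat[0], idat[-1] + 1))

    if __name__ == "__main__":
        print(idat_is_consecutive(sys.argv[1]))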
    Attached Files

  4. #34
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    103
    Thanks
    13
    Thanked 54 Times in 35 Posts
    Quote Originally Posted by cssignet View Post
    i did the test again. indeed, it does affect filesize, but it seems to have a huge impact on speed (not multithreaded?)
    Actually, multithreading will have a harder time now, since tiles are larger (tile_shape 2 makes tiles 512x512 instead of the default 256x256).
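
    Back-of-the-envelope, assuming a 4-bytes-per-pixel working buffer (just an illustration, not the actual libwebp2 memory layout): a 512x512 tile needs 1 MiB versus 256 KiB for a 256x256 one, and a given image splits into 4x fewer tiles, so there is less parallel work to hand out.

    # Rough tile arithmetic, assuming a 4-bytes-per-pixel working buffer
    # (illustration only; not the real libwebp2 internals).
    from math import ceil

    def tile_stats(width, height, tile, bytes_per_pixel=4):
        tiles = ceil(width / tile) * ceil(height / tile)
        per_tile = tile * tile * bytes_per_pixel
        return tiles, per_tile

    for tile in (256, 512):
        n, mem = tile_stats(4000, 3000, tile)
        print(f"{tile}x{tile}: {n} tiles, {mem // 1024} KiB per tile buffer")
    # 256x256: 192 tiles, 256 KiB per tile buffer
    # 512x512: 48 tiles, 1024 KiB per tile buffer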

    Quote Originally Posted by cssignet View Post
    that would be related to libpng: the IDAT chunks are not consecutive. this PNG can still be decoded, but it is not "valid" - you would find those here
    Thanks! Will try to adapt the png decoder accordingly so that it can recover and proceed in this case...

  5. #35
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    103
    Thanks
    13
    Thanked 54 Times in 35 Posts
    Quote Originally Posted by skal View Post
    Will try to adapt the png decoder accordingly so that it can recover and proceed in this case...
    turns out, the error is triggered by libpng (but not lodepng, which seems to recover from this invalid file somehow). Not much i can do here...

  6. #36
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    976
    Thanks
    266
    Thanked 350 Times in 221 Posts
    Quote Originally Posted by skal View Post
    turns out, the error is triggered by libpng (but not lodepng, which seems to recover from this invalid file somehow). Not much i can do here...
    File an issue at https://github.com/lvandeve/lodepng

  7. #37
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    75
    Thanks
    10
    Thanked 25 Times in 21 Posts
    that would not be a big deal: this file was artificially modified (by me) to be invalid

  8. #38
    Member SolidComp's Avatar
    Join Date
    Jun 2015
    Location
    USA
    Posts
    369
    Thanks
    133
    Thanked 57 Times in 40 Posts
    Quote Originally Posted by skal View Post

    • small container overhead, tailored specifically for image compression



    Can you say more about the container? Is it a new format? I'm not very familiar with container formats, although Matroska seems popular.

    What about metadata? Which formats does webp support? It would be nice to have a clean and compact binary metadata format instead of the bloated XML-based approaches like XMP.

  9. #39
    Member
    Join Date
    Nov 2012
    Location
    Johnstonebridge, Scotland, UK
    Posts
    19
    Thanks
    0
    Thanked 3 Times in 3 Posts
    Any chance you can make a static build of this, so the DLLs are compiled into the executable?

  10. #40
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    103
    Thanks
    13
    Thanked 54 Times in 35 Posts
    Quote Originally Posted by SolidComp View Post
    Can you say more about the container? Is it a new format? I'm not very familiar with container formats, although Matroska seems popular.

    What about metadata? Which formats does webp support? It would be nice to have a clean and compact binary metadata format instead of the bloated XML-based approaches like XMP.
    Totally agree on the XML problem (also: sometimes, if you want to split the XML blob, you have to parse the tree in order to split it at a node boundary!! An XML parser in a codec! argh)

    Anyways: the wp2 container as it is now (warning! It can change further!) is described briefly on this doc from the git repo:
    https://chromium.googlesource.com/codecs/libwebp2/+/refs/heads/master/doc/format

    It's actually one incarnation of a more general 'minimalist image container' discussion going on here:
    https://chromium.googlesource.com/codecs/libwebp2/+/refs/heads/master/doc/container/

    This page tries to define what a useful minimal container would look like for images, based on experience from WebP.
    There's some demo code for simple I/O in the tree, which will likely grow in the future...
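
    For readers who just want the flavor without reading the doc: the idea is a short header followed by a few tagged chunks, rather than a general-purpose container like ISOBMFF or Matroska. Below is a toy tag/length/payload reader in that spirit (hypothetical layout and field sizes, NOT the actual wp2 bitstream; see the linked doc for the real thing):

    # Toy tag/length/payload reader in the spirit of a "minimalist image container".
    # Hypothetical layout: 4-byte ASCII tag + 4-byte little-endian length + payload.
    # This is NOT the wp2 format; it only illustrates the concept discussed above.
    import struct

    def read_chunks(blob):
        chunks, offset = [], 0
        while offset + 8 <= len(blob):
            tag = blob[offset:offset + 4]
            (length,) = struct.unpack_from("<I", blob, offset + 4)
            chunks.append((tag, blob[offset + 8:offset + 8 + length]))
            offset += 8 + length
        return chunks

    # example: an image payload followed by a trailing metadata chunk
    blob = (b"IMG0" + struct.pack("<I", 3) + b"\x01\x02\x03"
            + b"META" + struct.pack("<I", 2) + b"ok")
    for tag, payload in read_chunks(blob):
        print(tag, len(payload), "bytes")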

  11. #41
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    88
    Thanks
    15
    Thanked 64 Times in 34 Posts
    Some remarks:

    Limiting width and height to 14 bits seems an unnecessary restriction. I have seen e.g. 800x20000 images where we need to fall back to JPEG because WebP cannot do that.

    For orientation, you may want to use 3 bits so you can do all the orientations (see the small sketch at the end of these remarks). In particular, horizontally flipped images are not so uncommon, since many selfie cameras produce mirrored images.

    Avoiding full icc profiles for the common cases (sRGB, Display P3, Adobe RGB 1998, ProPhoto, Rec2100 PQ and HLG) would be nice. The transfer curve alone is not enough for that; you also need custom primaries. Some enum approach is probably good enough, if you want to keep it simple.

    Metadata forced at the end can be a problem. For example, Exif has recently become usable to change the apparent intrinsic dimensions of an image (so you can encode a 500x500 image but tell the browser/application to act as if it were a 601x600 image, for example). This is useful only if you can put Exif first; otherwise the preview will be displayed incorrectly.
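
    A small sketch for the orientation remark above: the eight EXIF orientation cases are the combinations of an optional mirror with a 0/90/180/270-degree rotation, so a 3-bit field covers them all, whereas 2 bits only cover the four pure rotations (illustration only, not tied to any particular wp2 header layout):

    # 8 orientation cases = {no mirror, mirror} x {0, 90, 180, 270} degrees,
    # so 3 bits are enough to index all of them (2 bits cover only the rotations).
    from itertools import product

    orientations = list(product(("none", "mirror"), (0, 90, 180, 270)))
    print(len(orientations))                      # 8
    print((len(orientations) - 1).bit_length())   # 3 bits needed to index 0..7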

  12. #42
    Member
    Join Date
    Jul 2018
    Location
    Russia
    Posts
    41
    Thanks
    0
    Thanked 4 Times in 3 Posts
    Quote Originally Posted by Jon Sneyers View Post
    WebP cannot do that
    Examples:
    https://photojournal.jpl.nasa.gov/catalog/PIA23623
    https://www.asteroidmission.org/osprey-recon-c-mosaic/
    Density really matters for such large images.
    Dvizh must go Dvizh, no open source - no Dvizh.

  13. #43
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    103
    Thanks
    13
    Thanked 54 Times in 35 Posts
    Quote Originally Posted by e8c View Post

    IMHO, these are exactly the images i don't want to receive on my phone or browser. So, WebP is working as intended here.

    Most (if not all) of the imaging software i have on my desktop/laptop/phone will crash miserably handling a 2.43GB image (in whichever format).

    Do you know that just creating a 65535x65535 jpeg is surprisingly difficult?

    Take Imagemagick for instance:

    time convert 1x1.webp -scale 65535x65535 big.jpg
    convert: Maximum supported image dimension is 65500 pixels `big.jpg' @ error/jpeg.c/JPEGErrorHandler/322.
    real 2m2.566s
    user 1m48.126s
    sys 0m12.648s

    1m48s of wheel-spinning just for a crash is something i'm glad WebP is preventing me from experiencing.

    time convert 1x1.webp -scale 65000x65000 big.jpg
    real 4m59.533s
    user 3m56.006s
    sys 0m49.760s

    3m56s of malloc() traffic!

    The 16383 limit was chosen to a) fit in a 32-bit architecture's memory, and b) not be totally impossible to support for a VP8 video decoder (although i'm still curious about any hw video codec able to support 16k resolution)
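
    Rough numbers for (a), assuming a plain 4-bytes-per-pixel RGBA decode buffer (illustration only): a maximum-size 16383x16383 frame is about 1 GiB of raw pixels, which is still addressable on a 32-bit system, while 65535x65535 would need roughly 16 GiB for the pixels alone.

    # Raw decode-buffer sizes for the dimension limits discussed above,
    # assuming a plain 4-bytes-per-pixel RGBA buffer (illustration only).
    def rgba_bytes(w, h, bytes_per_pixel=4):
        return w * h * bytes_per_pixel

    for side in (16383, 65535):
        print(f"{side}x{side}: ~{rgba_bytes(side, side) / 2**30:.1f} GiB of raw RGBA")
    # 16383x16383: ~1.0 GiB  (still fits a 32-bit address space)
    # 65535x65535: ~16.0 GiB (does not)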

    So, i'm happy with WebP preventing users from a poor experience by limiting the dimension to something suitable for 99% of use-cases, leaving the remaining cases to app-level handling.
    It's no surprise editing software (GIMP, Photoshop, ...) uses tiling to handle modestly large images. You just don't need large images outside of the app that can handle them.

    Large images (even the 800x20000 large-scrolling one) require dedicated software to handle them most of the time. I have yet to meet a happy user of a 65k+ image, especially on a phone, but not only there.


    (all IMHO)


  14. #44
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    88
    Thanks
    15
    Thanked 64 Times in 34 Posts
    A 100x20000 image is not that large, really. Why not limit area instead of individual dimensions?

  15. #45
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    103
    Thanks
    13
    Thanked 54 Times in 35 Posts
    Quote Originally Posted by Jon Sneyers View Post
    A 100x20000 image is not that large really
    this is still well above what you can consume in one gulp, i'd say. Something, or someone, will have to interact with this image to show it wholly. You might as well split it into two pieces and send them separately (that's even better for latency, probably).

    Quote Originally Posted by Jon Sneyers View Post
    Why not limit area instead of individual dimensions?
    I don't think you (the viewer) can reasonably absorb more than ~6k pixels at once in either dimension.
    Additionally, large strides can wreck software in numerous overflowing ways...

    (imho)

  16. #46
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    976
    Thanks
    266
    Thanked 350 Times in 221 Posts
    Quote Originally Posted by Jon Sneyers View Post
    Avoiding full icc profiles for the common cases (sRGB, Display P3, Adobe RGB 1998, ProPhoto, Rec2100 PQ and HLG) would be nice.
    JPEG XL's XYB might be a good addition, too. This colorspace works very well in JPEG XL as it allows storing much less spatial detail in the B channel than other approaches.

  17. #47
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    103
    Thanks
    13
    Thanked 54 Times in 35 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    JPEG XL's XYB might be a good addition, too. This colorspace works very well in JPEG XL as it allows storing much less spatial detail in the B channel than other approaches.
    [citation needed]

  18. #48
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    976
    Thanks
    266
    Thanked 350 Times in 221 Posts
    Quote Originally Posted by skal View Post
    [citation needed]
    It is nothing new in science. It is mostly a mapping of the 1967 Nobel researchers' (Ragnar Granit, Haldan Keffer Hartline, and George Wald) findings into computer vision.

    It may be new to apply it in compression. I suspect XYZ and Lab may have some similar characteristics, but are much slower to decode.

  19. #49
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    103
    Thanks
    13
    Thanked 54 Times in 35 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    It is nothing new in science. It is mostly a mapping of the 1967 Nobel researchers' (Ragnar Granit, Haldan Keffer Hartline, and George Wald) findings into computer vision.
    That's proof by authority, not an actual recent reproducible test.

  20. #50
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    88
    Thanks
    15
    Thanked 64 Times in 34 Posts
    Quote Originally Posted by skal View Post
    this is still well above what you can consume in one gulp, i'd say. Something, or someone, will have to interact with this image to show it wholly. You might as well split it into two pieces and send them separately (that's even better for latency, probably).

    I don't think you (the viewer) can reasonably absorb more than ~6k pixels at once in either dimension.
    Additionally, large strides can wreck software in numerous overflowing ways...

    (imho)
    This is why scrolling exists.

    But even without scrolling: are you saying that 8K is already too much, because 6k pixels should be enough?

    I think if you want to make something future-proof, avoiding arbitrary limits at the format level is not a good approach. Better to implement the arbitrary limits at the application level. Who knows what kind of images we will have in 10 or 20 years? Perhaps 360° images will become much more of a thing, and WebP 2 would have been a good codec to encode them with, except it cannot represent enough pixels to compensate for the projection. This is just an example. The point is: if you design everything for the state of the web in 2020, you may end up creating something that is obsolete before it gets adopted.

  21. #51
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    103
    Thanks
    13
    Thanked 54 Times in 35 Posts
    Quote Originally Posted by Jon Sneyers View Post
    are you saying that 8K is already too much, because 6k pixels should be enough?
    that's my opinion, yes. Any extra need above that is probably peculiar and should be handled at app-level (*)

    Quote Originally Posted by Jon Sneyers View Post
    I think if you want to make something future-proof, avoiding arbitrary limits at the format level is not a good approach.
    Yeah! Consumer electronics should be designed to handle a 10000V / 500A power supply, because who knows what the future will bring!

    Sorry, i don't buy this argument ("YAGNI" comes to mind) but that's just me. I'd rather try to tackle known incoming pain-points than foresee future ones.



    (* i could borrow arguments from this video)
    (** i've worked on Streetview, and there's a reason we send smallish tiles instead of large 64k JPEGs)

  22. #52
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    976
    Thanks
    266
    Thanked 350 Times in 221 Posts
    Quote Originally Posted by skal View Post
    That's proof by authority, not an actual recent reproducible test.
    I consider that there are two kinds of colorspaces: ones that are based on the progression of color reproduction in CRTs, and others that are physiologically inspired.

    As you well know, one needs a nonlinearity to go from a physically linear space to a linear experience. Usually this is done by gamma correction. In the early CRT work, this non-linearity was easiest to place just before driving the intensity of the individual RGB components.

    In the eye, this non-linearity happens in the receptor cells as part of the dynamics of opsin-chemicals releasing electricity and then getting re-excited. The opsin chemicals however do not respond to red, green and blue. Their effective reception spectra were measured in the work leading to the 1967 Nobel prize "for the researchers' discoveries concerning the primary physiological and chemical visual processes in the eye."

    The colorspaces whose developers chose to ignore the 1967 science are often based on earlier pre-1930s research (leading to CIE-1931), often with further simplifications.

    Cathode-ray output colorspaces based on 1920s science, without even going all the way to the state of the art of the late 1920s:

    1. gamma-compressed RGB
    2. YUV-spaces based on gamma-compressed RGB (like in JPEG)


    Post-1967 colorspaces:

    1. L*a*b
    2. XYB
    3. ICTCP


    XYB is different from L*a*b in that L*a*b is based on 2-degree color samples, which in my experiments mix the blue signal more strongly into luma than the ~0.05-degree samples that XYB is based on.

    I don't know about the philosophy behind ICTCP, but looking at their math, they have the compression non-linearities in the right places, like L*a*b and XYB.
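
    Put differently, the question is where the nonlinearity sits: the gamma/CRT lineage compresses each R, G, B channel independently, while the physiologically inspired spaces first mix linear light into cone-like (LMS-ish) responses, apply the nonlinearity there, and then take opponent differences. A rough sketch of the two pipelines follows; the mixing matrix and the cube-root are placeholders, not the actual XYB constants from libjxl:

    # Where the nonlinearity goes: per-channel gamma on RGB (CRT lineage) vs.
    # nonlinearity on cone-like responses before opponent differences
    # (L*a*b / XYB / ICTCP lineage). Matrix and exponent are placeholders,
    # NOT the real XYB constants.
    import numpy as np

    def gamma_rgb(rgb_linear, gamma=2.2):
        # CRT-style: compress each channel independently
        return np.power(rgb_linear, 1.0 / gamma)

    def opponent_like(rgb_linear):
        mix = np.array([[0.3, 0.6, 0.1],   # placeholder linear-light -> LMS-ish matrix
                        [0.2, 0.7, 0.1],
                        [0.1, 0.2, 0.7]])
        lms = rgb_linear @ mix.T
        lms_nl = np.cbrt(lms)                # nonlinearity on receptor-like responses
        x = lms_nl[..., 0] - lms_nl[..., 1]  # red-green opponent channel
        y = lms_nl[..., 0] + lms_nl[..., 1]  # luma-like channel
        b = lms_nl[..., 2]                   # blue-ish channel, can be coded coarsely
        return np.stack([x, y, b], axis=-1)

    px = np.array([[0.2, 0.5, 0.8]])  # one linear-light RGB pixel
    print(gamma_rgb(px))
    print(opponent_like(px))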

  23. #53
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    75
    Thanks
    10
    Thanked 25 Times in 21 Posts
    automatic/unchecked trials with near-lossless and lossy in cwp2 4223db6
    Attached Files

  24. #54
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    103
    Thanks
    13
    Thanked 54 Times in 35 Posts
    Quote Originally Posted by cssignet View Post
    automatic/unchecked trials with near-lossless and lossy in cwp2 4223db6
    near-lossless quality 96 is probably a bit extreme in size reduction...
    lossy q=95 is sometimes above or below q=96, which can be surprising.

  25. #55
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    75
    Thanks
    10
    Thanked 25 Times in 21 Posts
    are these the expected results? i guess i did not try as much as you did, but IMHO even -q 98 would not be a good candidate for some samples, where the highest lossy (or like 94, 93) would offer better quality and much smaller files
    Attached thumbnails: 100.png (210.5 KB), 99.png (198.6 KB), 98.png (170.4 KB), 97.png (151.7 KB), 96.png (134.0 KB), 95.png (198.7 KB)

  26. #56
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    103
    Thanks
    13
    Thanked 54 Times in 35 Posts
    Quote Originally Posted by cssignet View Post
    are these the expected results? i guess i did not try as much as you did, but IMHO even -q 98 would not be a good candidate for some samples, where the highest lossy (or like 94, 93) would offer better quality and much smaller files
    Totally agreed, q=96 near-lossless is worse than q=95 lossy. Near-lossless quality is quite hard to keep in check as it is right now.

  27. #57
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    976
    Thanks
    266
    Thanked 350 Times in 221 Posts
    Quote Originally Posted by cssignet View Post
    are these the expected results? i guess i did not try as much as you did, but IMHO even -q 98 would not be a good candidate for some samples, where the highest lossy (or like 94, 93) would offer better quality and much smaller files
    Could you please compare the WebP 2.0 near-lossless with the WebP 1.0 near-lossless? I spent a lot of effort getting it right for WebP 1.0, but didn't follow how it was reflected in WebP 2.0. Near-lossless with prediction filters is difficult to get right, as one needs to quantize residuals in a way that reduces entropy while maintaining the precision and continuity (anti-banding) constraints in the actual image.
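
    To make the residual-quantization point concrete, here is a generic flavor of the idea (NOT the actual libwebp/libwebp2 heuristic): predict each pixel from its neighbour, then snap the residual to a coarser grid so the reconstruction error stays within a small bound while the residual alphabet shrinks.

    # Generic near-lossless residual quantization, for illustration only
    # (not the actual libwebp/libwebp2 heuristic): left-neighbour prediction,
    # residuals snapped to a coarse grid so the reconstruction error <= max_err.
    import numpy as np

    def near_lossless_row(row, max_err=2):
        step = 2 * max_err + 1        # grid spacing that bounds the error by max_err
        out = np.empty_like(row)
        prev = 0
        for i, v in enumerate(row.astype(int)):
            residual = v - prev
            q = int(round(residual / step)) * step  # fewer distinct residuals -> lower entropy
            prev = int(np.clip(prev + q, 0, 255))
            out[i] = prev
        return out

    row = np.array([10, 11, 13, 12, 200, 201, 199, 198], dtype=np.uint8)
    rec = near_lossless_row(row)
    print(rec, np.abs(rec.astype(int) - row.astype(int)).max())  # max error <= 2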

    In most uses, in both JPEG XL and WebP v2, near-lossless is going to be a worse option than going full lossy. Full lossy is kinder to pixels (and more competitive in the visually lossless category) nowadays, when we can choose smaller integral transforms for the areas where they are needed. Near-lossless may still be useful for pixel-art-style images in the modern codecs.

    In JPEG XL we haven't yet invested in this, but the WebP v1.0 near-lossless heuristics were developed by JPEG XL team members. My next attempt at near-lossless in JPEG XL will likely go through custom palettes (combined with the hybrid delta palette). This is already in the format; it only needs an encoder that prepares palettes specifically for this use case.

  28. #58
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    103
    Thanks
    13
    Thanked 54 Times in 35 Posts
    Update: latest ToT commit 6b0aaad of libwebp2 should behave better on the typical photo-like sources that were used in this comparison...

    (ex. at 'small': cwp2 -q 50 ... gives the attached pecher_108329_bytes.jpg - 108329 bytes)

  29. #59
    Member
    Join Date
    Aug 2020
    Location
    Italy
    Posts
    74
    Thanks
    19
    Thanked 2 Times in 2 Posts
    at which speed was this image encoded?
    Last edited by fabiorug; 21st December 2020 at 21:45.

  30. #60
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    103
    Thanks
    13
    Thanked 54 Times in 35 Posts
    Quote Originally Posted by fabiorug View Post
    at which speed was this image encoded?
    The default one (-effort 5).


