
Thread: WebP 2: experimental successor of the WebP image format

#1 - skal (Member, France)


    https://chromium.googlesource.com/codecs/libwebp2/


    What to expect?
The WebP 2 experimental codec mostly pushes the features of WebP further in terms of compression efficiency. New features (like 10-bit HDR support) are kept minimal. The axes of experimentation are:

    • more efficient lossy compression (~30% better than WebP, as close to AVIF as possible)
    • better visual degradation at very low bitrate
    • improved lossless compression
    • improved transparency compression
    • animation support
    • ultra-light previews
    • lightweight incremental decoding
    • small container overhead, tailored specifically for image compression
    • full 10bit architecture (HDR10)
    • strong focus on software implementation, fully multi-threaded

    The use cases remain mostly the same as for WebP: transfer over the wire, faster web, smaller apps, better user experience... WebP 2 is primarily tuned for the typical content available on the Web and in mobile apps: medium-range dimensions, transparency, short animations, thumbnails.


    WebP2 is currently only partially optimized and is, roughly speaking, 5x slower than WebP for lossy compression. It still compresses 2x faster than AVIF, but takes 3x more time to decompress. The goal is to reach decompression speed parity.
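
    For reference, a basic encode/decode round trip with the cwp2/dwp2 tools looks like this (a sketch assembled from the invocations used later in this thread; the quality value is arbitrary, and defaults may change while the format is experimental):

        cwp2 input.png -q 75 -o output.wp2               (lossy encode)
        cwp2 input.png -q 100 -lossless -o output.wp2    (lossless encode)
        dwp2 -png output.wp2 -o decoded.png              (decode back to PNG)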





#2 - skal (Member, France)
    Quote Originally Posted by Scope
    And some questions from me: how do I encode and view the triangular previews?
    There's a tool, mk_preview, in the extras/ directory, to generate and manipulate previews as raw bits.
    For instance: ./extras/mk_preview input.png -o preview.bin -d preview.png -m voronoi -g 48 -n 400 -s 250 -i 2000

    You can insert this preview data 'preview.bin' into a final .wp2 compressed bitstream using 'cwp2 -preview preview.bin ...'.
    You can also generate a preview directly from cwp2 using 'cwp2 -create_preview ...', but there aren't many options available for tweaking.

    The triangle preview can be eyeballed in the 'vwp2' tool by pressing the '\' key if you are visualizing a .wp2 bitstream with a preview available.
    Otherwise, just use mk_preview's -d option to dump the pre-rendered preview. I'm working on a JavaScript renderer in HTML, btw; that'll be easier.
    Triangle previews are a very experimental field, and this is obvious when looking at the number of options in the 'mk_preview' experimentation tool!
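
    For intuition about what such a preview stores: the '-m voronoi' mode above presumably boils down to a small set of colored seed points, with each pixel painted in the color of its nearest seed. A toy renderer (my own sketch with made-up names, not the actual wp2 code):

        import numpy as np

        def render_voronoi_preview(seeds, colors, width, height):
            # Each pixel takes the color of its nearest seed point.
            # seeds: (N, 2) array of (x, y); colors: (N, 3) array of RGB.
            xs, ys = np.meshgrid(np.arange(width), np.arange(height))
            d2 = (xs[..., None] - seeds[:, 0]) ** 2 + (ys[..., None] - seeds[:, 1]) ** 2
            nearest = d2.argmin(axis=-1)   # index of the closest seed per pixel
            return colors[nearest]         # (height, width, 3) image

        # toy example: 50 random seeds on a 64x96 canvas
        rng = np.random.default_rng(0)
        seeds = np.column_stack([rng.integers(0, 64, 50), rng.integers(0, 96, 50)])
        img = render_voronoi_preview(seeds, rng.integers(0, 256, (50, 3)), 64, 96)

    The real format would presumably entropy-code the point positions and colors on top of this; the triangle variant triangulates the points instead of using raw cells.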

    Quote Originally Posted by Scope
    As far as I understood, the near-lossless mode will not be changed as much as in JPEG XL, but what improvements are planned, and what are the differences from WebP v1?
    The use-case of near-lossless is still unclear within WebP2's perimeter: who is using near-lossless, and to what effect? I don't have the answer to this question. Suggestions welcome.

    I can totally picture someone using 100% lossless to preserve a pristine copy of an image (and note that in this case, said user might as well go for a RAW format; file size is probably less of an issue than CPU or ease of editing for them...).

    But I think there's a mental barrier to using near-lossless for these users: if you don't stick to 100% lossless, you might as well just go lossy (even at -q 95), because it's not going to be pristine anyway.
    Stopping at the near-lossless intermediate stage doesn't bring much to the table in terms of file-size reduction, compared to just going high-quality lossy.


#3 - Jyrki Alakuijala (Member, Switzerland)
    Quote Originally Posted by skal
    The use-case of near-lossless is still unclear within WebP2's perimeter: who is using near-lossless, and to what effect? I don't have the answer to this question. Suggestions welcome.
    I wrote the first version of WebP near-lossless in December 2011. The code was not considered an important feature and it was not deployed in the initial launch. The first version only worked for images without PREDICTOR_TRANSFORM, COLOR_TRANSFORM, and COLOR_INDEXING_TRANSFORM. This initial version was enabled much later, possibly around 2014, without an announcement, and would only work with images that were not a great fit for the PREDICTOR_TRANSFORM. Later (around 2016?) Marcin Kowalczyk (Riegeli author) ported the original code to work with the PREDICTOR_TRANSFORM. No announcement was made of that relatively significant improvement.

    When I originally designed it, I considered three variations of near lossless:
    1. replacing the last bit with the first bit. (--near_lossless=80)
    2. replacing two last bits with the two first bits. (--near_lossless=60)
    3. replacing three last bits with the three first bits. (--near_lossless=40)

    I didn't want to just remove values locally, because having some specific values get a higher population count would naturally reduce their entropy.

    I would only do these when the monotonicity of the image around the pixel would not be affected, i.e., the new pixels would not be a new minimum or maximum.
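
    A toy sketch of that scheme as I read it (my own illustration, not the actual libwebp code): replacing the k low bits of an 8-bit value with its k top bits keeps 0x00 and 0xFF exact, and concentrates the population on fewer distinct values, which is exactly what helps the entropy coder.

        def replicate_low_bits(v, k):
            # Replace the k lowest bits of an 8-bit value with its k highest
            # bits, e.g. k=2 maps 0b10110101 -> 0b10110110; 0x00/0xFF stay exact.
            mask = (1 << k) - 1
            return (v & 0xFF & ~mask) | (v >> (8 - k))

        def near_lossless_pixel(img, x, y, k):
            # Apply the replacement only if it doesn't create a new local
            # minimum or maximum (toy monotonicity guard, interior pixels only).
            v = img[y][x]
            nv = replicate_low_bits(v, k)
            neighbours = [img[y][x-1], img[y][x+1], img[y-1][x], img[y+1][x]]
            if min(neighbours + [v]) <= nv <= max(neighbours + [v]):
                return nv
            return v

        # --near_lossless=80/60/40 correspond to k=1/2/3 in this toy model.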

    At the launch of the feature, two more modes were added (--near_lossless=20 and --near_lossless=0) for one and two more bits of near-lossless loss. I think these are mostly dangerous: they can be confusing and are rarely, if ever, useful.

    I was never able to see a difference between --near_lossless=80 and true lossless, at least in my original design. It was possible to see (with severe pixel ogling) some differences at --near_lossless=60, but it would still be far superior to what was available in lossy formats.

    Quote Originally Posted by skal
    I can totally picture someone using 100% lossless to preserve a pristine copy of an image (and note that in this case, said user might as well go for a RAW format; file size is probably less of an issue than CPU or ease of editing for them...).

    But I think there's a mental barrier to using near-lossless for these users: if you don't stick to 100% lossless, you might as well just go lossy (even at -q 95), because it's not going to be pristine anyway.
    Stopping at the near-lossless intermediate stage doesn't bring much to the table in terms of file-size reduction, compared to just going high-quality lossy.
    When you look at https://developers.google.com/speed/...ss_alpha_study and Figure 3, you can wonder what that chart would look like if pure lossy were compared against the PNGs. It turns out that one in a thousand pure lossy images requires 20x more storage than the same image compressed losslessly, and that for about 40-50% of the web's PNG images, lossless is smaller than high-quality lossy.

    Near-lossless can make that one-in-a-thousand image smaller still; plain lossy cannot.

    Placing near-lossless methods in the 96-99 quality range may lead to a situation where quality 95 produces a 20x larger file than quality 96. This may be surprising for a user.

    The actual performance differences of course depend on the implementation of near-lossless -- this assumes near-lossless means LZ77 but no integral transforms.
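
    As a side note, reproducing that kind of study on one's own corpus is straightforward with the WebP v1 tools; a rough sketch (real cwebp flags, but the q=95 threshold for "high-quality lossy" is my arbitrary choice):

        import glob, os, subprocess

        def webp_size(extra_args, png):
            # Encode 'png' with cwebp and return the output file size in bytes.
            subprocess.run(["cwebp", *extra_args, png, "-o", "/tmp/out.webp"],
                           check=True, capture_output=True)
            return os.path.getsize("/tmp/out.webp")

        pngs = glob.glob("corpus/*.png")
        wins = sum(webp_size(["-lossless"], p) < webp_size(["-q", "95"], p)
                   for p in pngs)
        print(f"lossless beats q95 lossy on {wins}/{len(pngs)} images")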


#4 - skal (Member, France)
    Quote Originally Posted by Jyrki Alakuijala
    When you look at https://developers.google.com/speed/...ss_alpha_study and Figure 3, you can wonder what that chart would look like if pure lossy were compared against the PNGs.
    Cases where lossy is larger than PNG are typically graphic-like content: charts, drawings, etc.

    Usually, these don't benefit from near-lossless (nor from lossy, which smears colors because of YUV420), and they are already quite small to start with.

    Actually, near-lossless can even increase the size compared to lossless in such cases.

    Random example from the web: http://doc.tiki.org/img/wiki_up/chart_type.png

    original size: 17809
    webp lossless: 7906
    near-lossless 90: 8990
    lossy -q 96: 30k (+smear!)

    and same with jpeg-xl:

    ./tools/cjxl -s 8 chart_type.png -q 100
    Compressed to 11570 bytes (0.629 bpp).

    ./tools/cjxl -s 8 chart_type.png -q 100 -N 5
    Compressed to 15724 bytes (0.855 bpp).

    and wp2:

    cwp2 chart_type.png -tile_shape 2 -q 100
    output size: 7635 (0.42 bpp) [not saved]

    cwp2 chart_type.png -tile_shape 2 -q 99

    output size: 8737 (0.47 bpp) [not saved]




#5 - Scope (Member, Moon)
    Quote Originally Posted by skal
    Stopping at the near-lossless intermediate stage doesn't bring much to the table in terms of file-size reduction, compared to just going high-quality lossy.
    Sometimes I've seen images where WebP near-lossless encoded to a much smaller size than lossless, and without the visual differences of lossy.

    Although, as another opinion: with JPEG XL I mostly had better results with the lossy mode (VarDCT at values around -d 0.3 ... 0.5) than with the various near-lossless (Modular -N ...), lossy-palette (--lossy-palette --palette=0) and lossy modular (-q 95-99) modes. Perhaps WebP v2, with its more efficient lossy mode, will be in the same situation; then again, these modes in JPEG XL are not yet fully developed.

#6 - Shelwien (Administrator, Kharkov, Ukraine)
    Windows binaries.
    [Attached files]


#7 - Shelwien (Administrator, Kharkov, Ukraine)
    As to the definition of "near-lossless", I think there's another option worth considering.

    There's a difference between the displayed image and how it is stored: palette, colorspace, "depth" bits, alpha, etc.
    So I think we can define near-lossless as producing an identical displayed image, without the output necessarily being identical to the input (for example with 16-bit input and an 8-bit display).
    In this case it should provide strictly better compression than true lossless, since we can save the bits spent encoding that lost information.
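
    A toy illustration of that definition (my sketch; it assumes a plain rounding display mapping, real display pipelines differ): any 16-bit sample can be replaced by the cheapest 16-bit value that displays identically on an 8-bit screen, e.g. the bit-replicated one, whose low byte then carries zero information for the encoder.

        def display_8bit(v16):
            # How a 16-bit sample shows up on an 8-bit display: round(v16*255/65535).
            return (v16 * 255 + 32767) // 65535

        def canonical_16bit(v16):
            # Cheapest displayed-identical representative: 8->16 bit replication
            # (v8 * 257, i.e. 0xAB -> 0xABAB). Its low byte always equals its
            # high byte, so a lossless coder effectively stores 8 bits per sample.
            return display_8bit(v16) * 257

        # The displayed image is bit-exact for every possible input:
        assert all(display_8bit(canonical_16bit(v)) == display_8bit(v)
                   for v in range(65536))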

    Btw, here's what webp2 currently does for 16-bit images in lossless mode.
    I guess it's "near-lossless" :)
    [Attachment: webp2_16bit.jpg]

#8 - skal (Member, France)
    Quote Originally Posted by Shelwien
    In this case it should provide strictly better compression than true lossless, since we can save the bits spent encoding that lost information.
    That seems to fall into the "Responsive Web" category, where the image served is adapted to the requester's display (resolution, dpi, bit depth, etc.).

    Quote Originally Posted by Shelwien
    Btw, here's what webp2 currently does for 16-bit images in lossless mode.
    I guess it's "near-lossless"
    Not sure I understand your example (the attachment is downloaded as a jpg): is the input 16-bit? Did you expect a 16-bit output? What command did you use to generate the attachment?

    (webp2 only handles 10-bit samples at most, btw)


#9 - Jyrki Alakuijala (Member, Switzerland)
    Quote Originally Posted by skal
    Actually, near-lossless can even increase the size compared to lossless in such cases.
    Correct. When I spoke about near-lossless actually reducing the image size, I was referring to the experience of doing it for 1000 PNG images from the web. There, it is easy to get a 20-40% further size reduction.

    Images with a small color count are of course best compressed using a palette. A palette captures the color-component correlations wonderfully well.

#10 - Shelwien (Administrator, Kharkov, Ukraine)
    > attachment is downloaded as a jpg

    It's a screenshot from a file-comparison utility. Blue numbers are different.

    > is the input 16bit?

    Yes, 16-bit grayscale.

    > Did you expect a 16bit output?

    Yes, FLIF and jpegXL handle it correctly.

    > What command did you use to generate the attachment?

    convert.exe -verbose -depth 16 -size 4026x4164 gray:1.bin png:1.png   (raw 16-bit grayscale -> PNG)
    cwp2.exe -mt 8 -progress -q 100 -z 6 -lossless 1.png -o 1.wp2        (lossless wp2 encode)
    dwp2.exe -mt 8 -progress -png 1.wp2 -o 1u.png                        (decode back to PNG)
    convert.exe -verbose -depth 16 png:1u.png gray:1u.bin                (PNG -> raw, to compare against 1.bin)

    > (webp2 only handles 10bit samples at max, btw)

    That's ok, but cwp2 doesn't say anything about -lossless not being lossless.

#11 - Scope (Member, Moon)


#12 - Jon Sneyers (Member, Belgium)
    What are the key (expected) benefits of WebP 2 compared to AVIF and JPEG XL for web delivery of images?

#13 - skal (Member, France)
    Quote Originally Posted by Jon Sneyers
    What are the key (expected) benefits of WebP 2 compared to AVIF and JPEG XL for web delivery of images?
    See post #1

#14 - DZgas (Member, Russia)
    WebP 2 is quite fast. But the quality... on this image, it's a difficult question.
    And I don't know how and where WebP 2 will fit in... only AVIF is already ready. Maybe, like WebP now, it will only be used for stickers in messengers.
    It's just in development. Google, why? You already have AVIF. OK, one more codec.
    [Attachment: Novyiy-god.jpg]

#15 - Member (Yugoslavia)
    I think there are a lot of uses for (near-)lossless, especially if for some reason one doesn't want vector formats.

#16 - Jon Sneyers (Member, Belgium)
    Quote Originally Posted by skal
    See post #1
    That doesn't really answer my question. I see

    • more efficient lossy compression (~30% better than WebP, as close to AVIF as possible)

    and also

    The goal is to reach decompression speed parity.


    but why do I need WebP 2 for that? AVIF itself already achieves those goals trivially.

#17 - DZgas (Member, Russia)
    The strange thing is: did someone at Google just get bored?
    AVIF was made just now; for its creation they used AV1 and all of Google's technologies, and those of all the other AOMedia members...
    So of course WebP 2 is a joke; it definitely cannot be better than AVIF, because then AVIF would look like a very bad job by Google and AOMedia...
    So yes, it's a codec, but for whom? For what? Incomprehensible.

    As an experiment only, I think they made it using the technologies that were discarded when creating AVIF because of some internal problems: maybe complexity, different variability (as in WebP), or simply not being effective.

    WebP 2 is quite fast.
    AVIF can also be encoded fast if you use the right settings.
    Image: same encoding time.
    [Attachment: FORM_2.png]

#18 - skal (Member, France)
    Quote Originally Posted by skal
    There's a tool, mk_preview, in the extras/ directory, to generate and manipulate previews as raw bits.
    For instance: ./extras/mk_preview input.png -o preview.bin -d preview.png -m voronoi -g 48 -n 400 -s 250 -i 2000

    ...
    Triangle previews are a very experimental field, and this is obvious when looking at the number of options in the 'mk_preview' experimentation tool!
    I've set up a demo page here: https://codepen.io/skal65535/project/full/ZwnQWM
    You can visualize the base64 strings generated with the (latest) 'mk_preview' tool.
    The decoder is ~500 lines of hand-written magnified JavaScript.
    [Attachment: image.png]


#19 - DZgas (Member, Russia)
    Quote Originally Posted by skal
    I've set up a demo page here: https://codepen.io/skal65535/project/full/ZwnQWM
    You can visualize the base64 strings generated with the (latest) 'mk_preview' tool.
    The decoder is ~500 lines of hand-written magnified JavaScript.
    This is the only interesting part of WebP 2, because in quality/size for such small images (micro-pictures) it looks better than AVIF (at 250 bytes) and anything else. Although my association is that it's like Codec2 among photo formats.
    [Attachment: Mona_Lisa.jpg]

#20 - Jyrki Alakuijala (Member, Switzerland)
    Quote Originally Posted by DZgas
    This is the only interesting part of WebP 2, because in quality/size for such small images (micro-pictures) it looks better than AVIF (at 250 bytes) and anything else. Although my association is that it's like Codec2 among photo formats.
    According to https://engineering.fb.com/2015/08/0...review-photos/, Facebook has been doing this with JPEG: they remove the relatively large JPEG header from the payload, only to patch it back dynamically. They also heavily Gaussian-blur the result before showing it, to get rid of artefacts. Adding a bit of noise at that stage can also help (in my experiments on this topic), but I don't think Facebook is doing that.
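
    The trick works because previews encoded with fixed dimensions and fixed quant/Huffman tables share a byte-identical JPEG prefix, which can ship once with the app instead of once per image. A toy version (my sketch; the actual Facebook scheme also patches the image dimensions client-side):

        def common_prefix_len(jpegs):
            # Length of the byte prefix shared by all given JPEG files.
            n = min(len(b) for b in jpegs)
            for i in range(n):
                if len({b[i] for b in jpegs}) > 1:
                    return i
            return n

        def strip_header(jpeg_bytes, header):
            assert jpeg_bytes.startswith(header)
            return jpeg_bytes[len(header):]   # store/transmit only this suffix

        def restore(suffix, header):
            return header + suffix            # byte-identical original JPEG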

    Would it be possible to do the same with AVIF, where the header is likely 3/4 of the payload at this size?

    Also interesting -- how does simple JPEG XL perform in this area (at 32x32 or 64x64 resolution)? JPEG XL has relatively small header overhead.

    My belief is that triangle-based modeling is favorable for much of the geometric stuff (buildings particularly), but can fail when textures need to be communicated (a natural image, a tree against the sky, etc.). My very, very brief experience of triangle-based previews is that they are favorable in about 25% of cases, roughly even in another 25%, and detrimental in the remaining 50%. Naturally, such evaluations are highly subjective, since we are pretty far from the original. It also depends on whether a Gaussian is applied in between. I consider the Gaussian a necessity for practical deployments, but it will eat some of the spatially-variable benefits that a triangle-based model has.

#21 - Jon Sneyers (Member, Belgium)
    The header overhead of the AVIF container is 282 bytes for images without alpha and 457 bytes for images with alpha. That's just for the obligatory HEIF header boxes, not including the header of the AV1 payload.
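
    (For scale: at the ~250-byte budget discussed above, the 282-byte container alone already exceeds the entire file; even for a 1000-byte preview it would be roughly 28% overhead.)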

#22 - skal (Member, France)
    Quote Originally Posted by Jyrki Alakuijala
    Naturally, such evaluations are highly subjective, since we are pretty far from the original.
    That's the thing: these previews don't need to be exactly 1:1 similar to the original image.
    They can also just 'summarize' it enough to trigger interest for the viewer to stay around while the full version downloads.
    Using potentially sharp shapes (triangles) offers more opportunities than just blurring. There's an interesting 'tiny semantic compression' experimentation field opening up here.

    [Attachment: image.png]

#23 - DZgas (Member, Russia)
    @skal
    It would be interesting to look at the pictures with the blurring of the triangles turned off.

#24 - Jyrki Alakuijala (Member, Switzerland)
    Quote Originally Posted by skal
    That's the thing: these previews don't need to be exactly 1:1 similar to the original image.
    They can also just 'summarize' it enough to trigger interest for the viewer to stay around while the full version downloads.
    Using potentially sharp shapes (triangles) offers more opportunities than just blurring. There's an interesting 'tiny semantic compression' experimentation field opening up here.
    Thank you Pascal!

    I believe the progression needs to balance the additional cognitive load caused by temporal changes and the benefit from lower latency.

    One way to reduce the cognitive load is to ensure that no preview-image features disappear in the final image. One way to achieve that is the one Facebook chose to deploy, i.e., excessive Gaussian smoothing. In JPEG XL we do the same, with high-quality preservation of the thumbnail combined with a very high-quality upsampling algorithm.
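
    The blur-up step itself is simple; a Pillow sketch of the Facebook-style display path (my illustration with an arbitrary radius, not the JPEG XL upsampler):

        from PIL import Image, ImageFilter

        def show_preview(path, full_w, full_h, radius=4):
            # Upsample a tiny decoded preview (e.g. 32x32) to display size,
            # then Gaussian-blur it so no sharp 'feature' appears that the
            # final image could later contradict.
            tiny = Image.open(path)
            big = tiny.resize((full_w, full_h), Image.BICUBIC)
            return big.filter(ImageFilter.GaussianBlur(radius))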

    Why I like minimizing cognitive load:

    1. Imposing new cognitive load on users may require a more holistic study than what I'm able to execute. Making the web more irritating altogether may cause behaviour changes that cannot be monitored in connection with the data/renderings causing the issue.

    2. A large fraction of the user base dislikes flickering images, and they may wish to disable progression when it creates cognitive load. I consider that this may be because some part of the user base treats the changes in the image as additional information, and their more sensitive brains get flooded with it.

    3. My personal belief system is that human attention is limited, and the attention should go to the deeper semantics of the content rather than images flashing.

    These things are highly subjective. One user on encode.su, in the discussions related to progressive JPEG improvements, self-identified as benefiting from the flickering, since they wanted to know when the final version of the image was coming.

#25 - DZgas (Member, Russia)
    Well, I'm just "playing", using random words in codepen.io/skal65535/project/full/ZwnQWM.
    You can rate my gallery, heh.
    [Attachment: musi.png]


#26 - Scope (Member, Moon)


#27 - DZgas (Member, Russia)
    A ~250-byte image, not counting headers.
    JPEG XL cannot do less than 500 bytes for a 64x96 image.

    AVIF | WEBP2 | WEBP2 preview
    [Attachment: 250_byte.png]

#28 - Scope (Member, Moon)
    Image formats comparison by eclipseo:
    https://www.reddit.com/r/AV1/comment...son_including/

    https://eclipseo.github.io/image-comparison-web/
    (Shift key to switch between images)


#29 - Member (France)
    Quote Originally Posted by skal
    who is using near-lossless, and to what effect?
    I tried an alternative near-lossless in WebP, just for the nice scoring (metrics, speed, size). It could eventually be useful in WebP v1, where YUV420 shows its limitations (and yet, -sharp_yuv could help). However, a near-lossless transform only makes sense to me if it offers a significant gain (quality or savings) over the highest lossy quality available, and I am not sure that would be the case in WebP v2 (or in JPEG XL), where IMO high-quality lossy would be good enough. For the cases where lossy performs less efficiently, perhaps a palette transform would be more appropriate.

#30 - Member (France)
    Anyway, if someone is interested in testing another approach to "near-lossless" with WebP v2, I tried this. Be warned: it has barely been tested, on few samples, and may be unstable.
    [Attached files]


