
Thread: JPEG XL status update and key distinguishing features

  1. #1
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    41
    Thanks
    10
    Thanked 30 Times in 15 Posts

    JPEG XL status update and key distinguishing features

    Hi everyone!

    I wrote a blog post about the current state of JPEG XL and how it compares to other state-of-the-art image codecs.

    https://cloudinary.com/blog/how_jpeg...r_image_codecs

  2. Thanks (7):

    boxerab (13th June 2020),Hakan Abbas (26th May 2020),Jarek (26th May 2020),Jyrki Alakuijala (26th May 2020),Mike (26th May 2020),Piglet (27th May 2020),Scope (26th May 2020)

  3. #2
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    63
    Thanks
    7
    Thanked 35 Times in 26 Posts
    Quote Originally Posted by Jon Sneyers View Post
    Hi everyone!

    I wrote a blog post about the current state of JPEG XL and how it compares to other state-of-the-art image codecs.

    https://cloudinary.com/blog/how_jpeg...r_image_codecs
    Jon,

    your "Original PNG image (2.6 MB)"is actually a jpeg (https://res.cloudinary.com/cloudinar...h_fidelity.png) when downloaded.

    Did you mean to add 'f_jpg,q_97' to the URL?

  4. #3
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    63
    Thanks
    7
    Thanked 35 Times in 26 Posts
    Also:

    * you forgot to use the '-sharp_yuv' option for the WebP example (53 KB). Otherwise, it would have given you this noticeably sharper version:
    [Attached image: high_fidelity_webp_sharp.png]
    (and note that this webp was encoded from the jpeg-q97, not the original PNG).

    * in the "Computational Complexity", i'm very surprised that JPEG-XL is faster than libjpeg-turbo. Did you forget to mention multi-thread usage?

  5. Thanks:

    Jon Sneyers (27th May 2020)

  6. #4
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    54
    Thanks
    9
    Thanked 18 Times in 15 Posts
    I guess the original PNG would be this: https://res.cloudinary.com/cloudinar...h_fidelity.png

    some trials with close filesize (webp = no meta, png = meta):
    Code:
    cwebp -q 91 high_fidelity.png -o q91.webp (52.81 KB) -> q91.png
    cwebp -q 90 -sharp_yuv high_fidelity.png -o q90-sharp.webp (52.06 KB) -> q90-sharp.png
    This is somewhat unrelated to the point of the article itself, but since web delivery is mentioned, a few points from an end-user's POV on the samples/results:

    - this file (131.23 KB) could be a good example where automatic compression would be useful. This PNG could be losslessly reduced to 19.85 KB for the web (or 16.19 KB in lossless WebP), which would make the (high quality) lossy JPEG XL less relevant for users (edited: reformulation to match the initial point, see this comparison)

    - about PNG itself, the encoder used here produces very bloated data for a web context, making the initial filesize non-representative of the format (the original PNG is 2542.12 KB, but the expected rendering for the web could be losslessly encoded to 227.08 KB with all chunks). As an aside, this PNG encoder also wrote non-standard keys for zTXt/tEXt chunks and non-standard chunks (caNv)

    btw, instead of (or in addition to) the current lossless mode, do you plan to provide a "web lossless" somehow? (edited) I did some very quick trials and at the moment the encoder can create bloated files for the web from some PNGs (16 bits/sample, no alpha optimization, etc.)
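    As an illustration of the kind of preparation meant by "web lossless" (a minimal sketch, not cssignet's actual tooling; the bit-depth reduction and chunk stripping here are assumed to be done with ImageMagick):
    Code:
    # reduce to 8 bits/sample and drop ancillary chunks before the lossless encodes
    convert high_fidelity.png -depth 8 -strip high_fidelity_web.png
    cjpegxl -q 100 high_fidelity_web.png high_fidelity_web.jxl
    cwebp -lossless -q 100 -m 6 high_fidelity_web.png -o high_fidelity_web.webp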
    Last edited by cssignet; 2nd June 2020 at 12:58. Reason: edit links + point

  7. Thanks:

    skal (27th May 2020)

  8. #5
    Member SolidComp
    Join Date
    Jun 2015
    Location
    USA
    Posts
    349
    Thanks
    130
    Thanked 53 Times in 37 Posts
    Quote Originally Posted by cssignet View Post
    I guess the original PNG would be this: https://res.cloudinary.com/cloudinar...h_fidelity.png

    some trials with close filesize (webp = no meta, png = meta):
    Code:
    cwebp -q 91 high_fidelity.png -o q91.webp (52.81 KB) -> q91.png
    cwebp -q 90 -sharp_yuv high_fidelity.png -o q90-sharp.webp (52.06 KB) -> q90-sharp.png
    This is somewhat unrelated to the point of the article itself, but since web delivery is mentioned, a few points from an end-user's POV on the samples/results:

    - this file could be a good example where automatic compression would be useful. This PNG could be losslessly reduced to 19.85 KB for the web (or 16.19 KB in lossless WebP), which would make the comparison (targeted filesize ~53 KB) with lossy JPEG XL or other lossy formats less relevant for users

    - about PNG itself, the encoder used here produces very bloated data for a web context, making the initial filesize non-representative of the format (the original PNG is 2542.12 KB, but the expected rendering for the web could be losslessly encoded to 227.08 KB with all chunks). As an aside, this PNG encoder also wrote non-standard keys for zTXt/tEXt chunks and non-standard chunks (caNv)

    btw, instead of mathematically lossless only, do you plan to provide a "web lossless" somehow? I did not try, but feeding the (mathematically) lossless encoder with a 16 bits/sample PNG would probably create an over-bloated file for web usage
    Your lossless reduction darkened the image though. Look at them side by side.

  9. #6
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    54
    Thanks
    9
    Thanked 18 Times in 15 Posts
    Quote Originally Posted by SolidComp View Post
    Your lossless reduction darkened the image though. Look at them side by side.
    the host (https://i.slow.pics/) did some kind of post-processing on the PNG (dropping the iCCP chunk and recompressing the image data less efficiently). Those files are not what I uploaded (see the edited link in my first post)

  10. #7
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    41
    Thanks
    10
    Thanked 30 Times in 15 Posts
    Quote Originally Posted by skal View Post
    Jon,

    your "Original PNG image (2.6 MB)"is actually a jpeg (https://res.cloudinary.com/cloudinar...h_fidelity.png) when downloaded.

    did you mean to add 'f_jpg,q_97' in the URL ?
    Sorry, yes, drop the f_jpg,q_97 to get the actual original PNG.

  11. #8
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    41
    Thanks
    10
    Thanked 30 Times in 15 Posts
    Quote Originally Posted by cssignet View Post
    I guess the original PNG would be this: https://res.cloudinary.com/cloudinar...h_fidelity.png
    Correct.


    - this file could be a good example where automatic compression would be useful. This PNG could be losslessly reduced to 19.85 KB for the web (or 16.19 KB in lossless WebP), which would make the comparison (targeted filesize ~53 KB) with lossy JPEG XL or other lossy formats less relevant for users
    The article shows only that crop, but the sizes are for the whole image. Also, lossless WebP wouldn't be completely lossless since this is a 16-bit PNG (quantizing to 8-bit introduces very minor color banding).

  12. #9
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    41
    Thanks
    10
    Thanked 30 Times in 15 Posts
    Quote Originally Posted by skal View Post
    Also:

    * you forgot to use the '-sharp_yuv' option for the WebP example (53 KB). Otherwise, it would have given you this noticeably sharper version:
    [Attached image: high_fidelity_webp_sharp.png]
    (and note that this webp was encoded from the jpeg-q97, not the original PNG).
    Good point, yes, better results are probably possible for all codecs with custom encoder options. I used default options for all.

    * in the "Computational Complexity", i'm very surprised that JPEG-XL is faster than libjpeg-turbo. Did you forget to mention multi-thread usage?
    Numbers are for 4 threads, as mentioned in the blog post. On a single core, libjpeg-turbo will be faster. With more than four cores, jxl will be faster by an even larger margin. It's hard to find a CPU with fewer than 4 cores these days.
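    As a rough way to reproduce such a comparison at a fixed thread count (a sketch only; the --num_threads flag is assumed from current JPEG XL tooling and may differ in the 2020 encoder, and libjpeg-turbo's cjpeg is single-threaded by design):
    Code:
    time cjpeg -quality 90 -outfile out.jpg in.ppm          # libjpeg-turbo, single thread
    time cjpegxl -q 90 --num_threads=1 in.png out-1t.jxl    # JPEG XL limited to 1 worker thread (assumed flag)
    time cjpegxl -q 90 --num_threads=4 in.png out-4t.jxl    # JPEG XL with 4 worker threads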

  13. #10
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    63
    Thanks
    7
    Thanked 35 Times in 26 Posts
    Quote Originally Posted by Jon Sneyers View Post
    Good point, yes, better results are probably possible for all codecs with custom encoder options. I used default options for all.
    -sharp_yuv is not the default because it's slower: the defaults are adapted to general common use, and the image you picked as the source is far from the common case the defaults are tuned for.

    (all the more so since these images compress better losslessly!)

    Quote Originally Posted by Jon Sneyers View Post
    ​Numbers are for 4 threads, as is mentioned in the blogpost. On single core, libjpeg-turbo will be faster. Using more than four cores, jxl will be more significantly faster. It's hard to find a CPU with fewer than 4 cores these days.
    Just because you have 4 cores doesn't mean you want to use them all at once, especially if you have several images to compress in parallel (which is often the case).
    To make the point with a fair comparison, it would have been less noisy to force 1 thread for all codecs. As presented, I find the text quite misleading.

  14. #11
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    41
    Thanks
    10
    Thanked 30 Times in 15 Posts
    Quote Originally Posted by skal View Post
    -sharp_yuv is not the default because it's slower: the defaults are adapted to general common use, and the image you picked as the source is far from the common case the defaults are tuned for.

    (all the more so since these images compress better losslessly!)



    Just because you have 4 cores doesn't mean you want to use them all at once, especially if you have several images to compress in parallel (which is often the case).
    To make the point with a fair comparison, it would have been less noisy to force 1 thread for all codecs. As presented, I find the text quite misleading.
    The thing is, a bitstream needs to be suitable for parallel encode/decode. That is not always the case. Comparing using just 1 thread gives an unfair advantage to inherently sequential codecs.

    Typical machines have more than 4 cores nowadays. Even in phones, 8 is common. The tendency is towards more cores and not much faster cores. The ability to do parallel encode/decode is important.

  15. #12
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    63
    Thanks
    7
    Thanked 35 Times in 26 Posts
    Quote Originally Posted by Jon Sneyers View Post
    The thing is, a bitstream needs to be suitable for parallel encode/decode. That is not always the case. Comparing using just 1 thread gives an unfair advantage to inherently sequential codecs.
    And yet, sequential codecs are more efficient than parallel ones: tile-based compression has sync points and contention that make the codec wait for threads to finish. Processing several images separately in parallel doesn't have this inefficiency (provided memory and I/O are not the bottleneck).

    Actually, sequential codecs are at an advantage in some quite important cases:
    * image bursts on a phone camera (the sensor takes a sequence of photos in short bursts)
    * web page rendering (which usually contains a lot of images; think of the YouTube landing page)
    * displaying photo albums (/thumbnails)
    * back-end processing of a lot of photos in parallel (Cloudinary?)

    Actually, I'd say parallel codecs are mostly useful for the Photoshop case (where you're working on a single photo) and screen sharing (/slide decks).

    Side note: JPEG can be made parallelizable using restart markers. The fact that no one uses them is somewhat telling.

    In any case, I would have multiplied JPEG's MP/s by 4x in your table to get fair numbers.
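    A minimal sketch of the per-image parallelism described above: several single-threaded encodes kept in flight at once, which is how a batch or back-end workload would typically keep the cores busy:
    Code:
    # four images encoded concurrently, each by one single-threaded cwebp process
    find . -name '*.png' -print0 | xargs -0 -P 4 -I{} cwebp -q 90 {} -o {}.webp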

  16. #13
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    54
    Thanks
    9
    Thanked 18 Times in 15 Posts
    Quote Originally Posted by Jon Sneyers
    ​The article shows only that crop, but the sizes are for the whole image
    (edited) I remember what I actually tried: I used this crop as the sample and ran lossy JPEG XL on it (still, I misread your table and thought you did the same). The point is that this sample could be stored better as lossless than as high-quality lossy JPEG XL - so automatic compression would be useful in this case
    Code:
    cjpegxl -q 99 -s 6 high_fidelity.png_opt.png high_fidelity-q99.jxl (~ 56KB output)
    and on top of that, the default lossless JPEG XL would create a bigger file than the PNG
    Code:
    cjpegxl -q 100 high_fidelity.png_opt.png high_fidelity-q100.jxl (20.93 KB)
    Quote Originally Posted by Jon Sneyers
    lossless WebP wouldn't be completely lossless since this is a 16-bit PNG (quantizing to 8-bit introduces very minor color banding).
    my observations were about the web usage context only, and about how 16 bits/sample PNGs are rendered in web browsers anyway
    Last edited by cssignet; 2nd June 2020 at 12:56.

  17. #14
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    41
    Thanks
    10
    Thanked 30 Times in 15 Posts
    Quote Originally Posted by skal View Post
    And yet, sequential codecs are more efficient than parallel ones: tile-based compression has sync points and contention that make the codec wait for threads to finish. Processing several images separately in parallel doesn't have this inefficiency (provided memory and I/O are not the bottleneck).

    Actually, sequential codecs are at an advantage in some quite important cases:
    * image bursts on a phone camera (the sensor takes a sequence of photos in short bursts)
    * web page rendering (which usually contains a lot of images; think of the YouTube landing page)
    * displaying photo albums (/thumbnails)
    * back-end processing of a lot of photos in parallel (Cloudinary?)

    Actually, I'd say parallel codecs are mostly useful for the Photoshop case (where you're working on a single photo) and screen sharing (/slide decks).

    Side note: JPEG can be made parallelizable using restart markers. The fact that no one uses them is somewhat telling.

    In any case, I would have multiplied JPEG's MP/s by 4x in your table to get fair numbers.
    I wouldn't say sequential codecs are more efficient than parallel ones: you can always use just a single thread and avoid the (unavoidably imperfect) parallel scaling.

    If you have enough images to process at the same time (like Cloudinary, or maybe rendering a website with lots of similar-sized images), then it's indeed best to just parallelize that way and use a single thread per image.

    There are still cases where you don't have enough images to keep your cores busy with parallel single-threaded processes, though. For end-users, I think the "Photoshop case" is probably rather common.

    Restart markers in JPEG only allow you to do parallel encode, not parallel decode. A decoder doesn't know if and where the next restart marker occurs, and what part of the image data it represents. You can also only do stripes with restart markers, not tiles. So even if you'd add some custom metadata to make an index of restart marker bitstream/image offsets, it would only help to do full-image parallel decode, not cropped decode (e.g. decoding just a 1000x1000 region from a gigapixel image).
    I don't think the fact that no-one is trying to do this is telling. Applications that need efficient parallel/cropped decode (e.g. medical imaging) just don't use JPEG, but e.g. JPEG 2000.

    Multiplying the JPEG numbers by 4 doesn't make much sense, because you can't actually decode a JPEG 4x faster on 4 cores than on 1 core. Dividing the JPEG XL numbers by 3 (for decode) and by 2 (for encode) is what you need to do to get "fair" numbers: that's the speed you would get on a single core (the factor is not 4 because parallel scalability is never perfect).

    There's a reason why all the HEIC files produced by Apple devices are using completely independently encoded 256x256 tiles. Otherwise encode and decode would probably be too slow. The internal grid boundary artifacts are a problem in this approach though.

  18. Thanks (3):

    boxerab (13th June 2020),Piglet (28th May 2020),spider-mario (28th May 2020)

  19. #15
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    63
    Thanks
    7
    Thanked 35 Times in 26 Posts
    Quote Originally Posted by Jon Sneyers View Post
    Restart markers in JPEG only allow you to do parallel encode, not parallel decode. A decoder doesn't know if and where the next restart marker occurs, and what part of the image data it represents.
    The trick is to put the index ("jump table") in the COM section reserved for comments. Mildly non-standard JPEGs, but workable.
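    For context, a sketch of how restart markers can be added when re-encoding (standard jpegtran switch; the COM "jump table" skal describes would still need a custom tool to write the marker offsets):
    Code:
    # emit a restart marker every MCU row so the scan splits into independently decodable chunks
    jpegtran -restart 1 -copy none -outfile restart.jpg in.jpg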

  20. #16
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    41
    Thanks
    10
    Thanked 30 Times in 15 Posts
    Quote Originally Posted by skal View Post
    The trick is to put the index ("jump table") in the COM section reserved for comments. Mildly non-standard JPEGs, but workable.
    Yes, that would work. Then again, if you do such non-standard stuff, you can just as well make JPEG support alpha transparency by using 4-component JPEGs with some marker that says that the fourth component is alpha (you could probably encode it in such a way that decoders that don't know about the marker relatively gracefully degrade by interpreting the image as a CMYK image that looks the same as the desired RGBA image except it is blended to a black background). Or you could revive arithmetic coding and 12-bit support, which are in the JPEG spec but just not well supported.

    I guess the point is that we're stuck with legacy JPEG decoders, and they can't do parallel decode. And we're stuck with legacy JPEG files, which don't have a jump table. And even if we would re-encode them with restart markers and jump tables, it would only give parallel striped decode, not efficient cropped decode.

  21. Thanks (2):

    boxerab (13th June 2020),spider-mario (29th May 2020)

  22. #17
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    54
    Thanks
    9
    Thanked 18 Times in 15 Posts
    I did not check the results, but here is a (late) primitive and automatic trial of JPEG XL 0.0.1-f84edfb2/WebP 1.1.0 on the files used in my benchmark (for web (lossless) usage)
    Attached Files

  23. Thanks:

    skal (12th June 2020)

  24. #18
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    63
    Thanks
    7
    Thanked 35 Times in 26 Posts
    Quote Originally Posted by cssignet View Post
    I did not check the results, but here is a (late) primitive and automatic trial of JPEG XL 0.0.1-f84edfb2/WebP 1.1.0 on the files used in my benchmark (for web (lossless) usage)
    Thanks for doing the tests. The numbers check out for WebP 1.1.0.
    I was wondering: is Pingo webp-lossless re-optimizing an already-compressed WebP-lossless? Or starting back from the PNG source?
    I'm asking because the timings are pretty good for Pingo-webp-lossless compared to the rather slow WebP cruncher, which is probably doing too much work...

    skal/

    [what is wjxl, btw?]

  25. #19
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    54
    Thanks
    9
    Thanked 18 Times in 15 Posts
    Quote Originally Posted by skal
    I was wondering: is Pingo webp-lossless re-optimizing an already-compressed WebP-lossless? Or starting back from the PNG source?
    In this specific test, it is done from the PNG (-webp-lossless -sN file.png), but it could eventually do both (from WebP: -sN file.webp)

    Quote Originally Posted by skal
    I'm asking because the timings are pretty good for Pingo-webp-lossless compared to the rather slow WebP cruncher, which is probably doing too much work...
    On a larger benchmark with various image types, I guess the WebP cruncher would give smaller results. The point here was to make it faster, so it checks the image specs first and selects a reasonably good transform on average instead of trying them all

    On paletted samples, an alternative transform could sometimes lead to better compression than what the WebP cruncher does at the moment (but not always). How the entries are sorted in the palette would be the critical factor, and could enable (or not) the use of its predictors on the image data. Perhaps this could make WebP more competitive vs other codecs (JPEG XL, FLIF, OLI (<- e.g. on those specific samples, WebP could be 19 210 bytes and 14 484 bytes, or even smaller with more exhaustive/efficient search))

    Quote Originally Posted by skal
    what is wjxl, btw?
    My bad, I thought I mentioned it in the results. It is just my ugly, unoptimized quick attempt to do PNG->JXL losslessly for the web (basic alpha optimization, etc.) to make the comparison with the other codecs more reliable

  26. #20
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    63
    Thanks
    7
    Thanked 35 Times in 26 Posts
    Quote Originally Posted by cssignet View Post
    In this specific test, it is done from the PNG (-webp-lossless -sN file.png), but it could eventually do both (from WebP: -sN file.webp)
    nice work!

    Quote Originally Posted by cssignet View Post
    How the entries are sorted in the palette would be the critical factor, and could enable (or not) the use of its predictors on the image data. Perhaps this could make WebP more competitive vs other codecs (JPEG XL, FLIF, OLI (<- e.g. on those specific samples, WebP could be 19 210 bytes and 14 484 bytes, or even smaller with more exhaustive/efficient search))
    Note that the GitHub HEAD WebP version produces 18842 bytes and 14168 bytes of output, respectively (in -lossless -q 100 -m 6 cruncher mode).
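    For reference, the cruncher-mode invocation referred to here, spelled out as a cwebp command line:
    Code:
    cwebp -lossless -q 100 -m 6 sample.png -o sample.webp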

    Quote Originally Posted by cssignet View Post
    My bad, I thought I mentioned it in the results. It is just my ugly, unoptimized quick attempt to do PNG->JXL losslessly for the web (basic alpha optimization, etc.) to make the comparison with the other codecs more reliable
    oh, good to know.

  27. Thanks:

    cssignet (14th June 2020)

  28. #21
    Member SolidComp
    Join Date
    Jun 2015
    Location
    USA
    Posts
    349
    Thanks
    130
    Thanked 53 Times in 37 Posts
    Skal, what's the GitHub "HEAD" version? Is that a nightly or something?

  29. #22
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    63
    Thanks
    7
    Thanked 35 Times in 26 Posts
    Quote Originally Posted by SolidComp View Post
    Skal, what's the GitHub "HEAD" version? Is that a nightly or something?
    It's just the top of the tree, reflecting the current state of development. We usually cut a release out of HEAD every ~6 months.
    The next one will be libwebp-1.1.1; I don't know when exactly. There's no 'nightly build' per se.

  30. #23
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    54
    Thanks
    9
    Thanked 18 Times in 15 Posts
    Quote Originally Posted by skal
    Note that the GitHub HEAD WebP version produces 18842 bytes and 14168 bytes of output, respectively (in -lossless -q 100 -m 6 cruncher mode).
    Good news, I did not see this! Is there any particular reason why this transformation is not done at a lower level at the moment?
    Attached Files

  31. #24
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    63
    Thanks
    7
    Thanked 35 Times in 26 Posts
    Quote Originally Posted by cssignet View Post
    Good news, I did not see this! Is there any particular reason why this transformation is not done at a lower level at the moment?
    Hmm... IIRC, we still need to find a good heuristic to detect when it's advantageous.

  32. #25
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    63
    Thanks
    7
    Thanked 35 Times in 26 Posts
    Quote Originally Posted by skal View Post
    hmm... iirc, still need to find a good heuristic to detect when it's advantageous.
    btw, it's just been added to libwebp HEAD for method 5 (thanks for the suggestion!).

  33. #26
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    54
    Thanks
    9
    Thanked 18 Times in 15 Posts
    WebP 1.1.0/cf2f88b (automatic results on my benchmark with recompiled binaries [on very low-performance hardware / a 32-bit OS, where multi-threading/multi-processing could struggle], not checked)
    Attached Files

  34. Thanks:

    skal (19th June 2020)

  35. #27
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    63
    Thanks
    7
    Thanked 35 Times in 26 Posts
    Quote Originally Posted by cssignet View Post
    WebP 1.1.0/cf2f88b (automatic results on my benchmark with recompiled binaries [on very low-performance hardware / a 32-bit OS, where multi-threading/multi-processing could struggle], not checked)
    Nice! Pingo is really super-fast for -webp-lossless, even in crunch mode

  36. Thanks:

    cssignet (19th June 2020)

  37. #28
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    54
    Thanks
    9
    Thanked 18 Times in 15 Posts
    I guess the main reason is that there is no real crunch mode like cwebp's. pingo targets its trials very specifically according to the color type (so RGBA, paletted, etc. are "tested" differently)

    For paletted images it does estimations: one with the default ordering (+ no predictor), a second with a specific ordering (+ predictors). Those trials are done in PNG format, with fast but weak compression (this will be improved later). It compares both sizes, picks the smaller one, and sets the WebP encoder accordingly. The level in pingo (-s1, -s2, etc.) sets how strong the compression is for the estimations, how many ordering strategies are tried, and the method/quality for the final WebP encoding (libwebp has been modified for that purpose)
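    A very rough sketch of that "cheap trials, compare sizes, then final encode" pattern (not pingo's code; stock cwebp exposes no palette-reordering switch, so the -exact toggle below merely stands in for the choice pingo makes between its two orderings):
    Code:
    # two fast trial encodes stand in for the two estimations
    cwebp -lossless -q 20 -m 1 in.png -o trial_a.webp
    cwebp -lossless -q 20 -m 1 -exact in.png -o trial_b.webp
    # compare sizes and re-run the smaller configuration at full effort
    a=$(wc -c < trial_a.webp); b=$(wc -c < trial_b.webp)
    if [ "$a" -le "$b" ]; then extra=""; else extra="-exact"; fi
    cwebp -lossless -q 100 -m 6 $extra in.png -o final.webp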

    Perhaps pingo would be faster with this strategy, but if someone tested a large set of samples (paletted or not), I guess it would compress worse than cwebp's brute force. The only reason it got better compression in this specific case (my benchmark) is its palette sorting, which overall performed better than the default here. However, it is an inexact science, since each approach can beat the other depending on the sample.

  38. Thanks:

    Jyrki Alakuijala (20th June 2020)

  39. #29
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    54
    Thanks
    9
    Thanked 18 Times in 15 Posts
    Perhaps I made it even more competitive on the tested profiles
    Attached Files
    Last edited by cssignet; 22nd June 2020 at 01:00.

  40. #30
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    63
    Thanks
    7
    Thanked 35 Times in 26 Posts
    Quote Originally Posted by cssignet View Post
    Perhaps I made it even more competitive on the tested profiles
    Very nice speed (and compression gain). That's a motivation to add better heuristics in cwebp!

    Looks like parallel processing of files has better throughput overall (3.8x) compared to multi-threading each file taken sequentially (~ 1.8x). That's not totally unexpected...

  41. Thanks:

    cssignet (24th June 2020)


