
Thread: JXL reference software version 0.2 released

  1. #1
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    88
    Thanks
    15
    Thanked 64 Times in 34 Posts

    JXL reference software version 0.2 released


  2. Thanks (9):

    boxerab (3rd January 2021),Hakan Abbas (25th December 2020),hunman (27th December 2020),Jarek (25th December 2020),Jyrki Alakuijala (25th December 2020),Mike (24th December 2020),pico (30th January 2021),schnaader (24th December 2020),Scope (24th December 2020)

  3. #2
    Member
    Join Date
    Mar 2016
    Location
    USA
    Posts
    61
    Thanks
    9
    Thanked 24 Times in 16 Posts
    Congrats. Is there a place to read about specific changes? The commits all say "Update JPEG-XL with latest changes." so it's hard to know what's going on under the hood and what should be evaluated.

  4. #3
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    88
    Thanks
    15
    Thanked 64 Times in 34 Posts
    It's mostly memory footprint reductions, faster encoding, fuzzer bug fixes for a more robust decoder, and libjxl API improvements. I also added an experimental PSD input for cjxl to test layered images (and spot color channels).

  5. #4
    Member
    Join Date
    Nov 2019
    Location
    Moon
    Posts
    51
    Thanks
    18
    Thanked 54 Times in 31 Posts
Some comparison results (since the lossless mode has not changed from the last build): lossless JPEG XL is now the densest format (although I noticed a decrease in efficiency at faster speeds compared to some older builds)
    Lossless Image Formats Comparison (Jpeg XL, AVIF, WebP 2, WebP, FLIF, PNG, ...) v1.20
    https://docs.google.com/spreadsheets/d/1ju4q1WkaXT7WoxZINmQpf4ElgMD2VMlqeDN2DuZ6yJ8/

    Attached Files
    Last edited by Scope; 29th January 2021 at 14:50.

  6. Thanks (4):

    Hakan Abbas (25th December 2020),JamesB (2nd January 2021),Jon Sneyers (24th December 2020),pico (30th January 2021)

  7. #5
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    88
    Thanks
    15
    Thanked 64 Times in 34 Posts
    For lossless, faster speeds became a lot faster and less memory-hungry, and slightly less dense.

  8. #6
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    820
    Thanks
    253
    Thanked 264 Times in 163 Posts

  9. #7
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    26
    Thanks
    21
    Thanked 1 Time in 1 Post
    hello, I tested some non-photographic images; JPEG XL v0.2 does very well,
    thank you very much

    but with the cjxl -d 1.0 flag, the compressed image's max Butteraugli is still above 1.0;
    maybe I forgot some flag? or is --adaptive_reconstruction 1 necessary?
    and does the cjxl -s flag affect image quality? in my test -s 7 and -s 8 get different max Butteraugli values.

    cjxl.exe image.png image.jxl -d 1.0 -s 7
    cjxl.exe image.png image.jxl -d 1.0 -s 8

    https://docs.google.com/spreadsheets...it?usp=sharing

  10. Thanks:

    Jyrki Alakuijala (27th December 2020)

  11. #8
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    88
    Thanks
    15
    Thanked 64 Times in 34 Posts
    The -d distance target is only a target, not a guarantee. At speeds 8 and up, it will be closer to the target (because actual Butteraugli iterations are done in the encoder), while at faster speeds, more heuristics are used, which can cause some deviations.

    The speed setting does indeed affect the result for lossy; that's kind of logical if you think about it: if speed were only about entropy coding and not things like transform or quantization selection, then the difference between fastest and slowest would be quite small.
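    To see this effect yourself, here is a minimal sketch (assuming cjxl/djxl and libjxl's butteraugli_main comparison tool are on PATH; tool and file names are examples, adjust for your own build):

    ```shell
    # Encode the same image at the same target distance but at different
    # speed settings, then compare the resulting file sizes. Guarded so
    # the sketch is a no-op when the tools are not installed.
    if command -v cjxl >/dev/null 2>&1; then
      for s in 5 7 8 9; do
        cjxl input.png "out_s${s}.jxl" -d 1.0 -s "$s"
        djxl "out_s${s}.jxl" "out_s${s}.png"
        echo "speed $s: $(wc -c < "out_s${s}.jxl") bytes"
        # butteraugli_main input.png "out_s${s}.png"   # prints the measured distance
      done
    else
      echo "cjxl not found; skipping"
    fi
    demo_done=yes
    ```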

  12. Thanks:

    Lithium Flower (27th December 2020)

  13. #9
    Member
    Join Date
    Aug 2015
    Location
    indonesia
    Posts
    514
    Thanks
    63
    Thanked 96 Times in 75 Posts
    Quote Originally Posted by Jon Sneyers View Post

    How do I compile it using a batch script? Thank you

  14. #10
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    26
    Thanks
    21
    Thanked 1 Time in 1 Post
    @ Jon Sneyers
    thank you for your reply

    I compressed some PNG pal8 images; using VarDCT on pal8 is not a good idea, so I used lossless modular mode on these images, but the result was unexpected. What are the correct flags to compress pal8 images?

    Also, for Inside_C.png (attached file), VarDCT mode can't compress it much (-q 80 -s 9); for this image type, is VarDCT mode or modular mode recommended?

    Compressing non-photographic images with VarDCT mode and (lossy) modular mode at similar sizes (VarDCT -d 0.8 -s 8, 57.7 MB; modular -m -Q 90 -s 8, 56.4 MB): if my target type is non-photographic images, which compresses better?

    https://docs.google.com/spreadsheets/d/1p0jL9M1P9d-9abVjNFgD3qx1dCitMtX7dDBCObf0zCQ/edit?usp=sharing


    ==============================================================================
    - People1.png 19.3kb

    cjxl: People1.jxl 20.9kb

    cwebp: People1.webp 16.5kb

    Pingo:
    png: People1.png 16.4kb
    webp: People1.webp 16.5kb

    cjxl People1.png People1.jxl --patches=0 -m -s 9

    https://gitlab.com/wg1/jpeg-xl/-/issues/55
    --patches=0 --speed=tortoise
    cwebp -mt -m 6 -q 100 -lossless -progress -alpha_filter best -metadata icc People1.png -o People1.webp

    Pingo -s9 -nosrgb -noconversion People1.png
    pingo -webp-lossless -nosrgb People1.png

    ==============================================================================
    Input file: People1.png | 19848 bytes
    Image: 384x256 pixels | 8 bits/sample | Indexed & alpha | 93 color(s)
    Delta filter: None
    Filter per scanline:
    00000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000 000000
    Chunks: tRNS

    Input file: Inside_C.png | 420135 bytes
    Image: 512x512 pixels | 8 bits/sample | RGB & Alpha | 72090 color(s)
    Delta filter: None
    Filter per scanline:
    00000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000 000000
    Chunks: only critical
    Attached Thumbnails: People1.png (19.4 KB, ID 8217), Inside_C.png (410.3 KB, ID 8218)
    Last edited by Lithium Flower; 31st December 2020 at 20:47.

  15. #11
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    976
    Thanks
    266
    Thanked 350 Times in 221 Posts
    Quote Originally Posted by Jon Sneyers View Post
    The -d distance target is only a target, not a guarantee.
    Iterating with Butteraugli was first tried out in guetzli. The pik project was initially started as chaining guetzli+brunsli (in late 2014).

    In guetzli we considered it important to meet the max Butteraugli target as precisely as possible. It was a proof of concept that it is worthwhile to optimize for Butteraugli -- many other metrics fall apart when optimized for. Getting a precise max Butteraugli score requires quite a lot of optimization rounds, as the maximum error is influenced by the quantization of neighbouring areas; sometimes coarser quantization can just be a good fit and finer quantization produces a worse max error, etc. In guetzli we chose 50 iterations and quite a few tricks to get good scores.

    During the development of pik and JPEG XL, we have increasingly focused on practical encoding speeds, relying on the experience from guetzli and from early exhaustive optimization experiments with pik showing that heavily optimizing for Butteraugli does not lead to utterly terrible images. We just run fewer rounds of Butteraugli optimization: zero or one instead of 50. We developed heuristics that roughly approximate what gives decent Butteraugli scores. Such heuristics (or even three rounds of Butteraugli optimization with gradient approximation search) are not going to bring the maximum error down, due to the inherent non-linearity of quantization.

    We will likely go back to improving guetzli-like optimization (-s 9) in a year (or two). For the VarDCT mode, -s 9 is currently mostly untested by human eyes and more like old artefacts in the code than something useful. I'd recommend using VarDCT encoding with the default speed settings for now. If you need something slower and denser, '-s kitten' is a slower mode that is somewhat tested (until about the last year, when we mostly looked at default encoding).

    Usually an increase in -s optimization makes simple metrics worse, i.e., you'll see worse PSNR/SSIM/etc. -- only the images look slightly better (except no guarantees at -s 9).

  16. Thanks:

    Lithium Flower (31st December 2020)

  17. #12
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    88
    Thanks
    15
    Thanked 64 Times in 34 Posts
    Quote Originally Posted by Lithium Flower View Post
    @ Jon Sneyers
    thank you for your reply

    I compressed some PNG pal8 images; using VarDCT on pal8 is not a good idea, so I used lossless modular mode on these images, but the result was unexpected. What are the correct flags to compress pal8 images?
    Maybe try those with -q 100 -g 2 -s 9? Avoiding the group separation should help a bit here...

    Better patch detection would help on these images, by the way. But that's not trivial to implement.
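    A quick way to check whether the group-size suggestion above helps on a given image (flags as given in this thread; file names are examples):

    ```shell
    # Compare lossless modular output with the default groups vs. the
    # -g 2 group setting suggested above, both at max effort (-s 9).
    if command -v cjxl >/dev/null 2>&1; then
      cjxl People1.png default.jxl -q 100 -s 9
      cjxl People1.png biggroups.jxl -q 100 -g 2 -s 9
      echo "default: $(wc -c < default.jxl) bytes"
      echo "-g 2:    $(wc -c < biggroups.jxl) bytes"
    else
      echo "cjxl not found; skipping"
    fi
    compare_done=yes
    ```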

  18. Thanks:

    Lithium Flower (31st December 2020)

  19. #13
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    26
    Thanks
    21
    Thanked 1 Time in 1 Post
    @ Jyrki Alakuijala
    Thank you for teaching me about Butteraugli; I'm really grateful.

    I think this question is more about cjxl, so I'm posting it in the jxl thread.
    In cjxl, -d 1.0 is a target distance; if I want the resulting image's distance near 1.0, which method is best practice?

    1. Use the --adaptive_reconstruction 0 flag to turn off the loop filter. But in cjxl 0.2 --adaptive_reconstruction is unavailable, and I worry that turning off the loop filter will cause this problem:
    "If you turn off the rest of the filtering, you will see more ringing and more block artefacts."

    2. Run three cjxl instances in parallel at -d 1.0, -d 0.8, -d 0.7 with -s 8, then run three Butteraugli instances in parallel on each file, and pick the file nearest 1.0.

    3. Use cjxl -d 0.8 or -d 0.7 with -s 8 on every image file.
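    Option 2 can be sketched as a small script: encode at a few distances, measure each decode, and keep the closest result. This is only a sketch; the Butteraugli tool name (butteraugli_main from the libjxl tools) and file names are assumptions, so adjust for your build.

    ```shell
    # Encode one image at several target distances, measure the actual
    # Butteraugli score of each decode, and report them. Guarded so the
    # sketch is a no-op when the tools are not installed.
    if command -v cjxl >/dev/null 2>&1 && command -v butteraugli_main >/dev/null 2>&1; then
      for d in 1.0 0.8 0.7; do
        cjxl image.png "image_d${d}.jxl" -d "$d" -s 8
        djxl "image_d${d}.jxl" "image_d${d}.png"
        score=$(butteraugli_main image.png "image_d${d}.png")
        echo "d=$d -> butteraugli $score"
      done
      # then keep whichever file's score is closest to (but not above) the target
    else
      echo "cjxl/butteraugli_main not found; skipping"
    fi
    sketch_done=yes
    ```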
    Last edited by Lithium Flower; 2nd January 2021 at 20:59.

  20. #14
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    976
    Thanks
    266
    Thanked 350 Times in 221 Posts
    Quote Originally Posted by Lithium Flower View Post

    3. Use cjxl -d 0.8 or -d 0.7 with -s 8 on every image file.
    I would do this. Choose one setting and use it. Compress with the 'kitten' speed setting.

    The bulk of the image quality is determined by that setting; small per-image adjustments based on the maximum score are likely only going to make results less consistent.

  21. Thanks:

    Lithium Flower (5th January 2021)

  22. #15
    Member
    Join Date
    Jul 2018
    Location
    Russia
    Posts
    41
    Thanks
    0
    Thanked 4 Times in 3 Posts
    Quote Originally Posted by Scope View Post
    Some comparison results
    1. "Public Images" is good, but "Easily Obtainable Images" would be better: can you provide a script or a list of URLs (or even a torrent file) to download the whole set?
    2. In v1.20 you removed BMF and JPEG LS Part 2; can you save those results for history in a separate forum thread?
    Last edited by e8c; 4th January 2021 at 01:47.
    Dvizh must go Dvizh, no open source - no Dvizh.

  23. #16
    Member
    Join Date
    Nov 2019
    Location
    Moon
    Posts
    51
    Thanks
    18
    Thanked 54 Times in 31 Posts
    Quote Originally Posted by e8c View Post
    1. "Public Images" is good, but "Easily Obtainable Images" would be better: can you provide a script or a list of URLs (or even a torrent file) to download the whole set?
    Since this is not a special free-for-distribution set of images, but mostly public images with unknown distribution requirements, I have not included this link in the spreadsheet:
    https://drive.google.com/drive/u/1/f...Sa-gGhhikkOoYd

    Quote Originally Posted by e8c View Post
    2. In v1.20 you removed BMF and JPEG LS Part 2; can you save those results for history in a separate forum thread?
    https://docs.google.com/spreadsheets...Ml0sQFDnFsR4A/
    But note that they are also slightly different sets.

  24. #17
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    88
    Thanks
    15
    Thanked 64 Times in 34 Posts
    I made a little summary tab for the lossless comparison spreadsheet, with just the totals per category:

    https://docs.google.com/spreadsheets...it?usp=sharing


    Maybe you would also like to add such a summary to your spreadsheet; I think it's useful for getting some quick global insights while still separating per category, because there are obviously big differences between the categories.

    Some more categories I would like to suggest, if you have time to find suitable images:
    • screenshots (non-game, but things like screenshotted tweets etc.)
    • maps (road maps, world maps, all kinds of geographical maps)
    • charts (plots, bar diagrams, all kinds of data visualizations)
    • medical imagery (x-rays etc)
    • logos
    • icons / emojis


    In any case, thanks for doing this work – it's a form of independent benchmarking/validation that I really appreciate!

  25. Thanks:

    Hakan Abbas (5th January 2021)

  26. #18
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    26
    Thanks
    21
    Thanked 1 Time in 1 Post
    @ Jyrki Alakuijala
    Thank you for your reply

    I still care a lot about max Butteraugli, so I ran cjxl at -d 1.0, 0.9, 0.8, 0.7, 0.6, 0.5 (speed: kitten):
    https://docs.google.com/spreadsheets...tX7dDBCObf0zCQ

    At d 1.0 and 0.9 (speed: kitten), the worst max Butteraugli goes above 2.0 (2.29328537, 2.158367634),
    and some files' max Butteraugli is above 1.6 (1.612426162, 1.772934079, 1.801316857, 1.94833111).

    At d 0.8 and 0.7 (speed: kitten), the worst max Butteraugli stays below 1.6 (1.556723833, 1.548859596),
    but one file reaches 1.694726825 at d 0.7.

    At d 0.6 and 0.5 (speed: kitten), the worst max Butteraugli stays below 1.3 (1.290958762, 0.998811423),
    and at d 0.5 (speed: kitten), all files stay below 1.0.

    d 1.0 ~ 0.9 // max Butteraugli limit 2.0+
    d 0.8 ~ 0.7 // max Butteraugli limit 1.6 ~ 1.7
    d 0.6 // max Butteraugli limit 1.3
    d 0.5 // max Butteraugli limit 1.0

    I think compressing some images at target distance d 1.0 or d 0.9 (speed: kitten) is too risky; using d 0.8 or 0.7 (speed: kitten) gives a more stable max Butteraugli.

    Is there perhaps a formula to calculate the expected max Butteraugli for a given target distance?
    Example: for d 1.0, expected max Butteraugli is 1.0 + 1.015 = 2.015
    https://gitlab.com/wg1/jpeg-xl/-/blo...zation.cc#L815
    constexpr float kMaximumDistanceIncreaseFactor = 1.015;
    Also, in this thread an older cjxl at a -d 1.0 target got max Butteraugli 1.468054.
    Could cjxl add a new flag to limit max Butteraugli, or a flag to set the expected max Butteraugli for a specific target distance?

  27. #19
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    26
    Thanks
    21
    Thanked 1 Time in 1 Post
    Hello, I'm research jxl-architecture, and have some question.1. in jxl-architecture recommend synthetic content use modular mode, natural content use vardct mode,
    but my non-photographic image get great compress in vardct mode,
    only dot art type image need use lossy modular mode to get great compress,
    use vardct mode in non-photographic image is bad idea?

    2. in cjxl input image is jpg and png, use vardct mode or modular mode,
    the djxl output format only use png (PPM/PFM), can't output visual lossless jpg file,

    only use cjxl in JPEG lossless transcode mode, and in djxl use -j flag,
    can success output visual lossless jpg file,

    What is reconstruction jpeg bitstream?, could you teach me about this funtion?
    probably reconstruction jpeg bitstream can visual lossless output jpg file?

  28. #20
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    88
    Thanks
    15
    Thanked 64 Times in 34 Posts
    Quote Originally Posted by Lithium Flower View Post
    Hello, I'm researching the jxl architecture, and have some questions.

    1. The jxl architecture recommends modular mode for synthetic content and VarDCT mode for natural content, but my non-photographic images compress very well in VarDCT mode, and only dot-art type images need lossy modular mode to compress well. Is using VarDCT mode on non-photographic images a bad idea?
    It depends on the kind of image. If it only uses a few colors (say, a palette image with 4 colors), then (lossless) modular mode will almost certainly be the best choice. For more 'natural' synthetic images, VarDCT will give great results. I don't really recommend lossy modular mode at all as it is right now – I think the only thing it might be good for is if you don't care about perceptual optimization but you want to do something lossy with some kind of maximum difference per pixel guarantee; in that case, the modular transforms can have better properties than the VarDCT approach. In nearly all cases, if you care about how the image actually looks visually, VarDCT will be the best choice (and in some specific cases of non-photographic images, lossless modular will be the best choice).

    2. With cjxl the input image is jpg or png, using VarDCT mode or modular mode, but djxl only outputs png (or PPM/PFM) and can't output a visually lossless jpg file.

    Only by using cjxl in JPEG lossless transcode mode, and using the -j flag in djxl, can I successfully output a visually lossless jpg file.

    What is the reconstruction JPEG bitstream? Could you teach me about this function? Can the reconstruction JPEG bitstream output a visually lossless jpg file?
    If you start from a JPEG image as input to cjxl, by default it will losslessly transcode the JPEG to JPEG XL. You can reconstruct the exact original JPEG file from this jxl file using djxl -j. It is not just the same image; it's actually the same file, byte for byte. For this, JPEG reconstruction data is stored in the jxl file format. This reconstruction data stores everything except the image data itself that is needed to reconstruct the exact original JPEG file (e.g. restart markers, Huffman tables, scan scripts, padding bits, app markers, etc.).

    If you don't want to do this, and want to recompress a JPEG image in a non-reversible way, you can either first convert the JPEG to PNG and use that as input, or call cjxl with the -j option (which for cjxl means "decode the JPEG to pixels and then encode those pixels", while the default cjxl behavior is (for jpeg input) "decode the JPEG file to DCT coefficients and store those coefficients losslessly, and also save reconstruction data to preserve the original JPEG file").

    My recommendation is to use the default behavior of cjxl, but if you have high-quality JPEG images and want to get them to smaller file sizes, then non-lossless transcoding (with cjxl -j) might be an option. In that case you of course can no longer decode the jxl image back to jpeg, at least not in a lossless way: djxl does allow you to use jpeg as an output format but by default that decodes the image to pixels and then encodes those pixels as a quality 95 jpeg, which is not a lossless operation.
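    The byte-for-byte claim is easy to verify yourself; a minimal sketch (assuming cjxl and djxl are installed, and following the -j usage described in this thread):

    ```shell
    # Losslessly transcode a JPEG to JPEG XL, reconstruct the original
    # JPEG with djxl -j, and check the round trip byte for byte with cmp.
    if command -v cjxl >/dev/null 2>&1 && command -v djxl >/dev/null 2>&1; then
      cjxl original.jpg original.jxl       # default: lossless JPEG transcode
      djxl -j original.jxl roundtrip.jpg   # reconstruct the exact original file
      if cmp -s original.jpg roundtrip.jpg; then
        echo "byte-for-byte identical"
      else
        echo "files differ"
      fi
    else
      echo "cjxl/djxl not found; skipping"
    fi
    roundtrip_done=yes
    ```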

  29. Thanks:

    Lithium Flower (8th January 2021)

  30. #21
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    820
    Thanks
    253
    Thanked 264 Times in 163 Posts
    Firefox support is slowly warming up: https://bugzilla.mozilla.org/show_bug.cgi?id=1539075
    Chrome is silent: https://bugs.chromium.org/p/chromium...ail?id=1056172
    Any other adoption initiatives?

    ps. GIMP: https://gitlab.gnome.org/GNOME/gimp/-/issues/4681

  31. #22
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    88
    Thanks
    15
    Thanked 64 Times in 34 Posts
    Quote Originally Posted by Jarek View Post
    Any other adoption initiatives?
    ImageMagick has added jxl support already (since version 7.0.10-54).

  32. Thanks:

    Jarek (7th January 2021)

  33. #23
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    820
    Thanks
    253
    Thanked 264 Times in 163 Posts

  34. #24
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    26
    Thanks
    21
    Thanked 1 Time in 1 Post
    @Jon Sneyers
    Thank you for your reply

    merge to new post

    @Jarek
    my XnViewMP (version 0.98.0, 64-bit, Dec 14 2020) can't decode jxl;
    I think they use an old libjpegxl...
    Last edited by Lithium Flower; 12th January 2021 at 17:29.

  35. #25
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    976
    Thanks
    266
    Thanked 350 Times in 221 Posts
    Quote Originally Posted by Lithium Flower View Post
    I still care a lot about max Butteraugli, so I ran cjxl at -d 1.0, 0.9, 0.8, 0.7, 0.6, 0.5 (speed: kitten)
    I will create an optimizer later (likely in 6-12 months or so) that is more focused on max Butteraugli -- right now it is mostly tuned for the p-norm, and the max is a second-class citizen. Max Butteraugli is a more precise indicator of the highest fidelity.

  36. Thanks:

    Lithium Flower (9th January 2021)

  37. #26
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    976
    Thanks
    266
    Thanked 350 Times in 221 Posts
    Quote Originally Posted by Lithium Flower View Post
    1. The jxl architecture recommends modular mode for synthetic content and VarDCT mode for natural content, but my non-photographic images compress very well in VarDCT mode, and only dot-art type images need lossy modular mode to compress well. Is using VarDCT mode on non-photographic images a bad idea?
    VarDCT mode is great at high quality (psychovisually lossless) on most if not all content, and largely made our near-lossless coding efforts redundant.

    Quote Originally Posted by Lithium Flower View Post
    2. With cjxl the input image is jpg or png, using VarDCT mode or modular mode, but djxl only outputs png (or PPM/PFM) and can't output a visually lossless jpg file
    PNG is the main pixel output format.

    Quote Originally Posted by Lithium Flower View Post
    What is the reconstruction JPEG bitstream? Could you teach me about this function? Can the reconstruction JPEG bitstream output a visually lossless jpg file?
    JPEG reconstruction is for byte-for-byte lossless JPEG transmission.

  38. Thanks:

    Lithium Flower (9th January 2021)

  39. #27
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    26
    Thanks
    21
    Thanked 1 Time in 1 Post
    Quote Originally Posted by Jyrki Alakuijala View Post
    I will create an optimizer later (likely in 6-12 months or so) that is more focused on max Butteraugli -- right now it is mostly tuned for the p-norm, and the max is a second-class citizen. Max Butteraugli is a more precise indicator of the highest fidelity.
    @ Jyrki Alakuijala
    Thank you for your reply

    If I want to increase fidelity in VarDCT mode (JPEG XL 0.2),
    is target distance -d 0.8 (speed: kitten) probably a good choice?

    -q 90 == -d 1.000 // visually lossless (side by side)
    -q 91 == -d 0.900

    -q 92 == -d 0.800
    -q 93 == -d 0.700

    -q 94 == -d 0.600
    -q 95 == -d 0.550 // visually lossless (flicker-test)

  40. #28
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    26
    Thanks
    21
    Thanked 1 Time in 1 Post
    I hit an issue in VarDCT mode. Using JPEG XL 0.2 at -d 0.8 (speed: kitten),
    some non-photographic (more natural synthetic) images are fine,
    but some blue and red areas have tiny artifacts (noise?).

    When using VarDCT mode on non-photographic (more natural synthetic) images,
    do I perhaps need other JPEG XL flags (filters) to get a great result?
    Attached Files

  41. #29
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    976
    Thanks
    266
    Thanked 350 Times in 221 Posts
    Quote Originally Posted by Lithium Flower View Post
    I hit an issue in VarDCT mode. Using JPEG XL 0.2 at -d 0.8 (speed: kitten),
    some non-photographic (more natural synthetic) images are fine,
    but some blue and red areas have tiny artifacts (noise?).

    When using VarDCT mode on non-photographic (more natural synthetic) images,
    do I perhaps need other JPEG XL flags (filters) to get a great result?
    Thank you. This is very useful.

    Yes, looks awful.

    I had an off-by-one in the smooth-area detection heuristic, so those areas were detected 4 pixels off. There will likely be an improvement for this in the next public release, as well as an overall reduction (5-10 %) of such artifacts from other heuristic improvements -- with some more contrast preserved in the middle frequency band (where other formats often do pretty badly).

    If you find these images in the next version, please keep sending samples. Consider submitting them to http://gitlab.com/wg1/jpeg-xl as an issue.

  42. Thanks:

    Lithium Flower (12th January 2021)

  43. #30
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    26
    Thanks
    21
    Thanked 1 Time in 1 Post
    Quote Originally Posted by Jyrki Alakuijala View Post
    Thank you. This is very useful.

    Yes, looks awful.

    I had an off-by-one in the smooth-area detection heuristic, so those areas were detected 4 pixels off. There will likely be an improvement for this in the next public release, as well as an overall reduction (5-10 %) of such artifacts from other heuristic improvements -- with some more contrast preserved in the middle frequency band (where other formats often do pretty badly).

    If you find these images in the next version, please keep sending samples. Consider submitting them to http://gitlab.com/wg1/jpeg-xl as an issue.
    @ Jyrki Alakuijala
    Thank you for your reply

    I have some question about Pixel Art, i using pingo png lossless -s0 to identify which image can lossless convert to pal8,
    some Pixel Art image can't lossless convert, need use vardct mode or modular mode,

    In my Pixel Art image test, vardct mode -q 80 Speed 8, lossy modular mode -Q 80 Speed: 9,
    vardct mode can't work very well(have tiny artifact),
    lossy modular mode work fine in Pixel Art image,
    but look like lossy modular mode isn't recommend use right now, which mode is best practice?

    And about lossy modular mode quality value(-Q luma_q), this quality value roughly match or like libjpeg quality?
    i don't know use lossy modular Q80 Speed 9 compare vardct q80 Speed 8 , is a fair comparison?


    About Pixel Art png pal8,
    I test Pixel Art png pal8(93 color)in lossless modular mode -q 100 -s 9 -g 2 -E 3,
    but if use png lossless before jxl lossless, will increase file size.

    jpeg xl lossless:
    People1.png 19.3kb => People1.jxl 19.3kb
    People1.png 19.3kb => People1.png 16.4kb (pingo png lossless -s9) => People1.jxl 18.1kb

    Webp Lossless:
    People1.png 19.3kb => People1.webp 16.5kb
    People1.png 19.3kb => People1.png 16.4kb (pingo png lossless -s9) => People1.webp 16.5kb

    For pixel-art png pal8 (colors near 256), jpeg xl lossless is best.
    // rgb24 605k force convert pal8 168k
    jpeg xl lossless: 135k
    Webp Lossless: 157k

    And I'm a little curious: is recommending JPEG XL 0.2 to my artist friends a good idea, or should I wait until the FDIS stage is finished?


    https://encode.su/threads/3544-JXL-r...ll=1#post68182
    Jon Sneyers:
    I don't really recommend lossy modular mode at all as it is right now – I think the only thing it might be good for is if you don't care about perceptual optimization but you want to do something lossy with some kind of maximum difference per pixel guarantee; in that case, the modular transforms can have better properties than the VarDCT approach. In nearly all cases, if you care about how the image actually looks visually, VarDCT will be the best choice (and in some specific cases of non-photographic images, lossless modular will be the best choice).
    https://www.reddit.com/r/jpegxl/comm..._webp_jpeg_xl/
    ScopeB:
    I can point out that JPEG XL (considering speed) shows good results on everything except pixel art and images with repeatable texture, color, or noise.
    Attached Files


