Page 2 of 2 FirstFirst 12
Results 31 to 56 of 56

Thread: JXL reference software version 0.2 released

  1. #31
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    976
    Thanks
    266
    Thanked 350 Times in 221 Posts
    Quote Originally Posted by Lithium Flower View Post
    @ Jyrki Alakuijala
    Thank you for your reply

    I have some questions about pixel art. I'm using pingo png lossless -s0 to identify which images can be losslessly converted to pal8;
    some pixel-art images can't be converted losslessly and need VarDCT mode or modular mode instead.

    In my pixel-art image tests (VarDCT mode -q 80, speed 8; lossy modular mode -Q 80, speed 9),
    VarDCT mode doesn't work very well (tiny artifacts),
    while lossy modular mode works fine on pixel art.
    But it looks like lossy modular mode isn't recommended right now, so which mode is best practice?
    I suspect that VarDCT will be the most appropriate mode for this -- we just need to fix the remaining issues.

    Palette mode and delta palette mode can also be useful for a wide range of pixel art images. They are also not yet tuned for best performance but show quite a lot of promise already.

    Quote Originally Posted by Lithium Flower View Post
    And about the lossy modular mode quality value (-Q luma_q): does this quality value roughly match libjpeg quality?
    My understanding is that -- for photographic images -- lossy modular mode provides a quality that is between libjpeg quality and VarDCT quality, but closer to libjpeg quality.

    Quote Originally Posted by Lithium Flower View Post
    I don't know whether comparing lossy modular -Q 80 speed 9 against VarDCT -q 80 speed 8 is a fair comparison?
    I always used --distance for encoding and VarDCT.

    Quote Originally Posted by Lithium Flower View Post
    About pixel art png pal8:
    I tested a pixel-art png pal8 (93 colors) in lossless modular mode with -q 100 -s 9 -g 2 -E 3,
    but if I run png lossless optimization before jxl lossless, the file size increases.

    jpeg xl lossless:
    People1.png 19.3kb => People1.jxl 19.3kb
    People1.png 19.3kb => People1.png 16.4kb (pingo png lossless -s9) => People1.jxl 18.1kb

    Webp Lossless:
    People1.png 19.3kb => People1.webp 16.5kb
    People1.png 19.3kb => People1.png 16.4kb (pingo png lossless -s9) => People1.webp 16.5kb

    For pixel-art png pal8 (color count near 256), jpeg xl lossless is best.
    // rgb24 605k force-converted to pal8: 168k
    jpeg xl lossless: 135k
    Webp lossless: 157k

    Quote Originally Posted by Lithium Flower View Post
    And I'm a little curious: is recommending jpeg xl 0.2 to my artist friend a good idea, or should I wait for the FDIS stage to finish?
    We have a final version of the format now, so in that sense it is ok to start using it. For practical use it may be nice to wait until tooling support for JPEG XL catches up.

    JPEG XL committee members did a final quality review in November/December with many metrics and manual review of images where the metrics disagreed. FDIS phase starts next week.

  2. Thanks:

    Lithium Flower (14th January 2021)

  3. #32
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    26
    Thanks
    21
    Thanked 1 Time in 1 Post
    @ Jyrki Alakuijala
    Thank you for your reply

    I checked my compressed non-photographic set by eye and found tiny artifacts in different images.
    I think with jpeg xl 0.2 -d 1.0 (speed: kitten), if a compressed image's maxButteraugli is above roughly 1.5 or 1.6,
    it probably has errors or tiny artifacts.

    A little curious: is there a plan or patch to improve non-photographic fidelity in the next jpeg xl public release (jpeg xl 0.3)?

    https://www.reddit.com/r/jpegxl/comm...l_portal_page/
    Sauerstoffdioxid:
    Somewhat offtopic, but I have a question related to synthetic images and squoosh's encoder build in particular.
    The graph on the website states that synthetic images have their own code path, yet when I try them in squoosh.app
    I can notice visible artefacts similar to normal jpeg compression. As I'm not particularly keen on building the actual
    encoder from source to verify, I'm assuming those artefacts stem from squoosh not using the synthetic image mode.
    Would that assumption be correct? And if it is, what kind of artefacts could I expect from compressed images
    using that mode?


    jonsneyers:
    You are correct: the current encoder heuristics only detect some limited kinds of synthetic image parts
    (text on a solid background).

    For synthetic images jxl has modular mode which can be used in a lossless way, or in various lossy ways
    (it has various transforms that can be useful for both lossless and lossy compression). One useful transform is
    delta palette, which can do what the usual png8 can do (just reduce the number of colors and compress that way),
    PLUS the option to have palette entries that are deltas w.r.t. a predictor - that makes it possible to do smooth gradients
    without dithering or without needing many palette entries.

    The kind of artifacts: no visible ones if done right


    There still is quite some room for encoder improvements in this area. I'm pretty sure we have all the possibilities
    in the bitstream to do great lossy compression of synthetic images, but the encoder is by default not fully using all
    the potential yet.
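    The delta-palette idea quoted above can be sketched in a few lines. This is a toy illustration of the concept (absolute palette entries plus delta entries applied to a predicted pixel), not libjxl's actual implementation; the left-neighbor predictor and the entry encoding are assumptions made for the sketch.

```python
# Toy sketch of delta palette: a palette entry is either an absolute RGB
# color or a delta applied to a predicted pixel. The left-neighbor predictor
# is a simplifying assumption; JPEG XL modular mode uses its own predictors.

def decode_delta_palette(indices, palette, width):
    """Decode a row-major list of palette indices into RGB tuples."""
    out = []
    for i, idx in enumerate(indices):
        kind, value = palette[idx]
        if kind == "abs":
            out.append(value)
        else:  # "delta": add the entry to the predicted (left-neighbor) pixel
            pred = out[-1] if i % width != 0 else (0, 0, 0)
            out.append(tuple(p + d for p, d in zip(pred, value)))
    return out

# A smooth 4-pixel gradient needs only two palette entries:
# one absolute color plus one reusable delta (no dithering needed).
palette = [("abs", (100, 100, 100)), ("delta", (10, 10, 10))]
pixels = decode_delta_palette([0, 1, 1, 1], palette, width=4)
print(pixels)  # [(100, 100, 100), (110, 110, 110), (120, 120, 120), (130, 130, 130)]
```

    This is why a smooth gradient does not need many palette entries or dithering: one delta entry can be reused along the whole ramp.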
    Attached Files
    Last edited by Lithium Flower; 14th January 2021 at 15:19.

  4. #33
    Member
    Join Date
    Jul 2018
    Location
    Russia
    Posts
    41
    Thanks
    0
    Thanked 4 Times in 3 Posts
    https://www.asteroidmission.org/osprey-recon-c-mosaic/
    Code:
    >cjxl.exe -q 100 -s 3 Bennu_Grayscale.png Bennu_Grayscale_s3.jxl 
      J P E G   \/ |
                /\ |_   e n c o d e r    [v0.2.0 | SIMD supported: AVX2,SSE4,Scalar]
    
    Read 20237x12066 image, 75.4 MP/s
    Encoding [Modular, lossless, falcon], 2 threads.
    Compressed to 112310632 bytes (3.680 bpp).
    20237 x 12066, 24.96 MP/s [24.96, 24.96], 1 reps, 2 threads.
    
    >cjxl.exe -q 100 -s 4 Bennu_Grayscale.png Bennu_Grayscale_s4.jxl
      J P E G   \/ |
                /\ |_   e n c o d e r    [v0.2.0 | SIMD supported: AVX2,SSE4,Scalar]
    
    Read 20237x12066 image, 75.6 MP/s
    Encoding [Modular, lossless, cheetah], 2 threads.
    Compressed to 109071829 bytes (3.573 bpp).
    20237 x 12066, 1.78 MP/s [1.78, 1.78], 1 reps, 2 threads.
    
    >cjxl.exe -q 100 -s 5 Bennu_Grayscale.png Bennu_Grayscale_s5.jxl
      J P E G   \/ |
                /\ |_   e n c o d e r    [v0.2.0 | SIMD supported: AVX2,SSE4,Scalar]
    
    Read 20237x12066 image, 74.7 MP/s
    Encoding [Modular, lossless, hare], 2 threads.
    terminate called after throwing an instance of 'std::bad_alloc'
      what():  std::bad_alloc
    
    ...
    ...
    ...
    
    >cjxl.exe -q 100 -s 9 Bennu_Grayscale.png Bennu_Grayscale_s9.jxl
      J P E G   \/ |
                /\ |_   e n c o d e r    [v0.2.0 | SIMD supported: AVX2,SSE4,Scalar]
    
    Read 20237x12066 image, 75.7 MP/s
    Encoding [Modular, lossless, tortoise], 2 threads.
    terminate called after throwing an instance of 'std::bad_alloc'
      what():  std::bad_alloc
    
    >dir
    16.01.2021  19:26       128 446 880 Bennu_Grayscale.png
    16.01.2021  19:24       732 538 945 Bennu_Grayscale.ppm
    16.01.2021  19:36       112 310 632 Bennu_Grayscale_s3.jxl
    16.01.2021  19:41       109 071 829 Bennu_Grayscale_s4.jxl
    
    >systeminfo | find "Memory"
    Total Physical Memory:     20,346 MB
    Available Physical Memory: 17,859 MB
    Virtual Memory: Max Size:  20,346 MB
    Virtual Memory: Available: 18,012 MB
    Virtual Memory: In Use:    2,334 MB
    Dvizh must go Dvizh, no open source - no Dvizh.

  5. #34
    Member
    Join Date
    Jul 2018
    Location
    Russia
    Posts
    41
    Thanks
    0
    Thanked 4 Times in 3 Posts

    Decode speed and memory consumption are even funnier:

    Code:
    >djxl.exe Bennu_Grayscale_s4.jxl tmp.ppm --num_threads=4
    Read 109071829 compressed bytes [v0.2.0 | SIMD supported: AVX2,SSE4,Scalar]
    Done.
    20237 x 12066, 19.71 MP/s [19.71, 19.71], 1 reps, 4 threads.
    Allocations: 7750 (max bytes in use: 7.050512E+09)
    
    >djxl.exe Bennu_Grayscale_s4.jxl tmp.ppm --num_threads=2
    Read 109071829 compressed bytes [v0.2.0 | SIMD supported: AVX2,SSE4,Scalar]
    Done.
    20237 x 12066, 10.50 MP/s [10.50, 10.50], 1 reps, 2 threads.
    Allocations: 7744 (max bytes in use: 7.041389E+09)
    
    >djxl.exe Bennu_Grayscale_s4.jxl tmp.ppm --num_threads=1
    Read 109071829 compressed bytes [v0.2.0 | SIMD supported: AVX2,SSE4,Scalar]
    Done.
    20237 x 12066, 8.80 MP/s [8.80, 8.80], 1 reps, 1 threads.
    Allocations: 7741 (max bytes in use: 7.036826E+09)
    
    >djxl.exe Bennu_Grayscale_s3.jxl tmp.ppm --num_threads=4
    Read 112310632 compressed bytes [v0.2.0 | SIMD supported: AVX2,SSE4,Scalar]
    Done.
    20237 x 12066, 35.77 MP/s [35.77, 35.77], 1 reps, 4 threads.
    Allocations: 7749 (max bytes in use: 7.053651E+09)
    
    >djxl.exe Bennu_Grayscale_s3.jxl tmp.ppm --num_threads=2
    Read 112310632 compressed bytes [v0.2.0 | SIMD supported: AVX2,SSE4,Scalar]
    Done.
    20237 x 12066, 19.57 MP/s [19.57, 19.57], 1 reps, 2 threads.
    Allocations: 7743 (max bytes in use: 7.044529E+09)
    
    >djxl.exe Bennu_Grayscale_s3.jxl tmp.ppm --num_threads=1
    Read 112310632 compressed bytes [v0.2.0 | SIMD supported: AVX2,SSE4,Scalar]
    Done.
    20237 x 12066, 17.63 MP/s [17.63, 17.63], 1 reps, 1 threads.
    Allocations: 7740 (max bytes in use: 7.039963E+09)

  6. #35
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    88
    Thanks
    15
    Thanked 64 Times in 34 Posts
    Yes, further reducing the memory footprint would be nice.

    For large images like this, it would also be useful to have an encoder that does not try to globally optimize the entire image, and a cropped decoder. These are all possible, but quite some implementation effort, and it is not the main priority right now — getting the software ready for web browser integration is a bigger priority.

  7. Thanks:

    Lithium Flower (18th January 2021)

  8. #36
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    26
    Thanks
    21
    Thanked 1 Time in 1 Post
    I have a question about butteraugli_jxl.
    Using cjxl -d 1.0 -j (kitten) and -m -Q 90 -j (speed: tortoise) on a non-photographic (more natural synthetic) image, jpeg q99 yuv444:
    -d 1.0 still leaves some tiny artifacts in this image, like the issue in my previous post.

    I used butteraugli_jxl to check the compressed image, and it looks like butteraugli_jxl didn't find those tiny artifacts (maxButteraugli 1.13).
    Are those tiny artifacts an issue, or expected error in VarDCT mode?

    {
      "01_m_Q90_s9_280k.png": {
        "maxButteraugli": "1.5102199316",
        "6Norm": "0.640975",
        "12Norm": "0.862382"
      },
      "01_vardct_d1.0_s8_234k.png": {
        "maxButteraugli": "1.1368366480",
        "6Norm": "0.610878",
        "12Norm": "0.734107"
      }
    }
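    A small helper along the lines of the check described in this thread: parse butteraugli_jxl JSON output and flag images whose maxButteraugli exceeds a cutoff. The 1.5 cutoff is the rule of thumb from these posts, not an official limit, and the embedded JSON is a trimmed version of the report above.

```python
import json

# Flag images whose maxButteraugli exceeds a cutoff. Per this thread,
# scores above ~1.5 tend to correlate with visible tiny artifacts,
# while 1.0 and below is good quality; the cutoff is a rule of thumb.
report = json.loads("""
{
  "01_m_Q90_s9_280k.png":       {"maxButteraugli": "1.5102199316"},
  "01_vardct_d1.0_s8_234k.png": {"maxButteraugli": "1.1368366480"}
}
""")

def flag_risky(report, cutoff=1.5):
    """Return the names of images scoring above the cutoff, sorted."""
    return sorted(name for name, stats in report.items()
                  if float(stats["maxButteraugli"]) > cutoff)

print(flag_risky(report))  # ['01_m_Q90_s9_280k.png']
```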


    And since VarDCT mode (-d and -q) can mix two codecs in a single image,
    can lossy modular mode (-m -Q) also use both modes in a single image?
    Or does it, like fuif lossy, only use reversible transforms (YCoCg, reversible Haar-like squeezing)?

    In my non-photographic (more natural synthetic) set, lossy modular mode works very well,
    but VarDCT mode can produce smaller files.

    JPEG XL is a combination of Google's PIK (~VarDCT mode) and FUIF (~Modular mode).
    You'll be able to mix both codecs for a single image, e.g. VarDCT for the photo parts,
    Modular for the non-photo parts and to encode the DC (1:8 ) in a super-progressive way.
    FUIF uses reversible transforms (YCoCg, reversible Haar-like squeezing);
    optional quantization is the only source of loss.
    Attached Files

  9. #37
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    976
    Thanks
    266
    Thanked 350 Times in 221 Posts
    Quote Originally Posted by Lithium Flower View Post
    I used butteraugli_jxl to check the compressed image, and it looks like butteraugli_jxl didn't find those tiny artifacts (maxButteraugli 1.13).
    Are those tiny artifacts an issue, or expected error in VarDCT mode?
    Max butteraugli 1.0 and below is good quality. Butteraugli 1.1 is more 'okayish' rather than 'solid ok'.

    At max butteraugli of 0.6 I have never yet been able to see a difference.

    Butteraugli scores are calibrated at a viewing distance of 900 pixels; if you zoom a lot, you will see more.

    If you obviously disagree with butteraugli (when using max brightness at 200 lumen or less and with a viewing distance of 900 pixels or more), file a bug in the jpeg xl repository and I'll consider adding such cases into the butteraugli calibration corpus.

    There is some consensus that butteraugli has possibly been insufficiently sensitive to ringing artefacts. 2-3 years ago I made some changes for it to be less worried about blurring in comparison to ringing artefacts, but those adjustments were somewhat conservative.

    Please keep writing about your experiences; this is very useful for me in deciding where to invest effort in jpeg xl and butteraugli.

  10. Thanks:

    Lithium Flower (19th January 2021)

  11. #38
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    26
    Thanks
    21
    Thanked 1 Time in 1 Post
    Quote Originally Posted by Jyrki Alakuijala View Post
    Max butteraugli 1.0 and below is good quality. Butteraugli 1.1 is more 'okayish' rather than 'solid ok'.
    At max butteraugli of 0.6 I have never yet been able to see a difference.
    @ Jyrki Alakuijala
    Thank you for your reply

    It looks like jpeg xl 0.2 -d 1.0 (speed kitten) is still in a bit of a risk zone:
    in my tests some images at -d 1.0 or 0.9 get maxButteraugli 2.0+,
    while -d 0.8 can limit maxButteraugli below 1.6 (1.55).

    Per the previous post, maxButteraugli below 1.3 ~ 1.4 can stay in the safe zone.
    Could you teach me about the target distances -d 0.9 (q91), 0.8 (q92), 0.7 (q93), 0.6 (q94)?
    Do those distances have a special meaning, like 1.0 and 0.5 do?

    -d 1.0 | q 90 visually lossless (side by side),
    -d 0.5 | q 95 visually lossless (flicker-test),
    https://docs.google.com/spreadsheets...zCQ/edit#gid=0
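    The distance/quality pairs listed above (d 1.0 = q 90 down to d 0.5 = q 95) all fit a simple linear rule. A sketch, with the caveat that this mapping is inferred only from the pairs quoted in this post for the 0.5-1.0 range, and is not an official cjxl formula:

```python
# Map cjxl --distance to the rough libjpeg-like quality quoted above.
# Assumption: the pairs d=1.0 -> q90 ... d=0.5 -> q95 lie on the line
# q = 100 - 10*d; inferred from this post, not an official cjxl formula.

def distance_to_quality(d):
    if not 0.5 <= d <= 1.0:
        raise ValueError("mapping only inferred for 0.5 <= d <= 1.0")
    return 100 - 10 * d

for d in (1.0, 0.9, 0.8, 0.7, 0.6, 0.5):
    print(f"-d {d} ~ q {distance_to_quality(d):.0f}")
```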

  12. #39
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    26
    Thanks
    21
    Thanked 1 Time in 1 Post
    Quote Originally Posted by Jyrki Alakuijala View Post
    If you obviously disagree with butteraugli (when using max brightness at 200 lumen or less and with a viewing distance of 900 pixels or more), file a bug in the jpeg xl repository and I'll consider adding such cases into the butteraugli calibration corpus.

    There is some consensus that butteraugli has possibly been insufficiently sensitive to ringing artefacts. 2-3 years ago I made some changes for it to be less worried about blurring in comparison to ringing artefacts, but those adjustments were somewhat conservative.

    Please keep writing about your experiences; this is very useful for me in deciding where to invest effort in jpeg xl and butteraugli.
    @ Jyrki Alakuijala

    About butteraugli and tiny ringing artefacts: in the previous post's sample image eyes_have tiny artifacts2,
    there are tiny ringing artefacts on the character's eyes. Those artefacts aren't easy to see,
    but if I compare the modular mode file (-m -Q 90, speed tortoise) with the VarDCT mode file,
    the artefacts make the viewing experience a little uncomfortable.

    The modular mode file doesn't produce tiny artefacts, but probably has other tiny errors;
    the VarDCT mode file compresses very well (file size), but produces tiny artefacts in some areas.
    For sample image eyes_have tiny artifacts2, jpeg xl 0.2 -d 0.5 (speed: kitten) is needed
    to avoid the tiny ringing artefact issue.

    I guess tiny ringing artefacts in photographic images are probably very hard to see,
    so butteraugli will assess such an image as fine or as having only tiny error,
    but in a non-photographic image, if some content area has tiny ringing artefacts,
    they are very easy to see and a little uncomfortable visually.

    It is like chroma subsampling: photographic and non-photographic images are different situations.
    Some photographic images can use chroma subsampling and still look good,
    but for non-photographic images chroma subsampling is always a bad idea.

    // eyes_have tiny artifacts, have tiny ringing artefacts on character eyes.
    https://encode.su/threads/3544-JXL-r...ll=1#post68264

    // eyes_have tiny artifacts2, have tiny ringing artefacts on character eyes.
    https://encode.su/threads/3544-JXL-r...ll=1#post68293

    // Pixel Art in vardct mode, have tiny ringing artefacts.
    https://encode.su/threads/3544-JXL-r...ll=1#post68258
    Last edited by Lithium Flower; 19th January 2021 at 17:26.

  13. #40
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    26
    Thanks
    21
    Thanked 1 Time in 1 Post
    @ Jyrki Alakuijala

    A little curious: is there a plan or patch to improve non-photographic fidelity (quality)
    in the next jpeg xl public release (jpeg xl 0.3)?
    I'm looking forward to using .jxl to replace .jpg, .png, and .webp files.

  14. #41
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    976
    Thanks
    266
    Thanked 350 Times in 221 Posts
    Quote Originally Posted by Lithium Flower View Post
    @ Jyrki Alakuijala

    A little curious: is there a plan or patch to improve non-photographic fidelity (quality)
    in the next jpeg xl public release (jpeg xl 0.3)?
    I'm looking forward to using .jxl to replace .jpg, .png, and .webp files.
    I have landed four improvements on this topic -- however not yet in the public gitlab. I have not checked if they are effective for your use case, and suspect that some more iterations are needed.

    Please keep sending these examples; they are very inspiring.

  15. Thanks:

    Lithium Flower (29th January 2021)

  16. #42
    Member
    Join Date
    Jan 2021
    Location
    Israel
    Posts
    1
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Any chance of an updated mingw64 build? (The sources were updated about a day ago.)

    Quote Originally Posted by Scope View Post
    Some comparison results (since the lossless mode has not changed from the last build): now lossless Jpeg XL is the densest format (although I noticed a decrease in efficiency at faster speeds compared to some older builds)
    Lossless Image Formats Comparison (Jpeg XL, AVIF, WebP 2, WebP, FLIF, PNG, ...) v1.20
    https://docs.google.com/spreadsheets/d/1ju4q1WkaXT7WoxZINmQpf4ElgMD2VMlqeDN2DuZ6yJ8/


  17. #43
    Member
    Join Date
    Jul 2018
    Location
    Russia
    Posts
    41
    Thanks
    0
    Thanked 4 Times in 3 Posts
    https://cloudinary.com/blog/legacy_a...al_image_codec

    During the 1990s, digital cameras replaced analog ones and, given the limited memory capacities of that era, JPEG became the standard format for photography, especially for consumer-grade cameras.
    JPEG moves the needle by a lot, going from uncompressed or weak, lossless compression— state of the art in the early 1980s—to an actual lossy codec, dramatically reducing file sizes and making itself a clear no-brainer for adoption. To put things in perspective, that meant waiting for five seconds instead of one minute for an image to load, and storing 20 to 50 images instead of only one or two on a flash card. Basically, JPEG enabled the use cases of web images and digital cameras, which would be impracticable without JPEG.
    There are now UFS 3+ and NVMe / SATA 3 SSDs that are fast, cheap, and big enough to handle lossless imaging. The lossy format looks like a thing from the past.

    Hence, to convert them to JPEG XLs, you need not decode JPEGs to pixels. Instead, depict the internal JPEG representation (DCT coefficients) directly in JPEG XL. Even though, in this case, only the subset of JPEG XL that corresponds to JPEG is leveraged, the converted images would be 20 percent smaller. Crucially, those JPEG XL files represent exactly the same images as the original JPEG files.
    Efficient lossless transcoding of existing JPEGs is cool. I want to see this feature in... PNG.

  18. #44
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    630
    Thanks
    288
    Thanked 252 Times in 128 Posts
    Quote Originally Posted by e8c View Post
    Efficient lossless transcoding of existing JPEGs is cool. I want to see this feature in... PNG.
    Might be possible after integrating image compressors (currently planned: FLIF, webp, JPEG XL) into Precomp. Depending on which one gives the best result, new file size will be a bit larger than that because of the zlib reconstruction data, but the original PNG file can be restored bit-to-bit-lossless.

    As a synergetic effect side project, zlib reconstruction data and other PNG metadata could be stored as binary metadata in the new format. For FLIF, I'm quite sure that arbitrary metadata can be stored, but I'd expect this to be possible in the two other formats, too.
    This way, full lossless .png<->.png,.flif,.webp,.jxl would be possible, the resulting files would be viewable and the original PNG file could be restored.
    A checksum of the PNG would have to be stored to prevent/invalidate restoration attempts after editing the new file, because the original PNG obviously can't be restored after altering the image data.
    The size of reconstruction data differs depending on what was used to create the PNG, a rough guess would be: if the image data can be compressed to 50% of the PNG size, resulting file including restoration data would usually have a size between 50% and 75% of the PNG - though edge cases with >= 100% are possible, too (e.g. compressing random noise and using a PNG optimizer).
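    The size estimate above is simple arithmetic; a sketch where both fractions are illustrative assumptions rather than measurements:

```python
# Rough size model for PNG -> recompressed image + zlib reconstruction data,
# following the estimate above: if the image data compresses to 50% of the
# PNG size, the total usually lands between 50% and 75% of the PNG.
# Both fractions below are illustrative assumptions, not measurements.

def recompressed_size(png_bytes, image_fraction, recon_fraction):
    """Estimated output size, given both components as fractions of the PNG."""
    return png_bytes * (image_fraction + recon_fraction)

png_bytes = 1_000_000
best  = recompressed_size(png_bytes, 0.50, 0.00)  # negligible reconstruction data
worst = recompressed_size(png_bytes, 0.50, 0.25)  # heavy zlib reconstruction data
print(best, worst)  # 500000.0 750000.0
```

    The edge cases mentioned above (e.g. random noise through a PNG optimizer) correspond to image_fraction + recon_fraction reaching or exceeding 1.0.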

    Of course, integration from the other side would also be possible, by integrating preflate and PNG parsing into some webp/jpegxl transcoding side project.
    Last edited by schnaader; 24th January 2021 at 14:02.
    http://schnaader.info
    Damn kids. They're all alike.

  19. Thanks:

    Mike (24th January 2021)

  20. #45
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    88
    Thanks
    15
    Thanked 64 Times in 34 Posts
    Bitexact PNG file reconstruction gets a bit tricky because unlike JPEG which uses just Huffman, PNG uses Deflate which has way more degrees of freedom in how to encode. Emulating popular existing simple png encoders could help for most cases encountered in practice, but comes at the cost of having to include those encoders in the reconstruction method.

    To be honest, I am more interested in non-bitexact recompression for png, which is still lossless in terms of the image data and metadata.

    For PSD (Photoshop) it might be worthwhile to have a bitexact recompression though - at least for the common case where the image data itself is uncompressed or PackBits (RLE). It shouldn't be hard to get very good recompression ratios on those, and the recompressed jxl file would be viewable in anything that supports jxl, which will hopefully soon be more than what supports psd viewing.

  21. Thanks (2):

    Mike (24th January 2021),schnaader (24th January 2021)

  22. #46
    Member
    Join Date
    Jul 2018
    Location
    Russia
    Posts
    41
    Thanks
    0
    Thanked 4 Times in 3 Posts
    (Khmm ...)

    "this feature in ... PNG" means "this feature in ... LibPNG": transcoding JPG -> PNG, result PNG smaller than original JPG.

    (Just for clarity.)

  23. Thanks:

    schnaader (24th January 2021)

  24. #47
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    88
    Thanks
    15
    Thanked 64 Times in 34 Posts
    I guess you could do that, but it would be a new kind of PNG that wouldn't work anywhere. Not everything uses libpng, and not everything will use the latest version. Revising or adding features to an already deployed format is just as hard as introducing a new format.

  25. #48
    Member
    Join Date
    Jul 2018
    Location
    Russia
    Posts
    41
    Thanks
    0
    Thanked 4 Times in 3 Posts
    Quote Originally Posted by Jon Sneyers View Post
    ... not everything will use the latest version. Revising or adding features to an already deployed format is just as hard as introducing a new format.
    Sounds like "... not everything will use the latest version of Linux Kernel. Revising or adding features to an already deployed Linux is just as hard as introducing a new Operating System."

  26. #49
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    88
    Thanks
    15
    Thanked 64 Times in 34 Posts
    The thing is, it's not just libpng that you need to update. Lots of software doesn't use libpng, but just statically links some simple png decoder like lodepng. Getting all png-decoding software upgraded is way harder than just updating libpng and waiting long enough. I don't think it's a substantially easier task than getting them to support, say, jxl.

  27. #50
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    976
    Thanks
    266
    Thanked 350 Times in 221 Posts
    Quote Originally Posted by e8c View Post
    Efficient lossless transcoding of existing JPEGs is cool. I want to see this feature in... PNG.
    Lode Vandevenne (one of the authors of JPEG XL) has hacked up a tool called grittibanzli in 2018. It recompresses gzip streams using more efficient methods (brotli) and can reconstruct the exactly same gzip bit stream back. For PNGs you can get ~10 % denser representation while being able to recover the original bit exact.

    I don't think people should be using this when there are other options like just using stronger formats for pixel exact lossless coding:
    • PNG recompression tools (like Pingo or ZopfliPNG),
    • br-content-encoded uncompressed but filtered PNGs,
    • WebP lossless, and
    • JPEG XL lossless.
    Last edited by Jyrki Alakuijala; 26th January 2021 at 16:02.

  28. #51
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    26
    Thanks
    21
    Thanked 1 Time in 1 Post
    About the non-photographic tiny ringing artefact and noise issue in VarDCT mode:
    in jpeg xl 0.2 VarDCT mode,
    using cjxl -d 0.8 -s 9 --epf=3 can reduce tiny ringing artefacts; I think -d 0.1 ~ 0.3 is needed for a better result.

    // eyes_have tiny artifacts, have tiny ringing artefacts on character eyes.
    https://encode.su/threads/3544-JXL-r...ll=1#post68264

    // eyes_have tiny artifacts2, have tiny ringing artefacts on character eyes.
    https://encode.su/threads/3544-JXL-r...ll=1#post68293

    // Pixel Art in vardct mode, have tiny ringing artefacts.
    https://encode.su/threads/3544-JXL-r...ll=1#post68258
    For the noise issue, cjxl -d 0.1 ~ 0.3 -s 9 can reduce the noise.

    For png pal8 lossless, cjxl -q 100 -s 9 -g 2 -E 3 -I 1 gives the best compression.
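    The cjxl recipes reported in this post, assembled as argument lists (for use with e.g. subprocess). The flag sets are exactly those quoted in this thread for cjxl v0.2/0.3; they are not guaranteed to be optimal or stable across later releases.

```python
# Build cjxl command lines for the recipes quoted above (cjxl v0.2/0.3 flags;
# they may change in later releases).

def cjxl_pal8_lossless(src, dst):
    """Best lossless recipe reported here for png pal8 sources."""
    return ["cjxl", "-q", "100", "-s", "9", "-g", "2", "-E", "3", "-I", "1", src, dst]

def cjxl_reduce_ringing(src, dst):
    """VarDCT recipe reported to reduce tiny ringing artefacts."""
    return ["cjxl", "-d", "0.8", "-s", "9", "--epf=3", src, dst]

def cjxl_reduce_noise(src, dst, distance=0.2):
    """VarDCT recipe reported to reduce noise (-d 0.1 ~ 0.3)."""
    return ["cjxl", "-d", str(distance), "-s", "9", src, dst]

print(cjxl_pal8_lossless("in.png", "out.jxl"))
```

    Run one with, say, subprocess.run(cjxl_pal8_lossless("in.png", "out.jxl"), check=True), assuming cjxl is on the PATH.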

  29. #52
    Member
    Join Date
    Jul 2018
    Location
    Russia
    Posts
    41
    Thanks
    0
    Thanked 4 Times in 3 Posts
    Hi, https://mobile.twitter.com/jonsneyers

    Are IPSO (International PDF Selling Organization) members (or co-authors of JXL standard) receiving royalties for sold PDFs?

    https://www.iso.org/standard/77977.html
    https://www.iso.org/standard/80617.html

    [y/n]:

  30. #53
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    88
    Thanks
    15
    Thanked 64 Times in 34 Posts
    Quote Originally Posted by e8c View Post
    Hi, https://mobile.twitter.com/jonsneyers

    Are IPSO (International PDF Selling Organization) members (or co-authors of JXL standard) receiving royalties for sold PDFs?

    https://www.iso.org/standard/77977.html
    https://www.iso.org/standard/80617.html

    [y/n]:
    No, spec writers don't receive royalties. The money you pay when buying a spec just goes to ISO, I suppose to pay its support staff...

  31. #54
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    26
    Thanks
    21
    Thanked 1 Time in 1 Post
    @Jyrki Alakuijala

    Hello, and thank you very much.
    In jpeg xl 0.3 the tiny ringing artefact and noise issues have improved somewhat for non-photographic images
    (better than jpeg xl 0.2), but jpeg xl 0.3 still has some tiny issues with non-photographic images.

    If a non-photographic image features high-contrast sharp edges and smooth areas,
    some tiny artefacts and noise are visible in VarDCT mode at -d 1.0; jpeg xl 0.3 -d 0.5 is better than jpeg xl 0.2,
    but the tiny issues remain, and --epf=3 can't fix this.

    Probably the loop filter / adaptive quantization has some bad effect when compressing non-photographic images?
    (Source: a jpeg xl Discord member.)
    And in my tests, -d 1.0 (0.5) -s 7 can reduce some of the issues; -s 8 is worse than -s 7.
    Attached Files

  32. #55
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    26
    Thanks
    21
    Thanked 1 Time in 1 Post
    @Jyrki Alakuijala

    In a previous discussion (Discord) I reported this issue to Jon Sneyers.
    I think VarDCT probably can't handle high-contrast sharp edges next to smooth gradient areas very well;
    is this situation probably a VarDCT process limit, not an issue?

  33. #56
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    976
    Thanks
    266
    Thanked 350 Times in 221 Posts
    Quote Originally Posted by Lithium Flower View Post
    @Jyrki Alakuijala

    In a previous discussion (Discord) I reported this issue to Jon Sneyers.
    I think VarDCT probably can't handle high-contrast sharp edges next to smooth gradient areas very well;
    is this situation probably a VarDCT process limit, not an issue?
    I believe we can make that 10x better or more. I absolutely don't believe we are hitting the wall yet.

    My three main improvement hopes there: integral transform selection, adaptive quantization, and edge-preserving-filter modulation. I'm actually slightly embarrassed by the heuristics in all three of these categories.

    1. Integral transform selection: that code is weird and ineffective. Luca made a good logical effort on it. Then I used 'machine learning' on it, and the current code performs better against butteraugli, but doesn't really make sense. I suspect it has found dark corners of butteraugli where masking helps too much. I have learned more about butteraugli's bad behaviours just by trying to train a neural codec against butteraugli. Fixes from that effort may fix the integral transform selection heuristics, too.

    2. Adaptive quantization can fix anything cheaply iff it is happening rarely. We just need to notice that a bad thing is happening and throw more bits at it. JPEG XL uses up to 255 levels of adaptive quantization at the highest qualities.

    3. Edge-preserving filtering: if an area was smooth in the original, it should be smooth in the decompressed version. Cranking up the smoothing may improve such things. An annoying artefact from too much smoothing is a more pixelized look, so we may need to find a middle ground here. I try cranking up the smoothing as a last resort, since the other approaches are kinder to the original quality and 'fidelity'.


