Results 1 to 18 of 18

Thread: jpg, png, webp encoder and butteraugli distance

  1. #1
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    20
    Thanks
    19
    Thanked 1 Time in 1 Post

    jpg, png, webp encoder and butteraugli distance

Sorry, my English is not good.

Hello. I compress a lot of non-photographic images (Japanese anime and manga).
For PNG rgb24 I use mozjpeg lossy JPEG at q95~q99; for PNG rgba32 I use cwebp WebP near-lossless 60/80,
plus pingo lossy PNG with pngfilter=100. I have run into some problems with lossy cwebp WebP.

I use butteraugli to check compressed image quality, but I have some questions about butteraugli distance.
I would appreciate some hints or suggestions on the questions below; thank you very much.

My image set is like this image set (tabs: Anime, AW, Manga, Pixiv).

1. butteraugli vs. butteraugli jpeg xl: different butteraugli distances

I check images with both butteraugli and butteraugli xl. According to *Reference 01,
butteraugli's XYB is likely more accurate,
but for some images butteraugli reports a good distance (1.3) while butteraugli xl reports a bad one (near 2.0),
and for other images butteraugli rejects what butteraugli xl rates as a good distance (1.3).
How should I correctly interpret butteraugli distance and the butteraugli xl 3-norm?


2. butteraugli safe range vs. great range

To compress PNG rgba32 images, my process first compresses with near-lossless 60 and pngfilter=100;
if the compressed image is not below the safe butteraugli distance, it recompresses with near-lossless 80.

I collected Jyrki Alakuijala's comments into a table (see *Reference 02):
'1.0 ~ 1.3 definitely works as designed', '1.0 ~ 1.6 a value below 1.6 is great'.
If I want my compressed images to have great quality, should I target the 1.0 ~ 1.3 range or the 1.0 ~ 1.6 range? Please let me know if I have made a mistake.

pngfilter=100 butteraugli distance => [0.6 ~ 1.0, 1.3 ~ 1.6], [1.7]
webp near-lossless 60 butteraugli distance => [0.4 ~ 1.0, 1.3 ~ 1.6], [2.1]
near-lossless 60 [2.1] => near-lossless 80 [1.3]

    cwebp & pingo_rc3 command:
    pingo_rc3.exe -pngfilter=100 -noconversion -nosrgb -nodate -sa "%%A"
    cwebp.exe -mt -m 6 -af -near_lossless 60 -alpha_filter best -progress "%%A" -o "%%~nA.webp"


3. non-photographic images and JPEG encoder quality suggestions

To compress PNG rgb24 images, my process first compresses at quality 95;
if the compressed image is not below the safe butteraugli distance, it recompresses at a higher quality.

On my PNG rgb24 image set, JPEG quality 95 does not get a good butteraugli distance:

jpeg quality 95 butteraugli distance => [1.5 ~ 1.6], [1.7 ~ 2.5]
jpeg quality 95 butteraugli xl distance => [1.2 ~ 1.6], [1.7 ~ 2.2]

But cjpeg's usage.txt says:
'specifying a quality value above about 95 will increase the size of the compressed
file dramatically, and while the quality gain from these higher quality values is measurable'
https://github.com/mozilla/mozjpeg/b...usage.txt#L115
If I compress non-photographic images to JPEG and want them near psychovisually lossless, is it necessary to
use quality above 95 for those images?
Or, per *Reference 03, is butteraugli possibly too sensitive on some non-photographic images?

    mozjpeg command:
    cjpeg.exe -optimize -progressive -quality 95 -quant-table 3 -sample 1x1 -outfile "mozjpeg\%%~nA.jpg" "%%A"
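The two-pass process above (encode at q95, re-encode at a higher quality if the butteraugli distance is not below the safe value) amounts to a small search over a quality ladder. As a sketch, with `encode_and_measure` as a hypothetical callback that would run cjpeg and butteraugli and return the distance for a given quality:

```python
def pick_quality(encode_and_measure, qualities, max_distance):
    """Return the first (lowest) quality in the ladder whose butteraugli
    distance is acceptable, or None if even the highest quality fails."""
    for q in qualities:  # e.g. [95, 96, 97, 98]
        if encode_and_measure(q) <= max_distance:
            return q
    return None
```

For example, with measured distances {95: 1.8, 96: 1.55, 97: 1.4} and a limit of 1.6, the search would return 96.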

Update 2020-09-29:
I tried the jpeg xl sjpeg path; sjpeg gets a great butteraugli distance at quality 96,
but it looks like the sjpeg path does not use jpeg xl's VarDCT or modular mode?

Size:
png - 12 MB
mozjpeg -q 97 + jpegtran progressive: 4.46 MB
jpegxl sjpeg -q 96 + jpegtran progressive: 4.0 MB

    cjpegxl command:
cjpegxl.exe "%%A" "xl\%%A" --jpeg1 --jpeg_quality=96 --progressive

4. webp lossy q100 and butteraugli distance

I tested another non-photographic image set with WebP lossy q100, but some images get a larger butteraugli distance.
Could WebP lossy's 4:2:0 subsampling and fancy upsampling be producing larger errors in some areas?

I also tested WebP's lossy alpha (alpha_q) feature, which increases the butteraugli distance,
but I don't understand why lossy alpha affects butteraugli distance at all.

webp lossy q100 and lossy alpha:
q100.png 2.013666
q100_lossy_alpha 80.png 2.035022
q100_lossy_alpha 50.png 2.099735

webp lossy q100 butteraugli distance => [1.2 ~ 1.6], [1.8 ~ 2.3], [3.1, 4.4, 5.5, 10.8]
dssim => [0.000150 ~ 0.000749]

cwebp command:
cwebp.exe -mt -m 6 -q 100 -sharp_yuv -pre 4 -af -alpha_filter best -progress "%%A" -o "%%~nA.webp"


I am creating some tables and quality-test data and will upload them later. Thank you very much.

==================================================================================================
    Reference Area (From Jyrki Alakuijala Comment)
    *Reference 01
    From Jyrki Alakuijala Comment:
Butteraugli vs. butteraugli (jpeg xl):
butteraugli's XYB is likely more accurate,
because of its asymptotic log behaviour for high intensity values (instead of raising to a power).

jpeg xl's XYB modeling is going to be substantially faster to compute,
because gamma is exactly three there.

    *Reference 02
    From Jyrki Alakuijala Comment:
    0.6 ~ 0.7 // most critical use

    1.0 // normal use

    1.0 ~ 1.3 // definitely works as designed

    1.0 ~ 1.6 // A value below 1.6 is great

    1.6 ~ 2.1 // a value below 2.1 okayish

2.1+ // Above 2.1 there is likely a noticeable artefact in an in-place flip test.

    2.5+ // not necessarily psychovisually relevant and fair

4.0+ /* The non-linearities sit near the just-noticeable boundary in scale. At larger errors (4.0+)
butteraugli becomes less useful. Current versions of butteraugli only extrapolate these values
as multiples of just-noticeable-difference, but the human visual system is highly non-linear and
large extrapolation doesn't bring much value. */

    https://github.com/google/butteraugli/issues/22

    *Reference 03
    From Jyrki Alakuijala Comment:
Butteraugli is a lot more sensitive to lines (displaced, emerging, or removed) than any other visual measure.
    Last edited by Lithium Flower; 27th December 2020 at 10:25.

  2. #2
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    967
    Thanks
    264
    Thanked 346 Times in 218 Posts
    Quote Originally Posted by Lithium Flower View Post

1. butteraugli vs. butteraugli jpeg xl: different butteraugli distances

I check images with both butteraugli and butteraugli xl. According to *Reference 01,

for some images butteraugli reports a good distance (1.3) while butteraugli xl reports a bad one (near 2.0),
and for other images butteraugli rejects what butteraugli xl rates as a good distance (1.3).
How should I correctly interpret butteraugli distance and the butteraugli xl 3-norm?

    In the very highest quality area (q95 to q99 jpeg), you need to use a higher norm than 3-norm. You could use the maximum butteraugli score or something like the 12th norm.

    If you compress around q60-q75, then the 3rd norm may be appropriate. Even there I have some signals that 6th norm is more appropriate.

PSNR as a visual error metric is MSE-based and is homologous to using the 2nd norm, and that is too low a norm for aggregating visual errors. Consider the 3rd norm a compromise between tradition and what actually works. Lower norms are also easier for machine-learning approaches to optimize for.

Also note that butteraugli creates a mix of the X, 2*X and 4*X p-norms for a given X, i.e., butteraugli's 3-norm is actually an equal mix of the 3-norm, 6-norm and 12-norm. In your use you'd likely need the 12-, 24- and 48-norms, and you can get them by setting --error_pnorm=12.
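The mixed-norm aggregation described here can be sketched as follows. This is a simplified model assuming a per-pixel error map as input; the actual jpeg xl implementation differs in details:

```python
def mixed_pnorm(error_map, p=3):
    """Aggregate a per-pixel error map with an equal mix of the p-, 2p-
    and 4p-norms, mirroring butteraugli's N, 2N, 4N mixture (so p=3
    mixes the 3-, 6- and 12-norms). A sketch, not the real code."""
    n = len(error_map)
    norms = []
    for q in (p, 2 * p, 4 * p):
        # mean-normalized q-norm of the error map
        norms.append((sum(e ** q for e in error_map) / n) ** (1.0 / q))
    return sum(norms) / len(norms)
```

The higher-order terms make a single bad region dominate the score, while the lowest term still rewards improvements spread over large areas.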

    Quote Originally Posted by Lithium Flower View Post

    2. butteraugli safe area or great area

To compress PNG rgba32 images, my process first compresses with near-lossless 60 and pngfilter=100;
if the compressed image is not below the safe butteraugli distance, it recompresses with near-lossless 80.

I collected Jyrki Alakuijala's comments into a table (see *Reference 02).

If I want my compressed images to have great quality, should I target the 1.0 ~ 1.3 range or the 1.0 ~ 1.6 range? Please let me know if I have made a mistake.

pngfilter=100 butteraugli distance => [0.6 ~ 1.0, 1.3 ~ 1.6], [1.7]
webp near-lossless 60 butteraugli distance => [0.4 ~ 1.0, 1.3 ~ 1.6], [2.1]
near-lossless 60 [2.1] => near-lossless 80 [1.3]

    cwebp & pingo_rc3 command:
    pingo_rc3.exe -pngfilter=100 -noconversion -nosrgb -nodate -sa "%%A"
    cwebp.exe -mt -m 6 -af -near_lossless 60 -alpha_filter best -progress "%%A" -o "%%~nA.webp"
    My understanding is that near_lossless 60 will provide sufficient quality for all uses where 8-bit per channel RGBA is ok and does not need an additional butteraugli analysis.

    If someone has generated an image with webp near_lossless 60 that has a visually observable degradation, I'd like to learn about it.

    Quote Originally Posted by Lithium Flower View Post
3. non-photographic images and JPEG encoder quality suggestions

To compress PNG rgb24 images, my process first compresses at quality 95;
if the compressed image is not below the safe butteraugli distance, it recompresses at a higher quality.

On my PNG rgb24 image set, JPEG quality 95 does not get a good butteraugli distance:

jpeg quality 95 butteraugli distance => [1.5 ~ 1.6], [1.7 ~ 2.5]
jpeg quality 95 butteraugli xl distance => [1.2 ~ 1.6], [1.7 ~ 2.2]

but in cjpeg usage.txt:
If I compress non-photographic images to JPEG and want them near psychovisually lossless, is it necessary to
use quality above 95 for those images?


My experience is that libjpeg quality 95 with yuv444 is often enough. For about 1 in 1000 images it is necessary to go to q97-98.

    Quote Originally Posted by Lithium Flower View Post
Or, per *Reference 03, is butteraugli possibly too sensitive on some non-photographic images?


    It has been argued that butteraugli is not sufficiently sensitive for DCT8x8 artefacts.

    Butteraugli does not make an attempt to model the pupil size, making it sometimes too sensitive and other times not sensitive enough.

    Quote Originally Posted by Lithium Flower View Post
    mozjpeg command:
    cjpeg.exe -optimize -progressive -quality 95 -quant-table 3 -sample 1x1 -outfile "mozjpeg\%%~nA.jpg" "%%A"
    Quote Originally Posted by Lithium Flower View Post

Update 2020-09-29:
I tried the jpeg xl sjpeg path; sjpeg gets a great butteraugli distance at quality 96,
but it looks like the sjpeg path does not use jpeg xl's VarDCT or modular mode?


    sjpeg is a new jpeg encoder created by the WebP library maintainer Pascal Massimino.

    I don't know which jpeg encoder (after guetzli) gets the best butteraugli scores or has the best psychovisual properties otherwise. I didn't follow the development of that field after the launch of guetzli.
Quote Originally Posted by Lithium Flower View Post
Size:
png - 12 MB
mozjpeg -q 97 + jpegtran progressive: 4.46 MB
jpegxl sjpeg -q 96 + jpegtran progressive: 4.0 MB

    cjpegxl command:
cjpegxl.exe "%%A" "xl\%%A" --jpeg1 --jpeg_quality=96 --progressive

4. webp lossy q100 and butteraugli distance

I tested another non-photographic image set with WebP lossy q100, but some images get a larger butteraugli distance.
Could WebP lossy's 4:2:0 subsampling and fancy upsampling be producing larger errors in some areas?


    I have seen lossy WebP have problems on the borders of the images. YUV420 can be a problem.

    You can dump the butteraugli heatmap and see by yourself the area where butteraugli thinks the problem is.

    Quote Originally Posted by Lithium Flower View Post
I also tested WebP's lossy alpha (alpha_q) feature, which increases the butteraugli distance,
but I don't understand why lossy alpha affects butteraugli distance at all.
    Butteraugli models human vision, and humans don't see translucency. So, it should not affect the valuation.

    The butteraugli executable, when seeing a translucent image, places the compared pair on white background and measures the scores.

    Then it places the same pair on a black background and measures the scores again.

    It reports the worse score of the two measurements. Losses in alpha are noticed by this methodology.

    This is not a perfect approach, but it may be better than nothing.
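The methodology described here (compare on a white background and on a black background, keep the worse score) can be sketched like this. `butteraugli_distance` is a hypothetical stand-in for invoking the real butteraugli tool:

```python
def composite(rgba, background):
    """Alpha-composite a list of (r, g, b, a) pixels (a in 0..255)
    onto a flat RGB background; returns opaque RGB pixels."""
    out = []
    for r, g, b, a in rgba:
        w = a / 255.0
        out.append(tuple(round(c * w + bg * (1.0 - w))
                         for c, bg in zip((r, g, b), background)))
    return out

def alpha_aware_distance(ref_rgba, test_rgba, butteraugli_distance):
    # Measure on white and on black and report the worse score,
    # so alpha losses hidden by one background are still caught.
    scores = []
    for bg in ((255, 255, 255), (0, 0, 0)):
        scores.append(butteraugli_distance(composite(ref_rgba, bg),
                                           composite(test_rgba, bg)))
    return max(scores)
```

For instance, an alpha loss on a white pixel is invisible on a white background but shows up on black, so the max over both backgrounds reports it.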

  3. Thanks:

    Lithium Flower (22nd December 2020)

  4. #3
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    73
    Thanks
    10
    Thanked 25 Times in 21 Posts
    Quote Originally Posted by Lithium Flower
    my process is using near-lossless 60 and pngfilter=100 to first compress
Quote Originally Posted by Lithium Flower
pingo_rc3.exe -pngfilter=100 -noconversion -nosrgb -nodate -sa "%%A"
cwebp.exe -mt -m 6 -af -near_lossless 60 -alpha_filter best -progress "%%A" -o "%%~nA.webp"
if you actually combine those, you would do loss over loss — so the result of near_lossless would be biased and non-representative. if you want to compare, i would suggest instead to experiment with the latest pingo with WebP, since all transformations (-webp-lossless, -webp-lossy, -webp-near, -webp-color, etc.) have heuristics/pre-processing which would not be done by the reference encoder atm; those would affect size/metrics, but would not always be better

  5. Thanks (2):

    Jyrki Alakuijala (6th October 2020),Lithium Flower (22nd December 2020)

  6. #4
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    967
    Thanks
    264
    Thanked 346 Times in 218 Posts
Quote Originally Posted by cssignet View Post
if you actually combine those, you would do loss over loss — so the result of near_lossless would be biased and non-representative. if you want to compare, i would suggest instead to experiment with the latest pingo with WebP, since all transformations (-webp-lossless, -webp-lossy, -webp-near, -webp-color, etc.) have heuristics/pre-processing which would not be done by the reference encoder atm; those would affect size/metrics, but would not always be better
    Well spotted!

    Simply:

    cwebp --near_lossless=60 -q 100 -m 6 xyzzy.png -o xyzzy.webp

(in cwebp near-lossless and lossless there are two parameters controlling the effort of computation, -q and -m)

    I haven't tried pingo myself, but the published results suggest that it can be a better option than cwebp.

  7. Thanks:

    Lithium Flower (22nd December 2020)

  8. #5
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    73
    Thanks
    10
    Thanked 25 Times in 21 Posts
    Quote Originally Posted by Jyrki Alakuijala
I haven't tried pingo myself, but the published results suggest that it can be a better option than cwebp
i consider my tools more like experimental stuff, implementations/demos of ideas: they do not pretend to (and imo should not) replace the well-tested reference, but results could be compared sometimes

    Quote Originally Posted by Jyrki Alakuijala
    I have seen lossy WebP have problems on the borders of the images
    not sure if it is the same issue, but among several solutions, some simple pre-processing (RGB->255 if a=0) could be done and would impact quality/size

    Code:
    original.png
    
    cwebp -q 100 -sharp_yuv (56.78 KB)
    butteraugli: 5.670024
    butteraugli xl: 8.0223999023 (3-norm: 2.246375)
    ssimulacra xl: 0.01917124
    
    pingo -webp-lossy=100 (51.29 KB) <-- this use sharp_yuv too
    butteraugli: 1.928053
    butteraugli xl: 1.6332571507 (3-norm: 0.712645)
    ssimulacra xl: 0.00517459
    Quote Originally Posted by myself
    some simple pre-processing (RGB->255 if a=0) could be done
    edit: better solution could be done
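The pre-processing cssignet mentions (forcing RGB to a fixed value wherever alpha is 0) could be sketched as below. This is a guess at the idea, not pingo's actual code:

```python
def flatten_invisible(pixels, fill=(255, 255, 255)):
    """Replace the RGB of fully transparent pixels with a constant fill.
    The visible image is unchanged (a == 0 means the pixel is invisible),
    but the encoder no longer spends bits on hidden RGB noise, and the
    YUV conversion of invisible pixels becomes predictable."""
    return [fill + (0,) if a == 0 else (r, g, b, a)
            for r, g, b, a in pixels]
```

The invisible RGB values would otherwise be arbitrary editor leftovers, which can inflate size and skew metrics that composite onto a background.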
    Attached Files Attached Files
    Last edited by cssignet; 9th October 2020 at 15:09. Reason: upload good files + better solution

  9. Thanks:

    Lithium Flower (22nd December 2020)

  10. #6
    Member
    Join Date
    Apr 2013
    Location
    France
    Posts
    73
    Thanks
    10
    Thanked 25 Times in 21 Posts
about near_lossless in cwebp: perhaps i am wrong, but it seems that the current implementation would change an alpha value from 1 to 0, even with a low quantization value. i am not sure this is the expected behavior, since it would impact things significantly (R-G-B would be altered by the alpha optimization). reusing the sample from my previous post:

    Code:
    cwebp -near_lossless 80 (112.46 KB)
    butteraugli: 1.606584
    butteraugli xl: 50.7915344238 (3-norm: 15.189873)
    ssimulacra xl: 0.03084179
    
    pingo -webp-near=80 (113.97 KB) <-- avoid quantization on alpha
    butteraugli: 0.844221
    butteraugli xl: 1.0970934629 (3-norm: 0.365217)
    ssimulacra xl: 0.00218288
    
    -near_lossless 80 (112.70 KB) <-- avoid quantization if alpha < 2
    butteraugli: 1.609372
    butteraugli xl: 1.1307358742 (3-norm: 0.364205)
    ssimulacra xl: 0.00218043
    Attached Files Attached Files
    Last edited by cssignet; 8th October 2020 at 11:01. Reason: upload good files

  11. Thanks:

    Lithium Flower (22nd December 2020)

  12. #7
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    20
    Thanks
    19
    Thanked 1 Time in 1 Post
Reworked some questions in the 2020-12-21 post below.
Last edited by Lithium Flower; 20th December 2020 at 22:32. Reason: rework some questions in 20201221 new post

  13. #8
    Member
    Join Date
    Nov 2011
    Location
    france
    Posts
    101
    Thanks
    13
    Thanked 53 Times in 34 Posts
    Quote Originally Posted by cssignet View Post


    Code:
    original.png
    
    cwebp -q 100 -sharp_yuv (56.78 KB)
    butteraugli: 5.670024
    butteraugli xl: 8.0223999023 (3-norm: 2.246375)
    ssimulacra xl: 0.01917124
    
    pingo -webp-lossy=100 (51.29 KB) <-- this use sharp_yuv too
    butteraugli: 1.928053
    butteraugli xl: 1.6332571507 (3-norm: 0.712645)
    ssimulacra xl: 0.00517459

    edit: better solution could be done
    Interesting... Is this happening only for qualities around q=100? Or at lower ones too? (q=60-80 for instance)...

    skal/

  14. #9
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    20
    Thanks
    19
    Thanked 1 Time in 1 Post
Reworked some questions, 2020-12-21.

@ Jyrki Alakuijala

Sorry, my English is not good.

I'm sorry for replying so late.
Thank you for your reply, I'm really grateful.

1. Using the butteraugli metric to compare encoders, and maxButteraugliScore
I wrote a multithreaded Python program that uses the butteraugli metric to compare different encoders
and picks the smallest file below maxButteraugliScore.
Is 1.6 a good setting for maxButteraugliScore, or is 1.6 too risky?

2. libjpeg q95 and butteraugli score
With libjpeg q95 most images get a butteraugli score below 1.6,
but some images can't reach 1.6 and need libjpeg q96 or q97.
libjpeg, unlike Guetzli, pik, and jpegxl, has no VarDCT.
If I want great quality, is using libjpeg q96 or q97 necessary?

3. Butteraugli score and butteraugli_jxl p-norm reference
Are the score references in my first post (*Reference 02 and guetzli's quality.cc)
still valid for butteraugli and butteraugli_jxl?
Is there a quality reference for the butteraugli_jxl p-norm?
I collected some comments; could you teach me how to read the p-norm?
Is below 1.0 fine and above 1.0 bad?
max butteraugli // q95+ (q95~q99)
12-norm // q95+ (q95~q99)
?p-norm // q80~q92
3-norm (6-norm) // q60~q75
// butteraugli_jxl --error_pnorm=12 (3-norm, 6-norm, 12-norm, 24-norm, 48-norm)
4. butteraugli and butteraugli_jxl return different scores
If butteraugli and butteraugli_jxl return different butteraugli scores,
should I trust the butteraugli score or the butteraugli_jxl score?
Example:
butteraugli 1.748285
butteraugli_jxl 1.3875815868, 0.553373
butteraugli 1.650636
butteraugli_jxl 1.9129729271, 0.648280

5. lossy alpha (translucency) feature
Humans can't see translucency, and WebP lossy -alpha_q takes a compression factor.
If I set the -alpha_q compression factor to 1 or 0, the translucency gets much lossier compression;
could this lossy alpha feature cause some invisible bad effect?
I found some information on lossy alpha in an AV1 Reddit discussion,
and I am very curious about this; could you teach me about this feature?
    When the alpha channel is opaque or uniform (all one value), lossless AV1 alpha encodes should be very small.
    However, for any kind of "interesting" alpha (masks, gradients, etc), I think it is a mistake to do lossy encoding as
    the kinds of artifacts you'll get will not just be a slightly blurrier image, but your masks will stop lining up or
    you'll get holes in places you didn't intend.
    -alpha_q int
    Specify the compression factor for alpha compression between 0 and 100. Lossless compression of alpha is achieved using
    a value of 100, while the lower values result in a lossy compression. The default is 100.


@ cssignet
Thank you for your reply, I'm really grateful.

I'm sorry, I didn't make myself clear:
I use the butteraugli metric to compare pingo near-lossless and pingo pngfilter,
and pick the smallest file below maxButteraugliScore.

Thanks for your suggestion; pingo near-lossless is working very well.
And thank you for developing pingo.

  15. #10
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    967
    Thanks
    264
    Thanked 346 Times in 218 Posts
    Quote Originally Posted by Lithium Flower View Post
    sorry, my english is not good.
    General advice on communicating on forums: ask one thing in one post. Don't be worried to post many posts.

    Your communication is clear and structured -- no reason to apologize for it.

    Quote Originally Posted by Lithium Flower View Post
1. Using the butteraugli metric to compare encoders, and maxButteraugliScore
I wrote a multithreaded Python program that uses the butteraugli metric to compare different encoders
and picks the smallest file below maxButteraugliScore.
Is 1.6 a good setting for maxButteraugliScore, or is 1.6 too risky?

    Depends on viewing distance. At a viewing distance of 900 pixels, 1.6 is on the high side. I recommend values between 0.6 (perfect) to 1.2 (with some issues).

    If your viewing distance is much longer than 900 pixels, the current versions of butteraugli become less efficient and less accurate in modeling the viewing. I'll probably improve on that in the next two years.

    Quote Originally Posted by Lithium Flower View Post
2. libjpeg q95 and butteraugli score
With libjpeg q95 most images get a butteraugli score below 1.6,
but some images can't reach 1.6 and need libjpeg q96 or q97.
libjpeg, unlike Guetzli, pik, and jpegxl, has no VarDCT.
If I want great quality, is using libjpeg q96 or q97 necessary?

My personal viewing experience is that about 0.1-0.3 % of yuv444 q97 jpegs still have visible faults. The most problematic areas are deep reds containing gray details; somehow velvet-like red cloth is often very difficult.

For practical use I consider jpeg yuv444 q93 or q94 a great compromise and q95 good quality. Yuv420 is irritating to me.

    Quote Originally Posted by Lithium Flower View Post
3. Butteraugli score and butteraugli_jxl p-norm reference
Are the score references in my first post (*Reference 02 and guetzli's quality.cc)
still valid for butteraugli and butteraugli_jxl?
Is there a quality reference for the butteraugli_jxl p-norm?
I collected some comments; could you teach me how to read the p-norm?
Is below 1.0 fine and above 1.0 bad?

Low p-norm values depend on how large the easy-to-encode areas are. This means that anything other than the maximum aggregation (infinite p-norm) is more or less indicative only, and application dependent.

In some use cases (like the kodak test images) a p-norm of 14 was found to work well.

Butteraugli's p-norm is not a simple p-norm but a mixture of three p-norms: N, 2*N and 4*N. This gives a mix of emphases: not making big errors in any one area, while also keeping some pressure to improve large areas.

Likely a p-norm value of 6 to 12 could be a good compromise. I don't have a guideline for how to interpret those values.

With a p-norm of 3 I observe that good compression produces a butteraugli p-norm value * BPP of roughly 1.0 for complex, detail-rich images.

    Quote Originally Posted by Lithium Flower View Post
4. butteraugli and butteraugli_jxl return different scores
If butteraugli and butteraugli_jxl return different butteraugli scores,
should I trust the butteraugli score or the butteraugli_jxl score?
Example:
butteraugli 1.748285
butteraugli_jxl 1.3875815868, 0.553373
butteraugli 1.650636
butteraugli_jxl 1.9129729271, 0.648280


Shouldn't matter in the end. All butteraugli versions have been calibrated to the same reference corpus with pretty good results. The reference corpus contains 2500 image pairs in the range of 0.6-1.3 max butteraugli values. Earlier butterauglis may be slightly more accurate, later butterauglis more practical and compatible with compression. Later butterauglis can be slightly better at detecting faint larger-scale degradations, as they do recursive 2x multiscaling on top of the Laplacian pyramid, not just a single run of four levels of the Laplacian pyramid.

    I, as the author of butteraugli, use the latest butteraugli in jpeg xl for my use.

    Quote Originally Posted by Lithium Flower View Post
5. lossy alpha (translucency) feature
Humans can't see translucency, and WebP lossy -alpha_q takes a compression factor.
If I set the -alpha_q compression factor to 1 or 0, the translucency gets much lossier compression;
could this lossy alpha feature cause some invisible bad effect?
I found some information on lossy alpha in an AV1 Reddit discussion,
and I am very curious about this; could you teach me about this feature?
One early version of alpha_q just tried to minimize the sum of squared errors. I haven't followed up on what is going on there, so I don't know the current state. If one considers maximum error (as butteraugli does), minimizing the sum of squared errors is not the best strategy. Considering PSNR, it is great -- a near-optimal strategy.

  16. Thanks:

    Lithium Flower (22nd December 2020)

  17. #11
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    20
    Thanks
    19
Thanked 1 Time in 1 Post
@ Jyrki Alakuijala
Thank you for your reply.

Here is my program structure.
First I split image types by color space: rgb24, rgba32, pal8.

In the present version maxButteraugliScore is set to 1.66 and minButteraugliScore to 1.55,
but 1.66 is too risky, so I am researching a new maxButteraugliScore.

rgb24 uses libjpeg [q92, q93, q94] [q95] [q96, q97, q98] (yuv444):
compress at q95 and assess the quality with butteraugli;
if q95 is below maxButteraugliScore and above minButteraugliScore, use q95;
if q95 is below minButteraugliScore, try q92, q93, q94;
if q95 is above maxButteraugliScore, try q96, q97, q98.

rgba32 uses pingo pngfilter 80/100 and near-lossless 40/60 (converted to png):
assess the quality with butteraugli and pick the smallest file below maxButteraugliScore.

pal8 uses png lossless or webp lossless.

With multithreaded processing, one image takes 2~3 s on average.
[CPU: R5 1600 6c12t (all cores at 3.65 GHz OC), RAM: 16 GB]
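The rgb24 branch described above can be sketched as a small search around q95. A sketch under the thread's thresholds; `measure` stands in for encoding at a quality and returning the butteraugli score:

```python
def refine_quality(measure, q_mid=95, lower=(94, 93, 92), higher=(96, 97, 98),
                   min_score=1.55, max_score=1.66):
    """Start at q95. If the score already sits in the target band, keep q95.
    If it is below the band (image easier than expected), step down while
    the score stays acceptable, to save bytes. If it is above the band,
    step up until the score becomes acceptable."""
    s = measure(q_mid)
    if min_score <= s <= max_score:
        return q_mid
    if s < min_score:
        best = q_mid
        for q in lower:              # try cheaper qualities
            if measure(q) <= max_score:
                best = q             # keep stepping down while acceptable
            else:
                break
        return best
    for q in higher:                 # score too high: raise quality
        if measure(q) <= max_score:
            return q
    return higher[-1]                # give up at the top of the ladder
```

For example, if q95 scores 1.3 and q94/q93 still score under 1.66 but q92 does not, the search settles on q93.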


But I have a problem with the butteraugli score:
butteraugli_jxl still assesses some images at q95 with a maxButteraugli of 1.6~2.0,
and at q97 some images still get a high maxButteraugli (1.6~2.0).

I think going above q95 for non-photographic images is not worth it, and the files get very large.
In the next version libjpeg can probably be set to [q91, q92], [q93], [q94, q95],
pingo keeps the settings [pngfilter 80, 100, near-lossless 40, 60],
and I will add a p-norm check to the quality assessment.

If my target quality is libjpeg q92~q95,
I want to use the 6-norm and 12-norm to check this quality range; maxButteraugli only seems to work in the q95+ quality range.
Could you give me some suggestions on how to use the 6-norm and 12-norm to guide my program toward good quality (errors that are not easy to see)
and smaller files?
Thank you very much for your help.

========================================================================================
Please forgive me if I have offended you:
the butteraugli_jpegxl p-norm flag has changed, and --error_pnorm=12 is no longer available.
The new flag is --pnorm <int> [3, 6, 12, 24, 48].
https://gitlab.com/wg1/jpeg-xl/-/blo...i_main.cc#L127

    sample:

    01_02_09_libjpeg_q95.png
    maxButteraugli: 2.0917696953
    3-norm: 0.708287
    6-norm: 0.958020
    12-norm: 1.274732
    24-norm: 1.579831
    48-norm: 1.802553
    Butteraugli_old: 1.975622

    CloudySky1_pf100.png
    maxButteraugli: 2.3613216877
    3-norm: 1.706211
    6-norm: 1.984321
    12-norm: 2.156714
    24-norm: 2.253438
    Butteraugli_old: 1.234330

    CloudySky2_pf100.png
    maxButteraugli: 1.8451137543
    3-norm: 0.671635
    6-norm: 0.902970
    12-norm: 1.199747
    24-norm: 1.455440
    Butteraugli_old: 1.003504

    CloudySky1_pf80.png
    maxButteraugli: 1.4213246107
    3-norm: 0.745485
    6-norm: 0.904969
    12-norm: 1.072658
    24-norm: 1.203832
    Butteraugli_old: 1.440932

    CloudySky2_pf80.png:
    maxButteraugli: 1.7285125256
    3-norm: 0.758040
    6-norm: 0.920547
    12-norm: 1.136067
    24-norm: 1.347241
    Butteraugli_old: 1.626053
Attached thumbnails: CloudySky1_original.png (192.8 KB), CloudySky1_pf80.png (91.5 KB), CloudySky1_pf100.png (125.1 KB), CloudySky2_original.png (381.4 KB), CloudySky2_pf80.png (152.0 KB), CloudySky2_pf100.png (215.3 KB)
    Last edited by Lithium Flower; 25th December 2020 at 10:29.

  18. #12
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    20
    Thanks
    19
Thanked 1 Time in 1 Post
Update 2020-12-25:
I think I found an answer in this issue:
https://github.com/google/butteraugli/issues/15
'take an average of 3, 6 and 12-norms to support decision making and mostly use the maximum based butteraugli score for reference only.'
So: use the average p-norm for assessment, and the average p-norm needs to be below 1.0.

But this method takes too much processing time; I need to find another way to handle it.

  19. #13
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    20
    Thanks
    19
    Thanked 1 Time in 1 Post
    I use three values to assess image quality: the average of the 6-norm and 12-norm, the 12-norm itself, and maxButteraugli.


    // q91 too risky, q97 not worth it
    libjpeg [q92, q93, q94, q95, q96]


    I use two conditions to check each file, with a linear search:
    if the first condition fails, use the second condition;
    if both conditions fail, use libjpeg q96.


    first condition:
    averageNorm(6-norm + 12-norm) < 1.0
    & 12-norm < 1.2
    & maxButteraugli < 1.55


    second condition:
    averageNorm(6-norm + 12-norm) < 1.0
    & 12-norm < 1.2
    & maxButteraugli < 2.1


    I'm not sure this is a good idea; maybe I made a mistake?
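    The two-pass search described above could be sketched as follows. `butteraugli_norms` is a hypothetical helper standing in for whatever tool reports the norms; only the threshold logic comes from the post.

    ```python
    # Sketch of the two-pass quality search described above.
    # The caller supplies, per quality level, the tuple
    # (6-norm, 12-norm, maxButteraugli) measured on the candidate file.

    QUALITIES = [92, 93, 94, 95, 96]  # q91 too risky, q97 not worth it

    def passes(norms, max_limit):
        n6, n12, max_b = norms
        avg = (n6 + n12) / 2.0
        return avg < 1.0 and n12 < 1.2 and max_b < max_limit

    def pick_quality(norms_for_quality):
        """norms_for_quality maps q -> (6-norm, 12-norm, maxButteraugli)."""
        # First pass: strict maxButteraugli limit.
        for q in QUALITIES:
            if passes(norms_for_quality[q], 1.55):
                return q
        # Second pass: relaxed maxButteraugli limit.
        for q in QUALITIES:
            if passes(norms_for_quality[q], 2.1):
                return q
        return 96  # both conditions failed: fall back to libjpeg q96
    ```

    The linear search stops at the first (lowest) quality that satisfies a condition, so the strict pass is tried across all qualities before relaxing the limit.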

  20. #14
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    967
    Thanks
    264
    Thanked 346 Times in 218 Posts
    Quote Originally Posted by Lithium Flower View Post
    I use three values to assess image quality: the average of the 6-norm and 12-norm, the 12-norm itself, and maxButteraugli.
    You don't need to do anything; you can just call --pnorm=3 and it automatically uses 3, 6, and 12 internally and mixes them in equal portions. That argument only specifies the lowest value, and the two higher norms are generated internally as 2 * N and 4 * N.

    (For high quality work pnorm=12 may be more appropriate. It would compute internally 12th, 24th and 48th norm.)
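    A rough sketch of that mixing, assuming equal-weight averaging of the three norms (my reading of "mixes them in equal portions"; the real tool may weight or combine them differently):

    ```python
    # Sketch: combine the N, 2N, and 4N p-norms of a per-pixel
    # butteraugli distance map, as --pnorm=N is described above.
    # Equal-weight averaging is an assumption, not confirmed behavior.

    def pnorm(values, p):
        n = len(values)
        return (sum(v ** p for v in values) / n) ** (1.0 / p)

    def mixed_pnorm(distance_map, n=3):
        # --pnorm=3 would combine the 3-, 6-, and 12-norms;
        # --pnorm=12 would combine the 12-, 24-, and 48-norms.
        return sum(pnorm(distance_map, p) for p in (n, 2 * n, 4 * n)) / 3.0
    ```

    Higher p weights the worst pixels more heavily, so for a non-uniform distance map the mixed value sits above the plain N-norm.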

  21. Thanks:

    Lithium Flower (27th December 2020)

  22. #15
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    20
    Thanks
    19
    Thanked 1 Time in 1 Post
    @ Jyrki Alakuijala
    Thank you for your reply.

    Actually, I want my program to imitate how jpeg xl and guetzli use butteraugli to assess images.
    Could you tell me: if cjpegxl's target distance is 1.0 or 1.5, which pnorm and maxButteraugli does cjpegxl use to find the best image?

    My compression program is meant for use with a legacy program. It looks like the best method would be to use cjpegxl to compress png and jpg to jxl,
    and then use djpegxl to convert back to the original format whenever the legacy program needs the image,
    but I still want to finish this compression program.

    I use the 12-norm to assess the image, and maxButteraugli to make sure the compressed image doesn't have a big error.
    I think the condition is probably like this:

    condition:
    12-norm < 1.2
    maxButteraugli < 1.6

    Fail: use q96 and near-lossless 60

    libjpeg [q92, q93, q94, q95, q96]
    pingo [pngfilter 80, 100, near-lossless 40, 60]

  23. #16
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    967
    Thanks
    264
    Thanked 346 Times in 218 Posts
    Quote Originally Posted by Lithium Flower View Post
    Actually, I want my program to imitate how jpeg xl and guetzli use butteraugli to assess images.
    Could you tell me: if cjpegxl's target distance is 1.0 or 1.5, which pnorm and maxButteraugli does cjpegxl use to find the best image?
    Cool. The basic answer is that no pnorm value is used there. You should get the same .jxl file regardless of the pnorm goals.

    In butteraugli-iterative version of cjxl:

    Adaptive quantization field is searched using gradient-like search where the gradient is approximated from a simple multiplicative relation of bpp * butteraugli score.

    There, we use the butteraugli scores directly, no reason to aggregate. There are some tricks that are used, like if by default we get a really good butteraugli score, I don't make it worse by more than 40 %.

    In guetzli we used a less principled and more heuristic search that worked better for max-butteraugli score.

    All the local corrections that I used rely on a local maximum butteraugli score, i.e., the optimization algorithms do not aggregate using a p-norm.

    In cjxl, the optimization over the max-butteraugli correctness is done using a quadratic objective, but it is a quadratic (L2) optimization over the local max butteraugli scores.

    For coming up with heuristics and default quantization matrices etc. I used pnorm=3 a lot for finding balanced heuristics. It is much easier to optimize for than more appropriate values like pnorm=6..12. The optimization space is flatter because all pixels participate more evenly.

  24. Thanks:

    Lithium Flower (31st December 2020)

  25. #17
    Member
    Join Date
    Aug 2020
    Location
    taiwan
    Posts
    20
    Thanks
    19
    Thanked 1 Time in 1 Post
    @ Jyrki Alakuijala
    Thank you for your reply.

    I think my wording was too conceited in the previous post, sorry.
    My program is more like Jpeg Archive (SSIM), cjpeg-dssim (dssim), or squoosh (butteraugli):
    it uses a single metric to assess the image, while jpeg xl and guetzli do more advanced things.

    I have some questions about butteraugli_jxl. I compressed some images with jpegxl_v0.2 d1.0 s8,
    and butteraugli_jxl reports a big maxButteraugli (2.29328537) for some of them:
    https://docs.google.com/spreadsheets...tX7dDBCObf0zCQ

    "The -d distance target is only a target, not a guarantee."
    Given this answer, cjxl uses heuristics and the target distance (1.0) to keep quality while compressing the image.

    In old butteraugli, if an image got a big butteraugli distance, it was assessed as a bad-quality image,
    or as having a big error somewhere.
    Is the butteraugli_jxl maxButteraugli metric still an error metric like old butteraugli?

    If a cjxl d1.0 s8 image gets a big maxButteraugli in butteraugli_jxl, butteraugli_jxl says this image has a big error,
    but cjxl says this image conforms to the 1.0 target distance. Should this image be assessed as good quality or bad quality?

    If I use butteraugli_jxl to compare different image encoders, like libjpeg, cwebp, and cjxl,
    can using only maxButteraugli to assess image quality (big errors) correctly reflect image quality?
    Or should I use other metrics, like the butteraugli_jxl pnorm, to help assess the image and get a more accurate result?

    In my non-photographic image set test, the 6-norm and 12-norm can reflect image quality in the high-quality range,
    but I don't know how to use pnorm to assess an image correctly.

  26. #18
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    967
    Thanks
    264
    Thanked 346 Times in 218 Posts
    Quote Originally Posted by Lithium Flower View Post
    Is the butteraugli_jxl maxButteraugli metric still an error metric like old butteraugli?
    Both work relatively well. It should not matter much to the utility of your system which one you use. For individual images there can be differences, but likely neither is more accurate over a large group of images.

    Quote Originally Posted by Lithium Flower View Post
    If a cjxl d1.0 s8 image gets a big maxButteraugli in butteraugli_jxl, butteraugli_jxl says this image has a big error,
    but cjxl says this image conforms to the 1.0 target distance. Should this image be assessed as good quality or bad quality?
    You can consider looking at the butteraugli heat map and seeing by yourself how bad the situation is.

    Quote Originally Posted by Lithium Flower View Post
    If I use butteraugli_jxl to compare different image encoders, like libjpeg, cwebp, and cjxl,
    can using only maxButteraugli to assess image quality (big errors) correctly reflect image quality?
    Or should I use other metrics, like the butteraugli_jxl pnorm, to help assess the image and get a more accurate result?
    pnorm may be more appropriate for -d1.5 and worse or -q 90 and worse.

    So, use maximum butteraugli for decent-quality photos (-d1.0 or -q 94 yuv444); the worse the quality, the lower the norm should be (the likely useful range is 3-24).
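    That guidance could be condensed into a small helper. The cut-off values below are my own guesses interpolating between the two data points given (max butteraugli around -d1.0, pnorm from about -d1.5 and worse), not numbers from the post:

    ```python
    # Hedged sketch: pick an assessment metric from the target distance,
    # following the rule of thumb above (max butteraugli for high quality,
    # progressively lower p-norms as quality drops).  Thresholds beyond
    # d1.0/d1.5 are illustrative assumptions.

    def metric_for_distance(target_d):
        if target_d <= 1.0:
            return "max"       # decent-quality photos: max butteraugli
        if target_d <= 1.5:
            return "pnorm-12"  # pnorm becomes more appropriate from ~d1.5
        if target_d <= 2.5:
            return "pnorm-6"   # assumed intermediate step
        return "pnorm-3"       # lowest norm in the suggested 3-24 range
    ```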

  27. Thanks:

    Lithium Flower (2nd January 2021)

