
Thread: Better progressive rendering of JPEGs

  1. #1 - Jyrki Alakuijala (Member, Switzerland)

    Better progressive rendering of JPEGs

    Any thoughts on making progressive JPEGs look better? This issue concerns rendering when only a DC scan is available.

    https://github.com/libjpeg-turbo/lib...rbo/issues/459

    Fewer artefacts and less flicker -> more comfort -> more use of progression?


    Original:
    [Attachment: 95075827-25814780-0711-11eb-8a04-c91b72ca42a8.jpg, 402.6 KB]

    Current libjpeg-turbo 8x8 progression behaviour:
    [Attachment: 95075914-4ea1d800-0711-11eb-95c8-6a46e6754d7c.png, 210.9 KB]

    Possible improved libjpeg-turbo 8x8 progression behaviour:

    [Attachment: 95075965-66795c00-0711-11eb-969c-979425b77d9b.png, 222.7 KB]
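    For readers who want to experiment with the difference between the two behaviours, here is a toy sketch in plain Python (hypothetical helper names, grayscale image as a 2D list; real libjpeg-turbo works on DCT coefficients, not pixels). A DC-only scan provides one average per 8x8 block; the current behaviour replicates it over the block, while the proposed behaviour blends neighbouring DC values:

```python
def dc_values(img, b=8):
    """Average each b x b block -- what a DC-only scan provides."""
    h, w = len(img), len(img[0])
    return [[sum(img[y + dy][x + dx] for dy in range(b) for dx in range(b)) / (b * b)
             for x in range(0, w, b)]
            for y in range(0, h, b)]

def blocky(dc, b=8):
    """Current behaviour: replicate each DC value over its block."""
    return [[dc[y // b][x // b] for x in range(len(dc[0]) * b)]
            for y in range(len(dc) * b)]

def smoothed(dc, b=8):
    """Sketch of the proposed behaviour: bilinear blend of DC values."""
    h, w = len(dc), len(dc[0])
    out = []
    for y in range(h * b):
        fy = (y + 0.5) / b - 0.5
        y0 = min(h - 1, max(0, int(fy)))
        y1 = min(h - 1, y0 + 1)
        ty = min(1.0, max(0.0, fy - y0))
        row = []
        for x in range(w * b):
            fx = (x + 0.5) / b - 0.5
            x0 = min(w - 1, max(0, int(fx)))
            x1 = min(w - 1, x0 + 1)
            tx = min(1.0, max(0.0, fx - x0))
            top = dc[y0][x0] * (1 - tx) + dc[y0][x1] * tx
            bot = dc[y1][x0] * (1 - tx) + dc[y1][x1] * tx
            row.append(top * (1 - ty) + bot * ty)
        out.append(row)
    return out
```

    On a hard vertical edge, `blocky` jumps from one block's value to the next, while `smoothed` produces the intermediate gradient visible in the third screenshot.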

  2. #2 - Stefan Atev (Member, USA)
    The blockier look has two things going for it - a) it's obvious (at full resolution) that the image is still decoding, and b) it appears sharper because it has more edges (granted, these are block-to-block edges and not real image content, but the perception is the same).

    I wonder whether a sharpening pass on the second version could resolve issue b). This is not visible in the thumbnails, only when I look at the full-resolution, partially decoded images. I am frequently frustrated by the lack of indication (in Google Photos on my Android phone, for example) of whether I am looking at a partially or fully decoded image. A simple overlay, a progress bar, anything to indicate that an image is partially decoded would be a welcome option - though I don't see how you could do that in the JPEG decoder itself; it looks like the wrong place for it.

    Anyhow, my $0.02

    (Edit: A simple smoothing filter that preserves unity like the one being applied will always make the smooth version perceptually blurrier; I think some small amount of edge enhancement must be baked in; not sure if you can afford a nonlinear filter; or just dither the image before or after smoothing; a fake film grain will be better than a plasticky look IMHO)
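    The point about unity-preserving filters can be made concrete with a small sketch (illustrative numbers, not libjpeg-turbo's actual filter): a kernel whose weights sum to 1 preserves average brightness but blurs, and a small unsharp-mask term can be folded into the same kernel while keeping the unit sum.

```python
# A unit-gain 3x3 blur: weights sum to 1, so flat areas keep their
# brightness, but edges are smeared (the "plasticky" look).
blur = [[1/16, 2/16, 1/16],
        [2/16, 4/16, 2/16],
        [1/16, 2/16, 1/16]]

def with_edge_boost(kernel, amount=0.5):
    """Unsharp masking folded into the kernel: (1 + a)*identity - a*blur.
    The result still sums to 1 but amplifies local contrast."""
    out = [[-amount * k for k in row] for row in kernel]
    out[1][1] += 1 + amount
    return out

def kernel_sum(kernel):
    """Total gain of a kernel; 1.0 means brightness is preserved."""
    return sum(sum(row) for row in kernel)
```

    Both kernels have unit gain, so the edge boost can be "baked in" without shifting the image's mean - the trade-off is ringing near strong edges.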

    Last edited by Stefan Atev; 5th October 2020 at 21:31.

  3. #3 - Jyrki Alakuijala (Member, Switzerland)
    Quote Originally Posted by Stefan Atev View Post
    The blockier look has two things going for it - a) it's obvious (at the full resolution) that the image is still decoding, and b) it appears sharper because it has more edges (granted, these are block/block edges and not real image content, but the perception is the same).
    The GUI designer inside me says that it is wrong to give information (edges) and then quickly take it away and replace it with other information. The progression should only add information. Because of this, I believe that adding edges to make the image look sharper is not the most stress-free experience for the user.

    ((GUIs are complex, and it would be great if browsers made progressive rendering configurable -- for example, allowing people who are not in a hurry to remove it altogether.))

    Quote Originally Posted by Stefan Atev View Post
    I wonder if a sharpen pass on the second won't be able to resolve the issue b); This is not visible in the thumbnails, just when I look at the full resolution partially decoded images. I am frequently frustrated with the lack of indication (on my Android phone in Google Photos, for example) of whether I am looking at a partially or fully decoded image. A simple overlay, progress bar, anything to indicate that an image is partially decoded would be welcome as an option - though I don't see how you could do that in the jpeg decoder itself, looks like the wrong place for it.
    I tried to add just the right amount of sharpening :-D

    Also, the sharpening and the smoothing are different in different directions, i.e., the filter is not separable like simpler scaling filters are. The filter is linear, though.

    In this use case the upsampling result needs to be expressed sparsely in the DCT domain (otherwise there is too much computation at decoding time), so some residual square-like artefacts remain because of that.

    Quote Originally Posted by Stefan Atev View Post
    (Edit: A simple smoothing filter that preserves unity like the one being applied will always make the smooth version perceptually blurrier; I think some small amount of edge enhancement must be baked in; not sure if you can afford a nonlinear filter; or just dither the image before or after smoothing; a fake film grain will be better than a plasticky look IMHO)
    Some added noise is indeed usually better, but unfortunately it is not easy to realise in this use case for technical reasons.

    The intermediate progression result in libjpeg-turbo needs to be represented in DCT space, so adding noise would mean formulating that noise in the DCT space. The most beautiful noise needs to operate on a Laplacian representation, and that is not easy because it involves local interactions between blocks.
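    To illustrate why the DCT-domain constraint matters: the decoder's intermediate state is a grid of 8x8 coefficient blocks, so any smoothing or noise has to be written as extra coefficients. A flat block is exactly one DC coefficient, and a gentle gradient concentrates in the lowest AC terms - which is what makes a sparse representation of the smoothed preview feasible. A small self-contained DCT-II (orthonormal; `dct_1d` is a hypothetical helper name, shown in 1-D for brevity):

```python
import math

def dct_1d(v):
    """Orthonormal DCT-II of a length-N vector (conceptually the
    transform in which a JPEG decoder's intermediate state lives)."""
    n = len(v)
    out = []
    for k in range(n):
        s = sum(v[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

# A flat block is pure DC; a smooth ramp concentrates in low frequencies.
flat = dct_1d([100.0] * 8)                 # only flat[0] is nonzero
ramp = dct_1d([float(i) for i in range(8)])  # energy falls off with k
```

    Smoothing expressed this way touches only a few low-frequency coefficients per block, so the decode-time cost stays small; block-crossing operations (like Laplacian-domain noise) break this locality.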

  4. #4 - Stefan Atev (Member, USA)
    Quote Originally Posted by Jyrki Alakuijala View Post
    The GUI designed inside me say that it is wrong to give information (edges) and then quickly take it away and replace with other information. The progression should only increase information. Because of this I believe that the adding edges to make it look sharper is not the most stress free experience for a user.
    I think that's why it probably doesn't make much sense to ask me: I strongly _dislike_ the whole progressive loading of images - in my book, as long as you can tell the layout engine the image size, it can go do its own thing (and not re-layout my page while I'm reading it); if it's smart enough to first decode images that are actually visible at full resolution - great!

    Just imagine what has to go through my brain when I see a blurry image:
    1) Some really smart progressive decoding is trying to minimize flicker and present information to me kindly and gently; worth waiting to see the full-quality image
    2) Some bozo is using an image with too small a resolution and I get to see it upscaled
    3) Bozo in fact didn't know the difference between optical and digital zoom, and here I am, staring at a crop that's blurry as hell, transmitted at too high a resolution for what it contains.

    While I understand that you imagine the answer to the quiz to be 1), real-world experience will inevitably point to explanations 2) and 3). So I am not really sure the feature is worth the effort. But that's really for you to decide in the end...

    Cheers

  5. #5 - Jyrki Alakuijala (Member, Switzerland)
    Quote Originally Posted by Stefan Atev View Post
    I think that's why it probably doesn't make much sense to ask me: I strongly _dislike_ the whole progressive loading of images - in my book, as long as you can tell the layout engine the image size, it can go do its own thing (and not re-layout my page while I'm reading it); if it's smart enough to first decode images that are actually visible at full resolution - great!
    Progressive rendering knows the image size just as an eventual complete rendering does, and does not need to re-layout the page.

    Quote Originally Posted by Stefan Atev View Post
    Just imagine what has to go through my brain when I see a blurry image:
    1) Some really smart progressive decoding is trying to minimize flicker and present information to me kindly and gently; worth waiting to see the full-quality image
    2) Some bozo is using an image with too-small of a resolution and I get to see it upscaled
    3) Bozo in fact didn't know the difference between optical and digital zoom, and here I am, staring at a crop that's blurry as hell, and transmitted at a too-high resolution for what it contains.
    Image loads often take ~2-5 seconds. With progression one would usually see the first version after 500 ms, and another 500 ms later it would be replaced by a finer-quality version that is difficult to distinguish from the original at a quick glance.
    1. DC only
    2. quality 50-level rendering
    3. final quality rendering
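    As a concrete (illustrative, unverified against any particular encoder's defaults) example, a three-pass progression like the one above can be requested from libjpeg's cjpeg via a scan script, following the `component-list: Ss Se Ah Al;` syntax documented in libjpeg's wizard.txt:

```
# DC of all three components first (the coarse preview scan),
0,1,2: 0 0 0 0;
# then the AC coefficients coarsely (successive approximation,
# lowest bit withheld),
0: 1 63 0 1;
1: 1 63 0 1;
2: 1 63 0 1;
# then a refinement pass that restores final quality.
0: 1 63 1 0;
1: 1 63 1 0;
2: 1 63 1 0;
```

    Invoked as, e.g., `cjpeg -scans script.txt input.ppm > out.jpg`; a browser decoding such a file sees roughly the three stages listed above.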

    If you are searching for a photo in a collection of photos, the search may end up 3-10x slower without progression.

    Quote Originally Posted by Stefan Atev View Post
    While I understand that you imagine the answer to the quiz to be 1), real-world experience will point inevitably to explanations 2) and 3). So I am not really sure the feature is worth the effort. But that's really for you to decide in the end...

    Cheers
    Do you like the image to appear sequentially from top to bottom -- or would you rather wait for the whole image to appear at once?

    Are you ok with the image's average color being reflected in the background box before the image arrives?

  6. #6 - Stefan Atev (Member, USA)
    Yes, I prefer to see the image loading as data becomes available - when it's fast, I don't find it distracting; when it's slow, I find it a useful indicator of the network conditions I am experiencing. I don't expect browsers or apps to put progress indicators on individual images as they load, so a clearly visible progressive load is my only indication. I mentioned relayout because that's the only "progressive" aspect I care about (which also clearly has a solution that doesn't depend on any of the image being decoded, but people are lazy...)

    Now, you clearly have a lot of arguments as to why the other behavior is preferable, and I'm wondering why you are asking for confirmation of your own conclusions, especially in a forum full of somewhat "atypical" users. I don't need convincing that the option you've chosen is better for me when it's not - my preference is against software being too smart for my own good and not giving me a clear indication of what's going on.

    The smoothing of a partially decoded image makes it look too blurry without giving any indication that more detail is to come. And I find the "pop" when an image loads fully not distracting.

    I will shut up, because I didn't intend for my response to hijack the thread or even to sound confrontational - I like the work you and your team are doing. On a meta level, I find that when an engineering team reaches outside its tight circle to ask for "opinion" on a matter of taste, it usually means one of two things: a) the question is actually unimportant, or b) there is disagreement in the team and people are fishing for confirmation of their position.

    Cheers and keep up the good work.
    Last edited by Stefan Atev; 6th October 2020 at 16:53.

  7. #7 - Stefan Atev (Member, USA)
    Quote Originally Posted by Jyrki Alakuijala View Post
    If you are searching for a photo from a collection of photos, you may end up 3-10x slower with the search if without progression.
    I am not against progressive loading / decoding - so I don't think there is a time difference; I just think that trying to hide the fact that progressive decoding is occurring is unnecessary.

    Quote Originally Posted by Jyrki Alakuijala View Post
    Do you like the image to appear sequentially from top to bottom -- or rather wait fully for the image to appear at once?

    Are you ok with image's average color reflected in the color of the background box before the image arrives?
    I personally prefer not waiting for the image to be fully decoded before it's shown. I can see why matching a color could be useful - but matching not the image's average but its boundaries (say, when an image is designed to blend seamlessly into a background without using transparency to do so).

  8. #8 - Jyrki Alakuijala (Member, Switzerland)
    Quote Originally Posted by Stefan Atev View Post
    I am not against progressive loading / decoding - so I don't think there is a time difference; I just think that trying to hide the fact that progressive decoding is occurring is unnecessary.



    I personally prefer not waiting for the image to be fully decoded before it's shown. I can see why matching a color could be useful, but not for the average image but it's boundaries (say, when you have image that's designed to seamlessly blend into a background and is not using transparency to do so).
    Is this the right interpretation of your words:

    If progressive loading happens, the intermediate results should contain carefully chosen artefacts or other visual markers to indicate that it is a temporary image?

    Showing a solid background color while an image loads is ok, but not a white background and not a progressive version of the image?

  9. #9 - Adreitz (Member, USA)
    My appreciation for progressive JPEGs is due almost solely to the fact that they are more size-efficient. While better than the alternative of sudden image pop-in, a low-res image is an annoyance for me as my brain tries to interpolate what it should be seeing (and even more so if the image contains text). I'd have to see it in action to be sure, but I can imagine that I would find the smoothed preview more annoying than the blocky preview because my brain would momentarily think that my eyes are unfocused and prompt me to blink. The sharp discontinuities between blocks in the current typical progressive decoding at least reassure my brain that there isn't something wrong with my eyes. To me, that's less mental stress.

  10. Thanks:

    Stefan Atev (6th October 2020)

  11. #10 - Stefan Atev (Member, USA)
    Quote Originally Posted by Jyrki Alakuijala View Post
    Is this the right interpretation of your words:

    If progressive loading happens, the intermediate results should contain carefully chosen artefacts or other visual markers to indicate that it is a temporary image?

    Showing the solid background color while an image loads is ok, but not white background and not a progressive version of the image?
    Yes on the first interpretation - knowing that the image is temporary is of value (to me). I don't think what I said implies the second; if a solid color is shown, there is an argument to be made whether that color should match the average image content or the image surroundings. Showing a progressive view of the image (and updating it if the process takes long) is fine, but I would rather it be visually obvious that I'm looking at a partially decoded image - which the blocky version clearly shows (as does an image that has only its bottom or top third at full resolution).

    Also, I guess it really depends on the display resolution of the image. If the image is scaled down below 20-25% of its original resolution, then your blurry option and the blocky option are essentially the same to me. If the image is displayed at a higher resolution, the blurry version fails my preference for clearly knowing that the image I'm looking at is not final.

    For example, when I swipe through images in Google Photos on my phone, I hate the lack of indication that I'm looking at a preview, especially if I am browsing to find the "best looking" photo I've taken. In this case, I am probably looking at a downscaled image, but not by so much that I can't notice the difference between a preview and a properly downscaled full-res image. Now, if I'm scrolling through thumbnails of multiple images at a much lower resolution, I don't know that I would care about or distinguish between a downscaled "smooth" and a downscaled "blocky" image.

  12. #11 - Stefan Atev (Member, USA)
    Quote Originally Posted by Adreitz View Post
    I'd have to see it in action to be sure, but I can imagine that I would find the smoothed preview more annoying than the blocky preview because my brain would momentarily think that my eyes are unfocused and prompt me to blink. The sharp discontinuities between blocks in the current typical progressive decoding at least reassure my brain that there isn't something wrong with my eyes. To me, that's less mental stress.
    Jyrki, I think that's maybe a more coherent summary of my own thoughts than what I managed to write.

  13. #12 - skal (Member, France)
    Quote Originally Posted by Jyrki Alakuijala View Post
    Any thoughts on making progressive JPEGs look better? This issue relates to having only a dc scan available.

    https://github.com/libjpeg-turbo/lib...rbo/issues/459

    Less artefacts and less flicker -> more comfort -> more use of progression?


    or, you could use 164 bytes in the header for a quick triangulated thumbnail and skip the progressiveness...

    original:
    [Attachment: 95075827-25814780-0711-11eb-8a04-c91b72ca42a8.jpg, 402.6 KB]
    triangulated preview:
    [Attachment: thumb.jpg, 28.7 KB]

    [link to paper: https://arxiv.org/abs/1809.02257]

    skal/

  14. #13 - Jyrki Alakuijala (Member, Switzerland)
    Quote Originally Posted by skal View Post
    or, you could use 164 bytes in the header for a quick triangulated thumbnail and skip the progressiveness...

    [original / triangulated preview attachments and paper link omitted; see post #12]

    skal/
    Stefan, would Pascal's proposal work better for you?

  15. #14 - Jyrki Alakuijala (Member, Switzerland)
    Quote Originally Posted by Stefan Atev View Post
    Jyrki, I think that's maybe a more coherent summary of my own thoughts than what I managed to write
    If you observe the mass behaviour of people, on average people prefer progression even when it comes with some additional cost. For example, WebP vs. progressive JPEG: WebP is smaller, but switching from progressive JPEG to WebP has often resulted in a drop in sales or interaction, and some users migrated back to progressive JPEG (even with an increase in bytes sent).

    It should be acknowledged that there are individuals for whom progressive images are not pleasant. They would rather wait than deal with more information. (Personally, I'm the opposite: I hate slow systems and waiting for the computer more than anything else. :-D )

    The good news here is that the client software is in control of the progression -- iff the format supports progression. Thus, the best solution looks like adding more control in the browser to disable, tune, or delay progression. For example, progression is often done in three passes. Leaving the first pass unrendered, for those who don't enjoy progressive, could be an interesting option. Usually the difference between the 2nd and 3rd pass is so small that it is difficult to observe at all in that time, so it might be easier to accept even for the most sensitive people.

  16. #15 - skal (Member, France)
    Quote Originally Posted by Jyrki Alakuijala View Post
    switching from progressive jpeg to webp has often resulted in drop of sales or interaction
    What is the reference for that claim?

  17. #16 - Stefan Atev (Member, USA)
    Jyrki,

    I don't know how this got sidetracked into discussing whether progressive (encoding, decoding, loading, display) is bad/good. My claim was very specific to your question - between the two possible ways of displaying a partially decoded image, I prefer the blocky, clearly-not-complete look, as it lowers the cognitive load of figuring out whether I'm looking at the final image or some in-progress preview.

    I like anything that gets me to see a page faster, as long as I don't have to deal with weird input races / progressive relayout. So, let's take it as a given that a page is using progressive JPEGs and is showing images at the appropriate resolution. In that context, I would prefer the existing decoding over the smooth one.

    The "triangular preview" tries very hard to solve a problem I don't have, and requires extra effort both on the server side (to prepare the encodings) and on the client side (to decode something that's not as widely supported as JPEG). The "show a partially decoded image to minimize flicker" approach also solves a problem I don't have, at a cost that's likely negligible effort-wise; based on the comments about what is feasible and what is not, I assume you're trying to keep the computational overhead of the smoothing small.

    Of the three stages you mention, to me stages 1 and 3 are valuable, provided it's easy at a glance to figure out whether I'm looking at 1 (earliest preview) or 3 (final image). An intermediate, almost-final quality is not useful and increases cognitive load, because just by looking at it, I don't know if I'm seeing stage 2 of a good image or stage 3 of a slightly blurry image. Efforts to make stage 1 look less like the blocky low-res preview that it really is, using weird upscaling / smoothing, are similarly a minus: they reduce the visual cues about whether I'm looking at a preview or a final image. This is still clearly contextual - there are images where I care to know the final quality (my own photos / things I intend to print / order / etc.) and images where I don't care anyway (where, for all I care, you could decode to stage-2 quality and I wouldn't know any better).

  18. #17 - danlock (Member, USA)
    Quote Originally Posted by Stefan Atev View Post
    Of the three stages you mention, to me stage 1 and 3 are valuable, if it's easy at a glance to figure out whether I'm looking at 1 (earliest preview) or 3 (final image). An intermediate, almost-final quality is not useful and increases cognitive load, because just by looking at it, I don't know if I'm looking at 2 of a good image or 3 of a slightly blurry image. Efforts to make 1 less like the blocky low-res preview that it really is, using weird upscaling / smoothing, is similarly a minus: it reduces visual cues to me about whether I'm looking at a preview or a final image. This is still clearly contextual - there are images where I care to know the final quality (my own photos / things I intend to print / order / etc) and things where I don't care about the image anyway (where for all I care, you could decode to pass 2 quality and I won't know any better).
    It sounds to me like you (and people who think similarly) will not be happy with anything less than either an internal indicator (part of the image, shown while the image loads) or an external one (displayed as an overlay by something in the browser) of how much of the image has loaded, or some indication that the image is fully loaded.

    A progressive model of some sort which displays images at full size while indicating whether they're still loading or complete seems like a good option if the browser itself is unable to keep the in-view text from moving while graphical content loads and is placed (by the browser) above and below whatever text the user is reading. However, if the browser can guarantee before it loads the page that it will keep the text in place, would you prefer seeing the final images only after they've finished loading?

    How much overhead would an in-progress indicator (either provided by the browser or an extension, or as part of a loading image) require? If it's built in and the browser keeps in-view text from moving, the additional bytes and time required might not be much. It might need to be able to display whether the image is still loading, has successfully finished loading, or has aborted after a partial load. If the indicator is part of the image(s) being loaded, well, that's another matter and how it will change actual and perceived loading time will differ depending on the size of the extra data, the transmission speed to the user, and other things. A page with a large number of images, each with its own in-line metadata, might take a long time to fully load.

    Think of security as well. If JPEGs can have in-line or separate but simultaneously loaded metadata that indicates how much of an image has been loaded/displayed, it might introduce a way for "bad guys" to insert data into those images and/or metadata that causes the rendering of the image(s) to break out of the browser's internal sandbox or give scripts/etc. access to more of the user's OS than is warranted, as has been demonstrated in the past with deliberately-malformed images.

    I know some browsers have experimented with options that load images only after the page is scrolled to or beyond them, which might reduce the perceived page load time to very little depending on various factors, but I think that's off-topic in this discussion, which is about image encoding, or... actually, rendering. Am I over-thinking this and going too far beyond the topic of better progressive rendering of JPEGs?

  19. #18 - Stefan Atev (Member, USA)
    danlock, the original question at the start of the thread was very simple: which of these two slightly different-looking low-res in-progress images is better for progressive JPEG display? I prefer the first, because it gives me a clearer indication of the temporary status of the image. However, "people like me" are satisfied and "happy" with things as they are - I don't expect that an actual progress overlay for images being loaded will ever be implemented by any mainstream browser; that would be cool, but not something that would drive which browser I choose to use on a daily basis.

    I guess people are stuck on the "I strongly dislike progressive loading", but really my biggest issue is relayout, and I should have expressed that better. Let's try again: "I positively hate pages changing layout after I've started interacting with them, in many contexts I strongly dislike not knowing if an image is done loading or not yet, and the proposed change to the decoding will move things in the wrong direction, IMHO"

  20. #19 - danlock (Member, USA)
    Thank you, Stefan Atev. I guess I did, after all, get sidetracked discussing progressive loading. Unfortunately, I tend to do that... digress from a topic as I continue thinking about it. That wasn't a problem prior to my traumatic brain injury, but that was long ago and beside the point (as I digress now from what I was saying! sigh).

    I know the frustration caused by layout changes while I'm browsing. When I'm reading pages on a mobile browser, it seems that annoying advertisements move the page content around far more frequently than I'd like. Also, I don't know if it's just me, but it seems that those ads load after the page content (text and images), something which can be especially frustrating at times.

    I mentioned keeping the in-view text from moving as the page is assembled a couple of times (paragraph 2), and, in fact, although I neglected to mention it specifically the second time, rendering the page with placeholders for images while they load would keep the layout the same no matter where on the page the user is reading or to what place in the page the user has scrolled, without regard for the number of images or to what extent they've loaded.

    @Jyrki Alakuijala: You wondered whether changing the background of a loading image to the average of the loaded portion would be helpful. If the images use placeholders and therefore do not alter the page layout but change the background color as they load, they might still be distracting. But it might also solve the problem of knowing whether the complete image is shown. If the image is loading from top to bottom in a placeholder space and the background is changing as it loads, the user would be able to see the full image size during load and would know the image is finished when no solid color which differs from the page background remains at the bottom of the image! That would work for both images the user sees during loading and unseen images, unless (in the latter case) the average is the same as the page background, something which seems unlikely.

  21. #20 - Jyrki Alakuijala (Member, Switzerland)
    Quote Originally Posted by Stefan Atev View Post
    I prefer the first, because it gives me a clearer indication of the temp status of the image. However, "people like me" are satisfied and "happy" with things as they are
    Did you notice when JPEG progression moved from true 8x8 pixel blocks to the first example?

  22. #21 - Stefan Atev (Member, USA)
    Jyrki, I am not sure what you're asking. The first low-res example is "blocky enough" - even if it's not pure 8x8, it serves the same purpose. If you had decoded the low-res and used bilinear or other upscaling instead of something approximating nearest-neighbor, it would also not have clearly indicated that more's to come. Maybe I have learned to interpret NN interpolation as "temporary" and other interpolation (or apparent interpolation) as "final, upscaled".

  23. #22 - Jyrki Alakuijala (Member, Switzerland)
    Quote Originally Posted by Stefan Atev View Post
    Jyrki, I am not sure what you're asking. The first low-res example is "blocky enough" - even if it's not pure 8x8, it serves the same purpose. If you had decoded the low-res and used bilinear or other upscaling instead of something approximating nearest-neighbor, it would also not have clearly indicated that more's to come. Maybe I have learned to interpret NN interpolation as "temporary" and other interpolation (or apparent interpolation) as "final, upscaled".
    https://github.com/libjpeg-turbo/lib...rbo/issues/343 improved progressive rendering smoothing behaviour in browsers significantly in 2019

  24. #23
    Member
    Join Date
    Aug 2016
    Location
    USA
    Posts
    69
    Thanks
    16
    Thanked 21 Times in 16 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    https://github.com/libjpeg-turbo/lib...rbo/issues/343 improved progressive rendering smoothing behaviour in browsers significantly in 2019
    The pure DC and DC with smoothing are both "blocky enough". I actually disagree with the GitHub comment that says the "pure DC" is worse - I prefer it over the "smoothed DC". As many Russian speakers here would say, "на вкус и на цвет товарищей нет" ("there's no accounting for taste"). The 12000-byte decoding example also demonstrates at a glance that the image is only partially loaded.

  25. #24
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    902
    Thanks
    246
    Thanked 326 Times in 199 Posts
    Quote Originally Posted by Stefan Atev View Post
    The pure DC and DC with smoothing are both "blocky enough". I actually disagree with the GitHub comment that says the "pure DC" is worse - I prefer it over the "smoothed DC". As many Russian speakers here would say, "на вкус и на цвет товарищей нет" ("there's no accounting for taste"). The 12000-byte decoding example also demonstrates at a glance that the image is only partially loaded.
    Do you think the 8x8 blockiness is the most appropriate visual hint for indicating an image that hasn't fully loaded?

    What about slanted stripes over the image, or something like a plastic foil that is 'peeled off' as the remaining AC coefficients are loaded?

  26. #25
    Member
    Join Date
    Aug 2016
    Location
    USA
    Posts
    69
    Thanks
    16
    Thanked 21 Times in 16 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    Do you think the 8x8 blockiness is the most appropriate visual hint for indicating an image that hasn't fully loaded?

    What about slanted stripes over the image, or something like a plastic foil that is 'peeled off' as the remaining AC coefficients are loaded?
    I don't know about "most appropriate". Of the three I've seen (pure DC, smooth DC, veeery smooth DC), it is the clearest to me. I am not sure it's the job of the jpeg decoder to worry about it, but there may be a better option. I just don't know what it is without actually doing a comparison.

  27. #26
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    902
    Thanks
    246
    Thanked 326 Times in 199 Posts
    Quote Originally Posted by Stefan Atev View Post
    I don't know about "most appropriate". Of the three I've seen (pure DC, smooth DC, veeery smooth DC), it is the clearest to me. I am not sure it's the job of the jpeg decoder to worry about it, but there may be a better option. I just don't know what it is without actually doing a comparison.
    Some people love progressive JPEGs (vs. sequential). My understanding (based on entirely non-scientific Twitter discussions and my own sampling) is that about 60% of people prefer progressive, 30% have no opinion when shown the two alternatives, and 10% dislike progressive.

    The best hypothesis I have heard for why some dislike progressive is that it is 'too much information' for their brains, and it reduces their focus.

    Switches in rendering style add to the features that those brains need to process. Not showing features in the progressive image that disappear in the final version would reduce the amount of work the brain needs to do to interpret the image.

    Of course many alternative hypotheses are possible.

