
Thread: WebP (Lossless April 2012)

  1. #1
    Member caveman's Avatar
    Join Date
    Jul 2009
    Location
    Strasbourg, France
    Posts
    190
    Thanks
    8
    Thanked 62 Times in 33 Posts

    WebP (Lossless April 2012)

    According to this post in WebP Discussion, .webpll is nearing completion.
    This new thread is here to receive contributions/tests/discussion about the new (March/April 2012) WebP Lossless Bitstream specification.

  2. #2
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,479
    Thanks
    26
    Thanked 122 Times in 96 Posts
    LZ77 for image data? I thought LZ77 was outdated.

  3. #3
    Tester
    Stephan Busch's Avatar
    Join Date
    May 2008
    Location
    Bremen, Germany
    Posts
    876
    Thanks
    474
    Thanked 175 Times in 85 Posts
    Oh yes it is, but the developers wrote to me that WebP lossless is dedicated to web graphics and therefore needs fast compression/decompression.
    They were not interested in using e.g. BCIF or any other free lossless image compression library that focuses on compression performance.

  4. #4
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by Stephan Busch View Post
    Oh yes it is, but the developers wrote to me that WebP lossless is dedicated to web graphics and therefore needs fast compression/decompression. They were not interested in using e.g. BCIF or any other free lossless image compression library that focuses on compression performance.
    Highly irritating, given that there are fast codecs that perform quite well, and not only on web graphics. For example, JPEG LS is quite fine for lossless coding, and WebP lossless has, up to now at least, been extremely slow. So I wonder why they are reinventing the wheel instead of using what is available. There are faster codecs available, and there are better but slower codecs available, so what is the point?

  5. #5
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by Piotr Tarsa View Post
    LZ77 for image data? I thought LZ77 was outdated.
    Well, that too. But LZ77 does not even use a good model for image data. It is the wrong type of compression for images in the first place, namely dictionary-based. Here Markov random field models would be (and are) more appropriate.
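    To illustrate the difference: a dictionary coder looks for exact repeats of byte strings, while a Markov-field-style model predicts each pixel from its already decoded neighbours and codes only the residual. Below is a minimal Python sketch of the latter idea using the MED predictor from JPEG-LS; it is only an illustration, not a claim about which predictors WebP itself uses.

    Code:
    import numpy as np

    def med_predict(a, b, c):
        """MED predictor as in JPEG-LS: a = left, b = above, c = above-left."""
        if c >= max(a, b):
            return min(a, b)
        if c <= min(a, b):
            return max(a, b)
        return a + b - c

    def residuals(img):
        """Greyscale image -> prediction residuals from causal neighbours only."""
        img = img.astype(np.int32)
        res = np.zeros_like(img)
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                a = img[y, x - 1] if x > 0 else 0
                b = img[y - 1, x] if y > 0 else 0
                c = img[y - 1, x - 1] if x > 0 and y > 0 else 0
                res[y, x] = img[y, x] - med_predict(a, b, c)
        return res  # near zero on smooth regions, ready for an entropy coder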

  6. #6
    Member Karhunen's Avatar
    Join Date
    Dec 2011
    Location
    USA
    Posts
    91
    Thanks
    2
    Thanked 1 Time in 1 Post
    Since I am not familiar with the theory behind what constitutes a good lossless codec, are you speaking of PPM-type methods? I have found that low-color, high-contrast images like cartoons compress well with PPM coders like 7-Zip, and so do bitonal ones, although the long runs of zeroes in a b&w image usually mean I convert them to RLE PCX or Utah Raster RLE before applying PPM.

  7. #7
    Member
    Join Date
    May 2008
    Location
    England
    Posts
    325
    Thanks
    18
    Thanked 6 Times in 5 Posts
    Quote Originally Posted by thorfdbg View Post
    Highly irritating, given that there are fast codecs that perform quite well, and not only on web graphics. For example, JPEG LS is quite fine for lossless coding, and WebP lossless has, up to now at least, been extremely slow. So I wonder why they are reinventing the wheel instead of using what is available. There are faster codecs available, and there are better but slower codecs available, so what is the point?
    WebPll encoding is now extremely fast, as I mentioned in the other WebP thread. Files have gone from taking 7-8 minutes to encode down to 2 or 3 seconds. In many cases compression is improved, but overall it's about 3% worse size-wise. It's actually usable now, and gives far better results than PNG.

  8. #8
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by Intrinsic View Post
    WebPll encoding is now extremely fast, as I mentioned in the other WebP thread. Files have gone from taking 7-8 minutes to encode down to 2 or 3 seconds. In many cases compression is improved, but overall it's about 3% worse size-wise. It's actually usable now, and gives far better results than PNG.
    Well, we'll see. 2-3 seconds is still pretty slow compared to the state of the art. But as said, I'll probably run it over the JPEG Core 1 test set in the next few days and see how it performs.

  9. #9
    Member
    Join Date
    May 2008
    Location
    England
    Posts
    325
    Thanks
    18
    Thanked 6 Times in 5 Posts
    Well, the time taken isn't for the compression part; it's for pre-processing the data to be compressed.

  10. #10
    Member caveman's Avatar
    Join Date
    Jul 2009
    Location
    Strasbourg, France
    Posts
    190
    Thanks
    8
    Thanked 62 Times in 33 Posts
    Quote Originally Posted by thorfdbg View Post
    But as said, I'll probably run it over the JPEG core 1 test set in the next days and see how it performs.
    WebP Lossless is intended to replace the aging PNG file format (web graphics are a mix of things like icons, background patterns, pictograms, buttons, logos, rasterized text...), so why would you test it against continuous-tone pictures?
    https://developers.google.com/speed/...ss_alpha_study

  11. #11
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by caveman View Post
    WebP Lossless is intended to replace the aging PNG file format (web graphics are a mix of things like icons, background patterns, pictograms, buttons, logos, rasterized text...), so why would you test it against continuous-tone pictures? https://developers.google.com/speed/...ss_alpha_study
    Why should I replace something that is working? I mean, PNG is in all browsers already, and it would take years to replace that - remember how long it took to have PNG supported by the IE browser family. It's an uphill battle for few returns. Did JPEG 2000 replace JPEG, even though it is better? No, it didn't. Why would I test it with continuous-tone images? Because people use it this way, basically? (-; I don't think the average web developer makes much of a distinction there. If it has to go into a web page, use PNG or JPEG - that's as much as they know now.

  12. #12
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,479
    Thanks
    26
    Thanked 122 Times in 96 Posts
    With the advent of HTML5 and other technologies, you can implement any decompression algorithm on the client side, I think. There are WebP demos for browsers with no WebP support, AFAIR.

  13. #13
    Member
    Join Date
    Jan 2007
    Location
    Moscow
    Posts
    239
    Thanks
    0
    Thanked 3 Times in 1 Post
    Forget those IE6 days. IE is not a monopolist now. If Chrome + FF support a feature, all other browsers will support it within 6 months, or they'll die.
    Decompression via JS is a proof of concept only. 1 image - OK, 10 - OK, 100 - as on many sites, times several open browser windows - equals a slow memory hog.

  14. #14
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,479
    Thanks
    26
    Thanked 122 Times in 96 Posts
    JPEGs are usually heavy, but PNGs are usually light, so for replacing PNGs a JS decoder would be enough IMO. The layout could be cached in local storage, reducing the resource requirements.

  15. #15
    Member Karhunen's Avatar
    Join Date
    Dec 2011
    Location
    USA
    Posts
    91
    Thanks
    2
    Thanked 1 Time in 1 Post
    New binaries for Windows are available and the version number is 0.2.0; the listed upload date is August 16. Apparently lossless is now handled by cwebp and dwebp themselves, but the lossless files they produce are apparently not compatible with webpll2png.
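    For anyone who wants to try this, a small sketch of driving the 0.2.0 command-line tools from Python; it assumes cwebp and dwebp are on the PATH and uses the -lossless switch that selects the lossless codec in these builds.

    Code:
    import subprocess

    def png_to_webp_lossless(src_png, dst_webp):
        # cwebp 0.2.0+: -lossless switches from the lossy VP8 path to the lossless codec
        subprocess.run(['cwebp', '-lossless', src_png, '-o', dst_webp], check=True)

    def webp_to_png(src_webp, dst_png):
        # dwebp decodes either flavour back to PNG
        subprocess.run(['dwebp', src_webp, '-o', dst_png], check=True)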

  16. #16
    Member caveman's Avatar
    Join Date
    Jul 2009
    Location
    Strasbourg, France
    Posts
    190
    Thanks
    8
    Thanked 62 Times in 33 Posts

    "Faster, smaller and more beautiful web with WebP" event by GDL (Google Developers Live) today 1 PM PST (10 PM CET).

    I don't know the content of the event.

  17. #17
    Member
    Join Date
    Sep 2010
    Location
    US
    Posts
    126
    Thanks
    4
    Thanked 69 Times in 29 Posts
    Quote Originally Posted by caveman View Post
    "Faster, smaller and more beautiful web with WebP" event by GDL (Google Developers Live) today 1 PM PST (10 PM CET).

    I don't know the content of the event.
    I'm so bothered by the constant lies about compressor effectiveness:

    "30-80% smaller image files when compared to JPEG and PNG!"

    30% on the low end? The low end has got to be 0%, and probably a negative percentage. (If they were all bijective, a negative percentage would be required, but since they're not, 0% is possible though unlikely.) (And the 80% on the high end is at best misleading.)

    The space/speed tradeoff of webp-ll does seem to be very good.

    A better hybrid image compressor should be possible quite easily, though. Use arbitrary-size rectangular blocks, where a block can either be a 2D LZ copy or be residual-predicted and entropy-coded like BMF/TMW/whatever.
    Last edited by cbloom; 3rd August 2016 at 21:42.
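    A very rough sketch of that hybrid scheme, under simplifying assumptions that are not part of the proposal above: fixed 8x8 blocks instead of arbitrary rectangles, exact-match 2-D LZ against already coded blocks only, a plain left predictor for the residual mode, and no entropy coder on top.

    Code:
    import numpy as np

    BLOCK = 8  # fixed size here; the proposal allows arbitrary rectangles

    def encode(img):
        """Per block, emit either a copy of an identical earlier block (2-D LZ)
        or left-predicted residuals. Assumes a uint8 greyscale image whose
        dimensions are multiples of BLOCK."""
        h, w = img.shape
        seen = {}                  # block bytes -> (y, x) of first occurrence
        tokens = []
        for by in range(0, h, BLOCK):
            for bx in range(0, w, BLOCK):
                blk = img[by:by+BLOCK, bx:bx+BLOCK]
                key = blk.tobytes()
                if key in seen:                      # exact match in the causal past
                    tokens.append(('copy', seen[key]))
                else:
                    left = np.zeros_like(blk)
                    left[:, 1:] = blk[:, :-1]        # predict from the left neighbour
                    tokens.append(('resid', blk.astype(np.int16) - left))
                    seen[key] = (by, bx)
        return tokens

    def decode(tokens, h, w):
        out = np.zeros((h, w), dtype=np.uint8)
        it = iter(tokens)
        for by in range(0, h, BLOCK):
            for bx in range(0, w, BLOCK):
                kind, payload = next(it)
                if kind == 'copy':
                    sy, sx = payload
                    out[by:by+BLOCK, bx:bx+BLOCK] = out[sy:sy+BLOCK, sx:sx+BLOCK]
                else:
                    blk = np.zeros((BLOCK, BLOCK), dtype=np.int16)
                    for x in range(BLOCK):
                        blk[:, x] = payload[:, x] + (blk[:, x - 1] if x > 0 else 0)
                    out[by:by+BLOCK, bx:bx+BLOCK] = blk.astype(np.uint8)
        return out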

  18. #18
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by cbloom View Post
    I'm so bothered by the constant lies about compressor effectiveness : "30-80% smaller image files when compared to jpeg and png!"
    That's pretty much an apples-to-oranges comparison. JPEG is a lossy protocol, so you can make the files as small as you like - it's just that the quality is lousy. So a rate-distortion graph is called for. Second, JPEG is not even state of the art anymore - they should compare to JPEG 2000. Third, PNG is not exactly a good lossless image compressor either, because LZ is not a good engine for residual signals. Such signals are statistical in nature and do not follow the regularities of the human text that LZ was designed for. If lossless performance is of interest, JPEG LS is a much better candidate.
    Quote Originally Posted by cbloom View Post
    A better hybrid image compressor should be possible very easily though. Do arbitrary size rectangular blocks, where a block can either be a 2d LZ copy or it can be residual-predicted-entropy-coded like BMF/TMW/whatever.
    All the BMF/LZ/TMW approaches are not reasonable choices for natural images. Such sources are quite different from computer programs or human text, and any type of "dictionary" or "lookup" approach that these methods work on - namely, finding exact copies of the signal in the already-processed data - is doomed to fail on natural images. Natural images never copy data precisely, only approximately. The quality of image compression codecs is measured by rate-distortion graphs, as for example generated by the online test: http://jpegonline.rus.uni-stuttgart.de/index.py
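    For reference, a point on such a rate-distortion graph is just a (rate, quality) pair. A minimal sketch with PSNR as the quality index follows; PSNR is a crude stand-in for perceptual quality, but it is one of the two indices the linked online test reports.

    Code:
    import math
    import numpy as np

    def rd_point(original, decoded, compressed_size_bytes):
        """One point for a rate-distortion plot: (bits per pixel, PSNR in dB)."""
        h, w = original.shape[:2]
        bpp = 8.0 * compressed_size_bytes / (h * w)
        diff = original.astype(np.float64) - decoded.astype(np.float64)
        mse = np.mean(diff ** 2)
        psnr = float('inf') if mse == 0 else 10.0 * math.log10(255.0 ** 2 / mse)
        return bpp, psnr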

  19. #19
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 72 Times in 56 Posts
    Quote Originally Posted by thorfdbg View Post
    Natural images never copy data precisely, only approximately. The quality of image compression codecs is measured by rate-distortion graphs, as for example generated by the online test: http://jpegonline.rus.uni-stuttgart.de/index.py
    I'm pretty sure the gold standard for measuring lossy codecs like JPEG would have to involve A/B testing with real people, because it's perceptual, and the goal is to eliminate what the eye can't see. A computerized test is only standing in for real eyes. Somewhere I read that JPEG tends to overperform in tests with real people, even though it gets mediocre numbers in computerized tests.

  20. #20
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,479
    Thanks
    26
    Thanked 122 Times in 96 Posts
    There are deblocking algorithms for JPEG, like: http://www.cs.tut.fi/~foi/SA-DCT/res...ml#ref_deblock
    Additionally there are stronger entropy coders for lossless JPEG recompression like PackJPG or StuffIt (or PAQ if someone has a lot of time to spend).

    Those two things combined make JPEG much more competitive with more recent proposals. And don't forget that JPEG itself has relatively low computational complexity. Are there benchmarks of speed vs. coding efficiency for JPEG and lossy WebP?

  21. #21
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by Piotr Tarsa View Post
    There are deblocking algorithms for JPEG, like: http://www.cs.tut.fi/~foi/SA-DCT/res...ml#ref_deblock Additionally there are stronger entropy coders for lossless JPEG recompression like PackJPG or StuffIt (or PAQ if someone has a lot of time to spend). Those two things combined makes JPEG much more competitive to more recent proposals. And don't forget that JPEG itself has relatively low computational complexity. Are there benchmarks of speed vs coding efficiency of JPEG and WebP lossy?
    I agree that subjective testing is the gold standard. We (as in "The JPEG People") do that, of course. The result is that up to about 1 to 1.5 bpp, all methods are equivalent and the images cannot really be distinguished by people. Below that, JPEG 2000 and JPEG XR are not too far apart, but JPEG is much worse.

    Concerning speed vs. quality: indeed, benchmarks have been made. I can send you a paper if you like; please contact me at thor(at)math(dot)tu(dash)berlin(dot)de. The result is actually a bit surprising: I did *not* measure open source implementations, but highly optimized industrial implementations (including assembly language optimizations etc.). I also included the IJG code for completeness. It is then interesting to see that JPEG 2000 is actually *less* complex (bummer!) than JPEG XR up to about 1 bpp for encoding, and even at some higher rates for decoding. JPEG works best in terms of "quality per millisecond spent" for the industrial solution, but JPEG 2000 is actually not so much harder - it's not a factor of 10x as people often claim, but probably a factor of 2 or so, and IIRC on par with IJG (bummer!).

    The implementations there only used one CPU core, but J2K is trivial to multithread and scales almost linearly with the number of cores. It's not so easy for JPEG due to the inter-block dependency, and it's almost outright impossible for JPEG XR. Only some trivial parts can be parallelized (like component decorrelation and upsampling/downsampling). Again, if you're interested, write me a short note. This paper appeared, IIRC, in last year's SPIE "Applications of Digital Image Processing", part of the Optics+Photonics convention.
    Last edited by thorfdbg; 10th March 2013 at 17:25.

  22. #22
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,479
    Thanks
    26
    Thanked 122 Times in 96 Posts
    Quote Originally Posted by thorfdbg View Post
    I agree that subjective testing is the gold standard. We (as in "The JPEG People") do that, of course. The result is that up to about 1 to 1.5 bpp, all methods are equivalent and the images cannot really be distinguished by people. Below that, JPEG 2000 and JPEG XR are not too far apart, but JPEG is much worse.
    I assume those tests were conducted with Huffman-backed JPEGs. What about replacing Huffman with something based on arithmetic coding, adding deblocking filters and optimizing quantization tables? JPEG should be much more competitive then.

    Quote Originally Posted by thorfdbg View Post
    The implementations there only used one CPU core, but J2K is trivial to multithread and scales almost linearly with the number of cores. It's not so easy for JPEG due to the inter-block dependency, and it's almost outright impossible for JPEG XR.
    What's so hard about multithreading plain JPEG? IIRC JPEG supports restart markers, so even entropy coding can be completely separated into concurrently encoded/decoded chunks.

    What about lossless versions of JPEG, JPEG 2000 and JPEG XR? Do they easily multithread?

  23. #23
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by Piotr Tarsa View Post
    I assume those tests were conducted with Huffman-backed JPEGs. What about replacing Huffman with something based on arithmetic coding, adding deblocking filters and optimizing quantization tables? JPEG should be much more competitive then.
    You assume right. I personally haven't done these tests - I don't have a vision lab for proper subjective testing - but the reason for *not* using AC was, to my knowledge, that it simply isn't used in the field. Deblocking, however, is; but since the EPFL group did not seem to have access to such an implementation, they didn't do it. I myself would... I can only deliver SSIM and PSNR at jpegonline.
    Quote Originally Posted by Piotr Tarsa View Post
    What's so hard about multithreading plain JPEG? IIRC JPEG supports restart markers, so even entropy coding can be completely separated into concurrently encoded/decoded chunks.
    First of all, for that the source must have restart markers enabled, which you cannot assume for decoding. Second, you need at least a serial post-processing step that undoes the DC prediction (also possible). Thus, for encoding it is an option, yes (a byte-scan sketch of splitting at restart markers follows at the end of this post). For decoding, running the DCT on a GPU might be an option, but there is a noticeable overhead in triggering the GPU in the first place, so it only makes sense if you need to decode a lot of images sequentially, for example as in Motion-JPEG, and memory is not at a premium.
    Quote Originally Posted by Piotr Tarsa View Post
    What about lossless versions of JPEG, JPEG 2000 and JPEG XR? Do they easily multithread?
    JPEG: No. This is a purely sequential prediction-based code, rather primitive. Again, on encoding you could run lines in parallel and either insert restart markers or stitch the Huffman output together afterwards by bit-shifting larger amounts of data, which is hardly worth it. For decoding you can't, since line N depends on the values of line N-1.

    JPEG 2000: Yes, EBCOT blocks are coded independently and thus can be decoded independently. They are also large enough to make scheduling on several CPUs feasible (been there, done that). JPEG 2000 lossless isn't different from the lossy case; it's only a different wavelet. Again, the wavelet can be done on the GPU, but it's usually not worth it due to the GPU call overhead (been there, done that). CPU vector instructions (SSE, SSE2) work equally well and cause less overhead.

    JPEG XR: Near to impossible, because the prediction direction for the lowest-frequency AC coefficients depends serially on previous blocks. You *could* parallelize if the image uses tiles, as these are coded independently, but you cannot take that for granted. Again, lossy and lossless do not differ here at all; the PCT and POT are lossless right away, it is only a matter of entropy coding.
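    The restart-marker point can be made concrete: inside baseline JPEG entropy-coded data, 0xFF bytes are stuffed with 0x00, so the restart markers RST0-RST7 (0xFFD0-0xFFD7) can be found by a plain byte scan and the intervals between them handed to separate threads. A sketch, assuming a single-scan file whose encoder actually emitted restart markers:

    Code:
    def split_at_restart_markers(jpeg_bytes):
        """Split the entropy-coded segment of a baseline JPEG at RSTn markers.

        Returns byte chunks that could be entropy-decoded concurrently and then
        reassembled in order. Assumes a single scan whose encoder inserted RSTn
        markers.
        """
        sos = jpeg_bytes.find(b'\xff\xda')            # start of scan (SOS)
        if sos < 0:
            raise ValueError('no SOS marker found')
        header_len = int.from_bytes(jpeg_bytes[sos + 2:sos + 4], 'big')
        pos = sos + 2 + header_len                    # first entropy-coded byte

        chunks, start = [], pos
        while pos + 1 < len(jpeg_bytes):
            if jpeg_bytes[pos] == 0xFF:
                marker = jpeg_bytes[pos + 1]
                if 0xD0 <= marker <= 0xD7:            # RST0..RST7: chunk boundary
                    chunks.append(jpeg_bytes[start:pos])
                    start = pos + 2
                    pos += 2
                    continue
                if marker == 0xD9:                    # EOI: end of image
                    break
            pos += 1
        chunks.append(jpeg_bytes[start:pos])
        return chunks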

  24. #24
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,479
    Thanks
    26
    Thanked 122 Times in 96 Posts
    What about memory consumption then? JPEG has minimal memory requirements if we encode/decode sequentially. JPEG 2000 is hierarchical AFAIR, so unless the image is split into independent, not-so-large tiles, memory consumption would be proportional to the image size (i.e. it would be huge).

    As to GPU call overhead:
    I don't know which API you've used, but I once developed some OpenCL programs (more precisely, a failed attempt to do a BWT transform on a GPGPU), and I remember that OpenCL programs can be precompiled on a client machine and then reused on that machine. Such reuse should be much faster than recompilation, and in the default mode OpenCL programs are recompiled on every run, I think.

  25. #25
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by Piotr Tarsa View Post
    What about memory consumption then? JPEG has minimal memory requirements if we encode/ decode sequentially. JPEG2000 is hierarchical AFAIR, so unless the image is split into independent not-so-large tiles, memory consumption would be proportional to image size (ie would be huge).
    That depends on the progression mode. If you use a progression mode with position as the slowest variable, and enable precincts, you can compress images that do not fit into host memory without using tiles. Rate allocation becomes trickier then - you need a control loop to hit the target rate approximately. With tiles, you only need to hold one tile in memory, of course. But the same rules also apply to JPEG and JPEG XR: JPEG in progressive mode also has to keep the image in memory (in one way or another), and JPEG XR in frequency order requires the same.
    Quote Originally Posted by Piotr Tarsa View Post
    As to GPU call overhead: I don't know which API you've used, but I've developed some OpenCL programs once (more precisely: that was a failed attempt to do BWT transfrom on GPGPU) and I remember that OpenCL programs can be precompiled on a client machine and then reused on that machine. Such reuse should be much faster than recompilation and in default mode OpenCL programs are recompiled on every run, I think.
    Well, that's only one problem. Actually, the precompilation is - on NVIDIA cards - only a pre-compilation into a pseudo-assembly language, still clear text and human-readable, that still needs to be assembled for the actual card. But the real overhead is elsewhere: you need to move the image data from main memory to GPU memory, fire up the GPU through the host OS API, then - probably - do something else until the GPU is finished, and collect the data, for which you need to copy back over the PCIe bus. This, plus the OS API overhead, can easily kill the performance. Interestingly, the API overhead on Windows, at least for NVIDIA cards, is much higher than on Linux.

  26. #26
    Member
    Join Date
    Sep 2010
    Location
    US
    Posts
    126
    Thanks
    4
    Thanked 69 Times in 29 Posts
    Quote Originally Posted by Piotr Tarsa View Post
    I assume those tests were conducted with Huffman-backed JPEGs. What about replacing Huffman with something based on arithmetic coding, adding deblocking filters and optimizing quantization tables? JPEG should be much more competitive then.
    Indeed, you are correct. I looked into this extensively; see for example :

    http://cbloomrants.blogspot.com/2012...ison-post.html
    http://cbloomrants.blogspot.com/2011...-jxr-test.html

    Summary : JPEG+arith+deblock is on par with all the modern compressors.

    J2K with human-visual RD is the best known lossy compressor at the moment. I'm sure that some other modern codecs could be better (H.265 for example), but they haven't had the hard work of HVS-RD done on them yet.

    Whenever someone posts a claim about JPEG being terrible, it's always because they've driven the compression down well below 1 bpp, where JPEG-huff runs into major inefficiency. (Basically, they're cheating liars; it's like running a text compressor on binary data and claiming it's terrible.)

    The only thing really compelling about webp-lossy is that it is now guaranteed patent-free due to Google's recent VP8 licensing action.
    Last edited by cbloom; 3rd August 2016 at 21:42.

  27. #27
    Member
    Join Date
    Sep 2010
    Location
    US
    Posts
    126
    Thanks
    4
    Thanked 69 Times in 29 Posts
    Quote Originally Posted by thorfdbg View Post
    That's pretty much an apples to oranges comparison. JPEG is a lossy protocol, so you can make it as small as you like - just the quality is lousy.
    Google is comparing webp-lossless to PNG and webp-lossy to JPEG.

    They (the webp's) are really two completely unrelated technologies with the same name.
    Last edited by cbloom; 3rd August 2016 at 21:42.

  28. #28
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by cbloom View Post
    Indeed, you are correct.
    Summary : JPEG+arith+deblock is on par with all the modern compressors.
    I don't think this really applies, but it is of course a question of the quality indices you use (I avoid "metric", because that is something different). Anyhow, I'm not trying to hide anything; at least for multiscale SSIM, the online test allows you to check for yourself. (Sorry, it was offline last week due to a configuration error on the server; it is now working again.) I do not see much of an advantage for AC coding here; the improvements are not too high.

    As for deblocking: we're currently struggling with this issue as well, and trying to come to a conclusion on whether or not to include it in tests. There are arguments for and against it, and it's not quite so easy. Conceptually, any postprocessing is not part of the standard, so you're not testing a standard but an implementation of it, and the output of any codec could be improved by post-filtering. So when comparing codecs side by side, you should also compare post-processors side by side, which extends the complexity and the available options. On the other hand, the approach is not quite "down to earth", because such postprocessors should then be used anyhow since they are used in the real world - but which post-processor to use is again task-dependent, i.e. it depends on the type of image you compress and on the artistic intent of the image, so it's not quite obvious what to do.

    Quote Originally Posted by cbloom View Post
    J2K with human-visual RD is the best know lossy compressor at the moment. I'm sure that some other modern codecs could be better (H265 for example), but they haven't had the hard work of HVS-RD done on them yet.
    That's currently happening in a joint MPEG/JPEG ad-hoc group, and I don't have final results yet. Again, to me it looks as if AVC does have an advantage here - and of course our folks from WG11 also know about HVS properties, so there's something in their reference code. However, the question is again how you measure. The MPEG world lives in YCbCr, thus their images are always in YCbCr, and losses due to the color decorrelation transformation are not accounted for in MPEG experiments. JPEG "thinks" in RGB and has a more end-to-end approach (but see above).

    Anyhow, I wouldn't hold my breath - I wouldn't be surprised if AVC or HEVC won here.

    Quote Originally Posted by cbloom View Post
    Whenever someone posts a claim about JPEG being terrible, it's always because they've driven the compression down well below 1 bpp where JPEG-huff runs into major inefficiency. (Basically, they're cheating liars, it's like running a text compressor on binary and claiming it's terrible)
    I don't quite understand your point here. If the outcome of the experiment is that it is bad below 1 bpp, then this is what it says. It's a fact that it is terrible below 1 bpp. Of course, the full story is in the R/D curves (or R/Q curves, actually, for whatever Q you believe in).


    Quote Originally Posted by cbloom View Post
    The only thing really compelling about webp-lossy is that it is now gauranteed patent-free due to Google's recent VP8 licensing action.
    NO! It's not "patent free"! It's at best "royalty free", as the MPEG-LA agreed in a cross-licensing agreement. That is, they believe (I'm not a lawyer, so I don't claim to believe anything here anymore) that some patents in the MPEG-LA pool apply to VP8, but MPEG LA provides royalty-free access to that IP. That's something very different from "patent free".

  29. #29
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by cbloom View Post
    Google is comparing webp-lossless to PNG and webp-lossy to JPEG.

    They (the webp's) are really two completely unrelated technologies with the same name.
    Thanks for the clarification, then! In any case, PNG is not really state of the art, and LZ is really quite a bad choice as an entropy coder for image prediction residuals.

  30. #30
    Member caveman's Avatar
    Join Date
    Jul 2009
    Location
    Strasbourg, France
    Posts
    190
    Thanks
    8
    Thanked 62 Times in 33 Posts
    Quote Originally Posted by cbloom View Post
    Google is comparing webp-lossless to PNG and webp-lossy to JPEG.

    They (the webp's) are really two completely unrelated technologies with the same name.
    WebP lossless was first called webpll. What I really don't like about this is that you can't tell whether an image has been saved losslessly or not simply from the file extension.
    A lossy WebP image with an alpha channel actually uses both compression algorithms; it's a bit like the JNG file format, which could use JPEG for the image and PNG or JPEG for the transparency mask.
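    The extension doesn't tell you, but the container does: a WebP file is a RIFF file whose image data sits in a 'VP8 ' chunk (lossy), a 'VP8L' chunk (lossless), or behind a 'VP8X' extended header that can combine a lossy 'VP8 ' chunk with a separate 'ALPH' alpha chunk. A small sketch that lists the top-level chunks (the helper name is made up):

    Code:
    import struct

    def webp_chunks(path):
        """Return the FourCCs of the top-level RIFF chunks in a .webp file."""
        with open(path, 'rb') as f:
            data = f.read()
        if data[:4] != b'RIFF' or data[8:12] != b'WEBP':
            raise ValueError('not a WebP (RIFF/WEBP) file')
        chunks, pos = [], 12
        while pos + 8 <= len(data):
            fourcc = data[pos:pos + 4]
            size, = struct.unpack('<I', data[pos + 4:pos + 8])
            chunks.append(fourcc.decode('ascii', 'replace'))
            pos += 8 + size + (size & 1)   # RIFF chunks are padded to even sizes
        return chunks

    # 'VP8 ' -> lossy, 'VP8L' -> lossless,
    # 'VP8X' + 'ALPH' + 'VP8 ' -> lossy image with a separately coded alpha plane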
