
Thread: Lossless image compressors

  1. #1
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    495
    Thanks
    228
    Thanked 83 Times in 63 Posts

    Question Lossless image compressors

    I was thinking about the concept of a "practical archiver": a program that lets you choose between several degrees of compression at the cost of your system's resources and the time spent.

    It has been shown that the best approach to lossless compression is to first understand what you are trying to compress and then adapt the method to the data. That's what makes FreeArc, NanoZip or Zpaq efficient archivers compared to, let's say, the .tar.bz2 approach. And that's also why they say data compression is an AI problem.

    Leaving parsing and recognition aside for the moment, I realized that when it comes to still images, the solution in most archivers is far from optimal today. For example, FreeArc uses the TTA algorithm for PCM audio, which is great. But it also uses mm+grzip to compress *.BMP, and it does so in NON-SOLID mode. A catastrophe when you have a bunch of small images.

    So, here is the question: given the nice number of image compressors out there, which one would be worth trying to include as a dedicated method in an archiver?

    As I see it, the following conditions are needed, or at least wanted, in no specific order:

    1) Open source, or available as a library
    2) Asymmetric
    3) Multi-threading capable
    4) Stable and robust
    5) Reasonably fast

    Who's your candidate?
    Last edited by Gonzalo; 24th February 2016 at 23:42.

  2. #2
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    29
    Thanks
    9
    Thanked 10 Times in 8 Posts
    Lossless WebP is currently probably the best choice in terms of speed/compression trade-off, if you ask me. It satisfies all 5 of your conditions.

    FLIF is slower and still experimental, but probably achieves the best compression density.

    (Disclaimer: I'm the author of FLIF)

  3. Thanks:

    Gonzalo (26th February 2016)

  4. #3
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    495
    Thanks
    228
    Thanked 83 Times in 63 Posts
    Quote Originally Posted by Jon Sneyers View Post
    (Disclaimer: I'm the author of FLIF)


    Well, if it is experimental, it means we'll see improvements soon. Actually, I think both methods, or even more, should be included together to provide a flexible range of settings, from "very fast" to "very strong".

    What about PAQ models? Do they require the PAQ engine, or can they be used in an LZ-style compressor?

  5. #4
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    700
    Thanks
    210
    Thanked 267 Times in 157 Posts
    Quote Originally Posted by Gonzalo View Post
    Well, if it is experimental, it means we'll see improvements soon. Actually, I think both methods, or even more, should be included together to provide a flexible range of settings.
    The main weakness in WebP lossless for me is that it cannot losslessly compress JPEGs. The result is pretty much always bigger than the original. I really like the near-lossless options in WebP lossless, both the classic near-lossless mode and the experimental-only delta-palettization. I consider them jewels that the practical field hasn't discovered yet.

    Disclaimer: I'm the author of WebP lossless.

  6. #5
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    29
    Thanks
    9
    Thanked 10 Times in 8 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    The main weakness in WebP lossless for me is that it cannot losslessly compress JPEGs. The result is pretty much always bigger than the original. I really like the near-lossless options in WebP lossless, both the classic near-lossless mode and the experimental-only delta-palettization. I consider them jewels that the practical field hasn't discovered yet.

    Disclaimer: I'm the author of WebP lossless.
    Lossless compression of lossy formats is always tricky. Unless the compression method is very similar, you're basically spending lots of bits on encoding the artifacts of the lossy format, which are of course "for free" in the lossy method but significant sources of entropy in a lossless method.

    Perhaps lossless compression of JPEGs is possible with a JPEG-aware preprocessing transform: e.g. instead of encoding the RGB pixels, do the YCbCr color transform and then JPEG's DCT transform, and encode 64 tiny grayscale images per YCbCr channel, one for each DCT coefficient, treating the DCT coefficients as pixel values. We could use WebP or FLIF or whatever format for those 64*3 tiny images. It should be better than JPEG because it takes spatial correlation between those DCT coefficients into account (versus just treating them as 1D number lists that are fed directly to an entropy coder). If the original has lots of quantization (low-quality JPEG), the corresponding tiny images for those high-frequency DCT coefficients should be very easy or trivial to compress.
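
    Something like this rough numpy/scipy sketch of the regrouping (illustration only; the function name is made up, and the DCT here is unquantized, whereas recompressing an existing JPEG would take the already-quantized coefficients straight from the file):
    Code:
    import numpy as np
    from scipy.fft import dctn  # type-II DCT applied per 8x8 block

    def dct_coefficient_planes(channel):
        # channel: one Y/Cb/Cr plane with dimensions divisible by 8.
        # Returns an (8, 8, H/8, W/8) array: entry [v, u] is a tiny image whose
        # "pixels" are coefficient (v, u) of every 8x8 block. Each of the 64
        # tiny images could then be handed to WebP, FLIF, or any other codec.
        h, w = channel.shape
        planes = np.empty((8, 8, h // 8, w // 8))
        for by in range(h // 8):
            for bx in range(w // 8):
                block = channel[by*8:(by+1)*8, bx*8:(bx+1)*8].astype(float)
                planes[:, :, by, bx] = dctn(block - 128, norm='ortho')
        return planes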

  7. #6
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 779 Times in 486 Posts
    There isn't much correlation between JPEG AC coefficients in neighboring blocks. The most important contexts are the color (Y,Cr,Cb) and u,v, but otherwise order 0. This is why progressive JPEG compresses better than baseline. I describe JPEG recompression in http://mattmahoney.net/dc/dce.html#Section_616

  8. #7
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by Jon Sneyers View Post
    Perhaps lossless compression of JPEGs is possible with a JPEG-aware preprocessing transform: e.g. instead of encoding the RGB pixels, do the YCbCr color transform and then JPEG's DCT transform, and encode 64 tiny grayscale images per YCbCr channel, one for each DCT coefficient, treating the DCT coefficients as pixel values.
    Question: what do you mean by "lossless JPEG compression"? First, we have JPEG "after burners" like the StuffIt compression or PackJPEG. They make use of correlation between neighbouring AC coefficients. Then, there is lossless JPEG as in JPEG XT Part 8, which is a compression engine that is backwards compatible with JPEG. Here two different methods are available: first, compress in RGB space and use an int-to-int DCT combined with quantization factors of 1. Second, a residual compression scheme with a fully defined DCT transformation and a spatial encoding of the residual between the reconstructed DCT values and the original image. I.e. this requires an additional decoder on the encoder side, which is why this type of encoding is also known as "closed loop coding". As usual, the JPEG XT Part 8 reference implementation is available at www.jpeg.org.
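
    To illustrate the closed-loop idea on a single 8x8 block, a rough numpy/scipy sketch (illustration only; JPEG XT Part 8 mandates a fully defined transform and its own residual coding, which this does not reproduce):
    Code:
    import numpy as np
    from scipy.fft import dctn, idctn

    def closed_loop_block(block, qtable):
        # block: 8x8 spatial samples (0..255); qtable: 8x8 quantization table.
        coeffs = dctn(block.astype(float) - 128, norm='ortho')
        q = np.round(coeffs / qtable)                   # lossy part (what the JPEG stream carries)
        recon = idctn(q * qtable, norm='ortho') + 128   # run the decoder inside the encoder
        recon = np.clip(np.round(recon), 0, 255).astype(int)
        residual = block.astype(int) - recon            # small spatial residual, coded losslessly
        # A decoder that reproduces recon bit-exactly recovers the block as recon + residual.
        return q.astype(int), residual
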
    Quote Originally Posted by Jon Sneyers View Post
    We could use WebP or FLIF or whatever format for those 64*3 tiny images. It should be better than JPEG because it takes spatial correlation between those DCT coefficients into account (versus just treating them as 1D number lists that are fed directly to an entropy coder). If the original has lots of quantization (low-quality JPEG), the corresponding tiny images for those high-frequency DCT coefficients should be very easy or trivial to compress.
    PackJPEG does that, in a sense: it uses correlation between neighboring coefficients in the AC bands. At least in the low-frequency bands some correlation is left, especially in the horizontal and vertical bands.

  9. #8
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by Matt Mahoney View Post
    There isn't much correlation between JPEG AC coefficients in neighboring blocks.
    There is a little bit, at least for the low-frequency bands (i.e. (1,0) and (0,1)), which is one of the reasons why PackJPEG can improve the compression result. Another is arithmetic coding, of course.

    Quote Originally Posted by Matt Mahoney View Post
    The most important contexts are the color (Y,Cr,Cb) and u,v, but otherwise order 0. This is why progressive JPEG compresses better than baseline.
    Actually, not really. You can already define separate Huffman tables for Cb and Cr in sequential JPEG if you want to (there are four tables in total one can make use of, even though typically only two are allocated, one for luma and one for the two chroma channels).

    The important feature in progressive JPEG is that you first have the block-skip mode, i.e. blocks whose AC coefficients are all zero can be skipped without requiring a sequence of EOB symbols, and the Huffman table can be redefined per scan. In other words, the Huffman code can be made frequency-dependent, providing another context. Last but not least, successive approximation also allows splitting off the LSBs and encoding the (not yet significant) coefficients with a separate Huffman code.
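
    A toy sketch of the successive-approximation split only (plain Python; not the actual JPEG scan syntax or Huffman coding):
    Code:
    def successive_approximation(coeff, first_shift=2):
        # The first scan codes the coefficient with its `first_shift` least
        # significant bits dropped; each refinement scan then contributes exactly
        # one more bit, so coefficients that are not yet significant stay cheap
        # until the late scans.
        sign = -1 if coeff < 0 else 1
        mag = abs(coeff)
        first_scan = sign * (mag >> first_shift)
        refinement_bits = [(mag >> s) & 1 for s in range(first_shift - 1, -1, -1)]
        return first_scan, refinement_bits

    # Example: successive_approximation(-13, 2) -> (-3, [0, 1]);
    # magnitude is rebuilt as (3 << 2) | 0b01 = 13, sign restored separately.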

  10. #9
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 779 Times in 486 Posts
    Quote Originally Posted by thorfdbg View Post
    There is a little bit, at least for the low-frequency bands (i.e. (1,0) and (0,1)), which is one of the reasons why PackJPEG can improve the compression result. Another is arithmetic coding, of course.
    A better way to use neighboring coefficients is to model the constraint that neighboring pixels in different blocks should have similar values. So (0,0)-(0,1) in the block above minus (0,0) in the current block is a useful context to predict (0,1). PAQ extends this idea by computing the one-dimensional IDCT of the neighboring block and subtracting the partial one-dimensional IDCT of the current block.
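
    A rough numpy sketch of that context: the literal difference from the example above plus a partial one-dimensional IDCT helper (illustration only, not PAQ's actual model):
    Code:
    import numpy as np

    def context_for_01(above, current):
        # Literal transcription of the example: (above(0,0) - above(0,1)) - current(0,0)
        # is used as a context when predicting coefficient (0,1) of the current block.
        return (above[0, 0] - above[0, 1]) - current[0, 0]

    def partial_idct_1d(coeffs, n_known):
        # One-dimensional inverse DCT using only the first n_known coefficients
        # (the ones decoded so far), zeroing the rest. The cross-block context is
        # then the difference between this partial reconstruction for the
        # neighboring block and for the current block.
        c = np.where(np.arange(8) < n_known, np.asarray(coeffs, dtype=float), 0.0)
        n = np.arange(8)[:, None]
        k = np.arange(8)[None, :]
        basis = np.cos(np.pi * (2 * n + 1) * k / 16)
        scale = np.where(np.arange(8) == 0, np.sqrt(1 / 8), np.sqrt(2 / 8))
        return basis @ (scale * c)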

  11. #10
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    700
    Thanks
    210
    Thanked 267 Times in 157 Posts
    Quote Originally Posted by Matt Mahoney View Post
    There isn't much correlation between JPEG AC coefficients in neighboring blocks. The most important contexts are the color (Y,Cr,Cb) and u,v, but otherwise order 0. This is why progressive JPEG compresses better than baseline.
    I'm not sure if I'm just repeating what you say in other words, but this is my view of why progressive wins over non-progressive:

    In non-progressive JPEG all frequency components are coded with the same entropy code. In progressive JPEG one can switch between entropy codes for different frequency components. The distributions are more similar within a frequency band than within a block, so progressive is more efficient.
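
    A quick way to test this on real data, given an array of quantized 8x8 coefficient blocks (rough numpy sketch, names made up):
    Code:
    import numpy as np

    def order0_entropy(symbols):
        # Zero-order entropy in bits per symbol of a flat integer array.
        _, counts = np.unique(symbols, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def compare_groupings(blocks):
        # blocks: (num_blocks, 8, 8) quantized DCT coefficients of one channel.
        shared = order0_entropy(blocks.reshape(-1))          # one code for everything (baseline-like)
        per_band = np.mean([order0_entropy(blocks[:, v, u])  # one code per frequency position
                            for v in range(8) for u in range(8)])
        return shared, per_band  # per_band is typically lower on real images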

  12. #11
    Member
    Join Date
    Mar 2016
    Location
    Kiev
    Posts
    6
    Thanks
    0
    Thanked 1 Time in 1 Post
    Hello guys. I'm looking for a good "bitmap" archiver. I use quotes because my files are not actually true bitmaps but uncompressed raws from a camera.
    The best compressor I've found is nanozip, but unfortunately it is not maintained, not open source, etc.
    Very good results are obtained by converting to DNG (which is not really archiving but converting to another format), but that is not suitable for me.
    Third place goes to WinRAR 4 (version 5 is worse even with a much bigger dictionary), and 7-Zip is far behind these three. I also tried durilca, freearc, and dgca, but none of them performed better than nz or the DNG converter, especially if speed is also taken into account.

    So the question is: is there any known "best performer" for my case?
    Thank you in advance.

  13. #12
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    495
    Thanks
    228
    Thanked 83 Times in 63 Posts
    Is speed important to you?
    Which settings does FA use? Check in the details section whether the method is "mm+grzip". If not, try this:
    Code:
    fazip mm+grzip INPUT OUTPUT
    It is pretty fast, with a better ratio than NZ and the like.
    Attached Files

  14. #13
    Member
    Join Date
    Mar 2016
    Location
    Kiev
    Posts
    6
    Thanks
    0
    Thanked 1 Time in 1 Post
    Quote Originally Posted by Gonzalo View Post
    Is speed important to you?
    Within reasonable limits, no. I mean 1-2 minutes per 85 MB file is OK; 10 minutes for the same file is too slow.

    Quote Originally Posted by Gonzalo View Post
    Which settings does FA use? Check in the details section whether the method is "mm+grzip". If not, try this:
    Code:
    fazip mm+grzip INPUT OUTPUT
    It is pretty fast, with a better ratio than NZ and the like.
    I tried fazip with the options you suggested; the result is even bigger than with 7-Zip.
    The current resulting sizes are:
    zpaq -method 5 : 33 MB (but 200+ seconds...)
    nz -cc -m2g : 34 MB
    7z with the max settings available from the GUI : 39.7 MB
    fazip mm+grzip : 42 MB, but I have to admit it is lightning fast compared to all the other contenders: 4.5 s per 85 MB file. With the best simple options (-mx -max) FreeArc achieved ~39 MB.

  15. #14
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    495
    Thanks
    228
    Thanked 83 Times in 63 Posts
    What kind of images are they? Camera shots? Real-world photos?
    If they are artificial, they can sometimes benefit from the rep filter, also available in fazip.exe (think of it as a fancy RLE; it is not one, but the result is similar).
    Then there are other bitmap-specific compressors like the ones mentioned above. BMF2 is the strongest I know of, but you'll have to check whether they support RAW images.

    Also: if you use memory-hungry packers like rep+something, zpaq, nz, or 7z with LZMA or LZMA2, pack your files all together (solid mode), not one by one.

  16. #15
    Member
    Join Date
    Mar 2016
    Location
    Kiev
    Posts
    6
    Thanks
    0
    Thanked 1 Time in 1 Post
    Quote Originally Posted by Gonzalo View Post
    What kind of images are they? Camera shots? Real-world photos?
    Real world. Just ordinary photos.

    Quote Originally Posted by Gonzalo View Post
    If they are artificial, they can sometimes benefit from the rep filter, also available in fazip.exe (think of it as a fancy RLE; it is not one, but the result is similar).
    Then there are other bitmap-specific compressors like the ones mentioned above. BMF2 is the strongest I know of, but you'll have to check whether they support RAW images.
    Unfortunately, they do not. Almost all image compressors are only about image compression, so they require plain bitmaps without any arbitrary data in the source file.

    Quote Originally Posted by Gonzalo View Post
    Also: if you use memory-hungry packers like rep+something, zpaq, nz, or 7z with LZMA or LZMA2, pack your files all together (solid mode), not one by one.
    Yes, of course I tried that. The compression winner is zpaq, but it is pretty slow; nz is almost as good as zpaq in terms of compression but much faster. However, it is nz: closed source and unmaintained, unfortunately. So right now I'm choosing zpaq until I find something better.

  17. #16
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    495
    Thanks
    228
    Thanked 83 Times in 63 Posts
    Well, although it has been years since the last NZ version, it has no known bugs and it just works. So you can use it anyway...

  18. #17
    Member
    Join Date
    Mar 2016
    Location
    Kiev
    Posts
    6
    Thanks
    0
    Thanked 1 Time in 1 Post
    Quote Originally Posted by Gonzalo View Post
    Well, although it has been years since the last NZ version, it has no known bugs and it just works. So you can use it anyway...
    Yes. But I just found another (also orphaned) tool created exactly for my case that compresses much faster and even better than nz or zpaq. Since neither nanozip nor rawzor is maintained, I will stick with rawzor, as it is much faster and compresses better.

  19. Thanks:

    Gonzalo (9th March 2016)

  20. #18
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    495
    Thanks
    228
    Thanked 83 Times in 63 Posts
    So rawzor, huh? Never heard of it. Let's play around with it a little bit...

