
Thread: jpegrescan: now with arithmetic coding!

  1. #1
    Member
    Join Date
    Jun 2013
    Location
    USA
    Posts
    98
    Thanks
    4
    Thanked 14 Times in 12 Posts

    jpegrescan: now with arithmetic coding!

    I made a fork of jpegrescan: https://github.com/neheb/jpegrescan


    For those who don't know, jpegrescan is the best tool for losslessly shrinking JPEG files while keeping them usable by most software. It works by abusing the progressive scan system, which is why I refer to the result as Ultra-Progressive JPEG.


    Unfortunately, arithmetic coding is not widely supported (IrfanView is the only viewer I know of that handles it) because of patents, which have since expired. This tool may, however, be a useful pre-processor for tools like PackJPG or STUFFIT.


    Requirements: perl and libjpeg(-turbo), which includes jpegtran.
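
    Basic usage looks like this (-v gives verbose output, as mentioned further down; see the README for the rest of the flags):

    Code:
    $ ./jpegrescan -v input.jpg output.jpg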


    Some data for one image; expect similar results with other images:


    Baseline JPEG: 996,728 bytes


    Progressive JPEG: 935,512 bytes


    Ultra-Progressive JPEG: 932,866 bytes


    Now for some arithmetic coding:


    Baseline JPEG + Arithmetic Coding: 888,355 bytes


    Progressive JPEG + Arithmetic Coding: 873,321 bytes


    Ultra-Progressive JPEG + Arithmetic Coding: 868,672 bytes


    Here is a zip file containing all the images for the poor souls who cannot get the tool working: https://mega.co.nz/#!GVxnCbIa!H52mFU...zDqIqBaHFtViqk


    Before anyone asks: yes, this is all lossless. You can use jpegtran to go back and forth between the variants. I'll provide the specific command line options if anyone wants them.
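
    edit: for the curious, the round trip looks roughly like this (the filenames are placeholders; -copy all keeps the metadata, and exact results may depend on your libjpeg build):

    Code:
    $ jpegtran -copy all -arithmetic original.jpg > original-ac.jpg
    $ jpegtran -copy all -optimize original-ac.jpg > original.jpg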
    Last edited by Mangix; 29th June 2013 at 02:49. Reason: clarifications

  2. #2
    Member
    Join Date
    Sep 2007
    Location
    Denmark
    Posts
    920
    Thanks
    57
    Thanked 113 Times in 90 Posts
    to losslessly shrink down jpeg files

    You mean the files get smaller, but the actual decoded image is a pixel-perfect copy? Or is there some kind of "unnoticeable" quality loss?


    -- edit --
    doh maybe i should have read it all before posting

  3. #3
    Member
    Join Date
    Jun 2013
    Location
    USA
    Posts
    98
    Thanks
    4
    Thanked 14 Times in 12 Posts
    As jpegrescan just calls jpegtran with a scans file, here is information straight from the jpegtran man page:

    jpegtran works by rearranging the compressed data (DCT coefficients), without ever fully decoding the image. Therefore, its transformations are lossless: there is no image degradation at all, which would not be true if you used djpeg followed by cjpeg to accomplish the same conversion. But by the same token, jpegtran cannot perform lossy operations such as changing the image quality.
    edit: actually, that's not entirely true. If you feed jpegtran an incomplete scans file, you can corrupt the image. For example, if you run jpegrescan with the -v option and keep one of the intermediate scans, you will corrupt the image.
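    For reference, a scans file is just a text script for jpegtran's -scans option. A minimal spectral-selection example, using the syntax described in libjpeg's wizard.txt (this particular script is my own illustration, not one that jpegrescan emits):

    Code:
    # one interleaved DC scan for all three components
    0,1,2: 0-0, 0, 0 ;
    # one full AC scan per component
    0: 1-63, 0, 0 ;
    1: 1-63, 0, 0 ;
    2: 1-63, 0, 0 ;

    You would apply it with something like "jpegtran -scans scans.txt in.jpg > out.jpg".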
    Last edited by Mangix; 29th June 2013 at 10:23.

  4. #4
    Member
    Join Date
    Apr 2011
    Location
    Russia
    Posts
    168
    Thanks
    163
    Thanked 9 Times in 8 Posts
    Explain to me how your version differs from http://akuvian.org/src/jpgcrush.tar.gz

  5. #5
    Member
    Join Date
    Jun 2013
    Location
    USA
    Posts
    98
    Thanks
    4
    Thanked 14 Times in 12 Posts
    The GitHub page lists all of the differences: EXIF metadata is now kept by default, and arithmetic coding is optionally supported.

  6. #6
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by SvenBent View Post
    to losslessly shrink down jpeg files
    you mean the files get smaller but the actually decode image is pixel perfect copy? or is there any kind of "unnoticeable" quality loss ?
    Certainly not a pixel-perfect copy of the original, but whether you compress a file with the AC coding option or the Huffman option does not impact quality. However, JPEG does not define the decoder fully; in other words, the standard allows some leeway in the implementation that may result in differing images depending on the decoder you pick.

    Apart from that, the transformation itself does not introduce additional loss. However, the AC coding option is not very popular, and I would not suggest using it, as existing software may or may not be able to read such images, regardless of the fact that it is part of the standard.

    Backwards compatible *lossless* coding of JPEG is on the way with JPEG XT.

  7. #7
    Member
    Join Date
    Jun 2013
    Location
    USA
    Posts
    98
    Thanks
    4
    Thanked 14 Times in 12 Posts
    The resulting image is not lossless compared to the BMP or PNG file it may have come from. However, all of the images in my zip file decode to identical data. A simple experiment:

    Code:
    mangix@Mangix ~/arith-jpeg
    $ jpegtran -arithmetic < original.jpg | md5sum
    8f67c18ea05cb25a7fa3532bf3778f96 *-
    
    
    mangix@Mangix ~/arith-jpeg
    $ cat original-ac.jpg | md5sum
    8f67c18ea05cb25a7fa3532bf3778f96 *-
    
    mangix@Mangix ~/arith-jpeg
    $ jpegtran.exe -optimize < original-ac.jpg | md5sum
    44974c16964219b8c22130c96caf2868 *-
    
    
    mangix@Mangix ~/arith-jpeg
    $ cat original.jpg | md5sum
    44974c16964219b8c22130c96caf2868 *-
    The first command converts original.jpg to arithmetic coding and md5sums the result. The second just md5sums original-ac.jpg. The third and fourth do the same in reverse.
    Last edited by Mangix; 3rd July 2013 at 08:06. Reason: fixed formatting

  8. #8
    Member
    Join Date
    Jul 2009
    Location
    Strasbourg, France
    Posts
    190
    Thanks
    8
    Thanked 64 Times in 33 Posts
    Quote Originally Posted by thorfdbg View Post
    Certainly not a pixel-perfect copy of the original, but whether you compress a file with the AC coding option or the Huffman option does not impact quality. However, JPEG does not define the decoder fully; in other words, the standard allows some leeway in the implementation that may result in differing images depending on the decoder you pick.

    Apart from that, the transformation itself does not introduce additional loss. However, the AC coding option is not very popular, and I would not suggest using it, as existing software may or may not be able to read such images, regardless of the fact that it is part of the standard.

    Backwards compatible *lossless* coding of JPEG is on the way with JPEG XT.
    By the way, how does JPEG XT handle the differences you get when decompressing with a fast (integer-based) versus a precise (floating-point-based) decoder? Does it enforce some precision level to ensure all decoders behave the same way?

  9. #9
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by caveman View Post
    By the way, how does JPEG XT handle the differences you get when decompressing with a fast (integer-based) versus a precise (floating-point-based) decoder? Does it enforce some precision level to ensure all decoders behave the same way?
    It specifies the inverse DCT fully - it's an integer DCT. Of course, for lossy coding you may or may not follow the spec, but as soon as you need lossless, it has to be the integer version. Floating point itself is not defined precisely enough (even under IEEE, rounding modes are left open to the implementation), so it is not an option.

    The integer DCT currently proposed is a correctly scaled DCT, i.e. it performs a few more operations than the well-known optimal "unscaled" DCT. However, this has the advantage that you do not pick up additional distortion from incorrectly scaled quantization buckets, and no additional specification is required for how to scale the quantization parameters.

