
Thread: jpeg to zpaq lossy compression

  1. #1
    Member toi007's Avatar
    Join Date
    Jun 2011
    Location
    Lisbon
    Posts
    35
    Thanks
    0
    Thanked 0 Times in 0 Posts

    jpeg to zpaq lossy compression

    http://www.geometriadescritiva.eu/jp...essor-V.01.zip (5 JPG files and executables), 18 MB (m2, fast compression)
    http://www.geometriadescritiva.eu/jp...essor-V.02.zip (5 JPG files and executables), 18 MB (m4, slow compression, better size)


    Unzip the files.
    Unrar the internal executebles.rar file.
    Put everything in the same directory.
    Read the usage.txt.
    Don't forget to maximize the window.
    It works for PNG, BMP and JPG.
    The output is a temp file, a remade file, and the ZPAQ file;
    only the ZPAQ file is needed.
    It only works on Windows,
    and only for 2560x1600-pixel RGB images.

    In quantitization3.exe it gets results "similar" in quality to Photoshop's Save for Web, and generally about 30% smaller.
    Is it as good as StuffIt? No, it's lossy: the chrominance quantization table values are doubled (x2), except for the DC (0,0) entry,
    and the coefficients are stored as 16-bit values in a different order: luminance(0,0), chroma1(0,0), chroma2(0,0) for all 64000 blocks,
    then (0,1) for all 64000 blocks, and so on up to (7,7).
    This method saves about 30% on a JPEG.
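
    Just to make that layout concrete, here is a minimal sketch in Python/numpy of the reordering and the doubled chroma table (the real program is in BlitzBasic; the array shapes and function names here are only an assumption for illustration):

    Code:
    # Illustrative only -- assumes the quantized coefficients are already available
    # as numpy arrays of shape (num_blocks, 8, 8); the program stores them as 16-bit
    # values in this position-by-position order before running zpaq on the result.
    import numpy as np

    def plane_order(y, cb, cr):
        """Emit all blocks' (0,0) coefficients (Y, Cb, Cr), then all (0,1), ... up to (7,7)."""
        stacked = np.stack([y, cb, cr], axis=1)               # (num_blocks, 3, 8, 8)
        # bring the coefficient position to the front: (8, 8, num_blocks, 3)
        return np.ascontiguousarray(stacked.transpose(2, 3, 0, 1)).astype(np.int16).ravel()

    def double_chroma_quant(qtable):
        """Double every chroma quantizer except the DC (0,0) entry."""
        q = qtable.astype(np.int32) * 2
        q[0, 0] = qtable[0, 0]                                 # DC quantizer stays as-is
        return np.clip(q, 1, 255).astype(np.uint8)

    For a 2560x1600 image that is (2560/8) x (1600/8) = 320 x 200 = 64000 blocks per component, which is where the 64000 figure above comes from.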

    Comparing with Photoshop's Save for Web... I got the same quantization tables.

    enjoy and reply

    The program has its source in BlitzBasic (a pretty damn slow and inaccurate language).
    It needs to be ported to Java, C or C++, but if you can do it in one of those languages I will provide assistance and the BB sources...
    Last edited by toi007; 30th December 2012 at 16:45.

  2. #2
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,495
    Thanks
    26
    Thanked 131 Times in 101 Posts
    I use Windows only for gaming, so I haven't tested the program.

    What exactly does it do?

    If I understand you correctly, your program:
    - first encodes a bitmap into JPEG,
    - then uses ZPAQ to compress the resulting JPEG.

    So the only novelty is in the conversion of the bitmap to JPEG. And you've made some subjective claims. Please provide at least some samples, i.e. both the original bitmaps and the JPEGs coming out of your program and out of Photoshop. It would also be helpful to compare the results with http://www.jpegmini.com/ output.

  3. #3
    Member toi007's Avatar
    Join Date
    Jun 2011
    Location
    Lisbon
    Posts
    35
    Thanks
    0
    Thanked 0 Times in 0 Posts
    I will provide some samples... JPEGmini seems quite good, but the quality can also be reduced in my program.
    It's a test version, only for 2560x1600-pixel images, and some original samples are provided.
    I'm trying to use 8 bits per value, like JPEG, instead of 16 bits.
    That would boost my compression a bit, but I simply don't know how JPEG turns a DCT value as low as -1023 (quantized by one) into an 8-bit value.
    If you try it you will get some values and quality samples in a remade BMP.
    But yes, I should be able to post some samples here.

    Sample remade from the ZPAQ file, which occupies 1023 KB at quality 50 (this file has been converted to JPEG again, so there may be a small lossy difference):
    [Attachment: 000.remade.1023kb.jpg, 1.10 MB]

    Sample produced by JPEGmini; the quality is maybe 60 on the Photoshop Save for Web scale, producing a 1340 KB JPEG file:
    [Attachment: 000_mini.jpg, 1.11 MB]
    Last edited by toi007; 6th January 2013 at 17:51.

  4. #4
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,495
    Thanks
    26
    Thanked 131 Times in 101 Posts
    DCT coefficients can be anything between -1023 and 1023 because of how the algorithm works. Most of them usually have small magnitudes and are quantized heavily.
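
    By the way, since you asked how JPEG fits a value down to -1023 into 8 bits: baseline JPEG doesn't store coefficients in fixed 8-bit fields at all. It Huffman-codes a size category (the number of magnitude bits) and then appends that many raw bits. A minimal sketch of that representation (the helper name is mine, just for illustration):

    Code:
    # How a quantized coefficient v (|v| <= 1023) is represented in baseline JPEG:
    # a Huffman-coded size category followed by `size` raw bits; negative values
    # are stored as the one's complement of |v| in those bits.
    def jpeg_category_and_bits(v):
        size, m = 0, abs(v)
        while m:
            size += 1
            m >>= 1
        bits = v if v >= 0 else v + (1 << size) - 1
        return size, bits

    print(jpeg_category_and_bits(-1023))   # (10, 0): category 10, ten extra bits
    print(jpeg_category_and_bits(3))       # (2, 3):  category 2, bits '11'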

    PAQ has a special model for JPEG files, i.e. PAQ decodes JPEGs to DCT coefficients and then compresses them losslessly with its own method.

    If you're decoding a JPEG into a stream of DCT coefficients and compressing it with generic compression techniques, that's not enough to gain much compression ratio. You need a dedicated model for DCT coefficients.
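
    To give a toy illustration of what "dedicated model" means (this is nothing like PAQ's real JPEG model, just the general idea): instead of treating the stream as opaque bytes, you condition the statistics on things like the coefficient's position in the block and the same coefficient in a neighbouring block:

    Code:
    # Toy illustration only: a context for predicting a coefficient, built from its
    # zig-zag position and a coarse bucket of the neighbouring block's value.
    from collections import defaultdict

    def coeff_context(zigzag_pos, prev_block_coeff):
        mag_bucket = min(abs(prev_block_coeff), 7)
        return (zigzag_pos, mag_bucket)

    # counts[context][symbol] would then drive the probabilities of an arithmetic coder
    counts = defaultdict(lambda: defaultdict(int))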

    As for the images: they are of different sizes, so they are not comparable.

  5. #5
    Member toi007's Avatar
    Join Date
    Jun 2011
    Location
    Lisbon
    Posts
    35
    Thanks
    0
    Thanked 0 Times in 0 Posts

    I use a different approach from lossless.

    Even though I use all the gimmicks of JPEG up to and including quantization, I don't use Huffman coding, so there are temp files of 2560x1600x3x2 bytes (about 24.6 MB) for the 16-bit coefficient data. But since I use a chroma quantization table two times bigger, at the same quality level I get a smaller size (and lower quality). Believe me, even if the files are not at exactly the same quality or size they can still be compared visually, and this program can be used for comparison between Photoshop's JPEG and my attempt at lossy compression (at best about 30% smaller than the original at the same quality factors).

  6. #6
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,495
    Thanks
    26
    Thanked 131 Times in 101 Posts
    Well, the human eye is generally not very sensitive when it comes to recognizing detail in chroma.

    Matt has done a short but very informative analysis here: http://mattmahoney.net/dc/dce.html#Section_615

    Here is the original image: http://mattmahoney.net/lena/lena.bmp
    And here is a JPEG with all 63 AC coefficients in each 64-coefficient chroma block zeroed: http://mattmahoney.net/lena/lenadccolor.jpg

    I would say the second one even looks sharper. And the visible color degradation is rather minimal, so your method of somewhat stronger-than-usual quantization is not nearly as aggressive as Matt's approach.
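
    If you want to repeat that experiment on your own images, here is a minimal sketch (assuming you already have the quantized chroma coefficients as an array of 8x8 blocks; that layout is my assumption, not how any particular decoder hands them to you):

    Code:
    # Keep only the DC coefficient of every 8x8 chroma block, zero the 63 AC terms.
    import numpy as np

    def zero_chroma_ac(chroma_blocks):
        """chroma_blocks: numpy array of shape (num_blocks, 8, 8)."""
        out = np.zeros_like(chroma_blocks)
        out[:, 0, 0] = chroma_blocks[:, 0, 0]
        return out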

    Of course chroma is sometimes important - especially in cases where differently coloured areas sharing a border happen to have the same luminance value. Or red or blue text overlaid on almost any background - such text is severely degraded when encoded as JPEG with chroma subsampling and strong chroma quantization.


    So to recap:
    Simply quantizing chroma more strongly isn't going to make a revolution. You need some heuristic and/or psychovisual model to determine where detailed chroma is not important and remove or simplify chroma in those areas. It's possible that JPEGmini does that.
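
    Purely as an illustration of what such a heuristic could look like (a toy idea, not JPEGmini's actual method): quantize chroma harder only where an 8x8 block carries little chroma detail anyway.

    Code:
    # Toy heuristic: use the chroma variance of a block to decide whether its chroma
    # can be quantized more aggressively without visible damage.
    import numpy as np

    def chroma_quant_boost(cb_block, cr_block, threshold=20.0):
        """Return an extra quantization multiplier for low-detail chroma blocks."""
        detail = max(float(np.var(cb_block)), float(np.var(cr_block)))
        return 2 if detail < threshold else 1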
    Last edited by Piotr Tarsa; 7th January 2013 at 19:23. Reason: corrections

  7. #7
    Member toi007's Avatar
    Join Date
    Jun 2011
    Location
    Lisbon
    Posts
    35
    Thanks
    0
    Thanked 0 Times in 0 Posts
    [QUOTE=Piotr Tarsa]So to recap:
    Simply quantizing chroma more strongly isn't going to make a revolution. You need some heuristic and/or psychovisual model to determine where detailed chroma is not important and remove or simplify chroma in those areas. It's possible that JPEGmini does that.[/QUOTE]


    I'm not going to revolutionize things with a new format or anything; so much has been done in JPEG testing that it seems like a lot of work to produce even reasonable results with any approach. But you say interesting things about the chroma areas that JPEGmini might be exploiting to produce better results in size and image quality.... I'm going to see if I can come up with some heuristic or visual model to apply chroma more intelligently to the compression. The one I use now is just two times worse in quality for maybe 10% smaller size; it will need a lot of research on chroma and, yes... testing....
    I have like 100 JPEGs to test ehehe

    Thanks, sir Piotr.... I'm not a programmer of any great skill, and after these tests, if anything can be done in C++ to improve it (I won't say the size, but the speed), I may be able to learn C++ with a neighbour of mine,
    including support for the different formats and sizes of JPG, BMP, PNG or GIF (256 colors).

    As people will have noticed... I'm not a great English writer either.

  8. #8
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,257
    Thanks
    307
    Thanked 796 Times in 488 Posts
    You can download the test images from jpegmini.com. I did this for a couple of them. JPEGmini increases the quantization values to reduce the quality. It is about equivalent to:

    djpeg -bmp image.jpg image.bmp
    cjpeg -quality 80 -optimize image.bmp imagemini.jpg

    except it preserves the chroma sampling rate and JFIF/EXIF headers. The default sampling rate for cjpeg is -sample 2x2,1x1,1x1 but a lot of images use 2x1 or 1x1 instead of 2x2 so you have to look. Here is an example (picture of a girl):

    Code:
    rolands.lakis.jpg
           0 SOI [0]
           0 SOI
           4 APP1 [4496] Exif  MM *                 ₧           ñ
        4502 Comment [12] AppleMark
        4516 DQT [132]
        Q0 = 1 1 1 1...4...7
        Q1 = 2 2 2 2...7...7
        4650 DHT [418]
        DC0: (2: 00)(3: 01 02 03 04 05)(4: 06)(5: 07),1,1,1,1,0,0,0,0,0,0,0
        DC1: (2: 00 01 02)(3: 03)(4: 04)(5: 05),1,1,1,1,1,1,0,0,0,0,0
        AC0: (2: 01 02)(3: 03)(4: 00 04 11)(5: 05 12 21),2,4,3,5,5,4,4,0,0,1,125
        AC1: (2: 00 01)(3: 02)(4: 03 11)(5: 04 05 21 31),4,3,4,7,5,4,4,0,1,2,119
        5070 Baseline DCT [17] 3484x2393, 8 bit:(C1=2x1,Q0)(C2=1x1,Q1)(C3=1x1,Q1)
        5089 APP2 [1320] ICC_PROFILE       appl    scnrRGB XYZ  ╙
        6411 SOS [12] Coef 0-63, bits 0-0 of (C1 DC0 AC0)(C2 DC1 AC1)(C3 DC1 AC1)
     4349100 EOI [0]
     4349102 end of file
    
    rolands.lakis_mini.jpg
           0 SOI [0]
           0 SOI
           4 APP1 [4496] Exif  MM *                 ₧           ñ
        4502 Comment [12] AppleMark
        4516 APP2 [1320] ICC_PROFILE       appl    scnrRGB XYZ  ╙
        5838 Comment [33] Optimized by JPEGmini 3.7.53.2L
        5873 DQT [67]
        Q0 = 6 4 5 5...20...38
        5942 DQT [67]
        Q1 = 7 7 7 9...38...38
        6011 Baseline DCT [17] 3484x2393, 8 bit:(C1=2x1,Q0)(C2=1x1,Q1)(C3=1x1,Q1)
        6030 DHT [28]
        DC0: (2: 02 03)(3: 00 01 04)(4: 05)(5: 06),1,1,0,0,0,0,0,0,0,0,0
        6060 DHT [73]
        AC0: (2: 01 02)(3: 11)(4: 00 03 12 21)(5: 31),3,3,3,3,2,5,4,1,0,1,21
        6135 DHT [25]
        DC1: (2: 00 01 02)(3: 03)(4: 04)(5: 05),0,0,0,0,0,0,0,0,0,0,0
        6162 DHT [52]
        AC1: (1: 01)(3: 00 02 11)(5: 12 21),2,2,1,3,3,2,7,1,0,1,5
        6216 SOS [12] Coef 0-63, bits 0-0 of (C1 DC0 AC0)(C2 DC1 AC1)(C3 DC1 AC1)
     1401935 EOI [0]
     1401937 end of file
    
    rolands.lakis-80-optimize-sample2x1.jpg
           0 SOI [0]
           0 SOI
           4 APP0 [16] JFIF ver 1.1 units 0, density 1 by 1
          22 DQT [67]
        Q0 = 6 4 5 6...20...40
          91 DQT [67]
        Q1 = 7 7 7 10...40...40
         160 Baseline DCT [17] 3484x2393, 8 bit:(C1=2x1,Q0)(C2=1x1,Q1)(C3=1x1,Q1)
         179 DHT [28]
        DC0: (2: 02 03)(3: 00 01 04)(4: 05)(5: 06),1,1,0,0,0,0,0,0,0,0,0
         209 DHT [69]
        AC0: (2: 01 02)(3: 11)(4: 00 03 12 21)(5: 31),3,3,2,4,5,4,2,1,1,2,15
         280 DHT [25]
        DC1: (2: 00 01 02)(3: 03)(4: 04)(5: 05),0,0,0,0,0,0,0,0,0,0,0
         307 DHT [52]
        AC1: (2: 00 01)(3: 02 11 21)(5: 12 31),2,2,2,2,2,2,2,1,4,0,7
         361 SOS [12] Coef 0-63, bits 0-0 of (C1 DC0 AC0)(C2 DC1 AC1)(C3 DC1 AC1)
     1400481 EOI [0]
     1400483 end of file
    If you use the default sampling rate, you get a smaller file and I can't see any difference. You can also get a smaller file with -progressive.

    Code:
     1,172,985 rolands.lakis-80-optimize-progressive.jpg
     1,198,984 rolands.lakis-80-optimize.jpg
     1,242,929 rolands.lakis-80.jpg
     1,400,483 rolands.lakis-80-optimize-sample2x1.jpg
     1,401,937 rolands.lakis_mini.jpg
     1,536,624 rolands.lakis-85-optimize.jpg
     1,761,628 rolands.lakis-80-optimize-sample1x1.jpg
     4,349,102 rolands.lakis.jpg
    25,011,690 rolands.lakis.bmp

  9. #9
    Member toi007's Avatar
    Join Date
    Jun 2011
    Location
    Lisbon
    Posts
    35
    Thanks
    0
    Thanked 0 Times in 0 Posts
    I find it very interesting that people with good skills (much better than mine) are interested in this,
    but as you can see it's a bit hard for my mind to absorb all of this,
    including the explanation from sir Matt....
    I do need to read all of this carefully, and maybe people would like to improve on my program and make a better one,
    reading all of this too and using better and more accurate formulas than the ones in the BlitzBasic program.
    For instance, the DCT in my program, using 6-digit floating point, produces sharp 8x8 squares, partly because it was taken from the net
    (I can't figure out the complete story yet).
    And I realize that maybe people don't use any of these programs because of the lack of objectivity in them:
    slow processing and inaccurate results.
    But yes, I suppose encode.su also gives space to embryonic projects and ideas, which may even be good for other users, so they know the faults and don't have to go through the same mistakes over and over
    to learn the better way to produce results....
    So, for the patience of all users of my program: the best thing about it is not its speed or its user-friendliness... it's just a crazy attempt to compress a JPEG to a smaller size, even smaller than lossless recompression, without losing much visually...

  10. #10
    Member caveman's Avatar
    Join Date
    Jul 2009
    Location
    Strasbourg, France
    Posts
    190
    Thanks
    8
    Thanked 64 Times in 33 Posts
    [QUOTE=Matt Mahoney;31863]If you use the default sampling rate, you get a smaller file and I can't see any difference. You can also get a smaller file with -progressive.[/QUOTE]

    You can get even smaller files using JPEGrescan:
    https://github.com/kud/jpegrescan
    http://code.google.com/p/imageoptim/...ran/jpegrescan

    There are hundreds of different ways to produce a progressive JPEG; JPEGrescan is a Perl script that calls jpegtran to trial many progressive configurations.

  11. #11
    Member toi007's Avatar
    Join Date
    Jun 2011
    Location
    Lisbon
    Posts
    35
    Thanks
    0
    Thanked 0 Times in 0 Posts

    It seems that this subject of producing better size and quality JPEGs is hard.

    No one seems really interested.

  12. #12
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,257
    Thanks
    307
    Thanked 796 Times in 488 Posts
    You should post some benchmarks and images on your website. For each test post 3 images: original (PNG), compressed with JPEG, and compressed with your algorithm to the same size and decompressed to PNG so we can compare quality.
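
    If you also want an objective number next to the visual comparison, something like PSNR between the original and the decoded output works. A minimal sketch, assuming Pillow and numpy are installed and the images have the same dimensions (file names are placeholders):

    Code:
    # Compute PSNR between two same-sized images (higher is better).
    import numpy as np
    from PIL import Image

    def psnr(path_a, path_b):
        a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float64)
        b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float64)
        mse = np.mean((a - b) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

    print(psnr("original.png", "decoded.png"))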

