
Thread: Zopfli & ZopfliPNG KrzYmod

  1. #1
    Member Mr_KrzYch00's Avatar
    Join Date
    Apr 2015
    Location
    Poland
    Posts
    65
    Thanks
    10
    Thanked 40 Times in 24 Posts

    Zopfli & ZopfliPNG KrzYmod v20.2.5/6 & KrzYdefC converter

    Zopfli & ZopfliPNG KrzYmod v20.2.5/6 + KrzYdefC converter (ZIP/GZ/ZLIB/PNG).

    Zopfli Compression Algorithm is a compression library programmed in C to perform
    very good, but slow, deflate or zlib compression.
    * highly improved and extended by Mr_KrzYch00 and other contributors as Mr_KrzYch00's Modifications, under the name KrzYmod

    Various modes for advanced users (--h to see help): ZIP support, filename-in-GZIP support, timestamp support, a disk-stored ZopfliDB cache/database of best statistics (avoids reprocessing exactly the same data per block when the best result for a given number of iterations is already known), improved verbosity, improved speed, multithreading with thread affinity locking, CTRL+C to stop iterating and output the result as soon as possible, and more.

    Based on the original GitHub commit a29e46ba9f268ab273903558dcb7ac13b9fe8e29 (Mar 18, 2015) with updates manually merged; Aaron Kaluszka's fork (https://github.com/MegaByte/zopfli/tree/cryogenetic) for ZopfliPNG, adapted by Mr_KrzYch00 for compatibility with Zopfli KrzYmod; and various fixes and improvements from other contributors.
    This is a fork of Zopfli (https://github.com/google/zopfli) available at: https://github.com/MrKrzYch00/zopfli

    KrzYdefC is a deflate container converter. It may be used to fix a wrong output container accidentally produced by Zopfli, or to extract a PNG's IDAT to ZIP/GZ in order to optimize that file and join it back with the original PNG file's header, producing a new, smaller image without any changes to filters, palettes etc. Short readme available at: http://virtual.4my.eu/KrzYdefC/readme.txt (example run: http://virtual.4my.eu/KrzYdefC/examplerun.png)
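
    As a sketch of the chunk walk a converter like KrzYdefC has to perform (based on the PNG specification, not on KrzYdefC's actual source): PNG is an 8-byte signature followed by chunks of the form length/type/data/CRC, and concatenating the IDAT payloads yields the raw zlib stream that can be re-wrapped as GZ or ZIP.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Minimal PNG chunk walk: collects the concatenated IDAT payload,
   i.e. the zlib stream a converter re-wraps into a GZ/ZIP container.
   Illustrative sketch based on the PNG spec; no CRC verification. */

static uint32_t be32(const unsigned char *p) {
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Returns bytes written to out, or -1 on malformed input. */
long extract_idat(const unsigned char *png, size_t n, unsigned char *out) {
    if (n < 8 || memcmp(png, "\x89PNG\r\n\x1a\n", 8) != 0) return -1;
    size_t pos = 8;
    long written = 0;
    while (pos + 12 <= n) {                 /* length + type + CRC */
        uint32_t len = be32(png + pos);
        const unsigned char *type = png + pos + 4;
        if (pos + 12 + len > n) return -1;
        if (memcmp(type, "IDAT", 4) == 0) {
            memcpy(out + written, png + pos + 8, len);
            written += len;
        }
        if (memcmp(type, "IEND", 4) == 0) break;
        pos += 12 + (size_t)len;            /* skip data + CRC */
    }
    return written;
}
```

    Rejoining the optimized stream is the reverse: re-split it into one or more IDAT chunks, recompute each chunk's CRC32, and keep every other chunk of the original file untouched.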

    --testrec difference at 999 iterations: -10 bytes for DeflOpt.exe with --pass99 --testrec:

    Code:
    d:\github\zopfliryzen>zopfli --i999 --mb0 --t12 --v5 DeflOpt.exe
    Zopfli, a Compression Algorithm to produce Deflate streams.
    KrzYmod extends Zopfli functionality - version 18.2.8
    
    Saving to: DeflOpt.exe.gz
    
          [BSR: 9] Estimate Cost: 213643 bit
    Block split points: 202 11735 38188 39552 (hex: ca,2dd7,952c,9a80)
    Total blocks: 5
    
    Progress:   0.4%  ---  Block:    1 / 5 [0001]  ---  Data Left: 52534  B
    Progress:  22.3%  ---  Block:    2 / 5 [0002]  ---  Data Left: 41001  B
    Progress:  24.8%  ---  Block:    3 / 5 [0004]  ---  Data Left: 39637  B
    Progress:  49.8%  ---  Block:    4 / 5 [0005]  ---  Data Left: 26453  B
    Progress: 100.0%  ---  Block:    5 / 5 [0003]  ---  Data Left:     0  B
          [BSR: 9] Estimate Cost: 205845 bit
    Block split points: 203 11503 38194 39361 (hex: cb,2cef,9532,99c1)
    Total blocks: 5
    
    BLOCK 0001: Compressed block size: 139 (0k) (unc: 202)
     > Used Fixed Tree Block: 1108 bit < 1233 bit
    BLOCK 0002: Compressed block size: 6039 (5k) (tree: 108) (unc: 11533)
    BLOCK 0003: Compressed block size: 14457 (14k) (tree: 119) (unc: 26453)
    BLOCK 0004: Compressed block size: 427 (0k) (tree: 49) (unc: 1364)
    BLOCK 0005: Compressed block size: 4661 (4k) (tree: 87) (unc: 13184)
    !! BEST SPLIT POINTS FOUND: 202 11735 38188 39552 (hex: ca,2dd7,952c,9a80)
    Input size: 52736 (51K)
    Deflate size: 25723 (25K)
    Ratio: 51.223%
    
    Progress: 100.0%
    Input size: 52736 (51K)
    Output size: 25741 (25K)
    Deflate size: 25723 (25K)
    Ratio: 51.189%
    
    
    d:\github\zopfliryzen>zopfli --i999 --mb0 --t12 --v5 --testrec DeflOpt.exe
    Zopfli, a Compression Algorithm to produce Deflate streams.
    KrzYmod extends Zopfli functionality - version 18.2.8
    
    Saving to: DeflOpt.exe.gz
    
          [BSR: 2] Estimate Cost: 213947 bit
          [BSR: 3] Estimate Cost: 213657 bit < 213947 bit
          [BSR: 4] Estimate Cost: 213613 bit < 213657 bit
    Block split points: 13219 38091 39708 (hex: 33a3,94cb,9b1c)
    Total blocks: 4
    
    Progress:  25.1%  ---  Block:    1 / 4 [0001]  ---  Data Left: 39517  B
    Progress:  28.1%  ---  Block:    2 / 4 [0003]  ---  Data Left: 37900  B
    Progress:  52.8%  ---  Block:    3 / 4 [0004]  ---  Data Left: 24872  B
    Progress: 100.0%  ---  Block:    4 / 4 [0002]  ---  Data Left:     0  B
          [BSR: 2] Estimate Cost: 206012 bit
          [BSR: 5] Estimate Cost: 205837 bit < 206012 bit
          [BSR: 6] Estimate Cost: 205796 bit < 205837 bit
          [BSR: 9] Estimate Cost: 205790 bit < 205796 bit
          [BSR: 13] Estimate Cost: 205777 bit < 205790 bit
    Block split points: 165 14381 38230 39616 39677 (hex: a5,382d,9556,9ac0,9afd)
    Total blocks: 6
    
    BLOCK 0001: Compressed block size: 113 (0k) (unc: 165)
     > Used Fixed Tree Block: 904 bit < 1041 bit
    BLOCK 0002: Compressed block size: 7417 (7k) (tree: 113) (unc: 14216)
    BLOCK 0003: Compressed block size: 13122 (12k) (tree: 119) (unc: 23849)
    BLOCK 0004: Compressed block size: 434 (0k) (tree: 52) (unc: 1386)
    BLOCK 0005: Compressed block size: 15 (0k) (unc: 61)
     > Used Fixed Tree Block: 127 bit < 242 bit
    BLOCK 0006: Compressed block size: 4622 (4k) (tree: 87) (unc: 13059)
    !! BEST SPLIT POINTS FOUND: 165 14381 38230 39616 39677 (hex: a5,382d,9556,9ac0,9afd)
    Input size: 52736 (51K)
    Deflate size: 25723 (25K)
    Ratio: 51.223%
    
    Progress: 100.0%
    Input size: 52736 (51K)
    Output size: 25741 (25K)
    Deflate size: 25723 (25K)
    Ratio: 51.189%
    
    
    d:\github\zopfliryzen>zopfli --i999 --mb0 --t12 --v5 --pass99 DeflOpt.exe
    Zopfli, a Compression Algorithm to produce Deflate streams.
    KrzYmod extends Zopfli functionality - version 18.2.8
    
    Saving to: DeflOpt.exe.gz
    
          [BSR: 9] Estimate Cost: 213643 bit
    Block split points: 202 11735 38188 39552 (hex: ca,2dd7,952c,9a80)
    Total blocks: 5
    
    Progress:   0.4%  ---  Block:    1 / 5 [0001]  ---  Data Left: 52534  B
    Progress:  22.3%  ---  Block:    2 / 5 [0002]  ---  Data Left: 41001  B
    Progress:  24.8%  ---  Block:    3 / 5 [0004]  ---  Data Left: 39637  B
    Progress:  49.8%  ---  Block:    4 / 5 [0005]  ---  Data Left: 26453  B
    Progress: 100.0%  ---  Block:    5 / 5 [0003]  ---  Data Left:     0  B
          [BSR: 9] Estimate Cost: 205845 bit
    Block split points: 203 11503 38194 39361 (hex: cb,2cef,9532,99c1)
    Total blocks: 5
    
     Recompressing, pass #1.
    Progress:   0.4%  ---  Block:    1 / 5 [0001]  ---  Data Left: 52533  B
    Progress:  21.8%  ---  Block:    2 / 5 [0002]  ---  Data Left: 41233  B
    Progress:  24.0%  ---  Block:    3 / 5 [0004]  ---  Data Left: 40066  B
    Progress:  49.4%  ---  Block:    4 / 5 [0005]  ---  Data Left: 26691  B
    Progress: 100.0%  ---  Block:    5 / 5 [0003]  ---  Data Left:     0  B
    !! RECOMPRESS: Bigger, using last (205795 bit > 205777 bit) !
    BLOCK 0001: Compressed block size: 139 (0k) (unc: 202)
     > Used Fixed Tree Block: 1108 bit < 1233 bit
    BLOCK 0002: Compressed block size: 6039 (5k) (tree: 108) (unc: 11533)
    BLOCK 0003: Compressed block size: 14457 (14k) (tree: 119) (unc: 26453)
    BLOCK 0004: Compressed block size: 427 (0k) (tree: 49) (unc: 1364)
    BLOCK 0005: Compressed block size: 4661 (4k) (tree: 87) (unc: 13184)
    !! BEST SPLIT POINTS FOUND: 202 11735 38188 39552 (hex: ca,2dd7,952c,9a80)
    Input size: 52736 (51K)
    Deflate size: 25723 (25K)
    Ratio: 51.223%
    
    Progress: 100.0%
    Input size: 52736 (51K)
    Output size: 25741 (25K)
    Deflate size: 25723 (25K)
    Ratio: 51.189%
    
    
    d:\github\zopfliryzen>zopfli --i999 --mb0 --t12 --v5 --pass99 --tesrec DeflOpt.exe
    Zopfli, a Compression Algorithm to produce Deflate streams.
    KrzYmod extends Zopfli functionality - version 18.2.8
    
    Saving to: DeflOpt.exe.gz
    
          [BSR: 9] Estimate Cost: 213643 bit
    Block split points: 202 11735 38188 39552 (hex: ca,2dd7,952c,9a80)
    Total blocks: 5
    
    Progress:   0.4%  ---  Block:    1 / 5 [0001]  ---  Data Left: 52534  B
    Progress:  22.3%  ---  Block:    2 / 5 [0002]  ---  Data Left: 41001  B
    Progress:  24.8%  ---  Block:    3 / 5 [0004]  ---  Data Left: 39637  B
    Progress:  49.8%  ---  Block:    4 / 5 [0005]  ---  Data Left: 26453  B
    Progress: 100.0%  ---  Block:    5 / 5 [0003]  ---  Data Left:     0  B
          [BSR: 9] Estimate Cost: 205845 bit
    Block split points: 203 11503 38194 39361 (hex: cb,2cef,9532,99c1)
    Total blocks: 5
    
     Recompressing, pass #1.
    Progress:   0.4%  ---  Block:    1 / 5 [0001]  ---  Data Left: 52533  B
    Progress:  21.8%  ---  Block:    2 / 5 [0002]  ---  Data Left: 41233  B
    Progress:  24.0%  ---  Block:    3 / 5 [0004]  ---  Data Left: 40066  B
    Progress:  49.4%  ---  Block:    4 / 5 [0005]  ---  Data Left: 26691  B
    Progress: 100.0%  ---  Block:    5 / 5 [0003]  ---  Data Left:     0  B
    !! RECOMPRESS: Bigger, using last (205795 bit > 205777 bit) !
    BLOCK 0001: Compressed block size: 139 (0k) (unc: 202)
     > Used Fixed Tree Block: 1108 bit < 1233 bit
    BLOCK 0002: Compressed block size: 6039 (5k) (tree: 108) (unc: 11533)
    BLOCK 0003: Compressed block size: 14457 (14k) (tree: 119) (unc: 26453)
    BLOCK 0004: Compressed block size: 427 (0k) (tree: 49) (unc: 1364)
    BLOCK 0005: Compressed block size: 4661 (4k) (tree: 87) (unc: 13184)
    !! BEST SPLIT POINTS FOUND: 202 11735 38188 39552 (hex: ca,2dd7,952c,9a80)
    Input size: 52736 (51K)
    Deflate size: 25723 (25K)
    Ratio: 51.223%
    
    Progress: 100.0%
    Input size: 52736 (51K)
    Output size: 25741 (25K)
    Deflate size: 25723 (25K)
    Ratio: 51.189%
    
    
    d:\github\zopfliryzen>zopfli --i999 --mb0 --t12 --v5 --pass99 --testrec DeflOpt.exe
    Zopfli, a Compression Algorithm to produce Deflate streams.
    KrzYmod extends Zopfli functionality - version 18.2.8
    
    Saving to: DeflOpt.exe.gz
    
          [BSR: 2] Estimate Cost: 213947 bit
          [BSR: 3] Estimate Cost: 213657 bit < 213947 bit
          [BSR: 4] Estimate Cost: 213613 bit < 213657 bit
    Block split points: 13219 38091 39708 (hex: 33a3,94cb,9b1c)
    Total blocks: 4
    
    Progress:  25.1%  ---  Block:    1 / 4 [0001]  ---  Data Left: 39517  B
    Progress:  28.1%  ---  Block:    2 / 4 [0003]  ---  Data Left: 37900  B
    Progress:  52.8%  ---  Block:    3 / 4 [0004]  ---  Data Left: 24872  B
    Progress: 100.0%  ---  Block:    4 / 4 [0002]  ---  Data Left:     0  B
          [BSR: 2] Estimate Cost: 206012 bit
          [BSR: 5] Estimate Cost: 205837 bit < 206012 bit
          [BSR: 6] Estimate Cost: 205796 bit < 205837 bit
          [BSR: 9] Estimate Cost: 205790 bit < 205796 bit
          [BSR: 13] Estimate Cost: 205777 bit < 205790 bit
    Block split points: 165 14381 38230 39616 39677 (hex: a5,382d,9556,9ac0,9afd)
    Total blocks: 6
    
     Recompressing, pass #1.
    Progress:   0.3%  ---  Block:    1 / 6 [0001]  ---  Data Left: 52571  B
    Progress:   2.9%  ---  Block:    2 / 6 [0004]  ---  Data Left: 51185  B
    Progress:   3.1%  ---  Block:    3 / 6 [0005]  ---  Data Left: 51124  B
    Progress:  30.0%  ---  Block:    4 / 6 [0002]  ---  Data Left: 36908  B
    Progress:  54.8%  ---  Block:    5 / 6 [0006]  ---  Data Left: 23849  B
    Progress: 100.0%  ---  Block:    6 / 6 [0003]  ---  Data Left:     0  B
    !! RECOMPRESS: Smaller (205698 bit < 205827 bit) !
          [BSR: 2] Estimate Cost: 206119 bit
          [BSR: 3] Estimate Cost: 205806 bit < 206119 bit
          [BSR: 8] Estimate Cost: 205802 bit < 205806 bit
          [BSR: 9] Estimate Cost: 205727 bit < 205802 bit
          [BSR: 11] Estimate Cost: 205726 bit < 205727 bit
          [BSR: 13] Estimate Cost: 205719 bit < 205726 bit
    Block split points: 165 14381 38225 39628 39677 (hex: a5,382d,9551,9acc,9afd)
    Total blocks: 6
    
     Recompressing, pass #2.
    Progress:   0.3%  ---  Block:    1 / 6 [0001]  ---  Data Left: 52571  B
    Progress:   3.0%  ---  Block:    2 / 6 [0004]  ---  Data Left: 51168  B
    Progress:   3.1%  ---  Block:    3 / 6 [0005]  ---  Data Left: 51119  B
    Progress:  30.0%  ---  Block:    4 / 6 [0002]  ---  Data Left: 36903  B
    Progress:  54.8%  ---  Block:    5 / 6 [0006]  ---  Data Left: 23844  B
    Progress: 100.0%  ---  Block:    6 / 6 [0003]  ---  Data Left:     0  B
    !! RECOMPRESS: Bigger, using last (205698 bit > 205698 bit) !
    BLOCK 0001: Compressed block size: 113 (0k) (unc: 165)
     > Used Fixed Tree Block: 904 bit < 1015 bit
    BLOCK 0002: Compressed block size: 7412 (7k) (tree: 106) (unc: 14216)
    BLOCK 0003: Compressed block size: 13120 (12k) (tree: 119) (unc: 23849)
    BLOCK 0004: Compressed block size: 431 (0k) (tree: 51) (unc: 1386)
    BLOCK 0005: Compressed block size: 16 (0k) (unc: 61)
     > Used Fixed Tree Block: 127 bit < 242 bit
    BLOCK 0006: Compressed block size: 4621 (4k) (tree: 86) (unc: 13059)
    !! BEST SPLIT POINTS FOUND: 165 14381 38230 39616 39677 (hex: a5,382d,9556,9ac0,9afd)
    Input size: 52736 (51K)
    Deflate size: 25713 (25K)
    Ratio: 51.242%
    
    Progress: 100.0%
    Input size: 52736 (51K)
    Output size: 25731 (25K)
    Deflate size: 25713 (25K)
    Ratio: 51.208%
    
    
    d:\github\zopfliryzen>
    Compiling profiled binaries on your own:
    1. Compile with -fprofile-generate and use the other flags that best suit your CPU (sometimes GCC may not pick the best ones automatically),
    2. Delete all .gcda files to avoid profile inconsistency (do it EVERY TIME after an undesired run - for example, if you get the help screen instead of the program running),
    3. Run the application the way you would normally run it (if you plan to use a lot of iterations in the future, it may be good to run it like that and just wait until it finishes),
    4. Recompile with -fprofile-use and -fprofile-partial-training (the latter is GCC 10 only; it applies the default compiler optimizations to code not covered by the PGO profile - normally GCC optimizes those parts for size, which may degrade performance for unusual runs),
    5. If there are errors about profile corruption, use -fprofile-correction or run the profiling step in single-threaded compatibility mode with the --t0 switch (this produces no corruption; however, I'm not sure whether multi- or single-threaded mode produces the most optimal PGO binary).


    Version 20.2.6 changes (zopflipng builds only):
    - Update LodePNG to 20191107 from the original repo while maintaining compatibility with previous changes.
    This does not affect Zopfli in any way, so for zopfli v20.2.5 == v20.2.6.

    Version 20.2.5 changes:
    - Fix crash with >1 MASTER BLOCK + pass# and a progress overflow,
    - Append to the LZ77 store much faster by allocating the memory known to be needed for queued blocks, and be verbose about it (in some cases it now takes a few seconds instead of minutes),
    - Reuse created threads to avoid flooding the system with a new thread every time a new block is to be compressed; also wait until they gracefully shut down and correctly release them on WIN32 systems (separately for the block splitter and MASTER BLOCKS),
    - Reduce MASTER BLOCK size back to 20MB, as there is some performance degradation when the amount of data compressed at once exceeds that value (cause currently unknown),
    - Automatically increase MASTER BLOCK size with the --cbs parameter to overlap on block split points. This avoids the last block within the MASTER BLOCK area being cut into two or more pieces. There are no sanity checks here, so if a block ends up being too big, it may increase the MASTER BLOCK by a lot. Should also produce the EXACT same block split points as passed with --cbs or --cbsfile,
    - Fix memory leaks and crashes with --statsdb, --aas and --all (no other problematic commands known as of now).
    The above changes fix a lot of problems with zopfli and big input files. So far I could compress a 1.2GB WAV file and other smaller ones without a problem, using 7-Zip's block split points (memory usage, however, exceeded 2GB). The results were usually files a few MB smaller, with max 999 unsuccessful iterations per block. Split points were produced exactly the same as in the original ZIP file. Zopfli also no longer gets stuck for too long when merging blocks into the store, be it a 20 or 100MB master block.
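
    The append speed-up above boils down to reserving the known combined size once instead of letting a grow-on-every-append pattern realloc the store piecemeal. A minimal sketch with hypothetical names (not zopfli's actual structures):

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch: "Store" stands in for an LZ77 store.
   Error handling on realloc is omitted for brevity. */
typedef struct { unsigned short *litlens; size_t size, cap; } Store;

/* Old pattern: grow on every append (amortized, but realloc-heavy). */
void store_append(Store *s, unsigned short v) {
    if (s->size == s->cap) {
        s->cap = s->cap ? s->cap * 2 : 16;
        s->litlens = realloc(s->litlens, s->cap * sizeof *s->litlens);
    }
    s->litlens[s->size++] = v;
}

/* New pattern: one reservation for the known combined size of the
   queued blocks, then plain memcpy appends. */
void store_reserve(Store *s, size_t total) {
    if (total > s->cap) {
        s->litlens = realloc(s->litlens, total * sizeof *s->litlens);
        s->cap = total;
    }
}

void store_append_bulk(Store *s, const unsigned short *v, size_t n) {
    store_reserve(s, s->size + n);
    memcpy(s->litlens + s->size, v, n * sizeof *v);
    s->size += n;
}
```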

    Version 18.8.2 changes:
    - Add --v6 to display more information with the --all switch,
    - Display the modes used in the threads progress display: xxx1 - lazy, xx1x - ohh, x1xx - rc, 1xxx - brotli (e.g. 0011 means that the lazy and ohh modes are used),
    - Speed up the threads progress refresh 2x,
    - Improve result-validation speed with the --all switch, especially visible in runs with a small number of iterations,
    - Allow passing an optional numeric parameter next to the --testrec switch to change the default of 20 maximum unsuccessful tries to some other number (this number can still be exceeded with threads).
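
    The mode display above is just a 4-bit mask, one bit per mode. A tiny decoder sketch (the helper name is illustrative, not from the zopfli source):

```c
#include <string.h>

/* Decodes the 4-bit mode mask shown in the threads progress display:
   bit 0 = lazy, bit 1 = ohh, bit 2 = rc, bit 3 = brotli.
   So "0011" (= 3) means lazy + ohh, as in the changelog example.
   buf must hold at least 32 bytes. */
void mode_names(unsigned mask, char *buf) {
    buf[0] = '\0';
    if (mask & 1u) strcat(buf, "lazy ");
    if (mask & 2u) strcat(buf, "ohh ");
    if (mask & 4u) strcat(buf, "rc ");
    if (mask & 8u) strcat(buf, "brotli ");
}
```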

    Version 18.6.2 changes:
    - Fix display update regression with threads and --v>2
    - Fix inefficient threading on --v<3 (including default)
    - Fix other verbosity issues
    - Fix wrong buffer size

    Version 18.3.1 changes:
    - Fix excessive CPU usage by the MASTER controller thread when several SLAVE threads have finished their work and are idle, and verbosity is <3 or default. A 50ms sleep is introduced in the thread-check loop, and the 250ms interval for displaying SLAVE thread status is changed to 200ms (200+50ms). This slows down the MASTER's block-queue check a bit, mostly noticeable when blocks are processed quickly, and delays the display of thread status when more threads are idle and verbosity is set to display this information. Overall, zopfli will run faster by up to (100 / number of logical processors)% compared with previous versions at default verbosity when all logical processors are used - in case you know how to improve it, send me a pull request with your code ;) ,
    - Code should now be also OS X portable including threads and their affinity setting, thanks to MegaByte.

    Version 18.2.13 changes:
    - --testrec is now multithreaded,
    - Every memory allocator now uses a built-in wrapper, which shouldn't crash Zopfli on out-of-memory or heap fragmentation; instead it will report an error and wait for memory to become available, retrying every minute,
    - Fix wrong time offset in ZIP files,
    - CTRL+C will now properly abort program when --mui is already set to 1.

    Version 18.2.12 changes:
    - Display block splitter counters only at verbosity level 5,
    - Ratio is now a compression ratio, not the % of the original,
    - Block progress is now displayed when block compression finishes, not when it starts, and the display is a bit improved,
    - Use B, KB, MB in Data Left,
    - Add ZOPFLI_REALLOC_BUFFER for TraceBackwards and ZopfliStoreLitLens for a small speed-up of ~2% (measured on an Odroid U3).

    Version 18.2.7 changes:
    - Add --slowdyn#, which uses LZ77 Optimal (iteration mode, verbosity 0) runs in the splitter instead of simple split-point calculations. # is the maximum number of unsuccessful iterations in this step. This mode may extremely slow down the splitter, especially when used with --slowfix, --bs99999 OR --testrec,
    - Rename --slowsplit to --slowfix,
    - Allow usage of LZ77 Optimal (iteration mode) before the splitter (similar to splittinglast in old versions of zopfli) when a number is added after the --testrec command (for example: --testrec999); then split and do the final LZ77 Optimal runs on blocks,
    - Remove the 6th level of verbosity by changing how the splitter indicates that it's still doing its work (using counters on v>0),
    - --sb# to set the minimum size of lz77 data on which the expensive search for split points is performed (1024 is the original zopfli default),
    - --testrec to test various --bsr combinations in the splitter; works also in --pass# recompressions (the behavior is a bit different when --maxrec is used),
    - Fix ARM device crashes once and for all - this fixes a bug present in the original.

    Version 18.1.5 changes:
    - Simplify zopflipng help and keep the details in the Readme file instead of polluting the screen,
    - Improve the display a bit for compression and the splitter,
    - Compress blocks from biggest to smallest to improve multi-threading,
    - Add --maxrec to change how recursion in lz77 is performed: instead of splitting lz77 ranges by a static amount (9 by default), use the formula range / bsr. For example, for a range of 900 and the default bsr it should use a recursion step of 100, while for 450 - 50.
    (For now, tests show that maxrec won't provide better results right out of the box; it may require some fine-tuning with the --bsr switch. The improvements came from slowsplit being used; tested on zopfli.exe.)
    - Use native WIN32 threads on Windows (required for --aff to work),
    - Add the --aff switch to set per-thread affinity. This may improve speed a bit on Ryzen processors; recommended:
    * 4 cores (0-3,4-7): --aff15,240
    * 6 cores (0-5,6-11): --aff63,4032
    * 8 cores (0-7,8-15): --aff255,65280
    * 12 cores (threadripper, 0-5,6-11,12-17,18-23): --aff63,4032,258048,16515072
    * 16 cores (threadripper, 0-7,8-15,16-23,24-31): --aff255,65280,16711680,4278190080
    * or one mask per logical core, or one per physical core (2 logical cores), keeping each group on the same CCX
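
    The recommended --aff values above are simply contiguous bitmasks, one bit per logical CPU, shifted to the first CPU of each group. A sketch of how they are derived:

```c
#include <stdint.h>

/* Builds an affinity bitmask: `width` consecutive bits set, starting
   at logical CPU `first_cpu`. E.g. aff_mask(4, 4) = 0b11110000 = 240,
   matching the second group of the 4-core recommendation --aff15,240. */
uint64_t aff_mask(unsigned width, unsigned first_cpu) {
    uint64_t bits = (width < 64) ? ((1ull << width) - 1) : ~0ull;
    return bits << first_cpu;
}
```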

    Older versions:
    - Please check commits on Github . . .

    Known bugs: [any fix commits for the below highly welcomed!]
    - there is (was?) some small memory leak occurring with threads
    (fixed in v20.2.5, no more leaks detected),
    - merging blocks into the stream is slow (fixed in v20.2.5; it's now hundreds of times faster),
    - CTRL+C speeds up only Iterating mode,

    - ZopfliPNG has bad typecasting (e.g. char -> int) which WILL crash the program on ARM devices randomly (a similar thing was present in Zopfli and is now fixed; an original bug),
    - The program will silently ignore unknown switches (Zopfli only),
    - Using a different number of threads may give slightly different results when the --testrec switch is used,
    - Windows may fail to decompress zopfli-compressed files (the cause is unknown; please use 7-Zip, WinRAR or another unpacker with ZIP support)

    Additional command line switches
    * Please check README on Github . . .

    Fixes a few bugs/crashes found in the original Zopfli project.

    Attached files:
    - Profiled Zopfli for Linux x64,
    - Profiled Zopfli for Linux x86,
    - Profiled Zopfli for Linux armv7,
    - Profiled Zopfli for Windows x64,
    - Profiled Zopfli for Windows x64 Ryzen 1xxx CPU tuned,
    - Profiled Zopfli for Windows x86,
    - Profiled Zopflipng for Linux x64,
    - Profiled Zopflipng for Linux x86,
    - Profiled Zopflipng for Linux armv7,
    - Profiled Zopflipng for Windows x64,
    - Profiled Zopflipng for Windows x64 Ryzen 1xxx CPU tuned,
    - Profiled Zopflipng for Windows x86,
    - krzydefc windows, linux all,
    - test sample.

    Built as static; all Windows versions are compiled with MSYS2\mingw32|64.

    Files are also available on the server where I host my other projects: http://virtual.4my.eu/Zopfli%20KrzYmod/ .

    For more information and a full description of the commands in the KrzYmod version, please refer to the README file: https://github.com/MrKrzYch00/zopfli/blob/master/README

    Enjoy!
    Last edited by Mr_KrzYch00; 2nd March 2020 at 04:52. Reason: A lot of fixes and improvements in v20.2.5/6

  2. Thanks (6):

    comp1 (21st July 2015),encode (16th February 2018),Jaff (28th April 2015),lorents17 (27th April 2015),pico (5th June 2016),SolidComp (20th May 2016)

  3. #2
    Member caveman's Avatar
    Join Date
    Jul 2009
    Location
    Strasbourg, France
    Posts
    190
    Thanks
    8
    Thanked 64 Times in 33 Posts
    I wrote a small patch for zopfli's deflate.c: in the Huffman header it tries to record runs of length 8 as two length-4 runs instead of a length-6 run followed by two single values. This sometimes saves a few bits, but it may also impact block splitting, so results with and without the patch may vary greatly.
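
    Assuming this refers to DEFLATE's code-length alphabet (RFC 1951: symbol 16 copies the previous code length 3-6 times, at a cost of its own Huffman code plus 2 extra bits), the two encodings of a run of 8 identical lengths compare roughly as below. This is an illustrative cost model, not the patch's actual logic:

```c
/* len_lit = Huffman code length of the code-length symbol itself,
   len_rep = Huffman code length of symbol 16 (repeat previous 3-6x).
   A run of 8 identical code lengths can be emitted as: */

/* length-6 run + two singles: lit, sym16(x5 more), lit, lit */
int cost_run6_plus_two(int len_lit, int len_rep) {
    return 3 * len_lit + (len_rep + 2);
}

/* two length-4 runs: lit, sym16(x3 more), lit, sym16(x3 more) */
int cost_two_run4(int len_lit, int len_rep) {
    return 2 * len_lit + 2 * (len_rep + 2);
}
```

    The second form trades one literal for one extra repeat symbol, so it wins exactly when len_lit > len_rep + 2, which would explain why the patch only "sometimes saves a few bits".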

    Drop-in replacement for src/zopfli/deflate.c and a patch file in the enclosed archive.
    Also on my GitHub fork of zopfli.
    Attached Files
    Last edited by caveman; 27th April 2015 at 18:05. Reason: Added GitHub link

  4. Thanks:

    Mr_KrzYch00 (29th April 2015)

  5. #3
    Member Mr_KrzYch00's Avatar
    Join Date
    Apr 2015
    Location
    Poland
    Posts
    65
    Thanks
    10
    Thanked 40 Times in 24 Posts
    Right now I'm testing the mbs switch on that PNG file I mentioned in the first post; so far 66 blocks produced the best result (which I believe is the max for the default mls of 1024). After I finish all possible combinations of blocks (with altered mls, most likely exceeding 66), I will run a lot of iterations and try to do the same number of iterations with your suggestion to see if there is any difference. (:

  6. #4
    Member caveman's Avatar
    Join Date
    Jul 2009
    Location
    Strasbourg, France
    Posts
    190
    Thanks
    8
    Thanked 64 Times in 33 Posts
    Quote Originally Posted by Mr_KrzYch00 View Post
    Right now I'm testing the mbs switch on that PNG file I mentioned in the first post; so far 66 blocks produced the best result (which I believe is the max for the default mls of 1024). After I finish all possible combinations of blocks (with altered mls, most likely exceeding 66), I will run a lot of iterations and try to do the same number of iterations with your suggestion to see if there is any difference. (:
    You can check the actual number of blocks using defdb. I had the equivalent of mbs (called maxblocks) in my custom version of zopflipng with cryopng inside; it really helps on large files.

  7. #5
    Member Mr_KrzYch00's Avatar
    Join Date
    Apr 2015
    Location
    Poland
    Posts
    65
    Thanks
    10
    Thanked 40 Times in 24 Posts
    Quote Originally Posted by caveman View Post
    You can check the actual number of blocks using defdb. I had the equivalent of mbs (called maxblocks) in my custom version of zopflipng with cryopng inside; it really helps on large files.
    This is how I checked for 66, actually. For a slightly different file, a smaller-resolution DOS-like screen, the number of blocks was 11 by default; interestingly, 10 provided a 4-byte reduction (tested on 5 iterations though; now running it for 99999 - it will take some time to get a valid comparison).

  8. #6
    Member
    Join Date
    Apr 2011
    Location
    Russia
    Posts
    168
    Thanks
    163
    Thanked 9 Times in 8 Posts
    Mr_KrzYch00
    Could you integrate Zopfli into PNGWolf?
    It would be very interesting to compare it with PNGWolfZopfli.

  9. Thanks:

    Mr_KrzYch00 (29th April 2015)

  10. #7
    Member
    Join Date
    May 2012
    Location
    United States
    Posts
    342
    Thanks
    197
    Thanked 58 Times in 42 Posts
    Quick test with my WIN95 file set archived into one SHAR file using Shelwien's SHAR program. The file set contains binary, text, image, audio, etc.

    Code:
    58,194,746  UNCOMPRESSED
    ------------------------------------
    24,104,966  zopfli64 --i10 --mbs15
    23,609,073  KZIP /s0 /b1024
    23,448,668  7-zip ZIP (Ultra, Deflate, etc.)
    23,343,213  PIGZ -11
    23,273,667  zopfli64 --i10 --mbs0
    23,248,135  zopfli64 --i1000 --mbs0
    Someone needs to make this into a full archiver so that multiple files can be put into one zopfli zip archive!

  11. #8
    Member Mr_KrzYch00's Avatar
    Join Date
    Apr 2015
    Location
    Poland
    Posts
    65
    Thanks
    10
    Thanked 40 Times in 24 Posts
    Quote Originally Posted by comp1 View Post
    Someone needs to make this into a full archiver so that multiple files can be put into one zopfli zip archive!
    You can use my GZ2KZIP script. The newest version supports GZ timestamps and filenames (if both are valid inside the GZ file) and converts the headers to ZIP accordingly while copying the deflate stream to the new container. (If the GZ doesn't contain a filename header, it's wiser to name the file something like myprogram.exe.gz so it is properly named myprogram.exe in the newly created myprogram.exe.zip.) Unfortunately it's written in PHP, not in C, so you need PHP for Windows for it to run (it may also work on Linux; not tested).
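
    The header fields such a converter has to read are defined by RFC 1952. A minimal sketch (not the actual GZ2KZIP code, which is a PHP script) that locates the Unix mtime, the optional original file name (FNAME), and the offset where the raw deflate stream begins:

```c
#include <stdint.h>
#include <stddef.h>

/* RFC 1952 member header flags. */
enum { FTEXT = 1, FHCRC = 2, FEXTRA = 4, FNAME = 8, FCOMMENT = 16 };

/* Returns the offset of the raw deflate data, or -1 if this is not
   a supported gzip member. Sets *mtime and *name (NULL if absent). */
long gzip_header(const unsigned char *p, size_t n,
                 uint32_t *mtime, const char **name) {
    if (n < 10 || p[0] != 0x1f || p[1] != 0x8b || p[2] != 8) return -1;
    unsigned flg = p[3];
    *mtime = p[4] | (uint32_t)p[5] << 8
           | (uint32_t)p[6] << 16 | (uint32_t)p[7] << 24;
    size_t pos = 10;                        /* fixed header size */
    if (flg & FEXTRA) {
        if (pos + 2 > n) return -1;
        size_t xlen = p[pos] | (size_t)p[pos + 1] << 8;
        pos += 2 + xlen;
    }
    *name = NULL;
    if (flg & FNAME) {                      /* NUL-terminated name */
        *name = (const char *)p + pos;
        while (pos < n && p[pos]) pos++;
        pos++;
    }
    if (flg & FCOMMENT) { while (pos < n && p[pos]) pos++; pos++; }
    if (flg & FHCRC) pos += 2;
    return pos <= n ? (long)pos : -1;
}
```

    With the deflate offset known, the stream bytes can be copied verbatim into a ZIP local-file entry; only the surrounding metadata (name, timestamp, CRC32, sizes) changes container format.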
    After this you can simply ZIPMIX the produced ZIP file. I usually have archive A containing all the files and B being the optimized file, then run the /sub command first to take that file out of A, then /or to put the optimized one in. However, this way ZIPMIX doesn't check whether B is actually smaller, so you need to make sure of that yourself...

    EDIT: Actually you are right; to properly support directory structure and everything else, a PHP script could easily handle it by scanning files prepared in some directory to ZIP, running zopfli on each file, and automatically applying the result to the created archive, so it's updated on the fly on each successful GZ file created from that directory while maintaining the correct file order and directory structure in the ZIP format. Or it could be some GUI-like program; an AutoHotkey script could pretty much work for the GUI, but I'm not sure whether working with files in it is as easy and efficient as in PHP.
    A long time ago I made a tool in VB6 which is pretty outdated (I still use it to work with kzip to find the best number of blocks and run 20,000 -rn tries for zip and png files). I programmed full analysis of the ZIP structure into it to detect all files and their sizes; however, it uses external tools - like 7z, kzip, zipmix - to extract, optimize and put files back in. It was also my first attempt at converting ZIP to GZ and getting rid of the extended PK headers most JAR files have. Something similar that replaced kzip with zopfli would be a nice idea; I'm not sure if I can still get VB6 Studio running, or how interested I am in this, since I've gotten too used to C-like structure in PHP. Maybe I should start learning C++, lol.

    The program is no longer available for download; the last version was described here: http://krzyrio.blogspot.com/2011/11/krzflopt-123.html
    Last edited by Mr_KrzYch00; 29th April 2015 at 15:08.

  12. #9
    Member Mr_KrzYch00's Avatar
    Join Date
    Apr 2015
    Location
    Poland
    Posts
    65
    Thanks
    10
    Thanked 40 Times in 24 Posts
    Quote Originally Posted by lorents17 View Post
    Mr_KrzYch00
    Could you integrate Zopfli into PNGWolf?
    It would be very interesting to compare it with PNGWolfZopfli.
    I've never tried PNGWolf, and the source would require deep analysis in order to even try hooking the zopfli source into the pngwolf source the way it would want to work with it... I guess it would be time consuming; don't expect miracles to happen though - this topic was my biggest C modification ever, lol. I basically script in PHP.

    EDIT: btw, you don't want to know how many warnings pngwolf produces on mingw, not to mention it didn't want to produce an output file - I was trying to fix serious problems by modifying parts of the code of all the dependencies, getting further down the road but still without promising results... nvm, solved it... after a bit of source-code modification of pngwolf, 7-zip, galib, and properly including the mingw libraries...

    Could anybody familiar with pngwolf check this compilation against some tests he/she did in the past (to see if it works properly)? It's a standard source-code compilation using 7-zip 9.38 beta, zlib 1.2.8 and galib 2.4.7, built with mingw32 using the 64-bit toolchain... It will also be interesting to see whether the x64 version has any speed advantage over the x86 version. The x86 version of my build is available upon request.

    EDIT2: Yup, pngwolf+zopfli won over zopflipng. I was working on the GZIP file produced by pngwolf. The number of blocks got reduced, and the file is 4KB smaller in GZ format than the one produced by zopflipng in PNG format... I think it would be easier to work on the GZ container (using pngwolf's ability to put the optimized IDAT into a GZ file) with some kind of script that could also convert GZ back to PNG (if possible!), even if it were based on the original PNG file's information with small modifications (CRC32 I guess; anything else?).

    EDIT3: Actually it's already done, but using a ZLIB stream produced with zopfli; here is the output of the script I just finished writing:

    Code:
    e:\php>php zlib2png.php ss04.png ss04_new.zlib ss04_new.png
    
    Original PNG filename . . . . . . . . . . . . . . . . . ss04.png
    Original ZLIB filename  . . . . . . . . . . . . . . . . ss04_new.zlib
    New PNG filename  . . . . . . . . . . . . . . . . . . . ss04_new.png
    
    
    New IDAT Size:  . . . . . . . . . . . . . . . . . . . . 345 323 bytes
    New IDAT CRC32: . . . . . . . . . . . . . . . . . . . . 22649BF9
    
    
    Original PNG Size:  . . . . . . . . . . . . . . . . . . 349 203 bytes
    Original ZLIB Size: . . . . . . . . . . . . . . . . . . 345 319 bytes
    New PNG Size: . . . . . . . . . . . . . . . . . . . . . 345 376 bytes
    
    
    Successfully produced new PNG file from PNG + ZLIB !
    It's available here.
    Attached Files
    Last edited by Mr_KrzYch00; 28th April 2015 at 17:32.

  13. Thanks:

    lorents17 (28th April 2015)

  14. #10
    Member
    Join Date
    Apr 2011
    Location
    Russia
    Posts
    168
    Thanks
    163
    Thanked 9 Times in 8 Posts
    Good day!
    Tell me, how can your Zopfli project change the default values of --mbs / --mls?

  15. #11
    Member Mr_KrzYch00's Avatar
    --mbs was added as a wrapper around an already existing program option that had the maximum number of blocks to split into hardcoded to 15,
    --lazy was a #define-type constant that was only alterable before compilation, so I added a variable to the options class and changed the code to read that class variable instead of the defined value,
    -w was also added as a wrapper for the verbose_more option that was already there, but I changed the "if" evaluation for iterations a bit so it is displayed on the same terminal line, and a new line is started only when a best iteration is found, so it doesn't flood Your terminal (but You can still TELL that the program is working and which iteration it's currently running - very good for big files),
    --mls, on the other hand, was based on Caveman's finding that altering the 1024 value in a length scoring function can provide different results. I guess it's used for some decisions made by zopfli, like block splits (it also has a bit of impact on the progress of iterations even if only one block is used); it was added as an additional option variable in the options class, and that class needed to be passed down from the upper function as a parameter, so it could then be used as the third parameter (the max) passed to the final function.

    With zopflipng, all 5 command line switches were added to the png_options class (I think it was named like that), to be later assigned to the zopfli options class that supplies zopfli's defaults when zopflipng runs zopfli from within itself.

    Ah yes, I also altered some definable constants:
    Code:
    /*
    For longest match cache. max 256. Uses huge amounts of memory but makes it
    faster. Uses this many times three bytes per single byte of the input data.
    This is so because longest match finding has to find the exact distance
    that belongs to each length for the best lz77 strategy.
    Good values: e.g. 5, 8.
    */
    #define ZOPFLI_CACHE_LENGTH 256 <- was 8 (this is what increases memory usage A LOT in KrzYmod)
    
    /*
    limit the max hash chain hits for this hash value. This has an effect only
    on files where the hash value is the same very often. On these files, this
    gives worse compression (the value should ideally be 32768, which is the
    ZOPFLI_WINDOW_SIZE, while zlib uses 4096 even for best level), but makes it
    faster on some specific files.
    Good value: e.g. 8192.
    */
    #define ZOPFLI_MAX_CHAIN_HITS 32768 <- was 8192
    
    /*
    Whether to use the longest match cache for ZopfliFindLongestMatch. This cache
    consumes a lot of memory but speeds it up. No effect on compression size.
    */
    #define ZOPFLI_LONGEST_MATCH_CACHE 1 <- was 0
    
    /*
    Enable to remember amount of successive identical bytes in the hash chain for
    finding longest match
    required for ZOPFLI_HASH_SAME_HASH and ZOPFLI_SHORTCUT_LONG_REPETITIONS
    This has no effect on the compression result, and enabling it increases speed.
    */
    #define ZOPFLI_HASH_SAME 1 <- was 0
    
    /*
    Switch to a faster hash based on the info from ZOPFLI_HASH_SAME once the
    best length so far is long enough. This is way faster for files with lots of
    identical bytes, on which the compressor is otherwise too slow. Regular files
    are unaffected or maybe a tiny bit slower.
    This has no effect on the compression result, only on speed.
    */
    #define ZOPFLI_HASH_SAME_HASH 1 <- was 0
    ZOPFLI_MAX_CHAIN_HITS is kind of a questionable change, since I really don't know if it has any real or BIG impact; I was also thinking about making it configurable via something like --mch.

    I also noticed that I didn't change ZOPFLI_SHORTCUT_LONG_REPETITIONS, which is 0 by default but is required for ZOPFLI_HASH_SAME to actually have any effect (according to what its comment says). I think that's because the comment said it SHOULD not have any impact on the compression ratio...
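    A quick back-of-the-envelope check of the ZOPFLI_CACHE_LENGTH comment above ("uses this many times three bytes per single byte of the input data") shows why going from 8 to 256 increases memory usage A LOT. A small Python sketch of that arithmetic (the helper name is mine; it covers only the sublen part of the cache that the comment describes):

```python
def cache_bytes(input_size: int, cache_length: int) -> int:
    """Longest-match cache size per the source comment:
    ZOPFLI_CACHE_LENGTH * 3 bytes for every byte of input
    (the cache's other per-byte arrays are ignored here)."""
    return input_size * cache_length * 3

one_mib = 1 << 20
print(cache_bytes(one_mib, 8) // one_mib)    # original: 24 MiB per MiB of input
print(cache_bytes(one_mib, 256) // one_mib)  # KrzYmod: 768 MiB per MiB of input
```

    So the 256-entry cache needs 32x the memory of the stock build for that structure, which matches the "increases memory usage A LOT" remark next to the define.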
    Last edited by Mr_KrzYch00; 28th April 2015 at 23:54.

  16. #12
    Member caveman's Avatar
    Join Date
    Jul 2009
    Location
    Strasbourg, France
    Posts
    190
    Thanks
    8
    Thanked 64 Times in 33 Posts
    Either I did not exactly grasp what you are trying to do, or you took a really complicated path.
    If you want to recompress pngwolf's output while keeping the work it did (picking the optimal filter for each line), just run zopflipng using the --filters=p option (the same can be done with pngout using -f6).
    It is relatively easy to bypass the final compression done with the 7-zip code to slightly speed up pngwolf.

  17. Thanks:

    lorents17 (29th April 2015)

  18. #13
    Member Mr_KrzYch00's Avatar
    I quite liked my approach, the idea for which came from lorents17's request. First find the best method to optimize the image data, then try running various tools on its uncompressed stream in case zopfli fails to win. (Sometimes, very rarely, kzip -r wins when run many times in a script, mostly on small files; pngout FAILS to reuse filters; and sometimes 7-Zip [deflate] beats kzip - the magic can happen at a very unusual fb setting, like 57, with a high pass parameter.) Also, if any new tool pops up in the future, or Ken somehow makes kzip even better, then simply extracting the original IDAT from the PNG, running it through various Deflate optimizers and putting it back with my PHP CLI converters ([gz2zip ->] zip2zlib -> zlib2png) is kind of nicer.
    Last edited by Mr_KrzYch00; 29th April 2015 at 00:56.

  19. #14
    Member caveman's Avatar
    Ok I see, I wrote a png2gz converter a while ago to quickly check if other compression methods could easily outperform Deflate on prefiltered data or run the result straight through advdef, but never bothered to write the code that would reinject the new compressed stream into the PNG.

    When does pngout fail to reuse filters?

    Quote Originally Posted by Mr_KrzYch00 View Post
    I quite liked my approach, the idea for which came from lorents17's request. First find the best method to optimize the image data, then try running various tools on its uncompressed stream in case zopfli fails to win. (Sometimes, very rarely, kzip -r wins when run many times in a script, mostly on small files; pngout FAILS to reuse filters; and sometimes 7-Zip [deflate] beats kzip - the magic can happen at a very unusual fb setting, like 57, with a high pass parameter.) Also, if any new tool pops up in the future, or Ken somehow makes kzip even better, then simply extracting the original IDAT from the PNG, running it through various Deflate optimizers and putting it back with my PHP CLI converters ([gz2zip ->] zip2zlib -> zlib2png) is kind of nicer.

  20. #15
    Member Mr_KrzYch00's Avatar
    Quote Originally Posted by caveman View Post
    Ok I see, I wrote a png2gz converter a while ago to quickly check if other compression methods could easily outperform Deflate on prefiltered data or run the result straight through advdef, but never bothered to write the code that would reinject the new compressed stream into the PNG.

    When dose pngout fail to reuse filter?
    When zopflipng produced a smaller file with filters other than the standard 0-4 (1-5 in pngout), like brute force or entropy. When I used /f6, it always ended up with the same size that was achieved before the zopflipng optimization, and zopflipng reusing filters after pngout always ended up with a bigger file.

    Well, the converters are quite enjoyable to write (especially since You don't need to think about C++ structure and can make it straight PHP, which can later easily be turned into a CGI interface on a site), since they basically end up with the same deflate stream; it's just the container around it that changes. On a side note, it's quite easy to try various things with just a simple BATCH script running all the tools I currently want in a certain order. It's just a stupid script performing iterations with zopfli on the pngwolf'ed IDAT and on the original IDAT (which could have been produced by any PNG optimizing program before), and it's quite easy to loop kzip -r there too, but that can be done in a separate script by extracting the original IDAT once again:

    Code:
    pngwolf --in=%1 --max-stagnate-time=300 --max-evaluations=999999 --normalize-alpha --population-size=999 --even-if-bigger --original-idat-to=%1_o.gz --best-idat-to=%1_b.gz
    7z x %1_b.gz
    zopfli64 --i99999 --mbs0 --zlib -w -v %1_b
    ..\php zlib2png.php %1 %1_b.zlib %1_new_b.png
    call dodf %1_new_b.png
    7z x %1_o.gz
    zopfli64 --i99999 --mbs0 --zlib -w -v %1_o
    ..\php zlib2png.php %1 %1_o.zlib %1_new_o.png
    call dodf %1_new_o.png
    dodf is a subscript that runs a lot of these lines:
    Code:
    (deflopt /s %1 & defluff <%1 >%1.png & move /y %1.png %1 & deflopt /s /b %1) >NUL 2>NUL
    How did You manage to write png2gz? Did You decompress the data first? Since in GZ You need to specify the uncompressed size. Also, does anybody know a PNG tool that MAY beat pngwolf and zopflipng at applying filters? Very concerned about taking care of those last bytes when the files are served from a server.

    EDIT: And just by running tests while extending the zopflipng functionality, I found a bug in my zlib2png script... The problem was with how PHP's substr works on binary (full 8-bit) data, so I simply fixed it by not using that function: instead, zlib2png dumps all the original PNG header data it finds (including the 4 bytes of the old IDAT size and the 4-byte "IDAT" string) and then fseeks 8 bytes backwards to replace it with the new data (4 bytes of the new IDAT size + "IDAT" + zlib stream + CRC[...]). This bug doesn't appear in the other converting tools - zip2gz, gz2zip and zip2zlib - since they don't use any string functions on the data that is written to the file.
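    The chunk surgery described above can be sketched in Python (a hypothetical re-implementation of the zlib2png idea, not the actual PHP script): walk the chunks, swap the IDAT payload for the new zlib stream, and recompute the 4-byte length and the CRC32, which covers the chunk type plus its data:

```python
import struct
import zlib

def replace_idat(png_bytes: bytes, new_zlib_stream: bytes) -> bytes:
    """Rebuild a PNG, swapping its IDAT payload for a new zlib stream.

    Every chunk is: 4-byte big-endian length, 4-byte type, data, then a
    4-byte CRC32 computed over type + data. Only the IDAT payload is
    replaced; filters, palette and all other chunks pass through untouched.
    """
    out = bytearray(png_bytes[:8])          # PNG signature
    pos = 8
    idat_written = False
    while pos < len(png_bytes):
        length, ctype = struct.unpack(">I4s", png_bytes[pos:pos + 8])
        data = png_bytes[pos + 8:pos + 8 + length]
        pos += 12 + length                  # skip data + old CRC
        if ctype == b"IDAT":
            if idat_written:                # merge: drop extra IDAT chunks
                continue
            data, idat_written = new_zlib_stream, True
        out += struct.pack(">I", len(data)) + ctype + data
        out += struct.pack(">I", zlib.crc32(ctype + data))
    return bytes(out)
```

    Note this leaves filters and palettes untouched, exactly as the converter approach intends; only the deflate stream inside the container changes.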
    Last edited by Mr_KrzYch00; 29th April 2015 at 10:40. Reason: zlib2png v1.02 is out

  21. #16
    Member
    When using the zopflipng option --ohh, it gives me

    Unknown flag: --ohh

  22. Thanks:

    Mr_KrzYch00 (30th April 2015)

  23. #17
    Member Mr_KrzYch00's Avatar
    Quote Originally Posted by lorents17 View Post
    When used in setting zopflipng option --ohh gives me
    Fixed and reuploaded. I know I did change it before, but I must've forgotten to save the changes; this was one of the last changes I made...

  24. #18
    Member Mr_KrzYch00's Avatar
    * KrzYmod v6 uploaded (changes since v5):
    - --ohh is now FULLY implemented,
    - added x64-AVX linux build,
    - new names for executables: zopfli/zopflipng - x86 or armv7, zopfli64/zopflipng64 - x64, zopfli64a/zopflipng64a - x64-AVX (Core iX instruction set).
    - added zopfliavx, zopflipngavx, zopflineon, zopflipngneon targets to the makefile to make compiling the AVX and NEON builds easier,
    - sourcecode now in separate archive.

  25. #19
    Member Mr_KrzYch00's Avatar
    * KrzYmod v7 uploaded (changes since v6):
    - GZIP now ALWAYS stores file modification time,
    - --gzipname will also store filename inside gzip archive.
    - a small fix to zopflipng by JayXon: https://github.com/JayXon/zopfli/com...f16847b0e99fad

    KrzYmod is now officially a Zopfli fork: https://github.com/MrKrzYch00/zopfli
    Last edited by Mr_KrzYch00; 6th May 2015 at 21:00.

  26. Thanks:

    Jaff (7th May 2015)

  27. #20
    Member Mr_KrzYch00's Avatar
    * KrzYmod pre-v8 uploaded (windows versions only):
    - added a --zip command line switch to produce a ZIP file; currently only one file is supported, hence the prerelease version... As far as I tested, it produces exactly the same output as converting with the gz2zip converter; still, if You notice any problem, let me know... It should now be easier for You to zipmix those files. If You want to make 2 zip files with zopfli and zipmix them into fileA.zip, it's now as simple as this:

    Code:
    zopfli64a fileA --zip
    zopfli64a fileB --zip
    zipmix fileA.zip fileB.zip /or
    del /f fileB.zip
    Last edited by Mr_KrzYch00; 7th May 2015 at 21:33.

  28. Thanks:

    Jaff (8th May 2015)

  29. #21
    Member Mr_KrzYch00's Avatar
    * KrzYmod v8 uploaded (changes since v7):
    - added the --zip command, already tested in pre-v8; however, this version has rewritten ZIP container handling to properly store the central directory in memory and update it,
    - added the --dir command, which adds all files and directories from the directory it points to (just put a directory name instead of a file as input when You use this command, like zopfli --zip --dir src). It does not store empty files or directories. The directory structure is maintained by naming files inside the archive "somedirectory/somefile.txt". The ZIP file is updated to a fully working ZIP archive every time a file is successfully compressed, so if You run a lot of iterations and it finishes compressing one file, You can already copy the resulting ZIP file - but make sure not to delete it while the program is running, otherwise You will end up with garbage data in the ZIP archive. The file will contain all files that are already compressed. The program simply replaces the old central directory structure in the ZIP file with the new compressed file's data, then writes a new central directory structure, and so on. All the data before the old central directory is NOT rewritten or copied, so You can rest assured that there will not be a lot of HD thrashing.
    This was a lot of code; as far as I tested there are no errors, however some checks, like directory depth, should most likely still be done. So use wisely, otherwise the program may crash.
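    For illustration, the same "always a valid archive on disk" behavior can be mimicked with Python's zipfile module (a sketch of the update scheme, not zopfli's actual C code, and it does not reproduce zopfli's compression):

```python
import os
import zipfile

def add_files_incrementally(archive: str, paths: list) -> None:
    """Append files one at a time so the archive on disk is a complete,
    readable ZIP after every single file, as described above.

    Opening an existing ZIP in "a" mode seeks back to the old central
    directory, writes the new entry's local header and data over it,
    then appends a rewritten central directory on close; the data of
    previously added files is never copied or rewritten.
    """
    for path in paths:
        with zipfile.ZipFile(archive, "a", zipfile.ZIP_DEFLATED) as zf:
            # forward slashes keep "somedirectory/somefile.txt" names
            zf.write(path, arcname=path.replace(os.sep, "/"))
        # at this point the archive is already safe to copy elsewhere
        with zipfile.ZipFile(archive) as zf:
            assert zf.testzip() is None
```

    The key property is the same as described for --dir: only the central directory region at the tail of the file is ever rewritten, so there is no HD thrashing over the already-compressed data.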

    EDIT: I just noticed that modification times are not stored with --dir - fixed in v8f1.
    Last edited by Mr_KrzYch00; 10th May 2015 at 19:45.

  30. #22
    Member Jaff's Avatar
    Join Date
    Oct 2012
    Location
    Dracula's country
    Posts
    104
    Thanks
    115
    Thanked 22 Times in 18 Posts
    For ZopfliPNG, please write the optimised image (if smaller than the original) after CTRL+C is pressed (or after each filter, if the image is not re-read from disk before every filter pass). Pressing N when asked to terminate the batch job makes no difference; the program will end. Sometimes you just don't want to wait for all the filters to finish...

  31. #23
    Member Mr_KrzYch00's Avatar
    If we want zopfli/zopflipng to still finish some work after CTRL+C or some other key combination is pressed, it should finish that work properly. Of course, if somebody set a lot of iterations, this can take "ages". The idea could be to reduce the number of iterations to 1 but still let zopfli/zopflipng finish the work, unless CTRL+C is pressed a 2nd time (if that's possible; I would need to research signaling to the process). Updating the PNG on each optimization step could be a good idea, provided that saving it can be easily injected into some loop that is most likely running there (I didn't dig into it deeply).

    There is one zopfli fork that implemented time-based iterating; I didn't test it, but I guess it can, for example, end the last iteration when X time passes per block. My fork, on the other hand, implemented the --mui# switch to end after the nth unsuccessful iteration after the last best.

    Practically nothing is impossible; however, that much code disassembling just to inject one new piece of functionality takes time and a lot of thinking. It took me 3 days to plan and properly implement compressing to a ZIP file on the fly while maintaining the correct structure in memory for its central directory headers.

    Also, since there is usually quite a noticeable difference in size when a different filter is applied, I think iterating too much with zopflipng is simply pointless. The best solution is to run the default number of iterations, then extract the optimized IDAT with pngwolf to a GZ file and let zopfli work on it to produce a zlib stream, then use the zlib2png converter I wrote to inject the new zlib stream into the PNG file - this is what I do when using zopflipng and the standard pngwolf version's filters...
    Of course the most interesting idea for me is to add some signal key to let zopfli/zopflipng know that we want it to finish ASAP; from the moment it's pressed, it would then run only 1 iteration per block.

    EDIT: Note that zopfli is not a usual compression tool; it's designed to be CPU/time hungry in order to provide the most squeezed file, and when starting it we must be prepared to donate that amount of CPU time to its process.
    Last edited by Mr_KrzYch00; 11th May 2015 at 04:10.

  32. Thanks:

    Jaff (12th May 2015)

  33. #24
    Member Mr_KrzYch00's Avatar
    * KrzYmod v9 uploaded (changes since v8):
    - added support for SIGINT (CTRL+C) to set the maximum number of unsuccessful iterations after best (--mui) to 1 while zopfli/zopflipng is compressing data. This results in the work being finished faster, with iterations still progressing as long as there are bit reductions in a row. Pressing it multiple times will abort the work as usual.
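    The interplay between --mui and the SIGINT handling can be illustrated with a small Python sketch (a hypothetical stand-in for zopfli's iteration loop, not its real code; `costs` plays the role of the bit cost measured at each iteration):

```python
import signal

def iterate(costs, mui):
    """Stop after `mui` iterations in a row without a new best result.

    The first SIGINT shrinks mui to 1, so only an unbroken streak of
    improvements keeps the loop alive; a second SIGINT aborts as usual.
    """
    state = {"mui": mui}

    def on_sigint(signum, frame):
        if state["mui"] == 1:
            raise KeyboardInterrupt      # second CTRL+C: abort as usual
        state["mui"] = 1                 # first CTRL+C: finish ASAP

    old = signal.signal(signal.SIGINT, on_sigint)
    try:
        best, since_best = float("inf"), 0
        for cost in costs:
            if cost < best:
                best, since_best = cost, 0   # new best: reset the counter
            else:
                since_best += 1
            if since_best >= state["mui"]:
                break                        # too many unsuccessful tries
        return best
    finally:
        signal.signal(signal.SIGINT, old)    # restore the default handler
```

    With mui forced to 1, any iteration that fails to beat the best seen so far ends the loop, which is exactly the "finish faster, but keep going while bits are still being shaved" behavior described above.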

    @Jaff, that would be my idea of implementing CTRL+C. Making zopflipng update the PNG file on each smaller size requires a few more changes that I'm not much interested in doing, especially since You can easily run it in a BATCH/sh script to try different filters as separate runs. However, if You know C++, You can always create a pull request and code this functionality Yourself. If the software compiles and works correctly, I will gladly merge it into my fork's code. Thanks for understanding.

  34. #25
    Member Mr_KrzYch00's Avatar
    * KrzYmod v9 Fix 4/6 changes (since v9):
    - ~15%+ faster compression,
    - fix incorrect file data if file already existed and new file was smaller,
    - fix Segmentation Fault/Crash when the output file is to be stored in a read-only directory or without using sudo (display an error and exit instead),
    - fix incorrect display of the Deflate size caused by the container header being counted into that figure,
    - fix data to stdout on Windows (now You can produce a valid file when doing -c >somefile.gz etc.) and disable stdout when --dir is used,
    - properly display the ZIP size when multiple files are packed (--zip and --dir),
    - JayXon's fix: take the RGB value of the first encountered transparent pixel, not the last,
    - fix the new line on Linux when displaying error messages,
    - Fix Segmentation Fault/Crash when there is not enough memory to allocate for cache, display error and exit instead,
    - Display compression progress percentage without the need for -v, also display bytes left to process and current / total blocks when -v is passed.
    Last edited by Mr_KrzYch00; 16th May 2015 at 23:20. Reason: fix6 info

  35. #26
    Member Mr_KrzYch00's Avatar
    EDIT... Accidental double post... check one below...

  36. #27
    Member Mr_KrzYch00's Avatar
    * KrzYmod v10 changes (since v9f6):
    - use 25x less memory (or ~10% more than the original builds) [reduced the cache length to 9 (was 256; original: 8) and use shortcutting of repetitions, because it actually has no effect on the ratio but provides a very small speed-up],
    - ~6% speed up because of the reason above (~22% total since v9).
    Last edited by Mr_KrzYch00; 18th May 2015 at 11:50.

  37. #28
    Member Mr_KrzYch00's Avatar
    * KrzYmod v11 changes:
    - added the --fmr# command line switch, which alters the recursive search done by the FindMinimum function; this has an impact on block splitting, and verbosity for it was added when -v is passed. This is another fine-tuning option that can provide better or worse results. By default it's 9, as defined in the original zopfli, so when it's not passed, block splitting proceeds as usual.

  38. #29
    Member caveman's Avatar
    Quote Originally Posted by Mr_KrzYch00 View Post
    When zopflipng produced smaller file with other filters than the standard 0-4 (being 1-5 in pngout), like brute force or entropy. When I used /f6, it always ended up with same size that was achieved before zopflipng optimization, and zopflipng reusing filter after pngout always ended up with bigger file.
    That's really strange; I use filter reuse in pngout and zopflipng quite often and it works as expected (well, the files are not always smaller, and I use -force/--always_zopflify to get an output that was really computed by pngout/zopflipng). Did you check which type of filtering was in place using pngcheck -vv (verbose twice)?

    Quote Originally Posted by Mr_KrzYch00 View Post
    How did You manage to write png2gz? Tried decompressing it first? Since in gz You need to specify uncompressed size.
    png2gz has to decompress the image for 2 reasons:
    1) computing the uncompressed data size from the image geometry is a bit tricky when Adam7 interlacing is on
    2) the checksums used by PNG (Adler-32) and gzip (CRC-32) are not the same, and both are computed on the uncompressed data

    The current version of png2gz has a built-in size limit (it will not work if the decompressed data is larger than 64 MB, the equivalent of a 4096x4096 RGBA image).
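    Both checksum rules are easy to see by wrapping one raw deflate stream in each container (a Python sketch; wrap_raw_deflate is my own helper name, with header bytes per RFC 1952 and RFC 1950 and the gzip name/mtime fields left empty):

```python
import struct
import zlib

def wrap_raw_deflate(raw_deflate: bytes, uncompressed: bytes):
    """Wrap one raw deflate stream in both containers discussed above.

    The gzip trailer needs CRC-32 *and* the uncompressed size (ISIZE),
    while zlib needs only Adler-32 -- and both checksums are computed
    over the uncompressed data, which is why png2gz must decompress
    the image first.
    """
    gz = (b"\x1f\x8b\x08\x00"                 # magic, deflate, no flags
          + b"\x00" * 4                       # mtime = 0
          + b"\x00\xff"                       # XFL, OS = unknown
          + raw_deflate
          + struct.pack("<II", zlib.crc32(uncompressed),
                        len(uncompressed) & 0xFFFFFFFF))
    zl = (b"\x78\x9c"                         # CMF/FLG (32K window)
          + raw_deflate
          + struct.pack(">I", zlib.adler32(uncompressed)))
    return gz, zl
```

    Only the wrapping differs: the deflate stream in the middle is byte-for-byte identical, which is why the converters mentioned in this thread can move an optimized stream between GZ, ZIP, ZLIB and PNG without recompressing.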
    Attached Files

  39. #30
    Member Mr_KrzYch00's Avatar
    There is also a problem with deflopt or defluff (not sure which one). In some cases they end up with a file size bigger by a few bytes (usually 1-10) when used on certain zopflified files.
    As for Adler-32... right! That's an error I made when writing my zip2zlib converter... it should be calculated on the uncompressed data, of course... Well, at least I can say for sure that most PNG decoders don't care much about that checksum, as long as the IDAT has the correct size and the PNG header info is correct as well.
