
Thread: Raw Pixel Image data compression

  1. #1
    Member
    Join Date
    Feb 2016
    Location
    USA
    Posts
    98
    Thanks
    36
    Thanked 8 Times in 8 Posts

    Raw Pixel Image data compression

    Hi all, what is the state of the art for lossless compression of raw pixel image data? I tried JPEG 2K and JPEG-LS; each can outperform the other on some data, but not consistently, in terms of compression ratio. Sometimes, even gzip can compress more than both of them. So my questions:

    1. Are there any other good algorithms?

    2. Without actually compressing, are there any fast methods (e.g., by checking the data quickly) to decide which algorithm will compress best?

    Thanks in advance for any insights, experiences and lessons.

  2. #2
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    250
    Thanks
    46
    Thanked 105 Times in 54 Posts
    Quote Originally Posted by smjohn1 View Post
    Sometimes, even gzip can compress more than both of them.
    I'd guess the images on which gzip is better are mostly artificial (synthetic) images.
    If some of your images are photographic, have a look at the Lossless Photo Compression Benchmark.

    This group is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  3. Thanks:

    smjohn1 (22nd December 2017)

  4. #3
    Member
    Join Date
    Aug 2017
    Location
    Mauritius
    Posts
    59
    Thanks
    67
    Thanked 22 Times in 16 Posts
    Try FLIF

  5. Thanks:

    smjohn1 (22nd December 2017)

  6. #4
    Member
    Join Date
    Feb 2016
    Location
    Luxembourg
    Posts
    546
    Thanks
    203
    Thanked 796 Times in 322 Posts
    I'm assuming that by raw pixel image data you mean the raw files from digital cameras (Canon, Nikon, Sony, etc.) or similar? Usually with 10 bits-per-component precision or higher?
    If so, and assuming the format is supported, you can try EMMA if you want maximum compression, or PackRAW if you need practical (fast) real-world usage with good-enough compression ratios. Both can be found on this forum.

    If you mean just the usual RGB 8bpp image pixel data without any headers (as in, unrecognized as an image by any compressor), the state-of-the-art will most likely be cmix, by Byron Knoll.

    Best regards

  7. Thanks (2):

    smjohn1 (22nd December 2017),Stephan Busch (22nd December 2017)

  8. #5
    Member
    Join Date
    Feb 2016
    Location
    USA
    Posts
    98
    Thanks
    36
    Thanked 8 Times in 8 Posts
    What about medical images, such as from CT, etc?

    Quote Originally Posted by mpais View Post
    I'm assuming that by raw pixel image data you mean the raw files from digital cameras (Canon, Nikon, Sony, etc.) or similar? Usually with 10 bits-per-component precision or higher?
    If so, and assuming the format is supported, you can try EMMA if you want maximum compression, or PackRAW if you need practical (fast) real-world usage with good-enough compression ratios. Both can be found on this forum.

    If you mean just the usual RGB 8bpp image pixel data without any headers (as in, unrecognized as an image by any compressor), the state-of-the-art will most likely be cmix, by Byron Knoll.

    Best regards

  9. #6
    Member
    Join Date
    Feb 2016
    Location
    Luxembourg
    Posts
    546
    Thanks
    203
    Thanked 796 Times in 322 Posts
    EMMA has limited support for DICOM medical images (uncompressed only).
    PackRAW doesn't, but it's just a matter of porting the parser to it; the codec itself can handle them.

  10. #7
    Member
    Join Date
    Feb 2016
    Location
    USA
    Posts
    98
    Thanks
    36
    Thanked 8 Times in 8 Posts
    Thanks for the info.

    I just checked; it seems both EMMA and PackRAW (cmix as well) are closed source, with only Windows .exe builds available, so I cannot use them for a storage application. I am mainly looking for open-source algorithms that can be implemented, tailored and possibly improved for applications.

    Quote Originally Posted by mpais View Post
    EMMA has limited support for DICOM medical images (uncompressed only).
    PackRAW doesn't, but it's just a matter of porting the parser to it; the codec itself can handle them.

  11. #8
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    615
    Thanks
    260
    Thanked 242 Times in 121 Posts
    Quote Originally Posted by smjohn1 View Post
    I just checked; it seems both EMMA and PackRAW (cmix as well) are closed source, with only Windows .exe builds available
    True for EMMA and PackRAW, but cmix is open source (GPL) and should compile on Windows, Linux and OS X - see https://github.com/byronknoll/cmix
    http://schnaader.info
    Damn kids. They're all alike.

  12. Thanks (2):

    boxerab (23rd December 2017),smjohn1 (23rd December 2017)

  13. #9
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    876
    Thanks
    242
    Thanked 325 Times in 198 Posts
    Disclaimer: I'm only writing about open-source image compression and purposely ignoring fine work in closed source (like Gralic).

    You state that you want a mixture of speed and quality, but you don't indicate if decoding speed is of any importance to your use case. Often decoding speed matters quite a lot. If you consider both compression density and decoding speed, the practical Pareto front is pretty much two algorithms: WebP lossless and FLIF. In rough terms, FLIF is currently the densest algorithm for lossless compression, but it is a tad slow at decoding. WebP lossless is likely a better match for real-life use. JPEG-LS and JPEG 2K can be okay-ish with photos, but are no competition for FLIF and WebP lossless on graphics. BCIF can be great for photographs if there are demosaicking artefacts in the image to be compressed (i.e., the image comes straight from the camera at its native resolution) -- but this advantage is lost if the image has been resized afterwards, which is practically always the case for internet use.

    According to the http://flif.info/ website, FLIF gives 14 % gains over WebP lossless. Last time I checked, for PNG images from the internet in the 16-256 kB size range, WebP lossless compressed 2 % more densely and decompressed about 10x faster. For a wider set of image material and larger PNGs, FLIF wins in density (by about 5 %), but at the cost of 10x slower decoding. WebP lossless decodes at roughly the same speed as PNG (50-600 uncompressed MB/s on a desktop, 10-100 MB/s on mobile), while FLIF decodes at 5-60 MB/s on desktop and 1-10 MB/s on mobile. 1 uncompressed MB/s means ~200 compressed kB/s, which is often the performance bottleneck even when the image is not coming from the cache. (On http://qlic.altervista.org/ WebP lossless decompresses at 64 MB/s and FLIF at 2 MB/s, a 30x difference.)

    FLIF is a lot better than WebP lossless at compressing fractal images -- images that have extremely smooth, noiseless areas mixed with sudden high-entropy areas. I suspect this is not a format problem in WebP, but rather that the current parametrization uses relatively low-resolution entropy definition fields. Using higher-resolution entropy images would likely make WebP competitive there, too. In my book, fractal images are an esoteric use case, and I'd rather leave them out of a benchmark, or let them make up no more than 0.1 % of the benchmark's volume.

    FLIF has better near-lossless behavior than WebP near-lossless, especially in FLIF's interlaced mode. However, the interlaced mode comes with a further slowdown in decoding speed.
    Last edited by Jyrki Alakuijala; 23rd December 2017 at 18:04.

  14. Thanks (4):

    boxerab (23rd December 2017),Bulat Ziganshin (6th January 2018),khavish (23rd December 2017),smjohn1 (23rd December 2017)

  15. #10
    Member
    Join Date
    Feb 2016
    Location
    USA
    Posts
    98
    Thanks
    36
    Thanked 8 Times in 8 Posts
    Decoding speed certainly is important, as this is for a storage subsystem, which has to return the original data on read operations (and compress on write operations); i.e., applications using the storage subsystem never see compressed data at all.

    What about WebP's (and of course FLIF's) compression ratio on medical data (e.g., CT data), rather than data from an "ordinary" camera? Assume no resizing is performed at all.


    Quote Originally Posted by Jyrki Alakuijala View Post
    You state that you want a mixture of speed and quality, but you don't indicate if decoding speed is of any importance to your use case. Often decoding speed matters quite a lot. [...]

  16. #11
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    876
    Thanks
    242
    Thanked 325 Times in 198 Posts
    Quote Originally Posted by smjohn1 View Post
    Decoding speed certainly is important, as this is for a storage subsystem, which has to return the original data on read operations (and compress on write operations); i.e., applications using the storage subsystem never see compressed data at all.

    What about WebP's (and of course FLIF's) compression ratio on medical data (e.g., CT data), rather than data from an "ordinary" camera? Assume no resizing is performed at all.
    I'd say neither is a good solution for CT images.

    WebP loses much of its capabilities on grayscale images and becomes a rather ordinary compressor. Also, WebP is 8-bit only, which is not enough for CT images, which are usually stored at 12 bits (?).

    I'd guess that FLIF is likely too slow for decoding medical images.

  17. Thanks:

    boxerab (24th December 2017)

  18. #12
    Member
    Join Date
    May 2014
    Location
    Canada
    Posts
    141
    Thanks
    72
    Thanked 21 Times in 12 Posts
    Jyrki, what would you say is the best codec for lossless compression of medical images (10 and 12 bit grayscale) ?

  19. #13
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    876
    Thanks
    242
    Thanked 325 Times in 198 Posts
    Quote Originally Posted by boxerab View Post
    Jyrki, what would you say is the best codec for lossless compression of medical images (10 and 12 bit grayscale) ?
    I'd go for delta coding against the average of the top and left pixels, and then use ANS to compress the deltas. Some CT machines (at least 20 years ago when I was active in that field) try to clean (some of) the air volume outside the patient by setting it to zeros, so zero-RLE might be a good add-on to that coding scheme. Intuitively, you'd get ~1.5 GB/s decompression speed, very high compression speed, and the same kind of density you'd get from more advanced codecs.
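
    For illustration, a minimal sketch of that kind of pipeline in C++ (my own sketch, not code from any codec mentioned in this thread): predict each pixel from the average of its top and left neighbours, keep the 16-bit residuals (computed mod 2^16, so the transform is exactly reversible), and run-length encode runs of zero residuals. Border pixels are predicted from 0, the final ANS/FSE entropy-coding stage is only indicated in a comment, and the function names are made up for this example.
    Code:
    #include <cstdint>
    #include <cstdio>
    #include <vector>
    
    // Residuals: each pixel minus the average of its top and left neighbours.
    // Missing neighbours (first row / first column) are treated as 0.
    std::vector<uint16_t> deltaTransform(const std::vector<uint16_t>& img, int w, int h)
    {
        std::vector<uint16_t> res(img.size());
        for (int r = 0; r < h; ++r) {
            for (int c = 0; c < w; ++c) {
                uint16_t top  = (r > 0) ? img[(r - 1) * w + c] : 0;
                uint16_t left = (c > 0) ? img[r * w + c - 1]   : 0;
                uint16_t pred = static_cast<uint16_t>((top + left) / 2);
                res[r * w + c] = static_cast<uint16_t>(img[r * w + c] - pred); // wraps mod 2^16
            }
        }
        return res;
    }
    
    // Zero run-length encoding over 16-bit symbols: a nonzero symbol is copied as-is,
    // a run of zero residuals becomes the symbol 0 followed by the run length (capped at 65535).
    std::vector<uint16_t> zeroRle(const std::vector<uint16_t>& res)
    {
        std::vector<uint16_t> out;
        for (size_t i = 0; i < res.size(); ) {
            if (res[i] != 0) {
                out.push_back(res[i++]);
            } else {
                uint16_t run = 0;
                while (i < res.size() && res[i] == 0 && run < 65535) { ++i; ++run; }
                out.push_back(0);
                out.push_back(run);
            }
        }
        return out; // this stream would then go to an ANS/FSE coder or any byte-wise compressor
    }
    
    int main()
    {
        // Tiny 4x4 dummy "image" just to exercise the two passes.
        std::vector<uint16_t> img = { 10, 10, 10, 10,
                                      10, 12, 12, 10,
                                      10, 12, 12, 10,
                                      10, 10, 10, 10 };
        std::vector<uint16_t> packed = zeroRle(deltaTransform(img, 4, 4));
        printf("%zu symbols after delta + zero-RLE\n", packed.size());
        return 0;
    }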

  20. Thanks (3):

    boxerab (24th December 2017),Bulat Ziganshin (6th January 2018),smjohn1 (24th December 2017)

  21. #14
    Member
    Join Date
    Feb 2016
    Location
    USA
    Posts
    98
    Thanks
    36
    Thanked 8 Times in 8 Posts
    OK, I will try all those mentioned here plus your delta + ANS, and report the results here.

    What would be a good read for a beginner like me on CT image formats, so that I can implement delta + RLE + ANS? I am just wondering why there are no good algorithms already implemented, such as delta + ANS. Or maybe there are, and a layman like me just doesn't know?

    Quote Originally Posted by Jyrki Alakuijala View Post
    I'd go for delta coding against the average of the top and left pixels, and then use ANS to compress the deltas. Some CT machines (at least 20 years ago when I was active in that field) try to clean (some of) the air volume outside the patient by setting it to zeros, so zero-RLE might be a good add-on to that coding scheme. Intuitively, you'd get ~1.5 GB/s decompression speed, very high compression speed, and the same kind of density you'd get from more advanced codecs.

  22. #15
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    876
    Thanks
    242
    Thanked 325 Times in 198 Posts
    Quote Originally Posted by smjohn1 View Post
    OK, I will try all those mentioned here plus your delta + ANS, and report the results here.

    What would be a good read for a beginner like me on CT image formats, so that I can implement delta + RLE + ANS? I am just wondering why there are no good algorithms already implemented, such as delta + ANS. Or maybe there are, and a layman like me just doesn't know?
    A first approximation of the above implementation could be: split each 12-bit delta into its low 8 bits and its upper 4 bits, stored in separate bytes. Most of the upper-bit bytes will be zeros, as there are typically around 2.5 bits of entropy per delta symbol in CT images. That leaves you with byte-oriented data, so you can try encoding it with ZStandard or Brotli (or possibly with FiniteStateEntropy or Huff0).
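
    To make the splitting step concrete, here is a rough sketch (again my own illustration, not code from any of the tools named; the zigzag mapping of signed deltas to unsigned codes is one possible choice for keeping the high bytes mostly zero, not necessarily what is meant above):
    Code:
    #include <cstdint>
    #include <cstdio>
    #include <vector>
    
    // Map a signed delta to an unsigned code: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    static uint16_t zigzag(int d)
    {
        return static_cast<uint16_t>(d >= 0 ? 2 * d : -2 * d - 1);
    }
    
    // Split each (zigzag-mapped) delta into a low byte and a high byte; for 12-bit
    // data the high byte holds only a few bits and is mostly zero, so the two byte
    // streams can be compressed separately, e.g. with zstd, brotli, FSE or Huff0.
    void splitDeltaPlanes(const std::vector<int>& deltas,
                          std::vector<uint8_t>& low,
                          std::vector<uint8_t>& high)
    {
        low.clear();
        high.clear();
        for (int d : deltas) {
            uint16_t z = zigzag(d);
            low.push_back(static_cast<uint8_t>(z & 0xFF));
            high.push_back(static_cast<uint8_t>(z >> 8));
        }
    }
    
    int main()
    {
        std::vector<int> deltas = { 0, -1, 3, 0, 0, -2 };
        std::vector<uint8_t> low, high;
        splitDeltaPlanes(deltas, low, high);
        printf("low[2]=%d high[2]=%d\n", (int)low[2], (int)high[2]); // zigzag(3) = 6 -> low 6, high 0
        return 0;
    }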

  23. #16
    Member
    Join Date
    Feb 2016
    Location
    USA
    Posts
    98
    Thanks
    36
    Thanked 8 Times in 8 Posts
    Let me check whether my understanding is correct: a CT image is a sequence of 12-bit integers (all positive?), so a delta could be negative? Also, is it possible to know the width and height of the image, so that a more advanced delta predictor, such as LOCO-I, can be applied?

    Quote Originally Posted by Jyrki Alakuijala View Post
    A first approximation of the above implementation could be: split each 12-bit delta into its low 8 bits and its upper 4 bits, stored in separate bytes. Most of the upper-bit bytes will be zeros, as there are typically around 2.5 bits of entropy per delta symbol in CT images. That leaves you with byte-oriented data, so you can try encoding it with ZStandard or Brotli (or possibly with FiniteStateEntropy or Huff0).

  24. #17
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    876
    Thanks
    242
    Thanked 325 Times in 198 Posts
    Quote Originally Posted by smjohn1 View Post
    Let me check whether my understanding is correct: a CT image is a sequence of 12-bit integers (all positive?), so a delta could be negative? Also, is it possible to know the width and height of the image, so that a more advanced delta predictor, such as LOCO-I, can be applied?
    The CT image values are a linear mapping of the Hounsfield scale. In an ideal world they wouldn't go below 0, since that is the value for air -- the lowest-density material present in the imaging situation. However, noise, the reconstruction process and ringing artefacts consistently lead to negative Hounsfield-scale values in CT images.

    You can possibly improve the density with a more advanced predictor. The compression gains (or losses) from a more advanced algorithm can depend on the make of the CT scanner.
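
    For reference, the LOCO-I / JPEG-LS "median edge detector" predictor mentioned a few posts above can be sketched like this (a minimal illustration; a = left, b = above, c = above-left neighbour of the current pixel):
    Code:
    #include <algorithm>
    #include <cstdio>
    
    // LOCO-I / JPEG-LS median edge detector (MED) prediction.
    int medPredict(int a /* left */, int b /* above */, int c /* above-left */)
    {
        if (c >= std::max(a, b)) return std::min(a, b);
        if (c <= std::min(a, b)) return std::max(a, b);
        return a + b - c;
    }
    
    int main()
    {
        // Example: a vertical edge (left = 10, above = 50, above-left = 10) predicts 50.
        printf("%d\n", medPredict(10, 50, 10));
        return 0;
    }
    The residual to be entropy coded is then the pixel value minus this prediction, just as with the simpler average predictor.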

  25. Thanks:

    smjohn1 (24th December 2017)

  26. #18
    Member
    Join Date
    Dec 2014
    Location
    Berlin
    Posts
    29
    Thanks
    36
    Thanked 26 Times in 12 Posts
    You could write a custom ZPAQ model, similar to this PNM model which parses the header to get the width and then uses context mixing with the neighbor pixels:
    https://github.com/pothos/zpaqlpy/bl...est/pnm.py#L88 (More info in README or the thesis)

  27. Thanks:

    smjohn1 (1st January 2018)

  28. #19
    Member
    Join Date
    Feb 2016
    Location
    USA
    Posts
    98
    Thanks
    36
    Thanked 8 Times in 8 Posts
    OK, some reports here:

    Finally I managed to try a few of the methods discussed here (but not the ZPAQ model; too complicated for me to implement, as I am no expert in this field) and tested them on the two attached raw pixel files (they are not really text files; without a .txt extension they couldn't be uploaded). Both are 320 x 320, and each pixel is a 12-bit non-negative integer stored in 2 bytes, big endian.

    The best I have managed so far: test1.txt compressed to 43202 bytes, and test2.txt to 41475 bytes.

    I won't say which tools I used for now, as I don't want to influence anyone else's attempts. If you'd like to try, please report your best results here.

    Quote Originally Posted by pothos2 View Post
    You could write a custom ZPAQ model, similar to this PNM model which parses the header to get the width and then uses context mixing with the neighbor pixels:
    https://github.com/pothos/zpaqlpy/bl...est/pnm.py#L88 (More info in README or the thesis)
    Attached Files

  29. #20
    Member
    Join Date
    Feb 2016
    Location
    Luxembourg
    Posts
    546
    Thanks
    203
    Thanked 796 Times in 322 Posts
    Renamed to 1.RAW and 2.RAW. Results:
    Code:
    paq8px_v128 -8[a]
    1.RAW               39.304 bytes, in 11.53s
    2.RAW               37.651 bytes, in 11.50s
    
    PackRAW 0.3, modified to recognize the format
    1.RAW               42.765 bytes, in 0.026s
    2.RAW               40.677 bytes, in 0.025s
    
    EMMA 0.1.25 x64, modified to recognize the format
    1.RAW               33.712 bytes, in 2.1s
    2.RAW               31.861 bytes, in 2.1s

  30. Thanks (2):

    Bulat Ziganshin (6th January 2018),smjohn1 (5th January 2018)

  31. #21
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    539
    Thanks
    238
    Thanked 92 Times in 72 Posts
    Your optimal choice will depend on how much time you can afford to spend compressing/decompressing. Some programs give you the very best results, but they require large amounts of time and memory.

    Here is a good compromise in my opinion:

    Razor:

    Code:
    1.bin     37844 
    2.bin     36542
    Razor is ROLZ-based, so it is highly asymmetric. It will be slower than LZMA but faster than paq/EMMA to compress, and faster than both to decompress.

    Another plus is that an archiver such as Razor can take advantage of similarities across files because it compresses solidly; for example, you could archive a folder of all the images acquired today, or all the images for a given patient.

  32. Thanks:

    smjohn1 (5th January 2018)

  33. #22
    Member
    Join Date
    Feb 2016
    Location
    USA
    Posts
    98
    Thanks
    36
    Thanked 8 Times in 8 Posts
    What is the compression algorithm used in Razor? I would like to try it, but it is closed source.
    Last edited by smjohn1; 6th January 2018 at 00:48.

  34. #23
    Member cfeck's Avatar
    Join Date
    Jan 2012
    Location
    Germany
    Posts
    50
    Thanks
    0
    Thanked 17 Times in 9 Posts
    https://github.com/encode84/bcm

    Compressing test1.txt:
    204800 -> 42245 in 0.03s
    Compressing test2.txt:
    204800 -> 40339 in 0.03s

  35. Thanks (2):

    Bulat Ziganshin (6th January 2018),smjohn1 (6th January 2018)

  36. #24
    Member
    Join Date
    Feb 2016
    Location
    USA
    Posts
    98
    Thanks
    36
    Thanked 8 Times in 8 Posts
    With bcm, I can now compress down to 36176 and 35075 bytes!

    A really nice algorithm, simple yet powerful.

    Quote Originally Posted by cfeck View Post
    https://github.com/encode84/bcm

    Compressing test1.txt:
    204800 -> 42245 in 0.03s
    Compressing test2.txt:
    204800 -> 40339 in 0.03s

  37. #25
    Member cfeck's Avatar
    Join Date
    Jan 2012
    Location
    Germany
    Posts
    50
    Thanks
    0
    Thanked 17 Times in 9 Posts
    I suggest applying a simple delta predictor, then using bcm.

    Results:
    32372 t1.bcm
    31037 t2.bcm

    Code:
    // raw16delta.cpp
    // Reads 320x320 16-bit big-endian samples from stdin and writes the
    // prediction residuals (16-bit, high byte first) to stdout.
    
    #include <cstdio>
    
    static const int w = 320;
    static const int h = 320;
    
    static unsigned short d[h][w];
    
    void writePixel(int x)
    {
        fputc(x / 256, stdout);
        fputc(x & 255, stdout);
    }
    
    int main(int argc, char *argv[])
    {
        // Input is big endian: high byte first, then low byte.
        // (Read into temporaries so the byte order does not depend on the
        // compiler's evaluation order of the two fgetc() calls.)
        for (int r = 0; r < h; ++r) {
            for (int c = 0; c < w; ++c) {
                int hi = fgetc(stdin);
                int lo = fgetc(stdin);
                d[r][c] = 256 * hi + lo;
            }
        }
    
        unsigned short p = 0;
        writePixel(d[0][0] - p);            // first pixel: no prediction
        for (int c = 1; c < w; ++c) {
            p = d[0][c - 1];                // first row: predict from the left neighbour
            writePixel(d[0][c] - p);
        }
        for (int r = 1; r < h; ++r) {
            p = d[r - 1][0];                // first column: predict from the top neighbour
            writePixel(d[r][0] - p);
            for (int c = 1; c < w; ++c) {
                // predictor from the top, left and top-left neighbours (+2 rounds the shift)
                p = (3 * (d[r - 1][c] + d[r][c - 1]) - 2 * d[r - 1][c - 1] + 2) >> 2;
                writePixel(d[r][c] - p);
            }
        }
    
        return 0;
    }
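
    (For what it's worth: assuming the program above is built as raw16delta, something like "raw16delta < 1.RAW > 1.delta" produces the residual file, which is then compressed with bcm; on Windows, stdin/stdout would additionally need to be put into binary mode. With the correction in the post below applied, the written residuals are the 16-bit differences mod 2^16, so a decoder can reverse the transform exactly by adding the same prediction back.)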

  38. Thanks (3):

    Bulat Ziganshin (6th January 2018),Gonzalo (6th January 2018),smjohn1 (6th January 2018)

  39. #26
    Member cfeck's Avatar
    Join Date
    Jan 2012
    Location
    Germany
    Posts
    50
    Thanks
    0
    Thanked 17 Times in 9 Posts
    Correction: Replace
    Code:
    fputc(x / 256, stdout);
    with
    Code:
    fputc((x >> 8) & 255, stdout);
    Results:
    32416 t1.bcm
    31045 t2.bcm
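
    A tiny standalone check of why this matters for negative residuals (my own illustration; it relies on >> being an arithmetic shift for negative values, as the code above already does): with x = -1, integer division truncates toward zero, so x / 256 would write a high byte of 0, while (x >> 8) & 255 writes 255, the proper high byte of -1 mod 2^16, so only the latter round-trips.
    Code:
    #include <cstdio>
    
    int main()
    {
        int x = -1;                    // a negative residual, i.e. 65535 mod 2^16
        int hiDiv   = x / 256;         // 0   -> pairs with low byte 255, decodes as 255
        int hiShift = (x >> 8) & 255;  // 255 -> pairs with low byte 255, decodes as 65535
        int lo      = x & 255;         // 255 in both cases
        printf("div: %d  shift: %d  low: %d\n", hiDiv, hiShift, lo);
        return 0;
    }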

  40. Thanks (2):

    Alexander Rhatushnyak (6th January 2018),Bulat Ziganshin (6th January 2018)

  41. #27
    Member
    Join Date
    Feb 2016
    Location
    USA
    Posts
    98
    Thanks
    36
    Thanked 8 Times in 8 Posts
    That is actually what I did, but instead of using your coefficients:
    p = (3 * (d[r - 1][c] + d[r][c - 1]) - 2 * d[r - 1][c - 1] + 2) >> 2

    I used LOCO-I prediction, but it seems yours achieves better results.

    Where does this formula come from?

    Quote Originally Posted by cfeck View Post
    I suggest applying a simple delta predictor, then using bcm.

    Results:
    32372 t1.bcm
    31037 t2.bcm

    [...]

  42. #28
    Member cfeck's Avatar
    Join Date
    Jan 2012
    Location
    Germany
    Posts
    50
    Thanks
    0
    Thanked 17 Times in 9 Posts
    It's the default predictor from ImageZero. https://github.com/cfeck/imagezero

  43. Thanks (2):

    Bulat Ziganshin (6th January 2018),smjohn1 (6th January 2018)

  44. #29
    Member
    Join Date
    Feb 2016
    Location
    USA
    Posts
    98
    Thanks
    36
    Thanked 8 Times in 8 Posts
    I tried your formula, but got slightly different results: 32416 and 31045 instead of your 32372 and 31037.

    Now, if your formula is changed to p = (3 * (d[r - 1][c] + d[r][c - 1]) - 2 * d[r - 1][c - 1]) >> 2 (i.e., without the +2 rounding),

    then I get 32270 and 30920. Try it and see if you get similar results (maybe with ImageZero too).

    I will tweak that last constant a bit to see what works best.

    Cheers.


    Quote Originally Posted by cfeck View Post
    It's the default predictor from ImageZero. https://github.com/cfeck/imagezero

  45. #30
    Member cfeck's Avatar
    Join Date
    Jan 2012
    Location
    Germany
    Posts
    50
    Thanks
    0
    Thanked 17 Times in 9 Posts
    Ah yes, the rounding may hurt here. It was added in ImageZero because there -1 was cheaper to code than +1. Regarding the different values, see the Results line in my correction post.
