The new codec is lossless! Meaning it compresses images with no loss - bit-for-bit identical to the original!
Is BIM tuned for some specific type of images?
Can it handle a screen capture as well as a photo for instance?
What about smaller files?
I've attached an archive containing 15 pictures, in case you need more varied material.
I'm aiming at game screenshots and camera RAWs converted to BMPs, so mostly photographic images. On artificial images BIM (it's just a name for reference; the new codec may get a different name) already performs well, but it has major problems with photographic (analog-like) images. BMF and FLIC set a high standard in image compression that is hard to beat or even be on par with. Although BIM's GUI is mostly written (you can view images, resize the image to fit the window, and copy and paste image data) and I can add any codec to such a GUI, even FLIC, I guess it will be a long trip to make my own world's best image codec...
Yes, it can handle a screen capture - press the "Print Screen" key and then Ctrl+V inside BIM... To copy: Ctrl+C in BIM and then Ctrl+V in another program like MS Paint.
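For reference, pasting a screenshot that way usually means reading a CF_DIB bitmap from the Windows clipboard. A minimal sketch of the mechanism (not BIM's actual code; the printout is just for illustration):

Code:
#include <windows.h>
#include <cstdio>

// Fetch the CF_DIB bitmap that "Print Screen" puts on the clipboard.
// Any Windows GUI that accepts Ctrl+V for images does something like this.
int main() {
    if (!OpenClipboard(nullptr)) return 1;
    if (HANDLE h = GetClipboardData(CF_DIB)) {
        if (auto* bmi = static_cast<BITMAPINFO*>(GlobalLock(h))) {
            std::printf("Pasted image: %ld x %ld, %d bpp\n",
                        bmi->bmiHeader.biWidth, bmi->bmiHeader.biHeight,
                        bmi->bmiHeader.biBitCount);
            GlobalUnlock(h);
        }
    }
    CloseClipboard();
    return 0;
}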
Thanks for sample images, BTW!
This can be my reference image:
Tested the files with the ex-JPEG detector. 3 of them give YES:
Code:
m3.ppm 2.727 ! YES, this looks like an ex-JPEG image, no smoothing.
s4.ppm 1.984 ! YES, this looks like an ex-JPEG image, no smoothing, shifts: 7, 3.
s5.ppm 1.152 ! YES, this looks like an ex-JPEG image, no smoothing, shifts: 7, 7.
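I don't know the detector's internals, so here's just a plausible hedged sketch of how such scores and shifts can be computed: JPEG leaves extra gradient energy along its 8x8 block grid, so compare horizontal gradients per column phase (x mod 8); the ratio of the strongest phase to the rest gives a score like those above, and the winning phase gives the "shift":

Code:
#include <cstdlib>
#include <vector>

// Hypothetical ex-JPEG grid check (illustrative only): sum |horizontal
// gradient| per column phase x mod 8. A single dominant phase suggests an
// 8x8 block grid; that phase is reported as the grid shift.
double gridScore(const std::vector<unsigned char>& gray, int w, int h, int* shift) {
    double energy[8] = {0};
    for (int y = 0; y < h; y++)
        for (int x = 1; x < w; x++)
            energy[x % 8] += std::abs(gray[y * w + x] - gray[y * w + x - 1]);
    int best = 0;
    for (int p = 1; p < 8; p++) if (energy[p] > energy[best]) best = p;
    double rest = 0;
    for (int p = 0; p < 8; p++) if (p != best) rest += energy[p];
    *shift = best;
    return energy[best] / (rest / 7.0);  // noticeably above 1.0 => grid present
}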
http://schnaader.info
Damn kids. They're all alike.
I must pay attention to this; I didn't have time to read it all or test it yet.
I don't know much about lossy or lossless,
but it must be better than PNG in size, and it's a new format?
It could be very useful if compressed with zpaq or a faster compressor like sitx.
Ha, faster than zpaq, OK.
Now I've read it.
It seems a good goal, sir encoder, I'm going to keep an eye on it!
File format extension links (apart from wiki):
http://www.file-extensions.org/
http://filext.com/
http://www.openwith.org/
I tend to convert every bitmap into lossless PNG or TIF.
After editing, the same for JPGs, to at least preserve the quality.
This newsgroup is dedicated to image compression:
http://linkedin.com/groups/Image-Compression-3363256
I have a 100% featured and working GUI/Viewer that is ready for any image codec, plus a command-line version of the codec for experiments. FLIC defined a level of performance that is hard to beat. Currently, my codec is fast and shows nice performance on artificial images, but it's not that strong on photographic images. So I need to reach a certain compression level first. Unfortunately, these days I have little spare time - working 24/7 filming TV shows...
Mainly, I'm searching for papers and image codec descriptions. Unfortunately, there is too little information out there about state-of-the-art codecs (trade secret?). At the same time I'm working on BCM and a ULZ-like compressor. For decent image compression I just need a new approach. Like I said, GraLIC and FLIC changed the meaning of the word 'decent'...
When it comes to image compression, I wonder why it seems that nobody puts research behind MRP and similar codecs...
Photographic images have a lot of noise. If there are 3 bits of noise in a 10-bit channel, then you can't achieve a compression ratio better than 10:3, because the noise bits are incompressible no matter how good the model is. Additionally, commonly used cameras use a Bayer filter with different demosaicing algorithms, so a decent coder should detect all of the demosaicing methods and apply appropriate compression methods. That's a Sisyphean task.
I think no one can make a revolutionary codec for photographs, as the amount of noise prohibits that. Look at audio data compression - high-speed codecs like TAK are very close in compression ratio to e.g. LA or OptimFROG.
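To make that bound concrete: if residuals still carry about k bits per sample of irreducible noise in a b-bit channel, the compressed size can't drop below k bits per sample, so the ratio is capped at b:k (10:3 above). A rough way to see the floor for 8-bit data, as a hedged sketch (a trivial left-neighbor predictor, not any particular codec):

Code:
#include <cmath>
#include <vector>

// Estimate the order-0 entropy of left-neighbor prediction residuals.
// If this comes out at ~k bits/sample on a b-bit channel, no lossless
// codec will beat roughly b:k on this data, however clever the model.
double residualEntropyBits(const std::vector<int>& samples) {
    std::vector<double> hist(512, 0.0);
    for (size_t i = 1; i < samples.size(); i++)
        hist[(samples[i] - samples[i - 1]) + 256] += 1.0;  // residual in [-255..255]
    double n = samples.size() - 1.0, H = 0.0;
    for (double c : hist)
        if (c > 0) H -= (c / n) * std::log2(c / n);
    return H;  // e.g. H ~= 3 caps an 8-bit channel near 8:3
}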
Compression of artificial images is very important, I think. While photographs are usually stored as JPGs or custom-format RAWs, lossless compression for them seems to be a niche. On the other hand, a typical game contains gigabytes of textures, which are mainly artificial images compressed with special lossy texture-compression algorithms. Those lossy algorithms are pretty inefficient, but on the other hand they provide a fixed compression ratio and use very small independent blocks, i.e. they are meant to be rapidly decompressed from any starting point with (almost) zero memory overhead. If one could develop a strong algorithm for lossless compression of compressed textures, it would certainly be interesting.
One interesting thing about lossless image compression is that (atm/afaik) it's relatively easy to reach the best compression for these.
Just look at paq8px's RGB model - http://encode.su/threads/1195-Using-...ll=1#post23673
There are just a few points in the context, no color transform, and that's it - the results are frequently better than anything else.
And we can ignore paq8's speed here - its BMP model alone (without the other submodels) would be much faster,
and there's actually not much sense to use direct bitwise models for analog data -
usually some kind of unary codes make much more sense, so it can be made 3x faster just by changing the "decomposition" of values.
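A hedged sketch of what such a decomposition could look like (my guess at the idea, not Shelwien's code): code floor(log2(|r|+1)) with adaptive unary decisions plus refinement bits, so the small residuals that dominate analog data cost only a couple of modeled decisions instead of eight:

Code:
#include <cstdlib>
#include <functional>

// Unary/exponential decomposition of a residual r for binary coding.
// encodeBit(ctx, bit) stands in for any adaptive binary arithmetic coder;
// the context numbers are arbitrary illustrative slots.
void encodeResidual(int r, const std::function<void(int, int)>& encodeBit) {
    int mag = std::abs(r);
    int b = 0;
    while ((mag + 1) >> (b + 1)) b++;            // b = floor(log2(mag+1))
    for (int i = 0; i < b; i++) encodeBit(i, 1); // unary prefix: b ones...
    encodeBit(b, 0);                             // ...closed by a zero
    for (int i = b - 1; i >= 0; i--)             // low b bits of mag+1
        encodeBit(64 + i, ((mag + 1) >> i) & 1);
    if (mag) encodeBit(128, r < 0 ? 1 : 0);      // sign, only when nonzero
}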
Anyway, speed optimization is a separate task, while for now (imho) we're quite far from real image redundancy,
and it kinda makes sense to reach the best compression first, then think about what can be compromised for speed.
So we need some proper colorspace analysis (source colorspace, gamma level, possible limited palette, actually used bits, etc),
real pattern matching and a tuned CM backend.
Tuning a few different models for different image sets would be good too.
Also some special handling for ex-jpegs makes sense, although ideally it needs a different model (with DCT coef contexts etc),
but even a specially tuned normal model (with maybe added alignment contexts) would be already better than nothing.
Isn't inventing algorithms on your own more challenging?
Well, last year I made the MRP compression engine a lot faster ( http://encode.su/threads/595-GraLIC-...ll=1#post21588 ... either ~1.5 or ~2 times faster, afair), and yet it looks like no one is using it.
> Isn't inventing algorithms on your own more challenging?
My guess is that he tried directly applying the BCM model to image data (and tuning it),
and discovered that it's relatively fast, but doesn't compress that well :)
> Well, last year I made the MRP compression engine a lot faster
You didn't say that before ("some speed optimization" != 2x faster imho), but compared to BMF and such, MRP is not that interesting anyway.
Also I think it's a good idea to post such codecs with support for BMPs, because that's actually easier to implement than proper support for PGMs and such
(because BMP has a fixed header, while PGMs can have # comments, weird spacing, ASCII data and maybe other annoying things).
And I'd bet that 99% of the people here who looked at these image coders didn't actually test them because of that.
Actually zpaq -mbmp_j4 compresses .bmp better than paq8px_v69.
It's not really helpful if it's only better due to:
It first uses a preprocessor to implement a color transform: (R, G, B) -> (G, R-G, B-G) on the image data following the 54-byte header
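For anyone curious, the quoted preprocessor is simple enough to sketch (my reading of the quoted description, ignoring BMP row padding; the real colorpre.cfg may differ in details):

Code:
#include <vector>

// In-place (R,G,B) -> (G, R-G, B-G) over 24-bit BMP pixel data after the
// 54-byte header. BMP stores pixels as B,G,R; byte subtraction wraps
// mod 256, so adding G back restores the original exactly.
void colorTransform(std::vector<unsigned char>& bmp) {
    for (size_t i = 54; i + 2 < bmp.size(); i += 3) {
        unsigned char b = bmp[i], g = bmp[i + 1], r = bmp[i + 2];
        bmp[i]     = g;                          // G
        bmp[i + 1] = (unsigned char)(r - g);     // R-G
        bmp[i + 2] = (unsigned char)(b - g);     // B-G
    }
}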
Any news? How does BIM compare to a standard implementation of PNG?
PNG is out of the game! Its simple delta+deflate can be easily beaten by e.g. delta+o0!
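The "delta" half of delta+o0 is just a per-channel left-neighbor difference (what PNG calls the Sub filter); feed the result to any order-0 adaptive coder and the sharply peaked residual histogram does the rest. A minimal sketch for one 24-bit scanline:

Code:
#include <vector>

// Per-channel delta filter over one scanline of packed 24-bit pixels:
// each byte becomes its difference from the same channel one pixel left.
// Walking backwards keeps the filter in place and trivially reversible.
void deltaFilterRow(std::vector<unsigned char>& row) {
    for (size_t i = row.size(); i-- > 3; )
        row[i] = (unsigned char)(row[i] - row[i - 3]);
}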
18361655 bytes uncompressed size (PPM format)
6418336 -- paq8px_v69 -6
6416844 -- paq8px_v69 -7
6342116 -- BMF -s
6341772 -- BMF -s -q9
6196549 -- GraLIC 1.11.demo, twice faster than BMF -s
I don't think ZPAQ can compress this image better than BMF. Needless to mention it's N times slower than BMF -s.
Tested many image compression techniques and ended up with CM. I'm using my brute-force optimizer to test each idea - that's why it took so long! Anyway, have a look at some statements and principles of my image codec:
- It's truly lossless! You can compress even an EXE file, for example, and correctly decompress it! So an image file, with its comments and other metadata, will be unpacked 100% untouched. Furthermore, you will be able to compress even TAR files containing lots of images of the same width! (a TAR file with lots of 1080p screenshots, for example)
- It can compress 24-bit RGB images in ANY format. All you need is to set the image width manually; for common image formats the codec will read and set the width automatically (see the sketch after this list). The image height is not limited, and the width limit is about 20000 to 1000000 pixels - practically unlimited!
- It's fast enough to view and compress large images (30+ MB in size)
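A sketch of how the "just set the width manually" part can work internally (my illustration, not BIM's actual code): the codec never parses the payload, it simply strides through the raw bytes as if they were rows, so headers and TAR metadata just become a few poorly predicted rows:

Code:
#include <vector>

// Predict a byte of a raw stream interpreted as rows of width*3 bytes
// (24-bit RGB). Works on any file: non-image bytes merely predict badly.
int predictByte(const std::vector<unsigned char>& data, size_t i, size_t width) {
    size_t stride = width * 3;
    int left = i >= 3      ? data[i - 3]      : 0;  // same channel, left pixel
    int up   = i >= stride ? data[i - stride] : 0;  // same channel, pixel above
    return (left + up) / 2;                         // simple averaging predictor
}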
Some features will be added only in stable non-beta release:
- Fully featured image file format, for backward compatibility
- CRC32 checking
- Comments (starting with the first GUI release, if users vote for that feature)
- User requested features
Until then, the codec will live the beta life, meaning no backward compatibility, etc.
I'll probably reuse the PIM name. Now PIM will stand for Packed IMage.
As for the algorithm, following the traditions of image codec authors, I will keep silent! I'll just say that the new codec is based on CM and has some essentially new modeling techniques that cannot be found elsewhere! Anyway, I'll keep it fast enough to be practical.
Testing zpaq4.00b1 (executable built on 1-Nov-2011, 8:04pm)
C:\PATH>zpaq4.exe -mbmp_j4 -t1 c zpq4-t1 DSC_1062.bmp
Using model bmp_j4.cfg
Creating archive zpq4-t1.zpaq
zpaq -h -mcolorpre r DSC_1062.bmp C:\DOCUME~1\path2\Temp\zpaqtmp1504_1.out
'zpaq' is not recognized as an internal or external command,
operable program or batch file.
C:\DOCUME~1\path2\Temp\zpaqtmp1504_1.out: No such file or directory
zpaq error: preprocessing failed
1 seconds
C:\PATH>ren zpaq4.exe zpaq.exe
C:\PATH>zpaq.exe -mbmp_j4 -t1 c zpq4-t1 DSC_1062.bmp
Using model bmp_j4.cfg
Creating archive zpq4-t1.zpaq
zpaq -h -mcolorpre r DSC_1062.bmp C:\DOCUME~1\path2\Temp\zpaqtmp1092_1.out
Using model colorpre.cfg
[1] DSC_1062.bmp 18367770 -> 6484685 (2.8244 bpc)
184 seconds
C:\PATH>zpaq.exe -mbmp_j4 -t4 c zpq4-t4 DSC_1062.bmp
Using model bmp_j4.cfg
Creating archive zpq4-t4.zpaq
zpaq -h -mcolorpre r DSC_1062.bmp C:\DOCUME~1\path2\Temp\zpaqtmp2132_1.out
Using model colorpre.cfg
[1] DSC_1062.bmp 18367770 -> 6484685 (2.8244 bpc)
183 seconds
C:\PATH>fc/b zpq4-t1.zpaq zpq4-t4.zpaq
Comparing files zpq4-t1.zpaq and ZPQ4-T4.ZPAQ
FC: no differences encountered
Assuming you don't know the algorithms inside BMF, GraLIC, StuffIt, etc., why are you sure your modeling techniques are essentially new?
Glad to know zpaq400b1 works. Of course it is slower than zpaq 3.01 (the official release) because I removed source level JIT. Only run() is implemented so far. I'll make an official release when predict() and update() are finished. (I just started on them).
Anyway I can't say that I know the bmp_j4.cfg model myself because I didn't write it. Jan Ondrus did. It beats paq8px_v69 on rafale.bmp and lena.bmp.
JUST A TESTER - you people are the greatest. I'm free to test some BIM files and also some zpaq -mbmp_j4, to provide some benchmarks on the differences in speed and size.
When BIM with the GUI is released here, feel free to ask me for some free tests.
Note that zpaq can reproduce a PNG image bit by bit (without transparency) if it is converted to BMP and then packed - sometimes smaller than 60% of the original PNG's size.
Development is going really slowly. Anyway, check out some new results:
Dummy CM
lena.pnm -> 461,078 bytes
cow.pnm -> 7,971,045 bytes
butterfly.pnm -> 5,091,202 bytes
Transform+Dummy CM
lena.pnm -> 436,282 bytes
cow.pnm -> 7,073,658 bytes
butterfly.pnm -> 4,801,684 bytes
BCIF 1.0 beta
lena.bmp -> 424,723 bytes
cow.bmp -> 7,211,365 bytes
butterfly.bmp -> 4,710,454 bytes
BMF 2.01
lena.pnm -> 422,568 bytes
cow.pnm -> 6,866,180 bytes
butterfly.pnm -> 4,938,080 bytes
FLIC 1.4.demo
lena.pnm -> 419,012 bytes
cow.pnm -> 6,785,997 bytes
butterfly.pnm -> 4,414,260 bytes
Test files:
http://narod.ru/disk/61965621001.24e...rtest.rar.html
If you're interested in performance on some specific file, let me know!
Dummy CM v2
lena.pnm -> 442,987 bytes
cow.pnm -> 7,906,938 bytes
butterfly.pnm -> 4,939,465 bytes
Test log of my image codec in fast mode:
C:\Test>bim lena.pnm
BIM/GAR v0.01-ALPHA by Ilia Muraviev
Compressing...
Dimensions: 512 x -1
786447 -> 440919 in 0.06 sec
C:\Test>bim cow.pnm
BIM/GAR v0.01-ALPHA by Ilia Muraviev
Compressing...
Dimensions: 2014 x -1
18361655 -> 7288772 in 0.85 sec
C:\Test>bim butterfly.pnm
BIM/GAR v0.01-ALPHA by Ilia Muraviev
Compressing...
Dimensions: 3008 x -1
18048017 -> 4888606 in 0.76 sec
C:\Test>bim PIA13757.pnm
BIM/GAR v0.01-ALPHA by Ilia Muraviev
Compressing...
Dimensions: 7753 x -1
54705185 -> 12673157 in 2.28 sec
C:\Test>bim PIA13912.pnm
BIM/GAR v0.01-ALPHA by Ilia Muraviev
Compressing...
Dimensions: 6330 x -1
111642227 -> 34860263 in 4.94 sec
New day, new improvements:
C:\Test>gar lena.pnm
BIM/GAR v0.01-ALPHA2 by Ilia Muraviev
Compressing...
Dimensions: 512 x -1
786447 -> 439107 in 0.06 sec
C:\Test>gar cow.pnm
BIM/GAR v0.01-ALPHA2 by Ilia Muraviev
Compressing...
Dimensions: 2014 x -1
18361655 -> 7174784 in 0.87 sec
C:\Test>gar butterfly.pnm
BIM/GAR v0.01-ALPHA2 by Ilia Muraviev
Compressing...
Dimensions: 3008 x -1
18048017 -> 4881711 in 0.80 sec
C:\Test>gar PIA13757.pnm
BIM/GAR v0.01-ALPHA2 by Ilia Muraviev
Compressing...
Dimensions: 7753 x -1
54705185 -> 11802795 in 2.36 sec
C:\Test>gar PIA13912.pnm
BIM/GAR v0.01-ALPHA2 by Ilia Muraviev
Compressing...
Dimensions: 6330 x -1
111642227 -> 33386430 in 5.09 sec