Here is another compressor I'm still working on. It uses only context modelling. The normal version (CM0) is supposed to be "fast"; the extended version (CM0_EXT) is more exhaustive. The two models are similar but incompatible with each other. I'm going to continue developing this, and if I can improve it further, I will probably release the source later.
Usage:
CM0/CM0_EXT c in out (compress)
CM0/CM0_EXT d out in_reconstructed (decompress)
I compiled with the latest MinGW. I used static linking this time (hence the huge file sizes), so nothing else is needed to run it, and tested on an old computer running Windows XP. Sorry for the inconvenience.
Edit: I updated the attached files in the top post.
Two small changes today: I fixed a silly bug in reporting the final file size (I forgot to account for the IO buffer, so the reported size was up to 128k short), and I adjusted the weighting scale between orders to increase the compression ratio a little.
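For context, the weighting scale just controls how strongly each order's prediction counts in the mix. A simplified, hypothetical sketch of that idea (fixed weights, 12-bit probabilities; the actual CM0 mixer and values are not shown anywhere in this thread):

#include <cstdint>

// Hypothetical fixed-weight mix of per-order bit probabilities (0..4095).
// Scaling w[] up for higher orders is the kind of knob meant above; the
// weights here are purely illustrative.
static const int ORDERS = 4;

uint32_t mix(const uint16_t p[ORDERS]) {
    static const uint32_t w[ORDERS] = { 1, 2, 4, 8 }; // illustrative scale
    uint64_t num = 0, den = 0;
    for (int i = 0; i < ORDERS; i++) {
        num += (uint64_t)w[i] * p[i]; // weight each order's probability
        den += w[i];
    }
    return (uint32_t)(num / den);     // mixed 12-bit probability
}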
I have a hash table with individual counters, similar to fpaq0. After updating each bit, I look up (hash of the order context + the partial current byte) for each order. The alternative would be one lookup per byte instead of per bit, but then each slot would need 256 counters, of which only 8 are used per byte, wasting a lot of memory. I could use SIMD to update 8 counters at once, but the memory loads are the main bottleneck. Does anyone have any ideas?
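Roughly, the two layouts compare like this. This is a sketch only, assuming fpaq0-style shift-updated 12-bit counters; the hash, table sizes, and names are illustrative, not the actual CM0 code:

#include <cstdint>

// Sketch only: fpaq0-style 12-bit counters with a shift update. In a real
// coder the counters would start at 2048 (p = 0.5).
static const int HASH_BITS = 22;
static uint16_t perBit[1 << HASH_BITS];   // layout A: one counter per slot

static uint32_t hashCtx(uint64_t ctx, uint32_t partial) {
    uint64_t h = (ctx ^ partial) * 0x9E3779B97F4A7C15ull;
    return (uint32_t)(h >> (64 - HASH_BITS));
}

static void adapt(uint16_t &p, int bit) {  // fpaq0-style shift update
    if (bit) p += (4096 - p) >> 5;
    else     p -= p >> 5;
}

// Layout A: one hash lookup per BIT; the partial current byte (a leading 1
// plus the bits seen so far) is folded into the hash. Cheap on memory, but
// one dependent load per bit per order.
static void updateBitwise(uint64_t ctx, int byte) {
    uint32_t partial = 1;
    for (int i = 7; i >= 0; i--) {
        int bit = (byte >> i) & 1;
        adapt(perBit[hashCtx(ctx, partial)], bit);
        partial = partial * 2 + bit;
    }
}

// Layout B: one hash lookup per BYTE; each slot is a 256-entry bit tree
// indexed by the same prefix, so only the 8 nodes on the actual byte's
// path are touched and the rest of the slot is wasted memory (this small
// table alone is already 32 MB).
static uint16_t perByte[1 << 16][256];

static void updateBytewise(uint64_t ctx, int byte) {
    uint16_t *slot = perByte[hashCtx(ctx, 0) & 0xFFFF];
    int node = 1;                          // root of the bit tree
    for (int i = 7; i >= 0; i--) {
        int bit = (byte >> i) & 1;
        adapt(slot[node], bit);
        node = node * 2 + bit;             // descend to this bit's child
    }
}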
There was a silly error in the way I reported file sizes in my IO library (I forgot to count the characters still buffered in memory and just reported ftell). I have updated the files in the original post. Sorry about that.
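In other words, the bug pattern was reporting ftell alone. A tiny hypothetical sketch (the names are made up; the 128k figure matches the buffer size mentioned above):

#include <cstdio>
#include <cstdint>
#include <cstddef>

// ftell only counts bytes already flushed to the OS, so anything still
// sitting in the library's own buffer must be added on top.
struct BufferedWriter {
    FILE   *f;
    uint8_t buf[128 * 1024];   // user-space write buffer
    size_t  pos = 0;           // bytes buffered but not yet flushed

    long sizeBuggy() const { return ftell(f); }              // short by pos
    long sizeFixed() const { return ftell(f) + (long)pos; }  // correct
};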