I am looking for fast C++ implementation of static/adaptive Huffman coder. What is a state of art implementation, now?
You can start by reading the one in Tornado (it should be easier to decipher), but I believe the best one is in the 7-zip sources. It also has a less restrictive license.
How do they compare to FastHF?
FastHF uses bit-by-bit decoding and a heap sort, so it's pretty slow.
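For context, here's why bit-by-bit decoding hurts: each input bit costs a branch and a dependent tree-node load, whereas table-driven decoders consume several bits per lookup. This is a minimal sketch of the technique in general, not FastHF's actual code; the `Node` layout and leaf encoding are my own assumptions.

```cpp
// Illustrative only -- not FastHF's code. Bit-by-bit decoding walks one
// tree node per input bit, so every bit costs a branch and a dependent
// load; table-driven decoders amortize that over several bits per lookup.
#include <cstddef>
#include <cstdint>
#include <vector>

struct Node {
    int16_t child[2]; // child[bit]: >= 0 = internal node index, < 0 = ~symbol
};

// Decode one symbol by walking the tree one bit at a time (MSB-first).
int decode_symbol(const std::vector<Node>& tree,
                  const uint8_t* buf, size_t& bitpos)
{
    int16_t n = 0; // start at the root
    for (;;) {
        int bit = (buf[bitpos >> 3] >> (7 - (bitpos & 7))) & 1;
        ++bitpos;
        n = tree[n].child[bit];
        if (n < 0)
            return ~n; // leaf reached: recover the symbol
    }
}
```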
No idea how well it compares, but try: http://entropyware.info/shcodec/index.html.
Good luck!
I have some aging code in libstaden-read (aka io_lib): http://staden.svn.sourceforge.net/vi...30&view=markup
I won't claim it's super, and it's a bit of a mess as it's mixed up with all sorts of other bits and bobs, but it's BSD licence, so feel free to experiment and slice/dice if you wish. It implements an interlaced version of deflate, minus the LZ bit (for that particular purpose it didn't help at all). The decoder is reasonably fast once it gets going, but has some decoding setup time, so it's not great on very small blocks. I decode a nibble at a time for speed, using a lookup table to work out how many symbols to emit (if any) and which new state to jump to for the next lookup.
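A minimal sketch of the nibble-at-a-time idea (the structures and names here are my own illustration, not io_lib's actual layout): each decoder state owns a 16-entry table indexed by the next 4 input bits, and each entry lists the symbols that complete within that nibble plus the state for the following lookup.

```cpp
// Sketch of nibble-at-a-time table decoding; illustrative, not io_lib's
// layout. Each state has a 16-entry table indexed by the next 4 input
// bits. An entry records the symbols (0..4 of them) completed within
// that nibble and the state to use for the next nibble.
#include <cstddef>
#include <cstdint>
#include <vector>

struct NibbleEntry {
    uint8_t  nsyms;      // how many symbols this nibble completes (0..4)
    uint8_t  sym[4];     // the completed symbols, in order
    uint16_t next_state; // state for the next 4-bit lookup
};

struct State {
    NibbleEntry entry[16]; // one entry per possible nibble value
};

// Decode 'in' 4 bits at a time; the state tables must have been built
// from the Huffman code beforehand (table construction omitted here).
size_t decode(const std::vector<State>& states,
              const uint8_t* in, size_t in_len, uint8_t* out)
{
    size_t out_pos = 0;
    uint16_t state = 0;
    for (size_t i = 0; i < in_len; ++i) {
        // High nibble first, then low nibble.
        for (int shift = 4; shift >= 0; shift -= 4) {
            const NibbleEntry& e = states[state].entry[(in[i] >> shift) & 0xF];
            for (int s = 0; s < e.nsyms; ++s)
                out[out_pos++] = e.sym[s];
            state = e.next_state;
        }
    }
    return out_pos;
}
```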
From what I recall, it was slightly faster than zlib at decoding and slightly slower at encoding, but I may have got that back to front; it's been a few years.
James
PS. The interlacing is basically an efficient way to handle data reordering without needing to copy it into separate buffers. I was using this for packing multi-dimensional arrays of N x 4 x 16-bit quantities. It turns out you can get a long way by just treating it as 8 separate streams, e.g. producing one buffer for bytes 0, 8, 16, 24, ..., another for bytes 1, 9, 17, 25, etc. However, instead I just fed the one buffer into this code with a step/stride of 8 and let it interlace as it goes. Not an optimal strategy, but fast and not miles off optimal given the specific nature of that data.
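To make the stride idea concrete (again my own illustration, not the io_lib code): viewing one flat buffer as 8 streams just means visiting bytes 0, 8, 16, ... as stream 0, bytes 1, 9, 17, ... as stream 1, and so on, feeding each byte to that stream's coder without any copying.

```cpp
// Illustration of the stride idea, not io_lib code: one flat buffer
// viewed as 'stride' separate byte streams. Stream s consists of bytes
// s, s + stride, s + 2*stride, ... so the coder can walk each stream in
// turn instead of first copying the data into separate buffers.
#include <cstddef>
#include <cstdint>

template <typename Emit>
void walk_interlaced(const uint8_t* buf, size_t len, size_t stride, Emit emit)
{
    for (size_t s = 0; s < stride; ++s)         // stream 0, 1, ..., stride-1
        for (size_t i = s; i < len; i += stride)
            emit(s, buf[i]);                    // feed byte to stream s's coder
}
```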