
Originally Posted by
Jarek
I see BitKnit adapts probabilities for rANS every 1024 symbols, but there is some strange weight choice.
That's just an oversight, not an intentional feature of the model. Quoting the actual code:
Code:
kIncrement = kAvailable / kRenormEvery,
kFinalBoost = kAvailable % kRenormEvery,
It's just trying to distribute kAvailable slots of probability space over each renormalization interval. To be honest, I didn't notice how large the final boost was relative to the regular increments until months after the bitstream had been frozen!
A much more uniform way is to just boost the last kFinalBoost symbols before a renormalization takes place by a single extra increment:
Code:
symfreq[sym] += kIncrement + (renorm_counter <= kFinalBoost ? 1 : 0); // boost exactly the last kFinalBoost symbols
if (--renorm_counter == 0) renorm();
Alas, not the kind of change you make post-bitstream-freeze.

Originally Posted by
Jarek
LZNA has very nice symbol-by-symbol rANS adaptation for 8-symbol (3-bit) or 16-symbol (4-bit) alphabets - with beautiful SIMD for both the symbol search and the probability adaptation, working on the CDF only.
This is exactly what they need in VP10/AV1 (the alphabet is limited to 16) ...
This is why I've been insisting for more than a year now that this approach is really quite SIMD-friendly and that you might want to take a look at it.
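For readers unfamiliar with the scheme: the two kernels in question are a linear CDF search (which vectorizes into a compare + movemask) and a per-entry adaptation step that applies the same arithmetic to every CDF slot. Here's a scalar sketch of the general idea; the constants (kProbBits, kRate) and the simple shift-toward-target update are illustrative, not LZNA's actual tables or rates:

```c
/* Sketch of CDF-based symbol search + adaptation for a 16-symbol
   alphabet. Convention: cdf[0] = 0, cdf[kNumSyms] = kProbOne, and
   symbol s has frequency cdf[s+1] - cdf[s]. */
enum { kNumSyms = 16, kProbBits = 15, kProbOne = 1 << kProbBits, kRate = 5 };

static unsigned short cdf[kNumSyms + 1];

static void init_uniform(void) {
    for (int i = 0; i <= kNumSyms; i++)
        cdf[i] = (unsigned short)((i * kProbOne) / kNumSyms);
}

/* Symbol search: find s with cdf[s] <= x < cdf[s+1]. This scalar loop
   is what becomes a SIMD compare + movemask in a vectorized decoder. */
static int find_symbol(unsigned x) {
    int s = 0;
    while (cdf[s + 1] <= x)
        s++;
    return s;
}

/* Adaptation: move every interior CDF entry a fraction of the way
   toward the one-hot target for 'sym'. Same arithmetic in every slot,
   so it maps directly onto SIMD lanes. A real coder also enforces a
   probability floor so no frequency can decay to zero. */
static void adapt(int sym) {
    for (int i = 1; i < kNumSyms; i++) {
        int target = (i > sym) ? kProbOne : 0;
        cdf[i] = (unsigned short)(cdf[i] + ((target - cdf[i]) >> kRate));
    }
}
```

Since the update never touches anything but the CDF itself, both kernels fit in a couple of vector registers for a 16-entry alphabet, which is the SIMD-friendliness being argued for.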