Any ideas on restructuring this or adding something?
1. Basic statistics
1.1. CM (context modelling) with simple incremental counters (frequencies) -
despite common belief, this can be faster than working with probabilities (see the counter sketch below).
1.2. CM with shift-based probability counters (fpaq0-like)
1.3. combinatorial multiplication-based probability counters (frequency simulation)
1.4. state machine counters
1.5. delayed counters
1.6. delayed counters with context interpolation and indirect update
1.7. (=2.5) +context quantization
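
A minimal sketch of 1.1 and 1.2, assuming a bitwise model and 12-bit probabilities; the increment step, rescale threshold and adaptation shift are placeholder values, not recommendations.

#include <cstdint>

// 1.1: plain frequency counter for a binary context; the probability is
// computed only when it is actually needed, and the counts are halved on
// overflow to keep the counter adaptive.
struct FreqCounter {
    uint16_t n0 = 1, n1 = 1;                 // add-one init avoids zero probabilities
    void update(int bit) {
        if (bit) n1 += 4; else n0 += 4;      // increment step is a tunable parameter
        if (n0 + n1 > 65000) {               // rescale, keeping both counts nonzero
            n0 = (n0 >> 1) | 1;
            n1 = (n1 >> 1) | 1;
        }
    }
    uint32_t p1() const {                    // 12-bit probability of a 1 bit
        return (uint32_t(n1) << 12) / (n0 + n1);
    }
};

// 1.2: fpaq0-like shift counter; the probability itself is the state and is
// moved a fixed fraction toward 0 or 4096 after each bit.
struct ShiftCounter {
    uint16_t p = 2048;                       // 12-bit probability of a 1 bit
    void update(int bit) {
        if (bit) p += (4096 - p) >> 5;       // shift 5 = adaptation rate
        else     p -= p >> 5;
    }
};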
2. Secondary estimation
2.1. Using a quantized probability in context (see the sketch below)
2.2. Nonlinear probability mapping
2.3. +Interpolation
2.4. +Indirect update
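
A rough sketch of 2.1-2.3: the primary prediction is quantized into buckets per context (2.1), the per-bucket mapping is adaptive and therefore nonlinear (2.2), and adjacent buckets are interpolated (2.3). Context size, bucket count and update rate are arbitrary illustrative values.

#include <cstdint>

// Secondary estimation (SSE/APM style): map a 12-bit primary prediction
// through a per-context table of adaptive output probabilities, interpolating
// between the two nearest buckets.
struct SSE {
    enum { CTX = 256, BUCKETS = 33 };
    uint16_t t[CTX][BUCKETS];
    int ctx = 0, idx = 0;                        // remembered between pp() and update()
    SSE() {
        for (int c = 0; c < CTX; c++)
            for (int i = 0; i < BUCKETS; i++)
                t[c][i] = uint16_t((i * 4095) / (BUCKETS - 1));   // start as identity map
    }
    uint32_t pp(int context, uint32_t p) {       // p in [0,4095], returns refined p
        ctx = context & (CTX - 1);
        uint32_t pos = p * (BUCKETS - 1);        // fixed-point bucket position
        idx = int(pos >> 12);
        uint32_t w = pos & 4095;                 // interpolation weight
        return (t[ctx][idx] * (4096 - w) + t[ctx][idx + 1] * w) >> 12;
    }
    void update(int bit) {                       // move both neighbours toward the bit
        int target = bit ? 4095 : 0;
        t[ctx][idx]     += (target - t[ctx][idx])     >> 6;   // shift 6 = update rate
        t[ctx][idx + 1] += (target - t[ctx][idx + 1]) >> 6;
    }
};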
3. Prediction merging techniques
3.1. Switching (e.g. by amortized code length)
3.2. Static linear mixing
3.3. Adaptive linear mixing (see the mixer sketch below)
3.4. +Indirect update
3.5. Multi-dimensional version of "secondary estimation"
3.6. Update back-propagation
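
For 3.3, a sketch of a plain adaptive linear mixer: weights are updated by a gradient step proportional to the mixing error times the input. N, the weight precision and the learning-rate shift are placeholders (logistic mixing over stretched probabilities is the other common variant, not shown here).

#include <cstdint>

// Adaptive linear mixing of N bitwise predictions, all 12-bit probabilities.
// Weights are fixed-point with a 16-bit fraction.
struct LinearMixer {
    enum { N = 4, WBITS = 16 };
    int32_t  w[N];
    int32_t  in[N];
    uint32_t last = 0;
    LinearMixer() { for (int i = 0; i < N; i++) w[i] = (1 << WBITS) / N; }
    uint32_t mix(const uint32_t* p) {            // p[i] in [0,4095]
        int64_t sum = 0;
        for (int i = 0; i < N; i++) {
            in[i] = int32_t(p[i]);
            sum += int64_t(w[i]) * int64_t(p[i]);
        }
        int64_t pm = sum >> WBITS;
        if (pm < 1)    pm = 1;                   // keep away from 0/1 for the coder
        if (pm > 4094) pm = 4094;
        last = uint32_t(pm);
        return last;
    }
    void update(int bit) {
        int32_t err = (bit << 12) - int32_t(last);        // signed mixing error
        for (int i = 0; i < N; i++)
            w[i] += (err * in[i]) >> 14;                  // shift 14 = learning rate
    }
};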
4. Precision control
4.1. Using interval arithmetic to calculate the prediction error (and make a correction)
4.2. Adaptive correction by using the prediction error in contexts (see the sketch below)
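
One possible reading of 4.2, sketched below: keep an adaptive signed correction per context, learned from the observed coding error, and add it to the prediction before coding. Table size and adaptation rate are arbitrary.

#include <cstdint>

// Adaptive error correction: corr[ctx] tracks the average signed prediction
// error seen in that context and is added back to later predictions.
struct ErrorCorrector {
    enum { CTX = 1024 };
    int32_t  corr[CTX] = {};
    int      ctx = 0;
    uint32_t pcorr = 2048;
    uint32_t apply(int context, uint32_t p) {    // p in [0,4095]
        ctx = context & (CTX - 1);
        int32_t q = int32_t(p) + corr[ctx];
        if (q < 1)    q = 1;
        if (q > 4094) q = 4094;
        pcorr = uint32_t(q);
        return pcorr;
    }
    void update(int bit) {                       // move the correction toward the error
        int32_t err = (bit << 12) - int32_t(pcorr);
        corr[ctx] += err >> 7;                   // shift 7 = adaptation rate
    }
};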
5. Parameter optimization techniques
5.0. Manual
5.1. Simple brute force (see the sketch below)
5.2. CM-driven brute force (using correlations between the parameter set and the output size)
5.3. Bayesian (likelihood-based), e.g. by polynomial approximation
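
A sketch of 5.1; Params, its two rate fields and the search ranges are invented for the example, and coded_size stands for any callable that runs the model with the given parameters on a training file and returns the output size in bytes.

#include <cstddef>
#include <limits>

struct Params { int rate1 = 4, rate2 = 6; };     // hypothetical tunable parameters

// Plain brute force: try every point of a small grid, keep the set that
// produces the smallest output.
template <class CodedSize>
Params brute_force(CodedSize coded_size) {
    Params best;
    std::size_t best_size = std::numeric_limits<std::size_t>::max();
    for (int r1 = 2; r1 <= 7; r1++)              // search ranges are illustrative
        for (int r2 = 2; r2 <= 7; r2++) {
            Params p;
            p.rate1 = r1;
            p.rate2 = r2;
            std::size_t s = coded_size(p);       // run the model, measure the result
            if (s < best_size) { best_size = s; best = p; }
        }
    return best;
}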
6. Symmetry control
6.1. Blockwise redundancy check (see the sketch below)
6.2. Statistical segmentation
6.3. Adaptive model optimization and encoding of the parameter values
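
A sketch of 6.1: each block is coded tentatively, and if coding does not shrink it, a flag byte plus the raw bytes are stored instead, so incompressible blocks cost only the flag; encode_block here is just a placeholder for the real model+coder.

#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Blockwise redundancy check at the encoder side; the decoder only needs to
// read the flag and either decode or copy the block.
void put_block(std::vector<uint8_t>& out, const uint8_t* p, std::size_t n,
               const std::function<std::vector<uint8_t>(const uint8_t*, std::size_t)>& encode_block) {
    std::vector<uint8_t> coded = encode_block(p, n);   // tentative coding
    if (coded.size() < n) {
        out.push_back(1);                              // flag: compressed block
        out.insert(out.end(), coded.begin(), coded.end());
    } else {
        out.push_back(0);                              // flag: stored block
        out.insert(out.end(), p, p + n);
    }
}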
7. Applied modelling
7.1. Hash function design (see the hash sketch below)
7.2. Speed optimization
7.2.1. Alphabet decomposition by Huffman's algorithm
7.2.2. Faster processing of probable cases in general
7.3. Serial improvement (adaptively using a secondary model to determine the design of the primary model,
e.g. symbol ranking)
7.4. Speculative processing (mostly relevant for decoding and threaded implementations)
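
For 7.1, a sketch of a conventional multiplicative context hash with an 8-bit check byte for collision detection; the multipliers and the slot layout are generic choices, not taken from any specific coder.

#include <cstdint>

struct HashSlot { uint8_t check = 0; uint8_t state = 0; };   // state = counter, index, etc.

// Mix the context value so that both the low bits (table index) and the high
// bits (check byte) are usable.
inline uint32_t ctx_hash(uint32_t ctx, uint32_t order) {
    uint64_t h = uint64_t(ctx * 2654435761u) + order;   // Knuth-style 32-bit multiplier
    h *= 0x9E3779B97F4A7C15ull;                          // 64-bit golden-ratio constant
    return uint32_t(h >> 32);                            // keep the well-mixed high bits
}

// Look up a slot; a mismatching check byte is treated as a collision and the
// slot is recycled for the new context.
inline HashSlot* lookup(HashSlot* table, uint32_t table_bits, uint32_t h) {
    HashSlot* s = &table[h & ((1u << table_bits) - 1)];  // table_bits < 32 assumed
    uint8_t chk = uint8_t(h >> 24);
    if (s->check != chk) { s->check = chk; s->state = 0; }
    return s;
}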