I have been experimenting with symbol ranking compression and wrote a new file compressor, sr2 (and decompressor unsr2). Code is GPL.

http://cs.fit.edu/~mmahoney/compression/text.html#2739

The source explains the details. It is loosely based on Peter Fenwick's SRANK, with modern improvements such as arithmetic coding. It is about as fast as zip -9 but produces slightly smaller files. However, decompression is a bit slower than compression. I optimized it for speed rather than size. It is ranked 56th on the large text benchmark (zip is 66th) but compresses enwik9 in 99 seconds (zip -9 takes 123 seconds). LZ77 beats it for decompression speed, though.

The basic idea is to map a 20-bit hash of an order-4 context to an MTF queue of the last 3 bytes seen in that context, plus a consecutive hit count. The queue position, or the literal byte for a miss, is arithmetic coded using an order 1 model (order 0 for counts > 3) with the count as extra context. After a byte is coded it is moved to the front of the queue. The count (max 63) tracks consecutive hits at the front of the queue. It is reset to 1 for hits at other positions, or 0 for misses.
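The queue-update step described above might be sketched as follows. This is a hypothetical illustration, not the actual sr2 source: the hash constant, the class names, and the use of a sparse dict in place of a fixed 2^20-entry table are all my own choices for clarity.

```python
def hash20(context):
    """Map the last 4 bytes (the order-4 context) to a 20-bit table index.
    The multiplier is illustrative, not the one sr2 uses."""
    h = 0
    for b in context:
        h = h * 0x6F + b
    return h & 0xFFFFF  # keep 20 bits -> 2^20 possible slots

class SlotTable:
    def __init__(self):
        # Each slot: [byte0, byte1, byte2, count] -- an MTF queue of the
        # last 3 bytes in this context plus a consecutive-hit count.
        self.t = {}

    def code(self, context, byte):
        """Return the symbol to code (rank 0-2, or None for a miss,
        meaning the literal byte is coded) and update the slot."""
        slot = self.t.setdefault(hash20(context), [0, 0, 0, 0])
        q, count = slot[:3], slot[3]
        if byte == q[0]:
            rank = 0
            slot[3] = min(count + 1, 63)   # consecutive hit at front, max 63
        elif byte == q[1]:
            rank = 1
            slot[0], slot[1] = byte, q[0]  # move to front
            slot[3] = 1                    # reset count for a non-front hit
        elif byte == q[2]:
            rank = 2
            slot[0], slot[1], slot[2] = byte, q[0], q[1]
            slot[3] = 1
        else:
            rank = None                    # miss: the literal is coded instead
            slot[0], slot[1], slot[2] = byte, q[0], q[1]
            slot[3] = 0
        return rank
```

The rank (or literal) returned here would then feed the arithmetic coder, with the count as extra context; that part is omitted.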

SRANK, written in 1996-97, is about twice as fast but compresses poorly. It uses Huffman coding with no context modeling. I called my program SR2 because it is only the second symbol ranking compressor that I know of (other than MTF modeling in BWT).