Well, it would be nice if anybody could explain it to me.
I'll start with how I understand it works; please correct me if I'm wrong.
ZPAQ takes care of filenames, checksums, etc.
It uses context mixing. Each model that is to be mixed is described by VM bytecode. There are three bytecodes built in, but they are interchangeable. It usually makes no sense to create a model that predicts that the output will be pi, but if you do and the output actually is pi, the compression will be amazing. All these models are "symmetric" by nature: the same code is used for compression and decompression.
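To make the mixing part concrete, here is a minimal sketch (my own illustration, not ZPAQ's actual code or its ZPAQL components) of how a context-mixing stage combines several bit predictions in the logistic domain and then adapts the weights online:

```cpp
// Illustrative only: logistic mixing of bit predictors, as used in
// context-mixing compressors. Models, weights and learning rate are
// hypothetical values, not anything taken from ZPAQ.
#include <cmath>
#include <cstdio>
#include <vector>

double stretch(double p) { return std::log(p / (1.0 - p)); }    // logit
double squash(double x)  { return 1.0 / (1.0 + std::exp(-x)); } // inverse

int main() {
    // Each model outputs P(next bit = 1); the mixer learns the weights.
    std::vector<double> predictions = {0.80, 0.55, 0.95};
    std::vector<double> weights     = {0.40, 0.20, 0.40};

    // Mix in the logistic domain, then squash back to a probability.
    double t = 0.0;
    for (size_t i = 0; i < predictions.size(); ++i)
        t += weights[i] * stretch(predictions[i]);
    double p = squash(t);
    std::printf("mixed P(bit=1) = %.3f\n", p);

    // After the real bit is seen, nudge the weights toward the models
    // that predicted it best (a simple online gradient step).
    int bit = 1;
    double lr = 0.01;
    for (size_t i = 0; i < predictions.size(); ++i)
        weights[i] += lr * (bit - p) * stretch(predictions[i]);
}
```

Because the weight updates depend only on data already coded, the decompressor can replay exactly the same steps, which is what makes the models "symmetric".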
Then there are pre- and postprocessors. They may each be different, but ZPAQ asserts that post(pre(data)) == data.
These processors can apply transforms such as BWT and its inverse, or decode and re-encode FLAC; more likely, though, existing compression is reversed in the preprocessing step and the data is compressed again afterwards, since many file formats contain inefficient compression. So, for example, a PNG file is preprocessed into a raw bitmap, because there are better predictors than the one PNG uses (is it a predictor?), and this is later reversed. (A small sketch of the pre/post contract follows below.)
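Here is a minimal sketch of that round-trip contract, again just my illustration rather than ZPAQ's pcomp code: the "pre" step is a simple delta filter (similar in spirit to PNG's Sub predictor), the "post" step undoes it, and the invariant post(pre(data)) == data is checked explicitly.

```cpp
// Illustrative pre/post processor pair; not ZPAQ's implementation.
#include <cassert>
#include <cstdint>
#include <vector>

std::vector<uint8_t> pre(const std::vector<uint8_t>& in) {
    std::vector<uint8_t> out(in.size());
    uint8_t prev = 0;
    for (size_t i = 0; i < in.size(); ++i) {
        out[i] = static_cast<uint8_t>(in[i] - prev); // store the difference
        prev = in[i];
    }
    return out;
}

std::vector<uint8_t> post(const std::vector<uint8_t>& in) {
    std::vector<uint8_t> out(in.size());
    uint8_t prev = 0;
    for (size_t i = 0; i < in.size(); ++i) {
        prev = static_cast<uint8_t>(prev + in[i]); // undo the difference
        out[i] = prev;
    }
    return out;
}

int main() {
    std::vector<uint8_t> data = {10, 12, 13, 13, 200, 201, 202};
    assert(post(pre(data)) == data); // the invariant described above
}
```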
Can these processors be chained? Are they written in VM bytecode and included in the archive? (I guess not, since I saw C code; if not, how are they distributed and executed, how portable are they, and are they sandboxed?) What are configurations? Is there a mechanism that selects a fitting processor? How do you write processors? And isn't CM with just one static model just Huffman coding?