> But nobody is doing general adaptive structure detection and modeling.

table detection (not only stride 4 or 8, but more general strides) is rather common these
days (see paq,ccm,bsc).
Also paq8 can sometimes implicitly handle tricky correlations thanks to some
of its contexts - eg. where the context history is used as a context, or the actual SSE stages.
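As a rough illustration of the kind of stride detection meant here (a hypothetical sketch, not the actual paq/ccm/bsc code): pick the record length whose lag maximizes the byte-match rate, since tables of fixed-size records tend to repeat bytes at multiples of the record size.

```python
def detect_stride(data: bytes, max_stride: int = 64) -> int:
    """Guess a fixed record length by autocorrelation:
    for each candidate lag k, count how often data[i] == data[i+k].
    Arrays of fixed-size records score highest at k = record size."""
    best_k, best_score = 1, -1.0
    n = len(data)
    for k in range(1, max_stride + 1):
        matches = sum(1 for i in range(n - k) if data[i] == data[i + k])
        score = matches / (n - k)
        if score > best_score:
            best_score, best_k = score, k
    return best_k

# e.g. an array of 4-byte little-endian counters scores highest at lag 4,
# because the three high (mostly zero) bytes line up record-to-record
sample = b"".join(i.to_bytes(4, "little") for i in range(256))
```

A real detector would of course work on a sliding window and re-check the stride as the data changes, but the scoring idea is the same.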

But as to real structure detection, imho it doesn't make much sense at this
point - it's easy to think of examples where structure analysis would be
very slow, and for most binary files the format is already known.
So at least for file compression it won't be of much help, though sure, it'd
be good to have an entropy-based structure detection tool, eg. for reverse-engineering.
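For that reverse-engineering use case, a minimal entropy-based probe could look like this (a hypothetical sketch; the record size is assumed to be known or detected beforehand): compute the byte entropy of each column of the record array - near-zero-entropy columns are likely padding or constant flag fields, high-entropy ones are payload.

```python
import math
from collections import Counter

def column_entropies(data: bytes, stride: int) -> list[float]:
    """View data as an array of `stride`-byte records and return
    the Shannon entropy (bits/byte) of each column of that array."""
    ents = []
    for col in range(stride):
        column = data[col::stride]          # all bytes at this record offset
        counts = Counter(column)
        total = len(column)
        ent = -sum(c / total * math.log2(c / total) for c in counts.values())
        ents.append(ent)
    return ents

# toy records: 1 varying byte, 2 zero-padding bytes, 1 constant tag byte
records = b"".join(bytes([i % 251]) + b"\x00\x00" + bytes([7]) for i in range(500))
ents = column_entropies(records, 4)
# column 0 gets high entropy, columns 1-3 get entropy 0
```

Plotting such per-offset entropies is often enough to eyeball field boundaries in an unknown fixed-record format.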

And anyway, for now we don't even have a single archiver able to handle
even a few of the most common known formats - "stuffit" is the most advanced in that
sense, but although it's able to parse a few formats, its actual compression
methods are nothing special.

Also, it's pretty clear that for most common formats (jpg,png,mp3,txt,html,exe,pdf,zip,cab,rar)
it's plainly impossible to get any significant improvement from automated analysis.

Anyway, it seems that we have to implement (lossless) parsers for known structures
first, just to be able to analyze the raw data.