> I'm not quite sure how the method I mentioned would be
> redundant,
Well, not really the method, but its output :)
> Of course it would miss areas where it could compress
> data, and it would attempt to compress areas where it
> could not compress data, but the goal is to minimize both
> of these with the very minimum of speed costs, is it not?
1. Having different code branches in the same loop is potentially
slow; keeping the algorithms separate is usually better for
performance.
2. Decoding can be noticeably faster if the decoder can just read an
"uncompressed block" flag and copy the block, instead of adapting
in sync with the encoder.
3. A static detection method can also minimize redundancy
(see the sketch after this list).
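To illustrate points 2 and 3, here's a rough self-contained sketch (not Cyan's or Christian's actual code): block framing with a 1-byte "stored" flag, plus a cheap order-0 entropy estimate as the static detection. The compress_block()/decompress_block() names and the trivial RLE stand-in are my own placeholders for whatever codec is really used; blocks are assumed to be under 64k so the length fits in two bytes.
Code:
/* Sketch only: stored-block framing + static compressibility check. */
#include <stdint.h>
#include <string.h>
#include <math.h>

#define FLAG_STORED     0
#define FLAG_COMPRESSED 1

/* --- placeholder codec (trivial RLE), swap in the real one --- */
static size_t compress_block(const uint8_t *in, size_t n, uint8_t *out) {
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        size_t run = 1;
        while (i + run < n && in[i+run] == in[i] && run < 255) run++;
        out[o++] = (uint8_t)run;  out[o++] = in[i];
        i += run;
    }
    return o;  /* may be larger than n on incompressible data */
}
static size_t decompress_block(const uint8_t *in, size_t n, uint8_t *out) {
    size_t o = 0;
    for (size_t i = 0; i + 1 < n; i += 2)
        for (uint8_t r = 0; r < in[i]; r++) out[o++] = in[i+1];
    return o;
}

/* Static detection: order-0 entropy estimate of the block.
   If the estimated compressed size is close to the raw size,
   don't even try to compress - just store. */
static int looks_incompressible(const uint8_t *in, size_t n) {
    size_t hist[256] = {0};
    double bits = 0.0;
    for (size_t i = 0; i < n; i++) hist[in[i]]++;
    for (int s = 0; s < 256; s++)
        if (hist[s]) bits -= (double)hist[s] * log2((double)hist[s] / (double)n);
    return bits / 8.0 > 0.98 * (double)n;   /* <2% estimated gain: skip */
}

/* Encoder: 1-byte flag + 2-byte length + payload per block.
   'out' must hold the worst-case expanded block (here 3 + 2*n). */
static size_t encode_block(const uint8_t *in, size_t n, uint8_t *out) {
    size_t csize = looks_incompressible(in, n) ? n
                 : compress_block(in, n, out + 3);
    if (csize >= n) {                       /* redundant: store raw block */
        out[0] = FLAG_STORED;
        memcpy(out + 3, in, n);
        csize = n;
    } else {
        out[0] = FLAG_COMPRESSED;
    }
    out[1] = (uint8_t)(csize & 0xFF);
    out[2] = (uint8_t)(csize >> 8);
    return 3 + csize;
}

/* Decoder: stored blocks are just memcpy'd, no model work at all. */
static size_t decode_block(const uint8_t *in, uint8_t *out) {
    size_t csize = (size_t)in[1] | ((size_t)in[2] << 8);
    if (in[0] == FLAG_STORED) { memcpy(out, in + 3, csize); return csize; }
    return decompress_block(in + 3, csize, out);
}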
> Well if I read what Christian did correctly, he is
> switching between algorithms;
He does, and it's certainly easier to implement that way -
symmetry means you don't have to write two handlers
(one for encoding, one for decoding), and relying on side effects
of the main method means no duplicate code.
> is this not possible in Cyan's case?
It's more that I can't really give advice about that,
because I don't know the specifics of Cyan's algorithm
(and maybe he doesn't know them either :).
And either way, if there's a superior method, even one requiring
a little more programming, why not use it? :)
> Or is the goal to create a method better than the one
> Christian is using, rather than optimize his?
It's not a matter of "optimizing his"; it has to be solved
from scratch for each specific codec, though Christian likely
enjoys doing exactly that (I mean modifying a given codec
to efficiently process random data).
So even though there are examples of such implementations
(in fact, I prefer such adaptive methods myself, e.g. for
bitrate control), I wouldn't recommend them for a fast LZ codec.
Btw, here's an old example I found (order-2 bitwise CM) with
incompressible-block skipping:
http://shelwien.googlepages.com/order_test4.rar
Code:
ver       c.size    c.time  d.time
o2_io9co  20281316  61.751  63.718   io9c + /Qprof_use
o2_ioA    20281316  62.938  66.813   io9c + probability caching on encoding (by 1M=128k bytes)
o2_ioB    20166375  64.639  67.250   ioA + copy on redundant blocks, PSize=1320, optimized for wcc386
o2_ioBa   20160989  64.156  67.204   ioA + copy on redundant blocks, PSize=256
It's different from what I said above, though, because the decoder simulates
(de)compression of stored blocks to stay in sync with the encoder's statistics.
But again, for a fast LZ this is less troublesome.
And maybe these stats (compression gain vs. speed loss) would still be useful.
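For clarity, here's a tiny sketch of that synchronization idea (the actual order-2 CM is in the archive above; this is just the principle, with a made-up Model struct and function names): when the encoder gives up and stores a block raw, both sides still push the raw bytes through the same model update, so the adaptive statistics never diverge.
Code:
/* Sketch only: keeping encoder/decoder statistics in sync across stored blocks. */
#include <stdint.h>
#include <string.h>

typedef struct { uint16_t p[256]; } Model;     /* per-byte adaptive counters */

static void model_init(Model *m) {
    for (int i = 0; i < 256; i++) m->p[i] = 1;
}

/* The same update is applied whether the byte was actually coded or the
   block was stored - that's what keeps both sides in sync. */
static void model_update(Model *m, uint8_t c) {
    m->p[c] += 32;
    if (m->p[c] > 60000)                       /* rescale to avoid overflow */
        for (int i = 0; i < 256; i++) m->p[i] = (uint16_t)((m->p[i] >> 1) | 1);
}

/* Encoder side: the coded block came out >= raw size, so it was emitted
   as 'stored' + raw bytes; replay them through the model anyway. */
static void encoder_on_redundant_block(Model *m, const uint8_t *blk, size_t n) {
    for (size_t i = 0; i < n; i++) model_update(m, blk[i]);
}

/* Decoder side: after copying a stored block, replay it the same way. */
static void decoder_copy_stored(Model *m, const uint8_t *src,
                                uint8_t *dst, size_t n) {
    memcpy(dst, src, n);
    for (size_t i = 0; i < n; i++) model_update(m, dst[i]);
}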