Thank you Matt! 
And I think it's time to describe LZSS' machinery.
LZSS compresses input in 16MB blocks. The compressed stream is a series of:
- Uncompressed block size (4 bytes / signed 32-bit integer, little-endian) - <=16MB
- Compressed data
You may concatenate compressed files - e.g.:
Code:
lzss c book1 book1.lzss
lzss c book2 book2.lzss
copy /b book1.lzss+book2.lzss books.lzss
The compressed stream is quite similar to classic LZSS:
Eight Literal/Match flags are packed into one byte.
1 = Match, 0 = Literal
The bit order differs from classic LZSS: here I go from the highest bit to the lowest, which lets us avoid shift operations when testing a flag:
Code:
if (t & 128) {
  // Match
} else {
  // Literal
}
t += t; // same as t <<= 1: the next flag moves into bit 7
Literals stored as-is.
Matches are coded as:
Code:
OOOOOOLL OOOOOOOO - for match lengths of 3..5, i.e. 0..2 + MIN_MATCH
OOOOOO11 OOOOOOOO LLLLLLLL - for match lengths of 6 up to MAX_MATCH = 3 + 255 + MIN_MATCH
i.e. offsets are stored as 14-bit integers (a 16KB window).
Matches longer than 5 bytes require an extra byte.
The decompressor is the same for all compression modes.
Compression modes are:
- cf - Fast mode - Fast string searching, Greedy parsing
- c - Normal mode - Balanced string searching, Greedy parsing
- cx - Maximum (Extreme) mode - Exhaustive string searching, Optimal parsing (improved Storer&Szymanski parsing, a la CABARC/LZMA-Ultra but on minimalistic LZSS output)
Hope this helps!