Hello everyone. This forum seems to be full of neat compressors, so I feel a bit awkward asking such a basic question...

I've got a fully functioning LZ compressor with greedy parsing. I'm trying to upgrade it to optimal parsing, which has led me to lazy parsing. All the explanations of lazy parsing I've found seem to describe it roughly like this:

1. Start from the beginning of the file.
2. Find the longest match at the current position.
3. If that match overlaps a longer match starting later, discard it and move on to the next position.
4. Repeat until you reach the end of the file.
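In code, I imagine those steps come out as something like the sketch below: greedy parsing plus a one-byte lookahead, where a match is deferred whenever the match at the next position is strictly longer. The brute-force matcher and the `MIN_MATCH` threshold are just placeholders for illustration, not anything from a real implementation:

```python
MIN_MATCH = 3  # hypothetical minimum useful match length

def longest_match(data, pos, window=4096):
    """Brute-force search for the longest match starting before pos."""
    best_len, best_dist = 0, 0
    for cand in range(max(0, pos - window), pos):
        length = 0
        while (pos + length < len(data)
               and data[cand + length] == data[pos + length]):
            length += 1
        if length > best_len:
            best_len, best_dist = length, pos - cand
    return best_len, best_dist

def lazy_parse(data):
    """Greedy parse with one-byte lookahead: if the match starting at
    pos+1 is strictly longer, emit a literal and defer to it."""
    out = []  # tokens: ('lit', byte) or ('match', dist, len)
    pos = 0
    while pos < len(data):
        mlen, mdist = longest_match(data, pos)
        if mlen >= MIN_MATCH:
            nlen, _ = longest_match(data, pos + 1)
            if nlen > mlen:
                # Lazy step: drop this match, the longer one wins next turn.
                out.append(('lit', data[pos]))
                pos += 1
                continue
            out.append(('match', mdist, mlen))
            pos += mlen
        else:
            out.append(('lit', data[pos]))
            pos += 1
    return out

def decode(tokens):
    """Undo the parse, to sanity-check that it round-trips."""
    out = bytearray()
    for t in tokens:
        if t[0] == 'lit':
            out.append(t[1])
        else:
            _, dist, length = t
            for _ in range(length):
                out.append(out[-dist])  # overlapping copies are fine
    return bytes(out)
```

Note this only ever looks one position ahead, so it's still a local heuristic, which is part of what's confusing me below.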

But this seems far from optimal to me. If there are chains of matches that each overlap by only one byte, a lot of matches will be discarded. My next idea was to instead shorten the shorter of the two overlapping matches, but that is also sub-optimal for certain chains of matches.

Is this the correct way to interpret lazy parsing? It seems too simple to have gotten confused about. Or maybe I was mistaken in thinking it was optimal?

Anyways, thanks for all the insightful posts you guys have made.