I've got this TL-WR840N router, but it lacks detailed Wi-Fi settings.
So I decided to download the config from it and edit it manually - that had worked before with other routers.
But in this case the result was a random-looking binary blob.
I googled it and found how to decrypt it: http://teknoraver.net/software/hacks/tplink/
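For reference, the decryption described there boils down to DES-ECB with a fixed key. A minimal Python sketch using pycryptodome (the key value and the 16-byte MD5 header come from that writeup - verify them against your own dump):

    from Crypto.Cipher import DES  # pycryptodome

    TPLINK_DES_KEY = bytes.fromhex("478DA50BF9E3D2CF")  # fixed key per that writeup

    def decrypt_config(blob: bytes) -> bytes:
        plain = DES.new(TPLINK_DES_KEY, DES.MODE_ECB).decrypt(blob)
        return plain[16:]  # skip the reported 16-byte MD5 digest header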
But after decrypting, it turned out that they have since added some obscure LZSS-like compression on top.
I tried to write a decoder, but it wasn't so easy - I had to reverse-engineer the one in https://www.nirsoft.net/utils/router..._recovery.html
(this utility decodes this type of config, but there's no option to encode it back).
So I made my own encoder for it, and it seems to work all right (I edited the config, uploaded it to the router, and it works).
First of all I would like to thank you for your great work.
It works perfectly with WR840N.
I tried to decrypt a config file of an Archer C2v1 and it also works perfectly.
But if I try to decrypt a config file of an Archer C2v3, the decryption fails.
Do you have any information about changes to the encryption key or the compression method?
1. Some TP-LINK routers just don't have the compression stage. Try simply decrypting?
2. Try https://www.nirsoft.net/utils/router..._recovery.html
3. If nirsoft utility works, I can try finding what it does, if you'd give me a sample of encrypted config.
See decode.inc - maybe somebody can identify the method?
Frankly, this looks like some sort of custom variation on the hundreds of existing small-platform LZSS packers.
The fact that it uses an "interlaced" Exp-Golomb-1 code suggests that this is likely a relatively modern scheme (for example, I never saw Exp-Golomb-1 used in packers for 8-bit platforms written in the 1990s). I am pretty sure that using Exp-Golomb-1 to encode match lengths is not a good idea for most kinds of data I have worked with, which makes it less likely that this is an existing scheme taken from elsewhere. At the same time, using Exp-Golomb-1 for the top bits of offsets is likely to be pretty efficient.
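For reference, by "interlaced" I mean that prefix and payload bits alternate instead of being grouped. A minimal decoder sketch, assuming ZX0-style interleaving and MSB-first bit order (the actual layout in these configs may well differ):

    def bits_msb_first(data):
        # yield bits MSB-first from a byte string
        for byte in data:
            for shift in range(7, -1, -1):
                yield (byte >> shift) & 1

    def read_interlaced_expgolomb1(bits):
        # Assumed layout (not a confirmed format): a 1-bit means one
        # more payload bit follows, a 0-bit terminates the prefix, and
        # a single extra payload bit follows because the order is 1.
        value = 1
        while next(bits):                       # continuation bit
            value = (value << 1) | next(bits)   # interleaved payload bit
        return ((value << 1) | next(bits)) - 2  # order-1 suffix bit

For example, read_interlaced_expgolomb1(bits_msb_first(b'\x00')) decodes to 0.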
The question is how their original encoder was supposed to work.
Normal LZSS (with packed literal flags) can just use a fixed-size token buffer.
Here, though, I had to encode in two passes: a normal encoding pass producing two separate output streams,
followed by a 2-to-1 stream transform driven by decoding (it interleaves the data bytes in the order the decoder accesses them), roughly as sketched below.
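In sketch form, the second pass is just this (names illustrative, not my actual code):

    def interleave_streams(bit_stream, byte_stream, access_order):
        # Second pass: merge the two encoder outputs into the single
        # buffer the decoder expects.  access_order is the sequence of
        # reads recorded while virtually decoding the two separate
        # streams: 'bits' for each byte of packed flag/prefix bits,
        # 'byte' for each plain data byte.
        bits, data = iter(bit_stream), iter(byte_stream)
        out = bytearray()
        for kind in access_order:
            out.append(next(bits) if kind == 'bits' else next(data))
        return bytes(out)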
Making it single-pass would be much more complicated: I'd have to maintain a variable-length
token queue plus separate bit and byte streams, with asynchronous decoder and encoder steps.
I wonder if I'm missing some simple way to do this.
All compressors of this kind that I know of typically build some internal representation of the graph corresponding to the LZSS encoding. A greedy parser, lazy evaluator, or some other kind of optimizer then tries to minimize the number of bits in the compressed representation (without actually generating the compressed stream). During the first pass they choose the path through the graph; during the second pass they actually generate the binary representation of the compressed stream.
I suspect this is pretty much exactly what you ended up doing. I do not know of any single-pass schemes that would be applicable to the mixed bit/byte compressed data streams.
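In sketch form, the first pass of such a two-pass scheme could look like this (illustrative cost functions and names, not any particular packer's code):

    def optimal_parse(data, matches, literal_cost, match_cost):
        # Shortest path over the LZSS graph, counting bits only.
        # matches(i) yields (length, offset) candidates at position i;
        # the cost functions return encoded sizes in bits.
        n = len(data)
        cost = [float('inf')] * (n + 1)
        step = [None] * (n + 1)  # back-pointer: chosen token per position
        cost[0] = 0
        for i in range(n):
            if cost[i] == float('inf'):
                continue
            c = cost[i] + literal_cost(data[i])  # literal edge
            if c < cost[i + 1]:
                cost[i + 1], step[i + 1] = c, ('lit', data[i])
            for length, offset in matches(i):    # match edges
                c = cost[i] + match_cost(length, offset)
                if c < cost[i + length]:
                    cost[i + length], step[i + length] = c, ('match', length, offset)
        tokens, i = [], n       # walk the back-pointers; the second
        while i > 0:            # pass emits bits along this path
            t = step[i]
            tokens.append(t)
            i -= 1 if t[0] == 'lit' else t[1]
        tokens.reverse()
        return tokens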
It's probably something different in this case.
Maybe a memory-to-memory implementation allows making it simpler somehow (instead of my streams).
Because what I implemented is just a simple greedy parser, and it still compresses better than the original encoder
(original: 5176 bytes, mine: 4319).
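For context, "greedy" here just means taking the longest available match at each position. A minimal sketch (window size and minimum match length are placeholders, not the real format's limits):

    def greedy_parse(data, window=4096, min_len=2):
        # At each position take the longest match within the window,
        # otherwise emit a literal; overlapping matches are allowed.
        i, tokens = 0, []
        while i < len(data):
            best_len = best_off = 0
            for j in range(max(0, i - window), i):
                length = 0
                while i + length < len(data) and data[j + length] == data[i + length]:
                    length += 1
                if length > best_len:
                    best_len, best_off = length, i - j
            if best_len >= min_len:
                tokens.append(('match', best_len, best_off))
                i += best_len
            else:
                tokens.append(('lit', data[i]))
                i += 1
        return tokens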