For strong LZ codecs with a large window, complete recompression (reflate-style) is not really practical, at least not for automated use.
For example, with lzma there are plenty of versions which produce subtly different output, and compression levels differ in hard-to-detect options like "fb", etc.
And the lzmarec experiment showed that entropy-level recompression (like what's used for jpegs, mp3s, etc.) only gains 1-2%,
mostly because the tokens are already optimized for the specific entropy coder.
But there's still another option that I missed when writing lzmarec - deduplication.

http://nishi.dreamhosters.com/u/token_dedup_demo_0.rar

Code:
  768,771 BOOK1.2
  536,624 wcc386.1

  535,311 0.7z            // book1,wcc386
  535,140 00000000.lzma   // lzma stream extracted with lzmadump
1,462,669 00000000.rec    // token stream decoded from lzma stream
  535,194 1.7z            // wcc386,book1
  535,023 00000001.lzma
1,464,420 00000001.rec

  276,869 0+1.dif    // bsdiff.exe 00000000.rec 00000001.rec 0+1.dif
  168,564 0+1.dif.7z // 7z a "0+1.dif.7z" "0+1.dif" 

1,070,732 0.7z+1.7z.7z    // solid compression
  867,579 0.rec+1.rec.7z  // solid compression of token streams
  703,875 = 535311+168564 // 0.7z+"0+1.dif.7z"
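
(For convenience, the diff/measurement step can be scripted; here's a rough python sketch, assuming the files from the listing above are in the current directory and that bsdiff/7z are on PATH - it just runs the same commands as quoted in the listing and prints the size comparison.)
Code:
  import os, subprocess

  # same commands as in the listing above
  subprocess.check_call(["bsdiff.exe", "00000000.rec", "00000001.rec", "0+1.dif"])
  subprocess.check_call(["7z", "a", "0+1.dif.7z", "0+1.dif"])

  dedup = os.path.getsize("0.7z") + os.path.getsize("0+1.dif.7z")  # 0.7z + compressed diff
  solid = os.path.getsize("0.7z+1.7z.7z")                          # solid 7z baseline
  print("0.7z + 0+1.dif.7z :", dedup)
  print("0.7z+1.7z.7z      :", solid)
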
Of course, it's possible to improve this result further by integrating dedup into lzmarec,
or by creating an archiver with universal support for this kind of deduplication.
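
For example, a universal dedup layer could treat token streams the same way dedup archivers treat ordinary files: content-defined chunking plus a table of chunk hashes, so token runs repeated across streams are stored once and referenced afterwards. Here's a rough python sketch of the idea (chunk sizes and the rolling hash are arbitrary choices, and exact-match chunk dedup is cruder than what bsdiff does, so the gain on the demo files would be smaller - but it extends naturally to any number of streams); it's just an illustration, not lzmarec code:
Code:
  import hashlib, random

  random.seed(0)
  GEAR = [random.getrandbits(32) for _ in range(256)]  # byte -> random value
  MASK = (1 << 13) - 1                                 # ~8KB average chunk
  MIN_CHUNK, MAX_CHUNK = 1024, 65536

  def chunks(data):
      # content-defined chunking with a gear-style rolling hash,
      # so chunk boundaries survive insertions/reordering between streams
      start, h = 0, 0
      for i in range(len(data)):
          h = ((h << 1) + GEAR[data[i]]) & 0xFFFFFFFF
          size = i + 1 - start
          if (size >= MIN_CHUNK and (h & MASK) == 0) or size >= MAX_CHUNK:
              yield data[start:i + 1]
              start, h = i + 1, 0
      if start < len(data):
          yield data[start:]

  def dedup(streams):
      # store each unique chunk once; a stream becomes a list of chunk ids
      store, refs = {}, []
      for data in streams:
          ids = []
          for c in chunks(data):
              cid = hashlib.sha1(c).digest()
              store.setdefault(cid, c)
              ids.append(cid)
          refs.append(ids)
      return store, refs

  streams = [open(f, "rb").read() for f in ("00000000.rec", "00000001.rec")]
  store, refs = dedup(streams)
  print("input bytes :", sum(len(s) for s in streams))
  print("unique bytes:", sum(len(c) for c in store.values()))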