That would be fine, indeed, but you'd pay for it when loading some of your documents. For example, compressing pages separately in a PDF file is really bad from a compression point of view, but it's exactly what makes random access to a single page fast. If that changed, both speed and memory requirements would suffer, and you'd wait minutes before you could even look at the first page of your document.
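Here's a quick toy sketch in Python to show the tradeoff - the "pages" are just made-up repetitive text, not real PDF content, but the effect is the same: per-page streams keep random access while a single solid stream exploits the redundancy between pages.

```python
import zlib

# Made-up "pages": highly similar content, so they share lots of redundancy.
pages = [("Page %d: " % i + "lorem ipsum dolor sit amet " * 50).encode()
         for i in range(100)]

separate = sum(len(zlib.compress(p)) for p in pages)  # one stream per page
solid = len(zlib.compress(b"".join(pages)))           # one stream for everything

print("separate: %6d bytes (fast random access to any page)" % separate)
print("solid:    %6d bytes (must decompress from the start)" % solid)
```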
This doesn't mean it's not worth it - sometimes you get better results on precompressed files even with ultra-fast compressors like THOR - but it often collides with more important things.
There are two things I dislike about recompression today. One is that lossy recompression is done even in some archivers that are otherwise lossless, and the other is the speed, as mentioned in maadjordan's very first post.
Fast lossless recompression is definitely possible; it's more complicated from the coding side, but by no means impossible.
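Here's a minimal sketch of the core idea, assuming zlib/deflate streams and using only a compression-level probe as the safety net - real precompressors test far more parameter combinations, but the principle is the same: only expand a stream if you can prove you can rebuild it bit-for-bit.

```python
import zlib

def try_precompress(stream: bytes):
    """Return (raw_data, level) if the stream can be reproduced exactly,
    else None. Real tools probe far more parameters (window size, memory
    level, strategy), not just the compression level."""
    try:
        raw = zlib.decompress(stream)
    except zlib.error:
        return None                      # not a zlib stream at all
    for level in range(10):              # probe compression levels 0..9
        if zlib.compress(raw, level) == stream:
            return raw, level            # bit-exact round trip found
    return None                          # unreproducible: store verbatim

# Usage: only streams with a verified round trip get expanded.
original = zlib.compress(b"some deflated payload " * 100, 6)
result = try_precompress(original)
if result is not None:
    raw, level = result
    assert zlib.compress(raw, level) == original   # guaranteed lossless
```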
Another benefit of recompression is having everything in a decompressed state, so patterns can be spotted across different streams or even files. For example, I recently stumbled upon Microsoft's new SharePoint documentation:
http://www.microsoft.com/downloads/d...displaylang=en
The documentation consists of 152 PDFs and is also available as a ZIP archive - 134 MB in size. After precompressing all the PDFs, everything expands to 661 MB and can get a lot smaller - 75 MB using THOR, 32 MB (!!) using CCM. Creating an SFX from this would slow down decompression a bit (recompressing takes 3-4x longer than ZIP extraction, plus THOR/CCM decompression - using RZM would be nice here), but it would drastically reduce Microsoft's traffic and the download time for users.
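Roughly, the pipeline would look like the sketch below - with Python's lzma standing in for THOR/CCM (neither has a Python binding), a naive header scan instead of a real PDF parser, and the restore information omitted for brevity. The point is just that expanding every deflate stream first lets the solid compressor see redundancy across all 152 PDFs at once.

```python
import glob, lzma, zlib

def expand_zlib_streams(data: bytes) -> bytes:
    """Naive scan for the most common zlib header (0x78 0x9c) and inflate
    whatever decodes cleanly. A real precompressor parses the PDF structure
    instead of guessing, and records what it needs to restore the original."""
    out, i = bytearray(), 0
    while i < len(data):
        if data[i] == 0x78 and i + 1 < len(data) and data[i + 1] == 0x9C:
            try:
                d = zlib.decompressobj()
                raw = d.decompress(data[i:])
                if d.eof:                       # a complete stream was found
                    out += raw                  # expanded (restore info omitted)
                    i = len(data) - len(d.unused_data)
                    continue
            except zlib.error:
                pass                            # false positive, keep scanning
        out.append(data[i])
        i += 1
    return bytes(out)

# Solid compression over all expanded documents in one stream.
expanded = b"".join(expand_zlib_streams(open(f, "rb").read())
                    for f in sorted(glob.glob("*.pdf")))
archive = lzma.compress(expanded, preset=9)
```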