Apologies in advance if this has been discussed already; I'm new to this forum, so a link to the existing thread would be welcome. Our software (ETL and data-quality software that does a lot of sorting and searching) uses compression to reduce file I/O bandwidth. The primary need is not saving space, though; it is increasing throughput. Most data is written once and read once (temporary files), although some is stored and read multiple times. This has been an effective strategy, but I am always looking for a better time/space tradeoff.

Currently we use QuickLZ and/or LZF, both of which achieve roughly 3:1 compression on our typical "database records" with good compression speed (LZF is generally faster). We already manage multi-core parallelism ourselves, so I'm not looking for speedups that are purely parallelism-based. Can you recommend other code or libraries for this purpose?
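To make the use case concrete, here is a minimal sketch of the kind of block-at-a-time wrapping I have in mind around temp-file writes, using liblzf's lzf_compress/lzf_decompress. This is just for illustration, not our actual code: the 64 KiB block size, the tiny length header, and the raw-storage fallback are assumptions I made up for the example.

/* Sketch: compress each record block with liblzf before writing it to a
 * temporary file, falling back to storing the raw bytes when the block
 * does not compress. Block size and header layout are illustrative only. */
#include <lzf.h>       /* liblzf: lzf_compress / lzf_decompress */
#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE (64 * 1024)   /* assumed block size for temp-file I/O */

/* Write one block: 8-byte header (compressed len, original len) + payload.
 * A compressed length of 0 means the block was stored raw. len must be > 0
 * and <= BLOCK_SIZE. */
static int write_block(FILE *out, const void *data, uint32_t len)
{
    unsigned char compressed[BLOCK_SIZE];
    /* lzf_compress returns 0 when the output buffer (len - 1 here) is too
     * small, i.e. the block would not shrink; store it raw in that case. */
    uint32_t clen = lzf_compress(data, len, compressed, len - 1);

    uint32_t header[2] = { clen, len };
    if (fwrite(header, sizeof header, 1, out) != 1)
        return -1;

    const void *payload = clen ? (const void *)compressed : data;
    uint32_t plen = clen ? clen : len;
    return fwrite(payload, 1, plen, out) == plen ? 0 : -1;
}

/* Read one block back into buf (which must hold at least BLOCK_SIZE bytes);
 * returns the number of original bytes recovered, or -1 on error. */
static long read_block(FILE *in, void *buf)
{
    uint32_t header[2];
    unsigned char compressed[BLOCK_SIZE];

    if (fread(header, sizeof header, 1, in) != 1)
        return -1;
    uint32_t clen = header[0], olen = header[1];

    if (clen == 0)                                /* block was stored raw */
        return fread(buf, 1, olen, in) == olen ? (long)olen : -1;

    if (fread(compressed, 1, clen, in) != clen)
        return -1;
    /* lzf_decompress returns the decompressed length, or 0 on error. */
    return lzf_decompress(compressed, clen, buf, olen) == olen ? (long)olen : -1;
}

Any recommended library would slot in where lzf_compress/lzf_decompress sit here, so what matters most to me is the (de)compression speed per core at roughly this 3:1 ratio.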