
Thread: Fastest compression for high-speed I/O?

  1. #1
    Member
    Join Date
    Jul 2011
    Location
    boulder co usa
    Posts
    3
    Thanks
    0
    Thanked 0 Times in 0 Posts

    Fastest compression for high-speed I/O?

    Please forgive me, and send a thread link, if this has already been discussed; I am new to this forum. Our software (ETL and data-quality software that performs lots of sorts, searches, etc.) uses compression to reduce file I/O bandwidth. The primary need, however, is not reducing space; it is increasing throughput. Most data is written once and read once (temporary files), although some is stored and read multiple times.

    This has been an effective strategy, but I am always looking for a better time/space tradeoff. Currently we use QuickLZ and/or LZF, both of which give about 3:1 compression ratios on our typical "database records" at pretty good compression speeds (LZF is generally faster). We already manage multi-core parallelism automatically, so I'm not looking for speedups that are simply parallelism-based. Can you recommend other code or libraries for this purpose?
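    For concreteness, here is a minimal sketch of the kind of LZF call involved (liblzf's lzf_compress returns 0 when the result does not fit in the output buffer, so a raw-storage fallback is a common convention; this is illustrative, not our production code):

    [CODE]
    #include <stdio.h>
    #include <string.h>
    #include "lzf.h"   /* liblzf: lzf_compress / lzf_decompress */

    /* Compress one block; fall back to storing it raw when LZF cannot
     * shrink it (lzf_compress returns 0 if the result does not fit in
     * the output buffer, i.e. the block is effectively incompressible). */
    static unsigned int pack_block(const char *in, unsigned int in_len,
                                   char *out, int *stored_raw)
    {
        /* Offer LZF one byte less than the input, so a nonzero return
         * always means the block actually got smaller. */
        unsigned int clen = lzf_compress(in, in_len, out, in_len - 1);
        if (clen == 0) {                  /* incompressible: store raw */
            memcpy(out, in, in_len);
            *stored_raw = 1;
            return in_len;
        }
        *stored_raw = 0;
        return clen;
    }

    int main(void)
    {
        char in[4096], packed[4096], back[4096];
        int raw;

        memset(in, 'A', sizeof in);       /* trivially compressible test data */

        unsigned int clen = pack_block(in, sizeof in, packed, &raw);
        unsigned int dlen = raw ? clen
                                : lzf_decompress(packed, clen, back, sizeof back);
        printf("in=%zu packed=%u raw=%d out=%u\n", sizeof in, clen, raw, dlen);
        return 0;
    }
    [/CODE]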

  2. #2
    Member zody's Avatar
    Join Date
    Aug 2009
    Location
    Germany
    Posts
    90
    Thanks
    0
    Thanked 1 Time in 1 Post
    You could probably take a look at this thread...
    http://encode.su/threads/1266-In-mem...y)-compressors

  3. #3
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    889
    Thanks
    482
    Thanked 279 Times in 119 Posts
    Latest benchmark results were provided by m^2, using inikep's test program.
    http://encode.su/threads/1266-In-mem...ll=1#post25401

    They are all single-thread figures.
    All of the algorithms can be parallelized, AFAIK.

    Note, however, that the open-source licences differ (they are all open source, but some are GPL, some are LGPL, others are BSD, etc.).

  4. #4
    Member
    Join Date
    Jul 2011
    Location
    boulder co usa
    Posts
    3
    Thanks
    0
    Thanked 0 Times in 0 Posts
    [QUOTE=Cyan;25739]Latest benchmark results were provided by m^2, using inikep's test program.[/QUOTE]

    Can you tell me what block size was used for the compression? The entire file in memory, or chunks at a time?

  5. #5
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    889
    Thanks
    482
    Thanked 279 Times in 119 Posts
    I believe it was the entire file.
    The test program is freely available as source code; Inikep released it under the GPLv3, so you can check and modify it as you please.
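    If you want to compare whole-file versus chunked compression yourself, the usual chunked approach is to compress fixed-size pieces and prefix each with its original and compressed lengths. A minimal sketch (again assuming liblzf; the 64 KB chunk size and the native-endian header layout are made up for illustration):

    [CODE]
    #include <stdio.h>
    #include <stdint.h>
    #include "lzf.h"

    #define CHUNK 65536   /* hypothetical chunk size; tune for your workload */

    /* Read a file in CHUNK-sized pieces and write each one framed as:
     *   [u32 original_len][u32 compressed_len][payload]
     * A compressed_len equal to original_len marks a raw (uncompressed)
     * chunk, which happens when lzf_compress returns 0. */
    int compress_chunked(FILE *in, FILE *out)
    {
        static char src[CHUNK], dst[CHUNK];
        size_t n;

        while ((n = fread(src, 1, CHUNK, in)) > 0) {
            uint32_t olen = (uint32_t)n;
            /* Offer one byte less than the input, so success means a
             * real size reduction; skip tiny tail chunks entirely. */
            uint32_t clen = (olen > 1) ? lzf_compress(src, olen, dst, olen - 1)
                                       : 0;
            const char *payload = clen ? dst : src;
            uint32_t    plen    = clen ? clen : olen;

            if (fwrite(&olen, sizeof olen, 1, out) != 1 ||
                fwrite(&plen, sizeof plen, 1, out) != 1 ||
                fwrite(payload, 1, plen, out) != plen)
                return -1;                /* write error */
        }
        return ferror(in) ? -1 : 0;
    }
    [/CODE]

    Smaller chunks cap memory use and latency, but cost some compression ratio, since each chunk starts with an empty dictionary; compressing the entire file in one block is the other extreme.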

    Regards

