
Thread: SandForce SSD Compression

  1. #1 - Member (France)

    Does anyone have any information about the compression used by SandForce in their new "DuraWrite" controller?
    They seem to be able to compress extremely fast (~500 MB/s) with this dedicated hardware.
    I'm wondering what the inner mechanics and algorithms could be...

  2. #2 - Member (Germany)
    I think they may use hardware-based LZO compression.
    LZO, from http://www.oberhumer.com/, is very fast and has been well proven over a long time.
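    A minimal sketch of that kind of fast per-block compression, using Python's zlib at its lowest setting purely as a stand-in for LZO (the codec and block size SandForce actually uses are unknown; 512 KB here is just an assumed unit):
    Code:
# Per-block compression with a fast LZ-family codec; zlib level 1 stands in
# for LZO, since the controller's real codec is not public.
import os
import time
import zlib

BLOCK = 512 * 1024  # assumed compression unit

# Half random, half zero bytes: somewhat compressible test data.
data = os.urandom(BLOCK // 2) + bytes(BLOCK // 2)

t0 = time.perf_counter()
packed = zlib.compress(data, 1)  # fastest zlib setting
dt = time.perf_counter() - t0

print(f"{len(data)} -> {len(packed)} bytes ({len(packed) / len(data):.0%})")
print(f"~{len(data) / dt / 1e6:.0f} MB/s in software on one core")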

    best regards

  3. #3 - Member (France)
    Proof?

  4. #4 - Administrator Shelwien (Kharkov, Ukraine)

  5. #5 - Member m^2 (Ślůnsk, PL)
    I don't think you'll find any info about the algorithm.
    Anyway, you can be sure that the speed is a result of parallelism. An erase block is 512 KB, so at ~500 MB/s they process about 1000 blocks per second. The compression granularity might even be coarser, depending on how they implemented deduplication, but I would be hugely surprised if it were finer; the erase block is just a natural threshold.
    Though some things don't add up under this assumption. They claim 100 µs access time. Assuming 512 KB blocks and that everything but decompression is negligible, they'd need about 5 GB/s of decompression throughput. So maybe the block size is much smaller?
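    A quick back-of-the-envelope check of the two estimates above (the 512 KB block and the 100 µs latency are this thread's assumptions, not published SandForce figures):
    Code:
# Throughput arithmetic behind the post above; all inputs are assumptions.
ERASE_BLOCK = 512 * 1024   # bytes, assumed compression unit
WRITE_SPEED = 500 * 10**6  # ~500 MB/s claimed sequential write
ACCESS_TIME = 100e-6       # 100 us claimed access latency

print(WRITE_SPEED / ERASE_BLOCK)          # ~954 blocks/s, i.e. roughly 1000
# If a random read had to decompress a whole erase block within 100 us:
print(ERASE_BLOCK / ACCESS_TIME / 10**9)  # ~5.2 GB/s of decompression needed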

  6. #6 - Administrator Shelwien (Kharkov, Ukraine)
    1. Compression can be done offline if necessary, so write speed is not directly related to compression speed.
    2. Yes, I also think they have much smaller independently compressed blocks (likely 4 KB) because of the random-access requirements. But it's potentially possible to have large blocks while still providing random access by using dictionary-based methods (see the sketch after this list).
    3. IMHO it's more interesting that they had to implement internal RAID to reduce the error rate, because with compression errors can kill all the data.
    4. It's possible to find more info about the algorithm by extracting the firmware and reverse-engineering it.
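    A minimal sketch of point 2, assuming 4 KB independently compressed units and using zlib only as a placeholder for the unknown hardware codec; the point is that a random read only has to decompress one small unit:
    Code:
# Independently compressed fixed-size logical blocks: random access touches
# a single unit. zlib is a placeholder for the unknown hardware codec.
import zlib

UNIT = 4096  # assumed compression unit (4 KB)

def compress_volume(data: bytes) -> list[bytes]:
    """Split data into 4 KB units and compress each one on its own."""
    return [zlib.compress(data[i:i + UNIT], 1) for i in range(0, len(data), UNIT)]

def read_unit(units: list[bytes], index: int) -> bytes:
    """Random access: decompress just the requested unit."""
    return zlib.decompress(units[index])

volume = (b"some fairly repetitive user data " * 1024)[:UNIT * 8]
units = compress_volume(volume)
assert read_unit(units, 3) == volume[3 * UNIT:4 * UNIT]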

  7. #7 - Member m^2 (Ślůnsk, PL)
    I don't think it's offline, because they said it decreased write amplification. Offline compression could have that effect too, but a much smaller one, and doing it offline doesn't seem to offer many benefits.
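    A toy write-amplification comparison that makes the same point; the 0.5 compression ratio is an illustrative assumption and garbage collection is ignored:
    Code:
# Why inline compression cuts write traffic more than offline compression.
# Illustrative numbers only, not measurements.
host_writes = 100.0  # GB the host asks to write
ratio = 0.5          # assumed average compression ratio

# Inline: data reaches the flash already compressed.
inline_flash_writes = host_writes * ratio                 # 50 GB

# Offline: data lands uncompressed first, then is rewritten compressed later.
offline_flash_writes = host_writes + host_writes * ratio  # 150 GB

print("inline  WA:", inline_flash_writes / host_writes)   # 0.5
print("offline WA:", offline_flash_writes / host_writes)  # 1.5
# (Offline compression still frees space, which helps later garbage collection,
#  but the immediate write traffic is higher, not lower.)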

  8. #8 - Member m^2 (Ślůnsk, PL)
    A weekend and a fresher mind made me notice what nonsense I was talking in this thread.
    The erase block size is a natural boundary for the compressed size, not the uncompressed size. So it's likely that they are packing smaller compressed pieces into erase blocks.
    I wonder if they let compressed pieces spill over erase-block boundaries, because otherwise the placement becomes a bin-packing problem, which is NP-hard.
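    A minimal sketch of that packing step, assuming 512 KB erase blocks, no spilling across boundaries, and a simple first-fit heuristic (the usual practical shortcut for bin packing; the controller's real placement policy is unknown):
    Code:
# Pack variable-size compressed pieces into fixed-size erase blocks with
# first-fit, i.e. without letting a piece cross an erase-block boundary.
ERASE_BLOCK = 512 * 1024  # bytes, assumed

def first_fit(piece_sizes: list[int]) -> list[list[int]]:
    """Place each compressed piece into the first erase block it fits in."""
    blocks: list[list[int]] = []  # each entry: sizes of the pieces in one block
    for size in piece_sizes:
        for block in blocks:
            if sum(block) + size <= ERASE_BLOCK:
                block.append(size)
                break
        else:
            blocks.append([size])  # open a new erase block
    return blocks

# Example: compressed pieces of varying sizes (bytes).
pieces = [40_000, 120_000, 300_000, 64_000, 200_000, 500_000, 30_000]
for i, block in enumerate(first_fit(pieces)):
    print(f"erase block {i}: {sum(block)} / {ERASE_BLOCK} bytes used")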
