
Thread: Filesystem benchmark

  1. #31
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,511
    Thanks
    746
    Thanked 668 Times in 361 Posts
    s/minizip/miniz/

  2. #32
    Member
    Join Date
    Jan 2007
    Location
    Moscow
    Posts
    239
    Thanks
    0
    Thanked 3 Times in 1 Post
    Can anyone provide a Windows build, please?
    Thanks.

  3. #33
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    865
    Thanks
    463
    Thanked 260 Times in 107 Posts
    Mmmh, I failed to build it (Win32, MinGW, using GCC 4.6.2) due to numerous issues in lzham...
    Attached Files

  4. #34
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by Bulat Ziganshin View Post
    s/minizip/miniz/
    Thanks, corrected internally.
    Quote Originally Posted by nimdamsk View Post
    Can anyone provide a Windows build, please?
    Thanks.
    Here you are.
    Quote Originally Posted by Cyan View Post
    Mmmh, I failed to build it (Win32, MinGW, using GCC 4.6.2) due to numerous issues in lzham...
    Is the mingw 32-bit?
    If yes, it's the 1st thing I mentioned in the release notes. :P
    It will stay like this until Rich fixes the issues or I lose hope for a reasonably quick resolution (and remove LZHAM).
    Attached Files

  5. #35
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    865
    Thanks
    463
    Thanked 260 Times in 107 Posts
    If yes, it's the 1st thing I mentioned in the release notes. :P
    indeed ...

  6. #36
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,423
    Thanks
    223
    Thanked 1,052 Times in 565 Posts
    I tried compiling it with lzham, and it doesn't seem like that much of a problem, there're 3 main issues:
    1. Some syntax requires -fpermissive
    2. __forceinline not supported by gcc (can be defined as __attribute__((always_inline)) )
    3. lzham_win32_threading.h, included from lzham_threading.h uses some new winapi which is not supported by mingw.
    It should be possible to just include the corresponding library from VS, but I just modified lzham_threading.h
    to include another header - it compiles with either lzham_pthreads_threading.h or lzham_null_threading.h
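    For reference, a small portability shim along these lines (only a sketch; the FORCE_INLINE name is made up here, it is not what LZHAM actually uses) maps the MSVC keyword onto GCC's attribute:
    Code:
    #if defined(_MSC_VER)
        #define FORCE_INLINE __forceinline
    #elif defined(__GNUC__)
        #define FORCE_INLINE inline __attribute__((always_inline))
    #else
        #define FORCE_INLINE inline
    #endif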

    But in the end, maybe something is still wrong, because when I run
    fsbench.exe LZHAM,1 fsbench.exe
    it starts printing
    c0xb00020 1278976
    indefinitely.

  7. #37
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by Shelwien View Post
    I tried compiling it with lzham, and it doesn't seem like that much of a problem, there're 3 main issues:
    1. Some syntax requires -fpermissive
    2. __forceinline not supported by gcc (can be defined as __attribute__((always_inline)) )
    3. lzham_win32_threading.h, included from lzham_threading.h uses some new winapi which is not supported by mingw.
    It should be possible to just include the corresponding library from VS, but I just modified lzham_threading.h
    to include another header - it compiles with either lzham_pthreads_threading.h or lzham_null_threading.h

    But in the end, maybe something is still wrong, because when I run
    fsbench.exe LZHAM,1 fsbench.exe
    it starts printing
    c0xb00020 1278976
    indefinitely.
    Oops, it's my leftover debug code. Sorry, I'll fix it soon, probably during the weekend.
    ADDED: BTW, it's not indefinite, and it shouldn't have a notable impact on speed. It just runs many iterations because the input file is small; this is designed to give a meaningful time per iteration even with fast codecs. It's not a great default for testing only slow codecs, but I don't see a good fix. You can try the -s20 switch.
    Last edited by m^2; 23rd March 2012 at 01:29.

  8. #38
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    OK, I managed to fix it today.

    [+] added lzg
    [~] code cleanup
    [!] removed leftover debug code
    Attached Files

  9. #39
    Member
    Join Date
    May 2008
    Location
    Antwerp , country:Belgium , W.Europe
    Posts
    487
    Thanks
    1
    Thanked 3 Times in 3 Posts
    Can anyone make a Win32 (or Win64) executable ?

  10. #40
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    I just found another bug in my LZHAM code... the default mode didn't work. So I'm releasing yet another version this quickly. A win64 binary is inside.

    [+] added Doboz
    [~] code cleanup and minor improvements
    [!] LZHAM's default mode didn't work...

    Notes:
    Doboz seems to have very inconsistent performance. On some data it's great, on other data not so much.

    Code:
    e:\projects\benchmark04\tst>fsbench doboz lz4hc lzo,1x1_999 zlib,9 -w0 -i1 -s1 ..\nbbs.tar
    memcpy               = 15 ms (1925 MB/s), 30289408->30289408
    Codec         version    args       Size (Ratio)    C.Speed      D.Speed
    Doboz      2011-03-19            6184607 (x 4.90)   C:   1 MB/s  D: 444 MB/s
    LZ4hc             r12            9498793 (x 3.19)   C:  14 MB/s  D: 577 MB/s
    LZO              2.05 1x1_999    9426077 (x 3.21)   C:   4 MB/s  D: 304 MB/s
    zlib            1.2.5       9    9086670 (x 3.33)   C:   8 MB/s  D: 162 MB/s
    done... (1x1 iteration(s))
    
    e:\projects\benchmark04\tst>fsbench doboz lz4hc lzo,1x1_999 zlib,9 -b131072 -w0 -i1 -s1 ..\nbbs.tar
    memcpy               = 14 ms (2063 MB/s), 30289408->30289408
    Codec         version    args       Size (Ratio)    C.Speed      D.Speed
    Doboz      2011-03-19            9889560 (x 3.06)   C:   1 MB/s  D: 390 MB/s
    LZ4hc             r12           10033655 (x 3.02)   C:  15 MB/s  D: 589 MB/s
    LZO              2.05 1x1_999    9760175 (x 3.10)   C:   4 MB/s  D: 310 MB/s
    zlib            1.2.5       9    9297559 (x 3.26)   C:   8 MB/s  D: 158 MB/s
    done... (1x1 iteration(s))
    
    e:\projects\benchmark04\tst>fsbench doboz lz4hc lzo,1x1_999 zlib,9 -w0 -i1 -s1 ..\HANNOMB.ttf
    memcpy               = 20 ms (1612 MB/s), 33815824->33815824
    Codec         version    args       Size (Ratio)    C.Speed      D.Speed
    Doboz      2011-03-19           19794365 (x 1.71)   C:   1 MB/s  D: 195 MB/s
    LZ4hc             r12           19668291 (x 1.72)   C:   9 MB/s  D: 556 MB/s
    LZO              2.05 1x1_999   19808141 (x 1.71)   C:   2 MB/s  D: 223 MB/s
    zlib            1.2.5       9   18331590 (x 1.84)   C:   6 MB/s  D: 97 MB/s
    done... (1x1 iteration(s))
    Please note that the timings above are very inaccurate.

  11. #41
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Forgot the attachment...
    And a note to the admins:
    Edit -> Advanced doesn't work in my browser; clicking it doesn't seem to have any effect.
    Attached Files

  12. #42
    Member
    Join Date
    Jan 2007
    Location
    Moscow
    Posts
    239
    Thanks
    0
    Thanked 3 Times in 1 Post
    Thank you for fsbench.
    LZP1 crashes on Calgary corpus tarred with 7-Zip (http://mailcom.com/challenge/corpus.zip).
    No crash if every file of the corpus is compressed individually.
    Also, I didn't manage to make fsbench use more than 1 thread for compression; the "-t2" switch doesn't seem to work.
    I used it on Windows 7 x64.
    Last edited by nimdamsk; 24th March 2012 at 23:50.

  13. #43
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    The readme explains the benchmarking process and how to use the -t switch.
    In short, you need to specify -b too, as in the example below. Maybe I should print a warning...
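    For instance (the file name here is just a placeholder), a run using 2 threads on 128 KB blocks would look like:
    Code:
    fsbench lz4 -t2 -b131072 file.tar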
    Thanks for the Calgary info, I'll look into it.

    ADDED: I can reproduce the crash.
    ADDED: How I love crashes that don't happen in the debug mode...
    ADDED: I decided to just remove it. It's bad anyway and for me it's not worth the fight.
    Last edited by m^2; 25th March 2012 at 01:05.

  14. #44
    Member cfeck's Avatar
    Join Date
    Jan 2012
    Location
    Germany
    Posts
    50
    Thanks
    0
    Thanked 17 Times in 9 Posts
    Thanks Matt, nice test program.

    To compile under Linux, I first had to rename directory "LZHAM" to "lzham", then apply this patch: http://paste.kde.org/445934/ (against 0.10c) - you might need to add proper "if (platform)" conditions.

    Also, when "lzjb" is faced with incompressible data, it looks like it just returns a "cannot compress" flag, and then fsbench reports 223753 MB/s decompression speed on my machine.

  15. #45
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Thanks for the patch.

    The speed issue is expected and documented in the readme. It happens with all algorithms.
    Though it wouldn't be bad to actually fix it...

    And I'm Maciek, not Matt BTW.
    Last edited by m^2; 25th March 2012 at 13:31.

  16. #46
    Member cfeck's Avatar
    Join Date
    Jan 2012
    Location
    Germany
    Posts
    50
    Thanks
    0
    Thanked 17 Times in 9 Posts
    Quote Originally Posted by m^2 View Post
    And I'm Maciek, not Matt BTW.
    Oops, pardon, I was confused...

  17. #47
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by cfeck View Post


    Oops, pardon, I was confused...
    No problem, happens sometimes.

  18. #48
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    I have a status update and I seek advice.

    There was a problem with multithreaded work scheduling. I used to divide the data into equally sized parts (as far as possible within the block-splitting scheme defined by the user) and give each thread its own part, first to compress and then to decompress.
    This wasn't great, because quite a few algorithms have different speeds on different pieces of data. And since many test files are just pieces of unrelated data merged together, this was quite a practical issue.
    I moved to dynamic scheduling, where threads don't have any predefined data to work on, but instead ask a central entity for small work assignments.
    It proved to be effective; the best result on real-world data that I've seen was a 20% performance improvement on my dual core. I didn't test much, and I expect to see even greater gains once I benchmark its effect on decompression of partly compressible data. It has drawbacks, though.
    One thing I don't like about it is that I expect it to have low scalability. This is quite easy to fix if needed, but I don't want to complicate it without evidence that it would indeed be helpful.
    The other thing is locking overhead. With fast codecs and small pieces of data it's a real issue. I tried to measure it by introducing a pseudocodec called 'nop' that doesn't do anything. From the measurements taken, with a reasonable work item size of 256 KB (work item size and block size are tweakable independently), the fastest codecs would get 2% overhead on my machine.
    But I wonder how accurate this measurement is. The test is effectively single-threaded, with lock contention far greater than real codecs would suffer from. So maybe dividing the result by the number of cores would be more accurate? Or maybe there's a better way to measure the effect?
    Also, I am wondering what I should do with the data. Maybe it would be good to measure the overhead before each test and subtract it from the results?

    I would be thankful for your thoughts.
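    To make the scheme concrete, here is a minimal sketch of the dynamic scheduling idea (illustrative names only, not the actual fsbench code; it assumes a mutex-protected cursor handing out 256 KB work items):
    Code:
    #include <algorithm>
    #include <atomic>
    #include <cstddef>
    #include <functional>
    #include <mutex>
    #include <thread>
    #include <vector>

    // Central dispenser: threads don't get predefined shares, they ask for the
    // next small work item until the input is exhausted.
    struct WorkQueue {
        WorkQueue(size_t total_, size_t item_) : total(total_), item(item_) {}

        bool get(size_t& off, size_t& len) {      // false once everything is assigned
            std::lock_guard<std::mutex> g(lock);
            if (next >= total) return false;
            off = next;
            len = std::min(item, total - next);
            next += len;
            return true;
        }

        std::mutex lock;
        size_t next = 0;   // offset of the next unassigned byte
        size_t total;      // total input size
        size_t item;       // work item size
    };

    // Worker: grab chunks until none are left. The inner loop is a placeholder
    // for the real codec call (compress, then later decompress).
    void worker(WorkQueue& q, const char* input, std::atomic<size_t>& sink) {
        size_t off, len;
        while (q.get(off, len)) {
            size_t sum = 0;
            for (size_t i = 0; i < len; ++i) sum += (unsigned char)input[off + i];
            sink += sum;
        }
    }

    int main() {
        std::vector<char> data(30 << 20);          // pretend 30 MB of input
        WorkQueue q(data.size(), 256 * 1024);      // 256 KB work items
        std::atomic<size_t> sink{0};
        unsigned n = std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::thread> pool;
        for (unsigned i = 0; i < n; ++i)
            pool.emplace_back(worker, std::ref(q), data.data(), std::ref(sink));
        for (auto& t : pool) t.join();
    }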

  19. #49
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    865
    Thanks
    463
    Thanked 260 Times in 107 Posts
    The dynamic assignment is effective, and as long as threads are constantly "fed" (instead of "created"), the overhead should be low.
    256KB proved to be a good value in my tests too.

    I don't believe it is worth bothering with the cost of overhead as long as it remains within 1%.

    Measuring it precisely is quite a challenge, but it's reasonably easy to get a good enough approximation.
    I would start by comparing completely single-threaded code with the multi-threaded one using 1 thread. The additional cost should be the job allocation call. If done correctly, it should be unnoticeable.

    With 2 or more threads, you have the contention effect to consider.
    Here, I would simply go for a simple contention counter.
    If the job allocation call is fast enough, you should end up with a contention count which is extremely small, almost zero.

    Once both conditions are met, I believe the job allocation mechanism is good enough.

  20. #50
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Yeah, I feed the threads.
    256 KB is just an initial default. I reckon that as long as the granularity stays reasonably fine, it doesn't hurt to further increase the job size. For me everything over 0.1% is unwelcome... so I'm not happy with the 2%, and possibly more on some machines.

    I just tried to measure the overhead with 1 and 2 threads...and the results are interesting.
    With a 4 KB job size, 2 threads run at <700 MB/s 80% of the time. But there are huge spikes; the exact range that I got was from 659 to 1863 (sic!) MB/s.
    1 thread does 1875-1965 MB/s.
    I didn't expect a notable difference in scores. Why does it happen? Context switches on contended locks?

    As to the job allocation call, the cost is practically nothing except for synchronisation: 2 branches, 2 multiplications, 1 division. Thread creation cost should be negligible for reasonably sized data as well... I don't think measuring them is worth bothering with, so syncing is all I care about now.

    Here, I would simply go for a simple contention counter.
    What is a contention counter?

  21. #51
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    865
    Thanks
    463
    Thanked 260 Times in 107 Posts
    It's something you probably already have : a simple counter
    which increments each time a call to the job allocation function has to "wait" (because it is already in use by another thread).

    If you get "0", then there was no contention.
    Note that there is no intention to even attempt to measure timings. Too complex, and too much overhead.

  22. #52
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,511
    Thanks
    746
    Thanked 668 Times in 361 Posts
    i think it should be rather easy:
    1. create as many threads as you have cpu cores
    2. split the file into 256 KB jobs and place them into a queue
    3. each thread should loop, picking one job from the queue and executing it
    4. the only thing you have to share is a lock around the queue head pointer. the time required to take the lock is the time required to propagate the memory change to all cores; it should be about 5-50 ns (the time of an L3 cache/memory access). you need to implement the lock with user-space operations (as opposed to OS calls). on windows, i think that critical sections should do the trick
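    a minimal sketch of point 4, with a WinAPI critical section guarding the queue head (illustrative names, not fsbench's actual code):
    Code:
    #include <windows.h>
    #include <cstddef>

    static CRITICAL_SECTION g_queueLock;
    static size_t g_queueHead = 0;   // index of the next job to hand out

    // Critical sections spin in user space and only fall back to the kernel on contention.
    void init_lock() { InitializeCriticalSectionAndSpinCount(&g_queueLock, 4000); }
    void free_lock() { DeleteCriticalSection(&g_queueLock); }

    // Returns the index of the next job, or -1 when the queue is exhausted.
    ptrdiff_t next_job(size_t jobCount) {
        EnterCriticalSection(&g_queueLock);
        ptrdiff_t job = (g_queueHead < jobCount) ? (ptrdiff_t)g_queueHead++ : -1;
        LeaveCriticalSection(&g_queueLock);
        return job;
    }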

    you should know that even w/o hyperthreading, N cores may not be N times faster than 1 core. for example, just run N threads, each compressing enwik8

  23. #53
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by Cyan View Post
    It's something you probably already have : a simple counter
    which increments each time a call to the job allocation function has to "wait" (because it is already in use by another thread).

    If you get "0", then there was no contention.
    Note that there is no intention to even attempt to measure timings. Too complex, and too much overhead.
    I didn't know whether WinAPI gave information about lock contentions. Looking deeper I see that I can use 0 wait time and it will return an error. So OK, I can count contentions, thanks.

    Quote Originally Posted by Bulat Ziganshin View Post
    i think it should be rather easy:
    1. create as many threads as you have cpu cores
    2. split the file into 256 KB jobs and place them into a queue
    3. each thread should loop, picking one job from the queue and executing it
    4. the only thing you have to share is a lock around the queue head pointer. the time required to take the lock is the time required to propagate the memory change to all cores; it should be about 5-50 ns (the time of an L3 cache/memory access).
    Up to this point of your message, that's what I use.

    you need to implement the lock with user-space operations (as opposed to OS calls). on windows, i think that critical sections should do the trick
    You mean custom-implemented critical sections?
    Sounds cool, but is there a portable way to do it? For now, I'm not willing to use compiler intrinsics or asm. I see that some people have implemented faster synchronisation than what Windows offers, using its lower-level routines, like here. Well, I may do this, but it's a stretch.
    I would be willing to invest much more into something portable...

    you should know that even w/o hyperthreading, N cores may not be N times faster than 1 core. for example, just run N threads, each compressing enwik8
    Yeah, I know. That's the main reason to test with multiple threads.

  24. #54
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,511
    Thanks
    746
    Thanked 668 Times in 361 Posts
    i'm not an expert, but i have heard that WinAPI's so-called "critical sections" are user-space operations in most cases. but please check it first.

  25. #55
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    OK, thanks. I'll try it and if it's faster - I'll use it.
    The article that I mentioned says that in Windows XP critical sections are kernel-space objects, while in Windows 2003 (I guess my XP x64 counts too) they are user-space. Though the difference between Windows versions is mostly in creation, not locking, and I keep creation outside of the timed section.

  26. #56
    Member BetaTester's Avatar
    Join Date
    Dec 2010
    Location
    Brazil
    Posts
    43
    Thanks
    0
    Thanked 3 Times in 3 Posts
    Because the files will get smaller, it would be interesting to add an error-correction system, maybe 5% redundancy.

    If there are many bad blocks or the hard drive crashes, the file system itself would remain immune to the defects, because recovery would happen automatically whenever a read of some block fails.

    The file could get corrupted as it is written to the HD, but the system would still read the correct file, thanks to the recovery system.



  27. #57
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    865
    Thanks
    463
    Thanked 260 Times in 107 Posts
    I didn't know whether WinAPI gave information about lock contentions. Looking deeper I see that I can use 0 wait time and it will return an error. So OK, I can count contentions, thanks.
    There is also the possibility to check for a set flag (effectively a semaphore).
    There are some atomic operations available in Windows API which ensure that.

    In pseudo code it does something like this

    if (flag_set) contention-counter++;
    set_flag
    Do_job_allocation
    unset_flag

    set_flag is itself a conditional loop using an atomic test+set operation.
    Such loop is only efficient if the amount of work to do (and therefore to wait for) is very small.

    The problem with wait(0) is that you may lose too much time waiting, depending on remaining slice of "refresh" rate (i think it is in the range of 1ms).
    While with above method, we should remain in the micro to nanosecond range.
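    In C++11 terms, a minimal sketch of that pseudo code (assuming std::atomic_flag as the test-and-set primitive, which on Windows maps onto the Interlocked* family; the names are placeholders):
    Code:
    #include <atomic>

    std::atomic_flag g_lock = ATOMIC_FLAG_INIT;           // the "flag"
    std::atomic<unsigned long> g_contentions{0};          // the contention counter

    void allocate_job_guarded() {
        // test_and_set returns the previous value: 'true' means another thread held the flag.
        if (g_lock.test_and_set(std::memory_order_acquire)) {
            ++g_contentions;                               // count the contention
            while (g_lock.test_and_set(std::memory_order_acquire))
                ;                                          // spin until the flag is free
        }
        // do_job_allocation();                            // placeholder for the real work
        g_lock.clear(std::memory_order_release);           // unset_flag
    }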

  28. #58
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    Quote Originally Posted by BetaTester View Post
    Because the files will get smaller, it would be interesting to add an error-correction system, maybe 5% redundancy.

    If there are many bad blocks or the hard drive crashes, the file system itself would remain immune to the defects, because recovery would happen automatically whenever a read of some block fails.

    The file could get corrupted as it is written to the HD, but the system would still read the correct file, thanks to the recovery system.


    I wouldn't be surprised if such a filesystem existed, though I don't think I've seen one.
    The closest thing that I know of is ZFS; it offers 2 ways of adding redundancy:
    - RAIDZ, where you have a+b HDDs (a >= 1; 0 <= b <= 3) and the system survives the crash or other corruption of b of them.
    - On a higher layer, you can keep n copies of each piece of data; if possible, each copy will be on a different HDD / RAIDZ group of HDDs.

    There are more flexible systems allowing a+b for any a and b, like Cleversafe, but all that I've seen run on a+b servers, not disks, and target enterprises.

    Quote Originally Posted by Cyan View Post
    There is also the possibility to check for a set flag (effectively a semaphore).
    There are some atomic operations available in Windows API which ensure that.

    In pseudo code it does something like this

    if (flag_set) contention-counter++;
    set_flag
    Do_job_allocation
    unset_flag

    set_flag is itself a conditional loop using an atomic test+set operation.
    Such loop is only efficient if the amount of work to do (and therefore to wait for) is very small.

    The problem with wait(0) is that you may lose too much time waiting, depending on remaining slice of "refresh" rate (i think it is in the range of 1ms).
    While with above method, we should remain in the micro to nanosecond range.
    Thanks, I'll try it.
    Yesterday I ran some tests and it appears that fast codecs suffer from huge variability of results, just like nop did. The tests were rough, on a dirty machine, and I have to check it better, but I think I'll have to do something about it.
    I am all the way in favour of portable solutions, so the plan is:
    1. Measure. This doesn't have to be portable.
    2. If needed, implement something that minimises the problem. In fact I've had a design well before I knew about the problem because I expected scalability issues. And I wanted to implement it, but I needed a reason.
    3. Measure again. If there is no gain, revert changes.
    4. Implement some OS-specific routines. I want to get < 0.1% overhead and I don't believe that I can do so well w/out OS-specific stuff.
    5. Measure again. If there is no gain, revert changes.

    Or alternatively, I could implement the OS-specific stuff in point 2 but proceed with portable improvements anyway, and just gather data on how much I gain by avoiding contention, relative to how fast the synchronisation routines are.

  29. #59
    Member m^2's Avatar
    Join Date
    Sep 2008
    Location
    Ślůnsk, PL
    Posts
    1,611
    Thanks
    30
    Thanked 65 Times in 47 Posts
    I noticed a weird thing: *sometimes* when I run multiple tests, each subsequent run is slower than the previous one. I had it yesterday and it was 100% repeatable. I tried it today and it didn't happen... but some time later I noticed it again, and it's again 100% repeatable.
    Code:
    e:\projects\benchmark04\tst>fsbench lz4 lz4 lz4 lz4 lz4 lz4 lz4 -s2 -i1 -t2  -b131072 ..\scc.tar
    memcpy               = 170 ms (2377 MB/s), 211927552->211927552
    Codec         version    args       Size (Ratio)    C.Speed      D.Speed
    LZ4               r59      12  102943473 (x 2.06)   C: 240 MB/s  D:  751 MB/s
    LZ4               r59      12  102943473 (x 2.06)   C: 216 MB/s  D:  744 MB/s
    LZ4               r59      12  102943473 (x 2.06)   C: 195 MB/s  D:  720 MB/s
    LZ4               r59      12  102943473 (x 2.06)   C: 188 MB/s  D:  700 MB/s
    LZ4               r59      12  102943473 (x 2.06)   C: 184 MB/s  D:  679 MB/s
    LZ4               r59      12  102943473 (x 2.06)   C: 182 MB/s  D:  660 MB/s
    LZ4               r59      12  102943473 (x 2.06)   C: 171 MB/s  D:  643 MB/s
    Codec         version    args       Size (Ratio)    C.Speed      D.Speed
    done... (1x2 iteration(s))
    Does anybody have an idea what's up?

  30. #60
    Member BetaTester's Avatar
    Join Date
    Dec 2010
    Location
    Brazil
    Posts
    43
    Thanks
    0
    Thanked 3 Times in 3 Posts
    BtrFS

    • Extent based file storage
    • 2^64 byte == 16 EiB maximum file size
    • Space-efficient packing of small files
    • Space-efficient indexed directories
    • Dynamic inode allocation
    • Writable snapshots, read-only snapshots
    • Subvolumes (separate internal filesystem roots)
    • Checksums on data and metadata
    • Compression (gzip and LZO)
    • Integrated multiple device support
      • RAID-0, RAID-1 and RAID-10 implementations

    • Efficient incremental backup
    • Background scrub process for finding and fixing errors on files with redundant copies
    • Online filesystem defragmentation

    Additional features in development, or planned, include:


    • Very fast offline filesystem check
    • RAID-5 and RAID-6
    • Object-level mirroring and striping
    • Alternative checksum algorithms
    • Online filesystem check
    • Efficient incremental filesystem mirroring
    • Other compression methods (snappy, lz4)
    • Hot data tracking and moving to faster devices
    • Subvolume-aware quota support
    • Send/receive of changes

    It will be the default file system in the industry over the next 5-10 years.
    That will only fail to happen if some other open-source file system does the same things more quickly or efficiently.
    Last edited by BetaTester; 8th April 2012 at 09:04.
