Page 3 of 6 FirstFirst 12345 ... LastLast
Results 61 to 90 of 160

Thread: TurboBench - Back to the future (incl. LzTurbo)

  1. #61
    Member
    Join Date
    Aug 2008
    Location
    Planet Earth
    Posts
    789
    Thanks
    64
    Thanked 274 Times in 192 Posts
    Quote Originally Posted by dnd View Post
    TurboBench new modules and compressors update to the latest versions
    -ezstd,21,22 ?
    -ebrotli,10 ?
    -eglza ?

  2. #62
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    486
    Thanks
    52
    Thanked 182 Times in 133 Posts

    Lightbulb TurboBench - Fastest LZ77 - ARM 64-bit - Cortex-A53 2 GHz

    Hardware: ODROID C2 - ARM 64-bit - 2 GHz CPU; OS: Ubuntu 16.04; compiler: gcc 5.3
    All compressors at their latest versions

    pd3d.tar - 3D Test Set (RAD Game Tools)
    Code:
          C Size  ratio%     C MB/s     D MB/s   Name            (bold = pareto) MB=1.000.000
         8052040    25.2       0.53      23.23   lzma 9          
         9092280    28.4       0.08      52.61   brotli 11       
         9159574    28.7       0.52     119.76   lzturbo 39      
         9691094    30.3       0.68      94.02   zstd 22         
         9826984    30.7       3.24     136.91   lzturbo 32      
        10264073    32.1      26.15     142.28   lzturbo 30      
        10427322    32.6       4.90     108.76   zstd 9          
        10938385    34.2       9.46     110.38   lzfse           
        10966870    34.3       8.92     101.96   zstd 5          
        11059511    34.6       1.74      98.16   zlib 9          
        11121480    34.8       7.63      97.47   zlib 6          
        12649309    39.6       0.61     366.17   lzturbo 29      
        12838230    40.2       0.74     179.61   lz5 15          
        13302907    41.6      19.07     435.28   lzturbo 21      
        14237494    44.5       0.66     500.67   lzturbo 19      
        14283317    44.7      10.04     329.14   lz4 9           
        14723054    46.1     103.21     483.81   lzturbo 20      
        14814049    46.4       8.14     484.09   lzturbo 12      
        14926456    46.7      20.71     238.37   lz5 1           
        16069593    50.3     121.12     365.08   lz4 1           
        16166867    50.6     111.43     475.66   lzturbo 10      
        31952896   100.0    1676.10    1704.00   memcpy
    Note on decompression:
    - zstd: slightly faster (levels 5, 9) or slower (level 22) than zlib
    - lzfse: only slightly faster than zlib, although Apple claims 2x-3x faster than zlib!
    - brotli: ~2x slower than zlib!

  3. #63
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    486
    Thanks
    52
    Thanked 182 Times in 133 Posts

    Exclamation TurboBench - New update

    TurboBench binaries for windows and linux updated with the latest compressor versions.
    You can find the latest source code on Github: TurboBench

    New: You can now benchmark Oodle including Kraken, Mermaid and Selkie with your own data on your own hardware.
    Just put the Oodle DLL v2.3.0 (Windows 64-bit only) in the same directory as turbobench(s) and type, for example, "turbobench(s) -eoodle,11,21,58 file".
    Compression codec/level: 11-19, 21-29, 31-39, 41-49, 51-59 (the first digit selects the codec, the second the level)

    Codec: 1:LZNA 2:Kraken 3:Mermaid 4:BitKnit 5:Selkie
    Level: 1-9
    Last edited by dnd; 17th August 2016 at 22:41.

  4. Thanks:

    Sportman (17th August 2016)

  5. #64
    Member
    Join Date
    Aug 2008
    Location
    Planet Earth
    Posts
    789
    Thanks
    64
    Thanked 274 Times in 192 Posts
    Quote Originally Posted by dnd View Post
    TurboBench binaries for windows and linux updated with the latest compressor versions.
    zstd,21 and brotli,10 are still not supported.

  6. #65
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    486
    Thanks
    52
    Thanked 182 Times in 133 Posts
    Uploaded a new version with the missing options
    Last edited by dnd; 17th August 2016 at 22:44.

  7. #66
    Tester
    Stephan Busch's Avatar
    Join Date
    May 2008
    Location
    Bremen, Germany
    Posts
    876
    Thanks
    474
    Thanked 175 Times in 85 Posts
    Turbobenchs -eECODER crashes on many files, e.g. on this one:

    https://drive.google.com/file/d/0ByL...ew?usp=sharing

  8. #67
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    486
    Thanks
    52
    Thanked 182 Times in 133 Posts
    Try specifying the coders without turboanx (this version crashes on Windows).

    ex.
    turbobenchs -eturbohf/turborc/turborc_o1/turboac_byte/arith_static/rans_static16/subotin/fasthf/fastac/zlibh/fse/fsehuf/memcpy/ epub.tar

  9. Thanks:

    Stephan Busch (19th August 2016)

  10. #68
    Tester
    Stephan Busch's Avatar
    Join Date
    May 2008
    Location
    Bremen, Germany
    Posts
    876
    Thanks
    474
    Thanked 175 Times in 85 Posts
    May I also suggest a -eALL switch, as in lzbench?

    There are some codecs that are not used in test runs, such as lzoma, sap, heatshrink, nakamichi...

  11. #69
    Member
    Join Date
    Dec 2011
    Location
    Cambridge, UK
    Posts
    467
    Thanks
    149
    Thanked 160 Times in 108 Posts
    It looks like -eECODER is a synonym for -eturbohf/turboanx/turborc/turborc_o1/turboac_byte/arith_static/rans_static16/rans_static16o1/subotin/fasthf/fastac/zlibh/fse/fsehuf/memcpy/

    Of those, turboanx crashes for me on your epub.tar. I also notice that rans_static16o1 and rans_static8o1 fail too, which is my own code.

    It's an interesting test, though, of how well programs cope with random data and whether they detect it and fall back to a memcpy equivalent. The tar file contains compressed data, so it's very hard to compress further.
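The raw-fallback idea can be sketched as follows. This is a hypothetical wrapper, not TurboBench's actual code: `store_block`, the flag byte and `refuse_codec` are all illustrative names, and the codec callback signals "won't fit" by returning 0 (analogous to rans_static returning NULL).

```c
#include <string.h>
#include <stddef.h>

/* Signature of a generic block compressor: returns the compressed
 * size, or 0 when the output would not fit in out_cap. */
typedef size_t (*codec_fn)(const unsigned char *in, size_t in_len,
                           unsigned char *out, size_t out_cap);

enum { BLK_RAW = 0, BLK_PACKED = 1 };

/* Try to compress into a budget one byte smaller than the input;
 * on failure or expansion, store the block raw behind a 1-byte flag
 * so the decoder knows which path was taken. */
size_t store_block(const unsigned char *in, size_t in_len,
                   unsigned char *out, codec_fn codec)
{
    size_t budget = in_len ? in_len - 1 : 0;
    size_t clen = codec(in, in_len, out + 1, budget);
    if (clen == 0 || clen >= in_len) {   /* expansion: raw copy instead */
        out[0] = BLK_RAW;
        memcpy(out + 1, in, in_len);
        return in_len + 1;
    }
    out[0] = BLK_PACKED;
    return clen + 1;
}

/* A codec that always refuses, as a stand-in for random input. */
size_t refuse_codec(const unsigned char *in, size_t in_len,
                    unsigned char *out, size_t out_cap)
{
    (void)in; (void)in_len; (void)out; (void)out_cap;
    return 0;
}
```

With this scheme the worst case on incompressible data is one byte of overhead per block instead of a crash or large expansion.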

    Code:
    @ deskpro[tmp/TurboBench]; ./turbobench_linux64/turbobenchs -e subotin/fasthf/fastac/zlibh/fse/fsehuf/memcpy epub.tar
    Benchmark: 1 from 3
        51719172   100.0     540.47    8854.67   TurboHF         epub.tar
        51978584   100.5      33.00      26.96   TurboRC         epub.tar
        51394696    99.4      30.93      21.67   TurboRC_o1      epub.tar
        52929992   102.3      29.85      19.79   TurboAC_byte    epub.tar
        52986165   102.4     249.52     173.13   arith_static    epub.tar
        52060106   100.7     430.83     593.49   rans_static16   epub.tar
        52061745   100.7     405.07     566.34   rans_static8    epub.tar
        51579326    99.7      26.71      17.68   subotin         epub.tar
        51630690    99.8     115.57      85.08   FastHF          epub.tar
        51563272    99.7     163.04      40.65   FastAC          epub.tar
        51651330    99.9     324.52     310.05   zlibh           epub.tar
        51634323    99.8     582.26    4494.95   fse             epub.tar
        51617788    99.8     872.56    1492.62   fsehuf          epub.tar
        51719172   100.0    7446.36    8917.94   memcpy          epub.tar
    I fixed a bug in the rans_static order-1 code recently, but can't get the submodules to update correctly in my checkout for some reason. It does seem able to process this file when run directly from my git repository, though. I'll try to figure out why the submodule won't update and then test it with this benchmark. Sometimes git just hates me (and everyone else!).

  12. Thanks (2):

    dnd (19th August 2016),Stephan Busch (19th August 2016)

  13. #70
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    486
    Thanks
    52
    Thanked 182 Times in 133 Posts
    Quote Originally Posted by Stephan Busch View Post
    may I also suggest a -eALL switch as in Lzbench?
    There are some codecs that are not used in test runs such as lzoma, sap, heatshrink, nakamichi..
    There are too many slow codecs included in TurboBench, so I think "-eALL" doesn't make sense.
    Successive runs are stored in a ".tbb" file.

    Use "turbobench -p1 -o file.tbb" to print a report.

  14. #71
    Member
    Join Date
    Dec 2011
    Location
    Cambridge, UK
    Posts
    467
    Thanks
    149
    Thanked 160 Times in 108 Posts
    The encoding crash for rans_static order-1 is in the benchmark suite.

    turbobench.c has a parameter "fac" for the expansion rate, set to 1.3. I upped it to 3.3 as a safeguard and it can then compress. The problem is that the order-1 coders aren't dynamic: they compress blocks statically, which means storing the frequency table (at least N bits of it) for the full 256x256 matrix. With few symbols in use this isn't large, but with pre-compressed data it's a huge overhead. It can be offset by using a much larger block size, but arguably it's just the wrong algorithm for the data. These codecs aren't auto-sensing. They probably should be, but there is a nicety and purity about doing precisely what you asked for, with the logic of which order to use (0 or 1) relegated to a wrapper function perhaps.
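A back-of-the-envelope calculation shows why the static order-1 table hurts here. The bits-per-entry figure below is an assumed illustrative value, not the exact encoding rans_static uses:

```c
#include <stddef.h>

/* Rough cost of a static order-1 frequency table: 256 contexts x
 * 256 symbols, at bits_per_freq stored bits per entry. Returns the
 * table size as a fraction of the block it is attached to. */
double order1_table_overhead(size_t block_size, unsigned bits_per_freq)
{
    double table_bytes = 256.0 * 256.0 * (double)bits_per_freq / 8.0;
    return table_bytes / (double)block_size;
}
```

At 4 bits per entry the table alone is 32 KiB: 50% overhead on a 64 KiB block, but under 4% on a 1 MiB block, which is why a much larger block size offsets the cost.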

    [Edit1: turbobench should be honouring the return values here. The rans codec checks the bounds and returns NULL to indicate the output may not fit, but this currently translates into a crash in the test harness.]

    Anyway, changing that fac parameter didn't help entirely! It encodes now, but dies during decoding and I haven't found why yet. The codec itself works OK and can be tested with e.g. "cc -g -DBLK_SIZE=65536 -DTEST_MAIN rans_static/rANS_static4x8.c -o rans_test; ./rans_test -o1 -t epub.tar". That works, while "./turbobench -Y64k -k0 -t0 -e rans_static8o1 epub.tar" does not.

    Edit2: found the other bug.

    becomp() in turbobench.c stores the output length in either 2 or 4 bytes depending on the size of the *input* buffer. In this case the input is < 64k while the output is > 64k, due to growth from compressing incompressible data. The high bits of the block size are then lost, making the decompression step return early and produce a shorter, different buffer.

    Hardcoding bs to 4 for both becomp() and bedcomp() gives a working executable.
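The bug can be reproduced with a minimal little-endian store/load pair; `put_len`/`get_len` are hypothetical names sketching what becomp()/bedcomp() do with the `bs` prefix width, not the actual TurboBench functions:

```c
#include <stdint.h>

/* Store len in bs little-endian bytes. If bs is chosen from the
 * *input* size but len is an *output* size > 64k, the high bits
 * are silently dropped. */
static void put_len(unsigned char *p, unsigned bs, uint32_t len)
{
    for (unsigned i = 0; i < bs; i++)
        p[i] = (unsigned char)(len >> (8 * i));
}

/* Read the length back from the same bs-byte prefix. */
static uint32_t get_len(const unsigned char *p, unsigned bs)
{
    uint32_t len = 0;
    for (unsigned i = 0; i < bs; i++)
        len |= (uint32_t)p[i] << (8 * i);
    return len;
}
```

For example, an output of 70000 bytes stored with bs = 2 round-trips as 70000 mod 65536 = 4464, so the decoder reads a short block and desyncs; with bs = 4 the length survives intact, matching the hardcoded-to-4 fix.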

    Is this, or the "fac" value, also what is causing turboanx to crash I wonder?

  15. #72
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    486
    Thanks
    52
    Thanked 182 Times in 133 Posts
    You can try the option "-P", which automatically falls back to memcpy when the output is larger than the input, or limit the input buffer to 32k.

    TurboANX will run if you set the fac factor with the option "-F2"
    Last edited by dnd; 19th August 2016 at 19:03.

  16. #73
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    486
    Thanks
    52
    Thanked 182 Times in 133 Posts
    TurboBench binaries for windows and linux updated with the latest compressor versions (incl. zstd 1.0.1).
    You can find the latest source code on Github: TurboBench

  17. #74
    Member
    Join Date
    Dec 2011
    Location
    Cambridge, UK
    Posts
    467
    Thanks
    149
    Thanked 160 Times in 108 Posts
    Any chance of rerunning the entropy coder benchmark at

    https://sites.google.com/site/powturbo/entropy-coder

  18. #75
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    486
    Thanks
    52
    Thanked 182 Times in 133 Posts
    Quote Originally Posted by JamesB View Post
    Any chance of rerunning the entropy coder benchmark at

    https://sites.google.com/site/powturbo/entropy-coder
    Yes, probably next weekend

  19. Thanks:

    JamesB (6th September 2016)

  20. #76
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    486
    Thanks
    52
    Thanked 182 Times in 133 Posts
    Entropy Coder Benchmark updated with the latest coder versions including:
    - fse in zstd 1.0
    - fsehuf in zstd 1.0
    - rans_static16

    New: ARM 64 bits benchmark

    Note:
    - rans_static now decodes faster than fse on Intel.
    - This benchmark shows clearly that the hype around "Asymmetric Numeral Systems" with current implementations is only marketing-driven and exaggerated.

  21. Thanks:

    algorithm (10th September 2016)

  22. #77
    Member
    Join Date
    Dec 2011
    Location
    Cambridge, UK
    Posts
    467
    Thanks
    149
    Thanked 160 Times in 108 Posts
    Quote Originally Posted by dnd View Post
    - This benchmark shows clearly that the hype around "Asymmetric Numeral Systems"
    with current implementations is only marketing driven and
    exaggerated.
    I think it shows that ANS is a clear winner over arithmetic coding for speed, but not over Huffman - which is precisely what we expected, and not overhyped at all.

    It also shows that, given the same conditions (e.g. window sizes), it is more accurate than Huffman on skewed distributions - also what we expected.

  23. #78
    Member
    Join Date
    Nov 2014
    Location
    California
    Posts
    134
    Thanks
    46
    Thanked 37 Times in 27 Posts
    Quote Originally Posted by dnd View Post

    - This benchmark shows clearly that the hype around "Asymmetric Numeral Systems" with current implementations is only marketing driven and
    exaggerated.
    TurboANX in LzTurbo v1.2. World's fastest and most efficient entropy coder.
    ???
    Is TurboANX not based on ANS ?

  24. #79
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    702
    Thanks
    217
    Thanked 217 Times in 134 Posts
    The advantage of accurate entropy coders depends strongly on the actual probabilities - e.g. if they are powers of 1/2, Huffman is perfect.
    On the other hand, if the probability distribution is skewed, this advantage can be as large as you want - e.g. if Pr(s) = 0.99, the symbol carries only ~0.014 bits, while Huffman would use 1 bit, finally leading to a ~10x increase in file size.
    Huffman carries this risk of really nasty behavior for some specific data (e.g. when nearly all literals, literal lengths, offsets, or match lengths in LZ are equal).

  25. #80
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    486
    Thanks
    52
    Thanked 182 Times in 133 Posts
    Quote Originally Posted by hexagone View Post
    ???
    Is TurboANX not based on ANS ?
    Yes, TurboANX in lzturbo 1.2 is based on ANS. However, lzturbo 1.3, the version included in TurboBench, uses only TurboHF.

  26. #81
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    486
    Thanks
    52
    Thanked 182 Times in 133 Posts

    Lightbulb TurboBench: Static/Dynamic web content compression benchmark

    TurboBench: Static/Dynamic web content compression benchmark
    Benchmark from the SLZ thread updated with the latest version of libdeflate.

  27. #82
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    486
    Thanks
    52
    Thanked 182 Times in 133 Posts
    New update TurboBench Compressor Benchmark:


    Latest compressor versions
    - brotli
    - Nakamichi
    - zlib + zlib-ng
    - lz4
    - lizard
    - rans_static
    - snappy
    - zstd
    - and others
    Last edited by dnd; 3rd March 2017 at 16:26.

  28. #83
    Member
    Join Date
    Dec 2011
    Location
    Cambridge, UK
    Posts
    467
    Thanks
    149
    Thanked 160 Times in 108 Posts
    It'd be interesting to see different block sizes. 32k is all well and good, and maybe OK for some files, but on large homogeneous files with slowly changing or unchanging stats it's more appropriate to use larger buffers.

    For what it's worth, I found *nearly* 1 MB blocks to be far faster than *exactly* 1 MB blocks in my entropy encoder. I think this was due to cache collisions.

  29. #84
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    486
    Thanks
    52
    Thanked 182 Times in 133 Posts
    You can use the option "-Y" to set the block size for ANS coders.
    Ex:
    turbobench -erans_static16 -Y1m enwik8

    or
    turbobench -erans_static16 -Y900k enwik8

  30. #85
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    486
    Thanks
    52
    Thanked 182 Times in 133 Posts
    New github update TurboBench Compressor Benchmark:

    Latest compressor versions:
    - brotli
    - lizard
    - lzfse
    - snappy
    - zstd


  31. Thanks:

    inikep (10th March 2017)

  32. #86
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    486
    Thanks
    52
    Thanked 182 Times in 133 Posts

    TurboBench Compressor benchmark

    TurboBench Compressor benchmark github update:

    - Thanks to TurboBench's submodule architecture, all codecs are at their latest versions, as usual
    - New: you can now specify the dictionary size for some codecs with the option "-d#",
    overriding the codec's default setting
    ex.
    Code:
    turbobench -d24 -elzma,9/brotli,11/lzlib,9 enwik8
    or
    turbobench -d100m -elzma,9/brotli,11 enwik8
    and individually for each codec
    Code:
    turbobench -elzma,9d29
    - New: you can set individual advanced codec options as parameters for some compressors
    (see the files "turbobench.ini" or "plugins.cc")
    ex.
    Code:
    turbobench -elzma,9d29:a1:fb273:mf=bt4:mc999:lc8:lp0:pb2 silesia.tar
    (see thread)
    turbobench reports a compressed size of "48096529" for silesia.tar

    - New: you can define your own codec groups in "turbobench.ini" (must be in the current directory)

  33. Thanks (2):

    inikep (24th March 2017),Shelwien (24th March 2017)

  34. #87
    Member Samantha's Avatar
    Join Date
    Apr 2016
    Location
    italy
    Posts
    38
    Thanks
    31
    Thanked 7 Times in 4 Posts
    @dnd the executable on this page https://sites.google.com/site/powturbo/downloads has not been updated with the new changes...


  35. #88
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    486
    Thanks
    52
    Thanked 182 Times in 133 Posts

    TurboBench new update

    TurboBench binaries for windows and linux updated with the latest compressor versions.

    You can find the latest source code on Github: TurboBench

    To benchmark Oodle, including Kraken, Mermaid and Selkie, you must put the oo2core_4_win64.dll (Windows 64-bit only) in the same directory as turbobench(s).

    Type "turbobench -l2" for a codec list.
    Type "turbobench -l1" for a codec list with levels + advanced options (see also "turbobench.ini").

  36. Thanks:

    Samantha (27th March 2017)

  37. #89
    Member Samantha's Avatar
    Join Date
    Apr 2016
    Location
    italy
    Posts
    38
    Thanks
    31
    Thanked 7 Times in 4 Posts
    Great, this version is more complete than @inikep's, but I prefer the @inikep version: even if it contains fewer compressors, its command-line output and results are more orderly to read.
    I personally prefer this sequence:

    Code:
    Compressor name | Compress. | Decompress. | Orig. size | Compr. size | Ratio | Filename
    Code:
    ♒ [ βench_Σntropy_v1.0 ] ↓ [ ● LZ-βench v1.7.1 ● Test Size βench : 211.938.580 Bytes ● ] ↓ [ ● Completed At 27/03/2017 14:36:28 ● ] ♒
    
    ──────────────────────────────────────────────────────────────
    ... Scan - lzma - In Progress ... \ dickens ... 9,7°MB
    ──────────────────────────────────────────────────────────────
    Compressor name         Compress. Decompress.  Orig. size  Compr. size  Ratio Filename
    memcpy                  12022 MB/s 12030 MB/s    10192446     10192446 100.00 dickens
    lzma 16.04 -9            1.90 MB/s   103 MB/s    10192446      2830235  27.77 dickens
    
    
    ──────────────────────────────────────────────────────────────
    ... Scan - lzma - In Progress ... \ mozilla ... 48,8°MB
    ──────────────────────────────────────────────────────────────
    Compressor name         Compress. Decompress.  Orig. size  Compr. size  Ratio Filename
    memcpy                  10888 MB/s 10701 MB/s    51220480     51220480 100.00 mozilla
    lzma 16.04 -9            2.67 MB/s    81 MB/s    51220480     13366210  26.10 mozilla
    
    
    ──────────────────────────────────────────────────────────────
    ... Scan - lzma - In Progress ... \ mr ... 9,5°MB
    ──────────────────────────────────────────────────────────────
    Compressor name         Compress. Decompress.  Orig. size  Compr. size  Ratio Filename
    memcpy                  13210 MB/s 13147 MB/s     9970564      9970564 100.00 mr
    lzma 16.04 -9            2.93 MB/s    83 MB/s     9970564      2749668  27.58 mr
    
    
    ──────────────────────────────────────────────────────────────
    ... Scan - lzma - In Progress ... \ nci ... 32,0°MB
    ──────────────────────────────────────────────────────────────
    Compressor name         Compress. Decompress.  Orig. size  Compr. size  Ratio Filename
    memcpy                  10607 MB/s 10384 MB/s    33553445     33553445 100.00 nci
    lzma 16.04 -9            3.51 MB/s   394 MB/s    33553445      1738570   5.18 nci
    
    
    ──────────────────────────────────────────────────────────────
    ... Scan - lzma - In Progress ... \ ooffice ... 5,9°MB
    ──────────────────────────────────────────────────────────────
    Compressor name         Compress. Decompress.  Orig. size  Compr. size  Ratio Filename
    memcpy                  14549 MB/s 14590 MB/s     6152192      6152192 100.00 ooffice
    lzma 16.04 -9            3.26 MB/s    56 MB/s     6152192      2426789  39.45 ooffice
    
    
    ──────────────────────────────────────────────────────────────
    ... Scan - lzma - In Progress ... \ osdb ... 9,6°MB
    ──────────────────────────────────────────────────────────────
    Compressor name         Compress. Decompress.  Orig. size  Compr. size  Ratio Filename
    memcpy                  12407 MB/s 12338 MB/s    10085684     10085684 100.00 osdb
    lzma 16.04 -9            2.61 MB/s    81 MB/s    10085684      2849363  28.25 osdb
    
    
    ──────────────────────────────────────────────────────────────
    ... Scan - lzma - In Progress ... \ reymont ... 6,3°MB
    ──────────────────────────────────────────────────────────────
    Compressor name         Compress. Decompress.  Orig. size  Compr. size  Ratio Filename
    memcpy                  14372 MB/s 14150 MB/s     6627202      6627202 100.00 reymont
    lzma 16.04 -9            2.00 MB/s   130 MB/s     6627202      1317081  19.87 reymont
    
    
    ──────────────────────────────────────────────────────────────
    ... Scan - lzma - In Progress ... \ samba ... 20,6°MB
    ──────────────────────────────────────────────────────────────
    Compressor name         Compress. Decompress.  Orig. size  Compr. size  Ratio Filename
    memcpy                  10979 MB/s 10926 MB/s    21606400     21606400 100.00 samba
    lzma 16.04 -9            3.61 MB/s   129 MB/s    21606400      3764299  17.42 samba
    
    
    ──────────────────────────────────────────────────────────────
    ... Scan - lzma - In Progress ... \ sao ... 6,9°MB
    ──────────────────────────────────────────────────────────────
    Compressor name         Compress. Decompress.  Orig. size  Compr. size  Ratio Filename
    memcpy                  13100 MB/s 13036 MB/s     7251944      7251944 100.00 sao
    lzma 16.04 -9            3.29 MB/s    36 MB/s     7251944      4423158  60.99 sao
    
    
    ──────────────────────────────────────────────────────────────
    ... Scan - lzma - In Progress ... \ webster ... 39,5°MB
    ──────────────────────────────────────────────────────────────
    Compressor name         Compress. Decompress.  Orig. size  Compr. size  Ratio Filename
    memcpy                  10714 MB/s 10926 MB/s    41458703     41458703 100.00 webster
    lzma 16.04 -9            1.66 MB/s   124 MB/s    41458703      8384276  20.22 webster
    
    
    ──────────────────────────────────────────────────────────────
    ... Scan - lzma - In Progress ... \ x-ray ... 8,1°MB
    ──────────────────────────────────────────────────────────────
    Compressor name         Compress. Decompress.  Orig. size  Compr. size  Ratio Filename
    memcpy                  13685 MB/s 13652 MB/s     8474240      8474240 100.00 x-ray
    lzma 16.04 -9            3.18 MB/s    40 MB/s     8474240      4486761  52.95 x-ray
    
    
    ──────────────────────────────────────────────────────────────
    ... Scan - lzma - In Progress ... \ xml ... 5,1°MB
    ──────────────────────────────────────────────────────────────
    Compressor name         Compress. Decompress.  Orig. size  Compr. size  Ratio Filename
    memcpy                  14728 MB/s 14643 MB/s     5345280      5345280 100.00 xml
    lzma 16.04 -9            4.79 MB/s   262 MB/s     5345280       454690   8.51 xml
    
    
    ──────────────────────────────────────────────────────────────
    ... Scanning Dir ( C:\X_Test ) Operation Has Been Completed ...
    ──────────────────────────────────────────────────────────────


  38. #90
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    486
    Thanks
    52
    Thanked 182 Times in 133 Posts
    In TurboBench, I'm trying to limit the output text length and the spacing between columns.
    All the columns to the left of the compressor name are fixed-size, whereas the name + level + options and the file name are variable-size
    and can disturb the formatting when placed on the left side.
    The original file size is not printed; it can be shown simply by including "memcpy".
    Matt also uses a similar format here, but everyone has their own preferences.


