
Thread: TurboBench - Back to the future (incl. LzTurbo)

  1. #151
    Member
    Join Date
    Nov 2015
    Location
    boot ROM
    Posts
    95
    Thanks
    27
    Thanked 17 Times in 15 Posts
    LZSA has proven to be a rather interesting thing.

    What I like about it:
    - Still "simple LZ" kind of thing.
    - Still in "requires no memory for decompression" league (with some notes).
    - Seems to be quite careful about its stream format, trying to balance speed/simplicity vs. ratio. LZSA2 looks nice in this regard.
    - Not yet an overgrown mammoth monster like brotli, zstd and somesuch.
    - LZSA2 got very reasonable ratio on small data sets.
    - One can find rather advanced techniques in action here - be it rep-match or nibble alignment - and that looks really nice.

    What I don't like about it:
    - An obsession with Z80 and other obsolete/toy platforms, while ignoring modern viable uses (e.g. larger blocks for boot loaders/OS kernels/etc., modern CPU cores such as application- or MCU-class ARMs, etc. - comparably sized platforms for present-day uses).
    - Very strange decompression termination conditions, likely a result of the previous oddity. It looks like this thing was never meant to be decompressed in a "safe" way, i.e. "I have a 2K buffer for the result, never exceed it, even if the input is random garbage, no matter what". Or maybe I've failed to get the idea. Looking at the specs, it seems the author never considered decompressing into a limited-size buffer that must never be exceeded, even when the input is damaged and the decompressor is fed odd, "uncooperative" or even malicious garbage. (A sketch of what such a bounds-checked decode loop can look like follows this list.)
    - Ahem, speaking of that, there are tons of standalone asm implementations, but no trivial, standalone C decompressor; digging down to the decompression routine expressed in C brings all the pains of unwinding LZ4-like "enterprise-grade spaghetti for all occasions". Uh-oh. Of course, perfection is a bitch, but as LZ4 has shown, an algorithm can enjoy very reasonable speed without resorting to assembly/SIMD/etc.
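    For what it's worth, a "safe" decode loop of the kind described above is not much code. The sketch below is NOT the actual LZSA stream format (the token layout is made up, LZ4-style); it only shows the bounds checks needed so that a fixed-size output buffer can never be exceeded, even on garbage input:
    Code:
    /* Hypothetical, minimal LZ block decoder sketch (NOT the real LZSA format):
     * token = (literal_len << 4 | match_len), then a 2-byte little-endian
     * match offset.  Every write is validated against the fixed-size output
     * buffer, so damaged or malicious input can never overflow it. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>
    
    /* Returns decompressed size, or -1 on malformed/truncated input. */
    static int lz_decode_safe(const uint8_t *src, size_t srclen,
                              uint8_t *dst, size_t dstcap)
    {
        const uint8_t *sp = src, *send = src + srclen;
        uint8_t *dp = dst, *dend = dst + dstcap;
    
        while (sp < send) {
            uint8_t token  = *sp++;
            size_t  litlen = token >> 4;
            size_t  mlen   = (token & 0x0F) + 3;        /* minimum match = 3 */
    
            /* literals: check both input and output bounds */
            if (litlen > (size_t)(send - sp) || litlen > (size_t)(dend - dp))
                return -1;
            memcpy(dp, sp, litlen);
            sp += litlen; dp += litlen;
    
            if (sp == send) break;                      /* block may end after literals */
    
            /* match: 2-byte offset, must point inside already-produced output */
            if (send - sp < 2) return -1;
            size_t off = sp[0] | (sp[1] << 8); sp += 2;
            if (off == 0 || off > (size_t)(dp - dst)) return -1;
            if (mlen > (size_t)(dend - dp)) return -1;
            const uint8_t *mp = dp - off;
            while (mlen--) *dp++ = *mp++;               /* byte copy handles overlap */
        }
        return (int)(dp - dst);
    }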

    TL;DR: of the simple LZs available in source form, only lzoma gets better ratios on small data. On larger data LZSA isn't particularly exciting, though. I haven't benchmarked proprietary algorithms or algorithms with unknown/warez-like licensing since I have no use cases for them. Overall it looks nice for small data sets. So thanks to the people who brought this one to attention, I had quite some fun with it.
    Last edited by xcrh; 15th July 2019 at 20:41.

  2. Thanks:

    introspec (15th July 2019)

  3. #152
    Member
    Join Date
    May 2008
    Location
    USA
    Posts
    45
    Thanks
    0
    Thanked 4 Times in 4 Posts
    Quote Originally Posted by introspec View Post
    There is no official repo for ZX7. The author has released the compressor via http://www.worldofspectrum.org/infos...cgi?id=0027996
    The official place is actually his dropbox: https://www.dropbox.com/sh/mwa5geyxg..._3-1bI8YxPHLca
    There you can find the 8086 ASM decoder I did for it.

    LZSA2 beats ZX7 almost all the time and is simpler to decode, so I don't know if I'll ever use ZX7 again in my personal stuff.

    Quote Originally Posted by xcrh
    - Obsession on Z80 and other obsolete/toy platforms, ignoring modern viable uses (e.g. larger blocks for boot loaders/OS kernels/etc, modern CPU cores like e.g. app/mcu/etc ARMs, etc - comparably sized platforms for present-day uses).
    It was designed for those toy platforms, so complaining that it doesn't support a 4GB dictionary and 128-bit symbols is kind of pointless. If you are targeting modern environments, use threaded LZMA or something.

  4. Thanks:

    introspec (25th July 2019)

  5. #153
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    565
    Thanks
    67
    Thanked 198 Times in 147 Posts
    TurboBench - Compression Benchmark updated.
    - All compressors updated to the latest version
    - rle8 added
    - TurboRLE: Run Length Encoding improved + new benchmarks

    Some compressors, rle8 for example, must be downloaded manually.
    Example:
    Code:
    git clone --recursive git://github.com/powturbo/TurboBench.git
    cd TurboBench
    git clone git://github.com/rainerzufalldererste/rle8.git   
    make RLE8=1
    To produce a formatted table for encode.su like the one in this post, use:
    Code:
    ./turbobench -p5 -o data.tbb
    "data.tbb" is the turbobench output file after benchmarking the file "data"

  6. Thanks:

    rainerzufalldererste (8th September 2019)

  7. #154
    Member SolidComp's Avatar
    Join Date
    Jun 2015
    Location
    USA
    Posts
    245
    Thanks
    100
    Thanked 48 Times in 32 Posts
    Do you have any benchmarks that track CPU and RAM usage during compression and decompression?

  8. #155
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    565
    Thanks
    67
    Thanked 198 Times in 147 Posts
    TurboBench stores the exact peak memory usage for compression and decompression in the result file ".tbb".
    This works only when compiled on Linux and without static linking (the default TurboBench mode).
    TurboBench can only track memory that is allocated dynamically.
    Memory on the stack or allocated with "mmap" (memory-mapped) cannot be tracked.
    No changes to the source code of the codecs are necessary. (A minimal sketch of the malloc-interposition idea behind this follows below.)

    Here is an example of memory usage: Static/Dynamic web content compression benchmark
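    For illustration, a minimal sketch of that kind of malloc interposition on Linux via dlsym(RTLD_NEXT, ...) is shown below. It is not turbobench's actual code: the counters (cur_usage, peak_usage) are made up for the example, only malloc/free are hooked (a real hook would also cover calloc/realloc and guard against re-entry), and it assumes glibc, _GNU_SOURCE and linking with -ldl.
    Code:
    /* Hypothetical malloc/free hooks that track peak heap usage.
       Link into the benchmark binary (or preload as a shared object). */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <malloc.h>   /* malloc_usable_size */
    #include <stdlib.h>
    
    static void *(*real_malloc)(size_t);
    static void  (*real_free)(void *);
    static size_t cur_usage, peak_usage;
    
    static void hooks_init(void)
    {
        real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
        real_free   = (void  (*)(void *))dlsym(RTLD_NEXT, "free");
    }
    
    void *malloc(size_t n)
    {
        if (!real_malloc) hooks_init();
        void *p = real_malloc(n);
        if (p) {
            cur_usage += malloc_usable_size(p);
            if (cur_usage > peak_usage) peak_usage = cur_usage;
        }
        return p;
    }
    
    void free(void *p)
    {
        if (!p) return;
        if (!real_free) hooks_init();
        cur_usage -= malloc_usable_size(p);
        real_free(p);
    }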



  9. #156
    Tester
    Stephan Busch's Avatar
    Join Date
    May 2008
    Location
    Bremen, Germany
    Posts
    876
    Thanks
    474
    Thanked 175 Times in 85 Posts
    Hamid, would you please be so kind as to upload a recent x64 compile for Windows?
    The old versions I have don't seem to work anymore, and despite your description in earlier posts, I was unable to compile the latest version of TurboBench.

    Thanks in advance

  10. #157
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    770
    Thanks
    219
    Thanked 287 Times in 169 Posts
    Quote Originally Posted by dnd View Post
    Static/Dynamic web content compression benchmark
    I'm not comfortable that a web benchmark combines streaming and non-streaming implementations into the same bucket. Of course non-streaming implementations are faster, but they come with practical disadvantages leading to inferior system-level performance, and no serious and capable user (such as Chrome or nginx) would use them.

  11. #158
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    565
    Thanks
    67
    Thanked 198 Times in 147 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    I'm not comfortable that a web benchmark combines streaming and non-streaming implementations into the same bucket. Of course non-streaming implementations are faster, but they come with practical disadvantages leading to inferior system-level performance, and no serious and capable user (such as Chrome or nginx) would use them.
    Well, streaming is not absolutely required for HTTP.
    For dynamic web content compression, pages are in general small, and chunking (streaming) is not required here.
    If a site doesn't generate multi-megabyte HTML pages, it is still possible to use block-based compressors.
    Chunking can also be switched off in nginx when it is not supported by the compression library used.
    libslz, for example, doesn't support streaming, but it is already used in web servers.

  12. #159
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    770
    Thanks
    219
    Thanked 287 Times in 169 Posts
    Quote Originally Posted by dnd View Post
    Well, streaming is not absolutely required for HTTP.
    When capacity optimization is the main objective, non-streaming solutions give better system performance (roughly 0.5 % better).

    When user-experienced latency is the main objective, non-streaming solutions give worse system performance (tens of percent for the first meaningful paint).

    On average, a user is 777x more expensive than a computer, so it is usually the more honest, humanity-scale optimization to optimize for user-experienced latency. Also, if you want users to come back, or even to grow the user base of the website, then optimizing user-experienced latency makes more sense than optimizing away another 0.5 % of CPU usage in the servers.

  13. #160
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    565
    Thanks
    67
    Thanked 198 Times in 147 Posts
    The average page size in my benchmark is 84k. The non-streaming libdeflate can compress this to 20K in a dynamic scenario.
    It is unlikely that 20K will affect latency enough to be perceptible to a user.
    Additionally, large dynamic web pages (template+data, ...) can be split into several small segments that can be loaded in parallel by the web browser.

    I wonder why, until today, you have not presented any performance benchmarks or numbers showing a real dynamic web transfer-encoding scenario comparing brotli and gzip.
    I'm also not aware of any recent statistics about web compression comparing gzip and brotli usage.
    Last edited by dnd; 8th September 2019 at 10:51.

  14. #161
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    565
    Thanks
    67
    Thanked 198 Times in 147 Posts
    TurboBench - Compression Benchmark updated.
    - All compressors are continuously updated to the latest version
    - base64 encoding/decoding
    - New external dictionary compression with zstd including multiblock mode in TurboBench

    - benchmarking zstd with external dictionary
    1 - generate a dictionary file with:
    zstd --train mysamples/* -o mydic
    2 - start turbobench with:
    ./turbobench -ezstd,22Dmydic file

    Currently the external dictionary "mydic" must be in the current directory.


    You can also benchmark multiple small files using multiblock mode in turbobench:
    1 - store your small files into a multiblock file using option "-M":
    ./turbobench -Mmymultiblock files
    (mymultiblock output format: length1,file1,length2,file2,...lengthN,fileN, where length = 4-byte file/block length; a small writer sketch for this container format follows below)
    2 - Benchmark using option "-m" :
    ./turbobench -ezstd,22Dmydic mymultiblock -m
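    For illustration, a small writer for this multiblock container might look like the sketch below. TurboBench's own -M option already does this; the sketch only shows the container layout. The byte order of the 4-byte length field isn't stated above, so little-endian is assumed here, and the program name "pack" and error handling are just illustrative.
    Code:
    /* Sketch: pack files into "len1,file1,len2,file2,..." (4-byte lengths,
       little-endian assumed).  usage: ./pack mymultiblock file1 file2 ... */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    
    static int append_block(FILE *out, const char *path)
    {
        FILE *in = fopen(path, "rb");
        if (!in) return -1;
        fseek(in, 0, SEEK_END);
        long len = ftell(in);
        fseek(in, 0, SEEK_SET);
    
        /* 4-byte block length, little-endian (assumption) */
        uint8_t hdr[4] = { (uint8_t)len, (uint8_t)(len >> 8),
                           (uint8_t)(len >> 16), (uint8_t)(len >> 24) };
        fwrite(hdr, 1, 4, out);
    
        uint8_t buf[1 << 16];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, in)) > 0)
            fwrite(buf, 1, n, out);
        fclose(in);
        return 0;
    }
    
    int main(int argc, char **argv)
    {
        if (argc < 3) return 1;
        FILE *out = fopen(argv[1], "wb");
        if (!out) return 1;
        for (int i = 2; i < argc; i++)
            if (append_block(out, argv[i]) != 0) { fclose(out); return 1; }
        fclose(out);
        return 0;
    }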

    Last edited by dnd; 16th December 2019 at 09:15.

  15. #162
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,775
    Thanks
    276
    Thanked 1,206 Times in 671 Posts
    I managed to build turbobench for Windows with gcc/mingw after some patches to the makefile and my mingw distribution.

    Now, on why somebody might need a different benchmark:
    both runs use the same zstd source and the same compiler, so that can't be the reason for the differences in speed and size. (A minimal sketch of this kind of block benchmark, written against zstd's public API, follows the output below.)
    Code:
    C:\9A76-zstd_bench\021-int_usage>ppmd_bench.exe D:\000\corpus 8 zstd 1 3
    File "D:\000\corpus": size=230461090; attempts=3;
    read 230461090 bytes to inpbuf; blksize=8192; crc32=EAA6AAE5
    "zstd" level 1: time=0.844s; compress 230461090 to 92295591, 255.6MB/s
    "zstd" level 1: time=0.281s; decompress 92295591 to 230461090, 782.2MB/s; crc32=EAA6AAE5
    
    Z:\014\TurboBench>turbobench D:\000\corpus -ezstd,1 -m -b8kb -d13
        92495156    40.1     269.87     811.14   zstd 1           corpus
    TurboBench:  - Mon Jan 06 18:12:50 2020
    
          C Size  ratio%     C MB/s     D MB/s   Name            File
        92495156    40.1     270.38     812.31   zstd 1          corpus
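    For reference, the measurement both tools perform boils down to something like the sketch below: compress a buffer in fixed-size blocks (8 KB, matching the -b8kb run above) at a given zstd level and report throughput. This is not the actual turbobench or ppmd_bench code, just the same idea using zstd's one-shot API (compile with -lzstd); the synthetic input filler is made up, and clock() only measures CPU time.
    Code:
    /* Sketch: block-wise zstd compression benchmark, 8 KB blocks, level 1. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <zstd.h>
    
    int main(void)
    {
        const size_t blk = 8 * 1024, total = 32 * 1024 * 1024;  /* demo input size */
        unsigned char *in  = malloc(total);
        unsigned char *out = malloc(ZSTD_compressBound(blk));
        if (!in || !out) return 1;
        for (size_t i = 0; i < total; i++)            /* synthetic test data */
            in[i] = (unsigned char)((i * 2654435761u) >> 24);
    
        size_t csize = 0;
        clock_t t0 = clock();
        for (size_t pos = 0; pos < total; pos += blk) {
            size_t n = total - pos < blk ? total - pos : blk;
            size_t r = ZSTD_compress(out, ZSTD_compressBound(blk), in + pos, n, 1);
            if (ZSTD_isError(r)) { fprintf(stderr, "%s\n", ZSTD_getErrorName(r)); return 1; }
            csize += r;
        }
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("%zu -> %zu (%.1f%%), %.1f MB/s\n",
               total, csize, 100.0 * csize / total, total / 1e6 / secs);
        free(in); free(out);
        return 0;
    }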
    Attached Files

  16. #163
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    565
    Thanks
    67
    Thanked 198 Times in 147 Posts
    TurboBench - Compression Benchmark updated.

    - All compressors are continuously updated to the latest version
    - NEW: Old or uncompetitive codecs removed from the repository
    Removed codecs can still be benchmarked by using the 2019-12 release
    - Some codecs are supported, but must be manually downloaded into the turbobench directory
    - NEW: Codecs can be individually included/excluded simply by switching the corresponding line in the makefile

    - NEW: TurboBench can now be built on
    Arm64, Amd64, PowerPC, IBM mainframe s390x and for Linux, Windows, macOS.
    You can see the CI benchmarks for each architecture by scrolling down to the bottom of the pages

    There are 2 benchmarks with brotli, bsc, lz4, lzma, zlib, zstd
    - TEXT : C,C++ sources of all codecs in the turbobench repository concatenated into a single 32MB file

    - BINARY: turbobench executable

    Remarks:
    - brotli,11 compresses 12% and 8% denser than zstd,22 for BINARY and TEXT respectively
    - AMD64: brotli,11 is crashing. This is why it is removed from these builds.
    - ARM64: brotli is slower than zlib on ARM.
    libdeflate decompresses 2 times faster than brotli and is a lot faster than zlib
    - PowerPC: brotli and zlib speeds are similar
    - s390x: compare error for bsc, probably due to the big-endian architecture not being supported (see the small illustration below).
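    As an aside, the s390x failure is not diagnosed here, but the classic way codecs break on big-endian machines is reading stored little-endian fields by type-punning. A tiny illustration of that bug class and the portable alternative, unrelated to bsc's actual code:
    Code:
    /* Reading a stored 32-bit little-endian value: the memcpy/type-pun version
       returns the byte-swapped (wrong) value on a big-endian host such as s390x;
       assembling the value byte by byte works everywhere. */
    #include <stdint.h>
    #include <string.h>
    
    static uint32_t read_le32_unportable(const uint8_t *p)
    {
        uint32_t v;
        memcpy(&v, p, 4);      /* wrong result on a big-endian host */
        return v;
    }
    
    static uint32_t read_le32_portable(const uint8_t *p)
    {
        return (uint32_t)p[0] | ((uint32_t)p[1] << 8)
             | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    }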

    Last edited by dnd; 9th January 2020 at 14:02.

  17. Thanks:

    algorithm (9th January 2020)

  18. #164
    Member
    Join Date
    Apr 2015
    Location
    Greece
    Posts
    84
    Thanks
    34
    Thanked 26 Times in 17 Posts
    This Travis CI setup is a very nice idea. It would be nice if you included an entropy coder benchmark in the Travis runs.

  19. #165
    Member
    Join Date
    Apr 2015
    Location
    Greece
    Posts
    84
    Thanks
    34
    Thanked 26 Times in 17 Posts
    Quote Originally Posted by dnd View Post
    TurboBench - Compression Benchmark updated.

    - All compressors are continuously updated to the latest version
    - NEW: Old or uncompetitive codecs removed from the repository
    Removed codecs can still be benchmarked by using the 2019-12 release


    I also noticed that you removed FPC.

    Why? Uncompetitive? Old? In my opinion it is the highest-performance Huffman coder (at least compared to the open-source ones) that doesn't use SIMD.

    It even has a higher compression ratio than many ANS and AC codecs for many probability distributions.


  20. #166
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    565
    Thanks
    67
    Thanked 198 Times in 147 Posts
    It is removed only from the default repository. You can still download it manually, set the switch in the makefile, and it will be built automatically.
    I want to reduce the number of default codecs, but I'm still thinking about how to organize this.
    Only a few people are interested in the EC codecs, and the rest will get confused by the huge number of default codecs.

  21. #167
    Member
    Join Date
    Apr 2015
    Location
    Greece
    Posts
    84
    Thanks
    34
    Thanked 26 Times in 17 Posts
    If entropy coders are not important, then why not remove all of them? Maybe FPC is too Pareto-optimal for TurboBench.

  22. #168
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    565
    Thanks
    67
    Thanked 198 Times in 147 Posts
    You must be a little patient; as I've written, I'm still thinking about how to deal with entropy coders.
    I'm not saying entropy coders are not important, but they probably need to be handled differently from other codecs
    to reduce the huge number of codecs and the confusion.
    This is a private repository with a great deal of work behind it, and I'm free to add new codecs or remove unpopular codecs.

    I've now updated the turbobench directory.

    Please add links to the Entropy Coding Benchmark and to the TurboBench Compression Benchmark in the FPC readme on GitHub.

  23. #169
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    565
    Thanks
    67
    Thanked 198 Times in 147 Posts
    Here is a listing of the repository sizes of some of the codecs used in the TurboBench Compression Benchmark:
    brotli 37.3 MB
    pysap 12.8 MB
    zstd 9.5 MB
    lzma 7.0 MB
    isa-l 4.6 MB
    lzo 4.4 MB
    snappy 3.4 MB
    zlib 3.9 MB
    bzip2 2.8 MB

    Some packages include huge test data or indirectly related files.
    These could reside in a separate repository.
    The size of the brotli repository is nearly as high as the whole linux system.

    The bandwidth in a lot of countries is not as high as in the countries where the developers reside.
    Some users have only mobile connections.
    The paradox is that we are dealing here with compressors that are designed to save internet bandwidth.

    Strangely, the size of the files to download is still continuing to grow.
    This is also true for games, web pages, images, ...

  24. #170
    Programmer schnaader's Avatar
    Join Date
    May 2008
    Location
    Hessen, Germany
    Posts
    593
    Thanks
    233
    Thanked 228 Times in 108 Posts
    Quote Originally Posted by dnd View Post
    The bandwidth in a lot of countries is not as high as in the countries where the developers reside.
    The git infrastructure is already doing a great job here. First, the TurboBench repository is organized using submodules, so when you clone, you can choose which submodules to clone:

    Code:
    git clone https://github.com/powturbo/TurboBench.git
      => (clones only the main repository)
      => directory size: 40,213,702 bytes
      transferred data: about 36,635,613 bytes (size of biggest file in .git\objects\pack)
    
    git submodule update --init brotli
      => (clones only the brotli submodule)
      => brotli directory size: 35,512,637
      transferred data: about 32,181,545 (size of biggest file in .git\modules\brotli\objects\pack)
    Note that the 37 MB of transferred data for the main repository contains the whole repository history (all 1,261 commits). If you don't need that, "git clone --depth 1" will give you the latest revision only, which transfers only about 765 KB (!) of data.

    Looking at that brotli pack file, the transferred data is already compressed quite well by git, though I agree that it could be improved by using lzma compression and recompression instead of deflate in git:

    Code:
    .git\modules\brotli\objects\pack\[...].pack:       32,181,545 bytes
    .git\modules\brotli\objects\pack\[...].pcf_cn:     76,547,450 bytes (Precomp 0.4.7 -cn -intense) - so it transferred only 32 MB instead of 77 MB
    .git\modules\brotli\objects\pack\[...].pcf:        24,072,789 bytes (Precomp 0.4.7 -intense)
    Quote Originally Posted by dnd View Post
    The size of the brotli repository is nearly as high as the whole linux system.
    I agree (though I would replace "system" with "kernel"), but these are already compressed well too, and have the same potential for improvement (tested on Ubuntu):

    Code:
    /boot/initrd.img-4.15.0-74-generic:            24,305,803
    /boot/initrd.img-4.15.0-74-generic.pcf_cn:     71,762,575 (Precomp 0.4.7 -cn)
    /boot/initrd.img-4.15.0-74-generic.pcf:        16,949,956
    Quote Originally Posted by dnd View Post
    Some packages include huge test data or indirectly related files.
    These could reside in a separate repository.
    This is more an issue of the brotli repository than of the TurboBench repository. Note that we didn't use "--recursive" in the submodule init command above, so the submodules of the brotli repository (esaxx and libdivsufsort) aren't cloned. Test data and other files not needed for the build could also be moved to brotli submodules.

    Of course, another thing that could help would be to not use outdated image formats:

    Code:
    brotli\research\img\enwik9_diff.png:  5,096,698
                                   .webp: 3,511,804  (cwebp -lossless -q 100 -m 5)
                                   .flif: 3,488,547  (flif -e)
    Last edited by schnaader; 15th February 2020 at 11:59.
    http://schnaader.info
    Damn kids. They're all alike.

  25. #171
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    565
    Thanks
    67
    Thanked 198 Times in 147 Posts
    Thank you for your elaboration, corrections and hints.
    I've recently removed some old, unmaintained or not notable codecs.
    Many codecs are listed in the readme but not in the turbobench repository; these must be manually downloaded and activated in the makefile.

    I will continue to clean, simplify and automate the process.

  26. #172
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,775
    Thanks
    276
    Thanked 1,206 Times in 671 Posts
    I think it could make sense to make a complete and clean repository on your side (make scripts to download all submodules, then remove unnecessary stuff),
    then push that to github.
    Windows users don't have git by default, etc. Also, it would be good to have buildable full sources as a release archive - getting the most recent version of each codec
    is not always a good idea - e.g. zstd developers frequently break it.

  27. #173
    Member
    Join Date
    Apr 2015
    Location
    Greece
    Posts
    84
    Thanks
    34
    Thanked 26 Times in 17 Posts
    To reduce the size of the repos you can just do a shallow clone:

    git clone --depth=1

    This downloads only the latest revision, omitting history.

  28. Thanks (2):

    dnd (19th February 2020),JamesB (19th February 2020)

  29. #174
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    565
    Thanks
    67
    Thanked 198 Times in 147 Posts
    Quote Originally Posted by Shelwien View Post
    I think it could make sense to make a complete and clean repository on your side...
    On Linux it's simple to download and build the package, but on Windows you must first install git and the mingw-w64 package.
    This scares Windows users away.


    The submodules are already updated automatically, and there is also a "make cleana" target (Linux only) to remove some unnecessary huge directories.


    I've made a new release with builds for Linux and Windows, plus a cleaned, small source code 7zip package (5MB) containing all submodules, ready to build.
    That's a solution for users with limited download bandwidth or with difficulties building turbobench.

    But this step implies more work to set up.


    git clone --depth=1
    I've added this option to the readme file.
    As already stated, if you have git and gcc/mingw installed, then there is no problem downloading and building turbobench.
    This option reduces the downloaded size by a few percent, but the huge submodules will still be downloaded completely.

  30. Thanks:

    Shelwien (19th February 2020)

  31. #175
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,775
    Thanks
    276
    Thanked 1,206 Times in 671 Posts
    Code:
    Z:\010\TurboBench> C:\MinGW820x\bin\make.exe 
    [...]
    gcc -O3  -w -Wall    -DNDEBUG -s -w -std=gnu99 -fpermissive -Wall -Ibrotli/c/include -Ibrotli/c/enc  -Ilibdeflate
    eflate/common -Ilizard/lib -Ilz4/lib -Izstd/lib -Izstd/lib/common -Ilzo/include   -D_7ZIP_ST -Ilzsa/src -Ilzsa/sr
    vsufsort/include turbobench.c -c -o turbobench.o
    turbobench.c: In function 'mem_init':
    turbobench.c:154:24: error: 'RTLD_NEXT' undeclared (first use in this function); did you mean 'RTLD_NOW'?
       mem_malloc   = dlsym(RTLD_NEXT, "malloc" );
                            ^~~~~~~~~
                            RTLD_NOW
    turbobench.c:154:24: note: each undeclared identifier is reported only once for each function it appears in
    makefile:717: recipe for target 'turbobench.o' failed
    make: *** [turbobench.o] Error 1

  32. #176
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    565
    Thanks
    67
    Thanked 198 Times in 147 Posts
    The code using dlsym is only included on Linux; it is not included when _WIN32 is defined, see turbobench.c.
    Normally it compiles with mingw without any issue, see the CI MinGW build.
    I don't know why _WIN32 is not defined by your gcc.
    You can try to compile with "make STATIC=1" (NMEMSIZE will be defined). (A sketch of this kind of conditional guard is below.)
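    For illustration, the kind of guard being described might look like the sketch below. It mirrors the description (mem_init/mem_malloc appear in the error message above, and NMEMSIZE is the switch set by STATIC=1), but it is not the literal turbobench.c:
    Code:
    /* Sketch of the described guard, not the actual turbobench.c.
       The dlsym-based memory hooks need RTLD_NEXT, so they are compiled out
       when _WIN32 is defined or when NMEMSIZE is set (as "make STATIC=1" does). */
    #if !defined(_WIN32) && !defined(NMEMSIZE)
      #define _GNU_SOURCE            /* for RTLD_NEXT */
      #include <dlfcn.h>
      #include <stdlib.h>
    
      static void *(*mem_malloc)(size_t);
    
      static void mem_init(void)
      {
          mem_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
      }
    #else
      static void mem_init(void) { /* no dynamic-memory tracking in this build */ }
    #endif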

  33. Thanks:

    Shelwien (20th February 2020)

  34. #177
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,775
    Thanks
    276
    Thanked 1,206 Times in 671 Posts
    Apparently it uses cygwin there. STATIC=1 let it build the exe anyway, though.

    With PATH set to mingw\bin it actually compiled with just "make".
    But then it didn't work because of too many linked mingw DLLs.
    I think STATIC=1 should be the default on Windows.
    Attached Files

  35. #178
    Member
    Join Date
    Mar 2013
    Location
    Worldwide
    Posts
    565
    Thanks
    67
    Thanked 198 Times in 147 Posts
    TurboBench Compression Benchmark new update:
    - _WIN32 added when __CYGWIN__ is defined

    You can use the option -l1 to display the codecs compiled into TurboBench, including the possible levels and parameters:
    Code:
    ./turbobench -l1
    
    Plugins:
    brotli 0,1,2,3,4,5,6,7,8,9,10,11/d#:V
    bzip2 
    fastlz 1,2
    flzma2 0,1,2,3,4,5,6,7,8,9,10,11/mt#
    glza 
    bsc 0,3,4,5,6,7,8/p:e#
    bscqlfc 1,2
    libdeflate 1,2,3,4,5,6,7,8,9,12/dg
    zpaq 0,1,2,3,4,5
    lz4 0,1,9,10,11,12,16/MfsB#
    lizard 10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49
    lzfse 
    lzham 1,2,3,4/t#:fb#:x#
    lzlib 1,2,3,4,5,6,7,8,9/d#:fb#
    lzma 0,1,2,3,4,5,6,7,8,9/d#:fb#:lp#:lc#:pb#:a#:mt#
    lzo1b 1,9,99,999
    lzo1c 1,9,99,999
    lzo1f 1,999
    lzo1x 1,11,12,15,999
    lzo1y 1,999
    lzo1z 999
    lzo2a 999
    lzsa 9/f#cr
    quicklz 1,2,3
    sap 0,1,2
    zlib 1,2,3,4,5,6,7,8,9
    zopfli 
    zstd 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,-1,-2,-3,-4,-5,-6,-7,-8,-9,-10,-11,-12,-13,-14,-15,-16,-17,-18,-19,-20,-21,-22/d#
    imemcpy 
    memcpy 
    fse 
    fsehuf 
    TurboRC 11,12,13,14,21,22,23,24,25,30,31,32,33,34,35
    zlibh 8,9,10,11,12,13,14,15,16,32
    zlibrle 
    divbwt 
    st 3,4,5,6,7,8
    Last edited by dnd; 20th February 2020 at 23:12.
