
Thread: BIT Archiver

  1. #1
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts

    BIT Archiver

    BIT Archiver started out as a tool for compressing game binaries. BIT 0.1 used a ROLZ-based scheme derived from QUAD. In the past I wanted BIT to have only one compression method, but today BIT Archiver is becoming a universal archiver step by step. After realizing that there is no universal compressor/model for all kinds of data, I've decided to add a couple of codecs, along with some preprocessing for specific file types, in future versions.

    In this prerelease, only the LWCX mode is working. There is no preprocessing currently. Here are the LWCX details:

    - Orders 0-4 and 6 context mixing based on a neural network
    - 2D SSE with 32 vertices
    - Semi-stationary bit modelling (the SSE functions use a different model)
    - Nibble-based hashed contexts

    As you can see, this is kind of reinventing the wheel. By doing it I have learned a lot about context mixing (of course there is still a lot left to learn). I want to add only a few well-tuned submodels, such as a match model and a sparse model.
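
    To give an idea of what the neural-network mixing above means in code, here is a minimal sketch of a bit-level logistic mixer. It is illustrative only, not BIT's actual code: the learning rate, the model count and the use of floating point are assumptions (a real coder would likely use fixed-point arithmetic).

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Minimal logistic-mixing sketch (not BIT's code). Each submodel
    // (orders 0-4 and 6 in LWCX's case) supplies a probability that the
    // next bit is 1; the mixer combines them in the stretched domain and
    // adapts its weights online after every coded bit. Probabilities are
    // assumed to be strictly between 0 and 1.
    static double stretch(double p) { return std::log(p / (1.0 - p)); }
    static double squash(double x)  { return 1.0 / (1.0 + std::exp(-x)); }

    struct Mixer {
        std::vector<double> w;    // one weight per submodel
        std::vector<double> st;   // stretched inputs of the last prediction
        double rate;              // learning rate (assumed value)

        explicit Mixer(std::size_t n, double rate = 0.02)
            : w(n, 0.0), st(n, 0.0), rate(rate) {}

        // Mix the submodel probabilities into one prediction.
        double predict(const std::vector<double>& p) {
            double dot = 0.0;
            for (std::size_t i = 0; i < w.size(); ++i) {
                st[i] = stretch(p[i]);
                dot += w[i] * st[i];
            }
            return squash(dot);
        }

        // Once the real bit is known, move the weights along the error.
        void update(double predicted, int bit) {
            double err = bit - predicted;
            for (std::size_t i = 0; i < w.size(); ++i)
                w[i] += rate * err * st[i];
        }
    };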

    I hope you like it, and I hope it has no fatal errors.

    Download: http://www.osmanturan.com/bit02.zip

  2. #2
    Moderator

    Join Date
    May 2008
    Location
    Tristan da Cunha
    Posts
    2,034
    Thanks
    0
    Thanked 4 Times in 4 Posts

    Thanks Osman!

  3. #3
    Moderator

    Join Date
    May 2008
    Location
    Tristan da Cunha
    Posts
    2,034
    Thanks
    0
    Thanked 4 Times in 4 Posts
    Quick (-mem 9) test...

    A10.jpg > 835,942
    AcroRd32.exe > 1,461,060
    english.dic > 580,848
    FlashMX.pdf > 3,709,615
    FP.LOG > 574,636
    MSO97.DLL > 1,822,291
    ohs.doc > 853,469
    rafale.bmp > 762,162
    vcfiu.hlp > 643,792
    world95.txt > 521,153

    Total = 11,764,968 bytes

  4. #4
    Moderator

    Join Date
    May 2008
    Location
    Tristan da Cunha
    Posts
    2,034
    Thanks
    0
    Thanked 4 Times in 4 Posts
    Another quick (-mem 9) test...

    Test machine: AMD Sempron 2400+, Windows XP SP2

    Test file: ENWIK8 (100,000,000 bytes)


    Compressed Size: 21,958,938 bytes

    Elapsed Time: 633.50 Seconds

    00 Days 00 Hours 10 Minutes 33.50 Seconds

  5. #5
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Thanks a lot!
    I'm also curious about the results on MOC and in Black Fox's benchmark. As you can see from the SFC test, LWCX suffers on executables. I think an E8/E9 transform would definitely help.
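
    For reference, the usual E8/E9 trick rewrites the 32-bit relative targets of x86 CALL/JMP instructions as absolute addresses, so repeated calls to the same target become repeated byte strings that the models can pick up. A rough sketch of the forward transform follows; it is an illustration only, not BIT's planned filter, and the buffer handling is simplified.

    #include <cstddef>
    #include <cstdint>

    // Illustrative E8/E9 forward transform (not BIT's code).
    void e8e9_encode(uint8_t* buf, size_t n) {
        for (size_t i = 0; i + 5 <= n; ) {
            if (buf[i] == 0xE8 || buf[i] == 0xE9) {   // CALL / JMP rel32
                uint32_t rel = uint32_t(buf[i + 1])
                             | uint32_t(buf[i + 2]) << 8
                             | uint32_t(buf[i + 3]) << 16
                             | uint32_t(buf[i + 4]) << 24;
                uint32_t absolute = rel + uint32_t(i); // relative -> absolute
                buf[i + 1] = uint8_t(absolute);
                buf[i + 2] = uint8_t(absolute >> 8);
                buf[i + 3] = uint8_t(absolute >> 16);
                buf[i + 4] = uint8_t(absolute >> 24);
                i += 5;
            } else {
                ++i;
            }
        }
    }
    // The decoder is the mirror image: subtract the position again.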

    @testers:
    Could you post timings along with your computer specifications, besides the compressed sizes? I'm mainly doing speed optimizations, and I want to know its performance on Celeron, Pentium 4, AMD, etc. On my laptop (Core 2 Duo 2.2 GHz, 2 GB RAM) it works at 350-800 KB/s. I hope it can compress at around 500 KB/s - 1 MB/s in the future.

  6. #6
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    All tests were run on my laptop (Core 2 Duo 2.2 GHz, 2 GB RAM) with the -9 option.
    --------------------------------------------------------------------------
    Valley.cmb (19,776,230 bytes)
    WinRK 3.0 build 3 Beta (PWCM) -> 5,961,850 bytes (1315 seconds)
    7-Zip 4.57 (Ultra) -> 7,508,392 bytes (~9 seconds)
    WinRAR 3.71 (Best) -> 8,511,238 bytes (~8 seconds)
    CMM4 0.1f (76) -> 8,197,710 bytes @ 607 KB/sec (33.013 seconds)
    BIT 0.2 -> 9,219,167 bytes @ 470 KB/sec (41.476 seconds)
    --------------------------------------------------------------------------
    Brosur1.tif (31,269,732 bytes): a brochure, 20x28 cm, 300 dpi CMYK, 2362x3307 pixels, uncompressed TIFF

    CMM4 0.1f (76) -> 3,380,120 bytes @ 996 KB/sec (31.832 seconds)
    BIT 0.2 -> 3,562,068 bytes @ 714 KB/sec (43.089 seconds)
    7-Zip (Ultra) -> 3,948,960 bytes (~6 seconds)
    WinRAR 3.71 (Best) -> 4,478,996 bytes (~9 seconds)
    --------------------------------------------------------------------------
    Design2.tif (49,413,716 bytes): A newspaper advertisement, 25x35 cm 300 dpi CMYK, 2953x4134 pixels, uncompressed TIFF

    WinRAR 3.71 (Best) -> 10,072,714 bytes (~18 seconds)
    CMM4 0.1f (76) -> 11,148,805 bytes @ 755 KB/sec (65.00
    BIT 0.2 -> 11,456,267 bytes @ 634 KB/sec (76.469 seconds)
    7-Zip 4.57 (Ultra) -> 13,023,126 bytes (~15 seconds)
    --------------------------------------------------------------------------
    Calgary Corpus (TAR Version): 3,152,896 bytes
    CMM4 0.1f (76) -> 695,060 bytes (5.421 seconds)
    BIT 0.2 -> 765,273 bytes (6.604 seconds)
    --------------------------------------------------------------------------

    @toffer:
    How did you speed up your compressor? And how much does the match model contribute?

  7. #7
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts

    Hi Osmanturan!

    Is LWCX a new high-performance CM method? Nice job!

  8. #8
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    @nania:
    Actually, it's a bit slow for me. I'll be very happy if I can reach 500 KB/s - 1 MB/s on my laptop. The current speed is 350 KB/s - 750 KB/s. I'm looking forward to your test results!

    Another test:

    ENWIK9 (1,000,000,000 bytes)
    BIT 0.2 (-mem 9): 189,754,007 bytes @ 603 KB/sec (1618.852 seconds)

  9. #9
    Programmer toffer's Avatar
    Join Date
    May 2008
    Location
    Erfurt, Germany
    Posts
    587
    Thanks
    0
    Thanked 0 Times in 0 Posts
    For speed tuning:

    - there are at most 4 nibble models (highest 4 orders) active + the match model
    - per nibble only the SSE context changes (all memory access after the first bit within a nibble coding process is cached)
    - only 13 SSE bins (three adjacent SSE contexts for a partial two-bit symbol: y, 0|y, 1|y fit into a cache line)
    - I'm storing additional stuff within the hash table (match offset, symbol ranks, ...) to be more cache efficient
    - profiled the code and removed needless function calls (a byte coding step is altogether inlined into a flat function)
    - all data structures are cache aligned
    - some well-tuned constants remove complexity (e.g. for the match model, a match-length-dependent bias neuron seems to be better than modelling the match probability - and it's faster).
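
    To illustrate the cache-line point above: packing everything a nibble coding step needs into one 64-byte aligned bucket means a single cache-line fetch serves the whole nibble. The structure below is only an illustration of that idea, not CMM4's actual layout; the field choices and sizes are assumptions.

    #include <cstdint>

    // Illustrative cache-aligned hash bucket (not CMM4's real layout).
    struct alignas(64) Bucket {            // 64 bytes = one cache line
        uint8_t  check;                    // hash confirmation byte
        uint8_t  rank0, rank1, rank2;      // recent symbols (rank list)
        uint32_t match_offset;             // where the last match started
        uint16_t counters[15];             // bit states of a 4-bit subtree
        uint8_t  pad[26];                  // keep the struct exactly 64 bytes
    };
    static_assert(sizeof(Bucket) == 64, "bucket must fill one cache line");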

    You might want to try this:

    - With a very early version (cmm2, unreleased) I used a static variable-length code decomposition (code lengths 4, 8, 12). This gave a *large* speed gain on compressible files while improving compression, so Huffman decomposition will give very good results! On more or less random files it hurts badly - I abandoned it.

    - I could nearly double the compression speed using a "first unary guess" based on symbol ranks. Compression dropped to around 11.3 MB on SFC. This loss was too high for me. Have a look at Shelwien's CM coder comparison for an explanation. This way you can identify *highly* redundant data, and the guess is correct >=95% of the time in most cases.
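
    Roughly, the guess step looks like the sketch below. This is not the CMM4 source - the rank handling is reduced to "last seen byte" and the bit coder is replaced by a callback - it only shows where the single cheap flag bit goes.

    #include <cstdint>
    #include <functional>

    // Sketch of the "first unary guess" idea (not the CMM4 source).
    // Before the flat decomposition, code one flag bit: "is the next byte
    // equal to the current rank-0 symbol?". On highly redundant data the
    // flag is almost always 1, so most bytes cost one well-predicted bit;
    // otherwise fall back to the normal flat 8-bit coding.
    struct RankGuess {
        uint8_t rank0 = 0;                       // last seen byte as guess

        // code_bit(bit, context_id) stands in for the real CM bit coder.
        void encode(uint8_t sym,
                    const std::function<void(int, int)>& code_bit) {
            int hit = (sym == rank0);
            code_bit(hit, 256);                  // the cheap "guess" flag
            if (!hit)                            // flat fallback
                for (int b = 7; b >= 0; --b)
                    code_bit((sym >> b) & 1, b);
            rank0 = sym;                         // trivial rank update
        }
    };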

    I mostly did "normal" optimizations - as I would do with any other program - profiling, etc. In addition I tried to make everything as cache efficient as possible.

    BTW - I think unary coding is superior to flat decomposition (except on random data). So if you would like to make something more efficient, try it; PPMonstr demonstrates it.

    For future CMM4 versions I'll only make small changes and maybe a simple data segmentation filter. The next versions will definitely use unary decomposition, since it allows very expensive bit modelling/coding techniques (roughly, you should only have to code about 3 bits per symbol).

    EDIT: Yesterday I noticed that the newest Intel C compiler is available for Linux (free for non-commercial use). It's even in my distro's package system. Its main advantage over GCC (at least for my code) is that it supports automatic vectorization. So you might want to use it for Linux binaries.

    EDIT2: I'm sure I could gain more speed by source uglification and flattening everything out - a la lpaqX, X>2. But I prefer structured and readable code.

  10. #10
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    > there are at most 4 nibble models (highest 4 orders) active + the match
    > model
    Do you mean an automatic model on/off mechanism? I tried that based on the neural network error, but I didn't succeed.

    > per nibble only the SSE context changes (all memory access after the first
    > bit within a nibble coding process is cached)
    Why didn't I try computing the SSE context once per nibble before? Good idea. As a note, all memory accesses are already cached per nibble.

    > only 13 SSE bins (three adjacent SSE contexts for a partial two bit
    > symbol: y, 0|y, 1|y fit into a cache line)
    Well, it seems I must work on SSE. My SSE implementation strongly affects speed and compression in opposite directions.

    > I'm storing additional stuff within the hash table (match offset, symbol
    > ranks, ...) to be more cache efficient
    Another interesting idea. Thanks!

    > profiled the code and removed needless function calls (a byte coding step
    > is altogether inlined into a flat function)
    I'm using Visual Studio 2008 and I don't know anything about its profiling options. As a note, all of my code is based on classes, and most of the functions are inlined.

    > all data structures are cache aligned
    This is already implemented.

    > some well-tuned constants remove complexity (e.g. for the match model, a
    > match-length-dependent bias neuron seems to be better than modelling the
    > match probability - and it's faster).
    I tuned almost all constants empirically, testing on several data types. A match-length-dependent bias neuron? Could you clarify that?

    > With a very early version (cmm2, unreleased) I used a static variable-
    > length code decomposition (code lengths 4, 8, 12). This gave a *large*
    > speed gain on compressible files while improving compression, so Huffman
    > decomposition will give very good results! On more or less random files it
    > hurts badly - I abandoned it.
    No, I don't want to use static variable-length codes, because LWCX is designed especially for semi-incompressible files. The remaining files will be compressed with LZ77+ROLZ+CM and GPU-based BWT; LWCX is a kind of fallback option.

    > I could nearly double the compression speed using a "first unary guess"
    > based on symbol ranks. Compression dropped to around 11.3 MB on SFC.
    > This loss was too high for me. Have a look at Shelwien's CM coder
    > comparison for an explanation. This way you can identify *highly* redundant
    > data, and the guess is correct >=95% of the time in most cases.
    I have read it before, but I couldn't understand it well.

    > I mostly did "normal" optimizations - as I would do with any other program -
    > profiling, etc. In addition I tried to make everything as cache efficient as
    > possible.
    I've done several optimizations: inlining, cache-efficient aligned data structures, some unrolled loops, etc. I think I must do some profiling.

    > BTW - I think unary coding is superior to flat decomposition (except on
    > random data). So if you would like to make something more efficient, try it;
    > PPMonstr demonstrates it.
    While working on the LWCX mode, I tried to write a compressor that is a kind of PPM+CM fusion. My plan was: collect statistics per byte, then compute the prediction of each order-n model, and finally mix all of the predictions together with a neural mixer. I believed this would be very fast, but I couldn't succeed in implementing it because of my poor knowledge of PPM. So I moved back to LWCX.

    > For future CMM4 versions I'll only make small changes and maybe a simple
    > data segmentation filter. The next versions will definitely use unary
    > decomposition, since it allows very expensive bit modelling/coding
    > techniques (roughly, you should only have to code about 3 bits per symbol).
    I think data segmentation will help greatly on most data. I tried Durilca's hidden option with my previous ROLZ implementation; the lesson I learned is that data segmentation hardly helps dictionary-based methods, due to their nature.

    > Yesterday I noticed that the newest Intel C compiler is available for Linux
    > (free for non-commercial use). It's even in my distro's package system. Its
    > main advantage over GCC (at least for my code) is that it supports
    > automatic vectorization.
    > So you might want to use it for Linux binaries.
    I may try it! I have Ubuntu running in a virtual machine on my laptop.

    Thanks again, toffer. I couldn't have made this compressor without your advice.

  11. #11
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    > I'm sure I could gain more speed by source uglification and flattening
    > everything out - a la lpaqX, X>2. But I prefer structured and readable code.
    I agree. I prefer well-designed classes and structures for future maintenance.

  12. #12
    Tester
    Black_Fox's Avatar
    Join Date
    May 2008
    Location
    [CZE] Czechia
    Posts
    471
    Thanks
    26
    Thanked 9 Times in 8 Posts
    On my test set and machine, (de)compression ran at about 290 KB/s; total size was 13,081,781 bytes.

    However, the decompressed data had different CRC32s (on EXE, TXT2, PDF and SAVE, if that helps)...

  13. #13
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Interesting. I checked with a CRC32 checksum before publishing. I will look into it tonight. Thanks!

  14. #14
    Tester
    Black_Fox's Avatar
    Join Date
    May 2008
    Location
    [CZE] Czechia
    Posts
    471
    Thanks
    26
    Thanked 9 Times in 8 Posts
    A few suggestions: the memory usage formula is a bit confusing - judging from Task Manager, is it 2^(N+1) + 3? A second note is the "separated file list".

  15. #15
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Hi again,
    I have fixed several things in BIT 0.2. Here is the list:
    - The buffered I/O flushing issue has been fixed. You should now be able to pack/unpack without any data loss.
    - The arithmetic coder has changed a bit (0.05% worse compression on SFC, but it should be a bit faster).
    - The memory formula has been fixed.
    - The "separated file list" has been fixed.

    Here is BIT 0.2b at the same link:
    http://www.osmanturan.com/bit02.zip

    Thanks Black Fox!

  16. #16
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    @Black Fox:
    When I look at your test results, it seems your test set is highly redundant. My current implementation is worse than even ZIP on highly redundant files. This is an expected result, because it has no match model - just orders 0-4 and 6.
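
    For the record, the match model I'm missing would do roughly the following: remember where the current context last occurred and predict the byte that followed it with high confidence, which is exactly what highly redundant files reward. A minimal sketch of the idea (illustrative only, not BIT's code; the order-3 hash, table size and update policy are assumptions):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Minimal match-model sketch (not BIT's code).
    class MatchModel {
        std::vector<uint32_t> table;   // context hash -> position after it
        uint32_t match_pos = 0;        // index of the predicted byte
        bool matching = false;
    public:
        explicit MatchModel(int bits = 22) : table(1u << bits, 0) {}

        // Call after each coded byte; hist holds all bytes seen so far.
        void update(const std::vector<uint8_t>& hist) {
            std::size_t n = hist.size();
            if (n < 3) return;
            uint32_t h = (uint32_t(hist[n-1]) | uint32_t(hist[n-2]) << 8
                          | uint32_t(hist[n-3]) << 16) * 2654435761u;
            h &= uint32_t(table.size() - 1);
            if (matching && hist[match_pos] == hist[n-1]) {
                ++match_pos;           // prediction was right, extend match
            } else {
                match_pos = table[h];  // (re)start from the last occurrence
                matching = match_pos != 0 && match_pos < n;
            }
            table[h] = uint32_t(n);    // the next byte will follow position n
        }

        // Predicted next byte, or -1 when no match is active.
        int predict(const std::vector<uint8_t>& hist) const {
            return (matching && match_pos < hist.size())
                       ? hist[match_pos] : -1;
        }
    };

    The mixer would then take the match prediction as one more, strongly weighted, input.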

  17. #17
    Moderator

    Join Date
    May 2008
    Location
    Tristan da Cunha
    Posts
    2,034
    Thanks
    0
    Thanked 4 Times in 4 Posts

    Quote Originally Posted by osmanturan View Post
    I have fixed several things in BIT 0.2. Here is BIT 0.2b at the same link:
    http://www.osmanturan.com/bit02.zip
    Thanks Osman!

  18. #18
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Another test with "-mem 9" on Encode Corpus:

    + Doom3.exe (5,427,200 bytes) -> 1,694,067 bytes @ 350 KB/sec (15.101 seconds)
    + Mech8.s3m (747,600 bytes) -> 338,707 bytes @ 262 KB/sec (2.776 seconds)
    + MPTRACK.exe (1,159,172 bytes) -> 499,970 bytes @ 331 KB/sec (3.432 seconds)
    + PariahInterface.utx (24,375,895 bytes) -> 5,996,489 bytes @ 579 KB/sec (41.076 seconds)
    + Photoshop.exe (19,533,824 bytes) -> 2,146,022 bytes @ 496 KB/sec (38.501 seconds)
    + Reaktor.exe (14,446,592 bytes) -> 2,146,022 bytes @ 619 KB/sec (22.823 seconds)
    + track5.wav (29,644,608 bytes) -> 23,061,944 bytes @ 415 KB/sec (69.748 seconds)
    + TracktorDJStudio3.exe (29,124,024 bytes) -> 5,359,983 bytes @ 605 KB/sec (46.987 seconds)

    Total: 45,822,243 bytes

  19. #19
    Member
    Join Date
    May 2008
    Location
    Estonia
    Posts
    405
    Thanks
    155
    Thanked 235 Times in 127 Posts

    timer bit02 a "backup.bit" -m lwcx -mem 9 -files "enwik9"
    Processed: 1000000000/ 1000000000 bytes (Speed: 482 KB/s)
    Elapsed Time: 2023.438 seconds
    Kernel Time = 4.750 = 00:00:04.750 = 0%
    User Time = 1953.953 = 00:32:33.953 = 96%
    Process Time = 1958.703 = 00:32:38.703 = 96%
    Global Time = 2023.703 = 00:33:43.703 = 100%

    Size: 189,881,180 bytes

    timer bit02 e "backup.bit" -files "enwik9"
    Processed: 1000000000/ 1000000000 bytes (Speed: 461 KB/s)
    Elapsed Time: 2116.422 seconds
    Kernel Time = 5.968 = 00:00:05.968 = 0%
    User Time = 2003.187 = 00:33:23.187 = 94%
    Process Time = 2009.156 = 00:33:29.156 = 94%
    Global Time = 2116.719 = 00:35:16.719 = 100%


    I had other things running as well, so the compress/decompress average is about 470 KB/s.


    Running: C2D E4500 @ 2.20GHz

  20. #20
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    @kaitz:
    Thanks a lot! What is your opinion about this compressor? Is it good or not? Also, do you have any suggestions?

    Another test (-mem 9):
    Bliss.bmp (1,440,054 bytes) -> 499,875 bytes @ 383 KB/sec (3.682 seconds)

  21. #21
    Member
    Join Date
    May 2008
    Location
    Estonia
    Posts
    405
    Thanks
    155
    Thanked 235 Times in 127 Posts

    Quote Originally Posted by osmanturan
    @kaitz:
    Thanks a lot! What is your opinion about this compressor?
    I like it.
    Quote Originally Posted by osmanturan
    Is it good or not?
    Speed is nice, though on older CPUs it may not be so nice. Compressing at 470 KB/s is good for me, and as this is a prerelease I hope to see improvements on all levels.
    Quote Originally Posted by osmanturan
    Also, do you have any suggestions?
    It would be nice to see the resulting output size.

  22. #22
    Moderator

    Join Date
    May 2008
    Location
    Tristan da Cunha
    Posts
    2,034
    Thanks
    0
    Thanked 4 Times in 4 Posts
    Here are my "-mem 9" test results...

    Compressing: reaktor.exe
    Processed: 14446592/ 14446592 bytes (Speed: 186 KB/s)

    Elapsed Time: 75.500 seconds

    Compressing: track5.wav
    Processed: 29644608/ 29644608 bytes (Speed: 117 KB/s)

    Elapsed Time: 247.032 seconds


    Compression seems very slow on my AMD Sempron 2400+, Windows XP SP2 machine.

  23. #23
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    @kaitz:
    Thanks!

    @LovePimple:
    Well, I think I must implement a profiling option that shows the processing time for each major operation, such as hash table access, mixing, prediction updating, I/O, etc. Then I could easily focus on the part that seems to be the bottleneck on your CPU.
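
    Something as simple as accumulating elapsed time per stage would do. A minimal sketch (not BIT's code; the stage names are just examples), with the caveat that the timed scopes should wrap fairly coarse regions, since timing every byte would distort the numbers:

    #include <chrono>
    #include <cstdio>

    // Minimal per-stage timing sketch (illustrative stage names).
    enum Stage { HASH_LOOKUP, MIXING, SSE_STAGE, CODING, IO, STAGE_COUNT };
    static double stage_seconds[STAGE_COUNT];

    struct ScopedTimer {                 // place one in each hot code block
        Stage s;
        std::chrono::steady_clock::time_point t0;
        explicit ScopedTimer(Stage s)
            : s(s), t0(std::chrono::steady_clock::now()) {}
        ~ScopedTimer() {
            stage_seconds[s] += std::chrono::duration<double>(
                std::chrono::steady_clock::now() - t0).count();
        }
    };

    void report() {
        const char* names[STAGE_COUNT] = {"hash", "mix", "sse", "coder", "io"};
        for (int i = 0; i < STAGE_COUNT; ++i)
            std::printf("%-6s %8.3f s\n", names[i], stage_seconds[i]);
    }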

  24. #24
    Tester
    Black_Fox's Avatar
    Join Date
    May 2008
    Location
    [CZE] Czechia
    Posts
    471
    Thanks
    26
    Thanked 9 Times in 8 Posts
    Quote Originally Posted by osmanturan View Post
    @Black Fox:
    When I look at your test results, it seems your test set is highly redundant.
    You're quite right about that: all the files can either be compressed nicely straight away, or are already compressed with something like gzip (which Precomp fixes), with MP3 being the not-very-compressible exception.

    Thanks for the swift update! The new version decompresses correctly. Compression was a bit faster at 307 KB/s; the resulting size is about 4,000 bytes larger - 13,085,734. Would you mind if I publish these results in my benchmark?

  25. #25
    Moderator

    Join Date
    May 2008
    Location
    Tristan da Cunha
    Posts
    2,034
    Thanks
    0
    Thanked 4 Times in 4 Posts
    Even at level 6, CCM is much faster than BIT on my machine.


    reaktor.exe > 1,666,383 bytes

    CCM 1.30a - Copyright (c) 2007-2008 C. Martelock - Jan 9 2008

    Allocated 786 MiB of memory.
    14108.00 KiB -> 1627.33 KiB (ratio 11.53%, speed 581 KiB/s)


    track5.wav > 17,022,283 bytes

    CCM 1.30a - Copyright (c) 2007-2008 C. Martelock - Jan 9 2008

    Allocated 786 MiB of memory.
    28949.81 KiB -> 16623.32 KiB (ratio 57.42%, speed 351 KiB/s)

  26. #26
    Programmer toffer's Avatar
    Join Date
    May 2008
    Location
    Erfurt, Germany
    Posts
    587
    Thanks
    0
    Thanked 0 Times in 0 Posts
    This reminds me of my first tries with CM. Just keep on improving!

    Compression is ok, but it's slow.

    Make sure that a byte/nibble coding step is completely inlined (one or two function calls per byte encoding step should be OK). I was sometimes surprised at what crappy code GCC generated (concerning inlining)... my code contains some force_inlines to fix this. And something compiler-related: you should try feedback optimization. Before that, do some profiling and identify the bottlenecks - as you said.
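
    For reference, the force_inline I mean is just the usual compiler-specific macro along these lines (a common idiom, not my exact code; the update function is only a dummy to show where it goes):

    // Common force-inline idiom (illustrative).
    #if defined(_MSC_VER)
    #  define FORCE_INLINE __forceinline
    #elif defined(__GNUC__)
    #  define FORCE_INLINE inline __attribute__((always_inline))
    #else
    #  define FORCE_INLINE inline
    #endif

    // Example: a tiny per-bit counter update is a prime candidate,
    // since it is called for every coded bit.
    FORCE_INLINE int update_counter(int state, int bit) {
        return state + ((bit << 12) - state) / 32;  // move towards 0 or 4096
    }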

  27. #27
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by Black_Fox View Post
    Would you mind if I publish these results in my benchmark?
    That would be very nice. Thanks a lot!

    @toffer:
    Thanks for your advice and tests! Your compressor sometimes reaches 2 MB/s while mine reaches only 750 KB/s on my laptop.

  28. #28
    Moderator

    Join Date
    May 2008
    Location
    Tristan da Cunha
    Posts
    2,034
    Thanks
    0
    Thanked 4 Times in 4 Posts
    Which compiler do you use?

  29. #29
    Programmer osmanturan's Avatar
    Join Date
    May 2008
    Location
    Mersin, Turkiye
    Posts
    651
    Thanks
    0
    Thanked 0 Times in 0 Posts
    As posted above, Visual Studio 2008. Actually, I don't like it.

  30. #30
    Moderator

    Join Date
    May 2008
    Location
    Tristan da Cunha
    Posts
    2,034
    Thanks
    0
    Thanked 4 Times in 4 Posts
    Sorry, didn't notice that. Perhaps you would prefer the GCC compiler.
