
Thread: NanoZip - a new archiver, using bwt, lz, cm, etc...

  1. #31
    Programmer
    Join Date
    Jul 2008
    Location
    Finland
    Posts
    102
    Thanks
    0
    Thanked 1 Time in 1 Post
    Quote Originally Posted by osmanturan View Post
    Forgive me, but it was really boring for me. Would you mind if I test and post the results later? At least some compression times with other compressors? If you are impatient, you can find other compressors' timings on my laptop elsewhere in the forum.
    Sure, the tests can wait, but the MC corpus is really boring for everybody, I think. Why not test some other data instead?

    Also, am I the only person who thinks the cm part is a kind of PAQ clone?
    As said, I'm one of them. The nz_cm is a PAQ clone. Also, optimum on text is a bzip2 clone, as is Blizzard. BALZ is a zip clone, etc. All compressors I've seen make only small contributions, apart from some significant ones like the work of Shkarin and Mahoney. Anyway, to make a comparison meaningful we need to see why it's being made; e.g. I'd like to see a comparison of nz_cm and lpaq to see what the relation is.

    --edit--

    Now that we have Nania's benchmark results, let's compare lpaq and nz_cm:

    Code:
    compressor           size         encode time  decode time
    NanoZIP 0.00 Alpha   368.095.984  1468,043     1358,078
    LPAQ9H               389.010.966  1714,332     1742,083
    Last edited by Sami; 6th July 2008 at 02:33.

  2. #32
    Member
    Join Date
    May 2008
    Location
    Antwerp , country:Belgium , W.Europe
    Posts
    487
    Thanks
    1
    Thanked 3 Times in 3 Posts
    Quote Originally Posted by toffer View Post
    Hi!

    I tried the CM part, too. It behaves *very* much like LPAQ. I tried using "nz a -cc -m120m -v".
    NZ -cc seems to compress better and is faster than lpaq:
    timer nz a -cc -txt -m1500m enwik8_nz000_cc_txt_m1500m enwik8
    NanoZip 0.00 alpha - Copyright (C) 2008 Sami Runsas - www.nanozip.net
    CPU: Intel(R) Core(TM)2 Quad CPU @ 2.40GHz, cores # 1, memory: 2048/2048 MB
    Compressor: nz_cm, using 1240 MB memory.
    Compressed 100 000 000 into 18 823 845 in 2m 29.36s, 654 KB/s
    IO-in: 0.09s, 1 025 MB/s. IO-out: 0.05s, 321 MB/s

    User Time = 149.714 = 00:02:29.714 = 99%
    Process Time = 150.540 = 00:02:30.540 = 100%
    Global Time = 150.495 = 00:02:30.495 = 100%
    --> 18 823 845 bytes
    Comparison with lpaq8e 9:
    timer lpaq8e 9 enwik8 enwik8_lpaq8e_9.lpaq8e
    100000000 -> 18982007 in 211.615 sec. using 1542 MB memory
    User Time = 214.017 = 00:03:34.017 = 100%
    Process Time = 215.998 = 00:03:35.998 = 101%
    Global Time = 212.083 = 00:03:32.083 = 100%
    --> lpaq8e 18 982 007 bytes (212.083 sec) versus 18 823 845 bytes (150.495 sec) for NanoZip v0.00

    Comparison with lpaq9i 9:
    timer lpaq9i 9 enwik8 enwik8_lpaq9i_9_nr2.lpaq9i
    100000000 -> 19124003 in 206.086 sec. using 1542 MB memory
    User Time = 209.119 = 00:03:29.119 = 101%
    Process Time = 210.882 = 00:03:30.882 = 102%
    Global Time = 206.639 = 00:03:26.639 = 100%
    lpaq9i: 19 124 003 bytes in 206.6 sec, worse than lpaq8e.

  3. #33
    Programmer
    Join Date
    Feb 2007
    Location
    Germany
    Posts
    420
    Thanks
    28
    Thanked 153 Times in 18 Posts

    Whoa!

    Quote Originally Posted by Sami View Post
    Well, that is false again. You cannot "find" the internals by permuting the text and making meaningless tests.
    Of course, I can't find the "internals" by using permuted text. But still, it was easy to see that it's highly probable that NZ uses text preprocessing. So the test setup wasn't that bad either.

    Quote Originally Posted by Sami View Post
    The "real" core of NZ, is what you see. The NZ bwt is as said, slightly faster than optimum1 with filters and bwt.
    Well, that's great. No need to get angry.

    Quote Originally Posted by Sami View Post
    The tester generates artificial data until he finds the worst case of the particular compressor, and then declares: now I have found the "real" core. This theory is bizarre.
    I'm sorry, but I think you're way overreacting. I did NOT generate data until I found a worst case. That's just ridiculous. I just checked ENWIK and that's it. I did not look for some strange bogus worst case file. I always posted the normal results, too, remember? I always said that NZ is great on "normal" data. I'm sorry that I upset you, but oh well...

  4. #34
    Programmer
    Join Date
    Jul 2008
    Location
    Finland
    Posts
    102
    Thanks
    0
    Thanked 1 Time in 1 Post
    Quote Originally Posted by Christian View Post
    Well, that's great. No need to get angry.

    I'm sorry, but I think you're way overreacting.
    Read me correctly. I always write with stoic calmness.

    I did NOT generate data until I found a worst case. That's just ridiculous. I just checked ENWIK and that's it. I did not look for some strange bogus worst case file.
    But that's exactly what you find when you run a compressor that is designed to choose the optimal method for the given input. If I had made NZ with no data in mind, with permuted book1 as the test case, NZ would perform radically differently, even though only the tunings would have changed. So could we therefore conclude that the differently tuned NZ has a better core? That would be the logical conclusion of generating the worst-case data.
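
    As an illustration of the kind of data detection being discussed (this is not NanoZip's actual code, which is closed; the classify_block heuristic and its 95% threshold are invented), a front-end can guess a block's type from byte statistics before dispatching it to a text-tuned or a generic back-end:

    Code:
    /* Illustration only -- not NanoZip's actual detection code.
     * A toy heuristic that guesses whether a block is text from byte
     * statistics, so a dispatcher could enable text-oriented models
     * and filters for it. */
    #include <stddef.h>

    enum block_type { BLOCK_TEXT, BLOCK_BINARY };

    static enum block_type classify_block(const unsigned char *buf, size_t n)
    {
        size_t printable = 0;
        for (size_t i = 0; i < n; i++) {
            unsigned char c = buf[i];
            if (c == '\n' || c == '\r' || c == '\t' || (c >= 0x20 && c < 0x7f))
                printable++;
        }
        /* Invented threshold: mostly printable bytes -> treat as text. */
        return (n > 0 && printable * 100 >= n * 95) ? BLOCK_TEXT : BLOCK_BINARY;
    }

    On an alphabet-permuted file such a check fails, so the text-specific path is skipped even though the core model is unchanged.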

  5. #35
    Programmer toffer's Avatar
    Join Date
    May 2008
    Location
    Erfurt, Germany
    Posts
    587
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Well, if you compare, don't forget that LPAQ doesn't include filters (OK, x86). NanoZip does...

  6. #36
    Programmer
    Join Date
    Jul 2008
    Location
    Finland
    Posts
    102
    Thanks
    0
    Thanked 1 Time in 1 Post
    Quote Originally Posted by toffer View Post
    Well, if you compare, don't forget that LPAQ doesn't include filters (OK, x86). NanoZip does...
    Since LPAQ was taken over by Alexander, it has apparently been tuned for text. Other weird tunings can be found in there as well, though I haven't studied it in detail.

  7. #37
    Programmer
    Join Date
    Feb 2007
    Location
    Germany
    Posts
    420
    Thanks
    28
    Thanked 153 Times in 18 Posts
    Quote Originally Posted by Sami View Post
    Read me correctly.
    Right.

    Quote Originally Posted by Sami View Post
    If I had made NZ with no data in mind, with permuted book1 as the test case, NZ would perform radically differently, even though only the tunings would have changed. So could we therefore conclude that the differently tuned NZ has a better core? That would be the logical conclusion of generating the worst-case data.
    And if I used CCM on permuted data, which still has the 'same' distribution as the original data, and it performed significantly worse, wouldn't it be right to assume that it is very probable that I somehow found a way to disable its preprocessors (besides the fact that it will hurt the binary decomposition in most cases, too)? Maybe I even (de)activate some parts of the algorithm, but still I learn something about its behavior. Anyway, I still don't understand why you are so upset.

    If I had known before that you're so sensitive to such tests I'd never have posted a reply. Again, your compressor is GREAT. Permuted data is BAD. And I'm sorry.


    ---- EDIT ----

    Btw., this is what I posted before.

    Quote Originally Posted by Christian View Post
    The permuted data isn't fair, of course. I was just interested in the impact and the quality of your filters (SBC did have very good filters, too). But as I said before, NanoZip rocks and I can't wait for benchmarks and the final...
    Quote Originally Posted by Christian View Post
    ... And now I'll stop running tests on such data because it's pointless for real-world tests. On normal data NanoZip performs excellently, as I've already written a couple of times.
    Last edited by Christian; 6th July 2008 at 03:18.

  8. #38
    Programmer
    Join Date
    Jul 2008
    Location
    Finland
    Posts
    102
    Thanks
    0
    Thanked 1 Time in 1 Post
    Quote Originally Posted by Christian View Post
    Right.

    And if I used CCM on permuted data, which still has the 'same' distribution as the original data, and it performed significantly worse, wouldn't it be right to assume that it is very probable that I somehow found a way to disable its preprocessors (besides the fact that it will hurt the binary decomposition in most cases, too)?
    Again, disabling some internal component will generate additional effects, such as the ones I've already mentioned. E.g. when you compress permuted book1 with -co, it's checked for all kinds of unnecessary stuff, and yet the result is interpreted as the "real" performance of some kind of internal component.

    Maybe I even (de)activate some parts of the algorithm, but still I learn something about its behavior.
    See above; what you're looking at isn't the behavior it is presumed to be.

    Anyway, I still don't understand why you are so upset.

    If I had known before that you're so sensitive to such tests I'd never have posted a reply. Again, your compressor is GREAT. Permuted data is BAD. And I'm sorry.
    You keep referring to imagined emotions as if they would be of help here. I think we should stick to the issue. This is a very important and old argument: that any compressor which differs from a simple statistical model with an entropy coder, when being "analyzed", must first be modified to find the "real" internals. As said, if I had considered permuted data, I would not have tuned NZ to make predictions about text looking like plain text and audio looking like audio. Then it would perform well on permuted text, and yet the internals would be the same. So I say it yet again: your test, which you said was meant to find the internals, did not find them, but instead found a worst case for the tunings, which in this case applies to permuted text.

  9. #39
    Programmer
    Join Date
    Feb 2007
    Location
    Germany
    Posts
    420
    Thanks
    28
    Thanked 153 Times in 18 Posts

    Quote Originally Posted by SAMI
    Again, disabling some internal component will generate additional effects, such as the ones I've already mentioned. E.g. when you compress permuted book1 with -co, it's checked for all kinds of unnecessary stuff, and yet the result is interpreted as the "real" performance of some kind of internal component.
    And I understand this. Did you read anywhere that I concluded, "Oh, NZ does worse on permuted data, so it must be worse in general"? No, because I did not write anything like that.

    Quote Originally Posted by SAMI
    See above; what you're looking at isn't the behavior it is presumed to be.
    OK, so due to all the failing data detection it's slower. So what? I did not try to find a worst case, nor did I conclude anything except that NZ uses text preprocessing and that RZM seems to work better than "-cO" on the permuted data.

    Quote Originally Posted by SAMI
    This is a very important and old argument: that any compressor which differs from a simple statistical model with an entropy coder, when being "analyzed", must first be modified to find the "real" internals.
    Well, I did try to find something out about NZ's behavior by using permuted data. I assumed that it has text preprocessing and audio, image and exe filters. Actually, it's no surprise, but the tests underlined this assumption. And even if this assumption were wrong (which it isn't - you verified it), I never claimed that "any compressor which differs from a simple statistical model with an entropy coder, when being "analyzed", must first be modified to find the "real" internals". So, you see, I'm not the one imagining things.

    But can we please end this tiring argument. It's leading nowhere.

  10. #40
    Programmer
    Join Date
    Jul 2008
    Location
    Finland
    Posts
    102
    Thanks
    0
    Thanked 1 Time in 1 Post
    Quote Originally Posted by Christian View Post
    "Oh, NZ does worse on permuted data, so it must be worse in general"? No, because I did not write anything like that.
    You create an imagined quote (above) and then say you didn't write anything like that? All this nonsense just makes the argument more tiring. I again suggest we stick to the facts and the issue.

    You say that you want to find information about the internals of NZ by permuting text. I've explained that you will find a worst case for the tunings. I can provide benchmarks using permuted and non-permuted text where there is no effect whatsoever on the speed of the NZ internals, quite contrary to what your tests supposedly found.

    Neither I nor probably anybody else has a problem with artificial data or with finding worst cases by generating artificial inputs. I've been saying that you cannot find the internals by doing so.

    I never claimed that "any compressor which differs from a simple statistical model with an entropy coder, when being "analyzed", must first be modified to find the "real" internals". So, you see, I'm not the one imagining things.
    No, it's quite accurate. Your theory is based on this, as I've pointed out many times.

    But can we please end this tiring argument. It's leading nowhere.
    No wonder you find it tiring.
    Last edited by Sami; 6th July 2008 at 04:08.

  11. #41
    Programmer
    Join Date
    Feb 2007
    Location
    Germany
    Posts
    420
    Thanks
    28
    Thanked 153 Times in 18 Posts
    ---- EDIT ----

    I removed my answer - there are no real facts to talk about. And, as I posted several times before, on normal/real data NZ is really great.
    Last edited by Christian; 6th July 2008 at 05:05.

  12. #42
    Member
    Join Date
    Sep 2007
    Location
    Denmark
    Posts
    878
    Thanks
    51
    Thanked 106 Times in 84 Posts
    It would be nice if you could implement some kind of ECM filter in NanoZip.

    http://www.neillcorlett.com/ecm/
    Not only does it shave off redundant data from CD images, but the new data structure is also easier to compress, as the data stream is not interrupted by ECC data.

    I think it would be the first archiver to do that.

    Also, I'm too lazy to look up the word, but what does "permuted data" mean?

  13. #43
    Programmer
    Join Date
    Jul 2008
    Location
    Finland
    Posts
    102
    Thanks
    0
    Thanked 1 Time in 1 Post
    Quote Originally Posted by Christian View Post
    ---- EDIT ----

    I removed my answer - there are no real facts to talk about. And, as I posted several times before, on normal/real data NZ is really great.
    I posted the real comparison of NZ internals in a separate thread, as this is becoming off-topic.

  14. #44
    Programmer
    Join Date
    Feb 2007
    Location
    Germany
    Posts
    420
    Thanks
    28
    Thanked 153 Times in 18 Posts
    Quote Originally Posted by SvenBent
    I'm too lazy to look up the word, but what does "permuted data" mean?
    Actually, I used a permuted alphabet, not permuted data. You can think of it like swapping arbitrary letters in the alphabet - e.g. a/j, d/k, ... The data distribution, available match positions, available match lengths and everything else stays the same - but algorithms which try to detect the data by its common nature will most probably fail and need to fall back to a default behavior. Additionally, coders with binary decomposition might get hurt a little, too.

    Of course, a permuted alphabet does not reflect the algorithm's performance under normal circumstances.
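
    A minimal sketch of such an alphabet permutation (an illustration, not the exact tool used in this thread): build a random byte-to-byte bijection and apply it to a file. Byte frequencies, match positions and match lengths are preserved; only the byte values are relabeled.

    Code:
    /* Sketch only -- not the exact tool used here. Applies a fixed random
     * byte-to-byte bijection to stdin and writes the result to stdout.
     * Frequencies, match positions and match lengths stay the same;
     * only the identity of each byte value changes. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned char perm[256];
        for (int i = 0; i < 256; i++)
            perm[i] = (unsigned char)i;

        srand(12345);                    /* fixed seed -> reproducible permutation */
        for (int i = 255; i > 0; i--) {  /* Fisher-Yates shuffle of the alphabet */
            int j = rand() % (i + 1);
            unsigned char t = perm[i]; perm[i] = perm[j]; perm[j] = t;
        }

        int c;
        while ((c = getchar()) != EOF)
            putchar(perm[(unsigned char)c]);
        return 0;
    }

    Compressing the permuted file next to the original then shows how much of the ratio comes from detection and filters rather than from the core model.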

  15. #45
    Programmer
    Join Date
    Jul 2008
    Location
    Finland
    Posts
    102
    Thanks
    0
    Thanked 1 Time in 1 Post
    Quote Originally Posted by SvenBent View Post
    It would be nice if you could implement some kind of ECM filter in NanoZip.

    http://www.neillcorlett.com/ecm/
    Not only does it shave off redundant data from CD images, but the new data structure is also easier to compress, as the data stream is not interrupted by ECC data.
    I have briefly considered ECC removal, but I think it works better as a separate tool. ECC layer removal is critical to have for compressing warez, but otherwise it makes no sense to have ECC included in disk images anyway, because it's redundant. I think warez compression is important, but not enough to justify building in the ECC removal.
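
    For reference, the idea behind an ECM-style filter: a raw Mode 1 CD sector is 2352 bytes, of which only 2048 bytes are user data; the sync pattern, header, EDC and ECC fields can be regenerated when unpacking. Below is a bare-bones sketch of the stripping step only (an illustration, not Corlett's ECM, which also handles Mode 2 forms and verifies that the dropped fields really are regenerable).

    Code:
    /* Sketch of the stripping idea only -- not Corlett's ECM. Reads raw
     * 2352-byte Mode 1 CD sectors from stdin and emits just the 2048
     * user-data bytes of each sector. A real filter would verify the
     * sync/EDC/ECC fields first and regenerate them on decode. */
    #include <stdio.h>

    #define RAW_SECTOR  2352
    #define DATA_OFFSET 16      /* 12-byte sync + 4-byte header */
    #define DATA_SIZE   2048

    int main(void)
    {
        unsigned char sec[RAW_SECTOR];
        while (fread(sec, 1, RAW_SECTOR, stdin) == RAW_SECTOR)
            fwrite(sec + DATA_OFFSET, 1, DATA_SIZE, stdout);
        return 0;
    }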

  16. #46
    Member
    Join Date
    Sep 2007
    Location
    Denmark
    Posts
    878
    Thanks
    51
    Thanked 106 Times in 84 Posts
    Quote Originally Posted by Sami View Post
    ECC layer removal is critical to have for compressing warez, but otherwise it makes no sense to have ECC included in disk images anyway, because it's redundant.
    Now you offend me by calling me a pirate :-D

    The reason I use ECM is not warez, as in pirated distribution of software,
    but to save space on my game image server. All of my current games are being ripped and put on my server for quicker access and also to free up wall space.

    400 CD/DVD games hanging on the wall does not look pretty in my wife's eyes.

    All my music and movies are going the same way, because it's so much easier to just turn on the media PC and select a video or some music than it is to go through the entire wall section of discs.

    With the way Media Center is going, and with MS Home Server, I believe that in the future more and more people will go this way instead of using the physical media.

  17. #47
    Member
    Join Date
    Sep 2007
    Location
    Denmark
    Posts
    878
    Thanks
    51
    Thanked 106 Times in 84 Posts
    I just tested decompression time, and it's in the range of RZM on cue/bin files with data and audio tracks, and the compression ratio is much better.

    I just used -c0.

    I hope further development will focus more on ratio and decompression time rather than ratio and compression time.
    Decompression time and requirements are 10 times more important to me than compression time.

    That's also why I'm using a lot of brute forcing when compressing.

  18. #48
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    With all the hot polemic I forgot to ask: did you plan to release NanoZip as an open-source project? Taking into account the cross-platform nature, etc.

    Can you rename the algorithms to something more user-friendly? For example: LZ77, BWT, PPM, CM, or Asymmetric, Fast, Efficient, Maximum, etc. Check out WinRK!

  19. #49
    Programmer
    Join Date
    Jul 2008
    Location
    Finland
    Posts
    102
    Thanks
    0
    Thanked 1 Time in 1 Post
    Quote Originally Posted by SvenBent View Post
    Now you offend me by calling me a pirate :-D
    ...was not intended. And as said, warez compression is important.

    The reason I use ECM is not warez, as in pirated distribution of software,
    but to save space on my game image server.
    I'm just using the industry definition, which is that you are a pirate even if you "pirate" the software for yourself. That's why the ECC copy protection is there, to shield you from using the piece you have paid for. In Finland there is a law which defines you as a pirate if you buy an empty CD/DVD, so you pay an extra pirate tax for each CD/DVD, which goes to the entertainment industry to compensate for your predicted piracy.

    I just tested decompression time, and it's in the range of RZM on cue/bin files with data and audio tracks, and the compression ratio is much better.
    Nice. The decompression time is what I hope makes optimum2 worthwhile.

    I hope further development will focus more on ratio and decompression time rather than ratio and compression time.
    Decompression time and requirements are 10 times more important to me than compression time.
    The optimum2 is for this, but I now hope that I can finish optimum1 as a fast version of the LZT, because currently there is a large gap between lzhds and optimum2.

    In the future, perhaps a brute-force-like optimum2 mode is also possible. It would not affect the decompression time at all. It might be useful for installers etc.

  20. #50
    Programmer
    Join Date
    Jul 2008
    Location
    Finland
    Posts
    102
    Thanks
    0
    Thanked 1 Time in 1 Post
    Quote Originally Posted by encode View Post
    With all the hot polemic I forgot to ask: did you plan to release NanoZip as an open-source project? Taking into account the cross-platform nature, etc.
    I don't plan to release the source. I deeply respect the people who do so. I personally find it inspiring to work on the source by myself only.

    Can you rename the algorithms to something more user-friendly? For example: LZ77, BWT, PPM, CM, or Asymmetric, Fast, Efficient, Maximum, etc. Check out WinRK!
    The reason I tried to create the visualization (in the GUI) instead of using names such as "Fast" etc. is that the speed might be difficult to capture in a name. Also, referring to the speed in the name suggests that the same code is at work with different settings (like zip/bzip2). NZ was about having multiple compressors and (almost) no settings at all.

  21. #51
    Member Zonder's Avatar
    Join Date
    May 2008
    Location
    Home
    Posts
    55
    Thanks
    20
    Thanked 6 Times in 5 Posts
    Superb compression; sadly, the speed is on the edge of universality. And I really like the concept of the GUI. I hope there is room for a speed increase or multi-threading!?

    Code:
    Testset1: ~4Gb Game (45653 files) (4.8% mono-pcm)
    Machine:  Core2 T5500, Dual-channel DDR2-666 2Gb
    
    Ratio   //   Comp.   //  Decomp.        // Archiver  
    33.466% //   146kb/s //  1102kb/s(no/o) // NanoZip v0.0a -cO -m625m -r -nt -np
    34.341% //  1180kb/s //  9486kb/s(no/o) // *WinArc 0.50a-2008-02-08 -mx -mc-rep -mc-delta
    34.343% //  1172kb/s //  9603kb/s(no/o) // WinArc 0.50a-2008-02-08 -mx -mc-rep -mc-delta -ld=512mb
    34.417% //   341kb/s //  2097kb/s(no/o) // WinRK 3.0.3 Rolz3 Normal 
    34.642% //  1402kb/s // 15760kb/s(no/o) // 7-zip 4.56 Ultra d128m lc=4 lp=2
    35.023% //  1414kb/s // 16285kb/s(no/o) // 7-zip 4.56 Ultra d128m
    35.088% //  1532kb/s // 16151kb/s(no/o) // 7-zip 4.56 Ultra  d64m
    35.436% //   461kb/s //  ~//-kb/s       // *CMM4 v0.1d 47 (stored-split 7z)
    35.594% //  1002kb/s //  ~//-kb/s       // *CCMx v1.30a 7 (stored 7z)
    35.676% //  1011kb/s //  ~//-kb/s       // CCMx v1.30a 5 (stored 7z)
    35.700% //   547kb/s //  5512kb/s       // RZM v0.7e (stored-split 7z)
    35.809% //  1243kb/s //  ~//-kb/s       // CCM v1.30a 5 (stored 7z)
    37.541% //   919kb/s //  2115kb/s(no/o) // WinRK 3.0.3 Rolz3 Fastest
    38.285% //   830kb/s //  3205kb/s(no/o) // NanoZip v0.0a -co -m625m -r -nt -np
    39.407% //   993kb/s //  2563kb/s       // Sbc 0.970r2 -m3 -b62
    40.423% //   670kb/s //  7171kb/s       // Balz v1.7 ex (stored 7z)
    41.060% //  1966kb/s // 15327kb/s(no/o) // WinRar v3.71 Best Solid

  22. #52
    Programmer
    Join Date
    Jul 2008
    Location
    Finland
    Posts
    102
    Thanks
    0
    Thanked 1 Time in 1 Post
    Quote Originally Posted by Zonder View Post
    Superb compression; sadly, the speed is on the edge of universality. And I really like the concept of the GUI. I hope there is room for a speed increase or multi-threading!?
    Thanks for testing. So much to do. I hope I can try multithreading in the future. If nothing else, it can be used to increase compression while not slowing things down. Great that you like the GUI. I have never used GUI archivers, so I just put this thing together without knowing how they are supposed to work or feel.

  23. #53
    Member
    Join Date
    May 2008
    Location
    brazil
    Posts
    163
    Thanks
    0
    Thanked 3 Times in 3 Posts
    Quote Originally Posted by encode View Post
    With all the hot polemic I forgot to ask: did you plan to release NanoZip as an open-source project? Taking into account the cross-platform nature, etc.

    Can you rename the algorithms to something more user-friendly? For example: LZ77, BWT, PPM, CM, or Asymmetric, Fast, Efficient, Maximum, etc. Check out WinRK!
    About this, Ilia: release BALZ as GPLv3 code.

    GPLv3 does not permit people to close the software.
    Last edited by lunaris; 7th July 2008 at 02:46.

  24. #54
    Member
    Join Date
    May 2008
    Location
    Antwerp , country:Belgium , W.Europe
    Posts
    487
    Thanks
    1
    Thanked 3 Times in 3 Posts

    NanoZip v0.00: "Archive corrupted. Error decoding (code 109)"

    It seems NZ in mode -co and -cO is not working well for me.
    When I try to archive a directory in NZ -co or -cO mode, I get "Archive corrupted. Error decoding (code 109)" when testing the archive.
    The other modes f, F, d, D and c work OK (no errors when testing). I tried and retried it a dozen times, with different memory settings, but to no avail.
    On other directories and files I've tested, all modes work just fine.
    It seems it has something to do with (the combination of) these specific files.

    nz a -co -m512m shelwdata_nz00_m512m_co .\shelwdata\*
    NanoZip 0.00 alpha - Copyright (C) 2008 Sami Runsas - www.nanozip.net
    CPU: Intel(R) Core(TM)2 Quad CPU @ 2.40GHz, cores # 1, memory: 2048/2048 MB
    *** THIS IS AN EARLY ALPHA VERSION *** USE ONLY FOR TESTING ***
    Archive file: shelwdata_nz00_m512m_co.nz
    Compressor: nz_optimum1, using 781 MB memory.
    Compressed 379 710 414 into 122 397 855 in 3m 17.54s, 1 877 KB/s
    IO-in: 0.76s, 471 MB/s. IO-out: 1.49s, 78 MB/s

    nz t shelwdata_nz00_m512m_co
    NanoZip 0.00 alpha - Copyright (C) 2008 Sami Runsas - www.nanozip.net
    CPU: Intel(R) Core(TM)2 Quad CPU @ 2.40GHz, cores # 1, memory: 2048/2048 MB
    *** THIS IS AN EARLY ALPHA VERSION *** USE ONLY FOR TESTING ***
    Archive file: shelwdata_nz00_m512m_co.nz
    Compressor: nz_optimum1, using 516 MB memory.
    shelwdata/finn_lst.txt 219 MB
    Archive corrupted. Error decoding (code 109)

    nz a -co -m250m shelwdata_nz00_m250m_co .\shelwdata\*
    NanoZip 0.00 alpha - Copyright (C) 2008 Sami Runsas - www.nanozip.net
    CPU: Intel(R) Core(TM)2 Quad CPU @ 2.40GHz, cores # 1, memory: 2048/2048 MB
    *** THIS IS AN EARLY ALPHA VERSION *** USE ONLY FOR TESTING ***
    Archive file: shelwdata_nz00_m250m_co.nz
    Compressor: nz_optimum1, using 332 MB memory.
    Compressed 379 710 414 into 122 881 666 in 3m 10.17s, 1 950 KB/s
    IO-in: 1.55s, 232 MB/s. IO-out: 1.86s, 63 MB/s

    nz t shelwdata_nz00_m250m_co
    NanoZip 0.00 alpha - Copyright (C) 2008 Sami Runsas - www.nanozip.net
    CPU: Intel(R) Core(TM)2 Quad CPU @ 2.40GHz, cores # 1, memory: 2048/2048 MB
    *** THIS IS AN EARLY ALPHA VERSION *** USE ONLY FOR TESTING ***
    Archive file: shelwdata_nz00_m250m_co.nz
    Compressor: nz_optimum1, using 259 MB memory.
    shelwdata/finn_lst.txt 219 MB
    Archive corrupted. Error decoding (code 109)

    nz l shelwdata_nz00_m250m_co
    NanoZip 0.00 alpha - Copyright (C) 2008 Sami Runsas - www.nanozip.net
    CPU: Intel(R) Core(TM)2 Quad CPU @ 2.40GHz, cores # 1, memory: 2048/2048 MB
    *** THIS IS AN EARLY ALPHA VERSION *** USE ONLY FOR TESTING ***
    Archive file: shelwdata_nz00_m250m_co.nz
    shelwdata/gits2op_mkv.mkv, 61 MB [00f012c9]
    shelwdata/cyg_bin.tar, 50 MB [73ad61d9]
    shelwdata/cyg_lib.tar, 91 MB [ce61ceba]
    shelwdata/finn_lst.txt, 30 MB [ccabae92]
    shelwdata/bookstar.txt, 34 MB [59554ae0]
    shelwdata/enwik8.txt, 95 MB [9a616704]

    When I remove the file "finn_lst.txt" (the error occurs while processing this file) from the directory, all goes well.
    Creating an archive with this file only also goes well.
    nz q .... (quick test) gives no errors.
    What exactly does nz q ... do? A CRC/hash of the complete archive or something similar?
    @Sami: does this "error 109" ring a bell for you?

    System: C2Q E6600 2.4 GHz (not OC'd) / 4 GB RAM / Vista Prem. 32-bit.

    Thanks in advance !
    Last edited by pat357; 7th July 2008 at 05:22. Reason: changed title

  25. #55
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,423
    Thanks
    223
    Thanked 1,052 Times in 565 Posts
    I'd like to remind everyone that these files are available here: http://ctxmodel.net/sh_samples_1.rar

  26. #56
    Programmer
    Join Date
    Jul 2008
    Location
    Finland
    Posts
    102
    Thanks
    0
    Thanked 1 Time in 1 Post
    Quote Originally Posted by pat357 View Post
    It seems NZ in mode -co and -cO is not working well for me.
    When I try to archive a directory in NZ -co or -cO mode, I get "Archive corrupted. Error decoding (code 109)" when testing the archive.
    Thanks. I downloaded the files and I can confirm this. I'll be working on it.

    nz q .... (quick test) gives no errors.
    What exactly does nz q ... ? A CRC/hash from the complete archive or something similar ?
    q just checks the archive on a high level. Currently it doesn't see much, but later (after I've added things) it will be an easy way to check whether the archive is complete (e.g. after downloading), etc. It doesn't calculate any checksums.
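
    To illustrate the difference (NanoZip's container format is not public, so the MAGIC and TRAILER fields below are entirely invented): a quick check only inspects a few structural fields without decoding anything, whereas a full test decodes the data and verifies per-file checksums.

    Code:
    /* Purely hypothetical -- NanoZip's real format is not public; MAGIC and
     * TRAILER are invented. Shows the idea of a high-level completeness
     * check: look at a few header/trailer bytes, decode nothing,
     * compute no checksums. */
    #include <stdio.h>
    #include <string.h>

    static int quick_check(const char *path)
    {
        static const unsigned char MAGIC[4]   = { 'N', 'Z', 0x01, 0x00 };  /* invented */
        static const unsigned char TRAILER[4] = { 'E', 'N', 'D', '!' };    /* invented */
        unsigned char head[4], tail[4];

        FILE *f = fopen(path, "rb");
        if (!f)
            return 0;
        int ok = fread(head, 1, 4, f) == 4
              && fseek(f, -4L, SEEK_END) == 0
              && fread(tail, 1, 4, f) == 4
              && memcmp(head, MAGIC, 4) == 0
              && memcmp(tail, TRAILER, 4) == 0;  /* a truncated download fails here */
        fclose(f);
        return ok;
    }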

  27. #57
    Member Zonder's Avatar
    Join Date
    May 2008
    Location
    Home
    Posts
    55
    Thanks
    20
    Thanked 6 Times in 5 Posts
    How big files does NZ support? Because NZ keeps endlessly compressing one 3.9 GB file.

    ex. :
    ./testset1.7z 8 870 MB, 0% ( with -cf)
    or
    ./testset1.7z 25 GB, 0% ( with -cd)

    NZ just keeps compressing nonexistent gigabytes

  28. #58
    Programmer
    Join Date
    Jul 2008
    Location
    Finland
    Posts
    102
    Thanks
    0
    Thanked 1 Time in 1 Post
    Quote Originally Posted by Zonder View Post
    How big files does NZ support? Because NZ keeps endlessly compressing one 3.9 GB file.

    ex. :
    ./testset1.7z 8 870 MB, 0% ( with -cf)
    or
    ./testset1.7z 25 GB, 0% ( with -cd)

    NZ just keeps compressing nonexistent gigabytes
    Thanks for finding this one. I hadn't had time to test the large files, but presumed everything worked.

  29. #59
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,511
    Thanks
    746
    Thanked 668 Times in 361 Posts
    Knowing you, I hope that you will make a practical archiver - something like SBC + GUI + archive updating + even better efficiency.

  30. #60
    Member Zonder's Avatar
    Join Date
    May 2008
    Location
    Home
    Posts
    55
    Thanks
    20
    Thanked 6 Times in 5 Posts
    Tested -cO with double the RAM (-m1250m) and got much faster compression/decompression speed than with the -m625m switch. Is this normal or did I mix something up? Maybe I'll rerun the -m625m test!?

    Code:
    33.251% //   214kb/s //  1628kb/s(no/o) // *NanoZip v0.0a -cO -m1250m -r -nt -np
    33.466% //   146kb/s //  1102kb/s(no/o) // NanoZip v0.0a -cO -m625m -r -nt -np


