Page 3 of 7 - Results 61 to 90 of 201

Thread: LZA archiver

  1. #61
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 778 Times in 485 Posts
    Updated 10GB benchmark. I tested lza.exe 0.51 under Wine in Ubuntu (system 4) with options -mx5 -h6 -b6 -r -s for maximum compression possible in the 32 bit version. The 64 bit version would allow -h7 -b7 but the 32 bit version would run out of memory. So the compression is a little worse. http://mattmahoney.net/dc/10gb.html

    Code:
     Size       Compress  Extract Sys  Program version  Options
     ---------- -------- -------- ---  ---------------  --------
     3422372179     4202      271   4  lza_x64 0.10     -mx5 -h7 -b7 -t1 -r -s
     3522585547     3735      325   4  lza 0.51         -mx5 -h6 -b6 -t1 -r -s
    Still does not restore file dates or empty directories.

    Edit: compression is better and faster on LTCB even though less memory is used. http://mattmahoney.net/dc/text.html#2430
    Last edited by Matt Mahoney; 9th September 2014 at 22:38.

  2. The Following User Says Thank You to Matt Mahoney For This Useful Post:

    Nania Francesco (9th September 2014)

  3. #62
    Member
    Join Date
    Dec 2012
    Location
    japan
    Posts
    150
    Thanks
    30
    Thanked 59 Times in 35 Posts
    xezz - What command line exactly do you use?
    Code:
    lza a archive in
    lza a -m1 archive in
     :
    lza a -m5 archive in
    lza a -mx1 archive in
     :
    lza a -mx5 archive in

  4. #63
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts
    For example:

    lza a -v -mx5 -b5 -h5 archive.lza c:\test\FP.LOG (single file normal)

    lza a -r -v -mx5 -b5 -h5 archive.lza c:\test\ (recursive normal)

    lza a -r -s -v -mx5 -b5 -h5 archive.lza c:\test\ (recursive Solid)
    Last edited by Nania Francesco; 10th September 2014 at 16:01.

  5. #64
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts
    ENJOY !

    LZA 0.51 64-bit version for ultra compression!
    Option -h extended to -h9 (requires 4 GB of memory)


    ________________________________________________________________________________________
    HFCB (Huge Files Compression Benchmark), Intel Core i7 920 2.67 GHz, 6 GB RAM:
    Options: a -r -s -v -mx5 -b7 -h9
    Compressed 4244176896 bytes to 1069276453 bytes
    Kernel Time = 8.096 = 00:00:08.096 = 0%
    User Time = 1442.775 = 00:24:02.775 = 91%
    Process Time = 1450.871 = 00:24:10.871 = 92%
    Global Time = 1576.640 = 00:26:16.640 = 100%
    Attached Files
    Last edited by Nania Francesco; 10th September 2014 at 23:18.

  6. The Following User Says Thank You to Nania Francesco For This Useful Post:

    Stephan Busch (10th September 2014)

  7. #65
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 778 Times in 485 Posts
    Much better compression on LTCB. http://mattmahoney.net/dc/text.html#2348

    Edit: Pareto frontier for decompression speed on 10GB (sys 4). http://mattmahoney.net/dc/10gb.html
    Last edited by Matt Mahoney; 11th September 2014 at 03:42.

  8. The Following User Says Thank You to Matt Mahoney For This Useful Post:

    Nania Francesco (10th September 2014)

  9. #66
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts
    Regarding compression rate, both tANS and rANS can work as close to the theoretical maximum (Shannon entropy) as you want.
    Regarding understanding, the basic ANS concept is really simple:
    - spread symbols: assign a symbol to each natural number in a more or less uniform way with p[s] densities (a finite range like {L, ..., 2L-1} in practice),
    - enumerate symbol appearances,
    - now the encoding step is: to add the information from symbol s to the information in state/number x, go to the x-th appearance of s (in analogy to x -> 2x + s in the standard binary system).
    Now add renormalization to stay in the {L, ..., 2L-1} range and you have tANS, like Yann's FSE.
    Mixing different probability distributions, including binary ones (with speed like arithmetic coding), is not a problem.

    How much time does LZA spend on entropy decoding? You could easily divide it by 2 or 3 ... leaving LZHAM behind.
    I am very interested in trying FSE and rANS as a fast encoder in LZA, but the problem is that I would have to rewrite the code again, because as you can see these programs are designed to compress whole buffer sections, not a single symbol at a time as I would need. I hope I was clear: I need a version that is addressed directly to the symbol, with an output buffer that is updated in real time.

    Pseudo code

    rANS old (block interface):
    encodeblock(*inbuf, &insize, *outbuf, &outsize) { ... }

    rANS in real time (per-symbol interface):

    unsigned char *output;
    encodesymbol(int symbol, &freq, ...) { /* encode ... */ *output++ = x; }
    Last edited by Nania Francesco; 14th September 2014 at 02:38.

  10. #67
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    858
    Thanks
    450
    Thanked 255 Times in 103 Posts
    Hi Nania

    You may have a look at the advanced FSE API.
    It breaks down the job into smaller components, down to unitary elements, such as single symbol encoding/decoding step.

    https://github.com/Cyan4973/FiniteSt...ter/fse.h#L114
    Last edited by Cyan; 14th September 2014 at 14:51.

  11. The Following User Says Thank You to Cyan For This Useful Post:

    Nania Francesco (14th September 2014)

  12. #68
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    682
    Thanks
    215
    Thanked 213 Times in 131 Posts
    Hi Nania,
    So ANS has the inconvenience of encoding in the backward direction (so that decoding runs forward). Decoding works as usual, but you rather don't want to drive the encoder directly, symbol by symbol.
    Instead, you should first write into a buffer (e.g. 32kB - 1MB) a sequence of pairs (alphabet index, index of the symbol within that alphabet).
    Some of the alphabets can correspond to the binary case, using the tabled variant for different quantizations (or uABS for high-precision quantization, as in Matt's fpaqc).
    The rest of the alphabets can be larger (and the decoder must be able to determine which alphabet comes next).
    The encoder should first build tables for all the alphabets used, then process the buffer in the backward direction.

    FSE is intended for the simpler case: a single large alphabet (it can be extended), while LZA seems to mix in binary choices (?).
    Please give some details of your entropy coding case: do you want binary and large alphabets in a single stream? How frequent are they? How many large alphabets do you use (and how large?), and how often do you update their probability distributions? What are the typical probabilities in your binary choices? (Do you use tiny ones?)
    Last edited by Jarek; 14th September 2014 at 10:11.

  13. The Following User Says Thank You to Jarek For This Useful Post:

    Nania Francesco (14th September 2014)

  14. #69
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts
    @ANS and FSE developers!

    Did I mention that with version 1.0 I will release LZA as open source? My idea is to work with others for the first time on the implementation of a program that could really become a benchmark in data compression, thanks to a group of programmers. I would like to propose two solutions; you choose:
    1) I create a version of LZA (like a single-file LZMA compressor) addressed to single-file compression, open source (for you), that you can freely edit and send back to me when you feel it is complete and stable (fast solution);
    2) We continue trying to rebuild the code from pseudo code and advice (slow solution)!

    Let me know which of the two ways we choose!

  15. #70
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts
    At the moment, in testing: LZA with FSE. Partial results are good in compression ratio and optimal in decompression efficiency!
    Last edited by Nania Francesco; 15th September 2014 at 02:52.

  16. The Following 2 Users Say Thank You to Nania Francesco For This Useful Post:

    Cyan (15th September 2014),Jarek (15th September 2014)

  17. #71
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts
    Implementing the program with FSE does not seem to be a problem, but the BSD 2-clause license of FSE certainly raises a question. Cyan, if I decide to put the code in my program, what do I have to comply with, given that I do not want to release an open source version before version 1.0? Is it enough to put the C source code of FSE in the LZA archive zip?
    I would also prefer the program to be copyrighted by all those who participated in the code, which in this case means you and me, Cyan.
    Of course, if the requirements would limit my research and the efficiency of the program, I would have to give up using external code.

    Could you let me know?
    Last edited by Nania Francesco; 15th September 2014 at 12:52.

  18. #72
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    858
    Thanks
    450
    Thanked 255 Times in 103 Posts
    BSD is one of the most permissive licenses.
    You have almost nothing to do: just provide a copy of the "LICENSE" file, which includes the author's name.
    Your source code can remain closed as long as you wish.

  19. The Following User Says Thank You to Cyan For This Useful Post:

    Nania Francesco (15th September 2014)

  20. #73
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts
    Okay, thanks Cyan for the explanation. In addition to the license file, from the next version LZA will carry the copyright of a group of two people, initially composed of me and the author of FSE. Of course I hope the number of contributors holding the copyright will grow. Before releasing version 1.0 we must fix the problem of saving file and folder attributes. When the 0.90 release is made stable, the working group will decide together which license will be granted and the details. I don't want LZA to remain only my draft, so you can also decide to change the program's name.

    I still thank the creators of rANS, which I had not wanted to take into account only for lack of opportunity, and I can say that it is super fast!

    Yann, could you give me an email address where I can send you the source code, to let you see the project and above all improve compression efficiency?
    Last edited by Nania Francesco; 15th September 2014 at 15:26.

  21. #74
    Member
    Join Date
    Sep 2008
    Location
    France
    Posts
    858
    Thanks
    450
    Thanked 255 Times in 103 Posts
    sent in PM

  22. The Following User Says Thank You to Cyan For This Useful Post:

    Nania Francesco (15th September 2014)

  23. #75
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts
    I sent the source to your address. Let me know if you have received it!

  24. #76
    Member
    Join Date
    Sep 2007
    Location
    Denmark
    Posts
    864
    Thanks
    46
    Thanked 104 Times in 82 Posts
    Really looking forward to LZA with FSE. Any news on ECM data filtering?

    If ECM just supported piping it would be so much nicer.

  25. #77
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts
    I solved the problem of empty folders, which are now compressed normally. I solved the problem of file attributes, which are now applied according to the standards of the most common compressors (RAR, 7-Zip, ZPAQ). I still haven't solved the problem of Unicode filenames, but I want to before version 1.0. I think LZA represents a great opportunity both for me and for anyone who wants to participate; Yann has joined the group and is actively working on the project. I honestly don't know how much more we can improve, but you will see with the next version that the results are encouraging. I'm working on the structure of the archive files, which must be compact but easily accessible (those who want to can create a GUI version). I would appreciate it if Igor Pavlov and Matt Mahoney could consider including version 1.0 among their supported compression types, considering that our goals are essentially the same (free software).

  26. #78
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 778 Times in 485 Posts
    Here is how I handle unicode filenames in Windows in zpaq. Windows uses UTF-16, but I store filenames in UTF-8 format for compatibility with Linux and I have functions to convert between them (std::wstring to std::string or wchar_t* to char*). These are functions wtou() and utow() in zpaq.cpp. Then to make all Windows functions take and return UTF-16 strings I have

    #define UNICODE
    #include <windows.h>

    To print filenames to stdout or stderr use WriteConsole() instead of fprintf(). To check whether stdout or stderr is redirected, use _get_osfhandle() and GetFileType(), because WriteConsole() does not work with redirected output (see printUTF8()).

    To read command line arguments use GetCommandLine() and CommandLineToArgvW() (see main()). This also prevents g++ from expanding wildcards so you have to handle that yourself. You would need to do that anyway with MSVC++.

    This is all surrounded by "#ifndef unix" everywhere. unix is automatically defined in Linux but you need -Dunix to compile in Mac OS/X. Linux uses UTF-8 with normal NUL terminated strings, so the normal stdio.h and string.h functions will work.

  27. The Following User Says Thank You to Matt Mahoney For This Useful Post:

    Nania Francesco (13th October 2014)

  28. #79
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 660 Times in 354 Posts
    fwprintf()/_tprintf() and the %ls format specifier work with wide strings

  29. The Following User Says Thank You to Bulat Ziganshin For This Useful Post:

    Nania Francesco (13th October 2014)

  30. #80
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 778 Times in 485 Posts
    fwprintf() does not print UTF-16 correctly or even consistently with different compilers (g++ and MSVC). When WriteConsole() is asked to display a character that is not in the current code page, it will display something that looks similar, like removing the accent from an accented character. Here is a small program that illustrates the difference. It prints the characters 0 through 512 in 3 columns using WriteConsole(), printf(), and fwprintf() respectively. Notice that for 16..255, fwprintf() displays the same as printf(), but different from WriteConsole for 128..255. For 256 and higher, fwprintf() produces no output in g++ and outputs '?' in MSVC, but WriteConsole() handles these correctly. The 2 images show the output of MSVC and g++ respectively.

    Code:
    // public domain
    #define UNICODE
    #include <stdio.h>
    #include <windows.h>
    #include <io.h>
    
    int main() {
      for (int i=0; i<512; i+=16) {
        char s[17]={0};
        wchar_t w[17]={0};
        DWORD n=0;
        for (int j=0; j<16; ++j) s[j]=i+j, w[j]=i+j;
        printf("%6d  (", i);
        WriteConsole((HANDLE)_get_osfhandle(fileno(stdout)), w, 16, &n, 0);
        printf(")  (%s)", s);
        fwprintf(stdout, L"  (%s)\n", w);
      }
      return 0;
    }
    Attached: msvc.PNG and gcc.PNG (screenshots of the program's output under MSVC and g++).

  31. The Following 2 Users Say Thank You to Matt Mahoney For This Useful Post:

    Bulat Ziganshin (14th October 2014),Nania Francesco (14th October 2014)

  32. #81
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 660 Times in 354 Posts
    MSVC wprintf details: look at the section "Fixing the Wide String Format and Conversion Specifiers"

  33. The Following 2 Users Say Thank You to Bulat Ziganshin For This Useful Post:

    Matt Mahoney (17th October 2014),Nania Francesco (17th October 2014)

  34. #82
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts
    Released latest version 0.61 [FSE]!

    - Added new FSE superfast core!
    - Added compression of empty folders
    - Added date and time for files and folders
    - Current limit for a single file is 33 GB [removed in v. 1.0]
    - Corrected bug from v. 0.60 (single file compression error)

    download from:
    http://heartofcomp.altervista.org/
    Last edited by Nania Francesco; 19th October 2014 at 00:06.

  35. The Following 2 Users Say Thank You to Nania Francesco For This Useful Post:

    Cyan (19th October 2014),Jarek (18th October 2014)

  36. #83
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 778 Times in 485 Posts
    Edit: I get errors trying to compress the 10GB benchmark in Ubuntu/Wine (system 4). (I guess -t was removed):

    Code:
    matt@matt-Latitude-E6510:~$ time LZA_x64.exe a -mx5 -h7 -b7  -r -s  usb/10gb.lza 10gb
    fixme:seh:RtlAddFunctionTable 0x35d4820 2 400000: stub
    LZA archiver v.0.61.  Warning is demo version.                                  
    Copyright (C) 2014 LZA development Group: Nania Francesco and Yann Collet.      
    Archive is Z:\home\matt\usb/10gb.lza 
    Source file 10gb\2011 error!
    real	0m0.377s
    user	0m0.084s
    sys	0m0.062s
    Edit: tested on LTCB. http://mattmahoney.net/dc/text.html#2348
    Last edited by Matt Mahoney; 19th October 2014 at 03:00.

  37. The Following User Says Thank You to Matt Mahoney For This Useful Post:

    Nania Francesco (19th October 2014)

  38. #84
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    682
    Thanks
    215
    Thanked 213 Times in 131 Posts
    Hi Nania,
    While decoding speed has significantly improved as expected, I see the compression rate is slightly worse - it can be improved by increasing the number of states (L) or using a more accurate symbol spread and quantization (toolkit).
    Eventually, you could use 64-bit rANS instead - slower (300 instead of 500 MB/s), but it should get the same rate as your 64-bit range coder.

    Could you maybe write an example of a probability distribution FSE gets?
    I suspect you have lots of low-probability symbols?
    The heuristic formula (spread_tuned(), beneficial especially for low-probability symbols) for the best position of a symbol of probability p[s] is
    1/(p[s] * ln(1+1/i))
    where i = q[s] ... 2q[s]-1 are its appearances for the p[s] ~ q[s]/L quantization.
    Last edited by Jarek; 19th October 2014 at 08:00.

  39. The Following User Says Thank You to Jarek For This Useful Post:

    Nania Francesco (19th October 2014)

  40. #85
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts
    Soon I'll release version 0.62 to correct the errors!

  41. #86
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts
    Quote Originally Posted by Jarek
    Hi Nania,
    While decoding speed has significantly improved as expected, I see the compression rate is slightly worse - it can be improved by increasing the number of states (L) or using a more accurate symbol spread and quantization (toolkit).
    Eventually, you could use 64-bit rANS instead - slower (300 instead of 500 MB/s), but it should get the same rate as your 64-bit range coder.

    Could you maybe write an example of a probability distribution FSE gets?
    I suspect you have lots of low-probability symbols?
    The heuristic formula (spread_tuned(), beneficial especially for low-probability symbols) for the best position of a symbol of probability p[s] is
    1/(p[s] * ln(1+1/i))
    where i = q[s] ... 2q[s]-1 are its appearances for the p[s] ~ q[s]/L quantization.
    I don't think I've ever closed the door on anyone. Regarding the inclusion of rANS alongside FSE, I think it is something that can be done safely. If you have nothing against it, in the future I will add rANS to the project's code and make the speed/ratio trade-off selectable: super fast (FSE), very fast (rANS), fast (my range coder), slow (normal bit coder). What do you think? LZA is a project open to all! Let me know if you want to join the project; there are no problems!

  42. #87
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    682
    Thanks
    215
    Thanked 213 Times in 131 Posts
    rANS is very similar to range coding: it just uses a mathematical formula. The advantages are that it has a single multiplication instead of two, and the state is a single number, which makes optimizations like SIMD or interleaving more convenient. The disadvantage is backward encoding, like FSE (tANS): a buffer is needed.

    However, I think you should get everything you need from FSE. If you would provide an example of the probability distributions used, I could suggest how to improve the compression rate ...

  43. #88
    Member
    Join Date
    Aug 2014
    Location
    Argentina
    Posts
    468
    Thanks
    203
    Thanked 81 Times in 61 Posts


    Quote Originally Posted by Nania Francesco
    Released last version 0.61 [FSE]!
    Got an error trying to pack a folder:

    Code:
    lza a bin.lza bin
    Archive is C:\Archivos de programa\FreeArc\PowerPack\bin.lza
    Source file bin\44.exe error!
    
    lza a bin.lza bin\
    Archive is C:\Archivos de programa\FreeArc\PowerPack\bin.lza
    Source file bin\\44.exe error!
    
    lza a bin.lza bin\*
    Archive is C:\Archivos de programa\FreeArc\PowerPack\bin.lza
    Source file bin\*\bin error!
    
    lza a bin.lza bin\*.*
    Archive is C:\Archivos de programa\FreeArc\PowerPack\bin.lza
    Source file bin\*.*\bin error!
    The same in other combinations (-r, -s, -m*... and so on)


    Parsing??
    Last edited by Gonzalo; 20th October 2014 at 04:21.

  44. #89
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts
    Released latest version 0.62 [FSE]!

    - Fixed bug in directory date/time saving
    - Corrected bugs in the v. 0.61 archiver console commands

    download from:
    http://heartofcomp.altervista.org/
    Last edited by Nania Francesco; 20th October 2014 at 11:54.

  45. The Following 2 Users Say Thank You to Nania Francesco For This Useful Post:

    Cyan (20th October 2014),surfersat (21st October 2014)

  46. #90
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts
    LZA 0.62 vs HFCB (Huge Files Compression Benchmark), Intel Core i7 920 2.67 GHz, 6 GB RAM:

    C:\>timer lza_x64 a -r -s -v -mx5 -b7 -h9 c:\pro.lza d:\test\vm.dll
    ......................
    Compressed 4244176896 bytes to 1069933971 bytes
    Kernel Time = 7.378 = 00:00:07.378 = 0%
    User Time = 1544.690 = 00:25:44.690 = 88%
    Process Time = 1552.069 = 00:25:52.069 = 88%
    Global Time = 1753.061 = 00:29:13.061 = 100%
    _____________________________________________________________________
    Decompression
    Kernel Time = 2.667 = 00:00:02.667 = 5%
    User Time = 23.821 = 00:00:23.821 = 47%
    Process Time = 26.488 = 00:00:26.488 = 52%
    Global Time = 50.373 = 00:00:50.373 = 100%

    LZA 0.62 vs ENWIK9
    C:\>timer lza_x64 a -r -s -v -mx5 -b7 -h9 c:\pro.zcm d:\test\enwik9
    ............
    Archive is c:\pro.zcm
    add.. d:\test\enwik9 1000000000 to 231801003 [100%]
    Compressed 1000000000 bytes to 231801036 bytes

    Kernel Time = 5.538 = 00:00:05.538 = 1%
    User Time = 399.534 = 00:06:39.534 = 97%
    Process Time = 405.072 = 00:06:45.072 = 98%
    Global Time = 409.784 = 00:06:49.784 = 100%
    ____________________________________________________________________
    Decompression
    Kernel Time = 0.733 = 00:00:00.733 = 7%
    User Time = 8.392 = 00:00:08.392 = 88%
    Process Time = 9.126 = 00:00:09.126 = 95%
    Global Time = 9.517 = 00:00:09.517 = 100%

    LZA 0.62 vs ENWIK8

    C:\>timer lza_x64 a -r -s -v -mx5 -b7 -h9 c:\pro.zcm d:\test\enwik8

    .........................
    Compressed 100000000 bytes to 27870452 bytes
    Kernel Time = 1.372 = 00:00:01.372 = 3%
    User Time = 41.683 = 00:00:41.683 = 95%
    Process Time = 43.056 = 00:00:43.056 = 98%
    Global Time = 43.758 = 00:00:43.758 = 100%
    ____________________________________________________________________
    Decompression
    Kernel Time = 0.093 = 00:00:00.093 = 8%
    User Time = 0.967 = 00:00:00.967 = 91%
    Process Time = 1.060 = 00:00:01.060 = 99%
    Global Time = 1.061 = 00:00:01.061 = 100%
    Last edited by Nania Francesco; 21st October 2014 at 01:25.
