
Thread: lzpm 0.11 is here!

  1. #1
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    Just head over to the new homepage!

    http://lzpm.encode.su/


  2. #2
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts
    Very good job! Nice!

  3. #3
    Moderator

    Join Date
    May 2008
    Location
    Tristan da Cunha
    Posts
    2,034
    Thanks
    0
    Thanked 4 Times in 4 Posts
    Thanks Ilia!

    Another awesome release!

  4. #4
    Member
    Join Date
    Dec 2006
    Posts
    611
    Thanks
    0
    Thanked 1 Time in 1 Post
    Thanks encode!

    LZPM 0.08    13 409 100      958 / 9 866
    LZPM 0.09    13 400 783    1 028 / 9 866
    LZPM 0.10    13 383 068      967 / 9 866
    LZPM 0.11    13 545 901      827 / 8 969

    Leaving exeflt out had some impact... 0.11 is generally slightly stronger than 0.10. One interesting thing is that TXT2 compressed best at level 2 (the compression-time gap to level 9 was much bigger than for the other files), while all the other files were squished best at level 9.

  5. #5
    Member Fallon's Avatar
    Join Date
    May 2008
    Location
    Europe - The Netherlands
    Posts
    158
    Thanks
    14
    Thanked 10 Times in 5 Posts
    You've done lots and it's good!
    Thank you!

  6. #6
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    Quote Originally Posted by Black_Fox
    leaving exeflt out had some impact... 0.11 is generally slightly stronger than 0.10...
    Mainly, the difference will show up with large files >= 64 MB. The decompression speed is also slightly affected - that's due to the larger ROLZ table (16 MB vs. 4 MB). Just waiting for the Large Text Benchmark results. Maybe LZPM should be slimmed down - an 8K index instead of the current 16K, and a 32 or 16 MB buffer instead of 64 MB. Anyway, with such a configuration the difference on ENWIK9 is huge; the decompression speed will probably be unaffected, or even faster, on the ENWIKs.

    Quote Originally Posted by Black_Fox
    one interesting thing is that TXT2 compressed best at level 2 (the compression-time gap to level 9 was much bigger than for the other files), while all the other files were squished best at level 9
    That means there is a flaw in the parsing scheme. So far I've come across only one file that compresses better at another level ("7") than at "9". However, test the ROLZ2 from MCOMP carefully, and you'll find that Normal sometimes outcompresses the Max mode.

    Anyway, LZPM 0.11 is an experiment. You can compare it with previous versions and make your own verdict.
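The effect discussed above - a lower level occasionally beating level 9 - comes down to parsing: the locally longest match is not always globally best. Below is a toy sketch (hypothetical code, not LZPM's; it uses a simplified cost model where every literal and every match costs exactly one token, which real adaptive coders don't) comparing a greedy parse against an optimal dynamic-programming parse:

```python
# Toy comparison of greedy vs. optimal LZ parsing (hypothetical, not LZPM code).
# Simplified cost model: every literal and every match costs exactly one token.

MIN_MATCH = 3  # matches shorter than this are emitted as literals

def longest_match(s, pos):
    """Length of the longest match at pos against any earlier position (naive)."""
    best = 0
    for cand in range(pos):
        n = 0
        while pos + n < len(s) and s[cand + n] == s[pos + n]:
            n += 1
        best = max(best, n)
    return best

def greedy_tokens(s):
    """Greedy parse: always take the longest admissible match."""
    pos = tokens = 0
    while pos < len(s):
        m = longest_match(s, pos)
        pos += m if m >= MIN_MATCH else 1
        tokens += 1
    return tokens

def optimal_tokens(s):
    """Optimal parse by dynamic programming over all admissible match lengths."""
    cost = [0] * (len(s) + 1)
    for pos in range(len(s) - 1, -1, -1):
        cost[pos] = 1 + cost[pos + 1]  # literal
        for length in range(MIN_MATCH, longest_match(s, pos) + 1):
            cost[pos] = min(cost[pos], 1 + cost[pos + length])
    return cost[0]

# Greedy grabs "abcd" in the final "abcdef", leaving "ef" (too short to match),
# while the optimal parse takes "abc" + "def": 13 tokens vs. 12.
s = "abcdqcdefqabcdef"
print(greedy_tokens(s), optimal_tokens(s))  # -> 13 12
```

A real parser additionally has to weigh token prices that change adaptively as the model learns, which is one reason the "best" level can shift from file to file.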

  7. #7
    Member Vacon's Avatar
    Join Date
    May 2008
    Location
    Germany
    Posts
    523
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Hello everyone,

    simply great

    Best regards!

  8. #8
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    2 Black_Fox & others:
    Here is the special build of LZPM 0.11 - LZPM 0.11lite:
    lzpm011lite.zip

    This one has a 32 MB buffer and an 8K index. Can you please test it and post the compression results and the decompression speed?

    Thank you!

    I just think this variant is more adequate - it uses much less memory and is much faster at compression. I'm just curious to see its decompression speed, especially on Black_Fox's benchmark, since Radek has tested almost ALL versions of LZPM, including their decompression speeds.


  9. #9
    Moderator

    Join Date
    May 2008
    Location
    Tristan da Cunha
    Posts
    2,034
    Thanks
    0
    Thanked 4 Times in 4 Posts
    Thanks Ilia!

  10. #10
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    LZPM 0.11 was tested on Large Text Benchmark. Thanks Matt!

    Well, it decompresses just as fast, setting a new record in terms of compression ratio and decompression speed! (LZTURBO was nailed by LZPM)


  11. #11
    Moderator

    Join Date
    May 2008
    Location
    Tristan da Cunha
    Posts
    2,034
    Thanks
    0
    Thanked 4 Times in 4 Posts
    It's awesome! I'm looking forward to the MC results.

  12. #12
    Tester
    Nania Francesco's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    1,565
    Thanks
    220
    Thanked 146 Times in 83 Posts
    The Best LZ Compressor!!

  13. #13
    Member Fallon's Avatar
    Join Date
    May 2008
    Location
    Europe - The Netherlands
    Posts
    158
    Thanks
    14
    Thanked 10 Times in 5 Posts
    Quote Originally Posted by encode
    Maybe LZPM should be slimmed down - an 8K index instead of the current 16K, and a 32 or 16 MB buffer instead of 64 MB.
    Is there not room for both versions? I like any archiver that will run on all PCs under the sun - then it will get used sooner. But in that case other things enter the equation as well, like how sophisticated the command syntax for archiving is. There is always something to improve.
    When we keep in mind that in a few years we will all be running 64-bit systems with many gigs of memory, it also seems logical to go there already with experimental compressors.
    Lzpm is OK either way, it's just a choice.
    Quote Originally Posted by encode
    Binary Tree is better with optimal parsing since the string search becomes faster. Just currently I have no idea how exactly use binary trees for string searching. Probably I should look at LZMA sources.
    Do you plan to look into this at some point later?
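For context, the binary-tree idea the quote refers to can be sketched like this (a hypothetical toy in the spirit of LZMA-style BT match finders, not actual LZMA or LZPM code): earlier positions are kept in a binary search tree ordered lexicographically by the suffix starting at each position, so a single root-to-leaf walk both inserts the new position and finds the longest previous match - the new suffix's nearest lexicographic neighbours always lie on that insertion path.

```python
# Hypothetical binary-tree match finder sketch (unbalanced BST, for clarity).
# Nodes are positions; ordering is lexicographic on the suffix at each position.

class Node:
    __slots__ = ("pos", "left", "right")
    def __init__(self, pos):
        self.pos, self.left, self.right = pos, None, None

def bt_find_insert(root, data, pos):
    """Insert suffix data[pos:]; return (root, best_len, best_pos) where
    best_pos is an earlier position with the longest common prefix found."""
    best_len, best_pos = 0, -1
    if root is None:
        return Node(pos), best_len, best_pos
    node = root
    while True:
        # common prefix of the suffixes at node.pos and pos
        n = 0
        while pos + n < len(data) and data[node.pos + n] == data[pos + n]:
            n += 1
        if n > best_len:
            best_len, best_pos = n, node.pos
        # the shorter suffix (or the one with the smaller differing byte) goes left
        go_left = pos + n == len(data) or data[pos + n] < data[node.pos + n]
        child = node.left if go_left else node.right
        if child is None:
            if go_left:
                node.left = Node(pos)
            else:
                node.right = Node(pos)
            return root, best_len, best_pos
        node = child

# Usage: feed positions left to right; each call reports the longest earlier match.
data = b"abcabcabcd"
root = None
for p in range(len(data)):
    root, m, mp = bt_find_insert(root, data, p)
```

A production match finder would additionally cap the walk depth and drop positions that fall out of the window; the search-while-inserting trick is what makes binary trees attractive for optimal parsing.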

  14. #14
    Member
    Join Date
    Dec 2006
    Posts
    611
    Thanks
    0
    Thanked 1 Time in 1 Post
    Quote Originally Posted by encode
    This one has 32 MB buffer and 8K index. Can you please test it and post the compression results and the decompression speed.
    Sure!
                                  in / out
    LZPM 0.10      13 383 068      967 / 9 866
    LZPM 0.11      13 545 901      827 / 8 969
    LZPM 0.11lite  13 536 090    1 165 / 9 866

  15. #15
    Tester

    Join Date
    May 2008
    Location
    St-Petersburg, Russia
    Posts
    182
    Thanks
    3
    Thanked 0 Times in 0 Posts
    Thanx Ilia!
    What about releasing an additional version with an EXE filter (something like lzpm 0.11exe)?

  16. #16
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    Quote Originally Posted by Squxe
    What about releasing an additional version with an EXE filter (something like lzpm 0.11exe)?
    The only reason I removed the EXE filter is that it doesn't work with a buffer larger than 16 MB. So to add an EXE filter I'd have to write something new - when I have some spare time, I'll think about it.

    Quote Originally Posted by Black_Fox
    Sure!
                                  in / out
    LZPM 0.10      13 383 068      967 / 9 866
    LZPM 0.11      13 545 901      827 / 8 969
    LZPM 0.11lite  13 536 090    1 165 / 9 866
    That's what I'm talking about - the decompression is faster, with even higher compression.

    Quote Originally Posted by Fallon
    Do you plan to look into this at some point later?
    I do. However, with the fast and, possibly, normal levels, hashing is the best anyway. So it might be that LZPM will use two match finders, as ROLZ2 does.
    Recently, based on theory, I've found the niche for ROLZ - it's fast compression (a fast large-dictionary LZ, no more no less, as Malcolm described it). Compared to modern LZ77, like LZMA, ROLZ has some benefits in the fast modes - it compresses faster, and often compresses better than plain LZ77 at those levels. If you read some papers, including those by Ross Williams, you'll see that such a one-byte context helps ROLZ's compression, especially with greedy or simple parsing schemes - i.e. even with greedy parsing we get "virtually" optimized parsing at no time cost; in addition, the baseline match finder works faster with ROLZ, since we search the longer strings first.
    In conclusion, LZPM should be used at the fast levels 1..3, while 9 is just for benchmarking or for generating the tightest compression stream possible. So I'll look at the results of LZPM 0.11, and probably release a new 0.12 as a "lite" version (like LZPM 0.11lite) with some tunings.
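The ROLZ scheme described above can be sketched as a minimal, hypothetical round-trip coder (not LZPM's actual code; INDEX_SIZE, MIN_MATCH and the token format are invented for illustration). For every one-byte context we remember a handful of recent positions, and matches are searched only among positions that followed the same preceding byte - which is why even a greedy search starts from likely candidates and why a match can be addressed by a tiny slot index instead of a full window offset:

```python
# Minimal ROLZ round-trip sketch (hypothetical; not LZPM's actual code).
from collections import defaultdict, deque

INDEX_SIZE = 16  # positions remembered per 1-byte context (LZPM keeps thousands)
MIN_MATCH = 2

def compress(data):
    """Greedy ROLZ parse into tokens: ('lit', byte) or ('match', slot, length)."""
    tables = defaultdict(lambda: deque(maxlen=INDEX_SIZE))
    tokens, pos = [], 0
    while pos < len(data):
        ctx = data[pos - 1] if pos else -1  # one-byte context
        best_len, best_slot = 0, -1
        for slot, cand in enumerate(tables[ctx]):
            n = 0
            while pos + n < len(data) and data[cand + n] == data[pos + n]:
                n += 1
            if n > best_len:
                best_len, best_slot = n, slot
        tables[ctx].appendleft(pos)  # register current position under its context
        if best_len >= MIN_MATCH:
            tokens.append(("match", best_slot, best_len))
            for p in range(pos + 1, pos + best_len):  # keep tables decoder-synced
                tables[data[p - 1]].appendleft(p)
            pos += best_len
        else:
            tokens.append(("lit", data[pos]))
            pos += 1
    return tokens

def decompress(tokens):
    """Mirror the encoder's table updates exactly to resolve slot indices."""
    tables = defaultdict(lambda: deque(maxlen=INDEX_SIZE))
    out = bytearray()
    for tok in tokens:
        pos = len(out)
        ctx = out[pos - 1] if pos else -1
        if tok[0] == "lit":
            tables[ctx].appendleft(pos)
            out.append(tok[1])
        else:
            _, slot, length = tok
            cand = tables[ctx][slot]  # look up before inserting, like the encoder
            tables[ctx].appendleft(pos)
            for n in range(length):   # byte-wise copy handles overlapping matches
                out.append(out[cand + n])
            for p in range(pos + 1, pos + length):
                tables[out[p - 1]].appendleft(p)
    return bytes(out)

data = b"abracadabra, abracadabra, abracadabra"
assert decompress(compress(data)) == data
```

The payoff is that matches are coded as a small slot index into a per-context table rather than a large window offset, which is what lets the cheap greedy parse behave almost like an optimized one.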

  17. #17
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    By the way, check out my new song at myspace:
    http://www.myspace.com/djencode


  18. #18
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 779 Times in 486 Posts

  19. #19
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    Thank you!

  20. #20
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    2 Matt
    Can you please test a special version of LZPM 0.11:
    lzpm011lite.zip

    Like I said, this version is possibly more adequate for practical use. Mainly I'm interested in the decompression time and in the performance of the following levels:
    1 - the fastest
    3 - the best compromise between compression time and ratio
    9 - max compression


  21. #21
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 779 Times in 486 Posts

  22. #22
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    Thank you, Matt!

  23. #23
    Programmer giorgiotani's Avatar
    Join Date
    May 2008
    Location
    Italy
    Posts
    166
    Thanks
    3
    Thanked 2 Times in 2 Posts
    The results are good, for lzpm011lite too. Both lzpm and lzpmlite seem like interesting research threads!

  24. #24
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    LZPM has been tested at MFC... These days I will think about the configuration of future versions. A 16 MB buffer + an 8K index + an EXE filter should be OK...


  25. #25
    Member
    Join Date
    May 2008
    Location
    CN
    Posts
    45
    Thanks
    0
    Thanked 0 Times in 0 Posts
    I unzipped
    http://lzpm.encode.su/fp.log.lzpm
    and ran
    "C:\Program Files\7-Zip\7z" a -t7z archive.7z fp.log -m0=PPMd -mx9
    dir
    2007-10-12 23:24 487,259 archive.7z

  26. #26
    Member
    Join Date
    May 2008
    Location
    CN
    Posts
    45
    Thanks
    0
    Thanked 0 Times in 0 Posts
    and
    ccmx125 c 4 fp.log fp.ccm
    2007-10-12 23:43 475,563 fp.ccm

  27. #27
    Member
    Join Date
    Dec 2006
    Posts
    611
    Thanks
    0
    Thanked 1 Time in 1 Post
    Quote Originally Posted by l1t
    -m0=PPMd
    IMHO the value in the comparison table was obtained using LZMA; comparing LZPM with PPMd isn't really fair (PPM is symmetric), and the same goes for CCM(x)... UHARC is in the comparison maybe because of its high compression ratio.

  28. #28
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    3,984
    Thanks
    377
    Thanked 352 Times in 140 Posts
    Quote Originally Posted by Black_Fox
    IMHO the value in the comparison table was obtained using LZMA; comparing LZPM with PPMd isn't really fair (PPM is symmetric), and the same goes for CCM(x)... UHARC is in the comparison maybe because of its high compression ratio.
    Exactly! The catch of LZPM, as with LZMA, is fast decompression. PPMd is symmetric - i.e. decompression takes about the same time as compression.

    Maybe I should change UHARC's settings to ALZ:3?

  29. #29
    Member
    Join Date
    Dec 2006
    Posts
    611
    Thanks
    0
    Thanked 1 Time in 1 Post
    It would be fair to use the same (asymmetric) method in all tests, but it should also be noted somewhere on the site.
