
Thread: Am I too dumb?

  1. #1
    Member toi007's Avatar
    Join Date
    Jun 2011
    Location
    Lisbon
    Posts
    35
    Thanks
    0
    Thanked 0 Times in 0 Posts

    Am I too dumb?

    See this PDF:
    http://www.compressordeimagens.com/ternary.pdf

    Now tell me... why do we need 00, 01, 10 when we could use 0, 10, 11 to compress?

    I'm just a designer!

    But this bit mistake is awful!

    Use 5 bits instead of 6!

    Once you see it, it's quite a joke!
    A girl with a math degree doesn't see the obvious!
    Let's have a laugh.

    But maybe some good soul here can tell me why this doesn't work, even with 5 bits instead of 6.


    Anyway, I'm a newbie here, and I'm offering myself as a file tester for free. So if you have too much work on some DOS executable or a Windows XP 32-bit program, contact me here. You know better than anyone that I've never been on forums before, but it's a privilege to be on this one (there are some brilliant minds in here).
    Whatever the speed, fast or slow, and whatever the compression ratio, good or bad, I'll test it on different file formats and extensions, and post the results if you want a benchmark to appear in your topic (I test versions).
    And I have a lot of free time!
    Don't ask me why!
    Last edited by toi007; 28th June 2011 at 21:11. Reason: still get no response

  2. #2
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 779 Times in 486 Posts
    That is a good point. If they had done any benchmarks then they would have seen that compression is worse. In any case, normal canonical Huffman decoding without big tables is O(n) anyway, the same as ternary decoding (not O(log n) like the paper claims). You read bits until the code has a value in the right range for that length. I don't see how their algorithm is more efficient. You still have to read the code and detect when you reached the end, which isn't even mentioned in their algorithms.
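    For reference, a rough sketch of that kind of table-free canonical decoding loop (the table layout, the names first_code/count/offset/sym, and the little bit reader below are my own assumptions, not anything from the paper):

    Code:
    #include <stdio.h>

    /* Sketch only: canonical Huffman decoding without big lookup tables.
       Read one bit at a time and stop as soon as the accumulated code value
       falls inside the valid range for the current code length. */

    static const unsigned char bits[] = {1,1, 0, 1,0};  /* demo input: codes 11, 0, 10 */
    static int pos = 0;
    static int read_bit(void) { return bits[pos++]; }

    int decode_symbol(const int first_code[],  /* smallest code of each length   */
                      const int count[],       /* number of codes of each length */
                      const int offset[],      /* first symbol index per length  */
                      const int sym[],         /* symbols in canonical order     */
                      int max_len) {
        int code = 0;
        for (int len = 1; len <= max_len; len++) {
            code = (code << 1) | read_bit();
            if (code - first_code[len] < count[len])   /* complete code of this length */
                return sym[offset[len] + code - first_code[len]];
        }
        return -1;                                     /* corrupt stream */
    }

    int main(void) {
        /* Canonical code for symbols A,B,C with lengths 1,2,2: A=0, B=10, C=11 */
        int first_code[] = {0, 0, 2};
        int count[]      = {0, 1, 2};
        int offset[]     = {0, 0, 1};
        int sym[]        = {'A', 'B', 'C'};
        for (int i = 0; i < 3; i++)
            printf("%c", decode_symbol(first_code, count, offset, sym, 2));
        printf("\n");                                  /* prints "CAB" */
        return 0;
    }

    The loop does constant work per input bit, so decoding n bits is O(n) no matter what shape the tree has.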

    Also, even with your fix, ternary coding still gives worse compression. Suppose you could encode base 3 with each symbol taking the same space (say, 3 level memory or something) or you encoded each ternary code ideally using log(3)/log(2) = 1.585 bits. Even then, a ternary code implicitly rounds each probability to a power of 1/3 instead of 1/2 which would increase the average case rounding error.
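    For a concrete example of that rounding loss (my own numbers, not from the paper): take three symbols with probabilities 1/2, 1/4, 1/4. The entropy is 1.5 bits per symbol, and the binary Huffman code 0, 10, 11 hits it exactly (0.5*1 + 0.25*2 + 0.25*2 = 1.5). The ternary Huffman tree gives each symbol one trit, so even packing trits ideally at log(3)/log(2) bits each, it costs 1.585 bits per symbol, because every probability got rounded to 1/3.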

  3. #3
    Member toi007's Avatar
    Join Date
    Jun 2011
    Location
    Lisbon
    Posts
    35
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Amazingly, I do understand you, even if the math is beyond me as a tester: using the same number of 0, 10, and 11 codes gives a bad distribution, and I know that a well-compressed file gives 50% for 0, 25% for 10, and 25% for 11. So I do understand something, in my own special way.
    It's good to have such good professors!
    I'll stick to the testing, don't worry.

  4. #4
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 779 Times in 486 Posts
    The paper doesn't say how to build a ternary Huffman tree. It just describes how to decode using it, and claims it is faster (without testing). Of course using codes 00, 01, 10 is bad because you need 2 bits to code a symbol with probability 1/3. Using 0, 10, 11 would average 1.667 bits per symbol. The best you can do is log(3)/log(2) = 1.585 bits, maybe using arithmetic coding. Still this is worse than a regular Huffman code because a ternary tree forces probabilities to be rounded to 1/3, 1/9, 1/27... instead of 1/2, 1/4, 1/8... before coding.
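    To put numbers on where those figures come from (a toy calculation of my own, not from the paper):

    Code:
    #include <math.h>
    #include <stdio.h>

    /* Cost per symbol for three equally likely symbols (p = 1/3 each). */
    int main(void) {
        double fixed   = 2.0;                      /* codes 00, 01, 10: always 2 bits    */
        double huffman = (1.0 + 2.0 + 2.0) / 3.0;  /* codes 0, 10, 11: average 5/3 bits  */
        double ideal   = log(3.0) / log(2.0);      /* entropy: log2(3) bits              */
        printf("fixed 2-bit: %.3f  Huffman 0/10/11: %.3f  ideal: %.3f bits/symbol\n",
               fixed, huffman, ideal);
        return 0;
    }

    That prints 2.000, 1.667, and 1.585 bits per symbol respectively.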

  5. #5
    Member toi007's Avatar
    Join Date
    Jun 2011
    Location
    Lisbon
    Posts
    35
    Thanks
    0
    Thanked 0 Times in 0 Posts
    We just have to pay attention to Matt; it's highly recommended!
    Except for those who already know this!
    This is teaching!
