Thread: Quine-McCluskey, K-Maps and logic minimisation

  1. #1
    Member
    Join Date
    Apr 2010
    Location
    CZ
    Posts
    81
    Thanks
    5
    Thanked 7 Times in 5 Posts

    Quine-McCluskey, K-Maps and logic minimisation

    Have you heard of compression based on Quine-McCluskey or K-map logic minimization? Have you tried it in any way? I have only seen some experimental results for greyscale images or other quite specific uses. Maybe the complexity of the algorithm is still quite high, or the same results could be obtained with common algorithms using better filters or models. Anyway, do you think this might have any real use?

    Here is a specific implementation for short text messages; it actually gets better results than specialized text compressors on specific data, but maybe that is due to the many different contexts involved, or something similar.

    I think there is an error in the results section: in Table 6, bible.txt might be a wrong column name, because those results seem strange; but maybe that is due to some specific processing or encoding.
    (They are using BWT, MTF, Fibonacci coding and then Adaptive Huffman, so the results have to be far from optimal; also, their input might not be ASCII?)

    http://dspace.vsb.cz/bitstream/handl...pdf?sequence=2
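    (For reference, this is what I mean by the Fibonacci stage: a minimal sketch of plain Fibonacci coding of positive integers, i.e. the Zeckendorf representation terminated by an extra 1 bit. It is just the textbook code, not their implementation, and the function name is mine.)
    Code:
    def fib_encode(n):
        """Fibonacci code of a positive integer n: write n as a sum of
        non-adjacent Fibonacci numbers (1, 2, 3, 5, 8, ...), output one bit
        per Fibonacci number, least significant first, then append a 1 so
        every code ends in '11'."""
        assert n >= 1
        fibs = [1, 2]
        while fibs[-1] <= n:
            fibs.append(fibs[-1] + fibs[-2])
        fibs.pop()                                   # the last one exceeds n
        bits = [0] * len(fibs)
        for i in range(len(fibs) - 1, -1, -1):       # greedy, largest first
            if fibs[i] <= n:
                bits[i] = 1
                n -= fibs[i]
        return ''.join(map(str, bits)) + '1'

    print(fib_encode(1), fib_encode(4), fib_encode(11))   # 11 1011 001011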
    Last edited by quadro; 7th July 2012 at 13:43.

  2. #2
    Member biject.bwts's Avatar
    Join Date
    Jun 2008
    Location
    texas
    Posts
    449
    Thanks
    23
    Thanked 14 Times in 10 Posts
    Quote Originally Posted by quadro View Post
    Here is a specific implementation for short text messages; it actually gets better results than specialized text compressors on specific data, but maybe that is due to the many different contexts involved, or something similar.

    I think there is an error in the results section: in Table 6, bible.txt might be a wrong column name, because those results seem strange; but maybe that is due to some specific processing or encoding.
    (They are using BWT, MTF, Fibonacci coding and then Adaptive Huffman, so the results have to be far from optimal; also, their input might not be ASCII?)

    http://dspace.vsb.cz/bitstream/handl...pdf?sequence=2
    There is a BIJECTIVE BWTS DC FIB method on this site. I have a feeling that for really small files that method
    would compress better than the method used there. It would be nice to compare the results on short files.
    http://encode.su/attachment.php?atta...5&d=1292347715
    Also, the MTF is already bijective. You could modify their FIB to be bijective like mine and use a bijective Adaptive Huffman.
    For small files, being bijective does lead to smaller output than non-bijective alternatives.
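    To illustrate what I mean by the MTF already being bijective, here is a minimal sketch (plain MTF over a 0..255 alphabet, not my actual code): every rank sequence over the same alphabet decodes back to exactly one input, with no headers or escape symbols.
    Code:
    def mtf_encode(data, alphabet_size=256):
        """Replace each symbol by its current rank, then move it to the front."""
        table = list(range(alphabet_size))
        ranks = []
        for s in data:
            r = table.index(s)
            ranks.append(r)
            table.insert(0, table.pop(r))
        return ranks

    def mtf_decode(ranks, alphabet_size=256):
        """Exact inverse of mtf_encode, so the transform is a bijection."""
        table = list(range(alphabet_size))
        data = []
        for r in ranks:
            s = table.pop(r)
            data.append(s)
            table.insert(0, s)
        return data

    data = list(b"BANANAS")
    assert mtf_decode(mtf_encode(data)) == data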

  3. #3
    Member biject.bwts's Avatar
    Join Date
    Jun 2008
    Location
    texas
    Posts
    449
    Thanks
    23
    Thanked 14 Times in 10 Posts
    I commented very quickly after just reading your message. I have now looked at the paper, and it seems they do a lot of processing before the BWT: since they are not just doing BWT MTF HUFF, there is a big preprocessing stage at the start of their compression.
    However, for very short files I still think the BWTS method above is better. For the files they test, which in my mind are longer text files, one could do the same preprocessing as they did and then follow the above BWTS chain of events instead of their BWT chain of events.

  4. #4
    Member biject.bwts's Avatar
    Join Date
    Jun 2008
    Location
    texas
    Posts
    449
    Thanks
    23
    Thanked 14 Times in 10 Posts
    Actually, for short text files the SCOTT transform followed by lzw1_m32 will be better. The form of bijective DC that I used above is not the best for short files; I had forgotten which form I posted. But the SCOTT transform and lzw1_m32 will do quite well on short files. Other stages could be added, but it's pretty good by itself.

    768,771 book1 (which is not a short text file)
    768,771 book1.scott_trans
    244,769 book1.scott_trans.lzw1_m32

    file bananas: 7 characters, "BANANAS", a very short text file
          7 bananas.scott_trans
          9 bananas.lzw1_32
          6 bananas.scott_trans.lzw1_m32
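    For anyone wondering what the LZW stage does on something as short as BANANAS, here is a minimal textbook LZW encoder (byte alphabet, unbounded dictionary, integer output codes). This is only a sketch of plain LZW; lzw1_m32 presumably differs in how it handles the dictionary and packs the codes.
    Code:
    def lzw_encode(data):
        """Textbook LZW: emit the dictionary index of the longest known
        prefix, then add that prefix plus the next byte as a new entry."""
        dictionary = {bytes([i]): i for i in range(256)}
        w = b""
        codes = []
        for b in data:
            wb = w + bytes([b])
            if wb in dictionary:
                w = wb
            else:
                codes.append(dictionary[w])
                dictionary[wb] = len(dictionary)
                w = bytes([b])
        if w:
            codes.append(dictionary[w])
        return codes

    print(lzw_encode(b"BANANAS"))   # [66, 65, 78, 257, 65, 83] -> 6 codes for 7 bytes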

  5. #5
    Member
    Join Date
    Apr 2010
    Location
    CZ
    Posts
    81
    Thanks
    5
    Thanked 7 Times in 5 Posts
    Yeah, I think it would be much better to have a working demo in this case. I am not sure whether I get this correctly, but at the point where they are indexing "product terms", I ask myself: what is the whole point of the Boolean minimization then, if we could, say, index the input vectors directly based on their frequency and then compress them?

    Anyway, there could be some advantage in indexing minimized "product terms" the way they do: the terms are context related, and since the mapping tables are maintained adaptively, there is less grouping and fewer terms are needed.
    Last edited by quadro; 9th July 2012 at 17:46.

  6. #6
    Member
    Join Date
    Apr 2010
    Location
    CZ
    Posts
    81
    Thanks
    5
    Thanked 7 Times in 5 Posts
    I tried to implement it, but got no improvement at all, using the Q-M algorithm from http://techhouse.org/~deigen/older-p...bool-funcs.pdf, Appendix A. (The author mentions that the cover produced by the Q-M algorithm does not always have to be minimal.) I tried various suggestions from the paper above, but it seems that 8-bit vectors, i.e. 3 input variables, are a better choice than 4 or more input variables; with more variables the compression gets worse. I tried both static vector frequencies (where the mapping has to be stored as well) and dynamically updated ones (with MTF in front); the only difference is that I stored the resulting terms in a ternary alphabet, one term per byte, with no additional indexing.
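    To make the mapping concrete, here is a minimal sketch of what I mean, assuming the same scheme as above: one byte is read as the truth table of a 3-input Boolean function (bit i set means minterm i is present), and the Q-M combining step merges minterms into product terms over the ternary alphabet {0, 1, -}. The function names are mine, and picking a minimal cover from the prime implicants would still be a separate step.
    Code:
    from itertools import combinations

    def combine(a, b):
        """Merge two ternary terms that differ in exactly one defined bit."""
        diff = [i for i in range(len(a)) if a[i] != b[i]]
        if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
            return a[:diff[0]] + '-' + a[diff[0] + 1:]
        return None

    def prime_implicants(minterms, nbits=3):
        """Q-M combining step: merge terms differing in one bit until nothing
        merges any more; terms that never merged are the prime implicants."""
        terms = {format(m, '0%db' % nbits) for m in minterms}
        primes = set()
        while terms:
            merged, used = set(), set()
            for a, b in combinations(sorted(terms), 2):
                c = combine(a, b)
                if c is not None:
                    merged.add(c)
                    used.update((a, b))
            primes |= terms - used
            terms = merged
        return primes

    # One 8-bit input vector read as a 3-variable truth table.
    byte = 0b01101101
    minterms = [i for i in range(8) if byte >> i & 1]    # [0, 2, 3, 5, 6]
    print(prime_implicants(minterms))                    # {'0-0', '01-', '-10', '101'}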

    My best result was about 7 KB worse on bible.txt from the Canterbury corpus than without the Q-M stage, with both pipelines followed by bsc -p -e2.
