
Thread: Hutter Prize, 4.17% improvement is here

  1. #1
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    237
    Thanks
    38
    Thanked 92 Times in 48 Posts

    Hutter Prize, 4.17% improvement is here

    This is the last official submission this year:
    https://sites.google.com/site/lossle...7_November.zip

    I am not releasing any source code this year, but for those curious about how much LSTM costs, here is a version without LSTM:
    https://sites.google.com/site/lossle...thout_LSTM.rar
    It could qualify for the prize if the size of the executable were smaller.
    The compressed size is ~1.4% bigger, but it runs about twice as fast. Or is it more like 2.5 times faster on your machine?
    Perhaps decompression time can be below 1.5 hours on modern computers, and that is with a single-threaded application.

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  2. The Following 5 Users Say Thank You to Alexander Rhatushnyak For This Useful Post:

    byronknoll (6th November 2017),Darek (6th November 2017),Jyrki Alakuijala (6th November 2017),khavish (6th November 2017),xinix (6th November 2017)

  3. #2
    Member
    Join Date
    Mar 2011
    Location
    USA
    Posts
    226
    Thanks
    108
    Thanked 106 Times in 65 Posts
    Nice work! For the HP_2017_May release you posted 16 facts about the algorithm. Those were very useful - I am wondering if you can share some information about which components you have focused on to get further improvements since then?

  4. #3
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,269
    Thanks
    200
    Thanked 985 Times in 511 Posts
    Please check the Windows version - it seems to crash at the end (c0000374 = heap corruption).
    I suspect that it decodes the data incorrectly because of floating-point differences.
    It might be necessary to compress the data with the Windows version too.
    Attached Files

  5. #4
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    237
    Thanks
    38
    Thanked 92 Times in 48 Posts
    Quote Originally Posted by byronknoll View Post
    I am wondering if you can share some information about which components you have focused on to get further improvements since then?
    Some bugs were fixed, parameters were tuned, some code was removed, and some code was moved to other places. Words in the dictionaries were shuffled. Almost no new code was added - between a dozen and two dozen lines. I tried many more random seed values (for the LSTM) than before. When there are hundreds of cells, the random seed is probably less critical (and some cells are probably redundant or just plain useless), but when there are just 50 cells or so, the LSTM's random seed makes a big difference, as far as I can see.

    "Please check windows version" -- there was a problem with windows version, it was fixed more than 25 hours ago. The size of updated zip is 15373086 bytes. Compressed data file and Linux files were not modified.
    Last edited by Alexander Rhatushnyak; 7th November 2017 at 22:53. Reason: added "Compressed data file and Linux files were not modified"

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  6. The Following 3 Users Say Thank You to Alexander Rhatushnyak For This Useful Post:

    avitar (7th November 2017),byronknoll (7th November 2017),Shelwien (7th November 2017)

  7. #5
    Member
    Join Date
    Mar 2011
    Location
    USA
    Posts
    226
    Thanks
    108
    Thanked 106 Times in 65 Posts
    Optimizing the LSTM random seed is an interesting idea - I haven't tried that before. It makes sense when optimizing for a single file, but I am guessing it won't lead to gains when compressing other files. I'll try it out in cmix to see if that is true.
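
    For context, here is a minimal sketch of what such a brute-force seed search could look like. It is illustrative only - not code from cmix or phda - and the compress callback, FindBestSeed name, and the toy compressor in main are all hypothetical stand-ins for a real LSTM-based model initialized from the given seed.

    // Illustrative sketch (not cmix or phda code): try many LSTM init seeds
    // and keep the one that produces the smallest compressed output.
    #include <cstdint>
    #include <cstdio>
    #include <functional>
    #include <limits>
    #include <vector>

    uint64_t FindBestSeed(
        const std::vector<uint8_t>& data, int trials,
        const std::function<size_t(const std::vector<uint8_t>&, uint64_t)>& compress) {
      uint64_t best_seed = 0;
      size_t best_size = std::numeric_limits<size_t>::max();
      for (int i = 0; i < trials; ++i) {
        // Any deterministic spread of candidate seeds works here.
        uint64_t seed = 0x9E3779B97F4A7C15ULL * static_cast<uint64_t>(i + 1);
        size_t size = compress(data, seed);
        std::printf("seed %016llx -> %zu bytes\n",
                    static_cast<unsigned long long>(seed), size);
        if (size < best_size) { best_size = size; best_seed = seed; }
      }
      return best_seed;  // the winning seed would be hard-coded into the release
    }

    int main() {
      std::vector<uint8_t> data(1 << 16, 'a');
      // Stand-in "compressor" for demonstration only: a real one would run
      // the seed-initialized LSTM model over the data.
      auto fake_compress = [](const std::vector<uint8_t>& d, uint64_t seed) {
        return d.size() / 2 + (seed % 997);  // pretend the size depends on the seed
      };
      std::printf("best seed: %016llx\n",
                  static_cast<unsigned long long>(FindBestSeed(data, 8, fake_compress)));
    }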

    In this post you attached a dictionary used by phda. Would it be possible to also release the updated dictionary with shuffled words? I switched cmix to using the new dictionary you posted.

    "Words in the dictionaries were shuffled." - I noticed you used plural for dictionaries - is there more than one dictionary used? In paq8hp12 there were two dictionaries ("temp_HKCC_dict1.dic" and "to_train_models.dic"). What benefit was there for using two dictionaries? readme.txt says "to_train_models.dic" is "data used to initialize models". It seems like there would be additional storage cost for having a second dictionary, and I would be surprised if pretraining models is significantly different between the two dictionaries.

  8. #6
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    237
    Thanks
    38
    Thanked 92 Times in 48 Posts
    "Would it be possible to also release the updated dictionary with shuffled words?" - it could be possible, especially if I could get something more than "thank you" for that (-: but there are also reasons not to release it. For exampe, just like open-sourcing phda would probably kill almost all alternative approaches to Hutter Prize, releasing the dictionary would probably kill attempts to build better dictionaries, do you agree? People tend to use English dictionary from paq8hp12 without even looking carefully at what's inside it.

    "
    I noticed you used plural for dictionaries - is there more than one dictionary used?" -- There are 3 actually: 80 words that are encoded by 1 byte, 48*80 that are encoded by 2 bytes, and 32*16*80 that are replaced by 3 bytes, 91.27% words (40960/44880) are in the biggest dictionary. The 3 sizes have not changed since 2007 or so.

    "In paq8hp12 there were two dictionaries ("temp_HKCC_dict1.dic" and "to_train_models.dic"). What benefit was there for using two dictionaries?" - in
    temp_HKCC_dict1.dic the words encoded by 3 bytes are sorted much better than in to_train_models.dic, but the latter seemingly worked better for pre-training.
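
    To make the 1/2/3-byte code sizes mentioned above concrete, here is a tiny illustrative sketch (not the actual phda/paq8hp code; the function name is hypothetical) of how a word's dictionary rank could map to a code length, and where the 91.27% figure comes from:

    #include <cassert>
    #include <cstdio>

    // rank = position of the word in the dictionary (0-based)
    int CodeLengthForRank(int rank) {
      if (rank < 80) return 1;                            // 80 words -> 1-byte codes
      if (rank < 80 + 48 * 80) return 2;                  // 3840 words -> 2-byte codes
      if (rank < 80 + 48 * 80 + 32 * 16 * 80) return 3;   // 40960 words -> 3-byte codes
      return 0;  // not in the dictionary: the word stays as plain text
    }

    int main() {
      const int total = 80 + 48 * 80 + 32 * 16 * 80;      // 44880 words in total
      assert(total == 44880);
      std::printf("3-byte share: %.2f%%\n", 100.0 * 32 * 16 * 80 / total);  // 91.27%
    }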




    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  9. The Following 2 Users Say Thank You to Alexander Rhatushnyak For This Useful Post:

    byronknoll (8th November 2017),snowcat (8th November 2017)

  10. #7
    Member
    Join Date
    Mar 2011
    Location
    USA
    Posts
    226
    Thanks
    108
    Thanked 106 Times in 65 Posts
    Quote Originally Posted by Alexander Rhatushnyak View Post
    "Would it be possible to also release the updated dictionary with shuffled words?" - it could be possible, especially if I could get something more than "thank you" for that (-: but there are also reasons not to release it. For exampe, just like open-sourcing phda would probably kill almost all alternative approaches to Hutter Prize, releasing the dictionary would probably kill attempts to build better dictionaries, do you agree? People tend to use English dictionary from paq8hp12 without even looking carefully at what's inside it.
    That's true - while working on cmix I haven't done any work on improving the dictionary preprocessing because it already works so well. I feel like there is a large barrier to figuring out how to make further improvements, so I focus on other components. Eventually I might look into dynamically creating the dictionary, which would have some nice benefits:

    1) adapting the dictionary size to the data size (enwik9 should have a larger dictionary, BOOK1 should have a small one)
    2) adapting to the type of data (e.g. other languages).

    However, a dynamic dictionary would probably lead to a large performance drop for enwik8, because the static dictionary is already so heavily optimized.

  11. #8
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    237
    Thanks
    38
    Thanked 92 Times in 48 Posts
    Here is an experimental compressor designed for enwik9:
    http://qlic.altervista.org/phda9.zip

    The included read_me.txt:

    phda9 is an experimental compressor designed for enwik9
    and similar (but smaller) files.
    It's a single-threaded app compiled for Intel Core i7.

    To compress enwik9:
    ===================
    ./phda9 C9 enwik9 compressed_enwik9
    To decompress:
    ./phda9dec compressed_enwik9 decompressed_enwik9

    phda stands for Pack Hundred Days Ahead.
    Decompression of enwik9 takes about 23.5 hours on a desktop
    computer with Core i7-4770 @ 3.40 GHz (and no other
    processes actively running). Compression time is similar.

    Memory usage reported by /usr/bin/time is approximately
    5000000 KB (~4.77 GiB). If allocation fails, the app just aborts.

    If you try compressing/decompressing either other files
    with 'C9', or enwik9 differently, that will not work:
    even if encoding/decoding complete without errors, the
    decompressed file will not be a bit-exact copy of the
    original. YOU MAY LOSE YOUR DATA! and a lot of time.

    You can also try compressing other files with English text:
    ============================================================
    ./phda9 C plain_text_file compressed_file
    To decompress:
    ./phda9 D compressed_file decompressed_file

    Plain English text or XML, HTML, etc. Files must be smaller
    than ~900 MB. Other problems? Please report them. Problems
    in the (de)compression algorithm will most likely be fixed.

    You can also try compressing/decompressing other files
    smaller than ~500 MB using lowercase c/d instead of C/D,
    but this makes little sense because phda9 was not designed
    for arbitrary files, therefore issues will not be fixed.

    The MS Windows executable phda9dec.exe requires the
    following standard C++ runtime DLLs from the MinGW-W64 package:
    libstdc++-6.dll
    libgcc_s_seh-1.dll
    libwinpthread-1.dll

    Please email questions/comments to Alexander Rhatushnyak


    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  12. The Following 5 Users Say Thank You to Alexander Rhatushnyak For This Useful Post:

    byronknoll (15th December 2017),Darek (16th December 2017),JamesB (15th December 2017),Matt Mahoney (5th January 2018),Mike (15th December 2017)

  13. #9
    Member
    Join Date
    Mar 2011
    Location
    USA
    Posts
    226
    Thanks
    108
    Thanked 106 Times in 65 Posts
    Wow, great results! Using the Linux executables:

    enwik9 compressed size: 118658060 bytes
    size of decompression program in .zip: 41994 bytes
    total size (compressed file + decompression program): 118700054 bytes
    compression time: 56815.162 seconds
    decompression time: 55201.891 seconds
    compression memory: 5031284 KiB
    decompression memory: 4991748 KiB

    enwik8 compressed size: 15173565 bytes
    size of decompression program in .zip: 569242 bytes
    total size (compressed file + decompression program): 15742807 bytes
    compression time: 6135.337 seconds
    decompression time: 6165.697 seconds
    compression memory: 3843492 KiB
    decompression memory: 3840376 KiB

    Description of test machine:
    processor: Intel Core i7-7700K
    memory: 32GB DDR4
    OS: Ubuntu 16.04
    Last edited by byronknoll; 17th December 2017 at 00:41.

  14. The Following User Says Thank You to byronknoll For This Useful Post:

    Alexander Rhatushnyak (16th December 2017)

  15. #10
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    237
    Thanks
    38
    Thanked 92 Times in 48 Posts
    Thank you Byron!
    Quote Originally Posted by byronknoll View Post
    enwik9 compressed size: 118658060 bytes
    MD5sum is f39dc57c2d094a82068b60c2687d95bc

    It's available for download in a RAR5 multi-volume archive:
    http://qlic.altervista.org/da/v_118658.part1.rar - 30'000'000 bytes
    http://qlic.altervista.org/da/v_118658.part2.rar - 30'000'000 bytes
    http://qlic.altervista.org/da/v_118658.part3.rar - 30'000'000 bytes
    http://qlic.altervista.org/da/v_118658.part4.rar - 28'658'740 bytes

    Quote Originally Posted by byronknoll View Post
    size of decompression program in .zip: 41994 bytes
    So this is version 1.0 of phda9.
    Version 1.1 fixes a small bug (enwik9 and enwik8 were not affected) and its .zip is 41992 bytes.
    Version 1.1 is at the same URL; 1.0 was there for just a couple of hours.

    Quote Originally Posted by byronknoll View Post
    processor: Intel Core i7-7700K
    memory: 32GB DDR4
    and what are their frequencies?

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  16. #11
    Member
    Join Date
    Mar 2011
    Location
    USA
    Posts
    226
    Thanks
    108
    Thanked 106 Times in 65 Posts
    Quote Originally Posted by Alexander Rhatushnyak View Post
    and what are their frequencies?
    CPU is 4.7GHz
    RAM is 3200MHz

  17. The Following User Says Thank You to byronknoll For This Useful Post:

    Alexander Rhatushnyak (19th December 2017)

  18. #12
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    237
    Thanks
    38
    Thanked 92 Times in 48 Posts
    Thank you Byron!
    If you (or someone else) could compress the LPCB images with cmix, I would post the results on the LPCB front page.
    For many years I have been curious to see whether cmix would compress them better than GraLIC, and I still don't have access to a machine with 32 GB of RAM... To be more precise, not enough access to compress 3.4 GB of data and decompress at least half of it.

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  19. #13
    Member
    Join Date
    Dec 2008
    Location
    Poland, Warsaw
    Posts
    940
    Thanks
    558
    Thanked 380 Times in 284 Posts
    A simple calculation of the time needed to compress 3.4 GB with the latest cmix on my laptop:
    based on a real test of 5 images (1.BMP, A.TIF, B.TGA, C.TIF, D.TGA) with a total size of ~8 MB, which took about 13'350 s.
    That means 3.4 GB would take about 67 days (5'809'920 s) to compress. On Byron's desktop maybe about 35-40 days. Damn! It's quite a challenging test.

    What about using the latest paq8px version, e.g. v124? After conversion to TIFF/BMP/TGA it would take "only" about 3 days to compress...
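
    For reference, a tiny sketch of that extrapolation (illustrative only; it simply scales the measured time linearly with data size):

    #include <cstdio>

    int main() {
      const double sample_mb = 8.0;           // 5 test images, ~8 MB total
      const double sample_seconds = 13350.0;  // measured cmix time on the laptop
      const double target_mb = 3.4 * 1024.0;  // the 3.4 GB LPCB set
      double est = sample_seconds * target_mb / sample_mb;
      std::printf("~%.0f s = ~%.1f days\n", est, est / 86400.0);  // ~5'809'920 s ~= 67 days
    }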

  20. #14
    Member
    Join Date
    Mar 2011
    Location
    USA
    Posts
    226
    Thanks
    108
    Thanked 106 Times in 65 Posts
    Sure, I am curious how cmix compares on lossless image compression. The link to the test data on your page is broken: http://compressionratings.com/download.html
    Do you know of any other mirrors for this data? I am travelling right now and don't have access to a Windows machine to run nconvert.exe. Alternatively, maybe you can post just 1 or 2 images so I can test how cmix does. If cmix doesn't do well on those images, maybe it would be a waste of time to run the full benchmark. 3.4G would take a *long* time for cmix to compress. If I was running the full benchmark, I would use a computer from Google Compute Engine (since I have lots of credits there) rather than my home computer.

  21. #15
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    237
    Thanks
    38
    Thanked 92 Times in 48 Posts
    Quote Originally Posted by Darek View Post
    What about use latest paq8px version? F.e. v124?
    Testing paq8px would not make me any less curious to see cmix tested...

    Quote Originally Posted by byronknoll View Post
    Do you know of any other mirrors for this data?
    No, but I believe many people have downloaded the test set from compressionratings.com; it had been there since 2011. Perhaps someone reading this has a copy?

    Quote Originally Posted by byronknoll View Post
    I am travelling right now and don't have access to a Windows machine to run nconvert.exe.
    NConvert is available for Mac and Linux too: https://www.xnview.com/en/nconvert/#downloads

    Quote Originally Posted by byronknoll View Post
    Alternatively, maybe you can post just 1 or 2 images so I can test how cmix does.
    If nothing else works, one image from each of the six subsets, okay?

    UPDATE: Actually, try https://download.xnview.com/old_versions, and maybe https://www.winehq.org if the Linux and Mac versions 5.75 and 6.05 produce files that are not bit-exact with the output of version 5.91...
    Last edited by Alexander Rhatushnyak; 21st December 2017 at 04:12.

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  22. The Following User Says Thank You to Alexander Rhatushnyak For This Useful Post:

    Darek (21st December 2017)

  23. #16
    Member
    Join Date
    Sep 2015
    Location
    Italy
    Posts
    239
    Thanks
    104
    Thanked 142 Times in 103 Posts
    Quote Originally Posted by Alexander Rhatushnyak View Post
    I believe many people have downloaded the test set from compressionratings.com, it was there since 2011. Perhaps someone reading this?


    I uploaded lpcb.gralic.zip here:
    lpcb_gralic.7z.001 486539264 bytes https://www.datafilehost.com/d/e70d5840
    lpcb_gralic.7z.002 485466112 bytes https://www.datafilehost.com/d/b41a83b4
    The image files must be extracted with GraLIC 111d, which I attached to this post along with the html page from which I downloaded lpcb.gralic.zip.
    Tell me if you need smaller or already-extracted files, or if the download doesn't work.
    Attached Files

  24. The Following 3 Users Say Thank You to Mauro Vezzosi For This Useful Post:

    Alexander Rhatushnyak (21st December 2017),byronknoll (30th December 2017),Gotty (5th July 2018)

  25. #17
    Member
    Join Date
    Feb 2016
    Location
    Luxembourg
    Posts
    521
    Thanks
    198
    Thanked 745 Times in 302 Posts
    Testing with paq8px_v124: 973.203.002 bytes
    Detailed results attached.

    paq8px did badly on the STA* images; the color transform hurts compression.
    Also, I need to improve the model to do better on high resolution photographic images with contexts from a broader neighborhood.

    cmix is using the image models from paq8px_v119, though I haven't made many improvements to the 24/32bpp model since then.
    It usually beats paq8px on images by about 1%, mostly due to the LSTM byte mixer, but I suspect the margin may be slightly higher on such large files. So cmix will undoubtedly beat GraLIC if I make a PNM parser (it currently doesn't have one).

    Testing with EMMA 0.1.25 x64: 959.633.599 bytes
    Detailed results attached.

    EMMA takes 1st spot (until someone tests with cmix), ~1.27% ahead of GraLIC.

    And just for fun, I also tried my internal version of ZCM (1.03, unreleased): 1.032.886.237 bytes (in solid mode, 1.029.688.398 bytes)
    Good enough to beat Flic, and just slightly behind paq8im. Maybe now Nania will release it
    Last edited by mpais; 21st December 2017 at 18:03. Reason: Added results for EMMA

  26. The Following 2 Users Say Thank You to mpais For This Useful Post:

    Alexander Rhatushnyak (21st December 2017),schnaader (21st December 2017)

  27. #18
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 779 Times in 486 Posts
    Congrats on number 1 spot
    http://mattmahoney.net/dc/text.html#1187

  28. The Following User Says Thank You to Matt Mahoney For This Useful Post:

    Alexander Rhatushnyak (6th January 2018)

  29. #19
    Member
    Join Date
    Dec 2008
    Location
    Poland, Warsaw
    Posts
    940
    Thanks
    558
    Thanked 380 Times in 284 Posts
    Quote Originally Posted by Alexander Rhatushnyak View Post
    Here is an experimental compressor designed for enwik9:
    http://qlic.altervista.org/phda9.zip

    The included read_me.txt:
    Hi!
    Could you also post a Windows compressor executable? Only the decompressor is included.

    Darek

  30. #20
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    237
    Thanks
    38
    Thanked 92 Times in 48 Posts
    I can consider building a Windows exe for version 1.2 or 1.3 (February or March), but sorry, no promises: MS Windows is a thing of the past for me.
    If you could tell me you managed to run the Linux exe, that would be a relief.
    Building a 32-bit exe would be more interesting for me, if anyone needs it...
    Last edited by Alexander Rhatushnyak; 8th January 2018 at 02:56.

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  31. The Following User Says Thank You to Alexander Rhatushnyak For This Useful Post:

    Darek (8th January 2018)

  32. #21
    Member
    Join Date
    Dec 2008
    Location
    Poland, Warsaw
    Posts
    940
    Thanks
    558
    Thanked 380 Times in 284 Posts
    Thanks. I don't have the option to run Linux on my machine right now, even in a VM... but "consider" gives me some hope, even paired with "no promise".

  33. #22
    Member
    Join Date
    May 2008
    Location
    Kuwait
    Posts
    332
    Thanks
    35
    Thanked 36 Times in 21 Posts
    Quote Originally Posted by Matt Mahoney View Post
    Please notice that the decompressor is compressed with UPX (using LZMA) prior to zipping, which gives a different size. Would it be fair to use 7z or xz instead of zip?

  34. #23
    Member
    Join Date
    May 2012
    Location
    United States
    Posts
    324
    Thanks
    181
    Thanked 53 Times in 38 Posts
    Quote Originally Posted by Alexander Rhatushnyak View Post
    I can consider building a windows exe for version 1.2 or 1.3 (February or March), but sorry, no promise: MS Windows is a thing of the past for me.
    If you could tell you managed to run the Linux exe, that would be a relief
    Building a 32-bit exe would be more interesting for me. If anyone needs...
    As we are now in March, I wanted to comment on this. A 32-bit Windows binary would be the most interesting for me as well. If you are still considering building a Windows binary, please build a 32-bit version.

    It would be very much appreciated!

    Thanks.

  35. #24
    Member
    Join Date
    Nov 2015
    Location
    -
    Posts
    46
    Thanks
    202
    Thanked 10 Times in 9 Posts
    Quote Originally Posted by Matt Mahoney
    The time for decompression/compression is estimated for a 2GHz P4 till 2010 and for a 2.7GHz i7 since 2017. The percent (%) improvement is over the baseline previous record.
    http://prize.hutter1.net/hrules.htm
    Does this new rule mean that I can use both cores and all 4 threads?
    Can I use all 4 processor threads and 1 GB of RAM without breaking the rules? Would I still be able to qualify for the prize?

  36. #25
    Member
    Join Date
    Nov 2015
    Location
    -
    Posts
    46
    Thanks
    202
    Thanked 10 Times in 9 Posts
    Programs must run in less than 20'000/T hours real time on our test machine, where T is its Geekbench4 score
    https://browser.geekbench.com/v4/cpu/145066
    ==
    20000/2441 ~= 8.19 hours
    20000/4336 ~= 4.61 hours
    So I have either about 8 or about 4.6 hours to decompress.
    Right?
    ==
    It turns out that if both physical cores are counted, I will have only about 4.6 hours to finish decompressing.
    But can I use the additional 2 Hyper-Threading threads for free?
    I would have the same ~4.6 hours, but I would be using 4 threads.
    Right?

  37. #26
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,474
    Thanks
    26
    Thanked 121 Times in 95 Posts
    Quote from http://prize.hutter1.net/hrules.htm (emphasis mine):
    Update: We now test submissions on newer machines. Use of less than 1GB RAM and 10GB HD for temporary files is still required. No GPU usage. Programs must run in less than 20'000/T hours real time on our test machine, where T is its Geekbench4 score. Current machine (as of 2017, which may change without notice) is a Dell Latitude E6510 Intel Core i7-620M 2667 MHz with 64bit Linux with T=2441 (1 core) and T=4336 (2 cores).
    Real time is another name for wall-clock time, so from this description it follows that you can use additional cores and you won't be penalized in any way.

  38. #27
    Member
    Join Date
    Nov 2015
    Location
    -
    Posts
    46
    Thanks
    202
    Thanked 10 Times in 9 Posts
    Quote Originally Posted by Piotr Tarsa View Post
    Quote from http://prize.hutter1.net/hrules.htm (emphasis mine):

    Real time is another name for wall-clock time, so from this description it follows that you can use additional cores and you won't be penalized in any way.
    Thanks.
    ==
    Let's wait for confirmation from Matt Mahoney.
    Otherwise, what would be the meaning of his "T = 2441 (1 core) and T = 4336 (2 cores)"
    if "T = 2441 (1 core)" alone were enough?
    That would be especially misleading.

  39. #28
    Member Alexander Rhatushnyak's Avatar
    Join Date
    Oct 2007
    Location
    Canada
    Posts
    237
    Thanks
    38
    Thanked 92 Times in 48 Posts
    Quote Originally Posted by comp1 View Post
    As we are now in March, I wanted to comment on this. A 32-bit Windows binary would be the most interesting for me as well. If you are still considering building a Windows binary, please build a 32-bit version.
    The next version, 1.2, will probably be released next week, and as before, only the enwik9 decompressor will be available for Windows.
    First, the demand for a Windows and/or 32-bit executable is too small - just two people on the planet -
    and second, I have no good hardware to run the codec on enwik9 (version 1.2 would have been released a month ago if I had).

    Quote Originally Posted by Piotr Tarsa View Post
    Real time is another name for clock time, so from this description it follows that you can use additional cores and you won't be penalized in any way.
    I guess the quoted statement "Programs must run in ... ... with T=2441 (1 core) and T=4336 (2 cores)" means:
    "if you use 1 core then your time limit is 20000 / 2441 ~= 8.19 hours real time, and if you use two cores then it's 20000 / 4336 ~= 4.61 hours".

    This newsgroup is dedicated to image compression:
    http://linkedin.com/groups/Image-Compression-3363256

  40. The Following 2 Users Say Thank You to Alexander Rhatushnyak For This Useful Post:

    Mike (11th March 2018),xinix (11th March 2018)

  41. #29
    Member
    Join Date
    Nov 2015
    Location
    -
    Posts
    46
    Thanks
    202
    Thanked 10 Times in 9 Posts
    Quote Originally Posted by Alexander Rhatushnyak View Post

    Guess the above quoted statements "Programs must run in ... ... with T=2441 (1 core) and T=4336 (2 cores)" mean
    "if you use 1 core then your time limit is 20000 / 2441 ~= 8.19 hours real time, and if you use two cores then it's 20000 / 4336 ~= 4.61 hours"
    There!
    I was right to doubt the answer above.
    So, with the 2 physical cores counted, I have to fit into either ~8.19 hours (1 core) or ~4.61 hours (2 cores).
    ==
    So the question about "Hyper-Threading" remains open!
    ==
    Can I use the 2 Hyper-Threading threads for free?
    That is, use 4 hardware threads but still have the 4.61 hours for decompression?

  42. #30
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 779 Times in 486 Posts
    You are allowed to use all the cores and hyperthreads. My test machine has 2 cores and 4 hyperthreads.


