the ZPAQ prize is the same thing as the Hutter prize except you can only use ZPAQ and Extensions of it to compress the file
Extra Rule No. 1: Neither tornado nor lzturbo is allowed.
Interesting idea. Can anyone compress 1 million digits of pi to less than 114 bytes? http://mattmahoney.net/dc/pi.txt.zpaq
Matt Mahoney, can you compress 1 million digits of the y-cruncher lemniscate constant to 10,000 bytes or less?
if you need the program to generate digits go to http://www.numberworld.org/y-cruncher/
if you want a hint go to http://www.numberworld.org/notes.html
Last edited by calthax; 18th August 2014 at 02:52.
Kolmogorov complexity has no time limit
The 114 byte zpaq archive only takes a day to decompress.
A lot of the time you can write very short algorithms that take exponential time. I'm not sure if there is one for pi.
Here is ten million digits of pi in 21 bytes, give or take: http://pi.karmona.com
The language is URL. I don't think Kolmogorov compression disallows it.
I think "URL compression" is quite close to BARF-style compression, thereby undermining the whole concept of general-purpose compression.
Yes!!! We can also send data between parallel universes... to retrieve it again, IPv16 will be needed, but we cut down on HD usage.
Some other small programs for calculating pi. So far ZPAQL beats C.
http://numbers.computation.free.fr/C.../tinycodes.pdf
But some other languages look promising.
http://codegolf.stackexchange.com/qu...0-digits-of-pi
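To make the "small program for pi" idea concrete, here is a minimal sketch in Python of Gibbons' unbounded spigot algorithm, which streams the decimal digits of pi one at a time. This is just an illustration of how short such a generator can be, not one of the linked programs:

```python
def pi_digits():
    """Generate decimal digits of pi (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            # The next digit is safe to emit; shift the state by a factor of 10.
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            # Consume another term of the series to narrow the interval.
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

# Example: print the first 20 digits.
gen = pi_digits()
print("".join(str(next(gen)) for _ in range(20)))
```

Like any exact spigot, it slows down as the integers in the state grow, so it illustrates the "short program, long runtime" trade-off mentioned above.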
I'm not sure it undermines general purpose compression. Kolmogorov compression in general is uncomputable; this, OTOH, is a viable method for representing the digits of pi; moreover, it likely uses fewer bits than the Kolmogorov solution.
It might undermine this competition, because it clearly doesn't seem to be the point. But I don't know what precept of compression it violates.
Edit:
This has an important difference from BARF. AFAIK, BARF "compresses" a file mainly by moving data into the file name. So the filename now has unbounded length.
This scheme wasn't intended to compress arbitrary data. The concept is to take advantage of the fact that common data is already on the web, so it can be unambiguously referenced by its URL.
The smallest URL that fetches a given piece of data describes a well-defined concept that can be used to measure entropy. Call it "Internet entropy." Actually, arbitrary data can also be incorporated into a URL, so this could work for any data.
Last edited by nburns; 20th August 2014 at 21:13.