Activity Stream

  • Ms1
    17th June 2020, 20:24
    Hi, This thread is for discussions related to the Global Data Compression Competition. My name is Maxim and I'm the first person to blame in case of anything. Note that any and all formal announcements are made only on the official website or from the Globalcompetition at compression ru email address. Check https://globalcompetition.compression.ru/ and https://globalcompetition.compression.ru/rules/ in particular for the rules and explanations on how to submit your compressor.
    39 replies | 2635 view(s)
  • Sportman
    20th June 2020, 16:10
    Input: 400,000,000 bytes - TS40.txt (Test 1 - English texts from Project Gutenberg in UTF-8)
    Output (compressed size, compression time - decompression time, compressor):
    219,871,172 bytes, 72.520 sec. - 23.897 sec., paq8pxd_v89_no_avx2 -s0
    203,837,965 bytes, 2.183 sec. - 0.436 sec., qpress64 -mL2T1
    181,027,275 bytes, 4.597 sec. - 0.266 sec., lz4 -3
    175,062,733 bytes, 5.991 sec. - 2.596 sec., gzip -1
    170,838,185 bytes, 7.909 sec. - 0.260 sec., lz4 -5
    169,631,447 bytes, 1.669 sec. - 1.174 sec., brotli_gc82 -q 0 --large_window=30
    167,848,522 bytes, 14.472 sec. - 0.264 sec., lz4 -9
    162,532,394 bytes, 2.506 sec. - 1.533 sec., tor64 -3 -b1g
    160,369,344 bytes, 1.402 sec. - 0.452 sec., zstd -1 --ultra --single-thread
    160,062,666 bytes, 8.325 sec. - 2.396 sec., gzip -3
    157,075,906 bytes, 1.976 sec. - 1.094 sec., brotli_gc82 -q 1 --large_window=30
    150,264,912 bytes, 3.871 sec. - 0.989 sec., brotli_gc82 -q 2 --large_window=30
    150,006,975 bytes, 1.954 sec. - 0.549 sec., zstd -2 --ultra --single-thread
    149,463,616 bytes, 11.309 sec. - 2.369 sec., gzip -5
    149,140,429 bytes, 5.274 sec. - 1.141 sec., brotli_gc82 -q 3 --large_window=30
    145,830,570 bytes, 25.139 sec. - 2.336 sec., gzip -9
    142,755,486 bytes, 2.334 sec. - 0.563 sec., zstd -3 --ultra --single-thread
    142,041,652 bytes, 5.867 sec. - 1.718 sec., rar -m1 -mt1
    140,816,232 bytes, 2.358 sec. - 0.591 sec., zstd -4 --ultra --single-thread
    140,359,623 bytes, 3.866 sec. - 0.579 sec., zstd -5 --ultra --single-thread
    139,837,540 bytes, 9.021 sec. - 11.980 sec., ppmd -m256 -o2
    136,868,789 bytes, 5.491 sec. - 0.575 sec., zstd -6 --ultra --single-thread
    135,555,777 bytes, 6.968 sec. - 1.023 sec., brotli_gc82 -q 4 --large_window=30
    134,373,286 bytes, 18.111 sec. - 2.980 sec., 7z -mx1 -t7z -mmt1
    132,248,557 bytes, 8.055 sec. - 0.544 sec., zstd -7 --ultra --single-thread
    131,148,975 bytes, 12.726 sec. - 1.009 sec., brotli_gc82 -q 5 --large_window=30
    130,848,754 bytes, 28.274 sec. - 2.763 sec., 7z -mx3 -t7z -mmt1
    130,590,357 bytes, 10.530 sec. - 0.528 sec., zstd -8 --ultra --single-thread
    128,877,316 bytes, 16.120 sec. - 0.525 sec., zstd -9 --ultra --single-thread
    128,121,552 bytes, 16.978 sec. - 0.994 sec., brotli_gc82 -q 6 --large_window=30
    127,948,684 bytes, 18.931 sec. - 0.523 sec., zstd -10 --ultra --single-thread
    127,337,551 bytes, 22.643 sec. - 0.528 sec., zstd -11 --ultra --single-thread
    125,834,430 bytes, 37.601 sec. - 0.527 sec., zstd -12 --ultra --single-thread
    125,827,898 bytes, 30.065 sec. - 4.575 sec., 7z -mx1 -tbzip2 -mmt1
    125,566,434 bytes, 26.913 sec. - 0.982 sec., brotli_gc82 -q 7 --large_window=30
    125,310,173 bytes, 27.459 sec. - 1.604 sec., rar -m2 -mt1
    124,999,622 bytes, 6.261 sec. - 1.007 sec., lzturbo -32 -p1 -b1024
    123,308,097 bytes, 45.064 sec. - 0.998 sec., brotli_gc82 -q 8 --large_window=30
    123,027,949 bytes, 44.446 sec. - 0.521 sec., zstd -13 --ultra --single-thread
    121,198,039 bytes, 58.888 sec. - 0.529 sec., zstd -14 --ultra --single-thread
    121,158,907 bytes, 86.535 sec. - 1.034 sec., brotli_gc82 -q 9 --large_window=30
    120,950,376 bytes, 67.583 sec. - 1.557 sec., rar -m3 -mt1
    119,679,537 bytes, 80.359 sec. - 0.540 sec., zstd -15 --ultra --single-thread
    118,232,294 bytes, 149.312 sec. - 1.559 sec., rar -m4 -mt1
    117,068,801 bytes, 220.511 sec. - 1.575 sec., rar -m5 -mt1
    114,443,789 bytes, 74.329 sec. - 0.515 sec., zstd -16 --ultra --single-thread
    113,635,552 bytes, 10.122 sec. - 11.798 sec., ppmd -m256 -o3
    112,623,113 bytes, 106.130 sec. - 0.512 sec., zstd -17 --ultra --single-thread
    110,930,478 bytes, 125.705 sec. - 0.544 sec., zstd -18 --ultra --single-thread
    109,536,996 bytes, 142.226 sec. - 0.542 sec., zstd -19 --ultra --single-thread
    109,490,803 bytes, 339.655 sec. - 6.143 sec., 7z -mx9 -tbzip2 -mmt1
    106,784,025 bytes, 174.501 sec. - 0.563 sec., zstd -20 --ultra --single-thread
    106,763,971 bytes, 179.277 sec. - 2.294 sec., 7z -mx5 -t7z -mmt1
    105,010,520 bytes, 197.138 sec. - 0.582 sec., zstd -21 --ultra --single-thread
    104,828,445 bytes, 204.698 sec. - 2.369 sec., 7z -mx7 -t7z -mmt1
    104,105,584 bytes, 399.093 sec. - 1.321 sec., brotli_gc82 -q 10 --large_window=30
    103,573,154 bytes, 250.832 sec. - 1.836 sec., tor64 -16 -b1g
    103,449,693 bytes, 216.918 sec. - 0.598 sec., zstd -22 --ultra --single-thread
    103,161,326 bytes, 228.660 sec. - 2.444 sec., 7z -mx9 -t7z -mmt1
    101,516,046 bytes, 687.137 sec. - 1.256 sec., brotli_gc82 -q 11 --large_window=30
    99,417,161 bytes, 12.430 sec. - 13.326 sec., ppmd -m256 -o4
    91,358,461 bytes, 54.995 sec. - 60.793 sec., ccm 7
    90,370,482 bytes, 799.376 sec. - 4.195 sec., rz -d 1023M
    88,957,140 bytes, 23.784 sec. - 24.095 sec., ppmd -m256 -o6
    87,726,581 bytes, 39.753 sec. - 40.490 sec., ppmd -m256 -o8
    86,739,921 bytes, 56.693 sec. - 50.542 sec., arc -m9 -mt1
    86,549,699 bytes, 30.431 sec. - 13.516 sec., m99_gcc82_k8 -b1000000000 -t1
    84,962,580 bytes, 122.975 sec. - 129.632 sec., cmm4 77
    84,625,414 bytes, 172.474 sec. - 4.419 sec., glza -t1
    83,282,332 bytes, 229.907 sec. - 244.270 sec., lpq1
    83,265,089 bytes, 32.076 sec. - 6.381 sec., bwtturbo -59 -t0
    83,137,161 bytes, 52.545 sec. - 38.736 sec., ringsx64 -m8 -t1
    82,615,027 bytes, 341.316 sec. - 348.223 sec., paq9a -9
    81,773,328 bytes, 2343.278 sec. - 2341.492 sec., uda 0.301
    81,729,538 bytes, 42.922 sec. - 24.991 sec., bsc -m0 -b1024 -e2 -T
    81,681,030 bytes, 222.863 sec. - 245.341 sec., lpaq8 9
    81,233,486 bytes, 43.788 sec. - 42.087 sec., bcm -9
    81,046,263 bytes, 362.984 sec. - 299.654 sec., bbb cm1000
    80,984,961 bytes, 86.155 sec. - 88.385 sec., zcmx64 -m8 -t1
    80,569,729 bytes, 277.914 sec. - 282.126 sec., ppmonstr -m1024
    79,911,160 bytes, 558.554 sec. - 530.958 sec., slim23d -o32 -m2048
    79,270,565 bytes, 1857.265 sec. - 1898.469 sec., fp8_v3 -8
    79,110,045 bytes, 674.670 sec. - 685.552 sec., zpaq64 -m511 -t1
    77,503,484 bytes, 194.256 sec. - 193.304 sec., nz -cc -t1
    76,156,565 bytes, 90.437 sec. - 85.750 sec., mcm_generic -x -m11
    70,648,226 bytes, 24684.416 sec. - 25236.934 sec., paq8pxd_v89_no_avx2 -x15
    Input: 400,000,000 bytes - mixed40.dat (Test 3 - Ubuntu x64 distribution and x64 shared library files from Python packages for Linux)
    Output (compressed size, compression time - decompression time, compressor):
    151,828,132 bytes, 0.550 sec. - 0.261 sec., lz4 -1
    128,037,938 bytes, 2.518 sec. - 0.248 sec., lz4 -3
    125,575,606 bytes, 3.474 sec. - 0.245 sec., lz4 -5
    124,236,658 bytes, 5.143 sec. - 0.241 sec., lz4 -7
    123,667,882 bytes, 9.004 sec. - 0.241 sec., lz4 -9
    120,343,782 bytes, 0.937 sec. - 0.984 sec., brotli_gc82 -q 0 --large_window=30
    112,535,313 bytes, 4.319 sec. - 1.856 sec., gzip -1
    109,862,965 bytes, 1.145 sec. - 0.947 sec., brotli_gc82 -q 1 --large_window=30
    107,968,743 bytes, 5.423 sec. - 1.801 sec., gzip -3
    101,890,860 bytes, 0.869 sec. - 0.429 sec., zstd -1 --ultra --single-thread
    100,621,029 bytes, 7.179 sec. - 1.747 sec., gzip -5
    99,121,232 bytes, 1.054 sec. - 0.442 sec., zstd -2 --ultra --single-thread
    98,469,169 bytes, 14.139 sec. - 1.717 sec., gzip -7
    97,624,475 bytes, 86.727 sec. - 1.709 sec., gzip -9
    96,165,328 bytes, 1.487 sec. - 0.458 sec., zstd -4 --ultra --single-thread
    96,114,162 bytes, 1.354 sec. - 0.448 sec., zstd -3 --ultra --single-thread
    94,901,663 bytes, 2.403 sec. - 0.780 sec., brotli_gc82 -q 2 --large_window=30
    93,509,021 bytes, 2.237 sec. - 0.462 sec., zstd -5 --ultra --single-thread
    93,472,235 bytes, 27.393 sec. - 3.190 sec., 7z -mx1 -tbzip2 -mmt1
    93,264,123 bytes, 4.745 sec. - 0.923 sec., brotli_gc82 -q 4 --large_window=30
    93,139,762 bytes, 3.243 sec. - 0.955 sec., brotli_gc82 -q 3 --large_window=30
    92,862,470 bytes, 2.559 sec. - 0.463 sec., zstd -6 --ultra --single-thread
    92,859,347 bytes, 30.555 sec. - 3.870 sec., 7z -mx3 -tbzip2 -mmt1
    92,810,578 bytes, 31.959 sec. - 4.070 sec., 7z -mx5 -tbzip2 -mmt1
    91,930,200 bytes, 79.564 sec. - 3.993 sec., 7z -mx7 -tbzip2 -mmt1
    89,292,399 bytes, 272.160 sec. - 3.558 sec., 7z -mx9 -tbzip2 -mmt1
    88,245,920 bytes, 4.538 sec. - 0.827 sec., lzturbo -32 -b1024 -p1
    87,866,645 bytes, 3.886 sec. - 0.433 sec., zstd -7 --ultra --single-thread
    86,374,802 bytes, 4.649 sec. - 0.423 sec., zstd -8 --ultra --single-thread
    85,881,714 bytes, 5.487 sec. - 0.422 sec., zstd -9 --ultra --single-thread
    85,060,590 bytes, 34.308 sec. - 35.908 sec., ppmd e -m256 -o8
    84,861,877 bytes, 6.438 sec. - 0.421 sec., zstd -10 --ultra --single-thread
    84,634,980 bytes, 7.789 sec. - 0.421 sec., zstd -11 --ultra --single-thread
    84,418,783 bytes, 17.501 sec. - 0.413 sec., zstd -13 --ultra --single-thread
    84,384,933 bytes, 10.286 sec. - 0.418 sec., zstd -12 --ultra --single-thread
    84,246,538 bytes, 5.558 sec. - 1.210 sec., rar -m1 -mt1
    84,171,820 bytes, 19.503 sec. - 0.414 sec., zstd -14 --ultra --single-thread
    84,043,256 bytes, 26.311 sec. - 0.415 sec., zstd -15 --ultra --single-thread
    80,871,756 bytes, 41.564 sec. - 0.427 sec., zstd -16 --ultra --single-thread
    79,062,917 bytes, 8.893 sec. - 0.952 sec., brotli_gc82 -q 5 --large_window=30
    78,796,316 bytes, 20.140 sec. - 6.911 sec., bwtturbo -59 -t0
    78,411,587 bytes, 11.515 sec. - 0.944 sec., brotli_gc82 -q 6 --large_window=30
    78,278,819 bytes, 49.844 sec. - 0.430 sec., zstd -17 --ultra --single-thread
    78,157,770 bytes, 130.282 sec. - 150.564 sec., nz -cc -t1
    77,638,400 bytes, 18.573 sec. - 0.936 sec., brotli_gc82 -q 7 --large_window=30
    77,383,789 bytes, 33.179 sec. - 0.933 sec., brotli_gc82 -q 8 --large_window=30
    76,807,130 bytes, 28.521 sec. - 34.126 sec., bcm -9
    76,631,786 bytes, 73.061 sec. - 0.932 sec., brotli_gc82 -q 9 --large_window=30
    75,951,871 bytes, 446.226 sec. - 348.386 sec., bbb cm1000
    75,824,086 bytes, 25.195 sec. - 20.394 sec., bsc -m0 -b1024 -e2 -T
    75,582,037 bytes, 69.137 sec. - 0.453 sec., zstd -18 --ultra --single-thread
    75,073,993 bytes, 86.311 sec. - 0.454 sec., zstd -19 --ultra --single-thread
    74,969,276 bytes, 9.114 sec. - 2.189 sec., 7z -mx1 -t7z -mmt1
    73,290,454 bytes, 13.004 sec. - 2.070 sec., 7z -mx3 -t7z -mmt1
    72,543,276 bytes, 247.662 sec. - 4.239 sec., glza -t1
    72,486,000 bytes, 93.220 sec. - 0.481 sec., zstd -20 --ultra --single-thread
    70,969,356 bytes, 103.202 sec. - 0.490 sec., zstd -21 --ultra --single-thread
    70,048,687 bytes, 8.352 sec. - 1.167 sec., rar -m2 -mt1
    69,480,082 bytes, 117.975 sec. - 0.501 sec., zstd -22 --ultra --single-thread
    67,674,309 bytes, 9.390 sec. - 1.136 sec., rar -m3 -mt1
    67,362,088 bytes, 13.057 sec. - 1.127 sec., rar -m4 -mt1
    67,243,590 bytes, 14.647 sec. - 1.122 sec., rar -m5 -mt1
    66,556,038 bytes, 450.634 sec. - 1.047 sec., brotli_gc82 -q 10 --large_window=30
    64,798,846 bytes, 80.630 sec. - 1.964 sec., 7z -mx5 -t7z -mmt1
    62,359,340 bytes, 100.765 sec. - 1.928 sec., 7z -mx7 -t7z -mmt1
    62,098,925 bytes, 690.067 sec. - 0.949 sec., brotli_gc82 -q 11 --large_window=30
    61,184,161 bytes, 107.507 sec. - 1.912 sec., 7z -mx9 -t7z -mmt1
    57,126,873 bytes, 56.055 sec. - 54.751 sec., zcmx64 -m8 -t1
    53,090,885 bytes, 77.583 sec. - xxx.xxx sec., mcm_generic -x -m11 (decompression takes too long)
    52,847,099 bytes, 432.018 sec. - 2.855 sec., rz -d 1023M
    52,546,609 bytes, 560.550 sec. - xxx.xxx sec., zpaq64 -m511 -t1
    47,100,239 bytes, 1707.788 sec. - xxxx.xxx sec., fp8_v3 -8
    Input: 399,998,976 bytes - block40.dat (Test 4 - roughly 30-70 mixture of Test 1 and Test 3 data)
    Output (compressed size, compression time - decompression time, compressor):
    184,854,213 bytes, 0.650 sec. - 0.263 sec., lz4 -1
    155,565,151 bytes, 3.138 sec. - 0.258 sec., lz4 -3
    152,356,335 bytes, 4.384 sec. - 0.255 sec., lz4 -5
    151,191,133 bytes, 5.903 sec. - 0.254 sec., lz4 -7
    150,812,547 bytes, 8.557 sec. - 0.254 sec., lz4 -9
    149,491,434 bytes, 1.217 sec. - 1.137 sec., brotli_gc82 -q 0 --large_window=30
    139,468,782 bytes, 4.955 sec. - 2.129 sec., gzip -1
    138,482,759 bytes, 1.462 sec. - 1.078 sec., brotli_gc82 -q 1 --large_window=30
    136,580,784 bytes, 1.065 sec. - 0.464 sec., zstd -1 --ultra --single-thread
    132,017,337 bytes, 6.185 sec. - 2.031 sec., gzip -3
    129,446,737 bytes, 1.359 sec. - 0.484 sec., zstd -2 --ultra --single-thread
    124,383,697 bytes, 8.262 sec. - 2.005 sec., gzip -5
    124,138,139 bytes, 1.766 sec. - 0.492 sec., zstd -3 --ultra --single-thread
    122,906,961 bytes, 1.884 sec. - 0.518 sec., zstd -4 --ultra --single-thread
    122,531,497 bytes, 14.341 sec. - 1.975 sec., gzip -7
    121,946,034 bytes, 60.951 sec. - 1.964 sec., gzip -9
    121,728,460 bytes, 3.038 sec. - 0.924 sec., brotli_gc82 -q 2 --large_window=30
    120,421,936 bytes, 2.859 sec. - 0.522 sec., zstd -5 --ultra --single-thread
    120,133,711 bytes, 4.091 sec. - 1.097 sec., brotli_gc82 -q 3 --large_window=30
    119,142,375 bytes, 3.496 sec. - 0.520 sec., zstd -6 --ultra --single-thread
    117,361,115 bytes, 5.757 sec. - 1.034 sec., brotli_gc82 -q 4 --large_window=30
    114,276,021 bytes, 5.186 sec. - 0.489 sec., zstd -7 --ultra --single-thread
    113,460,257 bytes, 27.352 sec. - 3.383 sec., 7z -mx1 -tbzip2 -mmt1
    113,254,361 bytes, 6.420 sec. - 0.478 sec., zstd -8 --ultra --single-thread
    112,657,116 bytes, 7.839 sec. - 0.477 sec., zstd -9 --ultra --single-thread
    112,566,886 bytes, 29.905 sec. - 4.169 sec., 7z -mx3 -tbzip2 -mmt1
    111,819,082 bytes, 31.328 sec. - 4.435 sec., 7z -mx5 -tbzip2 -mmt1
    111,727,780 bytes, 83.494 sec. - 4.366 sec., 7z -mx7 -tbzip2 -mmt1
    111,076,866 bytes, 9.564 sec. - 0.479 sec., zstd -10 --ultra --single-thread
    110,377,975 bytes, 12.005 sec. - 0.481 sec., zstd -11 --ultra --single-thread
    110,174,286 bytes, 291.329 sec. - 3.867 sec., 7z -mx9 -tbzip2 -mmt1
    110,096,027 bytes, 5.556 sec. - 0.933 sec., lzturbo -32 -b1024 -p1
    109,989,855 bytes, 16.457 sec. - 0.479 sec., zstd -12 --ultra --single-thread
    109,559,003 bytes, 22.140 sec. - 0.473 sec., zstd -13 --ultra --single-thread
    108,417,157 bytes, 26.884 sec. - 0.479 sec., zstd -14 --ultra --single-thread
    107,712,708 bytes, 35.516 sec. - 0.483 sec., zstd -15 --ultra --single-thread
    105,665,543 bytes, 10.864 sec. - 1.093 sec., brotli_gc82 -q 5 --large_window=30
    104,533,189 bytes, 153.726 sec. - 125.871 sec., nz -cc -t1
    104,259,000 bytes, 14.040 sec. - 1.074 sec., brotli_gc82 -q 6 --large_window=30
    103,815,542 bytes, 45.487 sec. - 0.486 sec., zstd -16 --ultra --single-thread
    102,667,031 bytes, 22.259 sec. - 1.059 sec., brotli_gc82 -q 7 --large_window=30
    101,571,387 bytes, 11.471 sec. - 2.680 sec., 7z -mx1 -t7z -mmt1
    101,139,011 bytes, 38.302 sec. - 1.077 sec., brotli_gc82 -q 8 --large_window=30
    100,944,178 bytes, 60.172 sec. - 0.488 sec., zstd -17 --ultra --single-thread
    99,411,048 bytes, 82.571 sec. - 1.058 sec., brotli_gc82 -q 9 --large_window=30
    98,638,685 bytes, 78.266 sec. - 0.511 sec., zstd -18 --ultra --single-thread
    98,385,901 bytes, 18.605 sec. - 2.450 sec., 7z -mx3 -t7z -mmt1
    97,641,046 bytes, 93.578 sec. - 0.521 sec., zstd -19 --ultra --single-thread
    93,821,193 bytes, 109.980 sec. - 0.560 sec., zstd -20 --ultra --single-thread
    93,575,920 bytes, 40.051 sec. - 41.667 sec., ppmd -m256 -o8
    91,460,782 bytes, 124.530 sec. - 0.568 sec., zstd -21 --ultra --single-thread
    89,314,971 bytes, 140.094 sec. - 0.581 sec., zstd -22 --ultra --single-thread
    85,534,704 bytes, 104.134 sec. - 2.217 sec., 7z -mx5 -t7z -mmt1
    84,246,538 bytes, 5.558 sec. - 1.210 sec., rar -m1 -mt1
    83,980,924 bytes, 477.131 sec. - 1.223 sec., brotli_gc82 -q 10 --large_window=30
    82,928,976 bytes, 23.939 sec. - 7.xxx sec., bwtturbo -59 -t0 (decompression crashed)
    82,861,609 bytes, 125.527 sec. - 2.203 sec., 7z -mx7 -t7z -mmt1
    80,927,550 bytes, 137.540 sec. - 2.209 sec., 7z -mx9 -t7z -mmt1
    80,635,084 bytes, 32.600 sec. - 37.356 sec., bcm -9
    80,511,030 bytes, 222.754 sec. - 6.201 sec., glza -t1
    80,249,048 bytes, 25.195 sec. - 20.394 sec., bsc -m0 -b1024 -e2 -T
    80,118,058 bytes, 372.074 sec. - 355.784 sec., bbb cm1000
    79,968,657 bytes, 734.674 sec. - 1.147 sec., brotli_gc82 -q 11 --large_window=30
    70,971,803 bytes, 68.454 sec. - 67.543 sec., zcmx64 -m8 -t1
    70,048,687 bytes, 8.352 sec. - 1.167 sec., rar -m2 -mt1
    69,992,690 bytes, 510.133 sec. - 3.691 sec., rz -d 1023M
    67,674,309 bytes, 9.390 sec. - 1.136 sec., rar -m3 -mt1
    67,362,088 bytes, 13.057 sec. - 1.127 sec., rar -m4 -mt1
    67,243,590 bytes, 14.647 sec. - 1.122 sec., rar -m5 -mt1
    65,152,519 bytes, 582.133 sec. - xxx.xxx sec., zpaq64 -m511 -t1
    64,214,420 bytes, 85.848 sec. - xxx.xxx sec., mcm_generic -x -m11 (decompression takes too long)
    61,678,970 bytes, 1787.655 sec. - xxxx.xxx sec., fp8_v3 -8
    34 replies | 1836 view(s)
  • SolidComp
    11th June 2020, 02:52
    Hi all – This report by LucidChart is remarkable. They tested brotli, gzip, Zstd, and other compressors, as well as numerous binary serialization formats, and evaluated them based on dollars saved in storage and compute costs. They use AWS Lambdas and S3 storage. (Lambdas are pay-as-you-go functions, as opposed to traditional server instances.) The most compact binary format, uncompressed, was Amazon's new Ion format. (They didn't test FlatBuffers, Protocol Buffers, or Cap'n Proto, but they tested CBOR, Smile, BSON, etc.) They tested every format in combination with every level of every compressor. The winner was JSON + Brotli 11. This surprised me. It turns out that the compression codec is more important than the data serialization format. Even though JSON is a bloaty text format, Brotli 11 sort of makes up for it. I'm not sure why Ion + Brotli 11 didn't win, at least by a little bit. Their use case was more focused on long-term storage. When people discover that compression codecs are more important than the underlying data formats, they forget that the uncompressed payload is still going to take up RAM, so uncompressed size matters too; but if all you're worried about is long-term compressed storage, then that matters less.
    27 replies | 912 view(s)
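    A quick way to probe the codec-vs-format point above is to compare brotli quality levels directly. A minimal sketch, assuming the third-party Python "brotli" bindings and a made-up JSON payload:

      import json
      import brotli  # assumed: pip install brotli

      # Made-up payload standing in for a serialized message batch.
      payload = json.dumps([{"id": i, "name": "user%d" % i} for i in range(1000)]).encode()

      fast = brotli.compress(payload, quality=1)   # cheap, weaker
      best = brotli.compress(payload, quality=11)  # slow, strongest; the report's winner
      print(len(payload), len(fast), len(best))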
  • suryakandau@yahoo.co.id
    3rd July 2020, 04:40
    This is based on fp8v6 with a small improvement.
    26 replies | 767 view(s)
  • SolidComp
    14th June 2020, 12:16
    Hi all – Are there any good write-ups on small data compression, on the theoretical challenges and optimal approaches? I wonder what the best approaches are to small message compression. I mean sizes of about 300 bytes to 3 KB. Does small message compression present any interesting problems or opportunities? I've been looking at data serialization formats like MessagePack, Protocol Buffers, FlatBuffers, JSON, CBOR, Amazon Ion, and so forth. Most are binary, and some are compact, but none are designed specifically for compression. There are two senses in which I mean this: They weren't co-developed with a compression codec tailored for them. They weren't developed with preprocessing in mind – preprocessing with respect to subsequent compression by an existing codec like gzip, brotli, Zstd, LZMA, etc. By contrast, I think WebAssembly (Wasm) was developed with compression in mind, with maybe two tiers – tailored compression followed by general compression – but I don't know much about it. Do you think there should be a lot of headroom still for compressing small binary messages with tailored codecs? I was very impressed with some of the XML-specific codecs like XMill and XGrind (which is homomorphic). Which approaches do you think would work well on already-compact binary messages? What I'm getting at is whether there's a theoretical framework that pops out here, an approach to small payload compression. Something like JSON compresses well because there are lots of chunky and wasteful repeated strings. Compact binary formats are more challenging, but I don't know if things like CM, RANS, FSE, etc. have any particularly interesting properties for small payloads. Thanks. Links: Amazon Ion: http://amzn.github.io/ion-docs/ Apache Arrow: https://arrow.apache.org/ CBOR: https://cbor.io/ FlatBuffers: https://google.github.io/flatbuffers/ JSON: https://www.json.org MessagePack: https://msgpack.org Protocol Buffers: https://developers.google.com/protocol-buffers
    19 replies | 1072 view(s)
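    One concrete issue behind the question above: generic codecs spend much of their gain on ~300-byte payloads unless both sides share a preset dictionary. A minimal sketch with the standard-library zlib (the message and dictionary contents are made up for illustration):

      import json
      import zlib

      msg = json.dumps({"user": "alice", "event": "login", "ts": 1594000000}).encode()
      plain = zlib.compress(msg, 9)

      # Preset dictionary of byte strings expected to recur across small messages;
      # the decompressor must be constructed with the same zdict.
      zdict = b'{"user": "", "event": "login", "logout", "ts": 1594000000}'
      c = zlib.compressobj(9, zlib.DEFLATED, 15, 9, zlib.Z_DEFAULT_STRATEGY, zdict)
      with_dict = c.compress(msg) + c.flush()

      d = zlib.decompressobj(zdict=zdict)
      assert d.decompress(with_dict) == msg
      print(len(msg), len(plain), len(with_dict))  # the zdict version is much smaller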
  • SolidComp
    17th June 2020, 19:46
    Does anyone know how AtomBeam works? I can't tell what they're talking about: Is this just a dictionary? I found them when I was searching on compression for small data.
    17 replies | 954 view(s)
  • Trench
    28th June 2020, 06:20
    An intro for non-programmers to try basic compression?
    20 replies | 484 view(s)
  • SolidComp
    9th June 2020, 13:40
    Hi all – Are there any codecs that use floating point? When I looked at various codecs' repositories a while back, I didn't find any floats or doubles. What are some codecs that use floating point? How do arithmetic coders handle their values internally? Thanks.
    14 replies | 742 view(s)
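    On the second question above: practical arithmetic/range coders avoid floats entirely and keep the interval in integer fixed point, so encoder and decoder round identically on every platform. A simplified textbook-style sketch of the encoder side (not any particular codec's implementation; the decoder mirrors the same integer arithmetic):

      HALF, QUARTER = 1 << 31, 1 << 30
      TOP = (1 << 32) - 1

      def encode(bits, p0_num, p0_den):
          """Encode bits with a static probability p0_num/p0_den of a 0 bit."""
          low, high, pending, out = 0, TOP, 0, []

          def emit(b):
              nonlocal pending
              out.append(b)
              out.extend([1 - b] * pending)  # release deferred opposite bits
              pending = 0

          for bit in bits:
              span = high - low + 1
              split = low + span * p0_num // p0_den - 1  # last code value for a 0
              if bit == 0:
                  high = split
              else:
                  low = split + 1
              while True:  # renormalize with shifts and adds only
                  if high < HALF:
                      emit(0)
                  elif low >= HALF:
                      emit(1)
                      low, high = low - HALF, high - HALF
                  elif low >= QUARTER and high < 3 * QUARTER:
                      pending += 1  # interval straddles the middle: defer the bit
                      low, high = low - QUARTER, high - QUARTER
                  else:
                      break
                  low, high = low * 2, high * 2 + 1
          pending += 1
          emit(0 if low < QUARTER else 1)  # terminate the interval
          return out

      print(encode([0, 0, 1, 0, 0, 0, 0, 0], 7, 8))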
  • SolidComp
    23rd June 2020, 00:27
    Ahoy. I read that rANS has a processing overhead comparable to Huffman. Is this accurate? Is there an implementation as light as something like SLZ is with gzip? This would be less than 1 MB of RAM and maybe 1% or something of CPU running at line rate on a 10 GbE network connection. Does Zstd have a mode that is super light like this? If this doesn't exist, could it exist, in theory? Zstd apparently goes much faster with a dictionary, and I wonder if it would be possible to build some sort of super fast and super lightweight ANS implementation that uses a dictionary tailored for small data payloads. That would be sweet. Benchmarks usually don't report the CPU and memory overhead, which is frustrating. I also wonder if GLZA can run super light.
    12 replies | 759 view(s)
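    On the dictionary idea above: zstd exposes exactly this. A minimal sketch, assuming the third-party "zstandard" Python bindings and made-up sample messages:

      import zstandard as zstd  # assumed: pip install zstandard

      # Train a small shared dictionary on representative tiny payloads.
      samples = [b'{"user":"u%d","event":"login"}' % i for i in range(1000)]
      dict_data = zstd.train_dictionary(4096, samples)  # 4 KB dictionary

      cctx = zstd.ZstdCompressor(level=3, dict_data=dict_data)
      dctx = zstd.ZstdDecompressor(dict_data=dict_data)

      msg = b'{"user":"u42","event":"login"}'
      frame = cctx.compress(msg)
      assert dctx.decompress(frame) == msg
      print(len(msg), len(frame))  # dictionaries pay off most on tiny inputs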
  • SolidComp
    2nd July 2020, 15:29
    Hi all – I have some questions about the different forms of Huffman coding, and where they're used, and I figured many of you would be able to fill in the blanks. Thanks for your help. Static Huffman: Does this mean 1) a tree generated from a single pass over all the data, or 2) some sort of predefined table independent of any given data, like defined in a spec? I'm seeing different accounts from different sources. For example, the HPACK header compression spec for HTTP/2 has a predefined static Huffman table, with codes specified for each ASCII character (starting with five-bit codes). Conversely, I thought static Huffman in deflate / gzip was based on a single pass over the data. If deflate or gzip have predefined static Huffman tables (for English text?), I've never seen them. Dynamic/Adaptive Huffman: What's the definition? How dynamic are we talking about? It's used in deflate and gzip right? Is it dynamic per block? (Strangely, the Wikipedia article says that it's rarely used, but I can't think of a codec more popular than deflate...) Canonical Huffman: Where is this used? By the way, do you think it would be feasible to implement adaptive arithmetic coding with less CPU and memory overhead than deflate? The Wikipedia article also said this about adaptive Huffman: "...the cost of updating the tree makes it slower than optimized adaptive arithmetic coding, which is more flexible and has better compression." Do you agree that adaptive arithmetic coding should be faster and with better ratios? What about the anticipated CPU and memory overhead? Thanks.
    15 replies | 432 view(s)
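    On the canonical Huffman question above: canonical codes are what deflate actually transmits - only the code lengths go into the stream, and both sides rebuild identical codes by assigning them in sorted order. A minimal sketch:

      def canonical_codes(lengths):
          """lengths: dict symbol -> code length in bits; returns symbol -> bit string."""
          code, prev_len, table = 0, 0, {}
          for sym, ln in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
              code <<= ln - prev_len   # consecutive codes, shifted when length grows
              table[sym] = format(code, '0%db' % ln)
              code += 1
              prev_len = ln
          return table

      print(canonical_codes({'a': 2, 'b': 2, 'c': 3, 'd': 3, 'e': 2}))
      # {'a': '00', 'b': '01', 'e': '10', 'c': '110', 'd': '111'}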
  • SolidComp
    12th June 2020, 17:05
    Hi all – I want to deploy a programming language comparison benchmark site, something much better and more valid than the Benchmarks Game site. It's mostly going to be small programs of course, microbenchmarks. I wonder if there are compression codecs that can be compared across programming languages. Do you know of any codecs that are implemented in more than one PL? The only one I know is deflate/gzip. Most implementations are in C, like zlib and libdeflate, but there is at least one Go implementation: Klaus Post's compress library. (I think many of his optimizations ended up in the Go standard library.) Is Pavlov's gzip written in C++ or C? I mean the one included in the 7-Zip utility. What other codecs have multiple implementations in different programming languages? Is it only gzip? Thanks.
    13 replies | 494 view(s)
  • Dresdenboy
    17th June 2020, 16:07
    This is about the encoding of the actual codeword range of this family of compressors. So far the number of used bits just gets increased after reaching the limit of the current codeword size (e.g. 9 -> 10 bits). But the new range is only 50% used right after increasing by 1 bit, with slow filling of the range. Is some existing algorithm addressing this? AC, RC, ANS come to my mind here. Anything else?
    10 replies | 627 view(s)
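    One existing answer to that half-empty range (besides switching to AC/RC/ANS) is "phase-in" codes: with n valid codewords and 2^k <= n < 2^(k+1), the first 2^(k+1)-n values get k bits and the rest get k+1 bits, instead of giving every code k+1 bits. A minimal sketch:

      def phase_in(v, n):
          """Encode v in [0, n) using k or k+1 bits; n need not be a power of 2."""
          k = n.bit_length() - 1   # floor(log2 n)
          u = (1 << (k + 1)) - n   # how many values fit in just k bits
          if v < u:
              return format(v, '0%db' % k)
          return format(v + u, '0%db' % (k + 1))
      # The decoder reads k bits and, only if the value is >= u, one more bit.

      # n = 600 codes: values 0..423 take 9 bits, 424..599 take 10 bits.
      print(phase_in(10, 600), phase_in(500, 600))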
  • Jarek
    7th July 2020, 05:48
    I have just updated the context dependence for upscaling paper (https://arxiv.org/abs/2004.03391: large improvement opportunities from FUIF/JXL no predictor) with the RGB case, and the standard YCrCb transform has turned out quite terrible (also PCA):
    - orthogonal transforms individually optimized (~L1-PCA) for each image gave on average ~6.3% reduction from YCrCb,
    - the single orthogonal transform below, optimized for all images, gave a mean ~4.6% reduction:
    0.515424, 0.628419, 0.582604
    -0.806125, 0.124939, 0.578406
    0.290691, -0.767776, 0.570980
    The transform should align values along axes to approximate them with three independent Laplace distributions (observed dependencies can be included in the considered width prediction), but YCrCb is far from that (at least for upscaling). How exactly are color transforms chosen/optimized? I have optimized for lossless, while YCrCb is said to be perceptual, optimized for lossy - any chance to formalize the problem here? (That would e.g. allow optimizing it separately for various region types.) YCrCb is not orthogonal – is there an advantage in considering non-orthogonal color transforms? P.S. I wasn't able to find details about XYB from JPEG XL (?)
    7 replies | 285 view(s)
  • AlexBa
    30th June 2020, 22:31
    I am trying to use TurboPFor for compressing huge data files for a school project. However, I have run into some very basic issues that I can't seem to resolve with just the readme. Any help is appreciated: 1. What is the difference between the ./icapp and ./icbench commands for compression? 2. After downloading the git repository and making, running $ ./icbench yields the error 'No such file or directory'. Also, trying to specifically just $ make icbench results in errors on a completely new Linux system. What could I be doing wrong? 3. Other than the readme, what are good resources for learning how to use the software? Specifically, I want to compress a huge file of floating point numbers; do you have any guidance for how to do this (with TurboPFor or any other software that seems better)? Thank you for any help.
    8 replies | 289 view(s)
  • Sportman
    6th July 2020, 20:54
    H.266/Versatile Video Coding (VVC): "This new standard offers improved compression, which reduces data requirements by around 50% of the bit rate relative to the previous standard H.265/High Efficiency Video Coding (HEVC)" https://newsletter.fraunhofer.de/-viewonline2/17386/465/11/14SHcBTt/V44RELLZBp/1
    4 replies | 421 view(s)
  • Ms1
    17th June 2020, 20:21
    Hi, Finally, a new contest for lossless data compression has arrived -- the Global Data Compression Competition. https://globalcompetition.compression.ru/ This thread is only for notices related to the competition. There is another thread for discussions.
    3 replies | 650 view(s)
  • CompressMaster
    2nd July 2020, 22:01
    Hello, I am looking for the most durable (by durability, I mean material + button durability, not fall resistance or waterproofing) smartphone. Thing is, I actually own a Lenovo A2010 and I am unable to turn it off due to a broken button (everyday usage), so I'd have to make a shortcut via the volume buttons. But there's another problem - I am unable to get into open windows, and the back button does not respond either. Thus copying files would be kinda hard... So I decided to buy a new one - most durable, full HD video, good camera. But I am not entirely sure which one to pick... Thanks a lot. CompressMaster
    6 replies | 141 view(s)
  • SolidComp
    14th June 2020, 02:01
    Hi all – Do the vector/SIMD features of Intel/AMD CPUs let you work on 8-bit and 16-bit integers, and 16-bit floats? I must be looking in the wrong places because the answer isn't popping out. For example, with AVX2, can you pack the 256-bit registers with 8-bit integers? Thanks.
    4 replies | 222 view(s)
  • SolidComp
    13th June 2020, 19:03
    Hi all – Let's say we want to encode/compress URLs. Something like this:
    1 bit: HTTP or HTTPS
    1 bit: www or not
    1 bit: .com or not
    Then a slightly beefier scheme for encoding common TLDs other than .com, as well as common file extensions like .html, .png, etc. Something efficient (less than a byte) for forward slashes, question marks, and equals signs. Then whatever codec you want for the unique text of the URL. Would you expect good context modeling to beat a scheme like this? I mean to beat the dictionary pieces. For example, are those first three bits wasteful compared to what good context modeling could do?
    5 replies | 220 view(s)
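    A minimal sketch of the header proposed above (field choices are illustrative, not a spec): pack the three yes/no choices into flag bits and hand the remaining text to a general codec:

      from urllib.parse import urlsplit

      def pack_url(url):
          parts = urlsplit(url)
          host = parts.netloc
          https = parts.scheme == 'https'
          www = host.startswith('www.')
          if www:
              host = host[4:]
          com = host.endswith('.com')
          if com:
              host = host[:-4]
          flags = (https << 2) | (www << 1) | int(com)  # 3 header bits
          rest = host + parts.path + (('?' + parts.query) if parts.query else '')
          return flags, rest  # feed "rest" to a general-purpose codec

      print(pack_url('https://www.example.com/a?b=1'))  # (7, 'example/a?b=1')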
  • SolidComp
    16th June 2020, 00:48
    Hi all – Has anyone worked on homomorphic compression codecs? Is this a vibrant area of research? I first learned of the concept when I stumbled on XGrind, an XML compressor that lets you access the data without decompressing it – that's what I mean by homomorphic. There's a recent paper by Jang, Na, and Yoon about homomorphic parameter compression for deep learning training data. I've only found a handful of papers, so I guess that answers my second question... There's another paper about using RLE with encrypted data. I ultimately want a data serialization format that is more efficient than Protocol Buffers, FlatBuffers, and MessagePack. Ideally it would be homomorphic and/or extremely fast to unpack. From a programmer's perspective I wonder if super fast unpacking is as good, or nearly as good, as technically homomorphic. If we had something for small data, with SLZ-like overhead (almost zero), that compressed better than gzip 9, well that would be awesome. It strikes me that static Huffman tables give you something close to homomorphic, since you know what all the symbols represent in advance. I guess it's not technically homomorphic since logically there's a decoding step, but realistically any act of parsing would seem to rely on exogenous data or awareness, so I'm not clear on the definition and boundary here.
    3 replies | 261 view(s)
  • encode
    11th June 2020, 22:11
    encode started a thread enwik10 in Download Area
    A true "enwik10" is here (first 10000000000 bytes of English Wikipedia Dump): https://yadi.sk/d/2wVOWD7u3r1NRA :-)
    1 replies | 280 view(s)
  • SolidComp
    23rd June 2020, 00:41
    Charles Bloom posted about Oodle's new texture compression last week. It looks interesting. Did someone say that Oodle compression was used in the upcoming Sony PS5? I thought I saw something about its disk IO being crazy fast, and I wonder if the compression was part of the reason why. A big part of the Oodle Texture story seems to be Rate Distortion Optimization (RDO): This randomly reminds me that I don't know why we don't use GPU texture formats as our mainstream image formats, since they compress a lot better. (By the way, JPEG-XL looks a lot like Bloom's description of the ideal JPEG successor from a couple of years ago.)
    1 replies | 237 view(s)
  • bugmeno
    11th June 2020, 14:51
    What if we use hashcat to brute-force multiple fast hashes of fragments of a file at the same time? :_think: There would be a need for fast hashes, the file size, a few first and last bytes, and maybe the entropy. Has anyone tried that?
    1 replies | 223 view(s)
  • anasalzuvix
    7th July 2020, 12:53
    Hey there, please suggest me a good dictionary-based compression program. Thank you.
    1 replies | 168 view(s)
  • well
    18th June 2020, 11:53
    The idea is:
    1. treat the input file as a bit sequence of width L.
    2. shift M bits to the right and search for inclusions of the L-M subsequence in L.
    3. (L-M)*N = universal compression profit for N inclusions of the L-M subsequence...
    4. create a unique substitution for L-M of width K. (L-M-K)*N = real profit...
    5. sort all L,M,N,K for the maximum of the function (L-M-K)*N...
    6. create a unique L-M header consisting of the L-M sequence and the K substitution.
    7. write out a new file with the L-M header and N substitutions with sequence K.
    8. check: is any existence of M,N,K still possible?
    9. goto 1 if yes, and end of algo if not!
    Steps 4,5,6,8,9 were done by hand... It is a dirty description but the idea is clear, I hope! Please tell me, was this method made before? I did a partial realization in code in summer 2005 for the Calgary corpus competition but did not complete it and never made a working prototype, just some middle approximation between working code and hello world... To my regret I was rude with Eugene Shelwien so he stopped conversations with me, but he has received the phda9 dictionary, so he has reverse engineered phda9 at least partially... I think, why is he not submitting an entry for the competition himself? He is doing a lot of creative jobs for the community for no profit; I think, why? What is his goal? What is his interest in this field? A lot of dark matter :D
    0 replies | 159 view(s)
  • Trench
    5th July 2020, 01:56
    As stated on Wikipedia: "GIF images are compressed using the Lempel–Ziv–Welch (LZW) lossless data compression technique to reduce the file size without degrading the visual quality. This compression technique was patented in 1985. Controversy over the licensing agreement between the software patent holder, Unisys, and CompuServe in 1994 spurred the development of the Portable Network Graphics (PNG) standard. By 2004 all the relevant patents had expired." "Welch filed a patent application for the LZW method in June 1983. The resulting patent, US 4558302, granted in December 1985" "when the patent was granted, Unisys entered into licensing agreements with over a hundred companies" https://en.wikipedia.org/wiki/GIF Did they make money? Could you if you did the same? If it's a small difference, no, but if it's a big one, maybe? "In January 2016, Telegram started re-encoding all GIFs to MPEG4 videos that "require up to 95% less disk space for the same image quality."" Well, MPEG4 doesn't seem as convenient or as fast. A bigger hard drive you can buy more easily than time. In other fields some people create things and big companies pay them millions to buy the patent, and the company never sells the thing they bought, since there is more money to be made in the thing they already have. Other programs say free for public use but corporations must pay. Some use the honor system, while others give a free demo for a few days or tries only, or one use daily. So many questions arise, a few of which are: Is there a legal form one must fill out to get the assurances? Even if you do make something, won't it hurt others who try to improve it? How long will the patent or payment last? Would it be better not to have a patent and keep it secret for personal use? If you do make money, will you use the money to make something else better, or just screw everyone else like all the rich who let money go to their heads and destroy nations for their ego? You can't make money giving money away, and you can't go forward if you spend your time making something good and can't afford to feed yourself. If Nikola Tesla had made money, or had plenty of money, more things would have been done. Let's say you are smart: you can't make money if you don't have money, no matter how smart you are. Unless you are smart enough to make money and connections, which make the world go round, then maybe you can influence the world in one thought, while others think they can influence it with just ideas. As Christ said, watch out for casting pearls before swine (Matthew 7:1-6). But a lot of things in the Bible were stated by the ancient Greeks, and everything people have now as well. But many push away who they came to the dance with to dance with another, and then wonder why it was not so good in the end. In Ancient Greece only the rich paid taxes, IF they wanted to. And many wanted to pay taxes to help their fellow man and to help their status and business as well. But the rich mainly went to war too. Now it's backwards: the poor die for the rich, the non-rich pay the most taxes, the rich want to give as little as possible and when they give it's for tax breaks, and the rich get tax money for their business. People seem to give praise to some of the rich despite being ripped off by them. Maybe the masses like to be ripped off, and the rich give them what they want, since people are conditioned to be that way. And the rich will get richer and only focus on money, and if you feel you are morally right and give things away for free, then don't blame them if they profit off you.
    You can only blame yourself. I am not saying don't give things away; I am saying don't hurt yourself and others. Play it smart: if you are smart enough to make something, find someone else smart enough to protect you. But don't hurt the masses who can benefit from your idea. Kind of like all those big tech companies getting rich by you using their product, since they make money from people going on the site. You can be part of the problem even if you disagree with them. Sure, they will make their money without you, but the mentality of the masses is the issue. Maybe that can never change, but maybe you can help use their tactics against them if you have the capability. Maybe resistance is futile and you give up everything. As the ancient Greeks said, leisure time is used to learn something new.
    0 replies | 58 view(s)
  • pacalovasjurijus
    24th June 2020, 18:32
    Please show me the code of this program.
    0 replies | 56 view(s)
  • Shelwien
    Today, 11:08
    Shelwien replied to a thread Fp8sk in Data Compression
    Please don't post these in the GDC discussion thread - it's for contest discussion, not for benchmarks. Also, fp8 doesn't seem fast enough even for the slowest category anyway; it would have to be more than twice as fast to fit.
    26 replies | 767 view(s)
  • suryakandau@yahoo.co.id
    Today, 10:13
    I wonder if using paq8sk32 -x15 -w -e1,English.dic on enwik9 could get 121.xxx.xxx?
    141 replies | 11162 view(s)
  • zyzzle
    Today, 05:12
    I'm very leery of this new codec. There comes a point where too much data is thrown away. "50% less bitrate for comparable quality" - I don't believe it. You're in the lossy domain. The extra 50% reduction in bitrate over h.265 -- and probably ~75% reduction over h.264 -- means extreme processor power is required, with very high heat and much greater energy cost to support the h.266 VVC codec. Even h.265 requires very high processor loads. I do not want to "throw away 50%" more bitrate. I'd rather increase the bitrate by 100% to achieve much higher quality (less lossiness).
    4 replies | 421 view(s)
  • suryakandau@yahoo.co.id
    Today, 04:03
    Using the fp8sk9 -8 option on mixed40.dat: Total 400000000 bytes compressed to 46246446 bytes. Using the fp8sk9 -8 option on block40.dat: Total 399998976 bytes compressed to 61042053 bytes.
    26 replies | 767 view(s)
  • suryakandau@yahoo.co.id
    Today, 02:56
    Using the -8 option on mixed40.dat: Total 400000000 bytes compressed to 46246446 bytes. Using the -8 option on block40.dat: Total 399998976 bytes compressed to 61042053 bytes. v6-v8 are failed products.
    26 replies | 767 view(s)
  • OverLord115
    Today, 02:26
    OverLord115 replied to a thread repack .psarc in Data Compression
    @pklat So I'm assuming that you could extract the .psarc file from Resistance: Fall of Man called game.psarc. Sorry I can't make any contribution to the post, but can I ask you how you extracted it? Because no matter how much I searched for programs and all, I can't extract it, nor even see anything in Notepad++ - errors everywhere.
    5 replies | 1061 view(s)
  • moisesmcardona
    Today, 01:58
    moisesmcardona replied to a thread Fp8sk in Data Compression
    Where are v6 to v8?
    26 replies | 767 view(s)
  • suryakandau@yahoo.co.id
    Today, 01:42
    Fp8sk9 using the -5 option on images40.dat (GDC competition public test set files): Total 399892087 bytes compressed to 163761538 bytes. Here are the source code and the binary files.
    26 replies | 767 view(s)
  • moisesmcardona
    Yesterday, 18:01
    These stronger codecs will require newer hardware. HEVC is supported now because the hardware has been ready for some time, but these new codecs will require the ASIC chips to be updated to include them, so I imagine they will not be mainstream until Intel, Nvidia, AMD, Qualcomm, MediaTek, etc. integrate them and the world has shifted to the new hardware. Surprisingly, AV1 decoding works really well compared to when HEVC started. The guys who work on dav1d have done an excellent job. Encoding, however, is still slow in its current state, until we start seeing broader hardware encoder support. Not to mention there is still a lot of tuning going on in the encoders.
    4 replies | 421 view(s)
  • Jon Sneyers
    Yesterday, 03:11
    YCbCr is just a relic of analog color TV, that used to do something like that, and somehow we kept doing it when going from analog to digital. It's based on the constraints of backwards compatibility with the analog black & white television hardware of the 1940s and 1950s (as well as allowing color TVs to correctly show existing black & white broadcasts, which meant that missing chroma channels had to imply grayscale); things like chroma subsampling are a relic of the limited available bandwidth for chroma since the broadcasting frequency bands were already assigned and they didn't really have much room for chroma. YCbCr is not suitable for lossless for the obvious reason that it is not reversible: converting 8-bit RGB to 8-bit YCbCr brings you from 16 million different colors to 4 million different colors. Basically two bits are lost. Roughly speaking, it does little more than convert 8-bit RGB to 7-bit R, 8-bit G, 7-bit B, in a clumsy way that doesn't allow you to restore G exactly. Of course the luma-chroma-chroma aspect of YCbCr does help for channel decorrelation, but still, it's mostly the bit depth reduction that helps compression. It's somewhat perceptual (R and B are "less important" than G), but only in a rather crude way. Reversible color transforms in integer arithmetic have to be defined carefully - multiplying with some floating point matrix is not going to work. YCoCg is an example of what you can do while staying reversible. You can do some variants of that, but that's about it. Getting some amount of channel decorrelation is the only thing that matters for lossless – perceptual considerations are irrelevant since lossless is lossless anyway. For lossy compression, things of course don't need to be reversible, and decorrelation is still the goal but also perceptual considerations apply: basically you want any compression artifacts (e.g. due to DCT coefficient quantization) to be maximally uniform perceptually – i.e. the color space itself should be maximally uniform perceptually (after color transform, the same distance in terms of color coordinates should result in the same perceptual distance in terms of similarity of the corresponding colors). YCbCr applied to sRGB is not very good at that: e.g. all the color banding artifacts you see in dark gradients, especially in video codecs, is caused by the lack of perceptual uniformity of that color space. XYB is afaik the first serious attempt to use an actual perceptually motivated color space based on recent research in an image codec. It leads to bigger errors if you naively measure errors in terms of RGB PSNR (or YCbCr SSIM for that matter), but less noticeable artifacts.
    7 replies | 285 view(s)
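    For reference, the reversible kind of transform mentioned above can be written as integer lifting. A minimal sketch of YCoCg-R, which round-trips exactly (the chroma channels need one extra bit of range, but nothing is lost):

      def rgb_to_ycocg_r(r, g, b):
          co = r - b
          t = b + (co >> 1)   # arithmetic shift = floor division by 2
          cg = g - t
          y = t + (cg >> 1)
          return y, co, cg

      def ycocg_r_to_rgb(y, co, cg):
          t = y - (cg >> 1)   # undo the lifting steps in reverse order
          g = cg + t
          b = t - (co >> 1)
          r = b + co
          return r, g, b

      assert all(ycocg_r_to_rgb(*rgb_to_ycocg_r(r, g, b)) == (r, g, b)
                 for r in range(0, 256, 5) for g in range(0, 256, 5) for b in range(0, 256, 5))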
  • Jyrki Alakuijala
    Yesterday, 01:33
    I'm guilty of not having published it in any form other than three open-sourced implementations (butteraugli, pik, and JPEG XL). Most color work is based on research from a hundred years ago: https://en.wikipedia.org/wiki/CIE_1931_color_space possibly with slight differences. It initially concerned roughly 10-degree discs of constant color and isosurfaces of color perception experience. An improvement was delivered later with 2-degree discs. However, pixels are about 100x smaller than that, and using CIE 1931 is like modeling mice after knowing elephants very well. During butteraugli development I looked into the anatomy of the eye and considered where the information is in photographic images.
    1. The anatomy of the eye in the fovea is only bichromatic: L- and M-receptors only; the S-receptors are big and only around the fovea. This makes sense since S-receptors scatter more. Anatomic information is more reliable than physiological information.
    2. Most of the color information stored in a photograph is in the high-frequency information. At photographic quality one can consider that more than 95% of the information is in the 0.02-degree data rather than the 2-degree data. The anatomic knowledge about the eye and our own psychovisual testing suggest that the eye is scale dependent, and this invalidates the use of CIE 1931 for modeling colors for image compression. We cannot use a large-scale color model to model the fine scale, and the fine scale is all that matters for modern image compression.
    3. Signal compression (gamma compression) happens in the photoreceptors (cones). It happens after (or at) the process where spectral sensitivity influences the conversion of light into electricity. To model this, we first need to model the L, M, and S spectral sensitivities in a linear (RGB) color space and then apply a non-linearity. Applying the gamma compression to dimensions other than the L, M, and S spectral sensitivities will lead to warped color spaces and has no mathematical possibility of getting color perception right.
    7 replies | 285 view(s)
  • Gotty
    7th July 2020, 23:02
    Gotty replied to a thread ARM vs x64 in The Off-Topic Lounge
    Youtube: Apple ARM Processor vs Intel x86 Performance and Power Efficiency - Is the MAC Doomed? Youtube: Why you SHOULD Buy an Intel-Based Mac! (ARM can WAIT)
    11 replies | 693 view(s)
  • Shelwien
    7th July 2020, 19:58
    You have to clarify the question. The best LZ78 is probably GLZA: http://mattmahoney.net/dc/text.html#1638 But it's also possible to use any compression algorithm with external dictionary preprocessing - https://github.com/inikep/XWRT - so NNCP is technically also "dictionary based compression".
    1 replies | 168 view(s)
  • Shelwien
    7th July 2020, 19:50
    https://en.wikipedia.org/wiki/YCbCr#Rationale But of course there's also an arbitrary tradeoff between precise perceptual colorspaces and practical ones, because better perceptual representations are nonlinear and have higher precision, so compression based on these would be too slow. For example, see https://en.wikipedia.org/wiki/ICC_profile
    7 replies | 285 view(s)
  • compgt
    7th July 2020, 19:01
    I don't know Lil Wayne or Samwell. But i recall it was Ellen DeGeneres who wanted Justin Bieber made. Miley Cyrus is good.
    29 replies | 1415 view(s)
  • DZgas
    7th July 2020, 18:25
    DZgas replied to a thread Fp8sk in Data Compression
    Of course! But if you compress it +0.5% stronger it will be the level of paq8pxd (default type).
    26 replies | 767 view(s)
  • suryakandau@yahoo.co.id
    7th July 2020, 18:11
    Maybe this is the trade-off between compression ratio and speed.
    26 replies | 767 view(s)
  • DZgas
    7th July 2020, 17:47
    DZgas replied to a thread Fp8sk in Data Compression
    Hm...Compresses 0.4% stronger and 65% slower (compared to fp8v6). Good!
    26 replies | 767 view(s)
  • DZgas
    7th July 2020, 17:17
    A non-free codec. But yes - "I wonder how it compares to AV1". HEVC has only just begun to be supported by almost everyone. The future belongs to AV1, which is free and supported by all the large companies... but no sooner than ~2022, even with large-company support. Whatever happens to VVC, the Internet is still AVC. So most likely this is a codec for everything except the Internet.
    4 replies | 421 view(s)
  • Stefan Atev
    7th July 2020, 16:10
    I think the chief criterion in a lot of color spaces is perceptual uniformity - so changes in either component are perceived as roughly similar changes in color difference; that way, when you minimize loss in the encoded color, you are indirectly using a perceptual heuristic. Something like CIELAB, or the computer vision/graphics darling of yore, HSV, is more perceptually uniform than RGB. For compression, it maybe makes more sense to use decompositions where one component is much more perceptually important (and has to be coded more precisely) and the other components are less perceptually important. For lossless, I think you wouldn't care about any of these considerations; you simply want to decorrelate the color channels (so one plane is expensive to encode but the others are cheap). I think for lossless, tailoring the color transform is probably good for compression - storing the transform coefficients is not that expensive, so even small improvements in the coding can help; how you'd optimize the transform for compressibility is a different story. It seems to me that if it were easy to do efficiently, everyone would be doing it.
    7 replies | 285 view(s)
  • JamesWasil
    7th July 2020, 14:35
    But why did you have to give us Justin Bieber, Lil Wayne, Samwell, and Miley Cyrus? Couldn't you have skipped these? What did we do to deserve this?
    29 replies | 1415 view(s)
  • lz77
    7th July 2020, 14:35
    lz77 replied to a thread enwik10 in Download Area
    WOW, I will increase input buffer up to 10000000000 bytes in my compressors. :_superman2:
    1 replies | 280 view(s)
  • Jarek
    7th July 2020, 14:25
    Kaw, indeed YCrCb is motivated by lossy; that is probably why it is so terrible for lossless. So the question is: how exactly are the lossy transforms chosen? What optimization criteria are used? algorithm, at least orthonormal transforms can be used for lossless too - from an accuracy/quantization perspective we get a rotated lattice, and usually we can uniquely translate from one lattice to the rotated one. However, putting already decoded channels into the context for predicting the remaining channels, we can get similar dependencies - I have just tested it, getting only ~0.1-0.2 bits/pixel worsening compared to optimally chosen rotations.
    7 replies | 285 view(s)
  • algorithm
    7th July 2020, 13:52
    For lossless you need simple transforms like G, R-G, B-G because they add less noise than more complicated ones.
    7 replies | 285 view(s)
  • Kaw
    7th July 2020, 13:18
    Maybe this reply will bite me in the butt, but in the past I have read material saying that YCrCb was better for lossy compression because the Y layer has most of the information, and Cb and Cr can be compressed more lossily because, to the eye, they are less important for the overall quality of the image. Also, Cb and Cr are often somewhat related to each other. In audio we see the same idea when stereo is not treated as 2 totally different streams, but as one (mono) stream and a second part indicating the delta between the 2 streams. Edit: sorry. You are much, much more advanced in your analysis of the problem already. Should have read it better.
    7 replies | 285 view(s)
  • maadjordan
    7th July 2020, 13:16
    maadjordan replied to a thread 7-zip plugins in Data Compression
    Many standalone converters already exist, but adding such a trick to the compressor would improve compression for special algorithms like mfilter. Base64 is standard and can be added directly, as such data is usually stored as blocks. I found these tools, https://github.com/hiddenillusion/NoMoreXOR and https://github.com/hellman/xortool, which can be used to analyze data, either as a file or a stream, before sending it to the compressor.
    20 replies | 3564 view(s)
  • compgt
    7th July 2020, 12:32
    Hollywood music and movies are mine. They're my work, my production. I chose the Hollywood actors and actresses; i made them. I am the major song composer of this world. I was the one naming business companies, products and technologies because i was very good in composing songs, they figured i might find the most appropriate and best sounding names. https://grtamayoblog.blogspot.com/2019/02/voice-search.html?m=1 Warner Bros Pictures and Warner Music are mine. Warner Music is supposed to be one giant music company enveloping or owning other music companies, me being prolific in composing songs since i was 2 years old in 1977. We started up the other music companies too. I made the Hollywood singers and bands, by composing for them and planning their music careers, making music albums for them for release in the 1980s, 90s and onwards. I made Madonna, Jennifer Lopez, Celine Dion, ABBA, Air Supply, Michael Jackson, Beatles, Queen, Scorpions, Nirvana, Spice Girls, Steps, Michael Learns to Rock, Westlife, Pussycat Dolls, No Doubt, M2M, Natalie Imbruglia (Torn), Barry Manilow, even Kenny Rogers' country hit songs, Robbie Williams, Rod Stewart, Mariah Carey, Geri Halliwell, Ricky Martin, Whitney Houston, Britney Spears, Cristina Aguilera, Lady Gaga, Pink, Taylor Swift, Miley Cyrus, Bee Gees, Elton John, Bon Jovi, Aerosmith, etc.
    29 replies | 1415 view(s)
  • anasalzuvix
    7th July 2020, 11:33
    Hi, any update?
    29 replies | 6559 view(s)
  • compgt
    7th July 2020, 11:03
    It's not a fairy tale. Our lives were at stake. It was real War. I made the Star Wars movies not only for your entertainment! But for my money too!!! Don't get intimidated when i say i made modern Russia. That's the truth. In fact, i learned some nukes weapons and nuclear power technologies from Russia, and France, and helped them on these technologies too.
    29 replies | 1415 view(s)
  • ivan2k2
    7th July 2020, 10:32
    You keep telling this fairy tale over and over, but nobody here believes it. And, as I said earlier, you'll never start this court case, because you don't have any proof of your claims, so why bother? I've got a picture of what you are. I'm done. P.S. Hope you'll restore your mind.
    29 replies | 1415 view(s)
  • compgt
    7th July 2020, 09:29
    There are many who practice Religion. It soothes the soul. If you meet a Warlord in actual War, when he overpowers you and your armies, and diminishes your numbers, and he professes to be a god, out of your fear for your life you will exclaim he is your god!
    29 replies | 1415 view(s)
  • compgt
    7th July 2020, 09:15
    ivan2k2, you sound young and inexperienced. A smart man will not mention about insanity, especially in public online forums. I am not used to that, somebody mentioning "insane" and "mental health". It saddens me. Too brave of you to even mention that. Corrupt, hardcore people (corrupt military and politicians) will set you up, will put you in a mental institution because you are simply in their way. They don't care about you, about your life. They have no ethics for that. They are paid big money for that. (I reckon they had been corrupting and partaking of my Hollywood billion$ up to now since the 1980s.) Let me tell you this. We were a military superpower. We were Starfleet. I presided and dictated over the United Nations, US Supreme Court, and International Criminal Court. I made the modern Nations to end the Cold War. I made the modern US and modern Russia. Politicians lined up to me to be chosen for the US presidency. I chose who would be US (and Philippine) presidents. I chose who would be Vatican's Popes. Why? One, they would be using our (the Tamayo clan's) wealth. They will be funded by our wealth. They have access to our wealth. But, are we holding our wealth? No. What kind of scheme is that?
    29 replies | 1415 view(s)
  • suryakandau@yahoo.co.id
    7th July 2020, 03:18
    Here is the source code.
    26 replies | 767 view(s)
  • suryakandau@yahoo.co.id
    7th July 2020, 02:45
    Fp8sk5 - improved compression ratio, better than fp8sk3. fp8sk5 using the -8 option on mixed40.dat (GDCC public test set files): Total 400000000 bytes compressed to 46612718 bytes. Using the -8 option on block40.dat (GDCC public test set file): Total 399998976 bytes compressed to 61239095 bytes.
    26 replies | 767 view(s)
  • suryakandau@yahoo.co.id
    7th July 2020, 01:35
    Thanx for your input. I will try it
    26 replies | 767 view(s)
  • moisesmcardona
    7th July 2020, 00:24
    moisesmcardona replied to a thread Fp8sk in Data Compression
    Thanks. May I ask why you don't use a Source Control repository? Something like GitHub or Gitlab can be really convenient. We do this with paq8px and @kaitz does it with paq8pxd and paq8pxv.
    26 replies | 767 view(s)
  • Gotty
    7th July 2020, 00:24
    Aha, I see. Yes, he will certainly need to find a balance between price vs. video quality + durability. He seems to be unsatisfied with the video quality of the Xiaomi 8T (it's not that bad, is it? #1 #2) ... and is thinking about a rugged design. To satisfy those needs it won't be cheap, clearly. >>I guess maybe a phone would cost $200-400. Yes, probably that's the range. The Xiaomi Redmi Note 8T is 169 USD here. It's priced extremely well. Has 4K video, a 4K screen, Gorilla Glass. It has 4 cameras at the back. That's why I suggested it. What's your budget, CompressMaster? Blackview phones aren't cheap. Are you sure you need a rugged design? A normal case usually does a good job. It protects the phone when accidentally dropped - especially the lenses and the screen need protection. When it has Gorilla Glass it's even more solid. Protects from scratches. I believe you are afraid of a screen break. So go for a case. A rugged design is "too much" in my view.
    6 replies | 141 view(s)
  • suryakandau@yahoo.co.id
    7th July 2020, 00:18
    Here is the source code.
    26 replies | 767 view(s)
  • ivan2k2
    6th July 2020, 22:13
    - you'll never win this court case
    - you'll never start this court case
    - you'll never see the scale of your insanity
    - you can't admit you can be wrong and insane
    - if you are trolling, your trolling is not productive
    29 replies | 1415 view(s)
  • Stefan Atev
    6th July 2020, 22:01
    This has all had me trying to scratch the compression bug again :) I am trying several ideas out, and will let you know if they pan out:
    1. No literals (construct a "dictionary" of 256 bytes so that a match (of length 1) is always guaranteed - the code to set that up is very short); that alone is not very promising (literals encoded as len-1 matches will definitely use more than the 9 bits LZSS does); I am trying an offset encoding scheme that's very simple but unfortunately hard to optimize for (if you must _always_ be able to encode a match, you may need too many bits for the match offset - trying to use multiple "offset contexts"). For this to work, the savings of 1 bit from each match must offset the cost of adding len-1 matches as a replacement for literals. It will also simplify the decompressor if everything is a match.
    2. Imprecise lengths - especially if you can reuse previous match offsets, it's OK if there are "gaps" in representable match lengths; an especially long match will just be encoded with multiple shorter matches; that only makes sense if you expect to have long matches and very few bits dedicated to offsets. A Golomb or Fibonacci length distribution seems too difficult for a tiny decompressor, but I think there are easier ways to stretch out your length budget.
    I am trying to see if there's a way to stick to byte-oriented IO, no variable-length codes, etc. (just more fun, I think; the best compression likely needs variable-length codes even if they add a little decompressor complexity). I guess I will shoot for a program that spits out the compressed data and a small decompressor (probably just C source to match the data). The inputs being small at least allows me not to worry about memory consumption, match finding performance, etc. It turns out (as we all knew) that trying to make optimal decisions when encoding choices can drastically affect future output is quite difficult.
    40 replies | 2701 view(s)
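    A toy of idea (1) above, as a minimal sketch: seed the window with all 256 byte values so a length-1 match is always available and literals disappear; brute-force greedy parsing, purely to show the token stream never needs a literal:

      def parse_no_literals(data):
          window = bytearray(range(256)) + bytearray(data)  # seeded "dictionary"
          pos, base, tokens = 0, 256, []
          while pos < len(data):
              best_len, best_off = 1, window.index(data[pos])  # guaranteed match
              limit = base + pos                               # match only the past
              for start in range(limit):
                  l = 0
                  while (pos + l < len(data) and start + l < limit
                         and window[start + l] == data[pos + l]):
                      l += 1
                  if l > best_len:
                      best_len, best_off = l, start
              tokens.append((limit - best_off, best_len))      # (offset back, length)
              pos += best_len
          return tokens

      print(parse_no_literals(b'abcabcabcd'))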
  • SolidComp
    6th July 2020, 21:22
    I wonder how it compares to AV1.
    4 replies | 421 view(s)
  • Sportman
    6th July 2020, 21:22
    I do not practice a religion; when you take out what's distorted in the main religions, there is some truth left. I believe there can only be one truth, with some paradoxes. There is nothing to learn when there is no bad; free will allows slowing down progress or even going backwards; the universe is very forgiving, with many second chances. For those who permanently have no intention to improve, not even the slightest intention, there is a sort of recycle program that sets somebody a big step back in evolution and temporarily puts them on hold for a long time, which gives a big time delay and a loss of what was built up in that cut period, which must all be redone.
    29 replies | 1415 view(s)
  • SolidComp
    6th July 2020, 19:19
    Yeah, because of the price. CompressMaster said he uses a Lenovo A2010, which is super cheap and only has 1 GB of RAM, and he said that he's looking for a phone that can record Full HD (1080p), so I figured he wanted something very low cost. I'm not sure what the prices are like in Slovakia, though I assume the top end prices are similar to the US – Apple and Samsung charge nearly the same price for their flagships in all countries, else there would be lots of arbitrage. $100 USD is a great price in the US, compared to $1,000+ for the latest flagships. To get the kind of quality you're implying, I guess maybe a phone would cost $200-400.
    6 replies | 141 view(s)