Activity Stream

  • e8c's Avatar
    Today, 17:19
    https://askubuntu.com/questions/1041349/imagemagick-command-line-convert-limit-values

    $ /bin/time -f '\nUser time (seconds): %U\nMemory (kbytes): %M' \
    > ./guess -1 PIA23623_hires.ppm PIA23623_hires.guess
    encode, 2 threads: 112 MPx/s
    User time (seconds): 35.60
    Memory (kbytes): 7743864
    $ ls -l
    total 12539741
    -rwxrwx--- 1 root vboxsf 1794025332 Jan 15 16:49 PIA23623_hires.guess
    -rwxrwx--- 1 root vboxsf 2293756771 Jan 15 15:56 PIA23623_hires.png
    -rwxrwx--- 1 root vboxsf 6115804597 Jan 15 15:20 PIA23623_hires.ppm
    -rwxrwx--- 1 root vboxsf 2425213852 Jan 14 02:08 PIA23623_hires.tif

    VM: 2 v-Cores, 11 GB RAM. Host: Intel NUC8i3, SATA SSD.
    The "35.60 seconds" is the sum over the 2 threads, i.e. about 18 s of user time each. (Said for those who are not familiar with profiling multi-threaded applications.) Respectively: 8 cores - 4.5 s, 16 cores - 2.25 s, 32 cores - 1.125 s. That is acceptable.
    63 replies | 4032 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Today, 12:03
    Where can I get the mill.jpg and DSCN0791.AVI files? Could you upload them here, please? Thank you.
    1026 replies | 360608 view(s)
  • Lithium Flower's Avatar
    Yesterday, 10:43
    @ Jyrki Alakuijala Thank you for your reply. I checked my compressed non-photographic set by eye and found tiny artifacts in different images. I think that with JPEG XL 0.2 at -d 1.0 (speed: kitten), if a compressed image has a max Butteraugli above about 1.5 or 1.6, it probably has errors or tiny artifacts. A little curious: is there a plan or patch to improve non-photographic fidelity in the next JPEG XL public release (JPEG XL 0.3)?
    31 replies | 2540 view(s)
  • kaitz's Avatar
    13th January 2021, 21:19
    kaitz replied to a thread Paq8pxd dict in Data Compression
    paq8pxd_v95 jpeg model:
    -more context in Map1 (20)
    -more inputs from main context
    -2 main mixer inputs + 1 apm
    -cleanup

                                                Size      Compressed  Time         Memory
    paq8pxd_v95 -s8 a10.jpg                     842468    618555      43.42 sec    1984 MB
    paq8px_v200 -8  a10.jpg                     842468    624597      26.51 sec    2602 MB
    paq8pxd_v95 -s8 mill.jpg                    7132151   4910289     350.38 sec   1984 MB
    paq8px_v200 -8  mill.jpg                    7132151   4952115     228.65 sec   2602 MB
    paq8pxd_v95 -s8 paq8px_v193_4_Corpuses.jpg  3340610   1367528     167.13 sec   1984 MB
    paq8px_v200 -8  paq8px_v193_4_Corpuses.jpg  3340610   1513850     105.90 sec   2602 MB
    paq8pxd_v95 -s8 DSCN0791.AVI                30018828  19858827    1336.94 sec  1984 MB
    paq8px_v200 -8  DSCN0791.AVI                30018828  20171981    992.85 sec   2602 MB

    So mill.jpg is 18571 bytes better in v95 vs v94. It's slower; I'm sure nobody cares. Some main-context changes have zero time penalty but improve the result by some KB. For a10.jpg the new Map1 context adds only about 5 sec.
    1026 replies | 360608 view(s)
  • kaitz's Avatar
    13th January 2021, 21:15
    kaitz replied to a thread paq8px in Data Compression
    MC (paq8px_v200 -s8 vs paq8pxd_v94 -s8):

    file          size      paq8px_v200  paq8pxd_v94
    A10.jpg       842468    624597       620980
    AcroRd32.exe  3870784   823707       831293
    english.dic   4067439   346422       347716
    FlashMX.pdf   4526946   1315382      1334877
    FP.LOG        20617071  215399       201621
    MSO97.DLL     3782416   1175358      1190012
    ohs.doc       4168192   454753       451278
    rafale.bmp    4149414   468156       468757
    vcfiu.hlp     4121418   372048       264060
    world95.txt   2988578   313915       311700
    Total                   6109737      6022294
    2276 replies | 604522 view(s)
  • Jyrki Alakuijala's Avatar
    13th January 2021, 11:30
    I suspect that VarDCT will be the most appropriate mode for this; we just need to fix the remaining issues. Palette mode and delta-palette mode can also be useful for a wide range of pixel-art images. They are not yet tuned for best performance either, but already show quite a lot of promise. My understanding is that, for photographic images, lossy modular mode provides a quality that is between libjpeg quality and VarDCT quality, but closer to libjpeg quality. I always used --distance for encoding, and VarDCT. We have a final version of the format now, so in that sense it is OK to start using it. For practical use it may be nice to wait until tooling support for JPEG XL catches up. JPEG XL committee members did a final quality review in November/December with many metrics and manual review of images where the metrics disagreed. The FDIS phase starts next week.
    31 replies | 2540 view(s)
  • Sebastianpsankar's Avatar
    13th January 2021, 03:30
    This was helpful and encouraging... Thanks...
    20 replies | 821 view(s)
  • Lithium Flower's Avatar
    12th January 2021, 17:31
    @ Jyrki Alakuijala Thank you for your reply. I have some questions about pixel art.

    I use pingo png lossless -s0 to identify which images can be losslessly converted to pal8; some pixel-art images can't be converted losslessly and need VarDCT mode or modular mode. In my pixel-art test (VarDCT mode -q 80 speed 8 vs. lossy modular mode -Q 80 speed 9), VarDCT mode doesn't work very well (tiny artifacts), while lossy modular mode works fine on pixel-art images. But it looks like lossy modular mode isn't recommended for use right now, so which mode is best practice?

    And about the lossy modular quality value (-Q luma_q): does this quality value roughly match libjpeg quality? I don't know whether comparing lossy modular Q80 speed 9 against VarDCT q80 speed 8 is a fair comparison.

    About pixel-art png pal8: I tested a pixel-art png pal8 (93 colors) in lossless modular mode -q 100 -s 9 -g 2 -E 3, but using png lossless before jxl lossless increases the file size.

    JPEG XL lossless:
    People1.png 19.3kb => People1.jxl 19.3kb
    People1.png 19.3kb => People1.png 16.4kb (pingo png lossless -s9) => People1.jxl 18.1kb
    WebP lossless:
    People1.png 19.3kb => People1.webp 16.5kb
    People1.png 19.3kb => People1.png 16.4kb (pingo png lossless -s9) => People1.webp 16.5kb

    For pixel-art png pal8 (colors near 256), JPEG XL lossless is best:
    rgb24 605k, force-converted pal8 168k
    JPEG XL lossless: 135k
    WebP lossless: 157k

    And a little curious: is recommending that my artist friend use JPEG XL 0.2 a good idea, or should I wait for the FDIS stage to finish?
    31 replies | 2540 view(s)
  • fcorbelli's Avatar
    12th January 2021, 14:24
    fcorbelli replied to a thread zpaq updates in Data Compression
    This is an example of a sequential scan... (...)

    540.739.857.890 379.656 time 16.536 /tank/condivisioni/
    540.739.857.890 379.656 time 17.588 /temporaneo/dedup/1/condivisioni/
    540.739.857.890 379.656 time 17.714 /temporaneo/dedup/2/tank/condivisioni/
    540.739.857.890 379.656 time 16.71 /temporaneo/dedup/3/tank/condivisioni/
    540.739.857.890 379.656 time 16.991 /temporaneo/dedup/4/condivisioni/
    540.739.857.890 379.656 time 93.043 /monta/nas1_condivisioni/
    540.739.857.890 379.656 time 67.312 /monta/nas2_condivisioni/
    540.739.840.075 379.656 time 362.129 /copia1/backup1/sincronizzata/condivisioni/
    ------------------------
    4.325.918.845.305 3.037.248 time 608.024 sec
    608.027 seconds (all OK)

    vs threaded...

    zpaqfranz v49.5-experimental journaling archiver, compiled Jan 11 2021
    Dir compare (8 dirs to be checked)
    Creating 8 scan threads
    12/01/2021 02:00:54 Scan dir || <</tank/condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</temporaneo/dedup/1/condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</temporaneo/dedup/2/tank/condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</temporaneo/dedup/3/tank/condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</temporaneo/dedup/4/condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</monta/nas1_condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</monta/nas2_condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</copia1/backup1/sincronizzata/condivisioni/>>
    Parallel scan ended in 330.402000

    About twice as fast (in this example).
    2653 replies | 1130175 view(s)
  • Dresdenboy's Avatar
    12th January 2021, 08:19
    You're welcome! Most of the files in this ZIP are compressed at a compression ratio of ~2 to 5. FONTES_*, MENU_*, and TELEPORT_* are less compressed, with the latter two containing a lot of 16-bit data. They might contain bitmaps.
    5 replies | 344 view(s)
  • EmilyWasAway's Avatar
    12th January 2021, 06:11
    After reviewing what you both have said, it makes sense that the samples I posted are not using compression at this layer of the format. I'm not certain, but the files extracted from the header of the DPC appear to reference data located further down in the DPC, while the headers themselves are not compressed in this version. Thank you for the help. :)
    5 replies | 344 view(s)
  • Shelwien's Avatar
    12th January 2021, 03:13
    It's not a compressed format (at least not in the first layer of structure), but just a structured format with length prefixes and mostly floats inside.

    seg000:00000000 dd 3Fh
    seg000:00000004 dd 4
    seg000:00000008 dq 3Bh
    seg000:00000010 dd 52F79F96h
    seg000:00000014 dd 0C939BCA1h
    seg000:00000018 dd 0D24B3F6Fh
    seg000:0000001C aVLinkLefthandM dd 55
    seg000:0000001C db 'V:LINK "LeftHand" "MUSCLE_OFF_1_SUSPFL4_LOD1" AxeZ LOD1'
    seg000:00000057 dd 0A6h
    seg000:0000005B dd 62h
    seg000:0000005F dq 44h
    seg000:00000067 dd 73DC6A13h
    seg000:0000006B dd 0A3F0FCD9h
    seg000:0000006F dd 4AB1A4C3h
    seg000:00000073 dd 681AF697h
    seg000:00000077 dd 0BE02FCF1h
    seg000:0000007B dd 0BE5A0EC8h
    seg000:0000007F dd 0BA5BF080h
    seg000:00000083 dd 3E801E69h
    seg000:00000087 dd 3F800000h
    seg000:0000008B dd 0
    seg000:0000008F dd 0
    seg000:00000093 dd 0BE02FCF1h
    seg000:00000097 dd 0
    seg000:0000009B dd 3F800000h
    seg000:0000009F dd 0
    seg000:000000A3 dd 0BE5A0EC8h
    seg000:000000A7 dd 0
    seg000:000000AB dd 0
    seg000:000000AF dd 3F800000h
    seg000:000000B3 dd 0BA5BF080h
    seg000:000000B7 dd 3E801E69h
    seg000:000000BB dd 3E801E69h
    seg000:000000BF dd 3E801E69h
    seg000:000000C3 dd 3EDDE882h
    seg000:000000C7 dd 0
    5 replies | 344 view(s)
  • EmilyWasAway's Avatar
    12th January 2021, 01:21
    I considered that it could be a custom format, but the similarities to the previous DPC formats, and sections of the file that look like this, led me to investigate the possibility of compression. Although if it is compressed, it's not by much.
    5 replies | 344 view(s)
  • Jyrki Alakuijala's Avatar
    12th January 2021, 01:03
    Thank you. This is very useful. Yes, it looks awful. I had an off-by-one in the smooth-area detection heuristic, and those areas were detected 4 pixels off. There will likely be an improvement on this in the next public release, as well as an overall reduction (5-10 %) of such artifacts from other heuristic improvements, with some more contrast preserved in the middle frequency band (where other formats often do pretty badly). If you find such images in the next version, please keep sending samples. Consider submitting them to http://gitlab.com/wg1/jpeg-xl as an issue.
    31 replies | 2540 view(s)
  • Lithium Flower's Avatar
    11th January 2021, 21:30
    I get an issue in VarDCT mode. Using JPEG XL 0.2 at -d 0.8 (speed: kitten) on some non-photographic (more natural/synthetic) images, everything is fine, but some blue and red areas have tiny artifacts (noise?). When using VarDCT mode on this type of image, do I need other JPEG XL flags (filters) to get a great result?
    31 replies | 2540 view(s)
  • Dresdenboy's Avatar
    11th January 2021, 20:24
    With those runs of zeroes and the compression ratio in the samples ZIP, I think those files aren't compressed at all, just some custom data format.
    5 replies | 344 view(s)
  • fcorbelli's Avatar
    11th January 2021, 19:00
    fcorbelli replied to a thread zpaq updates in Data Compression
    This is version 49.5. It should also compile on Linux (tested only on Debian), plus FreeBSD and Windows (gcc). I have added some functions that I think are useful.

    The first is the l (list) command. With ONE parameter (the .ZPAQ file) it now shows the archive's contents. With more than one parameter, it compares the contents of the ZPAQ with one or more folders, with a (block) check of SHA1s (the old -not =). It can be used as a quick check after add:

    zpaqfranz a z:\1.zpaq c:\pippo
    zpaqfranz l z:\1.zpaq c:\pippo

    Then I introduce the command c (compare) for directories, between a master and N slaves. With the switch -all it launches N+1 threads. The default verification is by file name and size only; applying the -crc32 switch also verifies the checksum.

    WHAT FOR? During the verification phase of backups it is normal to extract them onto several different media (devices), for example folders synchronized with rsync on a NAS, ZIP files, ZPAQ via NFS-mounted shares, smbfs, internal HDDs, etc. Comparing multiple copies can take a (very) long time.

    Suppose you have a /tank/condivisioni master (or source) directory (hundreds of GB, hundreds of thousands of files). Suppose you have some internal (HDD) and external (NAS) rsynced copies (/rsynced-copy-1, /rsynced-copy-2, /rsynced-copy-3...). Suppose you have an internal ZIP backup, an internal ZPAQ backup, an external NAS1 zip backup, an external NAS2 zpaq backup, and so on. Let's extract all of them (ZIPs and ZPAQs) into /temporaneo/1, /temporaneo/2, /temporaneo/3... You can do something like:

    diff -qr /temporaneo/condivisioni /temporaneo/1
    diff -qr /temporaneo/condivisioni /temporaneo/2
    diff -qr /temporaneo/condivisioni /temporaneo/3
    (...)
    diff -qr /temporaneo/condivisioni /rsynced-copy-1
    diff -qr /temporaneo/condivisioni /rsynced-copy-2
    diff -qr /temporaneo/condivisioni /rsynced-copy-3
    (...)

    But this can take a lot of time (many hours) even on fast machines.

    The command c compares a master folder (the first indicated) to N slave folders (all the others) in two particular operating modes. By default it just checks the correspondence of files and their size (extremely useful, for example, for rsync copies with different charsets: unix vs linux, mac vs linux, unix vs ntfs). With the -crc32 switch a check of this code is also made (with HW CPU support, if available).

    The interesting aspect is the -all switch: N+1 threads are created (one for each specified folder) and executed in parallel, both for scanning and for calculating the CRC. On modern servers (e.g. Xeon with 8, 10 or more cores) with different media (internal) and multiple connections (NICs) to NASes, you can drastically reduce times compared to multiple sequential diff -qr runs. It clearly makes no sense for single magnetic disks. In the given example,

    zpaqfranz c /tank/condivisioni /temporaneo/1 /temporaneo/2 /temporaneo/3 /rsynced-copy-1 /rsynced-copy-2 /rsynced-copy-3 -all

    will run 7 threads, each taking care of one directory. The hypothesis is that the six copies are each on a different device, and the server has plenty of cores and NICs. That is normal in data-storage and virtualization environments (at least in mine).

    Win32 and Win64 builds:
    http://www.francocorbelli.it/zpaqfranz.exe
    http://www.francocorbelli.it/zpaqfranz32.exe
    2653 replies | 1130175 view(s)
  • CompressMaster's Avatar
    11th January 2021, 17:11
    Thanks. But I am not familiar with Java or Android programming, so I don't know how to get it to work. A detailed step-by-step manual would be very beneficial. :) BTW, it might be better to rename this thread to something like "android camera - use different zooming algorithm".
    2 replies | 104 view(s)
  • EmilyWasAway's Avatar
    11th January 2021, 12:46
    I'm reverse engineering a version of Asobo Studio's DPC archive format used in the PC release of the game FUEL (2009). I am able to unwrap the first "layer" of the format by breaking the archive down into the files described in the DPC header, using a modified version of this MexScript. However, these extracted files appear to be compressed with a custom LZ variant. Some games released before FUEL (CT Special Forces: Fire for Effect, Ratatouille, and Wall-E) each used a slightly different LZ variant than the previous release, so I am expecting FUEL to use something similar. @Shelwien has provided a series of unLZ_rhys scripts in previous posts (linked at the bottom), but none of them seem to properly decompress the files I extracted. I have attached a selection of extracted files that appear to be compressed and contain a small amount of text near the beginning. They all follow a similar pattern to the one in this image, which closely resembles the compressed files from the previous posts. In theory this should only require a small modification to the unLZ_rhys tool, but unfortunately I cannot seem to figure out the header layout/mask for this new version of the format. Any help with how to modify the tool, or advice in general, would be greatly appreciated. If you need more samples or the original DPC files I can provide them.
    https://encode.su/threads/3147-Reverse-Engineering-Custom-LZ-Variant
    https://encode.su/threads/3526-Asobo-s-Ratatouille-DPC-Data-Compression
    5 replies | 344 view(s)
  • suryakandau@yahoo.co.id's Avatar
    11th January 2021, 08:55
    There is a trade-off between compression ratio and speed... like cmix... :)
    214 replies | 19568 view(s)
  • kaitz's Avatar
    11th January 2021, 04:22
    kaitz replied to a thread Paq8sk in Data Compression
    I think you can get a10 below 618xxx in 100sec or less. :D 619xxx in 50 sec.
    214 replies | 19568 view(s)
  • Shelwien's Avatar
    11th January 2021, 03:23
    > In a multithreaded system with multiple cores and an operating system with virtual memory
    > (windows, linux, unix), can you have a CPU exception when two instructions modify the same memory cell?

    No, there are no exceptions for this. Just that a "global variable" without "volatile" might actually be kept in a register.

    > Or does the content simply become not well defined?

    In a sense. You just can't predict the order of read/write operations in different threads.
    15 replies | 419 view(s)
  • Shelwien's Avatar
    11th January 2021, 03:18
    > The question is: if it is NOT important that the global variables are
    > perfectly up to date, can I safely (no CPU exception) avoid a semaphore or
    > something like that (obviously reducing the latency, this is the ultimate goal)?

    On x86 yes, though you'd have to add the "volatile" specifier to the variables accessed from multiple threads. On some weird platforms like PPC and embedded ones you might also need explicit cache flushes and/or intrinsics like __sync_synchronize(). So yes, on x86 it's quite possible to implement MT without explicit semaphores; it's simply less efficient when a thread spins in a loop waiting for some global variable, while with thread APIs it could release the core to the OS while it waits. There are also some interesting new tools: https://gcc.gnu.org/onlinedocs/gcc-4.9.2/gcc/X86-transactional-memory-intrinsics.html
    15 replies | 419 view(s)
  • Shelwien's Avatar
    11th January 2021, 02:54
    https://stackoverflow.com/questions/37763257/android-bitmap-resizing-using-better-resampling-algorithm-than-bilinear-like-l
    2 replies | 104 view(s)
  • suryakandau@yahoo.co.id's Avatar
    11th January 2021, 01:42
    Yes, I am interested in the limit of the JPEG compression ratio. BTW, how much can JPEG be compressed?
    214 replies | 19568 view(s)
  • Lithium Flower's Avatar
    10th January 2021, 23:04
    @ Jyrki Alakuijala Thank you for your reply. If I want to increase fidelity in VarDCT mode (JPEG XL 0.2), is target distance -d 0.8 (speed: kitten) probably a good distance?

    -q 90 == -d 1.000 // visually lossless (side by side)
    -q 91 == -d 0.900
    -q 92 == -d 0.800
    -q 93 == -d 0.700
    -q 94 == -d 0.600
    -q 95 == -d 0.550 // visually lossless (flicker-test)
    31 replies | 2540 view(s)
  • Piotr Tarsa's Avatar
    10th January 2021, 21:39
    Well, there are many things at play:
    1) There should be no exception on write, whatever that means. One thread will win the race. However, when you read the value back it could be in an inconsistent state, e.g. one thread won with one part of the result and the other thread won with the other part, so in the end the result is corrupted and using it will result in an exception, segfault, etc.
    2) There is always some transaction size. I think that if you have a register of size 2^N bytes and you write to a memory location aligned at 2^N bytes, then your write will either succeed fully or be overwritten fully. This means that if you e.g. store a pointer to a field aligned to the pointer size, it will either succeed fully or be overwritten fully by another thread. In either case there will be a valid pointer if both threads write valid pointers.
    3) You need to be aware of https://en.wikipedia.org/wiki/Memory_model_(programming) and https://en.wikipedia.org/wiki/Consistency_model . For example, if you don't add volatile or atomic modifiers, the compiler is allowed to cache the value in a register and potentially update the real memory cell very rarely. If you don't use memory fences (atomics trigger memory fences), a CPU core could delay updating other cores, so another core would get stale data.
    4) Transformations (e.g. addition) are done at the CPU level, so the CPU needs to invoke many steps: load value, change value, store value. Since there are multiple steps, another CPU core could access the data between the steps of the first core. Therefore, to implement atomics, instructions like https://en.wikipedia.org/wiki/Compare-and-swap are needed to verify at the end of the transformation that the original value is still at the original memory location. If not, the compare-exchange instruction fails and the whole transformation is repeated. This process is repeated until the compare-exchange succeeds. With reasonably low contention between threads the success rate is high.
    5) The CPU instructions define the guarantees you'll see in practice. So if you copy 8 bytes one byte at a time and two threads are doing that on the same memory location, you won't get the guarantees of 8-byte writes done as a single instruction.
    6) On some CPUs (e.g. ARM ones) there are only aligned writes, so the compiler has to emulate unaligned writes using aligned writes. For example, if you write 4 bytes at memory address 13: 13 % 4 != 0, so the compiler needs to issue two 4-byte writes, each transforming data that's already there. Because this is a multi-step, non-atomic transformation, there could be data corruption if multiple threads access the memory location.
    15 replies | 419 view(s)
  • fcorbelli's Avatar
    10th January 2021, 21:22
    I will make a Debian virtual machine and fix the BSD-dependent code. But the question is still the same as in the first post: in a multithreaded system with multiple cores and an operating system with virtual memory (Windows, Linux, Unix), can you get a CPU exception when two instructions modify the same memory cell? Or does the content simply become not well defined?
    15 replies | 419 view(s)
  • CompressMaster's Avatar
    10th January 2021, 21:19
    Is there a good Android-related forum somewhere? I want to alter the camera's digital zooming algorithm to use Gaussian interpolation instead of bilinear.
    2 replies | 104 view(s)
  • CompressMaster's Avatar
    10th January 2021, 21:13
    CompressMaster replied to a thread Paq8sk in Data Compression
    @suryakandau, what about optimizing paq8sk for the file A10.jpg from the Maximum Compression corpus? I am interested in what the limit of the compression ratio is. :)
    214 replies | 19568 view(s)
  • Piotr Tarsa's Avatar
    10th January 2021, 20:56
    What if you use `std::atomic<std::int64_t>` instead of its alias `atomic_int64_t`?
    15 replies | 419 view(s)
  • fcorbelli's Avatar
    10th January 2021, 20:44
    Including

    #include <atomic>

    and declaring

    atomic_int64_t g_bytescanned;
    atomic_int64_t g_filescanned;

    compiles on Windows, but not on FreeBSD:

    paq.cpp:356:1: error: 'atomic_int64_t' does not name a type; did you mean 'u_int64_t'?
     atomic_int64_t g_bytescanned;
    15 replies | 419 view(s)
  • Piotr Tarsa's Avatar
    10th January 2021, 20:26
    Hmm,

    $ gcc -O3 -march=native -Dunix zpaq.cpp -static -lstdc++ libzpaq.cpp -pthread -o zpaq -static -lm > errors.txt 2>&1

    gives me:

    zpaq.cpp: In function ‘bool comparecrc32block(s_crc32block, s_crc32block)’:
    zpaq.cpp:3001:40: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 3 has type ‘uint64_t {aka long unsigned int}’
      sprintf(a_start,"%014lld",a.crc32start);
    zpaq.cpp:3002:40: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 3 has type ‘uint64_t {aka long unsigned int}’
      sprintf(b_start,"%014lld",b.crc32start);
    zpaq.cpp: In function ‘void mygetch()’:
    zpaq.cpp:3202:17: error: aggregate ‘mygetch()::termios oldt’ has incomplete type and cannot be defined
      struct termios oldt, newt;
    zpaq.cpp:3202:23: error: aggregate ‘mygetch()::termios newt’ has incomplete type and cannot be defined
      struct termios oldt, newt;
    zpaq.cpp:3203:2: error: ‘tcgetattr’ was not declared in this scope
      tcgetattr ( STDIN_FILENO, &oldt );
    zpaq.cpp:3203:2: note: suggested alternative: ‘tcgetpgrp’
    zpaq.cpp:3205:21: error: ‘ICANON’ was not declared in this scope
      newt.c_lflag &= ~( ICANON | ECHO );
    zpaq.cpp:3205:30: error: ‘ECHO’ was not declared in this scope
    zpaq.cpp:3205:30: note: suggested alternative: ‘EIO’
    zpaq.cpp:3206:28: error: ‘TCSANOW’ was not declared in this scope
      tcsetattr ( STDIN_FILENO, TCSANOW, &newt );
    zpaq.cpp:3206:28: note: suggested alternative: ‘TCSETAW’
    zpaq.cpp:3206:2: error: ‘tcsetattr’ was not declared in this scope
    zpaq.cpp:3206:2: note: suggested alternative: ‘tcsetpgrp’
    zpaq.cpp: In function ‘int myhuman(char*, size_t, int64_t, const char*, int, int)’:
    zpaq.cpp:7869:14: error: ‘HN_DIVISOR_1000’ was not declared in this scope
      if (flags & HN_DIVISOR_1000) {
    zpaq.cpp:7872:15: error: ‘HN_B’ was not declared in this scope
      if (flags & HN_B)
    zpaq.cpp:7882:15: error: ‘HN_B’ was not declared in this scope
      if (flags & HN_B)
    zpaq.cpp:7892:16: error: ‘HN_AUTOSCALE’ was not declared in this scope
      (scale & (HN_AUTOSCALE | HN_GETSCALE)) == 0)
    zpaq.cpp:7892:31: error: ‘HN_GETSCALE’ was not declared in this scope
    zpaq.cpp:7892:31: note: suggested alternative: ‘F_GET_SEALS’
    zpaq.cpp:7909:14: error: ‘HN_NOSPACE’ was not declared in this scope
      if (flags & HN_NOSPACE)
    zpaq.cpp:7909:14: note: suggested alternative: ‘N_6PACK’
    zpaq.cpp:7921:15: error: ‘HN_AUTOSCALE’ was not declared in this scope
      if (scale & (HN_AUTOSCALE | HN_GETSCALE)) {
    zpaq.cpp:7921:30: error: ‘HN_GETSCALE’ was not declared in this scope
    zpaq.cpp:7921:30: note: suggested alternative: ‘F_GET_SEALS’
    zpaq.cpp:7936:38: error: ‘HN_DECIMAL’ was not declared in this scope
      if (bytes < 995 && i > 0 && flags & HN_DECIMAL) {
    zpaq.cpp: In function ‘void tohuman(long long int, char*)’:
    zpaq.cpp:7959:10: error: ‘HN_B’ was not declared in this scope
      flags = HN_B | HN_NOSPACE | HN_DECIMAL;
    zpaq.cpp:7959:17: error: ‘HN_NOSPACE’ was not declared in this scope
    zpaq.cpp:7959:17: note: suggested alternative: ‘N_6PACK’
    zpaq.cpp:7959:30: error: ‘HN_DECIMAL’ was not declared in this scope
    zpaq.cpp:7960:11: error: ‘HN_DIVISOR_1000’ was not declared in this scope
      flags |= HN_DIVISOR_1000;
    zpaq.cpp:7962:17: error: ‘HN_AUTOSCALE’ was not declared in this scope
      bytes, "", HN_AUTOSCALE, flags);
    zpaq.cpp: In function ‘long long int getfreespace(const char*)’:
    zpaq.cpp:7968:16: error: aggregate ‘getfreespace(const char*)::statfs stat’ has incomplete type and cannot be defined
      struct statfs stat;
    zpaq.cpp:7970:26: error: invalid use of incomplete type ‘struct getfreespace(const char*)::statfs’
      if (statfs(i_path, &stat) != 0)
    zpaq.cpp:7968:9: note: forward declaration of ‘struct getfreespace(const char*)::statfs’
      struct statfs stat;
    zpaq.cpp:7982:3: error: ‘getbsize’ was not declared in this scope
      getbsize(&dummy, &blocksize);
    zpaq.cpp:7982:3: note: suggested alternative: ‘getsid’
    zpaq.cpp: In member function ‘int Jidac::summa()’:
    zpaq.cpp:8403:103: warning: embedded ‘\0’ in format
      sprintf(p->second.sha1hex,"%08X\0x0",crc32_calc_file(filename.c_str(),inizio,total_size,lavorati));
    zpaq.cpp:8409:104: warning: embedded ‘\0’ in format
      sprintf(p->second.sha1hex,"%08X\0x0",crc32c_calc_file(filename.c_str(),inizio,total_size,lavorati));
    zpaq.cpp:8416:53: warning: format ‘%llX’ expects argument of type ‘long long unsigned int’, but argument 3 has type ‘uint64_t {aka long unsigned int}’
      sprintf(p->second.sha1hex,"%016llX\0x0",result2);
    zpaq.cpp:8416:53: warning: embedded ‘\0’ in format
    zpaq.cpp:8488:76: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 2 has type ‘int64_t {aka long int}’
      printf("Worked on %lld average speed %s B/s\n",lavorati,migliaia(myspeed));
    zpaq.cpp: In function ‘int unz(const char*, const char*, bool)’:
    zpaq.cpp:11231:116: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘long unsigned int’
      printf("%02d frags %s (RAM used ~ %s)\r",100-(offset*100/(total_size+1)),migliaia(frag.size()),migliaia2(ramsize));
    zpaq.cpp:11286:140: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 2 has type ‘unsigned int’
      printf("File %08lld of %08lld (%20s) %1.3f %s\n",i+1,(long long)mappadt.size(),migliaia(size),(mtime()-startrecalc)/1000.0,fn.c_str());
    zpaq.cpp: In member function ‘int Jidac::test()’:
    zpaq.cpp:11721:58: warning: format ‘%lu’ expects argument of type ‘long unsigned int’, but argument 2 has type ‘unsigned int’
      printf("Block %08lu K %s\r",i/1000,migliaia(lavorati));
    zpaq.cpp:11839:102: warning: format ‘%X’ expects argument of type ‘unsigned int’, but argument 3 has type ‘const char*’
      printf("SURE: STORED %08X = DECOMPRESSED = FROM FILE %08Xn",crc32stored,filedefinitivo.c_str());
    zpaq.cpp:11852:89: warning: format ‘%X’ expects argument of type ‘unsigned int’, but argument 3 has type ‘const char*’
      printf("GOOD: STORED %08X = DECOMPRESSED %08X\n",crc32stored,filedefinitivo.c_str());
    zpaq.cpp: In function ‘bool comparefilenamesize(s_fileandsize, s_fileandsize)’:
    zpaq.cpp:11962:33: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 3 has type ‘uint64_t {aka long unsigned int}’
      sprintf(a_size,"%014lld",a.size);
    zpaq.cpp:11963:33: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 3 has type ‘uint64_t {aka long unsigned int}’
      sprintf(b_size,"%014lld",b.size);
    zpaq.cpp: In function ‘bool comparefilenamedate(s_fileandsize, s_fileandsize)’:
    zpaq.cpp:11970:33: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 3 has type ‘int64_t {aka long int}’
      sprintf(a_size,"%014lld",a.date);
    zpaq.cpp:11971:33: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 3 has type ‘int64_t {aka long int}’
      sprintf(b_size,"%014lld",b.date);
    zpaq.cpp: In member function ‘int Jidac::dir()’:
    zpaq.cpp:12008:39: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘std::vector<std::__cxx11::basic_string<char> >::size_type {aka long unsigned int}’
      printf("FIles.size %d\n",files.size());
    zpaq.cpp:12108:87: warning: format ‘%d’ expects argument of type ‘int’, but argument 3 has type ‘uint64_t {aka long unsigned int}’
      printf("PRE %08d %08d %s\n",i,fileandsize.size,fileandsize.filename.c_str());
    zpaq.cpp:12118:88: warning: format ‘%d’ expects argument of type ‘int’, but argument 3 has type ‘uint64_t {aka long unsigned int}’
      printf("POST %08d %08d %s\n",i,fileandsize.size,fileandsize.filename.c_str());
    zpaq.cpp:12176:173: warning: format ‘%ld’ expects argument of type ‘long int’, but argument 2 has type ‘int’
      printf("%08ld CRC-1 %08X %s %19s %s\n",i,fileandsize.crc32,dateToString(fileandsize.date).c_str(),migliaia(fileandsize.size),fileandsize.filename.c_str());
    zpaq.cpp:12181:39: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘int64_t {aka long int}’
      printf("Done %03d ",percentuale);
    zpaq.cpp:12204:182: warning: format ‘%ld’ expects argument of type ‘long int’, but argument 2 has type ‘int’
      printf("%08ld CRC-2 %08X %s %19s %s\n",i+1,fileandsize.crc32,dateToString(fileandsize.date).c_str(),migliaia(fileandsize.size),fileandsize.filename.c_str());
    zpaq.cpp:12209:38: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘int64_t {aka long int}’
      printf("Done %03d ",percentuale);
    zpaq.cpp:12250:88: warning: format ‘%ld’ expects argument of type ‘long int’, but argument 2 has type ‘int’
      printf("%ld size %s <<%08X>>\n",i,migliaia(fileandsize.size()),fileandsize.crc32);
    zpaq.cpp:12260:38: warning: format ‘%ld’ expects argument of type ‘long int’, but argument 2 has type ‘int’
      printf("Limit founded %ld\n",limite);
    zpaq.cpp:12353:71: warning: format ‘%ld’ expects argument of type ‘long int’, but argument 2 has type ‘int’
      printf(" %8ld File %19s byte\n",quantifiles,migliaia(total_size));
    zpaq.cpp:12355:43: warning: format ‘%ld’ expects argument of type ‘long int’, but argument 2 has type ‘int’
      printf(" %8ld directory",quantedirectory);
    zpaq.cpp: In member function ‘int Jidac::dircompare()’:
    zpaq.cpp:12709:61: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘std::vector<std::__cxx11::basic_string<char> >::size_type {aka long unsigned int}’
      printf("Dir compare (%d dirs to be checked)\n",files.size());
    zpaq.cpp:12769:51: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘std::vector<std::__cxx11::basic_string<char> >::size_type {aka long unsigned int}’
      printf("Creating %d scan threads\n",files.size());

    I have:

    $ gcc --version
    gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
    Copyright (C) 2017 Free Software Foundation, Inc.

    Anyway, have you tried inserting my code somewhere in your code and then compiling it?
    15 replies | 419 view(s)
  • fcorbelli's Avatar
    10th January 2021, 20:17
I do not use Linux, but Windows and Unix (FreeBSD and Solaris). Compile instructions are in the first lines; under FreeBSD, for example, it is:

gcc7 -O3 -march=native -Dunix zpaq.cpp -static -lstdc++ libzpaq.cpp -pthread -o zpaq -static -lm

The file is coded in ANSI.
    15 replies | 419 view(s)
  • Piotr Tarsa's Avatar
    10th January 2021, 20:11
How to compile it under Linux (I have Ubuntu 18.04.5)? I keep getting errors like this:

zpaq.cpp:7871:14: error: ‘HN_DIVISOR_1000’ was not declared in this scope
     if (flags & HN_DIVISOR_1000) {
zpaq.cpp:7874:15: error: ‘HN_B’ was not declared in this scope
     if (flags & HN_B)
zpaq.cpp:7884:15: error: ‘HN_B’ was not declared in this scope
     if (flags & HN_B)
zpaq.cpp:7894:16: error: ‘HN_AUTOSCALE’ was not declared in this scope
     (scale & (HN_AUTOSCALE | HN_GETSCALE)) == 0)
zpaq.cpp:7894:31: error: ‘HN_GETSCALE’ was not declared in this scope
zpaq.cpp:7894:31: note: suggested alternative: ‘F_GET_SEALS’
zpaq.cpp:7911:14: error: ‘HN_NOSPACE’ was not declared in this scope
     if (flags & HN_NOSPACE)

Also, what's the encoding of that file? It probably isn't UTF-8, as gedit complains.
    15 replies | 419 view(s)
  • fcorbelli's Avatar
    10th January 2021, 19:40
    This one
    15 replies | 419 view(s)
  • Piotr Tarsa's Avatar
    10th January 2021, 19:05
I don't know what exactly your requirements are. You say you can't do something or need to do something else, etc. It wasn't even clear from the start whether you want a solution in C or C++. Here's my example of mixing C pthread.h with C++ atomics (and of course compiling with a C++ compiler):

#include <cstdio>
#include <pthread.h>
#include <atomic>

void *inc_cnt(void *cnt_void_ptr) {
    std::atomic<int> *cnt_ptr = (std::atomic<int> *) cnt_void_ptr;
    for (int i = 0; i < 100100100; i++) {
        (*cnt_ptr)++;
    }
    return NULL;
}

int main() {
    std::atomic<int> cnt = {0};
    printf("start cnt: %d\n", cnt.load());
    pthread_t inc_cnt_thread;
    if (pthread_create(&inc_cnt_thread, NULL, inc_cnt, &cnt)) {
        fprintf(stderr, "Error creating thread\n");
        return 1;
    }
    inc_cnt(&cnt);
    if (pthread_join(inc_cnt_thread, NULL)) {
        fprintf(stderr, "Error joining thread\n");
        return 2;
    }
    printf("final cnt: %d\n", cnt.load());
    return 0;
}

It works for me:

$ g++ -pthread atomics.cpp
$ ./a.out
start cnt: 0
final cnt: 200200200
    15 replies | 419 view(s)
  • fcorbelli's Avatar
    10th January 2021, 18:25
Including #include <stdatomic.h> is not a really good idea, because you can get a gazillion conflicts:

In file included from zpaq.cpp:127:0:
c:\mingw\lib\gcc\x86_64-w64-mingw32\7.3.0\include\stdatomic.h:40:9: error: '_Atomic' does not name a type; did you mean '_wtoi'?
 typedef _Atomic _Bool atomic_bool;

So you need to include #include <threads.h>
    15 replies | 419 view(s)
  • Mauro Vezzosi's Avatar
    10th January 2021, 16:48
CMV 0.3.0 alpha 1, coronavirus.fasta (the .2bit version is much worse)

Compressed size             Time (1)    Options
First 200 MiB  Whole file   Whole file
207 KiB        940888       45 h        -m2 (decompression not verified)
179 KiB                     ~18 d       -m2,0,>
                            ~11 d       -m2,0,0x3ba3629e (current (2021/01/10) optimized options based on the first 1 MiB)

 218149  CMV.exe compressed with 7-Zip 9.20 (.zip)
1159037  Total

(1) Intel(R) Core(TM) i7-4710HQ CPU @ 2.50GHz (up to 3.50GHz), 8 GB RAM DDR3, Windows 8.1 64 bits.
I haven't tested the reverse-complement order yet (1 2).
    33 replies | 1560 view(s)
  • Piotr Tarsa's Avatar
    10th January 2021, 16:45
    Erm, what? Atomics of course make sense only with multiple threads, but you don't need particular threads implementation to use them. Here you have the non-C++ version of atomics: https://en.cppreference.com/w/c/atomic https://en.cppreference.com/w/c/language/atomic
    15 replies | 419 view(s)
  • fcorbelli's Avatar
    10th January 2021, 15:51
    Atomic (std::atomic) is for C++ threads (std::threads)
    15 replies | 419 view(s)
  • madserb's Avatar
    10th January 2021, 15:29
    madserb replied to a thread Paq8sk in Data Compression
SSE2 and AVX2 builds here:
    214 replies | 19568 view(s)
  • Piotr Tarsa's Avatar
    10th January 2021, 14:51
Have you considered atomics? They are good when your transaction scope is just a single variable and there isn't huge contention between threads (like several thousand updates per second). Atomics don't cause thread sleeping / parking / waiting / whatever you call it. If an update of an atomic variable fails, the update is repeated. That works well if failure doesn't happen too often. The tradeoff threshold depends on how expensive it is to recompute the operation. Simple addition of a constant is very cheap, so it can be repeated quite a few times before atomics can be considered unfit for a particular problem.
    15 replies | 419 view(s)
  • fcorbelli's Avatar
    10th January 2021, 14:14
I am working on a modification to a certain parallel execution program (ZPAQ!), using the venerable pthreads. I already know everything about traffic lights (!!!) aka semaphores, mutexes etc. (MS in computer science), but I'm reflecting on what is actually mandatory under some relaxed assumptions. It is interesting, for me, to work on concrete cases, to see how the theory - which is taught as a dogma of faith - can materialize.

Classic problem of the globally shared variable between various threads:

int globale=0;
int contatore=0;
(...)
pthread()
    int k=something();
    globale+=k;
    contatore++;
(main)
    runs N threads, that generate different "something" values

In this pseudo-code the two globals (globale and contatore) are changed directly by the threads. This is not good. Typically I would do this:

sem_t s; /* semaphore */
(...)
P(&s);
globale+=k;
contatore++;
V(&s);
(...)

on whatever (a mutex). Suppose, however, that we have no interest in maintaining "perfect" global variable values. If, for some reason, they are changed erroneously (not updated, updated several times, zeroed or whatever) we don't care. They are just statistical info (# of compressed files, worked size etc).

In this situation a modern CPU (with more cores, even on more separate dies) should still have an arbitration mechanism on RAM changes (for cache hierarchies, L1, L2 etc too). Is there a sort of HW mechanism for managing critical sections (portions of RAM, which are actually pages of virtual memory etc: in short, it is extremely complex to establish what a simple "contatore++" does)?

core/cpu 1
    move.l contatore, D0
    add.l #1, D0
    move.l D0, contatore
core/cpu 2
    move.l contatore, D0
    add.l #1, D0
    move.l D0, contatore
core/cpu 3
    move.l contatore, D0
    add.l #1, D0
    move.l D0, contatore
(...)

Such pseudocode obviously triggers a series of enormous activities before becoming a "real" change to a physical RAM cell, even gigantic when using operating systems with virtual memory (as almost always). The question is: if it is NOT important that the global variables are perfectly up to date, can I safely (no CPU exception) avoid a semaphore or something like that (obviously reducing the latency, this is the ultimate goal)?
    15 replies | 419 view(s)
  • suryakandau@yahoo.co.id's Avatar
    10th January 2021, 06:31
Paq8sk43
-change jpeg model
-fix/change detection (tiff, text)

f.jpg -s8: 80751 bytes, 33.61 sec
a10.jpg -s8: 619637 bytes, 209.85 sec

source: https://github.com/skandau/paq8sk
    214 replies | 19568 view(s)
  • Gotty's Avatar
    10th January 2021, 03:32
    Fixed.
    30 replies | 1264 view(s)
  • suryakandau@yahoo.co.id's Avatar
    10th January 2021, 03:21
The link https://visualstudio.microsoft.com/vs/community/ gives "page not found".
    30 replies | 1264 view(s)
  • Gotty's Avatar
    10th January 2021, 03:18
    Verified under Lubuntu 19.04 64 bit. Generated successfully. 1) md5 checksum matches, 2) content matches with my version.
    33 replies | 1560 view(s)
  • innar's Avatar
    10th January 2021, 01:44
If I am not mistaken, then this command would make the approach by JamesB identical with Gotty's transform:

cat coronavirus.fasta | sed 's/>.*/<&</' | tr -d '\n' | tr '<' '\n' | tail -n +2 > coronavirus.fasta.un-wrapped
printf "\n" >> coronavirus.fasta.un-wrapped

(added \n to align with Gotty's file)

Both (Gotty's transform and JamesB's sed/tr/tail logic) have md5sum e0d7c063a65c7625063c17c7c9760708. Would JamesB or somebody else mind double-checking that the command under *nix produces a correct un-wrapped FASTA file? Thanks!

PS! I started rerunning paq8l and cmix v18 on this file. Will make the announcement on the web and elsewhere to focus on this transform after some extra eyes re: the correctness of the file (I would prefer the original file + standard *nix commands for the transform).
    33 replies | 1560 view(s)
  • NoeticRon's Avatar
    9th January 2021, 23:22
    ^^It really was a factor in my decision to compress it and also the fact that I was running out of my hoarding space. I have learnt a lot about the algos from you and the fileforums site...thanks a lot for being patient with a newbie in the scene. I look forward to our paths crossing again on one compression forum in the future. Stay safe and Godspeed!
    7 replies | 559 view(s)
  • Darek's Avatar
    9th January 2021, 13:18
    Darek replied to a thread paq8px in Data Compression
Scores of 4 Corpuses for paq8px v199 and paq8px v200. Great gain for Silesia. The paq8px v200 is slightly better than paq8px v199, got 43'036 bytes of gain compared to paq8px v198 and (unfortunately) is still 4'758 bytes worse than the best cmix v18 score (with precomp)... but it's very, very close now! MaximumCompression got about 2KB of gain. Other corpuses remain in a similar place.
    2276 replies | 604522 view(s)
  • Darek's Avatar
    9th January 2021, 13:13
    Darek replied to a thread Paq8pxd dict in Data Compression
Scores of my testset for paq8pxd v94. For the F.JPG file there is a 215-byte gain. Other files remain the same.
    1026 replies | 360608 view(s)
  • schnaader's Avatar
    9th January 2021, 00:58
From the abbreviation, I'd guess these are the variants used in ZTool and XTool from the UltraArc author. Since XTool is the successor of ZTool, P.XT most likely gives better results. Note that the meaning of "precomp" here is the zlib/zstd/lz4 "precompression" methods from UltraArc. These share the name and the main idea with my tool precomp, but the implementation is different, so I don't know many details about it and can just guess - the zlib recompression works well as I've done it myself, but I don't know anything about the zstd and lz4 recompression variants. Couldn't find any details about MMC in UltraArc either. My guess would be it's either related to MMC (Morphing Match Chain) or an abbreviation for "MultiMedia Content". MSC seems to be a media stream compressor specialized in some game resource formats (.dds/.dxt = compressed textures, images; .wav/.mp3 = sounds/music). Thanks, and welcome to the club of curious "game compressors" ("nice game, but I wonder how small I can get it?" :cool: ). Doing this myself frequently, it's very interesting to analyse game data to see the different strategies and file formats used.
    7 replies | 559 view(s)
  • NoeticRon's Avatar
    8th January 2021, 23:45
I have to hand it to you, your knowhow on this is astounding. Trying with the "Headerless" option did give me more accurate results: https://ibb.co/377psfv (75.1 GB decompressed to 145 GB). I took your advice and fiddled around with the compression algos under FreeArc. I took Need for Speed: The Run, with a smaller file size of 15.3 GB, and obtained the following results:

1. First compression was done using Precomp, Srep, LOLZ and MMC, giving a final size of 4.23 GB (72.35% decrease).
2. Second compression was done using Precomp, Srep and LOLZ, giving a final size of 5.61 GB (63.3% decrease).
3. Third compression is being carried out using Precomp, Srep, LOLZ, MMC and MSC.

I have a question regarding Precomp: what do the different versions P.ZT and P.XT mean for it? Also, I couldn't find any proper information about the different algos, but from what I figured, LOLZ, ZSTD and LZMA2 are final compressors and LOLZ trumps the lot with faster decompression. But what are MMC and MSC about? Hands down, you have great observation skills; that is indeed the Forza Horizon 4 game, which I got free during the XBOX Game Pass promotional offer. Sorry for the barrage of questions, and I truly appreciate you taking time out of your schedule to reply to these threads. :)
    7 replies | 559 view(s)
  • Jyrki Alakuijala's Avatar
    8th January 2021, 23:25
Vardct mode is great at high quality (psychovisually lossless) on most if not all content and largely made our near-lossless coding efforts redundant. Png is the main output in pixels. Jpeg reconstruction is for byte-to-byte lossless jpeg transmission.
    31 replies | 2540 view(s)
  • Jyrki Alakuijala's Avatar
    8th January 2021, 23:16
    I will create an optimizer later (likely in 6-12 months or so) that is more focused on max butteraugli -- now it is mostly tuned to p-norm and max is currently a 2nd class citizen. Max butteraugli is a more precise indicator for the highest fidelity.
    31 replies | 2540 view(s)
  • Lithium Flower's Avatar
    8th January 2021, 21:51
@schnaader I don't recommend using srep393a; this version has a memory leak bug. I recommend using srep392 or srep393 instead.
    7 replies | 559 view(s)
  • Lithium Flower's Avatar
    8th January 2021, 21:43
@Jon Sneyers Thank you for your reply (merged into the new post). @Jarek my XnViewMP (Version 0.98.0 64bits (Dec 14 2020)) can't decode jxl; I think they use an old libjpegxl...
    31 replies | 2540 view(s)
  • Gotty's Avatar
    8th January 2021, 17:23
    Please delete your paq8gen repository in github. Follow the points 1) 2) 3) above in my email.
    30 replies | 1264 view(s)
  • suryakandau@yahoo.co.id's Avatar
    8th January 2021, 16:19
Here is the error message I get when I upload the source of paq8gen to GitHub
    30 replies | 1264 view(s)
  • suryakandau@yahoo.co.id's Avatar
    8th January 2021, 15:38
Yes, I want to contribute to paq8gen, but I am still struggling with how to make a pull request on GitHub. I am sorry about uploading paq8gen to this forum. In the above result I just want to know which is the smaller output, preprocessed.full or un-wrapped.
    30 replies | 1264 view(s)
  • Gotty's Avatar
    8th January 2021, 15:28
What is your intention? Would you like to contribute to paq8gen?
    30 replies | 1264 view(s)
  • suryakandau@yahoo.co.id's Avatar
    8th January 2021, 15:07
Here is a little modification of paq8gen.. just testing it for fun on coronavirus.fasta.preprocessed.full and coronavirus.fasta.un-wrapped using the -8 option. The result is:

coronavirus.fasta.preprocessed.full
Total input size   : 1317937667
Total archive size : 1111982
Time 76408.30 sec, used 1715 MB (1798534789 bytes) of memory

coronavirus.fasta.un-wrapped
Total input size   : 1317937667
Total archive size : 1119174
Time 76799.55 sec, used 1715 MB (1798534848 bytes) of memory
    30 replies | 1264 view(s)
  • Gotty's Avatar
    8th January 2021, 15:01
    Did you finish developing it, or is it still in progress?
    3 replies | 268 view(s)
  • schnaader's Avatar
    8th January 2021, 14:43
That's much better indeed - 75.3 GB of compressed streams decompressed to 145 GB, so the 50% rule of thumb would give (95 - 75.3) + 75.3 * 0.5 = 47.5 GB. Suggestion from here: try if the tool you tried in the first post gives similar numbers using the "Headerless" option. Processing installed data could be slightly beneficial, but UltraArc should also be able to proceed with the setup .bin as well, because there clearly are zLib streams inside. After that, I'd advise trying different settings yourself with the tools you found, preferably on other, smaller files, to get a feeling for which settings are useful and how they influence speed and compression ratio. Other people can give you hints, but it's much more effective in the long run to try things yourself and learn. Compression has the property of giving minor ratio improvements for huge drops in speed, e.g. using something like a slow zpaq variant might give a 10% better ratio, but might also take 10 hours instead of 1 hour to decompress. Also, memory, disk speed and multi-threading influence results. So there's no "optimal way" to compress data; it all depends on your use case, the speed/size target you aim at and the hardware you use. Further research using the icon from your screenshot shows that the game is Forza Horizon 4 Standard Edition. There is a lossless repack of the Ultimate Edition available with a size of 40-50 GB depending on the installed (DLC?) content (obviously though, I didn't validate that statement as I don't own the game and don't want to support piracy, but it matches the rule-of-thumb calculation above), so this is the compressed size to expect when using zLib recompression.
    7 replies | 559 view(s)
  • Emil Enchev's Avatar
    8th January 2021, 13:15
The algorithm is purely mathematical and does not use a dictionary of words. And as I said, it will be open. Unfortunately for Marcus, it doesn't even use Artificial Intelligence ;) P.S. And please rule out the presumption that I will cheat in any way. Some of you here even suggest the use of GPT-3, which is an automatic plagiarism system trained on Wikipedia itself, for Hutter Prize compression (complete absurdity), but you become very sensitive when someone tells you he will use Wolfram Language (I can use the GNU Multiple Precision Arithmetic Library and C++ with the same result). If you're looking for crooks, I'm not the best example, believe me.
    3 replies | 268 view(s)
  • algorithm's Avatar
    8th January 2021, 12:33
    Mathematica probably has a dictionary of words, so it may not be fair.
    3 replies | 268 view(s)
  • Emil Enchev's Avatar
    8th January 2021, 12:26
This is the question I asked Marcus Hutter, and now I'm waiting to get an answer. After all, if I open my algorithm, does it matter if it's an .exe file or a Mathematica .nb?!
    3 replies | 268 view(s)
  • NoeticRon's Avatar
    8th January 2021, 00:29
I followed your doubts about the zlib scan and used a better scanner for multiple streams that may have been used in that game compression. Here are the results:

zlib (discrete levels): https://ibb.co/2SQmw36
LZO: https://ibb.co/3RzKVsF
ZSTD: https://ibb.co/SXgf0h0
Crilayla: https://ibb.co/YbDTDN2
WAV: https://ibb.co/vP8Wyw6
BINK: https://ibb.co/pzhqF8N
VP6: https://ibb.co/x3CNrvFv
LZ4: stuck at 1.70 GB detection

Here the zlib detection was much more optimal, I believe (but I'm not sure, since I don't know much about all these). Also, the pre-installed game folder looks like this: https://ibb.co/jL2Bgw4. Should I install this game and obtain the raw data after installation and then proceed with its compression, or can UltraArc handle the badly compressed 91 GB bin file on its own? I was planning on using the following FreeArc settings under UltraArc but was not sure about the correct Mask settings to be applied. Since I do not have good knowledge of command lines, could you please have a look at this attached image as well: https://ibb.co/C1gGmsf. To be perfectly honest, I couldn't understand where to put the command lines correctly, so I hope you forgive me for not having basic knowledge of this compression stuff. Thank you for your immense help thus far.
    7 replies | 559 view(s)