Activity Stream

  • Darek's Avatar
    Today, 18:53
    Darek replied to a thread paq8px in Data Compression
    Yes, of course. I'll check it again and give you switches used for these versions.
    2283 replies | 605472 view(s)
  • moisesmcardona's Avatar
    Today, 17:35
    moisesmcardona replied to a thread Paq8sk in Data Compression
    Hey Surya, I noticed some of your releases are compiled with MT=ON and some with MT=OFF. Can you please pick one and stick with it? The command line breaks when MT=OFF. Thanks!
    218 replies | 19980 view(s)
  • Jon Sneyers's Avatar
    Today, 11:00
    Yes, further reducing the memory footprint would be nice. For large images like this, it would also be useful to have an encoder that does not try to globally optimize the entire image, and a cropped decoder. These are all possible, but quite some implementation effort, and it is not the main priority right now — getting the software ready for web browser integration is a bigger priority.
    34 replies | 2927 view(s)
  • e8c's Avatar
    Today, 07:32
    >djxl.exe Bennu_Grayscale_s4.jxl tmp.ppm --num_threads=4
    Read 109071829 compressed bytes
    Done. 20237 x 12066, 19.71 MP/s, 1 reps, 4 threads.
    Allocations: 7750 (max bytes in use: 7.050512E+09)
    >djxl.exe Bennu_Grayscale_s4.jxl tmp.ppm --num_threads=2
    Read 109071829 compressed bytes
    Done. 20237 x 12066, 10.50 MP/s, 1 reps, 2 threads.
    Allocations: 7744 (max bytes in use: 7.041389E+09)
    >djxl.exe Bennu_Grayscale_s4.jxl tmp.ppm --num_threads=1
    Read 109071829 compressed bytes
    Done. 20237 x 12066, 8.80 MP/s, 1 reps, 1 threads.
    Allocations: 7741 (max bytes in use: 7.036826E+09)
    >djxl.exe Bennu_Grayscale_s3.jxl tmp.ppm --num_threads=4
    Read 112310632 compressed bytes
    Done. 20237 x 12066, 35.77 MP/s, 1 reps, 4 threads.
    Allocations: 7749 (max bytes in use: 7.053651E+09)
    >djxl.exe Bennu_Grayscale_s3.jxl tmp.ppm --num_threads=2
    Read 112310632 compressed bytes
    Done. 20237 x 12066, 19.57 MP/s, 1 reps, 2 threads.
    Allocations: 7743 (max bytes in use: 7.044529E+09)
    >djxl.exe Bennu_Grayscale_s3.jxl tmp.ppm --num_threads=1
    Read 112310632 compressed bytes
    Done. 20237 x 12066, 17.63 MP/s, 1 reps, 1 threads.
    Allocations: 7740 (max bytes in use: 7.039963E+09)
    34 replies | 2927 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Today, 04:10
    I think it's difficult to go below 550,000 bytes on the A10 jpeg file.
    218 replies | 19980 view(s)
  • Gotty's Avatar
    Today, 01:46
    Gotty replied to a thread paq8px in Data Compression
    I need your help investigating it (I can't reproduce). Could you tell me the command line switches you used?
    2283 replies | 605472 view(s)
  • Darek's Avatar
    Yesterday, 23:13
    Darek replied to a thread paq8px in Data Compression
    If we assume about 20 versions yearly, that means it could be around the year 2263... unless there is some breakthrough, or more than one...
    2283 replies | 605472 view(s)
  • Gotty's Avatar
    Yesterday, 23:03
    Gotty replied to a thread paq8px in Data Compression
    150'000? Are you sure you wanted to ask 150'000? OK. Let's do the math ;-)
    Open Darek's MaximumCompression results a couple of posts above.
    Look at the first result he recorded (for paq8px_v75): 637110
    Look at the last result (for paq8px_v200): 624578
    Gaining that 12532 bytes took roughly 125 versions.
    Doing a simple interpolation (which is totally incorrect, but fits very well to your question): 150'000 bytes will be reached at around paq8px_v4858.
    2283 replies | 605472 view(s)
  • Darek's Avatar
    Yesterday, 23:01
    Darek replied to a thread paq8px in Data Compression
    In my opinion - in 2077 :) By the way - paq8pxd v95 scored 618'527 bytes on the A10.jpg file with option -x9.
    2283 replies | 605472 view(s)
  • Darek's Avatar
    Yesterday, 22:58
    Darek replied to a thread Paq8pxd dict in Data Compression
    Scores of my testset and 4 Corpuses for paq8pxd v94 and paq8pxd v95. Good improvements on JPG files and files containing such structures. The A10.JPG file got 618'527 bytes!
    1028 replies | 361607 view(s)
  • CompressMaster's Avatar
    Yesterday, 21:21
    CompressMaster replied to a thread paq8px in Data Compression
    My quick test on A10.jpg from the MaximumCompression corpus using the -7 switch:
    Total input size: 842 468 bytes
    Compressed size: 624 693 bytes
    btw, a purely hypothetical question: when do you expect compression of A10.jpg to get below 150 000 bytes?
    2283 replies | 605472 view(s)
  • Gotty's Avatar
    Yesterday, 21:00
    Gotty replied to a thread paq8px in Data Compression
    - IndirectContext improvement: using a leading bit to distinguish context bits from empty (unused) bits
    - LSTM model: applied the new IndirectContext improvements
    - MatchModel improvements:
      - moved context hashes from NormalModel to Shared so MatchModel can also use them
      - using more candidates in the hashtable (3 instead of 1)
      - using the improved IndirectContext
      - refined contexts
    - tuned NormalModel contexts wrt MatchModel context lengths
    2283 replies | 605472 view(s)
  • CompressMaster's Avatar
    Yesterday, 20:57
    CompressMaster replied to a thread Paq8sk in Data Compression
    @suryakandau Are you able to go below 550,000 bytes on A10.jpg compression? And below 40,000 bytes with F.JPG? I am just interested whether paq8sk can be tuned to achieve that ratio - regardless of time and memory usage. btw, you have an irrelevant space between the attached file and the text.
    218 replies | 19980 view(s)
  • e8c's Avatar
    Yesterday, 20:15
    https://www.asteroidmission.org/osprey-recon-c-mosaic/
    >cjxl.exe -q 100 -s 3 Bennu_Grayscale.png Bennu_Grayscale_s3.jxl
    J P E G \/ | /\ |_ e n c o d e r
    Read 20237x12066 image, 75.4 MP/s
    Encoding, 2 threads.
    Compressed to 112310632 bytes (3.680 bpp).
    20237 x 12066, 24.96 MP/s, 1 reps, 2 threads.
    >cjxl.exe -q 100 -s 4 Bennu_Grayscale.png Bennu_Grayscale_s4.jxl
    J P E G \/ | /\ |_ e n c o d e r
    Read 20237x12066 image, 75.6 MP/s
    Encoding, 2 threads.
    Compressed to 109071829 bytes (3.573 bpp).
    20237 x 12066, 1.78 MP/s, 1 reps, 2 threads.
    >cjxl.exe -q 100 -s 5 Bennu_Grayscale.png Bennu_Grayscale_s5.jxl
    J P E G \/ | /\ |_ e n c o d e r
    Read 20237x12066 image, 74.7 MP/s
    Encoding, 2 threads.
    terminate called after throwing an instance of 'std::bad_alloc'
      what(): std::bad_alloc
    ... ... ...
    >cjxl.exe -q 100 -s 9 Bennu_Grayscale.png Bennu_Grayscale_s9.jxl
    J P E G \/ | /\ |_ e n c o d e r
    Read 20237x12066 image, 75.7 MP/s
    Encoding, 2 threads.
    terminate called after throwing an instance of 'std::bad_alloc'
      what(): std::bad_alloc
    >dir
    16.01.2021 19:26 128 446 880 Bennu_Grayscale.png
    16.01.2021 19:24 732 538 945 Bennu_Grayscale.ppm
    16.01.2021 19:36 112 310 632 Bennu_Grayscale_s3.jxl
    16.01.2021 19:41 109 071 829 Bennu_Grayscale_s4.jxl
    >systeminfo | find "Memory"
    Total Physical Memory: 20,346 MB
    Available Physical Memory: 17,859 MB
    Virtual Memory: Max Size: 20,346 MB
    Virtual Memory: Available: 18,012 MB
    Virtual Memory: In Use: 2,334 MB
    34 replies | 2927 view(s)
  • kaitz's Avatar
    Yesterday, 20:15
    kaitz replied to a thread Paq8pxd dict in Data Compression
    No
    1028 replies | 361607 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 19:24
    Paq8sk44 - improved jpeg compression using the -s8 option. On f.jpg (darek corpus): total 112038 bytes compressed to 80194 bytes. Time 19.17 sec, used 2444 MB (2563212985 bytes) of memory. Here are the source code and binary file.
    paq8sk44 -s8 a10.jpg    842468 616629 151.72 sec 2444 MB
    paq8pxd_v95 -s8 a10.jpg 842468 618555 43.42 sec  1984 MB
    paq8px_v200 -8 a10.jpg  842468 624597 26.51 sec  2602 MB
    218 replies | 19980 view(s)
  • pacalovasjurijus's Avatar
    15th January 2021, 21:31
    We write programs. To compress a program, you need to write a program that will compress the program.
    8 replies | 2081 view(s)
  • e8c's Avatar
    15th January 2021, 17:19
    (UPD: I know about , but the next link could be helpful anyway.)
    https://askubuntu.com/questions/1041349/imagemagick-command-line-convert-limit-values
    $ /bin/time -f '\nUser time (seconds): %U\nMemory (kbytes): %M' \
    > ./guess -1 PIA23623_hires.ppm PIA23623_hires.guess
    encode, 2 threads: 112 MPx/s
    User time (seconds): 35.60
    Memory (kbytes): 7743864
    $ ls -l
    total 12539741
    -rwxrwx--- 1 root vboxsf 1794025332 Jan 15 16:49 PIA23623_hires.guess
    -rwxrwx--- 1 root vboxsf 2293756771 Jan 15 15:56 PIA23623_hires.png
    -rwxrwx--- 1 root vboxsf 6115804597 Jan 15 15:20 PIA23623_hires.ppm
    -rwxrwx--- 1 root vboxsf 2425213852 Jan 14 02:08 PIA23623_hires.tif
    VM: 2 v-Cores, 11 GB RAM. Host: Intel NUC8i3, SATA SSD.
    "35.60 seconds" is the sum of the 2 threads, about 18 s in user space. (Said for those who are not familiar with profiling multi-threaded applications.) Respectively: 8 cores - 4.5 s, 16 cores - 2.25 s, 32 cores - 1.125 s. That is acceptable. See attachment. (Site engine converts PNG to JPG, why?)
    63 replies | 4307 view(s)
  • suryakandau@yahoo.co.id's Avatar
    15th January 2021, 12:03
    Where can I get the mill.jpg and dscn0791.avi files? Could you upload them here please? Thank you.
    1028 replies | 361607 view(s)
  • Lithium Flower's Avatar
    14th January 2021, 10:43
    @Jyrki Alakuijala Thank you for your reply.
    I checked my compressed non-photographic set with my eyes and found tiny artifacts in different images. I think with jpeg xl 0.2 -d 1.0 (Speed: kitten), if a compressed image's maxButteraugli is above 1.5 or 1.6, it probably has errors or tiny artifacts.
    A little curious: is there a plan or patch to improve non-photographic fidelity in the next jpeg xl public release (jpeg xl 0.3)?
    34 replies | 2927 view(s)
  • kaitz's Avatar
    13th January 2021, 21:19
    kaitz replied to a thread Paq8pxd dict in Data Compression
    paq8pxd_v95 jpeg model:
    - more context in Map1 (20)
    - more inputs from main context
    - 2 main mixer inputs + 1 apm
    - cleanup
                                               Size     Compressed   Time         Memory
    paq8pxd_v95 -s8 a10.jpg                    842468   618555       43.42 sec    1984 MB
    paq8px_v200 -8 a10.jpg                     842468   624597       26.51 sec    2602 MB
    paq8pxd_v95 -s8 mill.jpg                   7132151  4910289      350.38 sec   1984 MB
    paq8px_v200 -8 mill.jpg                    7132151  4952115      228.65 sec   2602 MB
    paq8pxd_v95 -s8 paq8px_v193_4_Corpuses.jpg 3340610  1367528      167.13 sec   1984 MB
    paq8px_v200 -8 paq8px_v193_4_Corpuses.jpg  3340610  1513850      105.90 sec   2602 MB
    paq8pxd_v95 -s8 DSCN0791.AVI               30018828 19858827     1336.94 sec  1984 MB
    paq8px_v200 -8 DSCN0791.AVI                30018828 20171981     992.85 sec   2602 MB
    So mill.jpg is 18571 bytes better, v95 vs v94. It's slower; I'm sure nobody cares. Some main context changes have zero time penalty but improve the result by some kB. For a10.jpg the new Map1 context adds only about 5 sec.
    1028 replies | 361607 view(s)
  • kaitz's Avatar
    13th January 2021, 21:15
    kaitz replied to a thread paq8px in Data Compression
    MC results, paq8px_v200 vs paq8pxd_v94:
    file          size      paq8px_v200 -s8  paq8pxd_v94 -s8
    A10.jpg       842468    624597           620980
    AcroRd32.exe  3870784   823707           831293
    english.dic   4067439   346422           347716
    FlashMX.pdf   4526946   1315382          1334877
    FP.LOG        20617071  215399           201621
    MSO97.DLL     3782416   1175358          1190012
    ohs.doc       4168192   454753           451278
    rafale.bmp    4149414   468156           468757
    vcfiu.hlp     4121418   372048           264060
    world95.txt   2988578   313915           311700
    Total                   6109737          6022294
    2283 replies | 605472 view(s)
  • Jyrki Alakuijala's Avatar
    13th January 2021, 11:30
    I suspect that VarDCT will be the most appropriate mode for this -- we just need to fix the remaining issues. Palette mode and delta palette mode can also be useful for a wide range of pixel art images. They are also not yet tuned for best performance but already show quite a lot of promise. My understanding is that -- for photographic images -- lossy modular mode provides a quality that is between libjpeg quality and VarDCT quality, but closer to libjpeg quality. I always used --distance for encoding, and VarDCT. We have a final version of the format now, so in that sense it is ok to start using it. For practical use it may be nice to wait until tooling support for JPEG XL catches up. JPEG XL committee members did a final quality review in November/December with many metrics and manual review of images where the metrics disagreed. The FDIS phase starts next week.
    34 replies | 2927 view(s)
  • Sebastianpsankar's Avatar
    13th January 2021, 03:30
    This was helpful and encouraging... Thanks...
    20 replies | 848 view(s)
  • Lithium Flower's Avatar
    12th January 2021, 17:31
    @Jyrki Alakuijala Thank you for your reply.
    I have some questions about Pixel Art. I use pingo png lossless -s0 to identify which images can be losslessly converted to pal8. Some Pixel Art images can't be losslessly converted and need vardct mode or modular mode. In my Pixel Art test (vardct mode -q 80 Speed 8 vs lossy modular mode -Q 80 Speed 9), vardct mode doesn't work very well (tiny artifacts), while lossy modular mode works fine on Pixel Art images. But it looks like lossy modular mode isn't recommended right now, so which mode is best practice?
    And about the lossy modular mode quality value (-Q luma_q): does this quality value roughly match libjpeg quality? I don't know whether comparing lossy modular -Q 80 Speed 9 against vardct -q 80 Speed 8 is a fair comparison.
    About Pixel Art png pal8: I tested a Pixel Art png pal8 (93 colors) in lossless modular mode -q 100 -s 9 -g 2 -E 3, but running png lossless optimization before jxl lossless increases the file size:
    jpeg xl lossless: People1.png 19.3kb => People1.jxl 19.3kb; People1.png 19.3kb => People1.png 16.4kb (pingo png lossless -s9) => People1.jxl 18.1kb
    Webp lossless: People1.png 19.3kb => People1.webp 16.5kb; People1.png 19.3kb => People1.png 16.4kb (pingo png lossless -s9) => People1.webp 16.5kb
    For Pixel Art png pal8 (colors near 256), jpeg xl lossless is best: rgb24 605k, force-converted to pal8 168k; jpeg xl lossless 135k; Webp lossless 157k.
    And I'm a little curious: is recommending jpeg xl 0.2 to my artist friend a good idea, or should I wait until the FDIS stage finishes?
    34 replies | 2927 view(s)
  • fcorbelli's Avatar
    12th January 2021, 14:24
    fcorbelli replied to a thread zpaq updates in Data Compression
    This is an example of a sequential scan...
    (...)
    540.739.857.890 379.656 time 16.536  /tank/condivisioni/
    540.739.857.890 379.656 time 17.588  /temporaneo/dedup/1/condivisioni/
    540.739.857.890 379.656 time 17.714  /temporaneo/dedup/2/tank/condivisioni/
    540.739.857.890 379.656 time 16.71   /temporaneo/dedup/3/tank/condivisioni/
    540.739.857.890 379.656 time 16.991  /temporaneo/dedup/4/condivisioni/
    540.739.857.890 379.656 time 93.043  /monta/nas1_condivisioni/
    540.739.857.890 379.656 time 67.312  /monta/nas2_condivisioni/
    540.739.840.075 379.656 time 362.129 /copia1/backup1/sincronizzata/condivisioni/
    ------------------------
    4.325.918.845.305 3.037.248 time 608.024 sec
    608.027 seconds (all OK)
    ...vs threaded:
    zpaqfranz v49.5-experimental journaling archiver, compiled Jan 11 2021
    Dir compare (8 dirs to be checked)
    Creating 8 scan threads
    12/01/2021 02:00:54 Scan dir || <</tank/condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</temporaneo/dedup/1/condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</temporaneo/dedup/2/tank/condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</temporaneo/dedup/3/tank/condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</temporaneo/dedup/4/condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</monta/nas1_condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</monta/nas2_condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</copia1/backup1/sincronizzata/condivisioni/>>
    Parallel scan ended in 330.402000
    About twice as fast (in this example).
    2653 replies | 1130758 view(s)
  • Dresdenboy's Avatar
    12th January 2021, 08:19
    You're welcome! Most of the files in this ZIP are compressed at a compression ratio of ~2 to 5. FONTES_*, MENU_* and TELEPORT_* are less compressed, with the latter two containing a lot of 16-bit data. They might contain bitmaps.
    5 replies | 392 view(s)
  • EmilyWasAway's Avatar
    12th January 2021, 06:11
    After reviewing what you both have said, it makes sense that the samples I posted are not using compression at this layer of the format. I'm not certain, but these files extracted from the header of the DPC appear to reference data located further down in the DPC, while the headers themselves are not compressed in this version. Thank you for the help. :)
    5 replies | 392 view(s)
  • Shelwien's Avatar
    12th January 2021, 03:13
    It's not a compressed format (at least not in the first layer of structure), but just a structured format with length prefixes and mostly floats inside.
    seg000:00000000 dd 3Fh
    seg000:00000004 dd 4
    seg000:00000008 dq 3Bh
    seg000:00000010 dd 52F79F96h
    seg000:00000014 dd 0C939BCA1h
    seg000:00000018 dd 0D24B3F6Fh
    seg000:0000001C aVLinkLefthandM dd 55
    seg000:0000001C db 'V:LINK "LeftHand" "MUSCLE_OFF_1_SUSPFL4_LOD1" AxeZ LOD1'
    seg000:00000057 dd 0A6h
    seg000:0000005B dd 62h
    seg000:0000005F dq 44h
    seg000:00000067 dd 73DC6A13h
    seg000:0000006B dd 0A3F0FCD9h
    seg000:0000006F dd 4AB1A4C3h
    seg000:00000073 dd 681AF697h
    seg000:00000077 dd 0BE02FCF1h
    seg000:0000007B dd 0BE5A0EC8h
    seg000:0000007F dd 0BA5BF080h
    seg000:00000083 dd 3E801E69h
    seg000:00000087 dd 3F800000h
    seg000:0000008B dd 0
    seg000:0000008F dd 0
    seg000:00000093 dd 0BE02FCF1h
    seg000:00000097 dd 0
    seg000:0000009B dd 3F800000h
    seg000:0000009F dd 0
    seg000:000000A3 dd 0BE5A0EC8h
    seg000:000000A7 dd 0
    seg000:000000AB dd 0
    seg000:000000AF dd 3F800000h
    seg000:000000B3 dd 0BA5BF080h
    seg000:000000B7 dd 3E801E69h
    seg000:000000BB dd 3E801E69h
    seg000:000000BF dd 3E801E69h
    seg000:000000C3 dd 3EDDE882h
    seg000:000000C7 dd 0
    5 replies | 392 view(s)
  • EmilyWasAway's Avatar
    12th January 2021, 01:21
    I considered that it could be a custom format but the similarities to the previous DPC formats and sections of the file that look like this lead me to investigate the possibility of compression. Although if it is compressed, it's not by much.
    5 replies | 392 view(s)
  • Jyrki Alakuijala's Avatar
    12th January 2021, 01:03
    Thank you. This is very useful. Yes, looks awful. I had an off-by-one for smooth area detection heuristic and those areas were detected 4 pixels off. There is likely an improvement on this in the next public release, as well as an overall reduction (5-10 %) of such artifacts from other heuristic improvements -- with some more contrast preserved in the middle frequency band (where other formats do often pretty bad). If you find these images in the next version, please keep sending samples. Consider submitting them to http://gitlab.com/wg1/jpeg-xl as an issue.
    34 replies | 2927 view(s)
  • Lithium Flower's Avatar
    11th January 2021, 21:30
    I get an issue in vardct mode. I'm using jpeg xl 0.2 -d 0.8 (Speed: kitten); on some non-photographic (more natural synthetic) images everything is fine, but some blue and red areas have tiny artifacts (noise?). When using vardct mode on non-photographic (more natural synthetic) images, do I need other jpeg xl flags (filters) to get a great result?
    34 replies | 2927 view(s)
  • Dresdenboy's Avatar
    11th January 2021, 20:24
    With those runs of zeroes and the compression ratio in the samples zip I think those files aren't compressed at all, just some custom data format.
    5 replies | 392 view(s)
  • fcorbelli's Avatar
    11th January 2021, 19:00
    fcorbelli replied to a thread zpaq updates in Data Compression
    This is version 49.5. It should also compile on Linux (tested only on Debian), plus FreeBSD and Windows (gcc). I have added some functions that I think are useful.
    The first is the l (list) command. With ONE parameter (the .ZPAQ file) it now shows its contents. With more than one parameter, it compares the contents of the ZPAQ archive with one or more folders, with a (block) check of SHA1s (the old -not =). It can be used as a quick check after an add:
    zpaqfranz a z:\1.zpaq c:\pippo
    zpaqfranz l z:\1.zpaq c:\pippo
    Then I introduce the command c (compare) for directories, between a master and N slaves. With the -all switch it launches N+1 threads. The default verification is by file name and size only; applying the -crc32 switch also verifies the checksum.
    WHAT?
    When verifying that backups work correctly, it is normal to extract them onto several different media (devices): for example folders synchronized with rsync to a NAS, ZIP files, ZPAQ via NFS-mounted shares, smbfs, internal HDD etc. Comparing multiple copies can take a (very) long time.
    Suppose you have a /tank/condivisioni master (or source) directory (hundreds of GB, hundreds of thousands of files).
    Suppose you have some internal (HDD) and external (NAS) rsynced copies (/rsynced-copy-1, /rsynced-copy-2, /rsynced-copy-3...).
    Suppose you have an internal ZIP backup, an internal ZPAQ backup, an external NAS1 zip backup, an external NAS2 zpaq backup and so on. Let's extract all of them (ZIP and ZPAQs) into /temporaneo/1, /temporaneo/2, /temporaneo/3... You can do something like:
    diff -qr /temporaneo/condivisioni /temporaneo/1
    diff -qr /temporaneo/condivisioni /temporaneo/2
    diff -qr /temporaneo/condivisioni /temporaneo/3
    (...)
    diff -qr /temporaneo/condivisioni /rsynced-copy-1
    diff -qr /temporaneo/condivisioni /rsynced-copy-2
    diff -qr /temporaneo/condivisioni /rsynced-copy-3
    (...)
    But this can take a lot of time (many hours) even on fast machines.
    The command c compares a master folder (the first indicated) to N slave folders (all the others) in two particular operating modes. By default it just checks the correspondence of files and their size (extremely useful, for example, for rsync copies with different charsets: unix vs linux, mac vs linux, unix vs ntfs). Using the -crc32 switch a check of this code is also made (with HW CPU support, if available).
    The interesting aspect is the -all switch: N+1 threads will be created (one for each specified folder) and executed in parallel, both for scanning and for calculating the CRC. On modern servers (e.g. Xeon with 8, 10 or more CPUs) with different media (internal) and multiple connections (NICs) to NASs you can drastically reduce times compared to multiple sequential diff -qr runs. It clearly makes no sense for single magnetic disks.
    In the given example
    zpaqfranz c /tank/condivisioni /temporaneo/1 /temporaneo/2 /temporaneo/3 /rsynced-copy-1 /rsynced-copy-2 /rsynced-copy-3 -all
    will run 7 threads which take care of one directory each. The hypothesis is that the six copies are each on a different device, and the server has plenty of cores and NICs. That is normal in datastorage and virtualization environments (at least in mine).
    Win32 and Win64 builds:
    http://www.francocorbelli.it/zpaqfranz.exe
    http://www.francocorbelli.it/zpaqfranz32.exe
    2653 replies | 1130758 view(s)
  • CompressMaster's Avatar
    11th January 2021, 17:11
    Thanks. But I am not familiar with Java or Android programming, so I don't know how to get it to work. A detailed step-by-step manual would be very beneficial :). btw, better to rename this thread to something like "android camera - use different zooming algorithm".
    2 replies | 139 view(s)
  • EmilyWasAway's Avatar
    11th January 2021, 12:46
    I'm reverse engineering a version of Asobo Studio's DPC archive format used in the PC release of the game FUEL (2009). I am able to unwrap the first "layer" of the format by breaking the archive down into the files described in the DPC header, using a modified version of this MexScript. However, these extracted files appear to be compressed with a custom LZ variant. Some games released before FUEL (CT Special Forces: Fire for Effect, Ratatouille, and Wall-E) each used a slightly different LZ variant than the previous release, so I am expecting FUEL to use something similar to those. @Shelwien has provided a series of unLZ_rhys scripts in previous posts (linked at the bottom), but none of them seem to properly decompress the files I extracted. I have attached a selection of extracted files that appear to be compressed and contain a small amount of text near the beginning. They all follow a similar pattern to the one in this image, which closely resembles the compressed files from the previous posts. In theory this should only require a small modification to the unLZ_rhys tool, but unfortunately I cannot seem to figure out the header layout/mask for this new version of the format. Any help with how to modify the tool, or advice in general, would be greatly appreciated. If you need more samples or the original DPC files I can provide them.
    https://encode.su/threads/3147-Reverse-Engineering-Custom-LZ-Variant
    https://encode.su/threads/3526-Asobo-s-Ratatouille-DPC-Data-Compression
    5 replies | 392 view(s)
  • suryakandau@yahoo.co.id's Avatar
    11th January 2021, 08:55
    There is a trade-off between compression ratio and speed... Like cmix... :)
    218 replies | 19980 view(s)
  • kaitz's Avatar
    11th January 2021, 04:22
    kaitz replied to a thread Paq8sk in Data Compression
    I think you can get a10 below 618xxx in 100sec or less. :D 619xxx in 50 sec.
    218 replies | 19980 view(s)
  • Shelwien's Avatar
    11th January 2021, 03:23
    > In a multithreaded system with multiple cores and an operating system with virtual memory > (windows, linux, unix), can you have a CPU exception when two instructions modify the same memory cell? No, there're no exceptions for this. Just that a "global variable" without "volatile" might be actually kept in a register. > Or does the content simply become not well defined? In a sense. You just can't predict the order of read/write operations working in different threads.
    15 replies | 493 view(s)
  • Shelwien's Avatar
    11th January 2021, 03:18
    > The question is: if it is NOT important that the global variables are
    > perfectly up to date, can I safely (no CPU exception) avoid a semaphore or
    > something like that (obviously reducing the latency, this is the ultimate goal)?
    On x86 yes, though you'd have to add the "volatile" specifier to the variables accessed from multiple threads. On some weird platforms like PPC and embedded ones you might also need explicit cache flushes and/or intrinsics like __sync_synchronize(). So yes, on x86 it's quite possible to implement MT without explicit semaphores - it's simply less efficient when a thread spins in a loop waiting for some global variable, while with thread APIs it could release the core to the OS while it waits. There're also some interesting new tools: https://gcc.gnu.org/onlinedocs/gcc-4.9.2/gcc/X86-transactional-memory-intrinsics.html
    15 replies | 493 view(s)
  • Shelwien's Avatar
    11th January 2021, 02:54
    https://stackoverflow.com/questions/37763257/android-bitmap-resizing-using-better-resampling-algorithm-than-bilinear-like-l
    2 replies | 139 view(s)
  • suryakandau@yahoo.co.id's Avatar
    11th January 2021, 01:42
    Yes, I am interested in the limit of the jpeg compression ratio. Btw, how much can jpeg be compressed?
    218 replies | 19980 view(s)
  • Lithium Flower's Avatar
    10th January 2021, 23:04
    @Jyrki Alakuijala Thank you for your reply.
    If I want to increase fidelity in vardct mode (jpeg xl 0.2), is target distance -d 0.8 (Speed: kitten) probably a good distance?
    -q 90 == -d 1.000 // visually lossless (side by side)
    -q 91 == -d 0.900
    -q 92 == -d 0.800
    -q 93 == -d 0.700
    -q 94 == -d 0.600
    -q 95 == -d 0.550 // visually lossless (flicker-test)
    34 replies | 2927 view(s)
  • Piotr Tarsa's Avatar
    10th January 2021, 21:39
    Well, there are many things at play:
    1) There should be no exception on write, whatever that means. One thread will win the race. However, when you read the value back it could be in an inconsistent state, e.g. one thread won with one part of the result and the other thread won with the other part, so in the end the result is corrupted and using it will result in an exception, segfault, etc.
    2) There is always some transaction size. I think that if you have a register of size 2^N bytes and you write to a memory location aligned at 2^N bytes, then your write will either succeed fully or be overwritten fully. This means that if you e.g. store a pointer to a field aligned to the pointer size, it will either succeed fully or be overwritten fully by another thread. In either case there will be a valid pointer if both threads write valid pointers.
    3) You need to be aware of https://en.wikipedia.org/wiki/Memory_model_(programming) and https://en.wikipedia.org/wiki/Consistency_model . For example, if you add neither volatile nor atomic modifiers, the compiler is allowed to cache the value in a register and potentially update the real memory cell very rarely. If you don't use memory fences (atomics trigger memory fences), a CPU core could delay updating other cores, so the other core would get stale data.
    4) Transformations (e.g. addition) are done at the CPU level, so the CPU needs to invoke many steps: load value, change value, store value. Since there are multiple steps, another CPU core could access the data between steps of the first core. Therefore, to implement atomics, instructions like https://en.wikipedia.org/wiki/Compare-and-swap are needed to verify at the end of the transformation that the original value is still at the original memory location. If not, the compare-exchange instruction fails and the whole transformation is repeated, until compare-exchange succeeds. With reasonably low contention between threads the success rate is high.
    5) The CPU instructions define the guarantees you'll see in practice. So if you copy 8 bytes one byte at a time and two threads are doing that on the same memory location, you won't get the guarantees of 8-byte writes done as a single instruction.
    6) On some CPUs (e.g. ARM ones) there are only aligned writes, so the compiler has to emulate unaligned writes using aligned writes. For example, if you write 4 bytes at memory address 13: 13 % 4 != 0, so the compiler needs to issue two 4-byte writes, each transforming data that's already there. Because this is a multi-step non-atomic transformation, there could be data corruption if multiple threads access the memory location.
    15 replies | 493 view(s)
  • fcorbelli's Avatar
    10th January 2021, 21:22
    I will set up a Debian virtual machine and fix the BSD-dependent code. But the question is still the one from the first post: in a multithreaded system with multiple cores and an operating system with virtual memory (Windows, Linux, Unix), can you get a CPU exception when two instructions modify the same memory cell? Or does the content simply become undefined?
    15 replies | 493 view(s)
  • CompressMaster's Avatar
    10th January 2021, 21:19
    Is there a good Android-related forum somewhere? I want to alter the camera's digital zooming algorithm to use Gaussian interpolation instead of bilinear.
    2 replies | 139 view(s)
  • CompressMaster's Avatar
    10th January 2021, 21:13
    CompressMaster replied to a thread Paq8sk in Data Compression
    @suryakandau What about optimizing paq8sk for the file A10.jpg from the Maximum Compression corpus? I am interested in what the limit of the compression ratio is. :)
    218 replies | 19980 view(s)
  • Piotr Tarsa's Avatar
    10th January 2021, 20:56
    What if you use `std::atomic<std::int64_t>` instead of its alias `atomic_int64_t`?
    15 replies | 493 view(s)
  • fcorbelli's Avatar
    10th January 2021, 20:44
    Including #include <atomic> and declaring
    atomic_int64_t g_bytescanned;
    atomic_int64_t g_filescanned;
    compiles on Windows, but not on FreeBSD:
    paq.cpp:356:1: error: 'atomic_int64_t' does not name a type; did you mean 'u_int64_t'?
    atomic_int64_t g_bytescanned;
    15 replies | 493 view(s)
  • Piotr Tarsa's Avatar
    10th January 2021, 20:26
    Hmm,
    $ gcc -O3 -march=native -Dunix zpaq.cpp -static -lstdc++ libzpaq.cpp -pthread -o zpaq -static -lm > errors.txt 2>&1
    gives me:
    zpaq.cpp: In function 'bool comparecrc32block(s_crc32block, s_crc32block)':
    zpaq.cpp:3001:40: warning: format '%lld' expects argument of type 'long long int', but argument 3 has type 'uint64_t {aka long unsigned int}'
      sprintf(a_start,"%014lld",a.crc32start);
    zpaq.cpp:3002:40: warning: format '%lld' expects argument of type 'long long int', but argument 3 has type 'uint64_t {aka long unsigned int}'
      sprintf(b_start,"%014lld",b.crc32start);
    zpaq.cpp: In function 'void mygetch()':
    zpaq.cpp:3202:17: error: aggregate 'mygetch()::termios oldt' has incomplete type and cannot be defined
    zpaq.cpp:3202:23: error: aggregate 'mygetch()::termios newt' has incomplete type and cannot be defined
    zpaq.cpp:3203:2: error: 'tcgetattr' was not declared in this scope (suggested alternative: 'tcgetpgrp')
    zpaq.cpp:3205:21: error: 'ICANON' was not declared in this scope
    zpaq.cpp:3205:30: error: 'ECHO' was not declared in this scope (suggested alternative: 'EIO')
    zpaq.cpp:3206:28: error: 'TCSANOW' was not declared in this scope (suggested alternative: 'TCSETAW')
    zpaq.cpp:3206:2: error: 'tcsetattr' was not declared in this scope (suggested alternative: 'tcsetpgrp')
    zpaq.cpp: In function 'int myhuman(char*, size_t, int64_t, const char*, int, int)':
    zpaq.cpp:7869:14: error: 'HN_DIVISOR_1000' was not declared in this scope
    zpaq.cpp:7872:15: error: 'HN_B' was not declared in this scope
    zpaq.cpp:7882:15: error: 'HN_B' was not declared in this scope
    zpaq.cpp:7892:16: error: 'HN_AUTOSCALE' was not declared in this scope
    zpaq.cpp:7892:31: error: 'HN_GETSCALE' was not declared in this scope (suggested alternative: 'F_GET_SEALS')
    zpaq.cpp:7909:14: error: 'HN_NOSPACE' was not declared in this scope (suggested alternative: 'N_6PACK')
    zpaq.cpp:7921:15: error: 'HN_AUTOSCALE' was not declared in this scope
    zpaq.cpp:7921:30: error: 'HN_GETSCALE' was not declared in this scope (suggested alternative: 'F_GET_SEALS')
    zpaq.cpp:7936:38: error: 'HN_DECIMAL' was not declared in this scope
    zpaq.cpp: In function 'void tohuman(long long int, char*)':
    zpaq.cpp:7959:10: error: 'HN_B' was not declared in this scope
    zpaq.cpp:7959:17: error: 'HN_NOSPACE' was not declared in this scope (suggested alternative: 'N_6PACK')
    zpaq.cpp:7959:30: error: 'HN_DECIMAL' was not declared in this scope
    zpaq.cpp:7960:11: error: 'HN_DIVISOR_1000' was not declared in this
scope flags |= HN_DIVISOR_1000; ^~~~~~~~~~~~~~~ zpaq.cpp:7962:17: error: ‘HN_AUTOSCALE’ was not declared in this scope bytes, "", HN_AUTOSCALE, flags); ^~~~~~~~~~~~ zpaq.cpp: In function ‘long long int getfreespace(const char*)’: zpaq.cpp:7968:16: error: aggregate ‘getfreespace(const char*)::statfs stat’ has incomplete type and cannot be defined struct statfs stat; ^~~~ zpaq.cpp:7970:26: error: invalid use of incomplete type ‘struct getfreespace(const char*)::statfs’ if (statfs(i_path, &stat) != 0) ^ zpaq.cpp:7968:9: note: forward declaration of ‘struct getfreespace(const char*)::statfs’ struct statfs stat; ^~~~~~ zpaq.cpp:7982:3: error: ‘getbsize’ was not declared in this scope getbsize(&dummy, &blocksize); ^~~~~~~~ zpaq.cpp:7982:3: note: suggested alternative: ‘getsid’ getbsize(&dummy, &blocksize); ^~~~~~~~ getsid zpaq.cpp: In member function ‘int Jidac::summa()’: zpaq.cpp:8403:103: warning: embedded ‘\0’ in format sprintf(p->second.sha1hex,"%08X\0x0",crc32_calc_file(filename.c_str(),inizio,total_size,lavorati)); ^ zpaq.cpp:8409:104: warning: embedded ‘\0’ in format sprintf(p->second.sha1hex,"%08X\0x0",crc32c_calc_file(filename.c_str(),inizio,total_size,lavorati)); ^ zpaq.cpp:8416:53: warning: format ‘%llX’ expects argument of type ‘long long unsigned int’, but argument 3 has type ‘uint64_t {aka long unsigned int}’ sprintf(p->second.sha1hex,"%016llX\0x0",result2); ^ zpaq.cpp:8416:53: warning: embedded ‘\0’ in format zpaq.cpp:8488:76: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 2 has type ‘int64_t {aka long int}’ printf("Worked on %lld average speed %s B/s\n",lavorati,migliaia(myspeed)); ^ zpaq.cpp: In function ‘int unz(const char*, const char*, bool)’: zpaq.cpp:11231:116: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘long unsigned int’ printf("%02d frags %s (RAM used ~ %s)\r",100-(offset*100/(total_size+1)),migliaia(frag.size()),migliaia2(ramsize)); ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ 
zpaq.cpp:11286:140: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 2 has type ‘unsigned int’ printf("File %08lld of %08lld (%20s) %1.3f %s\n",i+1,(long long)mappadt.size(),migliaia(size),(mtime()-startrecalc)/1000.0,fn.c_str()); ~~~ ^ zpaq.cpp: In member function ‘int Jidac::test()’: zpaq.cpp:11721:58: warning: format ‘%lu’ expects argument of type ‘long unsigned int’, but argument 2 has type ‘unsigned int’ printf("Block %08lu K %s\r",i/1000,migliaia(lavorati)); ~~~~~~ ^ zpaq.cpp:11839:102: warning: format ‘%X’ expects argument of type ‘unsigned int’, but argument 3 has type ‘const char*’ printf("SURE: STORED %08X = DECOMPRESSED = FROM FILE %08Xn",crc32stored,filedefinitivo.c_str()); ~~~~~~~~~~~~~~~~~~~~~~^ zpaq.cpp:11852:89: warning: format ‘%X’ expects argument of type ‘unsigned int’, but argument 3 has type ‘const char*’ printf("GOOD: STORED %08X = DECOMPRESSED %08X\n",crc32stored,filedefinitivo.c_str()); ~~~~~~~~~~~~~~~~~~~~~~^ zpaq.cpp: In function ‘bool comparefilenamesize(s_fileandsize, s_fileandsize)’: zpaq.cpp:11962:33: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 3 has type ‘uint64_t {aka long unsigned int}’ sprintf(a_size,"%014lld",a.size); ~~~~~~^ zpaq.cpp:11963:33: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 3 has type ‘uint64_t {aka long unsigned int}’ sprintf(b_size,"%014lld",b.size); ~~~~~~^ zpaq.cpp: In function ‘bool comparefilenamedate(s_fileandsize, s_fileandsize)’: zpaq.cpp:11970:33: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 3 has type ‘int64_t {aka long int}’ sprintf(a_size,"%014lld",a.date); ~~~~~~^ zpaq.cpp:11971:33: warning: format ‘%lld’ expects argument of type ‘long long int’, but argument 3 has type ‘int64_t {aka long int}’ sprintf(b_size,"%014lld",b.date); ~~~~~~^ zpaq.cpp: In member function ‘int Jidac::dir()’: zpaq.cpp:12008:39: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has 
type ‘std::vector<std::__cxx11::basic_string<char> >::size_type {aka long unsigned int}’ printf("FIles.size %d\n",files.size()); ~~~~~~~~~~~~^ zpaq.cpp:12108:87: warning: format ‘%d’ expects argument of type ‘int’, but argument 3 has type ‘uint64_t {aka long unsigned int}’ printf("PRE %08d %08d %s\n",i,fileandsize.size,fileandsize.filename.c_str()); ^ zpaq.cpp:12118:88: warning: format ‘%d’ expects argument of type ‘int’, but argument 3 has type ‘uint64_t {aka long unsigned int}’ printf("POST %08d %08d %s\n",i,fileandsize.size,fileandsize.filename.c_str()); ^ zpaq.cpp:12176:173: warning: format ‘%ld’ expects argument of type ‘long int’, but argument 2 has type ‘int’ printf("%08ld CRC-1 %08X %s %19s %s\n",i,fileandsize.crc32,dateToString(fileandsize.date).c_str(),migliaia(fileandsize.size),fileandsize.filename.c_str()); ^ zpaq.cpp:12181:39: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘int64_t {aka long int}’ printf("Done %03d ",percentuale); ^ zpaq.cpp:12204:182: warning: format ‘%ld’ expects argument of type ‘long int’, but argument 2 has type ‘int’ printf("%08ld CRC-2 %08X %s %19s %s\n",i+1,fileandsize.crc32,dateToString(fileandsize.date).c_str(),migliaia(fileandsize.size),fileandsize.filename.c_str()); ~~~ ^ zpaq.cpp:12209:38: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘int64_t {aka long int}’ printf("Done %03d ",percentuale); ^ zpaq.cpp:12250:88: warning: format ‘%ld’ expects argument of type ‘long int’, but argument 2 has type ‘int’ printf("%ld size %s <<%08X>>\n",i,migliaia(fileandsize.size()),fileandsize.crc32); ^ zpaq.cpp:12260:38: warning: format ‘%ld’ expects argument of type ‘long int’, but argument 2 has type ‘int’ printf("Limit founded %ld\n",limite); ^ zpaq.cpp:12353:71: warning: format ‘%ld’ expects argument of type ‘long int’, but argument 2 has type ‘int’ printf(" %8ld File %19s byte\n",quantifiles,migliaia(total_size)); ^ zpaq.cpp:12355:43: warning: format ‘%ld’ expects argument of 
type ‘long int’, but argument 2 has type ‘int’ printf(" %8ld directory",quantedirectory); ^ zpaq.cpp: In member function ‘int Jidac::dircompare()’: zpaq.cpp:12709:61: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘std::vector<std::__cxx11::basic_string<char> >::size_type {aka long unsigned int}’ printf("Dir compare (%d dirs to be checked)\n",files.size()); ~~~~~~~~~~~~^ zpaq.cpp:12769:51: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘std::vector<std::__cxx11::basic_string<char> >::size_type {aka long unsigned int}’ printf("Creating %d scan threads\n",files.size()); ~~~~~~~~~~~~^ I have $ gcc --version gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Copyright (C) 2017 Free Software Foundation, Inc. Anyway, have you tried inserting my code somewhere in your code and then compiling it?
    15 replies | 493 view(s)
  • fcorbelli's Avatar
    10th January 2021, 20:17
    I do not use Linux, but Windows and Unix (FreeBSD and Solaris). Compile instructions are in the first lines; under FreeBSD, for example, it is:

    gcc7 -O3 -march=native -Dunix zpaq.cpp -static -lstdc++ libzpaq.cpp -pthread -o zpaq -static -lm

    The file is encoded in ANSI.
    15 replies | 493 view(s)
  • Piotr Tarsa's Avatar
    10th January 2021, 20:11
    How to compile it under Linux (I have Ubuntu 18.04.5)? I keep getting errors like this:

    zpaq.cpp:7871:14: error: ‘HN_DIVISOR_1000’ was not declared in this scope
    zpaq.cpp:7874:15: error: ‘HN_B’ was not declared in this scope
    zpaq.cpp:7884:15: error: ‘HN_B’ was not declared in this scope
    zpaq.cpp:7894:16: error: ‘HN_AUTOSCALE’ was not declared in this scope
    zpaq.cpp:7894:31: error: ‘HN_GETSCALE’ was not declared in this scope
    zpaq.cpp:7894:31: note: suggested alternative: ‘F_GET_SEALS’
    zpaq.cpp:7911:14: error: ‘HN_NOSPACE’ was not declared in this scope

    Also, what's the encoding of that file? It probably isn't UTF-8, as gedit complains.
    15 replies | 493 view(s)
  • fcorbelli's Avatar
    10th January 2021, 19:40
    This one
    15 replies | 493 view(s)
  • Piotr Tarsa's Avatar
    10th January 2021, 19:05
    I don't know what exactly your requirements are. You say you can't do something, or need to do something else, etc. It wasn't even clear from the start whether you want a solution in C or C++. Here's my example of mixing C pthread.h with C++ atomic (and of course compiling with a C++ compiler):

    #include <cstdio>
    #include <pthread.h>
    #include <atomic>

    void *inc_cnt(void *cnt_void_ptr) {
        std::atomic<int> *cnt_ptr = (std::atomic<int> *) cnt_void_ptr;
        for (int i = 0; i < 100100100; i++) {
            (*cnt_ptr)++;
        }
        return NULL;
    }

    int main() {
        std::atomic<int> cnt = {0};
        printf("start cnt: %d\n", cnt.load());
        pthread_t inc_cnt_thread;
        if (pthread_create(&inc_cnt_thread, NULL, inc_cnt, &cnt)) {
            fprintf(stderr, "Error creating thread\n");
            return 1;
        }
        inc_cnt(&cnt);
        if (pthread_join(inc_cnt_thread, NULL)) {
            fprintf(stderr, "Error joining thread\n");
            return 2;
        }
        printf("final cnt: %d\n", cnt.load());
        return 0;
    }

    It works for me:

    $ g++ -pthread atomics.cpp
    $ ./a.out
    start cnt: 0
    final cnt: 200200200
    15 replies | 493 view(s)
  • fcorbelli's Avatar
    10th January 2021, 18:25
    Including #include <stdatomic.h> is not a really good idea, because you can get a gazillion conflicts:

    In file included from zpaq.cpp:127:0:
    c:\mingw\lib\gcc\x86_64-w64-mingw32\7.3.0\include\stdatomic.h:40:9: error: '_Atomic' does not name a type; did you mean '_wtoi'?
     typedef _Atomic _Bool atomic_bool;

    So you need to include #include <threads.h>
    15 replies | 493 view(s)
  • Mauro Vezzosi's Avatar
    10th January 2021, 16:48
    CMV 0.3.0 alpha 1, coronavirus.fasta (the .2bit version is much worse)

    Compressed size / Time (1) / Options (First 200 MiB vs. whole file):
    207 KiB / 940888 / 45 h / -m2 (decompression not verified)
    179 KiB / ~~18 d / -m2,0,>
    ~11 d / -m2,0,0x3ba3629e (current (2021/01/10) optimized options based on the first 1 MiB)
    218149 CMV.exe compressed with 7-Zip 9.20 (.zip)
    1159037 Total

    (1) Intel(R) Core(TM) i7-4710HQ CPU @ 2.50GHz (up to 3.50GHz), 8 GB RAM DDR3, Windows 8.1 64 bits.

    I haven't tested the reverse-complement order yet (1 2).
    33 replies | 1644 view(s)
  • Piotr Tarsa's Avatar
    10th January 2021, 16:45
    Erm, what? Atomics of course make sense only with multiple threads, but you don't need a particular threads implementation to use them. Here you have the non-C++ version of atomics: https://en.cppreference.com/w/c/atomic https://en.cppreference.com/w/c/language/atomic
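    To illustrate the point: C++'s <atomic> also exposes the same free-function API that C11's <stdatomic.h> defines, so atomic code ports between the two languages with little change. This is a minimal sketch (the variable name `counter` is just for illustration), compiled as C++:

    ```cpp
    #include <atomic>
    #include <cstdio>

    int main() {
        std::atomic<int> counter{0};
        // Free-function style, mirroring C11 atomic_fetch_add / atomic_load /
        // atomic_store from <stdatomic.h>:
        std::atomic_fetch_add(&counter, 5);
        std::atomic_store(&counter, std::atomic_load(&counter) + 1);
        printf("%d\n", counter.load());  // prints 6
        return 0;
    }
    ```

    The same source, with `std::` removed and `_Atomic int` instead of `std::atomic<int>`, would compile as C11, which is what makes atomics independent of any particular threading library.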
    15 replies | 493 view(s)
  • fcorbelli's Avatar
    10th January 2021, 15:51
    Atomic (std::atomic) is for C++ threads (std::threads)
    15 replies | 493 view(s)
  • madserb's Avatar
    10th January 2021, 15:29
    madserb replied to a thread Paq8sk in Data Compression
    SSE2 and AVX2 builds here:
    218 replies | 19980 view(s)
  • Piotr Tarsa's Avatar
    10th January 2021, 14:51
    Have you considered atomics? They are good when your transaction scope is just a single variable and there's not huge contention between threads (like several thousand updates per second). Atomics don't cause thread sleeping / parking / waiting / whatever you call it. If an update of an atomic variable fails, the update is repeated. That works well if failure doesn't happen too often. The tradeoff threshold depends on how expensive it is to recompute the operation. Simple addition of a constant is very cheap, so it can be repeated quite a few times before atomics can be considered unfit for a particular problem.
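    The retry-on-failure behaviour described above can be sketched with a compare-and-swap loop. This is a minimal illustration, not code from zpaq; the lambda name `add_k` and the counts are made up:

    ```cpp
    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        std::atomic<long long> total{0};
        // Add k via an explicit CAS retry loop (fetch_add would do this for us,
        // but the loop shows the general pattern for arbitrary updates).
        auto add_k = [&](long long k) {
            long long cur = total.load(std::memory_order_relaxed);
            // compare_exchange_weak fails if another thread changed `total`
            // in the meantime; on failure it reloads `cur` and we retry.
            while (!total.compare_exchange_weak(cur, cur + k,
                                                std::memory_order_relaxed)) {
                // addition is cheap, so occasional retries cost little
            }
        };
        std::vector<std::thread> workers;
        for (int t = 0; t < 4; t++)
            workers.emplace_back([&] {
                for (int i = 0; i < 100000; i++) add_k(1);
            });
        for (auto &w : workers) w.join();
        printf("%lld\n", total.load());  // prints 400000: no update is lost
        return 0;
    }
    ```

    Despite four threads racing, every increment lands, because a failed CAS simply recomputes `cur + k` from the freshly observed value and tries again.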
    15 replies | 493 view(s)
  • fcorbelli's Avatar
    10th January 2021, 14:14
    I am working on a modification of a certain parallel-execution program (ZPAQ!), using the venerable pthreads. I already know everything about traffic lights (!!!) aka semaphores, mutexes etc. (MS in computer science), but I'm reflecting on what is actually mandatory under some relaxed assumptions. It is interesting, for me, to work on concrete cases, to see how the theory (which is taught as a dogma of faith) materializes.

    Classic problem of a variable shared globally between various threads:

    int globale=0;
    int contatore=0;
    (...)
    pthread()
      int k=something();
      globale+=k;
      contatore++;
    (main)
      runs N threads, which generate different "something" values

    In this pseudo-code the two globals (globale and contatore) are changed directly by the threads. This is not good. Typically I would do this:

    sem_t s; /* semaphore */
    (...)
    P(&s);
    globale+=k;
    contatore++;
    V(&s);
    (...)

    or whatever (a mutex).

    Suppose, however, that we have no interest in maintaining "perfect" values for the global variables. If, for some reason, they are changed erroneously (not updated, updated several times, zeroed, or whatever), we don't care. They are just statistical info (# of compressed files, worked size, etc.).

    In this situation a modern CPU (with multiple cores, even on separate dies) should still have an arbitration mechanism for RAM changes (and for the cache hierarchies, L1, L2, etc. too). Is there a sort of HW mechanism for managing critical sections (portions of RAM, which are actually pages of virtual memory, etc.; in short, it is extremely complex to establish what a simple "contatore++" does)?

    core/cpu 1
      move.l contatore, D0
      add.l #1, D0
      move.l D0, contatore
    core/cpu 2
      move.l contatore, D0
      add.l #1, D0
      move.l D0, contatore
    core/cpu 3
      move.l contatore, D0
      add.l #1, D0
      move.l D0, contatore
    (...)

    Such pseudocode obviously triggers a series of enormous activities before becoming a "real" change to a physical RAM cell, even more so when using operating systems with virtual memory (as almost always).

    The question is: if it is NOT important that the global variables are perfectly up to date, can I safely (no CPU exception) avoid a semaphore or something like that (obviously reducing the latency, which is the ultimate goal)?
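    One hedged answer to the question above: in C++11 a plain `int` written from several threads is formally a data race (undefined behaviour), even if approximate values would be acceptable; relaxed atomics remove both the semaphore and the UB while keeping the counters exact and cheap. A minimal sketch reusing the globals from the post (the worker body and counts are invented for illustration):

    ```cpp
    #include <atomic>
    #include <cstdio>
    #include <pthread.h>

    // Same two globals as in the pseudo-code, but race-free without a
    // semaphore: fetch_add with memory_order_relaxed is a single atomic
    // RMW instruction, with no locking and no ordering guarantees beyond
    // the counter itself (fine for purely statistical values).
    std::atomic<long long> globale{0};
    std::atomic<long long> contatore{0};

    void *worker(void *) {
        for (int i = 0; i < 1000; i++) {
            long long k = 7;  // stand-in for something()
            globale.fetch_add(k, std::memory_order_relaxed);
            contatore.fetch_add(1, std::memory_order_relaxed);
        }
        return nullptr;
    }

    int main() {
        pthread_t th[4];
        for (auto &t : th) pthread_create(&t, nullptr, worker, nullptr);
        for (auto &t : th) pthread_join(t, nullptr);
        // With relaxed atomics the totals are exact, not merely approximate:
        printf("globale=%lld contatore=%lld\n", globale.load(), contatore.load());
        return 0;
    }
    ```

    Compile with `g++ -pthread`. The latency win comes from avoiding any kernel-mediated wait: a contended `lock add` still costs a cache-line bounce, but never puts a thread to sleep.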
    15 replies | 493 view(s)
  • suryakandau@yahoo.co.id's Avatar
    10th January 2021, 06:31
    Paq8sk43
    - change jpeg model
    - fix/change detection (tiff, text)

    f.jpg -s8: 80751 bytes, 33.61 sec
    a10.jpg -s8: 619637 bytes, 209.85 sec

    source: https://github.com/skandau/paq8sk
    218 replies | 19980 view(s)
  • Gotty's Avatar
    10th January 2021, 03:32
    Fixed.
    30 replies | 1400 view(s)
  • suryakandau@yahoo.co.id's Avatar
    10th January 2021, 03:21
    the link https://visualstudio.microsoft.com/vs/community/ gives "page not found".
    30 replies | 1400 view(s)
  • Gotty's Avatar
    10th January 2021, 03:18
    Verified under Lubuntu 19.04 64-bit. Generated successfully: 1) the md5 checksum matches, 2) the content matches my version.
    33 replies | 1644 view(s)
  • innar's Avatar
    10th January 2021, 01:44
    If I am not mistaken, this command would make the approach by JamesB identical to Gotty's transform:

    cat coronavirus.fasta | sed 's/>.*/<&</' | tr -d '\n' | tr '<' '\n' | tail -n +2 > coronavirus.fasta.un-wrapped
    printf "\n" >> coronavirus.fasta.un-wrapped

    (added \n to align with Gotty's file)

    Both (Gotty's transform and JamesB's sed/tr/tail logic) have md5sum e0d7c063a65c7625063c17c7c9760708. Would JamesB or somebody else mind double-checking that the command under *nix produces a correct un-wrapped FASTA file? Thanks!

    PS! I started rerunning paq8l and cmix v18 on this file. Will make the announcement on the web and elsewhere to focus on this transform after some extra eyes re: the correctness of the file (would prefer the original file + standard *nix commands for the transform).
    33 replies | 1644 view(s)
  • NoeticRon's Avatar
    9th January 2021, 23:22
    ^^It really was a factor in my decision to compress it and also the fact that I was running out of my hoarding space. I have learnt a lot about the algos from you and the fileforums site...thanks a lot for being patient with a newbie in the scene. I look forward to our paths crossing again on one compression forum in the future. Stay safe and Godspeed!
    7 replies | 625 view(s)
  • Darek's Avatar
    9th January 2021, 13:18
    Darek replied to a thread paq8px in Data Compression
    Scores of 4 corpuses for paq8px v199 and paq8px v200. Great gain for Silesia. The paq8px v200 is slightly better than paq8px v199: it got 43'036 bytes of gain compared to paq8px v198 and (unfortunately) is still 4'758 bytes worse than the best cmix v18 score (with precomp)... but it's very, very close now! MaximumCompression got about 2 KB of gain. Other corpuses remain in a similar place.
    2283 replies | 605472 view(s)
  • Darek's Avatar
    9th January 2021, 13:13
    Darek replied to a thread Paq8pxd dict in Data Compression
    Scores of my testset for paq8pxd v94. For the F.JPG file there is a 215-byte gain. Other files remain the same.
    1028 replies | 361607 view(s)
More Activity