Activity Stream

  • e8c's Avatar
    Today, 16:54
    e8c replied to a thread Lucky again in The Off-Topic Lounge
    https://tjournal.ru/internet/325612-instagram-blogerov-zapodozrili-v-kritike-vozvrashcheniya-navalnogo-po-metodichke-tezisy-u-nekotoryh-i-pravda-sovpadayut
    8 replies | 522 view(s)
  • Lithium Flower's Avatar
    Today, 14:43
    @ Jyrki Alakuijala A little curious: is there a plan or a patch to improve non-photographic fidelity (quality) in the next public JPEG XL release (jpeg xl 0.3)? I'm looking forward to using .jxl to replace my .jpg, .png and .webp files. :)
    39 replies | 3228 view(s)
  • Lithium Flower's Avatar
    Today, 14:41
    @ Jyrki Alakuijala About butteraugli and tiny ringing artefacts: the sample image from my previous post, eyes_have tiny artifacts2, has tiny ringing artefacts on the character's eyes. Those artefacts aren't easy to see, but when I compare the modular-mode file (-m -Q 90, speed tortoise) with the VarDCT-mode file, they make the viewing experience slightly uncomfortable. The modular-mode file doesn't produce the ringing, though it probably has other small errors; the VarDCT-mode file compresses very well (file size) but produces tiny artefacts in some areas. For eyes_have tiny artifacts2 I need jpeg xl 0.2 -d 0.5 (Speed: kitten) to avoid the ringing issue. I guess tiny ringing artefacts in photographic images are very hard to see, so butteraugli judges such an image as fine or nearly fine, but in non-photographic images any area with ringing is very easy to see and a little uncomfortable to look at. It's like chroma subsampling: photographic and non-photographic images are different situations. Some photographic images still look good with chroma subsampling, but for non-photographic images chroma subsampling is always a bad idea.
    39 replies | 3228 view(s)
  • Lithium Flower's Avatar
    Today, 14:37
    @ Jyrki Alakuijala Thank you for your reply. It looks like jpeg xl 0.2 -d 1.0 (speed kitten) is still in a small risk zone: in my tests, some images at -d 1.0 or 0.9 get maxButteraugli 2.0+, while -d 0.8 keeps maxButteraugli below 1.6 (1.55). As in my previous post, maxButteraugli below 1.3 ~ 1.4 stays in the safe zone. Could you tell me about the target distances -d 0.9 (q91), 0.8 (q92), 0.7 (q93), 0.6 (q94): do those distances have a special meaning, the way 1.0 and 0.5 do?
    39 replies | 3228 view(s)
  • umgefahren's Avatar
    Today, 11:36
    EDIT: Due to the recent switch from deflate to Zlib, the algorithm now manages to SURPASS PNG in most cases.
    Hello, I'm a newbie to compression and this forum, so please be patient. I recently wrote an image compression algorithm and put a prototype on GitHub. I've written a long explanation there, but to make things more comfortable, here is the explanation:
    Image Compression Algorithm
    A new image compression algorithm. In this release version the algorithm performs worse than PNG in most cases; in fact, the only image where it outperforms PNG is the white void of img_3.png. However, it produces only slightly larger files than PNG. For example, img_2.png is about 12.8 MB, and the resulting binary is 12.9 MB.
    How the system works
    Clustering: The first step is clustering the pixels. This happens in 5 dimensions, using the R, G, B, x and y of every pixel. X and Y are normed to 255 in order to balance the colour values against the pixel position; this might offer room for improvement. In the current settings a k-means is used to define 3 dominant clusters. More clusters are possible, but the calculation time increases rapidly with the number of clusters. The encoding supports up to 255 clusters, but this is probably overkill. After defining the clusters, we compute a cluster map that drops the colour values and records only which cluster each pixel belongs to.
    Grid: Next we lay a grid on top of the cluster map. The grid chunks are not of fixed size; they vary in size near the edges. For every chunk we check whether all of its pixels belong to the same cluster. If so, the pixels are coded relative, otherwise absolute. For every chunk the grid stores a value that identifies the cluster, or marks the chunk as absolute. In the illustration of this grid map, every white pixel symbolizes an absolute chunk.
    Calculating lists: In this step we finally calculate the pixel values that are later written to the file. Every chunk is coded according to the grid's relative/absolute decision, and every chunk's pixel values are appended to a super list of relative or absolute pixel values. The pixel values are traversed in wiggly lines. Every chunk has a minimum pixel value, taken from the minimum R, G, B values in that chunk; the resulting pixel value is the sum of this chunk minimum and the encoded value.
    Flatten and byte conversion: The grid, the cluster colours and the lines are converted into vectors of u8 and then into bytes.
    Deflate: The byte representations of the grid and the lines are compressed with the deflate algorithm. This is where the compression happens, and it provides an opportunity for optimization.
    Write file: The resulting binary is just a list of the relevant compressed objects.
    Advantages compared to PNG: Because of the grid, it's possible to load specific chunks without loading the entire image. With further improvements it might be possible to surpass PNG in compression rate, but I can't prove that.
    Disadvantages compared to PNG: Because of the clustering it takes quite long to compute a result. It might be possible to improve that, although it would probably require replacing k-means with another clustering algorithm; one option could be a neural net.
    Help and ideas are much appreciated, especially contributions on GitHub. Thanks for your time! :)
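    A minimal sketch of the per-chunk relative/absolute decision described above (illustrative C++ only, not the Rust prototype on GitHub; all names are invented):
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Pixel { uint8_t r, g, b; };

    // True if every pixel of the chunk maps to the same cluster id in the cluster map.
    bool chunk_is_uniform(const std::vector<uint8_t>& cluster_map,
                          int img_w, int x0, int y0, int w, int h) {
        uint8_t first = cluster_map[y0 * img_w + x0];
        for (int y = y0; y < y0 + h; ++y)
            for (int x = x0; x < x0 + w; ++x)
                if (cluster_map[y * img_w + x] != first) return false;
        return true;
    }

    // Uniform chunk: emit the per-channel minima once, then small residuals per pixel.
    void encode_relative(const std::vector<Pixel>& img, int img_w,
                         int x0, int y0, int w, int h, std::vector<uint8_t>& out) {
        uint8_t rmin = 255, gmin = 255, bmin = 255;
        for (int y = y0; y < y0 + h; ++y)
            for (int x = x0; x < x0 + w; ++x) {
                const Pixel& p = img[y * img_w + x];
                rmin = std::min(rmin, p.r); gmin = std::min(gmin, p.g); bmin = std::min(bmin, p.b);
            }
        out.push_back(rmin); out.push_back(gmin); out.push_back(bmin);
        for (int y = y0; y < y0 + h; ++y)
            for (int x = x0; x < x0 + w; ++x) {
                const Pixel& p = img[y * img_w + x];
                out.push_back(p.r - rmin); out.push_back(p.g - gmin); out.push_back(p.b - bmin);
            }
    }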
    0 replies | 113 view(s)
  • Kirr's Avatar
    Today, 05:52
    Regarding the use of 2bit data. First of all, fa2twobit itself is lossy. I found that it did not preserve IUPAC codes, line lengths or even sequence names (it truncates them). Also, using the 2bit data still requires decompression (other than with BLAT), while FASTA is a universal sequence exchange format. So I would rather remove the 2bit representation from the contest. Anyone interested can trivially convert that DNA into 2-bit themselves if they need it, potentially avoiding fa2twobit's limitations.
    But this raises a bigger question. Many (most, actually) of the existing sequence compressors have compatibility issues. Some compress just plain DNA (no headers), some don't support N, IUPAC codes, or mask (upper/lower case), etc. Some have their own idea about what sequence names should look like (e.g. max length). Some compress only FASTQ, and not FASTA. How can these various tools be compared when each is doing its own thing?
    When designing my benchmark last year, I decided to try my best to adapt each of the broken/incomplete tools to still perform a useful task. So I made a wrapper for each tool, which takes the input data (a huge FASTA file) and transforms it into a format acceptable to that tool. E.g., if some compressor does not know about N, my wrapper picks out all the Ns from the input, stores them separately (compressed), and presents the N-less sequence to the tool. Then another wrapper works in reverse during decompression, reconstructing the exact original FASTA file. The wrapped compressors therefore all perform the same task and can be compared on it. All my wrappers and the tools used by them are available. This should make it relatively easy to adapt any existing compressor to work on FASTA files.
    In a similar way, a general-purpose compressor can be wrapped using those tools, to allow stronger compression of FASTA files. It could be an interesting experiment to try wrapping various general-purpose compressors and adding them to the benchmark, along with the non-wrapped ones. http://kirr.dyndns.org/sequence-compression-benchmark/ http://kirr.dyndns.org/sequence-compression-benchmark/?page=Wrappers
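    A minimal sketch of the N-stripping wrapper idea described above (illustrative C++ only, not Kirr's actual wrapper; it ignores the lowercase mask, which a real wrapper would track separately):
    #include <cstdint>
    #include <string>
    #include <utility>
    #include <vector>

    struct NRuns { std::vector<std::pair<uint64_t, uint64_t>> runs; }; // (start, length) in original coordinates

    // Remove every run of 'N', remembering where it was, and return the N-less sequence.
    std::string strip_n(const std::string& seq, NRuns& meta) {
        std::string out;
        out.reserve(seq.size());
        for (uint64_t i = 0; i < seq.size(); ) {
            if (seq[i] == 'N') {
                uint64_t start = i;
                while (i < seq.size() && seq[i] == 'N') ++i;
                meta.runs.push_back({start, i - start});
            } else {
                out.push_back(seq[i++]);
            }
        }
        return out;
    }

    // Re-insert the recorded N runs (runs are stored in increasing start order).
    std::string restore_n(const std::string& stripped, const NRuns& meta) {
        std::string out = stripped;
        for (const auto& r : meta.runs)
            out.insert(r.first, r.second, 'N');
        return out;
    }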
    36 replies | 1833 view(s)
  • Kirr's Avatar
    Today, 05:19
    I basically agree that there's no point doing elaborate work compressing raw data that will be analyzed anyway. When dealing with FASTQ files the usual tasks are: 1. Get them off the sequencer to the analysis machine. 2. Transfer to computation nodes. 3. Archive. 4. Send to collaborators. A compressor for these purposes has to be quick, reasonably strong, and reliable (robust, portable). Robustness is perhaps the most important quality, and it is not at all apparent from benchmarks. (This could be why gzip is still widely used.) Among the dozens of compressors and methods, few seem to be designed for practical (industrial) use, namely DSRC, Alapy, gtz, and NAF. DSRC unfortunately seems unmaintained (bugs are not being fixed). Alapy and gtz are closed source and non-free (gtz also phones home). So I currently use NAF for managing FASTQ data (no surprise). NAF's "-1" works well for one-time transfers (where you just need to get the data from machine A to machine B as quickly as possible), and "-22" works for archiving and distributing FASTQ data. One recent nice development in the field is the transition to reduced resolution of base qualities. In typical FASTQ data the sequence is easy to compress, but the qualities occupy the main bulk of space in the archive, so some compressors have an option to round the qualities to reduce resolution. Recent instruments can now produce binned qualities from the beginning, making compression much easier. CRAM and other reference-based methods work nicely where they are applicable. However, there are fields like metagenomics (or compressing the reference genome itself) where we don't have a reference; in such cases we still need general reference-free compression. The interesting thing is that these days data volumes are so large that a specialized tool optimized for specific data or a specific workflow can make a meaningful difference. And yet most sequence databases still use gzip.
    36 replies | 1833 view(s)
  • Kirr's Avatar
    Today, 04:36
    Hi Innar and all! I've just tried NAF on the FASTA file. Among the different options, "-15" worked best on this data, producing a 1,590,505-byte archive. NAF is a compressor for archiving FASTA/Q files. It basically divides the input into headers, mask and sequence (plus quality for FASTQ), and compresses each stream with zstd. This allows for good compactness and very fast decompression. I use NAF to store and work with terabytes of sequence data. (BTW, NAF supports IUPAC codes.) Many other sequence compressors exist; some of them are compared in the benchmark linked below, and might be interesting to try on this data. That benchmark includes a 1.2 GB Influenza dataset, which should produce results similar to the Coronavirus one. Also note that the "Compressors" page has useful notes about various compressors. https://github.com/KirillKryukov/naf http://kirr.dyndns.org/sequence-compression-benchmark/ http://kirr.dyndns.org/sequence-compression-benchmark/?page=Compressors Cheers!
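    A simplified sketch of the stream-separation idea described above, assuming the public zstd one-shot API (illustration only; NAF's real container format is more involved, tracking line lengths and mask as well):
    #include <cstddef>
    #include <sstream>
    #include <string>
    #include <vector>
    #include <zstd.h>

    struct Streams { std::string headers, sequence; };

    // Split a FASTA text into a header stream and a concatenated sequence stream.
    Streams split_fasta(const std::string& fasta) {
        Streams s;
        std::istringstream in(fasta);
        std::string line;
        while (std::getline(in, line)) {
            if (!line.empty() && line[0] == '>') { s.headers += line; s.headers += '\n'; }
            else                                 { s.sequence += line; }  // line lengths kept separately in a real tool
        }
        return s;
    }

    // Compress one stream with zstd at the given level.
    std::vector<char> zstd_pack(const std::string& data, int level) {
        std::vector<char> out(ZSTD_compressBound(data.size()));
        size_t n = ZSTD_compress(out.data(), out.size(), data.data(), data.size(), level);
        out.resize(ZSTD_isError(n) ? 0 : n);
        return out;
    }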
    36 replies | 1833 view(s)
  • Jyrki Alakuijala's Avatar
    Today, 02:03
    Max butteraugli 1.0 and below is good quality. Butteraugli 1.1 is more 'okayish' than 'solid ok'. At a max butteraugli of 0.6 I have never yet been able to see a difference. Butteraugli scores are calibrated at a viewing distance of 900 pixels; if you zoom a lot, you will see more. If you obviously disagree with butteraugli (when using max brightness of 200 lumen or less and a viewing distance of 900 pixels or more), file a bug in the jpeg xl repository and I'll consider adding such cases to the butteraugli calibration corpus. There is some consensus that butteraugli has possibly been insufficiently sensitive to ringing artefacts. 2-3 years ago I made some changes for it to be less worried about blurring in comparison to ringing artefacts, but those adjustments were somewhat conservative. Please keep writing about your experiences; this is very useful for me in deciding where to invest effort in jpeg xl and butteraugli.
    39 replies | 3228 view(s)
  • e8c's Avatar
    Yesterday, 23:24
    Special processing if the tile is completely grayscale, and the same algorithm as with "-1" if not.
    $ ./gray -2 Bennu_Grayscale.ppm Bennu_Grayscale.gray
    encode, 2 threads: 118 MPx/s
    $ ./gray -d Bennu_Grayscale.gray tmp.ppm
    decode, 2 threads: 293 MPx/s
    $ ls -l
    -rwxrwx--- 1 root vboxsf 101907455 янв 17 00:08 Bennu_Grayscale.emma
    -rwxrwx--- 1 root vboxsf 126415072 янв 18 21:37 Bennu_Grayscale.gray
    -rwxrwx--- 1 root vboxsf 105285138 янв 16 23:46 Bennu_Grayscale.lea
    -rwxrwx--- 1 root vboxsf 128446880 янв 16 19:26 Bennu_Grayscale.png
    -rwxrwx--- 1 root vboxsf 732538945 янв 16 19:24 Bennu_Grayscale.ppm
    New total for MCIS (Moon Citizen's Image Set) v1.20: 12'745'684'464
    19 replies | 5020 view(s)
  • Gotty's Avatar
    Yesterday, 20:27
    Gotty replied to a thread paq8px in Data Compression
    That explains it. So fixing my detection routine has jumped to the no. 1 spot on my to-do list. My next version will be about detections and transforms anyway - as requested. It fits perfectly.
    2290 replies | 606816 view(s)
  • mpais's Avatar
    Yesterday, 19:32
    mpais replied to a thread paq8px in Data Compression
    2290 replies | 606816 view(s)
  • Lithium Flower's Avatar
    Yesterday, 16:40
    I have a question about butteraugli_jxl. Using cjxl -d 1.0 -j (kitten) and -m -Q 90 -j (speed tortoise) on a non-photographic (more natural synthetic) image, jpeg q99 yuv444, -d 1.0 still leaves some tiny artifacts in this image, like the issue in my previous post. I checked the compressed image with butteraugli_jxl, and it looks like butteraugli_jxl didn't find those tiny artifacts (maxButteraugli 1.13). Are those tiny artifacts an issue, or an expected error in VarDCT mode?
    { "01_m_Q90_s9_280k.png": { "maxButteraugli": "1.5102199316", "6Norm": "0.640975", "12Norm": "0.862382" }, "01_vardct_d1.0_s8_234k.png": { "maxButteraugli": "1.1368366480", "6Norm": "0.610878", "12Norm": "0.734107" } }
    Also, VarDCT mode (-d and -q) can mix two codecs for a single image; can lossy modular mode (-m -Q) also use two modes for a single image, or does it, like fuif lossy, only use reversible transforms (YCoCg, reversible Haar-like squeezing)? On my non-photographic (more natural synthetic) set, lossy modular mode works very well, but VarDCT mode can produce smaller files.
    39 replies | 3228 view(s)
  • lz77's Avatar
    Yesterday, 15:20
    lz77 replied to a thread Lucky again in The Off-Topic Lounge
    Ah, so you haven't given up on your godless "Nord Stream" yet? Then Navalny flies to you! :) (Russian: ах, так вы ещё не отказались от своего богомерзкого "северного потока"? Тогда Навальный летит к вам! :) )
    8 replies | 522 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 14:27
    If the command line breaks when MT=OFF, then use MT=ON.
    219 replies | 20336 view(s)
  • Darek's Avatar
    Yesterday, 13:34
    Darek replied to a thread paq8px in Data Compression
    Scores for paq8px v201 for my testset - a nice gain for K.WAD, and smaller gains for other files. Some image files and the 0.WAV audio file also took small losses; however, in total this is about 3.7 KB of gain.
    2290 replies | 606816 view(s)
  • Darek's Avatar
    Yesterday, 12:11
    Darek replied to a thread paq8px in Data Compression
    I've tested it again and got a slightly better option now (but the effect/mechanism is the same). The scores are:
    Calgary Corpus => option "-8lrta"
    541'975 - for paq8px v191, v191a, v192
    556'856 - for paq8px v193 and higher (with some little changes of course, but it's a base)
    Canterbury Corpus => option "-8lrta" - the same as for Calgary Corpus
    290'599 - for paq8px v191, v191a, v192
    298'652 - for paq8px v193 and higher (with some little changes of course, but it's a base)
    2290 replies | 606816 view(s)
  • kaitz's Avatar
    Yesterday, 05:17
    kaitz replied to a thread paq8px in Data Compression
    MC
    file          size       paq8px_v200 -8   paq8px_v201 -8   diff
    A10.jpg       842468     624597           624587           10
    AcroRd32.exe  3870784    823707           823468           239
    english.dic   4067439    346422           345366           1056
    FlashMX.pdf   4526946    1315382          1314782          600
    FP.LOG        20617071   215399           213420           1979 *
    MSO97.DLL     3782416    1175358          1175162          196
    ohs.doc       4168192    454753           454784           -31
    rafale.bmp    4149414    468156           468095           61
    vcfiu.hlp     4121418    372048           371119           929
    world95.txt   2988578    313915           313828           87
    Total                    6109737          6104611          5126
    2290 replies | 606816 view(s)
  • kaitz's Avatar
    Yesterday, 04:57
    kaitz replied to a thread paq8px in Data Compression
    @Gotty If you do, please add: mbr, base85, uuencode :)
    2290 replies | 606816 view(s)
  • moisesmcardona's Avatar
    Yesterday, 03:39
    moisesmcardona replied to a thread paq8px in Data Compression
    @Gotty, can the BZip2 transform that @kaitz added in paq8pxd be ported to paq8px?
    2290 replies | 606816 view(s)
  • moisesmcardona's Avatar
    Yesterday, 02:33
    @kaitz, I opened another MR: https://github.com/kaitz/paq8pxd/pull/15. Since you added a BZip2 transform, the BZip2 library needed to be added to CMakeLists so that it can be detected and your latest versions can be compiled :-)
    1030 replies | 362445 view(s)
  • Darek's Avatar
    Yesterday, 01:09
    Darek replied to a thread Paq8pxd dict in Data Compression
    Scores of paq8pxd v95 and previous versions for enwik8 and enwik9:
    15'654'151 - enwik8 -x15 -w -e1,english.dic by paq8pxd_v89_60_4095, change: 0.00%, time 10422.07 s
    122'945'119 - enwik9 -x15 -w -e1,english.dic by paq8pxd_v89_60_4095, change: -0.06%, time 100755.31 s - best score for paq8pxd versions
    15'647'580 - enwik8 -x15 -w -e1,english.dic by paq8pxd_v90, change: -0.04%, time 9670.5 s
    123'196'527 - enwik9 -x15 -w -e1,english.dic by paq8pxd_v90, change: 0.20%, time 110200.16 s
    15'642'246 - enwik8 -x15 -w -e1,english.dic by paq8pxd_v95, change: 0.00%, time 10130 s - best score for paq8pxd versions (the same as paq8pxd v93 and paq8pxd v94)
    123'151'008 - enwik9 -x15 -w -e1,english.dic by paq8pxd_v95, change: 0.00%, time 102009.55 s
    1030 replies | 362445 view(s)
  • Darek's Avatar
    17th January 2021, 18:53
    Darek replied to a thread paq8px in Data Compression
    Yes, of course. I'll check it again and give you switches used for these versions.
    2290 replies | 606816 view(s)
  • moisesmcardona's Avatar
    17th January 2021, 17:35
    moisesmcardona replied to a thread Paq8sk in Data Compression
    Hey Surya, I noticed some of your releases are compiled with MT=ON and some with MT=OFF. Can you please decide to stick with either one? The command line breaks when MT=OFF. Thanks!
    219 replies | 20336 view(s)
  • Jon Sneyers's Avatar
    17th January 2021, 11:00
    Yes, further reducing the memory footprint would be nice. For large images like this, it would also be useful to have an encoder that does not try to globally optimize the entire image, and a cropped decoder. These are all possible, but quite some implementation effort, and it is not the main priority right now — getting the software ready for web browser integration is a bigger priority.
    39 replies | 3228 view(s)
  • e8c's Avatar
    17th January 2021, 07:32
    >djxl.exe Bennu_Grayscale_s4.jxl tmp.ppm --num_threads=4
    Read 109071829 compressed bytes
    Done. 20237 x 12066, 19.71 MP/s, 1 reps, 4 threads.
    Allocations: 7750 (max bytes in use: 7.050512E+09)
    >djxl.exe Bennu_Grayscale_s4.jxl tmp.ppm --num_threads=2
    Read 109071829 compressed bytes
    Done. 20237 x 12066, 10.50 MP/s, 1 reps, 2 threads.
    Allocations: 7744 (max bytes in use: 7.041389E+09)
    >djxl.exe Bennu_Grayscale_s4.jxl tmp.ppm --num_threads=1
    Read 109071829 compressed bytes
    Done. 20237 x 12066, 8.80 MP/s, 1 reps, 1 threads.
    Allocations: 7741 (max bytes in use: 7.036826E+09)
    >djxl.exe Bennu_Grayscale_s3.jxl tmp.ppm --num_threads=4
    Read 112310632 compressed bytes
    Done. 20237 x 12066, 35.77 MP/s, 1 reps, 4 threads.
    Allocations: 7749 (max bytes in use: 7.053651E+09)
    >djxl.exe Bennu_Grayscale_s3.jxl tmp.ppm --num_threads=2
    Read 112310632 compressed bytes
    Done. 20237 x 12066, 19.57 MP/s, 1 reps, 2 threads.
    Allocations: 7743 (max bytes in use: 7.044529E+09)
    >djxl.exe Bennu_Grayscale_s3.jxl tmp.ppm --num_threads=1
    Read 112310632 compressed bytes
    Done. 20237 x 12066, 17.63 MP/s, 1 reps, 1 threads.
    Allocations: 7740 (max bytes in use: 7.039963E+09)
    (Scrollable.)
    39 replies | 3228 view(s)
  • suryakandau@yahoo.co.id's Avatar
    17th January 2021, 04:10
    I think it's difficult to go below 550,000 bytes for the A10.jpg file.
    219 replies | 20336 view(s)
  • Gotty's Avatar
    17th January 2021, 01:46
    Gotty replied to a thread paq8px in Data Compression
    I need your help investigating it (I can't reproduce). Could you tell me the command line switches you used?
    2290 replies | 606816 view(s)
  • Darek's Avatar
    16th January 2021, 23:13
    Darek replied to a thread paq8px in Data Compression
    If we assume about 20 versions yearly, that would be around the year 2263... unless there is some breakthrough, or more than one...
    2290 replies | 606816 view(s)
  • Gotty's Avatar
    16th January 2021, 23:03
    Gotty replied to a thread paq8px in Data Compression
    150'000? Are you sure you wanted to ask about 150'000? OK, let's do the math ;-) Open Darek's MaximumCompression results a couple of posts above. Look at the first result he recorded (for paq8px_v75): 637110. Look at the last result (for paq8px_v200): 624578. Gaining those 12532 bytes took roughly 125 versions. Doing a simple linear interpolation (which is totally incorrect, but fits your question very well): 150'000 bytes will be reached at around paq8px_v4858.
    2290 replies | 606816 view(s)
  • Darek's Avatar
    16th January 2021, 23:01
    Darek replied to a thread paq8px in Data Compression
    In my opinion - in 2077 :) By the way, paq8pxd v95 got a score of 618'527 bytes for the A10.jpg file with option -x9.
    2290 replies | 606816 view(s)
  • Darek's Avatar
    16th January 2021, 22:58
    Darek replied to a thread Paq8pxd dict in Data Compression
    Scores of my testset and 4 Corpuses for paq8pxd v94 and paq8pxd v95. Good improvements on JPG files and files containing such structures. The A10.JPG file got 618'527 bytes!
    1030 replies | 362445 view(s)
  • CompressMaster's Avatar
    16th January 2021, 21:21
    CompressMaster replied to a thread paq8px in Data Compression
    My quick test on A10.jpg from the MaximumCompression corpus using the -7 switch:
    Total input size: 842 468 bytes
    Compressed size: 624 693 bytes
    btw, purely hypothetical question: when do you expect compression of A10.jpg to get below 150 000 bytes?
    2290 replies | 606816 view(s)
  • Gotty's Avatar
    16th January 2021, 21:00
    Gotty replied to a thread paq8px in Data Compression
    - IndirectContext improvement: using a leading bit to distinguish context bits from empty (unused) bits
    - LSTM model: applied the new IndirectContext improvements
    - MatchModel improvements:
      - moved context hashes from NormalModel to Shared so MatchModel can also use them
      - using more candidates in the hashtable (3 instead of 1)
      - using the improved IndirectContext
      - refined contexts
    - tuned NormalModel contexts wrt MatchModel context lengths
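    A minimal illustration of the leading-bit idea mentioned in the first item (my own sketch, not the paq8px source): prefixing the collected context bits with a 1 guarantees that contexts of different lengths, and the empty context, can never collide.
    #include <cstdint>

    struct BitContext {
        uint32_t ctx = 1;                       // the leading 1 marks where the real bits start
        void add(int bit) { ctx = (ctx << 1) | (bit & 1); }
        void reset() { ctx = 1; }
        uint32_t value() const { return ctx; }  // e.g. 3-bit "011" -> 0b1011, 4-bit "0011" -> 0b10011, empty -> 1
    };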
    2290 replies | 606816 view(s)
  • CompressMaster's Avatar
    16th January 2021, 20:57
    CompressMaster replied to a thread Paq8sk in Data Compression
    @suryakandau Are you able to go below 550,000 bytes for A10.jpg compression? And below 40,000 bytes for F.JPG? I am just interested in whether paq8sk can be tuned to achieve that ratio - regardless of time and memory usage. btw, there is an extraneous space between the attached file and the text.
    219 replies | 20336 view(s)
  • e8c's Avatar
    16th January 2021, 20:15
    https://www.asteroidmission.org/osprey-recon-c-mosaic/
    >cjxl.exe -q 100 -s 3 Bennu_Grayscale.png Bennu_Grayscale_s3.jxl
    J P E G \/ | /\ |_ e n c o d e r
    Read 20237x12066 image, 75.4 MP/s
    Encoding, 2 threads.
    Compressed to 112310632 bytes (3.680 bpp).
    20237 x 12066, 24.96 MP/s, 1 reps, 2 threads.
    >cjxl.exe -q 100 -s 4 Bennu_Grayscale.png Bennu_Grayscale_s4.jxl
    J P E G \/ | /\ |_ e n c o d e r
    Read 20237x12066 image, 75.6 MP/s
    Encoding, 2 threads.
    Compressed to 109071829 bytes (3.573 bpp).
    20237 x 12066, 1.78 MP/s, 1 reps, 2 threads.
    >cjxl.exe -q 100 -s 5 Bennu_Grayscale.png Bennu_Grayscale_s5.jxl
    J P E G \/ | /\ |_ e n c o d e r
    Read 20237x12066 image, 74.7 MP/s
    Encoding, 2 threads.
    terminate called after throwing an instance of 'std::bad_alloc'
      what(): std::bad_alloc
    ... ... ...
    >cjxl.exe -q 100 -s 9 Bennu_Grayscale.png Bennu_Grayscale_s9.jxl
    J P E G \/ | /\ |_ e n c o d e r
    Read 20237x12066 image, 75.7 MP/s
    Encoding, 2 threads.
    terminate called after throwing an instance of 'std::bad_alloc'
      what(): std::bad_alloc
    >dir
    16.01.2021 19:26 128 446 880 Bennu_Grayscale.png
    16.01.2021 19:24 732 538 945 Bennu_Grayscale.ppm
    16.01.2021 19:36 112 310 632 Bennu_Grayscale_s3.jxl
    16.01.2021 19:41 109 071 829 Bennu_Grayscale_s4.jxl
    >systeminfo | find "Memory"
    Total Physical Memory: 20,346 MB
    Available Physical Memory: 17,859 MB
    Virtual Memory: Max Size: 20,346 MB
    Virtual Memory: Available: 18,012 MB
    Virtual Memory: In Use: 2,334 MB
    (Scrollable.)
    39 replies | 3228 view(s)
  • kaitz's Avatar
    16th January 2021, 20:15
    kaitz replied to a thread Paq8pxd dict in Data Compression
    No
    1030 replies | 362445 view(s)
  • suryakandau@yahoo.co.id's Avatar
    16th January 2021, 19:24
    Paq8sk44 - improved JPEG compression. Using the -s8 option on F.JPG (Darek's corpus): total 112038 bytes compressed to 80194 bytes. Time 19.17 sec, used 2444 MB (2563212985 bytes) of memory. Here are the source code and binary file.
    paq8sk44    -s8 a10.jpg 842468 616629 151.72 sec 2444 MB
    paq8pxd_v95 -s8 a10.jpg 842468 618555  43.42 sec 1984 MB
    paq8px_v200 -8  a10.jpg 842468 624597  26.51 sec 2602 MB
    219 replies | 20336 view(s)
  • pacalovasjurijus's Avatar
    15th January 2021, 21:31
    We write programs. It is necessary to compress the program; for this, you need to write a program which will compress the program and write the program.
    8 replies | 2114 view(s)
  • e8c's Avatar
    15th January 2021, 17:19
    (UPD: I know about , but the next link could be helpful anyway.)
    https://askubuntu.com/questions/1041349/imagemagick-command-line-convert-limit-values
    $ /bin/time -f '\nUser time (seconds): %U\nMemory (kbytes): %M' \
    > ./guess -1 PIA23623_hires.ppm PIA23623_hires.guess
    encode, 2 threads: 112 MPx/s
    User time (seconds): 35.60
    Memory (kbytes): 7743864
    $ ls -l
    total 12539741
    -rwxrwx--- 1 root vboxsf 1794025332 янв 15 16:49 PIA23623_hires.guess
    -rwxrwx--- 1 root vboxsf 2293756771 янв 15 15:56 PIA23623_hires.png
    -rwxrwx--- 1 root vboxsf 6115804597 янв 15 15:20 PIA23623_hires.ppm
    -rwxrwx--- 1 root vboxsf 2425213852 янв 14 02:08 PIA23623_hires.tif
    VM: 2 v-Cores, 11 GB RAM
    Host: Intel NUC8i3, SATA SSD
    "35.60 seconds" is the sum over the 2 threads, about 18 s in user space. (Said for those who are not familiar with profiling multi-threaded applications.) Respectively: 8 cores - 4.5 s, 16 cores - 2.25 s, 32 cores - 1.125 s. That is acceptable. See attachment. (Site engine converts PNG to JPG, why?)
    63 replies | 4393 view(s)
  • suryakandau@yahoo.co.id's Avatar
    15th January 2021, 12:03
    Where can I get the mill.jpg and DSCN0791.AVI files? Thank you. Could you upload them here, please?
    1030 replies | 362445 view(s)
  • Lithium Flower's Avatar
    14th January 2021, 10:43
    @ Jyrki Alakuijala Thank you for your reply. I checked my compressed non-photographic set with my eyes and found tiny artifacts in different images. I think that in jpeg xl 0.2 -d 1.0 (Speed: kitten), if a compressed image has maxButteraugli above 1.5 or 1.6, it probably has errors or tiny artifacts. A little curious: is there a plan or a patch to improve non-photographic fidelity in the next public JPEG XL release (jpeg xl 0.3)?
    39 replies | 3228 view(s)
  • kaitz's Avatar
    13th January 2021, 21:19
    kaitz replied to a thread Paq8pxd dict in Data Compression
    paq8pxd_v95 jpeg model:
    -more context in Map1 (20)
    -more inputs from main context
    -2 main mixer inputs + 1 apm
    -cleanup
                                                Size      Compressed   Sec
    paq8pxd_v95 -s8 a10.jpg                     842468    618555       43.42 sec    1984 MB
    paq8px_v200 -8  a10.jpg                     842468    624597       26.51 sec    2602 MB
    paq8pxd_v95 -s8 mill.jpg                    7132151   4910289      350.38 sec   1984 MB
    paq8px_v200 -8  mill.jpg                    7132151   4952115      228.65 sec   2602 MB
    paq8pxd_v95 -s8 paq8px_v193_4_Corpuses.jpg  3340610   1367528      167.13 sec   1984 MB
    paq8px_v200 -8  paq8px_v193_4_Corpuses.jpg  3340610   1513850      105.90 sec   2602 MB
    paq8pxd_v95 -s8 DSCN0791.AVI                30018828  19858827     1336.94 sec  1984 MB
    paq8px_v200 -8  DSCN0791.AVI                30018828  20171981     992.85 sec   2602 MB
    So mill.jpg is 18571 bytes better with v95 vs v94. It's slower; I'm sure nobody cares. Some main context changes have zero time penalty but improve the result by some KB. For a10.jpg the new Map1 context adds only about 5 sec.
    1030 replies | 362445 view(s)
  • kaitz's Avatar
    13th January 2021, 21:15
    kaitz replied to a thread paq8px in Data Compression
    MC
    file          size      paq8px_v200 -s8  paq8pxd_v94 -s8
    A10.jpg       842468    624597           620980
    AcroRd32.exe  3870784   823707           831293
    english.dic   4067439   346422           347716
    FlashMX.pdf   4526946   1315382          1334877
    FP.LOG        20617071  215399           201621
    MSO97.DLL     3782416   1175358          1190012
    ohs.doc       4168192   454753           451278
    rafale.bmp    4149414   468156           468757
    vcfiu.hlp     4121418   372048           264060
    world95.txt   2988578   313915           311700
    Total                   6109737          6022294
    2290 replies | 606816 view(s)
  • Jyrki Alakuijala's Avatar
    13th January 2021, 11:30
    I suspect that VarDCT will be the most appropriate mode for this -- we just need to fix the remaining issues. Palette mode and delta palette mode can also be useful for a wide range of pixel art images. They are not yet tuned for best performance either, but already show quite a lot of promise. My understanding is that -- for photographic images -- lossy modular mode provides a quality that is between libjpeg quality and VarDCT quality, but closer to libjpeg quality. I have always used --distance for encoding, and VarDCT. We have a final version of the format now, so in that sense it is OK to start using it. For practical use it may be nice to wait for tooling support for JPEG XL to catch up. The JPEG XL committee members did a final quality review in November/December with many metrics and a manual review of images where the metrics disagreed. The FDIS phase starts next week.
    39 replies | 3228 view(s)
  • Sebastianpsankar's Avatar
    13th January 2021, 03:30
    This was helpful and encouraging... Thanks...
    20 replies | 873 view(s)
  • Lithium Flower's Avatar
    12th January 2021, 17:31
    @ Jyrki Alakuijala Thank you for your reply.
    I have some questions about pixel art. I use pingo png lossless -s0 to identify which images can be losslessly converted to pal8; some pixel art images can't be converted losslessly and need VarDCT mode or modular mode. In my pixel art test, with VarDCT mode -q 80 Speed 8 and lossy modular mode -Q 80 Speed 9, VarDCT mode doesn't work very well (tiny artifacts), while lossy modular mode works fine on pixel art - but it looks like lossy modular mode isn't recommended right now. Which mode is best practice?
    And about the lossy modular mode quality value (-Q luma_q): does this quality value roughly match libjpeg quality? I don't know whether comparing lossy modular Q80 Speed 9 against VarDCT q80 Speed 8 is a fair comparison.
    About pixel art png pal8: I tested a pixel art png pal8 (93 colors) in lossless modular mode -q 100 -s 9 -g 2 -E 3, but if I apply png lossless optimization before jxl lossless, the file size increases.
    jpeg xl lossless: People1.png 19.3kb => People1.jxl 19.3kb; People1.png 19.3kb => People1.png 16.4kb (pingo png lossless -s9) => People1.jxl 18.1kb
    Webp lossless: People1.png 19.3kb => People1.webp 16.5kb; People1.png 19.3kb => People1.png 16.4kb (pingo png lossless -s9) => People1.webp 16.5kb
    For pixel art png pal8 (colors near 256), jpeg xl lossless is best. // rgb24 605k, force convert pal8 168k, jpeg xl lossless: 135k, Webp lossless: 157k
    And a little curious: is recommending jpeg xl 0.2 to my artist friend a good idea, or should I wait for the FDIS stage to finish?
    39 replies | 3228 view(s)
  • fcorbelli's Avatar
    12th January 2021, 14:24
    fcorbelli replied to a thread zpaq updates in Data Compression
    This is an example of a sequential scan... (...)
    540.739.857.890 379.656 time 16.536 /tank/condivisioni/
    540.739.857.890 379.656 time 17.588 /temporaneo/dedup/1/condivisioni/
    540.739.857.890 379.656 time 17.714 /temporaneo/dedup/2/tank/condivisioni/
    540.739.857.890 379.656 time 16.71 /temporaneo/dedup/3/tank/condivisioni/
    540.739.857.890 379.656 time 16.991 /temporaneo/dedup/4/condivisioni/
    540.739.857.890 379.656 time 93.043 /monta/nas1_condivisioni/
    540.739.857.890 379.656 time 67.312 /monta/nas2_condivisioni/
    540.739.840.075 379.656 time 362.129 /copia1/backup1/sincronizzata/condivisioni/
    ------------------------
    4.325.918.845.305 3.037.248 time 608.024 sec
    608.027 seconds (all OK)
    vs threaded...
    zpaqfranz v49.5-experimental journaling archiver, compiled Jan 11 2021
    Dir compare (8 dirs to be checked)
    Creating 8 scan threads
    12/01/2021 02:00:54 Scan dir || <</tank/condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</temporaneo/dedup/1/condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</temporaneo/dedup/2/tank/condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</temporaneo/dedup/3/tank/condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</temporaneo/dedup/4/condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</monta/nas1_condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</monta/nas2_condivisioni/>>
    12/01/2021 02:00:54 Scan dir || <</copia1/backup1/sincronizzata/condivisioni/>>
    Parallel scan ended in 330.402000
    About twice as fast (in this example).
    2653 replies | 1131086 view(s)
  • Dresdenboy's Avatar
    12th January 2021, 08:19
    You're welcome! Most of the files in this ZIP are compressed at a compression ratio of ~2 to 5. FONTES_*, MENU_* and TELEPORT_* are less compressed, with the latter two containing a lot of 16-bit data. They might contain bitmaps.
    5 replies | 420 view(s)
  • EmilyWasAway's Avatar
    12th January 2021, 06:11
    After reviewing what you both have said, it makes sense that the samples I posted are not using compression at this layer of the format. I'm not certain, but these files extracted from the header of the DPC appear to reference data located further down in the DPC; the headers themselves are not compressed in this version. Thank you for the help. :)
    5 replies | 420 view(s)
  • Shelwien's Avatar
    12th January 2021, 03:13
    It's not a compressed format (at least not in the first layer of structure), but just a structured format with length prefixes and mostly floats inside.
    seg000:00000000 dd 3Fh
    seg000:00000004 dd 4
    seg000:00000008 dq 3Bh
    seg000:00000010 dd 52F79F96h
    seg000:00000014 dd 0C939BCA1h
    seg000:00000018 dd 0D24B3F6Fh
    seg000:0000001C aVLinkLefthandM dd 55
    seg000:0000001C db 'V:LINK "LeftHand" "MUSCLE_OFF_1_SUSPFL4_LOD1" AxeZ LOD1'
    seg000:00000057 dd 0A6h
    seg000:0000005B dd 62h
    seg000:0000005F dq 44h
    seg000:00000067 dd 73DC6A13h
    seg000:0000006B dd 0A3F0FCD9h
    seg000:0000006F dd 4AB1A4C3h
    seg000:00000073 dd 681AF697h
    seg000:00000077 dd 0BE02FCF1h
    seg000:0000007B dd 0BE5A0EC8h
    seg000:0000007F dd 0BA5BF080h
    seg000:00000083 dd 3E801E69h
    seg000:00000087 dd 3F800000h
    seg000:0000008B dd 0
    seg000:0000008F dd 0
    seg000:00000093 dd 0BE02FCF1h
    seg000:00000097 dd 0
    seg000:0000009B dd 3F800000h
    seg000:0000009F dd 0
    seg000:000000A3 dd 0BE5A0EC8h
    seg000:000000A7 dd 0
    seg000:000000AB dd 0
    seg000:000000AF dd 3F800000h
    seg000:000000B3 dd 0BA5BF080h
    seg000:000000B7 dd 3E801E69h
    seg000:000000BB dd 3E801E69h
    seg000:000000BF dd 3E801E69h
    seg000:000000C3 dd 3EDDE882h
    seg000:000000C7 dd 0
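    A generic sketch of walking such a "length prefix + payload" layout (field sizes guessed from the dump above; this is not a verified description of the DPC format):
    #include <cstdint>
    #include <cstdio>
    #include <cstring>
    #include <string>

    struct Reader {
        const uint8_t* p; const uint8_t* end;
        uint32_t u32() { uint32_t v; std::memcpy(&v, p, 4); p += 4; return v; }
        float    f32() { float v;    std::memcpy(&v, p, 4); p += 4; return v; }
        std::string str(size_t n) { std::string s(reinterpret_cast<const char*>(p), n); p += n; return s; }
    };

    // Example: a record starting with a 32-bit string length followed by the string,
    // then a run of little-endian floats, as appears around offset 0x1C in the dump.
    void dump_record(Reader& r) {
        uint32_t len = r.u32();                        // e.g. 55 for the 'V:LINK ...' string
        std::printf("%s\n", r.str(len).c_str());
        // the dwords that follow look like float parameters (0x3F800000 == 1.0f), read with r.f32()
    }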
    5 replies | 420 view(s)
  • EmilyWasAway's Avatar
    12th January 2021, 01:21
    I considered that it could be a custom format, but the similarities to the previous DPC formats, and sections of the file that look like this, led me to investigate the possibility of compression. Although if it is compressed, it's not by much.
    5 replies | 420 view(s)
  • Jyrki Alakuijala's Avatar
    12th January 2021, 01:03
    Thank you. This is very useful. Yes, it looks awful. I had an off-by-one in the smooth-area detection heuristic, and those areas were detected 4 pixels off. There will likely be an improvement on this in the next public release, as well as an overall reduction (5-10 %) of such artifacts from other heuristic improvements -- with some more contrast preserved in the middle frequency band (where other formats often do pretty badly). If you find such images in the next version, please keep sending samples. Consider submitting them to http://gitlab.com/wg1/jpeg-xl as an issue.
    39 replies | 3228 view(s)
  • Lithium Flower's Avatar
    11th January 2021, 21:30
    I get an issue in VarDCT mode: using jpeg xl 0.2 -d 0.8 (Speed: kitten) on some non-photographic (more natural synthetic) images, everything is fine, but some blue and red areas have tiny artifacts (noise?). When using VarDCT mode on this kind of non-photographic (more natural synthetic) image, do I need to use other jpeg xl flags (filters) to get a great result?
    39 replies | 3228 view(s)
  • Dresdenboy's Avatar
    11th January 2021, 20:24
    With those runs of zeroes and the compression ratio in the samples zip I think those files aren't compressed at all, just some custom data format.
    5 replies | 420 view(s)
  • fcorbelli's Avatar
    11th January 2021, 19:00
    fcorbelli replied to a thread zpaq updates in Data Compression
    This is version 49.5. It should also compile on Linux (tested only on Debian), plus FreeBSD and Windows (gcc). I have added some functions that I think are useful.
    The first is the l (list) command. Now, with ONE parameter (the .ZPAQ file), it shows the archive's contents. With more than one parameter, it compares the contents of the ZPAQ with one or more folders, with a (block) check of SHA1s (the old -not =). It can be used as a quick check after an add:
    zpaqfranz a z:\1.zpaq c:\pippo
    zpaqfranz l z:\1.zpaq c:\pippo
    Then I introduce the command c (compare) for directories, between a master and N slaves. With the -all switch it launches N+1 threads. The default verification is by file name and size only; applying the -crc32 switch also verifies the checksum.
    WHAT?
    During verification of backups it is normal to extract them onto several different media (devices), for example folders synchronized with rsync to a NAS, ZIP files, ZPAQ via NFS-mounted shares, smbfs, internal HDDs, etc. Comparing multiple copies can take a (very) long time.
    Suppose you have a /tank/condivisioni master (source) directory (hundreds of GB, hundreds of thousands of files).
    Suppose you have some internal (HDD) and external (NAS) rsynced copies (/rsynced-copy-1, /rsynced-copy-2, /rsynced-copy-3...).
    Suppose you have an internal ZIP backup, an internal ZPAQ backup, an external NAS1 ZIP backup, an external NAS2 ZPAQ backup, and so on. Let's extract all of them (ZIPs and ZPAQs) into /temporaneo/1, /temporaneo/2, /temporaneo/3...
    You can do something like:
    diff -qr /temporaneo/condivisioni /temporaneo/1
    diff -qr /temporaneo/condivisioni /temporaneo/2
    diff -qr /temporaneo/condivisioni /temporaneo/3
    (...)
    diff -qr /temporaneo/condivisioni /rsynced-copy-1
    diff -qr /temporaneo/condivisioni /rsynced-copy-2
    diff -qr /temporaneo/condivisioni /rsynced-copy-3
    (...)
    But this can take a lot of time (many hours), even on fast machines.
    The command c compares a master folder (the first indicated) to N slave folders (all the others) in two particular operating modes. By default it just checks the correspondence of file names and sizes (extremely useful, for example, for rsync copies across different charsets: unix vs linux, mac vs linux, unix vs ntfs). With the -crc32 switch a checksum comparison is also made (with HW CPU support, if available).
    The interesting aspect is the -all switch: N+1 threads are created (one for each specified folder) and executed in parallel, both for scanning and for calculating the CRC. On modern servers (e.g. Xeon with 8, 10 or more CPUs) with different media (internal) and multiple connections (NICs) to NASes, you can drastically reduce the time compared to multiple sequential diff -qr runs. It clearly makes no sense for single magnetic disks.
    In the given example:
    zpaqfranz c /tank/condivisioni /temporaneo/1 /temporaneo/2 /temporaneo/3 /rsynced-copy-1 /rsynced-copy-2 /rsynced-copy-3 -all
    will run 7 threads, each taking care of one directory. The hypothesis is that the six copies are each on a different device, and the server has plenty of cores and NICs. That is normal in data-storage and virtualization environments (at least in mine).
    Win32 and Win64 builds:
    http://www.francocorbelli.it/zpaqfranz.exe
    http://www.francocorbelli.it/zpaqfranz32.exe
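    The one-thread-per-folder comparison could be sketched like this (illustration only, not the zpaqfranz source; error handling omitted):
    #include <cstdint>
    #include <cstdio>
    #include <filesystem>
    #include <map>
    #include <string>
    #include <thread>
    #include <vector>

    namespace fs = std::filesystem;
    using FileMap = std::map<std::string, uintmax_t>;   // relative path -> size

    FileMap scan(const fs::path& root) {
        FileMap m;
        for (const auto& e : fs::recursive_directory_iterator(root))
            if (e.is_regular_file())
                m[fs::relative(e.path(), root).generic_string()] = e.file_size();
        return m;
    }

    int main(int argc, char** argv) {
        // argv[1] = master, argv[2..] = slaves; scan all of them in parallel (N+1 threads)
        std::vector<FileMap> maps(argc - 1);
        std::vector<std::thread> workers;
        for (int i = 1; i < argc; ++i)
            workers.emplace_back([&, i] { maps[i - 1] = scan(argv[i]); });
        for (auto& t : workers) t.join();

        // compare each slave against the master by name and size only
        for (size_t s = 1; s < maps.size(); ++s)
            for (const auto& [path, size] : maps[0])
                if (auto it = maps[s].find(path); it == maps[s].end() || it->second != size)
                    std::printf("mismatch in %s: %s\n", argv[s + 1], path.c_str());
    }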
    2653 replies | 1131086 view(s)
  • CompressMaster's Avatar
    11th January 2021, 17:11
    Thanks. But I am not familiar with Java or Android programming, so I don't know how I can get it to work. A detailed step-by-step manual would be very beneficial :). btw, it would be better to rename this thread to something like "android camera - use different zooming algorithm".
    2 replies | 161 view(s)
  • EmilyWasAway's Avatar
    11th January 2021, 12:46
    I'm reverse engineering a version of Asobo Studio's DPC archive format used in the PC release of the game FUEL (2009). I am able to unwrap the first "layer" of the format by breaking the archive down into the files described in the DPC header, using a modified version of this MexScript. However, these extracted files appear to be compressed with a custom LZ variant. Some games released before FUEL (CT Special Forces: Fire for Effect, Ratatouille, and WALL-E) each used a slightly different LZ variant than the previous release, so I am expecting FUEL to use something similar to those. @Shelwien has provided a series of unLZ_rhys scripts in previous posts (linked at the bottom), but none of them seem to properly decompress the files I extracted. I have attached a selection of extracted files that appear to be compressed and contain a small amount of text near the beginning. They all follow a similar pattern to the one in this image, which closely resembles the compressed files from the previous posts. In theory this should only require a small modification to the unLZ_rhys tool, but unfortunately I cannot seem to figure out the header layout/mask for this new version of the format. Any help with how to modify the tool, or advice in general, would be greatly appreciated. If you need more samples or the original DPC files I can provide them. https://encode.su/threads/3147-Reverse-Engineering-Custom-LZ-Variant https://encode.su/threads/3526-Asobo-s-Ratatouille-DPC-Data-Compression
    5 replies | 420 view(s)
  • suryakandau@yahoo.co.id's Avatar
    11th January 2021, 08:55
    There is a trade-off between compression ratio and speed... like cmix... :)
    219 replies | 20336 view(s)
  • kaitz's Avatar
    11th January 2021, 04:22
    kaitz replied to a thread Paq8sk in Data Compression
    I think you can get a10 below 618xxx in 100sec or less. :D 619xxx in 50 sec.
    219 replies | 20336 view(s)
  • Shelwien's Avatar
    11th January 2021, 03:23
    > In a multithreaded system with multiple cores and an operating system with virtual memory
    > (windows, linux, unix), can you have a CPU exception when two instructions modify the same memory cell?
    No, there are no exceptions for this. Just that a "global variable" without "volatile" might actually be kept in a register.
    > Or does the content simply become not well defined?
    In a sense. You just can't predict the order of read/write operations performed by different threads.
    15 replies | 532 view(s)
  • Shelwien's Avatar
    11th January 2021, 03:18
    > The question is: if it is NOT important that the global variables are
    > perfectly up to date, can I safely (no CPU exception) avoid a semaphore or
    > something like that (obviously reducing the latency, this is the ultimate goal)?
    On x86 yes, though you'd have to add the "volatile" specifier to the variables accessed from multiple threads. On some weird platforms like PPC and embedded ones you might also need explicit cache flushes and/or intrinsics like __sync_synchronize(). So yes, on x86 it's quite possible to implement MT without explicit semaphores - it's simply less efficient when a thread spins in a loop waiting for some global variable, while with thread APIs it could release the core to the OS while it waits. There are also some interesting new tools: https://gcc.gnu.org/onlinedocs/gcc-4.9.2/gcc/X86-transactional-memory-intrinsics.html
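    For reference, in portable C++ the busy-wait pattern discussed above is usually written with std::atomic, which also gives the cross-platform ordering guarantees that plain or volatile globals do not promise (a minimal sketch, my own example):
    #include <atomic>
    #include <thread>

    std::atomic<bool> ready{false};
    int payload = 0;                       // written before 'ready' is set, read after it is seen

    void producer() {
        payload = 42;
        ready.store(true, std::memory_order_release);   // publish
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire))  // spin until published
            std::this_thread::yield();                  // give the core back to the OS while waiting
        // payload == 42 is guaranteed to be visible here
    }

    int main() {
        std::thread c(consumer), p(producer);
        p.join(); c.join();
    }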
    15 replies | 532 view(s)
  • Shelwien's Avatar
    11th January 2021, 02:54
    https://stackoverflow.com/questions/37763257/android-bitmap-resizing-using-better-resampling-algorithm-than-bilinear-like-l
    2 replies | 161 view(s)
  • suryakandau@yahoo.co.id's Avatar
    11th January 2021, 01:42
    Yes, I am interested in the limit of the JPEG compression ratio. Btw, how much can JPEG be compressed?
    219 replies | 20336 view(s)
  • Lithium Flower's Avatar
    10th January 2021, 23:04
    @ Jyrki Alakuijala Thank you for your reply. If I want to increase fidelity in VarDCT mode (jpeg xl 0.2), is target distance -d 0.8 (Speed: kitten) probably a good distance?
    -q 90 == -d 1.000 // visually lossless (side by side)
    -q 91 == -d 0.900
    -q 92 == -d 0.800
    -q 93 == -d 0.700
    -q 94 == -d 0.600
    -q 95 == -d 0.550 // visually lossless (flicker-test)
    39 replies | 3228 view(s)
  • Piotr Tarsa's Avatar
    10th January 2021, 21:39
    Well, there are many things at play:
    1) There should be no exception on write, whatever that means. One thread will win the race. However, when you read the value back it could be in an inconsistent state, e.g. one thread won with one part of the result and the other thread won with the other part, so in the end the result is corrupted and using it will result in an exception, segfault, etc.
    2) There is always some transaction size. I think that if you have a register of size 2^N bytes and you write to a memory location aligned at 2^N bytes, then your write will either succeed fully or be overwritten fully. This means that if you e.g. store a pointer to a field aligned to the pointer size, it will either succeed fully or be overwritten fully by another thread. In either case there will be a valid pointer if both threads write valid pointers.
    3) You need to be aware of https://en.wikipedia.org/wiki/Memory_model_(programming) and https://en.wikipedia.org/wiki/Consistency_model . For example, if you don't use volatile or atomic modifiers, the compiler is allowed to cache the value in a register and potentially update the real memory cell very rarely. If you don't use memory fences (atomics trigger memory fences), a CPU core could delay updating other cores, so another core could get stale data.
    4) Transformations (e.g. addition) are done at the CPU level, so the CPU needs to perform several steps: load value, change value, store value. Since there are multiple steps, another CPU core could access the data between the steps of the first core. Therefore, to implement atomics, instructions like https://en.wikipedia.org/wiki/Compare-and-swap are needed to verify at the end of the transformation that the original value is still at the original memory location. If not, the compare-exchange instruction fails and the whole transformation is repeated; this process repeats until the compare-exchange succeeds. With reasonably low contention between threads the success rate is high.
    5) The CPU instructions define the guarantees you'll see in practice. So if you copy 8 bytes one byte at a time and two threads are doing that on the same memory location, you won't get the guarantees of 8-byte writes done as a single instruction.
    6) On some CPUs (e.g. ARM ones) there are only aligned writes, so the compiler has to emulate unaligned writes using aligned writes. For example, if you write 4 bytes at memory address 13: 13 % 4 != 0, so the compiler needs to issue two 4-byte writes, each transforming data that's already there. Because there is a multi-step non-atomic transformation, there could be data corruption if multiple threads access the memory location.
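    A tiny illustration of the compare-and-swap retry loop described in point 4 (sketch only): the transformation is repeated until no other thread has raced in between.
    #include <atomic>
    #include <cstdint>

    std::atomic<uint64_t> counter{0};

    void add_with_cas(uint64_t delta) {
        uint64_t expected = counter.load();
        // compare_exchange_weak refreshes 'expected' with the current value on failure,
        // so the loop retries with up-to-date data until the swap succeeds.
        while (!counter.compare_exchange_weak(expected, expected + delta)) {
            // another thread modified 'counter' between our load and the CAS; retry
        }
    }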
    15 replies | 532 view(s)
  • fcorbelli's Avatar
    10th January 2021, 21:22
    I will make a Debian virtual machine and fix the BSD-dependent code. But the question is still the same as in the first post: in a multithreaded system with multiple cores and an operating system with virtual memory (Windows, Linux, Unix), can you get a CPU exception when two instructions modify the same memory cell? Or does the content simply become not well defined?
    15 replies | 532 view(s)
  • CompressMaster's Avatar
    10th January 2021, 21:19
    Is there a good Android-related forum somewhere? I want to alter the camera's digital zooming algorithm to use Gaussian interpolation instead of bilinear.
    2 replies | 161 view(s)
  • CompressMaster's Avatar
    10th January 2021, 21:13
    CompressMaster replied to a thread Paq8sk in Data Compression
    @suryakandau what about optimizing paq8sk for file A10.jpg from Maximum Compression Corpus? I am interested whats the limit in compression ratio.:)
    219 replies | 20336 view(s)
  • Piotr Tarsa's Avatar
    10th January 2021, 20:56
    What if you use `std::atomic<std::int64_t>` instead of its alias `atomic_int64_t`?
    15 replies | 532 view(s)