Activity Stream

  • Scope's Avatar
    Today, 21:16
    ​https://chromium.googlesource.com/codecs/libwebp2/+/b77ef4cce1a8b5e7351756a402272000c60cc6bb
    10 replies | 414 view(s)
  • pklat's Avatar
    Today, 20:38
    Neural networks could probably be very good at 'restoring', say, screenshots from scanned old computer magazines. They could recognize windows, icons, mouse pointers, fonts, etc., and reconstruct them while compressing.
    5 replies | 819 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Today, 18:50
    You are welcome
    1004 replies | 350022 view(s)
  • LucaBiondi's Avatar
    Today, 17:20
    LucaBiondi replied to a thread Paq8pxd dict in Data Compression
    Ah, OK, but if you get a crash I suppose it is not the right way. Thank you!
    1004 replies | 350022 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Today, 16:13
    Downgrading the memory option in each model can make the process successful, but the compression is still worse.
    1004 replies | 350022 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Today, 15:51
    I have looked at the source code but I am not sure which part causes that error. The fix for that crash is to change line 1268 from 10000 down to 1000, but the compression is worse than before.
    1004 replies | 350022 view(s)
  • moisesmcardona's Avatar
    Today, 15:22
    Also don't forget to submit your changes to the paq8pxd repo: https://github.com/kaitz/paq8pxd/pulls
    1004 replies | 350022 view(s)
  • LucaBiondi's Avatar
    Today, 15:20
    LucaBiondi replied to a thread Paq8pxd dict in Data Compression
    Thanks. Have you taken a look at the bug that crashes the app? I will not test versions that crash because I usually use the -x12 option. Thank you!
    1004 replies | 350022 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Today, 15:01
    Paq8pxd90fix3 - improved and enhanced JPEG compression using the -s8 option on:
    f.jpg 112038 bytes -> 80984 bytes
    a10.jpg 842468 bytes -> 621099 bytes
    dsc_0001.jpg 3162196 bytes -> 2178889 bytes
    Binary and source code are included.
    1004 replies | 350022 view(s)
  • Shelwien's Avatar
    Today, 03:51
    http://nishi.dreamhosters.com/u/instlat.html http://nishi.dreamhosters.com/u/instlat_pl_v0.7z based on http://instlatx64.atw.hu/ https://github.com/InstLatx64/InstLatx64/
    0 replies | 72 view(s)
  • e8c's Avatar
    Yesterday, 22:25
    Point of view for a better Time and Date System: https://app.box.com/s/wugks9n7bh9skomtxjig8bhk3bgakp04 Decimal Time (https://en.wikipedia.org/wiki/Decimal_time) and Infinite Past Calendar: "(9)" in "(9)501st" like in "0.(9)", no "0th" (or "minus 1st") year in the past, and "overflow" in the future; the year starts on the 1st day after the longest night in the Northern Hemisphere.
    0 replies | 50 view(s)
  • Jyrki Alakuijala's Avatar
    Yesterday, 20:11
    Jim is eyeballing different codecs on YouTube. JPEG XL wins this simple test hands down.
    51 replies | 3299 view(s)
  • CompressMaster's Avatar
    Yesterday, 19:39
    I want to synchronize history and cookies between many Android and non-Android browsers. Is there some app to do that? I have found only xBrowserSync, but it is able to sync only bookmarks. Are there others?
    0 replies | 45 view(s)
  • Jyrki Alakuijala's Avatar
    Yesterday, 14:45
    We have an architecture in JPEG XL that allows any visual metric to be used for adaptive quantization. We have just been too busy with everything else and have not been able to productionize this yet. All quality-related changes are QA heavy and introduce a long tail of manual work that drags on into future changes, too, so we have postponed this for now.
    Some of the next things we plan:
    - speed up butteraugli further
    - speed up (lossless) encoding
    - better adversarial characteristics in butteraugli's visual masking so that it can be used for ultra-low-bpp neural-encoding-like solutions
    - improve pixel art coding density
    - low color count image improvements, and hybrid low color count + photographic/gradient image improvements
    - progressive alpha
    - make JXL's butteraugli also alpha-aware (blend on black/white, take the worst)
    - API improvements and polishing
    - extend the competitive range from 0.3 BPP to 0.07 BPP by using larger DCTs and layering (long-term improvement)
    Some lossless photographs contain a lot of noise. No method can compress the noise losslessly, so lossless compression of photographs may be more relevant for very high quality imaging conditions with no noise. Lossless photograph compression may be relevant in reproduction and for photography enthusiasts. Personally, I consider making photographs visually lossless (with an application-dependent error margin) a more practical concept than true lossless.
    WebP v1 could have been more ambitious on quality on the lossy side. On lossless, it would have been great if we had been able to deploy layers in it to get 16-bit support. Ideally a progressive format would be slightly more useful than a non-progressive one.
    The WebP v1 lossy file format didn't require initial research. It was just repackaging. I spent seven man-months coming up with WebP v1 lossless. The research phase for AV1 could be a 100+ man-year project, JPEG XL and WebP v2 are in the 10-30 man-year range (all of these are very rough guesses). Comparing WebP v1 to AV1, JPEG XL and WebP v2, the first is in the 'quick hack' category, whereas the others are serious big research projects, especially so for AV1. AVIF again is just repackaging, so it is a 'quick hack' separated from a formidable research project.
    In my view, ideally the WebP v2, AVIF, HEIF/HEIC and JPEG XL developers would consolidate the formats, concentrate the efforts to build high-quality encoders/decoders, and come up with one simple and powerful solution that would satisfy all the user needs.
    51 replies | 3299 view(s)
  • Shelwien's Avatar
    Yesterday, 13:58
    Looks like oodle: https://zenhax.com/viewtopic.php?t=14376
    1 replies | 121 view(s)
  • gameside's Avatar
    Yesterday, 13:39
    Hi, sorry for a maybe off-topic question. I'm trying to reverse the format of a DATA file from Assassin's Creed Valhalla, but I don't know much about compression methods. Can anyone tell me which compression method this file uses and how I can decompress it? Thanks.
    1 replies | 121 view(s)
  • Shelwien's Avatar
    Yesterday, 12:58
    > attachment is downloaded as a jpg
    It's a screenshot from a file comparison utility. Blue numbers are different.
    > is the input 16bit?
    Yes, 16-bit grayscale.
    > Did you expect a 16bit output?
    Yes, FLIF and JPEG XL handle it correctly.
    > What command did you use to generate the attachment?
    convert.exe -verbose -depth 16 -size 4026x4164 gray:1.bin png:1.png
    cwp2.exe -mt 8 -progress -q 100 -z 6 -lossless 1.png -o 1.wp2
    dwp2.exe -mt 8 -progress -png 1.wp2 -o 1u.png
    convert.exe -verbose -depth 16 png:1u.png gray:1u.bin
    > (webp2 only handles 10bit samples at max, btw)
    That's ok, but cwp2 doesn't say anything about -lossless not being lossless.
    10 replies | 414 view(s)
  • Jyrki Alakuijala's Avatar
    Yesterday, 12:34
    Correct. When I spoke about near-lossless actually reducing the image size, I referred to the experience of doing it for 1000 PNG images from the web. There, it is easy to get 20-40 % further size reduction. Small color count images are of course best compressed using a palette. A palette captures the color component correlations wonderfully well.
    10 replies | 414 view(s)
  • skal's Avatar
    Yesterday, 10:14
    That seems to fall into the "Responsive Web" category, where the image served is adapted to the requester's display (resolution, dpi, bit depth, etc.). Not sure I understand your example (the attachment is downloaded as a jpg): is the input 16-bit? Did you expect a 16-bit output? What command did you use to generate the attachment? (webp2 only handles 10-bit samples at max, btw)
    10 replies | 414 view(s)
  • Shelwien's Avatar
    Yesterday, 02:58
    As to the definition of "near-lossless", I think there's another option worth considering. There's a difference between the displayed image and how it is stored - palette, colorspace, "depth" bits, alpha stuff, etc. So I think we can define near-lossless as providing an identical displayed image while not necessarily producing output identical to the input (for example with 16-bit input and an 8-bit display). In this case it should provide strictly better compression than true lossless, since we can save bits on encoding that lost information. Btw, here's what webp2 currently does for 16-bit images in lossless mode. I guess it's "near-lossless" :)
    10 replies | 414 view(s)
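    A minimal sketch (my illustration, not Shelwien's code) of this "identical displayed image" reading of near-lossless, assuming 16-bit samples shown on an 8-bit display via a simple high-byte mapping: only the high byte affects what is displayed, so the low byte can be stored approximately (or dropped) without changing the displayed image.

    #include <cstdint>
    #include <cstdio>

    // Value actually shown on an 8-bit display for a 16-bit sample
    // (simple truncating high-byte mapping, assumed here for illustration).
    static uint8_t Displayed(uint16_t v) { return static_cast<uint8_t>(v >> 8); }

    int main() {
      const uint16_t original = 0xABCD;
      const uint16_t rounded  = 0xAB00;  // low byte dropped by a hypothetical near-lossless encoder
      std::printf("displayed: %02X vs %02X -> %s\n",
                  static_cast<unsigned>(Displayed(original)),
                  static_cast<unsigned>(Displayed(rounded)),
                  Displayed(original) == Displayed(rounded) ? "identical" : "different");
      return 0;
    }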
  • Shelwien's Avatar
    Yesterday, 02:36
    Windows binaries.
    10 replies | 414 view(s)
  • Scope's Avatar
    Yesterday, 01:59
    +Flic, Qic, SSIMULACRA, Butteraugli, ...
    Btw, are there no plans to integrate SSIMULACRA as an additional metric (for example, for lower quality), or would it not give any advantages and be difficult to integrate? It would also be interesting to read about updates or planned experiments and improvements for JPEG XL (because even on GitLab there are no descriptions for updates and changes).
    Good compression of photographic images is important, but, as web content for delivery, people much more often compress artificial/artistic images losslessly than photographic ones; even my comparison contains mainly such images (this is a reflection of reality rather than any preference of mine - it is very difficult to find large sets of true lossless photos that were not created for tests).
    About the development and existence of many different image formats: on the one hand, fragmentation is bad and people hate new formats if they are not supported in all their usual applications. But, on the other hand, look at the large number of video formats, even if we take, for example, only the codecs of one company - On2 (which was later acquired by Google):
    - VP3 still exists (it became Theora)
    - VP6 and VP7 were supported in Flash Player (and disappeared together with it)
    - VP8 is also still supported in browsers and WebRTC and is used in many video games
    - VP9 is the main format on YouTube
    - and the newest, AV1, was created through the development and practical use of all past generations of codecs
    And all this happened in less time than JPEG has existed. Also, more than 10 years have passed since the creation of WebP v1; perhaps images do not need such frequent updates and so many different formats, but this is not always a bad thing, and it is much more difficult to create a format without flaws that will not need to be changed or improved over time; different teams may also have different goals and views.
    51 replies | 3299 view(s)
  • cssignet's Avatar
    Yesterday, 01:46
    I should probably check this first. WebP v2 is nice stuff btw.
    51 replies | 3299 view(s)
  • Scope's Avatar
    Yesterday, 01:00
    Sometimes I've seen images where near-lossless in WebP encoded to a much smaller size (compared to lossless), without visual differences (compared to lossy). As another data point, with JPEG XL I mostly had better results with the lossy mode (VarDCT with values around -d 0.3 ... 0.5) than with the various near-lossless (Modular -N ...), lossy-palette (--lossy-palette --palette=0) and lossy modular (-q 95-99) modes; perhaps WebP v2, due to its more efficient lossy mode, will be in the same situation, but these modes in JPEG XL are also not yet fully developed.
    10 replies | 414 view(s)
  • skal's Avatar
    Yesterday, 00:29
    Cases where lossy is larger than PNG are typically graphic-like content: charts, drawings, etc. Usually these don't benefit from near-lossless (nor lossy, which smears colors because of YUV420), and they are already quite small to start with. Actually, near-lossless can even increase the size compared to lossless in such cases.
    Random example from the web: http://doc.tiki.org/img/wiki_up/chart_type.png
    original size: 17809
    webp lossless: 7906
    near-lossless 90: 8990
    lossy -q 96: 30k (+smear!)
    and the same with jpeg-xl:
    ./tools/cjxl -s 8 chart_type.png -q 100
    Compressed to 11570 bytes (0.629 bpp).
    ./tools/cjxl -s 8 chart_type.png -q 100 -N 5
    Compressed to 15724 bytes (0.855 bpp).
    and wp2:
    cwp2 chart_type.png -tile_shape 2 -q 100
    output size: 7635 (0.42 bpp)
    cwp2 chart_type.png -tile_shape 2 -q 99
    output size: 8737 (0.47 bpp)
    10 replies | 414 view(s)
  • Jyrki Alakuijala's Avatar
    30th November 2020, 23:39
    I wrote the first version of WebP near-lossless in December 2011. The code was not considered an important feature and it was not deployed in the initial launch. The first version only worked for images without PREDICTOR_TRANSFORM, COLOR_TRANSFORM, and COLOR_INDEXING_TRANSFORM. This initial version was enabled much later, possibly around 2014, without an announcement, and would only work with images that were not a great fit for the PREDICTOR_TRANSFORM. Later (around 2016?) Marcin Kowalczyk (Riegeli author) ported the original code to work with the PREDICTOR_TRANSFORM. No announcement was made of that relatively significant improvement.
    When I originally designed it, I considered three variations of near-lossless:
    1. replacing the last bit with the first bit (--near_lossless=80)
    2. replacing the two last bits with the two first bits (--near_lossless=60)
    3. replacing the three last bits with the three first bits (--near_lossless=40)
    I didn't just want to remove values locally, because having some specific values get a larger population count would naturally reduce their entropy. I would only do this when the monotonicity of the image around the pixel would not be affected, i.e., the new pixels would not be a new minimum or maximum. In the launch of the feature, two more modes were added (--near_lossless=20 and --near_lossless=0) for one and two more bits of near-lossless loss. I think these are mostly dangerous, can be confusing, and are rarely if ever useful. I was never able to see a difference between --near_lossless=80 and true lossless, at least in my original design. It was possible to see (with severe pixel ogling) some differences at --near_lossless=60, but it would still be far superior to what was available in lossy formats.
    When you look at https://developers.google.com/speed/webp/docs/webp_lossless_alpha_study and Figure 3, you can wonder what that chart would look like if pure lossy were compared against the PNGs. It turns out that a one-in-a-thousand pure lossy image requires 20x more storage capacity than the same image compressed with lossless, and for about 40-50% of the web's PNG images lossless is smaller than high quality lossy. Near-lossless can make that one-in-a-thousand image smaller still; normal lossy just cannot. Placing near-lossless methods in the range 96-99 may lead to a situation where quality 95 produces a 20x larger file than quality 96. This may be surprising for a user. The actual performance differences of course depend on the actual implementation of near-lossless -- this assumes near-lossless means LZ77 but no integral transforms.
    10 replies | 414 view(s)
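    A minimal sketch (not the libwebp implementation) of the bit-replacement idea described above, assuming 8-bit channel values; the scheme described above additionally applies it only where it would not create a new local minimum or maximum, a check omitted here.

    #include <cstdint>
    #include <cstdio>

    // Replace the k lowest bits of v with copies of its k highest bits
    // (k = 1, 2, 3 corresponding to --near_lossless=80, 60, 40 above).
    static uint8_t ReplicateHighBits(uint8_t v, int k) {
      const uint8_t mask = static_cast<uint8_t>(0xFF << k);
      return static_cast<uint8_t>((v & mask) | (v >> (8 - k)));
    }

    int main() {
      for (int k = 1; k <= 3; ++k)
        std::printf("k=%d: 0x7F -> 0x%02X, 0xFF -> 0x%02X\n", k,
                    static_cast<unsigned>(ReplicateHighBits(0x7F, k)),
                    static_cast<unsigned>(ReplicateHighBits(0xFF, k)));
      return 0;
    }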
  • skal's Avatar
    30th November 2020, 23:33
    It's the same name because it's targeting the same use-case (the web). As I mentioned in these slides https://bit.ly/image_ready_webp_slides, WebP2 is more of the same. The WebP format hasn't changed since 2011. The encoder/decoder in libwebp has. WebP had been supported by 90% of the installed base before Apple eventually switched to it. That's more than just 'moderate'. WebP2 files haven't been released in the wild yet, so I doubt users are receiving confusing .wp2 files.
    51 replies | 3299 view(s)
  • LucaBiondi's Avatar
    30th November 2020, 23:33
    LucaBiondi replied to a thread Paq8pxd dict in Data Compression
    Hi Darek! Yes true
    1004 replies | 350022 view(s)
  • skal's Avatar
    30th November 2020, 23:20
    It is lossless (because of the -q 100 option indeed, which means 'lossless' in cwp2)... but in the premultiplied world! Meaning: cwp2 is discarding everything under the alpha=0 area and premultiplying the rest, which could explain the difference you're seeing with 'pngdiff' if this tool is not doing the measurement in premultiplied space.
    51 replies | 3299 view(s)
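    A minimal sketch of alpha premultiplication for 8-bit RGBA (my illustration, not libwebp2 code): whatever RGB data sits under alpha == 0 collapses to zero, so a comparison done in straight-alpha space will flag differences there even though the rendered image is unchanged.

    #include <cstdint>
    #include <cstdio>

    struct RGBA { uint8_t r, g, b, a; };

    // Scale each color channel by alpha/255 with rounding; alpha stays as-is.
    static RGBA Premultiply(RGBA p) {
      const auto mul = [&](uint8_t c) {
        return static_cast<uint8_t>((c * p.a + 127) / 255);
      };
      return { mul(p.r), mul(p.g), mul(p.b), p.a };
    }

    int main() {
      const RGBA hidden = { 200, 10, 30, 0 };  // color stored under a fully transparent pixel
      const RGBA out = Premultiply(hidden);
      std::printf("(%d,%d,%d,%d) -> (%d,%d,%d,%d)\n",
                  hidden.r, hidden.g, hidden.b, hidden.a,
                  out.r, out.g, out.b, out.a);  // prints (200,10,30,0) -> (0,0,0,0)
      return 0;
    }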
  • skal's Avatar
    30th November 2020, 23:13
    There's a tool, extras/mk_preview, in the extras/ directory to generate and manipulate previews as raw bits. For instance:
    ./extras/mk_preview input.png -o preview.bin -d preview.png -m voronoi -g 48 -n 400 -s 250 -i 2000
    You can insert this preview data 'preview.bin' into a final .wp2 compressed bitstream using 'cwp2 -preview preview.bin ...'. You can also try to generate a preview from cwp2 using 'cwp2 -create_preview ...', but there are not many options available for tweaking.
    The triangle preview can be eyeballed in the 'vwp2' tool by pressing the '\' key if you are visualizing a .wp2 bitstream with a preview available. Otherwise, just use mk_preview's -d option to dump the pre-rendered preview. I'm working on a JavaScript renderer in HTML btw, that'll be easier. Triangle preview is a very experimental field, and this is obvious when looking at the number of options in the 'mk_preview' experimentation tool!
    The use-case of near-lossless is still unclear within WebP2's perimeter: who is using near-lossless and to what effect? I don't have the answer to this question. Suggestions welcome. I can totally see someone using 100% lossless to preserve a pristine copy of an image (and note that in this case, it's likely said user might as well go for a RAW format; file size is probably less of an issue than CPU or ease of editing for them...). But I think there's a mental barrier to using near-lossless for these users: if you don't stick to 100% lossless, you might as well just go lossy (even at -q 95) because it's not going to be pristine anyway. Stopping at the near-lossless intermediate stage doesn't bring much to the table in terms of file-size reduction, compared to just going high-quality lossy.
    10 replies | 414 view(s)
  • skal's Avatar
    30th November 2020, 22:52
    https://chromium.googlesource.com/codecs/libwebp2/
    What to expect? The WebP 2 experimental codec mostly pushes the features of WebP further in terms of compression efficiency. The new features (like 10-bit HDR support) are kept minimal. The axes of experimentation are:
    - more efficient lossy compression (~30% better than WebP, as close to AVIF as possible)
    - better visual degradation at very low bitrate
    - improved lossless compression
    - improved transparency compression
    - animation support
    - ultra-light previews
    - lightweight incremental decoding
    - small container overhead, tailored specifically for image compression
    - full 10-bit architecture (HDR10)
    - strong focus on software implementation, fully multi-threaded
    The use cases remain mostly the same as WebP: transfer over the wire, faster web, smaller apps, better user experience... WebP 2 is primarily tuned for the typical content available on the web and in mobile apps: medium-range dimensions, transparency, short animations, thumbnails.
    WebP2 is currently only partially optimized and, roughly speaking, 5x slower than WebP for lossy compression. It still compresses 2x faster than AVIF, but takes 3x more time to decompress. The goal is to reach decompression speed parity.
    10 replies | 414 view(s)
  • skal's Avatar
    30th November 2020, 22:48
    Actually, let's start a fresh thread dedicated to the topic...
    182 replies | 59603 view(s)
  • Jon Sneyers's Avatar
    30th November 2020, 22:32
    Perhaps it does premultiplied alpha?
    51 replies | 3299 view(s)
  • Adreitz's Avatar
    30th November 2020, 21:32
    This thread is getting a little off-topic in the discussion about WebPv2, but I will say that I'm a little concerned about it. I'm sure WebPv2 is very interesting from a computer science perspective and I'll probably spend some time tinkering with it when it gets a little more mature, and I'm all for continued scientific and mathematical development, but I feel like the introduction of WebPv2 so soon after v1 became stable is sabotaging both formats from reaching practical acceptance.
    Everybody always refers to JPEG as the mark to reach when they introduce a new image compression format nowadays, but from my perspective JPEG's success was down to two factors: 1) it was good enough and 2) it was stable for decades. WebPv1 may beat JPEG from a "good enough" standpoint, and this is where most of the comparisons have focused. But the same team releasing an incompatible file format with the same name less than three years after WebPv1 reached 1.0 is anything but stable. WebPv1 seems to be reaching a moderate level of acceptance at least from web developers (probably not from end users, though). But I can see a lot of people getting confused by receiving some file in the "WebP" format that can't be opened by their software of choice because that software only supports the other WebP format. And if the life of WebPv1 has only effectively been a few years long, what guarantee will people have that WebPv2 won't be abandoned in a few years, too, after the computer scientists lose interest and move on to something they feel is better? Who would want to use either format for more than ephemeral use cases in this circumstance? And all this comes down to the question, "What good is a file format if no one uses it?"
    Perhaps, if computer scientists felt there were a lot of areas of image compression that needed more research and development, it might have been better to position WebP as simply a research format for a number of years, and so not worry about acceptance or bitstream compatibility or anything. Then the scientists could be free to greatly modify design parameters if they were discovered to be suboptimal and experiment with new features or capabilities, while the general public could safely ignore it. Then, when the format stabilizes to a certain point or reaches some threshold of improvement over, say, JPEG, it would be formalized and released as a new public image format and the scientists could continue to tinker with their research format until they reach the next threshold of improvement a couple decades later.
    Aaron
    51 replies | 3299 view(s)
  • Darek's Avatar
    30th November 2020, 20:22
    Darek replied to a thread Paq8pxd dict in Data Compression
    Yes, this is the crash which I've mentioned in this post: https://encode.su/threads/1464-Paq8pxd-dict?p=67248&viewfull=1#post67248
    A10.jpg runs only for up to -x10
    ohs.doc (with JPEG inside) runs only for up to -x13
    1004 replies | 350022 view(s)
  • LucaBiondi's Avatar
    30th November 2020, 18:10
    LucaBiondi replied to a thread Paq8pxd dict in Data Compression
    Hi guys, starting from version 90 I get a crash if I use too much memory. For example:
    >paq8pxd64 -x10 testset_docx c:\compression\doc_testset\*.docx <--- RUNS FINE
    >paq8pxd64 -x11 testset_docx c:\compression\doc_testset\*.docx <--- CRASH
    Is this behaviour normal? Thanks! Luca
    1004 replies | 350022 view(s)
  • lz77's Avatar
    30th November 2020, 16:08
    I guess the remaining €2000 should be awarded to the author of babylz for the minimum (de)compressor size. :)
    130 replies | 13482 view(s)
  • Ms1's Avatar
    30th November 2020, 15:10
    Yes, they are ready to insert a dictionary/model for our dirty data inside their neat compressors.
    Test 4, decompressor_size, bytes:
    *pglz 3446477
    *k5 202450
    *babylz 9014
    *pgcm 3145170
    *LUNA 23447
    *PPMd 1413982
    *PPMonstr 1599079
    *NINO 85981
    *sgcm 3150364
    130 replies | 13482 view(s)
  • Scope's Avatar
    30th November 2020, 14:06
    WebP 2: experimental successor of the WebP image format
    https://chromium.googlesource.com/codecs/libwebp2/
    Thread
    I think it would be better to continue the discussion about WebP v2 in this thread.
    And some questions from me: how do I encode and view the triangular previews? As far as I understood, the near-lossless mode will not be changed as much as in JPEG XL, but what improvements are planned and what are the differences from WebP v1?
    182 replies | 59603 view(s)
  • Scope's Avatar
    30th November 2020, 12:40
    I think it is because this image also has pixels with an alpha channel; it seems that by default WebP v2 always has lossy alpha (even with -alpha_q 100?). Before this comparison I checked images without an alpha channel and the compression was lossless. https://i.imgur.com/qqNPCs3.png https://i.imgur.com/txRmjOT.png
    51 replies | 3299 view(s)
  • cssignet's Avatar
    30th November 2020, 12:37
    You are right for WebP v1, but '-q 100' should mean lossless in cwp2. For example, if you convert the image below without transparency (i.e. into PNG truecolor, 24 bits/pixel):
    cwp2 -q 100 0IQLRf5-notrans.png -o 0IQLRf5-notrans.wp2
    output size: 575753 (6.51 bpp)
    dwp2 0IQLRf5-notrans.wp2 -o 0IQLRf5-dec.png
    Decoded 0IQLRf5-notrans.wp2. Dimensions: 707 x 1000. Transparency: no. Saved to '0IQLRf5-dec.png'
    magick compare -metric SSIM 0IQLRf5-notrans.png 0IQLRf5-dec.png show.png
    1
    pngdiff 0IQLRf5-notrans.png 0IQLRf5-dec.png web
    X: 0 (0.000000) R: 0 G: 0 B: 0 A: 0
    In that case, it would be lossless.
    51 replies | 3299 view(s)
  • schnaader's Avatar
    30th November 2020, 11:49
    I'm not familiar with cwp2 parameters, but in WebP v1 you had to use both "-q 100" and "-lossless" for real lossless mode, or use the "-z" lossless presets.
    51 replies | 3299 view(s)
  • cssignet's Avatar
    30th November 2020, 09:53
    I did not try that much; I built it without warnings on gcc 8/10. On your benchmark, 'Pixiv' tab, first PNG:
    cwp2 -q 100 0IQLRf5.png -o 0IQLRf5.wp2
    output size: 576483 (6.52 bpp)
    dwp2 0IQLRf5.wp2 -o 0IQLRf5-dec.png
    Decoded 0IQLRf5.wp2. Dimensions: 707 x 1000. Transparency: yes. Saved to '0IQLRf5-dec.png'
    magick compare -metric SSIM 0IQLRf5.png 0IQLRf5-dec.png show.png
    0.999999
    pngdiff 0IQLRf5.png 0IQLRf5-dec.png web
    X: 190 (0.000269) R: 53 G: 42 B: 95 A: 0
    In other words, the WebP v2 processing would not be lossless. Perhaps it is related to my build, a bug, or something?
    51 replies | 3299 view(s)
  • Jyrki Alakuijala's Avatar
    30th November 2020, 00:37
    Both the WebP and JPEG XL teams have significant experience in building encoders and decoders for existing formats. WebP team members have built WebP version 1's VP8-compatible part and the SJPEG codec. JPEG XL team members have built ZopfliPNG, LodePNG, Guetzli, PVRTC, and Cloudinary's re-encoding options between many formats, including a large variety of strategies. Dr. Alexander Rhatushnyak needs to be mentioned separately as he has a lot of practical experience in building the most efficient ad hoc formats. He is so far the only person who has won the Hutter prize.
    In addition to building coders for existing formats, contributors of JPEG XL have designed the Brunsli, FLIF, FUIF, GRALIC, PIK, QLIC, QLIC2, WebP version 1 alpha, and WebP version 1 lossless image compression formats. The diversity of experience and viewpoints brought in by JPEG XL contributors may partially explain why the JPEG XL codec is more widely applicable in lossless use, and particularly so for content that is not well compressed by previously available technology (PNG and WebP lossless version 1). I usually consider that a new generation of codecs should be roughly 25 % more efficient than the previous generation, and I could not imagine reaching that by positioning the lossless codec exactly in the same performance space as WebP version 1 lossless. This is much of the reason why we focused so strongly on lossless photographs. The next update on lossless JPEG XL will make encoding quite a lot faster (like 2x) and reduce memory use. This should happen on all modes except tortoise and falcon.
    I made a couple of mistakes in the WebP version 1 lossless design which I understood too late in the format's launch. When WebP v2 was started I communicated these mistakes to the WebP team, they evaluated the fixes, and were able to embed them into the new version. Fixing those problems helped possibly 1-2 % (the single most important mistake was not having a bias parameter in the subtract green transform). Moving from Huffman to ANS may have helped another 1-2 %. Some of the improvements (especially palette-related and general heuristic improvements) may be back-portable to WebP version 1, so we can expect a small improvement there (perhaps 2-3 %), too, if the development focus switches back from WebP version 2 to WebP version 1.
    I have made the WebP team aware of the main lossless density improvements in JPEG XL lossless -- context trees, image-related predictors and hybrid-delta-palettization. My understanding is that they evaluated these approaches rigorously and decided against them because of (slightly) different scoping of the efforts. Hybrid-delta-palettization is another example of the 'cross-pollination' between the projects. I had designed delta palettization originally into the format and guided an intern project that experimentally launched it within WebP lossless (3-4 years back). Based on those practical experiences it was easier to build a more efficient delta palettization mode -- but still not easy, it took three serious attempts to get it right.
    Scoping differences of JPEG XL and WebP version 2 are possibly visible even in the title: P in JPEG is for 'Photographic', but P in WebP is for 'Picture'.
    51 replies | 3299 view(s)
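    For context, a minimal sketch of the "subtract green" transform mentioned above, as described in the public WebP v1 lossless format (the bias parameter Jyrki wishes it had is not part of v1 and is not shown):

    #include <cstdint>
    #include <cassert>

    struct Pixel { uint8_t r, g, b, a; };

    // WebP v1 lossless subtract-green: red and blue are replaced by their
    // difference from green, modulo 256; green and alpha are untouched.
    static Pixel SubtractGreen(Pixel p) {
      return { static_cast<uint8_t>(p.r - p.g), p.g,
               static_cast<uint8_t>(p.b - p.g), p.a };
    }

    // The decoder inverts it by adding green back (also modulo 256).
    static Pixel AddGreen(Pixel p) {
      return { static_cast<uint8_t>(p.r + p.g), p.g,
               static_cast<uint8_t>(p.b + p.g), p.a };
    }

    int main() {
      const Pixel in = { 120, 200, 90, 255 };
      const Pixel rt = AddGreen(SubtractGreen(in));
      assert(rt.r == in.r && rt.g == in.g && rt.b == in.b && rt.a == in.a);
      return 0;
    }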
  • skal's Avatar
    29th November 2020, 17:30
    Thanks for taking the time to do these large tests, this is a very much appreciated reference! Re: lossless efficiency: WebP v2 wants to keep v1's speed if possible, while still targeting the same typical web content. This means for instance that no big effort will likely be made at improving photo-like lossless compression (not a real use-case, and making this use-case compress better is very CPU-hungry). It also means that other tools will be declared too costly for inclusion, despite some compression gains. The usual trade-off!
    51 replies | 3299 view(s)
  • Scope's Avatar
    29th November 2020, 16:56
    Yes, but I'm also interested in the early stages of development (when it's still possible to make significant changes to the format), especially since I think the lossless mode in WebP v2 is currently more ready than the lossy one. Also, I'm almost finished encoding (except game screenshots), and by average results WebP v2 is noticeably more efficient than WebP v1 (and with the same decoding speed that's a good result); I also hope that useful lossless features from JPEG XL are applicable to WebP v2 and that there's an exchange of experience between the development of these formats.
    Lossless Image Formats Comparison (Jpeg XL, AVIF, WebP 2, WebP, FLIF, PNG, ...)
    The updated results for JPEG XL will come after the final freezing of the format (because the slowest speed takes a huge amount of time).
    51 replies | 3299 view(s)
  • cssignet's Avatar
    29th November 2020, 13:47
    WebP v2 is still experimental atm, maybe you should wait before doing large trials. I did not try a lot, but unless I missed something, it seems that the lossless mode may not be mature yet when the image has alpha (7c0dceb):
    cwp2 -q 100 in.png -o out.wp2
    output size: 135701 (4.14 bpp)
    dwp2 out.wp2 -o dec.png
    Decoded 1.wp2. Dimensions: 512 x 512. Transparency: yes.
    dssim: 0.000078
    butteraugli (old): 0.027429
    51 replies | 3299 view(s)
  • suryakandau@yahoo.co.id's Avatar
    29th November 2020, 09:38
    Paq8pxd90fix2 - improved JPEG compression using the -s8 option on:
    f.jpg (Darek corpus) 112038 bytes -> 81093 bytes
    a10.jpg (maximum compression corpus) 842468 bytes -> 621370 bytes
    dsc_0001.jpg 3162196 bytes -> 2180217 bytes
    Now it beats paq8px197, paq8sk38 and paq8pxd90. Source code and binary are inside the package.
    1004 replies | 350022 view(s)
  • fcorbelli's Avatar
    29th November 2020, 01:36
    fcorbelli replied to a thread zpaq updates in Data Compression
    Those are normal performances on my PC. Tomorrow I will make a more verbose timing log, just to see the size of the output file. Good night.
    2618 replies | 1119517 view(s)
  • SpyFX's Avatar
    29th November 2020, 01:16
    SpyFX replied to a thread zpaq updates in Data Compression
    all blocks from hdd =>
    Z:\ZPAQ\backup>..\zpaqlist.exe l DISK_F_Y_????.zpaq -all -out 1.txt
    29/11/2020 01:13:24 zpaqlist franz22
    29/11/2020 01:13:24 :default
    29/11/2020 01:14:32 OUTPUT...V 778, F 613125, 764.858.961.314 bytes 5931 blocks
    29/11/2020 01:14:33 W 010% 00061313/00613125
    29/11/2020 01:14:33 W 020% 00122626/00613125
    29/11/2020 01:14:33 W 030% 00183939/00613125
    29/11/2020 01:14:33 W 040% 00245252/00613125
    29/11/2020 01:14:34 W 050% 00306565/00613125
    29/11/2020 01:14:34 W 060% 00367878/00613125
    29/11/2020 01:14:34 W 070% 00429191/00613125
    29/11/2020 01:14:34 W 080% 00490504/00613125
    29/11/2020 01:14:35 W 090% 00551817/00613125
    29/11/2020 01:14:35 T=71.765s (all OK)
    p/s 16sec looks like mystic...
    2618 replies | 1119517 view(s)
  • fcorbelli's Avatar
    29th November 2020, 01:12
    fcorbelli replied to a thread zpaq updates in Data Compression
    C:\zpaqfranz\660>zpaqlist l h:\zarc\copia_zarc.zpaq -all -out z:\1.txt
    28/11/2020 23:09:43 zpaqlist franz22
    28/11/2020 23:09:43 :default
    28/11/2020 23:09:52 OUTPUT...V 1061, F 65628, 5.732.789.993 bytes 554 blocks
    28/11/2020 23:09:53 W 010% 00006563/00065628
    28/11/2020 23:09:53 W 020% 00013126/00065628
    28/11/2020 23:09:54 W 030% 00019689/00065628
    28/11/2020 23:09:54 W 040% 00026252/00065628
    28/11/2020 23:09:55 W 050% 00032815/00065628
    28/11/2020 23:09:55 W 060% 00039378/00065628
    28/11/2020 23:09:56 W 070% 00045941/00065628
    28/11/2020 23:09:56 W 080% 00052504/00065628
    28/11/2020 23:09:58 W 090% 00059067/00065628
    28/11/2020 23:09:59 T=16.062s (all OK)
    C:\zpaqfranz\660>c:\nz\zpaq64 l h:\zarc\copia_zarc.zpaq -all >z:\2.txt
    78.391 seconds (all OK)
    2618 replies | 1119517 view(s)
  • SpyFX's Avatar
    29th November 2020, 01:09
    SpyFX replied to a thread zpaq updates in Data Compression
    No, not the second; it's the third test...
    2618 replies | 1119517 view(s)
  • fcorbelli's Avatar
    29th November 2020, 00:59
    fcorbelli replied to a thread zpaq updates in Data Compression
    Second round (cache loaded)? Thank you
    2618 replies | 1119517 view(s)
  • SpyFX's Avatar
    29th November 2020, 00:35
    SpyFX replied to a thread zpaq updates in Data Compression
    Z:\ZPAQ\backup>..\zpaqlist.exe l DISK_F_Y_????.zpaq -all -out 1.txt
    29/11/2020 00:31:22 zpaqlist franz22
    29/11/2020 00:31:22 :default
    29/11/2020 00:32:08 OUTPUT...V 778, F 613125, 764.858.961.314 bytes 5931 blocks
    29/11/2020 00:32:09 W 010% 00061313/00613125
    29/11/2020 00:32:09 W 020% 00122626/00613125
    29/11/2020 00:32:09 W 030% 00183939/00613125
    29/11/2020 00:32:10 W 040% 00245252/00613125
    29/11/2020 00:32:10 W 050% 00306565/00613125
    29/11/2020 00:32:10 W 060% 00367878/00613125
    29/11/2020 00:32:10 W 070% 00429191/00613125
    29/11/2020 00:32:11 W 080% 00490504/00613125
    29/11/2020 00:32:11 W 090% 00551817/00613125
    29/11/2020 00:32:12 T=49.672s (all OK)
    Z:\ZPAQ\backup>..\zpaq715 l DISK_F_Y_????.zpaq -all > zpaq715.txt
    38.484 seconds (all OK)
    2618 replies | 1119517 view(s)
  • fcorbelli's Avatar
    28th November 2020, 23:52
    fcorbelli replied to a thread zpaq updates in Data Compression
    This forum does not allow attaching .h files. Just rename libzpaq.h.txt to libzpaq.h.
    libzpaq.cpp - LIBZPAQ Version 6.52 implementation - May 9, 2014.
    My zpaqlist is NOT based on the 7.15 source (zpaq.cpp, libzpaq.cpp, libzpaq.h) BUT on 6.60 (with the older, matching libzpaq). Why? https://encode.su/threads/456-zpaq-updates?p=66588&viewfull=1#post66588 Because it enumerates files in a different way than 7.15 (which is useless for a GUI). I removed (almost) all of the compression portion, leaving the listing one (rewrote the mylist() function) and some remnants of various commands that no longer exist.
    Unfortunately the sources of the various versions of ZPAQ, and libzpaq, have very subtle incompatibilities, sometimes really difficult to find even for me (different default parameters, same name, different functionality, etc.). So it is not immediate, at least for me, to bring a 6.60 source to work with the latest libzpaq, and even less to "merge" it with the 7.15 source. So it took me much less time (and effort) to keep the sources (6.60-franz22 and 7.15-franz42) distinct. The first is zpaqlist.exe, the second zpaqfranz.exe.
    2618 replies | 1119517 view(s)
  • Gotty's Avatar
    28th November 2020, 23:34
    Gotty replied to a thread Paq8sk in Data Compression
    Problems are not fixed (except the hashxxk one). Exe compiled with asserts on (debug mode).
    182 replies | 16466 view(s)
  • SpyFX's Avatar
    28th November 2020, 20:27
    SpyFX replied to a thread zpaq updates in Data Compression
    libzpaq.h.txt - LIBZPAQ Version 7.00 header - Dec. 15, 2014.
    libzpaq.cpp - LIBZPAQ Version 6.52 implementation - May 9, 2014.
    last version =>
    libzpaq.h - LIBZPAQ Version 7.12 header - Apr. 19, 2016.
    libzpaq.cpp - LIBZPAQ Version 7.15 implementation - Aug. 17, 2016.
    2618 replies | 1119517 view(s)
  • suryakandau@yahoo.co.id's Avatar
    28th November 2020, 20:03
    Paq8sk38 - improved JPEG compression using the -8 option on:
    f.jpg (Darek corpus) 112038 bytes -> 81110 bytes
    a10.jpg (maximum compression corpus) 842468 bytes -> 623432 bytes
    dsc_0001.jpg 3162196 bytes -> 2188750 bytes
    Source code and binary are inside the package.
    182 replies | 16466 view(s)
  • fcorbelli's Avatar
    28th November 2020, 19:16
    fcorbelli replied to a thread zpaq updates in Data Compression
    This is not so useful for me, because on clients I use only NVMe/SSD disks. Can you please send me your EXE, or try mine? http://www.francocorbelli.it/zpaqlist.exe
    zpaqlist l "h:\zarc\copia_zarc.zpaq" -out z:\default.txt
    zpaqlist l "h:\zarc\copia_zarc.zpaq" -all -out z:\all.txt
    zpaqlist l "h:\zarc\copia_zarc.zpaq" -until 10 -out z:\10.txt
    I attach the source if you want to compile it yourself. The output (-all) of 715 is sorted by version, then by file. Mine is sorted by file, then by version (aka like a Time Machine). To reduce the time to write (and to read) from disk I "deduplicate" the filenames to just "?" (an invalid filename). Writing and reading a 600 MB file (typically the 715 list output of a complex zpaq archive) on magnetic disks takes time. Shrinking it to 170 MB (my test bed) is faster, but not really quick.
    ---
    Result: my PAKKA Windows GUI is much faster than anything else I have found. Of course... only half a dozen competitors :D But it doesn't satisfy me anyway.
    2618 replies | 1119517 view(s)
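    A hypothetical sketch of the filename "deduplication" described above (my illustration, not zpaqlist code, with made-up rows): when the listing is sorted by file and then by version, a filename repeated on consecutive rows is replaced by "?" so the output file shrinks.

    #include <cstdio>
    #include <string>
    #include <vector>

    struct Row { std::string file; int version; long long size; };

    int main() {
      // Already sorted by file, then by version (illustrative data only).
      const std::vector<Row> rows = {
        {"dir/a.txt", 1, 100}, {"dir/a.txt", 2, 120}, {"dir/a.txt", 5, 130},
        {"dir/b.bin", 1, 9000}, {"dir/b.bin", 3, 9050},
      };
      std::string last;
      for (const Row& r : rows) {
        // Print the full name only the first time it appears; "?" afterwards.
        const std::string name = (r.file == last) ? std::string("?") : r.file;
        std::printf("%4d %10lld %s\n", r.version, r.size, name.c_str());
        last = r.file;
      }
      return 0;
    }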
  • Sportman's Avatar
    28th November 2020, 16:14
    CPU Security Mitigation Performance On Intel Tiger Lake: https://www.phoronix.com/scan.php?page=article&item=tiger-lake-mitigations
    30 replies | 6006 view(s)
  • lz77's Avatar
    28th November 2020, 11:07
    Not so surprised, because zlib compresses it 7.3 sec. longer than zstd. I'm surprised by pglz's c_time in Test 1, text, Rapid. Here is their moral: they are ready to do everything for money! ;)
    130 replies | 13482 view(s)
  • SpyFX's Avatar
    28th November 2020, 02:02
    SpyFX replied to a thread zpaq updates in Data Compression
    ..\zpaq715.exe l DISK_F_Y_????.zpaq -all > zpaq715.first.txt
    zpaq v7.15 journaling archiver, compiled Aug 17 2016
    DISK_F_Y_????.zpaq: 778 versions, 833382 files, 24245031 fragments, 764858.961314 MB
    30027797.206934 MB of 30027797.206934 MB (834160 files) shown
    -> 2000296.773364 MB (415375470 refs to 24245031 of 24245031 frags) after dedupe
    -> 764858.961314 MB compressed.
    54.032 seconds (all OK)
    Z:\ZPAQ\backup>..\zpaq715.exe l DISK_F_Y_????.zpaq -all > zpaq715.first.txt
    54.032 seconds (all OK)
    Z:\ZPAQ\backup>..\zpaq715.exe l DISK_F_Y_????.zpaq -all > zpaq715.second.txt
    38.453 seconds (all OK)
    Z:\ZPAQ\backup>..\zpaq715.exe l DISK_F_Y_????.zpaq -all > zpaq715.third.txt
    38.812 seconds (all OK)
    The first launch caches the h/i blocks, which is why the next launches are processed faster. For my archive, the time to get a list of files is 38 seconds if all blocks are in the system file cache.
    Can you do multiple launches? You should reset the system file cache before the first run.
    2618 replies | 1119517 view(s)
  • Gotty's Avatar
    27th November 2020, 23:03
    Gotty replied to a thread Paq8sk in Data Compression
    m1->set(column >> 3 | min(5 + 2 * static_cast<int>(comp == 0), zu + zv), 2048);
    m1->set(coef | min(7, zu + zv), 2048);
    m1->set(mcuPos, 1024);
    m1->set(coef | min(5 + 2 * static_cast<int>(comp == 0), zu + zv), 1024);
    m1->set(coef | min(3, ilog2(zu + zv)), 1024);
    182 replies | 16466 view(s)
  • fcorbelli's Avatar
    27th November 2020, 20:52
    fcorbelli replied to a thread zpaq updates in Data Compression
    You are right, but in my case the archives are big (400+ GB), so the overhead is small. Very quick and very dirty test (same hardware, from SSD to ramdisk, decent CPU).
    Listing of copia_zarc.zpaq: 1057 versions, 3271902 files, 166512 fragments, 5730.358526 MB
    796080.407841 MB of 796080.407841 MB (3272959 files) shown
    -> 10544.215824 MB (13999029 refs to 166512 of 166512 frags) after dedupe
    -> 5730.358526 MB compressed.
    ZPAQ 7.15 64 bit (sorted by version):
    zpaq64 l h:\zarc\copia_zarc.zpaq -all >z:\715.txt
    60.297 seconds, 603.512.182 bytes output
    zpaqlist 64 bit - franz22 (sorted by file):
    zpaqlist l "h:\zarc\copia_zarc.zpaq" -all -out z:\22.txt
    15.047 seconds, 172.130.287 bytes output
    Way slower on magnetic disks, painfully slow from a magnetic-disk NAS, even with 10Gb Ethernet.
    2618 replies | 1119517 view(s)
  • SpyFX's Avatar
    27th November 2020, 19:03
    SpyFX replied to a thread zpaq updates in Data Compression
    A fake file is limited to 64k by the zpaq format; I think one such file is not enough. It seems to me that placing the file sizes in such a file would significantly speed up getting a list of files in the console, but it would be redundant to write the entire list of files in each version.
    2618 replies | 1119517 view(s)
  • fcorbelli's Avatar
    27th November 2020, 18:24
    fcorbelli replied to a thread zpaq updates in Data Compression
    I will think about it. Yes, and an optional ASCII list of all the files and all versions. So, when you "list" an archive, the fake file is uncompressed and sent out as the output. Another option is ASCII comments in versions, so you can do something like
    add ... blablabla -comment "my first version"
    I work almost exclusively with Delphi :D
    2618 replies | 1119517 view(s)
  • SpyFX's Avatar
    27th November 2020, 17:39
    SpyFX replied to a thread zpaq updates in Data Compression
    I wrote that in zpaq there is no size limit on the c/block; at the moment I decided to store there a second usize, which is equal to the sum of the sizes of all d/blocks + all h/blocks. This makes it possible to jump immediately to the first i/block; it seems to me there is a small speed-up. Since there are no boundaries, you can store any additional information in the c/block.
    I don't like that the c/block is not aligned to a 4k boundary, so at the moment I am creating it with a 4k size, so that the final rewrite does not touch other data.
    Do I understand correctly that the fake file is supposed to store information about checksums and file sizes? I like this idea. I also plan to use it for 4K alignment of the first h/i-block in a version and of subsequent c/blocks in the archive. Alignment lets me simplify the algorithm for reading blocks from the archive without system buffering, because I don't like that when working with a zpaq archive, useful cached data is pushed out of the server's RAM.
    p/s I understand that it is very difficult to make fundamental changes in the zpaq code, so I am rewriting all the work with the zpaq archive; I use C# and my own zpaq API (DLL). Almost everything already works :)) but your ideas are forcing me to change the concept of creating a correct archive that would also address your wishes.
    2618 replies | 1119517 view(s)
  • Ms1's Avatar
    27th November 2020, 14:51
    There are only 5 entries in this category, to my regret. That is extremely unlikely to change, because the other people still having issues with libraries are aiming at slower categories. Thus definitely no lower than 5th. It would be too early for me to comment on such questions, but aren't you surprised by the zlib result? It's not far from Zstd in terms of compression ratio.
    130 replies | 13482 view(s)
  • fcorbelli's Avatar
    27th November 2020, 14:50
    fcorbelli replied to a thread zpaq updates in Data Compression
    Only partially, or at least for me. Two big problems:
    1- no space to store anything in a version (ASCII comment)
    2- no space for anything in blocks (end of segment with a 20-byte SHA1, or nothing)
    As stated, I am thinking about "fake" (date==0==deleted) files to store information (7.15 ignores deleted ones). But it is not so easy, and even worse, not so fast. That is typically OK for magnetic disks, not so good for SSDs. But, at least for me, very slow file listing is currently the main defect.
    2618 replies | 1119517 view(s)
  • SpyFX's Avatar
    27th November 2020, 13:19
    SpyFX replied to a thread zpaq updates in Data Compression
    The zpaq format seems to me quite well thought out, and it is possible to squeeze additional improvements out of it without breaking backward compatibility. We can use any CDC, maybe even different ones for different data, but it seems to me that the zpaq CDC is not so bad at the moment. I'm not satisfied with the handling of a large number of small files (hundreds of thousands or more); everything is very slow, as is processing large files from different physical HDDs, all because zpaq715 reads all files sequentially.
    2618 replies | 1119517 view(s)