Activity Stream

  • Bulat Ziganshin's Avatar
    Yesterday, 21:55
    For many years I have bought Beta Tea with the OPA grade in Russian stores. If buying in stores like Magnit is OK for you, look for the red boxes that explicitly specify OPA on the package. Unfortunately, they sell a lot of other, very similar packages with low-quality tea; for example, right now I see two packages with the OPA label on them and two without it.
    1 replies | 127 view(s)
  • TPM's Avatar
    Yesterday, 16:25
    Hi, I'm not sure how to figure out the data compression used in the .DPC files. I've attached a .DPC file, which I used with 'ratatouille_dpc.bms' and 'asobo_ratatouille.bms' with quickbms. I then used unLZ_rhys_v1 to try and decompress it, but it didn't work. How would I go about finding the type of LZ compression used? https://drive.google.com/drive/folders/1mF9mC44u1GOF-GkBJ-LrQZWV1vH31LxK?usp=sharing Thanks, TPM
    0 replies | 106 view(s)
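    A minimal diagnostic sketch for a question like TPM's above: before guessing at a specific LZ variant, it can help to dump the first bytes of the compressed stream and check them against a few well-known magic numbers, and to probe for an embedded zlib stream. This is not from any of the tools mentioned; the file name and probing offsets are purely illustrative.

      import zlib

      # Common stream/frame signatures; absence of all of them suggests a raw or custom LZ codec.
      MAGICS = {
          b"\x1f\x8b": "gzip",
          b"\x28\xb5\x2f\xfd": "zstd",
          b"\x04\x22\x4d\x18": "LZ4 frame",
          b"\x78\x01": "zlib (low compression)",
          b"\x78\x9c": "zlib (default)",
          b"\x78\xda": "zlib (best)",
      }

      def sniff(path: str, probe: int = 16) -> None:
          data = open(path, "rb").read()
          head = data[:probe]
          print("first bytes:", head.hex(" "))
          for magic, name in MAGICS.items():
              if head.startswith(magic):
                  print("looks like:", name)
          # Raw zlib streams sometimes start at a small offset after a size field.
          for off in range(64):
              try:
                  out = zlib.decompressobj().decompress(data[off:])
                  if out:
                      print("zlib stream found at offset", off)
                      break
              except zlib.error:
                  pass

      sniff("example.DPC")  # hypothetical file name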
  • Ms1's Avatar
    Yesterday, 15:42
    Participants with valid entries should have already received preliminary results by email. Please contact us if the email did not reach you. We plan to update the website with the final results on December 15.
    14 replies | 2926 view(s)
  • Jyrki Alakuijala's Avatar
    Yesterday, 12:04
    Something is off with the decoding timings. Are all codecs compiled with the same compiler and same optimization flags?
    21 replies | 7061 view(s)
  • hexagone's Avatar
    Yesterday, 07:28
    Release 1.8
    Changes:
    - Corner cases fixed and code improvements
    - Level 1 compresses a lot better
    - New codec for some multimedia files added to levels 2 & 3
    - Multi-threading rewritten to parallelize entropy (de)coding
    - Level 5 faster & level 6 faster (but a bit weaker)
    - Partial decoding
    Improved L1 compression:
      Silesia 100 MB
      1.7 L1: 74599655 bytes in 1.6 sec / 0.9 sec
      1.8 L1: 69840720 bytes in 1.8 sec / 0.9 sec
      linux-5.1-rc5.tar 128 MB
      1.7 L1: 177047456 bytes in 7679 ms / 5052 ms
      1.8 L1: 164412581 bytes in 7647 ms / 3925 ms
      enwik8 100 MB
      1.7 L1: 33869944 bytes in 1.27 sec / 0.73 sec
      1.8 L1: 32654135 bytes in 1.24 sec / 0.65 sec
    Multimedia codec (L2 & L3):
      kanzi-1.7 -b 100m -o none -c -i Wonderwall.wav -l 2
      Encoding Wonderwall.wav: 45646892 => 43652400 bytes in 1834 ms
      kanzi-1.8 -b 100m -o none -c -i Wonderwall.wav -l 2
      Encoding Wonderwall.wav: 45646892 => 38934705 bytes in 1578 ms
      kanzi-1.7 -b 100m -o none -c -i data/park1024.ppm -l 3
      Encoding data/park1024.ppm: 3145745 => 2436294 bytes in 239 ms
      kanzi-1.8 -b 100m -o none -c -i data/park1024.ppm -l 3
      Encoding data/park1024.ppm: 3145745 => 1957681 bytes in 232 ms
      kanzi-1.7 -b 20m -l 2 -f -c -i baby.bmp
      Encoding baby.bmp: 18048054 => 6669470 bytes in 322 ms
      Decoding baby.bmp.knz: 6669470 => 18048054 bytes in 174 ms
      kanzi-1.8 -b 20m -l 2 -f -c -i baby.bmp
      Encoding baby.bmp: 18048054 => 4996114 bytes in 355 ms
      Decoding baby.bmp.knz: 4996114 => 18048054 bytes in 173 ms
    Parallel entropy:
      1.7 L8 j8
      Encoding linux-5.1-rc5.tar: 871557120 => 78016499 bytes in 257.7 s
      Decoding linux-5.1-rc5.tar.knz: 78016499 => 871557120 bytes in 266.0 s
      1.8 L8 j8
      Encoding linux-5.1-rc5.tar: 871557120 => 77826658 bytes in 67142 ms
      Decoding linux-5.1-rc5.tar.knz: 77826658 => 871557120 bytes in 65592 ms
      1.7 L8 j8
      Encoding /disk1/ws/enwik9: 1000000000 => 164705959 bytes in 320.5 s
      Decoding /disk1/ws/enwik9.knz: 164705959 => 1000000000 bytes in 305.0 s
      1.8 L8 j8
      Encoding /disk1/ws/enwik9: 1000000000 => 164429623 bytes in 79207 ms
      Decoding /disk1/ws/enwik9.knz: 164429623 => 1000000000 bytes in 78179 ms
    Partial decoding (more tests needed). It is possible to decompress only a range of blocks:
      kanzi -c -b 10m -f -i /tmp/enwik8 -l 4
      Encoding /tmp/enwik8: 100000000 => 25275127 bytes in 3863 ms
      kanzi -d -f -i /tmp/enwik8.knz -v 4
      Block 1: 2634565 => 3936096 => 10485760
      Block 2: 2650775 => 3935321 => 10485760
      Block 3: 2665709 => 3940268 => 10485760
      Block 4: 2629675 => 3897925 => 10485760
      Block 5: 2650382 => 3913712 => 10485760
      Block 6: 2660628 => 3950171 => 10485760
      Block 7: 2641180 => 3920834 => 10485760
      Block 8: 2658707 => 3938822 => 10485760
      Block 9: 2657754 => 3927861 => 10485760
      Block 10: 1425693 => 2098230 => 5628160
      Decoding: 2439 ms
      Input size: 25275127
      Output size: 100000000
      Throughput (KB/s): 40039
      kanzi -d -f -i /tmp/enwik8.knz --from=6 --to=8 -v 4
      Block 6: 2660628 => 3950171 => 10485760
      Block 7: 2641180 => 3920834 => 10485760
      Decoding: 537 ms
      Input size: 25275127
      Output size: 20971520
      Throughput (KB/s): 38137
    21 replies | 7061 view(s)
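    Since the --from/--to flags in the kanzi log above address whole blocks rather than byte offsets, a tiny helper can translate a byte range into block numbers for a known block size (here 10 MiB from -b 10m). This is only an illustrative sketch, not part of kanzi; judging by the log, --to appears to be exclusive.

      BLOCK = 10 * 1024 * 1024  # block size from '-b 10m' in the example above

      def blocks_containing(start_byte: int, end_byte: int, block_size: int = BLOCK):
          """1-based indices of the first and last block containing bytes [start_byte, end_byte)."""
          return start_byte // block_size + 1, (end_byte - 1) // block_size + 1

      # Blocks 6 and 7 together hold bytes 52,428,800 .. 73,400,320 of enwik8; per the log above,
      # --to looks exclusive, so they are extracted with 'kanzi -d ... --from=6 --to=8'.
      first, last = blocks_containing(55_000_000, 70_000_000)
      print(first, last)                        # 6 7
      print(f"--from={first} --to={last + 1}")  # --from=6 --to=8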
  • danlock's Avatar
    Yesterday, 05:00
    CPU Mitigation Performance on AMD Ryzen 5000 "Zen 3" Processors https://www.phoronix.com/scan.php?page=article&item=zen-3-spectre
    31 replies | 6089 view(s)
  • pklat's Avatar
    Yesterday, 02:18
    They all use/need TensorFlow, and that uses Python. I think you usually need some high-end card with a lot of CUDA cores, and even then, training the network on your own data takes too long.
    10 replies | 1137 view(s)
  • Jon Sneyers's Avatar
    4th December 2020, 23:15
    Yes, it affects parallelism (more, smaller groups means more cores can be used), and also compression: e.g. LZ77 matching can only be done within a group, so bigger groups are better for that, but entropy coding in general might benefit from smaller groups (or not; it really depends on the image).
    55 replies | 3640 view(s)
  • Nania Francesco's Avatar
    4th December 2020, 22:15
    I tried to install all the various tools and libraries with Python (pip, etc.) following the guide provided by the programmer. Honestly, it hits many errors even during the installation of packages (torch, etc.). I don't understand why they used Python?!
    10 replies | 1137 view(s)
  • Adreitz's Avatar
    4th December 2020, 21:42
    Thanks, Jon! I think I had seen that -g was added recently, but the help description wasn't very helpful or descriptive, so I forgot about it. (I'm also not used to binary operations, so the use of a left shift in the description took me a while to parse. Maybe a more accessible description of the group size would be (2^n)*128 or 2^(n+7).) Does the group size affect anything other than local palettization? Aaron
    55 replies | 3640 view(s)
  • DZgas's Avatar
    4th December 2020, 20:32
    Well, I'm just "playing", using random words in codepen.io/skal65535/project/full/ZwnQWM. You can rate my gallery, heh.
    24 replies | 1128 view(s)
  • Jyrki Alakuijala's Avatar
    4th December 2020, 20:22
    Thank you Pascal! I believe the progression needs to balance the additional cognitive load caused by temporal changes against the benefit from lower latency. One way to reduce the cognitive load is to ensure that no preview image features disappear in the final image. One way to achieve that is the one that Facebook chose to deploy, i.e., the use of excessive Gaussian smoothing. In JPEG XL we do the same with high-quality preservation of the thumbnail combined with a very high quality upsampling algorithm.
    Why I like minimizing cognitive load:
    1. Imposing new cognitive load on users may require a more holistic study than what I'm able to execute. Making the web more irritating altogether may cause behaviour changes that cannot be monitored in connection with the data/renderings that are causing the issue.
    2. A large fraction of the user base dislikes flickering images, and they may wish to disable progression when it creates cognitive load. I consider that this may be because some part of the user base treats the changes in the image as additional information, and their more sensitive brains get flooded with it.
    3. My personal belief is that human attention is limited, and the attention should go to the deeper semantics of the content rather than to images flashing.
    These things are highly subjective. One user on encode.su self-identified in the discussions related to progressive JPEG improvements as benefiting from the flickering, as they wanted to know when the final version of the image was coming.
    24 replies | 1128 view(s)
  • Nania Francesco's Avatar
    4th December 2020, 20:12
    I wanted to make a practical comparison with other formats that I do not cite, for fairness. I think it's really interesting to make it at equal byte cost. My question is whether there is an encoder and decoder for the High-Fidelity Generative Image Compression format?
    10 replies | 1137 view(s)
  • Jon Sneyers's Avatar
    4th December 2020, 19:40
    The jxl encoder will first try to make a global palette of at most N colors. Then in every group (groups are 256x256 by default, can be changed to 128x128 with -g 0, 512x512 with -g 2, and 1024x1024 with -g 3) it will also try to make a local palette of at most N colors. So probably there is a group where 3 colors are enough, another group where 123 colors are enough, etc. For images like this, trying different group sizes can make a difference.
    55 replies | 3640 view(s)
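    As a quick aid to the -g discussion above (Jon's mapping and the left shift in the help text that Adreitz mentions), the group edge length works out to 128 << n, i.e. (2^n)*128. A two-line check, purely illustrative and assuming -g 1 is the 256x256 default:

      # Group edge length for cjxl's -g option, per the mapping quoted above:
      # -g 0 -> 128, -g 1 -> 256 (default), -g 2 -> 512, -g 3 -> 1024.
      for n in range(4):
          print(f"-g {n}: {128 << n}x{128 << n}")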
  • DZgas's Avatar
    4th December 2020, 19:37
    @skal It would be interesting to look at the pictures with the blurring of the triangles turned off.
    24 replies | 1128 view(s)
  • skal's Avatar
    4th December 2020, 18:52
    That's the thing: these previews don't need to be exactly 1:1 similar to the original image. They can also just 'summarize' it enough to trigger interest for the viewer to stay around while the full version downloads. Using potentially sharp shapes (triangles) offers more opportunities than just blurring. There's an interesting 'tiny semantic compression' experimentation field to open here.
    24 replies | 1128 view(s)
  • Adreitz's Avatar
    4th December 2020, 18:20
    Jon/Jyrki, how does the --palette option work in JXL? I tried compressing the "chart_type.png" image that skal posted in the WebPv2 thread with JXL to see if I could do better than his quick test, which didn't compare well with WebP. This image contains 855 unique colors. Using build 05a84bb5 of cjxl (the newest version that Jamaika has built), I compressed the image using -d 0 -s 9 -E 2 -I 1 and every value of --palette=X from 0 to 1024. I would normally expect an option like this to fail or revert to RGB encoding if the image contains more colors than the threshold specified in the option, like trying to use -c3 with a PNG containing more than 256 colors. Instead, I'm seeing a rolloff:
      Colors     Size (B)
      0-2        11501
      3-122      11418
      123-154    11212
      155-424    10897
      425-571    9988
      572-711    9597
      712+       9655
    Does this have something to do with local palettization? Why does the file size stay constant within the various ranges of --palette shown above? It's interesting that the smallest file size is reached at a value smaller than the number of colors in the image, too. There could be an interaction here with some other heuristic, though, like for -C or -P.
    Thanks,
    Aaron
    55 replies | 3640 view(s)
  • Jyrki Alakuijala's Avatar
    4th December 2020, 18:18
    AVIF main mode is separate from the gamut/colorspace, so something else decides about the exact colors and intensity. JPEG XL main mode is absolute and physical, so you are specifying what kind of photons and how many you'd like to have. Both approaches have their benefits. JPEG XL tends to invest more bits in dark areas as a result of this, particularly when the default settings are used (when a 250 nits viewing condition with ~2x zoom is expected by default).
    67 replies | 7284 view(s)
  • Jon Sneyers's Avatar
    4th December 2020, 16:53
    The header overhead of the AVIF container is 282 bytes for images without alpha and 457 bytes for images with alpha. That's just for the obligatory HEIF header boxes, not including the header of the AV1 payload.
    24 replies | 1128 view(s)
  • Jyrki Alakuijala's Avatar
    4th December 2020, 15:54
    According to https://engineering.fb.com/2015/08/06/android/the-technology-behind-preview-photos/ Facebook has been doing this with JPEG, where they remove the relatively large JPEG header from the payload only to patch it up dynamically. They also heavily Gaussian-blur the result before showing it to get rid of artefacts. Adding a bit of noise at that stage can also help (in my experiments on this topic), but I don't think Facebook is doing that. Would it be possible to do the same with AVIF, where the header is likely 3/4 of the payload at this size? Also interesting -- how does simple JPEG XL perform in this area (at 32x32 or 64x64 resolution)? JPEG XL has relatively small header overhead. My belief is that the triangle-based modeling is favorable for much of the geometric stuff (buildings particularly), but can fail when textures need to be communicated (a natural image, a tree against the sky, etc.). My very, very brief experience of the triangle-based preview is that it is favorable in about 25 % of the cases, roughly even in another 25 %, and detrimental in the remaining 50 %. Naturally such evaluations are highly subjective, since we are pretty far from the original. It also depends on whether a Gaussian is applied in between or not. I consider the Gaussian a necessity for practical deployments, but it will eat some of the spatially-variable benefits that a triangle-based model has.
    24 replies | 1128 view(s)
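    A minimal sketch (not Facebook's or any codec's actual code) of the trick Jyrki describes above: if every thumbnail is produced with identical encoder settings, the JPEG header bytes are identical across images, so only the tail needs to be sent and the shared header is re-attached before decoding, followed by a Gaussian blur to hide artefacts. The header length, file names and function names are assumptions for illustration.

      from io import BytesIO
      from PIL import Image, ImageFilter  # Pillow

      # Hypothetical: header bytes common to every thumbnail, captured once from a
      # reference image produced with the shared encoder settings (length 623 is made up).
      SHARED_HEADER = open("reference_thumbnail.jpg", "rb").read()[:623]

      def strip_header(jpeg_bytes: bytes) -> bytes:
          """Server side: keep only the image-specific tail of the JPEG payload."""
          assert jpeg_bytes[:len(SHARED_HEADER)] == SHARED_HEADER
          return jpeg_bytes[len(SHARED_HEADER):]

      def render_preview(tail: bytes, blur_radius: float = 4.0) -> Image.Image:
          """Client side: re-attach the shared header, decode, and blur to hide artefacts."""
          img = Image.open(BytesIO(SHARED_HEADER + tail))
          return img.filter(ImageFilter.GaussianBlur(blur_radius))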
  • DZgas's Avatar
    4th December 2020, 13:56
    This is the only interesting part of WebP 2, because of its quality/size for such small images (micro-pictures): it looks better than AVIF (at 250 bytes) and anything else. Although to me it feels like the Codec2 of photo formats.
    24 replies | 1128 view(s)
  • pklat's Avatar
    4th December 2020, 13:55
    OCR is probably already better than humans; captchas for feeding AI haven't used it for a long time now. There are a lot of combinations (fonts, styles, DTP software, etc.), but not an infinite number, so I think it could reconstruct the original .qxp or .doc or whatever with good enough accuracy. AI can also be used for removing hard-coded subtitles and similar stuff from video and images, and for removing (ingress) noise and scratches from audio/video that can't be compressed.
    10 replies | 1137 view(s)
  • zyzzle's Avatar
    4th December 2020, 08:45
    Well, that's a tall order with current hardware, but perhaps neural nets on quantum computers could be made to excel at this type of task, i.e. cleaning up scanned documents, magazine articles, and low-resolution material. Looking forward to that day, but it might still be a full generation off (i.e., 20 years away). But by then compression of images won't matter as much, as we'll have yottabytes and zettabytes at our fingertips.
    10 replies | 1137 view(s)
  • skal's Avatar
    4th December 2020, 00:07
    I've set up a demo page here: https://codepen.io/skal65535/project/full/ZwnQWM You can visualize the base64 strings generated with the (latest) 'mk_preview' tool. The decoder is ~500 lines of hand-written, minified JavaScript.
    24 replies | 1128 view(s)
  • Nania Francesco's Avatar
    3rd December 2020, 23:17
    New update. Added:
    - FreeARC 0.67a
    - GLZA 0.114
    Only from: http://heartofcomp.altervista.org/MOC/MOCA.htm
    208 replies | 131041 view(s)
  • DZgas's Avatar
    3rd December 2020, 20:34
    DZgas replied to a thread JPEG XL vs. AVIF in Data Compression
    Of course, it seems to me that the development of JpegXL has already dragged on, but so far it has no competitors in quality at high bitrates... unless something competitive suddenly appears during this long development. I just want to switch from JpegXR to JpegXL. But after switching to AVIF, I found out that some decoders decode with different color spaces and different gamuts; it is only a decoder problem, but I don't want to rush anymore.
    67 replies | 7284 view(s)
  • DZgas's Avatar
    3rd December 2020, 19:59
    The strange thing is, did someone at Google just get bored? AVIF was made just now; for its creation they used AV1, all of Google's technologies and all the other members of AOMedia... So of course WebP 2 looks like a joke: it definitely cannot be better than AVIF, because then AVIF would look like a very bad job by Google and AOMedia... So yes, it is a codec, but for whom? For what? Incomprehensible. As an experiment only, I think they made it using technologies that were discarded when creating AVIF because of some internal problems: maybe complexity, different variability (as in WebP), or simply not being effective. WebP 2 is fast, but AVIF can also be encoded fast if you use the right settings - the image then takes the same encoding time.
    24 replies | 1128 view(s)
  • Jyrki Alakuijala's Avatar
    3rd December 2020, 19:34
    'edge preserving 3' is quite a lot stronger (and a bit slower) filtering than we had before. It is still guided by the adaptive quantization field and the smoothing control field, so it still shouldn't be super-destructive like some less connected restoration/deblocking filtering approaches can be. The restoration filter is relatively flexible through the use of the smoothing control field, and we will likely be able to improve the restoration through encoder-side improvements within the next year: fewer artefacts, more authenticity.
    67 replies | 7284 view(s)
  • Jon Sneyers's Avatar
    3rd December 2020, 18:18
    That doesn't really answer my question. I see "more efficient lossy compression (~30% better than WebP, as close to AVIF as possible)" and also "The goal is to reach decompression speed parity." But why do I need WebP 2 for that? AVIF itself already achieves those goals trivially :)
    24 replies | 1128 view(s)
  • fabiorug's Avatar
    3rd December 2020, 18:02
    fabiorug replied to a thread JPEG XL vs. AVIF in Data Compression
    I tried q 44.3, edge preserving 3, effort 4 in JPEG XL (the jxl rebuild of Squoosh) and it makes the photo somewhat fake, but it looks very real, so JPEG XL has improved in this new 1st December build, though it is still far from finalization. WebP v2, on the other hand, selects the quality automatically. jxl is better at lossless and at the moment uses fewer dependencies for it. Also, I read in the commit: // Kind of tree to use. // TODO(veluca): add tree kinds for JPEG recompression with CfL enabled,
    67 replies | 7284 view(s)
  • CompressMaster's Avatar
    3rd December 2020, 17:25
    jxl at 2 kB is worse - much more blocking artefacts than AVIF.
    67 replies | 7284 view(s)
  • pklat's Avatar
    3rd December 2020, 12:41
    I think there are a lot of uses for (near) lossless, especially if for some reason one doesn't want vector formats.
    24 replies | 1128 view(s)
  • DZgas's Avatar
    3rd December 2020, 02:42
    WebP 2 is fast. But the quality... for this image it's a difficult question. I don't know how and where WebP 2 will fit in... AVIF is already ready. Maybe, like WebP now, it will only be used for stickers in messengers. It's just in development. Google, hah, why? - you have AVIF. OK, one more codec.
    24 replies | 1128 view(s)
  • skal's Avatar
    3rd December 2020, 02:12
    See post #1
    24 replies | 1128 view(s)
  • DZgas's Avatar
    3rd December 2020, 01:27
    DZgas replied to a thread JPEG XL vs. AVIF in Data Compression
    JpegXL gets better. cjxl 0.1.0-05a84bb5
    67 replies | 7284 view(s)
  • Jon Sneyers's Avatar
    3rd December 2020, 01:11
    What are the key (expected) benefits of WebP 2 compared to AVIF and JPEG XL for web delivery of images?
    24 replies | 1128 view(s)
  • Scope's Avatar
    2nd December 2020, 21:16
    ​https://chromium.googlesource.com/codecs/libwebp2/+/b77ef4cce1a8b5e7351756a402272000c60cc6bb
    24 replies | 1128 view(s)
  • pklat's Avatar
    2nd December 2020, 20:38
    Neural networks could probably be very good at 'restoring', say, screenshots from scanned old computer magazines. They could recognize windows, icons, the mouse pointer, fonts, etc., and reconstruct them while compressing.
    10 replies | 1137 view(s)
  • suryakandau@yahoo.co.id's Avatar
    2nd December 2020, 18:50
    You are welcome
    1004 replies | 350950 view(s)
  • LucaBiondi's Avatar
    2nd December 2020, 17:20
    LucaBiondi replied to a thread Paq8pxd dict in Data Compression
    Ah ok but if you get a crash i suppose it is not the right way. Thank you!
    1004 replies | 350950 view(s)
  • suryakandau@yahoo.co.id's Avatar
    2nd December 2020, 16:13
    Downgrading the memory option in each model can make the process succeed, but the compression is still worse.
    1004 replies | 350950 view(s)
  • suryakandau@yahoo.co.id's Avatar
    2nd December 2020, 15:51
    I have looked at the source code but I am not sure which part causes that error. The fix for that crash is to change line 1268 from 10000 down to 1000, but the compression is worse than before.
    1004 replies | 350950 view(s)
  • moisesmcardona's Avatar
    2nd December 2020, 15:22
    Also don't forget to submit your changes to the paq8pxd repo: https://github.com/kaitz/paq8pxd/pulls
    1004 replies | 350950 view(s)
  • LucaBiondi's Avatar
    2nd December 2020, 15:20
    LucaBiondi replied to a thread Paq8pxd dict in Data Compression
    Thanks. Have you taken a look at the bug that crashes the app? I will not test versions that crash, because I usually use the -x12 option. Thank you!
    1004 replies | 350950 view(s)
  • suryakandau@yahoo.co.id's Avatar
    2nd December 2020, 15:01
    Paq8pxd90fix3 - improved and enhanced JPEG compression using the -s8 option:
      f.jpg: 112038 bytes -> 80984 bytes
      a10.jpg: 842468 bytes -> 621099 bytes
      dsc_0001.jpg: 3162196 bytes -> 2178889 bytes
    Binary and source code are attached.
    1004 replies | 350950 view(s)
  • Shelwien's Avatar
    2nd December 2020, 03:51
    http://nishi.dreamhosters.com/u/instlat.html
    http://nishi.dreamhosters.com/u/instlat_pl_v0.7z
    Based on:
    http://instlatx64.atw.hu/
    https://github.com/InstLatx64/InstLatx64/
    0 replies | 135 view(s)
  • e8c's Avatar
    1st December 2020, 22:25
    A point of view on a better time and date system: https://app.box.com/s/wugks9n7bh9skomtxjig8bhk3bgakp04 Decimal Time (https://en.wikipedia.org/wiki/Decimal_time) and an Infinite Past Calendar: the "(9)" in "(9)501st" is like in "0.(9)"; there is no "0th" (or "minus 1st") year in the past and no "overflow" in the future; the year starts on the 1st day after the longest night in the Northern Hemisphere.
    0 replies | 66 view(s)
  • Jyrki Alakuijala's Avatar
    1st December 2020, 20:11
    Jim is eye-balling different codecs on youtube. JPEG XL wins this simple test hands down.
    55 replies | 3640 view(s)
  • CompressMaster's Avatar
    1st December 2020, 19:39
    I want to synchronize history and cookies between many Android and non-Android browsers. Is there some app to do that? I have found only xBrowserSync, but it's able to sync only bookmarks. Are there others?
    0 replies | 52 view(s)
  • Jyrki Alakuijala's Avatar
    1st December 2020, 14:45
    We have an architecture in JPEG XL that allows any visual metric to be used for adaptive quantization. We have just been too busy with everything else and have not been able to productionize this yet. All quality-related changes are QA-heavy and introduce a long tail of manual work that drags on into future changes, too, so we have postponed this for now.
    Some of the next things we plan:
    - speed up butteraugli further
    - speed up (lossless) encoding
    - better adversarial characteristics in butteraugli's visual masking so that it can be used for ultra-low-bpp neural-encoding-like solutions
    - improve pixel art coding density
    - low color count image improvements, and hybrid low color count + photographic/gradient image improvements
    - progressive alpha
    - make JXL's butteraugli also alpha-aware (blend on black/white, take the worst)
    - API improvements and polishing
    - extend the competitive range from 0.3 bpp to 0.07 bpp by using larger DCTs and layering (long-term improvement)
    Some lossless photographs contain a lot of noise. No method can compress the noise losslessly, so lossless compression of photographs may be more relevant for very high quality imaging conditions with no noise. Lossless photograph compression may be relevant in reproduction and for photography enthusiasts. Personally, I consider making photographs visually lossless (with an application-dependent error margin) a more practical concept than true lossless.
    WebP v1 could have been more ambitious about quality on the lossy side. On lossless, it would have been great if we had been able to deploy layers in it to get 16-bit support. Ideally a progressive format would be slightly more useful than a non-progressive one.
    The WebP v1 lossy file format didn't require initial research; it was just repackaging. I spent seven man-months coming up with WebP v1 lossless. The research phase for AV1 could be a 100+ man-year project, with JPEG XL and WebP v2 in the 10-30 man-year range (all of these are very rough guesses). Comparing WebP v1 to AV1, JPEG XL and WebP v2, the first is in the 'quick hack' category, whereas the others are serious, big research projects, especially AV1. AVIF again is just repackaging, so it is a 'quick hack' split off from a formidable research project.
    In my view, ideally the WebP v2, AVIF, HEIF/HEIC and JPEG XL developers would consolidate the formats, concentrate their efforts on building high-quality encoders/decoders, and come up with one simple and powerful solution that would satisfy all the user needs.
    55 replies | 3640 view(s)
  • Shelwien's Avatar
    1st December 2020, 13:58
    Looks like oodle: https://zenhax.com/viewtopic.php?t=14376
    1 replies | 164 view(s)
  • gameside's Avatar
    1st December 2020, 13:39
    Hi, sorry for a maybe off-topic question. I'm trying to reverse the format of a DATA file from Assassin's Creed Valhalla, but I don't know much about compression methods. Can anyone tell me what compression method this file uses and how I can decompress it? Thanks
    1 replies | 164 view(s)
  • Shelwien's Avatar
    1st December 2020, 12:58
    > attachment is downloaded as a jpg
    It's a screenshot from a file comparison utility. Blue numbers are different.
    > is the input 16bit?
    Yes, 16-bit grayscale.
    > Did you expect a 16bit output?
    Yes, FLIF and jpegXL handle it correctly.
    > What command did you use to generate the attachment?
      convert.exe -verbose -depth 16 -size 4026x4164 gray:1.bin png:1.png
      cwp2.exe -mt 8 -progress -q 100 -z 6 -lossless 1.png -o 1.wp2
      dwp2.exe -mt 8 -progress -png 1.wp2 -o 1u.png
      convert.exe -verbose -depth 16 png:1u.png gray:1u.bin
    > (webp2 only handles 10bit samples at max, btw)
    That's OK, but cwp2 doesn't say anything about -lossless not being lossless.
    24 replies | 1128 view(s)
  • Jyrki Alakuijala's Avatar
    1st December 2020, 12:34
    Correct. When I spoke about near-lossless actually reducing the image size, I referred to the experience of doing it for 1000 PNG images from the web. There, it is easy to get 20-40 % further size reduction. Small color count images are of course best compressed using a palette. A palette captures the color component correlations wonderfully well.
    24 replies | 1128 view(s)
  • skal's Avatar
    1st December 2020, 10:14
    That seems to fall into the "Responsive Web" category, where the image served is adapted to the requester's display (resolution, dpi, bit-depth, etc.). Not sure I understand your example (the attachment is downloaded as a jpg): is the input 16-bit? Did you expect a 16-bit output? What command did you use to generate the attachment? (webp2 only handles 10-bit samples at max, btw)
    24 replies | 1128 view(s)
  • Shelwien's Avatar
    1st December 2020, 02:58
    As to the definition of "near-lossless", I think there's another option worth considering. There's a difference between the displayed image and how it is stored - palette, colorspace, "depth" bits, alpha stuff, etc. So I think we can define near-lossless as providing an identical displayed image while not necessarily producing output identical to the input (for example with 16-bit input and an 8-bit display). In this case it should provide strictly better compression than true lossless, since we can save bits on encoding that lost information. Btw, here's what webp2 currently does for 16-bit images in lossless mode. I guess it's "near-lossless" :)
    24 replies | 1128 view(s)
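    A small sketch of the "identical when displayed" test Shelwien proposes above, under the assumption that display means reducing 16-bit samples to 8 bits; numpy-based and purely illustrative.

      import numpy as np

      def display_identical(original_u16: np.ndarray, decoded_u16: np.ndarray) -> bool:
          """True if both 16-bit images map to the same 8-bit values on an 8-bit display.

          Uses the usual 16->8 bit reduction x // 257 (so 65535 -> 255)."""
          def to8(a):
              return (a.astype(np.uint32) // 257).astype(np.uint8)
          return np.array_equal(to8(original_u16), to8(decoded_u16))

      # Values differing only below the display's precision still count as identical.
      a = np.array([[65535, 514]], dtype=np.uint16)
      b = np.array([[65535, 515]], dtype=np.uint16)
      print(display_identical(a, b))  # True (both map to [[255, 2]])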
  • Shelwien's Avatar
    1st December 2020, 02:36
    windows binaries.
    24 replies | 1128 view(s)
  • Scope's Avatar
    1st December 2020, 01:59
    +Flic, Qic, SSIMULACRA, Butteraugli, ...
    Btw, are there no plans to integrate SSIMULACRA as an additional metric (for example, for lower quality), or would it not give any advantages and be difficult to integrate? It would also be interesting to read about updates or planned experiments and improvements for JPEG XL (because even on GitLab there are no descriptions of updates and changes).
    Good compression for photographic images is important, but as web content for delivery, people much more often losslessly compress artificial/artistic images than photographic ones; even my comparison contains mainly such images (but this is a reflection of reality rather than any preference of mine - it is very difficult to find large sets of true lossless photos that were not created for tests).
    And about the development and existence of many different image formats: on the one hand, fragmentation is bad, and people hate new formats if they are not supported in all their usual applications. But, on the other hand, look at the large number of video formats, even taking, for example, only the codecs of one company - On2 (later acquired by Google):
    - VP3 still exists (it became Theora)
    - VP6 and VP7 were supported in Flash Player (and disappeared together with it)
    - VP8 is still supported in browsers and WebRTC and is used in many video games
    - VP9 is the main format on YouTube
    - and the newest AV1, which was created through the development and practical use of all past generations of codecs
    And all this happened in less time than JPEG has existed. Also, more than 10 years have passed since the creation of WebP v1; perhaps images do not need such frequent updates and so many different formats. But this is not always a bad thing, and it is much more difficult to create a format without flaws that will never need to be changed or improved over time; also, different teams may have different goals and views.
    55 replies | 3640 view(s)
  • cssignet's Avatar
    1st December 2020, 01:46
    I should probably check this first. WebP v2 is nice stuff, btw.
    55 replies | 3640 view(s)
  • Scope's Avatar
    1st December 2020, 01:00
    Sometimes I've seen images where near-lossless in WebP encoded to a much smaller size (compared to lossless) without visual differences (compared to lossy). Although, as another data point, with JPEG XL I mostly had better results with the lossy mode (VarDCT with values around -d 0.3 ... 0.5) than with near-lossless (Modular -N ....), lossy-palette (--lossy-palette --palette=0) or lossy modular (-q 95-99). Perhaps WebP v2, due to its more efficient lossy mode, will be in the same situation; but these modes in JPEG XL are not yet fully developed either.
    24 replies | 1128 view(s)
  • skal's Avatar
    1st December 2020, 00:29
    Cases where lossy is larger than PNG are typically graphic-like content such as charts, drawings, etc. Usually these don't benefit from near-lossless (nor lossy, which smears colors because of YUV420), and are already quite small to start with. Near-lossless can actually increase the size compared to lossless in such cases. Random example from the web: http://doc.tiki.org/img/wiki_up/chart_type.png
      original size: 17809
      webp lossless: 7906
      near-lossless 90: 8990
      lossy -q 96: 30k (+smear!)
    and the same with jpeg-xl:
      ./tools/cjxl -s 8 chart_type.png -q 100
      Compressed to 11570 bytes (0.629 bpp).
      ./tools/cjxl -s 8 chart_type.png -q 100 -N 5
      Compressed to 15724 bytes (0.855 bpp).
    and wp2:
      cwp2 chart_type.png -tile_shape 2 -q 100
      output size: 7635 (0.42 bpp)
      cwp2 chart_type.png -tile_shape 2 -q 99
      output size: 8737 (0.47 bpp)
    24 replies | 1128 view(s)
  • Jyrki Alakuijala's Avatar
    30th November 2020, 23:39
    I wrote the first version of WebP near-lossless in December 2011. The code was not considered an important feature and it was not deployed in the initial launch. The first version only worked for images without PREDICTOR_TRANSFORM, COLOR_TRANSFORM, and COLOR_INDEXING_TRANSFORM. This initial version was enabled much later, possibly around 2014, without an announcement, and would only work with images that were not a great fit for the PREDICTOR_TRANSFORM. Later (around 2016?) Marcin Kowalczyk (Riegeli author) ported the original code to work with the PREDICTOR_TRANSFORM. No announcement was made of that relatively significant improvement.
    When I originally designed it, I considered three variations of near-lossless:
    1. replacing the last bit with the first bit (--near_lossless=80)
    2. replacing the two last bits with the two first bits (--near_lossless=60)
    3. replacing the three last bits with the three first bits (--near_lossless=40)
    I didn't just want to remove values locally, because having some specific values get a larger population count would naturally reduce their entropy. I would only do these when the monotonicity of the image around the pixel would not be affected, i.e., the new pixels would not be a new minimum or maximum. In the launch of the feature, two more modes were added (--near_lossless=20 and --near_lossless=0) for one and two more bits of near-lossless loss. I think these are mostly dangerous, can be confusing, and are rarely if ever useful.
    I was never able to see a difference between --near_lossless=80 and true lossless, at least in my original design. It was possible to see (with severe pixel ogling) some differences at --near_lossless=60, but it would still be far superior to what was available in lossy formats.
    When you look at https://developers.google.com/speed/webp/docs/webp_lossless_alpha_study and Figure 3, you can wonder what that chart would look like if pure lossy were compared against the PNGs. It turns out that a one-in-a-thousand pure lossy image requires 20x more storage capacity than the same image compressed with lossless, and for about 40-50% of the web's PNG images lossless is smaller than high-quality lossy. Near-lossless can make that one-in-a-thousand image smaller still; normal lossy cannot.
    Placing near-lossless methods in the range 96-99 may lead to a situation where quality 95 produces a 20x larger file than quality 96. This may be surprising for a user. The actual performance differences of course depend on the actual implementation of near-lossless -- this is assuming near-lossless means LZ77 but no integral transforms.
    24 replies | 1128 view(s)
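    A minimal sketch of the quantization Jyrki describes above (not the actual libwebp code): replace the k lowest bits of an 8-bit sample with copies of its k highest bits, but only when the result would not become a new local minimum or maximum. The mapping of --near_lossless levels to k follows the post (80 -> 1 bit, 60 -> 2 bits, 40 -> 3 bits); the neighborhood handling is an assumption for illustration.

      def replicate_low_bits(v: int, k: int) -> int:
          """Replace the k lowest bits of an 8-bit value with its k highest bits."""
          mask = (1 << k) - 1
          return (v & ~mask & 0xFF) | (v >> (8 - k))

      def near_lossless_pixel(v: int, neighbors: list, k: int) -> int:
          """Quantize v unless that would create a new local minimum or maximum."""
          q = replicate_low_bits(v, k)
          if q < min(neighbors + [v]) or q > max(neighbors + [v]):
              return v  # keep the original value to preserve local monotonicity
          return q

      # --near_lossless=80 -> k=1, 60 -> k=2, 40 -> k=3 (per the post above)
      print(replicate_low_bits(0b10110101, 2))  # 182 == 0b10110110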
  • skal's Avatar
    30th November 2020, 23:33
    It's the same name because it's targeting the same use-case (the web). As I mentioned in these slides https://bit.ly/image_ready_webp_slides, WebP2 is more of the same. The WebP format hasn't changed since 2011; the encoder/decoder in libwebp has. WebP had been supported by 90% of the installed base before Apple eventually switched to it. That's more than just 'moderate'. WebP2 files haven't been released in the wild yet, so I doubt users are receiving confusing .wp2 files.
    55 replies | 3640 view(s)
  • LucaBiondi's Avatar
    30th November 2020, 23:33
    LucaBiondi replied to a thread Paq8pxd dict in Data Compression
    Hi Darek! Yes true
    1004 replies | 350950 view(s)
  • skal's Avatar
    30th November 2020, 23:20
    It is lossless (because of the -q 100 option indeed, which means 'lossless' in cwp2)... but in a premultiplied world! That means cwp2 discards everything under the alpha=0 area and pre-multiplies the rest, which could explain the difference you're seeing with 'pngdiff' if that tool is not doing the measurement in pre-multiplied space.
    55 replies | 3640 view(s)
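    A short numpy sketch of the premultiplied comparison skal describes above: RGB is multiplied by alpha (so anything under alpha=0 becomes zero), and a diff tool has to do the same to both images or it will report false differences. Purely illustrative; not pngdiff's or cwp2's actual code.

      import numpy as np

      def premultiply(rgba: np.ndarray) -> np.ndarray:
          """rgba: HxWx4 uint8. Returns RGB premultiplied by alpha (alpha=0 areas become 0)."""
          rgb = rgba[..., :3].astype(np.uint16)
          a = rgba[..., 3:4].astype(np.uint16)
          return ((rgb * a + 127) // 255).astype(np.uint8)

      def premultiplied_equal(img1: np.ndarray, img2: np.ndarray) -> bool:
          """Compare two RGBA images the way a premultiplied-alpha codec preserves them."""
          return (np.array_equal(premultiply(img1), premultiply(img2))
                  and np.array_equal(img1[..., 3], img2[..., 3]))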
  • skal's Avatar
    30th November 2020, 23:13
    There's a tool extras/mk_preview in the extras/ directory to generate and manipulate previews as raw bits. For instance:
      ./extras/mk_preview input.png -o preview.bin -d preview.png -m voronoi -g 48 -n 400 -s 250 -i 2000
    You can insert this preview data 'preview.bin' into a final .wp2 compressed bitstream using 'cwp2 -preview preview.bin ...'. You can also try to generate a preview from cwp2 using 'cwp2 -create_preview ...', but there are not many options available for tweaking.
    The triangle preview can be eyeballed in the 'vwp2' tool by pressing the '\' key if you are visualizing a .wp2 bitstream with a preview available. Otherwise, just use mk_preview's -d option to dump the pre-rendered preview. I'm working on a javascript renderer in HTML btw, that'll be easier. Triangle preview is a very experimental field, and this is obvious when looking at the number of options in the 'mk_preview' experimentation tool!
    The use-case of near-lossless is still unclear within WebP2's perimeter: who is using near-lossless and to what effect? I don't have the answer to this question. Suggestions welcome. I can totally imagine someone using 100% lossless to preserve a pristine copy of an image (and note that in this case, it's likely said user might as well go for a RAW format; file size is probably less of an issue than CPU or ease of editing for them...). But I think there's a mental barrier to using near-lossless for these users: if you don't stick to 100% lossless, you might as well just go lossy (even at -q 95), because it's not going to be pristine anyway. Stopping at the near-lossless intermediate stage doesn't bring much to the table in terms of file-size reduction, compared to just going high-quality lossy.
    24 replies | 1128 view(s)
  • skal's Avatar
    30th November 2020, 22:52
    https://chromium.googlesource.com/codecs/libwebp2/
    What to expect? The WebP 2 experimental codec mostly pushes the features of WebP further in terms of compression efficiency. The new features (like 10b HDR support) are kept minimal. The axes of experimentation are:
    - more efficient lossy compression (~30% better than WebP, as close to AVIF as possible)
    - better visual degradation at very low bitrate
    - improved lossless compression
    - improved transparency compression
    - animation support
    - ultra-light previews
    - lightweight incremental decoding
    - small container overhead, tailored specifically for image compression
    - full 10-bit architecture (HDR10)
    - strong focus on software implementation, fully multi-threaded
    The use cases remain mostly the same as WebP: transfer over the wire, faster web, smaller apps, better user experience... WebP 2 is primarily tuned for the typical content available on the Web and in mobile apps: medium-range dimensions, transparency, short animations, thumbnails. WebP 2 is currently only partially optimized and is, roughly speaking, 5x slower than WebP for lossy compression. It still compresses 2x faster than AVIF, but takes 3x more time to decompress. The goal is to reach decompression speed parity.
    24 replies | 1128 view(s)
  • skal's Avatar
    30th November 2020, 22:48
    Actually, let's start a fresh thread dedicated to the topic...
    182 replies | 59982 view(s)
  • Jon Sneyers's Avatar
    30th November 2020, 22:32
    Perhaps it does premultiplied alpha?
    55 replies | 3640 view(s)