Activity Stream

  • Gotty's Avatar
    Today, 12:35
    No "inventor" can patent a thing that was not invented. In the patent application you need to describe how your invention works. And since there is no such invention, they cannot describe it, and so they cannot patent is. Hmmm... where did you see? My conclusion: no, they can't sue you.
    3 replies | 19 view(s)
  • Jarek's Avatar
    Today, 12:28
    From https://en.wikipedia.org/wiki/Versatile_Video_Coding : they created a new group to reduce licensing problems ... so now there are 4 companies fighting over licenses for one codec - brilliant! :D
    10 replies | 663 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Today, 11:35
    Fp8sk14 - use the -9 option in this version. Here are the source code and binary file. @Gotty, could you add up to a -15 option for fp8sk? Thank you!!
    32 replies | 1141 view(s)
  • Jyrki Alakuijala's Avatar
    Today, 10:55
    What happens if you build it from https://jpeg.org/jpegxl/ ? Please file an issue there if a compilation does not produce a result that you are happy with. At the moment we are best able to improve on issues found with a Linux compilation. We will improve our cross-platform support soon.
    38 replies | 3258 view(s)
  • randomthing's Avatar
    Today, 10:43
    Thanks Shelwien for your comment and your kind information. I didn't mean that those companies would be affected by that compression engine; I just meant, would they sue over the invention? I saw that big corporations have already filed patents on random compression or recursive compression; that's why I ask whether they would sue the author who invented such a thing.
    3 replies | 19 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Today, 10:32
    This is the correct binary file.
    32 replies | 1141 view(s)
  • Shelwien's Avatar
    Today, 10:31
    There'd be nothing special. Even if there were a way to compress any file to a 128-bit hash (like, based on a time machine), it would only matter if it's faster than the internet. Actually, even as it is, there's a lot of known redundancy in stored and transmitted content, since it's compressed with inefficient methods (deflate etc.) or isn't compressed at all. We could save maybe 30% of all traffic and storage if there were strict enforcement of best compression everywhere. And even with a random compression method that is fast, cheap and reliable, it still wouldn't affect the corporations you mentioned. Sure, some storage companies may go bankrupt (WD, Seagate etc.), but for Google and Microsoft... it would just let them save some money on storage.
    3 replies | 19 view(s)
  • randomthing's Avatar
    Today, 10:10
    Hey geeks, we know that random compression is impossible because mathematics has already proved this. But let's take it as science fiction: suppose someone came up with a compression engine that compresses random data with recursive functionality - what happens next? I mean, what would big corporations like IBM, Google, Microsoft, etc. do? Would they sue the author who discovered the engine? Or would they use the engine to reduce their database costs or for other things like streaming, etc.?
    3 replies | 19 view(s)
  • Shelwien's Avatar
    Today, 09:49
    There's a native GUI, but only a 32-bit version (nanozip.exe in the win32 archive): http://nanozip.ijat.my/
    304 replies | 320801 view(s)
  • paleski's Avatar
    Today, 09:04
    Is there some kind of GUI for NanoZip available? Or as a plugin for other archiver utilities, file browsers, etc?
    304 replies | 320801 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Today, 07:23
    Fp8sk13 - the fastest paq8 version ever. fp8sk13 -1 enwik9 on my old laptop (Intel Core i5-3210M 2.5GHz, 16 GB RAM): Total 1000000000 bytes compressed to 202158507 bytes. Time 5389.15 sec, used 850842021 bytes of memory. :_superman2: I wonder, if it ran under an Intel Core i7 with higher GHz, could it be faster? This is the correct binary file.
    32 replies | 1141 view(s)
  • Trench's Avatar
    Today, 06:59
    pacalovasjurijus: game-maker programs let you make games without programming; website drag-and-drop builders mean you don't need to program; Cheat Engine edits game code without you needing to program; a hex editor edits any file without you knowing how to program; etc. Obviously they have limits, but you can do something with them.
    22 replies | 581 view(s)
  • Gotty's Avatar
    Today, 02:22
    I disabled Windows Update a looong time ago (I'm still on 1803, maybe it's time to move on...). Not just disabled updates, but prevented Windows from turning it back on again. No popups. No searching for updates in the background. No sudden restarts. :_secret2: In the registry, modify the startup type of the wuauserv service to disabled and remove all permissions for this key - except for yourself. This way Windows cannot turn it back on again.
    2 replies | 52 view(s)
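For anyone who prefers to script Gotty's registry tweak above instead of clicking through regedit, here is a minimal sketch (my own illustration, not Gotty's exact procedure): it only flips the wuauserv startup type to Disabled via Python's winreg module, assumes an elevated (Administrator) Python on Windows, and leaves the permission-stripping step Gotty describes to be done manually.

```python
# Minimal sketch: set the wuauserv service startup type to 4 (= Disabled).
# Assumes an elevated prompt; the ACL changes Gotty mentions are not shown.
import winreg

WUAUSERV = r"SYSTEM\CurrentControlSet\Services\wuauserv"
START_DISABLED = 4  # service start types: 2 = Automatic, 3 = Manual, 4 = Disabled

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WUAUSERV, 0,
                    winreg.KEY_SET_VALUE | winreg.KEY_QUERY_VALUE) as key:
    winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, START_DISABLED)
    value, _ = winreg.QueryValueEx(key, "Start")
    print("wuauserv Start is now", value)
```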
  • Shelwien's Avatar
    Today, 01:20
    With disabled updates it stops doing that.
    2 replies | 52 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 17:08
    Sorry for the late post. Here is the source code.
    32 replies | 1141 view(s)
  • moisesmcardona's Avatar
    Yesterday, 15:32
    moisesmcardona replied to a thread Fp8sk in Data Compression
    Source code..... (Could we ban him for violating the GPL rules?)
    32 replies | 1141 view(s)
  • Gotty's Avatar
    Yesterday, 13:38
    Gotty replied to a thread ARM vs x64 in The Off-Topic Lounge
    Oh, no problem. You have to know, I don't know much about the topic. I felt the 1st video was a bit (?) biased, but since I didn't find hard numbers that clearly support or refute the claims (the full picture is still very blurry)... I thought I'd post these - they could be interesting for the readers of the thread. Thank you for posting your view.
    13 replies | 760 view(s)
  • paleski's Avatar
    Yesterday, 09:16
    Maybe here: http://forum.doom9.org/showpost.php?p=1916331&postcount=281
    38 replies | 3258 view(s)
  • Piotr Tarsa's Avatar
    Yesterday, 08:24
    (Sorry if this post is a little rude, but I'm fed up with baseless stories about ARM inferiority) The guy is spewing typical nonsense:
    - ARM can't have as high performance as x86
    - ARM lacks some mysterious features that only x86 can have
    - ARM can't integrate with as much hardware as x86
    - etc.
    Where's any proof of that? The actual situation seems to be quite the opposite:
    - ARM in the form of Apple Silicon already has very high performance, and it's going up quickly. That is visible even in the first post here.
    - I haven't seen any example of functionality that is possible on x86 but not on ARM. x86 prophets tell us otherwise, but is x86 a religion? You have access to the ISAs (instruction set architectures), so you can look for the mysterious features yourself - but there aren't any.
    - ARM can be integrated with hardware typically seen with x86, e.g. nVidia has full support for CUDA on ARM processors (currently nVidia supports the x86, ARM and POWER architectures), nVidia Shield is an integration of ARM with GeForce, there are rumors of Samsung integrating RDNA (the new Radeon cores) in their upcoming ARM-based smartphone SoCs, etc.
    I'm mostly interested in any logical explanation of why ARM can't scale its performance up to the levels of x86 or above. No biased, unfounded, vague claims, but actual technical analysis showing understanding of the ARM and x86 architectures. Mac on ARM is an unknown, so it's perfectly logical to wait for independent benchmarks and see how the software we're interested in performs on Apple Silicon based machines. Nothing shocking there. The same goes for choosing between an AMD CPU and an Intel CPU, or an AMD GPU and an nVidia GPU.
    13 replies | 760 view(s)
  • Alexander Rhatushnyak's Avatar
    Yesterday, 06:58
    Here is a prooflink, and a proofshot is attached. If you have ever tried to prevent Windows 10 from rebooting for more than a week, please describe your experience. Thank you!
    2 replies | 52 view(s)
  • Alexander Rhatushnyak's Avatar
    Yesterday, 06:52
    Could anyone please build and provide cjpegxl and djpegxl executables? Either 32-bit or (better) 64-bit, either Windows or (better) Linux, for either earliest or latest Intel Core* CPUs ? I couldn't find any executables, and those that I built myself earlier this year were somehow a lot slower than expected (on my machines). Thank you in advance!
    38 replies | 3258 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 03:19
    Fp8sk12 - faster than the previous version but worse compression ratio :) fp8sk12 -8 mixed40.dat: Total 400000000 bytes compressed to 56415009 bytes. Time 3122.57 sec, used 1831298304 bytes of memory (run under Windows 10, Intel Core i5-3210M 2.5GHz, 16 GB memory). If it ran under an Intel Socket 1151 Core i7-8700K (Coffee Lake, 3.7GHz, 6C12T, 95W TDP), how fast could it be? :_superman2:
    32 replies | 1141 view(s)
  • Dresdenboy's Avatar
    Yesterday, 02:08
    There is the Horspool paper (Improving LZW), which describes it. I also checked his source. No additional information. But Charles Bloom did some interesting analysis of this encoding (calling it "flat codes" BTW): http://cbloomrants.blogspot.com/2013/04/04-30-13-packing-values-in-bits-flat.html
    11 replies | 730 view(s)
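For readers following the Horspool and Bloom references above, a small illustrative sketch of the phased-in binary ("flat") codes under discussion - my own example, not code from either source; bit strings are used only to keep it readable:

```python
# Phased-in binary ("flat") codes for values in [0, n): the first u codewords
# get floor(log2(n)) bits, the remaining n-u codewords get one bit more.

def flat_encode(v, n):
    assert 0 <= v < n
    k = n.bit_length() - 1          # floor(log2(n))
    u = (1 << (k + 1)) - n          # number of short (k-bit) codewords
    if v < u:
        return format(v, f'0{k}b') if k else ''
    return format(v + u, f'0{k + 1}b')

def flat_decode(bits, n):
    k = n.bit_length() - 1
    u = (1 << (k + 1)) - n
    x = int(bits[:k], 2) if k else 0
    if x < u:
        return x, k                  # (value, bits consumed)
    x = (x << 1) | int(bits[k])
    return x - u, k + 1

# n = 5: values 0..2 -> 00, 01, 10 (2 bits); 3, 4 -> 110, 111 (3 bits)
for v in range(5):
    code = flat_encode(v, 5)
    assert flat_decode(code + '000', 5)[0] == v
    print(v, code)
```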
  • Dresdenboy's Avatar
    Yesterday, 01:54
    Did you try it yet? Even Charles Bloom covered this encoding with some interesting analysis here. In another thread Mauro Vezossi brought up an LZW variant using this phased-in binary encoding, called lzw-ab. He also did some tests:
    15 replies | 17304 view(s)
  • JamesB's Avatar
    Yesterday, 00:32
    Ah Tom Scott. A nice basic explanation. Yes it misses a lot out, but that's not the point. I had to chuckle with the BBC Micros in the background. My very first compression program was a huffman compressor written in 6502 assembly on the BBC Micro. :-)
    3 replies | 215 view(s)
  • Jarek's Avatar
    10th July 2020, 23:19
    L1-PCA is not exactly what we want here: we want the lowest bits/pixel, so I have directly optimized e(x) = sum_{d=1..3} lg(sum_i |x_id|), which is literally the sum of the entropies of 3 estimated Laplace distributions and can be translated into approximate bits/pixel. For all images it led to this averaged transform:
    0.515424, 0.628419, 0.582604
    -0.806125, 0.124939, 0.578406
    0.290691, -0.767776, 0.570980
    The first vector (as a row) is a kind of luminosity (Y) - it should have higher accuracy; the remaining ones correspond to colors - I would just use finer quantization for the first one and coarser for the remaining two. But we could also directly optimize both the rotation and some perceptually chosen distortion evaluation instead - I have just written the theory today and will update arxiv in a day or two.
    12 replies | 539 view(s)
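A small numerical sketch of the optimization Jarek describes above (my own illustration on synthetic data, not his code or dataset): parameterize the rotation O by Euler angles and minimize e(x) = sum_d lg(sum_i |x_id|) over them with scipy.

```python
# Search for a 3x3 rotation O minimizing e(x) = sum_d log2(sum_i |(x O^T)_id|),
# i.e. the ML-estimated entropy of three Laplace-like transform coefficients.
# The synthetic "pixels" below are only a stand-in for real RGB samples.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
base = rng.laplace(size=(10000, 1))                      # shared component -> correlated channels
pixels = np.hstack([base + 0.1 * rng.laplace(size=(10000, 1)) for _ in range(3)])

def e_of_x(angles):
    O = Rotation.from_euler('xyz', angles).as_matrix()   # rotation from 3 Euler angles
    coeffs = pixels @ O.T                                 # transformed coordinates
    return np.sum(np.log2(np.sum(np.abs(coeffs), axis=0)))

res = minimize(e_of_x, x0=[0.0, 0.0, 0.0], method='Nelder-Mead')
print("optimized e(x):", res.fun, " identity e(x):", e_of_x([0.0, 0.0, 0.0]))
print(Rotation.from_euler('xyz', res.x).as_matrix())      # rows: one luminance-like + two chroma-like axes
```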
  • Jyrki Alakuijala's Avatar
    10th July 2020, 22:51
    That's amazing. Congratulations! I didn't expect that one could improve it by more than 5 % at corpus level without going to 10x slower decoding. It will be good to learn about it. I brought quite a few techniques that I originally invented for WebP lossless into other codecs myself. Entropy clustering I use in brotli, brunsli, and in JPEG XL with great success -- it is a wonderful little mechanism that makes entropy codes so much more flexible. My Select predictor performed better (better compression, faster because of fewer branches) than Paeth in JPEG XL, too. WebP's color decorrelation system through the guidance image is used in JPEG XL. Some of my inventions for WebP that didn't make it into JPEG XL include the 2d code for LZ77 distances and the color cache, but that is mostly because JPEG XL is less pixel-oriented than WebP lossless.
    170 replies | 42219 view(s)
  • pacalovasjurijus's Avatar
    10th July 2020, 22:24
    I think that we need to move the information and write the prefix code.
    45 replies | 4998 view(s)
  • Stefan Atev's Avatar
    10th July 2020, 21:20
    I don't have any disagreement with the fact that the optimal solution will optimize for both perceptual and coding considerations. It just seemed from your comment that you think a sequential optimization will work - first optimize directions for coding, then optimize quantization for perceptual. I think the parameters are coupled if you evaluate them with a perceptual metric, so a sequential optimization strategy seems to be a bit on the greedy side. Perhaps I misunderstood your explanation in the forum, I am reacting to the comments and have not read the paper carefully. I personally find the use of L1-PCA very rewarding, in ML for the longest time L2/Gaussians have been used not because people think they're accurate but because they are convenient to analyze/compute with. Then people will try to find exact solutions to crippled models instead of accepting sub-optimal solutions for a model that reflects reality closer (that's the Math bias in wanting convergence proofs / theoretical guarantees)
    12 replies | 539 view(s)
  • Gotty's Avatar
    10th July 2020, 20:09
    A couple of introductory videos:
    https://www.youtube.com/watch?v=JsTptu56GM8 How Computers Compress Text: Huffman Coding and Huffman Trees
    https://www.youtube.com/watch?v=TxkA5UX4kis Compression codes | Journey into information theory | Computer Science | Khan Academy
    https://www.youtube.com/watch?v=Lto-ajuqW3w Compression - Computerphile
    3 replies | 215 view(s)
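Since the first video above is about Huffman coding, a tiny generic textbook-style sketch might help readers of this thread see the idea in code - it is not taken from any of the linked videos:

```python
# Build a Huffman prefix code from symbol counts (generic textbook construction).
import heapq
from collections import Counter

def huffman_codes(text):
    # heap items: (count, tiebreaker, tree); a tree is a symbol or a (left, right) pair
    heap = [(n, i, sym) for i, (sym, n) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        n1, _, t1 = heapq.heappop(heap)
        n2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (n1 + n2, i, (t1, t2)))
        i += 1
    codes = {}
    def walk(tree, prefix=''):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + '0')
            walk(tree[1], prefix + '1')
        else:
            codes[tree] = prefix or '0'   # degenerate single-symbol input
    walk(heap[0][2])
    return codes

codes = huffman_codes("abracadabra")
encoded = ''.join(codes[c] for c in "abracadabra")
print(codes, len(encoded), "bits vs", 8 * len("abracadabra"), "bits uncompressed")
```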
  • Jarek's Avatar
    10th July 2020, 14:02
    http://mattmahoney.net/dc/dce.html
    3 replies | 215 view(s)
  • randomthing's Avatar
    10th July 2020, 13:56
    Hello, please help me with some details about how to start practicing data compression. I mean, what stuff do I need to learn data compression? What kind of math, theory, books, articles, etc. do I need to get started? And most importantly, what type of math do I need to know before starting data compression? Thank you, have a good day.
    3 replies | 215 view(s)
  • Dresdenboy's Avatar
    10th July 2020, 13:02
    @Stefan: 1. This sounds interesting, but might not work. I put len-1 matches into my list because several good LZ77-family compressors use them (with 4-bit offsets) to avoid the longer encoding of literals when there is no longer match. You might also consider literal runs (early in a file) vs. match runs (1-n matches following). You might also check out the literal escaping mechanism of pucrunch. 2. Fibonacci sum calculation is a few instructions, but it quickly adds up. :) Gamma, by contrast, is cheap in code-footprint terms. Bit-oriented encoding is also cheap in asm (the widely seen get_bit subroutines with a shift register, which also gets refilled with bytes, or BT instructions).
    41 replies | 2825 view(s)
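As a point of comparison for the Fibonacci-vs-gamma remark above, a quick sketch of Elias gamma coding (my own illustration; bit strings only for readability): floor(log2(v)) zero bits followed by v in binary.

```python
# Elias gamma code for integers v >= 1.
def gamma_encode(v):
    assert v >= 1
    b = bin(v)[2:]                    # v in binary, no '0b' prefix
    return '0' * (len(b) - 1) + b

def gamma_decode(bits):
    zeros = 0
    while bits[zeros] == '0':
        zeros += 1
    v = int(bits[zeros:2 * zeros + 1], 2)
    return v, 2 * zeros + 1           # (value, bits consumed)

for v in (1, 2, 5, 17):
    code = gamma_encode(v)
    assert gamma_decode(code)[0] == v
    print(v, code)                    # 1 -> 1, 2 -> 010, 5 -> 00101, 17 -> 000010001
```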
  • Sportman's Avatar
    10th July 2020, 12:48
    Yes, Kwe uses the separator together with unused bytes to navigate.
    2 replies | 185 view(s)
  • xezz's Avatar
    10th July 2020, 12:13
    Must the separator exist in the file?
    2 replies | 185 view(s)
  • Jarek's Avatar
    10th July 2020, 10:40
    For data compression applications, aligning them with a rotation gives a few percent size reduction - thanks to better agreement with 3 independent Laplace distributions (there is a small dependence which can be included in width prediction to get an additional ~0.3 bits/pixel reduction). Agreement with the assumed distribution is crucial for both lossy and lossless: log-likelihood, e.g. from ML estimation, is literally the savings in bits/pixel, and for disagreement we pay Kullback-Leibler bits/pixel. My point is that we should use both simultaneously: optimize according to perceptual criteria, and also this "dataset axis alignment" for agreement with the assumed distribution ... while it seems the transforms currently used are optimized only perceptually (?). To optimize for both simultaneously, the basic approach is just to choose three different quantization coefficients for the three axes, which is nearly optimal from a bits/pixel perspective (as explained in the previous post). But maybe it would also be worth rotating these axes for perceptual optimization? That would require formalizing such an evaluation ... Another question is the orthogonality of such a transform - should the three axes be orthogonal? While it seems natural from an ML perspective (e.g. PCA), it is not true for YCrCb or YCoCg. But again, to optimize for non-orthogonal transforms some formalization of perceptual evaluation is needed ... To formalize such an evaluation, we could use a distortion metric with weights in perceptually chosen coordinates ...
    12 replies | 539 view(s)
  • Stefan Atev's Avatar
    10th July 2020, 00:12
    I am not sure that's the case - it can happen that the directions PCA picked are not well aligned with "perceptual importance" directions, so to maintain perceptual quality you need good precision in all 3 quantized values. As an example, if the directions have the same weight for green, you may be forced to spend more bits on all three of them; or if the directions end up being equal angles apart from luma - same situation. I think for lossless it doesn't matter, because your loss according to any metric will be zero - the difficulty is in having an integer transform that's invertible.
    12 replies | 539 view(s)
  • Sportman's Avatar
    9th July 2020, 23:55
    Kwe version 0.0.0.1 - keyword encoder. Kwe encodes keywords:
    kwe e input output
    kwe d input output
    There are 4 optional options (they must all be given at once):
    - keyword separator (default 32 = space)
    - minimal keyword length (default 2 = min.)
    - maximal keyword length (default 255 = max.)
    - keyword depth (default 255 = max.)
    Command-line version; works with .NET Framework 4.8. A very simple first version: there is no error handling and it is not well tested. Input can be any text file. Output can be compressed with another archiver.
    2 replies | 185 view(s)
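Purely as an illustration of the general keyword-substitution idea hinted at above (frequent separator-delimited keywords replaced by byte values that never occur in the input) - this is NOT Kwe's actual format, and a real encoder would also have to store the table and handle edge cases:

```python
# Illustrative keyword-substitution encoder sketch (not Kwe's file format).
from collections import Counter

def encode(data: bytes, sep: int = 32, min_len: int = 2):
    words = data.split(bytes([sep]))
    unused = [b for b in range(256) if bytes([b]) not in data]   # candidate code bytes
    frequent = [w for w, n in Counter(words).most_common()
                if len(w) >= min_len and n > 1][:len(unused)]
    table = dict(zip(frequent, unused))                          # keyword -> single byte
    out = bytes([sep]).join(bytes([table[w]]) if w in table else w for w in words)
    return table, out

table, packed = encode(b"the cat sat on the mat near the cat")
print(table, packed, len(packed), "bytes")
```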
  • skal's Avatar
    9th July 2020, 21:55
    No, WebP v2 is a redesign around the known limitations of WebP-lossless v1. It's currently 10-15% better than WebP v1, depending on the source (at consistent speed). It's also targeting alpha-plane compression specifically, with separate tools from lossless ARGB.
    170 replies | 42219 view(s)
  • Darek's Avatar
    9th July 2020, 21:54
    Darek replied to a thread Paq8sk in Data Compression
    Nope, paq8sk32 -x15 got a worse score for enwik8 than paq8sk28... which got 122'398'396 for enwik9 with -x15 -w -e1,English.dic. My estimate is about 122'5xx'xxx.
    142 replies | 11344 view(s)
  • msaidov's Avatar
    9th July 2020, 18:39
    Hello! At the beginning of this thread there was an error when making the executable file in Ubuntu:
    /usr/bin/ld: libnc.a(libnc.o): relocation R_X86_64_32S against `.rodata' can not be used when making a PIE object; recompile with -fPIE
    /usr/bin/ld: libnc.a(job.o): relocation R_X86_64_32 against `.text' can not be used when making a PIE object; recompile with -fPIE
    collect2: error: ld returned 1 exit status
    make: *** Error 1
    The point is that we get an already compiled libnc.a file, and there is no possibility of adding the -fPIE flag to it. Could you give a hint on how to build the project properly on Ubuntu? If there was already a response in this thread and I didn't recognize it, I'd appreciate a reference. Thank you.
    134 replies | 22152 view(s)
  • zubzer0's Avatar
    9th July 2020, 18:11
    Jyrki Alakuijala, there's another very interesting and possibly promising approach - HIGIC: High-Fidelity Generative Image Compression. "NextGen" image compression? (Maybe for video in the future?) Although maybe someone will do a neural network to restore JPEG :) joke... or... the "old man" will live forever(!), constantly getting new (re)compression / (re)processing / (re)restoration methods added to it :D
    10 replies | 663 view(s)
  • Jyrki Alakuijala's Avatar
    9th July 2020, 17:06
    JPEG XL is taking the crown on visually lossless. According to our analysis with human viewers, we get a 50 % improvement in visually lossless and a lower standard deviation in bitrates than what is necessary for traditional JPEG. See figure 4 in https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11137/111370K/JPEG-XL-next-generation-image-compression-architecture-and-coding-tools/10.1117/12.2529237.full?SSO=1
    10 replies | 663 view(s)
  • zubzer0's Avatar
    9th July 2020, 15:47
    Jyrki Alakuijala: this means comparing with the original on a still frame and not seeing differences without using a magnifying glass or mathematical analysis (butteraugli). This applies to any format. This is why "jpeg" is still popular (fast and acceptable in comparison with modern alternatives - minimal results for a very long time, in "visually lossless" terms), and if you compress it (for example) with "lepton-fast", then it becomes the outright leader: fast, good quality, minimal size ...
    10 replies | 663 view(s)
  • Jyrki Alakuijala's Avatar
    9th July 2020, 15:38
    Visually lossless can mean very different things at 720p, 1080p, 4k, and 8k
    10 replies | 663 view(s)
  • zubzer0's Avatar
    9th July 2020, 15:22
    If compression touches the category of "visually lossless"* live video (fine detail, mosquito noise, barely noticeable textures on flat surfaces), then HEVC (represented by x265) should disable some of its innovations, such as SAO, and use veryslow/placebo settings, so as not to fall flat on its face against x264 and to stay on a par with it. I wonder how h.266 will do in this regard. Since it is based on HEVC, will it inherit this as well? * example: x264 crf 16-18 or JPEG ~Q90
    10 replies | 663 view(s)
  • Jarek's Avatar
    9th July 2020, 15:00
    While I have focused on lossless, adding lossy in practice usually (as PVQ failed) means just uniform quantization with a lattice of size 1/Q. The entropy of a width-b Laplace distribution ( https://en.wikipedia.org/wiki/Laplace_distribution ) is lg(2 b e) bits. The ML estimator of b is the mean of |x|, leading to the choice of transform O as minimizing the entropy estimate e(x) = sum_{d=1..3} lg(sum_i |x_id|). Optimizing this among rotations indeed leads to points nicely aligned along the axes (plots above), which can be approximated as 3 Laplace distributions. For lossy, uniform quantization with step 1/Q leads to entropy ~ lg(2be) + lg(Q) bits. So to the above e(x) entropy evaluation we just need to add lg(Q1) + lg(Q2) + lg(Q3). Hence we should still choose the transform/rotation optimizing e(x), which is similar to PCA ... and only choose the quantization constants Q1, Q2, Q3 according to perceptual evaluation. Anyway, such a choice shouldn't be made just from human vision analysis, but also from analysis of a data sample - I believe I could easily get lower bits/pixel with such optimization also for lossy.
    12 replies | 539 view(s)
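A quick numerical sanity check (my own sketch on synthetic data) of the formula above: quantizing a Laplace(b) variable on a lattice of spacing 1/Q costs about lg(2be) + lg(Q) bits per sample.

```python
import numpy as np

rng = np.random.default_rng(1)
b, Q = 0.7, 8.0
x = rng.laplace(scale=b, size=1_000_000)

q = np.round(x * Q).astype(np.int64)            # uniform quantization, step 1/Q
_, counts = np.unique(q, return_counts=True)
p = counts / counts.sum()
empirical_bits = -(p * np.log2(p)).sum()        # empirical entropy of the quantized symbols

predicted_bits = np.log2(2 * b * np.e) + np.log2(Q)
print(f"empirical {empirical_bits:.3f} bits, predicted {predicted_bits:.3f} bits")
```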
  • pacalovasjurijus's Avatar
    22 replies | 581 view(s)
  • anasalzuvix's Avatar
    9th July 2020, 13:41
    Hi, thanks for your response. And sorry, it's my bad that I can't clarify the question perfectly. However, thanks once again.
    2 replies | 274 view(s)
  • Shelwien's Avatar
    9th July 2020, 11:08
    Shelwien replied to a thread Fp8sk in Data Compression
    Please don't post these in the GDC discussion thread - it's for contest discussion, not for benchmarks. Also, fp8 doesn't seem fast enough even for the slowest category anyway; it would have to be more than twice as fast to fit.
    32 replies | 1141 view(s)
  • suryakandau@yahoo.co.id's Avatar
    9th July 2020, 10:13
    I wonder whether using paq8sk32 -x15 -w -e1,English.dic on enwik9 could get 121.xxx.xxx.
    142 replies | 11344 view(s)
  • zyzzle's Avatar
    9th July 2020, 05:12
    I'm very leery of this new codec. There comes a point where too much data is thrown away. "50% less bitrate for comparable quality". I don't believe it. You're in the lossy domain. The extra 50% reduction in bitrate over h.265 -- and probably ~75% reduction over h.264 codecs means extreme processor power is required, with very high heat and much greater energy cost to support the h.266 VVC codec. Even h.265 requires very high processor loads. I do not want to "throw away 50%" more bitrate. I'd rather increase bitrate by 100% to achieve much higher quality (less lossiness).
    10 replies | 663 view(s)
  • suryakandau@yahoo.co.id's Avatar
    9th July 2020, 04:03
    Using the fp8sk9 -8 option on mixed40.dat: Total 400000000 bytes compressed to 46246446 bytes. Using the fp8sk9 -8 option on block40.dat: Total 399998976 bytes compressed to 61042053 bytes.
    32 replies | 1141 view(s)
  • suryakandau@yahoo.co.id's Avatar
    9th July 2020, 02:56
    Using the -8 option on mixed40.dat: Total 400000000 bytes compressed to 46246446 bytes. Using the -8 option on block40.dat: Total 399998976 bytes compressed to 61042053 bytes. v6-v8 were failed products.
    32 replies | 1141 view(s)
  • OverLord115's Avatar
    9th July 2020, 02:26
    OverLord115 replied to a thread repack .psarc in Data Compression
    @pklat So I'm assuming that you could extract the .psarc file from Resistance: Fall of Man called game.psarc. Sorry I can't make any contribution to the post, but can I ask how you extracted it? Because no matter how much I searched for programs, I can't extract it or even see anything in Notepad++ - errors everywhere.
    5 replies | 1124 view(s)
  • moisesmcardona's Avatar
    9th July 2020, 01:58
    moisesmcardona replied to a thread Fp8sk in Data Compression
    Where are v6 through v8?
    32 replies | 1141 view(s)
  • suryakandau@yahoo.co.id's Avatar
    9th July 2020, 01:42
    Fp8sk9 using the -5 option on images40.dat (GDC competition public test set files): Total 399892087 bytes compressed to 163761538 bytes. Here is the source code and the binary files.
    32 replies | 1141 view(s)
  • moisesmcardona's Avatar
    8th July 2020, 18:01
    These stronger codecs will require newer hardware. HEVC is supported because the hardware has been ready for some time now, but these new codecs will require the ASIC chips to be updated to include them, so I imagine they will not be mainstream until Intel, Nvidia, AMD, Qualcomm, MediaTek, etc. integrate them and the world has shifted to the new hardware. Surprisingly, AV1 decoding works really well compared to when HEVC started. The guys who work on dav1d have done an excellent job. Encoding, however, is still slow in its current state until we start seeing broader hardware encoder support. Not to mention there is still a lot of tuning going on in the encoders.
    10 replies | 663 view(s)
  • Jon Sneyers's Avatar
    8th July 2020, 03:11
    YCbCr is just a relic of analog color TV, which used to do something like that, and somehow we kept doing it when going from analog to digital. It's based on the constraints of backwards compatibility with the analog black & white television hardware of the 1940s and 1950s (as well as allowing color TVs to correctly show existing black & white broadcasts, which meant that missing chroma channels had to imply grayscale); things like chroma subsampling are a relic of the limited available bandwidth for chroma, since the broadcasting frequency bands were already assigned and they didn't really have much room for chroma.
    YCbCr is not suitable for lossless for the obvious reason that it is not reversible: converting 8-bit RGB to 8-bit YCbCr brings you from 16 million different colors to 4 million different colors. Basically two bits are lost. Roughly speaking, it does little more than convert 8-bit RGB to 7-bit R, 8-bit G, 7-bit B, in a clumsy way that doesn't allow you to restore G exactly. Of course the luma-chroma-chroma aspect of YCbCr does help for channel decorrelation, but still, it's mostly the bit depth reduction that helps compression. It's somewhat perceptual (R and B are "less important" than G), but only in a rather crude way.
    Reversible color transforms in integer arithmetic have to be defined carefully - multiplying with some floating point matrix is not going to work. YCoCg is an example of what you can do while staying reversible. You can do some variants of that, but that's about it. Getting some amount of channel decorrelation is the only thing that matters for lossless – perceptual considerations are irrelevant since lossless is lossless anyway.
    For lossy compression, things of course don't need to be reversible, and decorrelation is still the goal, but perceptual considerations also apply: basically you want any compression artifacts (e.g. due to DCT coefficient quantization) to be maximally uniform perceptually – i.e. the color space itself should be maximally uniform perceptually (after the color transform, the same distance in terms of color coordinates should result in the same perceptual distance in terms of similarity of the corresponding colors). YCbCr applied to sRGB is not very good at that: e.g. all the color banding artifacts you see in dark gradients, especially in video codecs, are caused by the lack of perceptual uniformity of that color space.
    XYB is afaik the first serious attempt to use an actual perceptually motivated color space based on recent research in an image codec. It leads to bigger errors if you naively measure errors in terms of RGB PSNR (or YCbCr SSIM for that matter), but less noticeable artifacts.
    12 replies | 539 view(s)
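Jon mentions YCoCg as the kind of carefully defined reversible integer transform that works for lossless; here is a minimal sketch of the lifting-based YCoCg-R variant (a standard construction, my own write-up) showing that it round-trips exactly in integer arithmetic:

```python
# Lifting-based YCoCg-R: a reversible integer luma-chroma-chroma transform.
def rgb_to_ycocg_r(r, g, b):
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

for rgb in [(0, 0, 0), (255, 255, 255), (12, 200, 99), (1, 254, 3)]:
    assert ycocg_r_to_rgb(*rgb_to_ycocg_r(*rgb)) == rgb   # exact round-trip
print("YCoCg-R round-trips losslessly")
```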
  • Jyrki Alakuijala's Avatar
    8th July 2020, 01:33
    I'm guilty of not having published it in any form other than three open-sourced implementations (butteraugli, pik and JPEG XL). Most color work is based on research from a hundred years ago: https://en.wikipedia.org/wiki/CIE_1931_color_space possibly with slight differences. It initially concerned roughly 10-degree discs of constant color and isosurfaces of the color perception experience. An improvement was delivered later with 2-degree discs. However, pixels are about 100x smaller than that, and using CIE 1931 is like modeling mice after knowing elephants very well. With butteraugli development I looked into the anatomy of the eye and considered where the information is in photographic images.
    1. The anatomy of the eye in the fovea is only bichromatic, L- and M-receptors only; the S-receptors are big and only around the fovea. This makes sense since S-receptors scatter more. Anatomic information is more reliable than physiological information.
    2. Most of the color information stored in a photograph is in the high-frequency information. For photographic quality one can consider that more than 95 % of the information is in the 0.02-degree data rather than the 2-degree data. The anatomic knowledge about the eye and our own psychovisual testing suggest that the eye is scale dependent, and this invalidates the use of CIE 1931 for modeling colors for image compression. We cannot use a large-scale color model to model the fine scale, and the fine scale is all that matters for modern image compression.
    3. Signal compression (gamma compression) happens in the photoreceptors (cones). It happens after (or at) the process where spectral sensitivity influences the conversion of light into electricity. To model this, we need first to model the L, M, and S spectral sensitivities in a linear (RGB) color space and then apply a non-linearity. Applying the gamma compression to dimensions other than the L, M and S spectral sensitivities will lead to warped color spaces and has no mathematical possibility of getting the perception of colors right.
    12 replies | 539 view(s)
  • Gotty's Avatar
    7th July 2020, 23:02
    Gotty replied to a thread ARM vs x64 in The Off-Topic Lounge
    Youtube: Apple ARM Processor vs Intel x86 Performance and Power Efficiency - Is the MAC Doomed?
    Youtube: Why you SHOULD Buy an Intel-Based Mac! (ARM can WAIT)
    13 replies | 760 view(s)
  • Shelwien's Avatar
    7th July 2020, 19:58
    You have to clarify the question. The best LZ78 is probably GLZA: http://mattmahoney.net/dc/text.html#1638 But it's also possible to use any compression algorithm with external dictionary preprocessing - https://github.com/inikep/XWRT - so NNCP is technically also "dictionary-based compression".
    2 replies | 274 view(s)
  • Shelwien's Avatar
    7th July 2020, 19:50
    https://en.wikipedia.org/wiki/YCbCr#Rationale But of course there's also an arbitrary tradeoff between precise perceptual colorspaces and practical ones, because better perceptual representations are nonlinear and have higher precision, so compression based on these would be too slow. For example, see https://en.wikipedia.org/wiki/ICC_profile
    12 replies | 539 view(s)
  • compgt's Avatar
    7th July 2020, 19:01
    I don't know Lil Wayne or Samwell. But I recall it was Ellen DeGeneres who wanted Justin Bieber made. Miley Cyrus is good.
    29 replies | 1469 view(s)
  • DZgas's Avatar
    7th July 2020, 18:25
    DZgas replied to a thread Fp8sk in Data Compression
    Of course! But if you compress it +0.5% more strongly, it will be at the level of paq8pxd (default type).
    32 replies | 1141 view(s)
  • suryakandau@yahoo.co.id's Avatar
    7th July 2020, 18:11
    Maybe this is the trade-off between compression ratio and speed.
    32 replies | 1141 view(s)
  • DZgas's Avatar
    7th July 2020, 17:47
    DZgas replied to a thread Fp8sk in Data Compression
    Hm... compresses 0.4% more strongly and 65% slower (compared to fp8v6). Good!
    32 replies | 1141 view(s)
  • DZgas's Avatar
    7th July 2020, 17:17
    A non-free codec. But yes - "I wonder how it compares to AV1". HEVC has only just begun to be supported by almost everyone. The future belongs to AV1, which is free and supported by all the large companies... but not sooner than ~2022, even with large-company support. Whatever happens to VVC, the Internet is still AVC. So most likely this codec is for everything except the Internet.
    10 replies | 663 view(s)
  • Stefan Atev's Avatar
    7th July 2020, 16:10
    I think the chief criterion in a lot of color spaces is perceptual uniformity - changes in either component are perceived as roughly similar changes in color difference; that way, when you minimize loss in the encoded color, you are indirectly using a perceptual heuristic. Something like CIELAB, or computer vision/graphics of yore's darling HSV, etc., is more perceptually uniform than RGB. For compression, it maybe makes more sense to use decompositions where one component is much more perceptually important (and has to be coded more precisely) and the other components are less perceptually important. For lossless, I think you wouldn't care about any of these considerations; you simply want to decorrelate the color channels (so one plane is expensive to encode but the others are cheap). I think for lossless, tailoring the color transform is probably good for compression - storing the transform coefficients is not that expensive, so even small improvements in the coding can help; how you'd optimize the transform for compressibility is a different story. It seems to me that if it were easy to do efficiently, everyone would be doing it.
    12 replies | 539 view(s)
  • JamesWasil's Avatar
    7th July 2020, 14:35
    But why did you have to give us Justin Bieber, Lil Wayne, Samwell, and Miley Cyrus? Couldn't you have skipped these? What did we do to deserve this?
    29 replies | 1469 view(s)