Activity Stream

  • Piotr Tarsa's Avatar
    Today, 08:24
    (Sorry if this post is a little rude, but I'm fed up with baseless stories about ARM inferiority.) The guy is spewing typical nonsense:
    - ARM can't have as high performance as x86
    - ARM lacks some mysterious features that only x86 can have
    - ARM can't integrate with as much hardware as x86
    - etc.
    Where's any proof of that? The actual situation seems to be quite the opposite:
    - ARM in the form of Apple Silicon already has very high performance, and it's going up quickly. That is visible even in the first post here.
    - I haven't seen any example of functionality that is possible on x86 but not on ARM. x86 prophets tell us otherwise, but is x86 a religion? You have access to the ISAs (instruction set architectures), so you can look for the mysterious features yourself, but there aren't any.
    - ARM can be integrated with hardware typically seen with x86: e.g. nVidia has full support for CUDA on ARM processors (currently nVidia supports the x86, ARM and POWER architectures), nVidia Shield is an integration of ARM with GeForce, there are rumors of Samsung integrating RDNA (the new Radeon cores) in their upcoming ARM-based smartphone SoCs, etc.
    I'm mostly interested in any logical explanation of why ARM can't scale its performance up to the levels of x86 or above. No biased, unfounded, vague claims, but actual technical analysis showing an understanding of the ARM and x86 architectures. Mac on ARM is an unknown, so it's perfectly logical to wait for independent benchmarks and see how the software we're interested in performs on Apple Silicon based machines. Nothing shocking there. The same goes for choosing between an AMD CPU and an Intel CPU, or an AMD GPU and an nVidia GPU.
    12 replies | 707 view(s)
  • Alexander Rhatushnyak's Avatar
    Today, 06:58
    Here is a prooflink, and a proofshot is attached. If you ever tried to prevent Windows 10 from rebooting for more than a week, please describe your experiences. Thank you!
    0 replies | 3 view(s)
  • Alexander Rhatushnyak's Avatar
    Today, 06:52
    Could anyone please build and provide cjpegxl and djpegxl executables? Either 32-bit or (better) 64-bit, either Windows or (better) Linux, for either the earliest or the latest Intel Core* CPUs? I couldn't find any executables, and those that I built myself earlier this year were somehow a lot slower than expected (on my machines). Thank you in advance!
    36 replies | 3069 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Today, 03:19
    Fp8sk12 - faster than the previous version but with a worse compression ratio :) fp8sk12 -8 mixed40.dat: Total 400000000 bytes compressed to 56415009 bytes. Time 3122.57 sec, used 1831298304 bytes of memory (run under Windows 10 on an Intel Core i5-3210M 2.5GHz with 16 GB memory). If it ran under an Intel Socket 1151 Core i7-8700K (Coffee Lake, 3.7GHz, 6C12T, 95W TDP), how fast could it be? :_superman2:
    27 replies | 893 view(s)
  • Dresdenboy's Avatar
    Today, 02:08
    There is the Horspool paper (Improving LZW), which describes it. I also checked his source. No additional information. But Charles Bloom did some interesting analysis of this encoding (calling it "flat codes" BTW): http://cbloomrants.blogspot.com/2013/04/04-30-13-packing-values-in-bits-flat.html
    11 replies | 660 view(s)
  • Dresdenboy's Avatar
    Today, 01:54
    Did you try it yet? Even Charles Bloom covered this encoding with some interesting analysis here. In another thread Mauro Vezossi brought up an LZW variant using this phased-in binary encoding, called lzw-ab. He also did some tests:
    15 replies | 17223 view(s)
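The "flat codes" / phased-in binary coding discussed in the entries above is usually described as truncated binary coding: for an alphabet of n values, the first 2^(k+1) - n of them get k = floor(log2 n) bits and the rest get k+1 bits. A minimal Python sketch, for illustration only (not taken from Horspool's source or lzw-ab):

```python
def phased_in_code(symbol: int, n: int) -> str:
    """Truncated-binary ("phased-in") codeword for symbol in [0, n)."""
    assert 0 <= symbol < n
    k = n.bit_length() - 1          # floor(log2(n))
    u = (1 << (k + 1)) - n          # how many symbols get only k bits
    if symbol < u:
        return format(symbol, f'0{k}b') if k > 0 else ''
    # the remaining symbols get k+1 bits, offset so the code stays prefix-free
    return format(symbol + u, f'0{k + 1}b')

if __name__ == '__main__':
    n = 10  # e.g. an LZW dictionary currently holding 10 entries
    for s in range(n):
        print(s, phased_in_code(s, n))
```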
  • JamesB's Avatar
    Today, 00:32
    Ah, Tom Scott. A nice basic explanation. Yes, it misses a lot out, but that's not the point. I had to chuckle at the BBC Micros in the background. My very first compression program was a Huffman compressor written in 6502 assembly on the BBC Micro. :-)
    3 replies | 153 view(s)
  • Jarek's Avatar
    Yesterday, 23:19
    L1-PCA is not exactly what we want here: the lowest bits/pixel. So I have directly optimized e(x) = sum_{d=1..3} lg(sum_i |x_id|), which is literally the sum of the entropies of 3 estimated Laplace distributions and can be translated into approximate bits/pixel. For all images it led to this averaged transform:
    0.515424, 0.628419, 0.582604
    -0.806125, 0.124939, 0.578406
    0.290691, -0.767776, 0.570980
    The first vector (as a row) is a kind of luminosity (Y) and should have higher accuracy; the remaining two correspond to colors. I would just use finer quantization for the first one and coarser for the remaining two. But we could also directly optimize both the rotation and some perceptually chosen distortion evaluation instead - I have just written up the theory today and will update the arxiv version in a day or two.
    12 replies | 516 view(s)
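The e(x) objective in the entry above is easy to evaluate numerically. Below is a small NumPy sketch on synthetic, correlated residuals (not real image data); the 3x3 matrix is the averaged rotation quoted in the post, and the per-channel estimate log2(2*e*b) with b = mean(|y_d|) is the same Laplace-entropy estimate the post refers to:

```python
import numpy as np

def laplace_bits_per_pixel(residuals: np.ndarray, transform: np.ndarray) -> float:
    """Estimate bits/pixel after applying a 3x3 color transform to Nx3 residuals.

    Each transformed channel is modeled as a Laplace distribution with ML scale
    b_d = mean(|y_d|); its entropy is log2(2*e*b_d). Constants cancel when
    comparing transforms, so this matches the e(x) objective up to a constant.
    """
    y = residuals @ transform.T
    b = np.mean(np.abs(y), axis=0)                # per-channel Laplace scale
    return float(np.sum(np.log2(2.0 * np.e * b)))

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    # toy correlated "RGB residuals": a shared component plus small per-channel noise
    base = rng.laplace(scale=8.0, size=(10000, 1))
    rgb = np.hstack([base + rng.laplace(scale=2.0, size=(10000, 1)) for _ in range(3)])

    identity = np.eye(3)
    averaged = np.array([[ 0.515424,  0.628419,  0.582604],
                         [-0.806125,  0.124939,  0.578406],
                         [ 0.290691, -0.767776,  0.570980]])  # rotation from the post
    print('identity :', laplace_bits_per_pixel(rgb, identity))
    print('averaged :', laplace_bits_per_pixel(rgb, averaged))
```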
  • Jyrki Alakuijala's Avatar
    Yesterday, 22:51
    That's amazing. Congratulations! I didn't expect that one could improve it by more than 5 % at corpus level without going to 10x slower decoding. It will be good to learn about it. I brought quite a few techniques that I originally invented for WebP lossless into other codecs. Entropy clustering I use in brotli, brunsli, and in JPEG XL with great success -- it is a wonderful little mechanism that makes entropy codes so much more flexible. My Select predictor performed better (better compression, and faster because of fewer branches) than Paeth in JPEG XL, too. WebP's color decorrelation system through the guidance image is used in JPEG XL. Some of my inventions for WebP that didn't make it into JPEG XL include the 2d code for LZ77 distances and the color cache, but that is mostly because JPEG XL is less pixel-oriented than WebP lossless.
    170 replies | 41997 view(s)
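For reference, here is the Paeth predictor mentioned above (the PNG filter heuristic) in a few lines of Python; the Select predictor itself is not reproduced here:

```python
def paeth(left: int, above: int, upper_left: int) -> int:
    """PNG's Paeth predictor: pick the neighbour closest to left + above - upper_left."""
    p = left + above - upper_left
    pa, pb, pc = abs(p - left), abs(p - above), abs(p - upper_left)
    if pa <= pb and pa <= pc:
        return left
    if pb <= pc:
        return above
    return upper_left

# predict a pixel from its causal neighbours
print(paeth(100, 120, 90))   # -> 120 (the 'above' neighbour is closest)
```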
  • pacalovasjurijus's Avatar
    Yesterday, 22:24
    I think that we need to move the information and write the prefix code.
    45 replies | 4969 view(s)
  • Stefan Atev's Avatar
    Yesterday, 21:20
    I don't have any disagreement with the fact that the optimal solution will optimize for both perceptual and coding considerations. It just seemed from your comment that you think a sequential optimization will work - first optimize directions for coding, then optimize quantization perceptually. I think the parameters are coupled if you evaluate them with a perceptual metric, so a sequential optimization strategy seems to be a bit on the greedy side. Perhaps I misunderstood your explanation in the forum; I am reacting to the comments and have not read the paper carefully. I personally find the use of L1-PCA very rewarding: in ML, L2/Gaussians have for the longest time been used not because people think they're accurate but because they are convenient to analyze and compute with. Then people try to find exact solutions to crippled models instead of accepting sub-optimal solutions for a model that reflects reality more closely (that's the math bias of wanting convergence proofs / theoretical guarantees).
    12 replies | 516 view(s)
  • Gotty's Avatar
    Yesterday, 20:09
    A couple of introductory videos:
    https://www.youtube.com/watch?v=JsTptu56GM8 - How Computers Compress Text: Huffman Coding and Huffman Trees
    https://www.youtube.com/watch?v=TxkA5UX4kis - Compression codes | Journey into information theory | Computer Science | Khan Academy
    https://www.youtube.com/watch?v=Lto-ajuqW3w - Compression - Computerphile
    3 replies | 153 view(s)
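As a companion to the videos above, here is a toy Huffman-tree builder in Python, just to make the idea concrete (not an efficient or canonical implementation):

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict:
    """Build a Huffman code for the characters of `text` and return {char: bits}."""
    freq = Counter(text)
    # heap items are (weight, tiebreak, tree); a tree is either a char or (left, right)
    heap = [(w, i, ch) for i, (ch, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    if len(heap) == 1:                      # degenerate single-symbol input
        return {heap[0][2]: '0'}
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, tiebreak, (t1, t2)))
        tiebreak += 1
    codes = {}
    def walk(tree, prefix=''):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + '0')
            walk(tree[1], prefix + '1')
        else:
            codes[tree] = prefix
    walk(heap[0][2])
    return codes

print(huffman_codes('abracadabra'))
```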
  • Jarek's Avatar
    Yesterday, 14:02
    http://mattmahoney.net/dc/dce.html
    3 replies | 153 view(s)
  • randomthing's Avatar
    Yesterday, 13:56
    Hello, please help me with some details about how to start practicing data compression. I mean, what do I need in order to learn data compression - what type of math, theory, books, articles, etc. do I need to get started? And the most important thing: what type of math do I need to know before starting data compression? Thank you, have a good day.
    3 replies | 153 view(s)
  • Dresdenboy's Avatar
    Yesterday, 13:02
    @Stefan:
    1. This sounds interesting, but might not work. I put len-1 matches into my list because several good LZ77-family compressors use them (with 4-bit offsets) to avoid the longer encoding of literals when no longer match is available. You might also consider literal runs (early in a file) vs. match runs (1-n matches following). You might also check out the literal escaping mechanism of pucruncher.
    2. Fibonacci sum calculation is a few instructions. But it quickly adds up. :) Gamma is cheap instead. Bit-oriented encoding is also cheap in asm (the widely seen get_bit subroutines with a shift register, which is also refilled with bytes, or BT instructions).
    41 replies | 2784 view(s)
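For reference, the gamma code mentioned above (Elias gamma: a unary length prefix followed by the value's own bits), sketched in Python rather than assembly:

```python
def elias_gamma(n: int) -> str:
    """Elias gamma code for n >= 1: k zeros then the (k+1)-bit binary of n, k = floor(log2 n)."""
    assert n >= 1
    k = n.bit_length() - 1
    return '0' * k + format(n, 'b')

for n in (1, 2, 3, 4, 17):
    print(n, elias_gamma(n))
# 1 -> '1', 2 -> '010', 3 -> '011', 4 -> '00100', 17 -> '000010001'
```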
  • Sportman's Avatar
    Yesterday, 12:48
    Yes, Kwe uses the separator together with unused bytes to navigate.
    2 replies | 159 view(s)
  • xezz's Avatar
    Yesterday, 12:13
    Must the separator exist in the file?
    2 replies | 159 view(s)
  • Jarek's Avatar
    Yesterday, 10:40
    For data compression applications, aligning them with a rotation gives a few percent size reduction - thanks to better agreement with 3 independent Laplace distributions (there is a small dependence which can be included in the width prediction to get an additional ~0.3 bits/pixel reduction). Agreement with the assumed distribution is crucial for both lossy and lossless; log-likelihood, e.g. from ML estimation, is literally the savings, e.g. in bits/pixel, and for disagreement we pay Kullback-Leibler bits/pixel.
    My point is that we should use both simultaneously: optimize according to perceptual criteria, and also this "dataset axis alignment" for agreement with the assumed distribution ... while it seems the currently used transforms are optimized only for perceptual criteria (?)
    To optimize for both simultaneously, the basic approach is just to choose three different quantization coefficients for the three axes, which is nearly optimal from the bits/pixel perspective (as explained in the previous post). But maybe it would also be worth rotating these axes for perceptual optimization? That would need a formalization of such an evaluation ...
    Another question is the orthogonality of such a transform - should the three axes be orthogonal? While it seems natural from the ML perspective (e.g. PCA), it is not the case for YCrCb or YCoCg. But again, to optimize a non-orthogonal transform we need some formalization of the perceptual evaluation ... To formalize such an evaluation, we could use a distortion metric with weights in perceptually chosen coordinates ...
    12 replies | 516 view(s)
  • Stefan Atev's Avatar
    Yesterday, 00:12
    I am not sure that's the case - it can happen that the directions PCA picked are not well-aligned with "perceptual importance" directions, so to maintain perceptual quality you need good precision in all 3 quantized values; as an example, if the directions have the same weight for green, you may be forced to spend more bits on all three of them; or if the directions end up being equal angles apart from luma - same situation. I think for lossless it doesn't matter, because your loss according to any metric will be zero - the difficulty is in having an integer transform that's invertible.
    12 replies | 516 view(s)
  • Sportman's Avatar
    9th July 2020, 23:55
    Kwe version 0.0.0.1 - keyword encoder
    Kwe encodes keywords:
    kwe e input output
    kwe d input output
    There are 4 optional options (they must be used all at once):
    - keyword separator (default 32 = space)
    - minimal keyword length (default 2 = min.)
    - maximal keyword length (default 255 = max.)
    - keyword depth (default 255 = max.)
    Command-line version; works with .NET Framework 4.8. A very simple first version: there is no error handling and it is not well tested. Input can be any text file. Output can be compressed with another archiver.
    2 replies | 159 view(s)
  • skal's Avatar
    9th July 2020, 21:55
    No, WebP v2 is a redesign around the known limitations of WebP-lossless v1. It's currently 10-15% better than WebP v1, depending on the source (with consistent speed). It's also targeting alpha plane compression specifically, with separate tools from lossless-ARGB.
    170 replies | 41997 view(s)
  • Darek's Avatar
    9th July 2020, 21:54
    Darek replied to a thread Paq8sk in Data Compression
    Nope, paq8sk32 -x15 got a worse score for enwik8 than paq8sk28, which got 122'398'396 for enwik9 with -x15 -w -e1,English.dic. My estimate is about 122'5xx'xxx.
    142 replies | 11304 view(s)
  • msaidov's Avatar
    9th July 2020, 18:39
    Hello! At the beginning of this thread there was an error when making the executable file in Ubuntu:
    /usr/bin/ld: libnc.a(libnc.o): relocation R_X86_64_32S against `.rodata' can not be used when making a PIE object; recompile with -fPIE
    /usr/bin/ld: libnc.a(job.o): relocation R_X86_64_32 against `.text' can not be used when making a PIE object; recompile with -fPIE
    collect2: error: ld returned 1 exit status
    make: *** Error 1
    The point is that we get an already compiled libnc.a file, and there is no way to add the -fPIE flag to it. Could you give a hint on how to build the project properly on Ubuntu? If there was already a response in this thread and I missed it, I'd appreciate a pointer to it. Thank you.
    134 replies | 22088 view(s)
  • zubzer0's Avatar
    9th July 2020, 18:11
    Jyrki Alakuijala, there's another very interesting and possibly promising approach - HIGIC: High-Fidelity Generative Image Compression. "NextGen" image compression? (Maybe video in the future?) Although maybe someone will do a neural network to restore JPEG :) joke... or... the "old man" will live forever(!), constantly gaining new (re)compression / (re)processing / (re)restoration methods :D
    9 replies | 639 view(s)
  • Jyrki Alakuijala's Avatar
    9th July 2020, 17:06
    JPEG XL is taking the crown on visually lossless. According to our analysis with human viewers, we get a 50 % improvement in visually lossless and a lower standard deviation in bitrates than what is necessary for traditional JPEG. See figure 4 in https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11137/111370K/JPEG-XL-next-generation-image-compression-architecture-and-coding-tools/10.1117/12.2529237.full?SSO=1
    9 replies | 639 view(s)
  • zubzer0's Avatar
    9th July 2020, 15:47
    Jyrki Alakuijala: This means comparing with the original on a still frame and not seeing differences without using a magnifying glass or mathematical analysis (butteraugli). This applies to any format. This is why "jpeg" is still popular (fast and acceptable in comparison with modern analogues that take a very long time for minimal gains in "visually lossless"), and if you then compress it with, for example, "lepton-fast", it becomes the overall leader: fast, good quality, minimal size ...
    9 replies | 639 view(s)
  • Jyrki Alakuijala's Avatar
    9th July 2020, 15:38
    Visually lossless can mean very different things at 720p, 1080p, 4k, and 8k
    9 replies | 639 view(s)
  • zubzer0's Avatar
    9th July 2020, 15:22
    If compression touches the "visually lossless"* category for live video (fine detail, mosquito noise, barely noticeable textures on flat surfaces), then HEVC (represented by x265) has to disable some of its innovations, such as SAO, and use veryslow/placebo settings so as not to lose face against x264 and merely be on a par with it. I wonder how h266 will do in this regard. Since it is based on HEVC, will it inherit this as well? * example: x264 crf 16-18 or JPEG ~Q90
    9 replies | 639 view(s)
  • Jarek's Avatar
    9th July 2020, 15:00
    While I have focused on lossless, adding lossy in practice usually (as PVQ failed) means just uniform quantization with a lattice of size 1/Q. The entropy of a width-b Laplace distribution ( https://en.wikipedia.org/wiki/Laplace_distribution ) is lg(2be) bits. The ML estimator of b is the mean of |x|, leading to the choice of transform O as minimizing the entropy estimate e(x) = sum_{d=1..3} lg(sum_i |x_id|), whose optimization among rotations indeed leads to points nicely aligned along the axes (plots above) - which can be approximated as 3 Laplace distributions.
    For lossy, uniform quantization with step 1/Q leads to entropy ~ lg(2be) + lg(Q) bits. So to the above e(x) entropy evaluation we just need to add lg(Q1) + lg(Q2) + lg(Q3). Hence we should still choose the transform/rotation optimizing e(x), which is similar to PCA ... and only choose the quantization constants Q1, Q2, Q3 according to perceptual evaluation.
    Anyway, such a choice shouldn't be made just from human vision analysis, but also from analysis of a data sample - I believe I could easily get lower bits/pixel with such optimization also for lossy.
    12 replies | 516 view(s)
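A quick numerical sanity check of the lg(2be) + lg(Q) relation above, on synthetic Laplace samples (illustration only; the values of b and Q are arbitrary):

```python
import numpy as np

def empirical_entropy_bits(symbols: np.ndarray) -> float:
    """Shannon entropy (bits/sample) of a discrete sample."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
b, Q = 4.0, 2.0                       # Laplace scale and inverse lattice spacing (step = 1/Q)
x = rng.laplace(scale=b, size=200_000)
quantized = np.round(x * Q)           # uniform quantization with step 1/Q

print('formula lg(2be) + lg(Q):', np.log2(2 * np.e * b) + np.log2(Q))
print('measured entropy       :', empirical_entropy_bits(quantized))
```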
  • pacalovasjurijus's Avatar
    21 replies | 556 view(s)
  • anasalzuvix's Avatar
    9th July 2020, 13:41
    Hi, thanks for your response. And sorry, it's my bad that I can't state the question more clearly. Thanks once again.
    2 replies | 268 view(s)
  • Shelwien's Avatar
    9th July 2020, 11:08
    Shelwien replied to a thread Fp8sk in Data Compression
    Please don't post these in the GDC discussion thread - it's for contest discussion, not for benchmarks. Also, fp8 doesn't seem fast enough even for the slowest category anyway; it would have to be more than twice as fast to fit.
    27 replies | 893 view(s)
  • suryakandau@yahoo.co.id's Avatar
    9th July 2020, 10:13
    I wonder if paq8sk32 with -x15 -w -e1,English.dic on enwik9 could get 121.xxx.xxx
    142 replies | 11304 view(s)
  • zyzzle's Avatar
    9th July 2020, 05:12
    I'm very leery of this new codec. There comes a point where too much data is thrown away. "50% less bitrate for comparable quality". I don't believe it. You're in the lossy domain. The extra 50% reduction in bitrate over h.265 -- and probably ~75% reduction over h.264 codecs means extreme processor power is required, with very high heat and much greater energy cost to support the h.266 VVC codec. Even h.265 requires very high processor loads. I do not want to "throw away 50%" more bitrate. I'd rather increase bitrate by 100% to achieve much higher quality (less lossiness).
    9 replies | 639 view(s)
  • suryakandau@yahoo.co.id's Avatar
    9th July 2020, 04:03
    Using the fp8sk9 -8 option on mixed40.dat: Total 400000000 bytes compressed to 46246446 bytes. Using the fp8sk9 -8 option on block40.dat: Total 399998976 bytes compressed to 61042053 bytes.
    27 replies | 893 view(s)
  • suryakandau@yahoo.co.id's Avatar
    9th July 2020, 02:56
    Using the -8 option on mixed40.dat: Total 400000000 bytes compressed to 46246446 bytes. Using the -8 option on block40.dat: Total 399998976 bytes compressed to 61042053 bytes. v6-v8 are failed products.
    27 replies | 893 view(s)
  • OverLord115's Avatar
    9th July 2020, 02:26
    OverLord115 replied to a thread repack .psarc in Data Compression
    @pklat, so I'm assuming that you could extract the .psarc file from Resistance Fall of Man called game.psarc. Sorry I can't make any contribution to the post, but can I ask how you extracted it? Because no matter how much I searched for programs and so on, I can't extract it or even see anything useful in notepad++ - errors everywhere.
    5 replies | 1117 view(s)
  • moisesmcardona's Avatar
    9th July 2020, 01:58
    moisesmcardona replied to a thread Fp8sk in Data Compression
    Where are v6 to v8?
    27 replies | 893 view(s)
  • suryakandau@yahoo.co.id's Avatar
    9th July 2020, 01:42
    Fp8sk9 using the -5 option on images40.dat (GDC competition public test set files): Total 399892087 bytes compressed to 163761538 bytes. Here are the source code and the binary files.
    27 replies | 893 view(s)
  • moisesmcardona's Avatar
    8th July 2020, 18:01
    These stronger codecs will require newer hardware. HEVC is supported because the hardware has been ready for some time now, but these new codecs will require the ASIC chips to be updated to include them, so I imagine they will not be mainstream until Intel, Nvidia, AMD, Qualcomm, MediaTek, etc. integrate them and the world has shifted to the new hardware. Surprisingly, AV1 decoding works really well compared to when HEVC started. The guys who work on dav1d have done an excellent job. Encoding, however, is still slow in its current state until we start seeing broader hardware encoder support. Not to mention there is still a lot of tuning going on in the encoders.
    9 replies | 639 view(s)
  • Jon Sneyers's Avatar
    8th July 2020, 03:11
    YCbCr is just a relic of analog color TV, which used to do something like that, and somehow we kept doing it when going from analog to digital. It's based on the constraints of backwards compatibility with the analog black & white television hardware of the 1940s and 1950s (as well as allowing color TVs to correctly show existing black & white broadcasts, which meant that missing chroma channels had to imply grayscale); things like chroma subsampling are a relic of the limited available bandwidth for chroma, since the broadcasting frequency bands were already assigned and they didn't really have much room for chroma.
    YCbCr is not suitable for lossless for the obvious reason that it is not reversible: converting 8-bit RGB to 8-bit YCbCr brings you from 16 million different colors to 4 million different colors. Basically two bits are lost. Roughly speaking, it does little more than convert 8-bit RGB to 7-bit R, 8-bit G, 7-bit B, in a clumsy way that doesn't allow you to restore G exactly. Of course the luma-chroma-chroma aspect of YCbCr does help for channel decorrelation, but still, it's mostly the bit depth reduction that helps compression. It's somewhat perceptual (R and B are "less important" than G), but only in a rather crude way.
    Reversible color transforms in integer arithmetic have to be defined carefully - multiplying with some floating point matrix is not going to work. YCoCg is an example of what you can do while staying reversible. You can do some variants of that, but that's about it. Getting some amount of channel decorrelation is the only thing that matters for lossless – perceptual considerations are irrelevant since lossless is lossless anyway.
    For lossy compression, things of course don't need to be reversible, and decorrelation is still the goal, but perceptual considerations also apply: basically you want any compression artifacts (e.g. due to DCT coefficient quantization) to be maximally uniform perceptually – i.e. the color space itself should be maximally uniform perceptually (after the color transform, the same distance in terms of color coordinates should result in the same perceptual distance in terms of similarity of the corresponding colors). YCbCr applied to sRGB is not very good at that: e.g. all the color banding artifacts you see in dark gradients, especially in video codecs, are caused by the lack of perceptual uniformity of that color space.
    XYB is afaik the first serious attempt to use an actual perceptually motivated color space based on recent research in an image codec. It leads to bigger errors if you naively measure errors in terms of RGB PSNR (or YCbCr SSIM for that matter), but less noticeable artifacts.
    12 replies | 516 view(s)
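To make the "reversible integer transform" point above concrete, here is the standard lifting form of YCoCg (often called YCoCg-R) as a small Python sketch; the round-trip assertions show exact invertibility:

```python
def rgb_to_ycocg_r(r: int, g: int, b: int):
    """Lossless YCoCg-R forward transform (integer lifting steps)."""
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y: int, co: int, cg: int):
    """Exact inverse of the lifting steps above."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

# round-trip check over a few colors
for rgb in [(0, 0, 0), (255, 0, 0), (12, 200, 99), (255, 255, 255)]:
    assert ycocg_r_to_rgb(*rgb_to_ycocg_r(*rgb)) == rgb
print('YCoCg-R round-trip OK')
```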
  • Jyrki Alakuijala's Avatar
    8th July 2020, 01:33
    I'm guilty of not having published it in any form other than three open-sourced implementations (butteraugli, pik and JPEG XL). Most color work is based on research from a hundred years ago: https://en.wikipedia.org/wiki/CIE_1931_color_space possibly with slight differences. It initially concerned roughly 10 degree discs of constant color and isosurfaces of the color perception experience. An improvement was delivered later with 2 degree discs. However, pixels are about 100x smaller than that, and using CIE 1931 is like modeling mice after knowing elephants very well. With butteraugli development I looked into the anatomy of the eye and considered where the information is in photographic images.
    1. The anatomy of the eye in the fovea is only bichromatic, L- and M-receptors only; the S-receptors are big and sit only around the fovea. This makes sense since S-receptors scatter more. Anatomic information is more reliable than physiological information.
    2. Most of the color information stored in a photograph is in the high frequency information. At photograph quality one can consider that more than 95 % of the information is in the 0.02 degree data rather than the 2 degree data. The anatomic knowledge about the eye and our own psychovisual testing suggest that the eye is scale dependent, and this invalidates the use of CIE 1931 for modeling colors for image compression. We cannot use a large scale color model to model the fine scale, and the fine scale is all that matters for modern image compression.
    3. Signal compression (gamma compression) happens in the photoreceptors (cones). It happens after (or at) the process where spectral sensitivity influences the conversion of light into electricity. To model this, we first need to model the L, M, and S spectral sensitivities in a linear (RGB) color space and then apply a non-linearity. Applying the gamma compression to dimensions other than the L, M and S spectral sensitivities will lead to warped color spaces and has no mathematical possibility of getting the perception of colors right.
    12 replies | 516 view(s)
  • Gotty's Avatar
    7th July 2020, 23:02
    Gotty replied to a thread ARM vs x64 in The Off-Topic Lounge
    Youtube: Apple ARM Processor vs Intel x86 Performance and Power Efficiency - Is the MAC Doomed?
    Youtube: Why you SHOULD Buy an Intel-Based Mac! (ARM can WAIT)
    12 replies | 707 view(s)
  • Shelwien's Avatar
    7th July 2020, 19:58
    You have to clarify the question. The best LZ78 is probably GLZA: http://mattmahoney.net/dc/text.html#1638 But it's also possible to use any compression algorithm with external dictionary preprocessing - https://github.com/inikep/XWRT - so NNCP is technically also "dictionary based compression".
    2 replies | 268 view(s)
  • Shelwien's Avatar
    7th July 2020, 19:50
    https://en.wikipedia.org/wiki/YCbCr#Rationale But of course there's also an arbitrary tradeoff between precise perceptual colorspaces and practical ones, because better perceptual representations are nonlinear and have higher precision, so compression based on these would be too slow. For example, see https://en.wikipedia.org/wiki/ICC_profile
    12 replies | 516 view(s)
  • compgt's Avatar
    7th July 2020, 19:01
    I don't know Lil Wayne or Samwell. But I recall it was Ellen DeGeneres who wanted Justin Bieber made. Miley Cyrus is good.
    29 replies | 1449 view(s)
  • DZgas's Avatar
    7th July 2020, 18:25
    DZgas replied to a thread Fp8sk in Data Compression
    Of course! But if you compress it 0.5% more strongly it will be at the level of paq8pxd (default type).
    27 replies | 893 view(s)
  • suryakandau@yahoo.co.id's Avatar
    7th July 2020, 18:11
    Maybe this is the trade-off between compression ratio and speed.
    27 replies | 893 view(s)
  • DZgas's Avatar
    7th July 2020, 17:47
    DZgas replied to a thread Fp8sk in Data Compression
    Hm...Compresses 0.4% stronger and 65% slower (compared to fp8v6). Good!
    27 replies | 893 view(s)
  • DZgas's Avatar
    7th July 2020, 17:17
    A non-free codec. But yes - "I wonder how it compares to AV1". HEVC has only just begun to be supported by almost everyone. The future belongs to AV1, which is free and supported by all the large companies... but not sooner than ~2022, even with the large companies' support. Whatever happens to VVC, the Internet is still AVC. So most likely this codec is for everything except the Internet.
    9 replies | 639 view(s)
  • Stefan Atev's Avatar
    7th July 2020, 16:10
    I think the chief criterion in a lot of color spaces is perceptual uniformity - so changes in either component are perceived as roughly similar changes in color difference; that way, when you minimize loss in the encoded color, you are indirectly using a perceptual heuristic. Something like CIELAB, or the computer vision/graphics darling of yore HSV, etc., are more perceptually uniform than RGB. For compression, it maybe makes more sense to use decompositions where one component is much more perceptually important (and has to be coded more precisely) and the other components are less perceptually important. For lossless, I think you wouldn't care about any of these considerations; you simply want to decorrelate the color channels (so one plane is expensive to encode but the others are cheap). I think for lossless, tailoring the color transform is probably good for compression - storing the transform coefficients is not that expensive, so even small improvements in the coding can help; how you'd optimize the transform for compressibility is a different story. It seems to me that if it was easy to do efficiently, everyone would be doing it.
    12 replies | 516 view(s)
  • JamesWasil's Avatar
    7th July 2020, 14:35
    But why did you have to give us Justin Bieber, Lil Wayne, Samwell, and Miley Cyrus? Couldn't you have skipped these? What did we do to deserve this?
    29 replies | 1449 view(s)
  • lz77's Avatar
    7th July 2020, 14:35
    lz77 replied to a thread enwik10 in Download Area
    WOW, I will increase input buffer up to 10000000000 bytes in my compressors. :_superman2:
    1 replies | 292 view(s)
  • Jarek's Avatar
    7th July 2020, 14:25
    Kaw, indeed YCrCb is motivated by lossy; that is probably why it is so terrible for lossless. So the question is how exactly the lossy transforms are chosen - what optimization criteria are used? algorithm, at least orthonormal transforms can also be used for lossless: from the accuracy/quantization perspective we get a rotated lattice, and we can usually translate uniquely from one lattice to the rotated one. However, by putting already decoded channels into the context for predicting the remaining channels, we can get similar dependencies - I have just tested it, getting only ~0.1-0.2 bits/pixel worsening compared to optimally chosen rotations.
    12 replies | 516 view(s)
  • algorithm's Avatar
    7th July 2020, 13:52
    For lossless you need simple transforms like G, R-G, B-G because they add less noise than more complicated ones.
    12 replies | 516 view(s)
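The subtract-green transform mentioned above, written out as a tiny (and trivially reversible) Python sketch:

```python
def subtract_green(r: int, g: int, b: int):
    return g, r - g, b - g           # forward: (G, R-G, B-G)

def add_green(g: int, rg: int, bg: int):
    return rg + g, g, bg + g         # inverse: back to (R, G, B)

assert add_green(*subtract_green(10, 250, 3)) == (10, 250, 3)
```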
  • Kaw's Avatar
    7th July 2020, 13:18
    Maybe this reply will bite me in the butt, but in the past I have read that YCrCb was better for lossy compression because the Y layer has most of the information, and Cb and Cr can be compressed more loosely because they are less important to the eye for the overall quality of the image. Also, Cb and Cr are often somewhat related to each other. In audio we see the same idea when stereo is not treated as 2 totally different streams, but as one (mono) stream and a second part indicating the delta between the 2 streams. Edit: sorry, you are much, much more advanced in your analysis of the problem already. I should have read it better.
    12 replies | 516 view(s)
  • maadjordan's Avatar
    7th July 2020, 13:16
    maadjordan replied to a thread 7-zip plugins in Data Compression
    Many standalone converters already exist, but adding such a trick to a compressor would improve compression for special algorithms like mfilter. Base64 is standard and can be added directly, as such data is usually stored in blocks. I found these tools, https://github.com/hiddenillusion/NoMoreXOR and https://github.com/hellman/xortool, which can be used to analyze data, either as a file or a stream, before sending it to the compressor.
    20 replies | 3595 view(s)
  • anasalzuvix's Avatar
    7th July 2020, 12:53
    Hey there, please suggest a good dictionary-based compression program. Thank you.
    2 replies | 268 view(s)
  • compgt's Avatar
    7th July 2020, 12:32
    Hollywood music and movies are mine. They're my work, my production. I chose the Hollywood actors and actresses; I made them. I am the major song composer of this world. I was the one naming business companies, products and technologies because I was very good at composing songs; they figured I might find the most appropriate and best sounding names. https://grtamayoblog.blogspot.com/2019/02/voice-search.html?m=1 Warner Bros Pictures and Warner Music are mine. Warner Music is supposed to be one giant music company enveloping or owning other music companies, me being prolific in composing songs since I was 2 years old in 1977. We started up the other music companies too. I made the Hollywood singers and bands, by composing for them and planning their music careers, making music albums for them for release in the 1980s, 90s and onwards. I made Madonna, Jennifer Lopez, Celine Dion, ABBA, Air Supply, Michael Jackson, Beatles, Queen, Scorpions, Nirvana, Spice Girls, Steps, Michael Learns to Rock, Westlife, Pussycat Dolls, No Doubt, M2M, Natalie Imbruglia (Torn), Barry Manilow, even Kenny Rogers' country hit songs, Robbie Williams, Rod Stewart, Mariah Carey, Geri Halliwell, Ricky Martin, Whitney Houston, Britney Spears, Cristina Aguilera, Lady Gaga, Pink, Taylor Swift, Miley Cyrus, Bee Gees, Elton John, Bon Jovi, Aerosmith, etc.
    29 replies | 1449 view(s)
  • anasalzuvix's Avatar
    7th July 2020, 11:33
    Hi, any update?
    29 replies | 6580 view(s)
  • compgt's Avatar
    7th July 2020, 11:03
    It's not a fairy tale. Our lives were at stake. It was real War. I made the Star Wars movies not only for your entertainment! But for my money too!!! Don't get intimidated when I say I made modern Russia. That's the truth. In fact, I learned some nuclear weapons and nuclear power technologies from Russia, and France, and helped them on these technologies too.
    29 replies | 1449 view(s)
  • ivan2k2's Avatar
    7th July 2020, 10:32
    You keep telling this fairy tale over and over, but nobody believes it here. And, as I said earlier, you'll never start this court case, because you don't have any proof of your claims, so why bother? I've got a picture of what you are. I'm done. P.S. I hope you'll restore your mind.
    29 replies | 1449 view(s)
  • compgt's Avatar
    7th July 2020, 09:29
    There are many who practice Religion. It soothes the soul. If you meet a Warlord in actual War, when he overpowers you and your armies, and diminishes your numbers, and he professes to be a god, out of your fear for your life you will exclaim he is your god!
    29 replies | 1449 view(s)
  • compgt's Avatar
    7th July 2020, 09:15
    ivan2k2, you sound young and inexperienced. A smart man will not mention about insanity, especially in public online forums. I am not used to that, somebody mentioning "insane" and "mental health". It saddens me. Too brave of you to even mention that. Corrupt, hardcore people (corrupt military and politicians) will set you up, will put you in a mental institution because you are simply in their way. They don't care about you, about your life. They have no ethics for that. They are paid big money for that. (I reckon they had been corrupting and partaking of my Hollywood billion$ up to now since the 1980s.) Let me tell you this. We were a military superpower. We were Starfleet. I presided and dictated over the United Nations, US Supreme Court, and International Criminal Court. I made the modern Nations to end the Cold War. I made the modern US and modern Russia. Politicians lined up to me to be chosen for the US presidency. I chose who would be US (and Philippine) presidents. I chose who would be Vatican's Popes. Why? One, they would be using our (the Tamayo clan's) wealth. They will be funded by our wealth. They have access to our wealth. But, are we holding our wealth? No. What kind of scheme is that?
    29 replies | 1449 view(s)
  • Jarek's Avatar
    7th July 2020, 05:48
    I have just updated the context dependence for upscaling paper ( https://arxiv.org/abs/2004.03391 : large improvement opportunities over the FUIF/JXL "no predictor" case) with the RGB case, and the standard YCrCb transform has turned out quite terrible (also PCA):
    - orthogonal transforms individually optimized (~L1-PCA) for each image gave on average ~6.3% reduction versus YCrCb,
    - the single orthogonal transform below, optimized for all images, gave a mean ~4.6% reduction:
    0.515424, 0.628419, 0.582604
    -0.806125, 0.124939, 0.578406
    0.290691, -0.767776, 0.570980
    The transform should align values along the axes so that they can be approximated with three independent Laplace distributions (observed dependencies can be included in the considered width prediction), but YCrCb is far from that (at least for upscaling).
    How exactly are color transforms chosen/optimized? I have optimized for lossless, while YCrCb is said to be perceptual, optimized for lossy - any chance to formalize the problem here? (That would e.g. allow optimizing it separately for various region types.) YCrCb is not orthogonal – is there an advantage in considering non-orthogonal color transforms?
    Ps. I wasn't able to find details about XYB from JPEG XL (?)
    12 replies | 516 view(s)
  • suryakandau@yahoo.co.id's Avatar
    7th July 2020, 03:18
    Here is the source code.
    27 replies | 893 view(s)
  • suryakandau@yahoo.co.id's Avatar
    7th July 2020, 02:45
    Fp8sk5 - improved compression ratio, better than fp8sk3 and fp8sk5. Using the -8 option on mixed40.dat (GDCC public test set files): Total 400000000 bytes compressed to 46612718 bytes. Using the -8 option on block40.dat (GDCC public test set file): Total 399998976 bytes compressed to 61239095 bytes.
    27 replies | 893 view(s)
  • suryakandau@yahoo.co.id's Avatar
    7th July 2020, 01:35
    Thanx for your input. I will try it
    27 replies | 893 view(s)
  • moisesmcardona's Avatar
    7th July 2020, 00:24
    moisesmcardona replied to a thread Fp8sk in Data Compression
    Thanks. May I ask why you don't use a Source Control repository? Something like GitHub or Gitlab can be really convenient. We do this with paq8px and @kaitz does it with paq8pxd and paq8pxv.
    27 replies | 893 view(s)
  • Gotty's Avatar
    7th July 2020, 00:24
    Aha, I see. Yes, he will certainly need to find a balance between price and video quality + durability. He seems to be unsatisfied with the video quality of the Xiaomi 8T (it's not that bad, is it? #1 #2) ... and is thinking about a rugged design. To satisfy those needs it won't be cheap, clearly. >>I guess maybe a phone would cost $200-400. Yes, that's probably the range. The Xiaomi Redmi Note 8T is 169 USD here. It's priced extremely well. It has 4K video, a 4K screen, Gorilla Glass. It has 4 cameras at the back. That's why I suggested it. What's your budget, CompressMaster? Blackview phones aren't cheap. Are you sure you need a rugged design? A normal case usually does a good job. It protects the phone when accidentally dropped - especially the lenses and the screen need protection. When it has Gorilla Glass it's even more solid. It protects from scratches. I believe you are afraid of a screen break. So go for a case. A rugged design is "too much" in my view.
    6 replies | 144 view(s)