Activity Stream

  • suryakandau@yahoo.co.id's Avatar
    Today, 02:22
    In Windows, you can use the drag-and-drop function to compress or decompress files.
    36 replies | 2086 view(s)
  • Shelwien's Avatar
    Today, 02:13
    https://youtu.be/1kQUXpZpLXI?t=784
    0 replies | 11 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Today, 02:08
    Fp8sk17 - improved 24-bit image compression ratio on astro-01.pnm (GDCC test file):
    fp8sk16: Total 8997713 bytes compressed to 4612625 bytes in 103.97 sec
    fp8sk17: Total 8997713 bytes compressed to 4509505 bytes in 106.42 sec
    36 replies | 2086 view(s)
  • Shelwien's Avatar
    Yesterday, 23:42
    Shelwien replied to a thread zpaq updates in Data Compression
    Google complained, had to remove attachment from this post: https://www.virustotal.com/gui/file/d67b227c8ae3ea05ea559f709a088d76c24e024cfc050b5e97ce77802769212c/details Also it does seem to have some inconvenient things, like accessing "https://www.francocorbelli.com:443/konta.php?licenza=PAKKA&versione=PAKKA.DEMO".
    2551 replies | 1105123 view(s)
  • Shelwien's Avatar
    Yesterday, 19:29
    http://imagecompression.info/test_images/ http://imagecompression.info/gralic/LPCB-data.html
    2 replies | 122 view(s)
  • Adreitz's Avatar
    Yesterday, 18:53
    Hello. First post here. I've been lurking for a long time due to my enthusiast interest in lossless image compression, starting with PNG, then WebP, and now I'm playing with JPEG XL. I created my account because I don't understand something fundamental about the use of the JPEG XL lossless compressor, so this question is for either Jyrki or Jon.
    I've been experimenting with Jamaika's build of release 6b5144cb of cjpegXL with the aim of maximum lossless compression. (I tried building it myself with Visual Studio 2019 on Windows 10, but was unsuccessful, as it couldn't find a function that apparently should be built in. I don't know enough about programming, Visual Studio, or Windows to figure it out.)
    The issue I'm encountering is that, for some images, fewer flags are better, and I don't understand why. Take, for instance, the "artificial" image from the 8-bit RGB corpus at http://imagecompression.info/test_images/. Using the specified release of cjpegXL above, I reach a compressed file size of 808110 bytes by simply running cjpegxl.exe -q 100 -s 9. However, when I brute-force all combinations of color type and predictor to find the optimum (a sketch of this kind of search follows this entry), the best I can get is cjpegxl.exe -q 100 -s 9 -C 12 -P 5, which outputs a file of 881747 bytes.
    I figured I must be missing something, so I tried experimenting with all of the other documented command-line flags, but didn't get any improvement. So then I went searching and ended up finding five undocumented flags: -E, -I, -c, -X, and -Y. I don't know what they do beyond affecting the resulting file size, and my only knowledge of their arguments is experimental. My best guess is the following:
    E -- 0, 1, or 2
    I -- 0 or 1
    c -- 0, 1, or 2 (but 2 produces invalid files)
    X -- takes any positive integer, but only seems to vary the output from approx. 85 to 100 (with many file size repeats in this range)
    Y -- similar to X, but ranging from approx. 65 to 100.
    I also discovered that -C accepts values up to 75 (though most, but not all, arguments above 37 produce invalid output) and that -P also accepts 8 and 9 as arguments (which produce valid output and distinct file sizes compared to all documented predictors, and are even better than the defined predictors for certain files). Even with all of this, though, my best valid result from tweaking all of the flags I could access is 830368 bytes from cjpegXL -q 100 -s 9 -C 19 -P 5 -E 2 -I 1 -c 1 -X 99 -Y 97, which is still 21 KB larger than when I simply use -q 100 -s 9.
    So, what's going on here? From using libpng and cwebp, I am used to compressors that use heuristics to set default values for flags when they are not specified by the user (so you get a compression benefit if you spend the effort to manually find the best settings). But that doesn't seem to be the case with cjpegXL. What am I missing? Also, it would be great if you could provide an official description of what the undocumented flags do and what arguments they take.
    Thanks, Aaron
    133 replies | 9660 view(s)
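    A minimal sketch of the kind of flag search described in the post above. It assumes a cjpegxl.exe on the PATH whose -q/-s/-C/-P flags behave as the post reports (the ranges come from the post's experiments, not from official documentation) and a placeholder input file name; validity is only checked via the exit code and the presence of an output file, whereas a real search should also round-trip decode each result.
    // Sketch: brute-force the -C (colour type) and -P (predictor) combinations
    // reported in the post and keep the smallest output. Flag ranges follow the
    // post, not official documentation; "input.ppm" is a placeholder file name.
    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>
    #include <filesystem>
    #include <string>

    int main() {
        namespace fs = std::filesystem;
        const std::string input = "input.ppm";        // hypothetical input image
        uintmax_t best_size = UINTMAX_MAX;
        std::string best_cmd;
        for (int C = 0; C <= 37; ++C) {               // post: -C above 37 is mostly invalid
            for (int P = 0; P <= 9; ++P) {            // post: -P accepts 0..9
                const std::string out = "out_C" + std::to_string(C) +
                                        "_P" + std::to_string(P) + ".jxl";
                const std::string cmd = "cjpegxl.exe -q 100 -s 9 -C " + std::to_string(C) +
                                        " -P " + std::to_string(P) + " " + input + " " + out;
                if (std::system(cmd.c_str()) != 0 || !fs::exists(out)) continue;
                const uintmax_t size = fs::file_size(out);
                if (size < best_size) { best_size = size; best_cmd = cmd; }
            }
        }
        std::printf("best: %ju bytes via: %s\n", best_size, best_cmd.c_str());
        return 0;
    }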
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 16:35
    I mean the Lossless Photo Compression Benchmark...
    2 replies | 122 view(s)
  • Sportman's Avatar
    Yesterday, 12:39
    Fixed.
    110 replies | 11414 view(s)
  • Piotr Tarsa's Avatar
    Yesterday, 08:14
    Nuvia doesn't even have any timeline for when their servers will hit the market, and it seems it could take them 2+ years to do so, so they need a high IPC jump versus at least Intel Skylake. In the meantime the landscape is changing:
    - Intel released laptop Tiger Lake, which is basically laptop Ice Lake with much higher frequencies (there's a small IPC change, mostly due to beefier caches), nearly 5 GHz. This means Intel has at least figured out how to clock their 10nm parts high, but since laptop Tiger Lake is still limited to a maximum of four cores, it seems that yields are still poor.
    - Arm has prepared two new cores: V1 for HPC workloads (2 x 256-bit SIMD) and N2 for business apps (2 x 128-bit SIMD): https://fuse.wikichip.org/news/4564/arm-updates-its-neoverse-roadmap-new-bfloat16-sve-support/ https://www.anandtech.com/show/16073/arm-announces-neoverse-v1-n2 The IPC jump is quite big, but it remains to be seen when the servers will hit the market, as it previously took a long time for Neoverse N1 to become available after its announcement. At least those are SVE (Scalable Vector Extension) enabled cores (both V1 and N2), so apps can finally be optimized using a decent SIMD ISA, comparable to AVX (AVX512 probably has more features than SVE1, but SVE is automatically scalable without the need for recompilation).
    - Apple already presented the iPad Air 2020 with the Apple A14 Bionic 5nm SoC, but the promised performance increase over the A13 seems to be small. I haven't found a reliable source mentioning Apple A14 clocks, so maybe they kept them constant to reduce power draw in mobile devices like the iPad and iPhone? Right now there are people selling water-cooling cases for the iPhone (WTF?): https://a.aliexpress.com/_mtfZamJ
    - Oracle will offer ARM servers in their cloud next year: https://www.anandtech.com/show/16100/oracle-announces-upcoming-cloud-compute-instances-ice-lake-and-milan-a100-and-altra and IIRC they say they will compete on price.
    15 replies | 1213 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 01:58
    Where can I get the LPCB test files?
    2 replies | 122 view(s)
  • Jyrki Alakuijala's Avatar
    Yesterday, 01:52
    I fully agree with that. We are working on getting some improvement in this domain. We have very recently made a significant (~20 %) improvement in the modular mode by defaulting to butteraugli's XYB colorspace there, too. I'm not sure if it is published yet; it could be this week or next Monday. We are making the number of iterations of the final filtering configurable (0, 1, 2, or 3 iterations), allowing for more variety in the compromise between smoothness and artefacts.
    133 replies | 9660 view(s)
  • Raphael Canut's Avatar
    23rd September 2020, 23:43
    Wavelets have advantages and drawbacks. Personally, I find that DCT block-based codecs have precision and NHW has neatness, also due to the fact that there is no deblocking in NHW, so it's a choice. Personally, I prefer neatness to precision; for me it's visually more pleasant, but it's only my taste... The other advantage of NHW is that it's a lot (and a lot) faster to encode and decode. Cheers, Raphael
    195 replies | 22396 view(s)
  • Scope's Avatar
    23rd September 2020, 23:38
    Yes, I notice that many people compare codecs at low bpp, and if a codec is visually more acceptable there, it is also believed to be more efficient at higher bpp. Low quality is also quite in demand: especially with the spread of AVIF, people try to compress images as much as possible to reduce page size, and the accuracy of these images is not so important. According to my tests, the problem at low bpp for JPEG XL (VarDCT) is images with clear lines, line art, graphics/diagrams and the like; artifacts, distortions of these lines, or loss of sharpness become visible very quickly, and in modular mode there is noticeable pixelization and also loss of sharpness. On such images AVIF has strong advantages. It would be good if it were possible to give priority to preserving contours and lines with more aggressive filtering, or to select a preset (in WebP such presets sometimes helped).
    133 replies | 9660 view(s)
  • fabiorug's Avatar
    23rd September 2020, 23:28
    So the image can't be divided into separate parts to perform a quality measure; it won't be as efficient as JPEG XL or as good for video or for web images like design and fashion. I understood that.
    195 replies | 22396 view(s)
  • Raphael Canut's Avatar
    23rd September 2020, 23:26
    Yes, it's very difficult to perform block processing with wavelets, because wavelets are FIR filters and so have a transient response, and from my experience this transient response is quite dirty with wavelets, so it will cause noticeable artifacts at block boundaries... But I could be wrong. Yes, there is no block processing, and so the quantization parameters, for example, are the same for the whole image, hence the importance of a good psychovisual optimization. The advantage is that you don't have deblocking artifacts, for example...
    195 replies | 22396 view(s)
  • Jyrki Alakuijala's Avatar
    23rd September 2020, 23:21
    I suspect there is some more sharpness and preservation of textures. The improvements are encoder-only, so it will be possible to go back to less sharpness later. I think on Sep 07 we didn't yet have the filtering control field in use. Now we turn off filtering in flat areas, making it easier to preserve fine texture there. We control the filtering using an 8x8 grid, so we may now filter non-uniformly across a larger integral transform (such as a 32x32 DCT).
    133 replies | 9660 view(s)
  • Jyrki Alakuijala's Avatar
    23rd September 2020, 23:16
    Nooo ;-). I promise we didn't put the big technologies into it :-D More seriously, JPEG XL is a small underdog effort compared to the AOM industry coalition. We have built PIK/JPEG XL over 5.5 years, but mostly with 2-5 engineers (of course, Alex Rhatushnyak and Jon Sneyers brought their expertise, too). We tried to keep our eyes open for new approaches and produced intermediate milestones (WebP lossless, WebP delta-palettization, WebP near-lossless, ZopfliPNG, guetzli, butteraugli, brunsli, knusperli) to check that we were not lost. Alex and Jon had previously done the same with QLIC, GRALIC, SSIMULACRA, FLIF and FUIF.
    133 replies | 9660 view(s)
  • fabiorug's Avatar
    23rd September 2020, 23:14
    And how are 2.469-to-1, or 2.24 at -s 8, as distances? Are there really improvements? How is the progress these days?
    133 replies | 9660 view(s)
  • fabiorug's Avatar
    23rd September 2020, 23:11
    So a wavelet codec or a wavelet denoiser can't process pixels in blocks? If you want a pixel to have higher quality, or a different quantization, or different settings, is it not possible? Is there a setting for it, or can it not be done?
    195 replies | 22396 view(s)
  • Raphael Canut's Avatar
    23rd September 2020, 23:05
    Hi,
    > does your codecs can support qualities from 1 to 100
    For now there are 23 quality settings for NHW; I still need to code high quality and extreme compression, and with a finer compression step maybe we can reach 100 quality settings.
    > can increase quality, decrease quality from pixels, without dividing in colors like wavelet do?
    Could you explain / give us more detail about what you mean here? I don't get it for now...
    > Can you give information of wavelets and qualities from 1 to 100?
    There are many different algorithms for wavelet compression. For example, in NHW there are 2 quantizations: one applied directly to the spatial YUV pixels right after colorspace conversion, and a second quantization of the wavelet coefficients after the transform, which is a uniform scalar quantization with a deadzone (and with some refinements...; a sketch follows this entry).
    > Also, can wavelet codecs be written in Rust?
    Yes, I think so.
    > raphael canut i want to use only one codec. You may get wavelet to perform like jpeg xl.
    That's a lot of work to make a fully professional NHW codec - full-time work - and so I am searching for a sponsor for that... Cheers, Raphael
    195 replies | 22396 view(s)
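    To illustrate the second stage Raphael mentions, here is a minimal sketch of uniform scalar quantization with a deadzone, as it is commonly applied to wavelet coefficients. The step and deadzone widths are illustrative values, not NHW's actual parameters or refinements.
    // Sketch: uniform scalar quantization with a deadzone. Coefficients whose
    // magnitude falls inside the deadzone map to index 0; the rest use a
    // uniform step. Values below are illustrative only.
    #include <cmath>
    #include <cstdio>

    int quantize(double coeff, double step, double deadzone) {
        if (std::fabs(coeff) < deadzone) return 0;          // deadzone -> zero index
        return static_cast<int>(std::trunc(coeff / step));  // uniform step elsewhere
    }

    // Reconstruct roughly at the centre of the quantization bin.
    double dequantize(int index, double step) {
        if (index == 0) return 0.0;
        return (index + (index > 0 ? 0.5 : -0.5)) * step;
    }

    int main() {
        const double step = 8.0, deadzone = 12.0;           // illustrative values
        for (double c : {-25.0, -10.0, 3.0, 14.0, 40.0}) {
            const int q = quantize(c, step, deadzone);
            std::printf("% 6.1f -> %3d -> % 6.1f\n", c, q, dequantize(q, step));
        }
        return 0;
    }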
  • Jyrki Alakuijala's Avatar
    23rd September 2020, 22:55
    We are now looking into improving density at the lowest qualities. These lowest bitrates can be important for bloggers / image-compression influencers wanting to demonstrate compression artefacts, but they are not common in actual day-to-day use.
    133 replies | 9660 view(s)
  • fabiorug's Avatar
    23rd September 2020, 22:36
    Raphael Canut, I would like to use only one codec. You may get a wavelet codec to perform like JPEG XL.
    195 replies | 22396 view(s)
  • Raphael Canut's Avatar
    23rd September 2020, 22:32
    From what I've read, it's AV2 that will be based on neural networks, much more so than AV1, which nevertheless was the first to introduce neural/machine-learning processing.
    133 replies | 9660 view(s)
  • Raphael Canut's Avatar
    23rd September 2020, 22:26
    Hi Jyrki, Sounds interesting! I didn't try it, but it could be interesting to give more neatness to JPEG XL with NHW pre-processing. I even think the files would then be smaller, and furthermore NHW compression/decompression is really very fast!... Many thanks. Cheers, Raphael
    133 replies | 9660 view(s)
  • fabiorug's Avatar
    23rd September 2020, 22:13
    Hi, can your codecs support qualities from 1 to 100, or increase and decrease quality per pixel, without dividing by colors the way wavelets do? Some years ago I tested a wavelet denoiser in GIMP and it divided the pixels based on their colors, so it couldn't separate individual pixels into tiles or blocks and select a quality or strength for them. Can you give some information about wavelets and qualities from 1 to 100? Maybe that is also a different technology with no success; it's also that a video codec requires control of individual pixels - you can't just reconstruct colors from brightness (a wavelet codec isn't AV1, it's a bit different, it doesn't work like AV1) - but anyway I do not understand wavelets. Also, can wavelet codecs be written in Rust?
    195 replies | 22396 view(s)
  • fabiorug's Avatar
    23rd September 2020, 22:10
    AV3 will likely be based on WebP v2 or neural networks. JPEG XL has big technologies from Google, but that doesn't guarantee that AOM will be interested. AOM ≠ Google. Also, I have a point I will ask you about in your thread. Wait.
    133 replies | 9660 view(s)
  • Jyrki Alakuijala's Avatar
    23rd September 2020, 22:03
    You could try to get both the neatness of NHW and the efficiency of JPEG XL by:
    1. first compress and uncompress with NHW, and get a new kind of neat image.
    2. compress the neat image with JPEG XL.
    133 replies | 9660 view(s)
  • fcorbelli's Avatar
    23rd September 2020, 20:36
    fcorbelli replied to a thread zpaq updates in Data Compression
    After a bit of digging, the answer for this piece of code...
    string append_path(string a, string b) {
      int na=a.size();
      int nb=b.size();
    #ifndef unix
      if (nb>1 && b[1]==':') {  // remove : from drive letter
        if (nb>2 && b[2]!='/') b[1]='/';
        else b=b[0]+b.substr(2), --nb;
      }
    #endif
      if (nb>0 && b[0]=='/') b=b.substr(1);
      if (na>0 && a[na-1]=='/') a=a.substr(0, na-1);
      return a+"/"+b;
    }
    ...is the extraction of all versions, with x -all, to create different paths numbered progressively. I think I will update my "franz28" version of zpaq in the future to support this functionality. I am adding an updated version of PAKKA, which I am testing with rather large archives (5M+ files, 700GB+ size): a command to extract from version x to y, check instead of extract, test everything. Improved UTF-8 filename list, with double-click (in the log tab) to search directly (~4-8 seconds for a ~2M-node tree).
    2551 replies | 1105123 view(s)
  • Mauro Vezzosi's Avatar
    23rd September 2020, 19:21
    Wrong order:
    2,872,160,117 bytes, 240.566 sec. - 20.319 sec., brotli -q 5 --large_window=30 (v1.0.7)
    2,915,934,603 bytes, 102.544 sec. - 13.302 sec., zstd -6 --ultra --single-thread (v1.4.4)
    2,915,934,603 bytes, 103.971 sec. - 12.798 sec., zstd -6 --ultra --single-thread (v1.4.5)
    2,812,779,013 bytes, 412.488 sec. - 64.311 sec., 7z -t7z -mx3 -mmt1 (v19.02)
    110 replies | 11414 view(s)
  • Raphael Canut's Avatar
    23rd September 2020, 17:28
    Hi Scope, Thank you very much for testing NHW and for this very interesting image comparison with JPEG XL. On my computer screen this confirms my own tests, that is to say, I find that NHW has more neatness and JPEG XL has more precision. I just wanted to note, in case this demo makes others want to try NHW, that the new entropy coding schemes of NHW are not optimized for now; I already have ideas to improve them, and so we can save on average 2.5KB per .nhw compressed file, and even more with the chroma-from-luma technique, for example. Many thanks again. Cheers, Raphael
    133 replies | 9660 view(s)
  • Shelwien's Avatar
    23rd September 2020, 17:10
    > After preprocessing TS40.txt with my preprocessor, my compressor compresses it 16 MB better, is it great?
    Seems reasonable:
    100,000,000 enwik8
    61,045,002 enwik8.mcm // mcm -store
    25,340,456 enwik8.zst // zstd -22
    24,007,111 enwik8.mcm.zst // zstd -22
    > Did I correctly understand that these are the disadvantages of the ROLZ algorithm?
    No, a ROLZ matchfinder can be exactly the same as an LZ77 one - the only required difference (for it to be called ROLZ) is encoding the match rank (found during the search) instead of the distance/position, thus forcing the decoder to also run the matchfinder (see the sketch after this entry).
    41 replies | 1991 view(s)
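    A minimal sketch of the distinction Shelwien describes: the matchfinder below walks the candidate list for the current order-1 context exactly as an LZ77 matchfinder could, but a ROLZ coder would transmit the candidate's rank in that list rather than the byte distance, which only works because the decoder rebuilds identical lists. The order-1 context, unlimited list length, and minimum match length are illustrative choices, not any particular codec's.
    // Sketch: same matchfinder, two codes. LZ77 would emit m.distance,
    // ROLZ would emit m.rank (steps into the per-context candidate list).
    #include <cstdint>
    #include <cstdio>
    #include <string>
    #include <vector>

    struct Match { int rank; size_t distance; size_t length; };

    // Best match for data[pos..] among earlier positions that followed the same
    // preceding byte (order-1 context). Returns length 0 when nothing matches.
    Match find_match(const std::string& data, size_t pos,
                     const std::vector<std::vector<size_t>>& ctx_lists) {
        Match best{0, 0, 0};
        if (pos == 0) return best;
        const std::vector<size_t>& cand = ctx_lists[(uint8_t)data[pos - 1]];
        for (size_t i = cand.size(); i-- > 0; ) {            // most recent first
            size_t len = 0;
            while (pos + len < data.size() && data[cand[i] + len] == data[pos + len]) ++len;
            if (len > best.length)
                best = { (int)(cand.size() - 1 - i), pos - cand[i], len };
        }
        return best;
    }

    int main() {
        const std::string data = "abracadabra, abracadabra!";
        std::vector<std::vector<size_t>> ctx_lists(256);     // previous byte -> positions
        for (size_t pos = 0; pos < data.size(); ++pos) {
            const Match m = find_match(data, pos, ctx_lists);
            if (m.length >= 3)
                std::printf("pos %zu: len=%zu  LZ77 distance=%zu  ROLZ rank=%d\n",
                            pos, m.length, m.distance, m.rank);
            if (pos > 0) ctx_lists[(uint8_t)data[pos - 1]].push_back(pos);
        }
        return 0;
    }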
  • no404error's Avatar
    23rd September 2020, 15:43
    no404error replied to a thread FileOptimizer in Data Compression
    I was unexpectedly kicked out of many communities for plagiarism. Upon looking into it, I realized that this is due to the mention of my old nickname in your changelog. I think you should change the changelog. I never said that any of the tools you use were written by me; I only recommended to you what I was familiar with, mainly from the demoscene. PCXLite was written by Sol, as were some others in your set. Sol's webpage: https://sol.gfxile.net/ Thanks.
    664 replies | 202504 view(s)
  • Scope's Avatar
    23rd September 2020, 14:56
    I made a comparison of my own with NHW back when the first public version of JPEG XL was released. The main problem for normal testing by enthusiasts like me is the 512x512 resolution limitation: I have to either resize large images, split them into tiles, or compare only small images. Ready-made formats also require saving additional data for the image (container, structure, metadata, etc.), and this gives some advantage to experimental, unfinished formats that store only raw data, especially on very small images. Here's another small, quick visual comparison on 512x512 images with the latest available versions of JPEG XL and NHW. For NHW I chose -l5 (at higher compression I already see unacceptable distortions and loss of detail in many images); JPEG XL was then encoded to the same size with -s 8 (VarDCT; using a faster speed doesn't always make it worse). There are 12 images that can be switched with the keyboard up and down arrows, and the numbered images within each are: first the original, second NHW, third JXL. https://slow.pics/c/ivA8aKHO
    133 replies | 9660 view(s)
  • lz77's Avatar
    23rd September 2020, 10:58
    After preprocessing TS40.txt with my preprocessor, my compressor compresses it 16 MB better - is that good? Yesterday I thought about two things regarding ROLZ:
    1. The compressor can calculate hashes using bytes (for example, cdef) located to the right of the current position, abcd|efgh, but the decompressor can't: abcd|????. Are matches near the current position impossible in ROLZ?
    2. At the beginning of the data, classic LZ can compress abcd but ROLZ can't: abcdEabcdFabcd...Eabcd... The first appearance of abcd has no predecessor char, the second doesn't have the right one, and only the second Eabcd will match...
    Did I understand correctly that these are disadvantages of the ROLZ algorithm?
    41 replies | 1991 view(s)
  • fabiorug's Avatar
    22nd September 2020, 22:50
    Jyrki, in your opinion, is a photo compressed with squoosh.app online at q23 mozjpeg, and then at 8.81, speed 7, with the 07 September 2020 JPEG XL build, too low in quality? Would JPEG XL introduce these bitrates in the next versions, maybe with forced deblocking? I'm a bit confused; maybe after the ImageReady presentation I will be more or less so.
    133 replies | 9660 view(s)
  • Raphael Canut's Avatar
    22nd September 2020, 21:12
    Yes, I am also looking forward to seeing the new update of WebP v2, because, if I remember correctly, Pascal Massimino stated in October 2019 that WebP v2 was (a little) inferior to AVIF, but that they were working hard to improve it. It would be awesome if they announced at the end of this year that WebP v2 has become better than AVIF (and at lower complexity, furthermore)! There is also a huge research effort in (machine/deep learning) learned image compression. The problem for me and NHW in finding financing is that a lot of people and experts say that the future will be all machine learning, and so unfortunately a lot of people absolutely don't care about a wavelet codec (made by an individual, furthermore)...
    133 replies | 9660 view(s)
  • fabiorug's Avatar
    22nd September 2020, 20:56
    Honestly, now I'm more hyped for the ImageReady presentation of WebP v2 by Pascal Massimino; he gave a brief look at it at the AOMedia Symposium 2019. But it would be good to have one codec that, at a distance of 1.81, is transparent for all images.
    133 replies | 9660 view(s)
  • Raphael Canut's Avatar
    22nd September 2020, 20:41
    @fabiorug, OK, thanks for the clarification. I would like to thank you very much for testing the NHW codec - too few people have looked at it - and I wanted to let you know that I appreciate your comments on NHW, even if some people will say that they are not appropriate here, since this is the JPEG XL thread and the NHW thread is another page... Just a very quick remark: I don't agree when you say that wavelet codecs all have the same results. For example, NHW does not have the same results as JPEG 2000; if you want a wavelet codec that retains details, try the Rududu codec (enhanced SPIHT) - but it will have less neatness than NHW... Many thanks again. Cheers, Raphael
    133 replies | 9660 view(s)
  • fabiorug's Avatar
    22nd September 2020, 20:14
    Hi, thanks for the comments. It is not a program; it is a text in which I wanted to find a ratio between image and audio for personal use, and to decide which image codec and which butteraugli to use. Nothing scientific or complex. But 2.469-to-1 is the JPEG XL ratio at which it should produce better quality in comparison to PNG or other lossy codecs; that's reasonable, and people will like using these values, so they expect good quality. I'm sorry if I made comparisons; it wasn't intended. Pleasantness isn't a metric, it's a subjective score I made up: 4.1 pleasantness, or it inflates the file size, and 1.660 is the minimum I like for JPEG XL; and I found the 2.469 butteraugli value. I don't know if there is a limit after 4.1. But as Jyrki Alakuijala said, the earlier comment isn't accurate, as it is an odd value and JPEG XL is good as it is; I only suggested that at a 2.469-to-1 distance it can be improved. Citing Jyrki Alakuijala: "Why do you use so precise distance values in your recommendations? 2.6 and 2.581 should produce roughly identical results." Sorry for the bold text abuse. Scope has said it's because I like more wavelet blurring in less pleasant, low-bitrate images, and that your codec, like other wavelet codecs, is blurry at low bpp and that enhances the image. The comment was: "For myself I didn't see very big differences from other wavelet formats, and it's much harder to compare it on such small resolutions." But that is not a comparison. Even JPEG XL modular is interesting for text and some graphics. L8 (1.660 pleasantness), L16 (1.1250 pleasantness): that's what I use, and indeed the result isn't perfect, because it doesn't support all resolutions, but for some Instagram photos it is acceptable.
    133 replies | 9660 view(s)
  • Raphael Canut's Avatar
    22nd September 2020, 19:36
    Hi, Sorry for interrupting the JPEG XL thread, but just one clarification, @fabiorug: do you mean that you have created a program/metric that computes/evaluates the pleasantness of an image? If so, could you give us some details? Is there a version to download? Otherwise, just very quickly: the -l12 to -l19 quality settings are absolutely not optimized in NHW. They can be better; I will have to work on it, so don't trust them too much for now. Very quickly, to finish: on the contrary, I find that for example the -l4, -l5, -l6 quality settings have good visual results, with which you don't seem to agree, but it's true that I mainly tested against AVIF... Cheers, Raphael
    133 replies | 9660 view(s)
  • fabiorug's Avatar
    22nd September 2020, 18:18
    Objective: 2.469-to-1 is the ratio at which it should produce better quality in comparison to PNG or other lossy codecs.
    Subjective: above a maximum of 4.1 pleasantness it inflates the file size; from a minimum of 1.1250 to 1.660 pleasantness (low-quality images) it is better to compress with the NHW codec if you want a better ratio and you accept a washed-out image. JPEG XL, in my opinion, isn't good for that type of low-quality image, and if you have an image that pleases more than 4.1 (I don't know the maximum, it's subjective), it's better to leave the PNG, as I notice more loss of detail.
    L8 (1.660), L16 (1.1250): l8 is an NHW codec (wavelet) setting; it works only for 512x512 24-bit BMP. I found that it can compress further the images I have in the 1.1250 to 1.660 pleasantness range (a value I invented). But anyway, returning to JPEG XL: 2.469-to-1 (butteraugli distance) is the ratio (in my opinion) at which it should produce better quality in comparison to PNG or other lossy codecs.
    133 replies | 9660 view(s)
  • Jarek's Avatar
    22nd September 2020, 14:00
    Sure, the order might improve things a bit, but I am talking about exploiting dependencies inside a block - the tests show that linear predictions from already decoded coefficients in the same block can give a few percent improvement, especially for width prediction (#2 post here).
    10 replies | 1166 view(s)
  • Jyrki Alakuijala's Avatar
    22nd September 2020, 13:43
    Why do you use so precise distance values in your recommendations? 2.6 and 2.581 should produce roughly identical results.
    133 replies | 9660 view(s)
  • Jyrki Alakuijala's Avatar
    22nd September 2020, 13:36
    In PIK and JPEG XL we optimize the (zigzag) order to get the best zero-RLE characteristics, too. There is also some additional benefit, for other reasons, in keeping the variance decreasing in the chosen codings.
    10 replies | 1166 view(s)
  • Jarek's Avatar
    22nd September 2020, 07:32
    Finally added some evaluation with quantization to https://arxiv.org/pdf/2007.12055 - the blue improvements on the right use the 1D DCT of the column to the left and of the row above (the plots are per coefficient: prediction mainly uses the corresponding frequencies, so we can focus on them); prediction of widths has a similar cost to prediction of values, but gives much larger improvements here (summarised after this entry). The red improvements use already decoded values in zigzag order - this is costly and the gain is relatively small, but its practical approximations should have similar gains.
    10 replies | 1166 view(s)
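    A rough summary of the prediction Jarek describes, with symbols chosen here for illustration (the precise parameterisation is in the linked paper): let t^{L}_f and t^{U}_f denote the f-th coefficients of the 1D DCT of the already decoded column to the left and of the row above the current block. The value prediction for a block coefficient c_f is dominated by the corresponding frequency,
        \hat{c}_f \approx \alpha_f \, t^{L}_f + \beta_f \, t^{U}_f ,
    while width prediction instead estimates the scale b_f of the Laplace distribution used to code the residual c_f - \hat{c}_f (for example from |t^{L}_f| and |t^{U}_f|); the post reports that this costs about as much as value prediction but gives the larger improvement.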
  • elit's Avatar
    22nd September 2020, 03:52
    Those are not from US labs.
    5 replies | 450 view(s)
  • Shelwien's Avatar
    21st September 2020, 20:08
    0 replies | 200 view(s)
  • Shelwien's Avatar
    21st September 2020, 20:02
    > the speed is multiplied by 2...
    Yes, ROLZ can provide much better compression with a fast parsing strategy, which might be good for the competition.
    > Are there too many literals?
    Mostly the same as with normal LZ77. LZ77 would usually already work like that - take a context hash and go through a hash-chain list to check previous context matches - the difference is that LZ77 then encodes the match distance, while ROLZ would encode the number of hash-chain steps.
    > Is LZP also like ROLZ?
    LZP is a special case of ROLZ with only one match per context hash. So it encodes only lengths or literals, no distance equivalent (see the sketch after this entry). But LZP is rarely practical on its own - it's commonly used as a dedup preprocessor for some stronger algorithm.
    41 replies | 1991 view(s)
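    A minimal sketch of the LZP special case described above: a single predicted position per context hash, so the output contains only literals and match lengths, never a distance. The order-2 hash, table size, and minimum match length are illustrative.
    // Sketch: LZP = ROLZ with one candidate per context hash, so only match
    // lengths and literals are emitted. Hash and thresholds are illustrative.
    #include <cstdint>
    #include <cstdio>
    #include <string>
    #include <vector>

    int main() {
        const std::string data = "the quick brown fox, the quick brown dog";
        std::vector<size_t> table(1 << 16, SIZE_MAX);   // context hash -> last position
        std::printf("%.2s", data.c_str());              // first two bytes go out as literals
        size_t pos = 2;                                 // need two bytes of context
        while (pos < data.size()) {
            const uint32_t h = ((uint8_t)data[pos - 1] << 8) | (uint8_t)data[pos - 2];
            const size_t pred = table[h];               // single prediction for this context
            table[h] = pos;
            size_t len = 0;
            if (pred != SIZE_MAX)
                while (pos + len < data.size() && data[pred + len] == data[pos + len]) ++len;
            if (len >= 3) {                             // match token: length only
                std::printf("[len=%zu]", len);
                pos += len;
            } else {                                    // literal byte
                std::putchar(data[pos]);
                ++pos;
            }
        }
        std::putchar('\n');
        return 0;
    }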
  • Gotty's Avatar
    21st September 2020, 19:45
    (These from Lucas are important ones. Let me grab you the links.) So, here they are:
    - the thread on WBPE: https://encode.su/threads/1301-Small-dictionary-prepreprocessing-for-text-files
    - the XWRT paper: http://pskibinski.pl/papers/07-AsymmetricXML.pdf
    And maybe also:
    - the StarNT paper: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.163.3821&rep=rep1&type=pdf
    The text preprocessors page is certainly worth a look; some of these tools are in your league: https://encode.su/threads/3240-Text-ish-preprocessors
    30 replies | 1302 view(s)
  • lz77's Avatar
    21st September 2020, 18:46
    Decoding is slower, but in the scoring formula (F=c_time+2*d_time+...), the decoding time is multiplied by 2... As I understand ROLZ, if we have 'the ' at the current position and the predecessor symbol is 'n', then we look in a history buffer at 256 saved hashes for 'the ' only with the predecessor symbol 'n'? And if the search fails, then 't' is a literal? Are there too many literals? Is LZP also like ROLZ?
    41 replies | 1991 view(s)
  • Gotty's Avatar
    21st September 2020, 18:01
    Writing Disha version V2 was just too quick. Please spend more time on it; don't rush. Start again.
    - I would like to suggest that you phrase v2 in such a way that it does not reference anything in v1. Even if you are not intending to publish v1 anymore, you would make it easier for a future reader.
    - Be more specific. V2 is still formulated as an idea and not a concrete algorithm. You will need to write down concretely...
    ...Define what the symbols are exactly. Only characters and words? Or expressions ("United States of America")? Fragments? Combinations of characters? "http://" or ".com"?
    ...How you handle capitalization.
    ...How the dictionary would be constructed - this one is not straightforward, there are many ways. You'd like to create one or more static dictionaries, right? How exactly? What decides whether a word becomes part of one of them? Where would you get your corpus from?
    ...How compression would work - this one is also not straightforward: how do you imagine splitting the input file into its parts (words)? An example: you have found "the" in your input and it's in the dictionary. Would you go on processing more characters in the hope that there will be a longer match? Let's say you do. And you are lucky: it's actually "there" and it is still in the dictionary. Nice. But wait. If you just did that, you will find that the following characters are "scue". Hm, there is no such word. What happened? Ah, it is "www.therescue.com", and you should really have stopped at "the". So let's stop early then (after "the"). But now you will find "then", and that (the+n) will be expensive; you should not have stopped this time. See the problem? It's not straightforward (a greedy-split sketch follows this entry). Should it be greedy, should it do some backtracking? Should it try multiple ways and decide on some calculated metric? Optimally splitting a text into parts is an NP-hard problem, so you would probably need heuristics (except when doing it greedily). The current description is too general - there are too many ways to implement it. If you asked 10 programmers to implement it, there would probably be 10 different results. So we need more specific details, please.
    30 replies | 1302 view(s)
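    A minimal sketch of the splitting problem described above: greedy longest-match against a tiny, purely illustrative dictionary. On "therescue" the greedy rule stops after "there" and degrades to raw bytes, while on "then" stopping early at "the" would have been the wrong choice.
    // Sketch: greedy longest-match splitting against a word dictionary.
    // The dictionary below is illustrative only; it reproduces the
    // "the" / "then" / "therescue" dilemma from the post.
    #include <cstdio>
    #include <set>
    #include <string>
    #include <vector>

    std::vector<std::string> greedy_split(const std::string& text,
                                          const std::set<std::string>& dict) {
        std::vector<std::string> parts;
        size_t pos = 0;
        while (pos < text.size()) {
            size_t best = 0;
            for (size_t len = text.size() - pos; len >= 1; --len)   // longest match first
                if (dict.count(text.substr(pos, len))) { best = len; break; }
            if (best == 0) { parts.push_back(text.substr(pos, 1)); ++pos; }  // raw-byte escape
            else           { parts.push_back(text.substr(pos, best)); pos += best; }
        }
        return parts;
    }

    int main() {
        const std::set<std::string> dict = {"the", "then", "there", "rescue"};
        const std::vector<std::string> samples = {"therescue", "then"};
        for (const std::string& s : samples) {
            std::printf("%s ->", s.c_str());
            for (const std::string& p : greedy_split(s, dict)) std::printf(" [%s]", p.c_str());
            std::printf("\n");
        }
        return 0;
    }
    A backtracking or dynamic-programming splitter would instead find [the][rescue] for the first sample, which is exactly the trade-off the post asks about.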
  • Gotty's Avatar
    21st September 2020, 17:35
    From my side, I can see your passion and your will to do something interesting and unique. It's a very good start. Passion is an absolute must to reach greatness, but it's not enough. I have a strong feeling that before you go on you'll need to attain more knowledge of algorithms and learn some programming. You will need to learn and read more about data compression, too. You will then know how to describe an algorithm properly and how to be more specific. If you do intend to publish it sometime in the future, you'll need to learn some scientific writing. Your document is not there yet. (You probably know that.) Collaboration works only if we are "equal" in some sense. What do you bring to the table? Until now it's only an idea - not very specific, so I'd even say a "vague idea". Please read those links above again. Read the source code of the open-source software similar to yours. Learn their advantages and disadvantages. Then you'll know where you would "break into the market". In v2 you mention that "The difference between this algorithm and others is that instead of using code words at the bit level, byte level code words are used". Homework: you really need to read those links above again. But most importantly: if you learned some programming you could give birth to your child yourself. Please don't expect the kind of collaboration where you drop in an idea without specific details and someone eventually sits down, takes the time, elaborates the missing parts and creates your software - which would not really be yours; it might be very far from your idea. Implementing V1 was (fortunately) (almost) straightforward, but V2 is not. You'll need to experiment, fail, experiment more, fail more, until you get better and better - eventually an expert. Then you will be able to create a unique compression algorithm. If you invest your time, I believe you'll get there. Your passion will get you there. We are here, we can help you, we can spot missing stuff, we can point you in different directions, but please don't expect us to elaborate the details of your compression idea; you'll need to do that.
    30 replies | 1302 view(s)
  • urntme's Avatar
    21st September 2020, 08:32
    First of all, thank you for putting the time and effort into implementing this algorithm. It means a lot to me. I actually find this information very good. It means the algorithm works the way it's supposed to. That's great news. Now about refinement, let's look into it below: I have spent some time thinking about what changes could be made and I wrote up a document on it. I called it Disha Version 2. I have attached the file to this post. Kindly take a look and let me know your thoughts. This doesn't have to be the final version of it. It can go through more refinements. My goal is to contribute something of worth to the community at the very least. So if by doing some refinements it can be a worthy contribution to the community, then that would be a great thing for me. If we can all put our heads together and collaborate and make something new, interesting, and worthwhile for the community, then I feel that would be great reward for our efforts in itself. So please let me know if you would be interested in collaborating. If you are, then we can discuss ideas, debate, and come to some common ground and make something useful. I have passion and drive and I can contribute time in writing a detailed paper on the algorithm we finally decide upon and you all have experience so I feel it can make a good combination. Thank you all for all your time and patience. Take care.
    30 replies | 1302 view(s)
  • danlock's Avatar
    21st September 2020, 01:06
    English Wikipedia contains a reference to "QUAD / BALZ / BCM (highly-efficient ROLZ-based compressors)" on the archiver comparison page, where it discusses PeaZip under the heading Uncommon archive format support, but the text shown when hovering the cursor over ROLZ indicates the page is still missing: "ROLZ (Page does not exist)".
    41 replies | 1991 view(s)
  • Gotty's Avatar
    20th September 2020, 23:57
    Gotty replied to a thread paq8px in Data Compression
    Paq8px (and cmix) is mentioned (and used as an inspiration and baseline) in this paper: Performance and Implementation Modeling of Gated Linear Networks on FPGA for Lossless Image Compression
    2111 replies | 570441 view(s)
  • fcorbelli's Avatar
    20th September 2020, 20:30
    fcorbelli replied to a thread zpaq updates in Data Compression
    24.3: some improvements, vaguely starting to seem like a pre-release :) Please, any feedback on unpacking Windows, Linux and BSD files, possibly with UTF-8 characters? Thank you
    2551 replies | 1105123 view(s)
  • fabiorug's Avatar
    20th September 2020, 14:48
    I tested JPEG XL rate control: if you don't use 1.859 with -s 3 or 2.508 with -s 9, or if you use an image smaller than 1600x1200, you risk a corrupted image from JPEG XL - it will compress too much. Consider AVIF for high-quality use cases, or NHW for smaller resolutions. JPEG XL rate control could be improved, but the areas of efficiency are always 1.859 -s 3 and 2.508 -s 9. I don't think the codecs will change significantly.
    133 replies | 9660 view(s)
  • e8c's Avatar
    20th September 2020, 10:48
    Submission to T2, compiled executable only.
    3 replies | 220 view(s)
  • fabiorug's Avatar
    20th September 2020, 09:28
    Obviously JPEG can go higher, e.g. -q 84.55 -s 3 or -d 5.55 -s 5, but the photo will look horribly oversharpened, because every format without the tuning of its author (like minor sharpness frequency) is nothing. This is what differentiates new formats.
    133 replies | 9660 view(s)
  • fabiorug's Avatar
    19th September 2020, 23:47
    -d 4.581 -s 6 -nhw 8 -q 66.81 -s 6 -s 53.31 -s 3 gr.png
    133 replies | 9660 view(s)
  • fabiorug's Avatar
    19th September 2020, 22:38
    Another good trick for a good 400 KB or 40 KB image: the qualities are rather high, and I doubt you can notice differences. Maybe NHW re-encoding isn't good, as it creates bigger file sizes and the encoder amplifies PNG artifacts because it isn't based on any metric. JPEG XL's butteraugli is metric-independent; it works for all metrics.
    133 replies | 9660 view(s)
  • fabiorug's Avatar
    19th September 2020, 21:25
    A true hybrid - every pixel in every image using the two technologies - would be something like -q 48.87 -s 4 re-encoded to -s 5 -d 0.882; it gives a totally perfect image to me. But I'd still like manual quantization and quality chosen by the expert, not an auto mode.
    133 replies | 9660 view(s)
  • urntme's Avatar
    19th September 2020, 17:21
    This is quite a headturner. Thank you so much for all your efforts. I'm going to need some time to think about all of this and process it. Please give me a couple of days to respond.:_think:
    30 replies | 1302 view(s)
  • fabiorug's Avatar
    19th September 2020, 12:39
    I'm trying -q 59.3 -s 8 with the 07 September 2020 JPEG XL 0.0.1 build and -q 38 -s 4 with cavif-rs 0.6.1; JPEG XL can be used to enhance AVIF at low qualities, as they said. The specification will change, as is written on the JPEG XL GitLab site, but you don't need the final specification to do this work. At speed 9, q 49.3 is good for visual quality perception. But honestly I don't want faster quality or slower speed; I would prefer modular to go back to the earlier build, which put the same bits in the file and took 30 seconds to encode at speed 9 instead of 1 minute 08 - or at least make the RAM optimization or enhancement of this quality optional, not obligatory, if possible. Not everybody wants fake VarDCT details. Also consider a VarDCT and modular hybrid. Also, after JPEG XL's completion, how many encoders or butteraugli versions will we have? For video, are there agreements? I guess after 2021. Also an auto speed - but I guess you only have to do a bitstream; later, if anyone like pingo wants to add features, they can. Good luck anyway.
    133 replies | 9660 view(s)
  • Gotty's Avatar
    19th September 2020, 10:04
    @urntme I implemented your algorithm with 3-byte codewords and automatic space insertion, and built an optimal dictionary per file to find out the best compression theoretically possible. So:
    - I don't have a general dictionary. The aim is to find the theoretical limit per file.
    - It is not optimized for speed - readability is more important at this stage.
    - Speed: it encodes a 100 MB file in 1 sec and decodes it in half a sec (tested with text8). It encodes/decodes anything small in way less than a millisecond. A tweet-sized message would take a couple of microseconds.
    - However, compression-wise it cannot get better than 50%. That seems to be the average best case (average theoretical limit), even when using an optimal dictionary (a back-of-the-envelope calculation follows this entry). Unfortunately, when there's a word or chunk that is not in the dictionary, it seriously bloats the file. Any quote, comma, bracket, full stop or typo (or an unknown word) is a serious show-stopper, and with such things in the content to be compressed the algorithm quite often loses (i.e. size > 100%).
    So its performance (compression ratio and speed) in the optimal case is similar to an order-0 model (like a speed-optimized fpaq variant), but the latter is 1) not limited to files containing English text only, and 2) uses way less memory (it does not need a dictionary). Without sacrificing speed for some cleverness, it does not look promising so far. Would you consider refining it? You've got plenty of ideas from us above. I could go ahead and refine it myself, but you are the author; it's your child. You will also need to update your pdf file, and include the details we were discussing above when we were trying to understand your idea better - you now know what information we were missing. You will also need to include (describe) an algorithm for how you would build a dictionary for the general case. It's your turn ;-)
    30 replies | 1302 view(s)
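    The ~50% ceiling reported above is roughly what a fixed 3-byte codeword per word predicts. As a back-of-the-envelope check, assuming an average English word length of about 4.7 letters plus one implied space (an outside ballpark figure, not something measured in this thread):
        3 / (4.7 + 1) \approx 0.53 ,
    i.e. about 53% before any out-of-dictionary chunk starts adding its escape overhead, which matches the observed average best case.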
  • Trench's Avatar
    19th September 2020, 06:28
    "A dead man's switch (see alternative names) is a switch that is designed to be activated or deactivated if the human operator becomes incapacitated, such as through death, loss of consciousness, or being bodily removed from control. Originally applied to switches on a vehicle or machine, it has since come to be used to describe other intangible uses like in computer software." (weakpedia.. i mean wikipedia) To all newcomers to the forum if you say you have something that can do what no other person figured out a solution for try to have a back up just in case. Maybe you are afraid your underwear for a mask is not enough to protect you, or you might get a impossible computer virus in the brain which a computer virus can't cross into other species but ok. How about giving your info to someone else as a last resort and what better place than someone in this forum... I assume. Despite they didn't figure it out yet or never will it's not their fault they are taught to think the way they do, since they are they and you are you. Despite this message is slightly amusing to lighten the mood it should be taken seriously... If you care. Who you choose is up to you. Usually the serious ones have their email to send info... (to members. What you don't? Fine you get nothing and like it.) Or you can send it to multiple people you know but you should also send at least to another person you don't know after a certain time frame. Not to some homeless stranger on the street they don't count. Also don't be naive to think certain people will take good care of the info. Plenty of naive people you don't have to be one too. Or to someone with a above average IQ... well good luck on that. Here is one site that might help but you can use your own if you like. deadmansswitch.net Even some emails can send thing at certain time frames but I would not trust the security on those depending on the site since even some cloud storage sites or emails can access and erase content they do not find acceptable or even erase your account if not used after a while. Maybe some people below can give advise... or maybe not. Or just give up since no one deserves it... but then again you deserve nothing others have given you too then. :P You are not allowed to see this. Fine, say hello if you did.
    0 replies | 73 view(s)
  • Trench's Avatar
    19th September 2020, 05:03
    LOL Svenben, your very amusing opinion is noted. :) Whatever makes you feel better to cope with the fear you have. Put on a mask, with goggles and an oxygen tank as well, if it makes you feel comfortable. Here is a basic way to find truth: listen to all sides and expand on them. For every reason you have to think X is good, find an equal amount to see how X is bad. I know it's hard, since many are conditioned to be lazy and do what they know. I only cited news sites stating these facts. If you can't read between the lines, oh well, but I don't expect the average-intellect person to get things. It's like the joke of 6 ft apart, which you take to heart, but particles stay in the air for hours and people pass through them; you encounter thousands of various viruses a day and yet you're fine. It is obvious you don't know what a virus is; many confuse it with bacteria, but then again most don't know what a bacterium is either. Most people know nothing compared to the vast knowledge out there, and it is amusing what they think. Which is why your comment is very funny; it reminds me of a child who thinks it knows enough after obtaining some info. Have you ever heard of the phrase "conflict of interest"? It is not admissible as evidence no matter where you go, unless it's a banana republic. There are more sources from researchers and doctors, but people want to believe what makes them feel comfortable. Don't make the mistake again of assuming you know more than the other person, which you did now, or of believing an unelected bureaucrat who could not make it in the private sector, which is the side you take. Think wisely. The good news is that so many people are dying less from so many other diseases - it's a miracle. LOL
    5 replies | 450 view(s)
  • fabiorug's Avatar
    18th September 2020, 14:53
    In my opinion, only at -q 93 -s 3 or -d 0.83 -s 7 does the 07 September 2020 JPEG XL build have higher quality than other codecs such as NHW; at lower qualities, the artifacts of NHW can look good and you can save space. With this re-encoding I got 300724 bytes, but obviously normal JPEG XL from a PNG, such as -d 1, looks way better. Probably for encoding beards etc., which have neatness, it is not the best codec.
    133 replies | 9660 view(s)