Activity Stream

  • Jyrki Alakuijala's Avatar
    Today, 12:31
    Rare to have so much competence in a single event! It could be great to understand the similarities and differences between AVIF, JPEG XL, and WebP2, for example in matrix form. Perhaps they are similar enough that a single encoder could generate content for each format. Perhaps they are similar enough that transcoding could work without re-encoding (possibly only when a specific subset of features is used). From my viewpoint AVIF and WebP2 are close cousins and JPEG XL is the odd one out because of the following unique features:
    - lossless JPEG transcoding
    - an absolute colorspace is used, allowing better preservation of dark areas
    - simpler color handling, as it always has the same HDR + high-bit encoding:
      no encode-time decision for HDR or no HDR;
      no encode-time decision for 8/10/12 etc. bits (JPEG XL always uses float per channel internally);
      no encode-time decision about YUV420;
      no encode-time decision about which colorspace to use, always XYB
    - progressive coding
    - hybrid delta/normal palettization
    - focus on best quality at internet-distribution bit rates
    180 replies | 58223 view(s)
  • a902cd23's Avatar
    Yesterday, 22:16
    a902cd23 replied to a thread Fp8sk in Data Compression
    Compressed file is excel97 Notes.xls, 4266496 bytes.
    Option   fp8v6 elapsed   fp8sk32 elapsed
    -1       0:00:22,41      0:00:26,25
    -2       0:00:22,86      0:00:26,77
    -3       0:00:23,39      0:00:27,35
    -4       0:00:23,40      0:00:27,64
    -5       0:00:23,74      0:00:28,29
    -6       0:00:24,14      0:00:29,03
    -7       0:00:24,36      0:00:29,78
    -8       0:00:24,73      0:00:29,18
    Compressed sizes, sorted:
    514 363  7.fp8sk32
    514 753  8.fp8sk32
    515 162  6.fp8sk32
    515 233  5.fp8sk32
    518 107  7.fp8
    518 211  4.fp8sk32
    518 541  8.fp8
    518 941  6.fp8
    518 951  5.fp8
    521 909  4.fp8
    527 435  3.fp8sk32
    531 346  3.fp8
    542 054  2.fp8sk32
    546 251  2.fp8
    565 925  1.fp8sk32
    570 949  1.fp8
    74 replies | 6603 view(s)
  • Scope's Avatar
    Yesterday, 20:34
    ImageReady event:
    - AVIF for Next-Generation Image Coding by Aditya Mavlankar: https://youtu.be/5RX6IgIF8bw (Slides)
    - The AVIF Image Format by Kornel Lesiński: https://youtu.be/VHm5Ql33JYw (Slides)
    - WebP Rewind by Pascal Massimino: https://youtu.be/MBVBfLdh984 (Slides)
    - JPEG XL: The Next Generation "Alien Technology From The Future" by Jon Sneyers: https://youtu.be/t63DBrQCUWc (Slides)
    - Squoosh! App - A Client-Side Image Optimization Tool by Surma: https://youtu.be/5s1UuppSzIU (Slides)
    180 replies | 58223 view(s)
  • SpyFX's Avatar
    Yesterday, 19:01
    SpyFX replied to a thread zpaq updates in Data Compression
    If attr size == (FRANZOFFSET + 8), then the checksum is OK; else, if attr size == 8, is it a checksum error? (See the sketch below.)
    2594 replies | 1118287 view(s)
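    A minimal reader-side sketch of that branching, assuming FRANZOFFSET is the 50-byte block described further down in this thread (40 ASCII-hex SHA1 chars, a NUL, 8 ASCII-hex CRC32 chars, a NUL); the names here are illustrative, not from the zpaq sources:
        const int FRANZOFFSET = 50; // 40-char SHA1 hex + NUL + 8-char CRC32 hex + NUL

        // Hypothetical check on the 4-byte attr-size prefix read from the archive:
        // FRANZOFFSET+8 = 8 padded attr bytes followed by a checksum block,
        // 8 = padded attrs only (no checksum), 3 or 5 = legacy 7.15 attrs.
        bool attr_block_has_checksum(int attr_size) {
            return attr_size == FRANZOFFSET + 8;
        }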
  • fcorbelli's Avatar
    Yesterday, 18:56
    fcorbelli replied to a thread zpaq updates in Data Compression
    Good. Only max 8 bytes are needed (puti writes a uint64_t)... BUT... I am too lazy to iterate to write an empty FRANZOFFSET block if something goes wrong: i_sb.write(pad,FRANZOFFSET); :D
    2594 replies | 1118287 view(s)
  • SpyFX's Avatar
    Yesterday, 18:44
    SpyFX replied to a thread zpaq updates in Data Compression
    OK, my small fix :)
    Delete the line:
        const char pad[FRANZOFFSET] = {0};
    and change:
        i_sb.write(pad, (8 - i_quanti));
    to:
        puti(i_sb, 0, (8 - i_quanti));
    Full code, no zero buffer:
        if (getchecksum(i_filename, i_sha1))
        {
            puti(i_sb, 8 + FRANZOFFSET, 4); // 8+FRANZOFFSET block
            puti(i_sb, i_data, i_quanti);
            puti(i_sb, 0, (8 - i_quanti));
            i_sb.write(i_sha1, FRANZOFFSET);
            if (!pakka)
                printf("SHA1 <<%s>> CRC32 <<%s>> %s\n", i_sha1, i_sha1 + 41, i_filename.c_str());
        }
        else
        {
            puti(i_sb, 8, 4);
            puti(i_sb, i_data, i_quanti);
            puti(i_sb, 0, (8 - i_quanti));
        }
    2594 replies | 1118287 view(s)
  • fcorbelli's Avatar
    Yesterday, 18:36
    fcorbelli replied to a thread zpaq updates in Data Compression
    ... no attribute, no checksum ... (I never use noattribute!) I'll fix it with a function (I edited the previous post). I attach the current source. EDIT: do you know how to "intercept" the blocks just before they are written to disk, in the add() function? I am trying to compute the CRC32 of the file from the resulting compression blocks, sorted (as I do for verification). That would save re-reading the file from disk (by eliminating the SHA1 calculation altogether). In short: during add(), for each file and for each compressed block (even unordered), I want to save the block into my vector and then process it. Can you help me?
    2594 replies | 1118287 view(s)
  • SpyFX's Avatar
    Yesterday, 17:52
    SpyFX replied to a thread zpaq updates in Data Compression
    Yes, right: for your extension you should give 8 bytes for compatibility with zpaq 7.15, and then place the checksum. In your code I don't see where the checksum is located if there are no attributes.
    2594 replies | 1118287 view(s)
  • SpyFX's Avatar
    Yesterday, 17:32
    SpyFX replied to a thread zpaq updates in Data Compression
    P.S. Sorry, I deleted my post.
    2594 replies | 1118287 view(s)
  • fcorbelli's Avatar
    Yesterday, 17:30
    fcorbelli replied to a thread zpaq updates in Data Compression
    Thank you, but... why? The attr does not have a fixed size: it can be 3 or 5 bytes, or 0, or... 55 (50+5) bytes long. At least that's what it looks like from the Mahoney source, but I could be wrong. More precisely, I think the solution could be "padding" the "7.15" attr to 8 bytes (with zeros after the 3 or 5 bytes), then putting "in the queue" my new 50-byte attr block. This way the 7.15 source should always be able to take 8 bytes, of which the last 3 or 4 are zero, to put into dtr.attr if (i<8):
        -7.15tr- << 40 bytes of SHA1 >> zero <<CRC >> zero
        12345678 1234567890123456789012345678901234567890 0 12345678 0
        lin00000 THE-SHA1-CODE-IN-ASCII-HEX-FORMAT-40-BYT 0 ASCIICRC 0
        windo000 THE-SHA1-CODE-IN-ASCII-HEX-FORMAT-40-BYT 0 ASCIICRC 0
    Seems legit? Something like that (I know, I know... not very elegant...):
        void Jidac::writefranzoffset(libzpaq::StringBuffer& i_sb, uint64_t i_data, int i_quanti,
                                     bool i_checksum, string i_filename, char* i_sha1)
        {
            if (i_checksum) /// OK, we put a larger size
            {
                /// experimental fix: pad to 8 bytes (with zeros) for 7.15 enhanced compatibility
                /// in this case 3 attr, 5 pad, then 50
                const char pad[FRANZOFFSET] = {0};
                puti(i_sb, 8 + FRANZOFFSET, 4);  // 8+FRANZOFFSET block
                puti(i_sb, i_data, i_quanti);
                i_sb.write(pad, (8 - i_quanti)); // pad with zeros (for 7.15 little bug)
                if (getchecksum(i_filename, i_sha1))
                {
                    i_sb.write(i_sha1, FRANZOFFSET);
                    if (!pakka)
                        printf("SHA1 <<%s>> CRC32 <<%s>> %s\n", i_sha1, i_sha1 + 41, i_filename.c_str());
                }
                else // if something goes wrong, put zeros
                    i_sb.write(pad, FRANZOFFSET);
            }
            else
            {   // default ZPAQ
                puti(i_sb, i_quanti, 4);
                puti(i_sb, i_data, i_quanti);
            }
        }
        ....
        if ((p->second.attr & 255) == 'u')      // unix attributes
            writefranzoffset(is, p->second.attr, 3, checksum, filename, p->second.sha1hex);
        else if ((p->second.attr & 255) == 'w') // windows attributes
            writefranzoffset(is, p->second.attr, 5, checksum, filename, p->second.sha1hex);
        else
            puti(is, 0, 4);                     // no attributes
    With observation I found a possible bug: what happens if the CRC calculation fails? Actually I should do it FIRST and, only if OK, insert a FRANZOFFSET block. In other words: if a file cannot be opened, then I could save space by NOT storing the SHA1 and CRC32. Next release...
    2594 replies | 1118287 view(s)
  • fcorbelli's Avatar
    Yesterday, 17:27
    fcorbelli replied to a thread zpaq updates in Data Compression
    This is version 41 of zpaqfranz. It begins to look like something vaguely functioning. Using the -checksum switch (in the add command) stores BOTH the SHA1 and the CRC32 of each file inside the ZPAQ archive. Those codes can be seen with the normal l (list) command in zpaqfranz; zpaq 7.15 should ignore them without errors. The t and p commands test the new format. The first uses the CRC32 codes (if present) and, if desired, with -force also compares against the files on the filesystem for a double check; it is about as fast as the standard test in ZPAQ 7.15. The second, p (as in paranoid), does the check on SHA1; in this case it's MUCH slower and uses MUCH more RAM, so it often crashes on 32-bit systems. -verbose gives a more extended result list. Examples:
        zpaqfranz a z:\1.zpaq c:\zpaqfranz\* -checksum -summary 1 -pakka
        zpaqfranz32 t z:\1.zpaq -force -verbose
    I emphasize that the source is a real mess, due to the "injection" of different programs into the original file, so as not to differentiate it too much from "normal" zpaq. It should be cleaned up and fixed, perhaps in the future. To summarize: zpaqfranz-41 can now check file integrity (CRC32) in a (hopefully) perfectly backward-compatible way with ZPAQ 7.15.
    EXE for Win32: http://www.francocorbelli.it/zpaqfranz32.exe
    EXE for Win64: http://www.francocorbelli.it/zpaqfranz.exe
    Any feedback is welcome.
    2594 replies | 1118287 view(s)
  • SpyFX's Avatar
    Yesterday, 17:21
    SpyFX replied to a thread zpaq updates in Data Compression
    P.S. Sorry, my mistake; my post needs to be deleted (:
    2594 replies | 1118287 view(s)
  • Jon Sneyers's Avatar
    Yesterday, 12:35
    It may be a nice lossy image codec, but lossless it is not:
        Lenna_(test_image).png PNG 512x512 512x512+0+0 8-bit sRGB 473831B 0.010u 0:00.009
        Lenna.bmp BMP3 512x512 512x512+0+0 8-bit sRGB 786486B 0.000u 0:00.000
        Image: Lenna_(test_image).png
          Channel distortion: PAE
            red: 1028 (0.0156863)
            green: 1028 (0.0156863)
            blue: 1028 (0.0156863)
            all: 1028 (0.0156863)
        Lenna_(test_image).png=> PNG 512x512 512x512+0+0 8-bit sRGB 473831B 0.040u 0:00.020
    1 replies | 178 view(s)
  • SEELE's Avatar
    Yesterday, 11:44
    Thanks Jon, there's a bug! I'll fix it and then reupload (mods, feel free to delete this post for now).
    1 replies | 178 view(s)
  • Shelwien's Avatar
    Yesterday, 02:28
    Done
    4 replies | 1002 view(s)
  • L0laapk3's Avatar
    Yesterday, 01:11
    I will (hopefully) be compressing the data right away, before I ever write it to storage. Using a bitmap to store only 1 bit for every zero is a great idea; I'll also look into RLE. Maybe I can implement something myself, or maybe I'll rely on whatever compression library I end up using to do it for me.
    As for the use case: in Onitama, at the beginning of the game you draw 5 cards from 16 cards randomly, and for the rest of the game you use the same cards. I'm currently hoping to generate all 4368 end-game tablebases for every combination of cards once, storing all of this as compressed as I can. Once the game starts, I can unpack the tablebase for the right set of cards, and from there on it fully fits into RAM, so access speed is not a worry.
    I am not quite sure what the largest 8-bit value will be, as I have only generated the tablebase for a couple of card combinations so far; generating it for all card combinations will be quite an undertaking. Currently this 8-bit value contains the distance to win or distance to loss (a signed integer), divided by 8, so 8 optimal steps in the direction of the win correspond to a decrease in this value of 1. My implementation of the game's AI should be able to navigate its way through this. This means that with the 8-bit value I could store distances to win of up to 1024 (any longer sequences I would just discard), which does seem a little on the high side; I might be able to get away with 6 or 7 bits, but I will only know for sure once I generate all the tablebases for every card combination.
    In the file above, I had not included all the losing values. Since for every winning board there is exactly 1 opposite board with the same losing value, I can compute these when unpacking, so I can throw away the sign bit of the value, leaving only 7 bits. Further, I have the exact count of each value before I even start compressing: ~90% of non-zero values will be 1, ~8% will be 2, ... For the higher values this depends quite a bit on the card set: some barely have any high values at all and some have a lot. I can try creating some sort of Huffman encoding for this myself (see the sketch below); I'm just not sure whether handing the later compressor my own Huffman-coded output would hinder it compared to letting it model the raw data itself.
    4 replies | 283 view(s)
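    Before writing a Huffman coder, one quick sanity check is to compute the Shannon entropy of the value histogram, since the exact counts are known up front; it bounds what any entropy coder (including one built into a compression library) can achieve with the same symbol model. A minimal C++ sketch; the counts are hypothetical stand-ins for the ~90%/~8% distribution described above:
        #include <cmath>
        #include <cstdint>
        #include <cstdio>
        #include <vector>

        int main() {
            // Hypothetical histogram: count[v] = occurrences of non-zero value v.
            std::vector<uint64_t> count = {0, 900000, 80000, 15000, 4000, 1000};
            uint64_t total = 0;
            for (uint64_t c : count) total += c;
            double bits = 0.0;
            for (uint64_t c : count) {
                if (!c) continue;
                double p = double(c) / double(total);
                bits += double(c) * -std::log2(p); // ideal code length for this symbol
            }
            printf("%.3f bits/value, ~%.0f bytes total\n", bits / double(total), bits / 8.0);
            return 0;
        }
    If the figure is close to what the general-purpose compressor already achieves on the raw bytes, a custom prefix code would mostly just hide structure from it.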
  • Gotty's Avatar
    23rd November 2020, 23:30
    I'd like to hear more about your use case. I suppose you'd need to fetch the 8-bit information quickly given the 31-bit key, right? Would such a lookup be performed multiple times during a single board-position evaluation, or just once? Or just occasionally during the endgame, when the number of pieces is below n, for example? So: how often do you need to fetch data from these maps? How much time is OK for fetching an 8-bit value: nanoseconds, milliseconds, one second? What is the largest 8-bit value you may have in all your maps? It is 90 in the sample you provided; any larger? It looks like there are only even numbers. Is that the case only for this file, or is it general? Or do some files have odd and some have even numbers (consistently)?
    4 replies | 283 view(s)
  • CompressMaster's Avatar
    23rd November 2020, 22:45
    @shelwien, could you delete the member BestComp? That account belongs to me (I created it when we discussed the one-email-address-two-accounts issue), but I don't remember the registration email and I don't want to have this account at all. Thanks.
    4 replies | 1002 view(s)
  • Shelwien's Avatar
    23rd November 2020, 20:14
    1. One common method for space reduction is bitmaps: add a separate bit array of (value>0) flags; this alone gives 8x compression of the zero bytes (see the sketch below). Of course, you can also assign flags to pages of convenient size, or even use a hierarchy of multiple bitmaps. 2. There are visible patterns in the non-zero values, e.g. ("04" 29 x "00") x 16. Maybe use RLE (or any LZ, like zlib or zstd) for the non-zero pages.
    4 replies | 283 view(s)
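    A minimal sketch of the bitmap idea in C++ (the names and the linear rank scan are illustrative, not from any particular library): one presence bit per byte of the dense table, plus the non-zero bytes packed in order.
        #include <cstdint>
        #include <vector>

        // Split a mostly-zero byte table into (a) a presence bitmap with one bit
        // per input byte and (b) the non-zero bytes, packed in order.
        struct SparseTable {
            std::vector<uint8_t> bitmap; // 1 bit per original byte
            std::vector<uint8_t> values; // only the non-zero bytes
        };

        SparseTable pack(const std::vector<uint8_t>& dense) {
            SparseTable s;
            s.bitmap.assign((dense.size() + 7) / 8, 0);
            for (size_t i = 0; i < dense.size(); ++i)
                if (dense[i]) {
                    s.bitmap[i >> 3] |= uint8_t(1u << (i & 7));
                    s.values.push_back(dense[i]);
                }
            return s;
        }

        uint8_t lookup(const SparseTable& s, size_t i) {
            if (!((s.bitmap[i >> 3] >> (i & 7)) & 1)) return 0;
            size_t rank = 0; // how many non-zero bytes precede position i
            for (size_t j = 0; j < i; ++j) // a real build would precompute per-block popcounts
                rank += (s.bitmap[j >> 3] >> (j & 7)) & 1;
            return s.values[rank];
        }
    For the 2^31-entry table discussed above, the bitmap alone is 2^28 bytes (256 MB) before compression, but its long zero runs compress extremely well, and hierarchical page flags shrink it further.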
  • pklat's Avatar
    23rd November 2020, 17:28
    I'd just use a 'proper' filesystem that supports sparse files, on a 'proper' OS :) On Windows, perhaps you could use on-the-fly compression; I think it doesn't support sparse files. tar supports them, IIRC.
    4 replies | 283 view(s)
  • lz77's Avatar
    23rd November 2020, 15:37
    Hm, when I run TestGLZAdll.exe from test.bat with my compressor dll renamed to GLZA.dll, I get the error "TestGLZAdll.exe - application error 0xc000007b"... The test works correctly with the original GLZA.dll. Oh, the original GLZA.dll also contains both decodeInit and decodeRun, but I have separate dlls for encoding and decoding. Perhaps this is the reason for the error...
    125 replies | 12735 view(s)
  • L0laapk3's Avatar
    23rd November 2020, 14:14
    Hi all, let me start off by mentioning that I am a complete noob at compression; I understand basic concepts about information density, but that's about it. I'm currently building tablebases for a board game (Onitama), and the files I end up with are quite large. The table is an unordered map structure with a unique 31-bit board key and an 8-bit distance-to-win value; there are around 2E8 entries in the map. The best result I've managed is by removing the key altogether and storing a large byte array with 2^31 address slots. Since most of the keys don't exist, the array consists mostly of zeroes. This results in a much larger uncompressed file of 2 GB; however, 7z managed to compress it to just 47 MB, 2% of its original volume. My question is the following: are there more efficient ways to store such sparse data? My goal is to store around 4400 of these, which currently puts the total size in excess of 200 GB. I have included one such file for testing: https://we.tl/t-cYtvo5Fw90 (47MB). Many thanks, L0laapk3
    4 replies | 283 view(s)
  • Dresdenboy's Avatar
    23rd November 2020, 11:22
    :_good2:
    125 replies | 12735 view(s)
  • lz77's Avatar
    23rd November 2020, 11:04
    I just saw where the error is: It happened while adapting my subroutine to the conditions of the competition. I confused the name of one variable.
    125 replies | 12735 view(s)
  • suryakandau@yahoo.co.id's Avatar
    23rd November 2020, 02:54
    Using the -8 option on enwik6, the result is: paq8sk35 196112 bytes, paq8sk36 196112 bytes. Here are paq8sk35 and paq8sk36 rebuilt with the -DNDEBUG flag.
    176 replies | 15878 view(s)
  • suryakandau@yahoo.co.id's Avatar
    23rd November 2020, 02:26
    @darek yes, Gotty is right about paq8sk34: it had a serious flaw. @gotty I will put back the -DNDEBUG flag and fix CHANGELOG and README. Thank you Darek!! Thank you Gotty!!
    176 replies | 15878 view(s)
  • Cyan's Avatar
    23rd November 2020, 00:46
    Cyan replied to a thread ARM vs x64 in The Off-Topic Lounge
    Phoronix made an early comparison of M1 CPU performance on the Mac mini: https://www.phoronix.com/scan.php?page=article&item=apple-mac-m1&num=1 The results are impressive, especially for the power class M1 works in. Even when running `x64` code through an emulation layer, M1 is still faster than its ~2-year-old Intel competitor (i7-8700B). The downside of this study is that, since it only uses Mac mini platforms, it isn't comparing against Intel's newer Tiger Lake, which would likely show a more nuanced picture. Still, it's an impressive first foray into PC territory.
    17 replies | 2045 view(s)
  • Gotty's Avatar
    23rd November 2020, 00:15
    Gotty replied to a thread Paq8sk in Data Compression
    @Darek, I think sk34 is not worth testing: it had a serious flaw, which is fixed in sk35.
    176 replies | 15878 view(s)
  • Gotty's Avatar
    23rd November 2020, 00:01
    Gotty replied to a thread Paq8sk in Data Compression
    I'm glad you found the problem with the assert. I haven't looked too deeply (into v35) yet; I found mostly cosmetic issues so far. The compression also looks good (for text files); for binary files there are fluctuations, and some files are significantly worse. You should REALLY test with many file types, as I mentioned a couple of times and as Darek has just written in his latest post. Did you put back the -DNDEBUG flag when publishing the release build? (With the asserts in effect, speed is worse.) You'll need to fix CHANGELOG, as it is missing the log entry for paq8px_v197. README: as I mentioned, you cannot just replace px with sk; from the README one would get the impression that paq8sk started in 2009, and the link to the thread is also incorrect. Please read the README carefully (and maybe fix it). There are also many references to paq8px in many other files; the most important ones are probably the Visual Studio solution file and CMakeLists.txt, which still look for paq8px.cpp. Do you plan to fix them?
    176 replies | 15878 view(s)
  • suryakandau@yahoo.co.id's Avatar
    22nd November 2020, 22:52
    Yes sir I can provide it
    176 replies | 15878 view(s)
  • Darek's Avatar
    22nd November 2020, 22:03
    Darek replied to a thread Paq8sk in Data Compression
    @suryakandau -> first of all: paq8sk34 got quite interesting results for textual files on my testbed: not big differences, but the best scores for all textual files overall!!!! Scores for other files are worse than paq8px_v195 or paq8px_v196, which are (I understand) the base for paq8sk34. However, I'll test paq8sk35 and paq8sk36 next. You should pay more attention to the overall compression ratio: it's not best practice when the ratios of one or two kinds of files are improved at the cost of the other files. My request: could you provide a comparison of enwik6 results for paq8sk34, paq8sk35, and paq8sk36? I'll prepare to test one of them on enwik8 (at the moment that's the longest file I'm able to test) and I need to choose the best :)
    176 replies | 15878 view(s)
  • Shelwien's Avatar
    22nd November 2020, 17:17
    Kennon Conrad just posted his framework in another thread: https://encode.su/threads/1909-Tree-alpha-v0-1-download?p=67549&viewfull=1#post67549 You can rename your dll to GLZA.dll and see what happens. Reposting the binaries here, because he missed a mingw dll.
    125 replies | 12735 view(s)
  • lz77's Avatar
    22nd November 2020, 16:47
    Thanks. If I had known in June which category was the right one for my participation, I would have made a preprocessor and written these dlls in FASM for Win64... Unfortunately I haven't practiced in C yet. Maybe someone will send me this C test program for CodeBlocks... Here is my Delphi 7 example. It feeds the dll the contents of a file named "bl00.dat" and writes the compressed data to a file named "packed". P.S. packedsize is a var parameter and is passed to the dll by pointer. Here's the declaration from the dll (a C equivalent is sketched below):
        type pb = ^byte;
        ...
        function encodeRun(inSize: dword; inPtr: pb; var outSize: dword; outPtr: pb;
                           cmprContext: pointer): longint; cdecl;
    125 replies | 12735 view(s)
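    For reference, a hedged C/C++ equivalent of that Delphi declaration (assuming the interface really is cdecl with a by-pointer output size, as the Delphi code implies; the parameter names come from the posts in this thread, not from a published header):
        #include <stdint.h>

        // Delphi dword = 32-bit unsigned; "var outSize: dword" becomes a pointer;
        // longint = 32-bit signed; cdecl matches the declaration above.
        extern "C" int32_t encodeRun(uint32_t inSize, uint8_t* inPtr,
                                     uint32_t* outSize, uint8_t* outPtr,
                                     void* cmprContext);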
  • Dresdenboy's Avatar
    22nd November 2020, 16:29
    Could be both. The reason could be, for example, accessing something via a freed pointer, or outside of allocated memory arrays.
    125 replies | 12735 view(s)
  • suryakandau@yahoo.co.id's Avatar
    22nd November 2020, 16:24
    Paq8sk36
    - improved jpeg recompression ratio
    - the speed is still fast
    The source code and binary file are inside the package.
    Using the -8 option on a10.jpg:
    paq8px197 624646 bytes
    paq8sk36 624084 bytes
    Using the -8 option on dsc0001.jpg:
    paq8px197 2194630 bytes
    paq8sk36 2193315 bytes
    176 replies | 15878 view(s)
  • lz77's Avatar
    22nd November 2020, 16:21
    Does error 0xC0000005 mean only writing to unallocated memory, not reading?
    125 replies | 12735 view(s)
  • Dresdenboy's Avatar
    22nd November 2020, 15:54
    That result sounds good! The error is a typical access-violation error (either non-allocated memory was accessed, or a function was called which doesn't exist). You might need to build a very small C console app to test it. The free Visual Studio Express might help here.
    125 replies | 12735 view(s)
  • Shelwien's Avatar
    22nd November 2020, 15:46
    > dll fails with the error 0xC0000005. This does not mean anything to me
    Well, it's the Windows crash code for "access violation", i.e. writing to unallocated memory. Could be some buffer overflow.
    125 replies | 12735 view(s)
  • Jyrki Alakuijala's Avatar
    22nd November 2020, 15:28
    We acknowledge these. We will deliver decoding speed improvements in coming versions, most likely on a week-by-week basis. I think we have 20+% in the pipeline now. A larger jump (2x+) requires transferring much of the computation to the GPU. This is similar to Chrome doing color conversions on the GPU with libjpeg-turbo. We can just transfer more there, including some parts of progressive rendering and the 'gaborish' and 'epf' filtering. For encoding and decoding memory reduction, we know how to do it, and we need to focus on that specifically soon. It will be an increase in complexity which will slow down further development speed; that is why we have postponed it until the format is frozen.
    31 replies | 2105 view(s)
  • lz77's Avatar
    22nd November 2020, 15:12
    On November 18, I happened to see that in test 4 the rapid-blocks results are weak. Therefore, on November 19, day and night, I adapted my simple LZ-type example, written in the old (year 2002) Delphi 7, to this test and sent it in at 5 am. It should compress block40.dat on the test PC to 146.4 million bytes in ~3.3 sec. (global time), and decompress it in ~1.25 sec. But the testing staff emailed me that my compress dll fails with the error 0xC0000005. This does not mean anything to me; perhaps there is an error somewhere when working with the encodeRun parameters. All functions in the dlls are declared as cdecl. In my test in Delphi everything works without errors. I have until November 27th to fix the bug, and where to look for it is not entirely clear... I feel that I will never see the prize... :_sorry2: It would be nice if the organizers of this competition had published a template for these dlls and test samples in Delphi 32 before it began; then there would be no such questions...
    125 replies | 12735 view(s)
  • maadjordan's Avatar
    22nd November 2020, 13:53
    I hope JPEG XL gets added to the PDF format too, but decoding speed needs to be improved and memory requirements reduced.
    31 replies | 2105 view(s)
  • suryakandau@yahoo.co.id's Avatar
    22nd November 2020, 12:56
    Paq8sk35
    - based on paq8px197, with improved text compression ratio
    - compression speed is the same as paq8px197
    The package with source code and binary is attached.
    Using the -8 option on enwik6.txt:
    paq8px197 196266 bytes
    paq8sk35 196112 bytes
    Using the -8 option on enwik5.txt:
    paq8px197 23752 bytes
    paq8sk35 23727 bytes
    176 replies | 15878 view(s)
  • Kennon Conrad's Avatar
    22nd November 2020, 11:09
    I found a few problems while developing a .dll for the GDC competition: a couple of memory initialization problems, a memory leak, a model initialization problem, and a problem with files that ended with a capital-locked string. Also, I am including a second .zip file that contains GLZA.dll, a .dll based on GLZA v0.11.4 that is compatible with the GDC dll requirements.
    878 replies | 435362 view(s)
  • Dresdenboy's Avatar
    21st November 2020, 16:03
    Did you manage to finish something? It didn't look that hard to beat the current contenders. My from-scratch implementation, with hash-based LZ77 and some form of my own ANS, lands at ~126*10^6 bytes in 6 s for the open part, without much tuning and after throwing out some more complex ideas. But I would still have had to do full testing of the DLL interface and some debugging of the decompressor, and I had to give up, as the last weeks were already full of work, family demands, and other energy-consuming tasks. But at least the temporary motivation brought me forward quite a bit! :_good2:
    125 replies | 12735 view(s)
  • aev's Avatar
    21st November 2020, 15:38
    aev replied to a thread paq8px in Data Compression
    Thanks for clarifying. I was impressed by the high compression ratio of this compressor and wanted to use it as an alternative to LZMA2, Brotli, and ZSTD for compressing small utf-8 text blocks (1-64 kb)...
    2252 replies | 588756 view(s)
  • FatBit's Avatar
    21st November 2020, 10:57
    FatBit replied to a thread zpaq updates in Data Compression
    "I guess nobody cares about having a quick method of verifying ZPAQ files…" A wrong assumption, at least in my case… Best regards, FatBit
    2594 replies | 1118287 view(s)
  • danlock's Avatar
    21st November 2020, 06:54
    In case anyone finds it relevant, here are two blog posts from Intel about TSX (Transactional Synchronization Extensions), one of which includes a link to a PDF file containing a Draft Specification of Transaction Language Constructs for C++ (v1.1, 35 pages, 725KB). I'm not sure how useful that C++ spec will be. Here are the two blog pages: https://software.intel.com/content/www/us/en/develop/blogs/coarse-grained-locks-and-transactional-synchronization-explained.html https://software.intel.com/content/www/us/en/develop/blogs/transactional-synchronization-in-haswell.html Chapters 1 & 8 of the Intel Architecture Instruction Set Extensions Programming Reference, 319433-012A/Feb 2012 (PDF file, 2.49MB: https://software.intel.com/sites/default/files/m/9/2/3/41604 ), contain many details. In fact, all of Chapter 8 (the last chapter, prior to the appendices and index) is about TSX! The PDF format makes it easy to click between topics. *shrug*
    29 replies | 5788 view(s)
  • fcorbelli's Avatar
    20th November 2020, 18:27
    fcorbelli replied to a thread zpaq updates in Data Compression
    It wasn't super easy, but you can do an integrity check of ZPAQ files (using the venerable CRC32) at pretty much the rate at which the files are extracted. The "trick" consists in keeping track of each fragment decompressed in parallel by the various threads, dividing them by file, and then calculating each fragment's CRC32. Broadly speaking, like this:
        uint32_t crc;
        crc = crc32_16bytes(out.c_str() + q, usize);
        s_crc32block myblock;
        myblock.crc32 = crc;
        myblock.crc32start = offset;
        myblock.crc32size = usize;
        myblock.filename = job.lastdt->first;
        g_crc32.push_back(myblock);
    Finally, after having sorted the individual blocks...
        bool comparecrc32block(s_crc32block a, s_crc32block b)
        {
            char a_start[32];
            char b_start[32];
            sprintf(a_start, "%014lld", a.crc32start);
            sprintf(b_start, "%014lld", b.crc32start);
            return a.filename + a_start < b.filename + b_start;
        }
    ...the "final" CRC32 can be calculated by combining the individual CRC32s (snippet for a single file; see the crc32_combine sketch below):
        for (auto it = g_crc32.begin(); it != g_crc32.end(); ++it)
            printf("Start %014lld size %14lld (next %14lld) CRC32 %08X %s\n",
                   it->crc32start, it->crc32size, it->crc32start + it->crc32size,
                   it->crc32, it->filename.c_str());
        printf("SORT\n");
        sort(g_crc32.begin(), g_crc32.end(), comparecrc32block);
        uint32_t currentcrc32 = 0;
        for (auto it = g_crc32.begin(); it != g_crc32.end(); ++it)
        {
            printf("Start %014lld size %14lld (next %14lld) CRC32 %08X %s\n",
                   it->crc32start, it->crc32size, it->crc32start + it->crc32size,
                   it->crc32, it->filename.c_str());
            currentcrc32 = crc32_combine(currentcrc32, it->crc32, it->crc32size);
        }
        printf("%08X\n", currentcrc32);
    Getting something like this example:
        13
        Start 00000001680383 size  93605 (next 1773988) CRC32 401AB2BB z:/myzarc/Globals.pas
        Start 00000001123668 size 143047 (next 1266715) CRC32 7060F541 z:/myzarc/Globals.pas
        Start 00000001354597 size 239761 (next 1594358) CRC32 E183AA23 z:/myzarc/Globals.pas
        Start 00000001798290 size 110253 (next 1908543) CRC32 ECC91F12 z:/myzarc/Globals.pas
        Start 00000001928573 size  64971 (next 1993544) CRC32 0F19B1F5 z:/myzarc/Globals.pas
        Start 00000000189470 size 372350 (next  561820) CRC32 AFC2ADD7 z:/myzarc/Globals.pas
        Start 00000001266715 size  87882 (next 1354597) CRC32 4086324B z:/myzarc/Globals.pas
        Start 00000001594358 size  86025 (next 1680383) CRC32 92AA9167 z:/myzarc/Globals.pas
        Start 00000001773988 size  24302 (next 1798290) CRC32 44F3E2AE z:/myzarc/Globals.pas
        Start 00000001908543 size  20030 (next 1928573) CRC32 1D7D9886 z:/myzarc/Globals.pas
        Start 00000000059210 size 130260 (next  189470) CRC32 64E567C0 z:/myzarc/Globals.pas
        Start 00000000561820 size 561848 (next 1123668) CRC32 481E0307 z:/myzarc/Globals.pas
        Start 00000000000000 size  59210 (next   59210) CRC32 E1B00AE8 z:/myzarc/Globals.pas
        SORT
        Start 00000000000000 size  59210 (next   59210) CRC32 E1B00AE8 z:/myzarc/Globals.pas
        Start 00000000059210 size 130260 (next  189470) CRC32 64E567C0 z:/myzarc/Globals.pas
        Start 00000000189470 size 372350 (next  561820) CRC32 AFC2ADD7 z:/myzarc/Globals.pas
        Start 00000000561820 size 561848 (next 1123668) CRC32 481E0307 z:/myzarc/Globals.pas
        Start 00000001123668 size 143047 (next 1266715) CRC32 7060F541 z:/myzarc/Globals.pas
        Start 00000001266715 size  87882 (next 1354597) CRC32 4086324B z:/myzarc/Globals.pas
        Start 00000001354597 size 239761 (next 1594358) CRC32 E183AA23 z:/myzarc/Globals.pas
        Start 00000001594358 size  86025 (next 1680383) CRC32 92AA9167 z:/myzarc/Globals.pas
        Start 00000001680383 size  93605 (next 1773988) CRC32 401AB2BB z:/myzarc/Globals.pas
        Start 00000001773988 size  24302 (next 1798290) CRC32 44F3E2AE z:/myzarc/Globals.pas
        Start 00000001798290 size 110253 (next 1908543) CRC32 ECC91F12 z:/myzarc/Globals.pas
        Start 00000001908543 size  20030 (next 1928573) CRC32 1D7D9886 z:/myzarc/Globals.pas
        Start 00000001928573 size  64971 (next 1993544) CRC32 0F19B1F5 z:/myzarc/Globals.pas
        4A99417C
    The key is the CRC32 property of being computable "in blocks" and then combined. Remember that the sequence of blocks is not ordered, and that in general multiple threads are processing simultaneously. Using a method similar to the one I implemented in my fork (i.e., extending the file's attribute portion), you can store the CRC32 inside the ZPAQ file in a way that is compatible with older versions. I guess nobody cares about having a quick method of verifying ZPAQ files, but it's one of the most annoying shortcomings for me. It was a difficult job to "disassemble" the logic of ZPAQ without help... but... I did it!
    2594 replies | 1118287 view(s)
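    The "in blocks" property mentioned above is exactly what zlib's crc32_combine() provides. A minimal self-contained C++ sketch (the two buffers are hypothetical stand-ins for decompressed fragments) showing that combining per-fragment CRCs matches the CRC of the whole stream:
        #include <cassert>
        #include <cstring>
        #include <zlib.h>

        int main() {
            const char a[] = "hello ";  // fragment 1
            const char b[] = "world";   // fragment 2
            // CRC of each fragment, each computed from the zero seed.
            uLong crc_a = crc32(0L, (const Bytef*)a, (uInt)strlen(a));
            uLong crc_b = crc32(0L, (const Bytef*)b, (uInt)strlen(b));
            // CRC of the concatenated data, computed directly.
            uLong crc_ab = crc32(0L, (const Bytef*)"hello world", 11);
            // crc32_combine needs only the two CRCs and the length of the
            // second fragment: no re-reading of the data is required.
            assert(crc32_combine(crc_a, crc_b, (z_off_t)strlen(b)) == crc_ab);
            return 0;
        }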
  • moisesmcardona's Avatar
    20th November 2020, 15:39
    moisesmcardona replied to a thread paq8px in Data Compression
    There's no DLL implementation, but if you want to integrate it into software, you can call it via the command line, like I do in PAQCompress, and then parse the console output. The only downside is that it will not report the real progress, due to the redirected output.
    2252 replies | 588756 view(s)
  • Scope's Avatar
    20th November 2020, 13:06
    Yes, at home I have 1 Gb/s Internet, but even at this speed I sometimes notice that images are not displayed instantly on many sites (and it does not mean that this speed will hold to every server, or that there will be no other delays in image transfer). It's good that large sites try to optimize images and use LQIP or color blocks until the full image is received by the client. I also recently left the hospital, where I spent more than two weeks and had to use the Internet from a mobile operator with not-very-good coverage and an often overloaded network: the speed was very unstable, from 50 Mbps to 2 Kbps, and browsing some sites with images became very uncomfortable, especially where LQIP and its alternatives and progressive loading were not used at all and I had to wait for the full image.
    At the moment I am encoding the whole set of images in lossless WebP v2 and want to add it to my comparison. The preliminary result is almost everywhere better than WebP v1, and the encoder in lossless mode has very high multithreading efficiency (if the resolution is large enough); the speed at maximum compression (effort 9) is quite slow, although it is higher than speed 9 in JPEG XL. However, in lossy mode multithreading for some reason works badly (but since I compile the Windows version myself, it's possible that something doesn't work properly), and therefore the encoding is extremely slow (20-50 times slower than AVIF); at the same time, AVIF (using the latest libaom builds) has become quite fast even at the slowest speed. The quality of WebP v2, at the moment and on my test images, was worse than AVIF, and visually the compression is very similar to how AVIF works. As an advantage, memory consumption when encoding WebP v2 is minimal (compared to AVIF and even JPEG XL on large images, although as far as I know their encoders have not fully implemented chunking and tiling). I haven't compared near-lossless modes yet, but it's a good idea to place them at quality values 96-99, so they will be found and used much more often (I learned about the near-lossless mode in WebP v1 not so long ago, after more than 10 years of the format's existence).
    31 replies | 2105 view(s)
  • Ms1's Avatar
    20th November 2020, 11:30
    Yes, received. We collect emails from several mailboxes, so hopefully everything is delivered eventually, but it makes sense to double-check if there has been no confirmation for a long time.
    125 replies | 12735 view(s)
  • Jyrki Alakuijala's Avatar
    20th November 2020, 02:03
    In some use cases it is a problem, in others it is a blessing. Creative compression can be bad in verification and validation imaging, medical imaging, or recording a crime scene. In content creation such as game textures it is just good that the system fills in the gaps. In a quick selfie it can be ok that skin imperfections are erased. It all comes down to system utility and fairness. If the system utility and fairness are increased overall, then a technology may be good to deploy. Certainly the neural compression systems can increase the compression density.
    4 replies | 779 view(s)
  • lz77's Avatar
    20th November 2020, 01:23
    Unfortunately there was not enough time to do that... On Nov 18 I saw that in the rapid block test the competition is weak, and on Nov 19 I started on the rapid block test. At this moment I'm finishing these dlls... Maybe by the afternoon of Nov 20 I will beat zstd... ;)
    125 replies | 12735 view(s)
  • Gotty's Avatar
    20th November 2020, 01:05
    Gotty replied to a thread paq8px in Data Compression
    No, there is not. It's an experimental compressor: too slow, too much memory use. Impractical. So no one is banging on the door demanding that it be easy to embed into other software, and it has a simple interface (command line): no API, no DLL version. What is your use case? What's on your mind? (Warning: it's not recommended for any production use.)
    2252 replies | 588756 view(s)
  • Ms1's Avatar
    20th November 2020, 00:17
    I hope somebody among you guys will beat the reference lib (Zstd). After all, the block test is the most data-dependent, and you know 40% of the data.
    125 replies | 12735 view(s)
  • Ms1's Avatar
    20th November 2020, 00:08
    Well, welcome. The rapid category of the block test is a bit neglected, so, in terms of strategy, this is a wise choice. But I must say that sending a submission on the last day requires some guts, especially for the library test. I recommend sending something ASAP in order to check that the library works correctly.
    125 replies | 12735 view(s)
  • lz77's Avatar
    19th November 2020, 22:50
    A-a-a... What is the ratio? :_superstition2:
    125 replies | 12735 view(s)
  • aev's Avatar
    19th November 2020, 22:50
    aev replied to a thread paq8px in Data Compression
    Is there any DLL implementation of paq8px ?
    2252 replies | 588756 view(s)
  • pklat's Avatar
    19th November 2020, 20:31
    So the problem now is that compression artifacts are 'unpredictable'? You can't tell how much, or in what way, the image has been changed/distorted unless you have the original?
    4 replies | 779 view(s)
  • Darek's Avatar
    19th November 2020, 20:28
    Darek replied to a thread paq8px in Data Compression
    "Max ultra" is my optimized set of settings for all corpuses (method plus switches) that I've found. My maxed options are in the attached file.
    2252 replies | 588756 view(s)
  • Dresdenboy's Avatar
    19th November 2020, 19:48
    On Nov 3rd I started with a compressor implementation from scratch (but with preexisting ideas of mine), trying to learn some techs. Results don't look too bad, so I'm finalizing it now. :) Edit: This will be a block compressor for the rapid category, tailored a bit to the task.
    125 replies | 12735 view(s)
  • Jyrki Alakuijala's Avatar
    19th November 2020, 19:23
    There is a conference on the topic at http://www.compression.cc/ It has been organized since 2018, and the participants in that conference represent the current state of the art. It is relatively easy to participate in their automated compression competition, which is organized about 2 months before the conference. The best effort from the compression-density/visual-quality viewpoint that I know of is https://hific.github.io/ Most such efforts are not practical with current hardware, and the next challenge in the field is to get some of the benefits into a practical form. We have attempted to get some of those benefits into JPEG XL. Specifically, I plan to use a gradient search at encoding time, but more traditional (yet larger, selectively layered, and overcomplete) transforms at decoding time.
    4 replies | 779 view(s)
  • LucaBiondi's Avatar
    19th November 2020, 18:16
    LucaBiondi replied to a thread paq8px in Data Compression
    Hello Darek, what is the "max ultra" label in the paq8px results? Is it two options? Thank you! Luca
    2252 replies | 588756 view(s)
  • Darek's Avatar
    19th November 2020, 16:50
    Darek replied to a thread paq8px in Data Compression
    Scores for 4 corpuses by paq8px v197, and a table with the best scores for the 4 corpuses by the best compressors in terms of compression ratio. In general:
    - best score for the Calgary corpus (for non-solid/tar files)
    - best score for the Canterbury corpus overall (solid or tar files got worse scores)
    - best score for the MaximumCompression corpus
    - first time ever the total score is deep below 5'900'000 bytes (!)
    - still the second-best score for the Silesia corpus: only 55.5 KB worse than the latest cmix v18 with the precomp preprocessor (!)
    2252 replies | 588756 view(s)
  • pklat's Avatar
    19th November 2020, 16:31
    It seems NVIDIA is already doing it with live video: https://arstechnica.com/gadgets/2020/11/nvidia-used-neural-networks-to-improve-video-calling-bandwidth-by-10x/
    4 replies | 779 view(s)
  • Kennon Conrad's Avatar
    19th November 2020, 11:01
    I did some BIOS testing at work about a month ago. The Spectre and Meltdown fixes added almost 1 minute (50%) to the OS boot time.
    29 replies | 5788 view(s)
  • hotaru's Avatar
    19th November 2020, 09:57
    these mitigations are essential for anyone who runs untrusted code. that includes typical home users who run JavaScript (much of which is from malicious advertisements) in web browsers.
    29 replies | 5788 view(s)
  • Jyrki Alakuijala's Avatar
    19th November 2020, 01:23
    Today's average website likely has 1.5 MB of images; an image-heavy site has 3-10 MB. Backend response times may relate to the payload size. 4K and 8K monitors are coming and require more bytes to provide value to the user. Some use cases (like going through all the images in an album visually) benefit a lot from a <50 ms response. Sometimes a millisecond of reduced latency means an observable and significant-to-business change in user behaviour.
    31 replies | 2105 view(s)
  • e8c's Avatar
    19th November 2020, 00:52
    In some cases. For web consumption, FLIF is better than PNG only if the network speed is lower than 20-30 Mb/s. The best compression makes no sense if the decompression speed is too low.
    31 replies | 2105 view(s)
  • danlock's Avatar
    18th November 2020, 22:20
    It's important to note that although these CPU patches and side-channel attack mitigations are essential for servers, CDNs, and other important WAN-facing machines to apply in order to prevent serious attacks, a typical home user might not need to worry about them. Someone like me, who rarely leaves his PC powered and online for multiple days at a time and never exposes it to the WAN unprotected, can probably get away with leaving those vulnerabilities unpatched (for the most part) and enjoy using the CPU without any slowdowns. Edit: I forgot about the many laptop users, who don't frequently power down completely, and other home users. They are almost certainly not as careful as I am and probably don't notice a few seconds or a minute added to a task or OS boot time. Those people, the majority, should have as many patches and mitigations enabled as possible... especially when they're employees in a large company. Large companies and organizations are of particular interest to distributors of ransomware, etc.
    29 replies | 5788 view(s)