Activity Stream

  • Shelwien's Avatar
    Today, 15:49
    > So it appears there is something wrong with the mp3 compression and
    > decompression using the codecs packmp3b, packmp3c, and packmp3d.
    Yeah, mainly that they don't work the way you expected. If you want the default packmp3/mpzapi, you can get 7zdll_vF2; it works the way you wanted. However, now only packmp3c/d should actually be used (packmp3b remains only for backward compatibility, because a version of PA with it was released). packmp3c/d are versions of the same codec with different entropy models: packmp3c is a little stronger but slower, packmp3d is faster but still better than the previous ones. They have to be used like this:
    7z.exe a -m0=mp3det -m1=packmp3c 1c.pa cat.mp3
    or:
    7z.exe a -m0=mp3det -m1=packmp3c:mt4:c16M 1c4.pa cat.mp3
    (mt# = number of threads, c# = chunk size per thread)
    See test-mp3b5.bat for the full script, since proper mp3 recompression apparently also involves dedup and jpeg recompression (for ID3 tags). Also I have this: http://nishi.dreamhosters.com/7zcmd.html
    3 replies | 18 view(s)
  • brispuss's Avatar
    Today, 15:04
    Thanks for the prompt and fairly detailed reply! At the moment I understand some of what is being said. I've downloaded the 7zdll_vF7.rar file and installed it under Windows 7 64-bit. Unfortunately there are two issues!
    As a test I tried running the sample batch file test-mp3.bat. A command window appears briefly and then disappears, with nothing more apparently being done!? I had a look at the batch file and found the call to the codec packmp3, but this codec is (apparently) not listed in the list of supported codecs (checked by running 7z.exe i). I found three similar codecs listed instead - packmp3b, packmp3c, packmp3d. So I tried substituting packmp3b for the non-existent "packmp3" codec and tried to run the command manually as: 7z a -m0=packmp3b 1.pa cat.mp3. But now I get an error message that the 7-Zip console has stopped working, and the compression of the file cat.mp3 was not completed. I tried this command several times, with the same error appearing every time!? Then I tried the packmp3c and packmp3d codecs instead (one at a time), and these codecs appeared to work. So there seems to be an issue with the packmp3b codec!?
    However! The codecs packmp3c and packmp3d both produced a file of only 3 kB from an initial size of 157 kB!!! I tried extracting these 3 kB files back to cat.mp3 and they appeared to produce files of the same size (157 kB) as the original. BUT! There was a CRC failed error message each time on extraction (using the command 7z.exe e 1.pa, or 7z.exe e 2.pa). And sometimes I got the "7-Zip console stopped working" error as well! I compared the initial and extracted files and they were definitely NOT bit identical.
    So it appears there is something wrong with the mp3 compression and decompression using the codecs packmp3b, packmp3c, and packmp3d.
    3 replies | 18 view(s)
  • Shelwien's Avatar
    Today, 13:31
    There's simply no usage text - nobody needed it before :) The .bat files are supposed to be usage examples instead. It's like this:
    reflate.exe c advcomp1z.tgz .unp .hif -- preprocess (.unp contains uncompressed data, .hif contains diff data for lossless restoring)
    reflate.exe d .unp .hif .rec -- restore
    or:
    reflate.exe c1 advcomp1z.tgz .unp .hif -- preprocess with a specified target deflate level (-1 here; default is -6)
    reflate.exe d .unp .hif .rec -- restore
    reflate.exe uses dlls, supports nested processing (by adding more .hif filenames on the cmdline) and some extra options (which most users won't need), while reflate_std.exe is standalone, supports only one .hif, but works with stdio pipes.
    reflate is also integrated in 7zdll: http://nishi.dreamhosters.com/u/7zdll_vF7.rar
    There you can use it like this:
    7z a -m0=reflate -m1=lzma archive.pa files
    or:
    7z.exe a -m0=reflate*4:x6666 -m1=lzma archive.pa advcomp1z.tgz
    3 replies | 18 view(s)
  • Shelwien's Avatar
    Today, 13:12
    Yes, but flif encoding is pretty slow, and when it's faster (eg. -E1) the compression gets worse than pik. Also, in pik you'd basically need a couple of files (lossless8.cc, lossless_entropy.cc and headers), while flif is more complicated.
    Btw, here's an idea: make a mode with external files in addition to .pcf (basically -cn). Write bmps to %08X.bmp, jpegs to %08X.jpg etc. There are now archive formats with recompression (7z with Aniskin's plugins, .pa, freearc), so why not let them do this?
    11 replies | 487 view(s)
  • brispuss's Avatar
    Today, 12:49
    I'm currently experimenting with several (pre)compressors on a variety of files, with the intention of archiving the files. I've downloaded the latest version of "reflate" (version 1l2) from this thread. The download includes six executables and two dynamic link library files, plus a sample file and (sample) batch files. Attempting to run any one of the four reflate*.exe executables in an elevated command window results in nothing apparently happening - no help, no syntax, nothing. The executables shar.exe and plzma.exe, when run in the command window, do give syntax usage/help. But what are these two executables for? Why are they included? As per the thread title, how do you use "reflate" on files!? There are no instructions on how to install and run "reflate". Thank you.
    3 replies | 18 view(s)
  • Shelwien's Avatar
    Today, 12:24
    > since bit-probabilities are also modelled with a local laplace-model,
    I actually reached it by extrapolating the codelength:
    l0 = expected codelength in case of next bit=0
    l1 = expected codelength in case of next bit=1
    p0 = 2^-l0 // likelihood of the case with next bit=0
    p1 = 2^-l1
    p = p0/(p0+p1)
    The coef is basically the counter update rate.
    > Some time ago I also tested compression of the residuals with paq8, and the results were inferior.
    paq8px doesn't really have much for that kind of data. Maybe try attaching a wav or bmp header to your residuals; also test paq8k/kx, cmix, nncp.
    > Testing for static p is also a good idea
    I actually have an asynchronous entropy coder based on this (psrc). On decoding, the model receives an already uncompressed block of bits sorted by probability and only has to read a bit from memory after computing the probability. There's a fairly low number of quantized probability values (30-60 depending on data) and a secondary entropy model in the coder thread. It was supposed to be a speed optimization, but in addition I've also got a compression improvement on BWT data.
    55 replies | 47359 view(s)
  • schnaader's Avatar
    Today, 10:40
    Thanks, works now and looks like a good candidate between FLIF and webp for both ratio and decompression speed:
    3,299,928 suns.pcf_cn, decomp 1.0 s
    1,805,356 suns.png
    1,620,308 suns.pcf, decomp 1.3 s
    1,258,722 suns.webp, decomp 0.1 s
    1,200,934 suns.pik, decomp 0.7 s
    1,112,302 suns.flif (-e), decomp 1.8 s
    1,115,026 suns.flif (-e -N), decomp 1.8 s
    1,096,019 suns.flif (-e -Y), decomp 1.8 s
    1,092,796 suns.flif (-e -Y -R4), decomp 1.8 s
    By the way, the FLIF results show what I like about this format: it has high ratios for photographic images and some switches to tune it (-N, -Y, -R) that give reliable improvements without trying too much. Decompression speed will stay mostly the same, and though it's the slowest candidate, it's still asymmetric (encoding takes longer). Also note that PNG reconstruction will be bound by Precomp -cn time, which is 1.0 s, so decompression doesn't necessarily have to be as fast as webp.
    11 replies | 487 view(s)
  • vteromero's Avatar
    Today, 10:16
    vteromero replied to a thread VTEnc in Data Compression
    I'm now focused on testing the library with different data sets. I've identified a couple of suitable ones that have been tested by others before: ts.txt (a text file with a big list of timestamps) and gov2.sorted (which is part of the "Document identifier data set" created by D. Lemire). Besides existing data sets, it'd be ideal to test it with various data distributions, such as random distributions. I need to think more about this option. In any case, I decided to create a separate repository to do the benchmarking. So, if anyone is interested in helping with testing/benchmarking, contributions are welcome! Here is the repository: https://github.com/vteromero/integer-compression-benchmarks
    With regards to the algorithms to test, I'd like to use as many as possible. For now, I'm using the ones included in SIMDCompressionAndIntersection because they are optimised for sorted lists of integers. But I have others in mind, like Elias-Fano, Binary Interpolative, Roaring, etc. Thanks!
    10 replies | 1026 view(s)
  • Sebastian's Avatar
    Today, 08:18
    Well, it does make sense, since bit-probabilities are also modelled with a local laplace-model, and the distribution of the residuals is considered (assumed) to be laplace. Some time ago I also tested compression of the residuals with paq8, and the results were inferior. Testing for static p is also a good idea :)
    55 replies | 47359 view(s)
  • Shelwien's Avatar
    Today, 05:29
    > I have tested Shelwiens idea for "static SSE".
    > I tested if a bitplane is actually compressing and not expanding the data.
    > Unfortunately it only helped (more than ~100 bytes) on one file.
    I suppose it means that bits in these bitplanes are still predictable? That's kinda strange for supposed noise.
    It should also make sense to test bits coded with a specific probability (or range): to detect cases where, out of 100 bits encoded with p=0.5, there are actually 40 zeroes.
    Also "probability extrapolation" can be considered static SSE too: for example, the optimized extrapolation coef value for order0 in CMo8 is 8480/8192=1.035.
    It's also possible to apply these to binary mixer weights and mixed probabilities.
    55 replies | 47359 view(s)
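    A minimal sketch of the per-probability check suggested above (bucket count, names, and output format are assumptions, not from the post): tally the bits actually coded in each probability range and compare the observed zero-rate with the coding probability.

      /* Tally coded bits per probability bucket to detect residual bias,
         e.g. bits coded with p=0.5 that are in fact only 40% zeroes. */
      #include <stdio.h>

      #define NBUCKETS 16
      static unsigned long long total[NBUCKETS], zeroes[NBUCKETS];

      /* Call once per coded bit with the probability of zero used to code it. */
      static void tally(double p0, int bit) {
          int b = (int)(p0 * NBUCKETS);
          if (b < 0) b = 0;
          if (b >= NBUCKETS) b = NBUCKETS - 1;
          total[b]++;
          if (bit == 0) zeroes[b]++;
      }

      static void report(void) {
          for (int b = 0; b < NBUCKETS; b++)
              if (total[b])
                  printf("p0 in [%.3f, %.3f): %llu bits, %.1f%% zeroes\n",
                         (double)b / NBUCKETS, (double)(b + 1) / NBUCKETS,
                         total[b], 100.0 * zeroes[b] / total[b]);
      }

      int main(void) {
          /* toy demo: 100 bits coded with p0=0.5 that are actually 40% zeroes */
          for (int i = 0; i < 100; i++) tally(0.5, i % 10 < 6);
          report();
          return 0;
      }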
  • Shelwien's Avatar
    Today, 04:51
    Shelwien replied to a thread LZ98 in Data Compression
    So, as I said, you need to learn how to rebuild and flash the original firmware first. Maybe you have to delete bytes from these offsets. I wonder if it's some protection rather than checksums or relocs :)
    39 replies | 2369 view(s)
  • Shelwien's Avatar
    Today, 04:37
    @schnaader: I recompiled pik*.exe with SIMD_ENABLE=0 and reuploaded the archive: http://nishi.dreamhosters.com/u/pik_20190529.7z
    11 replies | 487 view(s)
  • Shelwien's Avatar
    Today, 04:31
    > What is 'rep1'? Does it have anything to do with Bulat's rep?
    > Is it an improvement over it, or another thing entirely?
    It's my dedup preprocessor - same function, but a different implementation. Srep is better, but rep1 is much easier for integration since it's stream-based, doesn't need tempfiles, etc.
    > About exe preprocessors, look at these numbers:
    You're partially right, but I actually meant the original dispack from fgiesen, aka disfilter. Just that:
    1) Freearc having some codec doesn't mean that schnaader can use it. For example, the nosso installer has the best disasm preprocessor, but so what?
    2) Testing a few files with freearc doesn't really prove anything; in particular, with external exe preprocessing there's a frequent problem that the archiver would still apply its own exe preprocessing (eg. nz,rz do that).
    So here I ported dispack from freearc: http://nishi.dreamhosters.com/u/dispack_arc_v0.rar
    And yes, it is frequently better than x64flt3. But sometimes it is not:
    1,007,616 oodle282x64_dll
    1,000,369 oodle282x64_dll.dia
    336,415 oodle282x64_dll.b2l // BCJ2 + delta + lzma
    332,920 oodle282x64_dll.xfl // x64flt3 + delta + lzma
    334,971 oodle282x64_dll_dia.lzma // dispack + delta + lzma
    34,145,968 powerarc_exe
    33,603,128 powerarc_exe.dia
    5,255,372 powerarc_exe.b2l
    5,105,065 powerarc_exe.xfl
    5,531,003 powerarc_exe_dia.lzma
    http://nishi.dreamhosters.com/u/exetest_20191112.7z
    11 replies | 487 view(s)
  • telengard's Avatar
    Today, 00:55
    telengard replied to a thread LZ98 in Data Compression
    Something is definitely going on before writing the firmware to flash. There are some subtle changes that the code which copies to RAM makes to the data; it seems to be inserting a word in a few different places, which changes the offsets of everything. I'm digging into why that is happening.
    I took a RAM snapshot right after the compressed firmware was copied to RAM, but before it was written to flash. In RAM there are these small differences. In the attached screenshot, in the top section at offset 0x2AB2E (which is the file before being flashed), the bytes 6E 6F are there, but are gone once copied to RAM! And the compressed file in RAM, at the end of that diff, has a 53 B8 appended which isn't in the file. A similar thing happens in 4 other places in the entire 4M of data. :confused: Very strange!
    It definitely does not seem to be related to the compression - there's no unpacking going on here, just copying. I guess it is possible the code expects certain "sentinel" words at specific offsets (maybe checksums, etc).
    39 replies | 2369 view(s)
  • Sebastian's Avatar
    Yesterday, 22:07
    I have tested Shelwiens idea for "static SSE". I tested if a bitplane is actually compressing and not expanding the data. Unfortunately it only helped (more than ~100 bytes) on one file.
    55 replies | 47359 view(s)
  • data man's Avatar
    Yesterday, 20:50
    data man replied to a thread VTEnc in Data Compression
    It would be interesting to compare with https://github.com/RoaringBitmap/CRoaring
    10 replies | 1026 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 19:46
    I just want to show that Android can do computer vision without taking too long. I think computer vision and data compression have related processes - like deciding what or which bytes to recognize, etc.
    50 replies | 3763 view(s)
  • schnaader's Avatar
    Yesterday, 18:05
    > pik_c --lossless suns.png suns.pik
    CPU does not support all enabled targets => exiting.
    Guess my CPU is too old? Intel Core i5 520M here, instruction set support is MMX, SSE, SSE2..4.2, EM64T, VT-x, AES - so AVX/AVX2 is missing. I'd be happy to include pik in my tests, though.
    At the moment, the plan would be to include both FLIF and webp, as there are images for both where the other performs better. More extreme settings would even include comparing results with/without PNG filtering and with pure lzma. Somehow, image compression is still kind of randomish. From your post, it sounds like pik might be a good default setting, offering fast and decent compression over many types of images.
    11 replies | 487 view(s)
  • Gonzalo's Avatar
    Yesterday, 16:57
    @Shelwien: What is 'rep1'? Does it have anything to do with Bulat's rep? Is it an improvement over it, or another thing entirely?
    About exe preprocessors, look at these numbers: maybe I'm doing something wrong, but after applying Bulat's 'delta' and flzma2, dispack has an advantage over x64flt3 on both PE and ELF formats. Both rclone and precomp are for linux x86_64. Delta has a positive effect on both filters.
    This is some crude test I did today, but I remember doing some more thorough comparisons a while back and having the same results.
    11 replies | 487 view(s)
  • Shelwien's Avatar
    Yesterday, 13:33
    For exes I can suggest this:
    http://nishi.dreamhosters.com/u/x64flt3_v0.rar
    http://freearc.dreamhosters.com/delta151.zip
    http://freearc.dreamhosters.com/mm11.zip
    (all with source)
    dispack is actually not that good - it's better than dumb E8, but it can only parse 32-bit code, and what it does with parsed code isn't much smarter than E8. So if you want something more advanced than x64flt3, consider courgette (it has 32/64 ELF/COFF x86/arm support) or the paq8px exe parser.
    For bmps I'd suggest http://nishi.dreamhosters.com/u/pik_20190529.7z
    FLIF is imho too inconvenient for precomp use (slow encoding etc).
    If you want a dedup filter, I can post rep1 from .pa, it's a configurable CDC dedup preprocessor. Same with cdm, I suppose.
    11 replies | 487 view(s)
  • Bulat Ziganshin's Avatar
    Yesterday, 13:26
    lzma by itself doesn't include exe filters; they are in the separate bcj and bcj2 algos. Also, it's not a good idea to make a lot of small independently compressed (lzma) streams, although front-end deduplication may significantly reduce the losses.
    11 replies | 487 view(s)
  • Krishty's Avatar
    Yesterday, 12:05
    What’s the relation of your link to the topic?
    My CPU is old, so your mileage may vary:
    RGBA 100²: a minute or so
    1000² pixels: 1–2 hours
    for palettized images: multiply by 120, due to ECT trying 120 palette permutations
    At least it scales very well with the number of cores.
    50 replies | 3763 view(s)
  • schnaader's Avatar
    Yesterday, 12:04
    Compiling was a good test, thanks for that! I'm not looking for anything specific, it's more of a "more eyes find more bugs" case - throwing the test version at different people and their systems with different settings.
    Bitmap preprocessing will most likely be done by solving the "Using FLIF for image compression" issue and adding as many image formats as possible (like BMP, PBM/PGM/P.., TGA, PCX, TIFF, ...) to feed FLIF. Also, I recently did some experiments to autodetect images in data without any header (this would help for game data like Unity resources), but this is more of an experimental thing that wouldn't be used as a default setting.
    For exe preprocessing, there are two possible ways: 1) detecting exe streams and processing them in a separate lzma stream, 2) detecting and preprocessing exe streams "manually", by using some library like dispack. The first one would be easier to implement, but would still rely on lzma and is limited to lzma's filters. So it seems the second solution is inevitable in the long run, but it will need some time to develop.
    11 replies | 487 view(s)
  • CompressMaster's Avatar
    Yesterday, 10:14
    CompressMaster replied to a thread cmix in Data Compression
    Please stop spamming other threads with unrelated things (youtube links and your problem). As I told you earlier, that's off-topic. You started one thread, but it does not properly describe your problem... so please ask Shelwien to delete it, and you can start a RELATED new one - i.e. ransomware help or something like that. You need to provide more info about it - screenshots etc. Then I can assist better.
    As for the executable: unfortunately, no, because I haven't developed it that far, although the base kinda works (the real test was done only on one particular sample, but I analyzed many others and it seems that it should work). Therefore it won't defend your system properly. Once I have spare time, there will be one on my website (among many softwares incl. BestComp).
    421 replies | 97929 view(s)
  • Shelwien's Avatar
    Yesterday, 06:20
    Shelwien replied to a thread LZ98 in Data Compression
    Ah, so there's a totally different reason for it - maybe a flashing bug or something. Because the compressed files for the original and the modified uncompressed binaries match up to 0x8CE6D, and the compressed data up to that point unpacks to 0x136C35 bytes, while your actual modified byte is at 0x136C3B. And the decoding error starts at 0x1A60C, so it should happen with the original firmware too.
    39 replies | 2369 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 06:00
    https://youtu.be/eh4c0prEU7g How extremely long for a PNG file? 1 day? 1 week?
    50 replies | 3763 view(s)
  • telengard's Avatar
    Yesterday, 05:53
    telengard replied to a thread LZ98 in Data Compression
    I totally agree, it really doesn't make sense. I had posted partials of the modified and compressed firmware, but here are links to the original uncompressed file and my version modified by one character. Binary diffing them should literally show that one byte change. I put the files here in this zip (they are each ~6.8M, and both exactly the same size): https://www.dropbox.com/s/y2su8uve4ps25ts/uncompressed_files.zip?dl=0 If you can't get at that for some reason, please let me know.
    I have been doing compression with the v2 you had posted a little while back. I didn't install it, because running my script (attached just in case you were interested in seeing it, probably not relevant at all - it's a shell script) on the original uncompressed file and re-compressing it was byte-for-byte exact with the original firmware I started with from Roland. That's not to say there is no issue with my script, but in that case it generated the original.
    I have not yet, but this is a great thing to check - I'll dump that memory tonight. I had scanned over a few hundred bytes in the debugger and compared at the point of the failure, but I will dump the entire thing. Also, all great suggestions, thank you! :) I will try these as well.
    39 replies | 2369 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 05:43
    Could you post the executable file in this forum?
    421 replies | 97929 view(s)
  • Shelwien's Avatar
    Yesterday, 04:03
    Shelwien replied to a thread LZ98 in Data Compression
    No idea yet. You didn't post your modified firmware, so I can't even analyze the differences in the compressed data. Usually I'd expect some special reasons for incorrect decoding... like a wrap-around match, a specific match position, or a specific flag byte value. But here we seem to have exactly the same compression algo, so any strange restrictions (like a window reset every 64k) don't seem possible.
    1) Did you test your firmware reconstruction script? I mean, does it work with the unmodified original firmware when you compress it, attach headers, etc?
    2) Did you try dumping the _compressed_ data from the hardware decoder? Maybe it gets patched somewhere?
    3) There's also the v1 encoder - maybe its output is unpacked correctly?
    4) Split the uncompressed data into 64k blocks, compress the blocks individually, then concatenate the compressed files - it should still be decodable, but without references between 64k blocks (a small split sketch follows this entry).
    5) Try producing an earlier decoding unsync. Since decoding is incorrect anyway, you can change the data at any offset without worrying about its integrity.
    39 replies | 2369 view(s)
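    A small sketch of the split step in item 4 above, assuming chunk files named block_0000.bin, block_0001.bin, ...; compressing each chunk and concatenating the results is left to the existing encoder and scripts.

      /* Split a file into 64 KiB blocks so each block can be compressed
         separately and the compressed pieces concatenated afterwards. */
      #include <stdio.h>

      int main(int argc, char** argv) {
          if (argc < 2) { fprintf(stderr, "usage: split64k <infile>\n"); return 1; }
          FILE* in = fopen(argv[1], "rb");
          if (!in) { perror("fopen"); return 1; }
          static unsigned char buf[65536];
          char name[64];
          size_t n;
          int idx = 0;
          while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
              sprintf(name, "block_%04d.bin", idx++);
              FILE* out = fopen(name, "wb");
              if (!out) { perror("fopen"); fclose(in); return 1; }
              fwrite(buf, 1, n, out);
              fclose(out);
          }
          fclose(in);
          printf("wrote %d blocks\n", idx);
          return 0;
      }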
  • Gotty's Avatar
    Yesterday, 01:05
    With your compile my G5600 crashes (probably your target has avx/avx2, which I don't have). But it runs fine on my i5-7200 (which has avx2): 11 s and 2:40 respectively. My G5600 is significantly faster than my i5-7200, so our timings should approximately match. But they don't. It looks like a compilation issue then. I will look into it. Thanx for the test and for the timing! I'll come back with what I find.
    Edit: yes, it was an embarrassing compilation problem on my side. Fixed. It's now 0:07 and 01:45 on my G5600. Thanx for the help! And sorry for the false alarm.
    55 replies | 47359 view(s)
  • Krishty's Avatar
    10th November 2019, 22:36
    Glad you like it! Excellent find! I’ll see what I can do.
    Nonetheless, that’s too long. I recommend: throw a dozen files at it so Optimize becomes enabled, and click that - really just a few files to get it going with all available threads. Then throw your huge directories at it and it will analyze in the background while optimizing. (I may fix that clunky handling in a future version.)
    I know Windows has some limitation where you cannot create more than 65k temporary directories. My Optimizer may well be affected. Better run it with batches of 10k until I have reworked the temporary directory handling.
    You will notice PNG takes extremely long already, so be careful what you wish for :) Infinite looping will not buy you anything, I’m afraid. The reason is in the curves here: https://encode.su/threads/2274-ECT-an-file-optimizer-with-fast-zopfli-like-deflate-compression?p=61627&viewfull=1#post61627 I’m obsessed with squeezing out single bytes, rest assured, but we’re within 100 B of the perfect result using the current tools, and it takes other means to improve compression. E.g. I’ll try running huffmix on a collection of ECT passes soon.
    50 replies | 3763 view(s)
  • Sebastian's Avatar
    10th November 2019, 22:05
    Hello Gotty, I've downloaded the binary release from GitHub and have no problems on my side. It seems that your CPU is around 8 times slower than mine.
    sac.exe --encode --high --sparse-pcm 0359.wav 0359.sac takes around 9 sec on my side.
    sac.exe --encode --high --optimize=fast --sparse-pcm 0359.wav 0359.sac takes about 02:02 min, so it should take ~15 min on your side. "optimize=high" takes around 3h for a 4s file :D
    I've attached a version with a counter for the optimization runs, so you can see its progress. If you don't want to use my compile, you can add a line like
    std::cout << "dds: " << nfunc << "/" << nfunc_max << "\r";
    to "dds.h" at line 30.
    55 replies | 47359 view(s)
  • OneDeltaTenTango's Avatar
    10th November 2019, 21:56
    Did some quick tests. I really like the multi-threading.
    It's not APNG-safe; test file: https://red.dazoe.net/stuff/9.png
    Just to see what happens, I threw my comics/manga at it and, well, it's still Analyzing after 8+ hours, but still progressing - granted, it's like ~75,000 files. On the other hand, FileOptimizer from https://nikkhokkho.sourceforge.io/static.php?page=FileOptimizer would at least be working on it by now, sadly it's not multi-threaded :_sleep2: and Pingo from https://css-ig.net/ would have been done with it by now, if it managed to not hit a folder with a super long path or strange characters. :_shame2:
    If I didn't miss it somewhere: would it be possible to add an endless trials/brute-force compression option, so it endlessly loops on a batch of pngs?
    50 replies | 3763 view(s)
  • Gonzalo's Avatar
    10th November 2019, 21:15
    Gonzalo replied to a thread VTEnc in Data Compression
    I'd be happy to help you test, if you tell me what kinds of tests you are interested in. I would prefer if you wrote an executable using the library the way you intend it to be used, although I believe I could compile the sample programs from the GitHub description if that's the way to go.
    10 replies | 1026 view(s)
  • Gonzalo's Avatar
    10th November 2019, 20:20
    Good work @schnaader! Do you need us to run some test in particular? I've been doing the basics, i.e. compiling, roundtrip, edge cases, etc., and so far all smooth. I can confirm that this version compiles on both GCC 9.2 and clang 9.0 on Manjaro Deepin x86-64. It seems like clang is catching up with the optimizations, because there is no noticeable difference in speed, although it is faster and cleaner to compile (warnings aren't so messy). Personal opinion: I'm happy with the default settings, i.e. brunsli then packjpg, no brotli, no thumbnail recursion.
    ------------------------------------
    Another thing entirely: do you have any plans to include bitmap and exe pre-processing? It seems that 7z's delta can be very helpful, but AFAIK it needs to be manually tuned. I believe you already included exe pre-processing in precomp, but only as a part of lzma compression. In my scripts, I frequently use good old "arc -m0=rep+dispack+delta" to pre-process executables and it hasn't failed in almost a decade. It makes for an important difference in ratio, not only on Windows exes, but on executable code in general. It would be a very, very good thing to add to precomp if you will. Thanks in advance for your reply!
    11 replies | 487 view(s)
  • Gotty's Avatar
    10th November 2019, 20:06
    Systems:
    Windows 10 x64, locale: Hungarian; RAM: 8 GB; CPU: Intel Gold G5600
    Windows 8.1 x64, locale: Hungarian; RAM: 4 GB; CPU: Intel i5-7200
    As seen in the screen output, the gcc version is 7.2.0 (mingw-w64\x86_64-7.2.0-win32-seh-rt_v5-rev1). I attached the sample seen in the screen output in my post above. But it happens with any file from rarewares, too: 41_30sec.wav, ATrain.wav, Bachpsichord.wav, Bartok_strings2.wav, etc. (The original wv files were converted to wav by wavpack-5.1.0-x64.)
    55 replies | 47359 view(s)
  • telengard's Avatar
    10th November 2019, 19:40
    telengard replied to a thread LZ98 in Data Compression
    That worked perfectly on my blob with the one change, compressed with v2 - the only difference is the EOF, which can't possibly be a factor here. There must be something else going on. I observe it in the debugger pulling/utilizing the value from the compressed data incorrectly. Would it be useful to know any values at the time this happens, or the offset into the compressed file? Not sure exactly how to debug this, but I have the ability to do it. I can dump memory and registers, single step, etc.
    Also, here's a diff of the 2 (uncompressed as on the device vs. uncompressed out of Ghidra with my change). I can see some of the valid data mixed in there, but it is garbled... i.e. AKAI, Groove.Drums, Combination, etc.
    39 replies | 2369 view(s)
  • vteromero's Avatar
    10th November 2019, 19:24
    vteromero replied to a thread VTEnc in Data Compression
    Here are the first results of comparing VTEnc with other integer compression algorithms:
    | Algorithm           | Encoded Size | Ratio % | Encoding Speed | Decoding Speed |
    |:--------------------|-------------:|--------:|---------------:|---------------:|
    | VTEnc               |       21,686 |  0.0038 |   60.9155 G/s  |   734.54 M/s   |
    | Delta+FastPFor256   |    1,179,312 |    0.20 |   2.00714 G/s  |   4.75146 G/s  |
    | Delta+FastPFor128   |    2,306,544 |    0.40 |    1.9029 G/s  |   4.82925 G/s  |
    | Delta+BinaryPacking |    4,552,280 |    0.79 |    8.5867 G/s  |   5.77439 G/s  |
    | Delta+VariableByte  |  144,285,504 |    25.0 |   4.86063 G/s  |   5.09461 G/s  |
    | Delta+VarIntGB      |  180,356,880 |   31.25 |   6.75428 G/s  |    9.2638 G/s  |
    | Copy                |  577,141,992 |   100.0 |   10.4087 G/s  |       -        |
    These results correspond to benchmarking the data set ts.txt, which is a text file with a large list of timestamps. This data set is distinguished by having many repeated values. In this case, VTEnc offers an impressive compression ratio (0.0038% of the original size) at an even more impressive speed (60 G/s). The decoding speed is not very good though. I'll keep you posted with new results for other data sets.
    10 replies | 1026 view(s)
  • suryakandau@yahoo.co.id's Avatar
    10th November 2019, 12:50
    Famous Chinese actor: Donnie Yen https://youtu.be/_7mnHxlmRwE Please LIKE/SUBSCRIBE/SHARE this YouTube link
    1 replies | 27 view(s)
  • Krishty's Avatar
    10th November 2019, 12:02
    I’ve released a new version. See https://papas-best.com/optimizer_en#download or the attachments to the first post. Changes:
    added checksum test to archive optimization
    I want to avoid silent data corruption at any cost. For ZIP/gzip, the Optimizer now uses 7-Zip to compute checksums of archive content before and after optimization with AdvZIP/ECT/DeflOpt/defluff. If data is corrupted, the job will abort with the message ERROR: content broke during optimization and the original file remains untouched.
    I noticed that ECT does not have any means to disable recursive optimization. I.e. if you optimize a ZIP which contains another ZIP, and you unchecked recursive, then optimization will fail with said message because ECT optimized the inner ZIP file even though you didn’t want that. I still need to decide how this situation should be treated gracefully.
    Names will not be checked. I would have loved to do this, but: DeflOpt deletes all empty directories from archives, changing the checksum. I could work around that, but … it takes time. 7-Zip has a bug with checksums of file names in ZIPs if they contain relative directories (as commonly produced by DeflOpt/defluff). 7-Zip cannot compute name checksums for single items (as an information size optimization).
    added setup
    You can now download a MSI package of the optimizer. It’s plain simple; no choosing paths or anything. By default, it installs for the current user only. If you want to install it for all users, run msiexec /i "Papas Best Optimizer.msi" MSIINSTALLPERUSER=""
    fixed mislabeled Cancel button
    There were two Optimize buttons instead of Optimize and Cancel. Stupid copy-paste error during UI cleanup. Fixed.
    fixed one thread appearing dead in the UI
    If your CPU had four cores and you selected four threads, the 4th thread would appear stuck even though it optimized fine. Fixed that.
    fixed recursive ZIP setting
    The recursive check often did not work. Fixed that.
    fixed gzip optimization potentially deleting filenames
    There was a formatting bug in ECT’s command line, hiding the --strict switch for gzip archives which normally forces preserving file names. My tests did not produce any errors, but it is possible that some gzip files lost file names. Anyway, that’s fixed now.
    50 replies | 3763 view(s)
  • Sebastian's Avatar
    10th November 2019, 09:41
    Thanks for testing :) Strange, I've never encountered something like this before.
    - What is your system spec?
    - Can you upload a sample file where this happens?
    55 replies | 47359 view(s)
  • Gotty's Avatar
    10th November 2019, 01:52
    Downloaded the 0.5.1 (current) release from https://github.com/slmdev/sac
    It works fine without the --optimize=... option:
    c:\tmp>sac.exe --encode --high --sparse-pcm 0359.wav 0359.sac
    Sac v0.5.1 - Lossless Audio Coder (c) Sebastian Lehmann
    compiled on Nov 3 2019 (64-bit) with GCC 7.2.0
    Open: '0359.wav': ok (736380 Bytes)
    WAVE Codec: PCM (1411 kbps) 44100Hz 16 Bit Stereo 184064 Samples
    Create: '0359.sac': ok
    Profile: High
    mapsize: 1600 Bytes
    mapsize: 1399 Bytes
    Timing: pred 72.91%, enc 27.09%, misc 0.00%
    736380->226149=30.7% (4.915 bps)
    Total time:
    But with --optimize=fast or --optimize=normal or --optimize=high, compression just never ends:
    c:\tmp>sac --encode --high --optimize=fast --sparse-pcm 0359.wav 0359.sac
    Sac v0.5.1 - Lossless Audio Coder (c) Sebastian Lehmann
    compiled on Nov 3 2019 (64-bit) with GCC 7.2.0
    Open: '0359.wav': ok (736380 Bytes)
    WAVE Codec: PCM (1411 kbps) 44100Hz 16 Bit Stereo 184064 Samples
    Create: '0359.sac': ok
    Profile: High (optimize fast)
    optimize m=0, div=4, n=100
    Waited three hours (with option --optimize=high). Nothing. Tried different files. All the same.
    55 replies | 47359 view(s)
  • Shelwien's Avatar
    10th November 2019, 01:27
    Sorry, no idea. The icons are here:
    https://encode.su/favicon.ico
    https://encode.su/favicon.png
    https://encode.su/favicon114.png
    and all seem to be the right ones. But there were some points (after the server change and after the upgrade to php7) when the forum showed default icons, because I replaced the forum files with default ones. Could it be that your browser cached them? Or uses some other file for the favicon besides the listed ones?
    42 replies | 2572 view(s)
  • Shelwien's Avatar
    10th November 2019, 01:17
    Shelwien replied to a thread LZ98 in Data Compression
    I compiled and tested your decompiled decoder, and it seems to work normally. Try decoding your modified compressed blob with it?
    > Is there some kind of look-ahead to this algorithm?
    The decoder fetches a flag byte which determines whether to decode the next 8 records as matches or as literals. But it's not that much... max lookahead should be 8*18=144 bytes here.
    39 replies | 2369 view(s)
  • telengard's Avatar
    9th November 2019, 22:12
    telengard replied to a thread LZ98 in Data Compression
    Here's my annotated de-compiled source for what seems to be the decompression algorithm. Any insight here would be very appreciated. ptr_to_function is NULL, so that code block is never hit.
    39 replies | 2369 view(s)
  • telengard's Avatar
    9th November 2019, 20:47
    telengard replied to a thread LZ98 in Data Compression
    Last night, for the first time, I tried installing an updated firmware blob to my device. I changed a single string character in the uncompressed program, compressed it using v2 of enLZ98, and did the rest that was needed to make it whole (prepend the bootloader, append flash padding (0xFF) out to 4M, and the sentinel string at the end). It successfully flashed (a good thing), but on boot, the main program decompresses up until a certain point and then goes off the rails.
    I've attached 2 files, zipped. One is the decompressed program in RAM (just the first relevant bit of the 7+ Meg program), and the other is the decompressed program as it was before running enLZ98. I'm not sure exactly which files are useful to debug here, but I can get them.
    What's odd is that the decompression becomes invalid way before the general location of my single character change. Is there some kind of look-ahead to this algorithm?
    39 replies | 2369 view(s)
  • CompressMaster's Avatar
    9th November 2019, 16:50
    @Shelwien: the icons on the browser tab are different - before, it was light blue with a settings icon inside. Now it's the (?)vBulletin icon. I noticed this immediately after encode.su appeared, but at first I thought it was a temporary problem - as I can see, it is not. The problem occurs when I am outside of http://encode.su - at https://encode.su/forum/. An SSL-related issue? I don't think so... When encode.ru was active, there weren't such problems...
    42 replies | 2572 view(s)
  • CompressMaster's Avatar
    9th November 2019, 16:41
    @schnaader, do you mean this? Good decision, schnaader! Small note - it's kinda too late. For example, when I have developed BestComp (I don't know exactly when, because I need to heavily optimize its code - the base kinda works), all my versions will go in a BestComp thread, like Mauro Vezzosi's CMV. But it's better to have only one thread (like this will be) instead of multiple, of course. Good work, though!
    11 replies | 487 view(s)
  • Shelwien's Avatar
    8th November 2019, 22:58
    > We're doing OOP in C.
    That's actually worse, because you just lack the syntax sugar. Just imagine having to set up a compression pipeline, eg. BCJ+LZMA, with BCJ and LZMA as standalone codec objects with their own callbacks.
    I also remembered another potential problem - when a codec is MT, a callback can be called in the context of a random thread.
    7 replies | 300 view(s)
  • bmcdonnell's Avatar
    8th November 2019, 18:08
    Thank you both for the feedback. I'll be discussing our options with my coworkers soon. We're doing OOP in C.
    7 replies | 300 view(s)
  • Mauro Vezzosi's Avatar
    8th November 2019, 16:33
    Mauro Vezzosi replied to a thread NNCPa in Data Compression
    NNCPa 2019-11-06.
    Option / Description:
    cell - LSTM cell variant (default=lstmc). Added lstmc-f (=lstmc) and lstmc-i.
      lstmc | lstmc-f: c * f + min(1 - f, i) * in
      lstmc-i: c * min(f, 1 - i) + i * in
    n_symb n - Number of symbols. Added n=0 (new default): the program automatically sets n_symb to the minimum number of symbols (in the range 0-255) by reading the infile bytes.
    New:
    proj_size n - Number of projection states. Projection was already implemented in NNCP 2019/06/29; this option has been added to use it.
    block_loop n - Repeats the training on the current block n times.
    block_iter n - Repeats the training specified by block_loop for the first n blocks, then sets block_loop = 1.
    n_embed_hid n - Number of layers in the hidden embedding.
    t_embed_hid - Type of hidden embedding.
    t_embed_out - Type of output embedding.
    ln_embed_hid - Layer normalization in the hidden embedding.
    ln_embed_out - Layer normalization in the output embedding.
    ig_activ - Input gate activation function.
    fg_activ - Forget gate activation function.
    og_activ - Output gate activation function.
    More descriptions are in the Changelog.
    1 replies | 194 view(s)
  • Mauro Vezzosi's Avatar
    8th November 2019, 16:33
    Mauro Vezzosi started a thread NNCPa in Data Compression
    NNCPa has forked from NNCP 2019-06-29 by Fabrice Bellard.
    Versions of NNCPa: 2019-11-06 (file version 1).
    NNCP:
    https://bellard.org/
    https://bellard.org/nncp/
    https://encode.su/threads/3094-NNCP-Lossless-Data-Compression-with-Neural-Networks
    1 replies | 194 view(s)
  • schnaader's Avatar
    8th November 2019, 13:37
    Note: To follow the way that was recommended in some other post, I decided to also make only one post about Precomp instead of one per version, so both development news and future releases will go here.
    Here's a first development version of Precomp 0.4.8 that integrates brunsli JPG compression. There might be some minor changes in the final version, but the main work is done. By default, brunsli is enabled and brotli compression of metadata is disabled - to let the subsequent compressors (like lzma2) compress the metadata. This gives much faster JPG compression and decompression with almost the same ratio as packJPG. There are new command line switches to control the behaviour (found in the -longhelp): "brunsli" (default is on) enables/disables brunsli, "brotli" (default is off) enables/disables brotli, and "packjpg" (default is on) can be used to disable the packJPG fallback.
    Known limitations:
    - brunsli uses a pessimistic memory estimate, so it's quite memory hungry; that's why Stephan Busch's test images from https://github.com/google/brunsli/issues/16 will still fall back to packJPG. I tried to raise the kBrunsliMaxNumBlocks variable to test it - for the nasa jpg, precomp tried to allocate 10 GB RAM (I have 4 GB here) and crashed. However, it's not that bad for everyday JPGs below 100 MB, e.g. the 60 MB test image below uses 800 MB memory with packJPG and 1300 MB with brunsli.
    - no multithreading support (yet). There are some first experiments in brunsli to use "groups" compression that enables multithreading, but I couldn't get it compiled. But I'll give it another shot and also try to enable multithreading in reconstruction (-r) for JPG and MP3.
    - no automatic recursion. Metadata compression with brotli is deactivated by default, so thumbnails in the metadata could be compressed with recursion. But doing this automatically would introduce some performance penalty, and the results aren't always better (thumbnails are usually very small). So at the moment, you'll have to call precomp twice for thumbnail compression.
    The attached version is a Windows 64-bit MSVC compile; if you want to compile your own version, you can use the "brunsli_integration" branch on GitHub. Please test it on your files and report any bugs.
    Loch Lubnaig test image from Wikimedia Commons
    Original: 62.069.950 Bytes
    Precomp 0.4.7 -cn: 48.626.609, 2 min 14 s, -r: 2 min 18 s
    Precomp 0.4.7: 48.629.092, 2 min 55 s, -r: 2 min 18 s
    Precomp 0.4.8dev -cn: 49.822.256, 25 s, -r: 22 s
    Precomp 0.4.8dev -cn -brotli+: 49.787.761, 26 s, -r: 22 s
    Precomp 0.4.8dev: 49.790.048, 1 min 9 s, -r: 23 s
    11 replies | 487 view(s)
  • data man's Avatar
    8th November 2019, 13:30
    data man replied to a thread Zstandard in Data Compression
    https://github.com/facebook/zstd/releases/tag/v1.4.4
    This release includes some major performance improvements and new CLI features, which make it a recommended upgrade.
    Faster Decompression Speed
    Decompression speed has been substantially improved, thanks to @terrelln. Exact mileage obviously varies depending on files and scenarios, but the general expectation is a bump of about +10%. The benefit is considered applicable to all scenarios, and will be perceptible for most usages.
    Faster Compression Speed when Re-Using Contexts
    In server workloads (characterized by very high compression volume of relatively small inputs), the allocation and initialization of zstd's internal datastructures can become a significant part of the cost of compression. For this reason, zstd has long had an optimization (which we recommended for large-scale users, perhaps with something like this): when you provide an already-used ZSTD_CCtx to a compression operation, zstd tries to re-use the existing data structures, if possible, rather than re-allocate and re-initialize them. Historically, this optimization could avoid re-allocation most of the time, but required an exact match of internal parameters to avoid re-initialization. In this release, @felixhandte removed the dependency on matching parameters, allowing the full context re-use optimization to be applied to effectively all compressions. Practical workloads on small data should expect a ~3% speed-up.
    In addition to improving average performance, this change also has some nice side-effects on the extremes of performance. On the fast end, it is now easier to get optimal performance from zstd. In particular, it is no longer necessary to do careful tracking and matching of contexts to compressions based on detailed parameters (as discussed for example in #1796). Instead, straightforwardly reusing contexts is now optimal. Second, this change ameliorates some rare, degenerate scenarios (e.g., high volume streaming compression of small inputs with varying, high compression levels), in which it was possible for the allocation and initialization work to vastly overshadow the actual compression work. These cases are up to 40x faster, and now perform in line with similar happy cases.
    Dictionaries and Large Inputs
    In theory, using a dictionary should always be beneficial. However, due to some long-standing implementation limitations, it can actually be detrimental. Case in point: by default, dictionaries are prepared to compress small data (where they are most useful). When this prepared dictionary is used to compress large data, there is a mismatch between the prepared parameters (targeting small data) and the ideal parameters (that would target large data). This can cause dictionaries to counter-intuitively result in a lower compression ratio when compressing large inputs.
    Starting with v1.4.4, using a dictionary with a very large input will no longer be detrimental. Thanks to a patch from @senhuang42, whenever the library notices that the input is sufficiently large (relative to the dictionary size), the dictionary is re-processed, using the optimal parameters for large data, resulting in an improved compression ratio. The capability is also exposed, and can be manually triggered using ZSTD_dictForceLoad.
    New commands
    The zstd CLI extends its capabilities, providing new advanced commands, thanks to great contributions:
    - zstd generated files (compressed or decompressed) can now be automatically stored into a different directory than the source one, using the --output-dir-flat=DIR command, provided by @senhuang42.
    - It's possible to inform zstd about the size of data coming from stdin. @nmagerko proposed 2 new commands, allowing users to provide the exact stream size (--stream-size=#) or an approximative one (--size-hint=#). Both only make sense when compressing a data stream from a pipe (such as stdin), since for a real file, zstd obtains the exact source size from the file system. Providing a source size allows zstd to better adapt internal compression parameters to the input, resulting in better performance and compression ratio. Additionally, providing the precise size makes it possible to embed this information in the compressed frame header, which also allows decoder optimizations.
    - In situations where the same directory content gets regularly compressed, with the intention to only compress new files not yet compressed, it's necessary to filter the file list, to exclude already compressed files. This process is simplified with the command --exclude-compressed, provided by @shashank0791. As the name implies, it simply excludes all compressed files from the list to process.
    Single-File Decoder with Web Assembly
    Let's complete the picture with an impressive contribution from @cwoffenden. libzstd has long offered the capability to build only the decoder, in order to generate smaller binaries that can be more easily embedded into memory-constrained devices and applications. @cwoffenden built on this capability and offers a script creating a single-file decoder, as an amalgamated variant of the reference Zstandard decoder. The package is completed with a nice build script, which compiles the one-file decoder into WASM code, for embedding into web applications, and even tests it. As a capability example, check out the awesome WebGL demo provided by @cwoffenden in the /contrib/single_file_decoder/examples directory!
    Full List
    - perf: Improved decompression speed, by > 10%, by @terrelln
    - perf: Better compression speed when re-using a context, by @felixhandte
    - perf: Fix compression ratio when compressing large files with small dictionary, by @senhuang42
    - perf: zstd reference encoder can generate RLE blocks, by @bimbashrestha
    - perf: minor generic speed optimization, by @davidbolvansky
    - api: new ability to extract sequences from the parser for analysis, by @bimbashrestha
    - api: fixed decoding of magic-less frames, by @terrelln
    - api: fixed ZSTD_initCStream_advanced() performance with fast modes, reported by @QrczakMK
    - cli: Named pipes support, by @bimbashrestha
    - cli: short tar's extension support, by @stokito
    - cli: command --output-dir-flat=DIR, generates target files into requested directory, by @senhuang42
    - cli: commands --stream-size=# and --size-hint=#, by @nmagerko
    - cli: command --exclude-compressed, by @shashank0791
    - cli: faster -t test mode
    - cli: improved some error messages, by @vangyzen
    - cli: fix rare deadlock condition within dictionary builder, by @terrelln
    - build: single-file decoder with emscripten compilation script, by @cwoffenden
    - build: fixed zlibWrapper compilation on Visual Studio, reported by @bluenlive
    - build: fixed deprecation warning for certain gcc version, reported by @jasonma163
    - build: fix compilation on old gcc versions, by @cemeyer
    - build: improved installation directories for cmake script, by Dmitri Shubin
    - pack: modified pkgconfig, for better integration into openwrt, requested by @neheb
    - misc: Improved documentation: ZSTD_CLEVEL, DYNAMIC_BMI2, ZSTD_CDict, function deprecation, zstd format
    - misc: fixed educational decoder: accept larger literals section, and removed UNALIGNED() macro
    340 replies | 115143 view(s)
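    A minimal sketch (not from the release notes) of the context re-use pattern that the v1.4.4 change speeds up further: allocate one ZSTD_CCtx and pass it to every compression call instead of creating a fresh context each time.

      /* Re-use a single ZSTD_CCtx across many small compressions. */
      #include <stdio.h>
      #include <string.h>
      #include <zstd.h>

      int main(void) {
          const char* msgs[] = { "first small input", "second small input",
                                 "third small input" };
          ZSTD_CCtx* cctx = ZSTD_createCCtx();          /* allocate once */
          if (!cctx) return 1;

          char dst[1024];
          for (int i = 0; i < 3; i++) {
              size_t n = ZSTD_compressCCtx(cctx, dst, sizeof dst,
                                           msgs[i], strlen(msgs[i]), 3);
              if (ZSTD_isError(n)) { fprintf(stderr, "%s\n", ZSTD_getErrorName(n)); break; }
              printf("input %d: %zu -> %zu bytes\n", i, strlen(msgs[i]), n);
          }
          ZSTD_freeCCtx(cctx);                          /* free it once at the end */
          return 0;
      }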
  • Mauro Vezzosi's Avatar
    8th November 2019, 10:32
    Seems to be an Internet Explorer 11 problem.
    Doesn't work:
    - Win 7 + IE 11, Win 8.1 + IE 11.
    - Copy: Ctrl+C, Ctrl+Ins; from Windows Notepad, TextPad, the post editor, ...
    - Paste in the post editor: Ctrl+V (never worked), Shift+Ins (never worked), Ctrl+Shift+Ins, Right Click + Paste (worked until now), Paste button in the post editor + Clipboard | Textbox.
    - Paste in "Title" works.
    Works:
    - Win 7 + Chrome 78.
    - Copy: various combinations.
    - Paste in the post editor: ~all.
    Win 7 + IE 11/Chrome 78 is not the computer I usually use, however it works.
    8 replies | 298 view(s)
  • CompressMaster's Avatar
    7th November 2019, 09:57
    It works perfectly for me using the ctrl-c and ctrl-v commands, as before. I'm using the latest version of Firefox. Try clearing your browser cache. Or some recently installed addons may interfere. Or it could also be a misconfigured browser. But it should work without any problems anyway...
    8 replies | 298 view(s)
  • Shelwien's Avatar
    7th November 2019, 02:38
    Could be this? https://stackoverflow.com/questions/44646968/paste-option-in-ckeditor-doesnt-seem-to-work-in-chrome-and-firefox https://github.com/ckeditor/ckeditor4/issues/469#issuecomment-306279192 I always used Shift-Insert to paste text, so I didn't notice any changes :)
    8 replies | 298 view(s)
  • AiZ's Avatar
    7th November 2019, 00:06
    Any news? :D The "company" website's domain name expired last month, I have a bad feeling... :o
    27 replies | 3528 view(s)
  • Mauro Vezzosi's Avatar
    6th November 2019, 23:15
    I can no longer paste text from my computer into a new post. Are there any known problems with paste after upgrading to PHP 7.2?
    8 replies | 298 view(s)
  • rarkyan's Avatar
    6th November 2019, 16:53
    Well, to compress all files I still choose the 2^n or nCr formula, and create the short notation that way.
    228 replies | 78308 view(s)
  • suryakandau@yahoo.co.id's Avatar
    6th November 2019, 16:25
    Please LIKE/SUBSCRIBE/SHARE on YouTube. Thank you https://youtu.be/UIar_XDO004
    1 replies | 27 view(s)
  • CompressMaster's Avatar
    6th November 2019, 09:54
    CompressMaster replied to a thread cmix in Data Compression
    Maybe I can try to help you with that. That would be, of course, off-topic, so please let's start it in the Off-Topic subforum. Btw, some time ago I developed the world's first ransomware file obfuscation utility, although I have not fully developed it so far due to little spare time (it works according to my VM test with a real WannaCry sample, and I analyzed many others, so it seems it should work). It will act as an ultimate ransomware protection - until hackers discover my algorithm - and the same goes for BestComp, an ultimate data compressor - yes, I have discovered a method for how to do that... But that's not the point of this post. Firstly, of course, don't pay any money! Better not to restart the PC, as the corresponding RSA keys get deleted after a reboot/restart. Btw, sorry for spamming the cmix thread, but a member needs help.
    421 replies | 97929 view(s)
  • Shelwien's Avatar
    6th November 2019, 09:28
    Here's something similar, but actually working: https://encode.su/threads/1211-Compressing-pi
    Demonstrating 100x better compression than popular archivers is not really a problem; it's compressing _all_ files which is the problem.
    228 replies | 78308 view(s)
  • Shelwien's Avatar
    6th November 2019, 09:14
    I guess srep is a good example, since there's always more storage than RAM, and a dedup algorithm can use random access to compare or fetch data fragments. In this case multi-stream i/o is not a workaround, so it looks like a truly universal API would require both position and stream_id arguments (an interface sketch follows this entry).
    7 replies | 300 view(s)
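    As an illustration of that point, a hypothetical callback interface could pass both a stream id and an absolute position to every i/o call; the struct below is a sketch, not any existing library's API.

      /* Hypothetical i/o callback interface carrying both stream_id and position. */
      #include <stddef.h>

      typedef struct CodecIO {
          void* user;  /* opaque pointer handed back to the callbacks */

          /* Read up to 'size' bytes of stream 'stream_id' starting at offset 'pos'.
             Returns the number of bytes read, 0 at end of stream, -1 on error. */
          long long (*read)(void* user, int stream_id,
                            unsigned long long pos, void* buf, size_t size);

          /* Write 'size' bytes to stream 'stream_id' at offset 'pos'.
             Returns the number of bytes written, or -1 on error. */
          long long (*write)(void* user, int stream_id,
                             unsigned long long pos, const void* buf, size_t size);
      } CodecIO;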
  • rarkyan's Avatar
    6th November 2019, 08:44
    Interesting. Can you decompress it back to the original file?
    228 replies | 78308 view(s)
  • Bulat Ziganshin's Avatar
    6th November 2019, 08:26
    It's not always possible. E.g. SREP decompression consumes match info in MATCH SOURCE order, while the compressor obviously produces it in MATCH DESTINATION order.
    7 replies | 300 view(s)
  • Shelwien's Avatar
    6th November 2019, 08:25
    Shelwien replied to a thread cmix in Data Compression
    https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win32/Personal%20Builds/mingw-builds/
    421 replies | 97929 view(s)
  • suryakandau@yahoo.co.id's Avatar
    6th November 2019, 08:04
    Where can I get the MinGW installer? Because my laptop has been infected by ransomware.
    421 replies | 97929 view(s)
  • Shelwien's Avatar
    6th November 2019, 07:57
    Bulat is right, there are some known algorithms with multiple input and/or output streams (BCJ2, some recompression or preprocessing libs). And with multiple streams there's always an issue of stream interleaving. In particular, 7-zip i/o callbacks do include a Seek method: https://github.com/upx/upx-lzma-sdk/blob/master/C/7zip/IStream.h#L41
    I still think that it's better to avoid it (interleaving can be handled internally, eg. by flushing the codec when further buffering is impossible), but it's true that random access would be necessary to implement .7z (de)compression, which can be considered a "known algorithm".
    7 replies | 300 view(s)
  • Shelwien's Avatar
    6th November 2019, 07:20
    Shelwien replied to a thread cmix in Data Compression
    A makefile is not a shell script - the syntax is different. Unpack the attached scripts to cmix\src\, update the gcc path in g.bat (change C:\MinGW510\bin to your path with g++.exe), then run g.bat.
    421 replies | 97929 view(s)