Activity Stream

  • Gotty's Avatar
    Today, 12:06
    I'm happy that you are happy and relieved. I think trying to compress random data is a must for everyone who wants to understand entropy. I'm with you, I understand your enthusiasm, and I do encourage you to experiment even more. After you understand it deeply, you will not post more such ideas. ;-)
    18 replies | 320 view(s)
  • compgt's Avatar
    Today, 11:24
    Dresdenboy, if you're interested in LZSS coding, search for my "RLLZ" in this forum for my remarks on it. RLLZ doesn't need a literal/match prefix bit, no match_len for subsequent occurrences of an already encoded string, and no literal_len. It was my idea in high school (1988-1992), but I was already forgetting computer programming then and we didn't have access to a computer. I remembered it in 2018, so it's here again, straightened out and better explained. https://encode.su/threads/3013-Reduced-Length-LZ-(RLLZ)-One-way-to-output-LZ77-codes?highlight=RLLZ
    17 replies | 804 view(s)
  • compgt's Avatar
    Today, 11:01
    It's a relief that somebody here admits he actually tried compressing random files, like me. And actually suggests we experiment with a random file. But not too much, I guess. I tried random compression coding in 2006-2007 and actually thought I had solved it, that I had a random data compressor. I feared the Feds and tech giants would come after me, so I deleted the compressor, maybe even before writing a decoder. Two of my random compression ideas are here: https://encode.su/threads/3339-A-Random-Data-Compressor-s-to-solve RDC#1 and RDC#2 are still promising, worth a look for those interested. Maybe I still have some random compression ideas, but I am not very active on it anymore. There is some "implied information" that a compressor can exploit, such as the order or sequence of literals (kind of temporal) in my RLLZ idea, and the minimum match length in lzgt3a. Search this forum for "RLLZ" for my posts. https://encode.su/threads/3013-Reduced-Length-LZ-(RLLZ)-One-way-to-output-LZ77-codes?highlight=RLLZ
    > Randomness is an issue. And randomness is the lack of useful patterns.
    Randomness is the lack of useful patterns, I guess, if your algorithm is a "pattern searching/encoding" algorithm. Huffman and arithmetic coding are not pattern searchers but naturally benefit from the occurrence of patterns. LZ-based compressors are pattern searchers.
    18 replies | 320 view(s)
  • phuong2020's Avatar
    Today, 10:59
    Or too much to learn from this. Thanks
    14 replies | 18575 view(s)
  • Aniskin's Avatar
    Today, 10:25
    Aniskin replied to a thread 7-zip plugins in Data Compression
    Is there a way to get a sample of such file to debug?
    10 replies | 3158 view(s)
  • Bulat Ziganshin's Avatar
    Today, 10:24
    First, we need to extend the vocabulary:
    - static code: a single encoding for the entire file; the encoding tables are stored in the compressed file header
    - block-static: the file is split into blocks, and each block has its own encoding tables stored with the block
    - dynamic: encoding tables are computed on the fly from previous data, updated every byte
    - block-dynamic: the same, but the encoding tables are updated e.g. once every 1000 bytes
    So:
    - the first LZ+entropy coder was lzari with dynamic AC
    - the second one was lzhuf aka lharc 1.x with dynamic Huffman
    - then pkzip 1.x got Implode with a static Huffman coder. It employed canonical Huffman coding, so the compressed file header stored only 4 bits of prefix-code length for each of the 256 chars (and more for the LZ codewords)
    - then ar002 aka lha 2.x further improved this idea and employed block-static Huffman. It also added secondary encoding tables used to encode the code lengths in the block header (instead of a fixed 4-bit field)
    - pkzip 2.x added deflate, which used just the same scheme as ar002 (in this aspect; there were a lot of other changes)
    Since then, block-static Huffman has become the de facto standard for fast LZ77-based codecs. It's used in RAR2 (which is based on deflate), cabarc (lzx), brotli (which added some o1 modelling), and zstd (which combines block-static Huffman with block-static ANS). Static/block-static codes require two passes over the data - first you compute frequencies, then you build the tables and encode the data. You can avoid the first pass by using fixed tables (or use more complex tricks such as building tables on the first 1% of the data). The deflate block header specifies whether the block uses custom encoding tables encoded in the block header (DYNAMIC) or fixed ones defined in the code (STATIC), so this field in the block format has its own vocabulary. Tornado implements both block-dynamic Huffman and block-dynamic AC. Dynamic/block-dynamic codes use only one pass over the data, and block-dynamic coding is as fast as the second pass of *-static coding.
    12 replies | 218 view(s)
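    A minimal C++ sketch of the block-static scheme described above: two passes per block (count frequencies, then encode), plus a small per-block header. The block size and the 4-bits-per-symbol header cost are illustrative assumptions, not taken from any particular codec, and the coded size is only estimated from the frequencies instead of building a real Huffman table.
        // Block-static entropy coding skeleton (sketch): pass 1 counts symbol
        // frequencies per block; pass 2 would encode the block with a code built
        // from those counts. Here the coded size is only estimated from the counts.
        #include <algorithm>
        #include <cmath>
        #include <cstdint>
        #include <cstdio>
        #include <vector>

        static double block_cost_bits(const uint8_t* blk, size_t n) {
            uint64_t freq[256] = {0};
            for (size_t i = 0; i < n; i++) freq[blk[i]]++;            // pass 1: histogram
            double bits = 256 * 4;                                    // assumed header: 4 bits of code length per symbol
            for (int s = 0; s < 256; s++)                             // pass 2 (estimated): ideal code cost
                if (freq[s]) bits += freq[s] * -std::log2((double)freq[s] / n);
            return bits;
        }

        int main() {
            std::vector<uint8_t> data(1 << 20);                       // toy input
            for (size_t i = 0; i < data.size(); i++) data[i] = (uint8_t)((i * 2654435761u) >> 24);
            const size_t BLOCK = 64 * 1024;                           // assumed block size
            double total = 0;
            for (size_t off = 0; off < data.size(); off += BLOCK)
                total += block_cost_bits(&data[off], std::min(BLOCK, data.size() - off));
            std::printf("estimated block-static output: %.0f bits\n", total);
        }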
  • Gotty's Avatar
    Today, 09:16
    Gotty replied to a thread Fp8sk in Data Compression
    2 replies | 85 view(s)
  • Dresdenboy's Avatar
    Today, 08:36
    My own experiments look promising. With a mix of LZW, LZSS and numeral system ideas (not ANS though ;)), I can get close to apultra, exomizer, packfire for smaller files, while the decompression logic is still smaller than theirs.
    39 replies | 2488 view(s)
  • Dresdenboy's Avatar
    Today, 08:33
    Here's another paper describing an LZSS variant for small sensor data packets (so IoT, sensor mesh, SMS, network message compression related works look promising): An Improving Data Compression Capability in Sensor Node to Support SensorML-Compatible for Internet-of-Things http://bit.kuas.edu.tw/~jni/2018/vol3/JNI_2018_vol3_n2_001.pdf
    17 replies | 804 view(s)
  • Bulat Ziganshin's Avatar
    Today, 08:28
    tANS requires memory lookups, which are limited to 2 loads/cycle in the best case (on Intel CPUs), so you probably can't beat SIMD rANS.
    8 replies | 400 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Today, 08:22
    @sportman/Darek could you test it on GDCC public test set file (test 1,2,3,4) please ?
    2 replies | 85 view(s)
  • SolidComp's Avatar
    Today, 05:43
    Durability is testable, but the problem is that the data is not public. We have no way of knowing how durable a phone's buttons are, since we have no access to the manufacturer's test results. Both the manufacturers and the wireless carriers conduct extensive durability testing. A good example is T-Mobile's robot "Tappy", the one Huawei tried to copy: https://www.npr.org/2019/01/29/689663720/a-robot-named-tappy-huawei-conspired-to-steal-t-mobile-s-trade-secrets-says-doj Since extensive testing is expensive and requires large teams, one inference we can make is that phones from the top two or three companies are likely to be the best tested and most durable. So Samsung and Apple. Any phone sold in the last few years can easily record 1080p video. Most newer phones can do 4K, usually at 60 fps. I just checked cheap phones on Amazon and the BLU G6 records 1080p and costs $90 in the US. It's probably not as durable as a Samsung Galaxy S20, but flagships have gotten extremely expensive, like $1,000.
    2 replies | 40 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Today, 04:40
    This is based on fp8v6 with a small improvement.
    2 replies | 85 view(s)
  • SolidComp's Avatar
    Today, 02:37
    I don't follow. There's a difference between Huffman trees and Huffman coding in this context? What's the difference? Those are just the stats from Willie's benchmarks. How else do you measure memory use but by measuring memory use during program execution? I like software that is engineered for performance and low overhead, like SLZ. I want it to be 1% CPU or so. SIMD can be one way to get there, so long as it doesn't throttle down core frequencies like using AVX-512 does. What is faster than SLZ? It's much faster than the ridiculous Cloudflare zlib patch that no one can actually build anyway (typical Cloudflare). It's faster than libdeflate. What else is there?
    12 replies | 218 view(s)
  • Shelwien's Avatar
    Today, 01:36
    Shelwien replied to a thread Paq8pxd dict in Data Compression
    > I am trying to learn how paq8px/pxd works but you know is not easy at all!
    Did you read DCE? http://mattmahoney.net/dc/dce.html
    > I am a delphi developer and i know c only a little.
    Fortunately paq doesn't use that much of mainstream C++. There're some tools like this: https://github.com/WouterVanNifterick/C-To-Delphi
    > What do you mean exactly with "instance updated with filtered byte values"
    mod_ppmd.inc has this function (at the end):
        U32 ppmd_Predict( U32 SCALE, U32 y ) {
          if( cxt==0 ) cxt=1; else cxt+=cxt+y;
          if( cxt>=256 ) ppmd_UpdateByte( U8(cxt) ), cxt=1;
          if( cxt==1 ) ppmd_PrepareByte();
          U32 p = U64(U64(SCALE-2)*trF)/trT+1;
          return p;
        }
    Unlike paq's main model, it does update for a whole byte at once, so we can change it to something like ppmd_UpdateByte( Map ) which could provide effects similar to preprocessing.
    945 replies | 319023 view(s)
  • LucaBiondi's Avatar
    Today, 01:18
    LucaBiondi replied to a thread Paq8pxd dict in Data Compression
    Thank you Darek!!!
    945 replies | 319023 view(s)
  • LucaBiondi's Avatar
    Today, 01:18
    LucaBiondi replied to a thread Paq8pxd dict in Data Compression
    Hi Shelwien, thank you for your explanation! I am trying to learn how paq8px/pxd works, but you know it is not easy at all! I am a Delphi developer and I know C only a little. But... I am trying, and my goal is to learn. What do you mean exactly by "instance updated with filtered byte values"? If you have some time, try to explain it to me, and if I am able, I will do it. Luca. Teach a developer and you will have a colleague.... :_yes3:
    945 replies | 319023 view(s)
  • Gotty's Avatar
    Today, 01:18
    You need to actually try experimenting with a random file. And you'll see with your own eyes. Saying ("As I said before") is not enough. You must try. It's worth it.
    First, let's fix your definition a little bit. This is a bit better definition of randomness:
    What does that mean? Do you see a pattern here: 000000000000000000000 ? These are 21 '0' bits. And is there a pattern here: 1111111111111111111111 ? These are 22 '1' bits. Yes, indeed they are patterns. Repeating bits. But these patterns are unfortunately worthless in the file where I found them. How can that be? Let me show you.
    I grabbed the latest 1MB random file (2020-07-02.bin) from https://archive.random.org/binary The above bit patterns are in this random file. They are the longest repeats of zeroes and ones. And there is only one of each. No more.
    You will understand the real meaning of a "useful" pattern when you try to actually make the file shorter by using the fact that it contains these patterns. When you would like to encode the information that the 1111111111111111111111 pattern is there, you will need to encode the position where this pattern is found in the file (and its length, of course). It starts at bit position 5245980 and its length is 22. The file being 8388608 bits (or 1048576 bytes) long, encoding any position in this file would cost us log2( 8388608 ) = 23 bits. Oh. See the problem? Even though the pattern of 22 repeating '1' bits is in the file, it is still not long enough to be useful. Encoding this info would cost us at least 23 bits. We cannot use it to shorten the file. And there are no longer repeats... We are out of luck.
    When I first started experimenting with data compression I was trying to compress random files and find patterns. Like everybody else, I guess. I did find patterns, but not useful ones. Eventually, when you count all (!) possible bit combinations in a random file, you will end up with the pure definition of randomness. Everything has a near 50% chance. Count the number of '1's and '0's. Count the number of '00', '01', '10', '11', ... all of them will have a near equal probability. When I first experienced that, it was of course discouraging, but beautiful at the same time.
    Lack of patterns? No. Lack of useful patterns. Let me quote you again: Randomness is an issue. And randomness is the lack of useful patterns.
    18 replies | 320 view(s)
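    A quick C++ sketch of the experiment described above: scan a file bit by bit, find the longest run of identical bits, and compare it to the roughly log2(file size in bits) cost of pointing at any position. The filename is simply the example from the post; any file works.
        // Find the longest run of identical bits and compare it with the
        // ~log2(number of bits) cost of encoding a position in the file.
        #include <cmath>
        #include <cstdint>
        #include <cstdio>
        #include <vector>

        int main() {
            FILE* f = std::fopen("2020-07-02.bin", "rb");   // the random.org file from the post
            if (!f) { std::perror("open"); return 1; }
            std::vector<uint8_t> buf;
            int c;
            while ((c = std::fgetc(f)) != EOF) buf.push_back((uint8_t)c);
            std::fclose(f);

            uint64_t nbits = (uint64_t)buf.size() * 8, best = 0, run = 0;
            int prev = -1;
            for (uint64_t i = 0; i < nbits; i++) {
                int bit = (buf[i >> 3] >> (7 - (i & 7))) & 1;
                run = (bit == prev) ? run + 1 : 1;
                prev = bit;
                if (run > best) best = run;
            }
            // A repeat only helps if it is longer than the cost of pointing at it.
            std::printf("longest run: %llu bits, position cost: ~%.1f bits\n",
                        (unsigned long long)best, std::log2((double)nbits));
        }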
  • JamesB's Avatar
    Today, 01:08
    For what it's worth I reckon tANS ought to be faster still than rANS, but I'm not aware of anyone going to such extreme SIMD levels. I only did static rANS because I understood it and kind of stuck with it. :-) A lame reason really!
    8 replies | 400 view(s)
  • cssignet's Avatar
    Today, 00:27
    Yes, in that case it would not be really surprising that pingo would be slower with mp. IMHO the test would be unfair: first, an optimized binary (AVX2 etc.) vs. generic SSE2; second, but not least, the mp in pingo rc2 40 would not run the same number of threads (not in the same exact context). Since I do not have the required hardware: when you have some time, could you try the mp in rc2 41 vs 40? -s0 on ~96 files should be enough (if it works - perhaps it would be possible to make it better). Thanks
    166 replies | 40970 view(s)
  • Gotty's Avatar
    Today, 00:11
    Durability is not really measurable. But satisfaction rate (which includes durability, too) is measurable. The Xiaomi Redmi Note 8T has the highest user satisfaction rate for a currently available affordable phone.
    2 replies | 40 view(s)
  • JamesWasil's Avatar
    Today, 00:01
    The advantage that arithmetic coding has over Huffman and other prefix implementations is that it can assign a fractional number of bits to a codeword rather than a whole bit. You're going to get more out of arithmetic compression whenever the symbol probabilities are not powers of 2, because of the amount gained from the difference. For example, if you were to grab the probabilities for symbols from a file and see that you only needed 1.67 bits per symbol, with arithmetic encoding you'll be able to get it represented as 1.67 bits or very close to it on a fractional margin, whereas with Huffman and other methods it still requires at least 2 bits to represent the same probability. Whenever these instances occur, you'll be able to save the difference (in this example, .33 of a bit) for each and every occurrence of that probability. These savings add up, and the larger your data source is, the more significant these savings of fractional bits will be over the use of Huffman and other methods. So basically, Huffman will only be optimal if the symbol probabilities are powers of 2. If they're not, then arithmetic encoding will give you more. If you're combining contexts, then you'll usually get more out of arithmetic compression, because the additional contexts can be represented with fractional bits and save on that difference, too.
    12 replies | 218 view(s)
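    A small C++ illustration of the fractional-bit argument above, with made-up probabilities: the entropy gives the cost an arithmetic coder can approach, while a prefix code must spend a whole number of bits per symbol.
        // Compare the ideal (fractional) cost with the whole-bit cost of the
        // Huffman code for a skewed 3-symbol source (probabilities are made up).
        #include <cmath>
        #include <cstdio>

        int main() {
            double p[3] = {0.90, 0.05, 0.05};
            int huffman_len[3] = {1, 2, 2};   // Huffman code lengths for any 3-symbol source
            double entropy = 0, huffman = 0;
            for (int i = 0; i < 3; i++) {
                entropy += p[i] * -std::log2(p[i]);   // fractional bits, reachable by AC
                huffman += p[i] * huffman_len[i];     // whole bits, the best a prefix code can do
            }
            std::printf("entropy %.3f bits/symbol, Huffman %.3f bits/symbol, overhead %.3f\n",
                        entropy, huffman, huffman - entropy);
        }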
  • Gotty's Avatar
    Yesterday, 23:40
    Summary.
    >> And Huffman theory is useless if not for programmers.
    As you see from those examples above, compression theory is embedded in our decisions and is a serious part of our everyday life. We didn't really invent compression, we discovered it and formulated it mathematically and statistically. It's just everywhere.
    18 replies | 320 view(s)
  • Gotty's Avatar
    Yesterday, 23:27
    And finally... language (morphology). In every (human) language the words that are used most often tend to be the shortest (Zipf's law). We, humans, intuitively created our languages to be optimal in the sense that we express our thoughts and exchange information with the least effort - the fewest possible sounds and fewest possible letters to convey the intended information. Thus we compress information as we speak. In a Huffman-like way. Isn't it phenomenal?
    18 replies | 320 view(s)
  • Shelwien's Avatar
    Yesterday, 23:00
    > Is he mistaken about zlib? http://www.libslz.org/
    No, you are. What he says is "Instead, SLZ uses static huffman trees" and "the dynamic huffman trees would definitely shorten the output stream". I don't see anything about dynamic _coding_ there.
    > Eugene, deflate doesn't have a predefined Huffman or prefix table, does it?
    https://en.wikipedia.org/wiki/DEFLATE#Stream_format "01: A static Huffman compressed block, using a pre-agreed Huffman tree defined in the RFC"
    > What are the assumptions of Huffman optimality?
    The Huffman algorithm is the algorithm for generating a binary prefix code which compresses a given string to the shortest bitstring.
    > Does that mean that for some data distributions, arithmetic coding has no advantage over Huffman?
    Static AC has the same codelength in the case where the probabilities of all symbols are equal to 1/2^l. Adaptive AC is basically always stronger for real data, but it's possible to generate artificial samples where a Huffman code would win.
    > Oh, and does deflate use canonical Huffman coding?
    Yes, if you accept a limited-length prefix code as a Huffman code.
    > So when you say there's nothing good about zlib memory usage, you mean it uses too much?
    It uses a hashtable and a hash-chain array. So there're all kinds of ways to use less memory - from just using a hashtable alone to a BWT SA to direct string search.
    > It uses about 100 MB RSS, while SLZ uses only 10 MB. SLZ is my baseline for most purposes. It would be interesting if a LZ+AC/ANS solution could be made that used 10 or fewer MB, and very little CPU, and was fast.
    You have very strange methods of measurement. If some executable uses 100MB of RAM, it doesn't mean that the algorithm there requires that much memory - it could just as well be some i/o buffer for files. Also, what do you mean by "uses little CPU"? There's throughput, latency, number of threads, SIMD width, cache dependencies, IPC, core load... it's hard to understand what you want to minimize. For example, SLZ has low latency and memory usage, but it's not necessarily the fastest deflate implementation for a single long stream.
    > I think maybe brotli and Zstd in their low modes, maybe -1 or so, could approach those metrics, but I'm not sure.
    For zstd you'd probably have to tweak more detailed parameters, like wlog/hlog. I don't think it has defaults to save memory to _that_ extent.
    > then generated Huffman trees for each block separately, that's still static Huffman, right?
    Yes. If the code stays the same within a block, then it's static.
    12 replies | 218 view(s)
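    For reference, the pre-agreed code pointed to above is fully determined by fixed code lengths. A small C++ sketch of the fixed literal/length code lengths from RFC 1951, section 3.2.6:
        // Code lengths of deflate's fixed ("static", BTYPE=01) literal/length
        // alphabet as given in RFC 1951; the actual codes are the canonical
        // prefix code implied by these lengths, so nothing is stored in the stream.
        #include <cstdio>

        int main() {
            int len[288];
            for (int s = 0;   s <= 143; s++) len[s] = 8;
            for (int s = 144; s <= 255; s++) len[s] = 9;
            for (int s = 256; s <= 279; s++) len[s] = 7;
            for (int s = 280; s <= 287; s++) len[s] = 8;
            // (All distance codes use 5 bits in this block type.)
            int count[10] = {0};
            for (int s = 0; s < 288; s++) count[len[s]]++;
            for (int l = 7; l <= 9; l++)
                std::printf("%d symbols with %d-bit codes\n", count[l], l);
        }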
  • JamesWasil's Avatar
    Yesterday, 22:59
    To answer your question, yes. :) I set out to do this about a year or two ago, seeing that most of the source code was always in C++ or ANSI C, and rarely if ever in anything easier to read and closer to natural language for people who were intermediate or beginners. Many people started out with languages like Basic or Turbo Pascal, although C++ or assembly language is going to be your best bet long-term for programming efficiency, speed, and most real-world applications these days. BUT -- if you're getting started or are versed in Basic, you might want to start there and then gradually branch out to other languages like C++, Python, or Java... all of which are now industry standards.
    There were some great and helpful commenters on this thread when it was initially introduced (please ignore the 1 jerk, maybe 2, spouting off on there and read past it to get what you need out of the posts and the code): https://encode.su/threads/3022-TOP4-algorithm
    As a bonus, Sportman was especially helpful and compiled his own version along with the one that I submitted. He did independent testing, as did at least one or two others. (jibz did one in C if you need it, too.) Although the thread title is slightly misleading, because in reality it doesn't always produce code better than Huffman... there are many modifications that you can do with it where it can be made to achieve more with partial contexts and better compression. But I would use it as a starting point for an easy way to understand, since there are few places to find anything easier or more straightforward than this.
    Now please understand that there are other compressors that will do better too, most likely based on arithmetic encoding or range encoding, but with those comes more complexity, and they might be too much for someone to start out with. A lot of people suggest "PAQ", but it's a lot of unnecessary stuff to do very basic compression and understand the premise of it. When you're ready you can do PAQ, probably after arithmetic encoding and traditional Huffman, but for an easy and fast way to do things, I'd start here.
    With TOP4, you get a basic skeleton frame I made that is table-based in BASIC and compresses with a very straightforward, WYSIWYG approach. There's a separate file that is able to reverse the process as a decompressor. It reads the bytes at the front of the file to get a table for codewords, and then decompresses data based on that.
    How it works: it represents the 4 most statistically likely symbols with a 1-bit-shorter codeword, while adding 1 bit to the least frequent symbols at the end. By doing this, you get compression because the frequencies of the shorter codewords at the front always outweigh the frequencies of the least occurring ones at the end. The less compressed and "balanced" the frequencies are for the symbols, the more you're able to compress data at the top and expand the few at the bottom. Your compression is what you get from the difference of this when all the bits are tallied up and converted back to 8-bit ASCII symbols. (I did one in C and used QB64 for what was submitted, but you can make it for VB6 or use Sportman's VB.Net submission just as well.)
    The basic code should be easy enough to read that you can adapt it to any language you fancy or want to use, since it's very close to pseudocode for beginners. You'll find, however, that a lot of things (most things?) are written in C++ now, and people are using that as their pseudocode as a de facto standard. What I would suggest is using this to get an understanding, but gradually branch out to C++ or Python from here and adapt it to that. Then you can move on to actual Huffman coding or adaptive arithmetic encoding and more. This is more or less instant gratification to help you get your feet wet with compressing text files, EXE files, BMP files, and others that are easily compressed. Once you're comfortable with this, it'll be even easier for you to continue on and adapt as you grow. :)
    And of course, the code was submitted royalty-free with no restrictions really, more as a learning tool for people to use freely. If you're interested in things like BWT, there are sections on that, too. I made some BWT implementations entirely in BASIC and one in Python (not sure where I put that, but I still have the BASIC one on a flash drive), but honestly you'll find more straightforward BWT implementations from others searching this site than what I have to share. Michael has a really good BWT compressor on here that I've seen. And Shelwien and Encode have tons of random stuff lol. There may be other really easy compressors on here for you to check out too if you search for them. They'll either be on threads or under the download sections with source code.
    18 replies | 320 view(s)
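    A hedged C++ illustration of the counting argument behind a TOP4-style code (this is not the actual TOP4 implementation): starting from flat 8-bit codes, giving 7-bit codes to the 4 most frequent symbols has to be paid for somewhere, and one Kraft-consistent way is to give 9-bit codes to the 8 least frequent symbols; the net saving is then just a difference of frequencies. The frequency values below are made up.
        // Counting argument for a "4 symbols get 1 bit shorter, the rarest get
        // 1 bit longer" code. Kraft check: 244*2^-8 + 4*2^-7 + 8*2^-9 = 1.
        #include <algorithm>
        #include <cstdint>
        #include <cstdio>
        #include <vector>

        int main() {
            uint64_t freq[256];
            for (int s = 0; s < 256; s++)
                freq[s] = (s < 8) ? 10000 - 1000 * s : 100;    // made-up statistics
            std::vector<int> order(256);
            for (int s = 0; s < 256; s++) order[s] = s;
            std::sort(order.begin(), order.end(),
                      [&](int a, int b) { return freq[a] > freq[b]; });
            int64_t saved = 0;
            for (int i = 0; i < 4; i++)     saved += (int64_t)freq[order[i]];   // 8 -> 7 bits
            for (int i = 248; i < 256; i++) saved -= (int64_t)freq[order[i]];   // 8 -> 9 bits
            std::printf("net saving vs. raw bytes: %lld bits\n", (long long)saved);
        }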
  • Gotty's Avatar
    Yesterday, 22:49
    Decision making - again. When you try to decide something, you actually try to predict what the outcome of that decision would be. Would it be good? Would it be bad for me? When you need to buy a new mobile phone, for example, you have different options. Buy a high-end one and hope that it will last many, many years and that you'll be satisfied with the packed-in features. Or, for a quarter of the price, you buy one from the low range? Probably it would fail sooner, or you would need to replace it sooner than the top one. Also it may be laggy or miss some features, so eventually your satisfaction would be a bit lower. Or a second-hand phone? Hm, the failure rate could be even higher and you don't have a warranty. But the price is really tempting... You make a decision by trying to predict the outcome based on different metrics: price, satisfaction rate, warranty, probability of a failure. You don't foresee the future. But your past experiences, listening to experts, and asking the opinion of friends will help you make a good decision. (This is also called an informed decision.) An entropy-based compression program does exactly that: it tries to predict the next character in a file, and the better the prediction is, the better the compression will be. (Entropy-based) compression = prediction. When you try to predict the future - what's the probability that it will rain, or the probability of a successful marriage with a person - you actually apply the same theory that is used in compression.
    18 replies | 320 view(s)
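    A tiny C++ sketch of the "compression = prediction" point above: an adaptive bit model predicts the probability of the next bit, the (arithmetic-coding) cost of each bit is -log2 of the predicted probability of what actually occurred, and better predictions directly mean fewer bits. The input and the learning rate are arbitrary choices for illustration.
        // Adaptive order-0 bit predictor: measure the cost an arithmetic coder
        // would pay when driven by this model.
        #include <cmath>
        #include <cstdio>
        #include <cstring>

        int main() {
            const char* data = "abababababababab";      // toy input
            double p1 = 0.5;                            // predicted probability of a '1' bit
            double cost = 0;
            for (const char* s = data; *s; s++)
                for (int i = 7; i >= 0; i--) {
                    int bit = (*s >> i) & 1;
                    cost += -std::log2(bit ? p1 : 1.0 - p1);   // coding cost of this bit
                    p1 += ((bit ? 1.0 : 0.0) - p1) * 0.05;     // nudge the prediction toward the outcome
                }
            std::printf("predicted cost: %.1f bits for %zu raw bits\n",
                        cost, std::strlen(data) * 8);
        }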
  • Gotty's Avatar
    Yesterday, 22:26
    Your decisions are greatly determined by the "success rate" and the "magnitude" of positive or negative feedback you experience. We pursue happiness throughout our entire lives, and in order to reach it we make "statistics" starting at a very early age. We evaluate all situations based on these statistics and decide what to do and what not to do. For example, I tried football, basketball, handball, and I know I'm rather lame at any ball game. Even at snooker (I have my statistics ;-)). In these activities I can't shine, so they don't give me satisfaction, so I try to avoid them. But I'm good at running and jumping (I can jump my height ;-)), I got a bunch of medals in my teens, and so I love them (got my statistics). If I need to choose what to do, these statistics will tell me that I'd better not go to play basketball in my free time but go running in the evenings. When having many friends with many different interests, how do we decide what to do when we meet and spend time together? We will instinctively maximize our shared happiness based on how much we like or dislike these activities. We will more frequently do activities that are liked by the majority of us, and less frequently those that are not liked by so many but that a couple of us still enjoy - for the sake of those few. We can formulate a "high happiness score" as "less regret" or "less cost" and a "low happiness score" as "high regret" or "high cost". Summary: maximum happiness = do the "high regret"/"high cost" activities less frequently and the "low regret"/"low cost" activities more frequently.
    >> And Huffman theory is useless if not for programmers.
    Huffman theory says: maximum compression = "As in other entropy encoding methods, more common symbols are generally represented using fewer bits than less common symbols." [Wikipedia] Hmmm... sounds familiar? Our decision making is intuitively based on this compression theory. Not just for programmers. For everybody.
    18 replies | 320 view(s)
  • Gotty's Avatar
    Yesterday, 22:21
    Let me tell you a couple of interesting facts.
    18 replies | 320 view(s)
  • SolidComp's Avatar
    Yesterday, 22:14
    So when you say there's nothing good about zlib memory usage, you mean it uses too much? It uses about 100 MB RSS, while SLZ uses only 10 MB. SLZ is my baseline for most purposes. It would be interesting if an LZ+AC/ANS solution could be made that used 10 or fewer MB, and very little CPU, and was fast. I think maybe brotli and Zstd in their low modes, maybe -1 or so, could approach those metrics, but I'm not sure. If we preprocess and sort data into blocks that group, say, all the numeric data in one block, all the text in another, hard random data in another, and then generate Huffman trees for each block separately, that's still static Huffman, right? It should help, because you could get shorter codes on average if you just have numbers, or letters, and so forth, depending on the overhead of the tables.
    12 replies | 218 view(s)
  • CompressMaster's Avatar
    Yesterday, 22:01
    Hello, I am looking for the most durable smartphone (by durability, I mean material + button durability, not fall resistance or waterproofing). The thing is, I actually own a Lenovo a-2010 and I am unable to turn it off due to a broken button (everyday usage), so I'd have to make a shortcut via the volume buttons. But there's another problem - I am unable to get into open windows, and the back button does not respond either. Thus copying files would be kinda hard... So I decided to buy a new one - most durable, full HD video, good camera. But I am not entirely sure which to take... Thanks a lot. CompressMaster
    2 replies | 40 view(s)
  • SolidComp's Avatar
    Yesterday, 21:57
    Oh, and does deflate use canonical Huffman coding?
    12 replies | 218 view(s)
  • SolidComp's Avatar
    Yesterday, 21:56
    Thanks everyone. A couple of points:
    What's this idea that deflate only uses static Huffman? One of the reasons SLZ is so much faster than zlib is supposed to be that Willie only uses static Huffman. This was supposed to be a difference from standard zlib operation (for gzipping). Willie talks about this in his write-up. Is he mistaken about zlib? http://www.libslz.org/
    Eugene, deflate doesn't have a predefined Huffman or prefix table, does it? A predefined table is different from static. When I say predefined, I mean a table that says here's the code for the letter A, here's the code for the letter B, etc. Like the HPACK table, which covers all of ASCII. I've never seen a predefined table in a deflate spec. Did I miss it?
    What are the assumptions of Huffman optimality? Was it something about the distribution of the data? Does that mean that for some data distributions, arithmetic coding has no advantage over Huffman?
    12 replies | 218 view(s)
  • Scope's Avatar
    Yesterday, 21:38
    No, for Pingo and ECT I don't use external programs, only:
    ect --mt-file *.png
    pingo *.png
    But, as I have time, I will test Pingo and ECT on the whole set (it won't take as long as testing all the optimizers), probably on a different configuration, with a different CPU and HDD/SSD.
    This is good. Although I'm not a supporter of fragmentation, if it's another new, incompatible format, it's better to get as many improvements and features as possible that weren't implemented in previous formats.
    166 replies | 40970 view(s)
  • Sportman's Avatar
    Yesterday, 21:19
    Sportman replied to a thread Paq8sk in Data Compression
    enwik8: 15,641,922 bytes, 8,629.563 sec., paq8sk32 -x15 -w -e1,english.dic
    139 replies | 10775 view(s)
  • Shelwien's Avatar
    Yesterday, 21:18
    Shelwien replied to a thread Paq8pxd dict in Data Compression
    > Shelwien, what do you think about it? Did you expect more gain?
    Not really; in PPMd these parameters have little effect on the actual statistics and predictions. Like, o40 is only relevant for symbols after a 40-byte repeated prefix, and m3360 vs m1360 is only relevant for files longer than 90m... maybe test it on enwik9? However, isn't it still better to use tested parameter profiles rather than something random? Btw, there're still other options - like an instance with something like o6 m1 r0, or an instance updated with filtered byte values (upcased, or with all non-letters replaced with space). Also it could be interesting to see where ppmd actually provides benefits, i.e. in which contexts it beats the paq model.
    945 replies | 319023 view(s)
  • Shelwien's Avatar
    Yesterday, 20:54
    I don't see a point in being too strict about definitions here. For example, the zlib implementation limits codelengths to 15 bits or less, so strictly speaking that's not really a Huffman code either. So:
    1) deflate only uses static Huffman coding, with the code either predefined or stored in the block header. The predefined code is still a valid Huffman code for some other data.
    2) Adaptive coding is based on the idea of data stationarity - that statistics gathered from the already processed symbols would also apply to the next one. In most cases adaptive coders adjust their codes after processing each symbol, but it's also possible to do batch updates.
    > do you think it would be feasible to implement adaptive arithmetic coding with less CPU and memory overhead than deflate?
    This depends on the implementation. Compared to zlib - most likely yes, since it's not very speed-optimized. Also there's nothing good about zlib memory usage, so that's not a problem. But a speed-optimized deflate implementation has a decoding speed of >1GB/s, which is not feasible for truly adaptive AC/ANS. It might be technically possible with batch updates using _large_ batches (better to just encode the stats), or by only applying AC to rarely used special symbols (so most of the code would remain Huffman). However, LZ is not always applicable (it might not find matches in some data), so always beating deflate would also mean adaptive AC with >1GB/s, which is not realistic on current hardware. And encoding speed is hard to compare, since there're many different algorithms used to encode to the same deflate format. I'd say that it should be possible to make an LZ+AC/ANS coder which would be both faster and stronger than all zlib levels (zstd likely already qualifies).
    12 replies | 218 view(s)
  • cssignet's Avatar
    Yesterday, 20:34
    It could be somehow possible. Now, about how you compared stuff: what/how did you compare? Did you ask an external program to run 8 threads of pingo/ECT?
    166 replies | 40970 view(s)
  • Jarek's Avatar
    Yesterday, 20:20
    I don't know when the first prefix codes were used - definitely before Huffman, but Morse code is not prefix-free; instead it uses three types of gaps for separation (this redundancy is very helpful for synchronization and error correction). Regarding context-dependence and adaptivity, they are separate concepts:
    - a fixed Markov process is an example of the former; an ARMA model uses a few previous values as context, lossless image compression uses the already decoded neighboring pixels as context, etc.
    - adaptation usually refers to evolving models/distributions for non-stationarity, e.g. the independent variables of an evolving probability distribution in adaptive prefix/AC/ANS coding; it can be e.g. a Gaussian/Laplace distribution with evolving parameters (e.g. https://arxiv.org/pdf/2003.02149 ).
    We can combine both, e.g. in a Markov process with online-optimized parameters, adaptive ARMA, LSTM etc.
    12 replies | 218 view(s)
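    A short C++ sketch combining the two concepts separated above: the context is the previous byte (a fixed order-1 Markov structure), while the probability stored per context keeps adapting, so the model also tracks non-stationarity. The input, the binary event, and the learning rate are arbitrary choices for illustration.
        // One adaptive probability per context (context = previous byte).
        #include <cmath>
        #include <cstdint>
        #include <cstdio>
        #include <cstring>

        int main() {
            const char* text = "abababababcbcbcbcbcb";   // toy input, statistics change midway
            double p_b[256];                             // per-context probability that the next char is 'b'
            for (int c = 0; c < 256; c++) p_b[c] = 0.5;
            double bits = 0;
            uint8_t ctx = 0;
            for (const char* s = text; *s; s++) {
                int is_b = (*s == 'b');                               // toy binary event
                bits += -std::log2(is_b ? p_b[ctx] : 1.0 - p_b[ctx]); // coding cost under this model
                p_b[ctx] += (is_b - p_b[ctx]) * 0.2;                  // adapt only this context's estimate
                ctx = (uint8_t)*s;                                    // context for the next prediction
            }
            std::printf("coded cost: %.1f bits for %zu symbols\n", bits, std::strlen(text));
        }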
  • Shelwien's Avatar
    Yesterday, 20:19
    Shelwien replied to a thread 7-zip plugins in Data Compression
    > Iso7z support for bin files is limited to 4gb
    That could be because 7z's default solid mode has 4g blocks. Try adding -ms=1t or something.
    10 replies | 3158 view(s)
  • Shelwien's Avatar
    Yesterday, 20:18
    Maybe like this?
    68 replies | 80373 view(s)
  • Shelwien's Avatar
    Yesterday, 20:13
    The description says "texts from Project Gutenberg in UTF-8 characters, so it’s essentially ASCII", not that it is ASCII.
    > It contains characters left over from UTF-8
    That's intentional in this case. No need to make it too simple.
    39 replies | 2240 view(s)
  • Darek's Avatar
    Yesterday, 19:50
    Darek replied to a thread Paq8pxd dict in Data Compression
    Scores of paq8pxd_v89_40_3360 on my testset. In general 640 bytes of improvement. Always something!
    945 replies | 319023 view(s)
  • Jyrki Alakuijala's Avatar
    Yesterday, 19:26
    Even when they are called Huffman codes, these are not Huffman codes. They are static prefix codes. Static prefix coding that is not adapted to a particular use was used in the 1840s, when the Morse, Gerke and ITU codes were introduced. Huffman coding is an improvement over static prefix coding. Huffman coding follows a process introduced in 1952: http://compression.ru/download/articles/huff/huffman_1952_minimum-redundancy-codes.pdf A Huffman code is an optimal prefix code -- when not including the coding length of the code itself.
    "Dynamic Huffman" in deflate vocabulary means a normal Huffman code. Deflate does not have actual dynamic (adaptive) Huffman codes, despite the normal Huffman codes being called dynamic. A real adaptive (or dynamic) Huffman code builds a more accurate version as data is being transmitted. Overall this gives (roughly) equal density to sending the codes separately, but the decoder can receive the first symbols of the stream with less latency. The cost for decompression is quite big, so this method is usually avoided. There are some more sane variations of adaptive prefix coding that are slightly cheaper, but they don't follow the Huffman process for rebuilding the code. These, too, tend to be too complex and slow for practical use.
    In a canonical prefix code -- including the canonical Huffman code -- the codes are lexically ordered by length. You only communicate the lengths and the codes are implicit.
    Going from prefix coding to arithmetic coding is not a big improvement on its own. One needs to have context, too, usually for adaptive coding. Adaptation is based on context; often the probabilities of a few symbols are changed based on context. Traditionally prefix coding is not compatible with the concept of context. I believe that the first practical implementations of contextful prefix coding were in WebP lossless and Brotli. In JPEG XL we use that context-based method for ANS, too.
    12 replies | 218 view(s)
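    A compact C++ sketch of the canonical reconstruction described above: only the code lengths are communicated, and the codes follow by assigning them in order of (length, symbol), RFC 1951-style. The example lengths are made up (they satisfy the Kraft equality).
        // Rebuild a canonical prefix code from code lengths alone.
        #include <cstdio>

        int main() {
            const int N = 6;
            int len[N] = {2, 2, 2, 3, 4, 4};           // transmitted code lengths (example)
            int bl_count[16] = {0};
            for (int s = 0; s < N; s++) bl_count[len[s]]++;
            int next_code[16] = {0}, code = 0;
            for (int l = 1; l < 16; l++) {             // smallest code of each length
                code = (code + bl_count[l - 1]) << 1;
                next_code[l] = code;
            }
            for (int s = 0; s < N; s++) {              // assign codes in symbol order
                int c = next_code[len[s]]++;
                std::printf("symbol %d: length %d, code ", s, len[s]);
                for (int b = len[s] - 1; b >= 0; b--) std::putchar('0' + ((c >> b) & 1));
                std::putchar('\n');
            }
        }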
  • Scope's Avatar
    Yesterday, 19:11
    I don't think so, just different configurations and different conditions. When there is time (or rather a free CPU), maybe I will run more thoughtful tests. On a single file, Pingo is faster, but when processing multiple files in parallel with all threads, I noticed that on my configuration ECT is often faster.
    05vnjqzhrou31.png
    powershell Measure-Command:
    ECT -1 -s: TotalMilliseconds : 3573,3911
    ECT -5 -s: TotalMilliseconds : 9823,7649
    ECT -5 --mt-deflate -s: TotalMilliseconds : 7871,9845
    ECT_PGO -1 -s: TotalMilliseconds : 3383,1625
    ECT_PGO -5 -s: TotalMilliseconds : 9509,1655
    ECT_PGO -5 --mt-deflate -s: TotalMilliseconds : 7393,5582
    pingo -s0 -strip: TotalMilliseconds : 2681,7484
    pingo -s5 -strip: TotalMilliseconds : 8587,4876
    ProcProfile64:
    ECT -1 -s: User Time : 3.593s, Kernel Time : 0.046s, Process Time : 3.639s, Clock Time : 3.646s, Working Set : 42440 KB, Paged Pool : 125 KB, Nonpaged Pool : 9 KB, Pagefile : 41500 KB, Page Fault Count : 12840
    ECT -5 -s: User Time : 9.437s, Kernel Time : 0.421s, Process Time : 9.858s, Clock Time : 9.879s, Working Set : 100856 KB, Paged Pool : 125 KB, Nonpaged Pool : 10 KB, Pagefile : 102556 KB, Page Fault Count : 510657
    ECT -5 --mt-deflate -s: User Time : 10.562s, Kernel Time : 0.546s, Process Time : 11.108s, Clock Time : 7.848s, Working Set : 167096 KB, Paged Pool : 125 KB, Nonpaged Pool : 14 KB, Pagefile : 180024 KB, Page Fault Count : 505302
    ECT_PGO -1 -s: User Time : 3.500s, Kernel Time : 0.046s, Process Time : 3.546s, Clock Time : 3.541s, Working Set : 42380 KB, Paged Pool : 125 KB, Nonpaged Pool : 9 KB, Pagefile : 41568 KB, Page Fault Count : 12830
    ECT_PGO -5 -s: User Time : 9.265s, Kernel Time : 0.421s, Process Time : 9.686s, Clock Time : 9.700s, Working Set : 100672 KB, Paged Pool : 125 KB, Nonpaged Pool : 10 KB, Pagefile : 102552 KB, Page Fault Count : 510700
    ECT_PGO -5 --mt-deflate -s: User Time : 10.203s, Kernel Time : 0.531s, Process Time : 10.734s, Clock Time : 7.485s, Working Set : 168432 KB, Paged Pool : 125 KB, Nonpaged Pool : 14 KB, Pagefile : 181772 KB, Page Fault Count : 502639
    pingo -s0 -strip: User Time : 1.890s, Kernel Time : 0.109s, Process Time : 1.999s, Clock Time : 1.992s, Working Set : 96100 KB, Paged Pool : 124 KB, Nonpaged Pool : 9 KB, Pagefile : 104164 KB, Page Fault Count : 115651
    pingo -s5 -strip: User Time : 8.328s, Kernel Time : 0.171s, Process Time : 8.499s, Clock Time : 8.524s, Working Set : 103628 KB, Paged Pool : 124 KB, Nonpaged Pool : 9 KB, Pagefile : 112428 KB, Page Fault Count : 171346
    166 replies | 40970 view(s)
  • maadjordan's Avatar
    Yesterday, 18:19
    maadjordan replied to a thread 7-zip plugins in Data Compression
    Iso7z support for bin files is limited to 4gb... larger files are cut at the 4gb limit even when using the 64-bit dll module.
    10 replies | 3158 view(s)
  • lz77's Avatar
    Yesterday, 18:13
    TS40.txt is not an ASCII file, not even a Latin-1 file. It contains characters left over from UTF-8 (with decimal codes 128, 145, 147, 225, 226). For example, see the next line after the line "Thank Heaven! . . . . Good-night." Or see the next word after "She learned of the great medicine,"
    39 replies | 2240 view(s)
  • Trench's Avatar
    Yesterday, 18:12
    compgt: zip is the standard now, while in the 90's it was not a standard in Windows, I think. Now you click on a zip file and it opens up like a regular folder to view. If you feel it takes a certain mindset to do compression, then maybe that is the problem: it's just that one mindset. Maybe a more abstract approach is needed to create something new. Many programmers want to be game programmers, thinking they can make something good to make money, yet almost all of those games have bad design. Everyone wants to be a one-man show and do it all, but it's hard to do even one skill right, let alone many.
    Gotty: Again you are right. Even Excel is somewhat of a programming language, despite being built into a program. Non data-compression programmers are a different, but not completely different, field. There is a program called Cheat Engine which modifies almost any program to act differently, and which doesn't take programming knowledge. Obviously it's silly, since it's mainly for games. Excel modifies things within the program to find patterns quicker than coding. Programmers as a whole are kind of rigid, since one has to be to follow the rules of programming, and that might be the case here - and why I stated that the ones that helped make compression, like Huffman, were not programmers but engineers. And Huffman theory is useless if not for programmers. Sometimes people look for the hardest solution when the simplest one is more effective.
    As I said before, everyone here is defining compression as the wrong thing when they say randomness is the issue, which it is not: it is the lack of patterns. All those fields you stated end up in the same place when we evaluate it like that. Everyone is trying to make something more complex, which would take up more CPU and memory and make the program more unusable. I assume you disagree, but disagreeing cuts your view off from another perspective. I might have something or I might not, but if I have something that can't be proven then it's useless, and if I don't have something that can test it out then again it's useless. All I am doing is trying to assist with the mindset, to think outside the box, since a lot of things stated in this forum come from like-minded people, and someone with "no data compression" background, as you call it, is trying to shine a light from another view.
    Just a thought: the best-selling burgers in the world are made by a real estate company that rents out stores, called McDonald's. The best-tasting burgers do not come close in sales. The reason McDonald's did so well is that it switched its mentality from being a burger company to being a real estate company.
    18 replies | 320 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 18:01
    Paq8sk32 has a better score and is faster than paq8sk29.
    139 replies | 10775 view(s)
  • Darek's Avatar
    Yesterday, 17:48
    Darek replied to a thread Paq8sk in Data Compression
    Ok and?
    139 replies | 10775 view(s)
  • compgt's Avatar
    Yesterday, 17:41
    When i was writing The Data Compression Guide, i included Adaptive Huffman Coding of Knuth by reading the paper of Vitter. My implementations for Knuth's and Vitter's algorithms are correct, exactly adhering to Vitter's description. Accordingly, Knuth's algorithm is used in the early program "compact". You can optimize my sample code or re-implement these algorithms. https://sites.google.com/site/datacompressionguide/fgk https://sites.google.com/site/datacompressionguide/vitter Yes, adaptive huffman coding can be done per block to avoid constant updates to the dynamic Huffman tree. There are compressors that do this as listed in comp.compression FAQ. Canonical Huffman coding and length-limited codes are studied extensively by A. Moffat.
    12 replies | 218 view(s)
  • Jyrki Alakuijala's Avatar
    Yesterday, 17:32
    My understanding is that WebP v2 lossless will be based on my original architecture for WebP v1 lossless, just with most parts slightly improved from the nine years of experience we had in between -- like changing prefix coding to ANS. Due to its heritage I expect it to be consistently a few % (perhaps 3-5 %) better than WebP v1 with roughly similar execution speed. WebP v2's lossy improvements are not incremental from WebP v1, it looks like a full redesign with some inspiration from the AV1/AVIF/VP10/VP9 family.
    166 replies | 40970 view(s)
  • Jarek's Avatar
    Yesterday, 16:57
    Probably nearly all Huffman coding in use is static and canonical (?), which reduces the size of the headers. Adaptive Huffman is interesting for theoretical considerations, but probably more costly than adaptive AC - it doesn't seem to make sense in practice (?). But adaptive AC/ANS is quite popular for exploiting non-stationarity, e.g. LZNA, RAZOR and lolz use adaptive rANS, reaching ~100MB/s/core for a 4-bit alphabet (the additional memory cost is negligible). https://sites.google.com/site/powturbo/entropy-coder
    12 replies | 218 view(s)
  • cssignet's Avatar
    Yesterday, 16:16
    Perhaps I am wrong, so if you have some spare time to try quick tests, I am curious to see the results from your computer. Could you compile ECT (with default stuff), and then run these on the original file (I randomly chose this file, but pick whatever is in the list):
    timer ECT -1 -s file.png
    timer ECT -5 -s file.png
    timer ECT -5 --mt-deflate -s file.png
    timer pingo -s0 -strip file.png
    Could you please paste the logs from each here? Thanks
    166 replies | 40970 view(s)
  • LucaBiondi's Avatar
    Yesterday, 15:35
    LucaBiondi replied to a thread Paq8pxd dict in Data Compression
    Hi Darek, this is the executable. Happy testing! paq8pxd_v89_40_3360.zip Luca
    945 replies | 319023 view(s)
  • SolidComp's Avatar
    Yesterday, 15:29
    Hi all – I have some questions about the different forms of Huffman coding, and where they're used, and I figured many of you would be able to fill in the blanks. Thanks for your help.
    Static Huffman: Does this mean 1) a tree generated from a single pass over all the data, or 2) some sort of predefined table independent of any given data, like one defined in a spec? I'm seeing different accounts from different sources. For example, the HPACK header compression spec for HTTP/2 has a predefined static Huffman table, with codes specified for each ASCII character (starting with five-bit codes). Conversely, I thought static Huffman in deflate/gzip was based on a single pass over the data. If deflate or gzip have predefined static Huffman tables (for English text?), I've never seen them.
    Dynamic/Adaptive Huffman: What's the definition? How dynamic are we talking about? It's used in deflate and gzip, right? Is it dynamic per block? (Strangely, the Wikipedia article says that it's rarely used, but I can't think of a codec more popular than deflate...)
    Canonical Huffman: Where is this used?
    By the way, do you think it would be feasible to implement adaptive arithmetic coding with less CPU and memory overhead than deflate? The Wikipedia article also said this about adaptive Huffman: "...the cost of updating the tree makes it slower than optimized adaptive arithmetic coding, which is more flexible and has better compression." Do you agree that adaptive arithmetic coding should be faster and with better ratios? What about the anticipated CPU and memory overhead? Thanks.
    12 replies | 218 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 15:08
    I just compared paq8sk32 with paq8sk29 on enwik8.
    139 replies | 10775 view(s)
  • Darek's Avatar
    Yesterday, 14:30
    Darek replied to a thread Paq8sk in Data Compression
    Do you have the same score for paq8sk23 or paq8sk28?
    139 replies | 10775 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 14:23
    How do I compile fp8 using this batch script?
    68 replies | 80373 view(s)
  • Scope's Avatar
    Yesterday, 14:04
    I can also add these results to the comparison (as tests on another configuration and number of threads) if they are done for all the other modes and optimizers. That was my main problem, because not all optimizers correctly processed the PNGs from the whole set (and if they skip a PNG after an error, they don't waste time optimizing it, so it's not a very honest speed result). Otherwise, I also tried to make it simple, accurate and fair; the only differences are that:
    - I tested on another configuration, a dedicated i7-4770K (Haswell, AVX2, 16GB RAM, Windows 10 64-bit)
    - additional optimization flags were used during compilation (but they were used equally on all open-source optimizers)
    - I ran the tests 3 times on the same set with the same optimizers to make sure there was no impact from sudden Windows background processes, HDD caching, swap, etc., and to get more accurate results
    - for a simpler and more convenient result, and since this is not the time spent on the whole set (for the reasons described above), the fastest result was taken as 1x, instead of a long numerical time value such as TotalMilliseconds : 2355561,4903.
    Mostly for the same reasons I didn't want to compare speed results at all, because they may depend on different configurations, CPU, even HDD speed (especially for fast modes), although they still give approximate data about the speed of different optimizers. Yes, some files are deleted over time; this is one of the disadvantages of using public images, but my whole idea was to compare real public images rather than specialized test sets. Also, if needed, I can upload and provide a link (in pm) to the set of files on which I was comparing the speed.
    166 replies | 40970 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 13:57
    Paq8sk32 - improved text model with a new hash function. The result on enwik8 using -s6 -w -e1,english.dic is: Total 100000000 bytes compressed to 16285631 bytes. Time 20340.22 sec, used 1290 MB (1352947102 bytes) of memory. Faster than paq8sk29. enwik9 is running. Here is the source code... here is the binary too.
    139 replies | 10775 view(s)
  • kaitz's Avatar
    Yesterday, 12:16
    kaitz replied to a thread Paq8pxd dict in Data Compression
    I can't see how this can be done. The last version uses less memory, 5 GB or less (max option), I can't remember exactly. And the size diff for enwik9 is 50 KB (worse). So it is not only about memory. There are no new (+) contexts in the wordmodel since v80, only preprocessing for enwik. Next time I will probably work on this, in Feb. edit: +RC, modSSE
    945 replies | 319023 view(s)
  • Darek's Avatar
    Yesterday, 10:59
    Darek replied to a thread Paq8pxd dict in Data Compression
    @LucaBiondi - could you attach the exe file of the modified paq8pxd_v89? According to the benchmark procedure - a good idea in my opinion, but there should be the same benchmark file to test it - maybe one procedurally generated by the program before the test starts?
    @Kaitz - I have an idea, but maybe it is silly or not doable. Is it possible to use some sort of very light compression of the program's memory during use? As I understand it, the majority of memory is used for some kinds of trees or other data structures. Is it possible to use lightly compressed data which would virtually simulate more memory usage? I think there could still be room for improvement for the biggest files (enwik8/9) if we could use more memory, but maybe there is no need to use more physical memory and instead a trick like this could be made? Of course it would be more time-consuming, but maybe it could be worth it...
    945 replies | 319023 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 10:20
    Let me try, but I can't promise anything because I am not a programmer :)
    33 replies | 1573 view(s)
  • lz77's Avatar
    Yesterday, 10:16
    > TS40.txt:
    > 132,248,557 bytes, 8.055 sec. - 0.544 sec., zstd -7 --ultra --single-thread
    > 130,590,357 bytes, 10.530 sec. - 0.528 sec., zstd -8 --ultra --single-thread
    What ratio does zstd show after preprocessing, meaning 40/2.5=16 sec. for compression + decompression, 5% off? What ratio will be the best within 16 seconds at all? ........... lzturbo seems to be the winner in the Rapid compression.
    33 replies | 1573 view(s)
  • Darek's Avatar
    Yesterday, 09:35
    Yes, because this option is made for enwik8/9 :)
    70'197'866 - TS40.txt, -x15 -e1,english.dic, by Paq8sk30, time - 73'876,51s - good score, bad time - paq8sk23 should be about 20x faster to meet the contest criteria. Could you try to add the use of more threads/cores?
    33 replies | 1573 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 04:12
    On enwik8/9 there is no error when using the -w option.
    33 replies | 1573 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 03:21
    paq8pxd_v89, when using the -w option, gives the error message "Transform fails at 333440671, skipping...", so it detects ts40.txt as default, not bigtext wrt, and that causes the worse compression ratio.
    33 replies | 1573 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 03:04
    So the best score is with -x15 -e1,english.dic. @Sportman, could you add it to the GDCC public test set results?
    33 replies | 1573 view(s)
  • cssignet's Avatar
    Yesterday, 00:53
    I would suggest a more simple, accurate, verifiable and *fair* test for time comparison: pingo/ECT binaries with the same compiler/flags, cold-start, running on a dedicated resource (FX-4100 @ 3.6 Ghz — 8GB RAM — Windows 7 64-bit), tested on the files found in the PNG tab (aside note: I could not grab these: https://i.redd.it/5lg9uz7fb7a41.png https://i.redd.it/6aiqgffywbk41.png https://i.redd.it/gxocab3x91e41.png https://i.redd.it/ks8z85usbg241.png https://i.redd.it/uuokrw18s4i41.png so input would be 1.89 GB (2 027 341 876 bytes) - 493 files).
    pingo (0.99 rc2 40) - ECT (f0b38f7 (0.8.3)) (with -strip)
    multi-processing (4x):
    ECT -1 --mt-file: Kernel Time = 14.133 = 1%, User Time = 3177.709 = 390%, Process Time = 3191.842 = 392%, Virtual Memory = 438 MB, Global Time = 813.619 = 100%, Physical Memory = 433 MB
    pingo -s0: Kernel Time = 86.518 = 16%, User Time = 1740.393 = 328%, Process Time = 1826.912 = 344%, Virtual Memory = 1344 MB, Global Time = 530.104 = 100%, Physical Memory = 1212 MB
    ECT -5 --mt-file: Kernel Time = 1557.482 = 43%, User Time = 9361.869 = 259%, Process Time = 10919.352 = 303%, Virtual Memory = 1677 MB, Global Time = 3601.090 = 100%, Physical Memory = 1514 MB
    pingo -s5: Kernel Time = 144.550 = 6%, User Time = 6937.879 = 317%, Process Time = 7082.429 = 324%, Virtual Memory = 1378 MB, Global Time = 2183.105 = 100%, Physical Memory = 1193 MB
    file per file:
    ECT -1: Kernel Time = 20.326 = 0%, User Time = 2963.472 = 93%, Process Time = 2983.799 = 99%, Virtual Memory = 283 MB, Global Time = 2984.405 = 100%, Physical Memory = 282 MB
    pingo -s0 -nomulti: Kernel Time = 68.468 = 4%, User Time = 1443.711 = 95%, Process Time = 1512.180 = 99%, Virtual Memory = 905 MB, Global Time = 1513.683 = 100%, Physical Memory = 887 MB
    ECT -5 --mt-deflate: Kernel Time = 886.538 = 14%, User Time = 8207.743 = 134%, Process Time = 9094.281 = 149%, Virtual Memory = 1000 MB, Global Time = 6083.433 = 100%, Physical Memory = 916 MB <-- multithreaded
    pingo -s5 -nomulti: Kernel Time = 109.107 = 1%, User Time = 5679.091 = 98%, Process Time = 5788.198 = 99%, Virtual Memory = 978 MB, Global Time = 5789.232 = 100%, Physical Memory = 980 MB <-- *not* multithreaded
    The regular -sN profiles in pingo go more for small/avg-size paletted/RGBA images. If someone would seek speed over space, -sN -faster could be used instead. On some samples it could still be competitive: https://i.redd.it/05vnjqzhrou31.png (13 266 623 bytes)
    ECT -1 (out: 10 023 297 bytes): Kernel Time = 0.140 = 2%, User Time = 5.928 = 97%, Process Time = 6.068 = 99%, Virtual Memory = 27 MB, Global Time = 6.093 = 100%, Physical Memory = 29 MB
    pingo -s0 (out: 8 777 351 bytes): Kernel Time = 0.280 = 8%, User Time = 2.870 = 90%, Process Time = 3.151 = 99%, Virtual Memory = 98 MB, Global Time = 3.166 = 100%, Physical Memory = 90 MB
    pingo -s0 -faster (out: 9 439 005 bytes): Kernel Time = 0.124 = 6%, User Time = 1.825 = 92%, Process Time = 1.950 = 99%, Virtual Memory = 86 MB, Global Time = 1.965 = 100%, Physical Memory = 78 MB
    166 replies | 40970 view(s)