Activity Stream

  • compgt's Avatar
    Today, 08:38
    @SvenBent, i am telling the truth. I just want a piece of the Hollywood and tech billion$. LOL
    14 replies | 1169 view(s)
  • compgt's Avatar
    Today, 08:26
    James, Apple is now, just now, releasing their SoC? This is already 2020, we may not have anticipated this, but maybe. Of course, i won't accurately say that these technologies were designed by us in perfect pristine form, unchanged from the 1970s. I understand that they're updating our technologies, to meet the changing needs of the times. I remember designing, improving AMD and Nvidia technologies. Even technologies on data centers or cloud computing, maybe even DeepMind. They were betting on my research all the time. Because i led my clan's science research. Because i was like a "central hub" among scientist networks. I own Microsoft, IBM, Intel, and AMD too. I was just betrayed and denied in the end. Don't get intimidated by these pronouncements of mine. I am just telling the truth. I will not be making noise like this, if i am at least billionaire in Hollywood and tech. The truth is that i made Apple, i made it happen. I chose Steve Jobs. Others knew me as "the little Steve Jobs". I was Wozniak's Boss. My family designed the early Apple computers because we co-own Apple. I even made the movie "Jobs (2013)", not just the Marvel movies. It was Cold War, we knew flight technologies, weapons tech, even nukes. Consider me the almost one-man Standards Committee of most science and technologies. We were Starfleet, and we were taking charge of the modern science and technologies for the modern World Peace that we worked upon.
    14 replies | 1169 view(s)
  • SvenBent's Avatar
    Today, 07:50
    Seriously get help. Something is not right inside you
    14 replies | 1169 view(s)
  • moisesmcardona's Avatar
    Today, 00:47
    A few years ago, there was a cloud service called Bitcasa. They seemed to have a hash table of some sort. Uploaded files would be split into chunks (I believe 512 KB chunks), and a chunk would only be uploaded if it didn't already exist on their servers, basically reducing redundant data for stuff like legally purchased music or videos, where several users would otherwise store the same data over and over again. This saved bandwidth on the user side as well, but required significant I/O due to the file chunking. Not to mention the NTFS Master File Table growing a lot! There was no compression performed.
    17 replies | 882 view(s)
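The chunk-level deduplication described above can be sketched briefly. Bitcasa's actual protocol was never published, so this is a minimal illustration only, assuming fixed 512 KB chunks and SHA-256 content hashes; the names (`upload`, `download`, `server_store`) are hypothetical.

```python
import hashlib

CHUNK_SIZE = 512 * 1024  # 512 KB chunks, as the post suggests

# Stand-in for the provider's content-addressed store: chunk hash -> chunk bytes
server_store = {}

def upload(data: bytes) -> list:
    """Split data into fixed-size chunks; store only chunks the server lacks.
    Returns the manifest (list of chunk hashes) that reconstructs the file."""
    manifest = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in server_store:   # dedup: identical chunks uploaded once
            server_store[digest] = chunk
        manifest.append(digest)
    return manifest

def download(manifest: list) -> bytes:
    """Reassemble a file from its chunk hashes."""
    return b"".join(server_store[d] for d in manifest)
```

With this scheme, two users uploading the same purchased song would add its chunks to the store only once; each user keeps just a small manifest of hashes.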
  • JamesWasil's Avatar
    Yesterday, 22:02
    Apple is now releasing their own SoC (system on a chip). If your claims are correct, then you would already know how this chip works and its undocumented modes, and could show us schematics and partial materials here that would prove it was developed during the Cold War, convincing everyone that you were slighted by big tech companies rather than needing a doctor's prescription for medication. Are you able to provide proof of this? Even partial photos that cannot be used for anything tangible, but are visible and verifiable enough to prove that you had it first? (Since no one but Apple currently has these chips, you could draw or outline how they work, and the differences between theirs, with proprietary modifications, and AMD/Intel, ARM, and RISC-V CPUs, before Apple ever tells the public or private engineers at firms about them. And of course you would already know what these modifications and proprietary features are, because you already invented them! Yes?) Can you provide this as evidence to prove your claims?
    14 replies | 1169 view(s)
  • Sportman's Avatar
    Yesterday, 20:46
    Dutch investor Eckart Wintzen (https://web.archive.org/web/20090713021317/http://www.extent.nl/about-eckart/) said to Dutch inventor Jan Sloot in 1998 "demo or die", because he wanted proof that the chip card put into Sloot's device was the source of the video playback information. Even if you show a working invention, you need to provide all requested proof to get an investment. How do you want to back up your claims? Did you patent them, or deposit them at a notary office, as Sloot did? Did you give demos with witnesses who are still alive, as Sloot did? I don't think you stand any chance against a big company without rock-solid proof. The universe is one big information field, and at the source we are all one. Everything we will invent in a human lifetime already exists; it's all about tuning in to a hint or idea in the information field. I noticed that when I invented something, one or more other people invented a similar thing around the same time. In the beginning I thought they had copied me, but after contacting them I discovered it was done totally independently, while being very similar. I once went to the local patent office to check an idea; they could not find it in my country, but in another country the same invention had been patented some years before, with even a drawing that was a simple version of my drawing, so also very similar. Ideas exist for everybody, but you need focus to tune in, and heavy labor to build a product and a company. I remember when I was young, James Dyson showing his vacuum cleaner idea with a container for dust, but nobody wanted to do anything with it. Ten years later he built and sold them with his own company, which is today a very big company. There is no free lunch.
    14 replies | 1169 view(s)
  • Darek's Avatar
    Yesterday, 20:02
    Darek replied to a thread Paq8pxd dict in Data Compression
    300KB is a very, very good score! The first four submitted places for LTCB are:
    cmix v18 = 115,714,367
    phda9 1.8 = 116,544,849
    nncp 2019-11-16 = 119,167,224
    paq8pxd_v48_bwt1 -s14 = 126,183,029
    and there are paq8pxd_v89 and paq8sk23 still to be submitted. paq8sk23 score = 122'364'274 (but 2.5x higher time and 4 GB more memory used).
    955 replies | 319546 view(s)
  • moisesmcardona's Avatar
    Yesterday, 16:41
    moisesmcardona replied to a thread Fp8sk in Data Compression
    You put the current copyright year. See paq8px and paq8pxd.
    10 replies | 438 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 16:14
    I don't know the exact copyright year of this. Is it 2008 or 2018? I have credited Matt Mahoney.
    10 replies | 438 view(s)
  • LucaBiondi's Avatar
    Yesterday, 16:09
    LucaBiondi replied to a thread Paq8pxd dict in Data Compression
    Thank you Darek! Next time we should go under 123.000.000 for enwik9.. Nearly 300k of improvement, very good!!! P.s. what is the actual record for enwik9?
    955 replies | 319546 view(s)
  • moisesmcardona's Avatar
    Yesterday, 16:03
    moisesmcardona replied to a thread Fp8sk in Data Compression
    The copyright year was removed from the executable.
    10 replies | 438 view(s)
  • Darek's Avatar
    Yesterday, 15:48
    Darek replied to a thread Paq8pxd dict in Data Compression
    enwik scores for paq8pxd v89, Luca's modification:
    15'728'903 - enwik8 -s15 -w -e1,english.dic by Paq8pxd_v89
    15'655'526 - enwik8 -x15 -w -e1,english.dic by Paq8pxd_v89
    123'301'984 - enwik9 -x15 -w -e1,english.dic by Paq8pxd_v89
    15'728'903 - enwik8 -s15 -w -e1,english.dic by paq8pxd_v89_40_3360, change: 0,00%
    15'654'147 - enwik8 -x15 -w -e1,english.dic by paq8pxd_v89_40_3360, change: -0,01%
    123'013'220 - enwik9 -x15 -w -e1,english.dic by paq8pxd_v89_40_3360, change: -0,23% - very good change: no more memory usage, and a 3% compression time improvement!
    955 replies | 319546 view(s)
  • Scope's Avatar
    Yesterday, 15:47
    I haven't tested the speed yet, but I've compared the overall size on the same test set. (Personally, the most important thing for me is maximum efficiency at the slowest speed, and I would be interested in an even slower mode like -sb but designed for large images; at faster speeds, trading a little efficiency for higher speed is reasonable.)
    498 files (2 053 230 418 bytes)
    pingo (25) -s0 -strip 1 623 898 128
    pingo (25) -s5 -strip 1 574 034 276
    pingo (25) -sa -strip 1 555 294 498
    pingo (44) -s0 -strip 1 623 898 128
    pingo (44) -s5 -strip 1 577 248 424
    pingo (44) -sa -strip 1 555 295 235
    pingo (45) -sa -strip 1 555 294 330
    proto -s0 -strip 1 624 029 494
    proto -s5 -strip 1 567 973 360
    proto -sa -strip 2 053 230 418
    477 replies | 127275 view(s)
  • compgt's Avatar
    Yesterday, 15:00
    ivan, do you want me to sing for you? No, never mind, i hardly sing these days. Just listen to American songs, they're mostly my compositions, or i was co-composer. I wonder if i composed Russian songs during the Cold War.
    14 replies | 1169 view(s)
  • SolidComp's Avatar
    Yesterday, 14:14
    How did you get the 400% CPU figure? How does network bitrate convert to CPU utilization percentage?
    12 replies | 683 view(s)
  • maadjordan's Avatar
    Yesterday, 11:26
    maadjordan replied to a thread 7-zip plugins in Data Compression
    I will try this and post result
    12 replies | 3337 view(s)
  • maadjordan's Avatar
    Yesterday, 11:25
    maadjordan replied to a thread 7-zip plugins in Data Compression
    check your inbox
    12 replies | 3337 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 04:51
    mixed40.dat using fp8sk3 -8: Total 400000000 bytes compressed to 46685762 bytes.
    block40.dat using fp8sk3 -8: Total 399998976 bytes compressed to 61313849 bytes.
    40 replies | 2501 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 04:43
    Fp8sk3 using the -8 option on mixed40.dat (GDCC public test set file): Total 400000000 bytes compressed to 46685762 bytes. Using the -8 option on block40.dat (GDCC public test set file): Total 399998976 bytes compressed to 61313849 bytes. Here are the source code and executable files.
    10 replies | 438 view(s)
  • cssignet's Avatar
    Yesterday, 02:16
    the changes would be implemented atm (rc2 44). i did not test it widely though
    477 replies | 127275 view(s)
  • Trench's Avatar
    Yesterday, 01:56
    As stated on wikipedia: "GIF images are compressed using the Lempel–Ziv–Welch (LZW) lossless data compression technique to reduce the file size without degrading the visual quality. This compression technique was patented in 1985. Controversy over the licensing agreement between the software patent holder, Unisys, and CompuServe in 1994 spurred the development of the Portable Network Graphics (PNG) standard. By 2004 all the relevant patents had expired." "Welch filed a patent application for the LZW method in June 1983. The resulting patent, US 4558302, granted in December 1985" "when the patent was granted, Unisys entered into licensing agreements with over a hundred companies" https://en.wikipedia.org/wiki/GIF
    Did they make money? Can you, if you did the same? If it's a small difference, no; if it's a big one, maybe. "In January 2016, Telegram started re-encoding all GIFs to MPEG4 videos that "require up to 95% less disk space for the same image quality."" Well, MPEG4 doesn't seem as convenient or as fast, and a bigger hard drive is easier to buy than time.
    In other fields, some people create things and big companies pay them millions for the patent, and the company never sells the thing it bought, since there is more money to be made in the thing it already has. Other programs are free for public use but corporations must pay. Some use the honor system, while others give a free demo for a few days, a limited number of tries, or one use per day. So many questions arise; a few are: Is there a legal form one must fill out to get these assurances? Even if you do make something, won't it hurt others who try to improve it? How long will the patent or payment last? Would it be better to have no patent and keep it secret for personal use? If you do make money, will you use it to make something else better, or just screw everyone else, like the rich who let money go to their heads and destroy nations for their egos?
    You can't make money giving money away, and you can't go forward if you spend your time making something good but can't afford to feed yourself. If Nikola Tesla had made money, or had plenty of money, more would have been done. No matter how smart you are, you can't make money if you don't have money, unless you are smart enough to make money and connections, which make the world go round; then maybe you can influence the world with one thought, while others think they can influence it with ideas alone. As Christ said, watch out for "casting pearls before swine" (Matthew 7:1-6). But a lot of things in the Bible were stated by the ancient Greeks, as was much of what people have now. Many push away the one they came to the dance with, to dance with another, and then wonder why it did not end well.
    In ancient Greece only the rich paid taxes, IF they wanted to. Many wanted to pay taxes, to help their fellow man and to help their status and business as well. But the rich mainly went to war too. Now it's backwards: the poor die for the rich, the non-rich pay the most taxes, the rich give as little as possible, and when they give, it's for tax breaks; the rich get tax money for their businesses. People seem to praise some of the rich even while being ripped off by them. Maybe the masses like being ripped off, and the rich give them what they want, since people are conditioned that way. The rich will get richer and focus only on money, so if you feel you are morally right and give things away for free, don't blame them if they profit off you; you can only blame yourself. I am not saying don't give things away; I am saying don't hurt yourself and others. Play it smart: if you are smart enough to make something, find someone else smart enough to protect you. But don't hurt the masses that can benefit from your idea. It's like all those big tech companies getting rich from you using their products, since they make money from people visiting the site.
    You can be part of the problem even if you disagree with them. Sure, they will make their money without you, but the mentality of the masses is the issue. Maybe that can never change, but maybe you can help by using their tactics against them, if you have the capability. Or maybe resistance is futile and you give up everything. As the ancient Greeks said, leisure time is for learning something new.
    0 replies | 43 view(s)
  • Trench's Avatar
    4th July 2020, 23:11
    James: Cool. I thought about that too, replacing the most used with a smaller one and the least used with a bigger one, but in another way. It's also kind of used in coding in some ways, I think.
    Gotty: Agreed, nice associations; my wording was wrong and I emphasized it too much. But that was a side comment about similarities from one field to another. The main point, which was not addressed, is that programmers need completely different fields for perspective. As for randomness, you are right for the most part. When you open a combination lock you have a limited number of patterns to put in; the more digits, the more combinations. With 1 binary digit you have 50% odds; with 2 binary digits there are 4 combinations; with 3 binary digits, 8; you get the idea. I don't know if you saw my other post about randomness, but there I explained it in more detail with examples. In short, a computer does not know the difference between random and ordered data; we define the difference with formulas, in terms of what we understand. Maybe you are 100% right, but for now I do not fully agree. Also, sorry, but I disagree with discouraging others from working on random files. You have to push yourself to achieve something harder, which makes everything else feel easier. Random-file compression is the future, I feel, even though almost no one sees it. https://encode.su/threads/3338-Random-files-are-compressed-all-the-time-by-definition-is-the-issue I forgot my programming 20 years ago and only do simple things like HTML, Excel, or hex editing. It's a different mindset, dealing with other things and being away from coding.
    Compgt: Very interesting comments. Do you have proof that it would be an issue? If you can, you could at least make one for yourself and have a dead man's switch. I made a post about that too. https://encode.su/threads/3346-No-hope-for-better-compression-even-if-possible People think of the positive aspects of finding the ultimate compression, but many ignore or fail to imagine the negative side of it. Good programmers, but not practical. What if you were in charge of a nation's GDP, of a company's fiduciary duty, of the livelihood of others? 1000 steps forward, 2000 steps back. If one is going to release fire, one had better be able to control it. This forum is for compression, but the balance involves more than that; it isn't talked about, since again this forum is just for file compression. It's hard to balance many aspects. Yes, they benefit from the occurrence of patterns, and that's the edge you take and exploit, just as a boxer exploits an opponent's weakness; the same goes for coding. As for making money on their ideas: well, how did that work out for GIF? I should make a new topic, since I feel many are holding back out of fear, money, etc. Anyway, I suggest people do what Christ said: be like a child, to at least start from the beginning and understand how children learn. The obvious is not so obvious. I talk vaguely because I am trying to make others think about it, and I don't like to say much about it.
    20 replies | 429 view(s)
  • cssignet's Avatar
    4th July 2020, 20:32
    these are the results i expected. i planned changes for rc2 45, as it could require more trials. thanks for your tests, they were very useful
    477 replies | 127275 view(s)
  • compgt's Avatar
    4th July 2020, 18:39
    compgt replied to a thread 2019-nCoV in The Off-Topic Lounge
    Update, from me: Coronavirus is not new. We knew about it in the 1970s. In the end, i had many thought-up vaccines for coronaviruses, even for mutations. Many were following me, asking for my inputs. Because researcher groups were tipping me on their results too, even from on-the-spot lectures. But I recall a group deliberately brainstorming powerful coronaviruses mutations in the 1970s Cold War, maybe in early EDSA, QC, Philippines. They were the bio-weapons type of guys, who would willingly monetize on coronaviruses vaccines too. Now, this year 2020, many nations are offering billion$ worth for funding covid-19 vaccine research, to find an immediate vaccine. Need i say that the vaccines "hydroxychloroquine, avigan, and remdesivir" etc. were probably designed, named or suggested by me with my Ramizo relatives who were medical researchers saving the world from coronaviruses or bio-weapons? In the 1970s, actually. Japan, my ally, can probably validate this claim. If i was the one who designed remdesivir, then how many million$ for me from designing this vaccine? But that is not needed if i get my Hollywood and tech billion$. (5/7/2020)
    41 replies | 3478 view(s)
  • Scope's Avatar
    4th July 2020, 18:35
    I did a test on the entire PNG set (after these multithreading changes are in Pingo, I will redo the speed results of all optimizers when I have time).
    498 files (2 053 230 418 bytes)
    ect -1 -strip --mt-file *.png
    Processed 498 files
    Saved 332.32MB out of 1.91GB (16.9717%)
    Kernel Time = 0.015, User Time = 0.000, Process Time = 0.015, Virtual Memory = 9 MB, Global Time = 342.581, Physical Memory = 8 MB
    proto -s0 -strip *.png
    proto - (213.94s): 498 files => 419141.53 KB - (20.90%) saved
    Kernel Time = 0.015, User Time = 0.000, Process Time = 0.015, Virtual Memory = 9 MB, Global Time = 213.986, Physical Memory = 8 MB
    477 replies | 127275 view(s)
  • cssignet's Avatar
    4th July 2020, 16:26
    here we are: proto is pingo, just with a fair comparison of threading. thanks to your test, i would fix that later. the chunks removal would be 'fixed' now. if you are still up for it, the last trial: proto -s0 -strip, on the set you have tested (~900 files) on your benchmark (heuristic test), and a comparison if possible with ECT -1 -strip. thanks!
    477 replies | 127275 view(s)
  • compgt's Avatar
    4th July 2020, 16:18
    Gotty, the idea is to inform the computer science world that these tech companies were started up or planned during the Cold War. American students are now reaping the benefits of world peace which we were working on in the 1970s to the 80s. World peace happened because those militaristic power groups were satisfied already they will own these billion$-companies that i was creating and planning. I negotiated for world peace immediately because i was also thinking of my ownerships, of course! The next idea is clearly for money purposes now (amidst this pandemic and economic recessions around the world). They owe me. Plain and simple.
    14 replies | 1169 view(s)
  • compgt's Avatar
    4th July 2020, 15:54
    Maybe there was a "Gotty" handle then, see there's a 'G' and 't' in Gotty? But i can't recall fully, his work on paq might be new indeed, done at present realtime, not 1980s. ivan2k2, i'm fine. Maybe even finer than you. I'm remembering right? :)
    14 replies | 1169 view(s)
  • Jyrki Alakuijala's Avatar
    4th July 2020, 15:25
    Entropy coders tend to run at less than 10 Gbit/s in software. If you have a 10 GbE network connection and 4:1 compression, you need 400% of a CPU core to fill the 10 Gbit/s network, because the coder has to process the 40 Gbit/s uncompressed stream. 1% is about 3 orders of magnitude off. It might be possible with relatively simple hardware. Still, you'd be using more than 10% of the memory bus just for storing the decompressed data. So, no, it is not possible with 1%.
    12 replies | 683 view(s)
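The 400% figure above follows from simple arithmetic. Here is a tiny sketch of that calculation, assuming (as the post does) a codec throughput of about 10 Gbit/s per core; the function name is illustrative.

```python
def cores_needed(link_gbits: float, ratio: float, codec_gbits_per_core: float) -> float:
    """CPU cores needed for a codec to keep a compressed link saturated.
    The codec must process the *uncompressed* stream: link rate x ratio."""
    uncompressed_gbits = link_gbits * ratio
    return uncompressed_gbits / codec_gbits_per_core

# 10 GbE link, 4:1 compression, ~10 Gbit/s/core entropy coding:
print(cores_needed(10, 4, 10) * 100)  # -> 400.0 (% of one CPU core)
```

At 1% CPU the same link would require a codec sustaining 4000 Gbit/s per core, which is roughly the "3 orders of magnitude off" in the post.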
  • Scope's Avatar
    4th July 2020, 15:10
    Same set, timer64 (Timer 14.00), pingo (43), proto. Hmm, not bad: proto is noticeably faster and more efficient at speed 5 (including -mp=8), and faster but slightly less efficient at speed 0 (maybe the HDD in this configuration has an effect, so I also tested the slower speed 5).
    First run:
    pingo.exe -s0 -strip *.png
    pingo - (181.04s): 236 files => 128508.08 KB - (15.57%) saved
    Kernel Time = 0.015, User Time = 0.000, Process Time = 0.015, Virtual Memory = 9 MB, Global Time = 181.088, Physical Memory = 8 MB
    pingo.exe -s5 -strip *.png
    pingo - (499.17s): 236 files => 148348.89 KB - (17.97%) saved
    Kernel Time = 0.015, User Time = 0.000, Process Time = 0.015, Virtual Memory = 9 MB, Global Time = 499.209, Physical Memory = 8 MB
    pingo.exe -s0 -strip *.png -mp=8
    pingo - (122.29s): 236 files => 128508.08 KB - (15.57%) saved
    Kernel Time = 0.015, User Time = 0.000, Process Time = 0.015, Virtual Memory = 9 MB, Global Time = 122.331, Physical Memory = 8 MB
    pingo.exe -s5 -strip *.png -mp=8
    pingo - (487.98s): 236 files => 148348.89 KB - (17.97%) saved
    Kernel Time = 0.031, User Time = 0.000, Process Time = 0.031, Virtual Memory = 9 MB, Global Time = 488.028, Physical Memory = 8 MB
    proto.exe -s0 -strip *.png
    proto - (97.50s): 236 files => 128497.20 KB - (15.56%) saved
    Kernel Time = 0.000, User Time = 0.015, Process Time = 0.015, Virtual Memory = 9 MB, Global Time = 97.553, Physical Memory = 8 MB
    proto.exe -s5 -strip *.png
    proto - (262.32s): 236 files => 151476.87 KB - (18.35%) saved
    Kernel Time = 0.031, User Time = 0.000, Process Time = 0.031, Virtual Memory = 9 MB, Global Time = 262.347, Physical Memory = 8 MB
    Second run:
    pingo.exe -s0 -strip *.png
    pingo - (164.35s): 236 files => 128508.08 KB - (15.57%) saved
    Kernel Time = 0.015, User Time = 0.000, Process Time = 0.015, Virtual Memory = 9 MB, Global Time = 164.398, Physical Memory = 8 MB
    pingo.exe -s5 -strip *.png
    pingo - (504.40s): 236 files => 148348.89 KB - (17.97%) saved
    Kernel Time = 0.031, User Time = 0.000, Process Time = 0.031, Virtual Memory = 9 MB, Global Time = 504.451, Physical Memory = 8 MB
    pingo.exe -s0 -strip *.png -mp=8
    pingo - (124.96s): 236 files => 128508.08 KB - (15.57%) saved
    Kernel Time = 0.031, User Time = 0.015, Process Time = 0.046, Virtual Memory = 9 MB, Global Time = 124.992, Physical Memory = 8 MB
    pingo.exe -s5 -strip *.png -mp=8
    pingo - (489.69s): 236 files => 148348.89 KB - (17.97%) saved
    Kernel Time = 0.000, User Time = 0.015, Process Time = 0.015, Virtual Memory = 9 MB, Global Time = 489.742, Physical Memory = 8 MB
    proto.exe -s0 -strip *.png
    proto - (97.89s): 236 files => 128497.20 KB - (15.56%) saved
    Kernel Time = 0.015, User Time = 0.000, Process Time = 0.015, Virtual Memory = 9 MB, Global Time = 97.936, Physical Memory = 8 MB
    proto.exe -s5 -strip *.png
    proto - (262.13s): 236 files => 151476.87 KB - (18.35%) saved
    Kernel Time = 0.000, User Time = 0.015, Process Time = 0.015, Virtual Memory = 9 MB, Global Time = 262.171, Physical Memory = 8 MB
    477 replies | 127275 view(s)
  • ivan2k2's Avatar
    4th July 2020, 15:00
    One day he will say something like: "i gave Gotty few ideas about paq8px, but he dont want to mention me anywhere". The only help he needs is better doctors and/or forum ban.
    14 replies | 1169 view(s)
  • Gotty's Avatar
    4th July 2020, 14:17
    I also don't know how to respond. Sorry, compgt. I know a girl who, when she was still attending kindergarten, told us many "true stories": that she owned dragons and extinguished fires (her father is a firefighter), and about many heroic acts she did. It was cute, but sometimes I could not handle her stories properly; I'm still unsure how to handle them. Now she is attending school, and these imaginations of hers are slowly fading away. I can understand: she needed attention, more than she got. Maybe because she had a younger brother? I don't know. Today she is more mature. She still needs more attention than an average kid. It's also true that she is extremely intelligent: a very smart girl, and very active. What you wrote, the style and content, reminds me of her stories. I have no idea why you would write what you wrote, and I don't know how to respond properly. Probably the reason is the same: you need attention, and don't get enough? But you are certainly not attending kindergarten, so I'm puzzled. And JamesWasil is also puzzled. We would tell you something useful, give you something that helps, but we don't know what. How can we help?
    14 replies | 1169 view(s)
  • cssignet's Avatar
    4th July 2020, 11:04
    from a user's pov, perhaps. from the dev's, it makes sense to compare stuff in the same scope. anyway, the huge speed difference here would not be about compiler/flags, but an issue which seems to be related to my tool itself (possibly mp, or heuristics that failed on the specific set you tested). if i could solve this, then pingo *should* be faster, as expected
    168 replies | 41436 view(s)
  • compgt's Avatar
    4th July 2020, 10:38
    I'm not trolling. I don't intend it. I am not sowing discord among you. I am simply stating here the real history of modern computing. That the Philippines was the main venue of its making. (We were a military superpower, my clan. We were NASA/Starfleet.) And it's about justice, me being excluded as co-owner of the tech giants, and me being unpaid of my Hollywood music and movies. I made Star Wars, Star Trek and Transformers, you think that's cool? I say it would be cooler if i am duly paid for making these movies. So i ask million$, even billion$, from them. https://grtamayoblog.blogspot.com/2018/10/hollywood-billion.html?m=1 Well, i'll say it again. I was a child genius in the 1970s to early 80s dictating on computer science matters. Don't be intimidated by genius keyword; well maybe "talented, prolific, precocious" child, though i failed in realtime college in the 1990s. I was designing Intel cpus, DOS and Microsoft Windows, Microsoft Office, Borland compilers, Visual Studio, etc. I moderated on the tech giants because i got shares in them and ownership bonds. I favored Microsoft, IBM and Intel. I accepted AMD and Cyrix, so they existed. This is the timetable i dictated for data compression and encryption too. I co-developed many ZIP compressors with my cousins and aunts (pkzip, WinZip and WinRAR), snappy, arj. The bakery bread-named compressors by Google were probably developed with me; dnd's lzturbo optimization techniques related to cpu processing, even ZSTD webpage in Facebook website is familiar, suggesting i co-developed zstd core algorithm too, like bzip2 and nanozip (i was the co-programmer, our ideas). I created and approved many ciphers too that i now rediscover in Wikipedia. Yet my ownership bonds were not honored! Understand me. I co-own Apple, Microsoft, IBM, Intel, AMD, Yahoo, Google, and Facebook in the 1970s. I remember i was designing the Facebook GUIs while making/shooting the "Star Trek: Enterprise" tv series. 
These companies, to me, were already existent as we were planning the timetable of their products and technologies. I negotiated among these companies for our future plans. I outlined the computer science timetable in Windows OSes, Visual compilers, and data compression and encryption technologies, among others. I co-own Apple that Wozniak considered me his Boss. I was genius designing computers and algorithms and software. We planned github and LinkedIn too. Then they all stole the corporate side of me, totally pushed me out. But in the 1980s they would still come to me to ask me on computer matters, exploiting me, especially on quantum computing which i and my family pioneered. The fact that they still went to me in the 1980s is proof i was a major player in tech. Others were there to steal my shares and ownerships, wanted to take videos of me saying i didn't own the companies, wanted me to sign papers or agreements with their constant threats. Already owning Apple, Microsoft, IBM, and Intel, i guess my plan was for me to officially be co-founder of Yahoo, Google, and Facebook at their official start up dates in the 1990s and 2000s which they just had to follow--should had followed! I moderated on everything tech that i co-own these tech giants! They're too hardcore greedy that they wanted my ownerships for themselves, brainwashed me. That sums it up. We're talking of immense wealth here. People will do anything to be in Google, for example. Naturally, the taller, bigger, and handsome Americans and Europeans look more credible than me. But modern computer science we designed from here, in the Philippines. See, the freedom to be and to do on the Net is spawning many geniuses' creative works in the many sciences around the globe now. And on top of all these achievements, i knew quantum computing will change the world.
    14 replies | 1169 view(s)
  • JamesWasil's Avatar
    4th July 2020, 04:59
    I'm not sure how to answer this poll. I remember receiving spam emails about this, but I'm not sure if this is supposed to be an effort at trolling or an exercise for mental health awareness?
    14 replies | 1169 view(s)
  • cssignet's Avatar
    4th July 2020, 01:21
    would you please do a few more tests, preferably on the same set, against pingo rc2 43 vs proto:
    pingo -s0 -strip
    pingo -s0 -strip -mp=8
    proto -s0 -strip
    proto -s0 -strip -mp=8
    it would be nice if you could post the log from pingo + timer/PP64 instead of pshell. thanks
    477 replies | 127275 view(s)
  • Jarek's Avatar
    4th July 2020, 00:16
    In https://sites.google.com/site/powturbo/entropy-coder the fastest rANS has ~600/1500 MB/s/core enc/dec ... AC 33/22 MB/s/core. But in https://github.com/jkbonfield/rans_static there is a faster AC: 240/150 MB/s/core. Generally ANS is much more convenient for vectorization due to the state being a single number; in AC one needs to process two numbers ... it might be doable, but I haven't seen it. Another story is SIMD 4-bit adaptive rANS - it started in LZNA, simultaneously updates 16 probabilities, and compares against all CDFs to find the proper subrange.
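To make the "state is a single number" point concrete, here is a toy rANS coder. This is a hypothetical minimal sketch (static frequencies, no renormalization, a Python big integer as the state), nothing like the vectorized implementations benchmarked at the links above:

```python
# Toy rANS: static frequencies, no renormalization (state grows as a
# Python big integer). Names are illustrative, not from any real library.

freqs = {'a': 3, 'b': 1}           # symbol frequencies, total M = 4
M = sum(freqs.values())
cum, c = {}, 0                      # cumulative frequency (start of each subrange)
for s in sorted(freqs):
    cum[s] = c
    c += freqs[s]

def encode(symbols, x=1):
    # rANS encodes in reverse so the decoder emits symbols in forward order
    for s in reversed(symbols):
        f, cf = freqs[s], cum[s]
        x = (x // f) * M + cf + (x % f)    # push symbol onto the state
    return x

def decode(x, n):
    out = []
    for _ in range(n):
        slot = x % M                        # which subrange the state falls into
        s = next(t for t in freqs if cum[t] <= slot < cum[t] + freqs[t])
        out.append(s)
        x = freqs[s] * (x // M) + slot - cum[s]   # pop symbol from the state
    return ''.join(out)

msg = 'aababa'
assert decode(encode(msg), len(msg)) == msg
```

A real implementation renormalizes the state into a fixed-width register and streams out bytes; having that whole state in one register per lane is what makes rANS SIMD-friendly, whereas AC carries a (low, range) pair.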
    12 replies | 683 view(s)
  • Sportman's Avatar
    3rd July 2020, 23:04
    Sportman replied to a thread Fp8sk in Data Compression
    TS40.txt: 79,571,976 bytes, 2,152.885 sec., fp8sk1 -8
    10 replies | 438 view(s)
  • CompressMaster's Avatar
    3rd July 2020, 21:37
    Thanks, but I checked the video test in full HD and I am unsatisfied with the quality. Thus I need something better - full HD video, and a rugged design such as Blackview.
    4 replies | 91 view(s)
  • Darek's Avatar
    3rd July 2020, 18:08
    Darek replied to a thread Paq8pxd dict in Data Compression
    Ok, then there are no changes...
    15'655'526 - enwik8 -x15 -w -e1,english.dic by Paq8pxd_v89, change: -0,01%
    15'654'147 - enwik8 -x15 -w -e1,english.dic by paq8pxd_v89_40_3360, change: -0,01%
    Looks like 14KB of gain for enwik9...
    955 replies | 319546 view(s)
  • compgt's Avatar
    3rd July 2020, 17:54
    Hear me, hear me, hear me: https://encode.su/threads/3338-Random-files-are-compressed-all-the-time-by-definition-is-the-issue?#10 https://grtamayoblog.blogspot.com/2020/02/paq-compression-programs.html?m=1
    14 replies | 1169 view(s)
  • Gotty's Avatar
    3rd July 2020, 17:21
    SolidComp, the BLU G6 is unfortunately not available around here (in Slovakia, where CompressMaster resides). Nevertheless, I checked this phone and found that users and reviewers are not really satisfied. Its screen is not HD, its camera seems to be low end, and the screen is not Gorilla Glass protected. It is certainly a low budget model. I'm afraid low budget phones are not the kings of durability. Why did you suggest this phone? Because of its price?
    4 replies | 91 view(s)
  • Shelwien's Avatar
    3rd July 2020, 16:55
    > You said SLZ wasn't the fastest deflate implementation, now you're talking about Zstd and LZ4.
    Well, they can be modified to write to deflate format. The slow part in LZ encoding is matchfinding, not Huffman coding.
    > I was asking what other deflate implementations are faster.
    Based on this, intel gzip is faster: https://sites.google.com/site/powturbo/home/web-compression Also there's hardware for deflate encoding: https://en.wikipedia.org/wiki/DEFLATE#Hardware_encoders
    > On memory managers, you mean OS, or something application specific?
    Layers are added by the OS, the standard library, and sometimes the app itself. SLZ is designed for a special use case where it has to compress 100s of potentially infinite streams in parallel on cheap hardware. However, it's not a good solution for other compression tasks, not even storage or filesystems.
    15 replies | 383 view(s)
  • SolidComp's Avatar
    3rd July 2020, 16:39
    Should SIMD rANS always beat SIMD AC?
    12 replies | 683 view(s)
  • Shelwien's Avatar
    3rd July 2020, 16:39
    Shelwien replied to a thread Paq8pxd dict in Data Compression
    @Darek: -s doesn't use mod_ppmd
    955 replies | 319546 view(s)
  • SolidComp's Avatar
    3rd July 2020, 16:37
    You said SLZ wasn't the fastest deflate implementation, now you're talking about Zstd and LZ4. I was asking what other deflate implementations are faster. On memory managers, you mean OS, or something application specific?
    15 replies | 383 view(s)
  • Shelwien's Avatar
    3rd July 2020, 16:33
    Shelwien replied to a thread Fp8sk in Data Compression
    @suryakandau: Don't mind him. In your case it's better to make new threads. Or even better, just use one thread for all of your clones of codecs. @CompressMaster: Do you really want *sk versions in the main threads of these codecs?
    10 replies | 438 view(s)
  • Shelwien's Avatar
    3rd July 2020, 16:14
    > I don't follow. There's a difference between Huffman trees
    > and Huffman coding in this context? What's the difference?
    Dynamic and static coding are different algorithms. Dynamic is more complex and slower, but doesn't require multiple passes. "How the Huffman tree is generated", "how the tree is encoded" and "how the data is encoded using the tree" are completely unrelated questions; only the last one determines whether the coding is static or dynamic.
    > Those are just the stats from Willie's benchmarks.
    Well, the only table there with 100M for zlib is for "HAProxy running on a single core of a core i5-3320M, with 500 concurrent users, the default 16kB buffers", so 100M is used by 500 instances of zlib, thus 200kb per instance, which is reasonable. As to CPU usage, it's actually limited by output bandwidth there: "gigabit Ethernet link (hence the 976 Mbps of IP traffic)". So I guess your "CPU" is supposed to be measured as (976000000/8)*100/(encoding_speed_in_bytes/compression_ratio), so 256MB/s corresponds to 100% and 512MB/s to 50% (at SLZ's CR 2.1). This is not something obvious and is only defined for a specific use case, so don't expect anyone to understand you without an explicit definition of the term.
    > How else do you measure memory use but by measuring memory use during program execution?
    Sure, it's okay in the libslz case, because he compares stats of the same program (haproxy), just with different compression modules. Comparing different programs like that is much less precise, because the stats don't show which algorithms use the memory. Anyway, in this case the best approach would be internal measurement (by replacing the standard memory manager). For example, this:
    for( i=0; i<1000000; i++ ) new char; // PeakWorkingSetSize: 34,775,040
    and this:
    for( i=0; i<1; i++ ) new char[1000000]; // PeakWorkingSetSize: 2,777,088
    both allocate exactly 1000000 bytes of useful memory. But the memory usage of the corresponding executables is 17x higher in the first case. Why? Because of the overhead of the specific default memory manager, which could be replaced with something custom-made, and then the two examples would suddenly be equal in memory usage.
    > I like software that is engineered for performance and low overhead, like SLZ.
    > I want it to be 1% CPU or so.
    So in SLZ terms that's 25GB/s. SLZ itself only reaches 753MB/s (in some special cases), so good luck with that. Maybe sleeping for 10-20 years would help :)
    > It's faster than libdeflate. What else is there?
    Well, zstd: https://github.com/facebook/zstd#benchmarks or LZ4: https://github.com/lz4/lz4#benchmarks https://sites.google.com/site/powturbo/compression-benchmark https://sites.google.com/site/powturbo/entropy-coder Thing is, normally decoding is the more frequent operation, so codec developers mostly work on decoding speed optimization.
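The allocator-overhead point above can be reproduced in other runtimes too. Here is a hedged Python analogue of the `new char` example, using the stdlib tracemalloc; the exact numbers depend on the interpreter, so only the ratio matters:

```python
# Many tiny allocations vs one big allocation holding the same useful payload.
# tracemalloc reports the allocator's view (like an internal measurement),
# not the OS working set. Illustrative sketch only.
import tracemalloc

N = 100_000
tracemalloc.start()

many_tiny = [bytearray(1) for _ in range(N)]   # N allocations, 1 useful byte each
tiny_cost, _ = tracemalloc.get_traced_memory()
del many_tiny

one_big = bytearray(N)                         # one allocation, N useful bytes
big_cost, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

# Same useful payload, but per-object headers make the tiny case many times bigger.
print(tiny_cost, big_cost)
```

On CPython each one-byte bytearray carries tens of bytes of object overhead, so `tiny_cost` dwarfs `big_cost` even though both hold the same N useful bytes.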
    15 replies | 383 view(s)
  • Scope's Avatar
    3rd July 2020, 16:09
    Unfortunately, I read it late and have already rewritten version 40, but I have done other tests with a different set: 236 files (845 416 026 bytes)
    ect -1 -strip --mt-file *.png
    TotalMilliseconds : 187197,7023
    ppx2 -P 8 -L 1 ect.exe -1 -strip "{}" (8 MF)
    TotalMilliseconds : 201311,9329
    ect -5 -strip --mt-file *.png
    TotalMilliseconds : 505926,7081
    ppx2 -P 8 -L 1 ect.exe -5 -strip "{}" (8 MF)
    TotalMilliseconds : 514763,0062
    pingo (41) -s0 -strip *.png
    TotalMilliseconds : 163415,1559
    ppx2 -P 8 -L 1 pingo (41) -s0 -strip -nomulti "{}" (8 MF)
    TotalMilliseconds : 142454,0556
    pingo (41) -s5 -strip *.png
    TotalMilliseconds : 498413,4901
    ppx2 -P 8 -L 1 pingo (41) -s5 -strip -nomulti "{}" (8 MF)
    TotalMilliseconds : 398038,6889
    pingo (42) -s0 -strip *.png
    TotalMilliseconds : 126019,8598
    pingo (42) -s5 -strip *.png
    TotalMilliseconds : 493232,3343
    477 replies | 127275 view(s)
  • Scope's Avatar
    3rd July 2020, 15:25
    Yes and no. If people are able to use a better compiler, flags, etc. for open source applications, they are likely to do so (or to use more optimally compiled binaries from other people). Closed source applications may be compiled with an old or suboptimal compiler version (for example, in my experience the speed of some applications differs noticeably even between MSVC/GCC/Clang) or with non-optimal flags, but nothing can be done about that; it's not a user problem. I won't be able to "speed them up" if I want to, although I could compile other applications with the same versions, flags and compilers to make them slower (but since this isn't a speed test of individual algorithms and compilers, but a comparison of applications as they will actually be used, that wouldn't be quite an honest comparison either). As an alternative, there could be two versions: a stable one for generic CPUs, and one for more modern CPUs with AVX2 support, more aggressive optimization flags, etc. For example, when I tested Lepton, there were different compiled versions: https://github.com/dropbox/lepton/releases. It's also better to move this to another topic (like ECT), as this is no longer a discussion of Google projects.
    168 replies | 41436 view(s)
  • Darek's Avatar
    3rd July 2020, 15:23
    Darek replied to a thread Paq8pxd dict in Data Compression
    15'728'903 - enwik8 -s15 -w -e1,english.dic by Paq8pxd_v89, change: -0,02%
    15'728'903 - enwik8 -s15 -w -e1,english.dic by paq8pxd_v89_40_3360, change: 0,00%
    Hmmm... identical scores! Testing -x15...
    955 replies | 319546 view(s)
  • suryakandau@yahoo.co.id's Avatar
    3rd July 2020, 14:58
    I agree with you, but how about Jan Ondrus? Does he agree if i make some improvements on it?
    10 replies | 438 view(s)
  • LucaBiondi's Avatar
    3rd July 2020, 14:56
    LucaBiondi replied to a thread Paq8pxd dict in Data Compression
    Great!!! Have a good day!
    955 replies | 319546 view(s)
  • CompressMaster's Avatar
    3rd July 2020, 14:53
    CompressMaster replied to a thread Fp8sk in Data Compression
    @suryakandau@yahoo.co.id May I know WHY you posted a new version in a separate thread AGAIN? If it's an upgrade (new features, tweaked code etc.) IT WOULD BE FAR BETTER to have only one thread - the one Jan Ondrus started. If you disagree with my statement, consider an extreme case - suppose we decided to make a new thread for every new paq8px version. The result? Complicated navigation through many irrelevant threads. So please stop that, otherwise I'll report you to Shelwien.
    10 replies | 438 view(s)
  • suryakandau@yahoo.co.id's Avatar
    3rd July 2020, 13:58
    Paq8sk30 -s1 ts40.txt: Total 400000000 bytes compressed to 80211410 bytes. Time 76785.83 sec, used 658 MB (690640027 bytes) of memory
    Paq8sk32 -s1 ts40.txt: Total 400000000 bytes compressed to 79461853 bytes. Time 65043.98 sec, used 659 MB (691656451 bytes) of memory
    I wonder if paq8sk32 -x15 -e1,english.dic on ts40.txt can reach below 70.xxx.xxx?
    140 replies | 11003 view(s)
  • Darek's Avatar
    3rd July 2020, 13:39
    Darek replied to a thread Paq8pxd dict in Data Compression
    I'll test it. Starting from enwik8.
    955 replies | 319546 view(s)
  • compgt's Avatar
    3rd July 2020, 13:11
    Maybe Jyrki and Google (with all their expertise) can implement RLLZ into an actual compressor and see actual gains for LZ77 and LZSS... Since the write buffer is imposed in the decoding algorithm, it should be very fast like byte-aligned LZ.
    18 replies | 950 view(s)
  • LucaBiondi's Avatar
    3rd July 2020, 13:06
    LucaBiondi replied to a thread Paq8pxd dict in Data Compression
    Hi Darek, are you able to test enwik9? I can't because it's too big for me! Thank you!!! I will try to do some other experiments!!!!
    955 replies | 319546 view(s)
  • compgt's Avatar
    3rd July 2020, 12:37
    Thanks for replying, Gotty! Your posts here at encode.su are very informative, clearly explained, and surely very helpful to anyone doing data compression, experts and enthusiasts alike. > I do encourage you to experiment even more. Well, maybe not too much in random data compression, but on algorithms very different from Huffman, LZ, grammar based, and arithmetic/ANS coding. If luck wills it, the question is again how programmers can "monetize" their compression ideas and compressors.
    20 replies | 429 view(s)
  • Darek's Avatar
    3rd July 2020, 12:26
    Darek replied to a thread Paq8pxd dict in Data Compression
    Scores for 4 corpuses by paq8pxd_v89_40_3360. This version gets better scores for all corpuses. For all of Silesia, Calgary and MaximumCompression this version sets the paq8pxd records; the MaximumCompression tar version at 5'991'491 bytes is in my opinion the best score ever! For Silesia there is 15KB of gain - nice! Another thing worth mentioning: for the "nci" file from Silesia this version got the best score ever - it beat cmix v18! The same goes for A10.jpg, FP.LOG and vcfiu.hlp from Maximum Compression!
    955 replies | 319546 view(s)
  • Gotty's Avatar
    3rd July 2020, 12:06
    I'm happy that you are happy and relieved. I think trying to compress random data is a must for everyone who wants to understand entropy. I'm with you, I understand your enthusiasm, and I do encourage you to experiment even more. After you understand it deeply, you will not post more different ideas. ;-)
    20 replies | 429 view(s)
  • compgt's Avatar
    3rd July 2020, 11:24
    Dresdenboy, if you're interested in LZSS coding, search for my "RLLZ" in this forum for my remarks on it. RLLZ doesn't need literal/match prefix bit, no match_len for succeeding similar strings past the initially encoded string, and no literal_len. It was my idea in high school (1988-1992), but i was forgetting computer programming then and we didn't have access to a computer. I remembered it in 2018, so it's here again, straightened out, better explained. https://encode.su/threads/3013-Reduced-Length-LZ-(RLLZ)-One-way-to-output-LZ77-codes?highlight=RLLZ
    18 replies | 950 view(s)
  • compgt's Avatar
    3rd July 2020, 11:01
    It's a relief that somebody here admits he actually tried compressing random files, like me. And actually suggests we experiment with a random file - but not too much, i guess. I tried random compression coding in 2006-2007 and actually thought i had solved it, that i had a random data compressor. I feared the Feds and tech giants would come after me, so i deleted the compressor, maybe even without a decoder yet. Two of my random compression ideas are here: https://encode.su/threads/3339-A-Random-Data-Compressor-s-to-solve RDC#1 and RDC#2 are still promising, worth a look for those interested. Maybe i still have some random compression ideas, but i am not very active on it anymore. There is some "implied information" that a compressor can exploit, such as the order or sequence of literals (kinda temporal) in my RLLZ idea, and the minimum match length in lzgt3a. Search this forum for "RLLZ" for my posts. https://encode.su/threads/3013-Reduced-Length-LZ-(RLLZ)-One-way-to-output-LZ77-codes?highlight=RLLZ > Randomness is an issue. And randomness is the lack of useful patterns. Randomness is the lack of useful patterns, i guess, if your algorithm is a "pattern searching/encoding" algorithm; LZ-based compressors are. Huffman and arithmetic coding are not pattern searchers but naturally benefit from the occurrences of patterns.
    20 replies | 429 view(s)
  • Aniskin's Avatar
    3rd July 2020, 10:25
    Aniskin replied to a thread 7-zip plugins in Data Compression
    Is there a way to get a sample of such file to debug?
    12 replies | 3337 view(s)
  • Bulat Ziganshin's Avatar
    3rd July 2020, 10:24
    First, we need to extend the vocabulary:
    - static code: a single encoding for the entire file; encoding tables are stored in the compressed file header
    - block-static: the file is split into blocks, and each block has its own encoding tables stored with the block
    - dynamic: encoding tables are computed on-the-fly from previous data, updated every byte
    - block-dynamic: the same, but encoding tables are updated e.g. once every 1000 bytes
    So:
    - the first LZ+entropy coder was lzari with dynamic AC
    - the second one was lzhuf aka lharc 1.x with dynamic Huffman
    - then pkzip 1.x got Implode with a static Huffman coder. It employed canonical Huffman coding, so the compressed file header stored only 4 bits of prefix code length for each of the 256 chars (and more for the LZ codewords)
    - then ar002 aka lha 2.x further improved this idea and employed block-static Huffman. It also added secondary encoding tables used to encode the code lengths in the block header (instead of a fixed 4-bit field)
    - pkzip 2.x added deflate, which used just the same scheme as ar002 (in this aspect; there were a lot of other changes)
    Since then, block-static Huffman has become the de-facto standard for fast LZ77-based codecs. It's used in RAR2 (which is based on deflate), cabarc (lzx), brotli (which added some o1 modelling), and zstd (which combines block-static Huffman with block-static ANS). Static/block-static codes require two passes over the data: first you compute frequencies, then you build tables and encode the data. You can avoid the first pass by using fixed tables (or more complex tricks, such as building tables on the first 1% of the data). A deflate block header specifies whether the block uses custom encoding tables encoded in the block header (DYNAMIC) or fixed ones defined in the spec (STATIC), so this field has its own vocabulary. Tornado implements both block-dynamic Huffman and block-dynamic AC. Dynamic/block-dynamic codes use only one pass over the data, and block-dynamic coding is as fast as the second pass of *-static coding.
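The canonical Huffman trick mentioned above (the header stores only code lengths, from which both sides rebuild identical codes) can be sketched briefly. This is a hypothetical Python rendering of the code-assignment procedure described in RFC 1951 section 3.2.2, not code from any of the archivers discussed:

```python
# Rebuild canonical Huffman codes from code lengths alone (deflate-style).
# Lengths here are given by hand, not computed from a frequency tree.

def canonical_codes(lengths):
    # lengths: dict symbol -> code length in bits (0 = unused symbol)
    max_len = max(lengths.values())
    bl_count = [0] * (max_len + 1)          # how many codes of each length
    for l in lengths.values():
        if l:
            bl_count[l] += 1
    # compute the smallest code value for each length
    code = 0
    next_code = [0] * (max_len + 1)
    for bits in range(1, max_len + 1):
        code = (code + bl_count[bits - 1]) << 1
        next_code[bits] = code
    # assign consecutive codes to symbols in (length, symbol) order
    codes = {}
    for sym in sorted(lengths):
        l = lengths[sym]
        if l:
            codes[sym] = format(next_code[l], '0{}b'.format(l))
            next_code[l] += 1
    return codes

# The RFC 1951 worked example: lengths (3,3,3,3,3,2,4,4) for symbols A..H
print(canonical_codes(dict(zip('ABCDEFGH', [3, 3, 3, 3, 3, 2, 4, 4]))))
# F gets the shortest code '00'; G and H get '1110' and '1111'
```

Because the mapping from lengths to codes is deterministic, Implode could get away with 4 bits per symbol in the header, and deflate with its secondary length-of-lengths tables.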
    15 replies | 383 view(s)
  • Gotty's Avatar
    3rd July 2020, 09:16
    Gotty replied to a thread Fp8sk in Data Compression
    10 replies | 438 view(s)
  • Dresdenboy's Avatar
    3rd July 2020, 08:36
    My own experiments look promising. With a mix of LZW, LZSS and numeral system ideas (not ANS though ;)), I can get close to apultra, exomizer, packfire for smaller files, while the decompression logic is still smaller than theirs.
    39 replies | 2581 view(s)
  • Dresdenboy's Avatar
    3rd July 2020, 08:33
    Here's another paper describing an LZSS variant for small sensor data packets (so IoT, sensor mesh, SMS, network message compression related works look promising): An Improving Data Compression Capability in Sensor Node to Support SensorML-Compatible for Internet-of-Things http://bit.kuas.edu.tw/~jni/2018/vol3/JNI_2018_vol3_n2_001.pdf
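For context, the LZSS scheme such papers build on fits in a few lines. A hypothetical greedy toy implementation follows (it emits a token list instead of a packed bitstream, so it shows the structure rather than a real compression ratio):

```python
# Toy greedy LZSS: each position is either a literal byte or a
# (distance, length) back-reference into the already-emitted output.

def lzss_compress(data, window=255, min_len=3, max_len=18):
    out, i = [], 0
    while i < len(data):
        best_len, best_dist = 0, 0
        for j in range(max(0, i - window), i):     # brute-force matchfinding
            l = 0
            while l < max_len and i + l < len(data) and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_len, best_dist = l, i - j
        if best_len >= min_len:
            out.append(('match', best_dist, best_len))
            i += best_len
        else:
            out.append(('lit', data[i]))
            i += 1
    return out

def lzss_decompress(tokens):
    out = bytearray()
    for t in tokens:
        if t[0] == 'lit':
            out.append(t[1])
        else:
            _, dist, length = t
            for _ in range(length):                # byte-by-byte copy handles
                out.append(out[-dist])             # overlapping matches
            
    return bytes(out)

data = b'abcabcabcabcxyz'
assert lzss_decompress(lzss_compress(data)) == data
```

The small-packet variants in these papers mostly differ in how the literal/match flag and the (distance, length) fields are bit-packed for very short inputs.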
    18 replies | 950 view(s)
  • Bulat Ziganshin's Avatar
    3rd July 2020, 08:28
    tANS requires memory lookups, which are limited to 2 loads/cycle in the best case (on Intel CPUs), so you probably can't beat SIMD rANS.
    12 replies | 683 view(s)
  • suryakandau@yahoo.co.id's Avatar
    3rd July 2020, 08:22
    @Sportman/Darek could you test it on the GDCC public test set files (tests 1, 2, 3, 4) please?
    10 replies | 438 view(s)
More Activity