Activity Stream

  • Adreitz's Avatar
    Today, 00:08
    Just an update, Jyrki and Jon. After more testing, my disappointing compression results seem to be due in particular to the -P parameter. I did another round of testing on the "Artificial" test image and discovered that -Y can actually have an effect at lower values (on certain images) than I had previously found, down to 59 (that is, for this image, -Y 59 is equivalent to -Y 0). I've now started to do test runs at -Y 55 to be safe. So I was able to shave off a few KB with -q 100 -s 9 -C 19 -P 5 -E 2 -I 1 -c 1 -X 99 -Y 55, at 824957 bytes. Still not close to the simple -q 100 -s 9 setup.
    So then I got to thinking about the heuristics you were talking about vs. the -P parameter. Something I never understood about the cjpegxl help text is that it seems to imply (at least for lossless) that only one predictor is chosen for the entire image. This is in contrast to, for example, PNG, where the predictor can either be constant or vary per line, or WebP, where the predictor can vary in a more complicated fashion by block. The help text for cjpegxl says that the -P default is the best of predictors 5 and 7, which makes it sound like the gradient and weighted predictors are separately trialed for the whole image and the predictor that produces the smallest output is kept. On its face, this looks like a manual tuning opportunity, because the heuristic doesn't even attempt to evaluate the other documented predictors (let alone the undocumented 8 and 9 predictors -- who knows what they are?), so it is easy to assume that manually trying all of them in turn should always give an equal or better result. (When I first started testing cjpegxl, I had attempted to specify multiple predictors at the same time, hoping to achieve the same goal -- or to allow the compressor to mix predictors -- without any success: it either fails with an error or only uses the last specified predictor to determine the output.)
    This obviously isn't the case, though, because removing the -P parameter and redoing the compression on Artificial using the same settings as above produced an output of 761276 bytes. This is about 46 KB better than -q 100 -s 9 and the result I was hoping to achieve. However, this seems to be image-dependent, as many images show worse results without -P than with the best -P (I tested all of the Kodak image set and only nine of the 24 images had better results without -P). So I still don't know what the heuristic is doing when -P is not specified that cannot be reproduced with any explicit -P parameter.
    I also poked through the code a bit on GitLab and was able to find the rest of the available parameter specifications in cjxl.cc. I had found all of the single-letter parameters, and the couple of others that I didn't find don't look very interesting. I still don't know how to trigger these extra descriptions in the command line help. But, for anyone else who might have been confused, this is what I found:
    -E "Number of extra MA tree properties to use" -- It appears that some of the testing scripts set this to 4, but I've never seen an effect on file size above 2.
    -I "Number of mock encodes to learn MABEGABRAC trees" -- This actually takes a float, not an integer, but nothing above 1.0 has any effect.
    -c "Color transform. 0=XYB, 1=None, 2=YCbCr" -- Strange that 2 is supposed to be valid, as it always produces distorted colors when I decompress with djpegxl.
    -X "Pre-compact: compact channels (globally) if ratio used/range is below this (default: 80%)" -- This makes the vaguely 0-100 range make sense, as well as the strong image dependency.
    -Y "Post-compact: compact channels (per-group) if ratio used/range is below this (default: 80%)" -- Same as -X.
    So, I hope that was helpful for someone. It would be great if there were a better explanation of what -P does vs. what the heuristic does when -P is not present. Thanks, Aaron
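    (For illustration, a minimal sketch of the kind of brute-force driver described above: it tries each -P value in turn, plus the no -P case, and keeps the smallest output. It assumes the cjpegxl.exe build and the -q/-s/-P flags discussed in this thread and an "input output" positional invocation; the file names are hypothetical.)
    // brute_force_p.cpp -- sketch only, not part of cjpegxl
    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>
    #include <filesystem>
    #include <string>
    #include <system_error>

    int main() {
        const std::string input = "artificial.png";          // hypothetical input file
        std::uintmax_t best_size = UINTMAX_MAX;
        int best_p = -2;                                      // -1 will mean "no -P flag"
        for (int p = -1; p <= 9; ++p) {                       // -P 0..9 (the post found 8 and 9 are also accepted)
            std::string out = "out_p" + std::to_string(p) + ".jxl";
            std::string cmd = "cjpegxl.exe -q 100 -s 9 ";
            if (p >= 0) cmd += "-P " + std::to_string(p) + " ";
            cmd += input + " " + out;
            if (std::system(cmd.c_str()) != 0) continue;      // skip settings that fail
            std::error_code ec;
            std::uintmax_t size = std::filesystem::file_size(out, ec);
            if (!ec && size < best_size) { best_size = size; best_p = p; }
        }
        if (best_p == -1) std::printf("best: no -P flag (%ju bytes)\n", best_size);
        else              std::printf("best: -P %d (%ju bytes)\n", best_p, best_size);
        return 0;
    }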
    137 replies | 10422 view(s)
  • Raphael Canut's Avatar
    Yesterday, 22:20
    Hello, For those interested, I have made a new version of NHW. This one has more neatness and more precision. I find that this version has very good neatness, for example when I compare it to AVIF. This is another compression approach; some people dislike it and this psychovisual optimization, but for me this new version starts to prove that neatness could be a viable solution? Do not hesitate to give your opinion. This version is still extremely fast to encode/decode. I also wanted to note again that the new entropy coding schemes of NHW are not optimized for now -- I already have ideas to improve them -- and so we can still save 2.5KB per .nhw compressed file on average, and even more with the Chroma from Luma technique, for example. More at: http://nhwcodec.blogspot.com/ Cheers, Raphael
    196 replies | 22725 view(s)
  • bwt's Avatar
    Yesterday, 11:24
    Hhmmm, as I predicted before, this competition is strange. My advice is: do not enter this competition, because it wastes your time.
    94 replies | 7883 view(s)
  • algorithm's Avatar
    Yesterday, 10:55
    Site changed, more difficult to read and requires registration to download open test data.
    94 replies | 7883 view(s)
  • suryakandau@yahoo.co.id's Avatar
    30th September 2020, 08:29
    @maxim, why was my compressor (Bali v0.1) not added to the GDCC leaderboards? Is there something wrong with it?
    94 replies | 7883 view(s)
  • suryakandau@yahoo.co.id's Avatar
    30th September 2020, 03:19
    Fp8sk19 - improved 24/32-bit image model - faster than paq8px or paq8pxd.
    GDCC images public test file (400 MB): 161,765,159 bytes.
    Here are the source code and binary file.
    39 replies | 2659 view(s)
  • Gonzalo's Avatar
    30th September 2020, 01:41
    Gonzalo replied to a thread Paq8pxd dict in Data Compression
    @kaitz: I've found a problem with SZDD preprocessing (apparently) - GitHub issue. This file does not make a sound roundtrip. No crashes, but the decompressed file is different from the original. I didn't actually compress the file (-s0), so I guess it's safe to assume the SZDD implementation is the cause. By the way, great work with the preprocessor! I wonder if it could be separated into a standalone library to include in other software, like precomp. Especially since paq8pxd is GPL, so its code can't really be shared with most other projects with less restrictive licenses.
    970 replies | 337207 view(s)
  • Ms1's Avatar
    30th September 2020, 00:39
    http://globalcompetition.compression.ru/#leaderboards Leaderboards are updated; see Test 1 rapid, Test 2 rapid and balanced. The Test 2 (images) balanced category has a new intermediate leader. I want to draw attention to the fact that any submitted compressor must be able to losslessly process the data of all 4 tests in order to get a ranked place, even if you submit it for one test/category only.
    7 replies | 1577 view(s)
  • fcorbelli's Avatar
    29th September 2020, 19:59
    fcorbelli replied to a thread zpaq updates in Data Compression
    Version 29 of my ZPAQ "fork" includes:
    0) Changed -donotusexls to -xls (too long to remember).
    1) Minor fix in the forced addition of XLS files. Excludes $data.
    2) Shows size and quantity of forcibly added XLS files (in human readable style) after the summary:
    (...)
    270.103745 + (17.547333 -> 0.000000 -> 0.000000) = 270.103745 MB
    Forced XLS has included 17.547.333 bytes in 143 files
    3) With the verbose option it indicates which they are:
    ...
    ENFORCING XLS c:/zpaqfranz/standard/trattore2.xls
    ENFORCING XLS c:/zpaqfranz/standard/vittoria.xlsx
    ENFORCING XLS c:/zpaqfranz/standard/wind.xls
    ENFORCING XLS c:/zpaqfranz/standard/xerox.xls
    Adding 17.547333 MB in 143 files -method 14 -threads 12 at 2020-09-29 16:59:55.
    0 +added, 0 -removed.7.333 of 17.547.333 0/sec
    truncating archive from 270103849 to 270103745
    ...
    4) Fixed, I hope definitively, the "strange" behavior of the append_path function, which I have split into two. One (Matt's append_path()) remains unchanged; the other, myappend_path(), is called by the list() function via a flag in the function call. Very quick, and very dirty.

    string append_path(string a, string b) {
      int na=a.size();
      int nb=b.size();
    #ifndef unix
      if (nb>1 && b[1]==':') {  // remove : from drive letter
        if (nb>2 && b[2]!='/') b[1]='/';
        else b=b[0]+b.substr(2), --nb;
      }
    #endif
      if (nb>0 && b[0]=='/') b=b.substr(1);
      if (na>0 && a[na-1]=='/') a=a.substr(0, na-1);
      return a+"/"+b;
    }

    string myappend_path(string a, string b) {
      return a+"|"+b;
    }

    int64_t read_archive(const char* arc, int *errors=0, int i_myappend=0);  // read arc

    // List contents
    int Jidac::list() {
      // Read archive into dt, which may be "" for empty.
      int64_t csize=0;
      int errors=0;
      if (archive!="") csize=read_archive(archive.c_str(),&errors,1);  /// AND NOW THE MAGIC ONE!
      ...
      if (i_myappend) {
        fn=myappend_path(itos(ver.size()-1, all), fn);
      } else {
        fn=append_path(itos(ver.size()-1, all), fn);
      }
      ...

    No function pointer; I brutally prefer a flag. In this way it is possible to extract the versions massively (extract -all), just like 7.15, and to have an enumeration (list -all) that does not eliminate the fundamental character ":" of the Windows path, for use with a program that provides the GUI (PAKKA, for example). I don't attach any executable code.
    2555 replies | 1106228 view(s)
  • necros's Avatar
    29th September 2020, 11:04
    It also interests me if we can feed such workload to multiCPU systems.
    1 replies | 187 view(s)
  • Shelwien's Avatar
    29th September 2020, 02:09
    > BALZ v1.20 (a ROLZ compressor)
    Current version is 1.50 afaik.
    > But ratio is worse than shows lzturbo.
    30,056,097 enwik8.balz150c
    30,808,657 enwik8.lzt32
    The difference is likely due to lzturbo using a 64MB block as default, while BALZ uses a 32MB block - try lzturbo -b32.
    > And lzturbo shows overall time < 10 sec.... Can BALZ be accelerated?
    Yes, it uses getc/putc i/o for compressed data and fread/fwrite on whole 32MB blocks of uncompressed data - these certainly affect the speed in fast mode. Also lzturbo likely includes all the popular speed optimization tricks, like SIMD and stream interleaving.
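    (For illustration, a minimal sketch of the usual fix hinted at above -- replacing per-byte putc() on the compressed stream with a small user-side buffer flushed by fwrite() in large chunks. This is not BALZ's actual code; the class and buffer size are made up for the example.)
    #include <cstddef>
    #include <cstdio>

    struct BufferedWriter {
        FILE* f;
        unsigned char buf[1 << 16];             // 64 KB staging buffer
        std::size_t n = 0;
        explicit BufferedWriter(FILE* file) : f(file) {}
        void put(int c) {                       // drop-in replacement for putc(c, f)
            if (n == sizeof(buf)) flush();
            buf[n++] = static_cast<unsigned char>(c);
        }
        void flush() {
            if (n) { std::fwrite(buf, 1, n, f); n = 0; }
        }
        ~BufferedWriter() { flush(); }
    };

    int main() {
        BufferedWriter w(stdout);
        for (int i = 0; i < (1 << 20); ++i) w.put(i & 0xFF);   // demo: emit 1 MB through the buffer
        return 0;                                              // destructor flushes the tail
    }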
    45 replies | 2292 view(s)
  • well's Avatar
    29th September 2020, 02:06
    hello toffer, you are best ;-)
    190 replies | 131888 view(s)
  • lz77's Avatar
    28th September 2020, 20:02
    BALZ v1.20 (a ROLZ compressor) compresses & decompresses TS40.txt in 40 sec. instead of < 16 sec.... But the ratio is worse than what lzturbo shows. And lzturbo shows an overall time < 10 sec.... Can BALZ be accelerated?
    45 replies | 2292 view(s)
  • urntme's Avatar
    28th September 2020, 16:31
    Okay. I will do that. Thank you for all your inputs so far. It is greatly appreciated. I am a person who believes that we just need to trust our hearts and keep moving forward. So yes, I will do that.
    6 replies | 365 view(s)
  • nikkho's Avatar
    28th September 2020, 10:32
    nikkho replied to a thread FileOptimizer in Data Compression
    I do not understand why this could be hurting you, but no problem. Let me know exactly what you need me to change in the Changelog and I will do it.
    667 replies | 202972 view(s)
  • nikkho's Avatar
    28th September 2020, 10:31
    nikkho replied to a thread FileOptimizer in Data Compression
    https://sol.gfxile.net/zip/pcxlite.zip
    667 replies | 202972 view(s)
  • Adreitz's Avatar
    28th September 2020, 02:33
    Thanks, Jyrki. So, do you have any idea why I can't at least meet the compression results of -q 100 -s 9 with any combination of the parameters, documented or otherwise? Is there any way to see what values the encoder has chosen using heuristics for the parameters? Machine learning is good for usability (as long as it works well), but not so good for people who like to tweak and optimize. :) Aaron
    137 replies | 10422 view(s)
  • Shelwien's Avatar
    27th September 2020, 22:56
    Shelwien replied to a thread USB 4 in The Off-Topic Lounge
    There is this: https://www.quora.com/When-will-USB-4-0-ports-be-available-in-PC-motherboards-How-much-faster-will-the-data-transfer-rate-be But I don't think it could directly replace SATA, since SATA has some out-of-order features.
    1 replies | 86 view(s)
  • Gotty's Avatar
    27th September 2020, 17:07
    Accessing an array element in a one-dimensional array is indeed O(1). What you are doing, however, is accessing an array element in an n-dimensional array. For the system to calculate the exact memory position it needs to do n-1 multiplications (and n-1 additions). That's still O(n). Whenever you are unsure, just extend your algorithm to more elements, and figure out if it will need the same amount of time as with 2 elements or more. If it is more, it cannot be O(1) (meaning "constant time"). Your 2 new versions will need to process some characters from the input and output some characters. With 2, 3, 4, ... numbers to be sorted, does the number of characters to be processed stay the same or grow as we increase n? It is more and more. You cannot ever make a sorting algorithm better than O(n) - only if you have a way of processing the inputs in parallel and outputting the result. Now you have more implementations for your sorting idea, and that means you spent some time thinking and also got a bit more experienced in coding. How about doing something more complex? You don't need to try inventing something extraordinary ("the best" or "the fastest") for now. Just learn. Later you will get there. Give yourself time.
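    (For illustration, a small sketch of the point above: addressing an element of an n-dimensional array means folding n indices into one flat offset, which costs n-1 multiply-adds -- linear in n, not constant. The function and variable names are made up for the example.)
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Row-major flattening: offset = ((i0*d1 + i1)*d2 + i2)... -- one multiply
    // and one add per extra dimension, so n-1 of each for n dimensions.
    std::size_t flat_index(const std::vector<std::size_t>& dims,
                           const std::vector<std::size_t>& idx) {
        std::size_t offset = idx[0];
        for (std::size_t d = 1; d < dims.size(); ++d)
            offset = offset * dims[d] + idx[d];
        return offset;
    }

    int main() {
        std::vector<std::size_t> dims{51, 51, 51};   // e.g. three values, each in the range 0-50
        std::vector<std::size_t> idx{7, 3, 42};
        std::printf("flat offset = %zu\n", flat_index(dims, idx));
        return 0;
    }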
    6 replies | 365 view(s)
  • fabiorug's Avatar
    27th September 2020, 14:49
    Like, more than -d 5.34 -s 6 is bad, but could this also be applied to lossy? Like having a metric for that without evaluating a single image, or selecting -d 1 or -d 2? The problem is that maybe you're talking about completely lossless and not modular, text, image.
    137 replies | 10422 view(s)
  • JamesWasil's Avatar
    27th September 2020, 12:23
    JamesWasil started a thread USB 4 in The Off-Topic Lounge
    USB 4.0 has been out for over a year now, but I still use only 2.0 or 3.0 as needed with devices: https://www.techradar.com/news/usb4-everything-you-need-to-know Considering the speed differences now and advances with faster and more efficient uses of hardware cache, do you feel that USB4 or USB 5 will ever replace SATA controllers and interfaces on motherboards?
    1 replies | 86 view(s)
  • urntme's Avatar
    27th September 2020, 12:22
    Thank you for the feedback and your comments. Yes. Hmmm. Thank you for your response. My confusion was that I thought array output was always an O(1) process. But I guess I was mistaken. Well, I did a bit of searching and, as you say, I couldn't figure out how to make this algorithm an O(1) process for most practical applications. However, I think I did manage to make it an O(1) process in an odd way, and I have attached the C++ source files so you can see how I did it. Now it might not be the most practical way of doing this, but... you be the judge. I don't want to say anything else. Yes. Okay. I will do that. Thank you for your response. Okay. Thank you for the time, your patience, your perseverance, and all the feedback you've given me. Take care. Kudos, Ananth.
    6 replies | 365 view(s)
  • Shelwien's Avatar
    27th September 2020, 10:06
    No, you could notice that it pulls forum content from archive.org. The domain expired this summer and unfortunately I didn't manage to buy it back.
    1 replies | 63 view(s)
  • SvenBent's Avatar
    27th September 2020, 08:23
    pngout with the /r mode is pretty easy to split across multiple computers working on the same file: just run a number of /r iterations on each computer and then use huffmix to mix them (or just keep the smallest file). Is there a way to split the workload with ECT in some way across multiple computers, like:
    - determining certain specific palette sortings, so computer 1 would do orders 1 to 60 and computer 2 would do 61 to 120
    - splitting the different orders of filters to brute force
    I have so far not found a method to do this.
    1 replies | 187 view(s)
  • SvenBent's Avatar
    27th September 2020, 08:18
    It appears encode.ru is back, but it looks like an older, not up-to-date version of the forum. Is this a scam trying to lure out credentials? Considering that earlier encode.ru had some weird login screen, I'm a little hesitant to enter a username and password there. Also, for some reason the links don't appear to work, but that might be my uMatrix filter.
    1 replies | 63 view(s)
  • suryakandau@yahoo.co.id's Avatar
    27th September 2020, 04:37
    Fp8sk18 - improved 24/32-bit image model - the compression speed is still fast.
    GDCC images40 public test file (400 MB) results:
    fp8sk17 162,260,937 bytes
    fp8sk18 162,101,009 bytes
    Here are the source code and binary file. In Windows just drag and drop file(s) to compress or extract.
    39 replies | 2659 view(s)
  • Gotty's Avatar
    26th September 2020, 22:48
    The system will need to access an array element in a 2d array (with the current concrete implementation it needs to do one multiplication by 5 (or an equivalent operation, like (i<<2)+i) to find the row of the 2nd dimension in the 1st dimension. Being in the 2nd dimension it will need to read the 2 sorted elements (that's the 3rd dimension). Generally it needs n-1 multiplications and has to read n elements. ((Supposing that the C++ compiler optimized the code properly; otherwise it will do n*(n+1) multiplications with the current implementation instead of n+1.)) The relationship between the number of elements and the time it needs "to sort" them is linear. That's O(n). Simply put: the more numbers to be sorted, the more time it needs (linearly). Yes. Since it is not really feasible it's not a very good example. Could you replace it with something "smaller"? Examples are best when realistic.
    6 replies | 365 view(s)
  • Jyrki Alakuijala's Avatar
    26th September 2020, 22:09
    There are more undocumented parameters inside the encoder. Eventually we want to replace the selection by machine learning heuristics, so that users do not need to understand obscure encoding processes.
    137 replies | 10422 view(s)
  • crucknova's Avatar
    26th September 2020, 16:32
    I'm kinda new here, so can you tell me how to open that rz_1.03.6.zip?
    209 replies | 85361 view(s)
  • algorithm's Avatar
    26th September 2020, 12:20
    Yes because they use high performance standard cells to get higher clock frequency. They use transistors with higher number of fins. Also look at https://www.realworldtech.com/transistor-count-flawed-metric/
    4 replies | 162 view(s)
  • urntme's Avatar
    26th September 2020, 12:11
    Thank you very much for your comment. Hmmmm. Well, okay, I have attached an implementation of the algorithm that uses 3d arrays instead of the technique I used in version 1. How about now? What do you estimate the time complexity to be? Yes. Well I agree with you here. But I wouldn't call it cheating. It depends on the perspective you use. I believe the apt expression that my mother always tells me in these situations is: "Both are right and both are wrong." :cool: To be honest it would require a colossal amount of memory here. I don't know for sure. The exact details depend on how it is implemented. However, I already mentioned that this is one of the drawbacks of this algorithm in the paper. Okay. I will do that. Thank you for your comment.
    6 replies | 365 view(s)
  • Piotr Tarsa's Avatar
    26th September 2020, 12:10
    For quite some time Intel hasn't (liked to) provided transistor counts. Here are some numbers for their 14nm process:
    https://en.wikichip.org/wiki/intel/microarchitectures/broadwell_(client)#Die
    - Dual-core, 1,300,000,000 transistors, 82 mm² die size = 15.9 MTr/mm²
    - Dual-core Broadwell with Iris Pro die, 1,900,000,000 transistors, 133 mm² die size = 14.3 MTr/mm²
    - Deca-core Broadwell, 3,400,000,000 transistors, 246 mm² die size = 13.8 MTr/mm²
    https://en.wikichip.org/wiki/intel/microarchitectures/skylake_(client)#Die
    - Dual-core GT2 Skylake, ~1,750,000,000 transistors, ~101.83 mm² die size = 17.2 MTr/mm²
    These are pretty small numbers for transistor density (MTr/mm²), but that's understandable, as any complex ASIC is made of different types of circuitry and transistor density depends on the type: e.g. caches are most dense, logic is medium density, analog is lowest density. The original Intel 14nm process was rated for about 40 MTr/mm²; the Intel 14nm variants with pluses were less dense, rated somewhat above 30 MTr/mm².
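    (For illustration, a quick check of the MTr/mm² figures quoted above: transistor count divided by die area, in millions of transistors per square millimetre. The names are just labels for the example.)
    #include <cstdio>

    int main() {
        struct Die { const char* name; double transistors; double area_mm2; };
        const Die dies[] = {
            {"Broadwell dual-core",      1.30e9,  82.0},
            {"Broadwell 2C + Iris Pro",  1.90e9, 133.0},
            {"Broadwell deca-core",      3.40e9, 246.0},
            {"Skylake dual-core GT2",    1.75e9, 101.83},
        };
        for (const Die& d : dies)   // density = transistors / area, scaled to millions
            std::printf("%-26s %.1f MTr/mm^2\n", d.name, d.transistors / d.area_mm2 / 1e6);
        return 0;
    }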
    4 replies | 162 view(s)
  • necros's Avatar
    26th September 2020, 10:11
    necros replied to a thread FileOptimizer in Data Compression
    Can you point to PCXlite? I don't see it on his site.
    667 replies | 202972 view(s)
  • e8c's Avatar
    26th September 2020, 07:13
    .
    4 replies | 350 view(s)
  • Shelwien's Avatar
    26th September 2020, 01:44
    On Win10, Large Pages only work when:
    1) The program executable is x64
    2) It runs under admin
    3) Policy allows large pages (“Computer Configuration”, “Windows Settings”, “Security Settings”, “Local Policies”, “User Rights Assignment”, “Lock Pages in memory”)
    4) Unfragmented 2MB pages physically exist in the memory manager (i.e. the test is done soon after reboot)
    z:\021>2mpages.exe
    OpenProcessToken: <The operation completed successfully. >
    LookupPrivilegeValue: <The operation completed successfully. >
    AdjustTokenPrivileges: <The operation completed successfully. >
    LPM.size=200000: VirtualAlloc flags=20001000: <The operation completed successfully. >
    p=00C00000 Flags=20001000
    Z:\021>timetest 7z a -mx1 -md27 -slp -mmt=1 1 "D:\000\enwik8"
    7-Zip 19.02 alpha (x64) : Copyright (c) 1999-2019 Igor Pavlov : 2019-09-05
    Archive size: 31739530 bytes (31 MiB)
    Tested program has wasted 12.563s
    Z:\021>timetest 7z a -mx1 -md27 -slp- -mmt=1 2 "D:\000\enwik8"
    7-Zip 19.02 alpha (x64) : Copyright (c) 1999-2019 Igor Pavlov : 2019-09-05
    Archive size: 31739530 bytes (31 MiB)
    Tested program has wasted 15.547s
    (15.547/12.563-1)*100 = 23.75%
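    (For illustration, a minimal sketch of the steps 2mpages.exe goes through -- enable SeLockMemoryPrivilege for the current process, then request large pages from VirtualAlloc. This is not the actual 2mpages source; it only uses the standard Win32 calls named in the log above and assumes an elevated x64 process with the "Lock Pages in memory" policy granted.)
    #include <windows.h>
    #include <cstdio>

    static bool EnableLockMemoryPrivilege() {
        HANDLE token = NULL;
        if (!OpenProcessToken(GetCurrentProcess(),
                              TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
            return false;
        TOKEN_PRIVILEGES tp = {};
        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        if (!LookupPrivilegeValue(NULL, SE_LOCK_MEMORY_NAME, &tp.Privileges[0].Luid)) {
            CloseHandle(token);
            return false;
        }
        BOOL ok = AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL);
        DWORD err = GetLastError();   // ERROR_NOT_ALL_ASSIGNED => policy not granted
        CloseHandle(token);
        return ok && err == ERROR_SUCCESS;
    }

    int main() {
        if (!EnableLockMemoryPrivilege())
            std::printf("SeLockMemoryPrivilege not granted (check the policy / run as admin)\n");
        SIZE_T page = GetLargePageMinimum();   // typically 2 MB on x64
        void* p = VirtualAlloc(NULL, page,
                               MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                               PAGE_READWRITE);
        std::printf("large page minimum = %zu bytes, allocation %s\n",
                    (size_t)page, p ? "succeeded" : "failed");
        if (p) VirtualFree(p, 0, MEM_RELEASE);
        return 0;
    }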
    0 replies | 65 view(s)
  • Gotty's Avatar
    26th September 2020, 00:09
    :_good2: Interesting solution, indeed. I believe the time complexity is O(n), not O(1). You need n-1 multiplications, n-1 divisions, and n-1 modulus operations with the current implementation. Sorting just a few very small numbers puts this algorithm in a different category from the general sorting algorithms (like the mentioned quicksort). So saying that your method is faster than quicksort is kind of cheating. The same way, I could say that I have just implemented a sorting method even faster than yours: it sorts two numbers: if(a<b)printf("%d, %d",a,b); else printf("%d, %d",b,a); It is certainly faster than yours and uses no extra memory. ;-)
    >> Here the range is 0-50 and the number of items n is 7.
    Memory requirement in this case is... how much?
    To be fixed: "database" should be renamed to "array"; "speed" to "time complexity"; "searching" to "lookup".
    6 replies | 365 view(s)
  • algorithm's Avatar
    25th September 2020, 23:40
    It is funny that he is not measuring gate length. You need to look from above to measure gate length (L) . He is measuring something closer to gate width. Also notice that a transistor can have multiple fins to drop Rds and increase Ids.
    4 replies | 162 view(s)
  • JamesWasil's Avatar
    25th September 2020, 22:22
    That was a really good video. Thanks for sharing that. I kind of figured that they were using the nomenclature for marketing rather than really at the level of nm design they were saying after the early 2000's. This really put it into perspective with great detail. It's probably best to use the transistor count present as a gauge to know if it is or isn't more compact than a predecessor even with new technologies that are ahead of extreme ultraviolet lithography. You may have already seen this link, but if not there is a good suggestion for that here, where they suggest using a combination of characteristics to make it more accurate again: https://spectrum.ieee.org/semiconductors/devices/a-better-way-to-measure-progress-in-semiconductors
    4 replies | 162 view(s)
  • fcorbelli's Avatar
    25th September 2020, 21:22
    fcorbelli replied to a thread zpaq updates in Data Compression
    I apologize for this problem. I just removed all (2020+) uploaded executables. I will not post any more compiled code.
    2555 replies | 1106228 view(s)
  • Shelwien's Avatar
    25th September 2020, 19:50
    Shelwien replied to a thread zpaq updates in Data Compression
    Thanks for description, but hosting executables here is still risky - google has a lot of false positives and easily blocks sites in chrome.
    2555 replies | 1106228 view(s)
  • Shelwien's Avatar
    25th September 2020, 19:43
    The main problem is that default irolz has 256kb window (d18). You can look for ROLZ here: http://mattmahoney.net/dc/text.html > I've installed codeblocks with mingw, why I can not run debug? Maybe compile with debug options? http://wiki.codeblocks.org/index.php/Debugging_with_Code::Blocks
    45 replies | 2292 view(s)
  • Shelwien's Avatar
    25th September 2020, 19:27
    https://www.virustotal.com/gui/file/bda1fb41d38429620596d0c73f0d9d8dcf94dd9ae63a3f763dc00959eadb1ba8/behavior Malware most likely. Script is obfuscated, so you'd need a debugger (MSE).
    2 replies | 68 view(s)
  • lz77's Avatar
    25th September 2020, 19:12
    I found an article http://www.ezcodesample.com/rolz/rolz_article.html and saw some of his examples of iROLZ... http://www.ezcodesample.com/rolz/skeleton_irolz_2_dictionaries.txt on enwik8, output: === Original and compressed data sizes 100000000 50804922 Approximate ratio relative to original size 0.345271 == Hm, 50 MB is 34% of 100 MB? Bad... http://www.ezcodesample.com/rolz/irolzstream.txt compresses ts40.txt to 42%, which is also bad... Maybe I looked at the wrong ROLZ sources? By the way: I've installed Code::Blocks with MinGW; why can't I run debug? F8 etc. does not work...
    45 replies | 2292 view(s)
  • snowcat's Avatar
    25th September 2020, 18:43
    I didn't see any statement, so maybe no. But I'm not really familiar with vbs, so... :) Note: This post is very off-topic. This should be in Off-topic.
    2 replies | 68 view(s)
  • LawCounsels's Avatar
    25th September 2020, 17:44
    Hi : https://drive.google.com/uc?id=1yJne2_x3uhOf0nrb8Di1qBPtM9Q-Qoz8&export=download Document password:: 1320
    2 replies | 68 view(s)
  • Lithium Flower's Avatar
    25th September 2020, 16:23
    Sorry, my English is not good. Hello, I compress a lot of non-photographic images (Japanese anime and manga): png rgb24 using mozjpeg lossy q95~q99, png rgba32 using cwebp near-lossless 60/80 and pingo png lossy pngfilter=100, and I get some problems with cwebp lossy. I use butteraugli to check compressed image quality, but I have some questions about butteraugli distance and need some hints or suggestions; thank you very much. My image set is like this image set: Tab Anime, AW, Manga, Pixiv.
    1. butteraugli and butteraugli jpeg xl assessment difference
    I use butteraugli and butteraugli xl to check images (in *Reference 01), but for some images butteraugli reports a good distance (1.3) while butteraugli xl reports a bad distance (near 2.0), and for some images butteraugli rejects what butteraugli xl rates as a good distance (1.3). How do I correctly understand butteraugli distance and the butteraugli xl 3-norm?
    2. butteraugli safe area or great area
    Compressing png rgba32 images, my process is to use near-lossless 60 and pngfilter=100 for a first compression; if the compressed image is not below a safe butteraugli distance, I use near-lossless 80 for a second compression. I collected Jyrki Alakuijala's comments and created a table (in *Reference 02). If I want my compressed images to have great quality, should I choose the area 1.0 ~ 1.3 or the area 1.0 ~ 1.6? If I made a mistake please let me know.
    pngfilter=100 butteraugli distance => , webp near-lossless 60 butteraugli distance => , , near-lossless 60 => near-lossless 80
    cwebp & pingo_rc3 commands:
    pingo_rc3.exe -pngfilter=100 -noconversion -nosrgb -nodate -sa "%%A"
    cwebp.exe -mt -m 6 -af -near_lossless 60 -alpha_filter best -progress "%%A" -o "%%~nA.webp"
    3. non-photographic image and jpeg encoder quality suggestion
    Compressing png rgb24 images, my process is to use quality 95 for a first compression; if the compressed image is not below a safe butteraugli distance, I increase the quality for a second compression. In my png rgb24 image set, butteraugli says jpeg quality 95 doesn't get a good distance (jpeg quality 95 butteraugli distance => , jpeg quality 95 butteraugli xl distance => ), but in cjpeg usage.txt : If I compress a non-photographic image to jpeg and want it near psychovisually lossless, is it necessary to use quality above 95 for those images? Or (*Reference 03) is butteraugli possibly too sensitive on some non-photographic images?
    mozjpeg command:
    cjpeg.exe -optimize -progressive -quality 95 -quant-table 3 -sample 1x1 -outfile "mozjpeg\%%~nA.jpg" "%%A"
    Update 20200929: I used the jpeg xl sjpeg feature; sjpeg can get a great butteraugli distance at quality 96, but it looks like the sjpeg feature doesn't use jpeg xl's VarDCT or modular mode?
    Size: png - 12mb
    mozjpeg -q 97 + jpegtran progressive 4.46mb
    jpegxl sjpeg -q 96 + jpegtran progressive 4.0mb
    cjpegxl command:
    cjpegxl.exe "%%A" "xl\%%A" --jpeg1 --jpeg_quality=96 --progressive
    4. webp lossy q100 and butteraugli distance
    I tested another non-photographic image set with webp lossy q100, but some images get a larger butteraugli distance; is it possible that webp lossy 420 subsampling and fancy upsampling make larger errors in some areas? I also tested the webp lossy alpha (alpha_q) feature; it increases the butteraugli distance, but I don't understand why lossy alpha affects butteraugli distance.
    webp lossy q100 and lossy_alpha:
    q100.png 2.013666
    q100_lossy_alpha 80.png 2.035022
    q100_lossy_alpha 50.png 2.099735
    webp lossy q100 butteraugli distance => , , dssim =>
    cwebp command:
    cwebp.exe -mt -m 6 -q 100 -sharp_yuv -pre 4 -af -alpha_filter best -progress "%%A" -o "%%~nA.webp"
    I am creating some tables and quality test data; I will upload them later, thank you very much.
    Sample Image: 2d art bg png file => https://mega.nz/file/FDBHmYjT#0EruxqhmJGZ4xKLh4tcgMGl_tgn1aV8FcTfPuFBEGmg
    =================================================================================================
    Reference Area (From Jyrki Alakuijala Comment)
    0 replies | 210 view(s)
  • fcorbelli's Avatar
    25th September 2020, 16:18
    fcorbelli replied to a thread zpaq updates in Data Compression
    1) this is not a virus (but a MPRESS packed file with some .EXE as resources, as usual for Delphi code) starting from 2013 (www.francocorbelli.it/pakka) you can see here http://www.francocorbelli.it/nuovosito/vari.html In other words, it is a monolithic EXE that extracts executable programs from its resources (in the% temp% \ pkz folder) so as not to depend on anything and to be portable. However, since directly linked programs (zpaq custom executables) may be obsolete, it has a mechanism that offers to download them directly (windows update style), useful for debug zpaq RCDATA zpaqfranz.exe zpaq32 RCDATA zpaqfranz32.exe testa RCDATA testa.exe sortone RCDATA sortone.exe sortone32 RCDATA sortone32.exe codepage RCDATA codepage.exe Those are - my zpaq.cpp patched 64 bit (source already posted, here it is http://www.francocorbelli.it/pakka/zpaqfranz/) - my zpaq.cpp patched 32 bit (source already posted) - my software like "head" (to refresh filesize when zpaq is running) - sortone my delphi 64-bit special sorter (for zpaq output) - sortone32 same for 32 bit - codepage my C software to set codepage UTF-8 into Windows's shell (to restore UTF8-file) program testa; {$APPTYPE CONSOLE} uses SysUtils,classes; var filename:string; Stream: TFileStream; Value: char; function prendiDimensioneFile(i_nomefile:string):int64; var F:file of byte; SearchRec : TSearchRec; begin Result:=0; if FindFirst(i_nomefile, faAnyFile, SearchRec ) = 0 then // if found Result := Int64(SearchRec.FindData.nFileSizeHigh) shl Int64(32) + // calculate the size Int64(SearchREc.FindData.nFileSizeLow); sysutils.FindClose(SearchRec); end; begin { TODO -oUser -cConsole Main : Insert code here } if ParamCount<>1 then begin Writeln('Testa 1.0 - by Franco Corbelli'); Writeln('Need just 1 parameter (filename)'); Exit; end; filename:=ParamStr(1); if not FileExists(filename) then begin Writeln('File name does not exists '+filename); Exit; end; Stream := TFileStream.Create(FileName, fmOpenRead or fmShareDenyNone); try Stream.ReadBuffer(Value, SizeOf(Value));//read a 4 byte integer /// writeln('#1 '+inttostr(Integer(value))); except /// writeln('Except'); end; Stream.Free; Writeln(prendidimensionefile(filename)); end. 
program sortone; {$APPTYPE CONSOLE} {$R *.res} {$I defines.inc} uses system.classes,system.sysutils; var gf_start:integer; gf_version:integer; function miacompara(List: TStringList; Index1, Index2: Integer): Integer; var i:integer; s1,s2:string; begin s1:=list.Substring(gf_start);//+list.Substring(42,4); s2:=list.Substring(gf_start);//+list.Substring(42,4); if s1=s2 then begin /// stessa porzione, sortiamo per parte iniziale s1:=list.Substring(gf_version); s2:=list.Substring(gf_version); if s1=s2 then begin result:=0; end else if s1<s2 then result:=-1 else result:=1; end else if s1<s2 then result:=-1 else result:=1; end; var sl:tstringlist; inizio:tdatetime; i:integer; filename:string; outputfile:string; totale:tdatetime; begin try { TODO -oUser -cConsole Main : Insert code here } except on E: Exception do Writeln(E.ClassName, ': ', E.Message); end; gf_start:=0; if paramcount<>4 then begin writeln('Sortone V1.1 - 64 bit'); writeln('4 parameters filename version path outputfile'); writeln('Example z:\1.txt 42 47 z:\2.txt'); exit; end; filename:=paramstr(1); if not fileexists(filename) then begin writeln('File not found '+filename); exit; end; try gf_version:=strtointdef(paramstr(2),0); finally end; if gf_version=0 then begin writeln('Strange version start'); exit; end; try gf_start:=strtointdef(paramstr(3),0); finally end; if gf_start=0 then begin writeln('Strange column start'); exit; end; outputfile:=paramstr(4); if fileexists(outputfile) then deletefile(outputfile); if fileexists(outputfile) then begin writeln('We have a immortal '+outputfile); exit; end; sl:=tstringlist.create; totale:=now; inizio:=now; writeln(timetostr(now)+' load/column '+inttostr(gf_start)); sl.loadfromfile(filename); writeln(timetostr(now)+' end load in '+floattostr((now-inizio)*100000)); inizio:=now; writeln(timetostr(now)+' purge'); for i:=sl.Count-1 downto 0 do begin if sl='' then begin sl.Delete(i); end else begin if sl<>'-' then sl.Delete(i); end; end; writeln(timetostr(now)+' end purge in '+floattostr((now-inizio)*100000)); writeln(timetostr(now)+' lines/sort '+inttostr(sl.Count-1)); inizio:=now; sl.CustomSort(miacompara); writeln(timetostr(now)+' end sort in '+floattostr((now-inizio)*100000)); inizio:=now; sl.SaveToFile(outputfile); writeln(timetostr(now)+' end save in '+floattostr((now-inizio)*100000)); writeln(timetostr(now)+' total time '+floattostr((now-totale)*100000)); end. /* gcc -O3 codepage.c -o codepage.exe */ #include <stdio.h> #include <windows.h> #define str_to_int(str) strtoul(str, (TCHAR **) NULL, 10) int main(int argc, char * argv) { UINT in_cp; UINT out_cp; in_cp=65001; out_cp=65001; SetConsoleCP(in_cp); SetConsoleOutputCP(out_cp); in_cp=GetConsoleCP(); out_cp=GetConsoleOutputCP(); printf("CodePage in=%u out=%u\n", in_cp, out_cp); return 0; } 2) as previously stated, it's a form (a Delphi-form made a separate EXE with $ifdef and so on) of my little ERP (with it's own commercial license). This one http://www.francocorbelli.it/nuovosito/zarc.html In this case, of course, I have stripped the "real" time license (briefly if you do not pay every year, you do not get updates) for a free one that is "always good". If it's a problem, I can modify the code to turn it off (a lot of $ifdef required, but doable). 
As you can see on the first run you can download the updates directly from my site if frmMainPakka.GetInetFile('http://www.francocorbelli.it/jita/'+extractfilename(i_nomefile), filetemp) then Result:=CopyFile(PChar(filetemp),PChar(i_nomefile),false); Obviously in this case I could log the applicant's IP, maybe from the web server. But who cares? Should I turn it off altogheter? 3) this is the currently build with and without mpress http://www.francocorbelli.it/pakka/mpressed.exe SHA1 d46996e1d265a94ea3f0e439d2de3328db71135a http://www.francocorbelli.it/pakka/not-mpressed.exe SHA1 3294a876f43be64d0a9567dbf39a873bab27e850 4) Into the delphi there is a function then, maybe, the euristic antivirus do not like very much. It's about "kill-every-file-within-filemask-in-a-folder" procedure SterminaFileConMaschera(i_path:string;i_maschera:string); var elencofile:tstringlist; i:integer; nomefile:TStringList; begin if i_path='' then exit; if not saggiascrivibilitacartella(i_path) then exit; elencofile:=tstringlist.create; nomefile:=tstringlist.create; enumerafile(i_path,i_maschera,elencoFile,nomefile); for i:=0 to elencofile.count-1 do if fileexists(elencofile.strings) then begin ///toglireadonly(elencofile.strings); deletefile(pchar(elencofile.strings)); end; elencofile.free; nomefile.free; end; And used like this procedure TfrmMainPakka.stermina; begin SterminaFileConMaschera(GetTempDirectory,'*.txt'); SterminaFileConMaschera(GetTempDirectory,'*.bat'); SterminaFileConMaschera(GetTempDirectory,'*.bin'); end; In fact, delete the temporary files extracted into %temp%\pkz 5) There is a "strange" function too (from virus euristic detector), a "language checker" function g_getLanguage:string; var wLang : LangID; szLang: Array of Char; begin wLang := GetSystemDefaultLCID; VerLanguageName(wLang, szLang, SizeOf(szLang)); Result:=szLang; end; function isItaliano:boolean; begin result:=pos('italia',lowercase(g_getlanguage))>0 end; That, for default, turn on Italian strings if... running on Italy's Windows. Otherwise turn on english (not fully translated in fact, it's heavy and boring) 6) If you run as an admin, you get another "risky" function Register the ZPAQ extension to the software, so you can "double click" and open directly the file. Maybe the google anti virus do not like it very much, I do not know. I hope I've been exhaustive.
    2555 replies | 1106228 view(s)
  • paleski's Avatar
    25th September 2020, 11:09
    paleski replied to a thread Kraken compressor in Data Compression
    PlayStation 5 IO System to Be ‘Supercharged’ by Oodle Texture, Bandwidth Goes Up to 17.38GB/s https://wccftech.com/ps5-io-system-to-be-supercharged-by-oodle-texture-bandwidth-goes-up-to-17-38gb-s/
    50 replies | 26373 view(s)
  • urntme's Avatar
    25th September 2020, 10:15
    Hello Encode.su community members! My name is Ananth Gonibeed and my username is urntme on this forum. I previously posted in the data compression section of this forum. Following some messages by other members of this community on other threads that I posted in, I decided to try and figure out how to code. However, instead of trying to code the data compression algorithm I mentioned in those threads, I decided to try something much simpler and much more accessible to my coding level for a first attempt. I decided to try and code my “Instant Sort” sorting algorithm. This sorting algorithm is called “Instant Sort” and it basically instantly sorts numbers. It is an algorithm that I came up with. It has a time complexity of O(1), and it accomplishes this because it's a new and different type of sorting algorithm. You can read all about it in the attached paper. I have attached a couple of things to the zip archive attached to this thread:
    1) The paper describing the “Instant Sort” algorithm and the basic concept around it.
    2) The executable you can run to test out a very, very primitive version of the algorithm.
    3) A document explaining how the code for the very, very primitive version of the algorithm works.
    4) The source code for the program I created. I used C++ and the syntax was a bit unfamiliar to me at first, but the thing works the way it's supposed to, so I can't complain.
    This is version 1 and I will probably build upon it further over time once I figure out how to do more complex versions of it. Let me know your thoughts and anything you want to say. Kudos, Ananth. P.S.: You can contact me at this email id: ananth.gonibeed@gmail.com if you want to contact me personally for any reason.
    6 replies | 365 view(s)
  • suryakandau@yahoo.co.id's Avatar
    25th September 2020, 08:52
    This .rar file contains the source code and binary file.
    39 replies | 2659 view(s)
  • suryakandau@yahoo.co.id's Avatar
    25th September 2020, 02:22
    In Windows, you can use drag and drop to compress or decompress files.
    39 replies | 2659 view(s)
  • Shelwien's Avatar
    25th September 2020, 02:13
    https://youtu.be/1kQUXpZpLXI?t=784
    4 replies | 162 view(s)
  • suryakandau@yahoo.co.id's Avatar
    25th September 2020, 02:08
    Fp8sk17 - improved 24-bit image compression ratio.
    astro-01.pnm (GDCC test file):
    fp8sk16: Total 8997713 bytes compressed to 4612625 bytes in 103.97 sec
    fp8sk17: Total 8997713 bytes compressed to 4509505 bytes in 106.42 sec
    39 replies | 2659 view(s)
  • Shelwien's Avatar
    24th September 2020, 23:42
    Shelwien replied to a thread zpaq updates in Data Compression
    Google complained, had to remove attachment from this post: https://www.virustotal.com/gui/file/d67b227c8ae3ea05ea559f709a088d76c24e024cfc050b5e97ce77802769212c/details Also it does seem to have some inconvenient things, like accessing "https://www.francocorbelli.com:443/konta.php?licenza=PAKKA&versione=PAKKA.DEMO".
    2555 replies | 1106228 view(s)
  • Shelwien's Avatar
    24th September 2020, 19:29
    http://imagecompression.info/test_images/ http://imagecompression.info/gralic/LPCB-data.html
    2 replies | 185 view(s)
  • Adreitz's Avatar
    24th September 2020, 18:53
    Hello. First post here. I've been lurking for a long time due to my enthusiast interest in lossless image compression, starting with PNG, then WebP, and now I'm playing with JPEG XL. I created my account because I don't understand something fundamental about the use of the JPEG XL lossless compressor. So this question is for either Jyrki or Jon.
    I've been experimenting with Jamaika's build of release 6b5144cb of cjpegXL with the aim of maximum lossless compression. (I tried building myself with Visual Studio 2019 on Windows 10, but was unsuccessful, as it couldn't understand a function that apparently should be built in. I don't know enough about programming, Visual Studio, or Windows to figure it out.) The issue I'm encountering is that, for some images, fewer flags are better, and I don't understand why. Take, for instance, the "artificial" image from the 8-bit RGB corpus at http://imagecompression.info/test_images/. Using the specified release of cjpegXL above, I reach a compressed file size of 808110 bytes by simply running cjpegxl.exe -q 100 -s 9. However, when I brute-force all combinations of color type and predictor to find the optimum, the best I can get is cjpegxl.exe -q 100 -s 9 -C 12 -P 5, which outputs a file of 881747 bytes.
    I figured I must be missing something, so I tried experimenting with all of the other documented command line flags, but didn't get any improvement. So then I went searching and ended up finding five undocumented flags: -E, -I, -c, -X, and -Y. I don't know what they do beyond affecting the resulting file size, and my only knowledge of their arguments is experimental. My best guess is the following:
    -E -- 0, 1, or 2
    -I -- 0 or 1
    -c -- 0, 1, or 2 (but 2 produces invalid files)
    -X -- takes any positive integer, but only seems to vary the output from approx. 85 to 100 (with many file size repeats in this range)
    -Y -- similar to -X, but ranging from approx. 65 to 100
    I also discovered that -C accepts values up to 75 (though most, but not all, arguments above 37 produce invalid output) and that -P also accepts 8 and 9 as arguments (which produce valid output and distinct file sizes compared to all documented predictors, and are even better than the defined predictors for certain files).
    Even with all of this, though, my best valid result from tweaking all of the flags I could access is 830368 bytes from cjpegXL -q 100 -s 9 -C 19 -P 5 -E 2 -I 1 -c 1 -X 99 -Y 97, which is still 21 KB greater than when I simply use -q 100 -s 9. So, what's going on here? From using libpng and cwebp, I am used to compressors that use heuristics to set default values for flags that are not specified by the user (and therefore you get a compression benefit if you spend the effort to manually find the best settings). But that doesn't seem to be the case with cjpegXL. What am I missing? Also, it would be great if you could provide an official description of what the undocumented flags do and what arguments they take. Thanks, Aaron
    137 replies | 10422 view(s)
  • suryakandau@yahoo.co.id's Avatar
    24th September 2020, 16:35
    I mean the lossless photo compression benchmark...
    2 replies | 185 view(s)
  • Sportman's Avatar
    24th September 2020, 12:39
    Fixed.
    110 replies | 11503 view(s)
  • Piotr Tarsa's Avatar
    24th September 2020, 08:14
    Nuvia doesn't even have any timeline for when their servers will hit the market, and it seems it could take them 2+ years to do so, so they need a high IPC jump vs. at least Intel Skylake. In the meantime the landscape is changing:
    - Intel released laptop Tiger Lake, which is basically laptop Ice Lake with much higher frequencies (there's a small IPC change, mostly due to beefier caches), nearly 5 GHz. This means Intel at least figured out how to clock their 10nm high, but since laptop Tiger Lake is still limited to max quad core, it seems that yield is still poor.
    - Arm has prepared two new cores: V1 for HPC workloads (2 x 256-bit SIMD) and N2 for business apps (2 x 128-bit SIMD): https://fuse.wikichip.org/news/4564/arm-updates-its-neoverse-roadmap-new-bfloat16-sve-support/ https://www.anandtech.com/show/16073/arm-announces-neoverse-v1-n2 The IPC jump is quite big, but it remains to be seen when the servers will hit the market, as it previously took a long time for Neoverse N1 to become available after the announcement. At least these are SVE (Scalable Vector Extension) enabled cores (both V1 and N2), so apps can finally be optimized using a decent SIMD ISA comparable to AVX (AVX512 probably has more features than SVE1, but SVE is automatically scalable without the need for recompilation).
    - Apple already presented the iPad Air 2020 with the Apple A14 Bionic 5nm SoC, but the promised performance increase over A13 seems to be small. I haven't found a reliable source mentioning Apple A14 clocks, so maybe they kept them constant to reduce power draw in mobile devices like iPad and iPhone? Right now there are people selling water cooling cases for iPhone (WTF?): https://a.aliexpress.com/_mtfZamJ
    - Oracle will offer ARM servers in their cloud next year: https://www.anandtech.com/show/16100/oracle-announces-upcoming-cloud-compute-instances-ice-lake-and-milan-a100-and-altra and IIRC they say they will compete on price.
    15 replies | 1276 view(s)
  • suryakandau@yahoo.co.id's Avatar
    24th September 2020, 01:58
    Where can I get the LPCB test files?
    2 replies | 185 view(s)
  • Jyrki Alakuijala's Avatar
    24th September 2020, 01:52
    I fully agree with that. We are working on getting some improvement in this domain. We have very recently made a significant (~20%) improvement in the modular mode by defaulting to butteraugli's XYB colorspace there, too. Not sure if it is published yet; it could be this week or next Monday. We are making the number of iterations of the final filtering configurable (0, 1, 2, or 3 iterations), allowing for some more variety in the compromise between smoothness and artefacts.
    137 replies | 10422 view(s)
  • Raphael Canut's Avatar
    23rd September 2020, 23:43
    Wavelets have advantages and drawbacks. Personally I find that the DCT block-based codecs have precision and NHW has neatness, also due to the fact that there is no deblocking in NHW, and so it's a choice. Personally, I prefer neatness to precision; for me it's visually more pleasant, but it's only my taste... The other advantage of NHW is that it's a lot (and a lot) faster to encode and decode. Cheers, Raphael
    196 replies | 22725 view(s)
  • Scope's Avatar
    23rd September 2020, 23:38
    Yes, I notice that many people compare codecs at low bpp, and if a codec is visually more acceptable there, it is also believed to be more efficient at higher bpp. Low quality is also quite in demand, though; especially with the spread of AVIF, people try to compress images as much as possible to reduce the page size, where the accuracy of these images is not so important. According to my tests, the problem at low bpp for JPEG XL (VarDCT) is images with clear lines, line art, graphics/diagrams and the like; there, artifacts and distortions of these lines or loss of sharpness become visible very quickly, and in modular mode there is noticeable pixelization and also loss of sharpness. On such images AVIF has strong advantages. If it were possible to give priority to preserving contours and lines with more aggressive filtering, or to select a preset (in WebP such presets sometimes helped), that would be good.
    137 replies | 10422 view(s)
  • fabiorug's Avatar
    23rd September 2020, 23:28
    So the image can't be divided into separate parts to perform the quality measure; it won't be as efficient as JPEG XL or as good for video or images on the web like design/fashion. I understood that.
    196 replies | 22725 view(s)
  • Raphael Canut's Avatar
    23rd September 2020, 23:26
    Yes, it's very difficult to perform block processing with wavelets, because wavelets are FIR filters and so have a transient response, and from my experience this transient response is quite dirty with wavelets, so it would cause noticeable artifacts at block boundaries... But I could be wrong. Yes, there is no block processing, and so the quantization parameters, for example, are the same for the whole image, hence the importance of a good psychovisual optimization. The advantage of this is that you don't have deblocking artifacts, for example...
    196 replies | 22725 view(s)
  • Jyrki Alakuijala's Avatar
    23rd September 2020, 23:21
    I suspect there is some more sharpness and preservation of textures. The improvements are encoder-only, so it will be possible to go back to less sharp later. I think on Sep 07 we didn't have the filtering control field in use yet. Now we turn off filtering in flat areas, making it easier to preserve fine texture. We control the filtering using an 8x8 grid, so we may now filter non-uniformly across a larger integral transform (such as a 32x32 DCT).
    137 replies | 10422 view(s)