Activity Stream

  • Nania Francesco's Avatar
    Today, 12:25
    I have managed to change your nationality! Sorry I'm late!
    204 replies | 129759 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Today, 11:47
    Could you give me some JPEG files? I don't have any files except test.jpg.
    2164 replies | 580594 view(s)
  • Gotty's Avatar
    Today, 10:32
    Gotty replied to a thread paq8px in Data Compression
    I see. You should test your changes with many different files. We don't build paq8px to compress a single test.jpg; it's a general compressor. Never add complexity, use more memory, or slow down compression just for one file. If you don't test with multiple files, you will not be able to properly evaluate your changes. So please do.
    2164 replies | 580594 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Today, 10:23
    When I add that context, it reduces the size of test.jpg.
    2164 replies | 580594 view(s)
  • Gotty's Avatar
    Today, 10:17
    Gotty replied to a thread paq8px in Data Compression
    Yes. Almost. The "tweeks" are tweaks. Parameter tuning = trying out different parameters and testing them. In that respect it's the same as yours. However, parameter tuning should be applied to parameters only. Your code contained changes to values that are not parameters (like those 1024->2048 changes), which must stay at exactly that value. So it helps parameter tweaking when you know some background: what you may tune, and what shall stay fixed or be calculated. It's also important to know how you can combine contexts. (Learning opportunity for you ->) When it's about combining vectors, you may add them, average them, or weight them. But when the contexts are simply independent values, you should not add them (see the sketch below).
    Your code (wrong):
        cxt = hash(++n, hc, advPred / 11 + (runPred), sSum2 >> 6);
    Fix:
        cxt = hash(++n, hc, advPred / 11, runPred, sSum2 >> 6);
    So this release is mostly fixing your code and testing your additional contexts more thoroughly. I removed some of your contexts because they seemed to have no benefit. My question is: why did you add them? Did you have some files where they are beneficial?
    2164 replies | 580594 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Today, 02:31
    It looks similar to what I do - random forest too... :_haha2:
    2164 replies | 580594 view(s)
  • kaitz's Avatar
    Today, 00:42
    kaitz replied to a thread paq8px in Data Compression
    Command                                      Size      Compressed  Sec
    paq8pxd_v90 -s8 PH01046J.JPG                 135611    107464      6
    paq8px -8 PH01046J.JPG                       135611    105624      7
    paq8pxd_v90 -s8 20200708_105932.jpg          3009379   2237601     83
    paq8px -8 20200708_105932.jpg                3009379   2756675     748 (drops to default*)
    paq8pxd_v90 -s8 A10.jpg                      842468    623059      25
    paq8px -8 A10.jpg                            842468    624651      36
    paq8pxd_v90 -s8 test.JPG                     3162196   2187172     85
    paq8px -8 test.JPG                           3162196   2194672     134
    paq8pxd_v90 -s8 paq8px_v193_4_Corpuses.jpg   3340610   1418082     96
    paq8px -8 paq8px_v193_4_Corpuses.jpg         3340610   1513474     142
    paq8px -8 maxpc.pdf                          14384698  8895217     1827
    paq8pxd_v90 -s8 maxpc.pdf                    14384698  8920647     1352
    Memory: paq8px 2520 MB, paq8pxd_v90 1978 MB (about 600 MB is JPEG)
    *replace JASSERT((cPos & 63U) == 0) with: while (cPos & 63U) cPos++;
    2164 replies | 580594 view(s)
  • Gotty's Avatar
    Yesterday, 23:39
    Gotty replied to a thread paq8px in Data Compression
    Fixes and improvements in jpeg model - fixed indexing (thank you, mpais!) - removed 2 contexts and 1 apm, added 1 context - hashing instead of direct contexts in apms (thank you, Kaitz!) - using abs for advPred and lcp in apms (thank you, Kaitz!) - reduced memory use in apms (-24 MB on level 8) - tweaks
    2164 replies | 580594 view(s)
  • Nania Francesco's Avatar
    Yesterday, 22:53
    As soon as I can get my hands on the resource files I use for the construction of the HTML pages! I'm sorry I got the nation wrong!
    204 replies | 129759 view(s)
  • schnaader's Avatar
    Yesterday, 21:15
    Thanks for adding Precomp to the benchmark! Could you change the country from RUS to GER? Some suggestions for options that might improve results for some of the tests:
    - "-cl -intense" for the summary, app1, app2, game1, game2, iso, raw and bin tests (intense is a bit slower, but detects headerless zlib streams)
    - "-cl -lf+d1", "-cl -lf+d2" or "-cl -lf+d4" for the wav test (enables lzma2 delta compression, which will often lead to better results on audio)
    204 replies | 129759 view(s)
  • Darek's Avatar
    Yesterday, 19:15
    Darek replied to a thread Paq8pxd dict in Data Compression
    First, my testset for paq8pxd_v90. Nice improvements. Still some bytes behind the latest paq8px (about 100KB - without LSTM), but almost all files got some gains. There are only some losses for the biggest 24bpp images - both of them lose about 400 bytes.
    975 replies | 342275 view(s)
  • anormal's Avatar
    Yesterday, 18:08
    After some fixing and testing, I finished with this format: 5 bits for the number of tokens, then the tokens; each token is 12 bits (a 9-bit index into the dict + a 3-bit match length), encoded as two 6-bit groups; literals are 6 bits each. Each 6-bit group is finally encoded with a standard Huffman encoder. I am happy, as even very small strings (4-5 chars) are always compressed. Some stats (sizes are in bits; see the sketch after this post):
    line:775 tokens:5(60) literals:25(150) orig:392 - naive6: 294 - compressed no huffman: 214 - huffman: 202
    line:776 tokens:3(36) literals:5(30) orig:136 - naive6: 102 - compressed no huffman: 70 - huffman: 60
    line:777 tokens:4(48) literals:11(66) orig:232 - naive6: 174 - compressed no huffman: 118 - huffman: 110
    line:778 tokens:13(156) literals:21(126) orig:592 - naive6: 444 - compressed no huffman: 286 - huffman: 301
    line:779 tokens:1(12) literals:11(66) orig:112 - naive6: 84 - compressed no huffman: 82 - huffman: 69
    line:780 tokens:2(24) literals:10(60) orig:136 - naive6: 102 - compressed no huffman: 88 - huffman: 73
    line:781 tokens:2(24) literals:3(18) orig:88 - naive6: 66 - compressed no huffman: 46 - huffman: 46
    line:782 tokens:2(24) literals:9(54) orig:128 - naive6: 96 - compressed no huffman: 82 - huffman: 74
    line:783 tokens:2(24) literals:14(84) orig:216 - naive6: 162 - compressed no huffman: 112 - huffman: 92
    line:784 tokens:6(72) literals:9(54) orig:240 - naive6: 180 - compressed no huffman: 130 - huffman: 120
    line:785 tokens:1(12) literals:5(30) orig:80 - naive6: 60 - compressed no huffman: 46 - huffman: 42
    line:786 tokens:1(12) literals:5(30) orig:80 - naive6: 60 - compressed no huffman: 46 - huffman: 42
    line:787 tokens:5(60) literals:14(84) orig:280 - naive6: 210 - compressed no huffman: 148 - huffman: 139
    line:788 tokens:4(48) literals:6(36) orig:168 - naive6: 126 - compressed no huffman: 88 - huffman: 98
    line:789 tokens:0(0) literals:5(30) orig:40 - naive6: 30 - compressed no huffman: 34 - huffman: 28
    Final compression no huffman: 216246 / 356872 = 0.6059
    Final compression huffman: 185306 / 356872 = 0.5193
    I'll now try to build a better dictionary that provides more matches and thus better compression. What I am using now is simply inserting+merging ngrams sorted by frequency of use.
    7 replies | 403 view(s)
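    A sketch of how the bitstream described above might be packed, under my reading of the format (5-bit token count; 12-bit tokens = 9-bit dictionary index + 3-bit match length, split into two 6-bit groups; 6-bit literals). The Huffman pass over the 6-bit groups is not shown, and all names are illustrative, not anormal's actual code:
        #include <cstdint>
        #include <utility>
        #include <vector>

        struct BitWriter {
            std::vector<uint8_t> bytes;
            uint32_t acc = 0;
            int nbits = 0;
            void put(uint32_t v, int n) {  // append the n low bits of v, MSB first
                acc = (acc << n) | (v & ((1u << n) - 1));
                nbits += n;
                while (nbits >= 8) { nbits -= 8; bytes.push_back(uint8_t(acc >> nbits)); }
            }
        };

        void encodeLine(BitWriter &bw,
                        const std::vector<std::pair<int, int>> &tokens,  // (dictIndex, matchLen)
                        const std::vector<uint8_t> &literals) {          // 6-bit literal codes
            bw.put(uint32_t(tokens.size()), 5);           // 5 bits: number of tokens
            for (const auto &t : tokens) {
                uint32_t tok = (uint32_t(t.first) << 3) | uint32_t(t.second);  // 9+3 bits
                bw.put(tok >> 6, 6);                      // first 6-bit group
                bw.put(tok & 63, 6);                      // second 6-bit group
            }
            for (uint8_t lit : literals) bw.put(lit, 6);  // 6 bits per literal
        }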
  • moisesmcardona's Avatar
    Yesterday, 16:02
    moisesmcardona replied to a thread paq8px in Data Compression
    Honestly, I don't see where the bullying part is in this topic... Let's be good people, move on, and continue delivering the best we do here :D
    2164 replies | 580594 view(s)
  • Nania Francesco's Avatar
    Yesterday, 14:38
    I was able to test all the archivers and compressors again, and I also added Lzpm 0.18 and Precomp 0.4.7. I hope to release more updates soon! Only from: http://heartofcomp.altervista.org/MOC/MOCA.htm
    204 replies | 129759 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 10:39
    2164 replies | 580594 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 06:14
    I believe that no one in this forum is a child, so they can tell what is bullying and what is not. Imagine if your child became a victim of bullying - what would you do? Does this forum support bullying other people? I believe the answer is no.
    2164 replies | 580594 view(s)
  • Gotty's Avatar
    Yesterday, 02:20
    Gotty replied to a thread paq8px in Data Compression
    It does not sound like bullying to me. It is more like expressing anger or dislike because of your past. "You did stupid things." <- That's also not bullying, it's a statement. Despite all that, you are welcome here. Read our posts above again and you'll see. For me your intention is clear in this thread: you would like to contribute (in contrast to your past, when you wanted fame for yourself only). I take it as a sign of change and willingness.
    Let's see the present. "Blind" parameter tuning will not be enough to get far. As mpais stated, it works, but it's not very efficient. Note: please understand his motivation. By reading all his posts you will see that he would like to make/keep paq8px as high quality as possible, and any change that is not in that direction makes him feel uneasy. He expressed his fear that your contributions may not reach a high quality mark. And we all know that his fear has grounds - I expressed the same in my earlier posts when I fixed bugs in your code.
    Let's look into the future, then. Please expect to spend 3-6 months just to understand the code base. I actually needed more time, so it may be more ;-) You will need to read the code line by line. Are you ready to do it? It's not an easy piece of software, you already know that. You will also need to do proper testing before any contribution. What does that mean?
    - You need to test on many different files;
    - You need to test the effect of your changes one by one. I have the feeling that you add many new contexts at once, and when the result is satisfying you don't even check which context was the one that actually helped. You must include only those changes that actually help and eliminate any changes that have no or little effect, as with any unnecessary code the compression gets slower and slower;
    - You need to disable NDEBUG and run sanity tests to see if your changes trigger any asserts.
    Your contribution to the jpeg model was a good start: a nice compression gain, but not so good memory usage and time. Although I don't have much time, I'll see what I can do to make it a bit better in those areas.
    2164 replies | 580594 view(s)
  • suryakandau@yahoo.co.id's Avatar
    24th October 2020, 22:49
    As I said before, it is not about programmer ethics anymore; it is about bullying, and it has become a human legal case now. Does this community support bullying other people?
    2164 replies | 580594 view(s)
  • moisesmcardona's Avatar
    24th October 2020, 22:16
    Yes sir, attached are both native (AVX2) and standard (SSE2) builds, compiled with Multithreading support (-DMT)
    975 replies | 342275 view(s)
  • Darek's Avatar
    24th October 2020, 22:07
    Darek replied to a thread Paq8pxd dict in Data Compression
    If you can make a build of paq8pxd v90, I could test it. Question: are the 4 corpuses enough to test, or should I take other files as well?
    975 replies | 342275 view(s)
  • Gotty's Avatar
    24th October 2020, 21:47
    Gotty replied to a thread paq8px in Data Compression
    The file has the same content, only line endings are different. Paq8px doesn't care, so the files are essentially the same. You can use either of them.
    2164 replies | 580594 view(s)
  • hexagone's Avatar
    24th October 2020, 21:35
    hexagone replied to a thread paq8px in Data Compression
    If you want to be part of a community, such comments are not helpful. Given your well-documented history of misrepresentations (which led to your disqualification from the GDCC), you should know better than to question people's ethics and character.
    2164 replies | 580594 view(s)
  • Darek's Avatar
    24th October 2020, 20:21
    Darek replied to a thread paq8px in Data Compression
    Scores of my testset for paq8px_v185 - nice gain (150 bytes) for F.JPG file. :)
    2164 replies | 580594 view(s)
  • moisesmcardona's Avatar
    24th October 2020, 18:33
    Hey @kaitz, I went ahead and created a CMakeLists.txt file to allow compilation using CMake/Make. See my PR here: https://github.com/kaitz/paq8pxd/pull/12 :_yahoo2:
    975 replies | 342275 view(s)
  • anormal's Avatar
    24th October 2020, 17:43
    I don't think it is useful outside this special case. Maybe it could be scaled in a similar way, for encoding a predefined set of strings where you need to decompress one string at a time. But with modern processors/RAM you could use a much bigger dict, longer pointers into the dictionary, etc. As I said, you could check other projects such as femtozip or unishox, and Littlebit as posted before (I tested many; usually shoco and smaz are suggested, but the ones I mentioned are much more advanced and mature). I am going to try a GA to see if there are more interesting strategies for packing the dictionary than the one I am using. If anyone could explain in a few words how the zstd dictionary-building algorithms work (cover and fastcover, as I see in the sources), I'd be happy :D
    @danlock, yes you are right, the standard Sinclair Spectrum has 48KB (including screen RAM); I think the VIC-20 had 4KB? There are also many MSX systems with varied RAM sizes - 16, 32, 64, etc. - and the Spectrum 128 has pageable banks, etc.
    Edit: I've just remembered I was looking for something similar years ago, while I was building a chess PGN repo and tried to compress each game separately in the DB. I'll try it with PGNs when I've tested more things, but with maybe a 1MB dict? It could be fun to explore this.
    7 replies | 403 view(s)
  • kaitz's Avatar
    24th October 2020, 15:33
    kaitz replied to a thread Paq8pxd dict in Data Compression
    Confirmed. I made a GitHub issue for it and will look into it in January. --- Also uploaded paq8pxd_v90, if someone wants to test. Can't upload large files now, so source only.
    975 replies | 342275 view(s)
  • suryakandau@yahoo.co.id's Avatar
    24th October 2020, 15:15
    This is not about programmer ethics anymore; it is about a human legal case now. If the GDCC chooses you as the winner, I think that is not a wise decision.
    2164 replies | 580594 view(s)
  • Darek's Avatar
    24th October 2020, 13:29
    Darek replied to a thread paq8px in Data Compression
    Scores of paq8px_v194 for my testset - there are no big changes (especially on my jpeg file too :( )
    2164 replies | 580594 view(s)
  • Darek's Avatar
    24th October 2020, 13:26
    Darek replied to a thread paq8px in Data Compression
    The length of the english.exp file is different (shorter) than in the latest release. You need to swap this file for the newest one. However, from tests on my files and Silesia, there is no big impact from changing this file.
    2164 replies | 580594 view(s)
  • suryakandau@yahoo.co.id's Avatar
    24th October 2020, 12:42
    I smell a bullying effort here. It is not good for this forum to tolerate that. I know your Emma compressor is number one in GDCC image compression, but your character is not good.
    2164 replies | 580594 view(s)
  • Gotty's Avatar
    24th October 2020, 12:12
    Gotty replied to a thread paq8px in Data Compression
    I've just come up with a name for this approach: "Random Forest" :D Yes, it's very clear he has no understanding.
    2164 replies | 580594 view(s)
  • mpais's Avatar
    24th October 2020, 11:09
    mpais replied to a thread paq8px in Data Compression
    Seems I'm late to the party, as usual. JpegModel.cpp, line 699:
        cxt = hash(++n, hc, advPred / 13, prevCoef2 / 20, prevCoefRs);
    advPred is
    And I'm guessing no one is testing his fpaq8 fork on 8-bit audio files. Almost 50% slower for about a 0.22% to 0.28% improvement over 193fix2. 17 extra contexts and 13 extra daisy-chained SSE stages, from a quick glance. Don't get me wrong, it's a valid approach, even if it's not very efficient, and the JPEG model is one of the lowest-complexity models in paq8, so there's still some low-hanging fruit; it's probably a good choice for someone wanting to start helping development, as it's one of the easiest to improve. And I don't really have the time right now to rewrite this model, so I can't put my money where my mouth is; hence you guys are free to take my opinion as you please.
    There is, however, the history of blatant disregard for proper attribution from Surya (and I'm not entirely convinced he and bwt are different users, but that's a different story), which it seems has even gotten him banned from the GDCC. I'll be the first to admit my coding skills are mediocre at best, and I usually make dumb mistakes that you so kindly fix :D, but this is just throwing stuff at the wall and seeing what sticks, without even understanding what one is doing. I get it, it's easy to just learn how to compile paq8, then tweak some numbers or add "hashes" (as he puts it) with different combinations of previous contexts, run it and see what effect it had. Oh yeah, and strip the interface, change the program name, release the binary only and make wild claims...
    Surya, if you have changed your ways and really want to help, I welcome your efforts and I'm sure everyone here will help you learn. Tempting as it may be to just brute-force compression improvements, I suggest you first learn and code some simpler compression methods to hone your skills and get a better understanding of the field, instead of jumping head first into such a complex codebase.
    2164 replies | 580594 view(s)
  • Gotty's Avatar
    23rd October 2020, 23:59
    Gotty replied to a thread paq8px in Data Compression
    Passed all my tests. Impressive results. I needed to make some cosmetic changes only. Well done, Surya! @all: You may test with the exe posted by Surya above. It's official ;-) Pushed the changes to my git and created a pull request to hxim.
    2164 replies | 580594 view(s)
  • Gotty's Avatar
    23rd October 2020, 23:48
    Gotty replied to a thread paq8px in Data Compression
    The difference in the english.exp file is due to different line endings (UNIX vs DOS). Paq8px does not care, so both are OK. On my local system it has DOS line endings (as in the original releases by mpais), and I attach releases with DOS line endings. It looks like on git some of the files were converted to UNIX line endings (like english.exp and most of the cpp files) and some stayed DOS (like english.dic and some of the hpp files). Strange, isn't it? Even stranger: when I clone or pull the repository with git, english.exp contains DOS line endings. So when Surya downloaded the source from git using the direct link, he got UNIX line endings, while my attachment contained DOS line endings. This is the source of the confusion. As soon as Surya uses "git clone" to grab the full contents of the repo, he will have the same line endings as most of us. No harm done. It's safe to use the training files with either line ending.
    2164 replies | 580594 view(s)
  • danlock's Avatar
    23rd October 2020, 18:54
    Hmm... an 8-bit machine from the '80s with a total of 64K RAM would have less than 64K available for use.
    7 replies | 403 view(s)
  • Gotty's Avatar
    23rd October 2020, 17:18
    Gotty replied to a thread paq8px in Data Compression
    The next step would be to learn git. Unfortunately, a couple of forum posts here will not be enough to guide you through it - sorry, git is a larger topic. But it is a must today, so you don't have much choice but to google, read, and learn. https://guides.github.com/activities/hello-world/
    2164 replies | 580594 view(s)
  • skal's Avatar
    23rd October 2020, 16:56
    Interesting... Is this happening only for qualities around q=100? Or at lower ones too (q=60-80, for instance)? skal/
    7 replies | 1015 view(s)
  • e8c's Avatar
    23rd October 2020, 14:49
    Algorithm Errors as Art: https://app.box.com/s/gtm698mi8ns8adv62i1s57wgcuj0lab1
    7 replies | 814 view(s)
  • Jyrki Alakuijala's Avatar
    23rd October 2020, 14:39
    Sanmayce's testing from 2.5 years ago in https://github.com/google/brotli/issues/642 shows that large-window Brotli competes in density with 7zip but decodes much faster. Brotli 11d29 is on the Pareto front of decoding speed/density for every large-corpora test, sometimes improving by more than 10x on the previous entry on the Pareto front.
    204 replies | 129759 view(s)
  • suryakandau@yahoo.co.id's Avatar
    23rd October 2020, 14:32
    I have created a GitHub account; what is the next step?
    2164 replies | 580594 view(s)
  • moisesmcardona's Avatar
    23rd October 2020, 13:39
    moisesmcardona replied to a thread paq8px in Data Compression
    Please practice committing your changes with Git. Thanks.
    2164 replies | 580594 view(s)
  • suryakandau@yahoo.co.id's Avatar
    23rd October 2020, 13:26
    No, I have changed the jpeg model only. Why?
    2164 replies | 580594 view(s)
  • LucaBiondi's Avatar
    23rd October 2020, 13:24
    LucaBiondi replied to a thread paq8px in Data Compression
    Have you changed the english.exp file? Thank you, Luca
    2164 replies | 580594 view(s)
  • suryakandau@yahoo.co.id's Avatar
    23rd October 2020, 03:06
    Here are the source code and the binary file inside the package.
    2164 replies | 580594 view(s)
  • Nania Francesco's Avatar
    22nd October 2020, 22:47
    Due to an error I encountered, I am retesting the simple compressors in the "iso" format. I am currently continuing with all the compressors already tested (ZSTD, LAZY, etc.). I'm sorry! I hope to republish soon!
    204 replies | 129759 view(s)
  • a902cd23's Avatar
    22nd October 2020, 15:00
    a902cd23 replied to a thread WinRAR in Data Compression
    WinRAR - What's new in the latest version
    Version 6.0 beta 1
    1. "Ignore" and "Ignore All" options are added to the read error prompt. "Ignore" allows to continue processing with the already read file part only, and "Ignore All" does it for all future read errors. For example, if you archive a file, a portion of which is locked by another process, and "Ignore" is selected in the read error prompt, only the part of the file preceding the unreadable region will be saved into the archive. It can help to avoid interrupting lengthy archiving operations, though be aware that files archived with "Ignore" are incomplete. If switch -y is specified, "Ignore" is applied to all files by default. The previously available "Retry" and "Quit" options are still present in the read error prompt as well.
    2. Exit code 12 is returned in the command line mode in case of read errors. This code is returned for all options in the read error prompt, including the newly introduced "Ignore" option. Previously the more common fatal error code 2 was returned for read errors.
    3. If several archives are selected, the "Extract archives to" option group in the "Options" page of the extraction dialog can be used to place extracted files into a specified destination folder, into separate subfolders in the destination folder, into separate subfolders in archive folders, or directly into archive folders. It replaces the "Extract archives to subfolders" option and is available only if multiple archives are selected.
    4. The new -ad2 switch places extracted files directly into the archive's own folder. Unlike -ad1, it does not create a separate subfolder for each unpacked archive.
    5. The "Additional switches" option in the "Options" page of the archiving and extraction dialogs allows to specify WinRAR command line switches. It might be useful if there is no option in the WinRAR graphical interface matching a switch. Use this feature only if you are familiar with WinRAR command line syntax and clearly understand what the specified switches are intended for.
    6. Compression parameters in the "Benchmark" command are changed to a 32 MB dictionary and "Normal" method. They match the RAR5 default mode and are more suitable to estimate the typical performance of recent WinRAR versions than the former 4 MB "Best", intended for the RAR4 format. Latest "Benchmark" results cannot be compared with previous versions directly. The new parameter set produces different values, likely lower because of the eight times larger dictionary size.
    7. When unpacking a part of the files from a solid volume set, WinRAR attempts to skip volumes in the beginning and start extraction from the volume closest to the specified file that has reset solid statistics. By default WinRAR resets the solid statistics at the beginning of large enough solid volumes where possible. For such volumes, extracting a part of the files from the middle of the volume set can be faster now. It does not affect performance when all archived files are unpacked.
    8. Previously WinRAR automatically resorted to extracting from the first volume when the user started extraction from a non-first volume and the first volume was available. Now WinRAR does so only if all volumes between the first and the specified one are also available.
    9. A warning is issued when closing WinRAR if one or more archived files had been modified by external apps but failed to be saved back into the archive because an external app still locks them. Such a warning includes the list of modified files and proposes to quit immediately and lose the changes, or to return to WinRAR and close the editor app. Previous versions issued a similar warning while editing a file, but did not remind about it again when quitting.
    10. The "Move to Recycle Bin" option in the "Delete archive" options group of the extraction dialog places deleted archives into the Recycle Bin instead of deleting them permanently.
    11. The new "Clear history..." command in the "Options" menu allows to remove the names of recently opened archives in the "File" menu and to clear the drop-down lists with previously entered values in dialogs. For example, these values include archive names in the archiving dialog and destination paths in the extraction dialog.
    12. "File time" options in the "Advanced" part of the extraction dialog are now available for 7z archives. Additionally to modification time, WinRAR can set creation and last access time when unpacking such archives.
    13. A ""New" submenu items" options group is added to the "Settings/Integration/Context menu items..." dialog. You can use these options to remove the "WinRAR archive" and "WinRAR ZIP archive" entries in the "New" submenu of the Windows context menu. The new state of these options is applied only after you press "OK" both in "Context menu items" and in its parent "Settings" dialog.
    14. <Max>, <Min> and <Hide> commands can be inserted before the program name in the SFX "Setup" command to run a program in a maximized, minimized or hidden window. For example: Setup=<Hide>setup.exe
    15. It is possible to specify an additional high resolution logo for the SFX module. If such a logo is present, the SFX module scales and displays it in high DPI Windows mode, providing better visible quality compared to resizing the standard logo. Use "High resolution SFX logo" in the "Advanced SFX options" dialog to define such a logo. In command line mode, add a second -iimg switch to set the high resolution logo. The recommended size of the high resolution logo PNG file is 186x604 pixels.
    16. If the archive currently opened in the WinRAR shell was deleted or moved by another program, WinRAR displays "Inaccessible" before the archive name in the window title. It also flashes the window caption and taskbar button.
    17. The "Total information" option in the "Report" dialog is renamed to "Headers and totals". Now it also adds the headers of report columns additionally to the total information about listed files and archives.
    18. If archive processing is started from the Windows context menu in a multiple monitor system, WinRAR operation progress and dialogs use the monitor with the context menu. While basic multiple monitor support was present in previous versions' shell extension for mouse-driven commands, now it is extended to operations initiated from the keyboard and to dropping files onto archives.
    19. The new -imon<number> switch allows to select a monitor to display WinRAR operation progress and dialogs in the command line mode. Use -imon1 for the primary and -imon2 for the secondary monitor. For example, "WinRAR x -imon2 arcname" will start extraction on the secondary monitor. It works only in the command line mode and affects neither the interactive WinRAR graphical interface nor console RAR.
    20. Switch -idn hides archived names output in archiving, extraction and some other commands in console RAR. Other messages and the total percentage are not affected. You can use this switch to reduce visual clutter and console output overhead when archiving or extracting a lot of small files. Minor visual artifacts, such as the percentage indicator overwriting the last few characters of error messages, are possible with -idn.
    21. The former "-im - show more information" switch is changed to "-idv - display verbose output" for consistency with the console RAR -id message control options and to avoid a potential name conflict with the newer -imon switch. While WinRAR still recognizes both -im and -idv, -im support can be dropped in the future.
    22. It is allowed to add an optional %arcname% variable to a compression profile name. Such a variable will be replaced with the actual archive name. It might be convenient when used with the "Add to context menu" profile option. For example, you can create a ZIP compression profile and set its name to "Add to %arcname%" to display it with the actual ZIP archive name in the context menu.
    23. Ctrl+C and Ctrl+Ins keyboard shortcuts can be used in the "Diagnostic messages" window to copy its contents to the clipboard.
    24. More text is allowed in the tray icon hint before a lengthy text is truncated. Also, such text is now truncated in the middle of the string, so both the command type and the completion percentage are still visible.
    25. In case of a clean install, if previous version compression profiles are not present, the "Files to store without compression" field in newly created predefined compression profiles is set to: *.rar *.zip *.cab *.7z *.ace *.arj *.bz2 *.gz *.lha *.lzh *.taz *.tgz *.xz *.txz. You can change this field and save a modified value to the compression profile later. Previous versions set this field to blank for a clean install.
    26. The destination path history in the extraction dialog treats paths like 'folder' and 'folder\' as the same path and displays only a 'folder' entry. Previously they occupied two entries in the history.
    27. The "Enable Itanium executable compression" GUI option and the -mci command line switch are removed. Optimized compression of Itanium executables is not supported anymore. WinRAR can still decompress already existing archives utilizing Itanium executable compression.
    28. Bugs fixed:
    a) the "Lock", "Comment" and "Protect" commands could not be applied to several archives selected in the WinRAR file list at once;
    b) an SFX archive process did not terminate after completing extraction in Windows 10 if the archive comment included "Setup" and "SetupCode" commands, did not include a "TempMode" command, and the setup program was running for more than 8 minutes;
    c) compression profiles with a quote character in the profile name could not be invoked from the Explorer context menu.
    187 replies | 133231 view(s)
  • Lithium Flower's Avatar
    22nd October 2020, 12:32
    @Jyrki Alakuijala Sorry, my English is not good, and I'm sorry for replying so late. Thank you for your reply, I'm really grateful.
    1. Using the butteraugli metric to compare different encoders: I developed a multithreaded Python program that uses the butteraugli metric to compare different encoders, choosing the smaller image file below a maxButteraugliScore. Should maxButteraugliScore be set to 1.6 or to 1.3?
    2. Some butteraugli questions: in my first post (*Reference 02) there is a ButteraugliScore reference list and guetzli quality.cc; are those score reference lists still valid for Butteraugli-master and Butteraugli-jpeg xl? https://github.com/google/guetzli/blob/b473cf61275991e2a937fe0402d28538b342d2f8/guetzli/quality.cc#L26 On some non-photographic images, the Butteraugli-jpeg xl score shows unexpected behavior: 1. at jpeg q96, q97, q98 butteraugli-jpeg xl still gives a large butteraugli score (1.8~2.0); 2. rgba32 png files with transparency give a much larger butteraugli score (426.4349365234), behaving like the sample @cssignet posted. 3. If Butteraugli-master and Butteraugli-jpeg xl return different butteraugli scores, should I choose the Butteraugli-master score, because Butteraugli-master's xyb is more accurate? 4. Do the butteraugli-jpeg xl 3rd norm and 12 norm have a quality reference list, like the guetzli/quality.cc list and the *Reference 02 list in my first post?
    3. jpeg xl -jpeg1 feature: I built sjpeg-master and tested some non-photographic images; I get different butteraugli scores at the same quality level, and -jpeg1 gets a great butteraugli score. https://github.com/webmproject/sjpeg I think jpeg xl -jpeg1 doesn't use sjpeg for the conversion, only for output, and if the input image has transparency, jpeg xl -jpeg1 will keep it, while sjpeg can't output it successfully. I can't find more -jpeg1 details; could you provide them? -jpeg1 butteraugli score data sheet: https://docs.google.com/spreadsheets/d/1skRbwQ32Qpdyidx8UYLaOpeB8SCQ7Z9Xtsoiz39mol0/edit#gid=0
    4. Lossy translucency (alpha) feature: humans can't see translucency, and webp lossy has a compression factor; if I set the -alpha_q compression factor to 1 or 0, the image translucency gets more lossy. This loss is invisible, but will this lossy alpha feature cause some invisible bad side effects or format side effects? I am very curious about this; could you teach me about this feature?
    @cssignet Thank you for your reply, I'm really grateful. I'm sorry, I didn't make myself clear: I use the butteraugli metric to compare pingo near-lossless and pngfilter, choosing the smaller image file below a maxButteraugliScore. Thanks for your suggestion; pingo near-lossless is working very well, and thank you for developing pingo. :) About the larger butteraugli score: maybe this comment explains it? I don't really understand it.
    7 replies | 1015 view(s)
  • LucaBiondi's Avatar
    22nd October 2020, 10:50
    LucaBiondi replied to a thread paq8px in Data Compression
    Hello all, these are the results of my big testset: v193fix2 vs. v194. We have a big gain of about 150K in the JPEG section. We also have a gain of 22K on pdf files. Great job! Luca https://sqlserverperformace.blogspot.com
    2164 replies | 580594 view(s)
  • Jyrki Alakuijala's Avatar
    22nd October 2020, 01:29
    Out of curiosity: Is there a practical use for this or is it for fun with retro computing?
    7 replies | 403 view(s)
  • anormal's Avatar
    21st October 2020, 19:53
    @xezz you were right - similar ideas, and I learnt a lot. I also found there the paper about syllable encoding; they even tried and tested a genetic evolver for building a dictionary. It really surprised me: what I thought was a silly idea when I came up with it :D was actually used and solved real problems :D Thanks
    7 replies | 403 view(s)
  • suryakandau@yahoo.co.id's Avatar
    21st October 2020, 19:22
    I have implemented your idea (in the apm, div lcp or adv from 16 to 14); it reduces the size further. Thank you.
    2164 replies | 580594 view(s)
  • Gotty's Avatar
    21st October 2020, 17:28
    Gotty replied to a thread paq8px in Data Compression
    ​@suryakandau@yahoo.co.id I would also highly encourage you use the git repo. But since your fixes had issues, I would like to ask you to wait with sending pull requests to the hxim until you can comfortably do fixes, tweaks and improvements without introducing bugs. You may fork my repo (https://github.com/GotthardtZ/paq8px) if you wish and make pull requests to me, so we can learn together and make code reviews. My repo is up to date with hxim.
    2164 replies | 580594 view(s)
  • kaitz's Avatar
    21st October 2020, 17:24
    kaitz replied to a thread paq8px in Data Compression
    On jpeg: in the apm, div lcp with 16->14 or you lose info; combine and hash them. Maybe 3 apms less. Previous errors also help. You can find it :) Use abs values in the apm (/lcp, adv/). The apm step size can also be lower than 20 if hashed and more memory is used - overall, probably the same usage. In the m1 mixer lower layer, use update rate 2 or 1.5; lower the column count in the main mixer. mcupos in m1? On suryakandau's test.jpg the gain should be 8kb.
    2164 replies | 580594 view(s)
  • moisesmcardona's Avatar
    21st October 2020, 15:46
    moisesmcardona replied to a thread paq8px in Data Compression
    You need to have a GitHub account and create a fork. You'll then need to clone your fork and add hxim's repo as the upstream. From there, you'll fetch upstream and merge it into your own repo. Once your changes are done, commit them and push them. Then you can create a pull request on GitHub to have your changes considered for merging into the main repo. (See the command sketch below.)
    2164 replies | 580594 view(s)
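    The fork workflow described above, as concrete commands (YOUR-USERNAME is a placeholder; a sketch, not the only way to do it):
        git clone https://github.com/YOUR-USERNAME/paq8px.git   # clone your fork
        cd paq8px
        git remote add upstream https://github.com/hxim/paq8px.git
        git fetch upstream
        git merge upstream/master        # sync your fork with the main repo
        # ...edit the code, then:
        git add -A
        git commit -m "Describe your change"
        git push origin master
        # finally, open a pull request on GitHub from your fork to hxim/paq8px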
  • suryakandau@yahoo.co.id's Avatar
    21st October 2020, 15:28
    How do I submit my changes to the GitHub repo?
    2164 replies | 580594 view(s)
  • moisesmcardona's Avatar
    21st October 2020, 15:23
    moisesmcardona replied to a thread paq8px in Data Compression
    1. The best way to build paq8px at the moment is to use a MinGW installation; it's just a matter of running cmake and make. Because I like to generate data, I use the Media-Autobuild-Suite to build media tools, and I use that same suite to build paq8px, since it already includes the dependencies and always updates the components on each run. Note that I build paq8px manually regardless of the suite; this way I can keep MinGW up to date and use it for other purposes too.
    2. I've never had issues with CMD on Windows - unless you're trying to compress folders, which will not work unless you pass a text file listing the content you want to compress.
    3. I can try to add them. The objective of my tool is to keep it updated with newer releases for backward-compatibility purposes. However, given that these "fixes" builds are not in the Git repo, I'll need to do that manually. I'll try to test them later today.
    4. My builds are also 64-bit, compiled on an AMD CPU (which triggered that Intel/AMD incompatibility issue), and as of the latest versions I'm building paq8px both native and non-native, built with MinGW from my point #1.
    Please use Git to submit your changes to the repo and keep a history of changes. Beyond code management, it's great to have a historical repository containing every change made to paq8px, not to mention that it's easier to revert to older versions and compile them. My local repository has the hxim repo as the main upstream, as well as mpais's and Gotty's repos. That's more convenient when it's time to merge them into the main repo and test/build them.
    2164 replies | 580594 view(s)
  • suryakandau@yahoo.co.id's Avatar
    21st October 2020, 15:08
    ​what is the difference between lcp and advPred ?
    2164 replies | 580594 view(s)
  • Gotty's Avatar
    21st October 2020, 09:26
    Gotty replied to a thread paq8px in Data Compression
    We have a git repository. You will always find the up-to-date source there. The official "master" branch with the current commit: https://github.com/hxim/paq8px/pull/149
    2164 replies | 580594 view(s)
  • rarkyan's Avatar
    21st October 2020, 03:44
    "If it doesn't, then if you're using a lookup table, most of the data will return correct but the states that overlap will cause errors when you try to return data back to 1:1 states with decompression." I think it will not overlap because each hex on the data structure are mapped. From the beginning they are not overlap each other --------------------------- "If it is different from that, I am not sure how you are translating the hex bytes that overlap unless there is something I'm not seeing like a substitution algorithm that treats non-aligned bytes as search and replace matches similar to when you use a text editor to search and replace data and replace it with a smaller symbol or string?" Yes im trying to replace the sequence of hex pattern using a string, as short as possible. Talk about clock mechanism, let say the smallest units is in seconds, minute, hour, day, etc. 60s = 1min, second will always tick, but we dont call the next tick as 61 seconds but 1min 1s. Something similiar like that. First i must create a limit to the string itself using x digit to replace the pattern. Using 2 digit i can generate 256^2 = 65.536 different name. Using 3 digit i can generate 256^3 = 16.777.216, etc. Problem is, its hard for me to observe and trying on actual file. I know if the hex pattern sequence is < 256^n string (ID), then im pretty sure this is where the compression happen. But since i cant create the program to observe the sample, this explanation maybe lead to misunderstand. --------------------------- "I will wait to see your response but another thing I'm wondering is where the compression is happening because on the example BMP file with 384 bytes, it gets expanded to 2176 bytes as a text file containing hex symbols that are each 4 bits per symbol" The compression happen when the complete pattern are created and replaced by short string. In the example, my files gets expanded into 2176 bytes because they didnt meet the output file requirement. File too small, and output file write too many string. I need to check them at large file but i need programmers help. If anyone want to be a volunteer, or maybe want to help me create the program i am very grateful. ​Thanks
    253 replies | 98793 view(s)
  • suryakandau@yahoo.co.id's Avatar
    21st October 2020, 01:39
    I have downloaded it and there is no source code inside the package file. Please share it so we can learn together. Thank you.
    2164 replies | 580594 view(s)
  • suryakandau@yahoo.co.id's Avatar
    21st October 2020, 01:30
    Thank you for your input. Next time I will study the internal workings first and ask questions here.
    2164 replies | 580594 view(s)
  • Gotty's Avatar
    21st October 2020, 00:49
    Gotty replied to a thread paq8px in Data Compression
    1) I'm not sure what you mean. Do you try building paq8px? Or do you try running it? If so what command line parameters are you trying to use? 2) Please give more information. 3) I suppose you mean PAQCompress, right? Let's consider 194fix3-4-5 as "unofficial" releases sent for code review. For me personally the quality of these contributions is not there yet. 4) My builds are always general 64-bit Windows binaries. The executables in PAQCompress are compiled by Moisés. I believe they are also general 64-bit Windows builds (non-native).
    2164 replies | 580594 view(s)
  • Gotty's Avatar
    21st October 2020, 00:18
    Gotty replied to a thread paq8px in Data Compression
    - Fixed and further enhanced jpeg model
    2164 replies | 580594 view(s)
  • Gotty's Avatar
    21st October 2020, 00:15
    Gotty replied to a thread paq8px in Data Compression
    Code review:
    - These contexts are not protected against column values greater than 1023 (sketch below):
        m1->set(column, 1024);
        m.set(column, 1024);
      Fixed.
    - This looks incorrect:
        static constexpr int MIXERINPUTS = 2 * N + 6 + 128;
      Fixed.
    - Adding contexts or setting mixer contexts in alternate routes should follow the order (and number) in the main route. Fixed.
    - This looks very incorrect in IndirectMap, and you also forgot to adjust MIXERINPUTS from 2 to 3:
        m.add(((p1)) >> 9U);
      Reverted.
    - These 2048 values won't bring any change in compression - they should stay at 1024. Did you test? It looks like you don't really know the purpose of these numbers. Hm.
        m1->set(static_cast<uint32_t>(firstCol), 2048);
        m1->set(coef | (min(3, huffBits) << 8), 2048);
        m1->set(((hc & 0x1FEU) << 1) | min(3, ilog2(zu + zv)), 2048);
      Reverted.
    - These new contexts from fix3 seem to have no benefit (even a loss):
        cxt = hashxxk(++n, coef, advPred / 12 + (runPred << 8U), sSum2 >> 6U, prevCoef / 72);
        cxt = hashxxk(++n, coef, advPred / 12 + (runPred << 8U), sSum2 >> 6U, prevCoef / 72);
      Reverted.
    - These new contexts from fix5 seem to have no real benefit:
        cxt = hash(hc2, advPred / 13, prevCoef / 11, static_cast<int>(zu + zv < 4));
        cxt = hash(hc2, coef, advPred / 12 + (runPred << 8U), sSum2 >> 6U, prevCoef / 72);
      Reverted.
    - Your changes here cause a noticeable loss for MJPEG files. I suppose you didn't test with MJPEG files at all?
        MJPEGMap.set(hash(mcuPos & 63, column, row, hc));
      Reverted.
    - You defined apm12 in fix5 and did not use it. Could you please "finalize" your code (clean it up) next time before posting?
    Is there a particular reason why...
    - ...you implemented a new hash function for the 2 new contexts in fix3? I've just tested: using the usual hash function yields better compression (of course only for larger files - for smaller files there is a small fluctuation).
    - ...you changed the scaling of the mixer contexts, m1->add(st) to m1->add(st >> 2U) and m.add(st >> 1U) to m.add(st >> 2U)? If I revert your changes (which I did), compression is noticeably better.
    - ...you removed these contexts: m.add((pr >> 2U) - 511);? Re-adding them improves compression, so I re-added them.
    Could you please always adjust your formatting to the code style of the existing code?
    Do you test your changes with multiple files (small to large, mjpeg, jpeg with thumbnails, etc.)?
    Some of your changes brought a noticeable compression gain, so thank you! Having that long chain of apms is probably unintentional on your side, but it works rather well. After testing each of your changes and cleaning the code, I improved the jpeg model a little bit further and published it as v194. Nevertheless, it looks like you are changing contexts ("tweaking") without actually knowing how things work. Please study the internal workings before trying more improvements. If you are uncertain about what is what, please ask your questions here.
    2164 replies | 580594 view(s)
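    For reference, one common way such a mixer-context overflow can be guarded - a sketch assuming Mixer::set(ctx, range) expects ctx < range; the helper is hypothetical and not necessarily the fix applied in v194:
        #include <algorithm>
        #include <cstdint>

        // Clamp a context that can exceed its declared range before passing it on.
        inline uint32_t clampedMixerContext(uint32_t ctx, uint32_t range) {
            return std::min(ctx, range - 1);
        }
        // usage: m1->set(clampedMixerContext(column, 1024), 1024);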
  • fcorbelli's Avatar
    20th October 2020, 17:34
    fcorbelli replied to a thread zpaq updates in Data Compression
    Here are the current versions of zpaqfranz and zpaqlist. In zpaq, SetConsoleCP(65001); SetConsoleOutputCP(65001); enable the UTF-8 codepage directly in the caller's shell. A new switch -pakka gives different output. In zpaqlist there is a -pakka ("verbose") switch and a -out filename.txt switch that redirects the output. Here is version 32.2 of PAKKA (alpha stage, half-Italian interface): http://www.francocorbelli.it/pakka.exe It's possible to use Ctrl+wheel, Shift+wheel and Ctrl+Shift+wheel to change the font. Needs more debugging.
    2568 replies | 1111184 view(s)
  • suryakandau@yahoo.co.id's Avatar
    20th October 2020, 16:28
    Paq8px193fix5 - slightly improved jpeg recompression. The binary and source code are inside. Please test it for errors. Thank you.
    2164 replies | 580594 view(s)
  • Raphael Canut's Avatar
    20th October 2020, 01:08
    Hello, just a quick message: I have fixed a bug in the last version of NHW - there could be a segmentation fault for some images. Sorry for the error. Correction and update on my demo page: http://nhwcodec.blogspot.com/ Otherwise, very quickly: I find that this last version now has a good balance between neatness and precision; I find it visually pleasant and more and more interesting, and I am currently comparing it with AVIF. I'll try to improve this further, but for example I have a processing step that improves precision further yet starts to decrease neatness, and I find those results less interesting... Neatness is still the main advantage of NHW that I want to preserve. Also, I am (visually) comparing with avifenc at the -s 0 setting: slowest speed/best quality, but ultra-optimized (multi-threading, SIMD,...). AVIF -s 0 takes on average 15s to encode an image on my computer, whereas totally unoptimized NHW takes 30ms to encode the same image!... So extremely fast speed is also an advantage of NHW! Cheers, Raphael
    205 replies | 25221 view(s)
  • fcorbelli's Avatar
    19th October 2020, 22:54
    fcorbelli replied to a thread zpaq updates in Data Compression
    My current bottleneck is the output stage, the fprintf, with or without text buffering. I need to do something different, like sprintf into a giant RAM buffer that is written out in one go (see the sketch below), and output while taking data, instead of the current approach (first decode, then output). A very big job (I do not much like the zpaq source code; I would have to rewrite it myself, but that needs too much time). Even without this, my zpaqlist is two times faster than anything else (on my testbed). Tomorrow I will post the source, and my fix for zpaq to support direct UTF-8 extraction without supplementary software (running on Windows). Now my little PAKKA (alpha stage) runs zpaqlist and zpaqfranz in a background thread, so the Delphi GUI shows progress in real time, instead of running a batch file.
    2568 replies | 1111184 view(s)
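    A minimal sketch of the buffering idea described above - build the whole listing in RAM, then write it with a single call instead of per-line fprintf; names and sizes are illustrative, not from the zpaq sources:
        #include <cstdio>
        #include <string>

        int main() {
            std::string out;
            out.reserve(64u << 20);                        // pre-size a large buffer
            for (int i = 0; i < 1000000; i++) {
                char line[64];
                int n = snprintf(line, sizeof line, "file_%d.txt\n", i);
                out.append(line, size_t(n));
            }
            fwrite(out.data(), 1, out.size(), stdout);     // one output call
            return 0;
        }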
  • SpyFX's Avatar
    19th October 2020, 22:12
    SpyFX replied to a thread zpaq updates in Data Compression
    Hi fcorbelli. For zpaq list:
    1. To calculate the sizes of the files in the archive, you need to load all h blocks into memory; this is the first memory block that zpaq consumes - I don't know how to optimize it yet.
    2. To find out which files in the archive are duplicates, load the corresponding sets of indexes into memory and compare them by length, or element by element if the lengths of the sets are the same. At the moment my solution is as follows: I calculate sha1(set of indices) for each file and then sort the list of files by the key (size, hash) in descending order. (Sketch below.)
    2568 replies | 1111184 view(s)
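    A sketch of the duplicate-detection idea above: hash each file's fragment-index list and sort by (size, hash). An FNV-style hash stands in for sha1 to keep the example self-contained; the structures are illustrative, not zpaq's:
        #include <algorithm>
        #include <cstdint>
        #include <string>
        #include <vector>

        struct FileEntry {
            std::string name;
            uint64_t size = 0;
            std::vector<uint32_t> fragments;  // indices into the archive's h blocks
            uint64_t indexHash = 0;
        };

        // FNV-1a-style hash over the fragment-index list (stand-in for sha1).
        static uint64_t hashIndices(const std::vector<uint32_t> &v) {
            uint64_t h = 1469598103934665603ULL;
            for (uint32_t x : v) { h ^= x; h *= 1099511628211ULL; }
            return h;
        }

        // Files with equal (size, indexHash) reference the same fragment set -> duplicates.
        void sortForDuplicateScan(std::vector<FileEntry> &files) {
            for (auto &f : files) f.indexHash = hashIndices(f.fragments);
            std::sort(files.begin(), files.end(),
                      [](const FileEntry &a, const FileEntry &b) {
                          return a.size != b.size ? a.size > b.size
                                                  : a.indexHash > b.indexHash;
                      });
        }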
  • CompressMaster's Avatar
    19th October 2020, 21:31
    CompressMaster replied to a thread paq8px in Data Compression
    @moisesmcardona made a useful manual on how to do that.
    2164 replies | 580594 view(s)