Activity Stream

  • suryakandau@yahoo.co.id's Avatar
    Today, 22:49
    As I said before, it is not about programmers' ethics anymore; it is about bullying, and it has become a legal matter now. Does this community support bullying other people?
    2153 replies | 580005 view(s)
  • moisesmcardona's Avatar
    Today, 22:16
    Yes sir, attached are both native (AVX2) and standard (SSE2) builds, compiled with Multithreading support (-DMT)
    974 replies | 341835 view(s)
  • Darek's Avatar
    Today, 22:07
    Darek replied to a thread Paq8pxd dict in Data Compression
    If you can make a build with paq8pxd v90, I could test it. Question: are 4 corpora enough to test, or should I take other files?
    974 replies | 341835 view(s)
  • Gotty's Avatar
    Today, 21:47
    Gotty replied to a thread paq8px in Data Compression
    The file has the same content, only line endings are different. Paq8px doesn't care, so the files are essentially the same. You can use either of them.
    2153 replies | 580005 view(s)
  • hexagone's Avatar
    Today, 21:35
    hexagone replied to a thread paq8px in Data Compression
    If you want to be part of a community, such comments are not helpful. Given your well-documented history of misrepresentations (which led to your disqualification from the GDCC), you should know better than to question people's ethics and character.
    2153 replies | 580005 view(s)
  • Darek's Avatar
    Today, 20:21
    Darek replied to a thread paq8px in Data Compression
    Scores of my testset for paq8px_v185 - nice gain (150 bytes) for F.JPG file. :)
    2153 replies | 580005 view(s)
  • moisesmcardona's Avatar
    Today, 18:33
    Hey @kaitz, I went ahead and created a CMakeLists.txt file to allow compilation using CMake/Make. See my PR here: https://github.com/kaitz/paq8pxd/pull/12 :_yahoo2:
    974 replies | 341835 view(s)
  • anormal's Avatar
    Today, 17:43
    I don't think it is useful outside this special case. Maybe it could be scaled in a similar way, for encoding a predefined set of strings when you need to decompress one string at a time. But on modern processors/RAM you could use a much bigger dict, longer pointers into the dictionary, etc. As I said, you could check other projects such as femtozip or unishox, and Littlebit as posted before (I tested many, and usually shoco and smaz are suggested, but the ones I mentioned are much more advanced and mature). I am going to try a GA to see if there are more interesting strategies for packing the dictionary than the one I am using. If anyone could explain in a few words how the Zstd dictionary-building algorithms work, cover and fastcover (as I see in the sources), I'd be happy :D
    @danlock, yes you are right, the standard Sinclair Spectrum has 48KB (including screen RAM); I think the VIC-20 had 4KB? There are also many MSX systems with varied RAM sizes: 16, 32, 64, etc. The Spectrum 128 has pageable banks, etc...
    Edit: I've just remembered I was looking for something similar years ago, while I was building a chess PGN repo and tried to compress each game separately in the DB. I'll try with PGNs when I've tested more things, but with maybe a 1MB dict? It could be fun to explore this.
    6 replies | 344 view(s)
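The static-dictionary schemes discussed above (smaz/femtozip-style) can be sketched in a few lines. The dictionary contents and the escape byte below are toy assumptions, not taken from any of the mentioned projects:

```cpp
// Toy smaz-style short-string compressor: codes 0..N-1 stand for common
// fragments from a static dictionary; byte 0xFF escapes one literal byte.
#include <cassert>
#include <string>
#include <vector>

static const std::vector<std::string> kDict = {
    "the ", "ing", " a", "er", "tion"};  // illustrative dictionary

std::string compress(const std::string& in) {
    std::string out;
    size_t i = 0;
    while (i < in.size()) {
        bool matched = false;
        for (size_t d = 0; d < kDict.size(); ++d) {
            const std::string& frag = kDict[d];
            if (in.compare(i, frag.size(), frag) == 0) {
                out.push_back(static_cast<char>(d));  // emit dictionary code
                i += frag.size();
                matched = true;
                break;
            }
        }
        if (!matched) {  // no fragment matches: escape + literal byte
            out.push_back(static_cast<char>(0xFF));
            out.push_back(in[i++]);
        }
    }
    return out;
}

std::string decompress(const std::string& in) {
    std::string out;
    for (size_t i = 0; i < in.size(); ++i) {
        unsigned char c = in[i];
        if (c == 0xFF) out.push_back(in[++i]);  // literal
        else out += kDict[c];                   // dictionary fragment
    }
    return out;
}
```

A GA, as mentioned in the post, would then search over which fragments to put in `kDict` to minimize total encoded size on a training set.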
  • kaitz's Avatar
    Today, 15:33
    kaitz replied to a thread Paq8pxd dict in Data Compression
    Confirmed. I made a GitHub issue for it and will look into it in January. --- Also uploaded paq8pxd_v90, if someone wants to test. Can't upload large files now, so source only.
    974 replies | 341835 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Today, 15:15
    This is not about programmers' ethics anymore; it is a legal matter now. If the GDCC chooses you as the winner, I think that is not a wise decision.
    2153 replies | 580005 view(s)
  • Darek's Avatar
    Today, 13:29
    Darek replied to a thread paq8px in Data Compression
    Scores of paq8px_v194 for my testset - there are no big changes (especially on my JPEG file too :( )
    2153 replies | 580005 view(s)
  • Darek's Avatar
    Today, 13:26
    Darek replied to a thread paq8px in Data Compression
    The length of the english.exp file is different (shorter) than in the latest release. You need to swap this file for the newest one. However, from tests on my files and Silesia, there is no big impact from changing this file.
    2153 replies | 580005 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Today, 12:42
    I smell a bullying effort here. It is not good for this forum to keep that up. I know your EMMA compressor is number one in GDCC image compression, but your character is not good.
    2153 replies | 580005 view(s)
  • Gotty's Avatar
    Today, 12:12
    Gotty replied to a thread paq8px in Data Compression
    I've just come up with a name for this approach: "Random Forest" :D Yes, it's very clear he has no understanding.
    2153 replies | 580005 view(s)
  • mpais's Avatar
    Today, 11:09
    mpais replied to a thread paq8px in Data Compression
    Seems I'm late to the party, as usual.
    JpegModel.cpp, line 699: cxt = hash(++n, hc, advPred / 13, prevCoef2 / 20, prevCoefRs); advPred is
    And I'm guessing no one is testing his fpaq8 fork on 8-bit audio files. Almost 50% slower for about 0.22% to 0.28% improvement over 193fix2. 17 extra contexts and 13 extra daisy-chained SSE stages, from a quick glance. Don't get me wrong, it's a valid approach, even if it's not very efficient, and the JPEG model is one of the lowest-complexity models in paq8, so there's still some low-hanging fruit; it's probably a good choice for someone wanting to start helping development, as it's one of the easiest to improve. And I don't really have the time right now to rewrite this model, so I can't really put my money where my mouth is; hence you guys are free to take my opinion as you please.
    There is, however, the history of blatant disregard for proper attribution from Surya (and I'm not entirely convinced he and bwt are different users, but that's a different story), which it seems has even gotten him banned from the GDCC. I'll be the first to admit my coding skills are mediocre at best, and I usually make dumb mistakes that you so kindly fix :D, but this is just throwing stuff at the wall and seeing what sticks, without even understanding what one is doing. I get it, it's easy to just learn how to compile paq8, then tweak some numbers or add "hashes" (as he puts it) with different combinations of previous contexts, run it and see what effect it had. Oh, yeah, and strip the interface, change the program name, release the binary only and make wild claims...
    Surya, if you have changed your ways and really want to help, I welcome your efforts and I'm sure everyone here will help you learn. Tempting as it may be to just brute-force compression improvements, I suggest you first learn and code some simpler compression methods to hone your skills and get a better understanding of the field, instead of jumping head first into such a complex codebase.
    2153 replies | 580005 view(s)
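The contexts mpais quotes are formed by hashing a few quantized features together; dividing a predictor like advPred by 13 buckets nearby values into the same context. A minimal sketch of the idea (the combiner and the quantizers below are illustrative assumptions, not the actual paq8px `hash()`):

```cpp
// Sketch of paq8-style hashed contexts: quantize a few features, then
// mix them through a multiplicative hash combiner.
#include <cassert>
#include <cstdint>

// Simple golden-ratio combiner (an assumption; paq8px uses its own hash).
uint64_t combine(uint64_t h, uint64_t v) {
    return (h + v + 1) * 0x9E3779B97F4A7C15ULL;
}

uint64_t hashCtx(uint64_t a, uint64_t b, uint64_t c) {
    uint64_t h = 0;
    h = combine(h, a);
    h = combine(h, b);
    h = combine(h, c);
    return h;
}

// Illustrative analogue of
//   cxt = hash(++n, hc, advPred / 13, prevCoef2 / 20, prevCoefRs);
// where the divisions quantize predictors into coarse buckets.
uint64_t jpegContext(int n, uint64_t hc, int advPred, int prevCoef2,
                     int prevCoefRs) {
    return hashCtx(combine(static_cast<uint64_t>(n), hc),
                   static_cast<uint64_t>(advPred / 13),
                   combine(static_cast<uint64_t>(prevCoef2 / 20),
                           static_cast<uint64_t>(prevCoefRs)));
}
```

Because of the quantization, two states whose advPred values fall in the same bucket of width 13 map to the same context, which is what makes the statistics generalize.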
  • Gotty's Avatar
    Yesterday, 23:59
    Gotty replied to a thread paq8px in Data Compression
    Passed all my tests. Impressive results. I needed to make some cosmetic changes only. Well done, Surya! @all: You may test with the exe posted by Surya above. It's official ;-) Pushed the changes to my git and created a pull request to hxim.
    2153 replies | 580005 view(s)
  • Gotty's Avatar
    Yesterday, 23:48
    Gotty replied to a thread paq8px in Data Compression
    The difference in the english.exp file is due to different line endings (UNIX vs DOS). Paq8px does not care, so both are OK. On my local system it has DOS line endings (as in the original releases by mpais), and I attach releases with DOS line endings. It looks like on git some of the files were converted to UNIX line endings (like english.exp and most of the cpp files) and some stayed DOS (like english.dic and some of the hpp files). Strange, isn't it? Even stranger: when I clone or pull the repository with git, english.exp contains DOS line endings. So when Surya downloaded the source from git using the direct link, he got UNIX line endings, while my attachment contained DOS line endings. This is the source of the confusion. Once Surya uses "git clone" to grab the full contents of the repo, he will also have the same line endings as most of us. No harm done. It's safe to use the training files with any line endings.
    2153 replies | 580005 view(s)
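The claim above can be checked mechanically: two text files are "the same" here if they match after normalizing CRLF to LF. A small sketch (helper names are mine):

```cpp
// Compare two text buffers modulo line endings by dropping the '\r' of
// every CRLF pair, so a DOS and a UNIX copy of english.exp compare equal.
#include <cassert>
#include <string>

std::string normalizeEol(const std::string& s) {
    std::string out;
    for (size_t i = 0; i < s.size(); ++i) {
        if (s[i] == '\r' && i + 1 < s.size() && s[i + 1] == '\n')
            continue;  // skip the CR of a CRLF pair
        out.push_back(s[i]);
    }
    return out;
}

bool sameModuloEol(const std::string& a, const std::string& b) {
    return normalizeEol(a) == normalizeEol(b);
}
```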
  • danlock's Avatar
    Yesterday, 18:54
    Hmm... an 8-bit machine from the '80s with a total of 64K RAM would have less than 64K available for use.
    6 replies | 344 view(s)
  • Gotty's Avatar
    Yesterday, 17:18
    Gotty replied to a thread paq8px in Data Compression
    The next step would be to learn git. Unfortunately, a couple of forum posts here will not be enough to guide you through it. Sorry, git is a larger topic. But it is a must today, so you don't have much choice but to google, read, and learn. https://guides.github.com/activities/hello-world/
    2153 replies | 580005 view(s)
  • skal's Avatar
    Yesterday, 16:56
    Interesting... Is this happening only for qualities around q=100, or at lower ones too (q=60-80, for instance)? skal/
    7 replies | 989 view(s)
  • e8c's Avatar
    Yesterday, 14:49
    Algorithm Errors as Art: https://app.box.com/s/gtm698mi8ns8adv62i1s57wgcuj0lab1
    7 replies | 795 view(s)
  • Jyrki Alakuijala's Avatar
    Yesterday, 14:39
    Sanmayce's testing from 2.5 years ago in https://github.com/google/brotli/issues/642 shows that large-window Brotli competes in density with 7zip but decodes much faster. Brotli 11d29 is Pareto-optimal in decoding speed/density for every large-corpus test, sometimes improving by more than 10x on the previous entry on the Pareto front.
    200 replies | 129570 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 14:32
    I have created a GitHub account, so what is the next step?
    2153 replies | 580005 view(s)
  • moisesmcardona's Avatar
    Yesterday, 13:39
    moisesmcardona replied to a thread paq8px in Data Compression
    Please practice committing your changes with Git. Thanks.
    2153 replies | 580005 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 13:26
    No, I have changed the JPEG model only. Why?
    2153 replies | 580005 view(s)
  • LucaBiondi's Avatar
    Yesterday, 13:24
    LucaBiondi replied to a thread paq8px in Data Compression
    Have you changed the english.exp file? Thank you Luca
    2153 replies | 580005 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 03:06
    The source code and binary file are inside the package.
    2153 replies | 580005 view(s)
  • Nania Francesco's Avatar
    22nd October 2020, 22:47
    Due to an error I encountered, I am retesting the simple compressors in "iso" format. I am currently continuing with all the compressors already tested (ZSTD, LAZY, etc.). I'm sorry! I hope to republish soon!
    200 replies | 129570 view(s)
  • a902cd23's Avatar
    22nd October 2020, 15:00
    a902cd23 replied to a thread WinRAR in Data Compression
    WinRAR - What's new in the latest version
    Version 6.0 beta 1
    1. "Ignore" and "Ignore All" options are added to the read error prompt. "Ignore" allows processing to continue with only the already-read part of a file, and "Ignore All" does so for all future read errors. For example, if you archive a file, a portion of which is locked by another process, and "Ignore" is selected in the read error prompt, only the part of the file preceding the unreadable region will be saved into the archive. This can help avoid interrupting lengthy archiving operations, though be aware that files archived with "Ignore" are incomplete. If switch -y is specified, "Ignore" is applied to all files by default. The previously available "Retry" and "Quit" options are still present in the read error prompt as well.
    2. Exit code 12 is returned in command line mode in case of read errors. This code is returned for all options in the read error prompt, including the newly introduced "Ignore" option. Previously the more common fatal error code 2 was returned for read errors.
    3. If several archives are selected, the "Extract archives to" option group in the "Options" page of the extraction dialog can be used to place extracted files into a specified destination folder, into separate subfolders of the destination folder, into separate subfolders of the archive folders, or directly into the archive folders. It replaces the "Extract archives to subfolders" option and is available only if multiple archives are selected.
    4. The new -ad2 switch places extracted files directly into the archive's own folder. Unlike -ad1, it does not create a separate subfolder for each unpacked archive.
    5. The "Additional switches" option in the "Options" page of the archiving and extraction dialogs allows WinRAR command line switches to be specified. It might be useful if there is no option in the WinRAR graphical interface matching a switch. Use this feature only if you are familiar with WinRAR command line syntax and clearly understand what the specified switches are intended for.
    6. The compression parameters of the "Benchmark" command are changed to a 32 MB dictionary and "Normal" method. They match the RAR5 default mode and are more suitable for estimating the typical performance of recent WinRAR versions than the former 4 MB "Best", intended for the RAR4 format. The latest "Benchmark" results cannot be compared with previous versions directly. The new parameter set produces different values, likely lower because of the eight times larger dictionary size.
    7. When unpacking a part of the files from a solid volume set, WinRAR attempts to skip volumes at the beginning and start extraction from the volume closest to the specified file that has reset solid statistics. By default, WinRAR resets the solid statistics at the beginning of large enough solid volumes where possible. For such volumes, extracting a part of the files from the middle of the volume set can be faster now. It does not affect performance when all archived files are unpacked.
    8. Previously, WinRAR automatically resorted to extracting from the first volume when the user started extraction from a non-first volume and the first volume was available. Now WinRAR does so only if all volumes between the first and the specified one are also available.
    9. A warning is issued when closing WinRAR if one or more archived files had been modified by external apps but failed to be saved back to the archive because an external app still locks them. The warning includes the list of modified files and proposes either quitting immediately and losing the changes or returning to WinRAR and closing the editor app. Previous versions issued a similar warning while editing a file, but did not repeat it when quitting.
    10. The "Move to Recycle Bin" option in the "Delete archive" options group of the extraction dialog places deleted archives in the Recycle Bin instead of deleting them permanently.
    11. The new "Clear history..." command in the "Options" menu allows removing the names of recently opened archives in the "File" menu and clearing drop-down lists with previously entered values in dialogs. For example, these values include archive names in the archiving dialog and destination paths in the extraction dialog.
    12. The "File time" options in the "Advanced" part of the extraction dialog are now available for 7z archives. In addition to the modification time, WinRAR can set the creation and last access time when unpacking such archives.
    13. A ""New" submenu items" options group is added to the "Settings/Integration/Context menu items..." dialog. You can use these options to remove the "WinRAR archive" and "WinRAR ZIP archive" entries in the "New" submenu of the Windows context menu. The new state of these options is applied only after you press "OK" both in "Context menu items" and in its parent "Settings" dialog.
    14. <Max>, <Min> and <Hide> commands can be inserted before the program name in the SFX "Setup" command to run a program in a maximized, minimized or hidden window. For example: Setup=<Hide>setup.exe
    15. It is possible to specify an additional high resolution logo for the SFX module. If such a logo is present, the SFX module scales and displays it in high DPI Windows mode, providing better visual quality compared to resizing the standard logo. Use "High resolution SFX logo" in the "Advanced SFX options" dialog to define such a logo. In command line mode, add a second -iimg switch to set the high resolution logo. The recommended size of the high resolution logo PNG file is 186x604 pixels.
    16. If the archive currently opened in the WinRAR shell was deleted or moved by another program, WinRAR displays "Inaccessible" before the archive name in the window title. It also flashes the window caption and taskbar button.
    17. The "Total information" option in the "Report" dialog is renamed to "Headers and totals". It now also adds the headers of report columns in addition to total information about listed files and archives.
    18. If archive processing is started from the Windows context menu on a multiple monitor system, the WinRAR operation progress and dialogs use the monitor with the context menu. While basic multiple monitor support was present in the shell extension of previous versions for mouse-driven commands, it is now extended to operations initiated from the keyboard and to dropping files onto archives.
    19. The new -imon<number> switch allows selecting a monitor to display WinRAR operation progress and dialogs in command line mode. Use -imon1 for the primary and -imon2 for the secondary monitor. For example, "WinRAR x -imon2 arcname" will start extraction on the secondary monitor. It works only in command line mode and does not affect the interactive WinRAR graphical interface, nor console RAR.
    20. Switch -idn hides archived name output in archiving, extraction and some other commands in console RAR. Other messages and the total percentage are not affected. You can use this switch to reduce visual clutter and console output overhead when archiving or extracting a lot of small files. Minor visual artifacts, such as the percentage indicator overwriting the last few characters of error messages, are possible with -idn.
    21. The former "-im - show more information" switch is changed to "-idv - display verbose output" for consistency with console RAR -id message control options and to avoid a potential name conflict with the newer -imon switch. While WinRAR still recognizes both -im and -idv, -im support may be dropped in the future.
    22. An optional %arcname% variable may be added to a compression profile name. The variable will be replaced with the actual archive name. It can be convenient when used with the "Add to context menu" profile option. For example, you can create a ZIP compression profile and set its name to "Add to %arcname%" to display it with the actual ZIP archive name in the context menu.
    23. Ctrl+C and Ctrl+Ins keyboard shortcuts can be used in the "Diagnostic messages" window to copy its contents to the clipboard.
    24. More text is allowed in the tray icon hint before a lengthy text is truncated. Such text is now also truncated in the middle of the string, so both the command type and the completion percentage remain visible.
    25. In the case of a clean install, if previous version compression profiles are not present, the "Files to store without compression" field in newly created predefined compression profiles is set to: *.rar *.zip *.cab *.7z *.ace *.arj *.bz2 *.gz *.lha *.lzh *.taz *.tgz *.xz *.txz You can change this field and save a modified value to the compression profile later. Previous versions set this field to blank for a clean install.
    26. The destination path history in the extraction dialog treats paths like 'folder' and 'folder\' as the same path and displays only the 'folder' entry. Previously they occupied two entries in the history.
    27. The "Enable Itanium executable compression" GUI option and the -mci command line switch are removed. Optimized compression of Itanium executables is not supported anymore. WinRAR can still decompress already existing archives utilizing Itanium executable compression.
    28. Bugs fixed:
    a) "Lock", "Comment" and "Protect" commands could not be applied to several archives selected in the WinRAR file list at once;
    b) an SFX archive process did not terminate after completing extraction on Windows 10 if the archive comment included "Setup" and "SetupCode" commands, did not include a "TempMode" command, and the setup program was running for more than 8 minutes;
    c) compression profiles with a quote character in the profile name could not be invoked from the Explorer context menu.
    187 replies | 132963 view(s)
  • Lithium Flower's Avatar
    22nd October 2020, 12:32
    @Jyrki Alakuijala Sorry, my English is not good, and I'm sorry for replying so late. Thank you for your reply, I'm really grateful.
    1. Using the butteraugli metric to compare different encoders: I developed a multithreaded Python program that uses the butteraugli metric to compare different encoders and picks the smaller image file below a maxButteraugliScore. Should maxButteraugliScore be set to 1.6 or to 1.3?
    2. Some butteraugli questions: In my first post (*Reference 02, the ButteraugliScore reference list and guetzli quality.cc), are those score reference lists still valid for Butteraugli-master and Butteraugli-jpeg xl? https://github.com/google/guetzli/blob/b473cf61275991e2a937fe0402d28538b342d2f8/guetzli/quality.cc#L26 On some non-photographic images, the Butteraugli-jpeg xl score shows unexpected behavior: 1. at jpeg q96, q97, q98 Butteraugli-jpeg xl still reports a larger butteraugli score (1.8~2.0); 2. an RGBA32 PNG file with transparency gets a much larger butteraugli score (426.4349365234), behaving like the sample @cssignet posted. 3. If Butteraugli-master and Butteraugli-jpeg xl return different butteraugli scores, should I choose the Butteraugli-master score, because Butteraugli-master's XYB is more accurate? 4. Do the butteraugli-jpeg xl 3rd norm and 12 norm have a quality reference list, like the guetzli/quality.cc list and the *Reference 02 list in my first post?
    3. The jpeg xl -jpeg1 feature: I built sjpeg-master and tested some non-photographic images; I get different butteraugli scores at the same quality level, and -jpeg1 gets a great butteraugli score. https://github.com/webmproject/sjpeg I think jpeg xl -jpeg1 doesn't use sjpeg for conversion, only for output, and if the input image has transparency, jpeg xl -jpeg1 will keep the transparency, while sjpeg cannot output it successfully. I can't find more -jpeg1 details; could you provide them? -jpeg1 butteraugli score data sheet: https://docs.google.com/spreadsheets/d/1skRbwQ32Qpdyidx8UYLaOpeB8SCQ7Z9Xtsoiz39mol0/edit#gid=0
    4. The lossy translucency (alpha) feature: humans can't see translucency, and webp lossy has a compression factor; if I set the -alpha_q compression factor to 1 or 0, the image translucency becomes more lossy. This loss is invisible, but will lossy alpha cause some invisible bad side effect, or a format side effect? I am very curious about this; could you teach me about this feature?
    @cssignet Thank you for your reply, I'm really grateful. I'm sorry, I didn't make myself clear. I use the butteraugli metric to compare pingo near-lossless and pngfilter, and pick the smaller image file below a maxButteraugliScore. Thanks for your suggestion, pingo near-lossless is working very well, and thank you for developing pingo. :) About the larger butteraugli score, maybe this comment can explain it? I don't really understand it.
    7 replies | 989 view(s)
  • LucaBiondi's Avatar
    22nd October 2020, 10:50
    LucaBiondi replied to a thread paq8px in Data Compression
    Hello all, these are the results of my big testset: v193fix2 vs. v194. We have a big gain of about 150K in the JPEG section. We also have a gain of 22K on PDF files. Great job! Luca https://sqlserverperformace.blogspot.com
    2153 replies | 580005 view(s)
  • Jyrki Alakuijala's Avatar
    22nd October 2020, 01:29
    Out of curiosity: Is there a practical use for this or is it for fun with retro computing?
    6 replies | 344 view(s)
  • anormal's Avatar
    21st October 2020, 19:53
    @xezz you were right - similar ideas, and I learnt a lot. I also found there the paper about syllable encoding; they even tried and tested a genetic evolver for building a dictionary. It really surprised me: what I thought was a silly idea when I came up with it :D was really used and solved problems :D Thanks
    6 replies | 344 view(s)
  • suryakandau@yahoo.co.id's Avatar
    21st October 2020, 19:22
    I have implemented your idea (in apm, div lcp or adv from 16 to 14); it reduces the size further. Thank you.
    2153 replies | 580005 view(s)
  • Gotty's Avatar
    21st October 2020, 17:28
    Gotty replied to a thread paq8px in Data Compression
    @suryakandau@yahoo.co.id I would also highly encourage you to use the git repo. But since your fixes had issues, I would like to ask you to wait with sending pull requests to hxim until you can comfortably make fixes, tweaks and improvements without introducing bugs. You may fork my repo (https://github.com/GotthardtZ/paq8px) if you wish and make pull requests to me, so we can learn together and do code reviews. My repo is up to date with hxim.
    2153 replies | 580005 view(s)
  • kaitz's Avatar
    21st October 2020, 17:24
    kaitz replied to a thread paq8px in Data Compression
    On jpeg: in apm, div lcp with 16->14 or you lose info; combine and hash them. Maybe 3 APMs less. Previous errors also help - you can find it :) Use abs values in apm (/lcp, adv/); the apm step size can also be lower than 20 if hashed and more memory is used. Overall same usage, probably. In the m1 mixer lower layer, use update rate 2 or 1.5; lower the column count in the main mixer. mcupos in m1? On suryakandau's test.jpg the gain should be 8 KB.
    2153 replies | 580005 view(s)
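For readers following kaitz's APM tuning hints: an APM (adaptive probability map, also called an SSE stage) refines a probability using a per-context table of bins, and the "step size" he mentions corresponds to the update rate below. This is a simplified sketch, not the paq8px implementation (bin count, fixed-point scale, and rate are assumptions):

```cpp
// Minimal adaptive probability map: quantize the input probability into
// bins per context, interpolate between the two nearest bins, and nudge
// the chosen bins toward the observed bit on update.
#include <cassert>
#include <vector>

class APM {
    int bins;                // probability bins per context
    std::vector<int> t;      // 12-bit fixed-point probabilities
    int cx = 0;              // last (context * bins + bin) index
public:
    APM(int contexts, int binsPerCtx)
        : bins(binsPerCtx), t(contexts * binsPerCtx) {
        for (int i = 0; i < static_cast<int>(t.size()); ++i)
            t[i] = (i % bins) * 4096 / (bins - 1);  // identity mapping
    }
    // p in [0,4095] -> refined p; remembers the cell for update().
    int pp(int p, int ctx) {
        int idx = p * (bins - 1) / 4096;  // lower bin
        int w   = p * (bins - 1) % 4096;  // interpolation weight
        cx = ctx * bins + idx;
        return (t[cx] * (4096 - w) + t[cx + 1] * w) >> 12;
    }
    void update(int bit, int rate = 5) {  // rate ~ kaitz's "step size"
        int target = bit ? 4095 : 0;
        t[cx]     += (target - t[cx])     >> rate;
        t[cx + 1] += (target - t[cx + 1]) >> rate;
    }
};
```

A smaller `rate` adapts faster but is noisier, which is the trade-off behind lowering the step size when contexts are hashed and backed by more memory.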
  • moisesmcardona's Avatar
    21st October 2020, 15:46
    moisesmcardona replied to a thread paq8px in Data Compression
    You need to have a GitHub account and create a fork. You'll then need to clone your fork and add hxim's repo as the upstream. From there, you'll fetch upstream and merge with your own repo. Once your changes are done, commit and push them. Then you can create a Pull Request on GitHub to have your changes considered for merging into the main repo.
    2153 replies | 580005 view(s)
  • suryakandau@yahoo.co.id's Avatar
    21st October 2020, 15:28
    how to submit my changes to github repo ?
    2153 replies | 580005 view(s)
  • moisesmcardona's Avatar
    21st October 2020, 15:23
    moisesmcardona replied to a thread paq8px in Data Compression
    1. The best way to build paq8px at the moment is to use a MinGW installation. It's just a matter of running cmake and make. Because I like to generate data, I use the Media-Autobuild-Suite to build those media tools, and I use that same suite to build paq8px since it already includes the dependencies and always updates the components on each run. Note that I build paq8px manually regardless of the suite; this way I can keep MinGW up to date and use it for other purposes too. 2. I've never had issues with CMD on Windows, unless you're trying to compress folders, which will not work unless you make a text file listing the content you want to pass. 3. I can try to add them. The objective of my tool is to keep it updated with newer releases for backward-compatibility purposes. However, given that these "fixes" builds are not in the Git repo, I'll need to do that manually. I'll try to test them later today. 4. My builds are also 64-bit, compiled on an AMD CPU (which triggered that Intel/AMD incompatibility issue), and as of the latest versions I'm building paq8px both native and non-native, built on MinGW as per my point #1. Please use Git to submit your changes to the repo and keep a history of changes. Beyond code management, it's great to have a historical repository containing every change performed to paq8px. Not to mention, it's easier to revert to older versions and compile them. My local repository has the hxim repo as the main upstream, as well as mpais's and Gotty's repos. It's more convenient when it's time to merge them into the main repo and test/build them.
    2153 replies | 580005 view(s)
  • suryakandau@yahoo.co.id's Avatar
    21st October 2020, 15:08
    What is the difference between lcp and advPred?
    2153 replies | 580005 view(s)
  • Gotty's Avatar
    21st October 2020, 09:26
    Gotty replied to a thread paq8px in Data Compression
    We have a git repository. You will always find the up-to-date source there. The official "master" branch with the current commit: https://github.com/hxim/paq8px/pull/149
    2153 replies | 580005 view(s)
  • rarkyan's Avatar
    21st October 2020, 03:44
    "If it doesn't, then if you're using a lookup table, most of the data will return correct but the states that overlap will cause errors when you try to return data back to 1:1 states with decompression."
    I think it will not overlap, because each hex value in the data structure is mapped. From the beginning they do not overlap each other.
    ---------------------------
    "If it is different from that, I am not sure how you are translating the hex bytes that overlap unless there is something I'm not seeing like a substitution algorithm that treats non-aligned bytes as search and replace matches similar to when you use a text editor to search and replace data and replace it with a smaller symbol or string?"
    Yes, I'm trying to replace each sequence of hex patterns with a string that is as short as possible. Think of a clock mechanism: the smallest units are seconds, then minutes, hours, days, etc. 60 s = 1 min; the seconds always tick, but we don't call the next tick 61 seconds, we call it 1 min 1 s. Something similar to that. First I must set a limit on the string itself, using x digits to replace a pattern. With 2 digits I can generate 256^2 = 65,536 different names; with 3 digits I can generate 256^3 = 16,777,216; etc. The problem is that it's hard for me to observe and try this on an actual file. I know that if the number of hex pattern sequences is < 256^n string IDs, then I'm pretty sure this is where the compression happens. But since I can't create a program to observe the sample, this explanation may lead to misunderstanding.
    ---------------------------
    "I will wait to see your response but another thing I'm wondering is where the compression is happening because on the example BMP file with 384 bytes, it gets expanded to 2176 bytes as a text file containing hex symbols that are each 4 bits per symbol"
    The compression happens when the complete patterns are created and replaced by short strings. In the example, my file gets expanded to 2176 bytes because it didn't meet the output file requirement: the file is too small, and the output writes too many strings. I need to check this on a large file, but I need a programmer's help. If anyone wants to volunteer, or maybe help me create the program, I would be very grateful. Thanks
    253 replies | 98731 view(s)
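The counting argument in the post above can be made concrete: an n-byte ID can name at most 256^n distinct patterns, and replacement only saves space while the patterns are longer than their IDs (and the dictionary itself still has to be stored somewhere). A small sketch (function names are mine):

```cpp
// Arithmetic behind the "x-digit ID" idea: how many IDs an n-byte code
// provides, and how many bytes a substitution saves.
#include <cassert>
#include <cstdint>

// Number of distinct IDs representable in nBytes bytes: 256^nBytes.
uint64_t idSpace(int nBytes) {
    uint64_t r = 1;
    for (int i = 0; i < nBytes; ++i) r *= 256;
    return r;
}

// Bytes saved by replacing `count` occurrences of a `patLen`-byte pattern
// with an `idLen`-byte ID (dictionary storage ignored in this sketch).
int64_t savings(int64_t count, int patLen, int idLen) {
    return count * (patLen - idLen);
}
```

Note that `savings` goes negative when patLen < idLen, which is exactly what happened to the 384-byte BMP example: too few long repeats, too many emitted IDs.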
  • suryakandau@yahoo.co.id's Avatar
    21st October 2020, 01:39
    I have downloaded it and there is no source code inside the package file. Please share it so we can learn together. Thank you.
    2153 replies | 580005 view(s)
  • suryakandau@yahoo.co.id's Avatar
    21st October 2020, 01:30
    Thank you for your input. Next time I will study the internal workings first and ask questions here.
    2153 replies | 580005 view(s)
  • Gotty's Avatar
    21st October 2020, 00:49
    Gotty replied to a thread paq8px in Data Compression
    1) I'm not sure what you mean. Do you try building paq8px? Or do you try running it? If so what command line parameters are you trying to use? 2) Please give more information. 3) I suppose you mean PAQCompress, right? Let's consider 194fix3-4-5 as "unofficial" releases sent for code review. For me personally the quality of these contributions is not there yet. 4) My builds are always general 64-bit Windows binaries. The executables in PAQCompress are compiled by Moisés. I believe they are also general 64-bit Windows builds (non-native).
    2153 replies | 580005 view(s)
  • Gotty's Avatar
    21st October 2020, 00:18
    Gotty replied to a thread paq8px in Data Compression
    - Fixed and further enhanced jpeg model
    2153 replies | 580005 view(s)
  • Gotty's Avatar
    21st October 2020, 00:15
    Gotty replied to a thread paq8px in Data Compression
    Code review
    - These contexts are not protected against column values greater than 1023:
      m1->set(column, 1024); m.set(column, 1024);
      Fixed.
    - This looks incorrect: static constexpr int MIXERINPUTS = 2 * N + 6 + 128;
      Fixed.
    - Adding contexts or setting mixer contexts in alternate routes should follow the order (and number) in the main route. Fixed.
    - This looks very incorrect in IndirectMap, and you also forgot to adjust MIXERINPUTS from 2 to 3: m.add(((p1)) >> 9U);
      Reverted.
    - These 2048 values won't bring any change in compression - they should stay as 1024. Did you test? It looks like you don't really know the purpose of these numbers. Hm.
      m1->set(static_cast<uint32_t>(firstCol), 2048);
      m1->set(coef | (min(3, huffBits) << 8), 2048);
      m1->set(((hc & 0x1FEU) << 1) | min(3, ilog2(zu + zv)), 2048);
      Reverted.
    - These new contexts from fix3 seem to have no benefit (even a loss):
      cxt = hashxxk(++n, coef, advPred / 12 + (runPred << 8U), sSum2 >> 6U, prevCoef / 72);
      Reverted.
    - These new contexts from fix5 seem to have no real benefit:
      cxt = hash(hc2, advPred / 13, prevCoef / 11, static_cast<int>(zu + zv < 4));
      cxt = hash(hc2, coef, advPred / 12 + (runPred << 8U), sSum2 >> 6U, prevCoef / 72);
      Reverted.
    - Your changes here cause a noticeable loss for MJPEG files. I suppose you didn't test with MJPEG files at all? MJPEGMap.set(hash(mcuPos & 63, column, row, hc));
      Reverted.
    - You defined apm12 in fix5 and did not use it. Could you please "finalize" your code (clean it up) next time before posting?
    Is there a particular reason why...
    - ...you implemented a new hash function for the 2 new contexts in fix3? I've just tested: using the usual hash function yields better compression (of course only for larger files - for smaller files there is a small fluctuation).
    - ...you changed the scaling of the mixer inputs from m1->add(st) to m1->add(st >> 2U), and from m.add(st >> 1U) to m.add(st >> 2U)? If I revert your changes (which I did), compression is noticeably better.
    - ...you removed these contexts: m.add((pr >> 2U) - 511);? Re-adding them improves compression, so I re-added them.
    Could you please always adjust your formatting to the code style of the existing code? Do you test your changes with multiple files (small to large, MJPEG, JPEG with thumbnails, etc.)?
    Some of your changes brought a noticeable compression gain, so thank you! Having that long chain of APMs is probably unintentional on your side, but it works rather well. After testing each of your changes and cleaning the code, I improved the jpeg model a little bit further and published it as v194. Nevertheless, it looks like you are trying changes to contexts ("tweaking") without actually knowing how things work. Please study the internal workings before attempting more improvements. If you are uncertain what is what, please ask your questions here.
    2153 replies | 580005 view(s)
  • fcorbelli's Avatar
    20th October 2020, 17:34
    fcorbelli replied to a thread zpaq updates in Data Compression
    Here are the current versions of zpaqfranz and zpaqlist. In zpaq, SetConsoleCP(65001); SetConsoleOutputCP(65001); enables the UTF-8 codepage directly in the calling shell. There is a new switch, -pakka, for different output. In zpaqlist there is a -pakka ("verbose") switch and a -out filename.txt option for where the output goes. Here is version 32.2 of PAKKA (alpha stage, half-Italian interface): http://www.francocorbelli.it/pakka.exe It is possible to use Ctrl+wheel, Shift+wheel and Ctrl+Shift+wheel to change the font. It needs more debugging.
    2568 replies | 1110882 view(s)
  • suryakandau@yahoo.co.id's Avatar
    20th October 2020, 16:28
    Paq8px193fix5 - slightly improved jpeg recompression. The binary and source code are here. Please test whether there is an error. Thank you.
    2153 replies | 580005 view(s)
  • Raphael Canut's Avatar
    20th October 2020, 01:08
    Hello, just a quick message: I have fixed a bug in the last version of NHW, because there could be a segmentation fault for some images. Sorry for the error. The correction and update are on my demo page: http://nhwcodec.blogspot.com/ Otherwise, very briefly: I find that this last version now has a good balance between neatness and precision, which I find visually pleasant and more and more interesting; I am currently comparing it with AVIF. I'll try to improve this again, but, for example, I have a processing step that further improves precision but now starts to decrease neatness, and so I find the results less interesting... Neatness is still the main advantage of NHW that I want to preserve. Also, I am (visually) comparing with avifenc at the -s 0 setting: slowest speed/best quality, but ultra-optimized (multi-threading, SIMD, ...). AVIF at -s 0 then takes on average 15 s to encode an image on my computer, whereas totally unoptimized NHW takes 30 ms to encode the same image!... So extremely fast speed is also an advantage of NHW! Cheers, Raphael
    205 replies | 25149 view(s)
  • fcorbelli's Avatar
    19th October 2020, 22:54
    fcorbelli replied to a thread zpaq updates in Data Compression
    My current bottleneck is the output stage, the fprintf, with or without text buffering. I need to do something different, like sprintf into a giant RAM buffer to be written all at once, and to output while reading data, instead of like now (first decode, then output). A very big job (I do not like the ZPAQ source code very much; I would have to rewrite it myself, but that needs too much time). Even without this, my zpaqlist is two times faster than anything else (on my testbed). Tomorrow I will post the source, and my fix for zpaq to support direct UTF-8 extraction without supplementary software (running on Windows). For now my little PAKKA (alpha stage) runs zpaqlist and zpaqfranz in a background thread, so the Delphi GUI shows progress in real time instead of running a batch file.
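The single-big-buffer idea could look something like this sketch (an illustration only, not zpaqlist's actual code; buildListing is a made-up helper):

```cpp
#include <cassert>
#include <cstdio>
#include <string>
#include <vector>

// Format all lines into one big RAM buffer instead of calling fprintf
// once per line; the caller flushes the buffer with a single fwrite.
std::string buildListing(const std::vector<std::string>& names) {
    std::string buf;
    buf.reserve(names.size() * 64);  // one big reservation up front
    char line[512];
    for (size_t i = 0; i < names.size(); ++i) {
        int n = std::snprintf(line, sizeof line, "%zu %s\n", i, names[i].c_str());
        buf.append(line, static_cast<size_t>(n));
    }
    return buf;
}
```

A single std::fwrite(out.data(), 1, out.size(), stdout) then replaces thousands of per-line fprintf calls, which is usually where the time goes.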
    2568 replies | 1110882 view(s)
  • SpyFX's Avatar
    19th October 2020, 22:12
    SpyFX replied to a thread zpaq updates in Data Compression
    Hi fcorbelli. For zpaq list:
    1. To calculate the size of the files in the archive, you need to load all h blocks into memory; this is the first block of memory that zpaq consumes. I don't know how to optimize it yet.
    2. To detect that files in the archive are duplicates, load the corresponding sets of indexes into memory and compare them by length, or element by element if the lengths of the sets are the same. At the moment my solution is as follows: I calculate sha1(set of indices) for each file and then sort the list of files by the key (size, hash) in descending order.
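That duplicate-grouping step could be sketched like this (assumptions: the fragment-index sets are plain vectors, an FNV-1a digest stands in for the SHA-1 mentioned above, and groupDuplicates is a made-up name):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Digest a file's fragment-index set (FNV-1a here; SHA-1 in the real scheme).
uint64_t digest(const std::vector<uint32_t>& indices) {
    uint64_t h = 1469598103934665603ull;
    for (uint32_t v : indices) {
        h ^= v;
        h *= 1099511628211ull;
    }
    return h;
}

// Group files whose (size, digest) keys match; files sharing a key are
// duplicate candidates. Here the index-set length is used as the size proxy.
std::map<std::pair<uint64_t, uint64_t>, std::vector<std::string>>
groupDuplicates(const std::map<std::string, std::vector<uint32_t>>& files) {
    std::map<std::pair<uint64_t, uint64_t>, std::vector<std::string>> groups;
    for (const auto& [name, idx] : files) {
        uint64_t size = idx.size();
        groups[{size, digest(idx)}].push_back(name);
    }
    return groups;
}
```

Sorting the resulting keys in descending order then gives the (size, hash)-ordered list described above, without ever comparing two index sets element by element unless a hash collides.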
    2568 replies | 1110882 view(s)
  • CompressMaster's Avatar
    19th October 2020, 21:31
    CompressMaster replied to a thread paq8px in Data Compression
    @moisesmcardona made a useful manual on how to do that.
    2153 replies | 580005 view(s)
  • Jyrki Alakuijala's Avatar
    19th October 2020, 14:43
    In 2014, Zoltan Szabadka demanded auto-encoding as the basis of our team's image coding efforts, i.e., a development model where psychovisual decisions are fully separated from encoding decisions. Initially, I considered it a bad idea, but decided to play along with Zoltan's idea because he rarely demanded anything and he was quite convinced about this. Nowadays I consider this a great insight from him. In particular, it was more inclusive, as it allowed more mathematically (than artistically) oriented people to make significant contributions to image quality on the codec and format side. We tried out all the psychovisual metrics that we could get our hands on, and none of them worked for this purpose. As a result, I ended up building a new psychovisual metric, 'butteraugli', specifically engineered to work with compression. I tried 100 different things from color/perception science and kept the 15 of those that seemed to work. Our first milestone was 'guetzli', which used butteraugli to optimize normal jpegs. We got a lot of good feedback from this, and it looked like we were on the right track. In the same timeframe, Jon was developing what he calls autoencoding, which uses simulacra to decide the appropriate quality for an image. This is different from guetzli in that a single quality value is decided, while guetzli (and JPEG XL) make local decisions about the quality, guided by the distance metric. Deciding the suitable quality locally is slightly more density-efficient than trying to decide a quality that fits the whole image. Only in the last stages of the format development have we moved back into a phase where we approximate good psychovisual results with heuristics that don't include butteraugli. This allows us to have much faster encoding options. One round of butteraugli runs at about 1-3 megapixels/second, whereas the fastest encoding options of JPEG XL are more than 15x faster. The slower options still iterate with butteraugli.
    148 replies | 14061 view(s)
  • anormal's Avatar
    19th October 2020, 14:04
    Interesting... I missed it while diving into GitHub looking for more info about this idea. I wonder how many nice projects on GitHub I cannot find with their search. > Is it possible to get good compression ratios while using Huffman encoding? It would require a static Huffman tree, and the sum of the size of the tree and the encoded data must be competitive with other compression techniques. I was thinking the same when I started on this idea. But, excuse my ignorance: I know what a canonical Huffman code is, but I found no use for it in my specific problem, so I use a standard algorithm that computes the Huffman codes from a symbol frequency table. Isn't it true that Huffman always builds the minimal codes from the symbol frequencies, by default? I didn't know it was an NP problem, or that some algorithms could produce "better" (smaller total size, I suppose) codes? Anyway, thanks, I'll have a look at the sources; let's see if I can produce better Huffman codes :D
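For what it's worth, canonical Huffman does not change the code lengths (so the encoded data stays exactly the same size as with any optimal Huffman tree); its benefit is that the table can be transmitted as lengths only, shrinking the header. A minimal sketch of assigning canonical codes from precomputed lengths (canonicalCodes is a hypothetical helper, not taken from LittleBit):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// lengths[s] = code length of symbol s (0 = symbol unused).
// Returns codes[s], assigned in increasing numeric order sorted by
// (length, symbol) - the canonical convention, recoverable from lengths alone.
std::vector<uint32_t> canonicalCodes(const std::vector<uint8_t>& lengths) {
    std::vector<int> order;
    for (int s = 0; s < static_cast<int>(lengths.size()); ++s)
        if (lengths[s]) order.push_back(s);
    std::sort(order.begin(), order.end(), [&](int a, int b) {
        return lengths[a] != lengths[b] ? lengths[a] < lengths[b] : a < b;
    });
    std::vector<uint32_t> codes(lengths.size(), 0);
    uint32_t code = 0;
    uint8_t prevLen = 0;
    for (int s : order) {
        code <<= (lengths[s] - prevLen);  // widen the code as lengths grow
        codes[s] = code++;
        prevLen = lengths[s];
    }
    return codes;
}
```

With lengths {1, 2, 2} this yields the codes 0, 10, 11 in binary, the same sizes a standard Huffman construction would give, but the decoder only needs the three lengths to rebuild the whole table.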
    6 replies | 344 view(s)
  • xezz's Avatar
    19th October 2020, 10:55
    Maybe LittleBit solves some problems.
    6 replies | 344 view(s)
  • suryakandau@yahoo.co.id's Avatar
    18th October 2020, 19:53
    On the command line I just type: fp8sk30 -8 inputfile. This is the trade-off between speed and compression ratio... You can look at the source code and compare fp8sk29 with fp8sk30...
    66 replies | 5055 view(s)
  • fabiorug's Avatar
    18th October 2020, 19:09
    fabiorug replied to a thread Fp8sk in Data Compression
    C:\Users\User\Desktop. So I need the normal command lines for PAQ and FP8SK, for at least two files on the desktop path I wrote; I'm on Windows 7 (Italy). Basically, fp8sk30 uses more RAM and is slower than fp8sk29, so I need the options you used to benchmark fp8sk30 (the improved jpeg recompression), not the speed-optimized ones: 600 MB RAM, 26 seconds for 1.5-2 MB, and 2 kB larger than the previous version but much faster.
    66 replies | 5055 view(s)
  • suryakandau@yahoo.co.id's Avatar
    18th October 2020, 17:42
    I compiled it on Windows 10 64-bit by following Gotty's instructions in the thread above, and it works.
    2153 replies | 580005 view(s)
  • suryakandau@yahoo.co.id's Avatar
    18th October 2020, 17:15
    I am sorry, I don't understand your question because my English is bad. Could you simplify your question, please?
    66 replies | 5055 view(s)
  • suryakandau@yahoo.co.id's Avatar
    18th October 2020, 17:08
    paq8px193fix4 - improved jpeg recompression. The binary and source code are inside the package. Please check it by compressing a folder with multiple file types to see if there is any error. Thank you. paq8px193fix4 -8 on the jpeg file from the paq8px193fix3 thread gives 2198873 bytes.
    2153 replies | 580005 view(s)
  • fabiorug's Avatar
    18th October 2020, 16:50
    fabiorug replied to a thread Fp8sk in Data Compression
    The speed is much faster for a 2 MB jpg, while the size is only 2 kB larger. But if I want the normal speed, what should I use to get back those 17 kB you squeezed out in this build? Is anyone else using this program?
    66 replies | 5055 view(s)
  • fabiorug's Avatar
    18th October 2020, 16:47
    fabiorug replied to a thread paq8px in Data Compression
    On Windows 7 it says that it can't find the documents/builds folder. Working with cmd on Windows 7 is impossible. When will Moises Cardona include it in his program? For which architectures are the builds compiled and included in his program?
    2153 replies | 580005 view(s)
  • suryakandau@yahoo.co.id's Avatar
    18th October 2020, 15:48
    How do I compress a folder using paq8px?
    2153 replies | 580005 view(s)
  • DZgas's Avatar
    18th October 2020, 13:54
    DZgas replied to a thread JPEG XL vs. AVIF in Data Compression
    I used cjpegXL 0.0.1-c8ce59f8 (13 Oct) with only these options: --distance 7 --speed 9. I think it's good that "JPEG XL spends a lot of bits on low and middle frequencies, but then there is less left for high frequencies." This is best for archiving photos, because AVIF simply erases very large details.
    59 replies | 5365 view(s)
  • a902cd23's Avatar
    18th October 2020, 13:53
    a902cd23 replied to a thread Fp8sk in Data Compression
    This ZIP contains my 7zip folder and 89 jpg/png files from 1997/98. No errors detected in de/compression. Time not measured. Compare sizes against fp8 v6.
    Directory of Z:\1\test*
    2020-10-18 12:36  17 000 518  test.zip
    2020-10-18 11:05   9 952 986  test.zip.fp8
    2020-10-18 10:58   9 947 580  test.zip.fp8sk30
    Directory of Z:\2\test*
    2020-10-18 12:36  17 000 518  test.zip
    2020-10-18 11:05   9 831 678  test.zip.fp8
    2020-10-18 10:58   9 808 000  test.zip.fp8sk30
    Directory of Z:\3\test*
    2020-10-18 12:37  17 000 518  test.zip
    2020-10-18 11:04   9 759 017  test.zip.fp8
    2020-10-18 10:59   9 721 353  test.zip.fp8sk30
    Directory of Z:\4\test*
    2020-10-18 12:37  17 000 518  test.zip
    2020-10-18 11:04   9 719 050  test.zip.fp8
    2020-10-18 10:59   9 674 468  test.zip.fp8sk30
    Directory of Z:\5\test*
    2020-10-18 12:37  17 000 518  test.zip
    2020-10-18 11:04   9 694 758  test.zip.fp8
    2020-10-18 10:59   9 647 090  test.zip.fp8sk30
    Directory of Z:\6\test*
    2020-10-18 12:38  17 000 518  test.zip
    2020-10-18 11:03   9 678 955  test.zip.fp8
    2020-10-18 11:00   9 629 482  test.zip.fp8sk30
    Directory of Z:\7\test*
    2020-10-18 12:38  17 000 518  test.zip
    2020-10-18 11:03   9 672 426  test.zip.fp8
    2020-10-18 11:00   9 621 646  test.zip.fp8sk30
    Directory of Z:\8\test*
    2020-10-18 12:38  17 000 518  test.zip
    2020-10-18 11:03   9 670 563  test.zip.fp8
    2020-10-18 11:00   9 619 379  test.zip.fp8sk30
    66 replies | 5055 view(s)
  • suryakandau@yahoo.co.id's Avatar
    18th October 2020, 12:01
    I use the -8 option for both of them.
    2153 replies | 580005 view(s)
  • suryakandau@yahoo.co.id's Avatar
    18th October 2020, 10:57
    Paq8px_v193fix3 - slightly improved jpeg model. paq8px_v193fix2: 2202679 bytes; paq8px_v193fix3: 2201650 bytes. Here is the source code.
    2153 replies | 580005 view(s)
  • suryakandau@yahoo.co.id's Avatar
    18th October 2020, 02:49
    Maybe you could try Fp8sk, since it is the fastest in the paq series.
    200 replies | 129570 view(s)
  • Gonzalo's Avatar
    18th October 2020, 02:08
    Fair enough! Thanks. Make sure you test the latest versions, though... For example, precomp has seen a dramatic increase in speed over the last few years. Nowadays it's usually faster than Razor, while providing better compression if there is something to recompress. And as far as practical compressors go, FreeArc is probably better than 7-zip even now after years of abandonment (faster & denser archives thanks to rep filter and tta among other things).
    200 replies | 129570 view(s)
More Activity