Activity Stream

  • Gotty's Avatar
    Yesterday, 23:59
    Gotty replied to a thread paq8px in Data Compression
    Passed all my tests, impressive results, I needed to make some cosmetic changes only. Well done, Surya! Pushed the changes to my git and created a pull request to hxim.
    2143 replies | 579485 view(s)
  • Gotty's Avatar
    Yesterday, 23:48
    Gotty replied to a thread paq8px in Data Compression
    The difference in the english.exp file is due to different line endings (UNIX vs. DOS). Paq8px does not care, so both are OK. On my local system it has DOS line endings (as in the original releases by mpais), and I attach releases with DOS line endings. It looks like on git some of the files were converted to UNIX line endings (like english.exp and most of the cpp files) and some stayed DOS (like english.dic and some of the hpp files). Strange, isn't it? Even stranger: when I clone or pull the repository with git, english.exp contains DOS line endings. So when Surya downloaded the source from git using the direct link, he got UNIX line endings, while my attachment contained DOS line endings. This is the source of the confusion. As soon as Surya uses "git clone" to grab the full contents of the repo, he will have the same line endings as most of us. No harm done. It's safe to use the training files with any line endings.
    2143 replies | 579485 view(s)
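To illustrate why the line-ending difference above is harmless: a reader that strips a trailing '\r' before processing sees identical content for DOS and UNIX files. A minimal, generic sketch (not the actual paq8px dictionary loader):

```cpp
#include <fstream>
#include <string>
#include <vector>

// Read a text file line by line, treating CRLF and LF endings identically by
// dropping any trailing '\r' left over from DOS line endings.
std::vector<std::string> readLines(const std::string& path) {
  std::ifstream in(path, std::ios::binary);  // binary: we normalize ourselves
  std::vector<std::string> lines;
  std::string line;
  while (std::getline(in, line)) {
    if (!line.empty() && line.back() == '\r') line.pop_back();
    lines.push_back(line);
  }
  return lines;
}
```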
  • danlock's Avatar
    Yesterday, 18:54
    Hmm... an 8-bit machine from the '80s with a total of 64K RAM would have less than 64K available for use.
    5 replies | 305 view(s)
  • Gotty's Avatar
    Yesterday, 17:18
    Gotty replied to a thread paq8px in Data Compression
    The next step would be to learn git. Unfortunately, a couple of forum posts here will not be enough to guide you through it. Sorry, git is a larger topic. But it is a must today, so you don't have much choice but to google, read and learn. https://guides.github.com/activities/hello-world/
    2143 replies | 579485 view(s)
  • skal's Avatar
    Yesterday, 16:56
    Interesting... Is this happening only for qualities around q=100? Or at lower ones too? (q=60-80 for instance)... skal/ ​
    7 replies | 934 view(s)
  • e8c's Avatar
    Yesterday, 14:49
    Algorithm Errors as Art: https://app.box.com/s/gtm698mi8ns8adv62i1s57wgcuj0lab1
    7 replies | 766 view(s)
  • Jyrki Alakuijala's Avatar
    Yesterday, 14:39
    Sanmayce's testing from 2.5 years ago in https://github.com/google/brotli/issues/642 shows that large-window Brotli competes in density with 7zip but decodes much faster. Brotli 11d29 is on the Pareto front of decoding speed vs. density for every large-corpus test, sometimes improving on the previous Pareto-front entry by more than 10x.
    200 replies | 129462 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 14:32
    I have created a GitHub account, so what is the next step?
    2143 replies | 579485 view(s)
  • moisesmcardona's Avatar
    Yesterday, 13:39
    moisesmcardona replied to a thread paq8px in Data Compression
    Please practice committing your changes with Git. Thanks.
    2143 replies | 579485 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 13:26
    No, I have changed the jpeg model only. Why?
    2143 replies | 579485 view(s)
  • LucaBiondi's Avatar
    Yesterday, 13:24
    LucaBiondi replied to a thread paq8px in Data Compression
    Have you changed the english.exp file? Thank you Luca
    2143 replies | 579485 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 03:06
    The source code and binary file are inside the package.
    2143 replies | 579485 view(s)
  • Nania Francesco's Avatar
    22nd October 2020, 22:47
    Due to an error encountered, I am testing the simple compressors again in "iso" format. I am currently continuing with all the compressors already tested (ZSTD, LAZY, etc.). I'm sorry! I hope to republish soon!
    200 replies | 129462 view(s)
  • a902cd23's Avatar
    22nd October 2020, 15:00
    a902cd23 replied to a thread WinRAR in Data Compression
    WinRAR - What's new in the latest version
    Version 6.0 beta 1
    1. "Ignore" and "Ignore All" options are added to the read error prompt. "Ignore" allows continuing processing with the already read part of the file only, and "Ignore All" does it for all future read errors. For example, if you archive a file a portion of which is locked by another process, and "Ignore" is selected in the read error prompt, only the part of the file preceding the unreadable region will be saved into the archive. It can help to avoid interrupting lengthy archiving operations, though be aware that files archived with "Ignore" are incomplete. If switch -y is specified, "Ignore" is applied to all files by default. The previously available "Retry" and "Quit" options are still present in the read error prompt as well.
    2. Exit code 12 is returned in command line mode in case of read errors. This code is returned for all options in the read error prompt, including the newly introduced "Ignore" option. Previously the more common fatal error code 2 was returned for read errors.
    3. If several archives are selected, the "Extract archives to" option group on the "Options" page of the extraction dialog can be used to place extracted files into the specified destination folder, into separate subfolders in the destination folder, into separate subfolders in the archive folders, or directly into the archive folders. It replaces the "Extract archives to subfolders" option and is available only if multiple archives are selected.
    4. The new -ad2 switch places extracted files directly into the archive's own folder. Unlike -ad1, it does not create a separate subfolder for each unpacked archive.
    5. The "Additional switches" option on the "Options" page of the archiving and extraction dialogs allows specifying WinRAR command line switches. It might be useful if there is no option in the WinRAR graphical interface matching a switch. Use this feature only if you are familiar with WinRAR command line syntax and clearly understand what the specified switches are intended for.
    6. Compression parameters of the "Benchmark" command are changed to a 32 MB dictionary and the "Normal" method. They match the RAR5 default mode and are more suitable for estimating the typical performance of recent WinRAR versions than the former 4 MB "Best", which was intended for the RAR4 format. Latest "Benchmark" results cannot be compared with previous versions directly. The new parameter set produces different values, likely lower because of the eight times larger dictionary size.
    7. When unpacking a part of the files from a solid volume set, WinRAR attempts to skip volumes at the beginning and start extraction from the volume closest to the specified file that has reset solid statistics. By default WinRAR resets the solid statistics at the beginning of large enough solid volumes where possible. For such volumes, extracting a part of the files from the middle of the volume set can be faster now. It does not affect performance when all archived files are unpacked.
    8. Previously WinRAR automatically resorted to extracting from the first volume when the user started extraction from a non-first volume and the first volume was available. Now WinRAR does so only if all volumes between the first and the specified one are also available.
    9. A warning is issued when closing WinRAR if one or more archived files had been modified by external apps but failed to be saved back to the archive because an external app still locks them. Such a warning includes the list of modified files and proposes to quit immediately and lose changes or return to WinRAR and close the editor app. Previous versions issued a similar warning while editing a file, but did not repeat it when quitting.
    10. The "Move to Recycle Bin" option in the "Delete archive" options group of the extraction dialog places deleted archives in the Recycle Bin instead of deleting them permanently.
    11. The new "Clear history..." command in the "Options" menu allows removing the names of recently opened archives in the "File" menu and clearing the drop-down lists with previously entered values in dialogs. For example, these values include archive names in the archiving dialog and destination paths in the extraction dialog.
    12. The "File time" options in the "Advanced" part of the extraction dialog are now available for 7z archives. In addition to the modification time, WinRAR can set the creation and last access time when unpacking such archives.
    13. The ""New" submenu items" options group is added to the "Settings/Integration/Context menu items..." dialog. You can use these options to remove the "WinRAR archive" and "WinRAR ZIP archive" entries in the "New" submenu of the Windows context menu. The new state of these options is applied only after you press "OK" both in "Context menu items" and its parent "Settings" dialog.
    14. <Max>, <Min> and <Hide> commands can be inserted before the program name in the SFX "Setup" command to run a program in a maximized, minimized or hidden window. For example: Setup=<Hide>setup.exe
    15. It is possible to specify an additional high resolution logo for the SFX module. If such a logo is present, the SFX module scales and displays it in high DPI Windows mode, providing better visible quality compared to resizing the standard logo. Use "High resolution SFX logo" in the "Advanced SFX options" dialog to define such a logo. In command line mode, add a second -iimg switch to set the high resolution logo. The recommended size of the high resolution logo PNG file is 186x604 pixels.
    16. If the archive currently opened in the WinRAR shell was deleted or moved by another program, WinRAR displays "Inaccessible" before the archive name in the window title. It also flashes the window caption and taskbar button.
    17. The "Total information" option in the "Report" dialog is renamed to "Headers and totals". Now it also adds headers of report columns in addition to total information about listed files and archives.
    18. If archive processing is started from the Windows context menu on a multiple monitor system, WinRAR operation progress and dialogs use the monitor with the context menu. While basic multiple monitor support was present in previous versions of the shell extension for mouse driven commands, now it is extended to operations initiated from the keyboard and to dropping files onto archives.
    19. The new -imon<number> switch allows selecting a monitor to display WinRAR operation progress and dialogs in command line mode. Use -imon1 for the primary and -imon2 for the secondary monitor. For example, "WinRAR x -imon2 arcname" will start extraction on the secondary monitor. It works only in command line mode and does not affect the interactive WinRAR graphical interface, nor console RAR.
    20. Switch -idn hides the archived names output in archiving, extraction and some other commands in console RAR. Other messages and the total percentage are not affected. You can use this switch to reduce visual clutter and console output overhead when archiving or extracting a lot of small files. Minor visual artifacts, such as the percentage indicator overwriting a few last characters of error messages, are possible with -idn.
    21. The former "-im - show more information" switch is changed to "-idv - display verbose output" for consistency with console RAR -id message control options and to avoid a potential name conflict with the newer -imon switch. While WinRAR still recognizes both -im and -idv, -im support can be dropped in the future.
    22. It is allowed to add an optional %arcname% variable to the compression profile name. Such a variable will be replaced with the actual archive name. It might be convenient when used with the "Add to context menu" profile option. For example, you can create a ZIP compression profile and set its name to "Add to %arcname%" to display it with the actual ZIP archive name in the context menu.
    23. Ctrl+C and Ctrl+Ins keyboard shortcuts can be used in the "Diagnostic messages" window to copy contents to the clipboard.
    24. More text is allowed in the tray icon hint before a lengthy text is truncated. Also, such text is now truncated in the middle of the string, so both the command type and the completion percentage are still visible.
    25. In case of a clean install, if previous version compression profiles are not present, the "Files to store without compression" field in newly created predefined compression profiles is set to: *.rar *.zip *.cab *.7z *.ace *.arj *.bz2 *.gz *.lha *.lzh *.taz *.tgz *.xz *.txz You can change this field and save a modified value to the compression profile later. Previous versions set this field to blank for a clean install.
    26. The destination path history in the extraction dialog treats paths like 'folder' and 'folder\' as the same path and displays only the 'folder' entry. Previously they occupied two entries in the history.
    27. The "Enable Itanium executable compression" GUI option and the -mci command line switch are removed. Optimized compression of Itanium executables is not supported anymore. WinRAR can still decompress already existing archives utilizing Itanium executable compression.
    28. Bugs fixed: a) "Lock", "Comment" and "Protect" commands could not be applied to several archives selected in the WinRAR file list at once; b) the SFX archive process did not terminate after completing extraction in Windows 10 if the archive comment included "Setup" and "SetupCode" commands, did not include the "TempMode" command, and the setup program was running for more than 8 minutes; c) compression profiles with a quote character in the profile name could not be invoked from the Explorer context menu.
    187 replies | 132929 view(s)
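For scripted use of item 2 above (exit code 12 on read errors), a tiny illustration of checking the code from a calling program; the archive and file names are placeholders, and the exact rar command line should be verified against the WinRAR manual:

```cpp
#include <cstdio>
#include <cstdlib>

int main() {
  // -y answers prompts automatically; per the changelog, "Ignore" is then
  // applied to read errors and the command still reports them via exit code 12.
  int code = std::system("rar a -y backup.rar data\\*");
  if (code == 12)
    std::printf("archive created, but some files could be read only partially\n");
  else if (code != 0)
    std::printf("rar failed with exit code %d\n", code);
  return code;
}
```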
  • Lithium Flower's Avatar
    22nd October 2020, 12:32
    @Jyrki Alakuijala Sorry, my English is not good, and I'm sorry for replying so late. Thank you for your reply, I'm really grateful.
    1. Using the butteraugli metric to compare different encoders: I developed a multithreaded Python program that uses the butteraugli metric to compare different encoders and picks the smaller image file below a maxButteraugliScore threshold. Should maxButteraugliScore be set to 1.6 or to 1.3?
    2. Some butteraugli questions: In my first post (*Reference 02, the butteraugli score reference list and guetzli quality.cc), is that score reference list still valid for butteraugli-master and butteraugli-jpeg xl? https://github.com/google/guetzli/blob/b473cf61275991e2a937fe0402d28538b342d2f8/guetzli/quality.cc#L26 On some non-photographic images, butteraugli-jpeg xl shows unexpected behavior: 1) at jpeg q96, q97, q98 it still reports a large butteraugli score (1.8~2.0); 2) an rgba32 PNG with transparency gets a very large butteraugli score (426.4349365234), similar to the sample @cssignet posted. 3) If butteraugli-master and butteraugli-jpeg xl return different butteraugli scores, should I prefer the butteraugli-master score because its XYB is more accurate? 4) Do the butteraugli-jpeg xl 3rd norm and 12 norm have a quality reference list, like the guetzli/quality.cc list and the *Reference 02 list in my first post?
    3. The jpeg xl -jpeg1 feature: I built sjpeg-master and tested some non-photographic images; I get different butteraugli scores at the same quality level, and -jpeg1 gets a great butteraugli score. https://github.com/webmproject/sjpeg I think jpeg xl -jpeg1 doesn't use sjpeg to convert, only to output, and if the input image has transparency, jpeg xl -jpeg1 keeps it, while sjpeg can't output it successfully. I can't find more -jpeg1 details; could you provide them? -jpeg1 butteraugli score data sheet: https://docs.google.com/spreadsheets/d/1skRbwQ32Qpdyidx8UYLaOpeB8SCQ7Z9Xtsoiz39mol0/edit#gid=0
    4. The lossy translucency (alpha) feature: humans can't see translucency, and webp lossy has a compression factor; if I set the -alpha_q compression factor to 1 or 0, the image translucency becomes more lossy. This loss is invisible, but will lossy alpha cause some invisible bad side effects or format side effects? I'm very curious about this; could you teach me about this feature?
    @cssignet Thank you for your reply, I'm really grateful. I'm sorry, I didn't make myself clear: I use the butteraugli metric to compare pingo near-lossless and pngfilter, and choose the smaller image file below maxButteraugliScore. Thanks for your suggestion; pingo near-lossless is working very well, and thank you for developing pingo. :) About the larger butteraugli score, maybe this comment explains it? I don't really understand it.
    7 replies | 934 view(s)
  • LucaBiondi's Avatar
    22nd October 2020, 10:50
    LucaBiondi replied to a thread paq8px in Data Compression
    Hello all, these are the results of my big test set: v193fix2 vs. v194. We have a big gain of about 150K in the JPEG section. We also have a gain of 22K on PDF files. Great job! Luca https://sqlserverperformace.blogspot.com
    2143 replies | 579485 view(s)
  • Jyrki Alakuijala's Avatar
    22nd October 2020, 01:29
    Out of curiosity: Is there a practical use for this or is it for fun with retro computing?
    5 replies | 305 view(s)
  • anormal's Avatar
    21st October 2020, 19:53
    @xezz you were right, similar ideas, and I learnt a lot. I also found there the paper about syllable encoding; they even tried and tested a genetic evolver for building a dictionary. It really surprised me: what I thought was a silly idea when I came up with it was actually used and solved problems :D Thanks
    5 replies | 305 view(s)
  • suryakandau@yahoo.co.id's Avatar
    21st October 2020, 19:22
    I have implemented your idea (in apm, div lcp or adv from 16 to 14) and it reduces the size further. Thank you.
    2143 replies | 579485 view(s)
  • Gotty's Avatar
    21st October 2020, 17:28
    Gotty replied to a thread paq8px in Data Compression
    @suryakandau@yahoo.co.id I would also highly encourage you to use the git repo. But since your fixes had issues, I would like to ask you to wait with sending pull requests to hxim until you can comfortably make fixes, tweaks and improvements without introducing bugs. You may fork my repo (https://github.com/GotthardtZ/paq8px) if you wish and make pull requests to me, so we can learn together and do code reviews. My repo is up to date with hxim.
    2143 replies | 579485 view(s)
  • kaitz's Avatar
    21st October 2020, 17:24
    kaitz replied to a thread paq8px in Data Compression
    On jpeg: in apm, div lcp with 16->14 or you lose info; combine and hash them. Maybe 3 apms less. Previous errors also help. You can find it :) Use abs values in apm /lcp, adv/; apm stepsize can also be lower than 20 if hashed and more memory is used. Overall same usage, probably. In the m1 mixer lower layer use update rate 2 or 1.5; lower the column count in the main mixer. mcupos in m1? On suryakandau's test.jpg the gain should be 8 KB.
    2143 replies | 579485 view(s)
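To make the "combine and hash them" suggestion above concrete in a generic way: this is only an illustration of merging two quantized predictors into one hashed APM context instead of feeding two separate APMs, not kaitz's or paq8px's actual code (the divisor 14 comes from the post; the hash constants and table-size handling are my own):

```cpp
#include <cstdint>

// Two quantized predictors (e.g. lcp and advPred scaled down by 14, per the
// post) are merged into a single context index for one APM table.
// A cheap mix-hash keeps the combined index well spread.
static uint32_t combineContexts(int lcp, int advPred, uint32_t tableSize) {
  uint32_t a = static_cast<uint32_t>(lcp / 14);
  uint32_t b = static_cast<uint32_t>(advPred / 14);
  uint32_t h = (a * 0x9E3779B1u) ^ ((b + 0x85EBCA77u) * 0xC2B2AE3Du);
  return h % tableSize;  // index into the APM's context slots
}
```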
  • moisesmcardona's Avatar
    21st October 2020, 15:46
    moisesmcardona replied to a thread paq8px in Data Compression
    You need to have a GitHub account and create a fork. You'll then need to clone your fork and add hxim's repo as the upstream. From there, you'll fetch upstream and merge with your own repo. Once your changes are done, commit them and push them. Then you can create a pull request on GitHub to have your changes considered for merging into the main repo.
    2143 replies | 579485 view(s)
  • suryakandau@yahoo.co.id's Avatar
    21st October 2020, 15:28
    How do I submit my changes to the GitHub repo?
    2143 replies | 579485 view(s)
  • moisesmcardona's Avatar
    21st October 2020, 15:23
    moisesmcardona replied to a thread paq8px in Data Compression
    1. The best way to easily build paq8px at the moment is to use a MinGW installation; it's just a matter of running cmake and make. Because I like to generate data, I use the Media-Autobuild-Suite to build those media tools, and I use that same suite to build paq8px since it already includes the dependencies and always updates the components at each run. Note that I build paq8px manually regardless of the suite; this way I can keep MinGW up to date and use it for other purposes too. 2. I've never had issues with CMD on Windows, unless you're trying to compress folders, which will not work unless you create the text file with the content you want to pass. 3. I can try to add them. The objective of my tool is to keep it updated with newer releases for backward-compatibility purposes. However, given that these "fix" builds are not in the Git repo, I'll need to do that manually. I'll try to test them later today. 4. My builds are also 64-bit, compiled on an AMD CPU (which triggered that Intel/AMD incompatibility issue), and as of the latest versions I'm building paq8px both native and non-native, built with MinGW as in my point #1. Please use Git to submit your changes to the repo and keep a history of changes. Beyond code management, it's great to have a historical repository containing every change made to paq8px. Not to mention, it's easier to revert to older versions and compile them. My local repository has the hxim repo as the main upstream, as well as mpais's and Gotty's repos. It's more convenient when it's time to merge them into the main repo and test/build them.
    2143 replies | 579485 view(s)
  • suryakandau@yahoo.co.id's Avatar
    21st October 2020, 15:08
    What is the difference between lcp and advPred?
    2143 replies | 579485 view(s)
  • Gotty's Avatar
    21st October 2020, 09:26
    Gotty replied to a thread paq8px in Data Compression
    We have a git repository. You will always find the up-to-date source there. The official "master" branch with the current commit: https://github.com/hxim/paq8px/pull/149
    2143 replies | 579485 view(s)
  • rarkyan's Avatar
    21st October 2020, 03:44
    "If it doesn't, then if you're using a lookup table, most of the data will return correct but the states that overlap will cause errors when you try to return data back to 1:1 states with decompression." I think it will not overlap because each hex on the data structure are mapped. From the beginning they are not overlap each other --------------------------- "If it is different from that, I am not sure how you are translating the hex bytes that overlap unless there is something I'm not seeing like a substitution algorithm that treats non-aligned bytes as search and replace matches similar to when you use a text editor to search and replace data and replace it with a smaller symbol or string?" Yes im trying to replace the sequence of hex pattern using a string, as short as possible. Talk about clock mechanism, let say the smallest units is in seconds, minute, hour, day, etc. 60s = 1min, second will always tick, but we dont call the next tick as 61 seconds but 1min 1s. Something similiar like that. First i must create a limit to the string itself using x digit to replace the pattern. Using 2 digit i can generate 256^2 = 65.536 different name. Using 3 digit i can generate 256^3 = 16.777.216, etc. Problem is, its hard for me to observe and trying on actual file. I know if the hex pattern sequence is < 256^n string (ID), then im pretty sure this is where the compression happen. But since i cant create the program to observe the sample, this explanation maybe lead to misunderstand. --------------------------- "I will wait to see your response but another thing I'm wondering is where the compression is happening because on the example BMP file with 384 bytes, it gets expanded to 2176 bytes as a text file containing hex symbols that are each 4 bits per symbol" The compression happen when the complete pattern are created and replaced by short string. In the example, my files gets expanded into 2176 bytes because they didnt meet the output file requirement. File too small, and output file write too many string. I need to check them at large file but i need programmers help. If anyone want to be a volunteer, or maybe want to help me create the program i am very grateful. ​Thanks
    253 replies | 98709 view(s)
  • suryakandau@yahoo.co.id's Avatar
    21st October 2020, 01:39
    I have downloaded it and there is no source code inside the package file. Please share it so we can learn together. Thank you.
    2143 replies | 579485 view(s)
  • suryakandau@yahoo.co.id's Avatar
    21st October 2020, 01:30
    Thank you for your input. Next time I will study the internal workings first and ask some questions here.
    2143 replies | 579485 view(s)
  • Gotty's Avatar
    21st October 2020, 00:49
    Gotty replied to a thread paq8px in Data Compression
    1) I'm not sure what you mean. Do you try building paq8px? Or do you try running it? If so what command line parameters are you trying to use? 2) Please give more information. 3) I suppose you mean PAQCompress, right? Let's consider 194fix3-4-5 as "unofficial" releases sent for code review. For me personally the quality of these contributions is not there yet. 4) My builds are always general 64-bit Windows binaries. The executables in PAQCompress are compiled by Moisés. I believe they are also general 64-bit Windows builds (non-native).
    2143 replies | 579485 view(s)
  • Gotty's Avatar
    21st October 2020, 00:18
    Gotty replied to a thread paq8px in Data Compression
    - Fixed and further enhanced jpeg model
    2143 replies | 579485 view(s)
  • Gotty's Avatar
    21st October 2020, 00:15
    Gotty replied to a thread paq8px in Data Compression
    Code review:
    These contexts are not protected against column values greater than 1023: m1->set(column,1024); m.set(column,1024); Fixed.
    This looks incorrect: static constexpr int MIXERINPUTS = 2 * N + 6+128; Fixed.
    Adding contexts or setting mixer contexts in alternate routes should follow the order (and number) in the main route. Fixed.
    This looks very incorrect in IndirectMap, and you also forgot to adjust MIXERINPUTS from 2 to 3: m.add(((p1 )) >> 9U); Reverted.
    These 2048 values won't bring any change in compression; they should stay as 1024. Did you test? It looks like you don't really know the purpose of these numbers. Hm. m1->set(static_cast<uint32_t>(firstCol), 2048); m1->set(coef | (min(3, huffBits) << 8), 2048); m1->set(((hc & 0x1FEU) << 1) | min(3, ilog2(zu + zv)), 2048); Reverted.
    These new contexts from fix3 seem to have no benefit (even a loss): cxt = hashxxk(++n, coef, advPred / 12 + (runPred << 8U), sSum2 >> 6U, prevCoef / 72); cxt = hashxxk(++n, coef, advPred / 12 + (runPred << 8U), sSum2 >> 6U, prevCoef / 72); Reverted.
    These new contexts from fix5 seem to have no real benefit: cxt = hash(hc2, advPred / 13, prevCoef / 11, static_cast<int>(zu + zv < 4)); cxt = hash(hc2, coef, advPred / 12 + (runPred << 8U), sSum2 >> 6U, prevCoef / 72); Reverted.
    Your changes here cause a noticeable loss for MJPEG files. I suppose you didn't test with MJPEG files at all? MJPEGMap.set(hash(mcuPos & 63, column, row, hc)); Reverted.
    You defined apm12 in fix5 and did not use it. Could you please "finalize" your code (clean it up) next time before posting?
    Is there a particular reason why...
    ...you implemented a new hash function for the 2 new contexts in fix3? I've just tested: using the usual hash function yields better compression (of course only for larger files; for smaller files there is a small fluctuation).
    ...you changed the scaling of the mixer contexts: m1->add(st) to m1->add(st >> 2U), and m.add(st >> 1U) to m.add(st >> 2U)? If I revert your changes (which I did), compression is noticeably better.
    ...you removed these contexts: m.add((pr >> 2U) - 511);? Re-adding them improves compression, so I re-added them.
    Could you please always adjust your formatting to the code style of the existing code? Do you test your changes with multiple files (small to large, mjpeg, jpeg with thumbnails, etc.)? Some of your changes brought a noticeable compression gain, so thank you! Having that long chain of apms is probably unintentional on your side, but it works rather well. After testing each of your changes and cleaning up the code, I improved the jpeg model a little bit further and published it as v194. Nevertheless, it looks like you are trying to change contexts ("tweaking") without actually knowing how things work. Please study the internal workings before trying more improvements. If you are uncertain what is what, please ask your questions here.
    2143 replies | 579485 view(s)
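For readers following the review above: the first point refers to a mixer context selector whose declared range is 1024, so the selected value must stay in [0, 1023]. A minimal, generic illustration of that kind of guard (the Mixer type here is a stand-in, not the actual paq8px class):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Stand-in for a paq8px-style mixer context selector: the value passed to
// set() must be strictly less than the declared range, otherwise the mixer
// would index its weight sets out of bounds.
struct Mixer {
  void set(uint32_t ctx, uint32_t range) {
    assert(ctx < range);  // the invariant the review points out
    // ... select the weight set for this context ...
  }
};

// Guarded variant: clamp the context before passing it on.
void setColumnContext(Mixer& m, uint32_t column) {
  m.set(std::min<uint32_t>(column, 1023), 1024);  // never exceeds range - 1
}
```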
  • fcorbelli's Avatar
    20th October 2020, 17:34
    fcorbelli replied to a thread zpaq updates in Data Compression
    Here are the current versions of zpaqfranz and zpaqlist. In zpaq, SetConsoleCP(65001); SetConsoleOutputCP(65001); enables the UTF-8 code page directly in the caller's shell. A new switch -pakka gives different output. In zpaqlist there is a -pakka ("verbose") switch and a -out filename.txt where the output goes. Here is the 32.2 version of PAKKA (alpha stage, half-Italian interface): http://www.francocorbelli.it/pakka.exe It's possible to use Control+wheel, Shift+wheel and Control/Shift+wheel to change the font. Needs more debugging.
    2568 replies | 1110795 view(s)
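For context, the two Windows API calls mentioned above switch the attached console to the UTF-8 code page so that non-ASCII file names print correctly. A minimal sketch of how they are typically used (Windows-only; the surrounding program is mine, not the zpaq source):

```cpp
#include <cstdio>
#ifdef _WIN32
#include <windows.h>
#endif

int main() {
#ifdef _WIN32
  // Switch the console input/output code page to UTF-8 (65001) so that
  // UTF-8 encoded file names display correctly in the caller's shell.
  SetConsoleCP(CP_UTF8);        // same as SetConsoleCP(65001)
  SetConsoleOutputCP(CP_UTF8);
#endif
  std::printf("file names with non-ASCII characters now display correctly\n");
  return 0;
}
```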
  • suryakandau@yahoo.co.id's Avatar
    20th October 2020, 16:28
    Paq8px193fix5 - slightly improved jpeg recompression. The binary and source code are here. Please test it and report whether there is an error. Thank you.
    2143 replies | 579485 view(s)
  • Raphael Canut's Avatar
    20th October 2020, 01:08
    Hello, just a quick message: I have fixed a bug in the last version of NHW, because there could be a segmentation fault for some images. Sorry for the error. Correction and update on my demo page: http://nhwcodec.blogspot.com/ Otherwise, very quickly: I find that this last version now has a good balance between neatness and precision, and I find this visually pleasant and more and more interesting; I am currently comparing with AVIF. I'll try to improve this again, but for example I have a processing step that improves precision further but starts to decrease neatness, and then I find the results less interesting... Neatness is still the main advantage of NHW that I want to preserve. Also, I am (visually) comparing with avifenc with the -s 0 setting: slowest speed/best quality, but ultra-optimized (multi-threading, SIMD, ...). AVIF -s 0 then takes on average 15 s to encode an image on my computer, whereas totally unoptimized NHW takes 30 ms to encode the same image!... So extremely fast speed is also an advantage of NHW! Cheers, Raphael
    205 replies | 24970 view(s)
  • fcorbelli's Avatar
    19th October 2020, 22:54
    fcorbelli replied to a thread zpaq updates in Data Compression
    My current bottleneck is the output stage, the fprintf, with or without text buffering. I need to do something different, like 'sprintf' into a giant RAM buffer to be written out in one go, and output running while taking data, instead of like now (first decode, then output). A very big job (I do not like the ZPAQ source code very much; I would have to rewrite it myself, but that needs too much time). Even without this, my zpaqlist is two times faster than anything else (on my testbed). Tomorrow I will post the source, and my fix for zpaq to support direct UTF-8 extraction without supplementary software (running on Windows). Now my little PAKKA (alpha stage) runs zpaqlist and zpaqfranz in a background thread, so the Delphi GUI shows progress in real time, instead of running a batch file.
    2568 replies | 1110795 view(s)
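A minimal sketch of the "format into one big RAM buffer, write once" idea described above (generic C++, not the zpaqlist code; the record fields and buffer sizes are illustrative):

```cpp
#include <cstdio>
#include <string>
#include <vector>

struct Entry { std::string name; long long size; };

// Instead of one fprintf per line, append formatted lines to a single
// in-memory buffer and flush it with one fwrite at the end.
void writeListing(const std::vector<Entry>& entries, std::FILE* out) {
  std::string buf;
  buf.reserve(entries.size() * 64);  // rough guess to avoid reallocations
  char line[512];
  for (const Entry& e : entries) {
    int n = std::snprintf(line, sizeof(line), "%12lld %s\n", e.size, e.name.c_str());
    if (n > 0) buf.append(line, static_cast<size_t>(n));
  }
  std::fwrite(buf.data(), 1, buf.size(), out);  // single write instead of many
}
```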
  • SpyFX's Avatar
    19th October 2020, 22:12
    SpyFX replied to a thread zpaq updates in Data Compression
    Hi fcorbelli. For zpaq list: 1) to calculate the size of the files in the archive, you need to load all h blocks into memory; this is the first memory block that zpaq consumes, and I don't know how to optimize it yet. 2) To understand which files in the archive are duplicates, you load the corresponding sets of indexes into memory and compare them by length, or element by element if the lengths of the sets are the same. At the moment my solution is as follows: I calculate sha1(set of indices) for each file and then sort the list of files by the key (size, hash) in descending order.
    2568 replies | 1110795 view(s)
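A small sketch of the duplicate-detection approach described above, assuming each file is represented by its list of fragment indexes (the hash here is a simple FNV-1a stand-in for the SHA-1 the post mentions):

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <tuple>
#include <vector>

struct FileEntry {
  std::string name;
  uint64_t size = 0;
  std::vector<uint32_t> fragments;  // indexes of content fragments in the archive
  uint64_t fragHash = 0;            // hash of the fragment-index list
};

// FNV-1a over the fragment indexes; stands in for sha1(set of indices).
static uint64_t hashFragments(const std::vector<uint32_t>& frags) {
  uint64_t h = 1469598103934665603ull;
  for (uint32_t f : frags)
    for (int i = 0; i < 4; ++i) { h ^= (f >> (8 * i)) & 0xFF; h *= 1099511628211ull; }
  return h;
}

// Sort by (size, hash) descending; files with identical fragment lists end up
// adjacent, so one linear pass can group the duplicates.
void sortForDupDetection(std::vector<FileEntry>& files) {
  for (FileEntry& f : files) f.fragHash = hashFragments(f.fragments);
  std::sort(files.begin(), files.end(), [](const FileEntry& a, const FileEntry& b) {
    return std::tie(a.size, a.fragHash) > std::tie(b.size, b.fragHash);
  });
}
```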
  • CompressMaster's Avatar
    19th October 2020, 21:31
    CompressMaster replied to a thread paq8px in Data Compression
    @moisesmcardona made a useful manual on how to do that.
    2143 replies | 579485 view(s)
  • Jyrki Alakuijala's Avatar
    19th October 2020, 14:43
    In 2014, Zoltan Szabadka demanded auto-encoding as a basis of our team's image coding efforts, i.e., a development model where psychovisual decisions are fully separated from encoding decisions. Initially, I considered it a bad idea, but decided to play along with Zoltan's idea because he rarely demanded anything and he was quite convinced about this. Nowadays I consider this a great insight from him. Particularly, it was more inclusive, as it allowed more mathematically (than artistically) oriented people to make significant contributions to image quality on the codec and format side. We tried out all psychovisual metrics that we could get our hands on, and none of them worked for this purpose. As a result of this I ended up building a new psychovisual metric, 'butteraugli', specifically engineered to work with compression. I tried 100 different things from color/perception science and kept 15 of those that seemed to work. Our first milestone was 'guetzli', which used butteraugli to optimize normal jpegs. We got a lot of good feedback from this, and it looked like we were on the right track. In the same timeframe Jon was developing what he calls autoencoding, which uses simulacra to decide the appropriate quality for an image. This is different from guetzli in that a single quality value is decided, while guetzli (and JPEG XL) make local decisions about the quality guided by the distance metric. Deciding on the suitable quality locally is slightly more density efficient than trying to decide a quality that fits the whole image. Only in the last stages of the format development have we moved back into a phase where we approximate good psychovisual results with heuristics that don't include butteraugli. This allows us to have much faster encoding options. One round of butteraugli is about 1-3 megapixels/second, whereas the fastest encoding options of JPEG XL are more than 15x faster. The slower options still iterate with butteraugli.
    148 replies | 13997 view(s)
  • anormal's Avatar
    19th October 2020, 14:04
    Interesting... I missed it while digging through GitHub looking for more info about this idea. I wonder how many nice projects on GitHub I cannot find with their search. > Is it possible to get good compression ratios while using Huffman encoding? It would require a static Huffman tree, and the sum of the size of the tree and the encoded data must be competitive with other compression techniques. I was thinking the same when I started this idea. But, excuse my ignorance: I know what a canonical Huffman code is, but found no use for it in my specific problem, so I use a standard algorithm that computes the Huffman codes from the symbol frequency table. Isn't it true that Huffman always builds minimal codes from the symbol frequencies, by default? I didn't know it was an NP problem, or that some algorithms could produce "better" (smaller total size, I suppose) codes? Anyway, thanks, I'll have a look at the sources; let's see if I can produce better Huffman codes :D
    5 replies | 305 view(s)
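On the canonical Huffman point raised above: Huffman's construction is already optimal among prefix codes for a given frequency table; the canonical form only re-assigns code values so that codes of the same length are consecutive, which means the decoder needs just the code lengths (handy when the tree itself must fit in a few hundred bytes). A small sketch of deriving canonical codes from code lengths (the length table is a made-up example):

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Given per-symbol code lengths (as produced by any Huffman construction),
// assign canonical codes: shorter codes first, ties broken by symbol order.
std::vector<uint32_t> canonicalCodes(const std::vector<uint8_t>& lengths) {
  uint8_t maxLen = 0;
  for (uint8_t l : lengths) maxLen = std::max(maxLen, l);
  std::vector<uint32_t> codes(lengths.size(), 0);
  uint32_t code = 0;
  for (uint8_t len = 1; len <= maxLen; ++len) {
    for (size_t s = 0; s < lengths.size(); ++s)
      if (lengths[s] == len) codes[s] = code++;
    code <<= 1;  // continue numbering one bit longer for the next length
  }
  return codes;
}

int main() {
  // Hypothetical lengths for symbols 0..4 (they satisfy the Kraft inequality).
  std::vector<uint8_t> lens = {2, 2, 2, 3, 3};
  std::vector<uint32_t> codes = canonicalCodes(lens);
  for (size_t s = 0; s < lens.size(); ++s)
    std::printf("symbol %zu: %u bits, code %u\n", s, lens[s], codes[s]);
}
```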
  • xezz's Avatar
    19th October 2020, 10:55
    Maybe LittleBit solves some problems.
    5 replies | 305 view(s)
  • suryakandau@yahoo.co.id's Avatar
    18th October 2020, 19:53
    On the command line I just type: fp8sk30 -8 inputfile. This is the trade-off between speed and compression ratio... You can look at the source code and compare fp8sk29 and fp8sk30...
    66 replies | 4996 view(s)
  • fabiorug's Avatar
    18th October 2020, 19:09
    fabiorug replied to a thread Fp8sk in Data Compression
    C:\Users\User\Desktop So I need the normal command line for PAQ and FP8SK, for at least two files on the desktop path I wrote; I'm on Windows 7 (Italy). Basically in fp8sk30 you used more RAM for PAQ and slower speed than fp8sk29, so for the options I need the ones you used to benchmark fp8sk30 (improved jpeg recompression), not the normal ones where you optimized the speed: 600 MB RAM, 26 seconds for 1.5-2 MB and 2 KB larger size than the previous version, but much faster.
    66 replies | 4996 view(s)
  • suryakandau@yahoo.co.id's Avatar
    18th October 2020, 17:42
    I compiled it on Windows 10 64-bit using Gotty's instructions in the thread above, and it works.
    2143 replies | 579485 view(s)
  • suryakandau@yahoo.co.id's Avatar
    18th October 2020, 17:15
    I am sorry, I don't understand your question because my English is bad. Could you simplify your question, please?
    66 replies | 4996 view(s)
  • suryakandau@yahoo.co.id's Avatar
    18th October 2020, 17:08
    paq8px193fix4 - improved jpeg recompression. The binary and source code are inside the package. Please check it by compressing a folder with multiple file types and report any errors. Thank you. paq8px193fix4 -8 on the jpeg file from the paq8px193fix3 thread gives 2198873 bytes.
    2143 replies | 579485 view(s)
  • fabiorug's Avatar
    18th October 2020, 16:50
    fabiorug replied to a thread Fp8sk in Data Compression
    The speed is much faster for a 2 MB jpg while the size is only 2 KB larger. But if I want the normal speed, what should I use, like those 17 KB you squeezed out in this build? Anyone who's using this program?
    66 replies | 4996 view(s)
  • fabiorug's Avatar
    18th October 2020, 16:47
    fabiorug replied to a thread paq8px in Data Compression
    On Windows 7 it says that it can't find the documents/builds folder. Working with cmd on Windows 7 is impossible. When will moisesmcardona include it in his program? For which architecture are the builds compiled and included in his program?
    2143 replies | 579485 view(s)
  • suryakandau@yahoo.co.id's Avatar
    18th October 2020, 15:48
    How do I compress a folder using paq8px?
    2143 replies | 579485 view(s)
  • DZgas's Avatar
    18th October 2020, 13:54
    DZgas replied to a thread JPEG XL vs. AVIF in Data Compression
    I used cjpegXL 0.0.1-c8ce59f8 (13 Oct) and commands only: --distance 7 --speed 9 I think it's good that "JPEG XL spends a lot of bits on low and middle frequencies, but then there is less left for high frequencies." This is best for archiving photos, because AVIF just erases very large details.
    59 replies | 5319 view(s)
  • a902cd23's Avatar
    18th October 2020, 13:53
    a902cd23 replied to a thread Fp8sk in Data Compression
    This ZIP contains my 7zip folder and 89 jpg/png files from 1997/98. No errors detected in de/compression. Time not measured. Compare size against fp8v6.
    Directory of Z:\1\test*
    2020-10-18 12:36    17 000 518 test.zip
    2020-10-18 11:05     9 952 986 test.zip.fp8
    2020-10-18 10:58     9 947 580 test.zip.fp8sk30
    Directory of Z:\2\test*
    2020-10-18 12:36    17 000 518 test.zip
    2020-10-18 11:05     9 831 678 test.zip.fp8
    2020-10-18 10:58     9 808 000 test.zip.fp8sk30
    Directory of Z:\3\test*
    2020-10-18 12:37    17 000 518 test.zip
    2020-10-18 11:04     9 759 017 test.zip.fp8
    2020-10-18 10:59     9 721 353 test.zip.fp8sk30
    Directory of Z:\4\test*
    2020-10-18 12:37    17 000 518 test.zip
    2020-10-18 11:04     9 719 050 test.zip.fp8
    2020-10-18 10:59     9 674 468 test.zip.fp8sk30
    Directory of Z:\5\test*
    2020-10-18 12:37    17 000 518 test.zip
    2020-10-18 11:04     9 694 758 test.zip.fp8
    2020-10-18 10:59     9 647 090 test.zip.fp8sk30
    Directory of Z:\6\test*
    2020-10-18 12:38    17 000 518 test.zip
    2020-10-18 11:03     9 678 955 test.zip.fp8
    2020-10-18 11:00     9 629 482 test.zip.fp8sk30
    Directory of Z:\7\test*
    2020-10-18 12:38    17 000 518 test.zip
    2020-10-18 11:03     9 672 426 test.zip.fp8
    2020-10-18 11:00     9 621 646 test.zip.fp8sk30
    Directory of Z:\8\test*
    2020-10-18 12:38    17 000 518 test.zip
    2020-10-18 11:03     9 670 563 test.zip.fp8
    2020-10-18 11:00     9 619 379 test.zip.fp8sk30
    66 replies | 4996 view(s)
  • suryakandau@yahoo.co.id's Avatar
    18th October 2020, 12:01
    I use the -8 option for both of them.
    2143 replies | 579485 view(s)
  • suryakandau@yahoo.co.id's Avatar
    18th October 2020, 10:57
    Paq8px_v193fix3 - slightly improved jpeg model. paq8px_v193fix2: 2202679 bytes; paq8px_v193fix3: 2201650 bytes. Here is the source code.
    2143 replies | 579485 view(s)
  • suryakandau@yahoo.co.id's Avatar
    18th October 2020, 02:49
    Maybe you could try Fp8sk since it is the fastest in paq series.
    200 replies | 129462 view(s)
  • Gonzalo's Avatar
    18th October 2020, 02:08
    Fair enough! Thanks. Make sure you test the latest versions, though... For example, precomp has seen a dramatic increase in speed over the last few years. Nowadays it's usually faster than Razor, while providing better compression if there is something to recompress. And as far as practical compressors go, FreeArc is probably better than 7-zip even now after years of abandonment (faster & denser archives thanks to rep filter and tta among other things).
    200 replies | 129462 view(s)
  • Nania Francesco's Avatar
    18th October 2020, 01:35
    I will try to test the archivers and compressors as much as possible and accept your suggestions. As far as the "paq" series is concerned, I will do everything to test the program which, in my opinion, with the current processors is too slow and not very useful to be used in a practical way every day. Currently only Razor, Packet, Nanozip, 7zip and Rar, in my opinion, are very practical and useful archivers. The tests speak though!
    200 replies | 129462 view(s)
  • Gonzalo's Avatar
    18th October 2020, 01:11
    Good work Nania! Question: How do you decide which compressors to test? I see some heavy names missing, especially taking into consideration that you include dead ones like nanozip, and slow configurations like -cc. What about paq8pxd(d) and precomp, for example? In the case of precomp, its worst-case scenario is 7zip-like compression, and for a lot of other test sets it should take first place, especially but not limited to pdf, mp3 and jpg. And if you include paq8px and paq8pxd, they should quickly gain the #1 or #2 spots for basically each and every test (except maybe for mp3, which would still belong to precomp). You might also like to take a look at bcm and bsc for fast and strong symmetric compressors, and if you plan to keep nanozip, maybe you should include zpaq and freearc too. They are still competitive. As for closed source or proprietary compressors, you might want to take a look at PowerArchiver and the TC4Shell version of 7z, which includes a lot of great plugins for better compression, like wavpack for uncompressed audio.
    200 replies | 129462 view(s)
  • anormal's Avatar
    17th October 2020, 22:35
    I was interested this year in string compression, specifically for very low resource machines, usually 8-bit CPUs with maybe 64 KB of RAM. The idea here is: what could I use to save text space? There are already modern compressors such as LZSA, and many more, optimized for 8-bit machines, but all of those are for general-purpose compression of binary data. I wanted something that could be called with a string number and an address, so that just that one string is decompressed. The standard use case is a text adventure, like the ones developed by Infocom, Level-9, Magnetic Scrolls, etc... By the way, all those systems (adventure interpreters) use some kind of text compression: packing 4 chars in 3 bytes is used, I think, by Level-9, and many others use a simple dictionary token substitution. I've used strings extracted from decompiled adventures for these systems to do some basic testing. I used Delphi/Pascal for testing, but this is irrelevant.
    So, existing string-focused compressors, such as shoco, smaz, the trie version of smaz, etc., are not tailored to this problem. I've studied, as far as I can, the sources of other string compressors too, such as Femtozip and Unishox, and came to the following conclusions:
    - Decompression code needs to be small, but it doesn't have to fit under 100 bytes; of course, smaller is better. Speed is not relevant, afaik, in a text adventure.
    - So, simple coding needs to be used, no arithmetic coding for example; I've read something about the possibility of AC on an 8-bit machine, but it's totally out of my knowledge.
    - The best I could use is a token dictionary and maybe some coding of literals and pointers into the dictionary. The next thing I tested was adaptive Huffman for each string, which gives nice compression and is easy to use for strings, but it's impractical for this use case.
    My definition of the problem:
    - alphabet size is 64 chars, so I can use 6 bits for literals, for example
    - max total text to compress is 0xFFFF bytes
    - strings are capitalized; per-platform decompression code can capitalize or apply other transformations after unpacking
    - max 255 strings (could be more without changing the method)
    - max string size could be more than 255 bytes
    - strings are accessed by a number
    - dictionary size is 512 bytes
    - max dictionary match length is 8 chars, 3 bits; after some testing, I found larger matches are not produced if I use 4 bits.
    I came up with the following scheme, which gives me around 60% final size (including dictionary, alphabet definition, and string pointers). I do not claim this is a huge saving or that I've invented something useful; I just took some things from here and there and tried to use them for this specific case. This is my first try at coding something related to compression, and I've been testing many things; maybe you have some ideas about it. The final idea could be: compress the strings on a modern machine, and then output results that could be used with a small function on machines with a Z80, a 6502, early Intel 8088, 8086, etc.
    - all strings are capitalized, concatenated and cleaned of unknown chars
    - all n-grams up to size 8 are extracted from the cleaned text and sorted by frequency of use
    - dictionary construction is based on what I've seen in other source code; I tried to understand the Z-standard dictionary builders (even the new fastcover), but the code is too complex for me :D And I think I don't need suffix arrays because the maximum input size is 64 KB, so I can afford extracting every n-gram? Could anyone correct me if I am wrong about this?
    Basically each n-gram is taken in order from the n-gram list and added to the dictionary, with the usual optimizations: an n-gram is not added if it is a substring of another one already in the dict; an n-gram is packed into the dictionary if its head or tail coincides with another one already in the dict, maximizing the number of coincident chars. So, for example, having added "ESE" to the dict and then finding a token "SDES", merge both, producing a bigger token "SDESE" (a tail optimization in this case). My idea here is, please correct me if I am wrong, to pack the maximum number of n-grams into the dict, maximizing frequency of use. This works pretty well, usually packing hundreds of n-grams into the 512-byte dict. I have the feeling that there are better ways to build this small dict and improve compression, but I don't have a better idea.
    - compression is pretty simple:
    - literals use 6 bits (Huffman coded) and are stored at the end of the compressed string; the unpacking space is prefilled with 0s, so after unpacking the dict tokens, the 0 bytes are filled with literals in the order they come.
    - pointers into the dict are 12 bits: 9 used for a pointer into the 512-byte dictionary, and 3 for the length of the match. I got this idea from Femtozip, I think. I use a Huffman coder for encoding the literal 6-bit values, and decomposing the 12-bit tokens into 2 halves produces 6+6, so I can use a single Huffman coder for both literals and dict token pointers. (Femtozip uses byte nibbles, but each one uses its own Huffman dictionary; I cannot afford this luxury.)
    - so compression works this way:
    - tokens in the dict are found for each string from the maximum possible length to the minimum, so I first try to find matches of 8 chars, down to 3
    - if no more tokens match, all the rest are literals, and they are deferred to the end of the compressed string after all dict tokens
    - I build this for each string, accumulating stats of the 6-bit values for my Huffman encoder
    - finally, Huffman is used to encode tokens 6 bits at a time, and then all literals are encoded and appended
    - strings are accessed via a pointer array, so no string lengths are needed, and as I said, literals are filled in at the end. I've not tested unpacking, but it should work :D
    Well, this is my little experiment. I have the feeling that there are better strategies, for example for constructing a better dictionary, or for choosing which token in the dictionary is matched. For the latter, for example, is there a better way than taking matches in the dict by length (longest to smallest)? I mean: I find a match of length 8, nice, then a match of length 6, but maybe if I don't use that first 8-char match, I could later find 2 matches of length 7 (improving compression), or maybe other better combinations? Sorry, I don't know how to explain this better. Well, if you survived this wall of text and have any idea to implement here, I'll try to test it. For example, I've thought of using a genetic evolver to improve match choosing, and maybe token packing in the dictionary (I learned about this in the old times of Core War warriors). Take into account that this case is useless in a general sense and is only a very specific use case: unpacking needs to be done on an 8-bit machine, max input text size is 64 KB, and the other minutiae I've written. Thanks guys! Edit: I've added some debugging output if you want to have a look at some numbers (naive6 is the plain 6-bits-per-char baseline): https://pastebin.com/xa3hCQg5
    5 replies | 305 view(s)
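A compact illustration of the 12-bit dictionary token described above, under my own assumptions about the exact layout (9-bit offset into a 512-byte dictionary plus a 3-bit length field storing length-1, so lengths 1..8 are representable and 3..8 are used; the Huffman stage and literal fill-in are omitted). The toy dictionary content is made up; the same logic could be ported to a Z80/6502:

```cpp
#include <cstdint>
#include <cstdio>

// Toy 512-byte dictionary (real builds would pack overlapping n-grams here).
static const char kDict[512] = "THE YOU ARE ROOM NORTH SOUTH DOOR OPEN ";

// Pack a dictionary reference: low 9 bits = offset, high 3 bits = length - 1.
static uint16_t packToken(uint16_t offset, uint8_t len) {
  return static_cast<uint16_t>((offset & 0x1FF) | (((len - 1) & 0x7) << 9));
}

// Copy the referenced dictionary substring into the output buffer;
// returns the number of characters produced.
static uint16_t unpackToken(uint16_t token, char* out) {
  uint16_t offset = token & 0x1FF;
  uint8_t  len    = static_cast<uint8_t>((token >> 9) & 0x7) + 1;
  for (uint8_t i = 0; i < len; ++i) out[i] = kDict[offset + i];
  return len;
}

int main() {
  char buf[16] = {0};
  uint16_t t = packToken(4, 4);           // 4 chars at dictionary offset 4: "YOU "
  uint16_t n = unpackToken(t, buf);
  std::printf("token=%03X -> \"%.*s\"\n", static_cast<unsigned>(t),
              static_cast<int>(n), buf);
}
```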
  • suryakandau@yahoo.co.id's Avatar
    17th October 2020, 22:03
    Fp8sk30 - improved jpeg recompression. The source code and binary file are here. Please check whether there is any error in this version. Thank you. On Windows, just drag and drop file(s) onto it to compress or extract.
    66 replies | 4996 view(s)
  • fabiorug's Avatar
    17th October 2020, 20:06
    Hi! Is auto-encoding an alien technology from the future, or only JPEG XL, as the imgready slides' title states? Can auto-encoding match how average human vision (a complex topic) works, even in jpg photos? Is it Sami Boukortt or Jon Sneyers, or the complete team? Who had the idea, and who made the first draft or is working on it? Is it really real? I think that with the current jpeg xl commit you are editing the bitstream, so maybe nobody has trained auto-encoding at the moment, or only Jon Sneyers, who had the idea. I attended imgready with the Zoom app.
    148 replies | 13997 view(s)
  • Jyrki Alakuijala's Avatar
    17th October 2020, 19:56
    Values above 24 are an extension to RFC 7932 and will not work with the browser's HTTP content encoding (because of decoding memory-use limitations). This is why you need a special flag for the large window: --large_window 30. I believe only quality 11 (and possibly 10) maxes out the possibilities of the large window. Large-window brotli would be ideal for distribution of packages, as it is highly asymmetric (super dense, but very fast to decode).
    200 replies | 129462 view(s)
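A sketch of requesting the large window from the brotli C API rather than the CLI flag mentioned above, to the best of my knowledge of the library (parameter names are from brotli's encode.h; treat the exact calls as an assumption to verify against the headers, and note that production code should loop until BrotliEncoderIsFinished rather than rely on a single call):

```cpp
#include <brotli/encode.h>
#include <cstdint>
#include <vector>

// Compress `input` with quality 11 and a 30-bit (1 GiB) window.
// Streams produced with BROTLI_PARAM_LARGE_WINDOW are not valid for the
// standard RFC 7932 "br" content encoding; the decoder must opt in as well.
std::vector<uint8_t> compressLargeWindow(const std::vector<uint8_t>& input) {
  BrotliEncoderState* s = BrotliEncoderCreateInstance(nullptr, nullptr, nullptr);
  BrotliEncoderSetParameter(s, BROTLI_PARAM_QUALITY, 11);
  BrotliEncoderSetParameter(s, BROTLI_PARAM_LARGE_WINDOW, 1);
  BrotliEncoderSetParameter(s, BROTLI_PARAM_LGWIN, 30);  // window = 2^30 bytes

  std::vector<uint8_t> out(BrotliEncoderMaxCompressedSize(input.size()));
  size_t availIn = input.size(), availOut = out.size();
  const uint8_t* nextIn = input.data();
  uint8_t* nextOut = out.data();
  // With FINISH and a worst-case-sized output buffer this finishes in one call.
  BrotliEncoderCompressStream(s, BROTLI_OPERATION_FINISH,
                              &availIn, &nextIn, &availOut, &nextOut, nullptr);
  out.resize(out.size() - availOut);
  BrotliEncoderDestroyInstance(s);
  return out;
}
```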
  • Nania Francesco's Avatar
    17th October 2020, 19:00
    New update. Added - Thor 0.96a (Compressor) - Packet 1.9 (Archiver) - Razor 1.3.7 (Archiver) - ZCM 0.93 (Archiver) - ULZ 1.01B (Compressor) - Xeloz 0.3.5.3 (Compressor) only from: http://heartofcomp.altervista.org/MOC/MOCA.htm
    200 replies | 129462 view(s)
  • Jyrki Alakuijala's Avatar
    17th October 2020, 18:51
    This is nice and flattering, but the bit stream is still evolving. We will likely move into backward compatible (aka frozen) mode next week for the reference software. This will make such integrations a lot more useful.
    148 replies | 13997 view(s)
  • Jyrki Alakuijala's Avatar
    17th October 2020, 14:53
    Some people love progressive jpegs (vs. sequential). My understanding (based on fully non-scientific twitter discussions and my own sampling) is that it is about 60% of people. 30% don't have an opinion when shown the two alternatives. 10% dislike progressive. The best hypothesis I have heard for why some dislike progressive is that it is 'too much information' for their brains, and it reduces their focus. Switches in rendering style contribute to the features that those brains need to process. Not having features in the progressive image that disappear in the final version reduces the amount of work that the brain needs to do in interpreting the image. Of course many alternative hypotheses are possible.
    25 replies | 1477 view(s)
  • Jarek's Avatar
    17th October 2020, 10:53
    Just noticed that XnView supports JXL: https://www.xnview.com/mantisbt/view.php?id=1845 https://www.xnview.com/en/xnviewmp/#downloads
    148 replies | 13997 view(s)
  • Nania Francesco's Avatar
    16th October 2020, 23:43
    First release of new WCC 2020-2021 is online. I'm coming back after a long break. I wasn't stopped. I have carried out some projects which I will soon publish! Meanwhile enjoy this new Benchmark that looks interesting and above all does not give so many advantages to text compressors ! OS: Windows 10 CPU: AMD Ryzen™ 3700x (8 core– 16 thread) only from: http://heartofcomp.altervista.org/MOC/MOCA.htm
    200 replies | 129462 view(s)
  • Shelwien's Avatar
    16th October 2020, 20:29
    It's possible to make a "random data compressor" device in the form of a PCI card with a few TB of SSD inside. The software part would then store any given file and replace it with some index value. It would be enough to trick most people, because filling 1 TB at 100 MB/s takes almost 3 hours of continuous work. And if we need decoding on another PC, the same can be done with a wireless modem (cell/satellite).
    2 replies | 106 view(s)
  • BakingPowder's Avatar
    16th October 2020, 15:51
    In fact you could even go further with it, and make it entirely dynamic. "123456" could in fact only decompress into one string: a useless random-looking string, probably, or at least mostly garbage. But you could simulate a PERSON... who only WANTED to compress garbage-looking files... and somehow "backwards evolve". We make the simulated world evolve a character with a plausible back-story, retconned basically... who only has some "garbage-looking files" for real, valid reasons. Maybe very weird programs that need these garbage files, where in fact the program secretly contains the missing info. As in the previous example, this person can now actually believe he is compressing random data successfully, and that he has an algorithm that works. Again, it's a form of illusion over the mind of a simulated character who doesn't realise he is being simulated or that information is being withheld from him.
    2 replies | 106 view(s)
  • BakingPowder's Avatar
    16th October 2020, 15:43
    What happens if we could compress random data? Well... firstly... you would have achieved the impossible. Given a string like "123456", it could decompress into almost infinitely many different files. So which file is the correct one? What if we could always know the correct one? By "intuition". What if the CPU somehow had "intuition" about the correct way to decode this? We first need to simulate a version of reality (like a simulated world with people and objects and even computers simulated in it). Then make a CPU that can "intuit" the right answers. This basically means looking up the "correct info" from a table that is only available to the decompressor. Basically the random information isn't compressed, it's just a lookup table, where we have: Table = FredsComputerGame. We run this information through the decompressor, but the user Fred doesn't actually know that the decompressor knows that he expects to get his computer game decompressed. All he knows is that he sent in the string "123456". It's a stage trick, basically. Illusions, that sort of show magic. The "virtual character Fred" would then be highly surprised to see the magical event occur: the string "123456" was decompressed into his computer game. Then we just need to keep the virtual character Fred's mind away from the fact that other people expect the same string to decompress differently :) So he believes it is really an amazing magical algorithm. Then we have compressed and decompressed random data. At least... for a virtual character in a simulated reality, who doesn't realise it's just an illusion. Perhaps it could be useful to ease the minds of people who are going crazy trying to figure out how to compress random data and can't handle reality.
    2 replies | 106 view(s)