Activity Stream

  • Aniskin's Avatar
    Today, 18:35
    Technically there is no problem creating such a version of the codec. I will think about it.
    19 replies | 834 view(s)
  • Shelwien's Avatar
    Today, 18:20
    @Aniskin: Btw, is it possible to get mfilter to output jpeg and metainfo to different streams? With that we could avoid compressing jpegs twice...
    19 replies | 834 view(s)
  • Aniskin's Avatar
    Today, 18:04
    If you want to use MFilter+Lepton: 7z a -m0=mfilter:a1 -m1=lzma2:x9
    If you want to use MFilter+paq8: 7z a -m0=mfilter:a2 -m1=lzma2:x9
    Also, what about solid compression?
    19 replies | 834 view(s)
  • brispuss's Avatar
    Today, 17:38
    Thanks. I didn't remove metadata.
    19 replies | 834 view(s)
  • smjohn1's Avatar
    Today, 17:33
    Did you remove the meta-info when using packJPG ( -d )? Meta-info accounts for a large percentage of small files.
    19 replies | 834 view(s)
  • Kirr's Avatar
    Today, 17:22
    Yes, fqzcomp performs well considering it works via a wrapper that chops long sequences into reads. (And adds constant quality as per your idea, which I probably took a bit too far :-)). Interestingly, it is currently leading in compactness on the spruce genome: chart (though this test is not complete, some compressors are still missing). Also it may still improve more after I add its newly fixed "-s9" mode. I guess it will work even better on proper fastq short-read datasets. Thanks. Yeah, NAF is focused on transfer + decompression speed, because both of these steps can be a bottleneck in my work. I noticed that many other sequence compressors are primarily optimized for compactness (something I did not know before doing the benchmark), which partly explains why gzip remains popular.
    9 replies | 697 view(s)
  • brispuss's Avatar
    Today, 16:55
    OK. Working on other compression examples. Modified MFilter7z.ini as per earlier post, but changed paq8px version to 183fix1 instead of version 181fix1. Created sub directory named paq8px_183fix1 under Codecs sub directory under 7-zip. Paq8px_v183fix1 executable copied to the directory paq8px_183fix1. So the command for lepton plus paq8px_v183fix1 PLUS lzma2 should now be 7z a -m0=mfilter:a1 -m1=mfilter:a2 -m2=lzma2:x9 0067.7z 0067.jpg (for example)?
    19 replies | 834 view(s)
  • Marco_B's Avatar
    Today, 15:48
    Hi all, in this episode I am glad to tell about an attempt of my own to dispense with the problem previously encountered in Grouped (ROLZ) LZW: the fixed size of the groups (dictionaries attached to contexts). A way to proceed is illustrated by Nakano, Yahagi, Okada, but I started from a different consideration. Every time a symbol occurs in a text, it gains an increasing number of children, and the chance for it to reappear becomes smaller and smaller, while an entropy stage which takes its output assigns shorter codes. To reconcile these two opposites I settled on a scheme where symbols belong to a set of lists keyed by the number of children, and each list is organized as an LRU. A symbol is now emitted by its list and rank inside it, respectively via the Witten-Cleary arithmetic coder and Elias delta. I chose an AC because it is the only coder that can closely mimic the fatherhood distribution among symbols, but this constraint put me in front of the necessity of interleaving its sequence. After a complicated period I realized that the solution must be based on two facts: (i) the encoder has to be ahead by two symbols, because the decoder needs to start with 16 bits; (ii) the variables high and low are in lockstep between these two parts. The rest you can see through the source below. Actually, at the moment the compressor is terrible, both in terms of speed and ratio, but I made it public as the interleaver could be of some interest. I have in mind to improve the performance of the compressor by imposing on it a more canonical context apparatus, which should curtail the lists at the expense of memory consumption. I hope to see you soon, greetings, Marco Borsari
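    The ranks inside each LRU list are written with Elias delta; as a quick refresher (my own sketch, not the source below), the code writes the bit-length of the bit-length in unary, then the bit-length in binary, then the value without its leading 1-bit:

```python
def elias_delta_encode(n: int) -> str:
    """Elias delta code of a positive integer, as a bit string."""
    assert n >= 1
    nb = n.bit_length()            # number of bits of n
    lb = nb.bit_length()           # number of bits of nb
    # (lb-1) zeros, then nb in binary (Elias gamma of nb), then n minus its leading 1-bit
    return "0" * (lb - 1) + bin(nb)[2:] + bin(n)[3:]

def elias_delta_decode(bits: str) -> int:
    """Decode a single Elias delta codeword from the front of a bit string."""
    zeros = 0
    while bits[zeros] == "0":      # unary part: how many more bits encode the length
        zeros += 1
    nb = int(bits[zeros:2 * zeros + 1], 2)   # length of n in bits
    rest = bits[2 * zeros + 1:]
    return int("1" + rest[:nb - 1], 2)       # re-attach the implicit leading 1
```

    Small ranks (symbols near the front of an LRU list) get very short codes, which is exactly the property the scheme relies on.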
    0 replies | 34 view(s)
  • Shelwien's Avatar
    Today, 15:34
    Shelwien replied to a thread FLiF Batch File in Data Compression
    > stuffit deluxe is even commercial! Here's stuffit windows binary: http://nishi.dreamhosters.com/stuffit14_v0.rar > which one would you use on linux? There's wine on linux, so hard to say. But I like BMF.
    14 replies | 518 view(s)
  • pklat's Avatar
    Today, 15:18
    pklat replied to a thread FLiF Batch File in Data Compression
    sorry, mea culpa. so there are: rkim, StuffItDeluxe, Flic, zpaq, bmf, gralic, cmix. but most of them are either too slow or too 'exotic' imho. stuffit deluxe is even commercial! flif is supported by xnview, irfanview, imagemagick. which one would you use on linux?
    14 replies | 518 view(s)
  • Shelwien's Avatar
    Today, 14:58
    Shelwien replied to a thread FLiF Batch File in Data Compression
    Well, this says that it isn't: http://qlic.altervista.org/LPCB.html As to solid flif - there's support for multi-image files.
    14 replies | 518 view(s)
  • pklat's Avatar
    Today, 14:49
    pklat replied to a thread FLiF Batch File in Data Compression
    from my limited tests, flif has the best lossless ratio. however, it's slow at decoding and not well supported. it seems best to use it for archiving only for now. as someone already mentioned, pity it has no 'solid' option yet for large sets of pictures like archivers do (or a dictionary option)
    14 replies | 518 view(s)
  • Aniskin's Avatar
    Today, 14:37
    MFilter should be used with additional compression (-m1=LZMA(2)) because MFilter does not pass any metadata to the packer. And maybe lepton.exe without MFilter will show a better result.
    19 replies | 834 view(s)
  • Shelwien's Avatar
    Today, 13:51
    Add also -exact: https://encode.su/threads/3246-FLiF-Batch-File?p=62419&viewfull=1#post62419
    51 replies | 10763 view(s)
  • Shelwien's Avatar
    Today, 13:49
    1. jojpeg doesn't compress metainfo on its own, try using "pa a -m0=jojpeg -m1=lzma2:x9" or "pa a -m0=jojpeg -m1=lzma2:x9 -m2=lzma2:x9:lp0:lc0:pb0 -mb00s0:1 -mb00s1:2"
    2. There's Aniskin's post above about integrating paq8px into mfilter; results could be better than standalone paq8px.
    3. Add brunsli, maybe new precomp too.
    11,711,247 jpg\
    11,333,680 0.pa // pa a -m0=jojpeg 1.pa jpg
    9,161,957 1.pa // pa a -m0=lzma2:x9 0.pa jpg
    9,132,865 2.pa // pa a -m0=jojpeg -m1=lzma2:x9 2.pa jpg
    9,133,254 3.pa // pa a -m0=jojpeg -m1=lzma2:x9 -m2=lzma2:x9:lp0:lc0:pb0 -mb00s0:1 -mb00s1:2 3.pa jpg
    19 replies | 834 view(s)
  • pklat's Avatar
    Today, 13:33
    i have tried webp (0.6.1-2) in Debian using lossless mode (-z 9) with a .png image, but it doesn't produce the exact same .bmp after unpacking?
    51 replies | 10763 view(s)
  • JamesB's Avatar
    Today, 12:25
    JamesB replied to a thread DZip in Data Compression
    The authors are heavily involved in various genome sequence data formats, so that's probably their primary focus here and why they have so much genomic data in their test corpus. So maybe they don't care so much about text compression. At the moment the tools (Harc, Spring) from some of the authors make heavy use of libbsc, so perhaps they're looking at replacing it. Albeit slowly... They'd probably be better off considering something like MCM for rapid CM encoding as a more tractable alternative, but it's always great to see new research of course and this team have a track record of interesting results.
    4 replies | 207 view(s)
  • brispuss's Avatar
    Today, 11:26
    Ran a few more compression tests using 171 jpeg image files from a HOG (Hidden Object Game). Total size of all files is 64,469,752 bytes. Created batch files to compress each file individually.
    original files 64,469,752 bytes
    jojpeg (version sh3) 55,431,288 - ran under modified 7-zip (7zdll) using command pa a -m0=jojpeg, where "pa" was renamed from "7z"
    lepton (slow version) 52,705,758 - ran under original 7-zip using command 7z a -m0=mfilter:a1
    packjpg (version 2.5k) 51,975,698
    fast paq8 (version 6) 51,588,301 - using option -8
    paq8pxd (version 69) 51,365,725 - using option -s9
    paq8px (version 183fix1) 51,310,086 - using option -9
    Noticed that packjpg tended to produce larger-than-original files when compressing small files originally in the tens of kilobytes. But packjpg performed better, with good compression, on larger files from hundreds of kilobytes and up.
    19 replies | 834 view(s)
  • Sebastian's Avatar
    Today, 09:39
    Sebastian replied to a thread DZip in Data Compression
    Interestingly, the improvements from using the dynamic model are almost negligible on some files, although it is not clear to me if they keep updating the static model.
    4 replies | 207 view(s)
  • Shelwien's Avatar
    Today, 08:08
    Shelwien replied to a thread DZip in Data Compression
    Not related to cmix. It's written in Python and uses all the popular NN frameworks. Compression seems to be pretty bad though - they report worse results than bsc for enwik8.
    4 replies | 207 view(s)
  • bwt's Avatar
    Today, 01:55
    bwt replied to a thread DZip in Data Compression
    It seems like cmix
    4 replies | 207 view(s)
  • Gonzalo's Avatar
    Yesterday, 20:18
    Precomp hangs restoring an .iso image file. I attached a 10 MB chunk around the area where it happens. On this particular file, precomp hangs at 39.27%.
    54 replies | 3392 view(s)
  • JamesB's Avatar
    Yesterday, 20:16
    JamesB started a thread DZip in Data Compression
    I stumbled across another neural network based general purpose compressor today. They compare it against LSTM where it mostly does better (sometimes very much so) but is sometimes poorer. I haven't tried it yet. https://arxiv.org/abs/1911.03572 https://github.com/mohit1997/DZip
    4 replies | 207 view(s)
  • Shelwien's Avatar
    Yesterday, 18:30
    Shelwien replied to a thread vectorization in Data Compression
    I guess you mean https://en.wikipedia.org/wiki/Image_tracing - conversion software like that does exist, so you can test it. In particular, the djvu format may be related. But it's much harder to properly compress vector graphics... for any reasonable image complexity it would be better to rasterize it instead and compress the bitmap.
    1 replies | 127 view(s)
  • smjohn1's Avatar
    Yesterday, 18:08
    It is really hard to come to a definitive conclusion with only one sample file. In general packJPG ( which Lepton borrowed a lot of techniques from ) works quite well. It would be great if you could report test results on a large group of sample JPEG files with various contexts, such as people, nature, etc.
    19 replies | 834 view(s)
  • pklat's Avatar
    Yesterday, 16:59
    pklat started a thread vectorization in Data Compression
    ( if OT, or already mentioned, sorry ) what do you think of using vectorization as a sort of 'lossy' compression? pdf could be converted to svg, for example; .cbr also, depending on content. something would be lost, but something gained, obviously. or use it for text parts only?
    1 replies | 127 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 07:19
    XML.tar from the Silesia benchmark, without Precomp and with only ~1.3 GB of memory: the result is 265,136 bytes
    18 replies | 1101 view(s)
  • suryakandau@yahoo.co.id's Avatar
    Yesterday, 05:18
    It only takes ~1.3 GB of memory
    18 replies | 1101 view(s)
  • Shelwien's Avatar
    Yesterday, 00:20
    As I said, it's not a single file, there're dependencies (vbrun300.dll etc.)
    5 replies | 214 view(s)
  • CompressMaster's Avatar
    4th December 2019, 21:06
    @Shelwien it does not work properly. I tried many versions of that file (including an uncorrupted original from a corrupted CD), but it does not work - same error. Could I ask you to compile that? I had found some of these approaches before, but I have only VS 2010. Thanks a lot.
    5 replies | 214 view(s)
  • JamesB's Avatar
    4th December 2019, 20:43
    In that case I'm amazed fqzcomp does even remotely well! It was written for short read Illumina sequencing data. :-) Luck I guess, although it's clearly not the optimal tool. NAF is occupying a great speed vs size tradeoff there.
    9 replies | 697 view(s)
  • kaitz's Avatar
    4th December 2019, 19:18
    https://www.toptensoftware.com/win3mu/
    5 replies | 214 view(s)
  • schnaader's Avatar
    4th December 2019, 17:52
    Here's the latest development version - I fixed an error with a file write that had no mutex, which led to incorrect reconstruction on files with many small interleaved JPG and preflate (PDF/ZIP/...) streams.
    54 replies | 3392 view(s)
  • Kirr's Avatar
    4th December 2019, 11:57
    The first type of data is currently not represented at all in the benchmark. I will certainly add such data in the future. The other two kinds are both used, and I thought they were labeled clearly. But probably not clearly enough. I will try to further improve the clarity. Thanks! There are also different kinds of compressors, e.g. those designed specifically for short reads vs those not caring about sequence type. I will probably separate short-read compressors into their own category. (Currently I bundle all specialized compressors together as "sequence compressors".)
    9 replies | 697 view(s)
  • Shelwien's Avatar
    4th December 2019, 09:08
    There're actually 3 different "codecs" based on the same reflate in 7zdll: -m0=reflate:x9, -m0=reflate*4:x=9876, -m0=reflate*7:x=1234567.
    Thing is, it's quite possible to encounter nested deflate... like .png in .docx in .zip, so reflate supports nested preprocessing... but each nesting level requires an extra output stream for diff data, and the 7-zip framework doesn't support a dynamic number of output streams, so I made 3 instances with different names instead: "reflate" has nesting depth 1, "reflate*4" = depth 4, "reflate*7" = depth 7... The 7-zip framework also has a limit of max 8 output streams per codec, and reflate*7 uses all of them.
    Unfortunately reflate doesn't yet have detection of zlib parameters... the window is assumed to always be 32k (which is bad for pngs), and the level has to be specified manually (via the x parameter in this case; default is 6). Thus "-m0=reflate:x9" means "diff against zlib level 9", while "-m0=reflate*4:x=9876" means "level 9 for depth 0, level 8 for depth 1...".
    It's important to keep in mind that in 7zdll, reflate's "x" parameter clashes with the global "x" parameter, so "7z a -m0=reflate ..." would use reflate/level6, while "7z a -mx=9 -m0=reflate ..." would suddenly use reflate/level9; but it's not a problem if the reflate level is always specified manually, like "7z a -mx=9 -m0=reflate:x6 ...". The reflate library has some other parameters, but they're not included in 7zdll atm. https://encode.su/threads/1399-reflate-a-new-universal-deflate-recompressor?p=50858&pp=1
    Also keep in mind that an actual cmdline using reflate would look something like this:
    pa a -m0=reflate:x6 -m1=lzma2:x9 -m2=lzma2:lc0:lp0:pb0 -mb00s0:1 -mb00s1:2 archive.pa *.pdf
    or even
    pa a -m0=reflate:x6 -m1=jojpeg -m2=lzma2:x9 -m3=lzma2:lc0:lp0:pb0 -m4=lzma2:lc0:lp0:pb0 -mb00s0:1 -mb00s1:4 -mb01s0:2 -mb01s1:3 archive.pa *.pdf
    since reflate's diff outputs are usually incompressible per deflate stream... but would be the same for multiple instances of the same stream.
    I have this converter for complex filter trees: http://nishi.dreamhosters.com/7zcmd.html
    131 replies | 72162 view(s)
  • brispuss's Avatar
    4th December 2019, 06:35
    Thanks. I've got the 7zdll "7z" working again. Followed your advice and renamed 7z.exe to pa.exe to avoid confusion with the original 7-zip program. For reflate, I think there may be options associated with it? What are those options please?
    131 replies | 72162 view(s)
  • Shelwien's Avatar
    4th December 2019, 06:19
    It says "Not implemented" about the archive format (.7z or .wtf). Archive name should either end with ".pa" or you need to specify the format explicitly with -tpa option. Also maybe rename 7zdll's 7z.exe to pa.exe - that would let you use both 7-zip and 7zdll at once.
    131 replies | 72162 view(s)
  • brispuss's Avatar
    4th December 2019, 04:01
    Thanks for the detailed comments. However, it doesn't entirely explain why I'm getting the "Not implemented" errors. I can now install the original 7-zip program and run it OK. But still getting errors when trying to compress files using files from 7zdll_vF7.rar. Original 7-zip program once again uninstalled. I've "installed" files from 7zdll_vF7.rar to c:\7zdll_vF7 directory. Set environment variables path to c:\7zdll_vf7\x64 since I'm running x64 version of Windows. Typing 7z in a command window brings up command syntax as expected. But trying to compress anything brings up errors again "Not implemented". Trying to run reflate on a small swf file (attached) with the command - 7z a -m0=reflate intenium.swf.wtf intenium.swf. But it doesn't work > "Not implemented" What is wrong here? Is the 7zdll installation wrong, somehow?
    131 replies | 72162 view(s)
  • Shelwien's Avatar
    4th December 2019, 02:26
    1. 7zdll works with a ".pa" format; there's no support for any other formats. You can't replace normal 7-zip with it.
    2. With 7zdll, "7z a 0067jpg.pa 0067.jpg" would compress the .jpg with lzma2; there's no codec optimization in the console version.
    3. 7zdll does include the same lepton and brunsli codecs (and jojpeg, but it's much slower), but they can be used only with -ms=off (because both lepton and brunsli want to load the whole file), so for jpegs atm it's better to use normal 7-zip with mfilter, or 7zdll with jojpeg.
    131 replies | 72162 view(s)
  • brispuss's Avatar
    4th December 2019, 01:35
    Not a bad result. However, the best compression is still gained by using paq8px_v183fix1 -9beta on the 0067.jpg file which results in a 101,192 byte file.
    19 replies | 834 view(s)
  • Jarek's Avatar
    4th December 2019, 00:00
    A simple approximation - modeling the minimum as the point where the linear trend of the gradients intersects 0 - gives a very practical and universal way of adaptively choosing the learning rate 'mu'. Imagine you would like to minimize in 1D using gradient (g) descent, with steps: theta <- theta - g * mu. The big question is how to choose the learning rate mu - too small and you get slow convergence, too large and you e.g. jump over valleys. In a second-order method we would like to find the minimum of a parabola, e.g. in a single such step. For a parabola these (theta,g) points lie on a line, as in the figure above - we can find its linear coefficient by dividing the standard deviations of both, and we jump to its minimum if using: mu = sqrt( variance(theta) / variance(g) ). This gives a very simple and pathology-resistant (repelling from saddles, avoiding too large steps) adaptive choice of learning rate - it requires updating just four (exponential moving) averages: of theta, theta^2, g, g^2. For example we can cheaply do it for all coordinates in SGD (page 5 of v2), getting 2nd-order learning-rate adaptation independently for each parameter. Standard methods like RMSprop and ADAM use a sqrt(mean(g^2)) denominator, but that will not minimize a parabola in one step - has anybody seen optimizers using variances like here?
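    The four-average recipe can be sketched in a few lines (my own toy illustration of the idea, not Jarek's code; the warm-up rate lr0 and the cap on mu are safeguards I added, not part of the formula):

```python
import math

def variance_lr_descent(grad, theta, steps=300, beta=0.9, lr0=0.01,
                        eps=1e-12, mu_cap=2.0):
    """1D gradient descent with mu = sqrt(var(theta) / var(g)),
    both variances taken from exponential moving averages."""
    m_t = m_t2 = m_g = m_g2 = 0.0
    for _ in range(steps):
        g = grad(theta)
        # update the four moving averages: theta, theta^2, g, g^2
        m_t  = beta * m_t  + (1 - beta) * theta
        m_t2 = beta * m_t2 + (1 - beta) * theta * theta
        m_g  = beta * m_g  + (1 - beta) * g
        m_g2 = beta * m_g2 + (1 - beta) * g * g
        var_t = max(m_t2 - m_t * m_t, 0.0)
        var_g = max(m_g2 - m_g * m_g, eps)
        if var_t > 0:
            # cap mu to guard against blow-ups from tiny variance
            # estimates (my addition, not part of the formula)
            mu = min(math.sqrt(var_t / var_g), mu_cap)
        else:
            mu = lr0   # fixed warm-up rate until theta has moved at all
        theta -= mu * g
    return theta

# minimizing f(theta) = 0.5*(theta - 3)^2, whose gradient is theta - 3
theta_star = variance_lr_descent(lambda t: t - 3.0, theta=0.0)
```

    For an exact parabola the (theta, g) history is collinear, so once the averages warm up, mu approaches the Newton step 1/f''.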
    23 replies | 1319 view(s)
  • Aniskin's Avatar
    3rd December 2019, 23:59
    MFilter + paq8px_v181fix1.exe
    MFilter7z.ini:
    Encode=lepton\lepton-slow-best-ratio.exe "%1" "%2"
    Decode=lepton\lepton-slow-best-ratio.exe "%1" "%2"
    PipeDecode=lepton\lepton-slow-best-ratio.exe -
    Ext=lep
    Ext=paq8
    Encode=paq8px_v181fix1\paq8px_v181fix1.exe -9 "%1" "%2"
    Decode=paq8px_v181fix1\paq8px_v181fix1.exe -d "%1" "%2"
    Result: 101401 bytes
    19 replies | 834 view(s)
  • brispuss's Avatar
    3rd December 2019, 12:25
    OK. Something strange going on here! I believe I got reflate working under 7z a few days ago, but 7z now no longer works (properly), and can't get anything to compress!? Getting error: System ERROR Not implemented now!? Not sure if I used the original 7-zip installation with the updated files from file 7zdll_vF7.rar installed within the original directory (c:\program files\7-zip), or I might have used a newly created directory called c:\7z-mod with x64 version only files from file 7zdll_vF7.rar? The original (unmodified) 7-zip program under c:\program files\7-zip has been uninstalled. And the c:\7z-mod folder and files were deleted recently. I've re-created c:\7z-mod folder and extracted only the x64 files from file 7zdll_vF7.rar. This is the only 7-zip installation at the moment. Typing in 7z in a command window brings up 7z syntax etc. But trying a simple compress test such as - 7z a 0067jpg.7z 0067.jpg brings up the above error " . Not implemented". I can't even do basic compression let alone advanced stuff such as reflate etc. What is wrong here?
    131 replies | 72162 view(s)
  • JamesB's Avatar
    2nd December 2019, 21:08
    It's worth noting there are several very different classes of compression tool out there, so it may be good to label the type of input data more clearly. The sorts I can think of are:
    1. Compression of many small fragments; basically sequencing-machine outputs. There is a lot of replication as we expect e.g. 30-fold redundancy, but finding those repeats is challenging. Further subdivided into fixed-length short reads (Illumina) and long variable-size reads (ONT, PacBio).
    2. Compression of long genomes with a single copy of each chromosome. No obvious redundancy except for repeats internal to the genome itself (ALU, LINEs, SINEs, etc).
    3. Compression of sets of genomes or sequence families. Very strong redundancy.
    9 replies | 697 view(s)
  • Shelwien's Avatar
    2nd December 2019, 14:20
    That's the idea... Brunsli normally uses brotli library to compress jpeg metainfo, while this version leaves metainfo uncompressed. In theory this can improve solid compression of recompressed jpegs.
    15 replies | 1904 view(s)
  • maadjordan's Avatar
    2nd December 2019, 13:33
    but yours loses some compression - though it can unpack both safely.
    15 replies | 1904 view(s)
  • Aniskin's Avatar
    1st December 2019, 17:14
    It is possible to create an additional section in mfilter.ini that will describe a direct call of paq8px or cmix. In this case you can use -m0=mfilter:a2. (from another thread) Yes, you are right.
    19 replies | 834 view(s)
  • Shelwien's Avatar
    1st December 2019, 15:27
    http://nishi.dreamhosters.com/u/brunsli_v2a_dll.rar Applied schnaader's patch to my dll kit... can be replaced in mfilter (smaller dll because of removed brotli). Doesn't affect compressed size though, I guess mfilter doesn't feed metainfo to brunsli at all.
    15 replies | 1904 view(s)
  • Bulat Ziganshin's Avatar
    1st December 2019, 15:16
    it seems that mfilter provides lepton-compressed output (and last time I looked into lepton, it has no feature to provide uncompressed output), and paq compression is slower but tighter than the lepton one
    19 replies | 834 view(s)
  • Shelwien's Avatar
    1st December 2019, 14:30
    Yes, or you can add -m1=lzma2... etc. Unfortunately mfilter outputs a single stream, so lepton/brunsli output has to be compressed again. Also maybe I should make a brunsli dll from schnaader's hacked versions that doesn't use brotli for metainfo...
    19 replies | 834 view(s)
  • brispuss's Avatar
    1st December 2019, 12:57
    Yes, thanks. I had already placed the mfilter files within a newly created Codecs sub-folder previously, as per the instructions that came with the files. The command 7z a -m0=mfilter:a1 0067.7z 0067.jpg creates an uncompressed 7-zip file via the lepton codec, which then can be further processed/compressed using paq8.. and other compressors?
    EDIT: Assuming the above, the 0067.7z file was compressed with the following results -
    135,671 bytes - original file 0067.jpg
    109,982 bytes - mfiltered 7z file created using above command
    109,977 bytes - mfiltered 7z file compressed with paq8pxd v69 -s9
    109,927 bytes - mfiltered 7z file compressed with fast paq8 v6 -8
    109,908 bytes - mfiltered 7z file compressed with paq8px v183fix1 -9beta
    Not really good results!? The best result so far is using paq8px v183fix1 directly on the 0067.jpg file, resulting in a 101,192 byte file. It seems the mfilter method is not the best to use here(?)
    19 replies | 834 view(s)
  • Shelwien's Avatar
    1st December 2019, 12:36
    Have to put mfilter7z*.dll to "7-Zip\Codecs" rather than "7-Zip". Then it should work with syntax like above (7z a -m0=mfilter:a1 0067.7z 0067.jpg) ...Actually there's this: http://nishi.dreamhosters.com/u/mfilter_demo_20191013.rar
    19 replies | 834 view(s)
  • Shelwien's Avatar
    1st December 2019, 12:33
    Shelwien replied to a thread FLiF Batch File in Data Compression
    > to flif files, decompressing/decoding webp files back is apparently not lossless Both flif and webp actually encode raw picture (pixels), they're not recompression algorithms. .bmp files decoded from .png and dwebp(cwebp(.png)) would be mostly the same. Or pngcrush(.png) and pngcrush(dwebp(cwebp(.png))). Although webp also discards metainfo. And for lossless compression of .pngs I can suggest either precomp, or flif/webp+pngcrush+bsdiff.
    14 replies | 518 view(s)
  • brispuss's Avatar
    1st December 2019, 11:37
    brispuss replied to a thread FLiF Batch File in Data Compression
    Thanks for the details! I was aware of webp. Although webp may produce smaller files than flif, decompressing/decoding webp files back is apparently not lossless(?) A test compression and decompression of the file 21dtRoundBlack21.png, using the webp options supplied in the previous post, produced these results -
    Original png file size: 15,235 bytes
    Webp compressed file size: 7,116 bytes
    Decompressed webp file size: 13,515 bytes
    So the decompressed file does not match the original file in size. This is NOT what is wanted. What is wanted is a decompressed file with EXACTLY the same file size as the original. The png files are intended to be archived, and then restored later exactly to how they were originally. This is to ensure that these png files are the correct size, as programs using these files expect them to be. If png files are not the right size, programs may crash (especially if the png files are oversized).
    14 replies | 518 view(s)
  • brispuss's Avatar
    1st December 2019, 11:09
    OK. So how exactly do you use mfilter (which I've downloaded and installed under 7-zip directory) with a "dummy" lepton (for no compression)? I can't use cmix because I only have 8GB of RAM; I believe for Windows 32GB of RAM is required(?) Anyway cmix always crashes on compression tests, probably due to lack of RAM.
    19 replies | 834 view(s)
  • Shelwien's Avatar
    1st December 2019, 10:20
    Shelwien replied to a thread FLiF Batch File in Data Compression
    It's an indirect suggestion to use webp lossless instead of flif: https://storage.googleapis.com/downloads.webmproject.org/releases/webp/libwebp-1.0.3-windows-x64.zip
    cwebp.exe -z 9 -exact 21stRoundBlack12.png -o 1
    dwebp.exe -nofancy -nofilter -nodither 1 -o 2
    It seems to work on your "buggy" file and even produces a smaller output file.
    14 replies | 518 view(s)
  • Mauro Vezzosi's Avatar
    1st December 2019, 00:57
    "Lossless Data Compression with Transformer" compression rates on Silesia benchmark: https://openreview.net/forum?id=Hygi7xStvS
    118 replies | 14425 view(s)
  • byronknoll's Avatar
    1st December 2019, 00:21
    I just found this paper: "Lossless Data Compression with Transformer". Using transformers they achieve similar compression ratio as cmix on enwik8.
    118 replies | 14425 view(s)
  • Kirr's Avatar
    30th November 2019, 14:35
    I started measuring memory consumption, and it provided one more piece to this puzzle: gzip appears to have the smallest memory footprint among 40 compressors. E.g.: decompression memory used by strongest setting of each compressor: http://kirr.dyndns.org/sequence-compression-benchmark/?d=all&cs=1&cg=1&cc=1&com=yes&src=all&nt=4&only-best=1&bn=1&bm=ratio&sxm=datasize&sxl=log&sym=dmem&syl=log&button=Show+scatterplot I finally started benchmarking fastq compressors. Currently including: Leon, BEETL, GTZ, Quip, DSRC, HARC, SPRING, fastqz, fqzcomp, LFQC (with a few more in the pipeline). This is not easy since each compressor has its own unique set of limitations, quirks and bugs. Even though this may not be an apples-to-apples comparison, still I find it very interesting. Eventually I'll have to add fastq test data too. Example comparison on a 9.22 MB genome: Compression ratio vs Decompression speed. (other measures and data are selectable).
    9 replies | 697 view(s)
  • brispuss's Avatar
    30th November 2019, 05:07
    brispuss replied to a thread FLiF Batch File in Data Compression
    FLiF decoding speed? I haven't really paid any attention to that. The issue here seems to be with FLiF being unable to encode/convert some png images to flif format for whatever reason(s). What "speed" of encoding/decoding has to do with this issue is unknown.
    14 replies | 518 view(s)
  • Shelwien's Avatar
    30th November 2019, 03:07
    You don't really need to "install" Windows - up to WinME it's just a normal DOS program, so it's possible to copy already-installed files and just run the kernel as a DOS .exe. I'm attaching here my realmode win3 kit - the NE file to run is configured in the system.ini "shell=" line, and you have to run dosx.exe to start it. It runs, but I didn't manage to make it fully work, since it seems to be a VB3 program, so there're lots of dependencies (dlls etc). I tried to find some based on error messages, but it doesn't like the "MCI.VBX" that I found. Also I think it's related to this site: http://www.superhry.cz/temata/retro-hry
    5 replies | 214 view(s)
  • Jyrki Alakuijala's Avatar
    30th November 2019, 01:07
    What are your thoughts on the FLIF decoding speed? (When I designed WebP lossless, I made quite a few density compromises to keep the pretty good decoding speed, and I'm just generally curious if the decoding speed is not an issue in your use case.)
    14 replies | 518 view(s)
  • JamesWasil's Avatar
    30th November 2019, 00:55
    "I do hereby reckon I will not replace Lena for the image testing of the industry."
    8 replies | 353 view(s)
  • JamesWasil's Avatar
    30th November 2019, 00:43
    Unfortunately, you'll either have to use a virtual Windows 3.11 image via VirtualBox or VMware, or install it to DOSBox to run it. I still have the EXE header format from the Microsoft tech sheet from years ago for the modified exes, and basically the Windows EXE is a stub loader. It looks to see if Windows is running, and if not it executes a small code segment that says "This requires Microsoft Windows to run" or similar, and exits. If Windows is running, then it jumps to the rest of the exe and runs the binary, making API calls and other things with GDI and the Windows environment.
    The only way to get that to run on DOS would be to create a virtual environment that emulates Windows enough over DOS to execute it. Basically, a version of WINE for DOS or FreeDOS, and a TSR that captures the exec call from the stub loader and redirects it to the environment to run (that, or make the virtual environment in DOS say that it is Windows when it isn't, and then forward all the calls from the Windows EXE to that WINE-like DOS environment to get it to run as a standalone exe).
    Short answer: programs written for Windows only have a <1k EXE stub that says it needs Windows. The program itself only runs when Windows (or something that emulates Windows?) is present. This is probably one of the finishing touches that FreeDOS still needs imo, but I'm not sure if we'll ever see it, even though it's entirely possible, since Windows 3.1 and before ran over DOS as a shell rather than an operating system (as did Windows 95 and 98). NT was/is a separate kernel, but you could probably make an NTVDM emulator that runs NT-based EXEs over 16-bit DOS environments with a little effort.
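    The stub arrangement is easy to see in the raw bytes: every EXE starts with the DOS 'MZ' header, and the dword at offset 0x3C points to the "new" NE/PE header, if present. A rough classifier sketch (my own illustration, not from the tech sheet):

```python
import struct

def exe_kind(data: bytes) -> str:
    """Classify an EXE as plain DOS, 16-bit Windows (NE) or Win32 (PE)."""
    if data[:2] != b"MZ":              # every DOS/Windows EXE starts with 'MZ'
        return "not an EXE"
    if len(data) >= 0x40:
        # offset 0x3C holds the file offset of the NE/PE header (0 in old DOS EXEs)
        (new_off,) = struct.unpack_from("<I", data, 0x3C)
        if 0 < new_off <= len(data) - 2:
            sig = data[new_off:new_off + 2]
            if sig == b"NE":
                return "16-bit Windows (NE), DOS stub in front"
            if sig == b"PE":
                return "Win32 (PE), DOS stub in front"
    return "plain DOS"
```

    Plain DOS EXEs usually leave 0x3C zeroed, but not always, so a robust tool would sanity-check the pointed-to header further - this sketch only illustrates the layout.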
    5 replies | 214 view(s)
  • brispuss's Avatar
    30th November 2019, 00:37
    brispuss replied to a thread FLiF Batch File in Data Compression
    Thanks for the tests and comment. Thinking about posting an "issue" at the github site, but it seems that development of flif may have stopped? So it may not be worthwhile posting the issue there. In the meantime I'm using different procedures to compress png images by using precomp first and then using paq8px. . or fast paq8 compressor. Paq8px . . compression is very slow though. So I'm currently using fpaq8 v6 which is much faster and almost provides the same compression.
    14 replies | 518 view(s)
  • Shelwien's Avatar
    30th November 2019, 00:19
    Shelwien replied to a thread FLiF Batch File in Data Compression
    1) My build of flif doesn't crash, but decodes a zero-size file: Z:\098>flif.exe -e 21stRoundBlack12.png 1.flif libpng warning: iCCP: known incorrect sRGB profile Z:\098>flif.exe -d 1.flif 1.png libpng error: known incorrect sRGB profile 2) It works after "convert.exe 21stRoundBlack12.png 1.png" (imagemagick) Maybe post an issue? https://github.com/FLIF-hub/FLIF/issues
    14 replies | 518 view(s)
  • CompressMaster's Avatar
    29th November 2019, 23:58
    I wanted to execute an old CD menu app that contains good old DOS games. DOSBox says that it requires Microsoft Windows (it's designed for Windows 95/98, but I think it should also work on Win 3.11), so it's not exactly for DOS. Installing virtual Win 3.11 - hmm, I don't want that. It's possible to install Windows 3.11 directly in DOSBox and execute the app from that environment, but if there are more compact ways, I'd be glad. The file, along with a printscreen, is attached. Thanks.
    5 replies | 214 view(s)
  • Gotty's Avatar
    29th November 2019, 23:49
    Black Friday shopping? Aha, so it's seriously overclocked? Nice! What (single thread and cpu mark) score does it get by cpubenchmark.net?
    204 replies | 120748 view(s)
  • suryakandau@yahoo.co.id's Avatar
    29th November 2019, 18:42
    bigm_suryak v3 using level 6. I did not try level 9 or 10 because my laptop has only 16 GB. bigm_suryak v4 in progress :)
    18 replies | 1101 view(s)
  • suryakandau@yahoo.co.id's Avatar
    29th November 2019, 18:40
    bigm_suryak v3: enwik8 17,858,121 bytes, compression time 132776.36 s
    18 replies | 1101 view(s)