Activity Stream

• Today, 19:53
This is known as the Schönhage-Strassen algorithm, and "long enough" seems to be somewhere above 10,000 decimal digits. It has an upper bound of O(n * log n * log log n), worse than O(n * log n), and it is mentioned in the paper, so there does seem to be an improvement indeed.
3 replies | 29 view(s)
• Today, 14:46
Probably an old trick. There is an analysis of long multiplication in Knuth, "The Art of Computer Programming", Vol. 2. The observation is that multiplication of long numbers has some commonalities with convolution, and thus a multiplication can be replaced by a Fourier transform, a digit-wise multiplication, and an inverse transform. The Fourier transform is O(n*log(n)), thus multiplication can be done with the same complexity. I'm not sure whether there was an analysis of practicality, but if the numbers are "long enough", it will pay off. It's just unclear what "long enough" is.
3 replies | 29 view(s)
• Today, 08:13
Nice, but misleading; it's about multiplication of long numbers rather than the common integer types used in programming. Also, the O(n*log(n)) complexity is defined for Turing machines; who knows how it would translate to CPU instructions, and what the minimum n is where it becomes useful.
3 replies | 29 view(s)
• Today, 07:49
Here is the paper as a PDF, which proposes 2 different methods to achieve this: "Integer Multiplication In Time O(n log n)" https://hal.archives-ouvertes.fr/hal-02070778/document
3 replies | 29 view(s)
• Today, 07:01
t64 replied to a thread paq8px in Data Compression
paq8px is the best :D
Processor: i5-6200U (dual core with 4 threads, 2.80 GHz)
Software to compress: Postal: Classic and Uncut v1.05 + Postal 2 v1409 = 1.8 GiB
Filelist: https://pastebin.com/KXyufSYP

689.9 MiB in 37 minutes: taskset -c 2,3 wine uharc.exe a -ed- -mx -md32768 -mm+ -d2 -o+ -y+ -r+ -pr "POSTAL1&2.uha" "POSTAL1&2/*"
688 MiB in 11 minutes: taskset -c 2,3 7z a -t7z -m0=lzma -mx=9 -mfb=64 -md=404m -ms=on "POSTAL1&2.7z" "POSTAL1&2"
674.1 MiB in 13 minutes: taskset -c 2,3 wine arc.exe create -mx -ld1600m "POSTAL1&2.arc" "POSTAL1&2"
672.1 MiB in 31 minutes: taskset -c 2,3 arc create -mx -ld1600m "POSTAL1&2.arc" "POSTAL1&2"
627.6 MiB in 1 hour and 21 minutes: taskset -c 2,3 zpaq a "POSTAL1&2.zpaq" "POSTAL1&2" -m5
511.3 MiB in 4 days: taskset -c 2,3 paq8px -9b @FILELIST "POSTAL1&2.paq8px182fix1" (Time 345805.70 sec, used 4642 MB (4868167519 bytes) of memory)

Compared the Postal 2.exe sha256 from the decompressed archive with the original uncompressed file, and the sha256 matches.
1730 replies | 482186 view(s)
• Today, 03:50
39 replies | 2273 view(s)
• Yesterday, 11:13
Yes, that is what I checked this morning: the header of the compressed files. The files patched with the new strings for the game use FILESIZE_UNCOMP_DATA + COMP_RAW_DATA. The game uses only the header of the uncompressed data, but I was using the one with compressed data. So this was my mistake: I used an incorrect file size, and that's why some text was not used, because the decompression was cut short. I've changed it and now it is working perfectly. I apologize for my mistake. Thanks a lot for your help. I will check the information you've provided anyway; it could be interesting to discover the algorithm that matches the original compressed data. Regards.
9 replies | 275 view(s)
• Yesterday, 05:11
> I've adapted the algorithm that is in the sources of SCUMMVM

So is it scummvm that has problems with decoding of modified resources? If so, then my encoder itself shouldn't be the problem. What about the header then? "FED8_COMP.IW" had a header with 16-bit unplen/complen fields; do you update these? Also, I found this (decompiled source attached): https://filetrip.net/nds-downloads/utilities/download-codec-lzss-ds-1-0-f23640.html
9 replies | 275 view(s)
• Yesterday, 03:05
The code of the first post is implemented as is. It's not an executable with the decoder; it's public. I've adapted the algorithm that is in the sources of SCUMMVM to my tool to decompress the files I need to translate, although the original packed file of the game has a lot of subfiles: https://github.com/scummvm/scummvm/blob/master/engines/startrek/lzss.cpp

As far as I've seen, "offsetlen & 0xF" only marks the length. So the max byte length of a match, from "Length = (offsetlen & 0xF) + THRESHOLD;", is 18 bytes, I assume (this is 15 + THRESHOLD, which is 3); the offset comes from "Offset = bufpos - ((offsetlen >> 4) & (N - 1));".

I have done some changes in lzss_rdf_v0, like:

for (i = 0; i < 8; i++) {
    if (inpptr >= inp + inpsize) break;
    l = inp + inpsize - inpptr;
    if (l > 15 + 3) l = 15 + 3;
    search_pos = 0;
    ml = 0; // max match len
    mj = 0; // max match pos
    for (search_pos = 0; search_pos < pos; search_pos++) {
        //for( r=0; r<l; r++ ) if( inpptr!=inp ) break;
        for (r = 0; r < l; r++)
            if (inpptr != inp) { break; }
        if (r > ml) { ml = r; mj = search_pos; }
        if (r >= THRESHOLD) break;
    }
    if (ml < 3) { // literal
        c = *inpptr++;
        win = c;
        *outptr++ = c;
        flag |= 1 << i; // set literal flag
    } else { // match
        // dddddddd  //l = (d & 15) + 3;
        // DDDDllll  //d = c + (d>>4)*256;
        // ddddllll
        // DDDDDDDD
        // offsetlen = (byte)compdata + ((byte)compdata << 8);
        // Length = (offsetlen & 0xF) + THRESHOLD;
        // Offset = bufpos - ((offsetlen >> 4) & (N - 1));
        mj = search_pos - mj + 1;
        *outptr++ = (mj << 4) + (ml - 3);
        *outptr++ = byte(mj >> 4);
        for (j = 0; j < ml; j++) {
            c = *inpptr++;
            win = c;
        }
    }
} // for
*pflag = flag;

It helps a bit with the "1F 00"s, but I can not calculate the offsets that are near the end. I'm trying to calculate them somehow.
9 replies | 275 view(s)
• Yesterday, 01:51
> encodes/decodes perfectly, even with my decoder algorithm of the first post,
> but the game misses some lines I can see at the beginning.

All kinds of subtle differences are possible if your decoder from the first post doesn't work exactly like the one in the game.

> There is this code:
> 1F 00 1F 00 F0 1F 00 1F 00 1F 00 1F 00

maxlen is 18, so it uses a series of (-1,18) matches to encode a long string of zeroes.

> If I use the (r >= ml) -the commented-, the result is more similar than (r > ml),

On this file, yes, but on your first sample there are more matches chosen with the longest distance. In any case, I don't think this is the problem, unless len=0xF has some special meaning (like maybe it's decoded not as 15+3 but as 256 or something).

> PD: If you need some other file, I can pass it to you.
> There are smaller files which can help better to solve this issue.

Either capture the output of the game's decoder, or give me the executable that contains the decoder.
9 replies | 275 view(s)
• 18th October 2019, 23:23
Thanks again, I will check your results and let you know. I want to answer some of your points though:

* I saw the QuickBMS tool mentioned in the thread you posted. In fact, I had checked the algorithm with the EI, EJ params, but I thought it was a very complicated method. It did not work for me. It worked with trees, I think, while the one you passed me works with an array, like the decoder one.

* About the translated file, maybe I expressed badly what I wanted to say. I also thought what you've said, that maybe I had a problem with the translated file and its offsets. The file with the new translated texts is obviously bigger, maybe 5000-6000 kb. If I use the "fake" compressor (so it is even bigger than the uncompressed one) and use it in the game, it works well, and I can see the texts from the beginning. If I use the encoder v0 which we are trying to decipher here, it encodes/decodes perfectly, even with my decoder algorithm of the first post, but the game misses some lines I can see at the beginning. Strangely, it seems to decode the file somehow; if it was not correct, the game would crash. Look at offset 0x75A0 in the *.NEW file (which is called from offset 0xA64 in the file). From here I put some translated lines, which are called inside the code (with updated pointers), aligned to 16 bits to clarify the texts. Maybe too many zeroes between lines produce some garbage in the encoder/decoder. Some other lines, the ones that use the original offsets to the texts, seem to work well. I think that from some point, the executable decodes the data differently than expected.

* Lastly, there is this code: 1F 00 1F 00 F0 1F 00 1F 00 1F 00 1F 00, which is all the bunch of zeroes that there are in the original file. 1F = 0000 0001 1111 (so Length = 15, Offset = pos (of the match in the history buffer/window?) - 1). But it seems that this position is always the same; it seems to find the match absolutely and not relatively, maybe?
There are things that I do not understand about this compression, like the window size. Is it movable? Does it refresh (reset the data) and begin again (maybe the window resets when full)? Does it reuse the previous data/dictionary? I understand the LZSS algorithm minimally, and I think that the matches are of 16 bytes, can it be? I'm really a newbie at this!! ;)

PD: If you need some other file, I can pass it to you. There are smaller files which can help better to solve this issue.

=========== EDIT ==============

I have attached a small file in which you can easily see the changes between the original file and the one from the lzss_rdf_v0 compressor. If I use the (r >= ml) -the commented one-, the result is more similar than with (r > ml) -except for one match offset near the end-.

if (r > ml) { ml = r; mj = j; }
/* -->>
if (r >= ml) { ml = r; mj = j; }
*/
9 replies | 275 view(s)
• 18th October 2019, 21:58
> But the game is having problems somehow interpreting the data.
> I think that the compression must be adjusted somehow.

There are some choices for encoder implementations:
1. allowing overlapped matches (dist < len)
2. allowing wrapped matches (ones that start at the end of the window buffer)
3. whether the window buffer is updated during match copying or after

Based on your decoder it's "yes" for all 3, but that only means anything if the reverse-engineered decoder works exactly like the game code. Normally I'd suspect a hashtable-based matchfinder, which is the usual cause of different encoder output, but here it seems that longer distances are preferred in the compressed files, which is kinda incompatible with hashchain matchfinders.

> I include also, originalsamples.zip, with the original (not translated) files that came with the game.
> It is very odd, but if I use the fake compressor, they work well with the game.

Is it possible that the translation breaks some internal structure, or that the translated file is longer and doesn't fit in some buffer?

> Somehow I've seen that the 0x1F (0001 1111) is used normally when a bunch of the same bytes are correlative.

Some formats (eg. LZ4) have special values which enable different behavior (eg. read extra bytes for length or distance). https://github.com/lz4/lz4/blob/dev/lib/lz4.c#L1895

> I will try to check it and hope to find a solution, but I'm very bad when programming bitwise operations.

Here I added some match logging:

008F: 0032/005D/7 0032/005D/7 004A/0045/7 005C/0033/7 0072/001D/7

The first field is the absolute position in the unpacked file; each following pos/dist/len triple is a match candidate, and the first triple is the actually decoded match. But I don't understand the difference (ie how the encoded match is chosen from the available options). Could be a "lazy" matchfinder, or some hashtable, or anything.
9 replies | 275 view(s)
• 18th October 2019, 21:27
Nah, that was a screwup on my part: I accidentally did 5-bit inputs instead of 10-bit, so we had a lot of duplicates. The correct one is just a larger version of the 4-bit one.
25 replies | 931 view(s)
• 18th October 2019, 21:09
I'm not sure exactly what you are graphing here, but it looks bad; we generally want white noise.
25 replies | 931 view(s)
• 18th October 2019, 20:41
I used 16-bit values, and if I only use the first lane, everything is roughly even. To test both 16-bit accumulators, it would require:
* ~86 GiB of storage for a CSV with an average 21.7-byte line length
* 32 GiB of memory to store the counter table ((2 ^ 32) * sizeof(uint64_t))
* A lot of processing power
* A lot of time

I currently have this:
* 147 GB of available disk space on a mediocre SSD
* 8 GB RAM
* A 2.0 GHz i7-2635QM in a 2011 MacBook Pro, which has thermal management GPUs die for
* Zero patience

I think I could do it, right?
25 replies | 931 view(s)
• 18th October 2019, 19:27
XXH3_128b should have a slightly better distribution than XXH3_64b. I would not expect the difference to be huge, just visible and measurable.
25 replies | 931 view(s)
• 18th October 2019, 17:53
Well, it seems I called victory too soon. I attach a new file I'm working on (translated): samplefile2.zip, in uncompressed and compressed format (with lzss_rdf_v0). It seems that the lzss_rdf_v0 algorithm compresses and decompresses the file well. Even the algorithm in the first post, which supposedly worked with the game and which I'm using, works well when decompressing. I assume that this algorithm may work with different compressed files if the main params are the same, like window size, the N param, etc... But the game is having problems somehow interpreting the data. I think that the compression must be adjusted somehow. I include also originalsamples.zip, with the original (not translated) files that came with the game. It is very odd, but if I use the fake compressor, they work well with the game. Somehow I've seen that 0x1F (0001 1111) is used normally when a bunch of the same bytes are consecutive. In the lzss_rdf_v0 tool, this changes a bit to 0xnF. This is represented at offset 0x8A of both compressed files (the original of the game and the one compressed with lzss_rdf_v0). I will try to check it and hope to find a solution, but I'm very bad at programming bitwise operations.
9 replies | 275 view(s)
• 18th October 2019, 13:21
TurboPFor Integer Compression: the fastest, now even faster
- New SIMD codecs
- Benchmark: Intel
- Benchmark: ARM
See the tweet for more info
42 replies | 18896 view(s)
• 18th October 2019, 12:41
Shelwien, thank you very much!!! I have tested the compressor you've modified in both games that use it: one is Star Trek 25th Anniversary and the other Star Trek Judgment Rites. The two tests I have done worked flawlessly. I don't know if I will have any problems with any other compressed file, but I have very high expectations. I hope you will accept that I give credit to you for the encoding algorithm in the Readmes (although in Spanish) for your help. I will post if I have any problem in the future, but I think that if the algorithm is working with a bitmap file of 64008 bytes and with the FEDS.RDF file (more complex, with opcodes for texts and actions), the rest of the files will work also. I'm very grateful to you. Regards.
9 replies | 275 view(s)
• 18th October 2019, 02:46
Not sure if I already encountered it, but here I modified the encoder from this: https://encode.su/threads/3140-LZ98?p=60866&viewfull=1#post60866 The size seems to be similar, but not an exact match.
9 replies | 275 view(s)
• 18th October 2019, 02:43
Intriguing. It seems like XXH3_128b (swapped acc + input) has a perfect distribution. Each value appears exactly 1024 times according to my output. @Cyan, is it correct that XXH128 has an even distribution? Wait, hold on, there is a huge error in my code - I'm only testing the low bits lol
25 replies | 931 view(s)
• 18th October 2019, 02:33
Here's 10-bit instead of 4-bit ZrHa: Chart. Here's XXH3_64b: Chart. For the key, I used different chunks of kSecret on each row. About 1/4 of the values end up being zero in ZrHa, but XXH3 has a fairly even distribution.
25 replies | 931 view(s)
• 18th October 2019, 00:37
9 replies | 275 view(s)
• 17th October 2019, 23:00
@easyaspi314 Yeah, that is the expected pattern: there are only 256 different possible states after mixing in the plaintext, and you are generating each of those states exactly 256 times. If you want to do something fun, take a slightly bigger version of the function, take each of the possible states and iterate a number of times with a fixed input; then we can do stats on the resulting distribution. Do things get worse/better with a different fixed input? I'd say probably, but I don't know.
25 replies | 931 view(s)
• 17th October 2019, 22:44
Yeah, Spooky v2 is broken. You can generate high-probability collisions by changing a single bit in the final 8 bytes of the second-to-last block, and then undo that change by flipping the two affected bits in the final 16 bytes of the last block. There is a pattern switch for the handling of the last block, and it means that data gets ingested into the same spot twice with virtually no mixing in between.
25 replies | 931 view(s)
• 17th October 2019, 22:26
ZrHa64_update when scaled down to 4 bits instead of 64 bits (with 2 states and 2 inputs = 256x256), generated with libpng instead of terminal escape codes and a screenshot: it seems that some values just don't occur, everything occurs a multiple of 256 times, and there is a huge favoring of 136. Chart of value occurrences. Here is the code to generate it.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <png.h>

// 4-bit ZrHa64_update
void ZrHa4_update(uint8_t state[2], uint8_t data[2])
{
    uint8_t x0 = (state[0] + data[0]) % 16;
    uint8_t x1 = (state[1] + data[1]) % 16;
    uint8_t m0 = ((x0 % 4) * (x0 / 4)) % 16;
    uint8_t m1 = ((x1 % 4) * (x1 / 4)) % 16;
    uint8_t rot1 = ((x1 >> 2) | (x1 << 2)) % 16;
    uint8_t rot0 = ((x0 >> 2) | (x0 << 2)) % 16;
    state[0] = (m0 + rot1) % 16;
    state[1] = (m1 + rot0) % 16;
}

// Shameless copy of the libpng example code.
int main(void)
{
    int width = 256, height = 256;
    int code = 0;
    FILE *fp = NULL;
    png_structp png_ptr = NULL;
    png_infop info_ptr = NULL;
    png_bytep row = NULL;

    // Open file for writing (binary mode)
    fp = fopen("xormul.png", "wb");
    if (fp == NULL) {
        fprintf(stderr, "Could not open file xormul.png for writing\n");
        code = 1;
        goto finalise;
    }
    // Initialize write structure
    png_ptr = png_create_write_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
    if (png_ptr == NULL) {
        fprintf(stderr, "Could not allocate write struct\n");
        code = 1;
        goto finalise;
    }
    // Initialize info structure
    info_ptr = png_create_info_struct(png_ptr);
    if (info_ptr == NULL) {
        fprintf(stderr, "Could not allocate info struct\n");
        code = 1;
        goto finalise;
    }
    // Setup exception handling
    if (setjmp(png_jmpbuf(png_ptr))) {
        fprintf(stderr, "Error during png creation\n");
        code = 1;
        goto finalise;
    }
    png_init_io(png_ptr, fp);
    // Write header (8 bit colour depth)
    png_set_IHDR(png_ptr, info_ptr, width, height, 8,
                 PNG_COLOR_TYPE_RGB, PNG_INTERLACE_NONE,
                 PNG_COMPRESSION_TYPE_BASE, PNG_FILTER_TYPE_BASE);
    png_write_info(png_ptr, info_ptr);
    // Allocate memory for one row (3 bytes per pixel - RGB)
    row = (png_bytep) malloc(3 * width * sizeof(png_byte));

    // Write image data
    int x, y;
    // Count the outputs
    int outputs[256] = {0};
    for (y = 0; y < height; y++) {
        for (x = 0; x < width; x++) {
            // Split up the nibbles
            uint8_t state[2] = { (uint8_t)(y % 16), (uint8_t)((y / 16) % 16) };
            uint8_t data[2]  = { (uint8_t)(x % 16), (uint8_t)((x / 16) % 16) };
            // Run our downscaled update routine
            ZrHa4_update(state, data);
            // Combine the state back together
            uint8_t value = state[0] + (state[1] << 4);
            // Log the occurrence
            ++outputs[value];
            // Insert the pixel, with the R, G, and B being the outputted value.
            row[x * 3] = row[x * 3 + 1] = row[x * 3 + 2] = value;
        }
        png_write_row(png_ptr, row);
    }

    // Dump CSV of all of the outputs to stdout.
    printf("value,amount\n");
    for (int i = 0; i < 256; i++) {
        printf("%4d,%4d\n", i, outputs[i]);
    }

    // End write
    png_write_end(png_ptr, NULL);

finalise:
    if (fp != NULL) fclose(fp);
    if (info_ptr != NULL) png_free_data(png_ptr, info_ptr, PNG_FREE_ALL, -1);
    if (png_ptr != NULL) png_destroy_write_struct(&png_ptr, (png_infopp)NULL);
    if (row != NULL) free(row);
    return code;
}
25 replies | 931 view(s)
• 17th October 2019, 17:14
Speaking of assembly, I have spooky64v2-x86_64.S, which closely corresponds to a slightly improved C version. The compiler cannot avoid register spilling, which in fact can be narrowly avoided (12 state registers with 15 general-purpose registers), so when spelled out in assembly, the loop runs about 10% faster (I don't remember the exact figure). As you can see, there is a fair amount of boilerplate, some of it to support both the System V and Microsoft x64 calling conventions, and some of it to make the rest of the code less error-prone. Then I started to have doubts about SpookyHash. Leo Yuriev reports that it has "collisions with 4bit diff" (I don't know the details), but more importantly, the way the last portion of data is fed atop the state that's not been mixed, contrary to the trickle-feed theory, looks very suspicious (this is possibly the same issue that leads to the 4-bit diff collisions). If it wasn't Bob Jenkins, I would call it a blunder. I also wrote a JIT compiler to explore ARX constructions like SpookyHash, and specifically to find better rotation constants that maximize avalanche. Measuring avalanche is not a trivial thing though: some of it comes from the fact that the state must be sufficiently random, and some from the strength of the construction proper... I must add that I don't understand well all aspects of Bob Jenkins' theories.
25 replies | 931 view(s)
• 17th October 2019, 03:11
Ok... tried it :rolleyes:

Segmentation fault (core dumped)

And before that:

Cygwin WARNING: Couldn't compute FAST_CWD pointer. This typically occurs if you're using an older Cygwin version on a newer Windows. Please update to the latest available Cygwin version from https://cygwin.com/. If the problem persists, please see https://cygwin.com/problems.html

The opensuse version from the berelix downloads also crashed, but at least it could initialize to the point of showing the options. I was thinking about downloading an old liveCD with build-essentials from that time and trying to compile it there. Anyway, it seems that the code itself isn't very mature, so I don't know if it's worth it.
156 replies | 81353 view(s)
• 16th October 2019, 23:09
The ideal case for non-reversible mixing is a random mapping function from one state to another, i.e. for 128-bit states there exist (2^128)^(2^128) different (mostly non-reversible) mapping functions (of which (2^128)! are reversible); if you pick a random one of those functions, you'll expect the state-space to deteriorate by 29 bits (not 22, as stated earlier) in a billion applications. This number can be found by iteration: if at iteration x, n out of p different states are possible, then at iteration x+1 the expected number of possible states is p*(1 - ((p-1)/p)^n). By experimentation, this turns into the bit loss being asymptotic to log2(iteration count) - 1. But if your mapping function is not an ideal random pick, and it has some distinguisher (like being equivalent to a simple series of arithmetic operations), then the only thing we can really say theoretically is that it is unlikely to be better in this regard than a random pick amongst the functions. Your test functions stand out by entering loops faster than expected for a random function, and especially by always (?) entering period-2 loops.
25 replies | 931 view(s)
• 16th October 2019, 20:36
Thank you Shelwien. I'll give it a try. Although what I really wanted was the program working as an archiver.
156 replies | 81353 view(s)
• 16th October 2019, 17:39
XOR has fewer patterns than ADD (which is almost symmetrical); here is an 8x8 table, visualized. Edit: here's 64x64 to really show the pattern. ADD is really pretty, though.
25 replies | 931 view(s)
• 16th October 2019, 17:00
True, but there are many drawbacks to writing in assembly:
* It is platform-specific (and even subarch-specific)
* It can be somewhat difficult to follow with multiple lanes
* Trying to figure out the best way to inline/unroll is annoying
* It can't be inlined
* It can't be constexpr'd

I usually prefer using tiny inline assembly hacks, which, while they mess up ordering a bit, can usually help a lot, especially this magic

__asm__("" : "+r" (var));

which can break up patterns, murder SLP vectorization, move loads outside of loops, and more. It is like a temporary volatile.

For example, on ARMv7-A, vzip.32 (a.k.a. vtrn.32) modifies in place. If we use this on the even and odd halves of the vector (on ARMv7, Q-forms are unions of 2 half-width D-forms), we can end up with our vmlal.u32 setup in one instruction by vzipping in place, at the cost of clobbering data_key (which is ok). (Edit: Oops, the arrows are pointing to the wrong lanes. They should point to a >> 32 and b & 0xFFFFFFFF.) However, Clang and GCC can't comprehend an operation modifying 2 operands, and emit an extra vmov (a.k.a. vorr) to copy an operand. This line

__asm__("vzip.32 %e0, %f0" : "+w" (data_key));

forces an in-place modification. I only write things in assembly when I want tiny code or when I am just messing around. It is just a pain otherwise.
25 replies | 931 view(s)
• 16th October 2019, 16:36
I had some thoughts on non-reversible mixing. It must be bad if you only have a 64-bit state and you want a 64-bit result. Interleaving doesn't change this, if the lanes are independent. But that's not what we have with ZrHa_update. Since the lanes are "cross-pollinated", the irreversible mixing applies to the 128-bit state. So this might be all good if you only want a 64-bit result. And you can't get a decent 128-bit result anyway, because a single 32x32->64 multiplication isn't enough for it.

So what's "the mechanics" of non-reversible mixing? The internal state may "wear out" or "obliterate" gradually, but how and when does this happen? After being mixed, the state cannot assume certain values, about 1/e of all possible values. But as we feed the next portion of data, it seems that the state "somewhat recovers" in that it can assume more values. If the next portion is somewhat dissimilar to the mixed state, it is plausible to say that the state can assume any value again. (If we fed ASCII strings with XOR, this could not have flipped the high bit in each byte, but we feed with ADD.) Assuming that the state recovers, I can't immediately see how the non-reversible mix is worse than a reversible one. Can it be shown that the construction leaks, specifically more than 1 bit after a few iterations?

Of course, the worst case is that the state assumes progressively fewer values on each iteration, which happens if you feed zeroes into it. We can see how this happens in the scaled-down constructions with 8x8->16 or 16x16->32 multiplications.

#include <stdio.h>
#include <inttypes.h>

static inline uint32_t mix(uint32_t x)
{
    uint8_t x0 = (x >> 0);
    uint8_t x1 = (x >> 8);
    uint8_t x2 = (x >> 16);
    uint8_t x3 = (x >> 24);
    uint16_t m0 = x0 * x1;
    uint16_t m1 = x2 * x3;
    uint16_t y0 = m0 + (x2 << 8 | x3);
    uint16_t y1 = m1 + (x0 << 8 | x1);
    return y0 << 16 | y1;
}

int main()
{
    uint32_t x = 2654435761;
    while (1) {
        x = mix(x);
        printf("%08" PRIx32 "\n", x);
    }
    return 0;
}

The construction collapses in less than 2^10 iterations:

$ ./a.out |head -$((1<<9)) |tail
ecd22f4e
e13e0fc7
4a8afd8d
15a3b5e1
422aef14
3cee1fc3
05d9fae7
ba9bec37
ce6ea88a
c95ee32c

$ ./a.out |head -$((1<<10)) |tail
2900a500
002900a5
2900a500
002900a5
2900a500
002900a5
2900a500
002900a5
2900a500
002900a5

If you change the combining step to XOR, the construction collapses in under 2^14 iterations:

uint16_t y0 = m0 ^ (x2 << 8 | x3);
uint16_t y1 = m1 ^ (x0 << 8 | x1);

$ ./a.out |head -$((1<<13)) |tail
572eb816
2187191a
85ab0b7e
aeef26dc
cf067e54
2f9750a4
a46fbfe9
c273aea3
1d08f488
89bd881c

$ ./a.out |head -$((1<<14)) |tail
a400a400
00a400a4
a400a400
00a400a4
a400a400
00a400a4
a400a400
00a400a4
a400a400
00a400a4

Here's the 16x16->32 version, which collapses in under 2^25 resp. 2^30 iterations (with ADD resp. XOR; I'll spare you the outputs).

#include <stdio.h>
#include <inttypes.h>

static inline uint64_t mix(uint64_t x)
{
    uint16_t x0 = (x >> 0);
    uint16_t x1 = (x >> 16);
    uint16_t x2 = (x >> 32);
    uint16_t x3 = (x >> 48);
    uint32_t m0 = x0 * x1;
    uint32_t m1 = x2 * x3;
    uint32_t y0 = m0 + (x2 << 16 | x3);
    uint32_t y1 = m1 + (x0 << 16 | x1);
    return (uint64_t) y0 << 32 | y1;
}

int main()
{
    uint64_t x = 6364136223846793005;
    while (1) {
        x = mix(x);
        printf("%016" PRIx64 "\n", x);
    }
    return 0;
}

By extrapolation, the construction with 32x32->64 multiplication must collapse in about 2^60 iterations. Can it be shown that XOR as the combining step works better than ADD in the average case as well, rather than just in the worst case?

@NohatCoder, how did you calculate that 22 bits must be leaked after 1G iterations? Was that the average case or the worst-case analysis, or does the distinction not matter?
25 replies | 931 view(s)
• 16th October 2019, 15:02
It's no big deal to write an .S file in assembly if you can't cajole the compiler into emitting the right sequence of instructions.
25 replies | 931 view(s)
• 16th October 2019, 10:55
but nevertheless there is a gain ;)
555 replies | 289244 view(s)
• 16th October 2019, 09:52
Aniskin replied to a thread 7-Zip in Data Compression
because of
555 replies | 289244 view(s)
• 16th October 2019, 08:52
t64 replied to a thread paq8px in Data Compression
I have tested zpaq v7.15 with GTA IV & EFLC (31 GiB), with the following parameters:

taskset -c 2,3 zpaq a compressed.zpaq folder -m5

It ended with an 18.7 GiB file (after 14 hours and 45 minutes, on an i5-6200U), while FreeArc 0.67 produced a 19.2 GiB file (using the default Ultra settings).

With zpaq -m5 I also compressed the Postal 10th Anniversary Collectors Ed. Multi-platform (works on Windows, Mac and Linux) Repack (16.4 GiB) to only 4.1 GiB (in 5.7 hours), and that repack includes a 7.2 GiB .mdf image of the original game disc with old Postal 1 & Postal 2 versions, a 507.7 MiB .bin image file (the Music to Go Postal By CD), and 698.4 MiB of FLACs.

Other people compressed Wasteland 2 from 20.2 GiB to only 2.8 GiB with lrzip -z (which uses zpaq) (https://forums.inxile-entertainment.com/viewtopic.php?p=148864#p148864).

And I wanted to try paq8px and compare results to zpaq, because some people consider paq8px the best for producing the smallest files (https://www.reddit.com/r/compression/comments/8uy70j/is_freearc_or_kgb_better_at_compression/). I'm interested in getting the smallest files possible regardless of compression time, for backups of all kinds of data.

Thanks for the suggestion, I will try UltraARC then (it has precomp and srep) and compare the results with paq8px, zpaq and uharc.

This seems like a good alternative to password-protected archives from other software: compiling paq8px (so your binary is unique and only your binary can decode the archive) and using the '-e' option. Maybe I will :)
1730 replies | 482186 view(s)
• 16th October 2019, 08:50
Mfilter handling of the "tar" file in your package does not show good gain, and then I noticed that many of these jpg files are damaged.
555 replies | 289244 view(s)
• 16th October 2019, 06:54
Shelwien replied to a thread paq8px in Data Compression
Trying to repack games with any paq version is a bad idea. Not only would it take a lot of time and resources (note that decoding time is the same as encoding), but it also won't even let you estimate the best possible compression, since paq doesn't have any special handling for large files and compressed formats. I'd suggest using precomp, xtool, srep first. If you're really interested, you can apply paq8px to their output.

> What is the difference between paq8px and paq8pxd? I know they are different projects but I don't know what are the actual differences

Read the first posts here:
https://encode.su/threads/342-paq8px
https://encode.su/threads/1464-Paq8pxd-dict
These are the original developers of each branch and there's some description of the initial differences. At this point though it's hard to list the differences, since parts were added, removed, exchanged etc. You can do a benchmark of some small (~1Mb) files of various types and tell us.

> Could you explain what does option 'e = Pre-train x86/x64 model' do?

Afaik it uses the compressor exe itself to train the predictor. Thus a different version would be unable to decode the archive.

> Would using this option improve compression ratio with the files from the attached filelist?

It would help with exe/dll and .dylib. Oggs are already compressed and paq8px doesn't recompress them - you can try oggre or cdm instead.
1730 replies | 482186 view(s)
• 16th October 2019, 05:46
t64 replied to a thread paq8px in Data Compression
The problem was the gcc version, mine is 6.3.0; thanks for investigating the issue :) I used a VM with MX-Linux 19 beta 3 (with gcc 8.3.0) for compiling the binary and had no problems. I'm currently compressing the files of two games, POSTAL 1 and 2 (1.8 GiB): taskset -c 2,3 ./paq8px -9b @FILELIST -v -log "POSTAL1&2.paq8px182fix1.log" Will post the results later. P.D.: What is the difference between paq8px and paq8pxd? I know they are different projects but I don't know what are the actual differences. P.D.2: Could you explain what does option 'e = Pre-train x86/x64 model' do? Would using this option improve compression ratio with the files from the attached filelist?
1730 replies | 482186 view(s)
• 16th October 2019, 03:12
With some hacks I was able to build it on msys2 (Windows): http://nishi.dreamhosters.com/u/pcompress_v1.7z It doesn't want to work with the filesystem (creates an empty archive), but it seems to work in stream mode. Something like this:
cat * | pcompress -c adapt2 -l2 -t1 -p >..\test
pcompress.exe -d -p <../test >unp
156 replies | 81353 view(s)
• 15th October 2019, 23:55
Has anybody been able to compile this recently? I tried in both Manjaro and Ubuntu, but it keeps throwing errors. I had to downgrade openssl in Ubuntu and use a modified version of pcompress in Manjaro, and I should downgrade binutils too to make the linker work, but I don't want to go there yet. I also tried to find an old copy in my backups, but I didn't have one. It seems the author sadly abandoned the project; it was a very promising program. So... has anybody had any luck and wants to share their binary? Preferably with WavPack and bsc included. Thanks!!
156 replies | 81353 view(s)
• 15th October 2019, 23:15
Gotty replied to a thread paq8px in Data Compression
Lubuntu 19.04 64 bit: paq8px_v182fix1 compiled successfully. Your command for compiling looks identical to mine except for -fprofile-use (which I don't have). But it works, too (I have just tried). It's strange that compiling is successful in your environment but linking is not. You have linker errors complaining about a couple of static const/constexpr arrays. Could you verify the source file?
$ md5sum paq8px.cpp
1f7e2ee9eb3a8bba679a101db4aff46b paq8px.cpp
What is your gcc version? Mine is:
$ gcc --version
gcc (Ubuntu 8.3.0-6ubuntu1) 8.3.0
Edit: it could be that you have an older gcc. It looks like those static constexpr arrays would work only in newer compilers. Could you try upgrading your gcc package and see if it works?
1730 replies | 482186 view(s)
• 15th October 2019, 22:58
Shelwien replied to a thread 7-Zip in Data Compression
made an mfilter demo because apparently some people can't RTFM. http://nishi.dreamhosters.com/u/mfilter_demo_20191013.rar
555 replies | 289244 view(s)
• 15th October 2019, 09:40
29 replies | 1582 view(s)
• 15th October 2019, 03:02
t64 replied to a thread paq8px in Data Compression
Hello Gotty, The 31 GiB are multiple files :) I'm particularly interested in doing compression tests with videogames; I have several games under 2 GB, so if I manage to compile paq8px under Linux (or get a 64-bit Linux binary) I'd be glad to do tests with paq8px, even if it is a slow process (I just want to see how much I can compress these games, in order to reduce the disk space needed for backups). Best regards
1730 replies | 482186 view(s)
• 14th October 2019, 23:47
Gonzalo replied to a thread repack .psarc in Data Compression
You can always use a de-duplicator before 7z or RAR, like srep or freearc -m0=rep. If you have enough memory, I believe this last method to be better. FA also lets you sort the files in different ways to put similar ones closer together. Deduplication radically improves the overall speed and almost always improves the ratio, sometimes greatly, especially in big archives. OTOH, you can replace 7z with FA altogether. There is another project that seems great for this, but I haven't tried it yet: https://github.com/moinakg/pcompress In my personal case, I found the rep+fastlzma2 combination to be a perfect match for my needs. It usually gives me the same or better ratio than pure 7z but at least 2x faster, sometimes up to 20x faster.
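The reason a long-range de-duplicator pays off before the main compressor is window size: a duplicate that sits further back than the compressor's window is invisible to it. A sketch of the effect, with gzip's 32 KiB window versus xz's multi-MiB dictionary standing in for "plain archiver" versus "rep + archiver" (sizes in the comments are what I'd expect, not measured claims):

```shell
# A 1 MiB block duplicated 1 MiB later: gzip's 32 KiB window cannot reach the
# first copy, while xz's default 8 MiB dictionary matches it almost entirely.
# This is the same reason srep / freearc -m0=rep helps before 7z or RAR.
head -c 1048576 /dev/urandom > /tmp/block.bin
cat /tmp/block.bin /tmp/block.bin > /tmp/doubled.bin   # exact duplicate, 1 MiB apart
gzip -9 -c /tmp/doubled.bin > /tmp/doubled.gz          # roughly 2 MiB: window too small
xz   -6 -c /tmp/doubled.bin > /tmp/doubled.xz          # roughly 1 MiB: duplicate matched
echo "gzip: $(stat -c%s /tmp/doubled.gz) bytes, xz: $(stat -c%s /tmp/doubled.xz) bytes"
```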
4 replies | 382 view(s)
• 14th October 2019, 22:25
Gotty replied to a thread paq8px in Data Compression
Hello t64, Is the 31 GiB one file or multiple files? Currently paq8px does not support files over 2 GB. Also, paq8px is quite slow: compressing that amount would take around 2-5 days (depending on your cpu and memory speed, and other programs running simultaneously). Anyway, I'll try to figure out what's wrong with compiling it (it should work) and come back to you soon.
1730 replies | 482186 view(s)
• 14th October 2019, 21:46
t64 replied to a thread paq8px in Data Compression
Hello, I'm doing tests compressing GTA IV & Episodes From Liberty City (31 GiB in total). Tried zpaq with -m5 and the games were compressed to 18.7 GiB. Now I want to see if I can achieve better compression with paq8px, but I'm failing to compile it under Linux (MX-Linux 18.3, 64 bit):
$ g++ -s -fno-rtti -fwhole-program -static -std=gnu++1z -O3 -m64 -march=native -Wall -floop-strip-mine -funroll-loops -ftree-vectorize -fgcse-sm -fprofile-use paq8px.cpp -opaq8px.exe -lz
/tmp/ccieukZA.o: In function `StateMap::StateMap(Shared const*, int, int, int, bool)':
paq8px.cpp:(.text+0x4539): undefined reference to `StateTable::State_table'
/tmp/ccieukZA.o: In function `ContextMap::mix(Mixer&)':
paq8px.cpp:(.text+0xe1cd): undefined reference to `StateTable::State_table'
/tmp/ccieukZA.o: In function `IndirectMap::update()':
paq8px.cpp:(.text+0x1a94e): undefined reference to `StateTable::State_table'
/tmp/ccieukZA.o: In function `FrenchStemmer::Stem(Word*)':
paq8px.cpp:(.text+0x1c7cd): undefined reference to `FrenchStemmer::TypesExceptions'
/tmp/ccieukZA.o: In function `ContextMap2::update()':
paq8px.cpp:(.text+0x26da9): undefined reference to `StateTable::State_table'
/tmp/ccieukZA.o: In function `ContextMap2::mix(Mixer&)':
paq8px.cpp:(.text+0x27904): undefined reference to `StateTable::State_table'
paq8px.cpp:(.text+0x27997): undefined reference to `StateTable::State_group'
/tmp/ccieukZA.o: In function `ContextMap::update()':
paq8px.cpp:(.text+0x3246d): undefined reference to `StateTable::State_table'
/tmp/ccieukZA.o: In function `dmcModel::st()':
paq8px.cpp:(.text+0x41e88): undefined reference to `StateTable::State_table'
paq8px.cpp:(.text+0x421c6): undefined reference to `StateTable::State_table'
/tmp/ccieukZA.o: In function `Image4bitModel::mix(Mixer&)':
paq8px.cpp:(.text+0x42a20): undefined reference to `StateTable::State_table'
/tmp/ccieukZA.o: In function `ExeModel::mix(Mixer&)':
paq8px.cpp:(.text+0x675cd): undefined reference to `ExeModel::InvalidX64Ops'
paq8px.cpp:(.text+0x675d4): undefined reference to `ExeModel::InvalidX64Ops'
paq8px.cpp:(.text+0x675df): undefined reference to `ExeModel::InvalidX64Ops'
paq8px.cpp:(.text+0x67714): undefined reference to `ExeModel::Table1'
paq8px.cpp:(.text+0x6771b): undefined reference to `ExeModel::TypeOp1'
paq8px.cpp:(.text+0x67912): undefined reference to `ExeModel::Table3_3A'
paq8px.cpp:(.text+0x67919): undefined reference to `ExeModel::TypeOp3_3A'
paq8px.cpp:(.text+0x6871f): undefined reference to `ExeModel::TypeOp1'
paq8px.cpp:(.text+0x688db): undefined reference to `ExeModel::TableX'
paq8px.cpp:(.text+0x688e8): undefined reference to `ExeModel::TypeOpX'
paq8px.cpp:(.text+0x68b51): undefined reference to `ExeModel::Table2'
paq8px.cpp:(.text+0x68b58): undefined reference to `ExeModel::TypeOp2'
paq8px.cpp:(.text+0x68c4e): undefined reference to `ExeModel::Table3_38'
paq8px.cpp:(.text+0x68c55): undefined reference to `ExeModel::TypeOp3_38'
paq8px.cpp:(.text+0x68c73): undefined reference to `ExeModel::Table1'
paq8px.cpp:(.text+0x68c7a): undefined reference to `ExeModel::TypeOp1'
paq8px.cpp:(.text+0x68e7f): undefined reference to `ExeModel::TypeOp1'
/tmp/ccieukZA.o: In function `ContextModel::p()':
paq8px.cpp:(.text+0x6bf84): undefined reference to `StateTable::State_table'
paq8px.cpp:(.text+0x6ecfc): undefined reference to `DmcForest::dmcparams'
paq8px.cpp:(.text+0x6ed0a): undefined reference to `DmcForest::dmcmem'
paq8px.cpp:(.text+0x6f1df): undefined reference to `StateTable::State_table'
/tmp/ccieukZA.o: In function `Predictor::trainText(char const*, int)':
paq8px.cpp:(.text+0x7ec3d): undefined reference to `StateTable::State_table'
paq8px.cpp:(.text+0x7ec80): undefined reference to `StateTable::State_table'
paq8px.cpp:(.text+0x7ed0e): undefined reference to `StateTable::State_group'
paq8px.cpp:(.text+0x7fcf8): undefined reference to `StateTable::State_table'
paq8px.cpp:(.text+0x7fcff): undefined reference to `StateTable::State_table'
paq8px.cpp:(.text+0x80ee5): undefined reference to `StateTable::State_table'
paq8px.cpp:(.text+0x80f24): undefined reference to `StateTable::State_table'
paq8px.cpp:(.text+0x80fba): undefined reference to `StateTable::State_group'
paq8px.cpp:(.text+0x81b67): undefined reference to `StateTable::State_table'
paq8px.cpp:(.text+0x81b85): undefined reference to `StateTable::State_table'
/tmp/ccieukZA.o: In function `EnglishStemmer::Stem(Word*)':
paq8px.cpp:(.text+0x82b6b): undefined reference to `EnglishStemmer::TypesExceptions1'
paq8px.cpp:(.text+0x841b5): undefined reference to `EnglishStemmer::TypesStep4'
paq8px.cpp:(.text+0x84a90): undefined reference to `EnglishStemmer::TypesExceptions2'
paq8px.cpp:(.text+0x85601): undefined reference to `EnglishStemmer::TypesStep3'
paq8px.cpp:(.text+0x8574a): undefined reference to `EnglishStemmer::TypesStep1b'
paq8px.cpp:(.text+0x8657e): undefined reference to `EnglishStemmer::TypesStep4'
collect2: error: ld returned 1 exit status
Can someone provide a Linux binary for paq8px_v182fix1? Assuming that is the latest version.
1730 replies | 482186 view(s)
• 14th October 2019, 20:12
Hah! I’ll see what I can do :D
39 replies | 2273 view(s)
• 14th October 2019, 19:59
Papa's Optimizer: Better Ingredients. Better Refinements. Papa Op's. Once the optimizations can be delivered under 30 minutes with a Papa Tracker, I would tip for that. :)
39 replies | 2273 view(s)
• 14th October 2019, 18:35
Sorry, "streaming support" was an unclear term here. I meant the capability to detect JPG streams inside other files (so "embedded support" would be a better term), e.g. JPG in RAW files, game containers or .tar files. Or, as in the case above, a file processed by brunsli-nobrotli that has the main image data compressed, but two JPG streams for the thumbnails left that are "embedded" in the .brn-nobrot file (part of the copied metadata).
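Detecting embedded JPG streams generally comes down to scanning the container for the JPEG SOI marker bytes (FF D8 FF) and then validating what follows. A rough sketch of the scan step (the container layout here is invented for illustration; real detectors such as precomp also parse the subsequent segments before accepting a hit):

```shell
# Build a fake container: header bytes, an embedded "JPEG", more bytes, a
# second embedded "thumbnail". Then report the byte offsets of each SOI marker.
printf 'HEADER'                            >  /tmp/container.bin
printf '\377\330\377\340JPEGDATA'          >> /tmp/container.bin   # \377\330\377 = FF D8 FF
printf 'TRAILER\377\330\377\341THUMB'      >> /tmp/container.bin
# GNU grep: -b byte offset, -o print only matches, -a treat binary as text,
# -F fixed-string pattern (the three raw marker bytes)
grep -obaF "$(printf '\377\330\377')" /tmp/container.bin | cut -d: -f1
# prints the two SOI offsets: 6 and 25
```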
12 replies | 1146 view(s)
• 14th October 2019, 14:18
Updated version with drag’n’drop support and optional foreground priority: https://papas-best.com/downloads/optimizer/stable/x64/Best%20Optimizer.7z
This was a major UI overhaul, so regard this as a beta release.
- major UI overhaul (now supporting drag’n’drop)
- moved options to new General tab
- fixed 7-Zip settings
- fixed crashes with analysis
- fixed a typo
- fixed tab flickering
39 replies | 2273 view(s)
• 14th October 2019, 11:31
pklat replied to a thread repack .psarc in Data Compression
I don't understand the question. .psarc is a PS3 archive; it is already compressed, and the point here is the same as in Precomp: unpack the .psarc, keep the metadata, and repack it with better compression and a larger dictionary, so that later you can recreate an identical .psarc. The difference to Precomp is that this is done at the 'file level', so you can rearrange files (-mqs) to gain better compression. There can be significant gains (like 30%), but most data in PS3 games is videos, etc. If you have the PC version of the same game, hopefully some data files like textures would be identical, if not similar, so you could gain more by putting it all in one giant solid .7z. I've been planning to do this with .cab and similar. Someone else here already did it, but iirc hasn't released the source code.
4 replies | 382 view(s)
• 14th October 2019, 10:20
Yes, to interest companies I must adapt my codec to any image size, but I am postponing this work until later, if I resume work on the NHW Project, as I think I will now start this training (unfortunately there are no image/video compression job positions in my area). You can always contact me if you are interested in the NHW Project. Cheers, Raphael
186 replies | 17206 view(s)
• 14th October 2019, 10:12
Brunsli the lib, no. Brunsli the file format, yes. Brunsli has a fixed two-way progressive layout: the 8x8-subsampled whole image first, followed by the sequential AC.
12 replies | 1146 view(s)
• 14th October 2019, 04:57
telengard replied to a thread LZ98 in Data Compression
I managed to get my hands on a HW debugger and dumped RAM where the main program is loaded after decompression, every single byte is exactly the same. :) There were some differences WAY at the end of the section, but that seems to be some kind of global data which had been updated while running for a few seconds. I hope to test out the compression code soon with some changes I will be testing. thanks again for your help!
30 replies | 1740 view(s)
• 14th October 2019, 02:51
You can try contacting game studios, especially small ones, or try attaching your codec to some open-source software like XBMC/Plex. Games commonly work with GPU-friendly image formats, so your codec would likely require an extra conversion layer. Being limited to 512x512 is a large hurdle for any practical use, too. Also, a lossy image codec is nothing rare, so it's hard to get attention unless you can beat everything else at something like this: https://stackoverflow.com/questions/891643/twitter-image-encoding-challenge Maybe consider switching to a lossless image codec? It's easier to find practical applications for that.
186 replies | 17206 view(s)
• 13th October 2019, 21:36
Hello all, Just a quick new email to give a few updates... As I told you, I had to start a training in Machine Learning at University in September, but the National Employment Agency finally refused to finance me... Now they want me to start a training as a Java fullstack developer in November, but I am not totally enthusiastic about it (personal taste)... So I am now reconsidering the NHW Project image/video compression codec...

I haven't worked on it since February this year, but I have made some visual comparisons with AOM AV1 (AVIF) and HEVC, and as I told you, I visually prefer the NHW Project over AVIF and HEVC for high quality to high compression (up to the -l11 quality setting) because it has more neatness, and for me it is more pleasant. The NHW Project is also a lot faster to encode/decode and royalty-free. For very high and extreme compression (below 0.4bpp), it's true that AVIF and HEVC are better (and very impressive)...

Just a reminder if you want to test the NHW Project: its new entropy coding schemes are not totally optimal for now, and we can save 2.5KB per .nhw compressed file on average. I will have to re-work on it...

As I also told you some months ago, I contacted JPEG, MPEG and the Alliance for Open Media, and they confirmed to me that they were not interested in the NHW Project. So I don't think the NHW Project will find a large application in the industry, but would some of you be interested in developing the NHW Project, maybe as a niche market?

Sorry to "spam" this forum about my job search, but again, if you and your company are interested in developing the state-of-the-art NHW Project image/video compression codec, do not hesitate to contact me. Any thoughts on it are also very welcome. Is my objective hopeless?

Cheers, Raphael
186 replies | 17206 view(s)
• 13th October 2019, 16:26
@pklat, is it possible to compress the results further, or are they already compressed?
4 replies | 382 view(s)
• 13th October 2019, 14:36
jethro replied to a thread Zstandard in Data Compression
339 replies | 113341 view(s)
• 13th October 2019, 11:45
As always, schnaader, you keep amazing me.
12 replies | 1146 view(s)
• 13th October 2019, 11:04
Nice trick! It can also be confirmed using SREP:
MFilter7z.64.dll.srep 1,200,440 // srep MFilter7z.64.dll
jcaron.jpg.srep 536,446 // srep jcaron.jpg
MFilter_jcaron.dat.srep 1,215,822 // copy /b MFilter7z.64.dll + jcaron.jpg MFilter_jcaron.dat
After searching a bit, I found a site from Adobe with download links for their typical ICC profiles, and together with a string from jcaron.jpg ("U.S. Web Coated (SWOP) v2"), the specific profile can be found:
USWebCoatedSWOP.icc 557,168
USWebCoatedSWOP.icc.srep 531,192
USWeb_jcaron.dat.srep 541,982
By the way, the mentioned thumbnail recompression would also be possible with the modified brunsli version:
cover.jpg 201,988 // file with 2 thumbnails
cover.jpg.brn 152,636 // unmodified brunsli treats thumbnails as metadata...
cover.jpg.brn.pcf 152,665 // ...so precomp finds nothing afterwards
cover.jpg.brn-nobrot 163,750 // modified brunsli-nobrotli
cover.jpg.brn-nobrot.pcf_cn 162,093 // precomp -cn => 2/2 JPG streams (the thumbnails)
cover.jpg.brn-nobrot.pcf 152,653 // but it doesn't really help compared to unmodified brunsli on this file
The thumbnail streams are small compared to the whole file (5,157 bytes each) and completely identical, so the second one gets deduplicated by unmodified brunsli as well as by lzma2 on the modified brunsli output. Unfortunately, brunsli has no streaming support (it processes whole JPEGs only), so you can't apply it a second time to recompress the thumbnails. Another modified version would be needed that detects and processes thumbnails in the metadata.
12 replies | 1146 view(s)
• 13th October 2019, 09:22
Item #3 is like what ECM did with dumped CDs, which could explain why the MFilter size is large. This should pay off when compressing a few jpg files, and lose its gain when compressing tens of jpg files.
12 replies | 1146 view(s)
• 12th October 2019, 20:12
pklat replied to a thread repack .psarc in Data Compression
guess someone already did it all: https://aluigi.altervista.org/quickbms.htm oh, well.
4 replies | 382 view(s)
• 12th October 2019, 19:17
MFilter uses the following additional optimizations for jcaron.jpg:
1) Compresses the jpeg thumbnail in the Exif segment
2) Compresses the jpeg thumbnail in the Photoshop segment
3) Deletes the "well known" ICC profile. MFilter knows several well known ICC profiles and can delete and restore such ICC profiles on the fly.
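Item 3 amounts to content-addressed substitution: recognize a known blob by hash, store a short reference in its place, and splice the original bytes back in on restore. A toy sketch of the idea; the "profile store", file names, and ICCREF tag are all invented for illustration (MFilter's actual format is not public):

```shell
# Toy sketch of the "well known ICC profile" trick: replace a recognized blob
# with a short hash reference, restore it losslessly later. All names invented.
profile=/tmp/fake_profile.icc
head -c 4096 /dev/urandom > "$profile"            # stand-in for a known ICC profile
store=/tmp/profile_store; mkdir -p "$store"
id=$(sha256sum "$profile" | cut -c1-16)
cp "$profile" "$store/$id"                        # the "well known profiles" table
printf 'ICCREF:%s' "$id" > /tmp/stripped          # 23 bytes stored instead of 4096
# restore on decode: look the reference up and verify it matches byte-for-byte
ref=$(cut -d: -f2 /tmp/stripped)
cmp -s "$store/$ref" "$profile" && echo "profile restored OK"
```

The win is exactly the one measured above with USWebCoatedSWOP.icc: a ~540 KB profile shrinks to a few bytes of reference, as long as both encoder and decoder ship the same profile table.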
12 replies | 1146 view(s)
• 12th October 2019, 14:39
mfilter can recompress jpeg thumbnails, maybe because of that?
12 replies | 1146 view(s)
• 12th October 2019, 13:33
WinnieW replied to a thread Zstandard in Data Compression
I can confirm there is no problem. I compressed a file 38 GByte in size using the official 64-bit Windows command-line binary and verified the file integrity using SHA1 checksums. The original file and the decompressed file were bit-identical.
339 replies | 113341 view(s)
• 12th October 2019, 09:52