Page 3 of 22
Results 61 to 90 of 642

Thread: Paq8pxd dict

  1. #61
    Member
    Join Date
    May 2008
    Location
    Estonia
    Posts
    385
    Thanks
    142
    Thanked 213 Times in 115 Posts
    paq8pxd_v9

    • over-2GB file support
    • fix jpeg and wav errors
    • updated header doc
    • in buf, pos now wraps to buf size
    • tmpfile for Windows (created in the user's temp folder)


    So if you compress a JPEG file with level -1, the result will be worse than in previous versions when the file does not fit into buf.

    If someone dares to test this on HFCB, I recommend levels -1 to -3.

    EDIT: paq8pxd_v9fix.7z (source only) is the same as paq8pxd_v9.7z, except it contains a fix for builds on Linux or without WINDOWS defined. There was a mistake in tmpfile2().
    Attached Files
    Last edited by kaitz; 21st June 2014 at 16:36. Reason: removed executable
    KZo


  2. The Following 3 Users Say Thank You to kaitz For This Useful Post:

    Bulat Ziganshin (19th June 2014),Skymmer (20th June 2014),Stephan Busch (20th June 2014)

  3. #62
    Member
    Join Date
    Aug 2008
    Location
    Planet Earth
    Posts
    778
    Thanks
    63
    Thanked 273 Times in 191 Posts
    Calgary Corpus, 14 files merged:

    570,867 bytes cmix 397.32 sec
    588,801 bytes paq8pxd_v7 113.97 sec
    588,946 bytes paq8pxd_v9 110.89 sec
    646,744 bytes nanozip 2.41 sec
    659,477 bytes zpaq 6.859 sec
    721,094 bytes zcm 0.960 sec

  4. #63
    Member
    Join Date
    May 2008
    Location
    Estonia
    Posts
    385
    Thanks
    142
    Thanked 213 Times in 115 Posts
    So I made a test with vm.dll at level -1.
    It failed at:
    9626 | jpeg | 56264 b [3091557888 - 3091614151]

    I extracted this part of the file.
    Compressed separately, all is OK; inside the tar file it fails. Uploaded for testing.

    Assertion failed: i>=0 && i<C
    which is in ContextMap, in normalModel: the first time the context is set, i is larger and has not been reset to 0. So the jpeg model still has some problems.
    It seems to me that the jpeg model is not quitting on a byte boundary.

    EDIT:
    One easy fix is to reset all static variables at block start.
    Last edited by kaitz; 20th June 2014 at 17:22.
    KZo


  5. #64
    Member
    Join Date
    May 2008
    Location
    Estonia
    Posts
    385
    Thanks
    142
    Thanked 213 Times in 115 Posts
    paq8pxd_v10

    • finally fix the jpeg error (from fp8.cpp)
    • the AVX2 build is compatible with the SSE and non-SIMD versions


    vm.dll level -3
    Code:
    Compressed from 4244176896 to 933208195 bytes.
    
    
    Total 4244176896 bytes compressed to 933312912 bytes.
    
    
     Segment data size: 104668 bytes
    
    
     TN |Type name |Count      |Total size
    -----------------------------------------
      0 |default   |      5634 | 2814016309
      1 |jpeg      |       816 |   11697941
      2 |hdr       |       228 |      15922
      3 |1b-image  |        12 |        720
      4 |4b-image  |        13 |      16288
      5 |8b-image  |         3 |      30592
      6 |24b-image |        99 |    2093527
      7 |audio     |       101 |    8693627
      8 |exe       |      4107 |  710373298
     10 |text      |       400 |  601409735
     11 |utf-8     |        70 |   95807262
     12 |base64    |        22 |      21675
    -----------------------------------------
    Total level  0 |     11505 | 4244176896
    
    
    
    
     TN |Type name |Count      |Total size
    -----------------------------------------
      0 |default   |        21 |      15586
      2 |hdr       |         1 |        118
      4 |4b-image  |         1 |        512
    -----------------------------------------
    Total level  1 |        23 |      16216
    
    
    Time 65671.14 sec, used 95437302 bytes of memory
    Compressed without an error; currently decompressing.
    I think this is my final attempt for now.

    Edit: silesia
    Code:
    paq8pxd_v10 -1 
             Size     Time
    xml      321824   44.69
    ooffice  1833166  81.77
    reymont  969209   76.12
    dickens  2190894  88.60
    nci      1291297  294.86
    webster  6222152  429.64
    osdb     2247599  149.01
    mozilla  13175465 705.13
    mr       2173269  130.40
    samba    3351010  265.61
    sao      4324703  132.32
    x-ray    3677487  131.48
              41778075
    Attached Files
    Last edited by kaitz; 21st June 2014 at 16:35. Reason: tmpfile fix
    KZo


  6. #65
    Member Skymmer's Avatar
    Join Date
    Mar 2009
    Location
    Russia
    Posts
    681
    Thanks
    37
    Thanked 168 Times in 84 Posts
    Line 4802 probably should be
    Code:
        if (!f) return NULL;

  7. The Following User Says Thank You to Skymmer For This Useful Post:

    kaitz (21st June 2014)

  8. #66
    Member
    Join Date
    May 2008
    Location
    Estonia
    Posts
    385
    Thanks
    142
    Thanked 213 Times in 115 Posts
    The decompressed file's MD5 checksum matched for vm.dll.
    KZo


  9. The Following User Says Thank You to kaitz For This Useful Post:

    Bulat Ziganshin (30th June 2014)

  10. #67
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 778 Times in 485 Posts
    Updated http://mattmahoney.net/dc/text.html#1348
    and http://mattmahoney.net/dc/silesia.html
    There are small improvements in compression over v8 in both tests and a big improvement in speed.

  11. The Following User Says Thank You to Matt Mahoney For This Useful Post:

    kaitz (30th June 2014)

  12. #68
    Member
    Join Date
    Jun 2014
    Location
    Ro
    Posts
    20
    Thanks
    4
    Thanked 4 Times in 4 Posts
    Since you are exploring the limits of compression, why not go with a -9 memory option?
    Using Orwell Dev-C++ 5.6.3 with the compiler options "-march=native -m32 -g3 -Wl,--large-address-aware -DWINDOWS -Ofast -msse2 -s" for the TDM-GCC 4.8.1 32-bit compiler, I've created a version of paq8pxd_v10 with a -9 memory option.
    On a 64-bit OS, the program can then use more than 2GB of memory; the "-Wl,--large-address-aware" option sets the corresponding flag in the exe file.
    I've tested the resulting program on enwik6, enwik7, enwik8 and enwik9, and decompression only on the first three.

    enwik6.paq8pxd10 - 202.721 bytes
    enwik7.paq8pxd10 - 1.830.217 bytes
    enwik8.paq8pxd10 - 16.488.637 bytes
    enwik9.paq8pxd10 - 132.131.277 bytes (Time 46779.48 sec, used 2127679081 bytes of memory)

    With this memory option I think paq8pxd_v10 takes the number 3 spot on the Large Text Benchmark (enwik9 + program size).

    I will test decompression and update the time.
    Tested on an i7-4700HQ @ 2.4GHz, 8GB RAM, Windows 7 Ultimate 64-bit.

    Edit: decompression of enwik9 verifies; Time 45784.26 sec, with Task Manager showing 3285 MB of memory use.
    Attached Files
    Last edited by AlexDoro; 2nd July 2014 at 12:27.

  13. #69
    Member Skymmer's Avatar
    Join Date
    Mar 2009
    Location
    Russia
    Posts
    681
    Thanks
    37
    Thanked 168 Times in 84 Posts
    The provided compile crashes on my i7-2700K, and it will crash on most CPUs, because it was built with the -march=native option; since you have an i7-4xxx, it is limited to Haswell (or maybe Ivy Bridge) generation CPUs only.
    OK, it's not a problem to recompile it, but there is another problem: PAQ8pxd is not designed to run at level -9. The 2127679081 bytes of memory usage it reports at -9 is wrong; it actually uses ~3334 MB. And it's sheer luck that you were able to compress the enwiks at all, probably because only a limited number of models is active when compressing ENWIK.
    I took my test file, which is only 21 591 040 bytes long but contains data that activates all of the PAQ8pxd models. The result is as expected: at level -9, PAQ8pxd_v10 tried to use too much memory and exited with an error message.
    Code:
     149         | default   |      1226 b [16300854 - 16302079]
     150         | hdr       |      2302 b [16302080 - 16304381]
     151         | 8b-image  |    500544 b [16304382 - 16804925] (width: 711)
     152         | default   |      1474 b [16804926 - 16806399]
     153         | jpeg      |    682615 b [16806400 - 17489014]
    Compressing  ... Out of memory
    
    User Time        :         758.843s
    Kernel Time      :           1.218s
    Process Time     :         760.061s
    Clock Time       :         766.094s
    
    Working Set      :         2746680 KB
    Paged Pool       :              19 KB
    Nonpaged Pool    :               4 KB
    Pagefile         :         3845164 KB
    Page Fault Count : 688639

  14. #70
    Member
    Join Date
    Jun 2014
    Location
    Ro
    Posts
    20
    Thanks
    4
    Thanked 4 Times in 4 Posts
    It is true: now, while it is decompressing the enwik9 file, Task Manager shows 3285 MB of memory in use, and most likely it was the same while compressing. Still, since the Large Text Benchmark explicitly says an entrant needn't be a general-purpose compressor and may fail on any file other than enwik9, if the decompressed file validates, the -9 option could still be kept, of course with the actual measured memory usage reported rather than the computed one.
    Beyond that, kaitz (or someone else) could modify the memory allocation at level 9 for blocks other than DICTTXT so that it still yields good compression without crashing. I think this is worth a shot, since a 4GB-RAM computer is common nowadays.

  15. #71
    Member
    Join Date
    May 2008
    Location
    Estonia
    Posts
    385
    Thanks
    142
    Thanked 213 Times in 115 Posts
    paq8pxd_v11
    +SZDD & SZ recompression (LZSS compression in MS compress.exe)
    +MRB encoding/decoding (.hlp RLE images) (needs more work)
    -fix base64 length (limit 128MB)
    -add back .pbm .pgm .ppm .rgb .tga detection (failed on vm.dll hdd image)
    -fix SSE train, dotproduct typedef
    -change 8-bit image model
    -fix final summary (transform fails)
    -exe: small improvement
    -wordmodel: small improvement
    -change sparsemodel(1)
    -display memory usage and attempted mem. alloc if out of memory
    -jpg, 8/24-bit image, recordmodel/sparsemodel for dicttext only when needed (smaller memory usage)
    Attached Files
    KZo


  16. The Following 7 Users Say Thank You to kaitz For This Useful Post:

    Bulat Ziganshin (8th July 2014),Edison007 (8th July 2014),Euph0ria (8th July 2014),Matt Mahoney (9th July 2014),Mike (9th July 2014),schnaader (8th July 2014),Skymmer (9th July 2014)

  17. #72
    Member
    Join Date
    May 2008
    Location
    brazil
    Posts
    163
    Thanks
    0
    Thanked 3 Times in 3 Posts
    Quote Originally Posted by AlexDoro View Post
    It is true: now, while it is decompressing the enwik9 file, Task Manager shows 3285 MB of memory in use, and most likely it was the same while compressing. Still, since the Large Text Benchmark explicitly says an entrant needn't be a general-purpose compressor and may fail on any file other than enwik9, if the decompressed file validates, the -9 option could still be kept, of course with the actual measured memory usage reported rather than the computed one.
    Beyond that, kaitz (or someone else) could modify the memory allocation at level 9 for blocks other than DICTTXT so that it still yields good compression without crashing. I think this is worth a shot, since a 4GB-RAM computer is common nowadays.


    4GB of RAM is not common. Remember there are billions of old computers that have not been upgraded to use more than 2 GB of RAM. And remember, apps that use more than 2GB (the 32-bit limit) have lots of problems. And the Hutter Prize limits entries to 800 MB of RAM.

  18. #73
    Member
    Join Date
    Dec 2013
    Location
    Italy
    Posts
    342
    Thanks
    12
    Thanked 34 Times in 28 Posts
    I do not agree. Just today I built a PC for one of my secretaries, with 32GB of RAM and a 4GHz i7 CPU.
    64-bit Windows brings RAM to the masses.

  19. #74
    Member
    Join Date
    May 2008
    Location
    brazil
    Posts
    163
    Thanks
    0
    Thanked 3 Times in 3 Posts
    Quote Originally Posted by fcorbelli View Post
    I do not agree. Just today I built a PC for one of my secretaries, with 32GB of RAM and a 4GHz i7 CPU.
    64-bit Windows brings RAM to the masses.

    New PCs do not have problems, but there are more old PCs than new ones: the transition from 32-bit to 64-bit is not complete, and lots of compressors have problems when using more than 2GB of RAM. 64-bit processors are better not because of the extra amount of RAM, but because of the extra registers and other CPU features.

    And I think it's wrong for a mere compression app to use more than 2GB of RAM; it's a waste of memory. Computers today scale memory with the number of threads: if one thread eats 2GB of RAM, 4 threads destroy 8GB easily.

    Remember, the Hutter Prize and the Calgary challenge allow 800MB of RAM. That's still a lot of memory nowadays.
    Last edited by lunaris; 9th July 2014 at 12:23.

  20. #75
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 778 Times in 485 Posts
    Nobody is going to use paq to compress their important files. Most likely they will use zip, which is good enough and works on ancient machines, or they won't even bother to compress. Disk space is cheap.

    For Hutter prize, the next winner (if there is one) will be specialized for enwik8 and nothing else. IMHO the 3% threshold for improvement is too big and the current record will stand for a long time.

    If your goal is to top the benchmarks, regardless of speed and memory, then paq is what you want.

  21. #76
    Member
    Join Date
    May 2008
    Location
    brazil
    Posts
    163
    Thanks
    0
    Thanked 3 Times in 3 Posts
    Disk space is cheap, but internet bandwidth is not really cheap and depends heavily on other factors. Torrent + FreeArc (or zpaq), for example, can help defend against the MPAA, or serve archiving purposes not related to piracy.
    Last edited by lunaris; 9th July 2014 at 00:51.

  22. #77
    Member
    Join Date
    Dec 2013
    Location
    Italy
    Posts
    342
    Thanks
    12
    Thanked 34 Times in 28 Posts
    Quote Originally Posted by lunaris View Post
    New PCs do not have problems, but there are more old PCs than new ones: the transition from 32-bit to 64-bit is not complete, and lots of compressors have problems when using more than 2GB of RAM. 64-bit processors are better not because of the extra amount of RAM, but because of the extra registers and other CPU features.

    And I think it's wrong for a mere compression app to use more than 2GB of RAM; it's a waste of memory. Computers today scale memory with the number of threads: if one thread eats 2GB of RAM, 4 threads destroy 8GB easily.

    Remember, the Hutter Prize and the Calgary challenge allow 800MB of RAM. That's still a lot of memory nowadays.
    I totally disagree. 64-bit is fatter, in many cases even slower, worse on cache, etc., but more address space is, for me, the key to future computing. I calculated a 1,000,000,000-digit pi in about 5 minutes just today. 64-bit CPUs have been here for about 8 years, and working 64-bit Windows goes back to 7. I don't think you can find 32-bit OEM Windows today. Even smartphone CPUs work in 64-bit, and 3GB of RAM is currently shipped; in less than a year, 4GB-plus will become standard even for mid-range phones...

  23. #78
    Member
    Join Date
    Dec 2013
    Location
    Italy
    Posts
    342
    Thanks
    12
    Thanked 34 Times in 28 Posts
    Quote Originally Posted by lunaris View Post
    Disk space is cheap, but internet bandwidth is not really cheap and depends heavily on other factors. Torrent + FreeArc (or zpaq), for example, can help defend against the MPAA, or serve archiving purposes not related to piracy.
    I disagree with this too, because compression typically saves only some percentage of space.

  24. #79
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,474
    Thanks
    26
    Thanked 121 Times in 95 Posts
    Quote Originally Posted by fcorbelli View Post
    I totally disagree. 64-bit is fatter, in many cases even slower, worse on cache, etc., but more address space is, for me, the key to future computing. I calculated a 1,000,000,000-digit pi in about 5 minutes just today. 64-bit CPUs have been here for about 8 years, and working 64-bit Windows goes back to 7. I don't think you can find 32-bit OEM Windows today. Even smartphone CPUs work in 64-bit, and 3GB of RAM is currently shipped; in less than a year, 4GB-plus will become standard even for mid-range phones...
    64-bit pointers are fatter than 32-bit pointers, but you don't have to use plain pointers. You can use indexes, and/or use aligned objects and shift the address right by log2(align), extending the 32-bit addressable range. Java does the second trick, calling it "compressed oops": because Java object sizes are multiples of 8 bytes, compressed oops allow addressing 4 GiB * 8 = 32 GiB of RAM with 32-bit values.

    I don't think an average compression algo needs to use lots of pointers, so moving to 64-bit shouldn't increase cache pressure or memory occupancy much.

  25. #80
    Member
    Join Date
    Dec 2013
    Location
    Italy
    Posts
    342
    Thanks
    12
    Thanked 34 Times in 28 Posts
    That is true. But from my point of view, 64-bit is not always faster than 32-bit; in a word, not always 'better'.
    The extended addressable space, though, yes: it is THE key to modern computing and the reason to use it.
    And AFAIK almost 100% of CPUs manufactured in the last 8 years implement amd64,
    so I think about 70% of all PCs in the world are now 64-bit; much fewer run a 64-bit Windows OS, of course.

  26. #81
    Expert
    Matt Mahoney's Avatar
    Join Date
    May 2008
    Location
    Melbourne, Florida, USA
    Posts
    3,255
    Thanks
    306
    Thanked 778 Times in 485 Posts
    25% of computers run unsupported Windows XP. 50% run Windows 7. http://www.netmarketshare.com/operat...10&qpcustomd=0

    Information on the 32-bit vs. 64-bit split is apparently not to be found.

  27. #82
    Albin jijo
    Guest
    Hello Mr. Matt Mahoney,
    I have sent you a PM. Please reply. Message topic: How to rip games like Skulptura?

  28. #83
    Member
    Join Date
    May 2008
    Location
    Estonia
    Posts
    385
    Thanks
    142
    Thanked 213 Times in 115 Posts


    I made a test on a Pentium 3 (ca. 866 MHz), 875 MB RAM, Linux Mint 13.

    Compiled: g++ -O3 paq8pxd_v11.cpp -o paq8pxd -s
    Compressed:
    Code:
    ./paq8pxd paq8pxd_v11.cpp 
    Creating archive paq8pxd_v11.cpp.paq8pxd11 with 1 file(s)...
    
    File list (24 bytes)
    Compressed from 24 to 26 bytes.
    
    1/1  Filename: paq8pxd_v11.cpp (234078 bytes)
    Block segmentation:
     0           | default   |    234078 b [0 - 234077]
    Compressed from 234078 to 42072 bytes.
    
    Total 234078 bytes compressed to 42134 bytes.
    
     Segment data size: 9 bytes
    
     TN |Type name |Count      |Total size
    -----------------------------------------
      0 |default   |         1 |     234078
    -----------------------------------------
    Total level  0 |         1 |     234078
    
    Time 307.80 sec, used 237431664 bytes of memory
    
    Close this window or press ENTER to continue...
    Decompression was identical.
    Not bad.
    On my Core2Duo E4500 2.2GHz it takes about 30 sec.
    KZo


  29. #84
    Member
    Join Date
    Aug 2008
    Location
    Planet Earth
    Posts
    778
    Thanks
    63
    Thanked 273 Times in 191 Posts
    For single-core CPUs, Moore's law has been broken for a long time already:

    My almost-3-year-old computer beat my new computer in your paq8pxd_v11 test:

    15.49 sec, November 12, 2007, Intel Core 2 QX9650 3.0GHz (OC 3.99GHz) DDR3 1680MHz CL9
    11.48 sec, November 14, 2011, Intel i7 3960X 3.3GHZ (OC 4.4GHz), DDR3 2133MHZ CL11
    11.73 sec, September 10, 2013, Intel i7 4960X 3.6GHz (OC 4.5GHz), DDR3 2000MHz CL10
    Last edited by Sportman; 9th July 2014 at 21:32.

  30. #85
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,474
    Thanks
    26
    Thanked 121 Times in 95 Posts
    Probably because paq is sensitive to RAM latency :]

  31. #86
    Programmer Bulat Ziganshin's Avatar
    Join Date
    Mar 2007
    Location
    Uzbekistan
    Posts
    4,497
    Thanks
    733
    Thanked 660 Times in 354 Posts
    One possible reason: Ivy Bridge has a bug in the PREFETCH instruction implementation, making it very slow.

  32. #87
    Member
    Join Date
    May 2008
    Location
    brazil
    Posts
    163
    Thanks
    0
    Thanked 3 Times in 3 Posts
    Quote Originally Posted by fcorbelli View Post
    That is true. But from my point of view, 64-bit is not always faster than 32-bit; in a word, not always 'better'.
    The extended addressable space, though, yes: it is THE key to modern computing and the reason to use it.
    And AFAIK almost 100% of CPUs manufactured in the last 8 years implement amd64,
    so I think about 70% of all PCs in the world are now 64-bit; much fewer run a 64-bit Windows OS, of course.

    Processing power is not the problem. There are lots of 64-bit PCs out there, but many of them have 512 MB of RAM, or less than 2GB, like Athlon and Pentium 4 machines. It's perfectly possible to write 64-bit software without needing more than 2GB of RAM. New PCs still ship with 4GB pre-installed. Using 2GB of RAM per thread is too much for these PCs, because any dual-core machine would then use 4GB, and the operating system would crash.

    And remember, the satanic FSB and the old RAM types were much slower than today's. (I remember the Pentium Celeron 667 MHz story about the "neighbor of the beast".)

    Off-topic: I think FSB frequencies don't have good numerology. Binary and Fibonacci are very friendly, but FSB frequencies are infinitely-rounding satanic beasts (like 666.666666... rounded to 667 MHz).

    http://www.maths.surrey.ac.uk/hosted...ci/fibrep.html
    Last edited by lunaris; 9th July 2014 at 21:49.

  33. #88
    Member
    Join Date
    Aug 2008
    Location
    Planet Earth
    Posts
    778
    Thanks
    63
    Thanked 273 Times in 191 Posts
    I added the paq8pxd_v11 test result from my almost-7-year-old metacompressor computer (which still runs 24 hours a day, for somebody else). That almost-7-year-old computer runs Windows Vista 64-bit; the almost-3-year-old and 1-year-old computers run Windows 8.1 64-bit, all with the same RAMDISK software, FAT32.

    Looking at the results, Intel managed to make single-core paq8pxd_v11 less than 35% faster in almost 7 years; that's an average 5% gain a year (and even less, because back then it ran OC at 4.2GHz instead of the 3.99GHz it runs nowadays, since the CPU is old now).

    That does not promise much for the single-core future; it looks like multi-core (CPU/GPU/coprocessor) and using more memory (64-bit) is the way to go.
    Last edited by Sportman; 9th July 2014 at 21:59.

  34. #89
    Member Skymmer's Avatar
    Join Date
    Mar 2009
    Location
    Russia
    Posts
    681
    Thanks
    37
    Thanked 168 Times in 84 Posts
    I'm quite disappointed with the latest v11 version. I took all the versions from 7 to 11 and compiled them with the same compiler and the same options. The test file is my personal set of 63 files wrapped into a TAR, selected to activate all currently existing PAQ8pxd models. The size of the TAR is 21 591 040 bytes.
    Code:
    v7 -8		5 969 832	1181.734s	1729 MB
    v8 -8		5 857 633	990.515s	2095 MB
    v9 -8		5 857 712	998.281s	2095 MB
    v10 -8		5 857 619	986.468s	2095 MB
    v11 -8		6 077 694	636.953s	1861 MB
    
    v11 -9		6 058 911	565.671s	3664 MB		unofficial source modification
    v11 gives 3.75% worse compression compared to v10. Yes, there is a notable speedup, but IMHO it is achieved with too much sacrifice.

  35. #90
    Member
    Join Date
    May 2008
    Location
    Estonia
    Posts
    385
    Thanks
    142
    Thanked 213 Times in 115 Posts
    It depends: what if you compress, say, 5 or 10 jpg/image files (tarred), or a directory of 10 different text files (1+ MB each), v10 vs v11?
    KZo

