
Thread: Google's compression projects

  1. #1
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,982
    Thanks
    298
    Thanked 1,309 Times in 745 Posts

    Google's compression projects

    brotli: Google's LZMA replacement; Wiki Thread
    zopfli: Google's Deflate compressor; Wiki Thread
    snappy: Google's LZ4 replacement; Wiki Thread
    gipfeli: another zlib replacement?; Wiki Thread
    butteraugli: Google's SSIM replacement; Wiki Thread
    pik: Google's JPEG replacement; Thread

    brunsli: JPEG recompressor; Thread
    guetzli: JPEG encoder; Wiki Thread
    knusperli: JPEG decoder; Thread

    grittibanzli: Deflate recompressor;

    riegeli: format for storing a sequence of string records, like serialized protocol buffers; Thread

  2. Thanks (6):

    Bulat Ziganshin (30th April 2019),encode (30th April 2019),giothothan (7th June 2019),Hakan Abbas (30th April 2019),lz77 (22nd December 2019),PsYcHo_RaGE (31st May 2019)

  3. #2
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    807
    Thanks
    245
    Thanked 257 Times in 160 Posts
    ... WebP, VP8, VP9, ~AV1
    Using rANS: PIK, Draco 3D ( https://github.com/google/draco ), "image compression via triangulation": https://arxiv.org/pdf/1809.02257.pdf
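    For readers who haven't seen rANS before, here is a deliberately tiny sketch of the core update (my own toy code, not taken from Draco or PIK; real coders add renormalization to a fixed-width state, interleaving, and table-based symbol lookup):

    Code:
    # Toy rANS: encode symbols in reverse, decode forward.
    # Frequencies must sum to a power of two (M); a Python big int stands in
    # for the renormalized fixed-width state of real implementations.

    def cumulative(freqs):
        cum = [0]
        for f in freqs:
            cum.append(cum[-1] + f)
        return cum, cum[-1]                     # cumulative table and total M

    def rans_encode(symbols, freqs):
        cum, M = cumulative(freqs)
        x = 1
        for s in reversed(symbols):             # reverse order so decoding is forward
            f = freqs[s]
            x = (x // f) * M + cum[s] + (x % f)
        return x

    def rans_decode(x, n, freqs):
        cum, M = cumulative(freqs)
        out = []
        for _ in range(n):
            slot = x % M
            s = next(i for i in range(len(freqs)) if cum[i] <= slot < cum[i + 1])
            x = freqs[s] * (x // M) + slot - cum[s]
            out.append(s)
        return out

    freqs = [8, 5, 2, 1]                        # symbol counts, summing to 16
    msg = [0, 1, 0, 2, 0, 3, 1, 0]
    state = rans_encode(msg, freqs)
    assert rans_decode(state, len(msg), freqs) == msg
    print(state.bit_length(), "bits for", len(msg), "symbols")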

  4. Thanks (2):

    Hakan Abbas (30th April 2019),Jyrki Alakuijala (30th April 2019)

  5. #3
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    902
    Thanks
    246
    Thanked 326 Times in 199 Posts
    Quote Originally Posted by Shelwien View Post
    Did I miss anything?
    JPEG XL, AV1 and AVIF have a lot from Google

    WebP is partially from On2, partially from Google

    VP9 is from Google

    cmix is by a Google software engineer

    'Greedy palettization' helps in compression; likely faster than pngquant, but also likely worse quality. Generates a pretty good palette in 777 µs.

    LodePNG is by a Google software engineer (written just before his Google career)

    SimpleJPEG (better PSNR, worse butteraugli than libjpeg)

    Shared Brotli

    PVRTC compressor by Lode was open-sourced within some other Google project

  6. Thanks (4):

    Hakan Abbas (30th April 2019),Jarek (30th April 2019),Mike (30th April 2019),Shelwien (30th April 2019)

  7. #4
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,982
    Thanks
    298
    Thanked 1,309 Times in 745 Posts
    I ripped the Google repository list from GitHub and found "gipfeli" in it by grepping for "li$".
    It's hard to find this stuff by keywords because of the lack of descriptions.
    Attached Files

  8. #5
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    902
    Thanks
    246
    Thanked 326 Times in 199 Posts
    Quote Originally Posted by Shelwien View Post
    gipfeli
    Gipfeli was the first format/algorithm in the -li series.

    It is a 2011 design and is completely overshadowed by newer algorithms. Even brotli at quality 0 or 1 can deliver the same encoding/decoding speed, and it has better streaming properties.

  9. #6
    The Founder encode's Avatar
    Join Date
    May 2006
    Location
    Moscow, Russia
    Posts
    4,013
    Thanks
    406
    Thanked 403 Times in 153 Posts
    AFAIK, zopfli is simply Optimized Deflate, a la KZIP.

  10. #7
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    807
    Thanks
    245
    Thanked 257 Times in 160 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    JPEG XL
    I have just tried to find some details, but only found this August SPIE 2019 conference abstract: https://spie.org/OPO/conferencedetai...age-processing

    JPEG XL next-generation image compression architecture and coding tools
    Paper 11137-20
    Author(s): Jyrki Alakuijala, Robert Obryk, Google Zürich (Switzerland); Jon Sneyers, Cloudinary (Israel); Luca Versari, Jan Wassenberg, Google Zürich (Switzerland)

    An update on the JPEG XL standardization effort: JPEG XL is a practical approach focused on scalable web distribution and efficient compression of high-quality images. It will provide various benefits compared to existing image formats: significantly smaller size at equivalent subjective quality; fast, parallelizable decoding and encoding configurations; features such as progressive, lossless, animation, and reversible transcoding of existing JPEG; support for high-quality applications including wide gamut, higher resolution/bit depth/dynamic range, and visually lossless coding. Additionally, a royalty-free baseline is an important goal. The JPEG XL architecture is traditional block-transform coding with upgrades to each component. We describe these components and analyze decoded image quality.
    The author list partially overlaps with PIK's (besides Jan, I see Alex Deymo and Sami Boukortt among the contributors), and Robert has worked e.g. on butteraugli and guetzli (and studied at my institute) - so is JPEG XL basically PIK plus some additions?

  11. #8
    Member cfeck's Avatar
    Join Date
    Jan 2012
    Location
    Germany
    Posts
    50
    Thanks
    0
    Thanked 17 Times in 9 Posts
    Unsure if Mathieu Chartier still works at Google, because there have been no updates in over 2 years, but https://github.com/mathieuchartier/mcm says "Copyright (C) 2015, Google Inc."

  12. #9
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by Jarek View Post
    I have just tried to find some details, but only found this August SPIE 2019 conference abstract: https://spie.org/OPO/conferencedetai...age-processing
    Yes, I set up a session for the latest JPEG activities, and Jon & Jon will contribute.
    Quote Originally Posted by Jarek View Post
    The authors partially agree with PIK (beside Jan, I see there is also Alex Deymo and Sami Boukortt in contributors), Robert has worked e.g. on butteraugli, guetzli (and studied in my institute) - is JPEG XL basically PIK plus some additions?
    Well... not quite. First, its technology is still under discussion, but two parties drive the project, not one: Cloudinary and Google. Thus, JPEG XL contains contributions from both parties.

  13. Thanks:

    Jarek (30th April 2019)

  14. #10
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    902
    Thanks
    246
    Thanked 326 Times in 199 Posts
    Quote Originally Posted by Jarek View Post
    I have just tried to find some details, but only found this August SPIE 2019 conference abstract: https://spie.org/OPO/conferencedetai...age-processing



    The authors partially agree with PIK (beside Jan, I see there is also Alex Deymo and Sami Boukortt in contributors), Robert has worked e.g. on butteraugli, guetzli (and studied in my institute) - is JPEG XL basically PIK plus some additions?
    JPEG XL is built by FLIF author Jon Sneyers, Alexander Rhatushnyak, and the PIK team. Jon is contributing an improved variation of FLIF for the 'responsive' mode, Alex for the lossless mode, and the PIK team for more traditional lossy compression. Much of it builds on your work with ANS, but traditional coding (binary arithmetic etc.) is used here and there.

  15. Thanks (4):

    dado023 (2nd May 2019),encode (2nd May 2019),Jarek (2nd May 2019),jibz (2nd May 2019)

  16. #11
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    902
    Thanks
    246
    Thanked 326 Times in 199 Posts
    Quote Originally Posted by Shelwien View Post
    brotli: Google's zstd replacement; Wiki Thread
    Brotli was a replacement for LZHAM, LZMA and zlib. Zstd came after brotli -- it was frozen a year or two after brotli.

    Brotli is similar to LZHAM and LZMA in that all three have context modeling (slower, but more dense) and a similar compression density (within 1 %). Brotli is similar to zlib in that both work rather well with tiny data (even a hundred bytes or so).

    In the early years zstd was a fast-to-compress, fast-to-decompress algorithm for large-payload transmission (like database replication or backup), while brotli targeted a wide range of payload sizes and compression levels (a human waiting for a web page). I suspect zstd's rescoping was partially motivated by the success that brotli had in its wider application domain.
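    As a small illustration of the tiny-data point, a rough size check with the Python bindings (a sketch only; it assumes the third-party brotli and zstandard packages are installed, and the exact byte counts depend on versions and settings):

    Code:
    import zlib

    import brotli      # pip install brotli      (assumed available)
    import zstandard   # pip install zstandard   (assumed available)

    # A ~90-byte payload, roughly the size of a small HTTP header block.
    payload = (b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"
               b"Accept-Encoding: br, gzip\r\nUser-Agent: test\r\n\r\n")

    sizes = {
        "raw":        len(payload),
        "zlib -9":    len(zlib.compress(payload, 9)),
        "brotli q11": len(brotli.compress(payload, quality=11)),
        "zstd -19":   len(zstandard.ZstdCompressor(level=19).compress(payload)),
    }
    for name, n in sizes.items():
        print(f"{name:10s} {n:4d} bytes")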

  17. #12
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    3,982
    Thanks
    298
    Thanked 1,309 Times in 745 Posts
    Ok :)

  18. #13
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    902
    Thanks
    246
    Thanked 326 Times in 199 Posts
    Quote Originally Posted by encode View Post
    AFAIK, zopfli is simply Optimized Deflate, a la KZIP.
    Zopfli is similar in scope to KZIP, but it compresses ~0.5 % better, and Zopfli is open source under the Apache license while KZIP is available only as binaries.
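    For the curious, one way to measure that gap on a single file (a rough sketch; it assumes the zopfli command-line tool from github.com/google/zopfli is on PATH and, as in its default mode, writes FILE.gz next to the input; note the gzip container adds a few header bytes that raw zlib output does not have):

    Code:
    import os
    import subprocess
    import zlib

    path = "sample.txt"                             # hypothetical input file
    data = open(path, "rb").read()

    baseline = len(zlib.compress(data, 9))          # ordinary Deflate, max level
    subprocess.run(["zopfli", path], check=True)    # default mode: writes path + ".gz"
    zopfli_size = os.path.getsize(path + ".gz")

    print(f"zlib -9 : {baseline} bytes")
    print(f"zopfli  : {zopfli_size} bytes "
          f"({100 * (baseline - zopfli_size) / baseline:.2f}% smaller)")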

  19. Thanks:

    encode (3rd May 2019)

  20. #14
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    807
    Thanks
    245
    Thanked 257 Times in 160 Posts
    General JPEG XL scheme from Jon Sneyers ImageCon 2019 talk ( https://twitter.com/bseymour/status/...730597888?s=20 ):

    [Attached image: JsVbkYu.jpg]

  21. #15
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    902
    Thanks
    246
    Thanked 326 Times in 199 Posts
    Quote Originally Posted by Jarek View Post
    General JPEG XL scheme from Jon Sneyers ImageCon 2019 talk ( https://twitter.com/bseymour/status/...730597888?s=20 ):

    There are 101 components in JPEG XL. Every reasonable architectural chart drawn out of it is a coarse simplification.

    Although many ideas derive from the WebP lossless work in 2011, FLIF in 2016, and similarly butteraugli+guetzli in 2016, the majority of the development leading to JPEG XL started around 2014, and until mid-2018 we were developing it for high bit rates only (1+ BPP) and very high decoding speeds. That is still where it excels in comparison to other codecs.

  22. #16
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    807
    Thanks
    245
    Thanked 257 Times in 160 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    That is still where it excels in comparison to other codecs.
    https://jpeg.org/jpegxl/index.html says "can deliver images with similar quality at a third of the size of widely used alternatives" - if this 1/3 size is indeed true (for the same butteraugli score?), "excels" seems a bit of an understatement.

  23. #17
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    902
    Thanks
    246
    Thanked 326 Times in 199 Posts
    Quote Originally Posted by Jarek View Post
    https://jpeg.org/jpegxl/index.html says "can deliver images with similar quality at a third of the size of widely used alternatives" - if this 1/3 size is indeed true (for the same butteraugli score?), "excels" seems a bit of an understatement.
    This is a cherry-picked example, but it highlights that there are improvements.

    1.png is decompressed from 8560 bytes (0.2 BPP) of JPEG XL
    [Attached image: 1.png]

    2.png is decompressed from 11716 bytes (0.27 BPP) of JPEG
    [Attached image: 2.png]

    3.png is decompressed from 8871 bytes (0.2 BPP) of JPEG YUV420
    [Attached image: 3.png]

    Note that the 0.2 BPP range is outside the operational range for both JPEG XL and JPEG. Typical internet use of traditional JPEG is between 1 and 5 BPP, and I anticipate that use of JPEG XL will fall between 0.35 and 1.7 BPP.

    Butteraugli agrees with eyes:
    Maximum values in the butteraugli field: 6.28, 25.2, 15.43
    7th norm of the butteraugli field: 2.65, 12.2, 7.49
    2nd norm of the butteraugli field: 2.10, 8.89, 5.33
    (Butteraugli scores are in multiples of just noticeable difference.)

    Even PSNR (4/6 weight on the intensity plane, 1/6 for each chromaticity plane) agrees with the eyes:
    34.4, 27.12 and 29.66 dB
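    To make the arithmetic explicit, here is a rough sketch of how such numbers can be computed (my interpretation only, not the exact tool used above: it assumes the 4/6 and 1/6 weights are applied to the per-plane PSNR values of a YCbCr decomposition, and that BPP is simply file bits divided by pixel count; the dimensions below are hypothetical, chosen to give ~0.2 BPP for 8560 bytes):

    Code:
    import numpy as np

    def psnr(a, b, peak=255.0):
        mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)

    def weighted_psnr(orig_ycbcr, dec_ycbcr):
        # 4/6 weight on the intensity (Y) plane, 1/6 on each chroma plane.
        weights = [4 / 6, 1 / 6, 1 / 6]
        return sum(w * psnr(orig_ycbcr[..., i], dec_ycbcr[..., i])
                   for i, w in enumerate(weights))

    def bpp(file_bytes, width, height):
        return 8.0 * file_bytes / (width * height)

    print(round(bpp(8560, 584, 586), 2))   # ~0.2 BPP with these assumed dimensions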

    Also, the WebP image at 8638 bytes has more clarity than the JPEGs:

    [Attached image: 4.png]

  24. #18
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    This is a cherry-picked example, but it highlights that there are improvements.
    Well, yes, but look... 0.2 bpp is probably not a rate you would find in many practical applications - it is way below "transparent quality" in all cases. As far as JPEG is concerned, there is room for improvement, namely "Trellis quantization". Use the JPEG reference software from www.jpeg.org; there is a command-line switch "-oz" that will enable it. You should also include "JPEG 2000" in the evaluation, and HEVC of course as well. Last but not least, it is really mandatory(!) to run some subjective tests. There are certainly some obvious answers (JPEG being a 25-year-old technology), but some may be less obvious. Anyhow, there will be a JPEG XL interim meeting at the end of this month, and there will be a chance to discuss all of this.

  25. #19
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    902
    Thanks
    246
    Thanked 326 Times in 199 Posts
    Quote Originally Posted by thorfdbg View Post
    Well, yes, but look... 0.2bpp is probably not a rate you would find in many practical applications - it is way below "transparent quality" in all cases.
    I wrote about this in my post.

    (I believe most use will be around 0.35 to 1.7 BPP for JPEG XL, and bit-rate variation is driven more by the simplicity of the image than by the quality goal, so benchmarking every image at 0.35 would still be lower quality than what users will actually be using.)

    Quote Originally Posted by thorfdbg View Post
    "Trellis quantization".
    Both jpeg xl and jpeg can benefit from trellis quantization. It is off in both cases, as it tends to be slower.

    Quote Originally Posted by thorfdbg View Post
    "JPEG 2000" in the evaluation, and HEVC of course as well. Last but not least, it is really mandatory(!) to run some subjective tests.
    We do that occasionally with independent labs, so far twice. We also do simple reviews of two chosen reference corpora (31 and 8 images) when quality improvements are made. In particular, we don't use PSNR or SSIM to decide whether images look better or not.

  26. #20
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    807
    Thanks
    245
    Thanked 257 Times in 160 Posts
    Jyrki, the slide says "reversible nonlinear Haar" as a lossless alternative to DCT.
    However, I cannot find any materials on "nonlinear Haar" - are there any details available?

    I am writing because an MSc student of mine is currently working (until this September) on fractal Haar: https://encode.su/threads/2045-2D-fr...-(implemented)

    [Attached image: fravelets.png]
    This is practically standard Haar ("boxes"), just using different shift vectors - not horizontal/vertical, but powers of a chosen complex number.
    The advantage is a hexagon-like lattice of blocks, which should make blocking artifacts less visible than for squares and might reduce residues for lossless.
    Is there a chance that JPEG XL source will be available in a month or two, so that he could test such a tame-twindragon Haar replacement?

  27. #21
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    902
    Thanks
    246
    Thanked 326 Times in 199 Posts
    Jarek, the non-linear Haar is also available in FUIF, which has a considerably smaller codebase than JPEG XL. See https://github.com/cloudinary/fuif

    FUIF is arbitrarily progressive, but it compresses possibly 30% less and is also substantially (10-20x) slower to decode than PIK. (JPEG XL will likely contain about five different ways to code images.)
    Last edited by Jyrki Alakuijala; 30th June 2019 at 11:43.

  28. Thanks:

    Jarek (24th June 2019)

  29. #22
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    807
    Thanks
    245
    Thanked 257 Times in 160 Posts
    Thanks, I see in https://github.com/cloudinary/fuif/b...form/squeeze.h
    Haar-like transform: halves the resolution in one direction
    A B -> (A+B)>>1 in one channel (average) -> same range as original channel
    A-B - tendency in a new channel ('residual' needed to make the transform reversible)
    This is practically Haar; it can be made fractal just by replacing the vectors between A and B ... maybe by "nonlinear" he meant the rounding ...
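    A minimal sketch of that average/residual pair (my own reading of the description above, not the actual squeeze.h code; in particular the 'tendency' prediction that FUIF subtracts from the residual is omitted here):

    Code:
    def squeeze_pair(a, b):
        """Forward step: integer average (same range as the input) plus residual."""
        avg = (a + b) >> 1
        res = a - b
        return avg, res

    def unsqueeze_pair(avg, res):
        """Inverse step: exact reconstruction of (a, b) from (avg, res)."""
        # a+b has the same parity as a-b, so the bit dropped by >>1 is res & 1.
        s = 2 * avg + (res & 1)
        a = (s + res) >> 1
        return a, a - res

    # round-trips exactly, including negative values
    for a in range(-4, 5):
        for b in range(-4, 5):
            assert unsqueeze_pair(*squeeze_pair(a, b)) == (a, b)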

  30. #23
    Member
    Join Date
    Aug 2019
    Location
    Russia
    Posts
    5
    Thanks
    0
    Thanked 3 Times in 2 Posts
    When seeing yet another "new standard, JPEG replacement" (JPEG XL) comparison against JPEG, I just had to register here to provide another point of view.
    This comparison against JPEG, like many others, uses quite low bitrates, where Huffman encoding becomes very suboptimal and many DCT blocks degrade to a sole DC coefficient. But is JPEG actually so bad under those conditions? The answer is no.

    The first obvious improvement is switching to arithmetic coding, which is also in the JPEG standard. Sure, it's not that widely supported, but it's supported much better (libjpeg) than any new standard. With Robidoux's quantization tables and the given bitrate we can get this standard JPEG result out of cjpeg:
    [Attached image: stp4n.png]
    Much better than the provided JPEG samples, isn't it?
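    For reference, the basic recipe looks roughly like this (a sketch; -quality, -arithmetic and -outfile are standard cjpeg switches, but arithmetic-coding support and the wizard switches for custom quantization tables depend on how your libjpeg/mozjpeg build was configured, and the filenames and quality value here are placeholders):

    Code:
    import subprocess

    # Hypothetical filenames; the quality value is just a placeholder chosen to
    # hit the target size. Custom quantization tables (e.g. Robidoux's) need the
    # wizard switches of your particular libjpeg build, so they are left out here.
    subprocess.run(
        ["cjpeg", "-arithmetic", "-quality", "30",
         "-outfile", "out_arith.jpg", "input.ppm"],
        check=True,
    )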

    The issue of unpleasant blocking is easily solved with some postprocessing. Here is Nosratinia's method:
    [Attached image: stp4.png]
    And this one actually looks better than the proposed new standard (JPEG XL, and WebP too). Quite a pitiful result.

    The original:
    [Attached image: stp2.png]

    Here is the JPEG file, 8255 bytes (0.19 BPP):
    Attached Images

  31. Thanks (2):

    algorithm (6th August 2019),JamesB (13th August 2019)

  32. #24
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    902
    Thanks
    246
    Thanked 326 Times in 199 Posts
    Quote Originally Posted by Sentix View Post
    Sure, it's not that widely supported but it's supported much better (libjpeg) than any new standard.
    If one does not want variable-sized or rectangular DCTs, the identity transform, adaptive quantization, better DC modeling, or the more advanced XYB color modeling, one can losslessly transport old-school JPEGs in JPEG XL with state-of-the-art re-encoding that is both faster and more dense than the standardized (but not implemented) arithmetic coding in the old JPEG, and much faster than lepton and packjpg with a 1 % density loss. There is a gradual upgrade path that allows the 'non-believers' to take one feature into use at a time, when they move on from DCT8x8 and YUV420, without having to take a loss at any stage.

    We expect actual use to happen at a higher bitrate. The main tuning has been done at 1 to 1.5 BPP, so detailed benchmarking should happen in that domain.

    We understand the power and compromises of the JPEG format, having built three related systems: guetzli, brunsli and knusperli. We didn't just move on to our dream world, but really attempted to squeeze everything out of the existing system before starting to dream about a better one.

  33. Thanks:

    Jarek (8th August 2019)

  34. #25
    Member
    Join Date
    Aug 2019
    Location
    Russia
    Posts
    5
    Thanks
    0
    Thanked 3 Times in 2 Posts
    The problem is that it's the year 2019, not 2000. And you should be competing not against JPEG but against things like AVIF and BPG. It looks to me like AVIF will be pushed hard by the industry, so there will be no "upgrade path" if your standard is inferior.

    On the other hand, standardized lossless JPEG recompression would be nice. But it needs to be not only faster than packjpg (which is trivial on modern hardware just with threads) but also compress better (StuffIt can do it). Adding to the standard a requirement for the decoder to be able to do Nosratinia-level postprocessing on JPEGs is probably a good idea too: then the new standard can be pushed as "make your JPEGs smaller and better quality without any information loss".

  35. #26
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    807
    Thanks
    245
    Thanked 257 Times in 160 Posts
    Sentix, while there are many independent compressors, I understand the main purpose of JPEG XL is filling the gap: squeezing what we can out of JPEG - repacking (brunsli, ~20% reduction), a better encoder (guetzli), a better decoder (knusperli) - while remaining as compatible as possible, which is crucial e.g. for shrinking all the current JPEGs, but indeed leaves a small burden of restrictions.

    The question is how large this burden is, e.g. of not using intra-prediction from video compressors. With a good recompressor the difference should be tiny - just somewhat different types of artifacts.
    To get essentially better compression we would need to restrict ourselves to natural-looking images, which rather requires an ML-based compressor: huge models trained on databases.

    I see Nosratinia ( https://dl.acm.org/citation.cfm?id=372070 ) does simple blurring, originally expecting the encoder to cooperate.
    JPEG XL uses https://github.com/google/knusperli instead, but I don't know its details; the description states:
    The goal of Knusperli is to reduce blocking artifacts in decoded JPEG images, by interpreting quantized DCT coefficients in the image data as an interval, rather than a fixed value, and choosing the value from that interval that minimizes discontinuities at block boundaries.
    which seems reasonable - we should try to optimize the coefficients inside the quantization ranges to minimize blocking artifacts.
    I wonder how much can be done analytically here. Are there more recent papers about such fuzzy decoding to minimize blocking artifacts?
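    To make the idea concrete, here is a deliberately tiny 1-D sketch of the interval view (my own toy illustration, assuming numpy and scipy are available; knusperli's actual algorithm works on full 8x8 blocks and more coefficients): each quantized DCT coefficient really means "anything within +-Q/2 of the dequantized value", and even nudging only the DC terms inside those intervals already shrinks the jump at a block boundary.

    Code:
    import numpy as np
    from scipy.fft import dct, idct

    Q = 20.0                                  # one flat quantization step (toy setting)
    signal = np.linspace(0.0, 100.0, 16)      # a smooth ramp split into two 8-sample blocks
    left, right = signal[:8], signal[8:]

    ql = np.round(dct(left, norm='ortho') / Q)    # quantized coefficients
    qr = np.round(dct(right, norm='ortho') / Q)

    dl = idct(ql * Q, norm='ortho')           # naive decode: midpoint of each interval
    dr = idct(qr * Q, norm='ortho')
    print("boundary jump, naive :", abs(dr[0] - dl[-1]))

    # Interval view: the true DC lies within +-Q/2 of ql[0]*Q (same for qr[0]*Q).
    # For an orthonormal 8-point IDCT, a DC change of d shifts every sample by
    # d/sqrt(8), so split the boundary jump between the two blocks and clip the
    # adjustment to the allowed quantization interval.
    want = (dr[0] - dl[-1]) / 2.0             # per-block correction in sample space
    d_dc = np.clip(want * np.sqrt(8.0), -Q / 2, Q / 2)

    cl, cr = ql * Q, qr * Q
    cl[0] += d_dc
    cr[0] -= d_dc
    dl2, dr2 = idct(cl, norm='ortho'), idct(cr, norm='ortho')
    print("boundary jump, nudged:", abs(dr2[0] - dl2[-1]))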

  36. Thanks (2):

    Hakan Abbas (11th August 2019),Jyrki Alakuijala (9th August 2019)

  37. #27
    Member jibz's Avatar
    Join Date
    Jan 2015
    Location
    Denmark
    Posts
    124
    Thanks
    106
    Thanked 71 Times in 51 Posts
    Jyrki, I wonder what your thoughts are on something like Basis Universal as a JPEG replacement? As I understand it, Basis is partly a Google project, and Geldreich seems to envision it replacing JPEG on the web in the future (see for instance this Twitter thread).

  39. #29
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    902
    Thanks
    246
    Thanked 326 Times in 199 Posts
    Quote Originally Posted by jibz View Post
    Jyrki, I wonder what your thoughts are on something like Basis Universal as a JPEG replacement?
    I suspect that Basis targets too high a quality range today to take JPEG's role. My guess is that the sweet-spot functionality of Basis is at ~4 BPP, for JPEG XL at ~1 BPP, and for AVIF/BPG/HEIF at < 0.5 BPP. I have never benchmarked Basis myself, so I could be awfully wrong here.

    The thinking of the Basis authors may be more focused on practicality and working-set size than on image quality. That is a visionary position, and we don't yet know how correct it is.
    Last edited by Jyrki Alakuijala; 9th August 2019 at 04:03.

  40. Thanks:

    jibz (9th August 2019)

  41. #30
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    902
    Thanks
    246
    Thanked 326 Times in 199 Posts
    Quote Originally Posted by Sentix View Post
    The problem is that it's the year 2019, not 2000. And you should be competing not against JPEG but against things like AVIF and BPG.
    We benchmark against them (of course there is no mature implementation of AVIF yet, which makes it a bit difficult) and we do rather well at around 1 BPP. The video codecs tend to do better at the lowest bit rates (0.5 BPP and lower).

    Quote Originally Posted by Sentix View Post
    postprocessing on JPEGs is probably a good idea too
    Absolutely! Even the old JPEG standard allows such postprocessing. The decoder/encoder just needs to respect the quantization ranges.
