
Thread: BWTIL: a set of tools to work with BWT-based text indexes

  1. #1
    Member
    Join Date
    Aug 2014
    Location
    Udine
    Posts
    17
    Thanks
    2
    Thanked 3 Times in 1 Post

    BWTIL: a set of tools to work with BWT-based text indexes

    Hi all,

    BWTIL (BWT Text Indexing Library) is a library containing a set of tools to efficiently manipulate the Burrows-Wheeler transform and BWT-based text indexes. Inside BWTIL you can find the following tools:


    • cw-bwt (paper coming soon) : a novel context-wise BWT construction algorithm
    • bwt-check : check consistency of a BWT file
    • bwt-to-sa : build the suffix array from a BWT file
    • sa-to-bwt : build the BWT from a suffix array file
    • bwt-invert : invert a BWT file to reconstruct the original text
    • dB-hash : an implementation of the dB-hash data structure (paper under review)
    • sFM-index : an implementation of a succinct (uncompressed) FM-index


    In particular, cw-bwt implements a new, ultra memory-efficient algorithm to compute the BWT: cw-bwt requires only compressed space in RAM, and its average-case running time is linear (worst case O(n log n)).
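
    For reference, here is a naive (quadratic) Python sketch, not the cw-bwt algorithm but just a tiny illustration, of the transform all these tools operate on: the BWT of the text followed by a 0x0 terminator, obtained by sorting the suffixes.

    Code:
    # Naive reference BWT (sort all suffixes). O(n^2 log n): only useful to
    # cross-check the output of cw-bwt / bwt-check on small inputs.
    def naive_bwt(text, sentinel='\x00'):
        assert sentinel not in text
        s = text + sentinel                                # terminator, smaller than every other symbol
        sa = sorted(range(len(s)), key=lambda i: s[i:])    # suffix array
        return ''.join(s[i - 1] for i in sa)               # char preceding each suffix (s[-1] wraps to the terminator)

    # naive_bwt("mississippi") == "ipssm\x00pissii"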

    A quick benchmark: on a 2.4 GHz Core i7 PC, cw-bwt is able to build the BWT of the human genome (3.2 billion characters over the alphabet {A,C,G,T,N}) in 4h 30', using only 990 MB of RAM.

    link: https://github.com/nicolaprezza/BWTIL

    enjoy!

    Nicola Prezza

  2. Thanks (3):

    Gonzalo (19th August 2014), nburns (19th August 2014), Stephan Busch (19th August 2014)

  3. #2
    Member biject.bwts's Avatar
    Join Date
    Jun 2008
    Location
    texas
    Posts
    449
    Thanks
    23
    Thanked 14 Times in 10 Posts
    I noticed from the write-up that it does not support all 256 symbols, since you state that the 0x0 symbol is reserved. Also, I see that it fails on what you call bad BWT files. Could you replace the particular form of BWT you are using with the BWTS transform, which is the bijective version of the old-style BWT?

  4. #3
    Member
    Join Date
    Aug 2014
    Location
    Udine
    Posts
    17
    Thanks
    2
    Thanked 3 Times in 1 Post
    Hi,

    Thank you for the suggestion. Yes, I admit it is a bit annoying to have the 0x0 byte reserved. I am not familiar with BWTS (I'll read about it); however, my cw-bwt algorithm relies strongly on the fact that there is a single character, smaller than all the others, terminating the text, so I expect it would be quite difficult to adapt the code. Note, however, that this approach (reserving the 0x0 byte) is also adopted by other libraries (sdsl, sais).

    My programs fail on malformed BWTs because, while scanning the BWT index backwards, a malformed BWT may cause a loop. The programs detect this situation, however, and inform the user.
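
    To make that concrete, here is a rough Python sketch (not the actual BWTIL code) of the backward LF-mapping walk with this kind of loop detection:

    Code:
    def invert_bwt(L, sentinel='\x00'):
        """Rebuild the text from a BWT that uses a single smallest terminator.
        A malformed BWT makes the LF walk close a cycle before covering all rows."""
        n = len(L)
        if L.count(sentinel) != 1:
            raise ValueError("malformed BWT: expected exactly one terminator")
        counts = {}
        for c in L:                                       # symbol frequencies
            counts[c] = counts.get(c, 0) + 1
        C, acc = {}, 0
        for c in sorted(counts):                          # C[c] = number of symbols smaller than c
            C[c], acc = acc, acc + counts[c]
        occ, seen = [0] * n, {}
        for i, c in enumerate(L):                         # occ[i] = occurrences of L[i] in L[0..i]
            seen[c] = seen.get(c, 0) + 1
            occ[i] = seen[c]
        lf = [C[L[i]] + occ[i] - 1 for i in range(n)]     # LF mapping
        out, p = [], 0                                    # row 0 starts with the terminator
        for _ in range(n - 1):
            if L[p] == sentinel:                          # cycle closed too early: malformed
                raise ValueError("malformed BWT: backward scan loops before covering the text")
            out.append(L[p])
            p = lf[p]
        return ''.join(reversed(out))

    # invert_bwt("ipssm\x00pissii") == "mississippi"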

  5. #4
    Member biject.bwts's Avatar
    Join Date
    Jun 2008
    Location
    texas
    Posts
    449
    Thanks
    23
    Thanked 14 Times in 10 Posts
    Quote Originally Posted by Nicola Prezza View Post
    Hi,

    Thank you for the suggestion. Yes, I admit it is a bit annoying to have the 0x0 byte reserved. I am not familiar with BWTS (I'll read about it); however, my cw-bwt algorithm relies strongly on the fact that there is a single character, smaller than all the others, terminating the text, so I expect it would be quite difficult to adapt the code. Note, however, that this approach (reserving the 0x0 byte) is also adopted by other libraries (sdsl, sais).

    My programs fail on malformed BWTs because, while scanning the BWT index backwards, a malformed BWT may cause a loop. The programs detect this situation, however, and inform the user.
    If you want to actually read about BWTS, the source code of Yata is likely the best place to look. Look at his various versions of openbwt; he wrote some of the fastest BWT code on the planet. When his first open BWT code was written, I asked if he could mod it to do BWTS, and he did in a few days. All his code is somewhere on this site.
    There is no such thing as a malformed file if you use BWTS. When a loop occurs that does not use the whole data, you have recovered one of the Lyndon words of the original file, and you just start the same procedure over for the next Lyndon word. If there is a single loop, then the UNBWT and UNBWTS are the same.
    Also, you could drop the zero token by using a pointer where you would have put the zero, and remember that it's less than everything else. My old site has some of the first code to do BWTS, but that was years ago. If you look at the wiki for BWT, they talk about the bijective BWT; that's BWTS. The wiki changes frequently, so I'm not sure what it says any more.
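
    A toy Python sketch of the pointer idea (sort all rotations and remember the row holding the unrotated text; this is not BWTS itself, just the zero-token-free variant I mean):

    Code:
    def bwt_with_pointer(text):
        """Rotation-based BWT with a primary index instead of a reserved 0x0 token.
        Naive O(n^2 log n) version, for illustration only."""
        n = len(text)
        rows = sorted(range(n), key=lambda i: text[i:] + text[:i])  # sort all rotations
        primary = rows.index(0)                     # the "pointer": row of the original text
        bwt = ''.join(text[(i - 1) % n] for i in rows)
        return bwt, primary

    # bwt_with_pointer("banana") == ("nnbaaa", 3)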

  6. #5
    Member
    Join Date
    Aug 2014
    Location
    Udine
    Posts
    17
    Thanks
    2
    Thanked 3 Times in 1 Post
    Interesting. I'll try to work on that when I have a bit more free time; at the moment I'm overwhelmed by work, so I'll hardly find the time to extend the code. I also thought, as you mention, about using a pointer to the beginning of the text instead of the 0x0 character, but I didn't like that idea much because the resulting BWT would have been a mix of text and binary pointers rather than pure text (well, it's a matter of preferences; I know the difference isn't that big). Thank you for all your suggestions!

  7. #6
    Member
    Join Date
    Dec 2012
    Location
    japan
    Posts
    150
    Thanks
    30
    Thanked 59 Times in 35 Posts
    mk-bwts is very fast.
    https://code.google.com/p/mk-bwts/

  8. Thanks (3):

    biject.bwts (20th August 2014), nburns (20th August 2014), Nicola Prezza (21st August 2014)

  9. #7
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    Quote Originally Posted by xezz View Post
    mk-bwts is very fast.
    https://code.google.com/p/mk-bwts/
    It's continuing to get faster, too. The devel branch has new code, and there are a few more changes I'm planning to introduce soon.

  10. Thanks:

    biject.bwts (20th August 2014)

  11. #8
    Member
    Join Date
    Aug 2014
    Location
    Udine
    Posts
    17
    Thanks
    2
    Thanked 3 Times in 1 Post
    Yes, the point of cw-bwt isn't speed but RAM usage: I designed this algorithm because I needed to index large files (gigabytes) with a variant of the FM-index; however, all the tools I could find on the web were really memory-hungry (most of them rely on the suffix array), so I was often out of RAM.

    cw-bwt, instead, is built on dynamic compressed strings (based on packed B-trees) and constructs the BWT incrementally (i.e. at the i-th step, the BWT of the length-i suffix of the text is complete and already compressed in RAM). Another example: 1 GB of English text (from the http://pizzachili.dcc.uchile.cl/ repository) is processed in 5h 50' using only 630 MB of RAM.

    This is better than external-memory algorithms (e.g. sdsl), which use between 1 and 1.5 bytes per symbol in RAM.

    If you need fast code, don't use cw-bwt. If you are out of RAM, well...use it

  12. #9
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,475
    Thanks
    26
    Thanked 121 Times in 95 Posts
    There's bwmonstr: http://nanozip.net/#bwmonstr which does the BWT in less memory than the input size (when the input is compressible using simple order-0 coding, I think). Also, Shelwien (a former admin of this forum) made a program which computes the BWT using less than 1 byte of memory per 1 byte of input.

    BBB from Matt Mahoney has an option to do the BWT in 1.25x the input size, IIRC.

  13. #10
    Member
    Join Date
    Aug 2014
    Location
    Udine
    Posts
    17
    Thanks
    2
    Thanked 3 Times in 1 Post
    Nice, I will try them, thanks!

  14. #11
    Member
    Join Date
    Aug 2014
    Location
    Udine
    Posts
    17
    Thanks
    2
    Thanked 3 Times in 1 Post
    Quote Originally Posted by Piotr Tarsa View Post
    There's bwmonstr: http://nanozip.net/#bwmonstr which does the BWT in less memory than the input size (when the input is compressible using simple order-0 coding, I think). Also, Shelwien (a former admin of this forum) made a program which computes the BWT using less than 1 byte of memory per 1 byte of input.

    BBB from Matt Mahoney has an option to do the BWT in 1.25x the input size, IIRC.
    Do you by any chance know if bwmonstr's code or a Linux executable is available anywhere? I am really curious to test it, but I can only find Windows executables. And what about Shelwien's program?

  15. #12
    Member
    Join Date
    Jun 2009
    Location
    Kraków, Poland
    Posts
    1,475
    Thanks
    26
    Thanked 121 Times in 95 Posts
    You can contact bwmonstr's author (Sami Runsas IIRC) for info. I don't know much about it.

    Shelwien's work is here: http://encode.su/threads/379-BWT-wit...sed-input-data - sources included.

  16. Thanks:

    Nicola Prezza (21st August 2014)

  17. #13
    Member
    Join Date
    Aug 2014
    Location
    Udine
    Posts
    17
    Thanks
    2
    Thanked 3 Times in 1 Post
    Thank you very much!

  18. #14
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    A while back I played with building the BWT using a straightforward incremental on-line algorithm and a compressed dynamic bitvector/wavelet tree data structure from Daisuke Okanohara's Google Code repo (https://code.google.com/u/Daisuke.Okanohara/). It was a workable method IIRC, but it was going very slow (I'm not sure if I was patient enough to let it finish). Unfortunately, oftentimes you pay a high price in performance for data structures that use little memory. I haven't gotten around to trying out your code, yet, but I'm assuming you put a lot of consideration into choosing a trade-off between performance and memory.

  19. #15
    Member
    Join Date
    Aug 2014
    Location
    Udine
    Posts
    17
    Thanks
    2
    Thanked 3 Times in 1 Post
    Quote Originally Posted by nburns View Post
    A while back I played with building the BWT using a straightforward incremental on-line algorithm and a compressed dynamic bitvector/wavelet tree data structure from Daisuke Okanohara's Google Code repo (https://code.google.com/u/Daisuke.Okanohara/). It was a workable method IIRC, but it was going very slow (I'm not sure if I was patient enough to let it finish). Unfortunately, oftentimes you pay a high price in performance for data structures that use little memory. I haven't gotten around to trying out your code, yet, but I'm assuming you put a lot of consideration into choosing a trade-off between performance and memory.
    I think that the method you used is very similar to mine: it is simply the algorithm that builds the BWT reading the text backwards, right? In my case, we introduced an additional element that makes the code much faster than the original method: in the original algorithm, you use only one dynamic compressed string, whose length is equal to the length of the text, N; it follows that the height of the trees used internally in the dynamic string is log N, so overall the algorithm has O(N log N) running time. In our case, we instead partition the BWT using length-k contexts (k is the entropy order and is approximately log N), and for each of them we maintain a dynamic string. The number of contexts is very high (asymptotically N/polylog(N)), so on average (assuming uniformly random input text) each context has length polylog(N), and the height of the trees used internally in the dynamic strings is O(1) (constant). Overall, this leads to an average-case O(N) time algorithm instead of O(N log N). In practice, this is confirmed: the algorithm is not as fast as SA-based algorithms, but it is considerably faster than algorithms that use only one dynamic string for the entire BWT.
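
    To make the comparison concrete, here is a toy Python sketch of that single-dynamic-string construction (a plain list stands in for the compressed dynamic string, so insert and rank are O(N) here and the whole sketch is quadratic; with balanced-tree dynamic strings each step costs O(log N), and cw-bwt keeps one short dynamic string per length-k context instead of a single string of length N):

    Code:
    def incremental_bwt(text, sentinel='\x00'):
        """Build the BWT of text + terminator by prepending one character at a time."""
        L = [sentinel]                               # BWT of the empty text
        p = 0                                        # row of the terminator in L
        for c in reversed(text):                     # read the text backwards
            L[p] = c                                 # c now precedes the current full suffix
            smaller = sum(1 for x in L if x < c)     # symbols lexicographically smaller than c
            rank = L[:p + 1].count(c)                # occurrences of c in L[0..p]
            p = smaller + rank                       # row of the new full suffix
            L.insert(p, sentinel)
        return ''.join(L)

    # incremental_bwt("banana") == "annb\x00aa"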

  20. #16
    Member
    Join Date
    Aug 2014
    Location
    Udine
    Posts
    17
    Thanks
    2
    Thanked 3 Times in 1 Post
    I forgot to mention: since the parameter k (the entropy order) can be chosen by the user (otherwise the optimal k is detected automatically), this is the trade-off: high values of k lead to more memory consumption (since there are more contexts, and for each of them some additional structures are stored) but also to shorter dynamic strings, so that all the operations are faster. On the other hand, choosing k=1 leads to an O(N log N) time algorithm (very slow), but with really low RAM requirements.

  21. #17
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    Quote Originally Posted by Nicola Prezza View Post
    I think that the method you used is very similar to mine: it is simply the algorithm that builds the BWT reading the text backwards, right?
    Yes. It's exactly like backward search.

    Quote Originally Posted by Nicola Prezza View Post
    In my case, we introduced an additional element that makes the code much faster than the original method: in the original algorithm, you use only one dynamic compressed string, whose length is equal to the length of the text, N; it follows that the height of the trees used internally in the dynamic string is log N, so overall the algorithm has O(N log N) running time. In our case, we instead partition the BWT using length-k contexts (k is the entropy order and is approximately log N), and for each of them we maintain a dynamic string. The number of contexts is very high (asymptotically N/polylog(N)), so on average (assuming uniformly random input text) each context has length polylog(N), and the height of the trees used internally in the dynamic strings is O(1) (constant). Overall, this leads to an average-case O(N) time algorithm instead of O(N log N). In practice, this is confirmed: the algorithm is not as fast as SA-based algorithms, but it is considerably faster than algorithms that use only one dynamic string for the entire BWT.
    Splitting up the tree by bounded context is a good idea. Did you index them directly or use a hash table? I think you would need to maintain rank statistics on all the trees combined, right? How does that work?

  22. #18
    Member
    Join Date
    Aug 2014
    Location
    Udine
    Posts
    17
    Thanks
    2
    Thanked 3 Times in 1 Post
    Quote Originally Posted by nburns View Post
    Yes. It's exactly like backward search.



    Splitting up the tree by bounded context is a good idea. Did you index them directly or use a hash table? I think you would need to maintain rank statistics on all the trees combined, right? How does that work?
    I use a de Bruijn-like automaton, where each state represents a context and an edge exists if two states differ only by one shift; each state then stores one dynamic string. The nice part of the algorithm is that you don't need rank statistics on all the trees combined, but only on each tree separately: to make things still work, you just need an additional partial-sums structure for each automaton state. Then, the automaton edges tell you in which dynamic string to make the next insert, and the rank and partial-sum structures tell you the offset inside the string.
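
    Just as an illustration (this is not the cw-bwt code), the per-state partial-sums structure can be any dynamic counter that supports prefix sums, e.g. a small Fenwick tree over the alphabet:

    Code:
    class PartialSums:
        """Dynamic counts per symbol with prefix sums: the kind of partial-sums
        structure each automaton state can keep alongside its dynamic string."""
        def __init__(self, sigma):
            self.tree = [0] * (sigma + 1)            # 1-based Fenwick tree over sigma symbols

        def add(self, c, delta=1):                   # record delta more occurrences of symbol c
            i = c + 1
            while i < len(self.tree):
                self.tree[i] += delta
                i += i & (-i)

        def prefix_sum(self, c):                     # total count of symbols with code < c
            i, total = c, 0
            while i > 0:
                total += self.tree[i]
                i -= i & (-i)
            return total

    # ps = PartialSums(4); ps.add(2); ps.add(0); ps.prefix_sum(2) == 1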

  23. #19
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    Do de Bruijn-like automata appear anywhere in the literature, or were they an improvisation? I think I know exactly what you're talking about. It could work like a suffix tree, but with limited depth. Interesting data structure. You could accomplish very similar ends by using a hash table with a rolling hash, couldn't you? Then the transitions would be implicit. But you would also have collisions.

    You're quite right that you do not need global ranks.
    Last edited by nburns; 23rd August 2014 at 03:39.

  24. #20
    Member
    Join Date
    Aug 2014
    Location
    Udine
    Posts
    17
    Thanks
    2
    Thanked 3 Times in 1 Post
    I call it a de Bruijn automaton because it is just a subgraph of a de Bruijn graph (which you can find in the literature). It is very simple: for example, node ACATN is connected to node GACAT because the latter can be obtained from the former by prepending G and removing N from the end.

    Yes, a rolling hash would also work, and maybe collisions are not a big problem (the computational bottleneck is in the dynamic string operations). In a first version I used direct hashing (which is trivially rolling), but it was not a good solution because many contexts do not appear in the text, so I was wasting many hash entries. A more general rolling hash function would do the job nicely. I'll think about that, thanks!
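
    For example, a Rabin-Karp style polynomial hash (just a sketch, not what cw-bwt does now) stays O(1) per step even with the backward scan, since moving the context window means prepending one symbol and dropping the last one:

    Code:
    def context_hashes_backward(text, k, B=1_000_003, P=(1 << 61) - 1):
        """Yield (start, hash) for every length-k window of text, scanning right to left.
        h(W) = (W[0] + W[1]*B + ... + W[k-1]*B^(k-1)) mod P, so prepending c and
        dropping d updates as h' = (c + B * (h - d * B^(k-1))) mod P."""
        n = len(text)
        assert 1 <= k <= n
        top = pow(B, k - 1, P)                       # B^(k-1) mod P
        h = 0
        for j in range(n - 1, n - k - 1, -1):        # Horner scheme: hash of the rightmost window
            h = (h * B + ord(text[j])) % P
        yield n - k, h
        for i in range(n - k - 1, -1, -1):           # prepend text[i], drop text[i + k]
            h = (ord(text[i]) + B * (h - ord(text[i + k]) * top)) % P
            yield i, h

    # list(context_hashes_backward("GACATN", 5)) -> hashes of "ACATN", then "GACAT"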

  25. #21
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    Quote Originally Posted by Nicola Prezza View Post
    I call it a de Bruijn automaton because it is just a subgraph of a de Bruijn graph (which you can find in the literature). It is very simple: for example, node ACATN is connected to node GACAT because the latter can be obtained from the former by prepending G and removing N from the end.
    Yes, I'm familiar with de Bruijn graphs. de Bruijn automata, if they are constructed on-line, seem like they might wind up resembling truncated suffix trees in their design and construction. Or, maybe the problem becomes much simpler than that. I love string-matching data structures, so if de Bruijn automata represent a new class, I'm intrigued.

    Quote Originally Posted by Nicola Prezza View Post
    Yes, a rolling hash would also work, and maybe collisions are not a big problem (the computational bottleneck is in the dynamic string operations). In a first version I used direct hashing (which is trivially rolling), but it was not a good solution because many contexts do not appear in the text, so I was wasting many hash entries. A more general rolling hash function would do the job nicely. I'll think about that, thanks!
    I have thought about how to do this sort of thing, mainly for the general unbounded case. IIRC, collision resolution doesn't have an obvious, perfect solution, but fixed contexts would make the situation simpler. I'll try to offer some ideas.

    Edit:
    One thing you probably want to avoid is having to rescan the entire k-substring to resolve collisions, because that adds a factor of k to hash table lookups, and there are at least N of those. That's what makes this problem interesting.
    Last edited by nburns; 23rd August 2014 at 05:38.

  26. #22
    Member
    Join Date
    Aug 2014
    Location
    Udine
    Posts
    17
    Thanks
    2
    Thanked 3 Times in 1 Post
    Quote Originally Posted by nburns View Post
    Yes, I'm familiar with de Bruijn graphs. de Bruijn automata, if they are constructed on-line, seem like they might wind up resembling truncated suffix trees in their design and construction. Or, maybe the problem becomes much simpler than that. I love string-matching data structures, so if de Bruijn automata represent a new class, I'm intrigued.
    I created the term for the occasion; maybe simply saying "de Bruijn labeled graph" is better, since it captures the essence of the structure. Yes, they are a sort of truncated suffix tree, though the latter are more complicated, since they collapse common suffixes of the contexts and represent the contexts explicitly; using an automaton, instead, you don't need to store the contexts explicitly at all (for example as string objects), since they are implicit in the labeled edges.

    I think the construction can easily be made online; in my implementation I instead first scan the text to detect all unique k-mers, sort them (I use hash tables, so this is very fast) and then build the automaton. This sort of structure is widely used in genome assembly, so here I am not adding anything new.

    As a matter of fact, the whole cw-bwt algorithm is not online, since it requires the text to be scanned a couple of times beforehand to detect the optimal k and build the data structures.

    Quote Originally Posted by nburns View Post
    I have thought about how to do this sort of thing, mainly for the general unbounded case. IIRC, collision resolution doesn't have an obvious, perfect solution, but fixed contexts would make the situation simpler. I'll try to offer some ideas.

    Edit:
    One thing you probably want to avoid is having to rescan the entire k-substring to resolve collisions, because that adds a factor of k to hash table lookups, and there are at least N of those. That's what makes this problem interesting.
    Yes, this is true. However, k-mers can be represented as integers in base sigma (sigma being the alphabet size), and then comparisons are constant-time. I already do that to detect and sort k-mers.

  27. #23
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    Quote Originally Posted by Nicola Prezza View Post
    Yes, this is true. However, k-mers can be represented as integers in base sigma (sigma being the alphabet size), and then comparisons are constant-time. I already do that to detect and sort k-mers.
    I'm not quite sure I follow what you're saying. k-mers can be represented as integers in base sigma, but does that change actually take place on the machine, or is it just a different way of conceptualizing the same thing?

  28. #24
    Member
    Join Date
    Aug 2014
    Location
    Udine
    Posts
    17
    Thanks
    2
    Thanked 3 Times in 1 Post
    Quote Originally Posted by nburns View Post
    I'm not quite sure I follow what you're saying. k-mers can be represented as integers in base sigma, but does that change actually take place on the machine, or is it just a different way of conceptualizing the same thing?
    It does take place on the machine: instead of keeping a context as a string C[0,...,k-1] (which requires O(k) time to manipulate, e.g. for comparisons), you keep the number c = C[k-1] * s^0 + C[k-2] * s^1 + ... + C[0] * s^(k-1), where s is the alphabet size. You can do that since k is small, so c fits in a typical 64-bit computer word. Now, it holds that C1 < C2 (lexicographic order) if and only if c1 < c2 (integer order). The comparison now costs O(1) instead of O(k). This heavily affects theoretical (and practical) results if you assume a RAM model of computation (i.e. standard computers).
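
    A minimal Python illustration of this packing (and of the de Bruijn "shift" edge applied directly to the packed value); the numbers are toy values, not taken from the actual implementation:

    Code:
    def pack(context, s):
        """context is a sequence of symbol codes in [0, s); k is small enough that s^k fits in a word."""
        c = 0
        for sym in context:
            c = c * s + sym                  # c = C[0]*s^(k-1) + ... + C[k-1]*s^0
        return c

    def shift(c, g, s, k):
        """Follow a de Bruijn edge: prepend symbol g, drop the last symbol of the context."""
        return g * s ** (k - 1) + c // s

    # With s = 4 (say A=0, C=1, G=2, T=3):
    # pack([0, 1, 0, 3], 4) == 19, and packed values compare exactly like the strings they encode;
    # shift(pack([0, 1, 0, 3], 4), 2, 4, 4) == pack([2, 0, 1, 0], 4)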

    This technique belongs to the fascinating subfield of algorithmics called packed computation, where you exploit the fact that words of w bits (w = 64 on a typical computer) can be manipulated in constant time; almost all modern string data structures exploit packed computation, as I do with the packed B-trees used in my algorithm.

  29. #25
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    Quote Originally Posted by Nicola Prezza View Post
    It does take place on the machine: instead of keeping a context as a string C[0,...,k-1] (which requires O(k) time to manipulate, e.g. for comparisons), you keep the number c = C[k-1] * s^0 + C[k-2] * s^1 + ... + C[0] * s^(k-1), where s is the alphabet size.
    You're right. The time to compare those is trivial. The point that I had intended to make, before my thoughts got away from me, is that you want to keep memory use under control, and that is what makes things interesting. You would like to avoid storing copies of the k-mers, because k-mers are overlapping and you would be storing a lot of redundant bytes. Tree structures have to deal with the same problem, but they can make the information implicit in the path from root to leaf, which achieves compression.
    Last edited by nburns; 24th August 2014 at 04:24.

  30. #26
    Member
    Join Date
    Aug 2014
    Location
    Udine
    Posts
    17
    Thanks
    2
    Thanked 3 Times in 1 Post
    Yes, and so does an automaton: using it, the contexts are not stored at all, since they are implicit in the edges (I store them explicitly only while building the automaton; afterwards the memory is freed). However, all these things are really marginal in the memory consumption of the algorithm, since the number of states (i.e. unique contexts) is very low compared to the text length. Overall, the memory overhead of the automaton is approx. 10%. Most of the work has to be done in the compressed dynamic strings and partial-sum data structures in order to keep memory usage at a minimum.

  31. #27
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    Quote Originally Posted by Nicola Prezza View Post
    Yes, and so does an automaton: using it, the contexts are not stored at all, since they are implicit in the edges (I store them explicitly only while building the automaton; afterwards the memory is freed). However, all these things are really marginal in the memory consumption of the algorithm, since the number of states (i.e. unique contexts) is very low compared to the text length. Overall, the memory overhead of the automaton is approx. 10%. Most of the work has to be done in the compressed dynamic strings and partial-sum data structures in order to keep memory usage at a minimum.
    You're right; it's probably comparatively little memory for ordinary, compressible files. It's not a bad idea to conserve resources wherever you can, though. Otherwise, your code may fail on pathological inputs, like de Bruijn sequences. Or the code may make poor use of the memory hierarchy and have poor performance (conserving memory doesn't automatically make better use of the memory hierarchy; they're somewhat orthogonal concerns, but it's good to pay attention to memory in general).

  32. #28
    Member
    Join Date
    Aug 2014
    Location
    Udine
    Posts
    17
    Thanks
    2
    Thanked 3 Times in 1 Post
    There is no such pathological input: the algorithm is designed so that all auxiliary structures (automaton, partial sums, and so on) occupy o(N) space. This is guaranteed theoretically, so the space will always be N * Hk + o(N). In practice, too, the o(N) term is very small and is not a big concern.

    de Bruijn sequences are the best input for the algorithm, not the worst: with de Bruijn sequences, each text partition (induced by the length-k contexts) has exactly the same length, so the load is uniformly distributed among the dynamic strings. The fact that they are not compressible simply means that the space will be N * log s + o(N), where s is the alphabet size.

    The worst case, instead, is a highly redundant text, which causes some dynamic strings to have length O(N). This raises the time up to O(N log N), but the space remains N * Hk + o(N). In practice, however, we observed that with all texts of interest (proteins, DNA, English, XML) the load is always uniformly distributed, so the running time is in practice always O(N).

  33. #29
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    With respect to de Bruijn sequences being a worst case, I was thinking of the hypothetical hash table (due to potentially ballooning memory requirements), not your data structures. It's an interesting feature of text problems that the best case for one algorithm is the worst case for another and vice versa.

  34. #30
    Member
    Join Date
    Feb 2013
    Location
    San Diego
    Posts
    1,057
    Thanks
    54
    Thanked 71 Times in 55 Posts
    The online construction for de Bruijn automata probably parallels the one for suffix trees.
