
Thread: Software protection

  1. #1 Shelwien (Administrator)

    Software protection

    > Can you please suggest a harder-to-break way to implement
    > time check in future versions

    1. The common way involves software protections like http://vmprotect.ru
    But it's not really a solution for a time check, because it's always possible
    to trace API calls and make a loader which changes the time for the app.
    At least it makes sense to avoid any kind of time/date-related calls.
    Instead, some indirect source can be used, like %SystemRoot%\WindowsUpdate.log
    or some registry entries (a rough sketch follows below, after point 4).

    2. I don't really understand why time-limited demos are so popular.
    There's no way to acquire the real time on a computer
    (except via the internet, but with internet access you don't need the time limit),
    so there's always some kind of workaround even without cracking the app.
    Also, when the demo stops working, it's very annoying for the users.

    3. The only near-perfect way to control your software (aside from hardware keys)
    is internet access. For example, you can put a small block of the app's compressed code
    into a file on your site, and make a loader that downloads it on each run -
    like that, there would be no way to crack it once it can't get the necessary part anymore
    (see the second sketch below).

    4. There are less annoying ways to make demos. You can reduce the feature set,
    or just compile the demo without speed optimizations.
    The safest way is certainly to turn it into a web service - upload the file,
    process it, then download the result.
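
    As a rough illustration of the indirect time source from point 1, here's a minimal
    sketch that infers expiry from the last-write time of a file the OS keeps updating,
    instead of asking a time API that a loader could spoof. The path and the expiry date
    are illustrative assumptions, not a recommendation of this exact file.
    Code:
    // expiry check via a file timestamp rather than GetSystemTime()/time()
    #include <windows.h>

    bool demo_expired(void)
    {
        char path[MAX_PATH];
        ExpandEnvironmentStringsA("%SystemRoot%\\WindowsUpdate.log", path, MAX_PATH);

        WIN32_FILE_ATTRIBUTE_DATA fad;
        if (!GetFileAttributesExA(path, GetFileExInfoStandard, &fad))
            return false;                          // fail open if the file is missing

        // hypothetical expiry date: 2010-12-31 00:00 UTC, stored as FILETIME
        SYSTEMTIME st = { 2010, 12, 0, 31, 0, 0, 0, 0 };
        FILETIME expiry;
        SystemTimeToFileTime(&st, &expiry);
        return CompareFileTime(&fad.ftLastWriteTime, &expiry) > 0;
    }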
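
    And a minimal sketch of the loader idea from point 3, assuming WinInet for the
    download and a decompress() routine that the app already ships; the URL, buffer
    sizes and error handling are placeholders.
    Code:
    // fetch a missing, compressed code block on every run and map it as executable
    #include <windows.h>
    #include <wininet.h>
    #pragma comment(lib, "wininet.lib")

    size_t decompress(const BYTE *src, DWORD srclen, BYTE *dst);  // assumed: the app's own codec
    typedef int (*entry_t)(void);

    entry_t fetch_missing_part(void)
    {
        HINTERNET net = InternetOpenA("loader", INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0);
        HINTERNET url = InternetOpenUrlA(net, "https://example.com/stub.bin", NULL, 0, 0, 0);
        BYTE packed[65536]; DWORD got = 0;
        if (url) InternetReadFile(url, packed, sizeof(packed), &got);
        InternetCloseHandle(url); InternetCloseHandle(net);

        // without the server's reply this memory block simply never exists
        BYTE *code = (BYTE*)VirtualAlloc(NULL, 1 << 20, MEM_COMMIT, PAGE_EXECUTE_READWRITE);
        return decompress(packed, got, code) ? (entry_t)code : NULL;
    }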

    > I know it's impossible to have an unbreakable lock, but I will really
    > appreciate some easy tips on making it harder to break.

    It's possible, but it affects customers. It's really funny when paying customers
    have to manually type a 100-symbol key, then wait an extra 10s each time the app starts
    (while the app is decrypted and the key is verified), and tell their AVs that it's not a virus,
    while users of the cracked version don't have any of these problems.

  2. #2 m^2 (Member)
    Quote Originally Posted by Shelwien View Post
    3. The only near-perfect way to control your software (aside from hardware keys)
    is internet access. For example, you can put a small block of the app's compressed code
    into a file on your site, and make a loader that downloads it on each run -
    like that, there would be no way to crack it once it can't get the necessary part anymore.
    And what's the problem with running a valid version of the app to obtain the downloadable or decryptable code and embedding it in a crack?

  3. #3 Shelwien (Administrator)
    > And what's the problem with running a valid version of the app to
    > obtain the downloadable or decryptable code and embedding it in a crack?

    1. Nobody would crack it while it still works (it's a time-limited demo!),
    and it's impossible to crack once it no longer does (because an essential part is missing).
    That's the main point, but even in the unlikely case that somebody tried
    cracking it while it still works, such a setup still provides additional protection
    compared to a plain exe without any external parts.
    It's easy to prevent capture of the network data (plain https would do), so
    the only way to crack it would be static analysis - no runtime debuggers and
    no trial-and-error, because the protection can report hacking attempts to the server.
    And, well, that's hard.

    Example: http://hiew.ru/ (".cah" etc)

    2. For legally purchased versions there's an interesting alternative:
    watermarking. Basically it means selling a unique version of the app
    to each customer (but it has to be nontrivially unique - for example, compiled
    with different compiler options - so that it isn't possible to detect the watermarks
    by diffing two builds).
    Like that, if some version leaks to the net, you'd know whom to sue,
    or at least which version to blacklist (in conjunction with a network check).

    Example: http://hex-rays.com/ (ida pro is watermarked)

  4. #4 m^2 (Member)
    Quote Originally Posted by Shelwien View Post (see post #3 above)
    I don't find any of these a show-stopper - it can be cracked more easily than you say - but nevertheless it's much harder than I thought. Thanks for the explanation.

  5. #5 Shelwien (Administrator)
    > I don't find any of these a show stopper, can be cracked easier than you say

    The normal way to bypass protections is by tracing the app.
    Encryption layers get decrypted naturally; sometimes you encounter an
    anti-debugger trick and have to use some workaround (patch the check,
    use a different type of breakpoint, or just a better debugger).
    Doing the same thing statically is a major pain - basically it means
    writing an external decryptor for the protection and having to understand
    all the tricks in the code (many of the common tricks don't affect any sane
    debugger, so normally they can be ignored).

    But with a network-dependent protection we can't happily trace the app
    until we hit a trap anymore, because that trap can tell the server to
    block your IP and then you're done. And "tell the server" can be done by setting
    a flag in a request to the server which is sent anyway.
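
    To make the last point concrete, here is a minimal sketch of such a request;
    the field names, layout and bit value are illustrative assumptions, not a real protocol.
    Code:
    #include <stdint.h>

    struct Request {
        uint32_t license_id;   // id of this watermarked copy
        uint32_t nonce;        // makes replay of a captured packet useless
        uint32_t tamper_bits;  // each anti-debug check silently sets one bit here
        uint32_t checksum;     // covers the fields above under the session key
    };

    void on_debugger_suspected(Request &r)
    {
        // no visible reaction at all; the server decides what to do with it
        r.tamper_bits |= 1u << 3;  // the bit index is arbitrary in this sketch
    }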

  6. #6 m^2 (Member)
    Quote Originally Posted by Shelwien View Post (see post #5 above)
    But you don't have to do it statically. You can first run the app offline to see what you can find. Then it goes to download and spoils the fun. So the next time, you let it download, then break the connection and attach a debugger again.
    You can run it several times at first to learn when to stop, judging only on what is being sent / how much data is being sent.
    SSL and all other secure protocols can be broken when you control one side.
    This can be made harder by downloading the code in pieces, with checks along the way.
    Also, a connection frequently breaking at weird moments is something that can be detected by the server and lead to a ban...
    ...but such a ban might be questioned, and 'we can stop providing you service if you have a flaky connection, with no cash back' does not look good in licenses.
    So it can still be a major pain, but it's not that bad to actually have to do it statically. Especially when the cost of buying a new copy of the program is low and you can afford to waste a few.

    And as to watermarking - you can blend a few copies to get something unique; even 2 should do. Though it takes some work to check what each difference really means.

  7. #7 Stephan Busch (Tester)
    I also vote for no time limits. It's free, so it's supposed to run forever. Customers won't use time-limited tools.
    In order to become more popular without annoying users, Rawzor should be continued without the time limitation.

    For me as a tester it's even more important to have no time-limited tools, because test sets change from time to time.

    Nevertheless, cracking is neither good nor motivating for authors. We should discuss compression here.

  8. #8 Member (Nordic)
    The best way to keep something private is not to distribute it.

    If you want it to be secret source, do it as an online service.

    As the user has both the input and the output, however, they can still infer and recreate what you did secretly in the middle, of course.

  9. #9 Shelwien (Administrator)
    > Nevertheless, cracking is neither good nor motivating for authors.

    As to motivation, it's different for different authors.
    It's obvious that users will do what they want, and it's an interesting
    technical problem to only let them do what the author wants.
    Still, I think it'd be a worse world if there were easy patenting and
    perfect patent enforcement.

    > We should discuss compression here.

    Maybe I'll move it to a separate thread later.
    I have a compression-based protection idea though - a VM based
    on arithmetic coding can't be reverse-engineered incrementally :)

    > As the user has both the input and the output, however, they can
    > still infer and recreate what you did secretly in the middle, of course.

    For compression that's unlikely, because the hidden state is too large :)

    > But you don't have to do it statically. You can first run an app
    > offline to see what can you find.

    Yes, so it runs offline - with compression disabled, because an essential
    part of the compression code couldn't be downloaded.

    > Then it goes to download and spoils the fun.

    The main idea with software protections is to keep cause and effect as
    far apart as possible. I.e., instead of checking the timer and immediately quitting,
    it should change something in the least expected place (e.g. a crc32 table) and
    detect that in other place(s) - e.g. compute the crc32 of some things and compare
    it with known values. Ideally the fact that some checks failed shouldn't be reported
    in any way at all; instead the check results can be used as constants in processing,
    so that failed checks introduce tricky bugs - that's the hardest to crack.
    Unfortunately, in practice that idea doesn't work, because the users of
    buggy cracked versions start spamming about the app being buggy, and that affects sales.
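
    A minimal sketch of that cause/effect separation; the table index, constants and
    the simple checksum are made up for illustration, not taken from any real protection.
    Code:
    #include <stddef.h>
    #include <stdint.h>

    static uint32_t crc_table[256];              // stands in for the app's real table, built at startup

    void cause(bool expired)                     // called near startup
    {
        if (expired) crc_table[137] ^= 0x04C11DB7;   // one quiet bit of damage, no branch on it later
    }

    uint32_t effect(void)                        // called much later, deep in the codec
    {
        uint32_t sum = 0;                        // any checksum of the table will do for the sketch
        for (size_t i = 0; i < 256; i++) sum = sum * 33 + crc_table[i];
        const uint32_t GOOD = 0xDEADBEEF;        // hypothetical checksum of the intact table
        return 0x12345678 ^ (sum ^ GOOD);        // wrong constant => subtly broken output, no error message
    }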

    > So the next time, you let it download and then break
    > a connection and attach a debugger again.

    Things don't work that way. Would you run a known harmful trojan on your system,
    hoping that the installed AV would catch it?
    In other words, you have to trace it from the start, or you won't know how it did
    whatever it did.
    To be specific, it's easy to prevent reconstruction of the complete program from memory dumps,
    by putting the code into multiple heap memory blocks and randomizing the imports.
    Also, in the case of vmprotect, it never decrypts the critical functions at all - instead
    it recompiles them into randomized VM code and interprets that at runtime.
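
    For illustration only, a minimal sketch of the randomized-VM idea (not vmprotect's
    actual design): the original x86 code is gone, and only bytecode plus an interpreter
    whose opcode numbering is chosen per build remain. The opcode values here are arbitrary.
    Code:
    #include <stdint.h>
    #include <vector>

    struct TinyVM {
        std::vector<uint32_t> stack;
        void run(const uint8_t *pc) {
            for (;;) {
                switch (*pc++) {                     // opcode values differ in every build
                case 0x9B: stack.push_back(*(const uint32_t*)pc); pc += 4; break;  // push imm32
                case 0x21: { uint32_t a = stack.back(); stack.pop_back();
                             stack.back() += a; } break;                           // add
                case 0x6E: return;                                                 // halt
                default:   return;                   // unknown opcode: could act as a trap
                }
            }
        }
    };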

    > You can run it several times at first to know when to stop, judging
    > only on what is being sent / how much of data is being sent.

    As I said before, the communication should always be the same (send a request / receive a block),
    and to control that, you'd have to be able to trace the code that generates and
    sends the request. And for that you have to correctly trace the code from
    the start. Obviously the program shouldn't process unencrypted data with OS APIs,
    so API capture won't be of any help.

    Also at this point it already becomes risky for the hacker, because
    incomplete/broken request packets can be detected by the server, and
    in the worst case, the whole (unique, watermarked) program instance
    can get blacklisted, and you'd have to download a new one and start from scratch.

    To clarify things:
    1. At the time of the network exchange, the program has already built its unique
    runtime layout, so dumping it after it closes the connection is no better
    than dumping it while it waits for user input.
    2. The whole idea with the network exchange is to provide an external
    protection module which can't be controlled by the hacker.
    You won't know whether you made any mistakes while tracing it until
    it reports to the server, and it's too late after that.
    And starting to trace it only after it closes the connection is also
    too late, because at that point you'd have to deal with an unknown
    polymorphic runtime layout.

    > SSL and all other secure protocols can be broken when you control
    > one side.

    Yes, but you don't control the program if you don't control it from the start.
    Also, strong encryption is not really the point there; I just thought
    that SSL would be good enough, because it's pretty complicated
    (and all of that complicated algorithm can be interleaved with the protection).
    Any encryption stronger than "XOR const" would do, though - it's needed there
    only to prevent capture-and-replay of the network traffic.

    > This can be made harder by downloading code in pieces with checks on the way.

    Yes, but that increases the delay and puts more load on the server, so in practice
    one exchange via a simple standard protocol (like http or https) is desirable.
    Ideally the server should do some custom processing for each request (e.g. adjust
    the code to the specific memory offsets/imports), but it would be strong enough even
    without that.

    > ....but such ban might be questioned and 'we can cease to provide
    > you service if you have flaky connection with no cash back' does not
    > look good in licenses.

    It should be safe to assume that any connection can transfer
    a single TCP packet without breaking - integrity only matters for
    the request data; the connection can break however it wants after that.

    > So it can still be a major pain, but it's not that bad to actually have
    > to do it statically.

    I think it is. You have to know how to build a valid request packet
    in order to capture the response. And to do that, you have to trace all the
    protection code up to the request being sent, without triggering any traps.

    > Especially when cost of buying a new copy of a program is low and
    > you can afford wasting a few.

    Might be a good high-tech replacement for roulette :)

    > And as to watermarking - you can blend a few copies to get something
    > unique, even 2 should do. Though it takes some work to check what
    > each difference really means.

    That's why it's called "watermarking" - only some specific details matter.
    So if it's done properly, then by mixing parts from a few copies you'd
    just compromise all of those copies.
    Sure, there are some possibilities, like putting your own protection on it
    and having the author do the cracking part, but then a memory dump
    would be enough to detect the watermarks, even if it's not enough to build
    a working program. In theory it's possible to completely recompile the
    code (decompile/restructure/recompile, basically), but even that doesn't
    guarantee that _all_ watermarks would be removed (some could be in the data).

  10. #10 m^2 (Member)
    Quote Originally Posted by Shelwien View Post
    Things don't work that way. Would you run a known harmful trojan on your system,
    hoping that the installed AV would catch it?
    No, I would run it in a VM. And if I did it professionally, I'd have a dedicated PC for this purpose, in case it can exploit some VM bug to get out.
    And the point of what I said was not yet to infer how it did things, but rather what it did and what to look for - what data is exchanged with the server and such.

    Quote Originally Posted by Shelwien View Post
    In other words, you have to trace it from the start, or you won't know how it did
    whatever it did.
    To be specific, it's easy to prevent reconstruction of the complete program from memory dumps,
    by putting the code into multiple heap memory blocks and randomizing the imports.
    Mhm.
    Quote Originally Posted by Shelwien View Post
    Also, in the case of vmprotect, it never decrypts the critical functions at all - instead
    it recompiles them into randomized VM code and interprets that at runtime.
    You can trace the interpreter just like regular code.


    Quote Originally Posted by Shelwien View Post
    > You can run it several times at first to know when to stop, judging
    > only on what is being sent / how much of data is being sent.

    As I said before, the communication should always be the same (send a request / receive a block),
    and to control that, you'd have to be able to trace the code that generates and
    sends the request. And for that you have to correctly trace the code from
    the start. Obviously the program shouldn't process unencrypted data with OS APIs,
    so API capture won't be of any help.
    You need to have your compression code intact before you send anything important, so it can be safely read by the adversary. It can't do anything but obfuscate communication.

    Quote Originally Posted by Shelwien View Post
    Also at this point it already becomes risky for the hacker, because
    incomplete/broken request packets can be detected by the server, and
    in the worst case, the whole (unique, watermarked) program instance
    can get blacklisted, and you'd have to download a new one and start from scratch.
    Definitely.

    Quote Originally Posted by Shelwien View Post
    To clarify things:
    1. At the time of the network exchange, the program has already built its unique
    runtime layout, so dumping it after it closes the connection is no better
    than dumping it while it waits for user input.
    I meant breaking it as soon as your computer gets the data, preferably even before it gets to the program. Then trace it offline, so the program decompresses its brand-new code, does all the checks it has programmed, and can't report them back because you catch all the outgoing data.

    Quote Originally Posted by Shelwien View Post
    2. The whole idea with the network exchange is to provide an external
    protection module which can't be controlled by the hacker.
    You won't know whether you made any mistakes while tracing it until
    it reports to the server, and it's too late after that.
    Yes, it can be frustrating, and you may lose a few licenses, which may or may not be a problem.

    Quote Originally Posted by Shelwien View Post
    > SSL and all other secure protocols can be broken when you control
    > one side.

    Yes, but you don't control the program if you don't control it from the start.
    You can infer (and modify) what you need while you're offline.

    Quote Originally Posted by Shelwien View Post
    > Especially when cost of buying a new copy of a program is low and
    > you can afford wasting a few.

    Might be a good high-tech replacement for roulette
    Nah. If you're reasonably good, no check will catch you more than once, so there's little randomness in it.


    Quote Originally Posted by Shelwien View Post
    > And as to watermarking - you can blend a few copies to get something
    > unique, even 2 should do. Though it takes some work to check what
    > each difference really means.

    That's why it's called "watermarking" - only some specific details matter.
    So if it's done properly, then by mixing parts from a few copies you'd
    just compromise all of those copies.
    Sure, there are some possibilities, like putting your own protection on it
    and having the author do the cracking part, but then a memory dump
    would be enough to detect the watermarks, even if it's not enough to build
    a working program. In theory it's possible to completely recompile the
    code (decompile/restructure/recompile, basically), but even that doesn't
    guarantee that _all_ watermarks would be removed (some could be in the data).
    I guess you're right. It's not as easy as it seemed.

    BTW, one interesting protection came to my mind. The server could have a large number of validity checks and upload a random one to the client. Getting them all would take a lot of time. I wonder whether one could generate them polymorphically in a way that doesn't make it easy to block them all at once. Or something way cooler, but probably less practical: have them all stored in something like Rubberhose and just upload the passwords.
    And you could add significantly different checks to your server from time to time. And make sure nobody hacks your server, or your users might get angry at you for spreading malware.

  11. #11 Shelwien (Administrator)
    > And the point of what I said was not yet to infer how it did things
    > but rather what it did

    At that point it could fill the whole memory with polymorphic code -
    how do you intend to analyze that?
    The start of the program is the weak point exactly because there we
    still comprehend the environment: where the program starts, and what
    its first actions are.

    > and what to look for, what data is exchanged with the server and such.

    Some encrypted data. But with any reasonable encryption it's more expensive
    to crack than the code.

    > You can trace the interpreter just like regular code.

    Wanna try? Obviously that interpreter is 95% anti-debugger traps.
    Btw, even tracing plain sequential code can be hard, because the x86
    instruction set is very complex and not quite fully documented.
    Single-stepping instructions with the CPU's debug features (like int1 and int3)
    lets you execute all instructions correctly, but there are methods to break out of it
    (e.g. the CPU skips the trace interrupt after MOV SS,xx - supposedly to process it
    atomically with the following MOV SP,xx - but we can put a POPF there instead, which
    would clear TF, and the debugger would lose control).
    It's also not fully transparent either.
    And only Intel can make a perfect emulator - at least, there are always new tricks
    to fool AV tracers and such.
    But the instruction set is not the real problem actually, because there's also
    the interrupt system, some basic hardware like the timer (currently emulated by the OS,
    which makes it harder to emulate for a debugger under that OS), and the whole
    OS with its quirks and APIs.
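
    For the record, a minimal sketch of the MOV SS / POPF escape mentioned above,
    assuming 32-bit x86 and GCC/Clang inline asm; this is for illustration only.
    Code:
    // if a debugger traces this via TF, the trap is inhibited after the load into SS,
    // the POPF runs unseen and clears TF, and the tracer loses control
    void drop_trap_flag(void)
    {
        __asm__ __volatile__(
            "pushfl                   \n\t"   // save EFLAGS (TF is set while being traced)
            "andl $0xFFFFFEFF, (%%esp)\n\t"   // clear TF in the saved copy
            "movw %%ss, %%ax          \n\t"
            "movw %%ax, %%ss          \n\t"   // traps are inhibited for one instruction after MOV SS
            "popfl                    \n\t"   // executes untraced; TF is now gone
            ::: "eax", "memory");
    }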

    > You need to have your compression code intact before you send
    > anything important, so it can be safely read by the adversary.
    > It can't do anything but obfuscate communication.

    As I said before, the main point of the network exchange is to prevent
    the trial-and-error bruteforce method, which is currently the easiest
    way to crack a program.
    It's also possible to let the server do some computing for the program,
    but it won't matter if we can trace the program up to the point
    where it sends the request.

    >> 1. At the time of network exchange, the program has already built
    >> its unique runtime layout, so dumping it after it closes the
    >> connection is no better than dumping it when it waits for user input.

    > I meant breaking it as soon as your computer gets the data,
    > preferably even before it gets to the program. Then trace it
    > offline, so the program decompresses its brand new code, does all
    > the checks it has programmed and can't report them back because you
    > catch all the outgoing data.

    As I said, that's not any different from attaching a debugger to
    a program while it waits for user input.
    We have to presume that the program has already taken control of
    the system at this point (and it actually can do just that if
    it uses a driver or some good exploits).
    Again:
    1. To understand the runtime layout, we need to trace the program
    from the start; there's no other way (at least if the protection
    is designed right). For example, when the program is already running,
    it normally has to use many functions imported from system DLLs.
    There's no certain way to locate all the API calls in the program's
    memory dump (see "halting problem"), and you have to be able to
    relocate these, or the program won't run on a different OS version
    (or even with different boot options).
    Btw, relocation is a problem in itself, even if you somehow know the
    spots, because these spots don't need to contain a function
    address which you can simply patch.
    2. The point of the network request is to prevent trial-and-error
    tracing, where you only understand where a trap was after triggering it.
    3. The point of downloading a code patch is to make sure that
    the network request has to be sent for the program to work.

    > Nah. If you're reasonably good, no check will catch you more than
    > once, so there's little randomness in it.

    Yeah, but there can be 1000s of these tricks... modern protections
    are not hand-written anyway.

    > BTW one interesting protection came to my mind. Server could have a
    > large number of validity checks and upload a random one to the client

    Maybe it's interesting...
    But there's actually a common mistake where the protection designer makes multiple
    valid paths for some reason - e.g. a low-level protection to run under admin
    and something simpler to run without admin rights. Or a fully protected x86
    version and a barely protected x64 version.
    Anyway, in this case it's as weak as the weakest path, while development
    time has to be spent on each path.
    And in fact that's still the good case, because if you have N password hashes,
    it can be N times faster to find a password (each guess can be tested against all N at once).

  12. #12 m^2 (Member)
    Quote Originally Posted by Shelwien View Post
    > and what to look for, what data is exchanged with the server and such.

    Some encrypted data. But with any reasonable encryption it's more expensive
    to crack than the code.
    If you mean the encryption code - yes. If you mean the protected code - no.

    Quote Originally Posted by Shelwien View Post
    > You can trace the interpreter just like regular code.

    Wanna try? Obviously that interpreter is 95% anti-debugger traps. [...]
    Yeah, I only wanted to say that tracing an interpreter is not much different from tracing regular code. It's another layer of complexity, but that's all.

    Quote Originally Posted by Shelwien View Post
    > You need to have your compression code intact before you send
    > anything important, so it can be safely read by the adversary.
    > It can't do anything but obfuscate communication.

    As I said before, the main point of the network exchange is to prevent
    the trial-and-error bruteforce method, which is currently the easiest
    way to crack a program.
    I don't know what you mean by brute force...

    Quote Originally Posted by Shelwien View Post
    2. The point of the network request is to prevent trial-and-error
    tracing, where you only understand where a trap was after triggering it.
    Mhm. You disable all the traps up to the point of communication, undetected. You do the communication. Then restart, disable all you know about, and trick the program into thinking it talks to the server while it really talks to you. Sure, you have to understand the protocol, but you can learn it just by analysing the program offline and passively sniffing the communication. Then see what the downloaded part does and what happens while the download is happening, without allowing the program to report anything. I still don't see the problem.

    Quote Originally Posted by Shelwien View Post
    > BTW one interesting protection came to my mind. Server could have a
    > large number of validity checks and upload a random one to the client

    Maybe it's interesting...
    But there's actually a common mistake where the protection designer makes multiple
    valid paths for some reason - e.g. a low-level protection to run under admin
    and something simpler to run without admin rights. Or a fully protected x86
    version and a barely protected x64 version.
    Anyway, in this case it's as weak as the weakest path, while development
    time has to be spent on each path.
    Mhm.
    Quote Originally Posted by Shelwien View Post
    And in fact that's still the good case, because if you have N password hashes,
    it can be N times faster to find a password.
    Yeah, the Rubberhose idea wasn't a good fit here.
