
Thread: JPEG XL release candidate

  1. #1
    Member
    Join Date
    Nov 2020
    Location
    Zurich
    Posts
    1
    Thanks
    0
    Thanked 8 Times in 1 Post

    JPEG XL release candidate

    Hi everyone!

    We are releasing version 0.1 of JPEG XL here: https://gitlab.com/wg1/jpeg-xl/-/releases/v0.1

    This version is a format release candidate for the codestream, meaning that we don't plan any further modifications and are ready for freezing.

    Any feedback is appreciated!

    Luca

  2. Thanks (8):

    Cyan (14th November 2020),fabiorug (14th November 2020),Jarek (14th November 2020),Jon Sneyers (14th November 2020),Jyrki Alakuijala (14th November 2020),Mike (14th November 2020),schnaader (15th November 2020),Scope (14th November 2020)

  3. #2
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    58
    Thanks
    11
    Thanked 37 Times in 20 Posts
    Maybe to clarify a bit more:

    This version 0.1 is a format release candidate, meaning that we don't plan to do any backwards-incompatible bitstream modifications and we are getting ready for freezing. Our aim is that files will remain decodable in the future at this point, though we cannot promise it yet. If we discover a bug, it may still lead to a bitstream-breaking fix, but we will try to avoid that if possible.

    So this version is suitable for trying out the codec, but please don't yet start converting all your images to jxl and deleting the originals.

  4. Thanks (2):

    Jyrki Alakuijala (14th November 2020),Mike (14th November 2020)

  5. #3
    Member
    Join Date
    Aug 2020
    Location
    Italy
    Posts
    72
    Thanks
    19
    Thanked 2 Times in 2 Posts
    Ok, good. I will try. But sometimes at higher quality, lossy JPEG-to-JXL conversion makes a mouth look like a truffle (though I tried the Squoosh implementation with the speed setting backwards; it will be updated and I will try cjpegxl).
    The codec is good at generation loss, but I'm curious about modular at quality 0, or negative values.
    Does it edit image features? Does it have special heuristics (decisions) for drawings, like a drawing mode?
    Or is the focus only on generation loss, with no plans to change or add this?
    Also, will you collaborate with Pascal Massimino (skal) on improving the standard?
    I don't have much feedback; I'm not a developer.

  6. Thanks:

    Jyrki Alakuijala (14th November 2020)

  7. #4
    Administrator Shelwien's Avatar
    Join Date
    May 2008
    Location
    Kharkov, Ukraine
    Posts
    4,013
    Thanks
    303
    Thanked 1,328 Times in 759 Posts
    There are Windows binaries in this thread: https://forum.doom9.org/showthread.php?t=174300&page=17
    Also, apparently WebP2 is finally out?

  8. Thanks:

    Jyrki Alakuijala (15th November 2020)

  9. #5
    Member
    Join Date
    Aug 2020
    Location
    Italy
    Posts
    72
    Thanks
    19
    Thanked 2 Times in 2 Posts
    Those are old binaries; also, Jamaika hasn't tried to compile WebP 2 yet.

  10. Thanks:

    Jyrki Alakuijala (14th November 2020)

  11. #6
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    58
    Thanks
    11
    Thanked 37 Times in 20 Posts
    The focus has not particularly been on generation loss – yesterday was the first time I started taking a serious look at it, actually (though others have done some tests in the past). But high visual fidelity has been a major focus, and I suppose that generation loss resilience is a byproduct of that.

    Negative modular quality is just like in FLIF: it doesn't really make sense to limit the scale to 0 to 100, since you can go as lossy as you want. Of course, the useful range in practice is "quality" 50 to 100, or distance 0 to 4 or so.

    It is too late now to improve the standard itself (i.e. the decoder side of things), since we are in the final stages of standardization. There will still be implementation improvements (e.g. speed, or even just handling all the exotic cases allowed by the standard correctly), and of course there will also still be encoder-side improvements. We will collaborate with anyone who wants to collaborate with us.

    One thing skal did bring to the standard is the idea of allowing the variable-sized blocks to be floating (i.e. not naturally aligned on an offset that is a multiple of the block size), which we allow in the standard, but afaik the encoder doesn't take advantage of it yet.

  12. Thanks (2):

    fabiorug (14th November 2020),Jyrki Alakuijala (14th November 2020)

  13. #7
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    924
    Thanks
    255
    Thanked 331 Times in 203 Posts
    Quote Originally Posted by Jon Sneyers View Post
    variable-sized blocks to be floating
    It is wonderful to see how different efforts cross-pollinate each other with ideas. We don't yet have an encoder for this, so this feature is reserved for future performance improvements.

  14. #8
    Member
    Join Date
    Aug 2020
    Location
    Italy
    Posts
    72
    Thanks
    19
    Thanked 2 Times in 2 Posts
    Could this get better visual quality at the cost of some ringing in drawings or Spotify screenshots?
    Or is there no such thing as drawing tools or a drawing mode for JPEG XL, like in WebP?
    Should an encoder alter a drawing more than the person drew it, introducing significant distortions?
    Or are WebP2 or WebP more specialized for that?
    Should I try it?
    Also, which documents will we see about the JPEG XL format's fidelity or the bitstream specification? How can I keep up to date?
    Will we only see the whitepaper in the JPEG GitLab, or will others be published over time?

  15. #9
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    924
    Thanks
    255
    Thanked 331 Times in 203 Posts
    Quote Originally Posted by fabiorug View Post
    Also, will you collaborate with Pascal Massimino (skal) on improving the standard?
    Pascal Massimino has been active in WebP v2.

    JPEG XL development has been heavily based on butteraugli. Pascal kindly guided me to publish it so that he could consider it, but I did not have the resources at that time (other than cheap proxy publishing through guetzli evaluation -- https://arxiv.org/abs/1703.04416 and later through jpeg xl as a proxy in https://www.spiedigitallibrary.org/c...237.full?SSO=1).

    Some collaboration between the JPEG XL and WebP teams exists: I have followed what is happening with WebP lossless v2 and I have managed to guide the WebP team to improve on some of the mistakes I introduced in WebP lossless v1. On the lossy side, the goals have been different and people have been in heads-down-mode, so less collaboration there. We based the new palettization mode in JPEG XL heavily on the ideas that were first tried out in WebP delta palettization. However, the WebP team decided not to continue on that track, so the next-gen delta palettization is only happening on JPEG XL side.
    Last edited by Jyrki Alakuijala; 14th November 2020 at 20:38.

  16. #10
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    924
    Thanks
    255
    Thanked 331 Times in 203 Posts
    Quote Originally Posted by fabiorug View Post
    screenshots?
    My experience so far is that AVIF may be more efficient at lossy compression of screenshots -- at least if JPEG XL's patches mode is not used. We have not invested a lot into this and encoder decisions have been mostly guided with photographic image test corpora (with both SDR and HDR images).

    The patches and the modular mode in JPEG XL are very expressive, so it is possible that we can catch up there. Also, layering different DCT sizes or different modes (DCT, palettized and modular) may provide further improvements for screenshots and other kinds of mixed content.
    Last edited by Jyrki Alakuijala; 14th November 2020 at 20:39.

  17. #11
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    924
    Thanks
    255
    Thanked 331 Times in 203 Posts
    Quote Originally Posted by fabiorug View Post
    squoosh implementation
    At one point (according to rumors; I didn't test it myself) there was confusion about the --distance argument in squoosh.

    With --distance you define the multiple of the just-noticeable difference you want in the end result. If you specify 1.0, you get an end result that is at the just-noticeable-error boundary. At 4.0 you get 4 times that error (in PSNR-like linear scaling, not psychovisual scaling). If you specify 75, you are asking for an error of 75 times the just-noticeable error, i.e., pretty bad.
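    A minimal sketch of that scale (purely illustrative, not squoosh or libjxl code; check_distance and its threshold of 25 are hypothetical): passing a 0-100 "quality" value where a butteraugli distance is expected asks for a huge error.

    Code:
        # Toy illustration of the quality-vs-distance confusion described above.
        def check_distance(distance: float) -> float:
            """Sanity-check a --distance value before encoding (hypothetical helper)."""
            if distance > 25:
                # A value like 75 asks for 75x the just-noticeable error: extremely lossy.
                raise ValueError(f"distance={distance} looks like a 0-100 quality setting; "
                                 "visually useful distances are roughly 0.0 to 4.0")
            return distance

        check_distance(1.0)     # the just-noticeable-difference boundary
        check_distance(4.0)     # 4x that error, still within the useful range
        try:
            check_distance(75)  # almost certainly a quality value passed by mistake
        except ValueError as err:
            print(err)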

  18. #12
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    58
    Thanks
    11
    Thanked 37 Times in 20 Posts
    I tested jxl 0.1, avif-aom 2.0, webp 1.1 for generation loss resilience. Some discussion about it is on the av1 subreddit: https://www.reddit.com/r/AV1/comment..._jxl_and_avif/

  19. #13
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    924
    Thanks
    255
    Thanked 331 Times in 203 Posts
    Quote Originally Posted by Shelwien View Post
    There are Windows binaries in this thread: https://forum.doom9.org/showthread.php?t=174300&page=17
    Also, apparently WebP2 is finally out?
    Copied from the doom9.org thread:

    WebP 2: experimental successor of the WebP image format
    https://chromium.googlesource.com/codecs/libwebp2/

    JPEG XL v0.1 is a JPEG XL format release candidate
    https://gitlab.com/wg1/jpeg-xl/-/releases

    Would be interesting if someone compared them.

    Some differences off the top of my head:
    + JPEG XL can compress old-school JPEGs losslessly
    + JPEG XL can code progressively, 8x8 subsampled first with 'middle-out' or saliency-guided ordering of 256x256 groups in further AC scans
    + JPEG XL supports patches, larger transforms (up to 256x256 DCT) and layering different transform sizes
    + JPEG XL uses an absolute colorspace; SDR, HDR and wide-gamut all use the exact same coding
    + JPEG XL filtering tuned for maintaining the original image (authenticity)

    + WebP v2 has triangle-based thumbnails
    + WebP v2 likely needs less memory at decoding in current implementation
    + WebP v2 has a simpler (and thus likely faster) context modeling in lossless
    + WebP v2 is still free to creatively improve, i.e., in experimental mode
    + WebP v2 filtering tuned to reduce artefacts (comfort at high compression ratios)

    It is likely that the differences are going to be somewhat reduced in the future when both JPEG XL and WebP v2 reference codecs evolve.

  20. Thanks:

    fabiorug (15th November 2020)

  21. #14
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    58
    Thanks
    11
    Thanked 37 Times in 20 Posts
    I don't quite understand the stated goals of the WebP v2 project, tbh. It will not be related to or compatible with WebP v1, but a completely new codec, which will bring interesting transition problems. It wants to be as fast to decode as AVIF (currently it is 3x slower, they say), and aims at being 20% worse than AVIF. So it will be worse than something that already exists, with the same decode speed, also no progressive rendering, and future browser support hinging on the shaky assumption that calling it "WebP" will mean browsers will automatically support it too.
    I don't see any potential technical advantages it could have over AVIF or JXL, at least not with the design goals as they are currently stated, unless I am missing something. Am I missing something?

  22. #15
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    924
    Thanks
    255
    Thanked 331 Times in 203 Posts
    Quote Originally Posted by Jon Sneyers View Post
    I don't see any potential technical advantages it could have over AVIF or JXL
    Triangle-based thumbnails are a unique capability.

    There may be differences in how coefficient quantization is done.

    There may be some format aspects that reduce the memory consumption of animations (like fewer reference frames).

  23. #16
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    924
    Thanks
    255
    Thanked 331 Times in 203 Posts
    Quote Originally Posted by Jon Sneyers View Post
    I tested jxl 0.1, avif-aom 2.0, webp 1.1 for generation loss resilience. Some discussion about it is on the av1 subreddit: https://www.reddit.com/r/AV1/comment..._jxl_and_avif/
    Dominik Homberger tried it with JPEG, MozJPEG, JPEG XL and WebP v2.

    https://www.youtube.com/watch?v=Jetc...ature=youtu.be

  24. #17
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    58
    Thanks
    11
    Thanked 37 Times in 20 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    Triangle-based thumbnails are a unique capability.

    There may be differences in how coefficient quantization is done.

    There may be some format aspects that reduce the memory consumption of animations (like fewer reference frames).

    Triangle-based LQIP is cool, but I prefer an approach where you just don't need any LQIP or separate 'preview' image, and rely on progressive rendering to get e.g. the 1:8 image fast enough to use that as the LQIP.

    From a high-level point of view, I don't care if there are differences in how coefficient quantization is done – if it ends up 20% worse than AVIF at the same decode speed, what's the point?

    Making animations use less memory is nice, but I think on the web, for animation you need a full-blown video codec (with proper inter-frame motion compensation), not something that is better than GIF but is still essentially an intra-only codec. We need muted looping video to work in <img> tags on all browsers, and stop pretending still image codecs are OK for video if the video is only a few seconds long. It is simply not true.

    "Animated" still image codecs do have use cases, when compression matters less and fidelity is more important, e.g. in medical applications or for digital cinema (where they currently use JPEG 2000 as an intra-only codec), or for burst photography where you want each photo to be equally accurate.

    But on the web, I don't really see a use case for intra-only animation. Perhaps for some kind of synthetic animations (e.g. a 5-frame animated emoji) or for mostly static photographic animations (cinemagraphs) it still makes sense. Operationally though, it's much easier to just use a video codec for everything that moves on the web, instead of trying to figure out when you are in one of the specific cases where a still image codec is OK.

  25. #18
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    924
    Thanks
    255
    Thanked 331 Times in 203 Posts
    Quote Originally Posted by Jon Sneyers View Post
    Triangle-based LQIP is cool, but I prefer an approach where you just don't need any LQIP or separate 'preview' image, and rely on progressive rendering to get e.g. the 1:8 image fast enough to use that as the LQIP.
    I'm with you on that. From 200 bytes to a real image is an abrupt transition that can create cognitive load for users.

    Quote Originally Posted by Jon Sneyers View Post
    From a high-level point of view, I don't care if there are differences in how coefficient quantization is done – if it ends up 20% worse than AVIF at the same decode speed, what's the point?
    Agreed! Hopefully there are use cases where it is or will be better than other codecs.

    Quote Originally Posted by Jon Sneyers View Post
    Making animations use less memory is nice, but I think on the web, for animation you need a full-blown video codec (with proper inter-frame motion compensation), not something that is better than GIF but is still essentially an intra-only codec. We need muted looping video to work in <img> tags on all browsers, and stop pretending still image codecs are OK for video if the video is only a few seconds long. It is simply not true.

    "Animated" still image codecs do have use cases, when compression matters less and fidelity is more important, e.g. in medical applications or for digital cinema (where they currently use JPEG 2000 as an intra-only codec), or for burst photography where you want each photo to be equally accurate.

    But on the web, I don't really see a use case for intra-only animation. Perhaps for some kind of synthetic animations (e.g. a 5-frame animated emoji) or for mostly static photographic animations (cinemagraphs) it still makes sense. Operationally though, it's much easier to just use a video codec for everything that moves on the web, instead of trying to figure out when you are in one of the specific cases where a still image codec is OK.
    I'd like to hear more opinions on this. What is happening with the bugs that propose adding muted video playback to browsers?

  26. #19
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    58
    Thanks
    11
    Thanked 37 Times in 20 Posts
    Safari has allowed h264 in an img tag for a while now.

    Chrome currently allows full (inter-enabled) av1 in an img tag, but only if you wrap it in a (non-compliant) avif.

  27. #20
    Member
    Join Date
    Nov 2013
    Location
    Kraków, Poland
    Posts
    810
    Thanks
    246
    Thanked 257 Times in 160 Posts
    Quote Originally Posted by Jon Sneyers View Post
    Triangle-based LQIP is cool, but I prefer an approach where you just don't need any LQIP or separate 'preview' image, and rely on progressive rendering to get e.g. the 1:8 image fast enough to use that as the LQIP.
    Such a low-quality version can be used as in progressive decoding: as an initial approximation to be subtracted, so that we further encode the residue (the difference from it)... which should be quite beneficial, starting with gradient encoding.
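    A minimal sketch of the mechanics of that idea (conceptual only, not JPEG XL or WebP v2 code; the 8x8 block-mean preview is an assumed stand-in for a triangle- or DC-based LQIP): the preview acts as a predictor and only the residue would be coded on top of it.

    Code:
        # Residual coding on top of a low-quality preview, on a smooth gradient image.
        import numpy as np

        x = np.arange(64)
        original = (x[:, None] + 2 * x[None, :]).astype(np.int16)   # smooth gradient

        # Stand-in LQIP: 8x8 block means, upscaled back to full resolution.
        blocks = original.reshape(8, 8, 8, 8).mean(axis=(1, 3)).astype(np.int16)
        preview = blocks.repeat(8, axis=0).repeat(8, axis=1)

        residue = original - preview                         # coded after the preview
        assert np.array_equal(preview + residue, original)   # decoder adds it back
        print("max |residue|:", np.abs(residue).max())       # small for smooth gradients

    Whether this pays off depends on the residue having lower entropy than the original, which is exactly what the next reply questions.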

  28. #21
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    58
    Thanks
    11
    Thanked 37 Times in 20 Posts
    Quote Originally Posted by Jarek View Post
    Such a low-quality version can be used as in progressive decoding: as an initial approximation to be subtracted, so that we further encode the residue (the difference from it)... which should be quite beneficial, starting with gradient encoding.
    You could do that, but I doubt subtracting a triangle-based LQIP and encoding the residue will be beneficial at all – the residues will probably have more entropy than the original image, and lossy encoding of the residues might leave remnants of the triangulation visible...

  29. #22
    Member
    Join Date
    Jan 2014
    Location
    USA
    Posts
    12
    Thanks
    13
    Thanked 3 Times in 2 Posts
    Quote Originally Posted by Jyrki Alakuijala View Post
    I'm with you on that. From 200 bytes to a real image is an abrupt transition that can create cognitive load for users.
    Are you referring to a few milliseconds of cognitive load, or something more dramatic? I think you're interested in making a page which is as simple from a usability standpoint as possible. How do several ms per page affect the user's experience? There are many things that might mitigate cognitive load, such as background color palette and other aspects of UI/UX, correct? What does the evidence provided by studies show?

  30. #23
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    58
    Thanks
    11
    Thanked 37 Times in 20 Posts
    Here are some demonstrations of what progressive loading would look like in JPEG, AVIF, JPEG XL:
    https://www.youtube.com/watch?v=UphN1_7nP8U

    Comparing with FLIF:
    https://youtu.be/uzYHXqjGgL4

  31. #24
    Member
    Join Date
    May 2015
    Location
    ~
    Posts
    10
    Thanks
    1
    Thanked 5 Times in 2 Posts
    Maybe browsers should generate a couple of frames to smoothly transition from less to more detail by default for large images that support some form of progressive loading.

  32. #25
    Member
    Join Date
    Jul 2018
    Location
    Russia
    Posts
    12
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by Jon Sneyers View Post
    Here are some demonstrations of what progressive loading would look like in JPEG, AVIF, JPEG XL:
    https://www.youtube.com/watch?v=UphN1_7nP8U

    Comparing with FLIF:
    https://youtu.be/uzYHXqjGgL4
    Loading a 1 MiB file over LTE:
    • 10 Mb/s (slow): 0.8 s (not 39 s)
    • 40 Mb/s (typical): 0.2 s (not 39 s)

    Loading a 1 MiB file over Wi-Fi:
    • 100 Mb/s (cheap and slow): 0.08 s (not 39 s)
    • 400 Mb/s (not untypical): 0.02 s (not 39 s)

    Progressive rendering makes no sense on modern networks.
    Dvizh must go Dvizh, no open source - no Dvizh.
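    A quick check of the transfer-time arithmetic above (a minimal sketch; it deliberately ignores latency, server response time and congestion, which is what the replies below bring up):

    Code:
        # Time to transfer a 1 MiB image at various link speeds, bandwidth only.
        FILE_BITS = 1024 * 1024 * 8          # 1 MiB expressed in bits

        for label, mbps in [("LTE, slow", 10), ("LTE, typical", 40),
                            ("Wi-Fi, cheap", 100), ("Wi-Fi, fast", 400)]:
            seconds = FILE_BITS / (mbps * 1_000_000)
            print(f"{label:>12}: {mbps:4d} Mb/s -> {seconds:.2f} s")
        # Prints roughly 0.84 s, 0.21 s, 0.08 s and 0.02 s, matching the figures above.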

  33. #26
    Member
    Join Date
    Jun 2018
    Location
    Yugoslavia
    Posts
    73
    Thanks
    8
    Thanked 5 Times in 5 Posts
    Pages have a bunch of images and servers can get slow; I think it will always make sense.

  34. #27
    Member
    Join Date
    Oct 2015
    Location
    Belgium
    Posts
    58
    Thanks
    11
    Thanked 37 Times in 20 Posts
    Quote Originally Posted by e8c View Post
    Loading a 1 MiB file over LTE:
    • 10 Mb/s (slow): 0.8 s (not 39 s)
    • 40 Mb/s (typical): 0.2 s (not 39 s)

    Loading a 1 MiB file over Wi-Fi:
    • 100 Mb/s (cheap and slow): 0.08 s (not 39 s)
    • 400 Mb/s (not untypical): 0.02 s (not 39 s)

    Progressive rendering makes no sense on modern networks.
    I agree. But not everyone in the world has access to LTE all of the time. At home I have ~200 Mb/s, and progressive rendering doesn't matter (provided the server and everything in between is fast enough).
    On the train I sometimes have 4G, but sometimes (between two cities) it drops to 3G, and then to 2G, and I do actually end up having no more than a few kilobytes per second for a significant part of the journey.

    I live in a rich, western country, and I can afford decent home internet without caring about mobile data costs. This is not the case for the majority of the world's population.

    Finally, if you think progressive rendering makes no sense, then do you also think that (advanced) compression makes no sense?

  35. #28
    Member
    Join Date
    Jul 2018
    Location
    Russia
    Posts
    12
    Thanks
    0
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by Jon Sneyers View Post
    Finally, if you think progressive rendering makes no sense, then do you also think that (advanced) compression makes no sense?
    In some cases.
    For web consumption, FLIF is better than PNG only if the network speed is lower than 20-30 Mb/s. The best compression makes no sense if decompression is too slow.
    Dvizh must go Dvizh, no open source - no Dvizh.

  36. #29
    Member
    Join Date
    Jun 2015
    Location
    Switzerland
    Posts
    924
    Thanks
    255
    Thanked 331 Times in 203 Posts
    Quote Originally Posted by e8c View Post
    Loading 1 MiB file, LTE:
    • 10 Mb/s (slow): 0.8 s (not 39 s)
    • 40 Mb/s (typical) 0.2 s (not 39 s)

    Loading 1 MiB file, Wi-Fi:
    • 100 Mb/s (cheap and slow): 0.08 s (not 39 s)
    • 400 Mb/s (not untypical) 0.02 s (not 39 s)

    Progressive rendering has no sense in modern networks.
    Today's average website likely has around 1.5 MB of images. An image-heavy site has 3-10 MB?

    Backend response times may relate to the payload size.

    4k and 8k monitors are coming and require more bytes to provide value to the user.

    Some use cases (like going through all the images in an album visually) benefit a lot from a < 50 ms response.

    Sometimes a millisecond of reduced latency means an observable and significant-to-business change in user behaviour.

  37. #30
    Member
    Join Date
    Nov 2019
    Location
    Moon
    Posts
    44
    Thanks
    15
    Thanked 43 Times in 25 Posts
    Quote Originally Posted by Jon Sneyers View Post
    I agree. But not everyone in the world has access to LTE all of the time. At home I have ~200 Mb/s, and progressive rendering doesn't matter (provided the server and everything in between is fast enough).
    On the train I sometimes have 4G, but sometimes (between two cities) it drops to 3G, and then to 2G, and I do actually end up having no more than a few kilobytes per second for a significant part of the journey.
    Yes, at home I have 1 Gb/s Internet, but even at this speed I sometimes notice that images are not displayed instantly on many sites (it also does not mean that this speed is available all the way to every server, or that there are no other delays in image transfer). It's good that large sites try to optimize images and use LQIP or colour blocks until the full image is received by the client.

    And I just recently left the hospital, where I spent more than two weeks and had to use the Internet from a mobile operator with not very good coverage and an often overloaded network. The speed was very unstable, from 50 Mbps to 2 Kbps, and browsing some sites with images became very uncomfortable, especially where LQIP (or its alternatives) and progressive loading were not used at all and I had to wait for the full image.

    Quote Originally Posted by Jyrki Alakuijala View Post
    WebP 2: experimental successor of the WebP image format
    https://chromium.googlesource.com/codecs/libwebp2/

    JPEG XL v0.1 is a JPEG XL format release candidate
    https://gitlab.com/wg1/jpeg-xl/-/releases

    Would be interesting if someone compared them.
    At the moment I am encoding the whole set of images in lossless WebP v2 and want to add it to my comparison. The preliminary result is better than WebP v1 almost everywhere, and the encoder in lossless mode has very high multithreading efficiency (if the resolution is high enough). The speed at maximum compression (effort 9) is quite slow, although it is higher than speed 9 in JPEG XL.

    However, in lossy mode multithreading for some reason works badly (but since I compile the Windows version myself, it's possible that something doesn't work properly), and therefore the encoding is extremely slow (20-50 times slower than AVIF); at the same time, AVIF (using the latest libaom builds) has become quite fast even at the slowest speed.
    The quality of WebP v2, at the moment and on my test images, was worse than AVIF, and visually the compression is very similar to how AVIF works. As an advantage, memory consumption when encoding in WebP v2 is minimal (compared to AVIF and even JPEG XL on large images, although as far as I know their encoders have not fully implemented chunking and tiling).

    I haven't compared the near-lossless modes yet, but it would be a good idea to place them at quality values 96-99, so they will be found and used much more often (I learned about the near-lossless mode in WebP v1 only recently, after more than 10 years of the format's existence).
    Last edited by Scope; 21st November 2020 at 13:36.
