offtopic, sorry..
is there something similar that can (significantly) compress mpeg1,2 video losslessly?
Correct me if I'm wrong, but MJPEG video (and any JPEG-derived format) should be easier to support.
precomp supports it:
Code:
$ precomp -longhelp
Precomp v0.4.8 Unix 64-bit - DEVELOPMENT version - USE AT YOUR OWN RISK!
Free for non-commercial use - Copyright 2006-2019 by Christian Schneider
preflate v0.3.5 support - Copyright 2018 by Dirk Steinke

Usage: precomp [-switches] input_file

Switches (and their <default values>):
  r             "Recompress" PCF file (restore original file)
  [...]
  progonly[+-]  Recompress progressive JPGs only (useful for PAQ) <off>
  mjpeg[+-]     Insert huffman table for MJPEG recompression <on>
BTW: Is there any implementation of truly solid video compression? AFAIK most video codecs only group a handful of frames together so the stream stays seekable. I believe that for a video that's not going to be played, only stored in a backup, every frame could benefit from the previous one for compression. Is there any such method for long-term solid video compression?
Last edited by Gonzalo; 30th September 2019 at 07:12.
Thanks.
Ocarina seems to be Windows-only, and there's no source code?
@Gonzalo
I think they use more frames. But they should use as many frames as RAM size and CPU speed allow. IIRC, they also use following frames, not just previous ones.
They should group similar frames together and let the decompressor rearrange them. A scene from the beginning of a movie could be similar enough to one near the end, like small files in a solid .7z archive.
Actually, you can configure the distance between keyframes, and you can also use multipass encoding to improve results.
But a "solid" video can only be viewed sequentially, so it's rarely done.
@Shelwien: So you are saying that what I want to do is basically doable right now with a standard encoder + special settings? I'm talking about a block the size of the entire video, or something like a 'sliding window' over the frames, instead of encoding blocks of x frames at a time (explained again in different words, just in case I wasn't clear the first time).
The idea is to losslessly compress a given (already lossy-compressed) video file and be able to restore it to its previous form, not to encode it from the raw or uncompressed material.
Last edited by Gonzalo; 1st October 2019 at 23:39.
> So you are saying that what I want to do is basically doable right now with a standard encoder + special settings?
I'm saying that technically video formats do have something similar to "solid compression".
Even by default it's usually (aside from MJPEG etc.) impossible to decode a random individual frame,
and it is possible to encode video with a single keyframe at the start, which would improve compression
(at the cost of random access).
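As a concrete sketch of the single-keyframe encode described above, the ffmpeg/libx264 flags below are real options, but the file names and the exact GOP limit are placeholder assumptions:

```python
# Sketch: build an ffmpeg command line that encodes with a single keyframe
# at the start (assumes ffmpeg with libx264; names are placeholders).
def single_keyframe_cmd(src, dst, gop=999999):
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-g", str(gop),            # maximum GOP length (keyframe interval)
        "-keyint_min", str(gop),   # minimum keyframe interval
        "-sc_threshold", "0",      # don't insert extra keyframes on scene cuts
        dst,
    ]

cmd = single_keyframe_cmd("in.mp4", "out.mkv")
print(" ".join(cmd))
```

Seeking in the result requires decoding from the start, which is exactly the random-access cost mentioned above.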
> I'm talking about a block the size of the entire video,
> or smth like a 'sliding window' over the frames instead of doing blocks of x frames at a time
In audio/video there's no direct equivalent to deduplication / long-distance matching.
Sometimes container formats (e.g. mkv) provide such an option - I've seen it used
to manually factor out the opening/ending sequences in a series.
But aside from artificial cases, in audio/video compression there are usually no benefits
from long-distance block matching, simply because the "literal" code for an 8x8 block would be, say, 64 bits,
while a reference would have to include frame#, X, Y plus a diff, so even the potential gain is very small.
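The cost argument from the post can be put into rough numbers. The bit counts below are illustrative assumptions (a two-hour 24fps stream, 1080p, a 24-bit residual), not measurements:

```python
# Rough cost comparison: an 8x8 block "literal" might cost ~64 bits after
# transform + entropy coding, while a long-distance reference needs a frame
# number, X and Y block offsets, plus a correction (diff).
import math

def reference_cost_bits(num_frames, width, height, diff_bits=24):
    # bits needed to address any 8x8 block in any previous frame, plus a residual
    frame_bits = math.ceil(math.log2(num_frames))
    x_bits = math.ceil(math.log2(width // 8))
    y_bits = math.ceil(math.log2(height // 8))
    return frame_bits + x_bits + y_bits + diff_bits

literal = 64
ref = reference_cost_bits(num_frames=150000, width=1920, height=1080)
print(literal, ref)  # the reference barely undercuts the literal, if at all
```

With these assumptions the reference costs 58 bits against a 64-bit literal, so even a perfect long-distance match saves almost nothing per block.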
Video formats do use some block matching, but it's mostly used for still scenes and motion compensation.
Otherwise, even when the video shows the same objects, there are different angles and effects all the time,
so it would be necessary to "decompile" the scene first for any useful matching to become possible.
There's some research but nothing practical yet.
> The idea is to losslessly compress a given (already lossy-compressed) video file
> and be able to restore it to its previous form, not to encode it from the raw or uncompressed material.
It's usually possible to save 10-20% by recompression, mostly by using a better statistical model / entropy code.
It's also possible to remove the entropy code and apply solid LZ compression, but the results are usually
not very good, aside from special cases like different versions of the same scene, etc.
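A toy illustration of where the entropy-code part of that saving comes from: a static Huffman code assigns whole-bit code lengths, while an arithmetic/range coder approaches the entropy. The symbol distribution below is an arbitrary assumption chosen for illustration, not taken from any real codec:

```python
# Compare the average cost of a valid Huffman code against the Shannon
# entropy for a skewed 4-symbol distribution.
import math

probs = [0.8, 0.1, 0.05, 0.05]
huff_lens = [1, 2, 3, 3]  # a valid Huffman code for these probabilities

huff_bits = sum(p * l for p, l in zip(probs, huff_lens))
entropy = -sum(p * math.log2(p) for p in probs)
print(f"Huffman: {huff_bits:.3f} bits/symbol, entropy: {entropy:.3f}")
print(f"potential saving: {100 * (1 - entropy / huff_bits):.1f}%")
```

For this particular distribution the gap is around 20%, which is in the same ballpark as the recompression gains mentioned above; real gains depend on how skewed the actual symbol statistics are.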
Also keep in mind that the best possible video compression (like runtime rendering from a model)
may not be applicable for you (e.g. in a game installer) because of performance issues.
So what if one rearranges all the frames, keeping a list of their positions (shouldn't take that much space, say 200,000 integers), and then encodes?
An AI could do the rearranging.
It's possible, it can really improve compression (a little), and you can actually test it with a script easily enough
(e.g. append one of the remaining frames to the list of images for video encoding, test the encoding for all choices of the new frame,
keep the result with the smallest video or the best bits-per-pixel metric, and continue until no more frames remain).
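The greedy loop described above can be sketched as follows. To keep it runnable, the video encoder is replaced by a toy cost function over scalar frame "signatures"; in a real test the cost would be the encoded size (or bits-per-pixel) reported by the encoder:

```python
# Greedy frame reordering: repeatedly append the remaining frame that is
# cheapest to encode after the previously chosen one.
def greedy_order(frames, cost):
    order = [0]                       # start from the first frame
    remaining = set(range(1, len(frames)))
    while remaining:
        prev = order[-1]
        # pick the remaining frame with the smallest cost after the last one
        best = min(remaining, key=lambda i: cost(frames[prev], frames[i]))
        order.append(best)
        remaining.remove(best)
    return order

# toy "frames": scalar signatures; close values stand for similar frames
frames = [10, 90, 12, 88, 11]
order = greedy_order(frames, cost=lambda a, b: abs(a - b))
print(order)  # → [0, 4, 2, 3, 1]: similar frames end up adjacent
```

The returned permutation is exactly the "list of positions" from the earlier post: store it alongside the reordered video, and invert it at playback time to restore the original frame order.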
But
1. It's basically the same idea as long-distance block matching; it can help if there are repeated scenes or something, but normally it won't.
2. To play the video you'd have to decode all the frames first, then play them in order.
3. It's not applicable for recompression, because the existing inter-frame dependencies are normally lossy;
you can decode the frames to images, then encode them again in a different order, but there'd be a loss of quality and added redundancy.
How about VP8/9/AV1 "golden frames" (which eventually evolved into numerous "reference frames")? It is a bit more complicated than plain "deduplication" or "long-distance references", but it appears to be very much in this spirit. It allows implementing any I/P/B equivalent and (especially in advanced versions) things that greatly exceed the I/P/B concept, like referencing several different framebuffers at once, selecting whichever acts as the best reference in a particular place. In this case "deduplication" is pretty much possible: if some scene reoccurs later and a suitable reference frame is still buffered, you do not have to re-transmit the scene despite the sharp scene switch; you just reference that frame and it does the trick (with some rather marginal amendment).
I know, but is it possible to include the whole video as "reference frames"? I've only seen 5-10 reference frames used.
https://planetcalc.com/3321/
Hmm, the same could be said about lossless compression: pragmatic schemes eventually settle on a reasonable amount of RAM for compression and decompression. Keeping, say, 10 frame buffers at 2160p could already get quite noticeable; after all, videos have to play not just on high-end Xeons with terabytes of RAM, but also on smaller things like smartphones. And if I grab, say, 500 DVD-sized files, concatenate them all (sometimes duplicating some of the files) and feed all that to a typical solid compressor, I wouldn't expect it to fully deduplicate that, at least on a typical desktop configuration.
Lossy schemes pursue somewhat different goals, but with these "reference frames" they seem to become a bit more similar to solid compression, in the sense that it is no longer just relatively local processing but also much longer-range referencing. The idea of reference frames on its own doesn't even imply a distance limit, unlike P/B frames. Actual implementations can have different ideas about that, sure.
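The frame-buffer cost mentioned above is easy to put into numbers. Assuming 8-bit YUV 4:2:0 storage (1.5 bytes per pixel), which is a common but not universal choice:

```python
# Memory needed to keep ten 2160p reference frames in 8-bit YUV 4:2:0.
width, height, buffers = 3840, 2160, 10
bytes_per_pixel = 1.5  # full-size Y plane + quarter-size U and V planes
total_mb = width * height * bytes_per_pixel * buffers / 2**20
print(f"{total_mb:.0f} MiB")  # → 119 MiB of reference buffers
```

Roughly 120 MiB just for reference buffers is indeed noticeable on a phone, before counting the decoder's own working memory.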
If visually lossless is enough, you can use ffmpeg with -crf 18 (a libx264 quality setting, not a preset).