
Thread: Large-scale Netflix test of x264, x265, and VP9

  1. #1
    Member SolidComp's Avatar
    Join Date
    Jun 2015
    Location
    USA
    Posts
    353
    Thanks
    131
    Thanked 54 Times in 38 Posts

    Large-scale Netflix test of x264, x265, and VP9

    Hi all – This is an interesting test, especially for its scale and thoroughness. I wonder if butteraugli could be added to their slate of metrics. More details are in their linked talk.

    http://techblog.netflix.com/2016/08/...-x265.html?m=1
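    For anyone curious what the frame-level comparison behind metrics like PSNR and SSIM boils down to, here is a minimal sketch (the frame file names are hypothetical and it assumes a recent scikit-image; butteraugli itself ships as a standalone comparison tool and would be run on the same frame pairs):

    Code:
    # Per-frame PSNR/SSIM between a pristine source frame and its decoded counterpart.
    from skimage.io import imread
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    ref = imread("reference_frame.png")   # frame extracted from the source
    dec = imread("decoded_frame.png")     # same frame after encode/decode

    psnr = peak_signal_noise_ratio(ref, dec, data_range=255)
    ssim = structural_similarity(ref, dec, channel_axis=-1, data_range=255)
    print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")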

  2. #2
    Member
    Join Date
    Oct 2016
    Location
    Berlin
    Posts
    9
    Thanks
    8
    Thanked 0 Times in 0 Posts
    Quote Originally Posted by SolidComp View Post
    x265 outperforms libvpx for almost all resolutions and quality metrics, but the performance gap narrows (or even reverses) at 1080p.
    That's certainly unexpected. I'm keen to see how the current-generation competition to HEVC will perform (VP9 is really of the same generation as AVC), now that Google has decided to drop a standalone VP10 and fold that work into AV1: https://en.wikipedia.org/wiki/AOMedia_Video_1

  3. #3
    Member
    Join Date
    Nov 2015
    Location
    boot ROM
    Posts
    95
    Thanks
    27
    Thanked 17 Times in 15 Posts
    I've had plenty of fun giving the VP9 codec a try. It proved to be really decent, beating x264 any day in terms of bitrate-to-quality tradeoff. When I'm targeting relatively low bitrates for a given frame rate and image size, I'd prefer VP9 over x264 any day. As for x265, it just doesn't play in web browsers, and given that there are two or three sets of patent trolls around HEVC, I guess it will never be playable on the web. For me that's far too important a scenario to disregard. So it seems we have to "thank" the patent trolls for screwing HEVC up quite a lot.

    - It seems I like VP9's compression artifacts; most of the time they're not easy to spot, especially in moving pictures. As a side note, I wonder why most metrics lean on static image comparison. Isn't human perception of moving pictures a different thing? AFAIK humans can't resolve small details in motion, which is probably why VP9's artifacts aren't very annoying most of the time.
    - VP9's bitrate-to-quality ratio is really appealing, and I get the impression Google is more or less correct in its claims: you can save plenty of bitrate while targeting the same quality level.
    - It comes at a price though, and the price is a slow encoding process. It's really slow compared to x264 if you're targeting good bitrate-to-quality tradeoffs.
    - I like how VP9 performs on small text, e.g. movie credits. Text stays readable, unlike with x264, which corrupts it into an unreadable mess at comparable bitrates. AFAIK x265 is prone to similar behavior, though it obviously should perform much better at a given bitrate.

    Yet as far as I can see from the git activity, Google has pushed most of its resources to the AV1 codec, and VP9 only sees relatively minor improvements these days. So these tests appear to be somewhat late; the AOM codec looks far more interesting. I gave an intermediate version of AV1 a try on the tos3k set, an uncompressed sequence from the Tears of Steel movie, about 3000 frames at 1080p. Now the most interesting part: I went nuts and demanded 500 kbps. Yes, full HD at 500 kbit/s average, two-pass of course (a rough command sketch follows below). The result was surprising: I could only see compression artifacts in one particular scene, and even those don't look awful zoomed in on a 30" LCD watched from close range. Most of the time it is hard to spot artifacts at all, and that's with the a priori knowledge that it is a heavily compressed sequence and an idea of what to look for. So if someone is truly after a next-gen benchmark, they probably want to try AV1 to get an idea of what's REALLY coming, even though it is under construction, still evolving a lot, and not a production-quality solution yet. VP9, on the other hand, isn't the future anymore; it's technology to use here and now. With AV1 starting to take shape, those who delayed adopting VP9 may find they're simply too late.
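    For reference, here is a rough sketch of that kind of two-pass, fixed-target-bitrate encode, shelling out to ffmpeg's libvpx-vp9 encoder (file names and the 500 kbps target are illustrative, and this is the VP9 equivalent rather than the exact AV1 run; aomenc takes analogous two-pass options):

    Code:
    # Two-pass VP9 encode at a fixed average bitrate via ffmpeg.
    import subprocess

    SRC, OUT, BITRATE = "tos3k_1080p.y4m", "tos3k_500k.webm", "500k"  # hypothetical names
    common = ["ffmpeg", "-y", "-i", SRC, "-c:v", "libvpx-vp9", "-b:v", BITRATE]

    # Pass 1: analysis only, discard the output.
    subprocess.run(common + ["-pass", "1", "-an", "-f", "null", "/dev/null"], check=True)
    # Pass 2: the real encode, reusing the first-pass statistics.
    subprocess.run(common + ["-pass", "2", OUT], check=True)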

    As a side note, I wonder how on earth Facebook isn't in this alliance yet. They presumably care about video on the web, don't they?

    P.S. If someone from Google reads this: when I gave the AV1 encoder a try, my laptop with 2 GiB of RAM barely managed it, staying fairly close to the OOM-killer threshold even after I unloaded everything else. Why does it have to consume so much RAM? Its predecessor VP9 never even registered in this regard, never using enough RAM to be worth noticing. Is this some kind of AV1 bug?

  4. #4
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Yes, I know this article and I was present at its presentation at SPIE 2016.
    Quote Originally Posted by xcrh View Post
    As for x265, it just doesn't play in web browsers, and given that there are two or three sets of patent trolls around HEVC, I guess it will never be playable on the web. For me that's far too important a scenario to disregard. So it seems we have to "thank" the patent trolls for screwing HEVC up quite a lot.
    One could say the same about x264 or VP9. Actually, you never know. To what extent is VP9 protected from patents? Just because it is open source, or because Google claims so? That does not prevent some patent troll from coming along and claiming rights on it any time later, and whether Google will then pay is another guess. For x264 and x265, the situation is somewhat different in the sense that ISO at least collected a list of patents and patent holders, so you at least know whom to talk to. For x264 the (known!) patent holders agreed to provide royalty-free access for internet video. Whether that will ever be the case for x265 is another guess, but if Netflix provides x265, it is up to them to provide their customers with an x265 codec as part of their service.
    Quote Originally Posted by xcrh View Post
    It seems I like VP9's compression artifacts; most of the time they're not easy to spot, especially in moving pictures.
    I would say "they are different". VP9 tends to blur video a lot. While this is a good solution for the usual low-quality youtube video clip, it is not a good idea for high-quality professional production. So to say, x265 and VP9 address different markets, and hence different market needs. There was a similar VP9 vs. x265 "shootout" at the PCS two years ago in San Jose, US, that was fun to watch and participate. Of course, everybody claimed that their solution would be best, though in reality, all one could say is that they were different.
    Quote Originally Posted by xcrh View Post
    As a side note, I wonder why most metrics lean on static image comparison. Isn't human perception of moving pictures a different thing? AFAIK humans can't resolve small details in motion, which is probably why VP9's artifacts aren't very annoying most of the time.
    It's complicated. The major problem is that there is no single reliable objective quality metric for video. The reason is simply complexity. For still images there are solutions like VDP2 that are pretty good and capture a lot of aspects (but not all; VDP2 does not capture color defects, for example), yet they are already very complex: running VDP2 on a 4K image takes several minutes(!). Now imagine such a metric with an added time dimension for temporal masking, and consider how long such a beast would run on a realistically sized video. Probably years... Ultimately, you have to trust the eyes of subjective observers. That is again an entirely different matter, but such tests are of course made during the development of any codec. PSNR, SSIM, VIF and multiscale SSIM (as used in this test) are at best indicators when you make small modifications to a codec. In general, I would trust the ISO process a bit more, since tests are done by competitors carefully watching each other's tests and results, unlike the in-house development of a single team.
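    To put a number on the "probably years" estimate, here is a back-of-the-envelope check, assuming a hypothetical metric that needs about three minutes per frame (the VDP2-on-4K figure above) applied naively, frame by frame, to a two-hour, 24 fps movie:

    Code:
    # Rough cost of running a minutes-per-frame quality metric over a full movie.
    frames = 2 * 3600 * 24              # two hours at 24 fps = 172,800 frames
    minutes_per_frame = 3               # assumed per-frame cost of the metric
    days = frames * minutes_per_frame / 60 / 24
    print(f"{frames} frames -> roughly {days:.0f} days of single-threaded compute")

    And that is for a still-image metric applied per frame; a true temporal metric would be more expensive still.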
    Quote Originally Posted by xcrh View Post
    VP9's bitrate-to-quality ratio is really appealing, and I get the impression Google is more or less correct in its claims: you can save plenty of bitrate while targeting the same quality level.
    The same holds for x265, just on better-quality professional input material.
    Quote Originally Posted by xcrh View Post
    I like how VP9 performs on small text, e.g. movie credits. Text stays readable, unlike with x264, which corrupts it into an unreadable mess at comparable bitrates. AFAIK x265 is prone to similar behavior, though it obviously should perform much better at a given bitrate.
    Well, x264 is not exactly recent anymore, so that is no surprise, and not in contradiction with the outcome of this study. To be fair, you need to measure x265 against VP9.

  5. #5
    Member
    Join Date
    Nov 2015
    Location
    boot ROM
    Posts
    95
    Thanks
    27
    Thanked 17 Times in 15 Posts
    Quote Originally Posted by thorfdbg View Post
    One could say the same about x264 or VP9. Actually, you never know.
    From a realistic standpoint:
    - H.264 plays on nearly everything I've encountered, ranging from Linux computers to smart TVs and ancient Android tablets. Yet interop considerations can limit the bitstream features you can actually use: levels and profiles can be annoying, and not all hardware handles all features, which further hurts the achievable bitrate-to-quality tradeoff. And bitrate is the #1 issue for anyone serving video over networks.

    - VP9 plays at least on recent Android devices (by the way, modern 64-bit SoCs are fast enough to decode 1080p VP9 from YouTube in software). It plays in Firefox, Chrome and even recent Edge, and older IEs can install a codec. So the only outcasts are Apple and their devices. And looking at market share and trends...

    - When it comes to AV1, just take a look at the names involved; it's no longer a Google-only project, which IMHO is a step in the right direction. More entities can take part and share their expertise while their use cases are considered. It is set up for rapid adoption in software, aiming for all major browsers, with early library availability and a decent reference encoder/decoder, and in most hardware around, be it small ARM chips, high-end GPUs or x86. I'm pretty sure Google and Netflix will deploy it very quickly, things like Wikipedia will probably follow soon, etc. So I think this tech will spread around the globe at the speed of light, showing ISO how to do things right.

    To what extent is VP9 protected from patents? Just because it is open source, or because Google claims so? That does not prevent some patent troll from coming along and claiming rights on it any time later, and whether Google will then pay is another guess.
    I would expect it to be reasonably protected because:
    - Google bought On2 along with all their patents, and they surely had plenty.
    - MPEG LA attempted to challenge Google. Google proved to be level-headed and heavily committed to the open web; they held negotiations and it got sorted out. So at least MPEG LA won't bother anymore, and other trolls could just as well troll H.26x users instead, why not?
    - Even Microsoft these days thinks it's OK to implement VP9 in their browser. IIRC they even take part in MPEG LA with some patents but, ironically, pay more in royalties than they earn. Isn't that funny?

    Not to mention that poking a 50,000-pound Godzilla with a stick isn't the best idea ever. I guess Google would step in: it would set their ecosystem on fire and they presumably care, at least enough to defend themselves and their own use cases. AV1 should be even more fun. Aggravating a whole pack of Godzillas? Cool idea.

    For x264 and x265, the situation is somewhat different in the sense that ISO at least collected a list of patents and patent holders,
    Compiling a list of patent trolls? How freakin' generous of these experts.

    But let's compare, shall we? With VP9 you get:
    - A patent license and royalty-free distribution, as well as the right to change it to suit your own needs. Not to mention a 50,000-pound Godzilla and all its lawyers behind it. ISO, on the other hand, just doesn't care? It's your freakin' problem? Oh, great.
    - A decent reference encoder and decoder (also usable as a library), under permissive terms. Someone once said that standards like this are 10% formal text and 90% actual implementation; some H.264 encoders can even lose to Xvid, even though Xvid implements an older generation of standard. Now let's look at ISO. What? No real reference encoder? No ready-to-use libraries? Some junk on bizarre terms? Everyone has to spend those 90% of the effort on their own. It seems ISO experts are big fans of the "batteries not included" approach.
    - Google went further and hired a company to create encoder and decoder IP blocks for hardware integration. They give the IP blocks to whatever hardware manufacturers are willing to use them, on royalty-free terms, and IIRC they can even help integrate the blocks into a SoC design. Erm, what can ISO offer? Nothing? Oh yes, batteries not included. Go design the silicon IP yourself, or pay lots of money to someone who did. That's how it goes in ISO's case.

    I think the high-profile tech companies have voiced their opinion through AV1. Who needs just a freakin' list of patents instead of a high-quality encoder/decoder distributed on sane terms, plus silicon IP?

    so you at least know whom to talk to.
    I strongly doubt most entities shipping video over networks dream of contacting every patent troll around, writing their own high-quality encoders and decoders, developing their own silicon IP, and so on. To put it bluntly, ISO has proven ignorant of long-standing problems, and unfriendly and useless when it comes to actual implementation.

    For x264 the (known!) patent holders agreed to provide royalty free access for internet video.
    ...after MPEG LA attempted a worldwide racket by demanding royalties. It was a bait-and-switch: only the first few years were free, acting as the "bait", and once that worked the licensing terms assumed royalties. Yet this clever plan failed: when the "bait" phase neared its end and MPEG LA openly voiced its royalty demands, it backfired. Google bought the whole On2 company, and since Google isn't some greedy corporation from the 80s, they figured openness would work, released the technology, and VP8 appeared, seeing fairly quick adoption. Then they improved it and released it as VP9. MPEG LA attempted to attack that too, but Google sorted it out somehow and MPEG LA gave up. Still, it took the On2 purchase and a competing codec to get there. Looking at AV1, it seems things will continue even further in this direction.

    Whether that will ever be the case for x265 is another guess,
    I think it is unlikely. Far too many companies and individuals got fed up with the ISO and MPEG LA way of doing things, so another working group appeared. I expect it is here to stay and will work differently: actually serving the interests of its members and actually solving the most pressing problems, unlike what happens with ISO.

    but if Netflix provides x265, it is up to them to provide their customers with an x265 codec as part of their service.
    And it seems they're not exactly happy about all this, since they're now part of the AV1 working group, as a founding member IIRC. Even Nvidia, Intel, AMD and ARM have gathered under the same umbrella. That's how you get competitors to cooperate on a fundamental, long-standing issue, lol. And FYI, x265 and H.265 are two different things, just like H.264 and x264 are. The standards are named H.26x, while x264 and x265 were created by independent people who implemented those standards. It is wrong to credit ISO for x26x; they only created the H.26x specs. The last time I saw ISO's own reference MPEG-4 encoder, it was awful, broken junk distributed on restrictive terms. Quite a drastic difference compared to VPx and AV1, where a high-quality encoder and decoder are part of the process.

    I would say "they are different". VP9 tends to blur video a lot. While this is a good solution for the usual low-quality youtube video clip,
    The goal of lossy video compression is to throw away data without making it too obvious. I'd say VP9 is quite good at this. Obviously that implies data loss, which can accumulate across processing generations.

    it is not a good idea for high-quality professional production.
    VP9 has a lossless mode and high bit-depth support; I guess those are options to explore if someone needs that sort of thing. The worst complaint will probably be slow encoding, though it can be sped up at the expense of higher bitrate. Highest-quality processing that also needs quick random access to any frame probably implies dealing with uncompressed data, which is HUGE (a rough size estimate follows below). As a quick example, the well-known "tos3k" 1080p test sequence is merely 3000 frames, about 2 minutes long, yet it takes 6 GiB on disk. But a source not yet polluted by compression artifacts is the only way to get a fair idea of a codec's abilities. As for H.265 being suitable for goals like this, I think it needs more technical justification as to why that is the case. Not to mention there are some specialized lossless codecs.
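    For scale, a rough estimate of what raw 8-bit 4:2:0 1080p video occupies (actual file sizes vary with the clip's exact length, chroma format, bit depth, and container, so treat this as an order-of-magnitude figure):

    Code:
    # Order-of-magnitude size of raw 8-bit 4:2:0 1080p video.
    width, height, frames = 1920, 1080, 3000
    bytes_per_frame = width * height * 3 // 2    # luma plane + quarter-size Cb and Cr planes
    total_gib = bytes_per_frame * frames / 2**30
    print(f"{bytes_per_frame / 1e6:.1f} MB per frame, ~{total_gib:.1f} GiB for {frames} frames")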

    The most interesting outcome of the article is that, according to Netflix, VP9 performed better on the high-quality content where they thought H.265 made sense. So they could have had the same if not a better result free of charge, and without wasting time dealing with all kinds of patent trolls. I guess Netflix has some reason to feel silly.

    So to speak, x265 and VP9 address different markets, and hence different market needs.
    TBH I don't see which "market needs" the ISO working processes are addressing, unless the MPEG LA racket and the like count as "market needs". The AV1 working group members probably share this view.

    There was a similar VP9 vs. x265 "shootout" at PCS two years ago in San Jose, US, that was fun to watch and participate in.
    Google has managed to stay a fast-moving company, and two years is a long time. I guess VP9 was seriously improved over those two years, even though the primary focus has shifted towards AV1 in the last few months. IIRC x265 is developed by independent individuals, and I'm not sure they have the resources to keep a comparable pace. On an interesting note, AV1 only uses parts of Daala; the most interesting parts, like the lapped transform, aren't included. So it seems they already have unexplored tricks in their backpack for the next-gen of the next-gen, to try in the longer term. Let's see if ISO can think two versions ahead.

    Of course, everybody claimed that their solution was best, though in reality, all one could say is that they were different. It's complicated.
    Neither Google nor the companies behind AV1 are selling their codec as a "product". The codec is a building block they need elsewhere; they simply can't build their "superb products" without this part being done. I think they have little reason to exaggerate the codec's abilities, unlike MPEG LA, by the way, whose profit depends on adoption. For technology companies it's about defining their own future in ways that work well for them, keeping the ecosystem non-toxic, and tailoring working processes to actually tackle the difficult, long-standing problems of the members. And yes, it has turned out that open-source approaches can be efficient, convenient, and geared towards reaching concrete goals quickly, reusing effort here and there.

    The major problem is that there is no single reliable objective quality metric for video. The reason is simply complexity. For still images there are solutions like VDP2 that are pretty good and capture a lot of aspects (but not all; VDP2 does not capture color defects, for example), yet they are already very complex: running VDP2 on a 4K image takes several minutes(!). Now imagine such a metric with an added time dimension for temporal masking, and consider how long such a beast would run on a realistically sized video.
    Furthermore, AFAIK the way humans see moving objects differs considerably from how they perceive still pictures. Correct me if I'm wrong, but I think I've read somewhere that the human vision process isn't fully decoded yet. Either way, formal metrics are great, but if I can spot an annoying artifact, the metric is worth nothing. Take the text damage typical of x264 handling small text in places like credits: metrics have no idea what "text" or "readable" means. To make it more fun, there is an even higher level: most viewers can live without movie credits, but what if it's a presentation instead? So the identical defect can carry a different "price" (in terms of annoyed viewers). A bit more complicated than merely adding a time axis, eh?

    Ultimately, you have to trust the eyes of subjective observers.
    It more or less works for audio formats, so maybe it isn't fundamentally wrong for video either? It's surely hard to use during codec development to check whether development is heading in the right direction, but once a codec is more or less done, I see nothing wrong with showing the same sequence encoded by different codecs to viewers, without telling them which codec is which, and letting them vote, just like in blind audio tests. That way the developers can see how humans actually perceive the achieved result and how it compares to other codecs overall.

    In general, I would trust the ISO process a bit more, since tests are done by competitors carefully watching each other's tests and results,
    On the other hand, neither Google nor the AV1 members have any reason to fool THEMSELVES. Take Google, for example: they don't sell the codec or licenses, so they'd gain no extra profit from exaggerating the codec's properties. They have services, and bandwidth costs them money. They can't force users to consider crappy quality good without actually making it look good; to reduce their bills they have to use less bandwidth. Funny, isn't it?

    Under these assumptions I don't see how or why Google or the AV1 group would fool anyone about their codec's properties. It sounds like a pointless thing to do.

    unlike the in-house development of a single team. The same holds for x265, just on better-quality professional input material.
    I dislike "in-house" approach as well - I do not think single company should develop upcoming standards single-handed, at least there is high risk of churn when it would fail to address someone's else needs. Somehow AV1 followed right direction and I have reasons to think their working processes are going to be true next-gen, showcasing ISO nuts how to do it right. So what do we have? Some working group full of competitors (lol) chewing on some common goal (lol). They do not sell codecs, and the only way they could have profit out of this activity is actually developing superior codec, using less BW at same (or better) quality. The only profit of this activity is codec itself, lol.

    Well, x264 is not exactly recent anymore, so that is no surprise, and not in contradiction with the outcome of this study. To be fair, you need to measure x265 against VP9.
    Speaking for myself, VP9 plays out of the box in my browser. I can't say the same about H.265. It's easy to guess how that affects my motivation to fiddle with one or the other. I'm sorry, but I really just want to put a file on my server and play it. A long list of patent trolls isn't really helpful in that regard; a working codec implementation in my browser, with a patent grant, is what really counts.
    Last edited by xcrh; 20th October 2016 at 08:37.

  6. Thanks:

    skal (3rd November 2016)

  7. #6
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by xcrh View Post
    - When it comes to AV1, just take a look at the names involved; it's no longer a Google-only project, which IMHO is a step in the right direction. More entities can take part and share their expertise while their use cases are considered. It is set up for rapid adoption in software, aiming for all major browsers, with early library availability and a decent reference encoder/decoder, and in most hardware around, be it small ARM chips, high-end GPUs or x86. I'm pretty sure Google and Netflix will deploy it very quickly, things like Wikipedia will probably follow soon, etc. So I think this tech will spread around the globe at the speed of light, showing ISO how to do things right.
    ISO is also an open process, one based on balloting proposals. That makes it slow, but it does not expose a particular project to the likes and dislikes of a single dominant company, as in the Google case.
    Quote Originally Posted by xcrh View Post
    Compiling a list of patent trolls? How freakin' generous of these experts.
    That's not how it works. ISO issues "calls": come here, report your IP on this particular project. ISO does not have a bunch of lawyers to drive full-fledged patent research. Clearly, participants in the process will report their IP, and if you are part of the process and *do not* report, you are in deep trouble later on. That is *exactly* how Forgent (do you remember them?) lost the "VLC patent" they attacked JPEG with, and they lost for exactly that reason - patent trolling. So there is some good indication that this process works. Is there a good indication that the "Google process" works?
    Quote Originally Posted by xcrh View Post
    - A patent license and royalty-free distribution, as well as the right to change it to suit your own needs. Not to mention a 50,000-pound Godzilla and all its lawyers behind it. ISO, on the other hand, just doesn't care? It's your freakin' problem? Oh, great.
    Does Google provide you with a *license* stating that they will protect *you* from outside patent claims? That sounds like a very risky policy for a company, if you ask me. They - as open-source authors - provide you with source code and a "best guess" estimate of potential patent threats. But the more parties participate in such a process, the less likely it becomes that there is an outside threat from someone else.
    Quote Originally Posted by xcrh View Post
    - Google went further and hired a company to create encoder and decoder IP blocks for hardware integration. They give the IP blocks to whatever hardware manufacturers are willing to use them, on royalty-free terms, and IIRC they can even help integrate the blocks into a SoC design. Erm, what can ISO offer? Nothing? Oh yes, batteries not included. Go design the silicon IP yourself, or pay lots of money to someone who did. That's how it goes in ISO's case.
    No, that's not how it works. IP is not about "creating silicon", and you cannot avoid IP by "doing something yourself". ISO requires you to report your IP to the upper levels of ISO. ISO does not manage licenses and it does not market licenses. It only collects declarations, to report them when needed, so it is an open process that is carefully recorded. In particular, MPEG LA is *not* a subsidiary of ISO; it is actually an independent company, completely distinct from ISO.
    Quote Originally Posted by xcrh View Post
    I strongly doubt most entities shipping video over networks dream of contacting every patent troll around, writing their own high-quality encoders and decoders, developing their own silicon IP, and so on. To put it bluntly, ISO has proven ignorant of long-standing problems, and unfriendly and useless when it comes to actual implementation.
    ISO is not in a position to create implementations. You misunderstand what ISO is about. It is not a company that provides technology for you; it is a forum for industry to cooperate on joint solutions. ISO creates standards. Given the specifications, anyone can implement them. Implementation quality and usability are not something ISO can possibly handle. It is not in their mandate to do so.
    Quote Originally Posted by xcrh View Post
    ...after MPEG LA attempted a worldwide racket by demanding royalties. It was a bait-and-switch: only the first few years were free, acting as the "bait", and once that worked the licensing terms assumed royalties. Yet this clever plan failed: when the "bait" phase neared its end and MPEG LA openly voiced its royalty demands, it backfired. Google bought the whole On2 company, and since Google isn't some greedy corporation from the 80s, they figured openness would work, released the technology, and VP8 appeared, seeing fairly quick adoption. Then they improved it and released it as VP9. MPEG LA attempted to attack that too, but Google sorted it out somehow and MPEG LA gave up. Still, it took the On2 purchase and a competing codec to get there. Looking at AV1, it seems things will continue even further in this direction.
    MPEG LA != ISO. They are entirely different things. Whatever MPEG LA decides is what the participating companies of the MPEG LA licensing program decide. If you, as a company, decide to invest money in engineers who create cool technology, that money needs to come from somewhere. The money Google invests also comes from "somewhere": in Google's case, from selling the private data of every Google user on the planet to interested parties. In MPEG LA's case, the money comes from the actual users of the technology. If you ask me, the second process looks a bit fairer. One way or another, somebody will pay.
    Quote Originally Posted by xcrh View Post
    I think it is unlikely. Far too many companies and individuals got fed up with the ISO and MPEG LA way of doing things, so another working group appeared. I expect it is here to stay and will work differently: actually serving the interests of its members and actually solving the most pressing problems, unlike what happens with ISO.
    ISO does whatever its WG members do. Why do you think its members are not interested in solving their problems? It is in their very interest to solve the problems of their respective customers. But unlike at Google, decisions are not made by a single stakeholder; they are the outcome of a discussion among its industry members. While the ISO process is necessarily slow and complex, it at least ensures that competing parties work out the best common solution. How should that work if a single stakeholder makes decisions only in its own interest?
    Quote Originally Posted by xcrh View Post
    And FYI, x265 and H.265 are two different things, just like H.264 and x264 are. The standards are named H.26x, while x264 and x265 were created by independent people who implemented those standards.
    I very much hope that x265 is an implementation of H.265. (-: If not, something broke. x265 is a piece of software; MPEG-HEVC (or ITU-T H.265) is an international standard and an ITU recommendation. So yes, they are different things, and x265 made particular choices about how to implement HEVC, but that is not a bad thing.
    Quote Originally Posted by xcrh View Post
    It is wrong to credit ISO for x26x; they only created the H.26x specs. The last time I saw ISO's own reference MPEG-4 encoder, it was awful, broken junk distributed on restrictive terms. Quite a drastic difference compared to VPx and AV1, where a high-quality encoder and decoder are part of the process.
    I don't credit ISO for x265; you are confusing something. You probably mean the HM software provided by ISO members. The purpose of that software is not to provide a solution to customers. Yes, it is slow, awkward and hard to use. Its purpose is to provide a common basis for experiments to improve the standard; it is a "verification model" with which you can evaluate whether a specific technology proposal works as advertised.
    Quote Originally Posted by xcrh View Post
    VP9 has a lossless mode and high bit-depth support; I guess those are options to explore if someone needs that sort of thing.
    That's not what I'm saying. High quality does not necessarily require high bit depth. I really mean the "image quality / bandwidth" scale: VP9 provides "acceptable quality at low bitrate", but that is an operating point that is not so interesting for professional users. There you want high quality, not the type of video you find on YouTube. It is a different market.
    Quote Originally Posted by xcrh View Post
    The most interesting outcome of the article is that, according to Netflix, VP9 performed better on the high-quality content where they thought H.265 made sense. So they could have had the same if not a better result free of charge, and without wasting time dealing with all kinds of patent trolls. I guess Netflix has some reason to feel silly.
    To what extent are you protected from patent trolls if you follow Google? You seem to be making a very balanced decision in favor of a particular development process, probably without knowing the processes.
    Quote Originally Posted by xcrh View Post
    TBH I don't see which "market needs" the ISO working processes are addressing, unless the MPEG LA racket and the like count as "market needs". The AV1 working group members probably share this view.
    MPEG LA does not define market needs, as it is not part of ISO. ISO members identify market needs - that is actually part of the process, usually handled by a subgroup called "Requirements".
    Quote Originally Posted by xcrh View Post
    Google has managed to stay a fast-moving company, and two years is a long time. I guess VP9 was seriously improved over those two years, even though the primary focus has shifted towards AV1 in the last few months. IIRC x265 is developed by independent individuals, and I'm not sure they have the resources to keep a comparable pace.
    That is certainly true. ISO moves slowly. It is a necessity due to the process. Yet they get competitive results.
    Quote Originally Posted by xcrh View Post
    Furthermore, AFAIK the way humans see moving objects differs considerably from how they perceive still pictures. Correct me if I'm wrong, but I think I've read somewhere that the human vision process isn't fully decoded yet.
    No, it is not. As far as it is understood, motion vision is usually modeled with "two channels", a "still" part and a "motion" part, where motion can very effectively mask defects.
    Quote Originally Posted by xcrh View Post
    Either way, formal metrics are great, but if I can spot an annoying artifact, the metric is worth nothing. Take the text damage typical of x264 handling small text in places like credits: metrics have no idea what "text" or "readable" means. To make it more fun, there is an even higher level: most viewers can live without movie credits, but what if it's a presentation instead? So the identical defect can carry a different "price" (in terms of annoyed viewers). A bit more complicated than merely adding a time axis, eh?
    Of course, I don't disagree. I'm just saying that there is nothing particularly useful available at this time, and Netflix surely did not have the means to assess the same amount of video subjectively.
    Quote Originally Posted by xcrh View Post
    It more or less works for audio formats, so maybe it isn't fundamentally wrong for video either? It's surely hard to use during codec development to check whether development is heading in the right direction, but once a codec is more or less done, I see nothing wrong with showing the same sequence encoded by different codecs to viewers, without telling them which codec is which, and letting them vote, just like in blind audio tests. That way the developers can see how humans actually perceive the achieved result and how it compares to other codecs overall.
    That's a "subjective test", and of course such tests are made in the development of a codec. It would be insane not to. In ISO speech, they are called "core experiments".
    Quote Originally Posted by xcrh View Post
    On the other hand, neither Google nor the AV1 members have any reason to fool THEMSELVES. Take Google, for example: they don't sell the codec or licenses, so they'd gain no extra profit from exaggerating the codec's properties. They have services, and bandwidth costs them money. They can't force users to consider crappy quality good without actually making it look good; to reduce their bills they have to use less bandwidth. Funny, isn't it?
    Do you believe ISO members have reasons to fool themselves? After all, they want to sell good products.
    Quote Originally Posted by xcrh View Post
    I dislike "in-house" approach as well - I do not think single company should develop upcoming standards single-handed,
    So why exactly do you think the Google process is a good one? There is certainly *no* single-handed development going on at ISO.
    Quote Originally Posted by xcrh View Post
    if only because there is a high risk of churn when it fails to address someone else's needs. Somehow AV1 went in the right direction, and I have reason to think their working processes are going to be truly next-gen, showing the ISO folks how to do it right.
    If you believe that a single-management, non-democratic approach like Google's is a good model, then I don't know what else to say. Here a single stakeholder has all the say in which direction development goes; it is up to a single company whether to take or leave a technology. That is a faster process for sure, but it is only a good process for that particular company, and the quality of the process depends on the goodwill of that company.

  8. #7
    Member
    Join Date
    Dec 2011
    Location
    Cambridge, UK
    Posts
    506
    Thanks
    187
    Thanked 177 Times in 120 Posts
    Quote Originally Posted by thorfdbg View Post
    ISO is also an open process, one based on balloting proposals. That makes it slow, but it does not expose a particular project to the likes and dislikes of a single dominant company, as in the Google case. That's not how it works. ISO issues "calls": come here, report your IP on this particular project. ISO does not have a bunch of lawyers to drive full-fledged patent research. Clearly, participants in the process will report their IP, and if you are part of the process and *do not* report, you are in deep trouble later on.
    ISO is open for all to comment on, but it's a highly complex process with a high bar to clear to take part (I have tried, with only partial success so far). Take, for example, the current call for proposals on DNA sequencing compression techniques. To submit, you have to be part of a national standards organisation. The first round of submissions has been done now, but the general public cannot even read those submissions, as they are behind a password-locked web portal which only people in the process can access.

    This is not "open" in my book. It's a closed shop, until you become part of the club. I can see why they need to control things as it gets too messy with anyone chiming in, and I can see why the voting takes place only between representatives of official standards bodies and any chosen external committee representations, but not letting the general public see it as it progresses is poor form. I assume it's all due to IP protection.

  9. #8
    Member
    Join Date
    Nov 2015
    Location
    boot ROM
    Posts
    95
    Thanks
    27
    Thanked 17 Times in 15 Posts
    Quote Originally Posted by thorfdbg View Post
    ISO is also an open process, one based on balloting proposals. That makes it slow, but it does not expose a particular project to the likes and dislikes of a single dominant company, as in the Google case.
    At the end of the day, ISO has failed to address pressing matters for a while now, so who cares? It does not work.

    That's not how it works. ISO issues "calls": come here, report your IP on this particular project. ISO does not have a bunch of lawyers to drive full-fledged patent research [...]
    Sure, it's ISO. A bunch of lame excuses and unworkable crap is their hallmark.

    Does Google provide you with a *license* stating that they will protect *you* from outside patent claims? That sounds like a very risky policy for a company, if you ask me. They - as open-source authors - provide you with source code and a "best guess" estimate of potential patent threats. But the more parties participate in such a process, the less likely it becomes that there is an outside threat from someone else.
    IIRC, the alliance around AV1 goes further: to join, you have to license your relevant patents worldwide, royalty-free. Probably they got desperate trying to get there other ways, so they took a more radical approach. As far as I understand, once someone is a member of the alliance they can't do patent trolling: they've licensed the patents to everyone and can't backpedal.

    When it comes to outside claims, neither ISO nor even MPEG LA can protect you from those, so it isn't any different in that respect. With H.265 it has got so bad that there are now, IIRC, something like three different groups of patent trolls, so just paying MPEG LA won't do. That's what you get for ISO's ignorance. The AV1 working group, on the other hand, seeks to actually deal with problems rather than ignore them.

    No, that's not how it works. IP is not about "creating silicon", and you cannot avoid IP by "doing something yourself". ISO requires you to report your IP to the upper levels of ISO. ISO does not manage licenses and it does not market licenses. It only collects declarations, to report them when needed, so it is an open process that is carefully recorded. In particular, MPEG LA is *not* a subsidiary of ISO; it is actually an independent company, completely distinct from ISO.
    I referred to "silicon IP" which means reusable HW peripheral block, kind of IP asset. I know MPEG LA is different entity, but I've got impression ISO pads interests of some MPEG LA members, even at expense of overall outcome. IMHO it undermines value of ISO and makes their standards pointless. Calling some commercial tool helping patent trolls to milk everyone around a "standard" is a misnomer.

    ISO is not in a position to create implementations. [...] It is not in their mandate to do so.
    Now the companies around AV1 will show them that they ARE "in a position" to do so, that they "can" and "will", can "afford" it and do "care". Not to mention being "competitive".

    MPEG LA != ISO. They are entirely different things. Whatever MPEG LA decides is what the participating companies of the MPEG LA licensing program decide. If you, as a company, decide to invest money in engineers who create cool technology, that money needs to come from somewhere. The money Google invests also comes from "somewhere": in Google's case, from selling the private data of every Google user on the planet to interested parties. In MPEG LA's case, the money comes from the actual users of the technology. If you ask me, the second process looks a bit fairer. One way or another, somebody will pay.
    The AV1 group has reshaped and rethought the process. Google got plenty of patents from On2, and alliance members must license their patents as well. This should protect the "core" more or less, I guess. The development phase includes patent analysis as well as developing and using new techniques. A far saner approach than ISO's ignorance.

    And sure, the companies behind AV1 face some running costs for this activity, but on the other hand they'll be able to cut costs and/or increase income if they no longer have to pay royalties, use less bandwidth, and provide even more attractive services. It seems to work for them. Why not?

    ISO does whatever its WG members do. Why do you think its members are not interested in solving their problems? It is in their very interest to solve the problems of their respective customers. But unlike at Google, decisions are not made by a single stakeholder; they are the outcome of a discussion among its industry members. While the ISO process is necessarily slow and complex, it at least ensures that competing parties work out the best common solution. How should that work if a single stakeholder makes decisions only in its own interest?
    It's worth nothing, since the overall outcome is an EPIC FAIL. Look, I was experimenting with playing back videos in... something like IE6 on Win98 or so. Almost 20 years have passed and... this topic still gives headaches, and we still have similar interop issues? Nonsense. The timeout has expired and a watchdog reset is due; there is no way to avoid the reboot at this point. If ISO can't get it right, it has to be something else.

    I very much hope that x265 is an implementation of H.265. (-: If not, something broke. x265 is a piece of software; MPEG-HEVC (or ITU-T H.265) is an international standard and an ITU recommendation. So yes, they are different things, and x265 made particular choices about how to implement HEVC, but that is not a bad thing.
    The bad thing is all these cohorts of patent trolls; they jeopardize adoption of the technology. It got so bad that the "industry" raised its voice, and somehow that voice sounded like AV1.

    I don't credit ISO for x265; you are confusing something.
    Then I guess there should be a clear separation between H.26x (the standards) and x26x (particular implementations), for the sake of clarity.

    You probably mean the HM software provided by ISO members. The purpose of that software is not to provide a solution to customers. Yes, it is slow, awkward and hard to use. Its purpose is to provide a common basis for experiments to improve the standard; it is a "verification model" with which you can evaluate whether a specific technology proposal works as advertised.
    I don't remember what that thing was called; it was a few years ago. I only remember that it used some weird license and performed terribly. I failed to see the point of doing it that way, but well, it is ISO.

    That's not what I'm saying. High quality does not necessarily require high bit depth. I really mean the "image quality / bandwidth" scale: VP9 provides "acceptable quality at low bitrate", but that is an operating point that is not so interesting for professional users. There you want high quality, not the type of video you find on YouTube. It is a different market.
    The best quality you can get is uncompressed (or losslessly compressed) original data. In professional use, quality takes precedence over space savings, and given modern HDD and SSD capacities and speeds, storage shouldn't be a pressing matter at all. Network bandwidth, on the other hand, is harder to improve worldwide; it evolves at a slower pace and is always an issue for servers serving many users. That is what drives codec development further. Ironically, ISO's activity has proven to be an EPIC FAIL when it comes to the web.

    Furthermore, as far as I remember, VP9's bitrate-to-quality curve looks decent at high bitrates, where it is hard to find any artifacts, and there is even a lossless mode and high bit depth. So I would ask again: what advantages are professional users expected to get from H.265? Any particular example? Just mumbling about "different markets" and "high quality" isn't convincing, especially to a person who stores uncompressed test sequences.

    To what extent are you protected from patent trolls if you follow Google? You seem to be making a very balanced decision in favor of a particular development process, probably without knowing the processes
    I guess it can at least be better than what ISO has to offer (i.e. utter ignorance of the problem and a silly list of troll dens).

    MPEG LA does not define market needs, as it is not part of ISO. ISO members identify market needs - that is actually part of the process, usually handled by a subgroup called "Requirements".
    Somehow it seems these nuts forgot to put trouble-free distribution and adoption on that list, especially on the web. FAIL.

    That is certainly true. ISO moves slowly. It is a necessity due to the process. Yet they get competitive results.
    I'm pretty sure the AV1 working group will now show ISO what is "necessary" and what "competitive" looks like.

    No, it is not. As far as it is understood, motion vision is usually modeled with "two channels", a "still" part and a "motion" part, where motion can very effectively mask defects.
    That sounds reasonable and is surely better than nothing, but it is still a simplified approach.

    Do you believe ISO members have reasons to fool themselves?
    No, but they could have reasons to fool others. When it comes to, e.g., MPEG LA members, the more people use the product, the more money they get from licensing. That means marketing spin can be profitable in this case, so it has to be considered.

    After all, they want to sell good products.
    Strong marketing can somewhat save mediocre or troublesome products. Of course, even superb marketing would choke on an awful product, but there is considerable room for bias. That has to be considered.

    So why exactly do you think the Google process is a good one?
    I think "google process" could be roughly described as transition from proprietary, single-company On2 development model to opensource-style working processes, with VP8/9 being intermediate stopgaps. Google didn't invented these processes, they're just smart enough to use it in their favor. Now its not just Google, feel free to take a look on list of companies who joined alliance. Sure, they've used VP9 (or to be exact, VP10) as starting point. Just because it has been most developed codec AV1 members had to the date and it had patent issues more or less sorted out. It is logical it proven to be good starting point or AV1.

    This process is going to be good because it favors actual outcomes over formalities. They are going to solve problems instead of ignoring them. The codec and its achieved properties are the ONLY reward: no royalties, no product sales, no associated nonsense. It seems these companies have managed to get over short-term greed in favor of models that let them reach their goals. The development process is centered around code and aims to maximize the achieved result, not to mention that it can be massively parallel on all levels. Everyone knows what's going on and is prepared for it; members can take a calculated risk by implementing things early and gradually tweaking their prototypes as the code and algorithms evolve.

    With a working process set up like this, I just don't see how you could screw it up. There is now an almost 100% chance the process will stay on its targeted course and reach its destination, just as planned. This is software engineering done right. ISO is obviously bad at this.

    There is certainly *no* single-handed development going on at ISO.
    That is worth nothing if it comes down to lame excuses, ignorance, and a failure to address pressing issues, year after year.

    If you believe that a single-management, non-democratic approach like Google's is a good model, then I don't know what else to say.
    It is more accurate to say that I don't believe in broken working processes that have failed to address long-standing issues for so many years. Lame excuses and a silly list of troll dens, instead of solving the problem the right way, are nonsense.

    Here a single stakeholder has all the say in which direction development goes; it is up to a single company whether to take or leave a technology. That is a faster process for sure, but it is only a good process for that particular company, and the quality of the process depends on the goodwill of that company.
    Ironically, it is no longer a single stakeholder, unless you define all of humankind as the stakeholder :P. Google has proven they are so committed to the goal of creating a high-quality codec that they are even OK with giving up exclusive control. It's remarkable, and I think it has been a step in the right direction. In case someone hasn't got it yet: there is no point in debating VP9's future at this point. It is a stable thing; people can use it here and now if it suits their needs. The time is now. Roughly half a year from now is going to be too late, since the AV1 bitstream will be finalized (that work is happening right now and is no longer Google-only). Since development is centered around code, a library will be available immediately. Sure, ISO can't afford workflows like this. One more nail in their process's coffin.

  10. #9
    Member
    Join Date
    Apr 2012
    Location
    Stuttgart
    Posts
    448
    Thanks
    1
    Thanked 101 Times in 61 Posts
    Quote Originally Posted by JamesB View Post
    ISO is open for all to comment on, but it's a highly complex process with a high bar to clear to take part (I have tried, with only partial success so far). Take, for example, the current call for proposals on DNA sequencing compression techniques. To submit, you have to be part of a national standards organisation. The first round of submissions has been done now, but the general public cannot even read those submissions, as they are behind a password-locked web portal which only people in the process can access.
    I do not know under which SC this project is handled, but it depends a lot on the openness of the particular WG and SC. I can only tell you how JPEG (SC29 WG1) handles matters: we have mailing lists, open for everyone to join - all it requires is that you identify yourself and make clear why you want to participate. I believe that "next door" at WG11, mailing-list access is also open.

    Thus, you are invited to discuss with the experts. If you want to make a formal contribution, then *yes*, you have to be a member of a national body. That's part of how ISO is organized, for better or worse. Anyhow, a good expert should always listen to outside comments. We had, for example, a couple of good comments come in on JPEG XT Part 9 (alpha channel integration into JPEG), and those made it into the standard. So yes, everyone can make a difference.

    The process itself feels a bit strange at first. There is a good way, there is a bad way, and there is the ISO way. I'm not saying it's ideal, but there are reasons for the formalities.

    Quote Originally Posted by JamesB View Post
    This is not "open" in my book. It's a closed shop, until you become part of the club. I can see why they need to control things as it gets too messy with anyone chiming in, and I can see why the voting takes place only between representatives of official standards bodies and any chosen external committee representations, but not letting the general public see it as it progresses is poor form. I assume it's all due to IP protection.
    Strangely enough, we *cannot* talk about IP at all - it's not our mandate. What does need to be recorded, however, is who participated in the meetings. The reason is that ISO wants to minimize the risk that a participant takes somebody else's idea and patents it later, or adds technology to a standard without later declaring the IP to ISO itself - so every name is recorded and the attendance list can be traced back across meetings.

    Another thing we cannot do is share documents with "ISO" status on them, because they carry ISO copyright; those are only for editors and members. But that really applies to the standards themselves, and is less about IP than about copyright as such.

    There is sometimes a way around it (a dual standard with the ITU, or an explicit declaration that a standard should be freely accessible - all of that is possible within the process).

    Becoming an expert through your NB is easy. Years ago I just had to fill in a one-page form and file it with my NB (here, DIN). Easy going, and just a formal registration step.
