I think most of the commenters are basing their responses on the (terrible) headline rather than what actually seems to have happened.
As comment 19 of the bug report says, "This bug is not about to implement HEVC in Chrome", and "Chrome does already support the native HEVC decoder for progressive download videos using the HTML5 <video> tag on Android". The issue description notes that as of Feb 2015, "Playing plain HEVC mp4 files in Chrome/Android/Nexus5 works fine".
This new fix just corrects the handling of HEVC in Chrome's Media Source API (formerly known as "Media Source Extensions").
Comment 19 is 7 years old and doesn't reflect the contents of the patch that actually closed the bug, which enables HEVC both on Android and desktop when provided by the OS (but does not ship a software decoder itself).
On one hand, that's great for anyone watching HEVC content in Chrome.
<rant>On the other, I'd prefer to not see further adoption of HEVC and instead see increased deployment of VP9 and AV1 wherever possible. Let MPEG-LA and the other HEVC patent pools+holders... well I'll leave the rest to your imagination. Future looking, no one should even touch VVC/H.266.</rant>
Unfortunately the above rant does not address the gap in hardware support between HEVC and AV1 for efficient accelerated decoding. Codec support is a difficult game to balance. I'm hopeful that by the time AVC/H.264's patent pools fully expire later this decade we'll all have moved on to newer and better (royalty-free, patent-unencumbered) things.
It's ironic, not one week ago there were a bunch of complaints on this site saying that Google was abusing their position of power (controlling YouTube, Chrome, Android TV, etc) by pushing AV1. Now they bring HEVC support to Chrome and the top comment is another complaint!
(That said, I agree with you. I think a codec being royalty free is a very good reason to prefer it to other codecs.)
Personally, I have no idea why people are so upset about WebM, AV1, etc. Like sure, they're not necessarily God's gift to AV, but they're reasonable enough, and patent-unencumbered. Google may be awful, but that doesn't mean the incentives can't align. I can tell you that my incentives are perpendicularly aligned with MPEG-LA's in this situation, so...
Hostages? I'd say Google is better than most regarding data portability, assuming that's what you're referring to, offering 'takeout' for just about everything.
The comment was about the internet "hivemind"; no need to be pedantic about it.
Keep in mind that upwards of 95% of people will read comments but not post any themselves (and to address an obvious retort, sure, it'll be a different 95% for each submission). So the general "tone" of the comments absolutely impacts public perception, especially now that practically no one consumes straight news without social media commentary.
Redditors use identical arguments: "we're different people!" when their website is the most groupthink-y of all.
Threads are groupthink, but subreddits with diverse topics are less groupthink than you'd think. Different user groups turn out for different headlines, at different times, etc.
TBF they can be abusing their position + helping other companies abuse their position as well.
Just reading about the historical context of AV1, it all feels like huge dirty lawyer battles with troves of money thrown around, with the user as a hostage indirectly footing the bill in the end, so we can't just ignore all the drama.
I would want to side with Google on the open approach they seem to be championing, but there's no way it doesn't come with huge side effects that we will pay for sooner or later.
All that to say, it looks like a complicated enough matter that opinions will be divided and all the options might not be great, some just being more acceptable than others.
I think it's remarkable that the pirate scene has barely touched VP9 or AV1. We discussed this 2.5 years ago, nothing really has changed except there's more H.265 than before. https://news.ycombinator.com/item?id=19362098
Pirates don't care about codec licences the same way they don't care about the copyright of the content they pirate. Therefore they will always pick the codec that is easiest to target and has hardware encoding and decoding readily available. Most either watch content on PCs, smart TVs, nvidia shield / firestick, and almost none of those have av1 hardware support (maybe PCs, but only recently).
Pirates also work directly with video files. Each file has one bitrate and one codec, and needs to work as universally as possible. There are sometimes two or more versions of the same video available that use different bitrates or codecs, but each added version has a cost in clutter (they typically show up as separate entries in the UI), disk space (relatively expensive when it’s people’s personal disks) and P2P swarm availability (critical if using P2P), so you won’t see too many. In contrast, just about any streaming service will have several different encodes for each video, automatically selecting one based on bandwidth and codec availability. That makes it relatively easy to adopt new codecs, since users who can’t decode them can just get a different encode.
However, pirates do care about file size and quality, as demonstrated by the adoption of 10-bit H.265. So AV1 should be coming, eventually.
I think the biggest reason is lack of hardware decoding. Lots of people have a device that can hardware-decode H.265, which is extremely important for 4K content. My x230 really struggled with software 4K decoding of H.265, so I assume the same issue will happen with VP9 and AV1, especially at higher resolutions.
It probably all comes down to ffmpeg not supporting AV1 well. Pretty much everyone uses ffmpeg or a wrapper around ffmpeg like HandBrake to encode. And my understanding is that ffmpeg ships an old, very slow, and partially broken AV1 implementation.
This'll be long-winded excitement to talk about the weird little community, but I think a lot of that depends on the circles you run in and the content you consume.
There is surely a lot of low-effort GUI handbrake encodes online. But most of the """well-respected""" piracy groups put a surprising amount of effort into filtering and such to correct artifacts, both due to the compression and due to the source material itself.
A lot of these people are using tools like VapourSynth with a variety of scripts they've put together and x264 or x265 directly rather than ffmpeg. The scripts themselves are typically Python, but often rely on loads of native modules. You can see a couple of guides about some of the processes they perform:
And while not directly related to the encoding side of things, pirate fansubs also get pretty complex, particularly for anime, since, unlike the unstyled SRT subs most people come across for foreign movies online, anime fansubs tend to use ASS [1] subtitles with lots of styling to accomplish things like cleanly replacing Japanese text in a letter someone is reading or adding non-distracting subtitles for background text (e.g., signs on buildings) [2].
To do a lot of that, though, these subtitles often pack fonts into the video container to allow the media player to render things as expected without resorting to "hardsubbing" (i.e., pre-rendering the subtitles into the video itself)—which is one of many reasons container formats like Matroska (MKV) are so popular in those communities.
An interesting thing to see come out of that is that I have noticed some fansubbing groups move to proper build tools, like Gradle, to automate portions of their workflows. As an example, SubKt, a Gradle plugin, allows them to essentially have CI/CD for their subtitling projects by doing integrity checks on the fonts, linting the subtitles/fonts to ensure the selected fonts actually have glyphs for all the text, templating and merging so that different team members can work on things like the script/timing while another does styling, and then packaging and publishing tasks to bundle everything up into an MKV at the end and upload the result to torrent sites.
If any of that is interesting, here are some links to SubKt + some real-world finished projects making use of it:
Regarding 'why' AV1 and other codecs like VP8/VP9 or VVC haven't really been used:
1. Many of the private trackers have fairly strict rules in terms of standards (e.g., due to lack of hardware support, perceived differences in quality, etc., many don't allow <4K HEVC encodes at all, except in edge cases like when a streaming platform releases a new show in HEVC-only), so individual encoders and groups aren't always free to use whatever codecs they please.
2. Many seem to find x264 easier to tune for certain types of media than x265, and even more so compared to AV1 and others.
3. Many seem to believe that [insert codec here] tends to produce worse results in certain circumstances or for certain content, so they will stick with x265 (or even x264, for the same reasons).
4. Many find that, to truly achieve the same picture quality produced by x265, compression ratios often end up much worse than people claim, and thus the significant slow-down in encoding speed and loss of hardware support is not worth the minor reductions in size.
#4 is likely the most common reason, as it was/is the same with those who prefer x264 over x265; HEVC video is definitely not "half the size" if you want it to look comparably good. And so, especially in the past with older hardware, it simply wasn't worth the tradeoffs; it's worth remembering that, in the case of piracy groups which distribute over P2P networks, no one is paying AWS and co. exorbitant amounts of money per terabyte of data transferred.
These sites run off of 'free' bandwidth provided by users and cheap unmetered servers from companies like Hetzner, OVH, LeaseWeb, etc -- saving 10-30% in bandwidth often is not worth it at the expense of doubling your encode times (or significantly worse than doubling, in the case of AV1 and VVC) and alienating the people watching on older hardware.
EDIT:
Also, it's worth noting, re: my points on encoding speeds, that while hardware decoding may help adoption in the piracy communities, I don't foresee hardware-accelerated AV1/VVC encoding making much of a difference in the near future; even today, virtually none of these groups use solutions like NVENC for HEVC, because the software HEVC encoders produce better results (so pirates who encode such content have generally just come to accept the slower encodes now that good CPU compute is much cheaper than in the past).
Thank you for all these details! You confirm my suspicion, which is that the pirate scene has a lot of really valuable knowledge built up from years of using codecs in a very practical way.
ffmpeg can also be built to link to libaom, and my package manager (Macports) does this by default. (Likewise, I think ffmpeg has its own H.264 and H.265 implementations, but also links to libx264 and libx265.) Does it really matter what ffmpeg's built-in implementation is like?
I remember maybe ten years ago "real" pirates (from "the scene") thought that everything needed to be in rar format, and anything else was considered "p2p" and not as pure. They were also late with h264 and used xvid for far too long.
Did you try to encode anything with av1? Even with a high-end CPU it's just not feasible. I tried with ffmpeg and it was encoding at single-digit frames per second.
A video of just 10min would take many hours; on the other hand, h265 encoding is slow but doable.
Edit: just retried on my laptop:
0.6fps with a 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
Using the latest version of ffmpeg with the sample from the wiki: ffmpeg -i input.mp4 -c:v libaom-av1 -crf 30 -b:v 0 av1_test.mkv
It's so slow that I don't even see the file size change on disk; it's still 0 after a minute, so I guess ffmpeg's encode buffer is still too small to be flushed out.
Try compiling ffmpeg with `--enable-librav1e` and use the `rav1e` encoder implementation. It's supposed to be the fastest software encoder, though of course it can still be quite slow depending on the settings.
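Roughly, this is the kind of thing I mean; treat it as a sketch (it assumes an ffmpeg build configured with --enable-libaom and --enable-librav1e, and the speed/quality numbers are just starting points, not recommendations):

    # Sketch: drive ffmpeg from Python and compare the two AV1 software encoders.
    # Assumes ffmpeg was built with --enable-libaom --enable-librav1e; filenames are placeholders.
    import subprocess

    SOURCE = "input.mp4"

    encoders = {
        # libaom-av1: -cpu-used trades quality for speed (0 = slowest/best, 8 = fastest)
        "libaom-av1": ["-c:v", "libaom-av1", "-crf", "30", "-b:v", "0",
                       "-cpu-used", "6", "-row-mt", "1"],
        # librav1e: -speed runs 0-10, higher is faster; -qp is the quantizer
        "librav1e": ["-c:v", "librav1e", "-qp", "80", "-speed", "8"],
    }

    for name, args in encoders.items():
        subprocess.run(["ffmpeg", "-y", "-i", SOURCE, *args, f"av1_{name}.mkv"],
                       check=True)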
I don't think there are CPUs with integrated GPUs from Intel that do AV1 encoding yet. Raptor Lake launched on October 20, 2022, and while RL will decode AV1, it won't encode it.
The Intel Arc cards will encode AV1.
The next generation of Intel CPUs with integrated GPUs are supposed to have AV1 encode. Maybe a year or two away?
Intel CPUs don't have AV1 hardware encoding support quite yet. Their latest 13th gen CPUs have AV1 hardware decoding support though. The recent performance improvements[1][2] are for software encoding.
But does that get the same quality? All the codec performance comparisons I've seen have totally ignored the video quality, making them utterly meaningless.
For what it's worth, Intel & Netflix's AV1 encoder SVT-AV1 is extremely fast -- it changed my mind about what encoder rates are possible with AV1, to the point that I'm very happy with realtime CPU encoding.
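In case it's useful, the invocation is roughly this (a sketch; it assumes an ffmpeg build with --enable-libsvtav1, and the preset/CRF values are just examples to tune):

    # Sketch: SVT-AV1 through ffmpeg. Presets trade compression efficiency for speed
    # (higher = faster); somewhere around preset 8 and up is where realtime CPU encoding
    # starts to be plausible on a decent desktop chip. Filenames are placeholders.
    import subprocess

    subprocess.run([
        "ffmpeg", "-y", "-i", "input.mp4",
        "-c:v", "libsvtav1",
        "-preset", "8",      # speed/quality trade-off
        "-crf", "32",        # quality target
        "svtav1_test.mkv",
    ], check=True)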
This was also true of H.265 in its early days before there was widespread support. I once had to wrangle a few workstations to act as a render farm overnight in order to transcode a couple of hours of footage by the next morning, not having been aware that we were dealing with H.265 until the day before it had to be done.
But also yes, as others are pointing out, this is a problem rapidly being addressed and is not atypical for new media formats.
I'm able to encode using av1an (which I believe still uses ffmpeg as its backend) at 3440x1440@30Hz with 10bit color using a mid/high-range AMD 3800X processor.
I'm not sure what might be wrong on your end, but it sounds like ffmpeg's default configs might not be well-optimized yet if you can't encode in real time.
With the correct encoder and settings, it can be faster to encode than x265, for any given "efficiency" that x265 can achieve. Of course, the efficiency can be pushed higher than that, too, still with a reasonable encoding time.
exactly. vp9 isn't actually that good. It's clearly worse than hevc. AV1 is better, but there's limited hardware encoding/decoding support. The result is exactly as expected.
> does not address the gap in hardware support between HEVC and AV1 for efficient accelerated decoding
I'm rooting for AV1 just the same as everyone else, but it's still nowhere close to HEVC in terms of hardware support, or even general quality/efficiency. It's just going to take some more time.
Add the hardware support, then we can talk about dropping the proprietary ones. There's no way the costs of royalties, no matter how they indirectly affect me as a user, outweigh the cost of my CPU chugging to handle unoptimized video.
Surely that depends - are these a bunch of generic, vague, detail-less patents, or do they provide the actual information required to implement the thing they purport to describe?
The core problem with “software patents” in the US sense is that the patent office appears to grant them by default if they are vague, only accepts specific types of pre-existing evidence that they are not new or novel, makes it as hard and expensive as possible to challenge a patent once it is granted, and doesn’t allow you to recover costs if you are sued over a patent that is eventually revoked.
All of those things mean that the specifics of US patent law remain BS, but at the same time I think that everyone on HN does believe that IP should exist, and that people should have rights to what they create.
You can't patent "pure math" but given that describes literally anything that it is possible to do on a computer, including simulating a physical device, we know that there is an intrinsic point where things go from "math" to something patentable.
The rationale for "you can't patent maths" is basically "you can't patent a fact".
Blanket anti-software-patent people take a maximalist position: if it's a set of instructions it is maths, so it should not be patentable. I think that is absolute nonsense, and it is an explicit statement that if you ever come up with any idea, no matter how much it cost you to invent it or develop it, it has zero value - because apparently the hard part of complex and new technology is writing code, not developing the technology in the first place. It also means you get some absurd results: the same invention would be patentable if you made a mechanical implementation, a purely electronic one, probably an ASIC, but probably not if it was an ASIC executing instructions from a built-in ROM. Because suddenly it becomes "math".
As I said originally, the problem is not patenting "software", it's that the idiocy of the US patent office means that you can make a patent document that has no information that can be used to implement the patent, and thus the patents are inherently open to abuse.
The core problem with software (and worse, process) patents is that they let you patent an idea, rather than an actual implementation of an idea, which is what physical object patents are required to do. The whole reason patents are public is so that the public can look at a patent, and use that document to implement the idea being patented, but if all you've done is patent the idea then all the public can do is see that you had an idea but didn't know how to actually build it (which is what patents are _meant_ to be for).
> It also means you get some absurd results: the same invention would be patentable if you made a mechanical implementation, a purely electronic one, probably an ASIC, but probably not if it was an ASIC executing instructions from a builtin ROM. Because suddenly it becomes "math".
Why is that absurd? Here's my attempt to describe a maximalist position: If the machine actually does something then you get a valid patent for the real world. But the patent won't apply to simulations of the real world. So it doesn't really matter whether it's "patentable" or not. We could give patents to both variants, but if someone is only interested in the data the machine outputs, they can run a simulated version without violating either patent.
> it is an explicit statement that if you ever come up with anything idea, no matter how much it cost you to invent it, or develop it, it has zero value - because apparently the hard part of complex and new technology is writing code, not developing the technology in the first place
Math takes tons of effort too! Deep complicated proofs are no more "just a fact" than compression schemes are "just a fact".
> The core problem with software (and worse, process) patents is that they let you patent an idea, rather than an actual implementation of an idea
I worry that there's no good way to make a thorough guideline for what counts as idea and what counts as implementation for things that are code-based.
Though in the strictest sense you could just rely on copyright for implementations and toss out patents entirely.
I just want the best codec, and I am willing to pay a dollar for it.
For video, H.266 / VVC is just technically superior in every single way. For images, JPEG XL is the best for 95%+ of use cases. For audio, we have AAC-LC, literally as ubiquitous as MP3, truly patent-free, and at 128+ kbps, in 95% of cases as good as the state of the art.
And yet we end up in a world where the only accepted choice is AV1 for video, AVIF for images and Opus for Audio.
While I agree with the rest, I must say that on one point you are wrong. Opus is better than AAC-LC. It spans a much wider spectrum of bitrates and it sounds good¹ at any of those. On top of that, Opus is open source, which AAC-LC is not, afaik.
---
¹ good is subjective, I earn my money as a freelance audio engineer, so I should have the ears to notice anything wrong with it.
is it possible to implement generic/unencumbered blocks at the HW level and then string them together at the SW or firmware layer? how much efficiency do you lose? if you can move all the patentable stuff into SW then we can do the same thing we all did with mp3 patents: i.e. ignore them (see: LAME).
tell that to every Linux user who was doing anything with mp3's before the patents expired just a couple years ago. i'm pretty sure even the Windows version of Audacity -- a tool with over 100 million downloads -- integrated with it. are you claiming the LAME approach wasn't successful? or that its success was a fluke, context-dependent, or for some other reason is a bad analogue?
I'm not sure how you think for-profit companies like Mozilla, Google or Apple would get away with shipping 'ha ha I tricked you!' patent circumventions to a billion people. It's not like Audacity.
Keep in mind that Firefox AFAIK still relies on OS codecs for h264 because shipping it themselves is such a difficult proposition due to patents. And on Windows, if you want native HEVC or AV1 playback you have to buy it from the store, it doesn't ship with the OS.
maybe i misunderstood, then. i thought that HEVC licensing was paid for by the hardware vendor, and hence acted as a broad tax even if you’re not using HEVC, or using GPUs in a HPC context, etc (and unnecessarily raises the barrier of entry to new HW vendors, etc). but if Windows users are individually paying those licenses, that’s probably not the case.
i guess the worry then is that HEVC crowds out AV1/others, and this Chrome change is setting the stage for it to become a broad tax on video? i could buy that angle.
There was a recent kerfuffle about Fedora removing support for hardware accelerated H.265 from their distribution of mesa because of threat of patent litigation. See https://www.phoronix.com/news/Fedora-Disable-Bad-VA-API. That would also seem to suggest that the OS has to pay royalties if it uses hardware acceleration. It seems pretty strange to me that the OS/library/driver has to pay royalties just to expose access to hardware, which is where the patented technology actually exists. But I also don't understand how a video codec can be patented in the first place. ¯\_(ツ)_/¯ IANAL.
Yeah, downloading an OS that couldn't play MP3s by default, and required you to jump through hoops to let it play them, because said distro was avoiding lawsuits. Just like how Audacity didn't ship with LAME but required you to supply your own DLL. Yes, patents had zero effect on the user experience.
yes. of course patents impact UX. if patents didn’t make things more difficult for engineers, then they’d be worthless, and in open source that burden on the devs gets passed through to the users.
a reasonable person might conclude that patents are a bad thing as a result. like it or not, HEVC is patented and that’s not changing anytime soon: but do we decide to develop tools around it, or banish it?
that any users actually went through the awful UX of downloading codecs is some evidence that mp3 support was valuable to them. is this still the case, today with HEVC, or not?
if the argument was really as simple as “HEVC is bad UX”, we wouldn’t have this discussion: nobody would use something with bad UX if they didn’t feel compelled to for some other reason. why anyone would feel compelled to, is the more interesting discussion.
LAME is a really great example of why patents exist: the need to avoid the mp3 patents resulted in the development of technology that was superior to that covered by the mp3 patents. Do people really think LAME would have been developed if people were just reimplementing the existing mp3 encoder?
This is wrong in so many ways I'm not sure where to start. LAME was covered by the patents, so your whole idea is backwards and even if it weren't it's not supported by any evidence - there's just no relationship between patentability and how many encoders get implemented independently or not.
AV1 would be ideal to support. It is resource-intensive in software, but with M2 MacBook Pros, for example, and upcoming iPhone A17 processors, AV1 decoding and encoding could be put into the hardware.
AV1 reputedly is 30% more efficient than HEVC, important for a number of cases, such as more efficient use of bandwidth over cellular. For FaceTime over cellular Apple today uses the HEVC codecs if available on source and destination phones.
My guess is that the next shot is with a TSMC N3E chip, which _may_ debut with the iPhone 15 Pro. There have been conflicting reports about whether N3E will be ready when Apple needs it - September 2023 if it's for the iPhone 15.
N3E based M3 Macbook Pro with hardware raytracing and hardware accelerated AV1 decode/encode would be nice.
> That’s where the catch is, unfortunately. The biggest drawback is that HEVC with Widevine DRM is not supported at this point, only clear, unprotected content. It’s unclear whether Google has plans to add support for this in the future or not.
That was my first question. I hoped against hope that it wasn't a trojan horse for some new DRM.
Surely they are going to add Widevine support at some point, but I can't help but rejoice a bit to hear that the latest/greatest coding doesn't support DRM. Even the possibility of a DRM-free future is a wonderful dream.
This isn't true at all. True, HEVC doesn't support Box level encryption like MP4 does, but nothing is stopping anyone from just AES encrypting an entire fragment, as has been used for years with HLSe. The fact that MP4 uses box level encryption is one of the stupidest tech-related things I have ever seen. It forces anyone who wants to decrypt (or encrypt) to write a full MP4 parser, instead of just utilizing existing encryption tools on some "dumb bytes". Once people realize that you can just use HLSe method with HEVC, websites can then pull the keys from the M3U files and just return them from Widevine license requests instead.
That's an interesting perspective. Some corrections are in order.
MP4 encryption, specifically Common Encryption addition to ISO BMFF has two levels of partial encryption.
- Each frame's media bytes are encrypted independently of other frames. MP4 boxes themselves are not encrypted. This is done so that applications can parse container metadata such as codec params and timestamps without depending on the secure decrypt and decode layer.
- Each frame consists of codec protocol messages such as NAL Units for H.264 and HEVC, OBUs for AV1, and uncompressed and compressed headers and tile headers for VP9. Subsample encryption leaves message headers unencrypted, but encrypts their contents. This is done for efficiency of the secure decode hardware. Clear and encrypted byte ranges are stored in MP4 boxes.
MP4 encryption of course fully supports HEVC using both mechanisms.
Widevine in general supports HEVC. The Widevine module in Chrome has to include a decoder for each supported codec. They probably skipped HEVC to avoid increasing download size for users.
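To make the subsample mechanism concrete, here's a toy sketch of the decrypt side (illustrative only: in real Common Encryption the key comes from the CDM, the IV and the clear/protected byte ranges come from the 'senc'/'saiz'/'saio' boxes, and pattern-encryption modes are ignored here):

    # Toy illustration of CENC ('cenc' scheme) subsample decryption: AES-CTR is applied
    # only to the "protected" ranges of a sample, while headers (e.g. NAL unit headers)
    # stay in the clear so a demuxer can parse them. Names and values are hypothetical.
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def decrypt_sample(sample, key, iv, subsamples):
        """subsamples: list of (bytes_of_clear_data, bytes_of_protected_data) pairs."""
        decryptor = Cipher(algorithms.AES(key), modes.CTR(iv)).decryptor()
        out, pos = bytearray(), 0
        for clear_len, protected_len in subsamples:
            out += sample[pos:pos + clear_len]                        # left unencrypted
            pos += clear_len
            out += decryptor.update(sample[pos:pos + protected_len])  # decrypted payload
            pos += protected_len
        out += sample[pos:]                                           # trailing clear bytes
        return bytes(out)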
I don't buy these rationalizations at all. As was previously said, HLSe has been using fragment-level encryption for many years, with AVC and MP4, so we know it's possible. This is not theoretical: https://paramountplus.com, https://cbc.ca, and others use HLSe on some streams currently. You don't need box-level encryption, and requiring it just adds pointless MP4 parsing and overhead.
They're not rationalizations. They're implementation choices under legal, engineering and product constraints. In particular, Paramount and CBC allow themselves to use HLS encryption forbidden under typical content protection contracts.
Based on the QuickTime File Format, MP4 is usually only a container these days. The Moving Picture Experts Group developed the MPEG-4 standard with many parts[1], including a codec called MPEG-4 (aka MP4V or just MP4), which was still popular for SD video encoding only a few years ago, though not exclusively, and it is still available for encoding video in Apple QuickTime (MPEG-4 Basic, MPEG-4 Advanced), ffmpeg, HandBrake, etc. But I doubt that's what GP meant; at least, I'm not aware of any codec supporting encryption, but I really hardly know anything, so there's that. At any rate, "MP4" alone is ambiguous because it can be either the container and/or the codec.
Let each of them know, please, that I and my compadres apologize for referring to MPEG-4 Part 2 as MP4 to save time, and refer to the container as MPEG-4 Part 14 to avoid ambiguity. Major faux pas there, how embarrassing, mea culpa, and thank you for taking the time to set us straight by letting us know what every single other person does.
I agree with your assessment. The reference to “box” makes me think it was definitely the container. I also don’t see the codec called MP4, it’s usually spelled out MPEG-4.
".mp4" is just a file extension, the source of the colloquial name for the container. But both the container and the codec are named MPEG-4 and both are colloquially called "MP4."
"Box level" is apparently referring to types of cryptography, "S-boxes are non-linear transformations of a few input bits that provide confusion and P-boxes simply shuffle the input bits around to provide diffusion"[1] The basic function of S-Box[2] is to transform 8 bits input data into 8 bits of secret data using a precomputed look-up-table (LUT). A "permutation box (or P-box) is a method of bit-shuffling used to permute or transpose bits across S-boxes inputs, retaining diffusion while transposing."[3]
Thanks, I really had no idea what he was talking about... fucking movie atoms. I've always hated those things, and the idea of encrypting them makes them even more sadistic.
> It forces anyone who wants to decrypt (or encrypt) to write a full MP4 parser
MP4 is seriously one of the easiest binary formats to parse, and even if you couldn't use an existing parser it would probably be the simplest part of whatever you were actually building.
> it would probably be the simplest part of whatever you were actually building
Unless you've written a Widevine client, downloaded from DASH, parsed MP4, decrypted MP4 samples, and then reassembled the decrypted fragments, you're really not in a position to be making this claim. I have done all of the above, and the MP4 parsing was by far the most difficult part of the process, and that includes parsing Protocol Buffers for use with Widevine. The sheer volume of different box types is what makes it difficult. Over 100 types, see for yourself:
I haven't done anything with Widevine , but I have written multiple BMFF parsers, and I'm intimately familiar with how many different boxes/atoms there are. Luckily you can implement them incrementally because the box hierarchy is so normalized.
It's actually my go-to project when I'm trying to learn a new language, because the problem itself is simple enough to understand, but it forces you to learn the idioms about the language you're learning. What is the idiomatic way to represent different box types? How do you read values with specific endianness from a buffer? How do you seek through a file's contents without loading the whole 10GB movie into memory?
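For anyone curious, the core loop really is tiny. Here's a minimal sketch in Python (it just prints the box tree, recurses into a few well-known container boxes, and skips details like FullBox version/flags; the filename is a placeholder):

    # Minimal ISO BMFF / MP4 box walker: each box is a 4-byte big-endian size followed by
    # a 4-byte type; size == 1 means a 64-bit "largesize" follows, size == 0 means
    # "box extends to the end of the file".
    import struct

    CONTAINERS = {b"moov", b"trak", b"mdia", b"minf", b"stbl", b"moof", b"traf"}

    def walk(f, start, end, depth=0):
        pos = start
        while pos < end:
            f.seek(pos)
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            header_len = 8
            if size == 1:                      # 64-bit largesize follows the type
                size = struct.unpack(">Q", f.read(8))[0]
                header_len = 16
            elif size == 0:                    # box runs to the end of the file
                size = end - pos
            print("  " * depth + box_type.decode("latin-1"), size)
            if box_type in CONTAINERS:         # plain container boxes just hold more boxes
                walk(f, pos + header_len, pos + size, depth + 1)
            pos += size

    with open("movie.mp4", "rb") as f:         # placeholder filename
        f.seek(0, 2)                           # jump to the end to learn the file size
        walk(f, 0, f.tell())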
A while back I tried to implement an MP4 demuxer, and I can kind of relate to that. The mdat box is sometimes an opaque blob and you need to parse the codec framing to split packets (fMP4 helps with this a bit), each codec has its own set of boxes, and the specs for each of them are paywalled...
Matroska/WebM is so much simpler and easier to parse, you can essentially abstract it away in a JSON-like DOM (obviously without loading 1GB of data into memory) and just get what you want, it's great.
Why do pirates prefer HEVC over AV1? I assume they don’t care about intellectual property, so I assume it comes down to the technical quality of the algorithms?
Maybe they have moved to av1 and I just don’t know it. Either way the quality of the piracy scene has always seemed vastly better than legitimate channels, up until Netflix.
HEVC encoding is far more mature. Encoders are faster, of better quality, and hardware acceleration exists. Additionally, for video playback outside of a PC, HEVC support is ubiquitous, AV1 support is virtually unknown.
And -- universality of the playback. "Why don't people adopt Vorbis?" almost answers itself, "Where/on what devices are they going to play it?" Without looking -- does your Apple TV support X?
For the record, it seems like more and more devices are supporting Opus for general use cases. It seems to be the audio codec end game - at 16kbps it sounds as good to me as 64kbps mp3
> For the record, it seems like more and more devices are supporting Opus for general use cases
Yes, exactly and AFAIK it's because Chrome/Google/YouTube has recently supported Opus to the exclusion of these proprietary formats for higher quality (read HEVC level) video.
At least with web streams, pirates don't re-encode.
And since web streams are in h.264 and h.265, that's what the torrents are in too. It's not about preference, it's about the source.
(And the web streams are in h.264 and h.265 because that's what people have the most hardware decoders for, which preserves CPU usage and battery life.)
That applies to TV shows. For movies though, the vast majority of new torrents are Blu-rays re-encoded to HEVC (x265 being the encoder used pretty much every time).
well yes, but not everyone wants to download 70GB movies, so blu-ray movies are often re-encoded for smaller filesizes (10-20GB 4K HEVC, 5-10GB 1080p H264 AC3 sound, 1-5GB 1080p HEVC AAC)
It comes down to the fact that HEVC is supported by more devices. 1080p and lower resolution versions of pirated movies still come in H264, because that is supported by even more devices...
Encoding is/was slow. There's at least one group that was running an encode farm for AV1 though. Today there's a TON more AV1 content available from pirates as better hardware is available, and ffmpeg is usable with AV1 now.
High resolutions like UHD require really high bitrates when encoded with H264, because H264 wasn't designed with such high resolutions in mind. HEVC/H265 improves upon its predecessor in this regard, so Netflix's edge hardware only keeps their UHD content encoded in HEVC/H265. If you request HD content, you're not only getting a different resolution, but also likely a different encoding scheme.
Chrome on Windows has Widevine support but not the level required for providers to serve 4k content to it. As mentioned in another comment, it's also possible the 4k content is only served via HEVC, so lack of Widevine support in HEVC would mean there is no compatible 4k stream available.
Widevine, and DRM more generally, has everything to do with why most providers don't stream above 1080p (if even that) on Windows. Netflix's Windows app (i.e. not Chrome, and possibly limited to only Netflix Originals...) being a notable exception:
Amazon Prime Video: Nope
HBO Max: Nope
Paramount+: Nope
Disney+: Nope
Hulu: Nope (Maybe originals in the app?)
Apple TV: God nope; ugly as sin. I don't even think they are hitting 1080p, but maybe I'm just spoiled now.
There may also be aggressively limited bitrates at 720/1080p for H.264 streams. I'd need to dig into it, but that could make those resolutions look like garbage compared to how they used to look (i.e. a Vudu 1080p stream circa 2011).
Basically it's a shitshow and a damn shame for personal computing in 2022.
that's why piracy is regaining currency - when you pay for DRM'd content, you're just paying money to be treated like a pirate. Better to be a pirate to start with :)
The mystery is who is paying whom? Is Microsoft paying for HEVC licences? Google? The distributor? Silicon vendors? All of them? The question above is, "Why use HEVC (pay) when AV1 exists?". Or, "Why care about AV1 if everyone is paying?"
>Because HEVC has hardware support, right now, so it's faster.
Most new hardware supports hardware decode for AV1 too. There were 1-2 generations prior to the current one that had HEVC but not AV1, but that sample size will become irrelevant over time.
>People don't much.
Well, clearly people who make decisions do. If you ask an average person on the street if they care about HEVC or AV1, they don't. If you ask Netflix or Google, they do.
"Netflix has also partnered with YouTube to develop an open-source solution for an AV1 decoder on game consoles that utilizes the additional power of GPUs."
But still today you can take an HEVC iPhone or GoPro video and watch it on your PS5 with full hardware decoding. This open-source solution doesn't help with that.
> It's hard to see HEVC as anything else than a legal liability.
I actually think the truth is the exact reverse. HEVC has easy-to-license patent pools. A couple of clicks and done.
The patent situation surrounding AV1 is complex. It claims to be a free format but multiple entities claim patents that cover it. A submarine patent lawsuit seems likely.
>The patent situation surrounding AV1 is complex. It claims to be a free format but multiple entities claim patents that cover it. A submarine patent lawsuit seems likely.
This can be said about anything. Sounds like FUD. There hasn't been a successful lawsuit yet, right?
The Alliance for Open Media (the group behind AV1) is the subject of an ongoing investigation by the European Commission[1] who, per the article, appear to be threatening those who distribute software decoders with a fine of 10% of global revenue.
According to this article[2], Sisvel, the patent group claims its licensing for AV1 is
> more convenient than licensing from individual patent holders, which in this case include companies like Philips, GE, NTT, Ericsson, Dolby and Toshiba
So, yes, there is FUD around AV1, but as FUD goes, it’s pretty legit considering the players and their “nationality”.
Going to add that I don’t have any knowledge of the legitimacy of the patents involved, and think the legitimacy of software patents generally is often questionable.
Previous codecs were designed and sold as patented codecs. AV1 is meant to be royalty-free by design. The claimants of the alleged patents can ask for money, but eventually somebody will refuse due to the very explicit royalty-free promise in AV1, and there should be a lawsuit. We have not seen that yet. As a practical matter, even if people are paying money silently, it would get onto the grapevine. We haven't seen any evidence of that either.
A submarine patent approach implies waiting for it to become popular enough that you can extort companies into paying, rather than just dropping the codec in lieu of something else.
Submarine patent lawsuits have the same probability for both of them. But guaranteed patent leeching applies only to HEVC. So HEVC has only downsides, no upsides in comparison.
> Submarine patent lawsuits have the same probability for both of them.
Frankly, I don't believe this, if only because HEVC has been around for much longer, and if this was going to happen, it already would have.
The patent pool situation is a little messy, as there are two possible pools, but given how ubiquitous the codec is now, most companies seem to be figuring it out.
You can't prove the opposite anyway. The whole "submarine patent" thing is just a speculative fear. You can't quantify it, so it applies to both in case someone has paranoia. But as above, in the case of HEVC it's complemented by guaranteed protection-racket fees for the likes of MPEG-LA. So HEVC is worse in the end.
If you've paid for HEVC you can say 'hey, I tried to do the right thing'. If you don't pay for AV1 it looks like you were trying to avoid doing the right thing.
That sounds like "if you paid your racketeer, you can expect you'll be OK". But tomorrow a second racketeer can show up in addition to the first, and both will demand payment. The first won't care to protect you from the second. Unlike a mob protection racket, they aren't exclusive.
The right thing here is use something that doesn't have a default protection racket attached.
Compared to HEVC, AV1 is still relatively new, hence it suffers from limited hardware support. Again, it is key to note that Intel, AMD, Nvidia, Qualcomm, and most hardware manufacturers already had support for the incumbent HEVC. That means a hardware-accelerated encoder can run HEVC 5x faster than AV1.
And…
The vast majority of devices on the market (phones, TVs, tablets, cameras, browsers, professional-grade applications, etc.) come with a built-in ability to decode HEVC. Even those coming from the founding fathers of the AV1 codec have added support for HEVC.
Most older hardware has HEVC decoding but only the last few gens have AV1. My 10980XE is like 96% occupied while watching AV1 without HW acceleration so many smaller devices can't do anything resembling a smooth playback.
> My 10980XE is like 96% occupied while watching AV1 without HW acceleration
That seems high. What resolution and frame rate is the video? And which decoder are you using? dav1d is a highly optimized software decoder so that's the one to try:
I tried it on some 8K HDR test video on YouTube on Linux/Firefox... All 18 cores and 36 threads at ~96%. NUC with an older Atom (7PJYH) gave like 0.1fps...
Which video and when did you try it? dav1d has improved a lot over time. You should try it again. You should also try any other AV1 video on YouTube. There's a lot of it there these days.
>My 10980XE is like 96% occupied while watching AV1 without HW acceleration
That shouldn't be the case. I play AV1 (via dav1d, software) w/o issue on much weaker CPUs than that (a zen1 laptop cpu and a haswell i7). The CPU is mostly idle.
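If you want to confirm which decoder your setup is actually using and how fast it is, one quick check is to force ffmpeg's dav1d decoder and decode to null (a sketch; it assumes an ffmpeg build that includes libdav1d, which most recent builds do):

    # Quick decode benchmark: force ffmpeg's libdav1d decoder and discard the output.
    # -benchmark prints wall-clock and CPU time at the end. The filename is a placeholder.
    import subprocess

    subprocess.run([
        "ffmpeg", "-benchmark",
        "-c:v", "libdav1d",        # force the dav1d software decoder for the input
        "-i", "av1_sample.mkv",
        "-f", "null", "-",
    ], check=True)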
New mpeg standards come out about once every ten years. It's amusing to me that Google are adding h.265 support six months after the approval of the next-generation h.266 standard[1].
I’m so annoyed that this has come so late to the picture. HEVC was a worthy AVC replacement years ago and they constantly closed as wontfix or just let languish any requests to support it.
Does anyone know if they quietly added support for encoding as well? I’m on the lookout for more encoders to add to hubble.gl whenever there’s browser news about codec support.
Aside from all the patent nonsense, what is even up with that naming? “Advanced" -> "High Efficiency" -> “Versatile" is nearly as bad as USB marketing… Can we not just stick to h264 -> h265 -> h266?
What's the point? No one is using it on the Web and if anyone will try to use it commercially, they'll be eaten by patent trolls like MPEG-LA. Google might afford shelling out money to those trolls to add it to the browser, but not everyone is Google. So it's basically DOA. Did anything change about that recently?
> Great feature, but where is the necessary hype!?
There is surely no need for any hype for dead end technology.
Apple isn’t causing a problem in the manner you’re implying - they’ve been using those codecs for years at this point.
All that’s happening here is chrome is adding support for an existing standardized format that is used by the general public in presumably sufficient amounts to warrant the complexity.
If Apple could stop using it (switching to AV1 by default so as not to proliferate it further) and they aren't doing so, then you can say they are part of the problem due to their size.
I.e., if you meant that Google went out of their way to enable it in Chrome because Apple users produce so much content with it (because of Apple), then Apple are causing a problem.
Compare it to someone who makes products that dump toxic waste into the environment when they are used: because of their size, others have to start dealing with it in order to interoperate, instead of being able to avoid it. You can't say they aren't causing a problem.
When it comes to media codecs, Apple were always very problematic, but I thought that after they joined AOM something could improve.
Presumably heif and/or hevc had an advantage when Apple adopted them as the default formats. Given they’re primarily on mobile devices it’s reasonable to assume there is hardware support. So switching to something else means battery life impact on everything else.
What you are saying is that because Google didn't support a standard format that has been around, and in widespread use, for years, Apple should make all of its devices worse.
Today AV1 can be supported in hardware and is supported in all recent or upcoming SOCs and GPUs. So Apple have no excuse not to do it when they are even pouring tons of money into making and refreshing their own chips.
Can you explain how a new or soon to be released device having hardware support for something adds hardware support to the last 9 years of devices apple has sold that still work with iMessage and Apple's Photos app? Or should Apple just break those because you have a personal bias against a standardized file format?
I guess Google and Android have demonstrated that supporting hardware after you've sold it isn't something that people actually care about so maybe you're right and apple should take the same approach - after all I'm sure breaking older hardware will result in sales, which is good for business.
The above was never about older devices. They can use H.264 and VP9, which have had hardware support for years already. It was about newer ones. There isn't a need to ever use the toxic H.265.
All Apple devices communicate with each other via heic, etc. older devices may not be able to encode the video version, but they can all decode it.
You are saying that Apple should revert all of their image and video encoding to h264 because you’ve decided av1, etc is better, and that justifies regressing 100s of millions if not billions of devices.
I do not understand why you find this concept so hard to grasp - millions or billions of devices exist, are in use, and produce and display heic and heif data. Those devices will continue to do so, because new hardware that supports new codecs can’t be magically installed in those devices. So if nothing else, devices that produce the standardized format you despise will continue to produce that format, hence it makes sense for chrome to support that format.
new devices can encode in av1 and similar, but only if you think it’s reasonable for Apple to effectively break older devices for no real reason. Given people already accuse Apple of designed obsolescence, despite providing greater long term support for all its devices than any other company, I can’t imagine “Apple breaks sending images to old ‘supported’ devices” going over well.
Again, this is a super basic and obvious concept that should not be remotely challenging to understand.
I get that you hate heif and hevc, and I get that new hardware supports codecs that might be better, but that is not remotely relevant because there are vast numbers of devices out there that don’t support your favorite codec of the day.
Apple are at fault for deliberately proliferating the damaging codec. Trying to whitewash it with arguments like "they can't do otherwise" isn't going to fly. They aren't some poor vendor with no choice. What they are causing is reduced choice for everyone else due to their own size, which was exactly the point.
That's why Firefox doesn't want to support it: you can only provide hardware decode, unless you pay huge amounts of license fees to the license leeches, and that makes the format work on some devices but not others.
H.264 has a free software decoder, so your browser can play it whether or not your device has hardware support for it.
I have a production use-case for it but it kind of depends on it being on by default, can't wait to see that. Do you know if I can detect whether it's supported per client request on the server side?
The word "quietly" should be banned from all tech headlines. What should Google have done? Launched a $10M marketing campaign to publicize this change?
And who knows, maybe they still plan on putting HEVC support in their release notes at some point.
My "new features in this update" section is often 6+ months delayed - as in I will have had access to a feature for 6+ months before seeing it in Chrome release notes.
Exactly. IDK how people can use Google software. You never know when they will quietly insert some spyware or adware into their so-called open source software.
Chrome does not have release notes. The blog post only mentions CVE fixes and points you to Git logs for the rest of the work. Only a tiny number of new Chrome features (out of the thousands of changes and bug fixes done in every release) get further marketing treatment, and they are obviously going to focus on those directly relevant for users over a new video codec. Just because they didn't call out the change anywhere doesn't mean they are being secretive about it, as the title wants to imply.
The top of the Chromium blog [1] currently says "Find the release notes for Chrome Beta 107 here." and they point to [2].
Which is definitely not marketing treatment -- it's deep in the weeds of changes for CSS, JavaScript, HTML, and so forth.
It's obviously not including every change made, but it does seem like it's intending to be fairly comprehensive in terms of new relevant changes for developers.