I already force-enabled EGL in firefox 93 on Arch Linux with the 470.74 nvidia driver, and the performance is stellar compared with what it used to be. Before EGL all desktop applications (including firefox) would stutter when a video was playing in firefox.
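For anyone who wants to try the same (this was on X11; the env var and pref are real, current as of Firefox ~93/94):

    # run once from a terminal to test, then make it permanent via about:config
    MOZ_X11_EGL=1 firefox
    # equivalent pref: gfx.x11-egl.force-enabled = true (restart required)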
Wasn't there friction when using Nvidia's proprietary drivers, since Nvidia's video acceleration depends on VDPAU while Intel/AMD use VA-API?
The only reason I haven't made the full switch to Linux yet is that I watch a lot of Twitch.tv, and the CPU usage hits 45-50% on average even at 720p.
Pi-hole + uBlock Origin: I never see ads on Twitch or YouTube. I only see sponsors and affiliates on YT (and I guess Twitch as well, not watching much Twitch tbh), which usually don't work in the EU the one time I am interested (LTT...).
Last I checked, ytdl used standalone has poor behavior with live streams: it will fetch the stream, but once it catches up to "current time" it considers the stream complete instead of continuing to capture it in real time.
This may or may not affect using mpv's ytdl integration (where mpv internally calls ytdl to obtain a stream).
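One possible workaround, assuming the yt-dlp fork rather than the original ytdl (its --live-from-start flag is experimental and YouTube-only):

    # capture a live stream from its start instead of stopping at "now"
    yt-dlp --live-from-start 'https://www.youtube.com/watch?v=XXXXXXXXXXX'
    # mpv can forward the same flag to its internal ytdl hook
    mpv --ytdl-raw-options=live-from-start= 'https://www.youtube.com/watch?v=XXXXXXXXXXX'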
Yes, I tried to make video accel work but didn't succeed. There's a VA-API driver for VDPAU, but it crashes when I try using it. In some Firefox issue someone mentioned that video accel needs dmabuf, which needs GBM, so the 490 driver might solve the issue.
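For reference, this is roughly how I tested the bridge (vainfo is in libva-utils; on Arch the bridge is the libva-vdpau-driver package, and LIBVA_DRIVER_NAME is libva's standard override knob):

    # point libva at the VDPAU bridge and see whether it initializes at all
    LIBVA_DRIVER_NAME=vdpau vainfo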
Now that you mention it, I did have an issue after a suspend: all text was missing inside Firefox, and restarting it resolved the issue. But I keep my laptop plugged in and open 99% of the time with presentation mode enabled, so this is extremely rare for me.
There are two steps in the task of hw-assisted video playback: 1) video decoding, i.e. going from the compressed stream to an uncompressed, offscreen video frame, and 2) compositing the video with all the other surfaces the video playback application uses -- the decoded frames are usually put into an OpenGL (or Vulkan, whatever) texture and then composed with the rest.
When you do 1) on the GPU (well, not exactly the GPU, but the video decode block on the video card, but that's not really important now), you end up with the decoded frame in VRAM. Reading it back to system RAM, just to push it back to VRAM elsewhere, is expensive (if the GPU is not UMA, you go over PCIe back and forth) and unnecessary in the end; it is more efficient to have some way to share content from 1) into 2) directly, while it is still in VRAM.
For that, both subsystems must support some way of sharing memory buffers. VA-API (for 1) and Mesa (for 2) support DMA-BUF, so that's the reason why it is used here.
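A quick way to sanity-check both halves from a shell (assuming vainfo from libva-utils and eglinfo from mesa-demos/egl-utils are installed):

    # decode side: which VA-API driver is loaded
    vainfo | grep 'Driver version'
    # composite side: Mesa's EGL should advertise the dmabuf import extension
    eglinfo | grep -o EGL_EXT_image_dma_buf_import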
Though Vulkan has video decoding built in, so eventually everything will hopefully unify to use that and we can avoid needing multiple separate subsystems depending on vendor...
It looks like [0] there currently exist extensions for H.264 and H.265. I don't know anything about Khronos politics, but I expect there's nothing that would stop e.g. Google from proposing similar extensions for VP9, AV1, etc., besides the need to actually get it implemented. (The H.264 and H.265 extensions have only AMD, Intel, and NVIDIA employees listed as Contributors.)
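If you want to see what your own driver exposes (vulkaninfo ships with vulkan-tools; note these extensions were still provisional, and most vendors gate them behind beta drivers):

    # empty output just means your driver doesn't advertise them yet
    vulkaninfo | grep -i video_decode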
Video playback is accelerated in many apps, for example VLC 3.x defaults to it.
In web browsers, things are more delicate because videos live within the complex context that is a webpage (which has its own layer of hardware acceleration!), and failures can be brutal depending on your browser, hardware, and driver quality (crashes, video corruption, etc.).
How much of a pain it is to enable on a given browser varies over time with the browser/OS, and with regressions, stack changes, and maintainer priorities. From personal experience over the past years, it has never been great on Linux:
- In Firefox, regressions have been frequent.
- In Chrome, the feature is there, but Google disables it at build time in their official builds (my personal intuition is that they do enable it for their Chromebooks, but they don't want to support the wild west of broader Linux configurations and old/broken-driver fun). However, outside of official Chrome builds, several distributions' Chromium builds (Arch, Debian) now enable it at build time, so it's possible to try it.
So, it's finicky to enable, but sometimes feasible, until it's no longer :D . To give it a try on a Linux box, the well-maintained Arch wiki is what you want (and these sections are mostly not specific to Arch):
Bang on! This just summarizes the experience of browser video hardware acceleration on Linux. Marred with bugs and incompatibility issues. Regressions are the norm for Firefox in this regard according to my experience using an Intel card.
Since initially announcing support for HW video on Wayland, Firefox has been off to the races on this [0]. Despite requiring lots of hacks to enable flags, as you linked on the Arch Wiki, one of which now suggests disabling the sandbox for content processes (a major red flag), it somehow worked.
For one reason or another [1], Firefox chose to lock the video driver to the old i965 driver instead of choosing the newer iHD driver. A weird decision that still applies to current builds.
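For reference, you can compare the two backends from a shell (i965 is libva-intel-driver, iHD is intel-media-driver; LIBVA_DRIVER_NAME is libva's standard override):

    # compare what each Intel backend reports; Firefox insisted on the former
    LIBVA_DRIVER_NAME=i965 vainfo | grep 'Driver version'
    LIBVA_DRIVER_NAME=iHD  vainfo | grep 'Driver version'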
There have been incompatibility issues with video codecs. In my experience I can play h264 videos with hardware acceleration but not vp9 with the flatpak distribution. With the vanilla package, both h264 and vp9 video are hw accelerated.
I could also have sworn that video acceleration support has gotten worse over time since its introduction.
I now get a weird artifact at the bottom half of the screen only when watching YouTube videos. [2]
Video hardware acceleration is not entirely doomed on Linux, even if Chromium does not want to make any move in this direction in this century. GNOME Web (a.k.a. Epiphany) has stellar video acceleration support by using GStreamer plugins, so it is possible to get hw video working in a browser.
Will you be affected? Here's the gist from the article: As of Firefox 94, users using Mesa driver >= 21 will get it by default. Users of the proprietary Nvidia driver will need to wait a little bit longer as the currently released drivers lack an important extension. However, most likely we’ll be able to enable EGL on the 470 series onwards. DMABUF support (and thus better WebGL performance) requires GBM support and will be limited to the 495 series onwards.
Ubuntu 21.10 currently comes with the 470 series driver.
Well in Arch we have the drivers in testing with an extended period to give users a chance to report issues. You can already get them and help us test! We'll likely move them next week.
You're doing God's work. I don't use nvidia drivers, but do enjoy my Arch Linux installations very much. They are always up to date, but very rarely break in practice (IME, YMMV). Thanks for everything you do.
How do you change those? This is something I remember trying to look up some time ago (to set PATH also for the Alt+F2 'run command' dialog in Cinnamon) but I didn't find it.
I think you misunderstood my question. Indeed with `DISPLAY=:0 glxgears` I can run something on my screen from a different virtual terminal (perhaps even an ssh session), but what I mean is that my desktop environment takes the PATH environment variable from somewhere and I don't know where. When I run 'josm' in the Alt+F2 dialog, it can't find the command, even though in my bashrc I configured my PATH to include ~/bin/.
Environment variables are defined in multiple files. Most desktop environments launch in a systemd user session, so one option is to use that [0]. Then there are .xsession, scripts specific to the DE, /etc/environment, and a bunch of other stuff I am forgetting.
A TTY like you get in your "ALT+F2" is a new login/session shell and uses .bash_profile rather than .bashrc (which is invoked when you create a new bash process in an already existing login session like when you open a terminal window). There are lots of moving pieces, but I've found the easiest way to get the same behavior in both is to have my .bash_profile source my .bashrc.
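For example, assuming a systemd user session and bash (environment.d(5) is the systemd mechanism mentioned in the sibling comment):

    # picked up by systemd user sessions at next login; the $HOME/$PATH
    # expansion is done by environment.d itself, hence the single quotes
    mkdir -p ~/.config/environment.d
    echo 'PATH=$HOME/bin:$PATH' > ~/.config/environment.d/50-path.conf

    # and in ~/.bash_profile, so login shells share your interactive config:
    [ -f ~/.bashrc ] && . ~/.bashrc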
> Generic Buffer Management (GBM) is an API that provides a mechanism for allocating buffers for graphics rendering tied to Mesa. GBM is intended to be used as a native platform for EGL on DRM or openwfd. The handle it creates can be used to initialize EGL and to create render target buffers.
Whereas
> The Direct Rendering Manager (DRM) is a subsystem of the Linux kernel responsible for interfacing with GPUs of modern video cards. DRM exposes an API that user-space programs can use to send commands and data to the GPU.
What does it mean for nvidia to jump on the GBM bandwagon? Firefox would just talk (perhaps via some intermediate API like EGL or GBM) to the Linux kernel for talking to an nvidia card, if I understand it correctly?
With the proprietary driver, the GPU is controlled by their closed-source kernel module, so Mesa's GBM code wouldn't know how to talk to it. However, as of the latest version, NVIDIA is shipping a GBM back-end library that Mesa can load which is able to talk to their module.
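You can check whether that back-end is present: the file is nvidia-drm_gbm.so, though the directory varies by distro, so this path is just an example:

    # Mesa's libgbm loads vendor backends from a gbm/ directory
    ls -l /usr/lib/gbm/nvidia-drm_gbm.so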
So Firefox, using EGL rather than GBM, would not be able to make use of the proprietary driver and would have to talk to DRM? Can two kernel modules even use the same card at the same time?
> Can two kernel modules even use the same card at the same time?
Yes, though I'm not sure how everything is divvied up. NVIDIA ships multiple kernel modules in their driver, and one of them is called ‘nvidia_drm’. So idk, maybe you have to use their DRM implementation or something.
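You can poke at the pieces from a shell:

    # the proprietary stack loads several cooperating modules
    lsmod | grep '^nvidia'    # nvidia, nvidia_modeset, nvidia_uvm, nvidia_drm
    # nvidia_drm is their DRM/KMS glue; GBM/Wayland setups want modesetting on
    cat /sys/module/nvidia_drm/parameters/modeset    # Y if nvidia-drm.modeset=1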
It's much more believable that they are developing with Chromium in mind, and testing primarily on it.
I've seen many webapps that are completely unrelated to Google, that are more performant on Chromium browsers, just because most developers are using them for development. It's also easier to justify spending time fixing bugs/improving performance if it affects the majority of users, instead of spending effort for 4% of users.
It didn't actually have bad performance. I never had performance problems on YouTube, and I even made my own apps with Polymer / Web Components v0 (before v1 became a thing) purely in Firefox (never even tested in Chromium), it all worked pretty well.
Google built their redesign on top of a prototype set of APIs that didn't end up being standardized. That's their own fault, and it should be their problem.
It's not just Firefox devs, it's everyone else apart from Chrome, because Chrome had gone and implemented said APIs that were never standardized, so they didn't need a polyfill. And I would bet money that Shadow DOM v0 would have been removed from the Chrome codebase earlier if not for the fact that YouTube was using it - they wouldn't have forced themselves to use the same polyfill as everyone else.
Shipping a new, standards-noncompliant frontend was a choice, not an immutable fact of nature. There was nothing wrong with the previous frontend apart from the fact that nobody is going to get promoted for not shipping the new thing. And since the consequences of shipping impact everyone except Google, who cares, let's ship the garbage UI anyway.
Don't make excuses for this kind of behavior. Google absolutely has the resources to do these things properly but they didn't.
I'm sorry but this comment rings very hollow to me. I have seen this "hypothetically they could have done more" type of sentiment repeated so much on HN and it's not helpful nor is it a meaningful criticism of Google. Every browser has some non-standard features. That isn't new, Google isn't the first one to do it, they certainly won't be the last. Do they have a choice to not do that? Sure, but nobody chooses to do it because it makes it harder to actually iterate on things. The "behavior" is widespread and every vendor is already making excuses for it.
There is a real problem here and it's absolutely related to the fact that Google is only incentivized to develop/test on their own browser, but that's really orthogonal to other browsers being slow or not being able to improve performance of a polyfill.
Oh come on. Google is never going to have an incentive to do better if we just excuse them for their bad behaviour, especially in cases like this where claiming any equivalence is simply ludicrous. One must be wilfully ignorant to act as though their ambitions are "orthogonal" to other browsers purely due to their own faults.
Google did not have to publicly ship their pre-standard experimental "v0" web components implementation so early in Chrome. They chose to do so regardless of what other browser vendors expressed.
Likewise they did not have to make YouTube use them so soon, thus forcing other browsers to rely on a polyfill that could not feasibly be made remotely performant compared to just implementing web components more quickly, wink wink.
They chose to do these things the way they did. They wanted to ship it on their timeline, and to hell with what other browsers wanted to spend time on first instead. They wanted to look like they were heroes for pushing the web forward, while in reality they were just holding other vendors and APIs back to get the one they valued the most done first, no matter how much of a mess they caused in the process (the transition from v0 to v1 was hardly quick or painless). And that's just web components.
When is the last time you saw Firefox ship such a major web API in such a non-final and un-vetted state, and then use one of the largest web properties on Earth to get others to prioritize it as they wished? Or even Apple, for that matter?
It's flat out ridiculous to try equating the vendors in this manner. They don't have the market dominance or even the same force of apologists burying the lede on their bad behaviour.
I'm sure the other browsers also have their own Project Fugus underway too, where they're just shipping a slurry of new APIs regardless of whether anyone else will ever implement them? Or is that the others' fault somehow too, because they should also be trying to fragment the web as much as possible as quickly as possible?
If we collectively just want Chromium to be the only engine because we value rapidly iterating on new APIs more than anything, then let's at least be honest about it.
Look, I hear what you're saying but it all just sounds like hypotheticals to me. If you want to take that approach, hypothetically the other vendors could have chosen to standardize the feature, it could have become standard, and all the other browsers could have implemented it and it wouldn't be a problem. But that didn't happen. And I have seen plenty of other features that were gated behind Moz or Webkit prefixes.
I'm not trying to be a downer here. Realistically, there will always be a browser out in front that is going to iterate on features faster than the others. That's normal as long as you have more than one browser. If we want to criticize Google for practicing anti-competitive behavior then let's do that, but it just really doesn't make sense to me to put "they shipped a feature that someone else didn't" in that category.
It's not that you're being a downer, you're just not presenting things in a fair or accurate manner.
Google didn't merely "ship a feature someone else didn't". They keep shipping new features as they wish, whether others even agree. They have in fact accelerated that attitude with Project Fugu. Some of them are quite complex or consequential APIs.
They do not deserve a free pass for it because Mozilla once shipped one or two relatively minor features before Chrome did, or Apple added some weird CSS visual property for their latest iPhones without consensus. We're talking orders of magnitude of difference here.
The problem isn't who is first to ship. It's the casual disregard for even reaching consensus on the basics before shipping something, the sheer rate of output, the interop issues left in the wake, and the anti-competitive tactics being applied. Those are not "hypotheticals" in the slightest.
What is hypothetical is acting like anyone else could just magically compete on those terms. Microsoft couldn't keep up with them. Opera couldn't. No new engine has even come close to breaking into a general market yet, though a couple are feverishly trying.
Is that really ok with us? If so, then let's just be honest. Let's just say "new APIs are more important to us than engine diversity, and we don't mind Google being as evil as possible to kill the other engines off." As a webcompat worker, I'd love to see that honesty.
Do you think it's abusive of Linux to offer APIs beyond what is standardized in POSIX, breaking compatibility with other Unix-like OSes such as FreeBSD? Or is it abusive that clang constantly adds features that gcc lacks, making programs that use them no longer compile with gcc?
average disingenuous HNer making comparisons between compilers and web browsers. Everyone involved knows Shadow DOM V0 was a rush job, as was the YouTube redesign that used it (it had major perf issues even in chrome when it came out). The standardized ShadowDOM v1 is better in every way and works in all browsers. It's pretty clear that Google wanted V0 to spread as far as possible so they could force it to become a standard, as removing it would "break the web". Shadow DOM, regardless of version, isn't critical for a product like YouTube. The "web" is only the "web" if parties involved play fair, even just a bit, otherwise it's back to IE6.
Does Linux also happen to control a major piece of hardware pretty much everyone uses, and used it to force BSD to adopt some new driver system they wanted to prioritize, which ended up being completely different by the time the dust had settled, and held back other important advancements in the meantime?
That is, it all depends on the full context. Competition is fine, but not anti-competitive behaviour. I maintain that we have had too much of the latter from Google, and that it is only increasing as folks intentionally look the other way and try to boil arguments away to mere deflections and other apologia.
It would be interesting to profile these sites and make PRs for Firefox to fix any possible issues, then see how long it takes for Google to find new ways to make them slow again.
This is literally one of the big things changed by what's described in the article (for those still using X11; it's already been solved for quite some time on Wayland).
As Firefox lags a bit with new feature implementation, Chromium-based browsers usually get a faster native implementation while Firefox is left using slower polyfills. And web app developers sure love using the most recent web browser features and APIs.
Well, most APIs that are not implemented in Firefox have a high probability of not being implemented in Safari either. So I don't think web app developers should use recent browser features and APIs that are available in Chromium browsers only; 20-30% represents a large number of users, so I think devs should show more pragmatism.
I think firefox has cache management problems. I got to the point where even opening an nginx test page off localhost took 1-2 seconds, but clearing the cache seems to have fixed it back to tolerable levels again.
With Chromium 95 and the Nvidia 470.74 driver, hardware acceleration is enabled for canvas for me, and can be enabled for rasterization. Video decode can also be enabled, but it goes through VA-API, which Nvidia doesn't support. Vulkan can be enabled. GpuMemoryBuffers is software only.
Actually, that is one of the main reasons I get better performance on Chromium: you can use HW acceleration. I just use some of the settings documented in the Arch wiki, for example:
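These are roughly the flags from the wiki at the time (Arch's Chromium launcher reads ~/.config/chromium-flags.conf; exact flag names have churned between Chromium releases, so treat this as a starting point):

    # ~/.config/chromium-flags.conf
    --use-gl=desktop
    --enable-features=VaapiVideoDecoder
    --enable-gpu-rasterization
    --enable-zero-copy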
Firefox isn't terrible, but there is a noticeable startup lag and I get random freezes a lot more often.
"Reduced power consumption" I can't wait to see that actually happen because ... man Firefox and Chrome have both become quite the power and memory hogs !
Firefox has become an excellent operating system, but somewhere along the way it became a pretty marginal browser. Still, getting hardware acceleration working on Linuxes was serious work, and it's not like this guy would've been fixing the UI issues created in recent versions if he wasn't doing this.
I have used it interactively over VNC; I don't think native headless mode helps for that case. Getting GL acceleration for a headless session is not very straightforward, unfortunately.
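The workaround I've seen mentioned most often is VirtualGL, which intercepts GLX calls, renders on the real GPU, and reads the frames back into the VNC/Xvfb X server (I haven't verified it against a current stack):

    vglrun glxgears    # should report real-GPU framerates inside the VNC session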
GLX and EGL have two roles: one is the WSI part, the other is context creation and device enumeration, which Vulkan had the good sense to add to the core standard.
That's freaking awesome. I had no idea, is this a recent addition? I read SDL's source code some years ago and I remember seeing an X11 implementation only.
It's a spec for how GPU APIs can share data without necessarily copying and converting through CPU memory, mainly because the OpenGL specification explicitly does not define anything to do with windowing. But EGL is not just an OpenGL-to-X11 bridge: CUDA, GStreamer, and several other APIs use it to communicate.
EGL is primarily about windowing system integration, it's what lets you initialize an OpenGL context in the first place.
It doesn't really accomplish interop between different APIs on its own, though of course various interop methods are based on it — e.g. if you want to import a dmabuf, you'd use EGL_EXT_image_dma_buf_import. But, say, on the Vulkan side you'd use VK_EXT_external_memory_dma_buf — there is no EGL with Vulkan, Vulkan has its own WSI subsystem.
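Both halves of that interop show up as extension strings you can grep for, if you want to check your own stack (eglinfo is in mesa-demos/egl-utils, vulkaninfo in vulkan-tools):

    eglinfo | grep -o EGL_EXT_image_dma_buf_import
    vulkaninfo | grep -o VK_EXT_external_memory_dma_buf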
Reading that helps a bit, but you still won't understand the complicated graphics stack without prior knowledge. Is there a good introduction that systematically goes through all (or most) of the components for the average Python, C, database etc. programmer who knows absolutely nothing about computer graphics?
Yeah, I guess it won't make sense without more context, i.e. understanding the rest of the graphics systems on a computer.
It seems like it's a standardized interface for a windowing library like wayland/x11 to talk to a graphics driver, specifically for setting up regions of the screen which will be rendered into by something (opengl etc.)? So there were non-standard ways before?
Well, before there was only X11 in the Unix/Linux world. And the mechanism was GLX. It had different implementations over nearly 30 years: https://en.m.wikipedia.org/wiki/GLX
I read this and my first thought is "oh shit, is Firefox about to stop working?"
Some of us like our software to be stable and reliable, and not switch to the newest bullshit just because they can. I'm still bitter about being forced to find a workaround for FF requiring PulseAudio. Am I now gonna need to find a workaround for this? I run FF 94 right now, and will upgrade with trepidation...
I did not encounter any bugs so far.