Firefox 94 to start using EGL on Linux (mozillagfx.wordpress.com)
427 points by TangerineDream on Oct 30, 2021 | 112 comments



I already force-enabled EGL in firefox 93 on Arch Linux with the 470.74 nvidia driver, and the performance is stellar compared with what it used to be. Before EGL all desktop applications (including firefox) would stutter when a video was playing in firefox.

I have not encountered any bugs so far.


Wasn't there friction when using Nvidia's proprietary drivers, since Nvidia's video acceleration depends on VDPAU while Intel/AMD use VA-API?

The only reason I haven't fully switched to Linux yet is that I watch a lot of Twitch.tv, and the CPU usage hits 45-50% on average even at 720p.


Have you tried the Streamlink GUI client? You can play streams in mpv/VLC/whatever that way.


I thought they were actively trying to block that, as it allowed people to watch without ads.


Pi-hole + uBlock Origin: I never see ads on Twitch or YouTube. I only see sponsor and affiliate segments on YT (and I guess on Twitch as well; I don't watch much Twitch tbh), which usually don't work in the EU the one time I'm actually interested (LTT...).


youtube-dl (and its forks) works with Twitch (via mpv). I've never understood why people use streamlink instead.


Can youtube-dl work with live streams?


Last I checked, ytdl, when used standalone, has poor behavior with live streams: it will fetch the live stream, but if ytdl catches up to "current time", it considers the live stream complete instead of continuing to capture it in real time.

This may or may not affect using mpv's ytdl integration (where mpv internally calls ytdl to obtain a stream).


Yes, I tried to make video acceleration work but didn't succeed. There's a VA-API driver for VDPAU, but it crashes when I try using it. In some Firefox issue someone mentioned that video acceleration needs DMA-BUF, which needs GBM, so the 490 driver might solve the issue.


Reading the linked issue, it seems like you might run into problems after suspend/resume.


Now that you mention it, I did have an issue after a suspend: all text was missing inside Firefox, and restarting it resolved the problem. But I keep my laptop plugged in and open 99% of the time with presentation mode enabled, so this is extremely rare for me.


It seems to pretty consistently break after suspend, with the text looking like an MS-DOS machine with bad VRAM.

I enabled EGL in response to seeing your comment this morning, then found myself having to turn it back off after the very first suspend/resume.


I thought video playback doesn't use OpenGL.


There are two steps in the task of hw-assisted video playback: 1) video decoding, i.e. going from the compressed stream to an uncompressed, offscreen video frame, and 2) compositing the video with all the other surfaces the video playback application uses -- the decoded frames are usually put into an OpenGL (or Vulkan, whatever) texture and then composited with the rest.

When you do 1) on the GPU (well, not exactly the GPU but the video decode block on the video card, but that's not really important now), you end up with the decoded frame in VRAM. Reading it back to system RAM just to push it back to VRAM elsewhere is expensive (if the graphics memory is not UMA, you go over PCIe back and forth) and ultimately unnecessary; it is more efficient to have some way to share the content from 1) directly into 2) while it is still in VRAM.

For that, both subsystems must support some way of sharing memory buffers. VA-API (for 1) and Mesa (for 2) support DMA-BUF, so that's why it is used here.
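
For the curious, that sharing path looks roughly like this in code. This is not Firefox's code, just a minimal sketch of the EGL_EXT_image_dma_buf_import route, assuming a single-plane frame whose fd, size, stride and fourcc come from the decoder; error handling and extension checks are omitted:

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    /* Wrap a decoded frame's dmabuf in an EGLImage and attach it to a texture. */
    static GLuint import_frame(EGLDisplay dpy, int fd, int width, int height,
                               int stride, int fourcc)
    {
        const EGLAttrib attrs[] = {
            EGL_WIDTH, width,
            EGL_HEIGHT, height,
            EGL_LINUX_DRM_FOURCC_EXT, fourcc,
            EGL_DMA_BUF_PLANE0_FD_EXT, fd,
            EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
            EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
            EGL_NONE,
        };
        /* The frame never leaves VRAM; EGL just gives GL a handle to it. */
        EGLImage img = eglCreateImage(dpy, EGL_NO_CONTEXT,
                                      EGL_LINUX_DMA_BUF_EXT, NULL, attrs);

        PFNGLEGLIMAGETARGETTEXTURE2DOESPROC p_glEGLImageTargetTexture2DOES =
            (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
                eglGetProcAddress("glEGLImageTargetTexture2DOES");

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
        /* Use the imported image as the texture's storage; the compositor can
           now sample the decoded frame like any other surface. */
        p_glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, (GLeglImageOES)img);
        return tex;
    }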


Though Vulkan has video decoding built in, so eventually everything will hopefully unify to use that and we can avoid needing multiple separate subsystems depending on vendor...


What codecs are part of the spec? Or is it just the interface, and nothing is actually guaranteed to work?


It looks like [0] there currently exist extensions for H.264 and H.265. I don't know anything about Khronos politics, but I expect there's nothing that would stop e.g. Google from proposing similar extensions for VP9, AV1, etc., besides the need to actually get it implemented. (The H.264 and H.265 extensions have only AMD, Intel, and NVIDIA employees listed as Contributors.)

[0]: https://www.khronos.org/registry/vulkan/specs/1.2-extensions...


VP9 decode and AV1 decode/encode are planned: https://github.com/KhronosGroup/Vulkan-Docs/issues/1497#issu...


Video playback is accelerated in many apps, for example VLC 3.x defaults to it.

In web browsers, things are more delicate because videos live within the complex context that is a webpage (with its own layer of hardware acceleration!), and failures can be brutal depending on your browser, hardware, and driver quality (crashes, video corruption, etc.).

The pain-in-the-ass-ness of enabling it on a given browser varies over time and across browsers/OSes, along with regressions, stack changes, and maintainer priorities. From personal experience over the past years, it has never been great on Linux:

- In Firefox, regressions have been frequent.

- In Chrome, the feature is there, but Google disables it at build time in their official builds (my personal intuition is that they do enable it for their Chromebooks, but they don't want to support the wild west of broader Linux configurations and old/broken-driver fun). However, outside of official Chrome builds, several distributions' Chromium builds (Arch, Debian) now enable it at build time, so it's possible to try it.

So, it's finicky to enable, but sometimes feasible, until it no longer is :D . To give it a try on a Linux box, the well-maintained Arch wiki is what you want (and these sections are mostly not specific to Arch):

- Firefox: https://wiki.archlinux.org/title/Firefox#Hardware_video_acce...

- Chrome: https://wiki.archlinux.org/title/Chromium#Hardware_video_acc...


Bang on! This sums up the experience of browser video hardware acceleration on Linux: marred by bugs and incompatibility issues. Regressions are the norm for Firefox in this regard, in my experience using an Intel card.

Since initially announcing support for HW video on Wayland, Firefox has been off to the races on this [0]. Despite requiring lots of hacks to enable flags, as you linked from the Arch Wiki (one of which now suggests disabling the sandbox for content processes, a major red flag), it somehow worked.

For one reason or another [1], Firefox chose to lock the video driver to the old i965 driver instead of the newer iHD driver, a weird decision that still applies to current builds.

There have been incompatibility issues with video codecs. In my experience, with the Flatpak distribution I can play H.264 videos with hardware acceleration but not VP9; with the vanilla package, both H.264 and VP9 video are hw accelerated.

I could also have sworn that video acceleration support has gotten worse over time since its introduction.

I now get a weird artifact at the bottom half of the screen only when watching YouTube videos. [2]

Video hardware acceleration is not entirely doomed on Linux, even if Chromium does not want to make any move in this direction this century. GNOME Web (a.k.a. Epiphany) has stellar video acceleration support via GStreamer plugins, so it is possible to get hw video working in a browser.

[0] https://mastransky.wordpress.com/2020/06/03/firefox-on-fedor...

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1619585

[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1724663


But the video is still usually rendered to an OpenGL surface, so there is interaction between them.


Will you be affected? Here's the gist from the article: As of Firefox 94, users using Mesa driver >= 21 will get it by default. Users of the proprietary Nvidia driver will need to wait a little bit longer as the currently released drivers lack an important extension. However, most likely we’ll be able to enable EGL on the 470 series onwards. DMABUF support (and thus better WebGL performance) requires GBM support and will be limited to the 495 series onwards.

Ubuntu 21.10 currently comes with the 470 series driver.


Even Arch doesn’t ship the 495 yet. It’s coming though.


Well in Arch we have the drivers in testing with an extended period to give users a chance to report issues. You can already get them and help us test! We'll likely move them next week.


You're doing God's work. I don't use nvidia drivers, but do enjoy my Arch Linux installations very much. They are always up to date, but very rarely break in practice (IME, YMMV). Thanks for everything you do.



For those needing an EGL LEG-up:

https://en.wikipedia.org/wiki/EGL_(API)


Note that Wayland users are already using EGL in Firefox, so I don't think this makes any difference to them?


Does Firefox ship one binary that can do both X and Wayland? How do they probe for it?


Yes, but currently they're defaulting to XWayland when running under Wayland.

You can set `MOZ_ENABLE_WAYLAND=1` in your environment to launch Firefox under Wayland natively.

You can check which window protocol your Firefox is using at `about:support` (Window Protocol).


There are env variables set by the session


How can one tell if Firefox is already using EGL?


If you go to about:support and scroll to the end of the graphics section you should see two entries:

X11_EGL

DMABUF

and they should say something like "available by default", "enabled", or similar wording. Otherwise they'll either say unsupported/disabled or be missing entirely.


How do you change those? This is something I remember trying to look up some time ago (to set PATH also for the Alt+F2 'run command' dialog in Cinnamon) but I didn't find it.


Wayland uses WAYLAND_DISPLAY, X uses DISPLAY.
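
Roughly how an application might use them at startup — this is not Firefox's actual logic, just a sketch of the usual probe order (MOZ_ENABLE_WAYLAND being Firefox's own opt-in toggle):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *wayland = getenv("WAYLAND_DISPLAY"); /* set by Wayland sessions */
        const char *x11     = getenv("DISPLAY");         /* set for X11 and XWayland */

        if (wayland && getenv("MOZ_ENABLE_WAYLAND"))
            puts("talk to the Wayland compositor natively");
        else if (x11)
            puts("talk to the X server (or XWayland)");
        else
            puts("no display server in this session");
        return 0;
    }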


I think you misunderstood my question. Indeed with `DISPLAY=:0 glxgears` I can run something on my screen from a different virtual terminal (perhaps even an ssh session), but what I mean is that my desktop environment takes the PATH environment variable from somewhere and I don't know where. When I run 'josm' in the Alt+F2 dialog, it can't find the command, even though in my bashrc I configured my PATH to include ~/bin/.


Environment variables are defined in multiple files. Most desktop environments launch in a systemd user session, so one option is to use that [0]. Then there are .xsession, scripts specific to the DE, /etc/environment, and a bunch of other stuff I am forgetting.

[0] https://wiki.archlinux.org/title/systemd/User#Environment_va...


A TTY like you get in your "ALT+F2" is a new login/session shell and uses .bash_profile rather than .bashrc (which is invoked when you create a new bash process in an already existing login session like when you open a terminal window). There are lots of moving pieces, but I've found the easiest way to get the same behavior in both is to have my .bash_profile source my .bashrc.


> For OpenGL on X11 most programs use GLX, while its successor, EGL, gets used on Wayland, Android and in the embedded space

The headline is kind of wrong, then. This is about X11. Firefox on modern (Wayland) desktops is already using EGL.


That's too bad. My WebGL performance is pretty bad.


It's still possible for EGL to not be enabled on Wayland for various reasons.


Only in the sense of falling back to full software rendering; on Wayland there's no other way to get GL (the way there was GLX for X11).


Uh, through this article I noticed that I missed a juicy bit of news:

Nvidia seems to be jumping on the GBM bandwagon, at least for some GPUs.


For anyone else as uninitiated as me:

> Generic Buffer Management (GBM) is an API that provides a mechanism for allocating buffers for graphics rendering tied to Mesa. GBM is intended to be used as a native platform for EGL on DRM or openwfd. The handle it creates can be used to initialize EGL and to create render target buffers.

Where

> The Direct Rendering Manager (DRM) is a subsystem of the Linux kernel responsible for interfacing with GPUs of modern video cards. DRM exposes an API that user-space programs can use to send commands and data to the GPU

https://en.wikipedia.org/wiki/Mesa_(computer_graphics)#Gener... and https://en.wikipedia.org/wiki/Direct_Rendering_Manager
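
To make the quoted definitions a bit more concrete, here's a minimal sketch (device path and error handling simplified, and not what Firefox itself does) of how a program gets from a DRM device to an EGL display through GBM:

    #include <fcntl.h>
    #include <unistd.h>
    #include <gbm.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    int main(void)
    {
        /* The DRM render node exposed by the kernel driver. */
        int fd = open("/dev/dri/renderD128", O_RDWR);

        /* GBM allocates buffers on that device; this is the back-end that
           NVIDIA now ships for its proprietary driver. */
        struct gbm_device *gbm = gbm_create_device(fd);

        /* Hand the GBM device to EGL as the "native display". */
        EGLDisplay dpy = eglGetPlatformDisplay(EGL_PLATFORM_GBM_KHR, gbm, NULL);
        eglInitialize(dpy, NULL, NULL);

        /* ... create configs/contexts/gbm_surfaces, render, scan out via KMS ... */

        eglTerminate(dpy);
        gbm_device_destroy(gbm);
        close(fd);
        return 0;
    }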

What does it mean for Nvidia to jump on the GBM bandwagon? Firefox would just talk (perhaps via some intermediate API like EGL or GBM) to the Linux kernel in order to talk to an Nvidia card, if I understand it correctly?


With the proprietary driver, the GPU is controlled by their closed-source kernel module, so Mesa's GBM code wouldn't know how to talk to it. However, as of the latest version, NVIDIA is shipping a GBM back-end library that Mesa can load which is able to talk to their module.


I wonder if nvidia will ever integrate into the kernel properly. Using their drivers is such a poor user experience.


So Firefox, using EGL rather than GBM, would not be able to make use of the proprietary driver and would have to talk to DRM? Can two kernel modules even use the same card at the same time?


> Can two kernel modules even use the same card at the same time?

Yes, though I'm not sure how everything is divvied up. NVIDIA ships multiple kernel modules in their driver, and one of them is called 'nvidia_drm'. So idk, maybe you have to use their DRM implementation or something.


But what does this mean for applications that use Vulkan?


Probably nothing; similarly, for most applications using EGL it doesn't mean much.

But it does matter for developers of Wayland compositors. A lot.


It would mean that desktop compositors no longer need two implementations of a lot of parts.

This means, e.g., that development cost for GNOME/KDE would be reduced, and sway (wlroots) would work on devices with Nvidia cards (at least some of them).

I.e. in general it makes things easier for everyone, except maybe Nvidia. But even for them it could (long term) make maintenance simpler.


Yes, the latest stable version 495 has GBM support.


About time. I don't know why, but for me, on Linux, Chromium has had much better performance than Firefox. Maybe this will help.


The only websites where I've noticed Chromium having an advantage are those run by Google or embedding Google content (e.g. Google Maps).


At this point I have enough confidence in Google's developers to consider bad performance in Firefox not an accident.


It's much more believable that they are developing with Chromium in mind, and testing primarily on it.

I've seen many webapps that are completely unrelated to Google that are more performant on Chromium browsers, just because most developers use them for development. It's also easier to justify spending time fixing bugs/improving performance if it affects the majority of users, instead of spending effort on 4% of users.


Google has, in the past, made YouTube reliant on non-standard browser APIs and shipped a polyfill for other browsers that had terrible performance.

It's not just that they passively focus on Chromium, they will deliberately trash performance on every other browser without a second thought.


It didn't actually have bad performance. I never had performance problems on YouTube, and I even made my own apps with Polymer / Web Components v0 (before v1 became a thing) purely in Firefox (never even tested in Chromium); it all worked pretty well.


Google didn't have much of a choice. It was either a polyfill or not supporting Firefox at all.

Or having two different frontends, or stopping feature development to make Firefox developers happy. Which are terrible choices.

And going beyond what standards offer is pretty much how web evolution has always happened.


What? Of course they had a choice.

Google built their redesign on top of a prototype set of APIs that didn't end up being standardized. That's their own fault, and it should be their problem.

It's not just Firefox devs, it's everyone else apart from Chrome, because Chrome had gone and implemented said APIs that were never standardized, so they didn't need a polyfill. And I would bet money that ShadowDOM v0 would have been removed from the Chrome codebase earlier if not for the fact that Youtube was using it - they wouldn't have forced themselves to use the same polyfill as everyone else.

Shipping a new, standards-noncompliant frontend was a choice, not an immutable fact of nature. There was nothing wrong with the previous frontend apart from the fact that nobody is going to get promoted for not shipping the new thing. And since the consequences of shipping impact everyone except Google, who cares, let's ship the garbage UI anyway.

Don't make excuses for this kind of behavior. Google absolutely has the resources to do these things properly but they didn't.


I'm sorry but this comment rings very hollow to me. I have seen this "hypothetically they could have done more" type of sentiment repeated so much on HN and it's not helpful nor is it a meaningful criticism of Google. Every browser has some non-standard features. That isn't new, Google isn't the first one to do it, they certainly won't be the last. Do they have a choice to not do that? Sure, but nobody chooses to do it because it makes it harder to actually iterate on things. The "behavior" is widespread and every vendor is already making excuses for it.

There is a real problem here and it's absolutely related to the fact that Google is only incentivized to develop/test on their own browser, but that's really orthogonal to other browsers being slow or not being able to improve performance of a polyfill.


Oh come on. Google is never going to have an incentive to do better if we just excuse them for their bad behaviour, especially in cases like this where claiming any equivalence is simply ludicrous. One must be wilfully ignorant to act as though their ambitions are "orthogonal" to other browsers purely due to their own faults.

Google did not have to publicly ship their pre-standard experimental "v0" web components implementation so early in Chrome. They chose to do so regardless of what other browser vendors expressed.

Likewise they did not have to make YouTube use them so soon, thus forcing other browsers to rely on a polyfill that could not feasibly be made remotely performant compared to just implementing web components more quickly, wink wink.

They chose to do these things the way they did. They wanted to ship it on their timeline, and to hell with what other browsers wanted to spend time on first instead. They wanted to look like they were heroes for pushing the web forward, while in reality they were just holding other vendors and APIs back to get the one they valued the most done first, no matter how much of a mess they caused in the process (the transition from v0 to v1 was hardly quick or painless). And that's just web components.

When is the last time you saw Firefox ship such a major web API in such a non-final and un-vetted state, and then use one of the largest web properties on Earth to get others to prioritize it as they wished? Or even Apple, for that matter?

It's flat out ridiculous to try equating the vendors in this manner. They don't have the market dominance or even the same force of apologists burying the lede on their bad behaviour.

I'm sure the other browsers also have their own Project Fugus underway too, where they're just shipping a slurry of new APIs regardless of whether anyone else will ever implement them? Or is that the others' fault somehow too, because they should also be trying to fragment the web as much as possible as quickly as possible?

If we collectively just want Chromium to be the only engine because we value rapidly iterating on new APIs more than anything, then let's at least be honest about it.


Look, I hear what you're saying, but it all just sounds like hypotheticals to me. If you want to take that approach, hypothetically the other vendors could have chosen to standardize the feature, it could have become a standard, and all the other browsers could have implemented it, and it wouldn't be a problem. But that didn't happen. And I have seen plenty of other features that were gated behind Moz or WebKit prefixes.

I'm not trying to be a downer here. Realistically, there will always be a browser out in front that is going to iterate on features faster than the others. That's normal as long as you have more than one browser. If we want to criticize Google for practicing anti-competitive behavior then let's do that, but it just really doesn't make sense to me to put "they shipped a feature that someone else didn't" in that category.


It's not that you're being a downer, you're just not presenting things in a fair or accurate manner.

Google didn't merely "ship a feature someone else didn't". They keep shipping new features as they wish, whether others even agree. They have in fact accelerated that attitude with Project Fugu. Some of them are quite complex or consequential APIs.

They do not deserve a free pass for it because Mozilla once shipped one or two relatively minor features before Chrome did, or because Apple added some weird CSS visual property for their latest iPhones without consensus. We're talking orders of magnitude of difference here.

The problem isn't who is first to ship. It's the casual disregard for even reaching consensus on the basics before shipping something, the sheer rate of output, the interop issues left in the wake, and the anti-competitive tactics being applied. Those are not "hypotheticals" in the slightest.

What is hypothetical is acting like anyone else could just magically compete on those terms. Microsoft couldn't keep up with them. Opera couldn't. No new engine has even come close to breaking into a general market yet, though a couple are feverishly trying.

Is that really ok with us? If so, then let's just be honest. Let's just say "new APIs are more important to us than engine diversity, and we don't mind Google being as evil as possible to kill the other engines off." As a webcompat worker, I'd love to see that honesty.


Do you think it's abusive of Linux to offer APIs beyond what is standardized in POSIX, breaking compatibility with other *nix-like OSes such as FreeBSD? Or is it abusive that clang constantly adds features that gcc lacks, making programs that use clang no longer compile with gcc?


average disingenuous HNer making comparisons between compilers and web browsers. Everyone involved knows Shadow DOM V0 was a rush job, as was the YouTube redesign that used it (it had major perf issues even in chrome when it came out). The standardized ShadowDOM v1 is better in every way and works in all browsers. It's pretty clear that Google wanted V0 to spread as far as possible so they could force it to become a standard, as removing it would "break the web". Shadow DOM, regardless of version, isn't critical for a product like YouTube. The "web" is only the "web" if parties involved play fair, even just a bit, otherwise it's back to IE6.


Does Linux also happen to control a major piece of hardware pretty much everyone uses, and used it to force BSD to adopt some new driver system they wanted to prioritize, which ended up being completely different by the time the dust had settled, and held back other important advancements in the meantime?

That is, it all depends on the full context. Competition is fine, but not anti-competitive behaviour. I maintain that we have had too much of the latter from Google, and that it is only increasing as folks intentionally look the other way and try to boil arguments away to mere deflections and other apologia.


None of that is an accident, at a company with Google's experience and expertise with web browsers and apps.



It would be interesting to profile these sites and make PRs for Firefox to fix any possible issues, then see how long it takes for Google to find new ways to make them slow again.


Unfortunately there are also non-Google websites that run slower on Firefox.

Example: Our startup's product (https://benaco.com).

WebGL is slower on Firefox (e.g. this bug on Linux: https://bugzilla.mozilla.org/show_bug.cgi?id=1010527), so I use Chromium to work on it, despite having used Firefox for everything else for 19 years.


> e.g. this bug on Linux

This is literally one of the big things changed by what's described in the article (for those still using X11; it's already been solved for quite some time on Wayland)


Yes, this is one of the affected bugs, but do read the other bugs linked from it, e.g. https://bugzilla.mozilla.org/show_bug.cgi?id=1010527#c51

> Even wirh webrender, EGL and dmabuf all enabled, performance is still not at par with chrome. Bug 1684224

which shows 5x higher FPS in Chromium than in Firefox.

There will be more to do until general parity, even past FF 94.


Works fine on Windows. Linux only makes up 3% of Firefox users so it's no surprise they haven't prioritized it.


It seems that way to me too. At least it motivates me to use anything that is not owned by Google :]


As Firefox lags a bit with new feature implementation, Chromium-based browsers usually use a native implementation that is faster, while Firefox is left using slower polyfills. And web app developers sure love using the most recent browser features and APIs.


Well, most APIs that are not implemented in Firefox have a high probability of not being implemented in Safari either. So I don't think web app developers should use recent browser features and APIs that are available in Chromium browsers only. 20-30% represents a large number of users, so I think devs should show more pragmatism.


I think firefox has cache management problems. I got to the point where even opening an nginx test page off localhost took 1-2 seconds, but clearing the cache seems to have fixed it back to tolerable levels again.


If you can afford it, just set the on-disk cache to 0 and increase the in-memory cache. Firefox has bad defaults.
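
If I remember the pref names right (worth double-checking in about:config), that's roughly:

    browser.cache.disk.enable      = false    (stop writing cache entries to disk)
    browser.cache.memory.enable    = true     (keep caching, but in RAM)
    browser.cache.memory.capacity  = 512000   (in KiB; -1 lets Firefox pick a size)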


I think Chrome/Chromium don't use hw acceleration on Linux, right?


With Chromium 95 and the Nvidia 470.74 driver, hardware acceleration is enabled for canvas for me, and can be enabled for rasterization. For video decode it can be enabled, but it uses VA-API, which is not supported by Nvidia. Vulkan can be enabled. GpuMemoryBuffers are software only.


Actually, that is one of the main reasons I get better performance on Chromium: you can use HW acceleration. I just use some of the settings documented in the Arch wiki.

Firefox isn't terrible, but there is a noticeable startup lag and I get random freezes a lot more often.


Have you tried vulkan? Curious if it offers any performance benefits.


It does, if it deems your drivers good enough.


This won't help. Firefox's latency problems seem to be deeper.


For a while. Wirth's Law still applies. Web developers will find further ways to mess up rendering some basic shapes and letters.


"Reduced power consumption" I can't wait to see that actually happen because ... man Firefox and Chrome have both become quite the power and memory hogs !


Firefox has become an excellent operating system, but somewhere along the way it became a pretty marginal browser. Still, getting hardware acceleration working on Linux was serious work, and it's not like this guy would've been fixing the UI issues created in recent versions if he weren't doing this.


I may be weird but I really like the new UI


I am curious to see benchmarks of the promised improved performance and reduced battery usage. Are there any?


Does anyone know if this will make Firefox on headless setups (like Xvfb + VNC) work better?


Do you know that Firefox has a headless mode, so you don't need to use Xvfb anymore?


I have used it interactively over VNC; I don't think native headless mode helps for that case. Getting GL acceleration for a headless session is not very straightforward, unfortunately.


I'm not really sure what EGL is, but will Vulkan also run on top of EGL?


Vulkan has WSI, which is roughly equivalent to GLX or EGL in the OpenGL world.

https://github.com/KhronosGroup/Vulkan-Guide/blob/master/cha...


GLX and EGL have two roles, one is the WSI part, the other is context creation and device enumeration which Vulkan had the good sense to add to the core standard.
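
As a rough illustration of the WSI side (a sketch assuming a VkInstance already created with the VK_KHR_surface and VK_KHR_xcb_surface extensions enabled; the Wayland path is analogous):

    #define VK_USE_PLATFORM_XCB_KHR
    #include <vulkan/vulkan.h>

    /* Create a presentable surface for an existing X11 (xcb) window. */
    VkSurfaceKHR make_surface(VkInstance instance,
                              xcb_connection_t *conn, xcb_window_t win)
    {
        VkXcbSurfaceCreateInfoKHR info = {
            .sType      = VK_STRUCTURE_TYPE_XCB_SURFACE_CREATE_INFO_KHR,
            .connection = conn,
            .window     = win,
        };
        VkSurfaceKHR surface = VK_NULL_HANDLE;
        vkCreateXcbSurfaceKHR(instance, &info, NULL, &surface);
        /* From here, swapchain creation (vkCreateSwapchainKHR) and the rest of
           the WSI machinery take over the role GLX/EGL play for OpenGL; on
           Wayland you'd call vkCreateWaylandSurfaceKHR instead. */
        return surface;
    }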


"For most cases, you do all the actual rendering work in GL or GLES contexts, which you create with WGL, CGL, GLX, or EGL, depending on your platform.

GL and GLES do the actual rendering. EGL and friends are basically just the glue layer to get things to the screen. (Well, to the window manager)"

https://www.reddit.com/r/opengl/comments/11q0oz/what_is_the_...


> EGL and friends are basically just the glue layer to get things to the screen. (Well, to the window manager)"

EGL can also be used directly with kernel mode setting and buffer management. No window managers necessary!

A simple example I found:

https://github.com/siro20/XlessEGL/blob/master/eglkms.c


SDL2 actually supports KMS so a number of games can run without a display server without any modifications.
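
A minimal sketch of what that looks like (assuming SDL2 was built with the KMSDRM backend and the user can open the DRM device):

    #include <stdlib.h>
    #include <SDL2/SDL.h>

    int main(void)
    {
        /* Force SDL's KMSDRM backend so no X11/Wayland server is needed. */
        setenv("SDL_VIDEODRIVER", "kmsdrm", 1);

        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window *win = SDL_CreateWindow("game",
                                           SDL_WINDOWPOS_UNDEFINED,
                                           SDL_WINDOWPOS_UNDEFINED,
                                           1920, 1080,
                                           SDL_WINDOW_OPENGL | SDL_WINDOW_FULLSCREEN);
        SDL_GLContext ctx = SDL_GL_CreateContext(win);  /* EGL under the hood here */

        /* ... normal render loop ... */

        SDL_GL_DeleteContext(ctx);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }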


That's freaking awesome. I had no idea; is this a recent addition? I read the SDL source code some years ago and I remember seeing an X11 implementation only.


It's a spec for how GPU APIs can share data without necessarily copying and converting through CPU memory, mainly because the OpenGL specification explicitly does not define anything to do with windowing. But EGL is not just an OpenGL-to-X11 bridge: CUDA, GStreamer, and several other APIs use it to communicate.


EGL is primarily about windowing system integration, it's what lets you initialize an OpenGL context in the first place.

It doesn't really accomplish interop between different APIs on its own, though of course various interop methods are based on it — e.g. if you want to import a dmabuf, you'd use EGL_EXT_image_dma_buf_import. But, say, on the Vulkan side you'd use VK_EXT_external_memory_dma_buf — there is no EGL with Vulkan, Vulkan has its own WSI subsystem.
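
For illustration, the boilerplate that EGL puts where GLX used to be looks roughly like this (a minimal sketch; the native display/window would come from X11, Wayland or GBM, and error handling is omitted):

    #include <EGL/egl.h>

    /* Pick a display, a config and a context, then hand rendering to GL/GLES. */
    EGLContext setup_gl(EGLNativeDisplayType native_display,
                        EGLNativeWindowType native_window,
                        EGLSurface *out_surface)
    {
        EGLDisplay dpy = eglGetDisplay(native_display);
        eglInitialize(dpy, NULL, NULL);

        const EGLint cfg_attrs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL_NONE };
        EGLConfig cfg;
        EGLint n;
        eglChooseConfig(dpy, cfg_attrs, &cfg, 1, &n);

        const EGLint ctx_attrs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attrs);

        *out_surface = eglCreateWindowSurface(dpy, cfg, native_window, NULL);
        eglMakeCurrent(dpy, *out_surface, *out_surface, ctx);
        return ctx;  /* from here on, plain GL/GLES calls do the actual drawing */
    }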


No, Vulkan does not need/use EGL. It talks natively to Wayland/X11.


What's EGL?


https://en.wikipedia.org/wiki/EGL_(API)

Reading that helps a bit, but you still won't understand the complicated graphics stack without prior knowledge. Is there a good introduction that systematically goes through all (or most) of the components for the average Python, C, database etc. programmer who knows absolutely nothing about computer graphics?

Edit: See also thread https://news.ycombinator.com/item?id=29048815


Yeah, I guess it won't make sense without more context, i.e. understanding the rest of the graphics systems on a computer.

It seems like it's a standardized interface for a windowing library like wayland/x11 to talk to a graphics driver, specifically for setting up regions of the screen which will be rendered into by something (opengl etc.)? So there were non-standard ways before?


Well, before there was only X11 in the Unix/Linux world. And the mechanism was GLX. It had different implementations over nearly 30 years: https://en.m.wikipedia.org/wiki/GLX


Does this affect the BSDs at all?


"if someone has an HN account the answer to https://news.ycombinator.com/item?id=29049511 is works fine on freebsd"

– IRC, #freebsd-desktop

https://matrix.to/#/!KYWCpFvqYdeGYJdkxS:libera.chat/$ZOJrjgp...


Yes, nearly everything about "Linux" in Firefox applies to BSD.


I read this and my first thought is "oh shit, is Firefox about to stop working?"

some of us like our software to be stable and reliable, and not switch to the newest bullshit just because they can. I'm still bitter about being forced to find a workaround for FF requiring pulseaudio. Am I now gonna need to find a workaround for this? I run FF 94 right now, and will upgrade with trepidation...

(shoutout to https://github.com/i-rinat/apulse. THANKS.)


No.



