According to Hector Martin in the last progress report, the WiFi driver is up next, then some not terribly demanding work on various drivers (like adapting the NetBSD driver for the real-time clock, and getting the M1 iMac SKU with four USB ports to see all four), after which he will tackle the kernel driver for the GPU.
The parent posted a link to a blog post with a table showing how support for the various pieces of hardware and their drivers is going. From that I think you can make your own determination based on what matters to you.
Sure, but you probably need less than 100% support to find things very usable for everyday applications. An asymptote isn't so asymptotic if you don't need it to intercept the origin. :)
But when this test suite hits 100%, then comes the optimization work. Just getting it working is still far from getting all the performance out of the hardware.
I fear you may be overly optimistic. Open-source community graphics drivers are not a first, and you can go look at how much success they have had without corporate support.
Writing a truly high-performance modern graphics driver is really no small task. There is a reason why NVidia, AMD and Intel dedicate large teams to the job.
The folks working on Asahi are terrific, and deserve credit where it's due; most developers would have trouble contributing at all, let alone at the pace they're working at. They are experienced, driven, creative and overall good at what they do.
However, I think it’s also instructive to realize that ultimately, there really are plenty of top-tier developers out there who aren’t working on cool things like this, for one reason or another.
Asahi at least has funding. And while patron funding isn't new, it is relatively underutilized for open source. I believe the patron model is the most promising avenue for open-source projects to route money from willing participants. I may only contribute a relatively tiny amount, especially when you consider the wages you can get working for a well-funded SV company, but when you pool it all together, it's still nothing to sneeze at.
So in my mind, Asahi achieves success in the face of skepticism because it has the combination of great engineers and enough funding to meaningfully help the effort.
P.S. Even though I do grant that most graphics vendors have huge teams and Asahi is a strong outlier in this case, those teams probably have a lot of fish to fry that doesn't yet, or may never, apply to the M1 graphics drivers. Valve has found success in writing their own Vulkan drivers; I'd love to know how much engineering effort went into that, both people and time. It might be a somewhat closer comparison. (Well, not really, because it's not taking into account the impressive reversing effort needed for the M1, but perhaps in terms of actually writing the high-level graphics drivers.)
P.P.S.: and also, I firmly believe that the ambition and drive to work on such hard problems is what breeds impressive engineers. In my opinion, you shouldn’t feel intimidated; you should feel inspired. Go out there and hack some hardware for yourself.
I think that I largely agree with you. Certainly a small number of determined and talented people can do a lot and can often move a lot faster than larger teams. I just wanted to temper the expectations that we'll get fully working graphics drivers in a year or two :)
Indeed RADV is a good comparison, though it bears noting that the existing open-source AMD drivers probably also helped a fair bit! That being said, there seem to be three people (maybe only two), and the RADV project has been going for around three years now, though there are a lot of contributions from outside of Valve.
RADV compiles Vulkan's SPIR-V to NIR, so there is also the backend that compiles NIR down to assembly, and that needs to be performant too.
Though perhaps the Asahi Linux team can leverage a lot of work done by RADV for their platform, which could speed things up a lot!
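To make that frontend/backend split concrete, here's a very rough sketch of the shader path a Vulkan driver like RADV walks. All the names below are invented for illustration; only the division into a shared SPIR-V-to-NIR frontend and a per-GPU NIR-to-ISA backend mirrors how Mesa is organized.

    /* Hypothetical sketch: SPIR-V in, vendor machine code out.
     * Names are made up; this is not the real Mesa API. */

    struct spirv_module { const unsigned *words; unsigned count; };
    struct nir_ir       { int placeholder; };   /* driver-independent IR */
    struct machine_code { void *binary; unsigned size; };

    /* Shared across drivers: parse SPIR-V into NIR, run generic passes. */
    static struct nir_ir *frontend_spirv_to_nir(const struct spirv_module *m)
    { (void)m; static struct nir_ir ir; return &ir; }

    static void run_generic_nir_passes(struct nir_ir *ir) { (void)ir; }

    /* Per-GPU backend: instruction selection, scheduling, register
     * allocation and ISA encoding -- the part that has to be tuned for
     * performance on every new architecture, including Apple's. */
    static struct machine_code *backend_nir_to_isa(const struct nir_ir *ir)
    { (void)ir; static struct machine_code mc; return &mc; }

    struct machine_code *compile_shader(const struct spirv_module *m)
    {
        struct nir_ir *ir = frontend_spirv_to_nir(m);
        run_generic_nir_passes(ir);
        return backend_nir_to_isa(ir);
    }

The frontend and the generic passes are shared code; the backend is where most of the per-architecture performance work lives.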
Ultimately, the developers on Asahi are trying to play catch-up with massive GPU driver teams made up of dozens of talented engineers. It doesn't look feasible to catch up, but I'd be happy to be proven wrong.
I think that's pretty consistent? If it takes one year for OpenGL ES2 coverage, not even optimization, it will probably take at least two more years for properly optimized Vulkan/SPIR-V/OpenGL 4 support along with accelerated video encode and decode support.
That is true and still concerning. The Asahi Linux defense is that only Rosenzweig has been working on it so far, while Martin and Sven have been working on other drivers. In a few months they will finish those up (or close enough) and then also start attacking the GPU, which should mean a lot of progress quickly.
They don't implement OpenGL ES2 directly; they figure out the GPU, the graphics driver interfaces, and so on, and OpenGL ES2 then works as more features are made available.
If they have coverage of the GPU, then Vulkan and so on will follow for the features ES2 already exercises. Besides, you don't need support for every exotic feature to run Linux with hardware acceleration.
I'd say usable desktop hardware acceleration requires accelerated rendering in the browser and video decode acceleration. These tend to require pretty exotic features and still aren't implemented in some PC graphics drivers.
OpenGL ES 2.0 is pretty simple in comparison, and you won't have to deal with undocumented features and so on.
It's also not as simple as coverage: you also need good performance, which means you need a good optimizing compiler for the GPU architecture, and that's not obvious either. Unless they are using Apple's drivers? I don't know that they are. From reading her website it seems they are not.
> It's also not as simple as coverage: you also need good performance, which means you need a good optimizing compiler for the GPU architecture, and that's not obvious either. Unless they are using Apple's drivers? I don't know that they are.
She's been writing progress reports as she goes. This one is from back in May.
>I’ve begun a Gallium driver for the M1, implementing much of the OpenGL 2.1 and ES 2.0 specifications. With the compiler and driver together, we’re now able to run OpenGL workloads like glxgears and scenes from glmark2 on the M1 with an open source stack. We are passing about 75% of the OpenGL ES 2.0 tests in the drawElements Quality Program used to establish Khronos conformance. To top it off, the compiler and driver are now upstreamed in Mesa!
>Gallium is a driver framework inside Mesa. It splits drivers into frontends, like OpenGL and OpenCL, and backends, like Intel and AMD. In between, Gallium has a common caching system for graphics and compute state, reducing the CPU overhead of every Gallium driver. The code sharing, central to Gallium’s design, allows high-performance drivers to be written at a low cost. For us, that means we can focus on writing a Gallium backend for Apple’s GPU and pick up OpenGL and OpenCL support “for free”.
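Not from the quote, just my own rough mental model of what "writing a Gallium backend" means: the hardware-specific part boils down to filling in a handful of hooks, something like the sketch below. The real interface is Mesa's pipe_screen/pipe_context; these names are simplified stand-ins, not the actual API.

    /* Very loose sketch of the Gallium division of labor. */

    struct hw_cmdbuf;      /* opaque, GPU-specific command buffer      */
    struct gallium_state;  /* blend, depth/stencil, shader state, ...  */

    struct gpu_backend_ops {
        /* Only the hardware-specific pieces live in the backend: */
        void *(*compile_shader)(const void *nir);            /* NIR -> GPU ISA   */
        void  (*emit_state)(struct hw_cmdbuf *cb,
                            const struct gallium_state *st); /* encode registers */
        void  (*submit)(struct hw_cmdbuf *cb);               /* kick the GPU     */
    };

The OpenGL/OpenCL frontends, state tracking, validation and the caching of constant state are shared Mesa code, which is why a new driver mostly has to fill in a table like this one.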
Yes. Simply covering these features will provide basic support for many use cases. That doesn't, however, mean that performance will automatically be sufficient. It also remains to be seen whether the more complex feature set of OpenGL 3.1 is as straightforward to cover efficiently.
I'm not saying it won't happen. I'm just saying that we shouldn't underestimate how much work there is.
Even with the help of Mesa and years of effort, the nouveau backend for NVidia cards is still barely satisfactory even for day-to-day tasks; its OpenGL performance is very poor even for basic applications. It's really not as simple as just coverage in practice.
Nouveau doesn't really count as a good reference, because NVIDIA locked out reclocking support from any firmware that is not NVIDIA-signed starting with the GTX 900 series.
This means that if you are running any form of unsigned driver (which would be any open-source driver such as Nouveau) on those cards, the chip will run at its slowest performance tier, and the firmware won't allow the clock speed to be cranked up. Only the signed NVIDIA driver can change the GPU speed, and being able to do that is basically mandatory for a driver to be useful.
So don't blame Nouveau for being behind; NVIDIA has made it so that open-source drivers are almost useless. And in that case, why bother improving Nouveau when the performance is going to be terrible anyway?
There are a lot of issues beyond reclocking, as we both know. Even before the reclocking issues, nouveau was not up to par despite years of work, and it is still far behind on cards with reclocking.
The point I was making is that mere coverage is not enough for satisfactory performance. If it were, nouveau would have good performance on cards with reclocking support.
It doesn't, because it takes a lot of work on the backend to get good performance.
I think it has something to do with preventing people from running the higher-stability drivers that come with buying a Quadro (or hypothetical super stable FOSS drivers) on significantly cheaper consumer hardware, because that makes it much more difficult to justify buying a Quadro in many circumstances. The added stability is part of the upsell and is more software than hardware.
Not quite, no. Even manually setting the clock rate would allow for performance improvement as you could lock the card at its boost clock or at least prevent downclocking under load.
The only lockout solution is to lock speeds to the base clock completely.
It depends on how the card's speed governor works. Do you set a desired clock rate that the firmware then tries to hold, subject to factors like core temperature, or do you set a hard value that the firmware holds come hell or high water?
From how it used to work, the actual frequency depended on both the driver and the firmware, though the driver used to be able to force a certain frequency, and probably still can.
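For what it's worth, on older cards where nouveau can still reclock, the manual override is, as far as I remember, just a write to nouveau's debugfs pstate file. A minimal sketch (the path and the level id vary per kernel and card; read the file first to see the real list, and note this does nothing on Maxwell+ cards, where the signed firmware refuses to reclock for nouveau anyway):

    /* Force a nouveau performance level by hand. Needs root, and assumes
     * the debugfs pstate interface is present and "0f" is a valid level
     * for this particular card. */
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/sys/kernel/debug/dri/0/pstate";
        FILE *f = fopen(path, "w");
        if (!f) { perror("open pstate"); return 1; }
        fputs("0f\n", f);   /* request the highest performance level */
        fclose(f);
        return 0;
    }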
Video decode acceleration is very nice for battery life and for freeing up the CPU for that LLVM build you're running in the background, but it's absolutely not a requirement; it's nowhere near as important as GPU rendering. Heck, a lot of hardware doesn't have VP9 support and people watch VP9 YouTube on it anyway.
According to Hector Martin, writing the kernel driver will be easy as "all the magic happens in userspace." So even though the driver is running in userspace on macOS it won't be that much work to run in Linux, apparently.
I'm sure there will still be tricky bits to figure out in kernel mode, like how to partition and schedule GPU work so one user can't read and write another user's rendered stuff...
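A purely hypothetical sketch of how that isolation usually works, not the Asahi design, just to illustrate why the kernel side still matters: each client gets its own GPU address space, and a job only ever runs with its owner's page tables bound, so another user's buffers simply aren't mapped while it executes.

    /* All names invented for illustration. */

    struct gpu_vm  { int asid; };                  /* per-client GPU page tables */
    struct gpu_job { struct gpu_vm *vm; void *cmds; };

    static void bind_address_space(struct gpu_vm *vm) { (void)vm;  /* program GPU MMU  */ }
    static void hand_to_firmware(void *cmds)          { (void)cmds; /* queue the work  */ }

    void kernel_submit(struct gpu_job *job)
    {
        bind_address_space(job->vm);   /* switch to the submitting client's VM */
        hand_to_firmware(job->cmds);   /* GPU can now only touch that VM       */
    }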
I'm still concerned about that weird back and forth between driver and firmware that she talked about in one of the blog posts. Not really a stable API it seems.
They said it's no problem - just use a fixed version of the macOS Display Controller firmware of your choice. For example, they might target the macOS 12.1 version of the firmware, maybe upgrade to the macOS 12.4 version a year later... It's not stable but it's easily made stable.
>> They said it's no problem - just use a fixed version of the macOS Display Controller firmware of your choice.
Is that really an option? Can we drop in the binary blob at will, or will it break if Apple pushes an update? Even if that's the case, dual booting may be impossible then. Unless that firmware is downloaded on every boot.
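For reference, the standard kernel-side way to pin one specific blob looks roughly like the fragment below. Whether the Asahi DCP driver will actually use request_firmware() like this or get the blob handed over by the bootloader, I don't know, and the firmware file name is invented.

    /* Kernel driver fragment using Linux's stock firmware loader. The
     * "apple/dcp-12.1.bin" name is made up; the point is that the driver
     * asks for one pinned version and fails cleanly if it isn't there,
     * rather than tracking whatever macOS happens to install. */
    #include <linux/firmware.h>
    #include <linux/device.h>

    static int dcp_load_pinned_firmware(struct device *dev)
    {
        const struct firmware *fw;
        int ret = request_firmware(&fw, "apple/dcp-12.1.bin", dev);
        if (ret)
            return ret;            /* blob missing or wrong version */

        /* ... hand fw->data / fw->size to the display coprocessor ... */

        release_firmware(fw);
        return 0;
    }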
The big issue with NVidia GPUs is that NVidia won't allow open-source drivers to set the clock rate on newer graphics cards.
However, even on older cards where this is not an issue, community drivers tend to lag behind. It is far from a breeze. So far they are leveraging the Mesa driver infrastructure to implement OpenGL ES 2.0, which is a good beginning. The really hard part is supporting all of the latest technologies with good performance. It's too early to know whether that will be a breeze or not.
So this sounds like the people who say we will be able to run the first real tests by 2024 are likely right. From there it's then "only" as much work as with Nvidia.
So maybe by 2032 we will have some M1s with some sort of HW-accelerated graphics. At least that's how it looks given the history of Nouveau.
I don't want to put down the work those people are doing! But the truth is likely that without vendor support this will never fly beyond some tech demos.
I'm not sure I'd be so pessimistic either. Perhaps they'd be able to use the open source AMDGPU drivers as inspiration, or perhaps there is something else I'm not thinking about. I think there's a solid chance they get it usable for basic use within a few years.
https://asahilinux.org/2021/12/progress-report-oct-nov-2021/
As an interesting aside, in the progress report he mentions that after he installed Arch Linux, even without hardware GPU support:
>glxgears has no right to run at >60FPS with software rendering, but it does