“With those changes we're up to a 94% pass rate for dEQP-GLES2” (twitter.com/alyssarzg)
146 points by mmastrac on Dec 18, 2021 | 56 comments



According to Hector Martin in the last progress report, the Wi-Fi driver is up next, then some not terribly demanding work on various drivers (like adapting the NetBSD driver for the real-time clock, and getting the M1 iMac SKU with four USB ports to see all four), after which he will tackle the kernel driver for the GPU.

https://asahilinux.org/2021/12/progress-report-oct-nov-2021/

As an interesting aside, in the progress report he mentions that after he installed Arch Linux, even without support for the hardware GPU:

>glxgears has no right to run at >60FPS with software rendering, but it does


When has glxgears ever been demanding[1]? I would hope a modern CPU could handle a few hundred vertices (guessing) at 60 fps.

1. https://wiki.cchtml.com/index.php/Glxgears_is_not_a_Benchmar...


On an Intel 7700K, using `LIBGL_ALWAYS_SOFTWARE=1` to force llvmpipe, glxgears reports ~3900 fps, compared to ~20000 fps on the GPU (RX 570, amdgpu).
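In case anyone wants to reproduce that: it's just an ordinary environment variable. The shell is the obvious place to set it, but here's a minimal C sketch of the same thing, assuming glxgears is on your PATH and Mesa is your GL provider:

    /* Minimal sketch: forcing llvmpipe is just a matter of setting one
     * environment variable before the GL application starts. */
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        setenv("LIBGL_ALWAYS_SOFTWARE", "1", 1);   /* tell Mesa to use its software rasterizer */
        execlp("glxgears", "glxgears", (char *)NULL);
        return 1;                                  /* only reached if exec failed */
    }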


Heck, a 3990X can run Crysis using software rendering.


He was livestreaming the wifi work today: https://www.youtube.com/watch?v=7kOlrBUKAfM

I only caught the last minute; it sounds like it's not working yet, but it's pretty close (I could have interpreted his comments wrong, though).


What functionality remains to make it a viable alternative to macOS on an M1 13" MBP or MBA? It seems like Wi-Fi is the biggest remaining hurdle?


The parent posted a link to a blog post with a table showing the state of support for various hardware and drivers. From that, I think you can make your own determination based on what matters to you.


What does a "discard if <condition>" instruction actually do?


It lets a fragment (pixel) shader throw the current fragment away entirely, so nothing is written for it and whatever was drawn underneath shows through.
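As a concrete (illustrative) example, a GLSL ES 2.0 fragment shader doing simple alpha testing might look like this, embedded here as a C string; the u_texture/v_texcoord names are made up:

    /* Illustrative GLSL ES 2.0 fragment shader, stored as a C string.
     * Fragments whose sampled alpha is below 0.5 are discarded entirely:
     * nothing is written for them, so whatever was drawn behind shows through. */
    static const char *alpha_test_fs =
        "precision mediump float;\n"
        "uniform sampler2D u_texture;\n"
        "varying vec2 v_texcoord;\n"
        "void main() {\n"
        "    vec4 color = texture2D(u_texture, v_texcoord);\n"
        "    if (color.a < 0.5)\n"
        "        discard; // throw this fragment away\n"
        "    gl_FragColor = color;\n"
        "}\n";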


"Implementing the GPU is the hard part."

"It will take Asahi Linux years to get graphical acceleration."

"Maybe they'll have graphics figured out by 2024 if they are lucky."

Doubters gonna doubt. OpenGL ES 2 for Asahi Linux on the M1, at a 94% pass rate, just one year later.


Layman programmers vastly overestimate systems work because they mythologize it.

It's just attitude & elbow grease at the end of the day, and Asahi Linux clearly had plenty of both of those! Cheers.


Maybe, but layman programmers also tend to vastly underestimate the work needed to finish projects, because "it's easy, you just need to..."

I would assume the last 10% is going to be just as hard as the first 90%, just like most engineering projects.


Sure, but you probably need less than 100% support to find things very usable for everyday applications. An asymptote isn't so asymptotic if you don't need it to intercept the origin. :)


But when this test suite hits 100%, then comes the optimization work. Just getting it working is still far from getting all the performance out of the hardware.


I fear you may be overly optimistic. Community open-source graphics drivers are nothing new, and you can go see how much success they have had without corporate support.

Writing a truly high-performance, modern graphics driver is no small task. There is a reason NVIDIA, AMD, and Intel dedicate large teams to it.


I think the truth is in the middle.

The folks working on Asahi are terrific and deserve credit where it's due; most developers would have trouble contributing at all, let alone at the pace they're working. They are experienced, driven, creative, and overall good at what they do.

However, I think it’s also instructive to realize that ultimately, there really are plenty of top-tier developers out there who aren’t working on cool things like this, for one reason or another.

Asahi at least has funding. And while patron funding isn't new, it is relatively underutilized for open source. I believe the patron model is the most promising avenue for open-source projects to route money from willing participants. I may only contribute a relatively tiny amount, especially compared to the wages you can get working for a well-funded SV company, but when you pool it all together, it's still nothing to sneeze at.

So in my mind, Asahi achieves success in the face of skepticism because it has the combination of great engineers and enough funding to meaningfully help the effort.

P.S. Even though I do acknowledge that most graphics vendors have huge teams and Asahi is a strong outlier in this case, those teams probably have a lot of fish to fry that doesn't yet, or may never, apply to the M1 graphics drivers. Valve has found success in writing their own Vulkan drivers; I'd love to know how much engineering effort went into that, both in people and in time. It might be a somewhat closer comparison. (Well, not really, because it doesn't take into account the impressive reverse-engineering effort needed for the M1, but perhaps in terms of actually writing the high-level graphics drivers.)

P.P.S.: I also firmly believe that the ambition and drive to work on such hard problems is what breeds impressive engineers. In my opinion, you shouldn't feel intimidated; you should feel inspired. Go out there and hack some hardware for yourself.


I think that I largely agree with you. Certainly a small number of determined and talented people can do a lot and can often move a lot faster than larger teams. I just wanted to temper the expectations that we'll get fully working graphics drivers in a year or two :)

Indeed, RADV is a good comparison, though it bears noting that the existing open-source AMD drivers probably also helped a fair bit! That being said, there seem to be three people (maybe only two) working on it, and the RADV project has been going for around three years now, though there are a lot of contributions from outside Valve.

RADV compiles Vulkan shaders down to NIR, so there is also a backend compiler from NIR to assembly that needs to be performant.

Though perhaps the Asahi Linux team can leverage a lot of work done by RADV for their platform, which could speed things up a lot!


Ultimately, the developers on Asahi are trying to play catch-up with massive GPU driver teams made up of dozens of talented engineers. It's not feasible to catch up, but I'd be happy to be proven wrong.


RADV actually started as a student project IIRC; Valve got involved later, mostly with adding the ACO compiler as an alternative to using LLVM.


I think that's pretty consistent? If it takes one year for OpenGL ES 2 coverage, not even optimization, it will probably take at least two more years for properly optimized Vulkan/SPIR-V/OpenGL 4 support, along with accelerated video encode and decode support.


True, but the 2024 cynics (as seen primarily on forums like Phoronix) doubted that there would be OpenGL ES in this time frame.


Those last few percent are a huge amount of work; they were already at 93% in July.

https://twitter.com/alyssarzg/status/1414354306157875204


That is true and still concerning. The Asahi Linux defense is that only Rosenzweig has been working on it so far, while Martin and Sven have been working on other drivers. In a few months they will finish those up (or close enough) and then also start attacking the GPU, which should mean a lot of progress quickly.


They aren't implementing OpenGL ES 2 directly; they're figuring out the GPU, the graphics driver interfaces, and so on, and OpenGL ES 2 then works as more features are made available.

Once they have coverage of the GPU, Vulkan and so on will work for the same features ES 2 does. Besides, you don't need support for every exotic feature to run Linux with hardware acceleration.


I'd say usable desktop hardware acceleration requires accelerated rendering in the browser and video decode acceleration. These tend to require pretty exotic features and still aren't implemented in some PC graphics drivers.

OpenGL ES 2.0 is pretty simple in comparison, and you won't have to deal with undocumented features and so on.

It's also not as simple as coverage: you also need good performance, which means a good optimizing compiler for the GPU architecture, and that's not obvious either. Unless they are using Apple's drivers? I don't know that they are; from reading her website, it seems they are not.


> It's also not as simple as coverage: you also need good performance, which means a good optimizing compiler for the GPU architecture, and that's not obvious either. Unless they are using Apple's drivers? I don't know that they are.

She's been writing progress reports as she goes. This one is from back in May.

>I’ve begun a Gallium driver for the M1, implementing much of the OpenGL 2.1 and ES 2.0 specifications. With the compiler and driver together, we’re now able to run OpenGL workloads like glxgears and scenes from glmark2 on the M1 with an open source stack. We are passing about 75% of the OpenGL ES 2.0 tests in the drawElements Quality Program used to establish Khronos conformance. To top it off, the compiler and driver are now upstreamed in Mesa!

>Gallium is a driver framework inside Mesa. It splits drivers into frontends, like OpenGL and OpenCL, and backends, like Intel and AMD. In between, Gallium has a common caching system for graphics and compute state, reducing the CPU overhead of every Gallium driver. The code sharing, central to Gallium’s design, allows high-performance drivers to be written at a low cost. For us, that means we can focus on writing a Gallium backend for Apple’s GPU and pick up OpenGL and OpenCL support “for free”.

https://rosenzweig.io/blog/asahi-gpu-part-4.html
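To make that frontend/backend split a little more concrete, here's a rough sketch of the idea, with struct and field names invented purely for illustration (Mesa's real interfaces live in pipe_screen/pipe_context and are far more elaborate): a backend is essentially a table of callbacks that the shared frontends drive.

    /* Invented-for-illustration sketch, not Mesa's real headers: a Gallium
     * backend boils down to a set of hooks the shared frontends (GL, CL, ...)
     * call into, so bringing up a new GPU mostly means filling these in
     * rather than reimplementing the OpenGL state machine itself. */
    struct fake_gallium_backend {
        void *(*create_context)(void *screen);
        void *(*compile_shader)(void *ctx, const void *nir_ir);  /* NIR in, GPU code out */
        void  (*set_framebuffer)(void *ctx, const void *fb_state);
        void  (*draw)(void *ctx, const void *draw_info);         /* submit a draw call */
    };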


Yes. Simply covering these features will provide basic support for many use cases. That doesn't, however, mean that performance will automatically be sufficient. It also remains to be seen whether the more complex feature set of OpenGL 3.1 is as straightforward to cover efficiently.

I'm not saying it won't happen. I'm just saying that we shouldn't underestimate how much work there is.

Even with the help of Mesa and years of effort, the nouveau backend for NVIDIA cards is still barely satisfactory even for day-to-day tasks; its OpenGL performance is very poor even for basic applications. It's really not as simple as just coverage in practice.


Nouveau doesn't count as a good reference, because starting with the GTX 900 series, NVIDIA locked reclocking support out of any firmware that is not NVIDIA-signed.

This means that if you are running any form of unsigned driver (which would be any open-source driver such as Nouveau) on those cards, the chip will run at the slowest possible performance tier, and won't allow the firmware to crank the speed up. Only the signed NVIDIA driver can change the GPU speed, which is basically mandatory for a driver to be useful.

So, don't blame Nouveau for being behind; NVIDIA has made it so that open-source drivers are almost useless. In that case, why bother improving Nouveau when the performance is going to be terrible anyway?


There are a lot of issues beyond reclocking, as we both know. Even before the reclocking issues, nouveau was not up to par despite years of work, and it is still far behind on cards with reclocking.

The point I was making is that mere coverage is not enough for satisfactory performance. If it was the case, nouveau would have good performance on cards with reclocking support.

It doesn't, because it takes a lot of work on the backend to get good performance.


Wow, why would they go out of their way to do that? Even with my most cynical hat on, I can't think of how this is in their self-interest.

Oh, is it to ensure nerfing of FP64 performance for their consumer cards? Is that done at the driver level?


> Is that done at the driver level?

No, the FP64 units aren't physically present on the silicon in high numbers on the non-xx100 dies.

However, limitations that are enforced purely by the driver and its firmware include:

- GPU virtualisation (see: https://github.com/DualCoder/vgpu_unlock)

- NVENC video encoding limited to 2 simultaneous streams on consumer cards

- Lite Hash Rate limitation enforcement to make GPUs less attractive for miners


I think it has something to do with preventing people from running the higher-stability drivers that come with buying a Quadro (or hypothetical super stable FOSS drivers) on significantly cheaper consumer hardware, because that makes it much more difficult to justify buying a Quadro in many circumstances. The added stability is part of the upsell and is more software than hardware.


I am pretty sure it's to prevent people from overclocking their cards more than NVidia deems safe for the sales of their most expensive cards.


If that were the case, then manually setting the clock speed would be supported, but it would lock out any speeds higher than the OEM configuration.


Not quite, no. Even manually setting the clock rate would allow for performance improvement as you could lock the card at its boost clock or at least prevent downclocking under load.

The only lockout solution is to lock speeds to the base clock completely.


It depends on how the card’s speed governor works. Do you set a desired clock rate that the firmware then tries to hold, depending on factors like core temperature, or do you set a hard value that the firmware holds come hell or high water?


From how it used to work, the actual frequency depended on both the driver and the firmware, though the driver used to be able to (and probably still can) force a certain frequency.


Video decode acceleration is very nice for battery life and for freeing up the CPU for that LLVM build you're running in the background, but it's absolutely not a requirement; it's nowhere near as important as GPU rendering. Heck, a lot of hardware doesn't have VP9 support, and people still watch VP9 YouTube on it.


A lot of people without VP9 hardware support just use h264ify.

Otherwise it's not really feasible to watch high-resolution, high-framerate videos on a laptop; it absolutely murders battery life.


This is great progress and Alyssa Rosenzweig is awesome, but AFAIK the GPU is not usable from Linux yet because the kernel driver hasn't been written.

The Asahi team is very talented, but I don't think their progress means that building a software stack for an undocumented GPU isn't difficult.


According to Hector Martin, writing the kernel driver will be easy because "all the magic happens in userspace." So even though the driver is currently running in userspace on macOS, apparently it won't be that much work to get it running on Linux.


I'm sure there will still be tricky bits to figure out in kernel mode... like how to partition and schedule GPU work so one user can't read or write another user's rendered output...


Will this follow the 80/20 rule?


Yes, they've done 94% of the work; only another 94% to completion.


I'm still concerned about that weird back-and-forth between driver and firmware that she talked about in one of the blog posts. Not really a stable API, it seems.


They said it's no problem: just use a fixed version of the macOS Display Controller firmware of your choice. For example, they might target the macOS 12.1 version of the firmware and maybe upgrade to the macOS 12.4 version a year later... It's not stable, but it's easily made stable.


>> They said it's no problem: just use a fixed version of the macOS Display Controller firmware of your choice.

Is that really an option? Can we drop in the binary blob at will, or will it break if Apple pushes an update? Even if that's the case, dual booting may be impossible then. Unless that firmware is downloaded on every boot.


Firmware is stored per OS installation. (Also, the DCP is unrelated to the GPU.)


So why is it so hard to have open-source drivers for, say, NVIDIA GPUs?

If you can reverse-engineer Apple's completely new chips, it should be a breeze to do the same with the already very well-known chips from NVIDIA, shouldn't it?

What's the difference? Is there any difference at all?


The big issue with NVIDIA GPUs is that NVIDIA won't allow open-source drivers to set the clock rate on newer graphics cards.

However, even on older cards where this is not an issue, community drivers tend to lag behind. It is far from a breeze. So far they are leveraging the Mesa driver infrastructure to implement OpenGL ES 2.0, which is a good beginning. The really hard part is supporting all of the latest technologies with good performance. It's too early to know whether that will be a breeze or not.


So this sounds like the people who say we will be able to run the first real tests by 2024 are likely right. From there, it's then "only" as much work as with NVIDIA.

So maybe by 2032 we will have some M1s with some form of hardware-accelerated graphics. At least that's what it looks like, given the history of Nouveau.

I don't want to put down the work those people are doing! But the truth is likely that, without vendor support, this will never fly beyond some tech demos.


I'm not sure I'd be so pessimistic. Perhaps they'd be able to use the open-source AMDGPU drivers as inspiration, or perhaps there is something else I'm not thinking of. I think there's a solid chance they get it usable for basic tasks within a few years.

In any case, I truly wish them the best of luck.


We don't deserve Alyssa.


[flagged]


really?

making an account just to post this?


Relax, I'm just complimenting his fine work.


And making the decision to be a prick about her gender.



