Mesa 20.3 released with out-of-the-box support for Arm's Mali Bifrost GPUs (mesa3d.org)
143 points by mfilion on Dec 3, 2020 | 22 comments



A big release grows bigger. Personally, I was excited about Mesa 20.3 because Vulkan meets baseline conformance on the Raspberry Pi: https://www.raspberrypi.org/blog/vulkan-update-were-conforma...


One of the Bifrost GPUs supported is the G52, which is going to appear on the upcoming Rockchip RK3566 SoC. Pine64 has said about this SoC:

"Next year you will see us introduce two single board computers based on the RK3566"

and

"it is our choice for NON-Pro PINE64 devices to come"

I am really looking forward to these SBCs! Should be cool (although a PINE64 rep has also said "Also, realistically, I think that it will be 2-3 years before mainline is usable in a desktop application on the RK3566.").

Source: https://www.pine64.org/2020/11/15/november-update-kde-pineph...


I'm really looking forward to 8GB of RAM. 4GB feels pretty anemic nowadays when you know that Firefox or Chrome will happily eat up the RAM with a moderate number of JavaScript-laden tabs open.


Mesa has been growing into a powerhouse of an open source project, much along the lines of the Linux Kernel.


The current title reads "Mesa 20.3 released with out-of-the-box support for [whatever GPU]"

Can someone expand on why Mesa needs to "support" specific GPUs?

I thought that supporting hardware under Linux was a kernel matter, and assumed that Mesa was an OpenGL implementation for Linux that talks to kernel APIs in a hardware-independent way (since talking to the hardware itself is a kernel matter).

Where am I wrong? What does "support for GPU xyz" mean in Mesa's case? Thanks.


My (admittedly limited) understanding is that in order to provide support for OpenGL, Mesa compiles code from OpenGL to some target backend. Mesa needs support for specific GPUs because it needs to be able to generate code that can be run on those architectures. In this way it is similar to adding support for a particular architecture in a C compiler. I believe that for open source GPU drivers, Mesa is the piece that is used to provide OpenGL support as part of that driver, whereas with proprietary drivers, OpenGL support is provided by other software.

I'm sure this drastically oversimplifies the situation, so please correct / fill in what I'm missing, but I believe this will answer the OP's question.


Thanks. So, the generated hardware-specific code emitted by Mesa is then mostly passed on intact by the kernel to the hardware?


Yes and no.

Specifically with regard to compiling: OpenGL 2.0 introduced the "GLSL" shader language, which is a C-like language that compiles to run on specific GPUs. OpenGL has an API call that says "given this GLSL string, compile it for the current GPU", and the libGL driver compiles that string for the program at runtime. You don't compile it ahead-of-time.
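Roughly what that looks like from the application's side (just a sketch in C with a toy GLSL string; error handling and context/loader setup are left out):

    /* Sketch: hand the driver GLSL source at runtime and let it compile it
       for whatever GPU is present. The shader just outputs solid red. */
    #include <GL/gl.h>

    static const char *frag_src =
        "#version 120\n"
        "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }\n";

    GLuint compile_fragment_shader(void)
    {
        GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(shader, 1, &frag_src, NULL);  /* give it the string   */
        glCompileShader(shader);                     /* driver compiles it now */

        GLint ok = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[1024];
            glGetShaderInfoLog(shader, sizeof log, NULL, log);
            /* the compile error text comes back from the driver, too */
        }
        return shader;
    }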

But also, the 3D APIs exposed by the kernel are pretty low-level, mostly just slightly abstracting over what the hardware itself exposes, essentially exposing a different API for each family of GPUs. The work of abstracting over those different low-level APIs in to a vendor-neutral API (in this case, the OpenGL API), is the job of a userspace library (in this case, libGL).
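As a small illustration of how device-specific that kernel interface is, about the first thing a userspace loader does is ask the kernel which driver sits behind a given DRM node, then pick the matching backend. A sketch using libdrm (the device path is just an example):

    /* Sketch: query which kernel GPU driver backs a DRM render node.
       Each named driver ("i915", "amdgpu", "panfrost", ...) exposes its own
       driver-specific ioctls on top of the common DRM core. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>   /* libdrm */

    int main(void)
    {
        int fd = open("/dev/dri/renderD128", O_RDWR);  /* example node */
        if (fd < 0) { perror("open"); return 1; }

        drmVersionPtr v = drmGetVersion(fd);
        if (v) {
            printf("kernel driver: %.*s\n", v->name_len, v->name);
            drmFreeVersion(v);
        }
        close(fd);
        return 0;
    }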

(Now, 2D acceleration, the kernel does a lot more work to provide a consistent 2D API between different devices.)


Thanks, and thanks for the other long reply :)

One thing that confuses me: we were talking about "compiled code", then your answer jumps to talking about shaders. Can all OpenGL code be represented as shaders? Or is the OpenGL code needing hardware-specific compilation shaders only? When the above poster said that Mesa "compiles code from OpenGL to some target backend", were they referring to shaders already?

Said differently: if I import a C GL lib and use it to make a call to draw a triangle, is this a shader, and will Mesa do any hardware-specific compilation? Or is this compilation only happening when shaders are involved?

I guess I'm confused about what shaders exactly are, and when they are used. I think I understand them as programs able to fiddle at each frame with how stuff (surface, pixels) is rendered, but for example, geometry (putting triangles in space) is OpenGL's job even though it isn't about shaders, is it?

By the way, do you (or other passersby) have any course recommendations for studying this? (ideally interactive / tutorial-like / with exercises)


This is a pretty deep topic. The best resource I've found is: https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-...

I'll also do my best to answer your questions, though I won't be able to do them justice in this space.

A pretty good model for GPUs is remote procedure calls. When you want to do something like draw some triangles, you call a function in the GL API on the client side, and what the driver actually does under the hood is serialize the parameters of that call into some binary sequence, then at some point that "command buffer" is uploaded to the GPU, and a combination of hardware and software on the GPU side decodes it and uses it to set up the hardware pipeline that actually draws a triangle.

There's a lot more to it than that, obviously. A very big deal is that if you did an actual RPC for every triangle draw, it would be hopelessly inefficient. So a large part of what OpenGL drivers do is use heuristics for batching up a bunch of calls into one actual request. In OpenGL, the details of that batching, and the way the command buffer is encoded, are completely hidden from the application.
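To make the RPC analogy concrete, here is a purely made-up sketch of what "serialize the call into a command buffer" could look like. Real drivers each have their own hardware-specific binary formats; this only shows the shape of the idea:

    /* Hypothetical command encoding, not any real driver's format. */
    #include <stdint.h>
    #include <string.h>

    enum cmd_opcode { CMD_BIND_VERTEX_BUFFER = 1, CMD_DRAW_TRIANGLES = 2 };

    struct cmd_buffer {
        uint8_t data[4096];
        size_t  len;
    };

    /* A client-side glDrawArrays(GL_TRIANGLES, first, count) might just
       append a few words to the current batch... */
    static void encode_draw(struct cmd_buffer *cb, uint32_t first, uint32_t count)
    {
        uint32_t words[3] = { CMD_DRAW_TRIANGLES, first, count };
        memcpy(cb->data + cb->len, words, sizeof words);
        cb->len += sizeof words;
    }

    /* ...and only when the batch fills up (or the app flushes) does the
       driver hand the whole buffer to the kernel for submission to the GPU. */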

Shaders are basically small pieces of code that get run on the GPU hardware as part of the rendering pipeline. A vertex shader is a program that gets run for every vertex in a mesh, and a fragment shader is another program that gets run for every fragment (pixel) that gets rasterized. There are other shaders that are run for other tasks, but those are the two biggies. In the early days of OpenGL, lighting calculations and so on were done by "fixed function" hardware, but these days it's programmable.
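For a concrete (if trivial) example, the smallest useful vertex/fragment pair looks roughly like this, written as the GLSL source strings an application holds and hands to the driver (names like a_position, u_mvp, and u_color are arbitrary):

    /* Sketch of a minimal GLSL vertex + fragment shader pair (GLSL 1.20). */

    /* Runs once per vertex: transform the mesh position into clip space. */
    static const char *vert_src =
        "#version 120\n"
        "attribute vec3 a_position;\n"
        "uniform mat4 u_mvp;\n"
        "void main() {\n"
        "    gl_Position = u_mvp * vec4(a_position, 1.0);\n"
        "}\n";

    /* Runs once per rasterized fragment (roughly: per covered pixel). */
    static const char *frag_src =
        "#version 120\n"
        "uniform vec4 u_color;\n"
        "void main() {\n"
        "    gl_FragColor = u_color;\n"
        "}\n";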

In classic OpenGL, you call a function to compile a shader from a string containing GLSL source code, and the driver compiles it to machine language for the GPU hardware. You can use Radeon Shader Analyzer (available online at http://shader-playground.timjones.io/) to see what that assembly language looks like.

Even just to understand OpenGL better, it might make sense to learn about Vulkan. A good (though somewhat daunting) resource is "API without Secrets: Introduction to Vulkan": https://software.intel.com/content/www/us/en/develop/article... . In Vulkan, the batching and other aspects of resource management are much more explicit and under control of the application, though other details are still abstracted by the driver. For example, there is no one standard binary format for encoding command lists. Also, instead of the driver compiling shaders all the way from source, the application is responsible for compiling the shader into an intermediate language (SPIR-V), then the driver compiles that to the actual GPU machine language.
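To give a flavor of that split, shader creation in Vulkan looks roughly like the sketch below; the SPIR-V words are assumed to come from an offline compiler, and error handling is trimmed:

    /* Sketch: Vulkan is handed precompiled SPIR-V, not GLSL source.
       `device` and the SPIR-V buffer are assumed to exist already. */
    #include <vulkan/vulkan.h>

    VkShaderModule make_shader_module(VkDevice device,
                                      const uint32_t *spirv_words,
                                      size_t spirv_size_bytes)
    {
        VkShaderModuleCreateInfo info = {
            .sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
            .codeSize = spirv_size_bytes,   /* size in bytes, not words */
            .pCode    = spirv_words,
        };

        VkShaderModule module = VK_NULL_HANDLE;
        /* The driver still lowers SPIR-V to GPU machine code, but the GLSL
           front-end work happened ahead of time, outside the driver. */
        if (vkCreateShaderModule(device, &info, NULL, &module) != VK_SUCCESS)
            return VK_NULL_HANDLE;
        return module;
    }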

There are some other low level GPU resources here: https://raphlinus.github.io/gpu/2020/02/12/gpu-resources.htm...

Best of luck, I find GPUs to be a fascinating journey!


Awesome. Thanks for the detailed answer and links :)


> we were talking about "compiled code", then your answer jumps to talking about shaders. Can all OpenGL code be represented as shaders? Or is the OpenGL code needing hardware-specific compilation shaders only? When the above poster said that Mesa "compiles code from OpenGL to some target backend", were they referring to shaders already?

OpenGL code consists of both code running on the CPU, and shader programs running on the GPU. Most of the heavy number crunching is handled by the shader programs and the CPU acts mostly as the control plane, handling resource allocation and dispatching data and work items to the GPU.

Early versions of OpenGL predated programmable GPUs and had no concept of shader programs. Instead, various GPU mode registers were programmed to set up the rendering pipeline, which was then fed vertex and texture data to render. Modern OpenGL largely abandons that beginner-friendly immediate mode paradigm and requires you to provide shader programs and do various other setup work to get anything on screen; you can't simply make a DrawTriangle kind of function call on the CPU. Newer APIs like Vulkan strip away even more abstractions.
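To give a sense of how much of that setup is now the application's job, the modern-GL path to a single triangle is roughly the sequence below (a sketch: the window/context, a GL function loader, and the compiled/linked `program` are all assumed to exist already):

    /* Sketch of the modern-OpenGL sequence for one triangle. */
    void draw_triangle(GLuint program)
    {
        static const float verts[] = {
             0.0f,  0.5f, 0.0f,
            -0.5f, -0.5f, 0.0f,
             0.5f, -0.5f, 0.0f,
        };

        GLuint vao, vbo;
        glGenVertexArrays(1, &vao);            /* container for vertex state */
        glBindVertexArray(vao);

        glGenBuffers(1, &vbo);                 /* upload vertices to GPU memory */
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof verts, verts, GL_STATIC_DRAW);

        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
        glEnableVertexAttribArray(0);

        glUseProgram(program);                 /* your vertex + fragment shaders */
        glDrawArrays(GL_TRIANGLES, 0, 3);      /* finally: three vertices */
    }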


Thanks for the clarification! On "Newer APIs like Vulkan strip away even more abstractions", another comment links to a multi-part article, https://software.intel.com/content/www/us/en/develop/article... , and uuuuh yes indeed it's a lot of setup code to draw a triangle :D


(You're making me remember things I haven't thought about in a long time :) my apologies if some details are off)

The simple definition of a shader is "a shader is a program that runs on the GPU instead of the CPU" (or at least is intended to run on the GPU instead of the CPU; if you write a shader that makes use of a feature your GPU doesn't support, Mesa will actually use LLVM to compile it to run on the CPU instead. Performance will be terrible, but at least the program will run). The compilation is only involved when shaders are involved, though on many graphics cards the driver will implement various apparently-non-shader things as shaders under the hood.

In the early days of OpenGL, it had what we now call a "fixed pipeline", where the GPU isn't very configurable: you feed it geometry and textures, configure a few knobs, and then let it run. You could feed it geometry (either polynomial curves or polygon vertices), and the first stage of the pipeline would approximate all the curves as polygon vertices. The second stage would take that, along with some lighting information and textures you fed it, and render each polygon into individual fragments. The third stage would composite those fragments together into the framebuffer, which would then presumably be displayed on your screen. You didn't have to do things exactly that way, but it was a set of large-ish "blocks" that weren't terribly configurable.

And then very creative people would say "I wish I could change this little aspect of how it processes vertices" or "... of how it processes fragments". And so some hardware vendor would implement another knob on their GPU to configure a specific aspect of one of the stages in the pipeline, make up their own OpenGL extension to configure that knob, and if the OpenGL Architecture Review Board liked that extension, it might become a core part of the next minor release of OpenGL. So it progressed toward a "configurable pipeline".

Then they ended up with enough knobs that they said "Let's just let people write code that will run on the GPU to process the vertices and fragments, instead of adding more and more knobs." So in OpenGL 2.0, we got vertex shaders and fragment shaders. The OpenGL spec specified a C-like language, GLSL, which the graphics card driver would compile into code that runs directly on the GPU. So you could run your own "vertex shader" instead of using the vertex-processing behavior that was baked into the graphics card in OpenGL 1. You could write the original baked-in behavior as a shader, and that's what happens under the hood in the drivers for most modern graphics cards. So now we have a "programmable pipeline". At this point they were called shaders because largely what they let you accomplish was, well, fancier shading.
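That "write the old baked-in behavior as a shader" idea is quite literal; in old-style GLSL the core of the fixed-function vertex stage boils down to something like this sketch using the legacy built-ins:

    /* Sketch: a GLSL 1.20-era vertex shader reproducing the heart of the
       old fixed-function vertex transform via the legacy built-ins. */
    static const char *fixed_function_style_vs =
        "#version 120\n"
        "void main() {\n"
        "    /* what OpenGL 1.x did for you: modelview-projection transform */\n"
        "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
        "    /* pass the fixed-function color and texcoord through */\n"
        "    gl_FrontColor = gl_Color;\n"
        "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
        "}\n";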

And then very creative people wanted to be able to specify the behavior at more parts of the pipeline, and so OpenGL grew geometry shaders, tessellation shaders, and so on. That's the general direction OpenGL has been going: carving out more and more pieces of what used to be "part of" OpenGL and letting you replace them with your own shader code. And then we even got compute shaders, which don't have anything to do with graphics at all, but let you run general-purpose computation on the GPU instead of the CPU. So the word "shader" is a bit of a misnomer these days.

I got a lot out of CS 334 with Voicu Popescu at Purdue, but I'm not sure if much course material from that is online. Also, I find the OpenGL specs to be surprisingly approachable, but I spend a lot of time reading specs so maybe that's just me.


Totally agreed that the word "shader" was bringing confusion! Thanks a lot for the context and references.


The kernel mediates access to the GPU, isolating processes from each other, but it doesn't abstract over the hardware much or provide a uniform API for wildly different GPUs. The kernel exposes more-or-less the low-level API of the GPU chip itself. So an application that wants to use the GPU without tying itself to a specific GPU will use the OpenGL API/ABI and link against libGL; when installing drivers you'd install the appropriate libGL for whichever graphics card you have, and libGL would implement OpenGL on top of the low-level API exposed by the in-kernel driver for that GPU. (The assumption that most platforms will work this way, not just the Linux kernel, is baked into the spec of OpenGL.)

On Windows (at least back in the days when I was kinda familiar with Windows), that's pretty much how it really worked: when you installed the drivers for your GPU, a large part of that was just it plopping in its own libGL. A growing part of libGL is actually hardware-neutral OpenGL-related code, and most hardware vendors hold their libGL close to their chest because they're competing on optimizing the hardware-neutral OpenGL code in their libGLs almost as much as they're competing on actual hardware. But in the FOSS world, where the code isn't a secret, all that hardware-neutral OpenGL code can actually be shared between the libGLs for different GPUs, so we've ended up with pretty much just one libGL that supports all the GPUs: Mesa. So Mesa is both the graphics card driver and the hardware-neutral part of OpenGL.

Now, as much as I would like it to be, GNU/Linux isn't just the "FOSS world"; we still have proprietary things here. For many years you had to choose between Mesa and your hardware vendor's proprietary libGL, most notably NVIDIA's. Eventually NVIDIA realized that it was problematic to ask users to remove Mesa, especially with the growing occurrence of multi-GPU systems. So NVIDIA coordinated with Mesa to create libglvnd (the GL Vendor-Neutral Dispatch library), which can stand in as the libGL but is really a thin shim that dispatches to the "real" libGL implementation (NVIDIA's or Mesa's) as appropriate, so that they can be installed side-by-side.


Your statement about how the kernel only mediates access to the GPU is exactly right. GPU drivers have long followed an "exokernel" approach: the kernel just ensures that access to the hardware is multiplexed with enforced security boundaries, everything else -- the actual abstracting over hardware differences -- is done by userspace libraries.

It's somewhat amusing to me that the people who discuss exokernels rarely talk about GPUs, even though that's the one area of computing where exokernel ideas are universally deployed today.


I'm interested in using this to test piet-gpu on single board computers, as I think having super performant 2D graphics on low-ish end hardware is interesting.

Lazy HN: does this support Vulkan 1.2? Descriptor indexing? Memory model? Subgroup size control?

Will it work on a Pine RockPro64? If so, what's the least-hassle way to get up-to-date drivers? If someone is interested in helping get piet-gpu running on the hardware, let me know.


The RockPro64 is Midgard, not Bifrost. It's been supported for quite a while.

Edit: I see you are specifically asking about Vulkan. I don't think Vulkan is supported in Panfrost right now, just OpenGL.


Without complete vertical integration, this is what makes me wonder about the fracturing of software ecosystems each time a new SoC is released. If Google can't avoid it, how can Microsoft and other platforms sidestep it, other than by throwing tremendous amounts of resources at the problem? No matter what, developers will have to keep updating for these changes for their software to stay compatible, correct? This is one of the reasons x86 is still a great platform for backwards compatibility.


Windows is easy because it's a non-modifiable binary image. You build a computer that either runs stock Windows or it doesn't run Windows at all. Your UEFI has to provide boot support for all the stuff in your SoC and you also have to provide WHQL drivers.


Well, this has always been the problem with graphics. The situation with Mesa is not perfect, but it's strictly better than it used to be.



