From scratch OpenGL and shaders with raw Xlib (hereket.com)
97 points by libcheet 6 months ago | 42 comments



It should be noted that Xlib has been a compatibility wrapper around XCB – the "modern" X11 client library replacement – for about 15 years now, so it might make more sense to just target that.

(The only thing off the top of my head that absolutely requires Xlib is some niche Vulkan stuff; e.g. VK_EXT_acquire_xlib_display unfortunately has no XCB analogue.)


It should also be noted that in general, Xlib can do a lot more than XCB with a /lot/ less effort and fewer bugs, and has actual documentation, tutorials, and books.

Xlib certainly is quirky, in particular around error handling, though.


It's been a very long while since I played with any of this, and I don't want to lean too much on documentation that is widely understood to be lacking, but aren't Xlib and XCB still in a very weird situation in which each depends on the other for different reasons? In this specific case, at least multiple years ago, GLX wouldn't play nice without Xlib, if it was possible at all without a lot of pain. I don't expect it to have changed, for obvious reasons: XCB was never completed and interest in X is slowly waning.

https://xcb.freedesktop.org/opengl/#index3h1


These days you can use EGL to get an OpenGL context in a pure XCB application without needing to use Xlib.
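
For the record, a minimal sketch of what that looks like, assuming the EGL_EXT_platform_xcb extension (Mesa ships it) and an EGL 1.5 eglGetPlatformDisplay entry point:

    #include <stdlib.h>
    #include <xcb/xcb.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    int main(void) {
        /* Plain XCB connection, no Xlib anywhere. */
        xcb_connection_t *conn = xcb_connect(NULL, NULL);
        if (xcb_connection_has_error(conn)) return EXIT_FAILURE;

        /* Hand the XCB connection straight to EGL. */
        EGLDisplay dpy = eglGetPlatformDisplay(EGL_PLATFORM_XCB_EXT, conn, NULL);
        eglInitialize(dpy, NULL, NULL);
        eglBindAPI(EGL_OPENGL_API);

        const EGLint cfg_attribs[] = {
            EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
            EGL_NONE
        };
        EGLConfig cfg;
        EGLint n = 0;
        eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
        /* ... create an xcb window, wrap it with eglCreateWindowSurface(),
           eglMakeCurrent(), and start issuing GL calls ... */

        eglDestroyContext(dpy, ctx);
        eglTerminate(dpy);
        xcb_disconnect(conn);
        return EXIT_SUCCESS;
    }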


Last I checked (which was years ago, admittedly) the GLX libraries still required Xlib. There were XCB functions for GLX, but they didn't work with the client-side libraries, so you had to use Xlib.

Of course, you should just skip GLX nowadays and use EGL instead, which I'm pretty sure does work with XCB.


I haven't done anything with X11 in a few years, but last I looked it made no sense to do anything with XCB because its documentation is half complete, whereas Xlib has everything you might need documented.


Otoh, Xlib's approach to networking is kind of wrong on so many levels.

> Calls that don't require a response from the X server are queued in a buffer to be sent as a batch of requests to the server. Those that require a response flush all the buffered requests and then block until the response is received. [1]

X11, the protocol, is a distributed-systems protocol built on asynchronous message passing that outputs to the screen as a side effect. Xlib hides the nature of the underlying protocol and makes it very hard to pipeline things that should be pipelined for maximum performance. XCB separates sending a request from waiting for its response, so if you absolutely need the response before you continue, you can do that, but you can also request many things and then wait only when you need the results.

[1] https://www.x.org/wiki/guide/xlib-and-xcb/
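
To make the difference concrete, here's a rough sketch of the XCB cookie/reply pattern (using xcb_get_geometry as an arbitrary example request): all requests go out first, and the whole batch costs roughly one round trip, whereas the equivalent Xlib call (XGetGeometry) blocks once per window.

    #include <stdlib.h>
    #include <xcb/xcb.h>

    /* Query the geometry of up to 64 windows with a single round trip. */
    void query_geometries(xcb_connection_t *c, const xcb_drawable_t *wins, int n) {
        xcb_get_geometry_cookie_t cookies[64];
        if (n > 64) n = 64;

        /* Phase 1: fire off all requests; nothing blocks here. */
        for (int i = 0; i < n; i++)
            cookies[i] = xcb_get_geometry(c, wins[i]);

        /* Phase 2: collect the replies. */
        for (int i = 0; i < n; i++) {
            xcb_get_geometry_reply_t *r = xcb_get_geometry_reply(c, cookies[i], NULL);
            if (r) {
                /* use r->width, r->height, ... */
                free(r);
            }
        }
    }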


aww, I thought this was going to be about implementing OpenGL functionality by directly interacting with the GPU and X.


I don’t think that exists (I sure would like for it to), but until it does you could amuse yourself with:

- A 500-line (non-OpenGL-compatible) software rasterizer: https://github.com/ssloy/tinyrenderer/wiki.

- A “hello Wayland” app written in C without libwayland or anything else: https://gaultier.github.io/blog/wayland_from_scratch.html.

- A “hello X11” app written in x86-64 assembly(!) without libX11, libxcb, or anything else: https://gaultier.github.io/blog/x11_x64.html.


You’re looking for driver programming.

IIRC OpenGL is implemented at the driver level.

I’m sure you could find a (potentially dated) walkthrough of Mesa’s code if you looked hard enough.


As a fellow OpenGL enjoyer, I think this should really target 4.5 or 4.6. DSA is a game-changer.
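
For anyone who hasn't met DSA (direct state access, core since 4.5): it lets you create and modify objects by name instead of binding them first. A rough sketch, assuming a GL 4.5 context and a loader (glad, GLEW, ...) that has resolved the function pointers:

    /* Pre-DSA (GL 3.x style): bind-to-edit. */
    GLuint upload_bind_to_edit(const void *verts, GLsizeiptr nbytes) {
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);   /* must bind to modify */
        glBufferData(GL_ARRAY_BUFFER, nbytes, verts, GL_STATIC_DRAW);
        return vbo;
    }

    /* GL 4.5 DSA: create and fill the buffer without touching binding state. */
    GLuint upload_dsa(const void *verts, GLsizeiptr nbytes) {
        GLuint vbo;
        glCreateBuffers(1, &vbo);
        glNamedBufferData(vbo, nbytes, verts, GL_STATIC_DRAW);
        return vbo;
    }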


Honestly it's hard to make the case for learning OpenGL at all at this point, unless you need to maintain legacy code. It's not a good API, and it's a technical dead end, with Khronos no longer developing it and Apple having abandoned it even earlier. While Vulkan has the vertical-learning-curve problem, there are now native WebGPU libraries providing a sane middle ground that follows modern principles without getting bogged down in minutiae the way Vulkan does.


Then again, it's hard to make a case for not learning OGL. You still need to learn all the concepts whether you'd end up using Vulkan or Metal or whatnot.

At worst you'd end up having to use an OpenGL middleware layer between you and the lower-level API. Like yes, OpenGL is crufty. It's also just not good in many ways, but in many ways it's simple. The lessons learnt from it also informed Khronos's design of Vulkan, so even for that, it's nice to be able to see just where some of the design decisions come from.

Also, not everyone develops for Apple and so their deprecation is less compelling than, say, Khronos officially stopping development on the API ;)


Part of the problem is that many of the concepts OpenGL teaches you have no bearing on how modern hardware actually works, so you end up having to unlearn bad habits which OpenGL's messy abstractions enable. OpenGL won't teach you to think in terms of PSOs, for example.

> Also, not everyone develops for Apple and so their deprecation is less compelling than, say, Khronos officially stopping development on the API ;)

Have they not stopped? The last major update to OpenGL was six years ago, around the time Vulkan went public. I recall there initially being talk of OpenGL continuing to be developed alongside Vulkan but that just hasn't happened.


> Part of the problem is that many of the concepts OpenGL teaches you have no bearing on how modern hardware actually works

This is definitely true, but newer OpenGL gives you indirect rendering and bindless textures, which are about as good as you can get even in Vulkan in terms of driver overhead.
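
For context, a rough sketch of what indirect rendering looks like on the GL side (GL 4.3+): the per-draw parameters live in a GPU buffer, so one CPU call (or even a compute shader writing that buffer) can drive many draws. Assumes a loader header has been included.

    /* Layout mandated by the GL spec for indirect indexed draws. */
    typedef struct {
        GLuint count;          /* indices per draw */
        GLuint instanceCount;
        GLuint firstIndex;
        GLint  baseVertex;
        GLuint baseInstance;
    } DrawElementsIndirectCommand;

    /* Issue 'ndraws' draws whose parameters are already in 'indirect_buf'.
       Assumes a GL 4.3+ context with a VAO and index buffer bound. */
    void draw_batch(GLuint indirect_buf, GLsizei ndraws) {
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect_buf);
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                    (const void *)0, ndraws, 0);
    }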

Regardless, much like picking a programming language to learn, it doesn’t really matter what you start with. Most concepts transfer over and any graphics programmer will know more than one graphics API anyways.

It’s funny you mention PSOs because there is now an extension to not use them in Vulkan (VK_EXT_shader_object) because it turns out it’s not how the hardware works [1].

[1]: https://github.com/KhronosGroup/Vulkan-Docs/blob/main/propos...


> Part of the problem is that many of the concepts OpenGL teaches you have no bearing on how modern hardware actually works, so you end up having to unlearn bad habits which OpenGL's messy abstractions enable. OpenGL won't teach you to think in terms of PSOs, for example.

While this is true, for somebody who is starting from scratch there is a lot to learn before getting to the level at which thinking in terms of PSOs is important, and it can be easier to get there via OpenGL, which btw still teaches you a decent chunk of GPU-friendly patterns (assuming of course we are talking about "modern" OpenGL and not display lists and such...). Also, with a good command of OpenGL, one can start trying to understand and re-implement rendering techniques spanning deferred/forward+/clustered lighting, the various shadowing techniques, and even HW raytracing, e.g. via the GLSL_NV_ray_tracing extension, which is - in my opinion - the more important side of learning GPU-accelerated rendering.


What I think is important about this approach is that when OpenGL's cruft appears, it appears in a technical context. That provides the opportunity to develop a better technical understanding of what is going on with the project and its goals.


> Part of the problem is that many of the concepts OpenGL teaches you have no bearing on how modern hardware actually works, so you end up having to unlearn bad habits which OpenGL's messy abstractions enable. OpenGL won't teach you to think in terms of PSOs, for example.

Well sure, but as the other commenter pointed out, there's a lot of stuff you'd have to learn about how GPUs and 3D rendering with them actually work before ever getting to concepts such as PSOs. Just think of the kind of thinking you'd have to go through to get a handle on things such as how shaders work, how to pack data so that it can be used efficiently from a shader, uploading textures, and so on. And for these kinds of things, OpenGL is as good a learning platform as any.
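
As a concrete example of the kind of lesson meant here, a sketch of packing interleaved vertex data and describing it attribute by attribute (classic GL, nothing fancy; assumes a loader header and that a VAO and the VBO holding a Vertex[] are bound):

    #include <stddef.h>   /* offsetof */

    typedef struct {
        float pos[3];
        float normal[3];
        float uv[2];
    } Vertex;

    void describe_vertex_format(void) {
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                              (void *)offsetof(Vertex, pos));
        glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                              (void *)offsetof(Vertex, normal));
        glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                              (void *)offsetof(Vertex, uv));
        glEnableVertexAttribArray(0);
        glEnableVertexAttribArray(1);
        glEnableVertexAttribArray(2);
    }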

> Have they not stopped? The last major update to OpenGL was six years ago, around the time Vulkan went public. I recall there initially being talk of OpenGL continuing to be developed alongside Vulkan but that just hasn't happened.

That's what I'm saying tho. Khronos has stopped OpenGL development, which is a far more pertinent and compelling reason not to use the API (aside from cases where you're targeting more "legacy" hardware that doesn't support Vulkan, like you might if you're developing a Wayland compositor, for example, where you might still like some 3D hardware acceleration to complement hardware planes), than the idea that because Apple doesn't support it, it shouldn't be used.

OpenGL was, after all, always a second-class citizen on Apple. Even back in the OpenGL 3 days, Apple got stuck on OpenGL 2 for whatever reason.

Oh, and at least with Mesa's Zink[0], you can absolutely use OpenGL even if your hardware natively supports only Vulkan. That's not a problem.

[0]: <https://docs.mesa3d.org/drivers/zink.html>


I am still a novice in this world of graphics programming. Not exactly only graphics, but VFX and digital sculpting.

Isn't OpenGL a good starting point if you want to transition to Metal or Vulkan later on?

If not, should I seek out the "new flavor" to learn graphics?


If your goal is to learn Metal, then just start with Metal. It's a significantly better-designed API. On top of that, if you're developing on an Apple device then you'll have a decent debugging experience with Metal, as opposed to OpenGL where you'll get... nothing last I checked (but it's been a while).


The only problem is that WebGPU itself (as a desktop API) is outright horrible. For someone coming from OpenGL - with all its warts - the sheer bloat and confusion over there is a downgrade. Just see the whole fiasco around WGSL: a shading language that is kind of like SPIR-V but also not really, because it's theoretically human-writable; the syntax is worse than anything the C++ committee could dream up; etc.

I don't think there is any good replacement for OpenGL as of now (2024). I hoped there would be higher-level Vulkan wrappers coming out to bridge the gap but even AMD's attempt (V-EZ) got abandoned fairly soon.


FWIW the native implementations do let you opt-out of the WGSL fiasco by ingesting SPIR-V instead. From what I gather Apple were the ones who lobbied for a new shading language, so Google and Mozilla kept the door open to existing languages in their implementations.


What are you confused about? I'd perhaps say that you'd have the same confusion with any other API. Your confusion is perhaps about unlearning OpenGL rather than learning WebGPU.


I taught OpenGL for several years; we did C++ and OpenGL, and then, during covid, we switched to web solutions. What a pain.

Not WebGPU or WebGL per se, but teaching engineers not familiar with JavaScript or web development (local servers, CORS, etc.) adds a whole new level of difficulty.


"WebGPU" is a misnomer, one nice thing about it is that it's not actually married to the web. Dawn, the C++ implementation used in Chrome, and wgpu, the Rust implementation used in Firefox, are both standalone libraries that you can link into a native application and use their portable abstraction without touching Javascript or Electron or any other web tech.


It is maintained by the W3C, it's not used extensively by the businesses that will employ the students, and it's easy to go from OpenGL to Vulkan anyway. I can reconsider it and happily use it if I get more arguments in its favor, but from what I see it doesn't seem groundbreaking, and it adds a layer of complexity compared to OpenGL 4.x without really improving the learning curve or the capabilities.


It is driven by web browsers' requirements; anything beyond that is non-portable extensions.


I keep seeing WebGPU being pitched as a successor to OpenGL. However, as someone who's used it, it's not yet the obvious choice. The native API is still not stable. Meanwhile we have things like bgfx and Diligent that do the same things but are already mature.

Maybe WebGPU will win because Apple, Google and Microsoft will have to support it well, but today it's not even available in all browsers. And when it is released it will be missing features the other middleware solutions already have, so we'll have to wait for WebGPU Next to get them.


How is it a bad API? OpenGL 4.5+ is almost exactly the same as Metal or Vulkan (you just don't have to manage buffers). It's not going anywhere, either, as thousands of games rely on it.


OpenGL does a lot of work you'd otherwise have to do manually in Vulkan. That's why it's beloved. I'm using it to write a game currently. It's really half an engine. The danger of deprecation is there, though. The API itself is fine IMO, with DSA of course. Without DSA it's a shitty state machine.


We'll always have Zink, my man.


The way to go on native platforms is via middleware, WebGPU will always be constrained by the Web.

If you bring extensions to WebGPU for native into the picture, then it is no better than the spaghetti way of dealing with extensions in OpenGL and Vulkan.


If you ship an application using a native WebGPU library then you're in control of which implementation you use, unlike native APIs where you're at the mercy of the platform. It doesn't particularly matter if you use an extension that exists on Dawn but not on wgpu, if every build of your app comes packaged with Dawn.


Just like any middleware engine, no added benefit, with less capable tooling.


The benefit of wgpu is that you don't have to mess with synchronization the way you do in Vulkan and DX12. This is an enormous productivity booster.
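
To illustrate what "mess with synchronization" means in practice, here's a rough sketch of the explicit barrier Vulkan wants before sampling an image you just copied into. This is exactly the bookkeeping that wgpu/WebGPU tracks for you behind the scenes.

    #include <vulkan/vulkan.h>

    /* Transition an image from "just written by a transfer" to
       "readable in a fragment shader". */
    void transition_for_sampling(VkCommandBuffer cmd, VkImage image) {
        VkImageMemoryBarrier barrier = {
            .sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
            .srcAccessMask       = VK_ACCESS_TRANSFER_WRITE_BIT,
            .dstAccessMask       = VK_ACCESS_SHADER_READ_BIT,
            .oldLayout           = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
            .newLayout           = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
            .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
            .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
            .image               = image,
            .subresourceRange    = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
        };
        vkCmdPipelineBarrier(cmd,
                             VK_PIPELINE_STAGE_TRANSFER_BIT,
                             VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                             0,            /* dependency flags */
                             0, NULL,      /* memory barriers */
                             0, NULL,      /* buffer barriers */
                             1, &barrier); /* image barriers */
    }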

Besides, if you pick one native API (say, Vulkan), then you're still going to go through translation layers on the platforms it isn't native to, so wgpu isn't any different. Or you can write multiple backends for different platforms, which multiplies the amount of work you have to do. Either way, saying that wgpu gives you "no added benefit" is silly.


Just like middleware engines, with much better tooling.


One might argue that getting bogged down in minutiae is not the hallmark of a good API either. Vulkan is not available on Apple HW and Metal is not available elsewhere, while OpenGL is at least somewhat portable.


That's why I'm saying that people new to graphics should probably start with Dawn or wgpu nowadays. They follow the same shape as the modern native APIs, but in a more streamlined manner, and are portable to all of the major platforms without relying on Apple's long-abandoned OpenGL implementation, which is missing foundational features like compute shaders. If you go down the Rust route with wgpu, it doesn't even require delving into unsafe code.

From that starting point there's a natural progression to tackling raw Vulkan, D3D12 or Metal if you outgrow what the WebGPU libraries are capable of. If you start from OpenGL instead then you have to unlearn all the nonsensical abstractions it teaches you.


Portable only to a certain extent: the moment you start juggling extensions it can be a completely different API for all practical purposes, depending on how many extensions one needs to take care of, the different semantics across them, and similar-but-different extensions from different vendors.

Also, OpenGL never really quite made it onto game consoles, only in variants that aren't fully compatible, so if those are a target, one already needs to handle multiple APIs anyway.


The Nintendo Switch has a full GL4.6 implementation, and has Vulkan too. But that's an anomaly in the console world. (SW stack provided by NVIDIA)


It has, but you forgot to mention that the main API, the one that actually exposes everything, is NVN.


Vulkan is available via MoltenVK.

If you have a sufficiently low-level API that works similarly, then the benefit of two low-level APIs is that relatively direct translations are possible.



