Hacker News
Leveraging Rust and the GPU to render user interfaces at 120 FPS (zed.dev)
259 points by rck on March 9, 2023 | 203 comments



"Inspired by the gaming world, we realized that the only way to achieve the performance we needed was to build our own UI framework"

I'm surprised you did not look at "Dear ImGui", "Noesis", and "JUCE". All three of them are heavily used in gaming, are rather clean C++, use full GPU acceleration, and have visual editors available. JUCE especially is used for A LOT of hard-realtime professional audio applications.

"When we started building Zed, arbitrary 2D graphics rendering on the GPU was still very much a research project."

What are you talking about? JUCE has had GPU-accelerated spline shapes and SVG animations since 2012?

BTW, I like the explanations for how they use SDFs for rendering basic primitives. But that technique looks an awful lot like the 2018 GPU renderer from KiCad ;) And lastly, that glyph atlas for font rendering is only 1 channel? KiCad uses a technique using RGB for gradients so that the rendered glyphs can be anti-aliased without accidentally rounding sharp corners. Overall, this reads to me like they did not do much research before starting, which is totally OK, but then they shouldn't say stuff like "did not exist" or "was still a research project".
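(For anyone who hasn't seen the SDF trick: below is a minimal Rust sketch of the classic rounded-rectangle signed distance function that such renderers evaluate per pixel, usually in a fragment shader. The function and test values are illustrative only, not taken from Zed or KiCad.)

    // Signed distance from point `p` to a rectangle of half-size `half_size`
    // with corner radius `radius`, centered at the origin.
    // Negative inside, positive outside; the magnitude near zero is what the
    // shader turns into anti-aliased coverage.
    fn sd_rounded_rect(p: (f32, f32), half_size: (f32, f32), radius: f32) -> f32 {
        let qx = p.0.abs() - half_size.0 + radius;
        let qy = p.1.abs() - half_size.1 + radius;
        let outside = (qx.max(0.0).powi(2) + qy.max(0.0).powi(2)).sqrt();
        let inside = qx.max(qy).min(0.0);
        outside + inside - radius
    }

    fn main() {
        // A 200x100 rect with 10 px corners: a point 2 px inside the top edge.
        println!("{}", sd_rounded_rect((0.0, 48.0), (100.0, 50.0), 10.0)); // ~ -2.0
    }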


When we talk about 2D graphics as a research problem, we're talking about native rendering of splines and strokes. JUCE does not have GPU-accelerated splines, it flattens the path to lines and rasterizes the coverage area into a texture that then gets uploaded to the GPU:

https://github.com/juce-framework/JUCE/blob/2b16c1b94c90d0db...

https://github.com/juce-framework/JUCE/blob/2b16c1b94c90d0db...

It also does stroke handling on the CPU:

https://github.com/juce-framework/JUCE/blob/2b16c1b94c90d0db...

Basically, this isn't really "GPU accelerated splines". It's a CPU coverage rasterizer with compositing handled by the GPU.


You linked to the software fallback renderer which can be used for cross-platform compatibility. But JUCE also has platform-specific rendering modules.

CoreGraphicsContext::createPath will convert the CPU spline segments to CG spline segments which are then rasterized by CoreGraphics using Metal on the GPU.

https://github.com/juce-framework/JUCE/blob/2b16c1b94c90d0db...

And on Windows 7 and up it'll use the hardware-accelerated Direct2D APIs:

https://github.com/juce-framework/JUCE/blob/2b16c1b94c90d0db...


You mentioned using the OpenGL context, and this code is used by the OpenGL context.

CoreGraphics does not use the GPU.

Direct2D uses an approach that tessellates paths into triangles on the CPU. It is similar to the JUCE code in that the GPU is used for coverage, but it still does not natively render splines on the GPU.


JUCE has a CoreGraphicsMetalLayerRenderer which I believe uses Metal to render CoreGraphics primitives.

And yes, I agree that JUCE will tessellate the splines into straight lines on the CPU. So technically, they are not fully rendered on the GPU. But in practice, the slow rasterization is done on the GPU while tessellation only has a negligible performance overhead. For example, I heard that UE4->UE5 removed the GPU tessellation support because it's usually easier to just DMA a new mesh from the CPU on-demand. I would treat that as a strong argument that CPU tessellation is practically never the bottleneck.


> JUCE has a CoreGraphicsMetalLayerRenderer which I believe uses Metal to render CoreGraphics primitives.

This class is part of a JUCE demo app, and you can read the source code to it if you want. [0] It uses CoreGraphics to render the graphics on the CPU, and then uploads the result as a texture to the GPU so it can be presented in a CAMetalLayer. So, no, the graphics are still rendered on the CPU, with compositing being handled on the GPU.

I don't know why you're confidently saying something incorrect, it took me all of 2 minutes to find the source code and read it.

> For example, I heard that UE4->UE5 removed the GPU tesselation support

I know it's confusing, but GPU tessellation is a completely different thing. The word "tessellation" in graphics means "turn into triangles". In a 2D graphics context, we're turning splines and curves and 2D shapes into triangles. In a 3D graphics context, GPU tessellation refers to a control cage mesh which is adaptively subdivided. These two have nothing in common except that triangles come out the other side. I am not aware of anyone who has tried to use GPU tessellation to render 2D graphics.

GPU tessellation failed for a large number of reasons, but poor performance was one of them. So, you know, doing this sort of work efficiently on the GPU is still an open research problem. Just because it's not efficient to do it on the GPU does not mean the performance overhead is negligible. For rendering big complex vector graphics, tessellation overhead can easily outweigh rasterization overhead.

[0] https://github.com/juce-framework/JUCE/blob/4e68af7fde8a0a64...
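To make the 2D meaning of "tessellation" concrete, here's a rough Rust sketch of the CPU-side flattening step being described: a quadratic Bézier is approximated by line segments, and only those straight edges (later triangulated) ever reach the GPU. Uniform subdivision is used purely for brevity; real tessellators subdivide adaptively to a tolerance.

    #[derive(Clone, Copy, Debug)]
    struct Pt { x: f32, y: f32 }

    // Flatten a quadratic Bézier (p0, p1, p2) into `n` line segments.
    fn flatten_quadratic(p0: Pt, p1: Pt, p2: Pt, n: usize) -> Vec<Pt> {
        (0..=n)
            .map(|i| {
                let t = i as f32 / n as f32;
                let u = 1.0 - t;
                Pt {
                    x: u * u * p0.x + 2.0 * u * t * p1.x + t * t * p2.x,
                    y: u * u * p0.y + 2.0 * u * t * p1.y + t * t * p2.y,
                }
            })
            .collect()
    }

    fn main() {
        let polyline = flatten_quadratic(
            Pt { x: 0.0, y: 0.0 },
            Pt { x: 50.0, y: 100.0 },
            Pt { x: 100.0, y: 0.0 },
            8,
        );
        println!("{polyline:?}"); // straight segments approximating the curve
    }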


> I don't know why you're confidently saying something incorrect, it took me all of 2 minutes to find the source code and read it.

I'd say we just have different definitions of GPU-accelerated. Your position appears to be "purist" in the sense that almost all processing needs to happen on the GPU. But for me, if mostly static data is generated on the CPU once and then cached on the GPU and then processed on the GPU in all performance-critical contexts, then I would call that "GPU-accelerated" even if parts of the data preparation happened on CPU. That's also why I brought up the UE. They prepare the data on the CPU, then send it over to the GPU so that rendering in the real-time loop can happen only on the GPU. But the fact that the CPU prepared the data does not make it not-"GPU-accelerated" for me.

That said, can we agree that the ID2D1GeometrySink that JUCE uses on Windows is GPU-accelerated? Surely the Direct2D/DirectX drivers will do stuff on the CPU, but Microsoft says "Direct2D is a hardware-accelerated, immediate-mode 2-D graphics API" and they show very smoothly antialiased splines here while mentioning "Multisample Antialiasing", a GPU feature:

https://learn.microsoft.com/en-us/windows/win32/direct2d/dir...


The absolutism of some of their statements when we have 30 years of GPU research at our disposal is pretty eye-opening. I get that maybe this stuff didn't exist as a crate for Rust but c'mon! Splines and shapes, glyph rendering, I wrote a game engine in C# back in 2007 that did all these things, and more. I like the explanation and the breakdown of SDFs for primitives, but they are standing on the shoulders of giants and act like they're on an island.


I agree with your final statement and the seeming lack of research, but are you sure your characterization of JUCE is accurate?

You’d surprise a lot of people if you were right about it using GPU acceleration for its UI framework. It does have an OpenGL container that fits into its view object hierarchy if you want to write your own accelerated component, but the rest of the UI is pretty much standard event-loop -> platform-API dispatch. It’s accelerated where the underlying platform-API is accelerated but not in any kind of explicit or fine-tuned way. The focus has always been more on portability than performance, even on the audio side.

(And of course the high-priority audio processing runs in an entirely segregated context from the UI, so the performance characteristics of the two pieces are decoupled anyway.)


>I agree with your final statement and the seeming lack of research, but are you sure your characterization of JUCE is accurate?

That JUCE is "heavily used in gaming", for starters, is wildly inaccurate.


I personally know multiple people using it in their games and game-related tooling and Roli (the company selling JUCE) even has an official Unreal Engine 4 plugin. So to me, it appears to be widely used.


Some people will use any available lib for games; that doesn't mean it's "widely used". As for such a plugin, it's more like wishful thinking about getting gaming customers than something that actually sells in any amount worth writing home about.

The kind of games the tiny number of people using JUCE make is like this: https://ankuznetsov.wixsite.com/jucegames and even those are few and far between.

In any case, nowhere even close to "widely used" in gaming. Nor moderately used, or even in low use.

"Few and far between users of JUCE for games, and even at that almost all of them using it for hobby casual game projects" is an accurate description.

JUCE does dominate in audio VST/AU fx/instruments GUIs though...


JUCE is sold by Raw Material Software, which was purchased by PACE Anti-piracy, which is part of Avid.

There may be some people you know using JUCE for games, but it’s an odd choice.

Trying to figure out what you might mean, JUCE was involved with the BLOCKS SDK for Roli’s hardware peripherals, but again, that’s not really about gaming and definitely not about any familiar kind of gaming.

That said, I’m sure you’re relating the best information you have and very likely just transposed some detail by accident and are now caught in a relentless, exhausting online nitpick. We’ve all been there. If you figure out what it was, I’d love to hear it!


> Roli (the company selling JUCE)

I know it's tangential to the point that you're making, but JUCE was acquired by PACE in 2020.


Just checked, and even without any additional setup JUCE is using CoreGraphics, which is using Metal under the hood. So yes, the platform-specific renderer is using the GPU.

Also, you can use OpenGL for GUI compositing, too, not only as a 3D context.


CoreGraphics doesn't use Metal. You're confusing it with CoreAnimation.


JUCE has a CoreGraphicsMetalLayerRenderer which I understood as using Metal to render CoreGraphics paths.

https://github.com/juce-framework/JUCE/blob/4e68af7fde8a0a64...


Incorrect. Please read the code more carefully.

It uses CoreGraphics to render into a bitmap on the CPU, which is then uploaded into a Metal texture.


I concur.


CoreGraphics is a CPU rasterizer. It doesn't use the GPU.


Maybe you need to add an "(in Rust)" to all these sentences? Sure there are C++ frameworks, but they probably wanted a pure Rust UI framework?

My 2 cents: it's nice to have smooth rendering in your editor, but I'm currently mostly using a Java-based IDE (IDEA family), and it's responsive enough for my taste. If I were to use the current prototype of their editor, I'm afraid the usage of fixed-width fonts all over the place (which I assume can be fixed, but may also be due to constraints of the UI framework?) would probably bother me more than the 120 FPS UI would impress me (assuming that I had a monitor that was fast enough, which I don't). On the plus side, it sounds like they support sub-pixel rendering - if that's the case, kudos to them!


Hey rob74! Antonio here, author of the post.

Zed is not constrained to use fixed-width fonts and supports all kinds of fonts, monospaced and variable-spaced alike (ligatures and contextual alternates included). Even though we use a glyph atlas, we rely on CoreText for shaping and rasterization (rendering sub pixel variants as well). In practice, this means that text is indistinguishable from a piece of text rendered by the operating system.


Does your framework have any accessibility features? I'm a screen reader user, and having a responsive UI is wonderful as long as it is accessible at all.


Not true, even if you add Rust. egui was released earlier: https://github.com/emilk/egui

And egui isn't tied to Apple proprietary frameworks.


Afaik egui isn't doing any of the fancy GPU-based 2D graphics this blog post is about, though.


Have a look at what the egui creator (and his team) have been doing with Rerun [0]. egui seems well suited to rendering even point clouds with tens of thousands of points using the wgpu backend.

[0] https://www.rerun.io/


Rerun dev here :). Yes, we're using egui with its wgpu backend! As someone else pointed out, it is backend agnostic - it essentially generates triangles & textures in a GPU-friendly way and passes those down to a backend implementation.

All the visualizations like the 3D scenes with point clouds etc. are rendered with our open-source in-house renderer ("re_renderer"), which itself passes wgpu surfaces/commands to the egui wgpu backend, which then composites everything together.


It uses glow (GL on whatever), so OpenGL where it exists.


That doesn't mean it's doing any kind of GPU acceleration of 2D graphics.

Given that they don't talk about it in the README, I'm pretty sure they don't.


I use egui. The library itself is agnostic but relies on backend libraries for rendering, all of which (or at least the official ones) render on the GPU.


> KiCad uses a technique using RGB for gradients so that the rendered glyphs can be anti-aliased without accidentally rounding sharp corners.

This is known as a “multi-channel signed distance field”, or “msdf”.

https://github.com/Chlumsky/msdfgen
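A rough Rust sketch of the decoding side of that technique, for the curious: the three channels store three overlapping distance fields, and taking their per-texel median preserves sharp corners that a single-channel SDF would round off. The "pixel range" scaling follows the usual msdfgen convention; the values here are illustrative.

    // Median of three: the core of multi-channel SDF (MSDF) decoding.
    fn median(r: f32, g: f32, b: f32) -> f32 {
        r.min(g).max(r.max(g).min(b))
    }

    // Turn a sampled MSDF texel (distances stored around 0.5) into
    // anti-aliased coverage in [0, 1].
    fn coverage(texel: [f32; 3], screen_px_range: f32) -> f32 {
        let sd = median(texel[0], texel[1], texel[2]);
        ((sd - 0.5) * screen_px_range + 0.5).clamp(0.0, 1.0)
    }

    fn main() {
        // A texel right on an edge (0.5) is half-covered; one clearly inside is opaque.
        println!("{}", coverage([0.5, 0.5, 0.5], 4.0)); // 0.5
        println!("{}", coverage([0.9, 0.8, 0.9], 4.0)); // 1.0
    }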


Do you know any text editor that uses it for font rendering?


I used to use it, but whether or not it works is extremely font- and glyph-sensitive. And if it doesn't work there is no easy fix (or ability to recognise it non-visually).


You would have to use that if you want smooth antialiased zooming in and out of text.

BTW, the KiCad schematics are pretty text-heavy. You typically write down all the parameters and IDs of all the electrical components.


As for text rendering, Slug [0] has existed for much more than ten years, and is pretty much the gold standard for GPU text rendering.

[0] https://sluglibrary.com/


Thanks, I'm reading the paper* linked to on that site explaining the technique used in the library. It's encouraging me more to pursue implementing a 2D renderer on the GPU. I'm also inspired by a recent talk about gkurve**.

* https://jcgt.org/published/0006/02/02/

** https://m.youtube.com/watch?v=QTybQ-5MlrE


Do note that as far as I know, this technique is patent-encumbered, at least in the US.


Thanks. I wonder what the patents cover. With all the work going into 2D rendering on the GPU I'd imagine others discovering similar methods.


ImGui is at least only used for debug rendering, not something that makes it to the end user. At least in the small subset of companies I've worked at.


ImGui is mainly used for debug rendering, but if you browse the screenshot threads, there are quite a few 'end user applications' among them:

https://github.com/ocornut/imgui/issues/5886


No, it's supposed to be used for this purpose.

> Dear ImGui is designed to enable fast iterations and to empower programmers to create content creation tools and visualization / debug tools (as opposed to UI for the average end-user). It favors simplicity and productivity toward this goal and lacks certain features commonly found in more high-level libraries.

It's literally not designed for end user consumption.


ImGui is amazing to work with though. Like holy fuck is it pleasant compared to basically every other UI-development paradigm ever in the history of user interfaces.


Accessibility, a visual designer,... and it isn't even the first; that is how GUIs used to be developed for games on 16-bit home computers.

It is like static linking, it was also there decades ago.

There are reasons why the world has moved on from those approaches.


What's your point? Static linking makes a ton more sense than dynamic linking, except in a few special case niches (like plugin systems).

And OTOH, event-driven UIs have also been around as long as UIs exist, which makes them at least as 'outdated' as immediate mode UIs.


The industry has moved beyond past decisions for a reason, that is the point.

Some of them keep being rediscovered in endless loops of fashion.


> Some of them keep being rediscovered in endless loops of fashion.

The reason for this is often that the environment (mainly the hardware) has changed so much that it may make sense to look into discarded old ideas again. For instance, dynamic linking was extremely important in the age of slow floppy discs and when RAM was counted in kilobytes, but those environmental factors are no longer an issue, and the advantages of static linking outweigh the disadvantages of dynamic linking again.


> The industry has moved beyond past decisions for a reason, that is the point.

And those “reasons” often SUCK; that is the point.


Yes but that's because it's an immediate mode renderer.

It feels intuitive and that's why it's the de facto standard for building debug tooling. But this intuitiveness comes at a price! No matter how hard you try, this will always be slower and more computationally heavy than retained mode.

Great for some stuff, terrible choice for some other.


You're mixing up terms.

Immediate mode rendering and immediate mode GUI are unrelated. Immediate mode GUI libraries generally don't use immediate mode rendering because it's really slow.

Immediate mode GUI tends to be faster in practice because you just naturally end up with less code. Also, it's very easy to write a retained mode UI on top of an immediate mode GUI library.


The term "immediate mode UI" is about the API philosophy, not about how it works under the hood, and especially not how rendering is implemented (e.g. Dear ImGui records a command list for the whole frame which is then passed to the API user for rendering, this is pretty much the opposite of an 'immediate mode renderer').


You are correct. I've also encountered it sometimes in internal business GUI wrappers.


I wonder if this is because the default theme for it is somewhat ugly, and most developers aren't designers to make it look better. It is perfectly capable of rendering standalone applications, if you want it to...


Meanwhile, the gaming world is moving to HTML/CSS/JS for game UI in many cases.


I know of at least one shipping commercial game - in 2023 - that renders the UI using Adobe Flash.


That's super common. It was more common historically, these days it's largely gone away. (Scaleform was the technology that used flash for UI dev)


It's not the first text editor either, Sublime has GPU-accelerated rendering.


Also, looking at their own performance numbers, it appears that all of this only managed to reduce the character insertion latency by 21% when compared to CLion. That's still useful, but maybe not even noticeable to the user.


That latency can’t be render latency, because it’s trivial. It has got to be related to unifying the text buffer with the syntax tree. Otherwise you’ll get wrong syntax highlighting for a couple of frames and that’s jarring.


Google has done a ton of work on GPU accelerated vector graphics too for Android and other platforms. Their skia library is pretty nice: https://skia.org/


Oops I forgot to mention that one. They also have a pretty interesting cross-platform GUI framework using it: https://flutter.dev/


And what was learned from Flutter has produced something I consider even more interesting: Compose UI (https://developer.android.com/jetpack/compose) and JetBrains' Compose Multiplatform (https://www.jetbrains.com/lp/compose-mpp/)


At least as far as dear imgui is concerned, it's an amazing library for dev tools but the accessibility story is dire and it has poor support for high quality typography, so it's a non starter if you're serious about good UI.

This is not to diminish the quality work that went into the library, but I passed over it for those and other reasons. I used nuklear for a bit since its typography was extensible but in the end it was too hard to overcome its other issues, so I use a custom mixed immediate/retained library now with rich text and accessibility support.


> a custom mixed immediate/retained library now with rich text and accessibility support.

I'm always interested in learning about GUI toolkits that have accessibility support. What level of accessibility support does yours have? e.g. does it implement platform accessibility APIs, and if so, on which platforms? Is your library open source? If not, can I see it in action in any commercially available product? Thanks.


I haven't found good platform accessibility API bindings for C#, so right now it just has narration support, adjustable text size/contrast, and full keyboard navigation. I'm hoping that later on in my project's development process I'll have the budget to hire someone to try and fully integrate with the platform APIs - it's designed so it will be possible by pushing updates to the retained model.

The way it works is that it has an immediate mode API (see https://github.com/sq/Libraries/blob/0ca01d949e3df5fabb1440d... for a simple example of the API) and then under the hood it uses a lightweight retained-mode graph, which means that when it's more convenient you can just write more traditional classes and components. In practice most of the UI I've written for my main project is a mix of both models, like for example an editor popup window uses IMGUI-style API while the items in a virtual listbox are represented by a small custom component.

It does have a full imgui-style layout engine under the hood instead of doing retained-mode layout, so it's able to fully re-generate layout from scratch every frame, and to minimize page tearing there is a system where components can request a second relayout pass (typically used for things like text autosize).

Here's a more complex mixed-mode example from one of my development tools: https://gist.github.com/kg/6a6ba42d5019b546858a2b18751de019

Almost all of the on-screen elements in this footage are either immediate-mode or retained-mode UI using the library: https://www.youtube.com/watch?v=ey3FtFWxbhA


I think you might like AccessKit [1], once we figure out the best way to provide reasonably efficient .NET bindings. Basically it's an abstraction over the platform accessibility APIs, written in Rust. And it's specifically designed to be usable in immediate-mode GUIs (the first completed toolkit integration was in egui, a Rust immediate-mode GUI). You push tree updates, which can be full or incremental, and AccessKit retains the accessibility tree.

[1]: https://github.com/AccessKit/accesskit


> But that technique looks an awful lot like the 2018 GPU renderer from KiCad ;) And lastly, that glyph atlas for font rendering is only 1 channel? KiCad uses a technique using RGB for gradients so that the rendered glyphs can be anti-aliased without accidentally rounding sharp corners.

Is this stuff written down somewhere other than the code?

I use KiCad all the time and certainly never noticed this.


> What are you talking about? JUCE has had GPU-accelerated spline shapes and SVG animations since 2012?

I'm not familiar at all with JUCE, but the state of the art in GPU-accelerated 2D graphics has dramatically improved over the past 10 years so I doubt what JUCE did in 2012 is really comparable.


I shipped a JUCE-based app in 2012 and back then it was "good enough" to have GPU-accelerated rendering of SVG icons moving around in realtime with antialiasing. Since we used their OpenGL context, we could even render the GUI inside customers' games for debug visualization.


> KiCad uses a technique using RGB for gradients so that the rendered glyphs can be anti-aliased without accidentally rounding sharp corners

They are not using SDFs for the text, they render the glyphs at the specified font size directly.


I'm not even sure it was wise to use SDFs to draw some shaded rounded rectangles.


I use SDFs for my unified rasterization in my UI library and they definitely are not the optimal method from a performance standpoint. I've had to do a lot of tricky optimizations to get acceptable performance on low-spec hardware.

I mostly stick with them because the flexibility and image quality feel worth it.


The bottleneck of UI is not the rendering. A measly 60 fps is plenty fast for UI that feels immediate. We had this in the 90's with software rendering, you don't need a GPU for that today.

What causes user interfaces to hiccup is that it's too easy to do stuff in the main UI thread. At first it doesn't matter, but stuff does accumulate, and eventually the UI begins to freeze briefly, for example after you press a button. The user interface gets intermingled with the program logic, and the execution of the program will visibly relay its operations to the user.

It would be very much possible to keep the user interface running in its own thread, dedicated to a single CPU as a priority task, updating at vsync rate as soon as there are dirty areas in the window, merely sending UI events to the processing thread, and doing absolutely nothing more. This is closer to how games work: the rendering thread does rendering and there are other, more slow-paced threads running physics, simulation, and game logic at a suitable pace. With games it's obvious, because rendering is hard and it needs to be fast, so anything else that might slow down rendering must be moved away, but UIs shouldn't be any different. An instantly reacting UI feels natural to a human; one that takes its time to act will slow down the brain.

But you don't need a GPU for that.
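A minimal sketch of that separation using plain std channels (the event names are made up for the example): the UI/render side only forwards input and repaints, while the logic side can take as long as it likes without ever freezing the window.

    use std::sync::mpsc;
    use std::thread;

    enum UiEvent {
        KeyPressed(char),
        ButtonClicked(&'static str),
    }

    fn main() {
        let (tx, rx) = mpsc::channel::<UiEvent>();

        // Logic thread: free to be slow; it never blocks painting.
        let logic = thread::spawn(move || {
            for event in rx {
                match event {
                    UiEvent::KeyPressed(c) => println!("logic: handle key {c}"),
                    UiEvent::ButtonClicked(id) => println!("logic: run action {id}"),
                }
            }
        });

        // "UI thread": in a real app this is a vsync-paced loop that repaints
        // dirty regions and does nothing else; here it just forwards two events.
        tx.send(UiEvent::KeyPressed('a')).unwrap();
        tx.send(UiEvent::ButtonClicked("save")).unwrap();
        drop(tx); // closing the channel lets the logic thread exit
        logic.join().unwrap();
    }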


Unfortunately this isn't true anymore when you get to very high resolutions like 8K. Just clearing the framebuffer of an 8K display at 60 Hz requires ~6 GB/s, about 1/10th of the theoretical memory bandwidth of modern desktop processors. Add in compositing for multiple windows and text rendering, plus the fact that none of that is likely to be multi-threaded, and it's pretty clear a CPU has no chance of keeping up.
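(Back-of-the-envelope, assuming a 24-bit framebuffer: 7680 × 4320 pixels × 3 bytes × 60 Hz ≈ 6 GB/s; with 4 bytes per pixel it's closer to 8 GB/s.)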


You are correct, but so is the comment you replied to.

The latency that matters for a text editor is how long it takes for a new keypress to show up. And that's usually a 32x32 pixel area or less.


That's not sufficient, though. You want the 0.1% case where you press undo a couple of times and big edit operations get reversed to be smooth. You have to hit your frame target when a lot is happening, the individual key press case is easy.

It's just like a video game. A consistent 60fps is much better than an average frame rate of 120fps that drops to 15fps when the shooting starts and things get blown up. You spend all the time optimizing for the worst case where all your caches get invalidated at the same time.


In a video game, you'd just blur the shit out of everything else while you grant yourself a 2nd frame to re-render the rest of the content [1]. Or you do temporal remapping like the Oculus does if you rotate your head too quickly.

[1] https://developer.nvidia.com/blog/nvidia-vrss-2-dynamic-fove...


The other latency that matters is how long it takes to draw each frame when scrolling through the code, which is usually most if not all of the framebuffer.


Agree. In that case, you need to cache the result of your CPU rendering in a GPU texture so that you can smoothly scroll it. But you can probably still get away with drawing all UI components on the CPU if you add a little bit of caching.


You aren't talking about the same thing they are and you aren't talking about a scenario that exists. 8k is still exotic and no one is driving any desktop off of a video buffer going from main memory through a CPU.


We've literally got an 8k screen in the office and 4k is becoming increasingly common. Not only is this a scenario that exists it's one I've personally experienced and fixed.


Your 8k screen is run purely off of the CPU drawing to a frame buffer?


But when you do software rendering you don't need to redraw the entire frame buffer, just the parts that changed, which can be very small even at 8k - e.g. the box of a checkbox being switched


Scrolling requires redrawing most if not the whole framebuffer.


With scrolling couldn't you just shift the existing framebuffer up/down and only draw the newly scrolled-to parts?


The GPU buffering model doesn't usually let you read the previous frame like that these days, so you would need a scratch surface. That increases your bandwidth and memory requirements, so shifting doesn't end up being much faster if it's faster at all.


I suspect there are enough edge cases (semi-transparent things jump to mind) that it's unlikely to be lossless, anyway.


You don't need to redraw the entire framebuffer with GPU rendering, either. It's perfectly possible to do small, partial updates.

See for example https://registry.khronos.org/EGL/extensions/EXT/EGL_EXT_buff...

And https://registry.khronos.org/EGL/extensions/KHR/EGL_KHR_part...


It's significantly cheaper to render a fullscreen window on the GPU than it is on the CPU if you're running at 2560x1600 or 4K. To push that many pixels, you have to send significant amounts of data (framebuffers) at 60 FPS to the GPU anyway - which is no small feat and is bound to eat into the battery. It's just more efficient to run on the GPU.


Separating the logic thread from the render thread doesn't end up being the silver bullet to performance. If I press the letter 'a' on my keyboard and my IDE suddenly stalls on a bunch of code hinting logic, that new state is still not going to make it to the render thread for a couple of frames. As long as the render thread is dependent on state coming in from the logic thread, the stalls still propagate.

You could have some wins around scrolling and any animations that you can commit ahead of time.


That's a big reason why VS Code pushed the extensions (for code completion logic, etc) to another process. Where VS proper was in-thread... man, some web projects in VS are just painful.

I do think that they will need to do something similar with higher-level language support for extensions. They could probably piggy-back on the Deno work for using V8/JS/TS for extensions, similar to, and maybe even compatible with, VS Code extensions. It's a massive user space, but seeing the core feature set in another editor could be really nice.

For me the killer feature of VS Code is really the integrated terminal. Not having to switch windows, or leave the app to run a quick command is really big imo.


> A measly 60 fps is plenty fast for UI that feels immediate. We had this in the 90's with software rendering, you don't need a GPU for that today.

No we didn't. We had nothing close to that in the 90s.

Typing was responsive, but that's not anywhere close to 60fps and a very small region to update to boot (120wpm = 600 keys/minute = 10 keys/s, or 10fps). Scrolling benefits from 60fps+, but in the 90s scrolling jumped by lines at a time because it couldn't do anything better.

You need a GPU to keep up with smooth scrolling at modern resolutions. But this also shouldn't be a surprise. What the article talks about is bog standard stuff for all current UI toolkits. It's what mobile platforms like Android have been doing for a decade now.


60 is the new 30.

Was amazed at how good my latest iPhone feels with 120hz scrolling. It's like magic.


I thought moving to a 144 Hz monitor made sense just for gaming, but now on a 60 Hz monitor just moving the mouse on the desktop feels as if the PC is struggling.


Proper development of multithreaded desktop apps and not blocking the UI thread appears to be a lost art. I remember drawing all the threads on a person-wide piece of paper on the wall when I was working on my first commercial WinForms application. It's not exactly hard, but it requires a basic understanding of UX, threading, and the platforms you are working with.

Nowadays I still regularly see applications with (temporarily) frozen windows and I just don't understand how that's possible. When I was developing my winforms apps, anything that would do more than perfectly predictable UI manipulations was run on a background thread (in a task), would be forbidden from starting twice at the same time and only updated the UI when done.


In .NET Core/.NET 5+ I think it even defaults to having awaited Tasks run on a different thread. So long as you're using async/await properly it's almost impossible to screw up (no need to worry about SynchronizationContext etc.), yet I still see tons of examples of people who simply don't understand threading, async/await, etc. and screw it all up


As I don't like async/await (read: I probably don't properly understand it) I just added two extension methods to Control that took a lambda for updating UI stuff (one for Invoke, one for BeginInvoke).

Probably kind of like this: https://stackoverflow.com/a/36107907/2306536

I haven't developed C# for almost a decade, so I probably didn't even have access to async/await back then.


Even if you don't really understand async/await in C#, you can do it mostly right by following just a few rules:

1) Make anything that accesses a resource over the network, in a database, or in the filesystem async. So basically any time you make an HTTP request, execute a SQL query, or open a file etc. Otherwise you should generally stick to synchronous since async code is actually slightly slower than synchronous code. An exception to this might be if you're doing some heavy computation that's going to take a while and don't want it blocking the UI thread

2) Async is viral. If you have a method that calls an async method, you need to make the calling method async too. You need to then make its own calling method async as well all the way back to Main()

3) Never use async void as a return type, with the sole notable exception to this rule being event handlers. Use async Task instead as the return type (or async Task<T> if it returns a value)

4) Always call async code with the "await SomeAsyncFunction()" syntax. Calling it other ways like "SomeAsyncFunction().Result" will eliminate most of the benefits of using async in the first place since it is blocking, can cause deadlocks in some cases, etc.

Follow those 4 rules and you don't really even need to understand how async works under the hood, though personally I'd recommend learning it as it will help you a lot when you run into issues with edge cases.


React Native follows this pattern by moving most JS processing off the main thread, allowing scrolling and other input to happen without blocking for a response from the JS VM. However, this does end up causing a lot of problems with text input and gestures, as now you have a sync issue between the threads: if you get caught processing a bunch of stuff in the JS thread, the app may appear to be responsive with scrolling, but nothing happens in response to button taps or text insertion. It is the only way RN was going to work on lower-end hardware though, so it probably is the right solution if you assume running React everywhere is a good idea.


Yes, making GUIs responsive isn't as simple as just "don't run stuff on the UI thread". There are good reasons to run stuff there even if you're going to hang up the app for a brief period, namely, the user won't see partial/incorrect updates like non-syntax highlighted text or incorrectly clipped shapes, and - especially important - it means you can't end up with invalid GUI states like the user pressing a button that does X and then immediately pressing another button that does the opposite of X, where you should have disabled the other button but didn't get to it in time. Web apps have this sort of problem if they aren't using enough JS, and it can cause all kinds of weird bugs and errors.

The reality is that moving things off the UI thread is always a calculated engineering decision. It can be worth it, but it complicates the code and should only be done if there's a realistic chance of the user noticing.


I think the bottleneck comes from updating each UI element individually instead of updating them in batches, and from updating elements that don't need to be updated.


That’s just retained mode GUIs calculating what got damaged and only updating those. That’s how most GUI frameworks work since many decades.


While I do enjoy a nice and smooth GPU-accelerated UI, I never use a GPU UI framework for my own projects for one simple reason: almost none of them properly support accessibility. Electron (and in general the web), despite its sluggishness, has very good support for accessibility. Most "traditional" native UI toolkits do too.

That would be my advice to anyone making a gpu-accelerated ui library in 2023: Try to support accessibility, and even better: make it a first class citizen.


I can’t speak for this one as it’s proprietary, but you’ll be pleased to hear that pretty much all the open source Rust GUI toolkits either integrate AccessKit or have concrete plans to do so in the immediate future. There are toolkits that can’t even do basic things like render images but have accessibility support :)


That is good to hear!


The Rust GUI ecosystem looks particularly promising in that regard, because there is a foundational accessibility library called AccessKit that's being incorporated in several UI frameworks (egui being the first one to have it already, but work is underway to add it in several other places).


If you want to do this, where can you start? What are some patterns for making code that's not too spaghetti when you have to handle tabbing, focus, layout, speech of element contents, the actual hierarchy of the elements etc? Are there standardized OS accessibility API hooks or something?


I can't speak for GPU solutions, but I do have some experience of trying to make HTML 2d <canvas> elements as accessible as possible. You can get an overview of the issues/solutions (disclaimer: using my canvas library) here - https://scrawl-v8.rikweb.org.uk/learn/eleventh-lesson/


I’ve heard it’s hard to even work this out as all of the screen reader tools are expensive, proprietary, and there are no standards. The typical way is to just make your program, and if it gets popular, the screen reader companies will find a way to make their product work.


> work this out as all of the screen reader tools are expensive, proprietary, and there are no standards.

ARIA is a good start, and screen readers built into the OS are a good start.

Moreover, major OSes have accessibility APIs that screen readers will use:

- MacOS https://developer.apple.com/library/archive/documentation/Ac...

- Windows: https://learn.microsoft.com/en-us/windows/apps/develop/acces...


This used to be the case on Windows, but hasn't been for at least 10 years, and has never been the case on Mac OS X (using the historical name for clarity) or Linux. On Windows, the open-source NVDA screen reader is widely used. Furthermore, the hacks that Windows screen reader developers historically used to "find a way to make their product work", particularly intercepting GDI API calls (either in user space or in a display driver) to build an off-screen model, are not applicable to modern UI stacks. And the other major screen reader hack, using application-specific COM object models, was mostly only applicable to Microsoft Office and Internet Explorer. So you basically have to implement platform accessibility APIs to make your UI accessible. (If you use native controls, that's more or less done for you.)

Edit: BTW, I've been in the assistive technology industry a while, particularly on Windows. Feel free to ask me anything.


As someone who's doing some accessibility programming but has no background in the field, I do have a random question to throw your way if you are game.

Our team is developing voice control for a website. This is mainly requested by sighted users who want to use the site hands-free. But we also have an accessibility mode for better screen reader support.

We think voice control might be appreciated by screen reader users too, but we aren't sure how well it would work with a screen reader.

Are there common pitfalls we should be wary of?

One thing we're worried about is that the voice from the screen reader might interfere with the detection of the user's voice.

I know general dictation and voice control software already exists, so my initial assumption that screen reader users would benefit might be wrong. If the existing tools are good enough, perhaps this whole question is moot.


Yes, the screen reader output will probably interfere with your speech recognition. You may be able to work around that on some platforms by enabling echo cancellation when getting mic input from navigator.getUserMedia, but I don't know if that actually works on any desktop platforms.

In general, you should assume that the user already has whatever assistive technologies they need, and you don't need to provide your own, just make the content and UI accessible using semantic HTML and (if needed) ARIA. Providing your own screen reader, for example, would definitely be a mistake. The same should ideally also hold for voice control, but apparently you're actually getting some demand for that.


That's partially true, but fortunately not completely. There are widely use open-source screen readers for Windows and, of course, there's no proprietary screen reader on Linux. And, definitely, there are standard APIs which are used to communicate the accessibility tree between an app and a screen reader. Yes, they are specific for each platform, and Windows has multiple of these, but they are standardized at least for each platform.


Here are recent suggestions for Windows and Linux by a blind person: https://news.ycombinator.com/item?id=35008647


Since browsers can interface, I would guess there are hooks but I would also guess that they are not standardized (between platforms).


Lots of negativity in here. I for one am excited about the prospect of an editor that is as responsive as I remember Sublime being back in the day, with the feature set I've come to expect from VS Code. An editor like this simply does not exist today, and betting on the Rust ecosystem is entirely the right choice for building something like this in 2023.


Hear, hear. I backed Onivim hoping it was going to shine a light in the darkness, and it seemed promising, but was ultimately abandoned? I think so, unsure.


That's exactly the rabbit hole I'm in.

I love immediate feedback but getting it ranges from hard to nigh impossible. E.g. I have a complex Emacs setup for rendering Pikchr diagrams, but there are a lot of problems to solve from diagram conception to the end result, so I thought, hey, why not make my own cool RT editor - in Rust obviously.

Unfortunately I learned that GUIs are a tough problem, especially if the idea is hobby-based so there's only one developer involved. Ultra-responsive GUIs are cool, and I have a prototype in egui (not sure if that's as fast as Zed's premise, but it feels fast nonetheless), and yet it doesn't support multiple windows, which I wanted to have.

120 FPS with direct rendering sounds AWESOME just for the sake of it, but I believe that for the end user, layout will be more important than refresh rate, and that's a different beast to tame.

Personally I "almost" settled for Dioxus (shameless plug: [1], there's link to YT video) and I'm quite happy with it. Having editor in WebView feels really quirky though (e.g. no textareas, I'm intercepting key events and rendering in div glyph-by-glyph directly).

[1]: https://github.com/exlee/dioxus-editor


Hey xlii! This is Antonio, author of the post.

You're right that rendering is only part of the story. To stay within the ~8ms frame budget, however, every little bit counts. Maintaining application state, layout, painting, and finally pushing pixels to screen, all need to be as performant as they can be.

For layout specifically we're using an approach inspired by Flutter, which lets us avoid complex algorithms but still have a lot of flexibility in the way elements can be positioned and produce a rich graphical experience.

Thanks for reading and commenting!


I don't have experience with Flutter, but based on a quick glance they're using widgets and, what I found quite important, the ability to develop the GUI outside of the application. That's something I think libraries like egui are missing (and which is easily obtainable with Tauri/Dioxus).

Rebuilding the whole app to ensure that some box doesn't get cut off ruins the development experience, especially for big apps.

Kudos to you guys, I hope you'll make Zed extensible, so that instead of writing my own editor I can use yours ;-)


This seems like the wrong portion of the problem on which to spend time. This is a text editor. Performance problems with text editors tend to involve long files and multiple tabs. Refresh speed isn't the problem, although keyboard response speed can be.

I'd like to see "gedit", for Linux, fixed. It can stall on large files, and, in long edit sessions, will sometimes mess up the file name in the tab. Or "notepad++" for Linux.


I don't understand. Why would you need to render a user interface constantly at 120 fps, instead of just updating it when something changes? Laptop batteries last too long these days? Electricity too cheap?


Hey nottorp. Antonio here, author of the post.

Zed and GPUI use energy very judiciously and only perform updates when needed. The idea is that we can render the whole application within ~8ms, and that shows everywhere: from typing a letter to scrolling up and down in a buffer. However, if the editor is sitting there idle, we won't waste precious CPU cycles.

Thanks for the feedback!


Will you allow developers access to your GUI framework? What about open-sourcing it?


Yeah, might want to edit the title a bit. Or not, considering these concepts are getting lost.

I mean, the win16 api from ages ago had support for invalidating specific screen regions etc. It probably got lost somewhere in the transition to javascript...


It's not about rendering static screens at 120Hz, but rendering anything that's animated at a smooth 120Hz.


"Because it looks good" is probably the most popular reason.


But if nothing changes it looks as good at zero fps :)

Edit: Yay, it's a text editor. What happened to only redrawing the line that's being edited and the status indicators?


What's wrong with using platform APIs? I think that by 2023 most UI toolkits provided by the OS are hardware accelerated.


If their plan is to make their app cross-platform (Windows, OSX, Linux) and be very versatile with the UI customization, then maybe writing a small, focused cross-platform UI toolkit specifically for your needs is not such a bad idea (after all, this is what Sublime Text is doing as well).

But the HW-accelerated brag is pointless. Even if they manage to squeeze some extra performance over native toolkits, that is not necessarily going to matter in the grand scheme of things. Drawing the UI is never the bottleneck in a text editor...


   > Drawing the UI is never the bottleneck in a text editor...
Unless you're using a webview, which is... unfortunately the case for some of the popular code editors available. Sad times we live in.


Smooth animations and performance. Drawing a big image with Win32 BitBlt is painfully slow, for example. Imagine that you are zooming an image in Photoshop and it is laggy; the user experience would be horrible. Also, lag is an important issue in a user interface; even something as small as 100ms would be bad.


If you're using BitBlt instead of Direct2D in anything post Vista, you're holding it wrong.


Why not just use DirectX/*GL for those regions that need it and stick with platform UI for the rest? Blitting API still works just fine if you're drawing combo boxes, no?


You can use ID2D1HwndRenderTarget::DrawBitmap or ID2D1RenderTarget::DrawBitmap instead.


What about the winrt api?


No, Direct2D. Save yourself some pain.

WinRT has gone through multiple reboots, who knows what will happen still.

Better use the existing Win32/COM stuff.


It’s hard to make a cross-platform UI that way.


But this UI is not cross platform either, as it is still using proprietary APIs.


That is what wrappers and platform plugins are for, no need to build a full blown API from scratch.


Such solutions are often in tension between using only the lowest common denominator, and the code having different implementation for each platform anyway.

For example, their editor has tabs for editor buffers. Cocoa has a static tabbed widget, which has the wrong look and odd UX for this. Cocoa also has a tabbed window type, which isn't a widget you can control. I imagine it'd be hard to abstract that away to work consistently with how Windows does tabbed views. I also haven't seen Windows' tabs being draggable, so that would probably need a special DIY solution for Windows which Cocoa tabbed windows don't need.

Anyway, I think native UI toolkits are dying. For most people the Web is their most familiar "toolkit" now, and native platforms instead of fighting that back with clear consistent design, went for flat design and multiple half-assed redesigns that messed up all remaining expectations of how "native" looks and feels.


There are no perfect solutions, but there is possible balance where OS features are abstracted into higher level concepts, not 1:1.

For very contrived example, the cross platform settings panel widget exposes the business API required for settings in general, while the platform specific code takes care of using the host platform concepts to display and manage application specific settings.

While it is more work than lowest common denominator approach, it is still much less than re-inventing the wheel.

As proven by the mobile OS platforms, there is still hope for native toolkits.

Let's see how much longer the Web will hold, now that the revenge of plugins is here thanks to WASM.


My hope is that we'll get a cross-platform toolkit that exceeds the quality of the best native toolkits (without relying on them). That would be great for cross-platform development, but the best thing would be that it would greatly lower the barrier to creating competing platforms. Imagine if Firefox OS had launched with an efficient toolkit instead of being web-based, for example.


Looking forward to trying this, VSCode is great but I really miss the performance of Sublime Text. I hope they get the plugin system right, killer feature would be if it could load VSCode plugins (incredibly hard to pull off, yes)


Thanks, almostdigital!

After our past experience with Atom, getting the plugin system right is a top priority for the editor.

The thought of cross compatibility with VSCode plugins definitely crossed our mind and it's not out of the question, although our current plan is to initially support plugins using WASM.


Erm, native WinUI apps are GPU accelerated and render at vsync.


Also GTK4 is GPU accelerated whenever possible, with really well maintained Rust bindings for it. I think the only thing missing would be macOS, where I'm not sure what solutions are available.

Another option in this front could be Flutter if write-once run-everywhere is a need for the project. Another advantage is that it's not only GPU accelerated but it's also retained mode.


Nobody loves native WinUI, not even Microsoft.

Which is kind of a shame. But it's the result of years of product management neglect as well as the pull away from desktop UIs to web UIs.


They only have themselves to blame, after the rewrites they forced their hardest advocates to go through, each one with worse tooling, dropping the UI designer, .NET Native, and C++/CX along the way.

Native AOT still can't compile WinUI, while C++/WinRT is like doing ATL in 2000, and bug issues grow exponentially.

Only WinDev themselves, and WinUI MVPs, can still believe this is going somewhere.

The rest of us have moved on.


So is everything written in Apple native UI frameworks since 2009.


And Android and Qt and GTK and Chrome and Firefox, etc...

The article is talking about the same generic hybrid GPU-accelerated rendering architecture that everything uses. Seemingly the only "new" part is "in Rust!"


My rui library can render UIs at 120fps, uses similar SDF techniques (though uses a single shader for all rendering): https://github.com/audulus/rui

Is their GPUI library open source?


Sadly I didn't see any links.


What's the real world client experience of developing UIs to render @ 120FPS - is it like once you have tried it going back is really hard?


I hoped they'd go for signed distance functions for glyph rendering as well. Rendering text with shaders is fascinating.


I am curious how this would compare to a UI written in Flutter. It seems that Flutter is also hardware accelerated and cross-platform.


In terms of performance it would possibly be even better in Flutter, BUT (and this is a big butt) text editing on the desktop in Flutter currently can only be described as broken. I last looked at this a few months ago and I think it's fixable, but as much as I like Flutter I don't think it's a good option for a text editor _just yet_.


Beyond the rendering, which as noted is nothing that hasn't been done before (in general), the inherent OT/multi-user + tree-sitter functionality is something that entices me.

I'm surprised nobody pointed out lite/lite-xl here either; its rendering of the UI is very similar (although fonts are via a texture, like a game would do), and it doesn't focus overly on the GPU but optimises those paths like games did circa DirectX 9 / OpenGL 1.3.

There are great details of the approach taken with lite at https://rxi.github.io

Lite-xl might have evolved the renderer but the code here is very consumable for me.



It should be noted that the main person behind Zed is Nathan Sobo, who created Atom while he was at GitHub, which is the basis of Visual Studio Code today.

As such, I have high hopes Zed will be a much faster version of Visual Studio Code and am excited to see what him & his team make.


Well, he created Electron for Atom. Atom itself was never a part of VS Code. VS Code was targeting the browser for a while before Electron came along.


It's surprising that they jump immediately from the problem description into shader details.

IME the main theme in achieving high performance, not just in games and not just in rendering, is to avoid frequent 'context switches' and instead queue/batch a lot of work (e.g. all rendering commands for one frame) and then submit all this work at once "to the other side" in a single call. This is not just how modern 3D APIs work, but is the same basic idea in 'miracle APIs' like io_uring.

This takes care of the 'throughput problem', but it can easily lead to a 'latency problem' (if the work to be done needs to travel through several such queues which are 'pipelined' together).
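A toy Rust sketch of that queue/batch idea (names are illustrative, not from any particular API): UI elements append geometry to a per-frame buffer, and the whole frame is submitted in one call rather than one call per widget.

    struct Vertex { pos: [f32; 2], color: [f32; 4] }

    #[derive(Default)]
    struct FrameBatch { vertices: Vec<Vertex> }

    impl FrameBatch {
        // Two triangles per quad; a real renderer would also use an index buffer.
        fn push_quad(&mut self, x: f32, y: f32, w: f32, h: f32, color: [f32; 4]) {
            let corners = [
                [x, y], [x + w, y], [x + w, y + h],
                [x, y], [x + w, y + h], [x, y + h],
            ];
            self.vertices.extend(corners.iter().map(|&pos| Vertex { pos, color }));
        }

        // One buffer upload and one draw call for everything queued this frame.
        fn submit(&mut self) {
            println!("drawing {} vertices in a single call", self.vertices.len());
            self.vertices.clear();
        }
    }

    fn main() {
        let mut batch = FrameBatch::default();
        batch.push_quad(0.0, 0.0, 100.0, 40.0, [0.2, 0.2, 0.2, 1.0]);  // a button
        batch.push_quad(0.0, 50.0, 300.0, 20.0, [1.0, 1.0, 1.0, 1.0]); // a text row
        batch.submit();
    }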


Lots of people nit-picking the 120 FPS but I think Zed looks super promising. The native support for collaborative editing looks fantastic, and I'm excited to try it out.

Curious if you guys have thought about VR / AR possibilities with GPUI?


It would be interesting in VR to have syntax highlighting that moves variable names towards the viewer, or puts context menus with the possible methods of some object above it.

Would probably just be stupid gimmick, but I want to see somebody do it :-D


I tried this, using https://makepad.dev, our GPU-accelerated UI and render stack. And unfortunately it wasn't a great experience. Text popping forward for whatever reason is not really an improvement (I tried indent depth, syntax highlighting reasons, cursor behavior). Maybe doing it 'veeeeery' subtly could do something, but otherwise you don't want it to break the visual symmetry we are used to.


Wow, that’s some low level stuff. Most would just use an established UI framework because rendering performance is left to the window manager. I’m not sure I understand the need to go about it like this? Windows is not considered the epitome of performant interfaces but it has no trouble rendering UI’s at 120 fps. When people go and buy a 120 fps display, they are wowed by the smooth scrolling in a heavy application like Google Chrome. The window manager is already hardware accelerated (as for Windows since Vista) and the apps draw widgets on their surface.


It's so weird. The devs of the Warp based terminal are doing something similar (Rust for single-platform low-level dev), and I'm also not sure what the point is. It feels like they're banking on Rust being a selling point, but forgot that the lang can drastically lower your iteration speed when it comes time to compete with other editors.


> Windows is not considered the epitome of performant interfaces but it has no trouble rendering UI’s at 120 fps. When people go and buy a 120 fps display, they are wowed by the smooth scrolling in a heavy application like Google Chrome.

Windows uses GPU-based rendering in most/all of their GUI frameworks. Chrome also makes heavy use of hardware acceleration. Rendering performance isn't left to the window manager. If you're making your own GUI this is exactly the kind of stuff you need to do to make it fast.


First, this looks awesome. Can't wait to try zed.

Second, forgive a naive question since I know nothing about graphics, but would the method described in the article perform better than Alacritty + Neovim?


This sounds really similar to the story we were hearing about Servo back in 2016: https://www.youtube.com/watch?v=erfnCaeLxSI

I was really excited when I saw that demo. Why didn't this turn into a final product that people could use?


This guy BeRo1985 wrote a 3D library / engine some years ago that has extensive 2D features, including a UI that uses SDFs, among other things [0].

[0] - https://github.com/BeRo1985/pasvulkan


What API are they using for interfacing with the GPU (i.e. OpenGL, Vulkan, other)?

I suspect a lot of time is likely to be spent on the CPU side updating vertex and other data and pushing it to the GPU so it would be useful to have some more detail on how they are handling that.


I saw a few Metal-specific types in the source code shown in their demo video



Fast software rasterizers are not slow at drawing text in the first place.


I'm intrigued. What's the applicability that you see for this?


According to Wiki, the technique was invented in 2005 by Casey Muratori: https://en.wikipedia.org/wiki/Immediate_mode_GUI


That’s odd. I remember using it in the 90s. It’s the obvious way to do things if you don’t like small objects.

(Also GPUI seems to be retained, so completely unrelated.)
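
For anyone unfamiliar, here is roughly what the immediate-mode pattern looks like, sketched with the Rust egui crate (an assumption for illustration, not anything Zed uses): the UI is re-described from plain application state every frame instead of being kept as a tree of widget objects.

    // Minimal immediate-mode sketch using egui; `count` is ordinary app state.
    fn ui(ctx: &egui::Context, count: &mut i32) {
        egui::CentralPanel::default().show(ctx, |ui| {
            ui.label(format!("Clicked {} times", count));
            if ui.button("Click me").clicked() {
                *count += 1; // handled inline, on the frame the click arrives
            }
        });
    }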


Some very dedicated fan must have added this. I can see a 2001 Java doc about some immediate-mode classes:

https://nick-lab.gs.washington.edu/java/jdk1.5b/guide/2d/spe...


Casey ranting about how slow Visual Studio is and comparing it to RemedyBG, which uses Dear ImGui (an immediate mode GUI implementation).

https://www.youtube.com/watch?v=GC-0tCy4P1U


[flagged]


Why are you reacting so emotionally to something that far down the page? They’re talking about being aggressively multi-core and given their technical audience it hardly seems inappropriate to have that detail along with the other details like how they use GPU acceleration and the benefits of lower input latency.


I wouldn’t say I’m reacting emotionally, or that where it is on the page is relevant. It’s just a funny part of their marketing copy.


You cannot see at 120 fps.


If you're doing a drawing app, then 120 fps helps you keep the stroke close to the stylus. Even then it may lag a bit because of a few frames of GPU latency, so you have to do predictive stroke points and then adjust your stroke later. See https://developer.apple.com/documentation/uikit/touches_pres...

Similarly, if you're dragging something around with your finger, 120fps keeps that thing closer to the finger. Just improves the UX.
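
As a rough illustration of the prediction idea (not Apple's API, just a hand-rolled sketch): extrapolate the next stylus sample from the last two, draw with the predicted point, and correct the segment once the real sample arrives.

    // Hypothetical linear extrapolation of the next stylus point to hide a
    // frame or two of GPU latency; the predicted segment is redrawn later
    // with the real sample.
    #[derive(Clone, Copy)]
    struct Point { x: f32, y: f32 }

    fn predict_next(prev: Point, last: Point, frames_ahead: f32) -> Point {
        Point {
            x: last.x + (last.x - prev.x) * frames_ahead,
            y: last.y + (last.y - prev.y) * frames_ahead,
        }
    }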


Check out https://www.testufo.com/ on a 120Hz display; the higher FPS is absolutely perceptible.


You're right, you can't see at 120 fps, because that's not how human vision works; you can't see at any fps. However, the more fps there are, the sooner you get the information, so regardless of the lag on your end (the biology of the eyes, the brain, and the connection between them), a lower frame rate still adds delay from your computer on top of that.


Why do we need a code editor with 120 FPS support?


Because competing code editors support it and thus lacking this capability would make your code editor seem unusually incompetent.


I am sorry, but I tried googling and binging which code editors support 120 fps. I also asked Bing AI and it said it is not aware of any.

Could you please point out what I am missing?


notepad.exe on Windows, for example. If that's too lightweight, then VSCode is a heavier example.


I cannot stress enough how much I do not want my 300 W GPU to be used to render text that changes at most three times per second.

And it's not just about electricity cost and heat stress; it will conflict with everything else that requires the GPU, including watching 4K video on the second monitor, which does have a legitimate case for requiring hardware acceleration since it moves a lot of data 60 times per second, and your editor doesn't.

And the limited resource is not the GPU itself but the nearby onboard memory, which is a scarce resource of its own. I'd be really mad at software that prevents me from multitasking.


This UI is so lightweight it seems like they should easily be able to toggle the GPU compute on or off


I have horrible news to tell you about compositors: they already do use your GPU, even if you're watching a 4K video on the side. Even when requesting a full screen swapchain to avoid having to deal with the compositor, modern OSes will force you to go through that compositor and lie about it being fullscreen.

Additionally, using your GPU doesn't mean locking your GPU on that task. Your 4K video most likely isn't taking up 100% of GPU time, nor 100% of the nearby onboard memory. Everyone already gets a share of that time.


> Even when requesting a full screen swapchain to avoid having to deal with the compositor, modern OSes will force you to go through that compositor and lie about it being fullscreen.

I’m honestly thankful for this bit of lying. Windows 8 and below had horrible, horrible issues with fullscreen programs, especially games - they’d change your screen resolution and thus move and resize all your other windows, they’d freeze for ten seconds when you tried to alt-tab out, and they’d frequently just crash when you switched back to them. Those problems are essentially nonexistent nowadays, and it’s worth the small performance cost that comes from going through the compositor.


I know that on Windows 10+, in UWP apps, requesting a fullscreen swapchain straight up crashes and tells you "nope, you're going to go through the compositor".

But yes, alt-tabbing Source games was an exercise in patience.


Is that disabled for programs targeting older OS versions? Or are you saying that the new behavior is only for UWP apps? Because that sounds like it would break a lot of old code.


Sorry, hadn't seen that response: yes, it's only when you opt in to UWP that it causes crashes. However, even if you didn't, Windows lies to you. If you ask for FSE (Full Screen Exclusive), it'll say alright buddy, and give you a fullscreen that goes through the DWM. I believe they have an article on that somewhere on the DirectX blog called "fullscreen optimizations", or something along those lines, that explains it.


Compositors and the UI frameworks that interact with them know that 120 fps is not the target, frame render time is, and they know not to keep active rendering contexts for static resources.

Chasing 120fps rendering here is the problem.


Unless you use TempleOS, your OS is already using the GPU to do GUI rendering and compositing.


For one, that's not entirely true. Standard Windows GUI calls are backed by a shared surface and accumulate drawing instructions on the system side of the render surface, managed by the CPU render pipeline, so that they can keep a dirty-rectangle list and only update the dirty areas of the GPU side of the shared surface. From there, compositing runs on the GPU.

Then again, the issue is the focus on rendering at 120 fps. The rendering context should be paused during the vast nothingness that happens between user keystrokes, which arrive at something like 4 keystrokes per second, and in bursts.

The insistence on 120fps is the issue here, not the rendering pipeline.
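
A sketch of pausing between keystrokes, assuming the winit crate's 0.28-style API (illustrative, not Zed's actual code): block until the OS delivers an event and only request a frame when something actually changed, instead of redrawing at a fixed 120 fps.

    use winit::{
        event::{Event, WindowEvent},
        event_loop::{ControlFlow, EventLoop},
        window::WindowBuilder,
    };

    fn main() {
        let event_loop = EventLoop::new();
        let window = WindowBuilder::new().build(&event_loop).unwrap();

        event_loop.run(move |event, _, control_flow| {
            // Sleep until the OS delivers an event instead of spinning every frame.
            *control_flow = ControlFlow::Wait;
            match event {
                Event::WindowEvent { event: WindowEvent::KeyboardInput { .. }, .. } => {
                    window.request_redraw(); // a keystroke earns exactly one new frame
                }
                Event::RedrawRequested(_) => {
                    // GPU work happens only here, in response to a change.
                }
                _ => {}
            }
        });
    }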


That's already what they're doing: https://news.ycombinator.com/item?id=35080427


That's even weirder. Why would they redraw the whole application on a character change?


Where did they say they were doing that?

All I see is them saying they can, not that they are.


Processors have been able to throttle themselves since the 90s. GPUs can too.



