GPU acceleration doesn't change the dynamics of caching glyphs: existing text renderers already cache aggressively. It's also worth noting that caching the rendering of individual code points won't work in all cases for a terminal, because of diacritics, ligatures, etc.
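As a minimal sketch of why the cache key has to be the shaper's output rather than the input codepoints — the `ShapedGlyph` struct and `rasterize` stub here are hypothetical stand-ins for a real shaper/rasterizer pair like HarfBuzz + FreeType:

```rust
use std::collections::HashMap;

/// Hypothetical output of a shaping pass (HarfBuzz or similar):
/// one entry per *glyph*, not per input codepoint.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct ShapedGlyph {
    glyph_id: u32, // font-internal glyph index, NOT a codepoint
    x_offset: i32, // mark positioning, in font units
}

/// Cache keyed on the shaper's output. A per-codepoint cache has no
/// entry for a ligature glyph ("fi" -> one glyph id) and no way to
/// represent a base char + combining diacritic pair.
struct GlyphCache {
    rasterized: HashMap<ShapedGlyph, Vec<u8>>, // 8-bit coverage bitmaps
}

impl GlyphCache {
    fn get_or_rasterize(&mut self, g: ShapedGlyph) -> &Vec<u8> {
        self.rasterized.entry(g).or_insert_with(|| rasterize(g))
    }
}

// Stand-in for a real rasterizer (FreeType, font-rs, ...).
fn rasterize(_g: ShapedGlyph) -> Vec<u8> {
    vec![0u8; 16 * 16]
}

fn main() {
    let mut cache = GlyphCache { rasterized: HashMap::new() };
    // Two codepoints 'f','i' may shape to ONE ligature glyph...
    let fi_ligature = ShapedGlyph { glyph_id: 0xF1, x_offset: 0 };
    cache.get_or_rasterize(fi_ligature);
    // ...while 'e' + U+0301 (combining acute) shapes to a base glyph
    // plus a mark glyph carrying a nonzero offset.
    let e_base = ShapedGlyph { glyph_id: 0x45, x_offset: 0 };
    let acute_mark = ShapedGlyph { glyph_id: 0x300, x_offset: -7 };
    cache.get_or_rasterize(e_base);
    cache.get_or_rasterize(acute_mark);
    println!("cached {} distinct glyphs", cache.rasterized.len());
}
```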
Of course it's all pixels, and you can do it all in pixel shaders. But it's a lot more complicated than it seems. Supporting RTL requires some pretty advanced layout logic. Supporting OpenType ligatures requires some pretty complicated, stateful logic. And you probably want to support "wide" glyphs even for a fixed-width font, since they show up in terminals whenever you're dealing with, for example, kanji.
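For the wide-glyph part specifically, here's a minimal sketch using the `unicode-width` crate (a real crate implementing Unicode's East Asian Width rules; the `cell_width` wrapper is my own naming, and you'd need the crate as a dependency):

```rust
use unicode_width::UnicodeWidthChar;

/// How many terminal cells a codepoint occupies: 0 for combining
/// marks, 1 for narrow glyphs, 2 for wide glyphs like kanji.
fn cell_width(c: char) -> usize {
    c.width().unwrap_or(0) // control chars: treat as zero here
}

fn main() {
    assert_eq!(cell_width('a'), 1);
    assert_eq!(cell_width('漢'), 2);       // CJK: double-width cell
    assert_eq!(cell_width('\u{0301}'), 0); // combining acute: no cell
}
```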
If you want subpixel AA, that's another complicated issue, and if your glyphs aren't locked to the pixel grid, it's even more work: each fractional offset needs its own rasterization.
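One common way to bound that work is to quantize the fractional offset into a handful of bins, so each glyph only ever gets a few rasterized variants. A sketch, with the bin count and key layout as assumptions:

```rust
/// When glyphs sit at fractional x positions, one outline needs
/// several rasterizations. Quantizing the offset keeps the cache
/// finite at a small positioning-accuracy cost.
const SUBPIXEL_BINS: u32 = 4; // assumption: 4 variants per glyph

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct CacheKey {
    glyph_id: u32,
    subpixel_bin: u32, // which quarter-pixel offset this raster used
}

fn key_for(glyph_id: u32, x: f32) -> CacheKey {
    let frac = x.fract().rem_euclid(1.0); // fractional part in [0, 1)
    let bin = (frac * SUBPIXEL_BINS as f32) as u32 % SUBPIXEL_BINS;
    CacheKey { glyph_id, subpixel_bin: bin }
}

fn main() {
    // x = 10.13 and x = 10.22 land in the same quarter-pixel bin,
    // so they share one rasterization; x = 10.60 needs another.
    assert_eq!(key_for(7, 10.13), key_for(7, 10.22));
    assert_ne!(key_for(7, 10.13), key_for(7, 10.60));
}
```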
If you want to render glyphs purely on the GPU, you'll need to upload all of the geometry in some usable form. Most GPUs don't render curves, so you'll probably have to flatten them into triangles or quads. That's a lot of work, and a lot of memory, for an entire font's worth of glyphs.
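A sketch of that flattening step for quadratic Béziers (the curve type in TrueType outlines), using recursive de Casteljau subdivision with a hand-picked flatness tolerance; a production tessellator would be considerably more involved:

```rust
#[derive(Clone, Copy)]
struct P { x: f32, y: f32 }

fn mid(a: P, b: P) -> P {
    P { x: (a.x + b.x) * 0.5, y: (a.y + b.y) * 0.5 }
}

/// Flatness heuristic: how far the control point deviates from the
/// chord's midpoint.
fn flat_enough(p0: P, c: P, p1: P, tol: f32) -> bool {
    let m = mid(p0, p1);
    let (dx, dy) = (c.x - m.x, c.y - m.y);
    dx * dx + dy * dy <= tol * tol
}

fn flatten(p0: P, c: P, p1: P, tol: f32, out: &mut Vec<P>) {
    if flat_enough(p0, c, p1, tol) {
        out.push(p1); // emit segment endpoint
        return;
    }
    // de Casteljau split at t = 0.5 into two smaller quadratics
    let q0 = mid(p0, c);
    let q1 = mid(c, p1);
    let m = mid(q0, q1); // point on the curve at t = 0.5
    flatten(p0, q0, m, tol, out);
    flatten(m, q1, p1, tol, out);
}

fn main() {
    let mut pts = vec![P { x: 0.0, y: 0.0 }];
    flatten(
        P { x: 0.0, y: 0.0 },
        P { x: 50.0, y: 100.0 }, // control point
        P { x: 100.0, y: 0.0 },
        0.25, // quarter-pixel tolerance (assumption)
        &mut pts,
    );
    println!("flattened into {} segments", pts.len() - 1);
}
```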
You might also think you could use the GPU to perform anti-aliasing, but the quality will be pretty bad if done naively: GPUs don't tend to take very many samples when downsampling.
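For contrast, here's what a straightforward CPU-side downsample looks like: a 4x4 box filter gives 16 coverage samples per output pixel, versus the 4x or 8x MSAA a GPU would typically give you (the factor and buffer layout here are assumptions):

```rust
const SS: usize = 4; // supersampling factor per axis (assumption)

/// Box-filter downsample of an 8-bit coverage buffer: each output
/// pixel averages an SS x SS block of high-res samples.
fn downsample(hi: &[u8], w: usize, h: usize) -> Vec<u8> {
    let (ow, oh) = (w / SS, h / SS);
    let mut out = vec![0u8; ow * oh];
    for oy in 0..oh {
        for ox in 0..ow {
            let mut sum: u32 = 0;
            for sy in 0..SS {
                for sx in 0..SS {
                    sum += hi[(oy * SS + sy) * w + ox * SS + sx] as u32;
                }
            }
            out[oy * ow + ox] = (sum / (SS * SS) as u32) as u8;
        }
    }
    out
}

fn main() {
    // 8x8 fully-covered high-res buffer -> 2x2 output pixels
    let hi = vec![255u8; 64];
    let lo = downsample(&hi, 8, 8);
    assert_eq!(lo, vec![255, 255, 255, 255]);
}
```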
Since a lot of the work is stateful and difficult to parallelize, doing it on the CPU will probably be faster; that way you only pay the latency of jumping to the GPU once.
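Structurally, that pipeline looks something like the sketch below: the CPU does all the stateful work and fills one flat instance buffer, and the GPU hop happens once per frame (`upload_and_draw` is a stand-in for a real graphics-API call, not any particular library's):

```rust
#[derive(Clone, Copy)]
struct GlyphInstance {
    cell_x: u16,
    cell_y: u16,
    atlas_index: u32, // slot of the pre-rasterized glyph in the atlas
}

// Stand-in for a real buffer write + draw call (wgpu/Vulkan/Metal);
// the point is that it runs once per frame, not once per glyph.
fn upload_and_draw(instances: &[GlyphInstance]) {
    println!("one upload, {} instances", instances.len());
}

fn main() {
    let mut frame: Vec<GlyphInstance> = Vec::new();
    // CPU side: shaping, ligatures, RTL reordering, wide-glyph
    // handling all happen here, appending resolved instances.
    for (i, _ch) in "hello".chars().enumerate() {
        frame.push(GlyphInstance {
            cell_x: i as u16,
            cell_y: 0,
            atlas_index: 42, // placeholder atlas slot
        });
    }
    upload_and_draw(&frame); // the single CPU -> GPU hop
}
```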
> Since a lot of the work is stateful and difficult to parallelize, doing it on the CPU will probably be faster; that way you only pay the latency of jumping to the GPU once.
You can still easily cache the glyphs post-processing, especially if you don't use subpixel AA. There isn't that much state left in a scrollback buffer once the glyphs have been processed.
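A sketch of that kind of post-processing cache: a texture atlas keyed on the shaped glyph id, with a deliberately naive packer (real atlases reuse rows and evict; the names here are mine):

```rust
use std::collections::HashMap;

#[derive(Clone, Copy)]
struct AtlasSlot { x: u32, y: u32, w: u32, h: u32 }

/// Fully post-processed glyph bitmaps cached in a texture atlas.
/// With grayscale AA (no subpixel positioning) the key is just the
/// shaped glyph id, so re-rendering scrollback is mostly lookups.
struct Atlas {
    slots: HashMap<u32, AtlasSlot>, // shaped glyph id -> region
    next_x: u32,                    // naive shelf-packing cursor
}

impl Atlas {
    fn get_or_insert(&mut self, glyph_id: u32, w: u32, h: u32) -> AtlasSlot {
        if let Some(&slot) = self.slots.get(&glyph_id) {
            return slot; // cache hit: no rasterization, no upload
        }
        let slot = AtlasSlot { x: self.next_x, y: 0, w, h };
        self.next_x += w; // real packers reuse rows and evict; omitted
        self.slots.insert(glyph_id, slot);
        slot
    }
}

fn main() {
    let mut atlas = Atlas { slots: HashMap::new(), next_x: 0 };
    atlas.get_or_insert(7, 10, 18); // first use: rasterize + pack
    atlas.get_or_insert(7, 10, 18); // every later use: pure lookup
    println!("atlas holds {} glyphs", atlas.slots.len());
}
```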
I don't get the resistance to this type of rendering when, at this point, there are at least three major GPU-based monospace glyph rendering libraries, and I bet there are dozens I don't know about.
No such resistance here; I've written text renderers myself. I'm just pointing out that it's not simple and the performance gains aren't trivial. Like I said, you can't really just cache codepoints, and that's exactly what this particular terminal emulator does: it keeps a cache of individual codepoints. Even setting aside OpenType ligatures, that won't work for things like diacritics.