Text Rendering Hates You (2019) (gankra.github.io)
399 points by WayToDoor on Feb 14, 2022 | 153 comments



Previous discussion: https://news.ycombinator.com/item?id=21105625 (but well worth re-visiting from time to time).


I built my business around font rendering!

Shout out to Behdad https://en.wikipedia.org/wiki/Behdad_Esfahbod a one-of-a-kind man doing God's work. If you've seen text on a screen, you've seen Behdad's work. I've had the lucky opportunity to chat with him briefly and he personally helped me debug a text rendering problem. He's a great guy!

Glyphs are registered, text is shaped, and everything is drawn via a ballet of various libraries, often with Pango and Cairo: https://pango.gnome.org/ https://docs.gtk.org/PangoCairo/
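
Roughly, the happy path through that ballet looks like this. A minimal sketch using the PangoCairo C API; the font name, sizes, and output path are arbitrary, and error handling is skipped:

    /* Minimal PangoCairo sketch: render a UTF-8 string to a PNG.
     * Build (assuming pkg-config and the pangocairo dev package):
     *   cc demo.c $(pkg-config --cflags --libs pangocairo) -o demo */
    #include <pango/pangocairo.h>

    int main(void) {
        cairo_surface_t *surface =
            cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 400, 100);
        cairo_t *cr = cairo_create(surface);

        /* White background, black text. */
        cairo_set_source_rgb(cr, 1, 1, 1);
        cairo_paint(cr);
        cairo_set_source_rgb(cr, 0, 0, 0);

        /* Pango does the shaping and layout; Cairo does the rasterization. */
        PangoLayout *layout = pango_cairo_create_layout(cr);
        PangoFontDescription *desc = pango_font_description_from_string("Sans 24");
        pango_layout_set_font_description(layout, desc);
        pango_font_description_free(desc);
        pango_layout_set_text(layout, "Hello from Pango + Cairo", -1);

        cairo_move_to(cr, 10, 10);
        pango_cairo_show_layout(cr, layout);

        cairo_surface_write_to_png(surface, "hello.png");

        g_object_unref(layout);
        cairo_destroy(cr);
        cairo_surface_destroy(surface);
        return 0;
    }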

By the way, if you think this is interesting, you might also be curious to read about how fonts are installed/registered on Unix systems. It is a rabbit hole involving a key piece of software on pretty much every computer in the world called Fontconfig (also by Behdad) https://en.wikipedia.org/wiki/Fontconfig

Text and fonts are HARD. Give a hand to those who work tirelessly behind the scenes on this stuff for us.

( Shameless plug of my site https://fontpeek.com/ )


AA is bad if you do not have a concept of how gamma works.

If you have a program[1] that does not understand the difference between sRGB and Linear RGB, the color fringing is extremely nasty, especially in light-on-dark situations. If they don't get the gamma wrong, they get the blend mode wrong (cough Alacritty at one point cough).
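
A rough sketch of the difference, assuming the standard sRGB transfer functions; the blend math is identical, only the space it happens in changes:

    #include <math.h>

    /* Standard sRGB <-> linear transfer functions (per the sRGB spec). */
    static double srgb_to_linear(double v) {
        return v <= 0.04045 ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4);
    }
    static double linear_to_srgb(double v) {
        return v <= 0.0031308 ? v * 12.92 : 1.055 * pow(v, 1.0 / 2.4) - 0.055;
    }

    /* Blend one channel of glyph coverage over a background.
     * fg, bg are sRGB-encoded in [0,1]; coverage is linear alpha in [0,1]. */
    static double blend_wrong(double fg, double bg, double coverage) {
        /* Blending the encoded values directly: the classic mistake that
         * makes light-on-dark text look thin and fringed. */
        return fg * coverage + bg * (1.0 - coverage);
    }
    static double blend_correct(double fg, double bg, double coverage) {
        double f = srgb_to_linear(fg), b = srgb_to_linear(bg);
        return linear_to_srgb(f * coverage + b * (1.0 - coverage));
    }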

The fix, honestly, is moving to 200% resolution screens (such as 4k 24" replacing everyone's 1080p 24", or 2880p 27" replacing everyone's 1440p 27"), and using greyscale AA: fonts are now so big, pixelwise, that hinting doesn't screw up fonts, hand hinting is no longer relevant, and any given renderer has a lot of trouble screwing it up. Greyscale with wrong gamma and/or wrong blend mode isn't fatal, it is fatal with subpixel.

1: Such as ClearType on Windows, but not the modern DirectWrite engine. Win10 tried to improve ClearType to make it look more like DirectWrite, but only made its error less harsh. Early versions of Freetype that had subpixel rendering also did this wrong.


Fully agree.

Moving to 4k screens (200+ dpi) is the biggest upgrade you can do for text readability, and I suggest anyone who hasn't tried it do so and see for themselves.

I recently had the opportunity to switch both laptop & desktop screen to 4k to avoid hidpi switching issues and I'm glad I did. While the laptop's battery definitely suffers, this is such a massive upgrade for your eyes it's completely worth it.

Hinting and AA are still relevant though. I suspect this is still going to be the case until we _exceed_ 300 dpi. As you say, the difference is that now FreeType's autohinter is good enough to substitute for hand-hinting, and grayscale AA removes the subpixel rendering issues.

Most of the other text shaping issues mentioned in the article are still a problem though. Text rendering is hard. On top of that, both Chrome's and Firefox's text engines are especially bad at rendering text on a physical pixel grid, which is quite ironic given that "text" should be the first and foremost thing they present. FF was quite decent until the webrender switch. Pretty sad.


Unfortunately at 4k nothing over 22" is 200+ dpi. It's definitely better than the run of the mill 90-100 dpi monitors but not enough to fully eliminate the need for AA. You need to push around a great amount of pixels to have crisp text with no AA. Presently 5k desktop monitors go for $2.5k+, 8k for $4.5k, which honestly is not terrible but still quite expensive. Even those are at 32"+ which defeats the purpose (still below 200 dpi). It seems like the desktop display industry is not interested in pursuing this, going instead for faster refresh rates, as such it's unlikely that we will see 250+ dpi displays on anything other than sub-15" laptops and phones.


DPI is the wrong measure. What matters is dots per degree. If the device is 2x further away, it can have half the DPI. At typical viewing distances, my phone covers about 1/3rd the fov of my monitor, despite being a 16x smaller screen. As such, 4k is totally sufficient for pretty much all screen sizes. If the screen gets bigger, you get further away.
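
Back-of-the-envelope, using the small-angle approximation and the commonly quoted "retina" rule of thumb of roughly 60 pixels per degree (about 1 arcminute per pixel). The densities and distances below are just illustrative, not measurements:

    #include <math.h>
    #include <stdio.h>

    /* Approximate pixels-per-degree for a display viewed head-on.
     * dpi = pixel density, distance_in = viewing distance in inches. */
    static double pixels_per_degree(double dpi, double distance_in) {
        return dpi * distance_in * (M_PI / 180.0);
    }

    int main(void) {
        /* Assumed, illustrative numbers -- plug in your own. */
        printf("27\" 4K (163 dpi) at 30\": %.0f ppd\n", pixels_per_degree(163, 30));
        printf("24\" 1080p (92 dpi) at 30\": %.0f ppd\n", pixels_per_degree(92, 30));
        printf("phone (460 dpi) at 12\": %.0f ppd\n", pixels_per_degree(460, 12));
        return 0;
    }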


Yes and no. For laptop screen sizes (15" and less), 4k is currently pretty good for text quality. However, for ~27" monitors, even accounting for a greater viewing distance, we're still a bit short.

My 14" laptop has the same pixel count as my 27" monitor. Sure I keep my laptop closer to my face normally, but I'm not keeping my 27" far enough to have comparable density (for my current viewing distance, it should be more than double to match the dots/degree).

That being said, I cannot overstate how much difference this makes for text quality, despite still not being ideal.


I once tried an 8K monitor and it was amazing how big a difference it made. The tiniest differences in fonts were easily visible. It was like a high quality glossy print that could move and update.


Honestly for me, I can't see any difference between 1440p and 4k on my 13 inch laptop.


Try it on a mac.

Since Apple removed subpixel AA everywhere by default, unless you re-enable it you get absolutely gorgeous text rendering on the internal screen, but as soon as you plug in a non-hidpi external screen (such as my 34” widescreen, which has half the physical resolution of the laptop) it devolves into a blurry and barely readable mess.


    It seems like the desktop display industry is 
    not interested in pursuing [300dpi screens at > 22"]
My not-very-informed guess is that it's largely (or at least partially) a yield issue.

Insanely high pixel counts => More chances of a bad pixel

Huge screens => Lots of waste when you have to throw a panel away or sell it as a B-quality screen because of dead pixel(s)

I think they'd love to sell us insane 8K 32" screens but, doing it in an economically feasible manner is another story.

Of course I'm just guessing. Not sure how much of the problem is yield, how much is waiting for baseline graphics hardware to catch up, and how much is lack of interest/demand.


Yet TVs are sold with huge pixel counts and even bigger sizes at much lower prices. So I don't think the yield is an issue or they'd be affected even more.


I personally run a 1080p projector and it's working great for me. I'm also keeping my display costs under $1k which is pretty nifty. I do understand we've got folks from all walks of life here, but $4.5k for a display alone is a pretty ridiculous opportunity cost until you've literally got so much money that you don't care anymore.


150 dpi definitely feels sufficient to me.

And honestly I prefer grayscale even on 90 dpi monitors sometimes.


Sadly you need a 5k screen for a 27" display to get to 200+ DPI: https://bjango.com/articles/macexternaldisplays/

The LG UltraFine 5K is the only affordable display on the market with that resolution, and it's plagued with issues.


My understanding is the newer 5k by LG is far less troublesome - as shared with me by someone who's had three different units over the past 5(?) years.

I’ve had one for the past year with zero issues also.


Yes, I bought one last year and it has been fine, no problems at all.


> Moving to 4k screens (200+ dpi) is the biggest upgrade you can do for text readability, and I suggest anyone who hasn't tried to do so and see for themselves.

I've got an M1 laptop and it is indeed very nice but... I love my 38" ultra-wide doing 3840x1600 pixels at only 120 ppi (still better than, say, 90 or 100) way more than my M1 laptop's screen: I simply don't like the physical aspect ratio of the 200 dpi monitors up for sale... They're way too "squary" for my taste.

I'm really a happy camper with my non-retina ultra-wide even though I tried (and use nearly daily) a retina display.

I think there are two problems with current retina offerings: not enough choice in resolution / physical aspect ratio, and refresh rates that aren't good enough.

I'm sure they'll get there, eventually. Meanwhile I'll keep working on my 120 ppi monitor.


On the Surface Book’s 267dpi 13.5″ 3000×2000 screen, I found subpixel antialiasing to still make a visible improvement over grayscale (yes, with being careful about gamma and such to avoid the confounding factor of different stem darkness/thickness), though it was definitely much slighter than at half the resolution.


Using what rendering engine?

At these densities, increasing the autohinter strength (going from slight to medium/full) might yield comparable results with AA without having to enable subpixel rendering.

I've always used subpixel rendering along with full hinting strength to get sharper results so far, but I tolerated the fringing more than anything else. I'm quite glad to sacrifice the font shape instead.


Mainly with Windows’ own renderers (ClearType, GDI+, &c.), but in order to be unambiguously sure I did deliberately calculate and craft a quite simple example on one occasion, pixelwise.

You’re quite right about the value of stronger hinting at such sizes.


> The fix, honestly, is moving to 200% resolution screens (such as 4k 24" replacing everyone's 1080p 24", or 2880p 27" replacing everyone's 1440p 27")

I'd absolutely love to do that but there are only 2 or 3 monitors available on the market right now which would work well with integer scaling at 4k or higher resolution. The Apple Pro XDR, Dell 8K, and LG 5K monitors are the only ones that come to mind. There is an old 24 inch 4k monitor from LG but it wouldn't be a wise decision to buy it at this point.

It seems that all the progress in monitor hardware is happening towards increasing refresh rate to unreasonable and unnecessary standards.


This is a huge disappointment in the computer monitor space, and not just because of text rendering! It’s such a bummer that Apple’s Retina displays on Macs didn’t lead to high resolution displays taking over in the desktop market. My $2000 iMac from 2015 is still my best desktop monitor, and there’s still no remotely affordable equivalent option on the market except the LG 5K that Apple sells (which is well over half the cost of a new iMac).


I've resigned myself to cynicism when it comes to certain things such as IPv6 and integer scaling HiDPI monitors. I doubt I'll see either of them for the next decade or two, perhaps more.


I tend to agree, but the reason this example is particularly weird is that these displays have been common in Apple's computers for nearly 10 years now.


I don't think those displays are available for retail purchase as an external monitor if someone doesn't want to use a Mac, at least not where I live.


Yeah, and it’s annoying even for Mac users (my use case is literally to have a 5k external monitor to use with my MacBook Pro).


Yeah I have this LG. It's amazing.. such a shame they stopped making a 24" model and I'm really worried if this breaks because there's no replacement. It's the 24UD58-B and cost only 220€ at the time. I wish I had bought two.

Apple seem to be the only one doing 24" 4K now and I don't want a computer attached.


> I'd absolutely love to do that but there are only 2 or 3 monitors available on the market right now (...)

Or, you know, just use good bitmap fonts on whatever monitor you have. Bitmap fonts look crisp on any low-resolution monitor.


>unreasonable and unnecessary

speak for yourself. I'd rather have 1080p 144hz than 4k 60hz


> I'd rather have 1080p 144hz than 4k 60hz

If you do gaming all day, then yeah, sure, go for a million Hz. I work with text on my terminal all day and fonts on a 24 inch 1080p screen look like crap compared to 27 inch 4K.

Besides, I was calling out displays with 240Hz or 300Hz+ which seem stupid but I guess if people will buy it thinking that they'll be able to make more kills in CSGO, monitor OEMs will make them.


So we should simply throw away all non-"Retina" monitors just so software developers don't have to bother with the intricacies of subpixel rendering anymore? Not really good for the environment...


There was a time when people ditched their perfectly fine 8 bit LUT based graphic cards and computers for 24 bit true color ones. It made life much easier for developers. All the effort to manage a color palette within a program, and to try to do something reasonable to support more than one program on the screen at a time just went away, not to mention the choices around dithering algorithms. It was glorious.

Roughly speaking, we’ve had 25 years to get sub pixel vector glyph rendering right, and it still is consistently wrong in many places.

It is a clever hack whose time has come and gone, I will not miss:

- fails miserably if you rotate your screen.

- only works for horizontal text. Anything rendered at different angles looks different.

- forces a different spatial resolution for brightness and color, in one axis but not the other.

- sensitive to the monitor’s physical subpixel layout. You really can’t rely on an end user to tell you the right answer after they plug in a monitor.

- color fringing is a thing, more so for people with better eyesight. Some people see it, some don’t.

- difficult to reconcile with high dpi printing to make sure what you see on the screen approximates what you are going to get.

- it’s hard to know if the person you send this to will see the same thing you do. Different platforms, renderers, and hardware mean they may see something different.

So, it was pretty good for horizontal text on a monitor where you knew the subpixel order and you got everything right so it approximated the eventual printed output. But its time has passed.

Hand down that monitor to someone’s video game or media watching needs and if you deal with text, get a high dpi monitor.


I'd still take occasional antialiasing failures over common HiDPI scaling failures, any day of the week. Antialiasing wins hard in terms of both severity and frequency.

If a program has broken subpixel antialiasing, it may be slightly more annoying to use for long stretches of time. If a program has broken HiDPI scaling, it's completely unusable with a monitor that requires it.

> - only works for horizontal text. Anything rendered at differently angles looks different.

Anything rendered at non-90° angles is going to be annoying to read anyway. It's not like there's this magical world of 78° text just waiting to be discovered, if only we could get rid of all subpixel antialiasing.


Meanwhile I'm happy with perfectly crisp text rendering in Windows at arbitrary DPI and had to abandon Linux because it looks like shit on 27" 4K.


I have 4 4k monitors connected to my Linux machine and they all look fine, even in vertical orientation.

However, Windows is far superior when it comes to fractional scaling.


Which is exactly the problem. I can't stand Linux/Mac style scaling. At my preferred 1.25 or 1.5 it just looks terrible.


Much easier? Most of the work I do is black and white.


I miss monochrome for programming. I had a 19” portrait monochrome CRT back in the day. Still maybe the best monitor I ever used. Never had a color fringing issue!


You can buy eInk monochrome displays now. You might be interested in them (https://shop.boox.com/products/mira) .


I didn't even mean that. I mean I have a color screen but the pages are usually white background with black text or black background with white text.


Linear which RGB? I'm being pedantic here, but this will start being a problem as wide gamut displays become more commonplace. A colour space such as sRGB is first a definition of where in CIE space the three primaries are, and then a transfer function for how linear values can be encoded in various formats. For fonts, you should probably render them in 16 bit, keeping only coverage data, i.e. linear alpha channel only. Then you can composite in the same linear colour space as your display, but with more bits per pixel. This wasn't done in the past, because it needs custom shaders for the compositing and requires quite a lot of memory.

Another option would be to rasterise directly in the final buffer, GPUs are powerful enough these days to render vector graphics in a shader. Or you could go the Direct Write route and discretise the vector outlines to triangles and let the built-in AA handle it.

In short, I think we're at the point where either you're writing for a system that can handle rendering everything in 16bpc half-float linear colour, or it's so weak that pixel fonts are the most you can do. Either case doesn't suffer from gamma issues, as you apply the gamma only in the final step.


I understand what you mean, as I too have said these things.

Linear RGB in the sRGB/Rec 709 color space is the correct interpretation. Monitors that are not sRGB/Rec 709 primaries with a Rec 1886 gamma ramp (as opposed to the exactly defined sRGB gamma ramp, please stop using this) outside of a strictly color-managed workflow should be replaced with ones that are.

Now, if your UI compositing system is modern, then yes, just output 16 bit linear and let the OS's color management handle it (ie, Vista and up using modern APIs, which is what DirectWrite does); however, the majority of software devs only understand "2.2" or "sRGB" (which is incorrect; Rec 1886 pragmatically is 2.35, sRGB matches 2.35 inside of the 16-255 range; sRGB has never been 2.2, it is 2.2 with an offset of 16 starting at 16, with 0-15 being linear), ergo, I wrote this with the average software dev in mind.


> The fix, honestly, is moving to 200% resolution screens (such as 4k 24" replacing everyone's 1080p 24", or 2880p 27" replacing everyone's 1440p 27"),

GNU/Linux is supposed to be usable even on old hardware so that is not a good solution. There are many ways to improve font rendering for low-dpi screens but maintainers are often unwilling to invest time in that because it makes the rendering algorithms more complex and they reason that most will get high-dpi screens eventually. There used to be a patchset called Infinality for FreeType and FontConfig that significantly improved subpixel rendering. It was discontinued as some (but not all) of its improvements were merged into FreeType.

On Windows you can tune the subpixel rendering parameters using a configuration tool. That's something I wish existed for Linux desktops because screens, people's tolerances for color fringing, and light settings are all different. Unfortunately FreeType's subpixel rendering parameters are hard-coded in the source and can't be configured.


> On Windows you can tune the subpixel rendering parameters using a configuration tool.

Hasn't worked for me for a long time. At least insofar as consistently choosing the grayscale samples until the tool shows only grayscale doesn't actually result in grayscale text elsewhere. Also, if you rotate your screen, it still keeps the original bgr vs rgb setting. Subpixel rendering is generally a bit broken everywhere. It's especially pleasant when it's used and then scaled up by the OS, making the fringing no longer subpixel.


> Greyscale with wrong gamma and/or wrong blend mode isn't fatal, it is fatal with subpixel.

It might not be fatal with black-on-white or white-on-black, but grayscale is pretty much still horrible with colored text and background.


Which is why the greyscale should be done with an alpha channel, so it correctly blends with the background, and supports coloured text.


That would be fine, if alpha blending were handled gamma-correctly in most software, but it isn't. Things get worse with alpha and subpixel-AA; that is just cursed.


> such as 4k 24" replacing everyone's 1080p 24"

The problem is the prevalence of 1080p. 1440p on 24" becomes 'retina' at 28" viewing distance (vs 37" for 1080p and 19" for 4k). Grayscale anti-aliasing at that pixel density is really good, gaining you a few inches of viewing distance. I would personally like to see something commonly available between 1440p and 4K (the jump in resolution is abnormally significant).

If you've worked on a high refresh rate it's also really hard to drop down and, currently, it's still mostly a choice given the bandwidth and compute requirements. The difference between 60Hz and 144Hz is like the difference between 480p and 1080p, even for tasks like text editing. 60Hz is fucking awful.


I wrote a small non-shaping rust rasterizer for geometry (Live demo https://mooman219.github.io/fontdue/ ). I definitely agree that higher density screens are the solution, but bad AA and linearization can really reduce glyph quality well before then.


I run 24" @ 4K at 200% scaling and it's amazing.

But 24" 4K is very hard to get now. All 4K screens I can buy here are 27" and up..


Another Fun(tm) rendering bug: In Firefox, when a word is split across lines between two characters normally connected by a ligature, the ligature doesn't get disabled and the two characters rendered separately, but the ligature is sliced in half and rendered across lines:

- https://bugzilla.mozilla.org/show_bug.cgi?id=479829

- https://bug479829.bmoattachments.org/attachment.cgi?id=36373...


The fun thing here is that as soon as I read Firefox, I knew it was going to be the split ligature issue. There are so few text rendering issues in Firefox, and that’s probably the only one most people will ever encounter.


My reaction to this was how would this ever be a common issue? When would you naturally have a line break in the middle of a ligature.

The screenshots show the answer: auto hyphenation. But at the point you're dropping a hyphen between the elements of a ligature, how do you not break up the ligature into two independent glyphs?


I assume the basic issue is that by the time they've decided how big the text is (and therefore if/where they need hyphenation), they've already decided to use the ligature.

Dynamically deciding whether or not to use the ligature when it's at the line break seems expensive... refusing to do a ligature in the presence of a soft hyphen and otherwise treating the ligature as a single unit that cannot be broken seems like the more tractable way out of the problem... but I'm sure I'm avoiding or unaware of some additional piece of complexity.


There is something profoundly weird with text rendering in Firefox. Having used Chrome for years and then switched, for a while I tried all the different antialiasing modes in the config, but none of them look the same as Chrome somehow; it's all just a tiny bit off and it's so unnerving.

Also if you have an input box in html with a size value set to the number of chars you need, that value will be always too short in Firefox, so you have to set it to some multiple which then renders too long in all other browsers where it resolves correctly. I think it's been logged on bugzilla and closed as wontfix, as usual. Typical firefox development cycle.


> There is something profoundly weird with text rendering in Firefox.

Font rendering differs between OSes. Are you on Windows? That's the OS with the most Firefox font rendering options. On Windows Firefox, I generally clear the list of fonts rendered in GDI rendering mode. IIRC you can set font rendering mode 0 and disable enhanced contrast to match Chrome closer.

> Also if you have an input box in html with a size value set to the number of chars you need, that value will be always too short in Firefox

Ouch if true.


Ah yes, I never really bother to install Chrome on any of my linux machines, but my main one is still Windows and that's where I noticed it.

Why should something like font rendering differ between OSes, that makes no sense. A browser should render the exact same image on all platforms. But then again I forget we don't live in a world where anything makes sense.

> Ouch if true.

Yeah see here: https://i.imgur.com/p6sALTq.png

Input field, size specified as 6. Now that I'm looking at it again I suppose it's clear that Firefox ignores the arrow box width while making it visible all the time...? Not entirely font related but it's still rather atrocious.


> Why should something like font rendering differ between OSes, that makes no sense.

Because application font rendering (FreeType and GTK/Qt on Linux, GDI/DirectWrite on Windows, Core Text on Mac) differs between OSes, and browsers generally try to fit in with OSes and reuse native APIs, rather than roll their own font rendering and fail to respect OS-level font configuration settings. And even if every browser chose cross-platform font rendering, different browsers might pick different styles. Note that old versions of Safari on Windows interestingly did use Mac-like font rendering.

...but Windows Firefox has its own font configuration settings (GDI for Arial and Verdana, enhanced contrast) ignoring the corresponding OS settings... oops.


Firefox has other issues with stuff in input boxes, too. If a site puts a "show password" clickable in their password box, most password managers take a few clicks if not outright resorting to right clicking to fill the fields.

But I'd rather be occasionally annoyed by "browser by committee" than use chrome, edge, or whatever.


> Greyscale-AA is the “natural” approach to anti-aliasing.

> Subpixel-AA is a trick that abuses the common way pixels are laid out on desktop monitors.

I disagree with this notion of subpixel-AA being "abuse". If anything, grayscale AA is a simplification, assuming that each color component of a pixel is emitted from the same area. Of course subpixel-AA is more complicated, but it doesn't mean that it's abuse.


It's a hack/trick since it exploits a hardware layout that wasn't meant to be used from the software side.

Like most hacks, it can work for some common use cases but will have a lot of edge cases:

- how to detect the user pixel layout? on web, native applications..

- what about users who switch between landscape/portrait orientations?

- does it make correct gamma rendering more complex? switch to dark mode harder?

Also, I cannot find a simple implementation, focused on subpixel-AA. So people will probably have to start from scratch..

Here is the most interesting link I found: https://www.grc.com/cttech.htm


Yes, it's more complicated, and parts of it might need hacks, as there is no standard way to get the subpixel layout. That's a failure of those display APIs though, not a failure of the concept of subpixel-AA itself.

> does it make correct gamma rendering more complex? switch to dark mode harder?

Most text rendering is gamma-incorrect anyway, AFAIK, subpixel-AA or not. It doesn't make it more complex, just work in a linear colorspace either way.


> - how to detect the user pixel layout? on web, native applications..

99% of the monitors are RGB. BGR monitors are an oddity.


But they do exist, as do other sub-pixel layouts (pentile, RGBG, ...). If you use sub-pixel rendering assuming RGB you are going to look worse on some devices (many Samsung phones, IIRC).

Also, if you use sub-pixel tricks in static resources such as images (or your text renderer is not aware that the sub-pixels may not always be arranged horizontally) you are going to have colour halo effects when a device is rotated. Windows is doing that (optimising for horizontal RGB, when vertical RGB is the reality) right now on the screen I have rotated to portrait aspect. It isn't stark enough that I can see any colour halo at this dot pitch, in fact the effect is very subtle, but seen side-by-side text looks a little blurry on that screen compared to the other screen (practically the same dot pitch, but landscape so actually is a horizontal RGB layout). Someone with better eyes (mine are somewhat shite) might find it more irritating than I do. I must get around to telling Windows to just use greyscale text trickery...


My CRT would like to speak with you.

More seriously, it seems every phone screen invents its own pixel layout and of course, can be rotated 90° at any time.


The fact that there are many different pixel layouts on phones is kinda irrelevant since they've largely all converged on retina resolutions for their viewing distance, making subpixel entirely pointless.

For this reason, to my knowledge no one has ever bothered designing pentile subpixel rendering or whatever. Like I'm sure there's a cute tech demo but no one has any reason to seriously design and ship such a thing.

Same situation with the vast majority of macs. Apple just disables subpixel at the OS level since most things are retina anyway.

It still has its practical applications when you have the kind of userbase Firefox or Chrome does, but yeah it's increasingly easy to just Not Bother.


I use one monitor in portrait mode, one in landscape.

Something relatively common for devs.


As soon as phones, tablets, and hybrid laptops (i.e. surface, etc) enter the picture you can no longer assume RGB because users rotate the device. People now stream PC/console games on their phones, too.

Incidentally 'vertical sync is vertical' is also no longer a correct assumption. In some cases if you rotate a device now the tearing (when vsync is off) is vertical instead of horizontal, which is really disorienting if you're not used to it.


Subpixel AA literally gets the colors wrong at the boundary, so calling grayscale AA a simplification is a bit too much IMO. Some people prefer subpixel AA and it might even be a matter of accessibility for others, so its existence is a good thing I guess, but it is not grayscale that makes any false assumptions.


Pixel boundaries on your display are imaginary, all you have are RGBRGB light sources next to each other. The pixel boundaries are determined by your left-most and right-most light sources on the display, so you have |RGB|RGB|..., but it really shouldn't affect how you render text in the middle of your screen.

edit:

> it is not grayscale that makes any false assumptions

Grayscale assumes that those pixel boundaries are real, and they are relevant.


How do pixel boundaries relate to the fact that you cannot avoid color fringing with subpixel AA? You could shift your whole framebuffer (or maybe just a glyph, with possible caveats) by a subpixel or two and have GBR or BRG pixels, it doesn't matter. What matters is that you need to keep the hue constant across triplets of subpixels to avoid fringing, and this is what grayscale does, and what subpixel does not do because it is its whole point. Subpixel basically assumes all your subpixels are the same color.


This assumes a naive and wrong way to do subpixel-AA. See my other comment: https://news.ycombinator.com/item?id=30330936


Is there any subpixel text rendering implementation that avoids color fringing to such a degree I cannot see it, or not? It's certainly there on both Windows and Linux.


I don't know if such rendering is commonplace, but I know for sure that imageworsener handles this correctly for downscaling images. Check out the option `-offsetrb 1/3`.

https://entropymine.com/imageworsener/subpixel/


I see there is some interesting stuff in the parent page as well, thanks :)


I don't know what you can and can't see, but I could not tell the difference between gray scaling a screenshot and not, on my mac back when they still used colored subpixel AA (unless I zoomed in on the screenshot).


Yeah it is a subjective and configuration-dependent thing for sure.


This assumes you actually have RGB light sources in that order. There are (were?) consumer displays with RGBW or other alternative arrangements out there, not to mention panels with a different color ordering. And if you rotate or invert the display, the ordering becomes different.

An ordinary consumer will never understand what's going on here, so if your code assumes RGB subpixel rendering for text is always correct, it's just going to look ugly to them for no reason.


I was assuming that to make a point. The actual subpixel order can either be retrieved by an appropriate API (if available) or be provided by the user (display settings). Point being, subpixels don't usually overlap on the same area, which is what's grayscale's assumption. I agree that if subpixel order is not known, then grayscale is a sane default.


If your stroke is 7 subpixels wide, it will be, for instance, RGBRGBR. That has three Rs, two Gs, and two Bs. That is 50% more red light than blue or green. That is tinted red, no matter how you look at it.

Subpixel antialiasing does distort colours. That is just a physical fact.

The only reason it works at all is that on average, it will even out to a uniform white. But in the details, it will not. Especially around straight vertical edges.


That's a very naive and wrong way to do subpixel-AA. Roughly, to calculate the light intensity of a red subpixel, you must also take into account the red light intensity around a whole pixel area around that subpixel, centered on that subpixel. There are better filters to use here, this would be a box filter, which is 1-pixel wide.
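
A rough sketch of that 1-pixel-wide box filter, assuming `cov` holds glyph coverage sampled once per subpixel (three samples per pixel). Real implementations spread energy with wider filters (FreeType's LCD filtering uses a 5-tap FIR), but the idea is the same:

    /* Filtered value for subpixel i: average the coverage over a window one
     * pixel (three subpixels) wide, centered on that subpixel, so a hard
     * vertical edge bleeds into its neighbours instead of tinting. */
    static float subpixel_value(const float *cov, int n, int i) {
        float sum = 0.0f;
        for (int k = -1; k <= 1; k++) {       /* 3-tap box, 1 px wide */
            int j = i + k;
            if (j >= 0 && j < n) sum += cov[j];
        }
        return sum / 3.0f;
    }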


No, it is not. It is the physically correct way to do it. 50% more red light is emitted than blue or green. That is a red tint. Your eyes aren't going to see that as anything other than a red-tinted line. That is what you will see. Sub-pixel antialiased text looks like it has colourful fringes. People see this. It is not in their imagination. It actually is there, and it is exactly because of sub-pixel antialiasing.


>No, it is not. It is the physically correct way to do it. 50% more red light is emitted than blue or green. That is a red tint.

That is a red tint. And that's why it's not the correct way to do it. A subpixel antialiased white line that is seven subpixels wide, on a black background, should not produce (8-bit RGB triplets): 0-0-0, 255-255-255, 255-255-255, 255-0-0.


Then that means you can not render full white.


Full brightness white is supposed to be reserved for light sources anyway, definitely not text background !


I’m not agreeing or disagreeing with any of you as I don’t have enough knowledge on the topic, but what you say would effectively mean that there is no other possible color besides RGB, which is technically true, but is a nigh useless distinction as color blending is a real phenomenon that happens to human vision. In this way, subpixel hinting, while a hack with quite a few tradeoffs, just hijacks color blending used for producing color normally on your screen for sharper fonts.


> but what you say would effectively mean that there is no other possible color besides RGB

It would not at all mean that? I don't know why you would think it means that.

All I am saying is that a if you draw a vertical line 7 subpixels wide, the light it emits will not add up to white. This is a simple physical fact.

To make it easier to imagine, think of a one subpixel wide vertical line. It will be either pure red, pure green or pure blue depending on position. In no way will that ever look white.


> Pixel boundaries on your display are imaginary, all you have are RGBRGB light sources next to each other.

Kinda. From my testing (in my subjective perception) several years back, grayscale text looks slightly colored at the edges, but subpixel-rendered text looks more colored in the opposite direction. And in my testing, vertical lines look more fringey if they take up GBR or BRG subpixels than RGB. I suspect these are because the green subpixel is perceived as the brightest by humans, and looks better in the middle of a pixel, and RGB and BGR put it in the center so it works out.


There can be various causes for those symptoms:

1. Even if the rendering gets the subpixel order right, the subpixels are rarely equidistant on a display. They tend to be closer to each other within a logical pixel.

2. Some subpixel-AA algorithms are naive, and trade color accuracy for luminance accuracy, causing more color fringes.

3. Incorrect gamma handling.

4. If aiming for more luminance accuracy, incorrectly taking into account the relative luminance of the subpixels (what you observed with the brighter green subpixels).

Imageworsener[1] has an option to downscale with subpixel antialiasing, it's gamma correct by default, and it has an option for adjusting the subpixel offsets within a pixel (instead of the naive default of 1/3). It might be a good way to try out correct subpixel-AA, but it only works on images. I guess you can render text at high resolution and downscale that with imageworsener, though you won't get any hinting that way, but maybe that's desired.

Very little software does AA correctly, let alone subpixel-AA.

[1] https://entropymine.com/imageworsener/


I'd actually prefer a Bayer panel over the traditional stripes, if it's sufficiently dumb in "native" mode and there exists a post-process shader for the GPU to monolithically handle reverse-debayering, overdrive (LCD response speeds are non-linear), dithering, and color-mapping. Ideally, the latter even able to use a head tracker to compensate for viewing-angle-dependent gamma curves (the main pitfall of MVA panels for semi-static content).


I've been wondering for a while why LCDs have settled on RGB instead of the clearly superior Bayer tiling. Any ideas ?


Bayer can't display pixel art at 1:1, just like with bitmap fonts. If you want crisp color, you need to be aware of the Bayer tiling, and ideally you'd want to do the inverse-debayering on the source-side of the display cable, because that way you can save 4x bandwidth (you can treat a Bayer panel like a High-DPI one while rendering, but Bayer-pixels need to match RGB source pixels for crisp results).

Do keep in mind that many TV panels are Bayer panels, as they tend to consume yuv420 or yuv422, if fed externally, which greatly reduces the bandwidth penalty of doing the 24bit to 8bit per-pixel color depth conversion in the display's panel-driver-GPU.

For text rendering there are some alternative subpixel layouts like Pentile in use for smartphones, but patents and software/hardware integration (software in this case including Freetype or whatever replacement is used) inhibit usage of these for general purpose desktop monitors. One large benefit of these tends to be their vertical/horizontal resolution being matched, allowing both portrait and landscape mode to be crisp. Also, less physical pixels to drive take less power to drive.


I guess that my question is also why source pixels would be "RGB", when "Bayer" is closer to the human visual system.

Because it's easier for people that don't know much about the human visual system to work with a (more) flawed model ?

However, while abstractions have a benefit, it's kind of weird that even the "closest to the metal" model seems to often take "Bayer/420/422" as more of an approximation, rather than "RGB" being one ?

Do you see how weird it is that the above-mentioned TV panels are NOT (?) Bayer panels physically ?


If text rendering hates you, an editable text renderer is 10x worse.

Anyone who has spent any time working on contenteditable or any of the browser based rich text editors knows what a nightmare it is. The temptation to abandon all hope and start from scratch with a canvas element and build your own renderer and editor is strong. You start researching how to do it, maybe even create a quick prototype, but quickly discover all the edge cases in this article and more.

People who have spent significant parts of their career fixing all the inconsistencies between browsers and devices such as Marijn Haverbeke with ProseMirror (and CodeMirror) should get a Gallantry award for the work they have done.


https://lord.io/text-editing-hates-you-too/

Generally speaking, implementing this stuff from scratch is a terrible idea; especially on the web, it’s not possible to implement this stuff from scratch without losing platform functionality that is not exposed—though in the last couple of years the primitives have reached the point in functionality and consistency where you can get surprisingly close on most platforms. But things like touch grippies can’t be done, and I wouldn’t be surprised if various fancier touch keyboard interactions like swiping space bar to move the caret can’t be done (I haven’t tried, and I’m not sure how they’re implemented).

Things like CodeMirror are generally good, but it’s still not terribly uncommon for them to be completely broken on some less common but still up to spec and maintained platforms, because they’re simply being too clever for the web as a platform and I wish they’d relax and stop trying to be quite so clever.


It makes me marvel at Google's engineering since they re-built Docs from scratch using canvas.


Exactly, the new Google Docs is a great example of how it can be done, but only by a company with the resources they have.

I have wondered if the solution is to use WASM to take a desktop rich text editor and make it available in the browser within a Canvas element.


No, new Google Docs sucks

For one, extensions that help with translation no longer function because there is no text for them to read.

For two, for any language that uses an IME (Chinese, Japanese, Korean, Thai, etc...) the experience is super subpar.


Yeah I'm really interested in seeing something like that, for general purpose. I'd love to see what's possible with WASM + WebGPU for stuff like this.


Yes, and Flutter, the web version, also does it from scratch. Which means CJK sucks on Flutter


Marijn is a wizard and ProseMirror is an incredible piece of software, but Marijn isn't dealing with text rendering issues. I'm obviously oversimplifying but here goes: ProseMirror is a wrapper over contenteditable and gives us all the control we need. However, it's still the contenteditable that takes care of text rendering and cursor selections.

The real people dealing with those issues are folks working on Zoho Writer or Google Docs - who ship their own text layout engine as part of their rich text editor :)

If there's an open source initiative that wraps over ProseMirror but renders to its own layout engine - that would be one kickass project!


In 2004 or thereabouts we were building a custom in-house CMS and wanted to make an easy to use wysiwyg editor with just a select few features: bold, italic, links, images, headings. I mean, how hard can it be, right? Oh boy. After much developing and cursing we just defaulted to TinyMCE or whatever it was at that time. 1/10, would not try again.


FCKEditor was a bit cleaner, IMO. But only marginally.

Also, there was ‘WYMeditor’, which displayed the semantic structure of text instead of a WYSIWYG rendition—so I kept meaning to try it out some time, but never got around to it in earnest. Textile and Markdown took hold before I again had a need for a complex editor, which is probably for the better.

(Now, if we convince all the ‘whitepaper’¹ authors that their incredibly complex layout of ‘a wall of text and occasional pictures’ doesn't require PDF and that I don't have a 14" portrait-oriented screen—that would be great.)

(¹ as opposed to greenpaper, I guess.)


I wish we could've used Textile back then, I used it for my personal blog, but it was a no-go since the users were Word-people who literally made html pages with Word and tried to upload those to the CMS. Oh, that brings back memories... Trying to clean up the mess that resulted from users pasting rich content from Word into our content editor and then wondering why it broke their site from a certain point onwards.


Ah, that was a dark time for WYSIWYG editing on the web. I actually made my own back in that era [0] (well a little later, but when TinyMCE and CKEditor were still the goto solutions), and getting it to work cross-browser when IE6 was still a thing and had no dev tools was an absolute nightmare.

[0]: https://github.com/nicoburns/ghostedit


And it is still pretty much the same in 2022.


I'm currently trying to get text layout working in a game and the /only/ text control provided by the engine is one that "can" be editable, since it's assumed for non-editable text you'd just use graphics. However, this game has a lot of dialogue and must be translated into multiple languages so that's not a realistic option.

It's a fucking nightmare. It turns out it uses Cairo internally so that's not bad, but I /need/ to know where the last glyph is, and answering questions about how your text is actually going to be laid out in the end is next to impossible everywhere I look.

Since, again, this has to work in multiple languages including CJK I am loathe to code my own text control but I may have to.
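
If Cairo is in the picture, Pango may be reachable too (it's the usual layout layer on top of Cairo), and it can answer exactly that kind of question. A sketch, under the assumption that you can get at the PangoLayout the engine builds:

    #include <pango/pango.h>

    /* Ask Pango where a given byte index lands after layout, and what the
     * whole block's extents are. Coordinates come back in Pango units
     * (divide by PANGO_SCALE for pixels). */
    static void query_layout(PangoLayout *layout, int byte_index) {
        PangoRectangle pos, ink, logical;

        /* On-screen rectangle of the grapheme at byte_index (e.g. the last one). */
        pango_layout_index_to_pos(layout, byte_index, &pos);

        /* Overall size of the laid-out text, wrapping and shaping included. */
        pango_layout_get_extents(layout, &ink, &logical);

        (void)pos; (void)ink; (void)logical; /* use as needed */
    }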


Have you looked into harfbuzz? afaik it's designed to solve that problem, for all languages.

https://harfbuzz.github.io/
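
The basic shaping call sequence is pretty small, for what it's worth. A minimal sketch (the font path is just a placeholder; link with -lharfbuzz):

    /* Turn a UTF-8 string plus a font file into positioned glyph indices.
     * No layout, no rasterization -- just shaping. */
    #include <hb.h>
    #include <stdio.h>

    int main(void) {
        hb_blob_t *blob = hb_blob_create_from_file("font.ttf");
        hb_face_t *face = hb_face_create(blob, 0);
        hb_font_t *font = hb_font_create(face);

        hb_buffer_t *buf = hb_buffer_create();
        hb_buffer_add_utf8(buf, "مرحبا world", -1, 0, -1);
        hb_buffer_guess_segment_properties(buf); /* script, language, direction */

        hb_shape(font, buf, NULL, 0);

        unsigned int n;
        hb_glyph_info_t *info = hb_buffer_get_glyph_infos(buf, &n);
        hb_glyph_position_t *pos = hb_buffer_get_glyph_positions(buf, &n);
        for (unsigned int i = 0; i < n; i++)
            printf("glyph %u advance %d\n", info[i].codepoint, pos[i].x_advance);

        hb_buffer_destroy(buf);
        hb_font_destroy(font);
        hb_face_destroy(face);
        hb_blob_destroy(blob);
        return 0;
    }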


I'll keep it in mind, but right now I'm trying to only use the classes that are provided by the engine or my own code as much as possible, since adding a dependency becomes a whole /thing/ in cross-platform game engines sometimes.


Not sure if your game has any spatial 3D considerations, but this project does real-time multi-channel signed distance field generation:

[1] https://github.com/Chlumsky/msdfgen


Check out slug if you haven't already: https://sluglibrary.com/


It's pain all the way down على طول الطريق uʍop ʎɐʍ ǝɥʇ llɐ


Maybe it's just on my side but the l's in the flipped "all" seem to be either displayed upright or not at the correct height. Which would only support the linked article.


The flipped text is doing a trick where you pick things from Unicode that merely look like upside down Latin letters. Some of them deliberately are exactly that for whatever reason, some aren't. So the effect isn't exactly perfect and shouldn't be. There isn't actually a plain text way to say "for some reason render this upside down" because like, why.


Alright, makes sense. Regarding the "why", well, Unicode has a lot of weirdness already. Like the subscript/superscript characters that don't cover any complete alphabet. And after seeing various tricks like t̢̬̯̮̥͋͒̔̾̽͘͟ḥ̛̙̟͍̦͉͔̤̝͔̪̮̳̠̩̗͔͕̽̐̏͟͢i̔̊̃̌̆͋ͫ̏̽ͧͪ̂҉̺͚̺̤̳̥̰̬̻̜͈̦̰͙̲͘ͅs̢̓̾̎̃͛ͧͪͬͨ̍͂ͩ̅̑̎͌̚̚͏̗̙͚͉͢ not much surprises me anymore.


Unicode's goals include subsuming all prior existing text encodings. That means ASCII and ISO-8859-1 of course, but also really obscure encodings that maybe only were ever used on a handful of machines. The idea is if your project isn't using Unicode, the reason shouldn't ever be "Oh we need this obscure character - we use it for compatibility with our 1978 Huge Corp X42 mainframe but it isn't in Unicode".

This is often the reason why character X is in Unicode but seemingly equally useful character Y isn't. At some point somebody put X in an encoding on some archaic hardware you've never seen, but not Y. In this sort of case neither X nor Y are things you should emit if you don't need them, but they're in Unicode in case somebody finds a pile of Huge Corp X42 tapes and wants to convert that to Unicode.

In cases where both X and Y, and even the more obscure Z are encoded, chances are Unicode has them all because humans (not just some obscure computer hardware from the dark ages) used X, Y and Z to write stuff, and another Unicode goal is to be able to encode all human writing systems. The over-abundance of combining forms you abused on your word "this" is almost all for that reason.


Those are actually just regular l's, and the "upside down n" is just a regular "u". The more interesting letters are from the Unicode IPA Extensions range: https://unicode-table.com/en/blocks/ipa-extensions/


For extra pain, you can go at the text above with your browser's devtools and change the font-family to something funny like Comic Sans MS.

Turns out text rendering _really_ does hate you.


Yeah CodeMirror is good. I highly respect Marijn's endurance to work on something like that and have it come out so slick.


My most recent war story here is trying to work around the text cursor and selection indicator bug on iOS Safari/WebKit. They are displayed in front of all other elements on the page, even when outside of a scroll area's overflow; super annoying for hybrid web apps.

So I decided to implement my own cursor and text selection. Cursor was ok, just make it transparent and replace it with an element drawn in the correct position.

Text selection on the other hand… it seemed ok at first within a paragraph but the minute you span between blocks, columns or are in tables, oh dear… and that’s before I got to RTL text.

Gave up in the end.


Oh this article is back! This is a shot in the dark, but perhaps someone could help me; I remember a link was posted at one point or another about font rendering as well, but from a design perspective about the considerations that need to be taken when designing a font, how glyphs are connected for different languages, etc. - it was presented quite beautifully, with many highlighted examples. Unfortunately all I remember from the site was that it had a green/teal theme to it :( and I haven't been able to find the link ever since

Does anyone have any clue what I'm talking about? Pointers would be appreciated :)


Interesting read. I remember having to implement Unicode support and RTL in a graphic library (based on some version of InkScape, if I am not mistaken) that was code-page based. I implemented it using Uniscribe. The article 'Uniscribe: The Missing Documentation & Examples' was of great help implementing it. However, when I had to implement fallback fonts (for when the standard font does not contain the character you want to display and you want to get it from a different font) I got stuck, because I discovered that at the time (around 2015/2016) it was not implemented because Microsoft had replaced Uniscribe with something new.

There is also something interesting about the cursor moving through mixed LTR/RTL text. It jumps over a character, meaning that it goes through different locations depending on whether you move forwards or backwards through the text, where forwards means that the cursor moves backwards on the screen when going through a bit of RTL text.


> 6.1 Fonts Can Contain SVG

This is amazing-- it means someone can design a font where glyphs consist only of SVG elliptical arcs!

Ooh even better-- the font designer could place the arcs in such a way that setting large-arc and sweep flags would generate different glyphs.

If someone has not designed such a font in the year 2022 then what are the large-arc and sweep flags even for? How can we seriously continue calling this site Hacker News? Why isn't this very comment full of SVG arcs?!?

Edit: OMG what about animating the large-arc and sweep flags in an SVG-based font? I want a transition that starts at war with Eastasia on a Monday and ends on war with Eurasia on a Tuesday.

We have the technology


Having different styles for elements in ligatures makes sense in some other situations too.

E.g. you want to emphasise the accent on a letter when you are teaching the language. I found myself doing exactly that with Hebrew and punctuation, and I stumbled upon this problem.


Excellent summary. Text rendering and content creation and layout (of which text is fundamental) are central to the creative experience on a computer. Anyone know any well-known (or otherwise) fora for discussing these kind of existential-level concepts?


Can confirm this is very hard to do. I was able to create my own 2d text rendering primitive from scratch (for fun), but it has the following "features":

  1 font
  4 font sizes
  16 foreground colors 
  2 background colors
  Support for a generous subset of ASCII characters
  Text wrapping which succeeds as expected in ~50% of cases
  DPI "agnostic"
The only thing that I could ever get any traction with was rasterizing every combination of font/size/color into some texture atlas and then using a dictionary of coordinates & dimensions around each for lookup & calculation. Clearly, this approach falls flat on its ass once you involve non-trivial requirements (i.e. beyond a subset of ASCII), or a variety of fonts/sizes/colors which create adverse combinatorics. That said, when you can reduce any problem space to composition of basic 2d images, things get really simple to think about.
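
For what it's worth, the lookup side of that approach can stay tiny. A sketch of the idea, with all names made up and the atlas assumed to have been baked offline with one cell per (character, size, color) combination:

    typedef struct {
        int x, y;      /* top-left of the glyph cell in the atlas texture */
        int w, h;      /* cell size in pixels */
        int advance;   /* pen advance to the next character */
    } AtlasEntry;

    /* One table per font size/color combination, indexed by ASCII code. */
    typedef struct {
        AtlasEntry entries[128];
    } AtlasTable;

    /* Walk a string and hand back, per character, the atlas rect plus the x
     * position to draw it at; the caller turns each pair into a textured quad. */
    static int layout_ascii(const AtlasTable *t, const char *s,
                            int pen_x, AtlasEntry *rects, int *xs, int max) {
        int n = 0;
        for (; *s != '\0' && n < max; s++) {
            unsigned char c = (unsigned char)*s;
            if (c >= 128) continue;    /* outside the supported ASCII subset */
            rects[n] = t->entries[c];
            xs[n] = pen_x;
            pen_x += t->entries[c].advance;
            n++;
        }
        return n;
    }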


On my Firefox/Fedora browser, the "tofu" glyph seems to have turned into... a goat? Cow? Is anyone else seeing this? Do I just have weird system fonts? How do I even research this? I guess it helps illustrate the article's point, anyway.


Everyone is in the mood to write about text rendering today for some reason, love it. Great to see so many points paralleled, like this article talking [1] about sub-pixel offsets being a challenge and Nuklear's Wiki article talking [2] about snapping glyphs to an integer offset.

[1] https://gankra.github.io/blah/text-hates-you/#subpixel-offse...

[2] https://github.com/Immediate-Mode-UI/Nuklear/wiki/Complete-f...


Several of the initial definitions are brilliant (like defining emoji by their use of multiple colours), but some are misleading or even incorrect (for example, ligatures draw multiple consecutive scalars, but are made up of one or more glyphs; scripts are sets of scalars, not of glyphs).


Both Chrome and FF seem to have fixed their text rendering with intersecting cursive glyphs since 2019 :)


not on Linux.


Ah, so maybe that was always only an issue on Linux, it’s fine on Windows.


No it was a cross platform issue when I wrote the article. It's pretty fundamental to how most engines render text. That said, text rendering generally involves some amount of platform-specific APIs to get a "native" look and feel.

I still see it on the latest Firefox nightly on MacOS. I know Windows DirectWrite handles the issue (which is why Edge handles this issue so well), so if that's being used more it would probably help.


6.1 Fonts Can Contain SVG

That whole section needs to go. Don't embrace any non-standard stuff. Fortunately support is poor, so let's hope Google doesn't add support for this in Chrome or it may become popular enough that everyone has to support it.


Ah ok! We'll be sure to drop looping support from Animated GIFs too, because that's also non-standard[0] and is just an extension Netscape 2.0 slapped on there.

I regret to inform you the vast majority of the web is just Shit People Tried and that became popular enough for everyone else to support, with lots of hacks to make things interoperate better.

[0]: https://en.wikipedia.org/wiki/GIF#Animated_GIF


They just extended the COLR spec to COLRv1 which essentially is a whole new canvas API in fonts: https://github.com/googlefonts/colr-gradients-spec/blob/main...


Huh, this is not that story where the author, and possibly other people, spent over 1.5 years attempting to redo their text renderer/editor, and had to abandon the project in the end.


Do you mean this [1]? It links to the referenced article too.

[1] https://lord.io/text-editing-hates-you-too/


Yeah, that's it.


In my experience, subpixel antialiasing works reasonably well for dark-on-light, but it looks terrible for light-on-dark.


[2019] (still awesome!)


That was fascinating. Thank you.


ascii using cleartype seems to work fine for me, I don't see the issue.


Clickbait titles are getting worse and even more nonsensical


OTOH I would have never read an article on Text Rendering if it wasn't for a clickbait title :) - IOW totally justified for the good cause of making people more aware of obscure yet interesting topic they never cared about!


That’s the article title, and the article is over two years old, so I’m not sure it’s an effective data point for the diagnosis.


It's kind of an off-putting title for a good article, to be honest.


I agree. It is an excellent article. Text systems are weird.

There's an entire book about strings, in Swift: https://flight.school/books/strings/


I run a big CRT monitor with one terminal window and bitmap fonts. I've never had an error using this setup, except when something tries to use a character that isn't in my font and I get a �. This is rare.

I've also programmed FPGAs to act as serial terminals and had zero problems perfectly displaying bitmap text. If you get something you can't perfectly display, chances are you don't need to display it so you mark it and throw it out (or you have a bitmap version handy).

Due to these experiences, stuff like this seems like a major case of 'too much complexity'. Why is this sort of fancy text rendering so important (for an English speaking and writing person that can pretty much do all work and hobby projects in ASCII)?


"An english speaking person able to work entirely in ASCII" is a single use case that still leads to that complexity: we have emojis because people started drawing things with ascii characters. More recently, we have fonts for coding allowing for ligatures, and they're popular among a significant number of your fellow english speakers who could work entirely in ASCII.

Even then, much of the complexity outlined in the article still appears when you consider variable width fonts--really, your use case is an English user, using only ASCII in a monospaced font. Variable width fonts need kerning, and need to accommodate changes in glyph size based on italic or bold variations. Also, English has ligatures like æ, so if you want to deal with older English texts you need them.

And then what do you do if you're working in English but need to discuss other languages, even just common European ones with accents?

So your use case is now an English-speaking ASCII monospace user who's basically just programming in their editor, and this is a very, very narrow use case considering the rest of the English speaking/writing users who want to use computers in their own lives.

Now multiply all the complexity I've sketched out for every language on Earth spoken/written by some community that would also like to use computers.


> (for a english speaking and writing person that can pretty much do all work and hobby projects in ASCII)

Most people are not English-speaking/writing and expect “fancy” fonts to not be rendered in a crappy way. Also, the relative blurriness of your CRT alleviates some of the problem. The low-res, mostly latin-only font-rendering world of thirty years ago was certainly simpler.


You think that it's "too much complexity" just because you happen to be in the small minority of people in the world who can do with extremely limited text rendering?



