Let’s fix font size (tonsky.me)
918 points by rhabarba on March 30, 2021 | 347 comments



> Specify it in pixels

No thank you.

I regularly wonder if it might not be better for the web if we got rid of pixels as a unit entirely. We can't realistically do that for a lot of reasons, and there are legitimate reasons to use pixels sometimes, but resolution independent font-size and container sizing should be the default. If you are using pixels on the web for font sizes, consider this a public intervention that you need to stop.

The fact that `em` units are not related to physical size attributes is intended. We don't want them to be related to physical size -- what `2em` is saying is that you want a piece of text to be twice as large as the "default" size of the specific font on the platform/device. That is a superior way to think about text size, because as a user I don't want you to try and figure out what the pixel density or screen ratio or resolution of my device is.

Size your fonts with `em` and `rem` units, and size your containers based on the number of characters you want to fit in each line. Don't use pixels. It doesn't matter if your fonts are bigger on Mac or Windows. It is intentional that platforms can size their fonts differently, and your interface designs should take that into account.
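A minimal sketch of what that looks like in practice (the selectors here are illustrative, not from any real stylesheet):

```css
/* Size text relative to the user's preferred default, never in px. */
body {
  font-size: 1rem;   /* inherits the browser/user default (typically 16px) */
  line-height: 1.5;
}

h1 {
  font-size: 2rem;   /* twice the root size, whatever the user set it to */
}

/* Size the reading column in characters, not pixels: 1ch is the width
   of the font's "0" glyph, so the measure scales with the text. */
article {
  max-width: 65ch;
}
```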

If anything, native platforms should be moving more in this direction, they should not be moving away from it. There was a period where you could decide to only care about responsive design on the web, but increasingly you should be thinking about responsive design on native platforms too. And resolution-independent font-sizes based on the user/platform are part of that process.

EDIT: Yes, I'm aware that pixels on the web are not necessarily 1:1 ratios with physical pixels on a screen. That doesn't change anything, pixels are still bad to use on the web. Tying font size to resolution, whether that's browser window resolution or physical resolution, is still equally problematic. And it certainly doesn't mean that we should move in the opposite direction on native devices -- the web's pixels are the way they are because the alternative, being tied to literal physical pixels, would be even worse.


I'm sorry, but you seem to be under some misconceptions about how pixel sizing works on the web.

> resolution independent font-size and container sizing should be the default

It already is. CSS "px" doesn't refer to physical pixels at all, they're called logical pixels. Which on old screens at 100% zoom may map 1:1, but these days are more likely to map 2:1 or even 3:1, and with OS and browser zoom levels may be things like 1.666:1 or 2.5:1.

> Size your fonts with `em` and `rem` units

In the browser, em/rem are necessarily defined by a parent/root element that is ultimately defined in either px or pt, so your suggestion doesn't achieve what you think it achieves.

Today, whether you write your CSS in "em" or "px" results in an identical user experience. It's really just whether you prefer to work with a baseline of something like "16px" and use larger round integers (8px, 1000px, etc.) or prefer to work with something like a baseline of "1em" and use small numbers with lots of decimals (0.5rem, 30rem, 0.05rem).

Ultimately we need some measure of consistency, and browsers have long defined that as a default legible body text size of 16 px = 12 pt (= 1 em by default).


> It already is. CSS "px" doesn't refer to physical pixels at all, they're called logical pixels.

Okay, yes, but when people ask what the current resolution of a page is on the web, most of the time they're talking about those logical pixels. You're correct, but also this doesn't change anything about my argument. It's still a mistake to tie layout to the resolution of a web page, regardless of what unit you're using for a pixel.

> Today, whether you write your CSS in "em" or "px" results in an identical user experience.

No it doesn't. Like you said, `em` units are based on the current container font sizes. Pixels aren't.

There's no way in the pixel world to say that you want a container that will always display 40 em-sized characters, and to have that continue to be true even as font size increases and decreases.

You can try this yourself[0]. Set Firefox to Zoom Text Only (View->Zoom), then zoom in/out on a container sized with pixels and a container sized with `em` units. Only the `em` sized container will change its size, because its width is being tied to the font settings so its width scales proportionally as the font size changes.

[0]: https://jsfiddle.net/jbwf6um4/
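For reference, the comparison in the fiddle boils down to something like this (hypothetical CSS, not the literal fiddle contents):

```css
/* Two containers holding the same text; only the width unit differs. */
.px-box {
  width: 320px;  /* fixed: unaffected when only the text is zoomed */
}
.em-box {
  width: 20em;   /* 20 × this element's font size: grows and shrinks
                    along with the text when the font size changes */
}
```

With Zoom Text Only (or a larger default font size), only `.em-box` changes width.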


Just completely forget "resolution" when talking about CSS pixels, as they have nothing to do with the actual resolution (virtual or physical) when it comes to layout. A CSS "pixel" is just another way of saying 1/96th of an inch, just like saying a pt is 1/72nd of an inch.

Em is indeed "font size * value", which is the difference, and the only difference, vs using pt or px or pc or any other absolute sizing reference. Per the CSS spec, though, this is only a difference for the _developer_, not the _user_, as the other person originally stated - either way, on any screen at any zoom, the results are identical to the user. Well, barring the user zooming only portions of the design, which by definition is not going to give a proportionately scaled design back. Also, you can actually implement this in CSS using px, you just need to tie it to calc() - though it's cleaner to just use em if that's all you wanted to do at that point.

Like the original commenter said, though, eventually em is going to resolve back to something which defines font size in absolute measures, such as px or pt. So for all of the debate either way, eventually your font is based on an absolute measure; it's just a matter of what that original measure is currently set to (16px by default).

Which comes to what I really wanted to say after clarifying the above: I like defining the root font size in px and using rem everywhere else. That way I can think in terms of "I want this header to be 3x as large as the base font on the page", easily change the base font to whatever px value I want (scaling all the page text accordingly), and I don't have to worry about inheritance if I change a parent's font size. Best of both worlds between em and px IMO. Like I said, you could get the same thing, or even something more advanced, with calc() and a variable, but there's no reason to when rem has been built into browsers for ages.
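Assuming default-ish values, that pattern looks something like:

```css
:root {
  font-size: 16px;           /* the single knob: change this and every
                                rem-sized value below scales with it */
}

h1    { font-size: 3rem; }   /* "3× as large as the base font" */
p     { font-size: 1rem; }
small { font-size: 0.875rem; }
```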


> which is the difference, and only difference, vs using rem or pt or px or pc or any other absolute sizing reference

It's a difference for anyone who changes the default text size in Firefox. The "Zoom Text Only" option works the same way as Firefox's default text sizing. This is a difference that does matter for anyone who adjusts either the default or minimum font size in their browser; it's not theoretical.

> I like defining the root font size in px and using rem everywhere else

This is better than using pixels everywhere. I would not define a root font size in pixels, I would leave it as 1rem. BUT, what you're doing is much, much better than what the article is proposing, so I'm not going to be too critical.


Again, by definition "zoom text only" is meant to change the proportions of the design. That the proportions of the design changed accordingly is not proof the design method is wrong; it's proof only text was zoomed. If you use em for everything as a "fix" for this, you've actually just broken the whole point of "zoom text only", as it becomes functionally identical to "zoom". I.e. there is no way to reconcile "zoom text only" not changing the proportions of the design with having a meaningful "zoom text only" at the same time. That something can't do both is therefore proof of nothing.

The same as above is true for the root font size discussion: if you set it relative and base the page layout on relative units, then the user's default font size setting also just turns into a proxy for a full-page zoom. Again, it's not proof in itself that one is right or wrong. It's just that you're either going to stomp on the user's default font preference by making it zoom everything instead, or you're going to have a design that looks wonky for users with high custom font sizes because the layout wasn't made for big text.


> Again by definition "zoom text only" is meant to change the proportions of the design

Not necessarily. Zoom text only is meant to change the font size and by extension things that are based off of the font size. Full-page zoom pretends that the browser window is a different resolution (even reporting a different resolution to Javascript) and then scales the page up in the final rendering step. I don't know that the point is necessarily to preserve/change proportions one way or another, they're different algorithms that affect different parts of the page.

This is still a really, really interesting perspective, arguably the most interesting/compelling justification that I've heard on this entire thread as to why a designer might choose to use pixels for width in a normal layout. But even though you have an interesting and insightful take, I'm not sure I buy that the reason designers are ignoring `em` units is to preserve user choice. And in practice in the real world, I notice that the pages that follow an `em`/`rem` philosophy when thinking about column width tend to handle both resizing of text and full-page zoom better.

Let me ask a question: if the goal of full-page zoom is actually to keep the same proportions for its font size, if that's really the main difference between font-only zoom and full-page zoom, then is it a problem that full-page zoom doesn't affect percentages or viewport-width units? Why draw the line specifically at the ratio of font size to container width? If the point of full-page zoom is to preserve proportions and ratios, shouldn't it act more like zooming a PDF?


The number of reference pixels reported to JS changes, but so does window.devicePixelRatio. In JS this info is given for you to do your own logic with; in CSS it is not. As such it's not "scaled up at the last step" any more than at 100% zoom, when CSS figures out that foo inches at bar zoom is foobar physical px on a certain monitor. This is opposed to macOS/iOS or GNOME, where things are calculated against a reference scale, that reference is scaled an integer amount, and then the rendered bitmap is scaled by the DPI factor to be displayed. In CSS, layout dimensions are always passed directly to be rendered to the display, and part of that direct pass is always figuring out how many device px a reference px is.

Not to preserve user choice in itself, no, but em isn't preserving user choice in the end either, so it's not an argument to convince either side.

It's a problem in the same way your original example was a problem, i.e. there is no way to reconcile all of these with good answers consistently, so the fact that a certain layout methodology doesn't is proof of nothing. And all the options being imperfect leads to the simpler dimension-based layout being popular. Of course that means a relative dimension-based layout is no more inherently "wrong"; it's just that most find it harder to work with and get nothing new they couldn't have accomplished with any other dimension type plus the describing CSS syntax.

Percentage/viewport-based layouts may independently be an actual "wrong" choice depending on what kind of content the page contains. The CSS spec actually reflects that percentage-based layout is a unique design choice: em and px are both a "dimension" type, and you can make either do the same thing with CSS if you wanted (that doesn't mean just replace em with px and call it a day, but it's always doable), whereas percentage and viewport units are their own special "percentage" type, and you can't make them always behave like another unit type because they actually represent a unique concept.


At least once or twice a day I’ll find a comment on HN that just 100% completely humbles me. This is that comment. Where do you learn things like this?


Ironically, I don't consider myself all that knowledgeable at this kind of stuff for the most part - this just happens to be a very particular thing I went deep down the rabbit hole on, in a passion project where I was trying to find out how to solve this exact "how should layouts render on different displays" problem. It turns out the web browser's approach to the problem is very elegant, even if the specifications have a lot of complexity and warts from organic growth, so I spent a lot of time looking at the specs to make sure I understood how I wanted to do my own thing.

In short do fun things you just want to understand and you too can sound much smarter than you are when a certain topic comes up :).


> As such it's not "scaled up at the last step" any more than at 100% zoom when CSS finds out that means foo inches at bar zoom is foobar physical px on a certain monitor.

Technically yes, but practically no. If you zoom in on a screen, CSS breakpoints based on resolution will trigger. As far as your actual CSS styling goes, you can basically think of the page as having a smaller resolution that then gets scaled up after layout is determined. CSS doesn't have access to the information you're talking about, so it's not valuable to a designer to think in those terms -- at least not unless you're planning to embed a bunch of interface logic in Javascript that doesn't belong there.

The end result is that if you zoom in 200% on an 800px wide viewport, from the perspective of the CSS you're writing you are effectively working with a 400px wide viewport.
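You can see this with an ordinary breakpoint; nothing here is specific to any framework:

```css
/* In an 800px-wide window at 200% zoom, the layout code sees a
   400px viewport, so this "mobile" breakpoint fires even on a
   large desktop monitor. */
@media (max-width: 500px) {
  .sidebar { display: none; }
}
```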

> but em isn't preserving user choice in the end either so it's not an argument to convince either side.

This continues to be a compelling argument.

What I will say though is that the more I reflect on it, the more this seems to be an argument for using both `px` and `em` units based on the context of the individual component you are styling. It doesn't seem to be an argument for abandoning relative/point units entirely, as the original article advocates. From the perspective that scaling text alongside containers and scaling text inside containers are both valid ways to scale content, there are going to be some parts of your interface where it makes sense to respond to text size increasing, and some parts that don't.

So for example, I can buy that for a main paragraph, you might want users to have the ability to scale text independently of the column width. But while you're doing that, you probably still want labels to fit inside of buttons. Similarly, you probably don't want the text options on a dropdown menu to overlap with each other and make each other unclickable. And more typographically, you want margins between your paragraphs to remain readable.

So maybe in that scenario, if you're maximizing user choice, it makes sense to leave your main column in pixels and to only set menus, buttons, and paragraph spacing and margins in `em` units. I could see something like that being reasonable.

The ending of your comment confuses me. I've never heard someone say that percentage-based layouts are explicitly the "wrong" choice, sometimes it does make sense to have a 2-column 50% layout. More specifically:

> `em` and `px` are both a "dimension" type and you can make either do the same thing with CSS if you wanted (that doesn't mean just replace em with px and call it a day but it's always doable)

`em` is a relative size based on the current container's font. So a `4em` sized box means `4 * container_font_size_in_pixels`. There's no way to represent that logic with pixels unless you're using Javascript to change the CSS whenever the font changes. Because `em` units aren't a constant logical unit, they're dependent on the context of the container they're set on.
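As a concrete (made-up) illustration of that context dependence (font-size inherits, so a `.badge` without its own font-size picks up its parent's):

```css
/* One rule, different rendered sizes depending on where it lands. */
.badge {
  width: 4em;  /* 4 × the computed font size of the element itself */
}

.sidebar { font-size: 12px; }  /* a .badge in here ends up 48px wide */
.header  { font-size: 24px; }  /* the same .badge here is 96px wide */
```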

But if you're going to be updating your layout with Javascript all the time, then you can also absolutely use percentages to get the same layout as pixels as well, they're not a fundamentally different measurement in that scenario. You can have a 1000px container and set all of its children to use percentages, and do all the math to make them line up. You can have Javascript that catches page resizes and readjusts all of its styles.

But if we're talking about the underlying logic, both `em` and percentage units are defined in relation to another unit. They're not equivalent to a fixed value, they scale depending on where they are in the DOM and what those other units are currently doing.

If you're working in pure CSS, there are components you can build that will have behaviors that are impossible to replicate with pixels. If you build a dropdown menu that's using `em` units, it will act differently depending on where it's inserted into the DOM. Pixels won't.


> What I will say though is that the more I reflect on it, the more this seems to be an argument for using both `px` and `em` units based on the context of the individual component you are styling. It doesn't seem to be an argument for abandoning relative/point units entirely, as the original article advocates. From the perspective that scaling text alongside containers and scaling text inside containers are both valid ways to scale content, there are going to be some parts of your interface where it makes sense to respond to text size increasing, and some parts that don't.

Yeah, I'd agree with this as well after all of the thought on it - if you really want to get to the full 100%, mixing the units per part of the design as you describe is going to enable a more optimized outcome than any "pure" approach could get to.

The way to make px behave like em in pure CSS is to set dimensions based on a --custom-property, so you can do things like width: calc(var(--parent-font-size) * 2.5) or what have you. A bit messier if all you literally want to do is font-size-based scaling, but also way more flexible to inject other logic into if you want to do more than that. Of course JS can give you even more freedom to change these kinds of things (no restrictions that types have to match and such), but that comes with its own swath of problems not needed for most anything that isn't a true app-in-a-page.
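A sketch of that custom-property approach (--parent-font-size is a made-up variable name; custom properties inherit, which is what makes this work):

```css
.parent {
  font-size: 18px;
  --parent-font-size: 18px;  /* has to be kept in sync by hand */
}

.child {
  /* behaves like "width: 2.5em", but the calc() leaves room to
     mix in other terms if you need more than font-based scaling */
  width: calc(var(--parent-font-size) * 2.5);
}
```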


I think you are missing the main point: the default text size IS a size in pixels.

If my memory serves me right, the default text size is 16px for chrome (not sure about other browsers).

Be it rem or em, they are sizes relative to an ancestor: rem relative to the root ancestor (html selector or browser default), and em relative to the closest ancestor having an absolute value defined for its font-size property.

Em is great when you want to establish a font size hierarchy in some container for example.

You can test rem by setting media queries defining different font-size values.

You'll see that 1.75rem is always exactly 175% of root element's font-size value.
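For example, with media queries adjusting only the root font size:

```css
html { font-size: 16px; }     /* the h2 below resolves to 28px */

@media (max-width: 600px) {
  html { font-size: 14px; }   /* the same h2 now resolves to 24.5px */
}

h2 { font-size: 1.75rem; }    /* always exactly 175% of the root size */
```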


That is a point, but it's not a point that matters. It's no different than saying "oh, but browser pixels eventually resolve down to physical screen pixels, so it's the same thing."

The problem here isn't whether or not font size eventually at some point in the browser is getting set to pixels -- it's who sets it. Is the browser setting it? Is the user? And is your layout equipped to handle it when it changes?


I’ve not checked: I change the default text size on my iPhone. Does that impact Safari?


I don't use Safari/iOS, so I don't know for certain, but AFAIK the answer is no.

However, I believe in the most recent version of iOS Safari there is an option in the left-hand corner of the URL bar to adjust text size for any page you're on, and Safari will remember that setting on a per-domain basis.

But again, I don't own an iOS device, so I'm not sure exactly what algorithm it uses to resize. It says it's text resizing, but maybe behind the scenes it's doing something different.


If pixels and points are the same thing, don't all the arguments in the OA apply to pixels, then?


> It's still a mistake to tie layout to the resolution of a web page

You've got to tie it to something. Your argument falls apart because "em" are still defined in terms of "px" at the root element. You're always defining layout relative to a logical resolution. Also, responsive design reports viewport widths in logical pixels, so logical pixels are still what everything is ultimately tied to.

> Set Firefox to Zoom Text Only (View->Zoom)

Nobody does that anymore. That was how browsers zoomed two decades ago. For a long time now, browser zoom defaults to zooming all content, i.e. logical pixel size. Heck, Chrome doesn't even have a text zoom option anymore. I'm actually surprised Firefox still does -- maybe it's a backwards-compatibility thing?

It's true that maybe two decades ago defining sizes based on em was a best practice, precisely because browsers only zoomed font sizes. But that ceased having relevance a long time ago, once browsers stopped using text zoom (except as an obscure setting).

Today, in the browsers and browser settings 99% of people use, px vs em has zero difference for the user. It's just a matter of programming taste. (Especially now that it's generally recommended to use rem over em anyways.)


> because "em" are still defined in terms of "px" at the root element

No they're not, they're tied to the resolved font size of the current element they're in.

That matters if someone overrides part of the CSS, so even from a pure engineering perspective this makes a real difference. It also matters if the user changes the font size of their browser.

`em` units define font size in relation to the current font size. That font size might be set elsewhere in CSS using pixels, it might be set by the browser. It might be set using viewport width, it might be set using percentages. It might be set in pixels and overridden by the user using the browser's minimum font size feature.

In all of those cases, as a designer you will be glad that you used `em` units for container width and child text rather than pixels.

> Nobody does that anymore. That was how browsers zoomed two decades ago.

This is how Firefox's default font sizing works today. I listed that zoom option so it would be easier for you to play with it, that's the only reason. But if you go into your settings and change the default font size of the browser (which is absolutely something that people who have bad vision still do today), then you will see the same result in the JSFiddle I linked.


> This is how Firefox's default font sizing works.

That's misleading. Firefox's default zoom sizing zooms fonts together with everything else. Virtually nobody uses font-only zooming. People with bad vision still overwhelmingly use page zoom -- because if your vision is bad you need everything enlarged, including things like icons and images.

Also, your description of how the font size for ems is set is incorrect. It's always ultimately set relative to px. If set by the browser, it's a default stylesheet that specifies "body { font-size: 16px; }". It cannot be set as a percentage of viewport size using HTML/CSS; that doesn't exist as an option in font-size. If the browser sets a minimum font size, that too is ultimately in px.

Your whole argument hinges on people using font zooming, when that died for all intents and purposes many, many years ago. There's no reason for web designers to accommodate nonstandard font zoom anymore -- forcing all UX units to be in ems just re-implements page zoom using text zoom, which is utterly redundant.

Use px or use rem, they're functionally equivalent -- it's just a matter of developer preference.


> Virtually nobody uses font-only zooming.

I don't know how you can claim this. I have set up multiple computers for elderly people and taken advantage of that feature. Heck, I have taken advantage of that feature on devices with weird aspect ratios like the older Surface Pros.

> It cannot be set as a percentage of viewport size using HTML/CSS, that doesn't exist as an option in font-size

What?

  font-size: 1vw;

> If the browser sets a minimum font size, that too is ultimately in px.

This is being pedantic. If a user or a parent container is setting the font size, that is clearly a different scenario than setting it in the container itself.

If you're going to go down this route, we might as well claim that browser fonts are set based on physical pixels as well, since ultimately the browser resizes its internal representation of pixels based on the device pixel ratio.

> it's just a matter of developer preference.

I don't know any experienced developer who writes CSS for a living who would claim this. Pixel-based CSS files are a nightmare to maintain when nesting components; this makes a tangible difference to how you architect your CSS files.


Wow, you're right about vw. I've never once seen it in the wild, MDN's page for font-size certainly doesn't mention it, and I've never seen it listed anywhere as a best practice. But TIL, thanks.

It's clear you really like font-zoom, and also that your preference is extremely uncommon. I still don't know why you recommend writing CSS using "em" dimensions for UI elements to work with font-zoom, rather than just "px" with page zoom, when the end result is identical. I mean... why would you use font zoom when the rest of the world has moved on to page zoom?

And I also don't know why you say nesting pixel-based components is a nightmare. Dimensions work exactly how you think they will, relative to the page.

On the other hand, nesting em-based components is a nightmare, because they jump crazily in size. You never want your date picker jumping 2x in size just because you placed a button for it inside of a larger header row vs. inside of a smaller form line. In fact, that's the whole reason Bootstrap migrated from em to rem, precisely not to have that problem.

Em-based components are really only appropriate for the most trivial typographical "components" if you can even call them that, like a bullet or tiny badge or something that gets used in different sizes of text. Certainly not components you would usually nest.


> I've never once seen it in the wild, MDN's page for font-size certainly doesn't mention it, and I've never seen it listed anywhere as a best practice.

This is usually called “fluid typography” and in my experience is a pretty widely-known technique, but tends to be treated more as a “holy grail” because it’s difficult to get right and browser support for the various techniques hasn’t been great. Most people/teams tend to just aim for pixel-perfect design at a small number of breakpoints.
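A common shape for this technique today uses clamp(), which newer browsers support; the exact numbers here are just an illustration:

```css
/* Pure viewport sizing: too small on phones, huge on wide screens,
   and it ignores the user's font settings entirely. */
h1 { font-size: 4vw; }

/* Bounded fluid sizing: scales with the viewport between 1.5rem
   and 3rem; the rem term keeps it responsive to user font settings. */
h1 { font-size: clamp(1.5rem, 1rem + 2vw, 3rem); }
```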


> and also that your preference is extremely uncommon

I'm still confused about this. You're commenting under an article that specifically calls out font-based zooming and differing font-sizes between devices as a problem in native environments. If it was as rare as you say, why is the writer upset about it? Do people only customize their fonts in Gnome/Mac, and never on the web? Nobody wants their browser to inherit system settings? Firefox didn't even have an option to set a default zoom on every webpage in the style you describe until early 2020.

I mean, you can say this is uncommon and I'm in a bubble, and maybe you're even right, but I feel like at some point I'm going to want to see some stats on that, because I'm not sure I believe you.

> On the other hand, nesting em-based components is a nightmare, because they jump crazily in size. You never want your date picker jumping 2x in size just because you placed a button for it inside of a larger header row vs. inside of a smaller form line.

Depends heavily on the component; this is why both `em` and `rem` units are useful. But it's not that uncommon to want nested elements like a date picker to scale.

My experience (doing a fair amount of front-end work in my day-to-day job, and writing a ton of CSS from scratch) is that (particularly when I'm writing in BEM style) I use `em` more often than `rem`, even for complicated components. But yeah, I do use `rem` sometimes, it's not that uncommon. It depends a lot on what you want the behavior of a component to be.

> In fact, that's the whole reason Bootstrap migrated from em to rem, precisely not to have that problem.

Two things. First, they didn't migrate to pixels, they're still inheriting their font sizes from user/browser preferences. In fact, Bootstrap in part moved to `rem` specifically to get away from pixel units in multiple parts of the codebase[0]. I don't see any sources online that claim they've abandoned the `em` unit entirely.

Second, I would very, very hesitantly suggest that maybe Bootstrap is not actually the best example of responsive design. I'm not going to get into an argument about it, it's a very impressive framework. It's just in my experience often a pain to work with, precisely for this kind of reason (among others). You nest an element in a static page and then dig through layers of classes to figure out why nothing adapts to the container it's put in. Bootstrap is very useful until you start to heavily customize anything. Just my opinion, and to be fair I have not tried out Bootstrap 5 yet.

The reason why nesting pixel-based components is a nightmare is because it forces you to care about every CSS class that sets a width in your component. It forces you to update every width and height in the component every time you want to make a change. For codebases you do control, this is sometimes merely annoying (depending on how clearly the CSS is written). But for 3rd-party styling systems, it can be a nightmare.

[0]: https://github.com/twbs/bootstrap/issues/19943


> I mean, you can say this is uncommon and I'm in a bubble

If you use Firefox, and several people around you use Firefox, then you are definitely in a bubble


First, let me just push back on the overall idea here: if your philosophy for how the web should work and what features are worth supporting is solely based on Chrome's default behavior, that's not a philosophy I want to encourage on the web, and I'm less inclined to listen to your ideas about what the web should and shouldn't do.

But whatever, that doesn't matter because Firefox also isn't the only browser that has text scaling. If you go into accessibility options for the current up-to-date Chrome browser on Android, there are 4 options you'll see:

- Text scaling

- Force pages to allow pinch-zoom

- Request simplified views

- Enable captions

What you won't see is a default full-page zoom option.

Not only does Chrome's text scaling option not uniformly do a full-page zoom, it also appears in my tests to selectively target and resize fonts only in certain elements on the page based on whatever unspecified algorithm Google is using to decide what does and doesn't count as a "paragraph". This means that as a designer you're not just being forced to handle multiple font sizes when you lay out a mobile site, you're also being forced to handle font sizes that scale non-uniformly across your page.

Are we going to argue whether the accessibility options in the default Android web browser are a bubble? I don't see what data people have that is making them so confident that nobody uses text scaling, because as far as I can see text scaling is a feature in every mainstream browser and is predominantly advertised on mobile devices. Is it really, genuinely weird that some people might take advantage of it?


What is the use case for font-only zooming?

I just tried it on a few sites and it either doesn't change much/anything or breaks stuff -- either by overlapping text, or by not increasing the width of text zones, so you end up with few words per line in the middle of the screen, which was particularly bad on a newspaper site (along with the ridiculously undersized article photo).

I thought it could save some space in headers and such if they don't zoom those, but if they used ems everywhere, as might be best practice, then there's no win to be had.

So it's either the same or worse than full-page zooming in every test so far. This is not only a now-obscure feature, it should be considered deprecated imo -- zooming (or changing default) font size feels like hijacking a responsive web page source and hoping for the best.


> either by overlapping text, or not increase the width of text zones so you end up with few words per line in the middle of the screen

Those are the problems that I mentioned. Those problems are why you use `em` units instead of pixels.

If you increased the font size in a PDF then columns, diagrams, and layouts would also break. But you wouldn't be surprised because you don't expect the PDF to be responsive. We expect the browser to be responsive.

The use-case for font-only zooming in general is that as a designer, there are elements that should scale with the font and elements that shouldn't. Full-page scaling works by acting like the browser is just a lower resolution, so everything scales upwards except percentage units and viewport units, including things like border widths, spacing between columns, etc... Sometimes as a user that's what you want. Sometimes it's not.

As a designer, it's useful to be able to have more control in the scenario where a user wants to scale only the font and not the border-width of an element to make something that genuinely feels responsive, rather than to be completely subject to a zoom method that is just pretending you're on a screen half as large and then upscaling before it renders.

It's also useful because we'd like to build the web into an extensible platform that users can interact with in creative ways. So maybe we'd like the ability to scale only part of a webpage. Maybe we have an extension that allows us to quickly zoom in and out of a paragraph.

Note that this is not completely theoretical, Firefox already lets users set minimum font sizes. If I set a minimum font-size of 14px, it doesn't matter if you say that your font-size is 8px, it will render at 14px -- but all of your bigger titles and headers will be unaffected. That's the kind of feature that works if browsers allow font-based zooming, and pages are built to accommodate that, but that is impossible to do if the only way we can resize a page is with full-page zooming.


What worries me is the testability. Too many variables, especially external unpredictable ones injected by browsers and user stylesheets, mean that accessibility testing combinations explode even further than they already have.

Full-page zoom - something I imagine was less common in the past for performance reasons - is much more in line with "how the site was made, intended to lay out as, and tested for."

It's nice that the HN header bar doesn't grow on font-zoom, but it's hardly a killer feature (edit: and isn't even supported on my iPad where I'd want it). I always full-page zoom and was never bothered by it or even noticed it. But clearly half the web is broken for font zoom. I'd say don't fix it, leave it behind instead. Just like 14px doesn't really mean 14px anymore and that's ok.


> What worries me is the testability.

This is why you should keep it simple :-)

This is the web, not print publishing.

Follow the rules, then to save some time test with default settings in Firefox or Safari or iOS. The point is that the most common problems I've seen the last few years are Chromeisms -- non-standard behavior that happens to work in Chrome.


>I just tried it on a few sites and it either doesn't change much/anything or breaks stuff

This tells you you're visiting a site whose designer(s) don't want the user to control their browser. Ask them and, dollars to donuts, they'll treat it like the "doctor, it hurts when I do this" joke.

Look at the vitriol in the comments here; there is unmistakable antipathy toward the user. Nobody is going to say they pride themselves on a user-hostile approach to web design, but a duck is still a duck, and the vast majority of designers are still going to think in pixel-perfect terms.

"When you're slapped you'll take it and like it." --Sam Spade


I set my fonts larger in Android specifically... I can still double-tap/zoom if I need larger images. It's the reading that tends to be the most problematic.

I'm far-sighted, so I see most things fine without glasses. I have glasses at my desk, but I don't tend to have them on me when I'm out and about and trying to use my phone.


Chrome on mobile absolutely has a text zoom; it's exposed as an OS function, but Chrome respects it. My vision is not great, so in accessibility settings I have fonts set to largest. Chrome does a decent job (as does most Android UI) of respecting this. The apps that don't, or don't do it well (Facebook), are the ones that are particularly annoying.

It may not be exposed in the browser directly, but the functionality is definitely there.

edit: and I'm frankly okay with using px/pt for the baseline font at the root level and using em/rem elsewhere. In the end, I understand that it should be 72pt/in and 96px/in as far as CSS goes... others may not... Devices that don't have their actual PPI set, so browsers can't properly calculate CSS dimensions, are slightly annoying though. But that's been a bit craptastic forever and a day.

In the end, nothing will ever be close to perfect. I do find it annoying that different fonts with the same size and line-height will render far differently... a 12pt font should render (roughly) 6 lines per inch.


> now that it's generally recommended to use rem over em anyways.

Whoa! Apparently I missed the note on that one. I very much favor using em over rem precisely because it is relative to the current font size. This is especially nice with SVG icons:

    <h1>
      <svg class="svg-icon"><use href="sprite.svg#my-icon" /></svg>
      Some Heading
    </h1>

    h1 {
      font-size: 1.5rem;
      color: rebeccapurple;
    }

    .svg-icon {
      fill: currentcolor;
      height: 1em;
      width: 1em;
    }
Do you care to tell me why em units are not recommended anymore, and who it is recommending rem in favor of em?


Your specific example is an exception to the "rule". The em measure is relative to the font size of the last containing element that specified an explicit font size. So with a heading, a 1em x 1em icon makes sense because it'll be the height of the container h1 element (which you set as 1.5rem).

The rem unit is relative to the base document font size. Your icon in an h1 set to 1rem x 1rem would be sized relative to the body's base font size. In your example a 1rem icon would be .66 the size of your h1.

When you use rem units consistently, their size is more predictable throughout the document. If you've got a complicated stylesheet overriding sizes all over the place, em units can end up inconsistently sized and tricky to debug. I've seen rems recommended for probably the past ten years or so.
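A minimal sketch of the difference (class names hypothetical, values illustrative): `em` compounds through nesting, while `rem` always resolves against the root.

```css
html { font-size: 16px; }            /* 1rem = 16px everywhere */

.card        { font-size: 1.25em; }  /* 16px * 1.25 = 20px */
.card .note  { font-size: 1.25em; }  /* compounds: 20px * 1.25 = 25px */

.card .label { font-size: 1.25rem; } /* always 16px * 1.25 = 20px,
                                        no matter how deeply nested */
```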


I think there are many of these “exceptions”. At the top of my head:

* Inline code sized relatively (say `0.85em`) so it fits equally nicely in a big heading or in a smaller `<aside>`.

* Margins and paddings so that larger font sizes have more spacing between them and more breathing room

* Column widths where you want to fit at most a certain number of characters in a single column (say `grid-template-columns: repeat(auto-fit, 30ch)`)

* Max container width of a text-heavy container is probably best specified in `ch` units (say `max-inline-size: 120ch` is a good one), as long lines are hard to read. If you later decide to use a larger font, you don't need to change this value like you would have if you'd used `rem`.

I could go on... But I really doubt this is a good rule. And I am skeptical that any serious front end developer is recommending against units that are relative to the current font size.
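A sketch combining a few of those use cases (all values illustrative):

```css
/* Inline code scales with whatever text surrounds it */
code    { font-size: 0.85em; }

/* Spacing that breathes with the font size */
p       { margin-block: 0.75em; }

/* Columns sized by character count rather than pixels */
.grid   { display: grid; grid-template-columns: repeat(auto-fit, 30ch); }

/* Cap line length for readability, independent of the font chosen */
article { max-inline-size: 70ch; }
```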


>Nobody does that anymore.

It's used as an accessibility option for people with low visual acuity.


Nobody does that anymore. /s


> Nobody does that anymore.

I use Ctrl+'+' all the time to zoom text.


The default behavior of Ctrl-+ (and the only behavior in Chrome, Safari, Edge) is different from Firefox's "Zoom Text Only".

Just hitting Ctrl-+, it eventually switches to the mobile stylesheet https://i.imgur.com/E7rAR37.png

With "Zoom Text Only", the layout partially breaks and you get columns of text with only one word per line https://i.imgur.com/C5NiyFA.png


>It's still a mistake to tie layout to the resolution of a web page, regardless of what unit you're using for a pixel.

That's not what happens though. CSS pixels are independent of the resolution and logical.

https://www.w3.org/TR/css3-values/#absolute-lengths


Two things:

A) there is no way for your OS to know what a millimeter is on the 3rd-party monitor you hooked up. So yes, they are in theory roughly tied to real-world dimensions, but in practice, regardless of what the spec says, a browser "pixel" is just another arbitrary unit of measurement that may or may not map to a physical width. This is especially true if you're mixing monitor resolutions on a multi-monitor desktop setup.

When I swap browser windows between my HDPI monitor and my normal monitor, both the reported resolution of the window and the physical size of a "pixel" change. It doesn't remain fixed at 96 pixels per inch.

And when I open Chromium, I get different sizes entirely, because Firefox is using a different scaling factor on my HDPI screen than Chrome is. The spec is great, and you're correct, browser pixels don't map to individual physical pixels on a screen. But no, in the real world it's not a completely independent logical unit. In practice pixel densities and screen sizes will affect the physical size of a browser pixel in some scenarios.

B) the effects of tying your layout and column widths to resolution are negative even if your resolution is defined in real-world logical widths. Even if every single browser pixel was perfectly sized on every device so that it was genuinely 96 pixels per inch, it would still be a problem to define all of your column widths and font sizes in centimeters.

People are debating the exact definition of the word "resolution" and it does not matter at all to the point I'm making; this is a useless debate. If people are really offended that I used the word "resolution", then fine, swap it out for some other word like "rulers". I don't care what word you use, it doesn't matter. It is still generally advised on the web not to define your layout based on "rulers". If CSS had an "inch" unit, it would be bad practice to make a static layout that set all of its widths in inches. It would still usually be preferable to set font size and to think about column widths/margins using `em` and `rem` units.


A) monitors do report physical size to the OS!

https://en.wikipedia.org/wiki/Extended_Display_Identificatio...

> Byte 21: Horizontal screen size, in centimetres (range 1–255). If vertical screen size is 0, landscape aspect ratio (range 1.00–3.54), datavalue = (AR×100) − 99 (example: 16:9, 79; 4:3, 34.)

> Byte 22: Vertical screen size, in centimetres. If horizontal screen size is 0, portrait aspect ratio (range 0.28–0.99), datavalue = (100/AR) − 99 (example: 9:16, 79; 3:4, 34.)

> If either byte is 0, screen size and aspect ratio are undefined (e.g. projector)

This is not perfect of course, may be defeated by cheap adapters, KVM switches, splitters that send image to 2 monitors, etc... And projectors can't know the projected size. More importantly, people sit far from a projector screen; absolute size is not very meaningful without knowing how far away people sit!

You're right that a window spanning multiple monitors has mixed physical size, but that's an edge case that doesn't invalidate taking physical size into account as a goal. Some OSes do re-scale UI when a window fully moves into a monitor with different scaling.


> A) monitors do report physical size to the OS!

And OSes do something with this? When was the last time you hooked up a 1920x1080 monitor to a computer and it changed the reported resolution from 1920x1080 pixels to something else because of the screen size? You're saying that if I hook a 16 inch 1920x1080 monitor or a 12 inch 1920x1080 monitor to a computer that `screen.width` in a web browser is going to report different numbers to make sure I maintain the correct physical dimensions of a browser pixel?

I'm in a weird position here, because obviously you know what you're talking about, but also very obviously if you hook a 1920x1080 monitor up to a desktop computer and full screen a web page, the CSS breakpoints are not going to act differently depending on the size of the monitor. I can observe that fact in front of me across multiple computers right now. So I'm not sure what's going on.

Maybe we're talking past each other or something? But I do know that in practice my smaller Macbook reports a higher screen resolution than my 19 inch giant Linux monitor. And maybe there are OS-level tools someplace to handle that, but they're not available in CSS anywhere that I can find. What you're saying doesn't pass the real-world test of "is it demonstrable on any of my computers sitting in front of me".

> a window spanning multiple monitors has mixed physical size, but that's an edge case

Is it? If I want my Linux box to scale differently on different monitors, that's something I need to manually set up as a user, at least from most distros I've used. Maybe Ubuntu or Windows does something else? The closest I've ever seen to this is my monitor scaling for different aspect ratios when I mirrored Gnome to a projector or a monitor of a completely different resolution.

---

But again, this entire topic ends up being kind of a weird thing to be arguing about because it does not matter if you're using real-world centimeters. It doesn't change anything about what I'm saying, and I almost regret even bringing up "pixels" in general because it seems like they're a giant distraction to everyone.

You should not be building layouts in centimeters. Centimeters will have all of the same problems with scaling fonts as any other fixed unit, the reliability of your measurement doesn't change that. People are fixating on the individual technical details of what's going on in monitor firmware instead of the broader point that in most cases container dimensions should be defined using relative units that scale correctly as browsers/users alter font sizes.


> It already is. CSS "px" doesn't refer to physical pixels at all, they're called logical pixels. Which on old screens at 100% zoom may map 1:1, but these days are more likely to map 2:1 or even 3:1, and with OS and browser zoom levels may be things like 1.666:1 or 2.5:1.

Yeah, it's a nightmare if you're doing pixel-art-y games in the browser; last time I checked there's no way in general to get access to physical pixels anymore, which makes integer-multiple upscaling AFAIK impossible in general. Hurrah. [disclaimer: The browser rendering stack is complicated enough that I might be wrong.]


CSS px * devicePixelRatio = physical pixels

I feel your pain about non-integer scaling ratios. The way I hack around it is to set the canvas context to an integer number of device pixels, and then set the CSS size of the canvas to a fractional size: device pixels / DPR.

We use this along with some other hacks on keep.google.com and some other sites to map 1:1 to display pixels to try to ensure hardware overlay promotion for the inking surface.
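A rough sketch of that hack (the function name is hypothetical): round the backing store to a whole number of device pixels, then express the CSS size as the possibly-fractional quotient so the canvas maps 1:1 onto device pixels. Rounding to nearest is used here; rounding down also works.

```javascript
// Given a desired CSS size and the devicePixelRatio, return an integer
// backing-store size plus the fractional CSS size that maps it 1:1
// onto device pixels.
function snapCanvasSize(cssSize, dpr) {
  const devicePx = Math.round(cssSize * dpr); // whole device pixels
  return { devicePx, cssPx: devicePx / dpr }; // fractional CSS size
}

// Browser-only usage (assumes a <canvas id="c"> exists):
// const canvas = document.getElementById('c');
// const { devicePx, cssPx } = snapCanvasSize(300, window.devicePixelRatio);
// canvas.width = devicePx;            // backing store, in device pixels
// canvas.style.width = cssPx + 'px';  // CSS size, may be fractional
```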


It doesn't always work (though it may for your use case). There are edge cases in the browser where at the HTML level the browser computes a CSS size, but at the compositing level (where devicePixelRatio is used) it might clip by a pixel here or there.

An example: let's say your window is 99 device pixels wide with a DPR of 2.0, and you ask for 2 side-by-side elements (via flexbox or whatever), each width:50%. At the HTML level the browser will compute that there are 49.5 CSS pixels available. It will then compute the size of each 50% as 24.75 CSS pixels, but in order to fill the 99 device pixels, one of those 2 elements has to be 50 pixels and the other 49. There is no part of the spec that says which side gets 50 and which gets 49. Most browsers compute 50 for both sides, then pass it up to their compositor, which will use 50 pixels for one and 49 for the other. The point is, you can't know which one will get which by doing CSS px * dpr.
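A sketch of that arithmetic (illustrative only; which side actually gets the extra device pixel is unspecified and browser-dependent):

```javascript
// 99 device pixels at DPR 2.0: layout happens in CSS pixels, but the
// compositor has to hand out whole device pixels to each element.
function splitInHalf(deviceWidth, dpr) {
  const cssWidth = deviceWidth / dpr;  // 99 / 2.0 = 49.5 CSS px
  const cssHalf = cssWidth / 2;        // 24.75 CSS px per element
  const deviceHalf = cssHalf * dpr;    // 49.5 device px per element
  // One possible resolution: one side rounds up, the other down.
  return { cssHalf, sideA: Math.ceil(deviceHalf), sideB: Math.floor(deviceHalf) };
}
```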

This is why ResizeObserver device-pixel-content-box was added. So that you can ask the browser what it did.


> ResizeObserver

TIL. Oh gosh I don't know if I have the brain power to work through all the necessary rendering layers logic to fix up my game engine to make the pixels as they should be, but if I ever do and figure it out, great thanks to you! (and even otherwise, thanks!)


I will check out that ResizeObserver property.

I hack around it by taking the CSS size of the container, finding the closest smaller integer device pixel size, and setting the canvas context to that and the canvas CSS size to the fractional size, which can be smaller than the container's CSS size.

Basically rounding down.



There is, ResizeObserver -> device-pixel-content-box But ATM it's Chromium browsers only.

Some examples on this page

https://webglfundamentals.org/webgl/lessons/webgl-resizing-t...

Bugs filed for other browsers but AFAICT it's not a priority for them.

Note: For a pixelated game, having this info is useful. Also for diagrams: as the user zooms in, you'd like to be able to re-render a canvas at native resolution the same way SVG and text get re-rendered. But a noticeable problem is static images. Say you make a pixelated image, 100x50, and display it at 4x like this

<img src="100x50.png" style="width: 400px; height: 200px; image-rendering: pixelated;">

But, what's the browser supposed to do with that on a screen with 1.25 devicePixelRatio? It will look like crap.


At someone's recommendation a while ago I made a proposal for integer-multiple upscaling - https://github.com/w3c/csswg-drafts/issues/5967 . It feels very niche, especially given how complex the standards already are, but OTOH I'd use it if it were a thing...

(Also, that linked page is very cool, and shows how awfully tricky the problem is...none of the proposed solutions work on my iPad, especially not once one zooms in/out in the browser - oh it turns out it was related to browser zoom level - at normal zoom they work (sometimes when I resize the page at normal zoom it works, sometimes it doesn't...). So I guess it's 70% of the way there...but not all the way there).


The reason it doesn't work on iPad is that iPad doesn't support the required API. Without that API there is nothing you can do except special-case per browser and pray. And even then, Safari doesn't change DPR based on zoom level like Firefox and Chrome do, so it's currently impossible on Safari, period.


I've added an event in my calendar to go back to this topic again this day one year from now to see if things have improved any (I skirt with frustration-induced burnout every time it raises its head as relevant, so I need to limit my exposure). Thanks for knowing more than me about this cursed subject matter.


Playing with xterm.js with several fonts trying to get some regularity is/was an exercise in frustration for sure.


So, rather than using the traditional device-independent units, we should use another device-independent unit that pretends to be device-dependent, but isn't?


You don't want to be doing font-related things in px units. For instance, you don't want, say, `margin-bottom: 10px` under a text element: the margin will stay the same if the font size is increased or decreased.
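A sketch of the contrast (selectors hypothetical):

```css
/* Fixed: the gap stays 10px even if the user scales text up */
p.fixed    { margin-bottom: 10px; }

/* Relative: the gap grows and shrinks with the paragraph's font size */
p.relative { margin-bottom: 0.6em; }
```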


The benefit of rem is of course that if I want all fonts to be 1.5 times bigger, all I need is:

html { font-size: 150%; }

Then, 1 rem = 16 px will become 1 rem = 24 px etc.

With pixels, I will always be stuck with having to update every value in my CSS.
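For example (values illustrative), the rem version needs one change at the root, while the px version means editing every rule by hand:

```css
/* rem-based: one knob */
html { font-size: 150%; } /* 1rem: 16px -> 24px */
h1   { font-size: 2rem; } /* 32px -> 48px, automatically */
p    { font-size: 1rem; } /* 16px -> 24px, automatically */

/* px-based: h1 { font-size: 48px; } p { font-size: 24px; } ...
   and so on, for every declaration in the stylesheet */
```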


I understand why, but to outsiders it is odd that the one measurement you don’t specify is physical size.

12pt is ~4.2mm.


Absolutely agreed. The article laments things like "I have no good way of knowing how big my text is going to be when I say 16pt," and my response is "why would you ever need to know that? Do your containing elements not reasonably resize when their inner content gets bigger? How are you handling internationalization where strings will certainly be longer?"

It's ok for apps to look different on different platforms. We need to stop trying to design things pixel-perfect to the mockups - I thought we could have left that trend behind in the early 2000's.

OS font rendering can even provide info about the physical dimensions of the screen and give you an "actual size" unit you can use to match your screen to real world paper. So a lot of this article, to me, reads as a misunderstanding of why fonts render the way they do.


> It's ok for apps to look different on different platforms. We need to stop trying to design things pixel-perfect to the mockups - I thought we could have left that trend behind in the early 2000's.

Thank you, Yes Yes!

The web was much better when it was annotated text: You took a primarily text document, maybe decorated with a few formatting hints, and let the User-Agent take care of the rest. Over the years, browsers have handed way too much fine-grained control over to web developers, and the result is thousands of developers treating my HyperText browser as some kind of interactive graphic design canvas. This has really been a step backwards in web usability, even though it enabled nice fancily-designed blogs and "web apps" of dubious value.


> and my response is "why would you ever need to know that?"

The author wants to know, so this is simply the wrong question. I agree with the author that it is strange that a quantity which is in principle easy to compute behaves in such an unreliable way. It makes designing stuff more of an iterative experience than a first-time-right one.

Also, if you specify X and get Y instead, then that is a way for websites to fingerprint your browser, which is another reason to make the computation more reliable.

> It's ok for apps to look different on different platforms.

We've been all in the situation where you designed something for device A and it looks off when viewed on device B. A word that wraps in the wrong place, or a button that is 1px taller than the button next to it, etc. More control over what is shown in the browser is always better. (Of course, a user should be able to override everything, but that is another matter entirely).


> We need to stop trying to design things pixel-perfect to the mockups

This is the huge war that has left CSS and much of the web a battle-scarred crater-ridden minefield.

Users want to be able to buy a range of devices with different sizes and resolutions for different use cases.

Different users want different text sizes as an accommodation for myopia and age.

Manufacturers want to be able to bring out new devices that have a different screen size and resolution from the previous ones.

Designers (and clients!) want to know exactly what something will look like when they publish it. But insisting that you can predict the "pixel" output in all cases is to fight all of those use cases. It's just that sometimes they win.

Apple are the only vendor who've really got high-DPI and DPI scaling working consistently; Windows support is there but so widely ignored or broken by apps that you can't rely on it.


A problem I had and I could not find a solution (maybe someone could help me).

I want to show a button for the user to click. I respect my user, and I want my button to be neither too big nor too small; I would like it to be equal or similar to the preferred button size he configured his OS to use. If you search for best practices, you find studies saying that on mobile a button should be maybe 10mm real size, but good luck setting a CSS dimension on the button that ensures this default size on all devices. IMO there should be a CSS value for preferred font size and preferred touch-button size, so you don't get simultaneous complaints that the button is too large and too small. (I do mostly backend, so maybe I missed something when searching; maybe there is some hidden trick to do this... and I am referring to mobile devices.)


Having another unit for "touch area" would be kind of nice, I can imagine uses for that. The device/browser/user could set how large a `ta` unit is for buttons/inputs.

  .ClickBtn { min-width: 10ta; min-height: 5ta; width: 15em; }
Wouldn't just be useful for mobile, I can imagine even on a desktop a user with Parkinson's or something might have trouble precisely using a cursor and want bigger interaction areas when they visit a website.

The most common solution I see online is to swap layouts when the page drops below a certain aspect ratio or width, which can actually be kind of annoying on desktops. It's a hacky solution.

There are also some media queries you can do to detect the presence of a mouse in some browsers, which can work, but also doesn't really hold up in a world with convertible devices that can be simultaneously mouse-driven and touch-enabled.


> The most common solution I see online is to swap layouts when the page drops below a certain aspect ratio or width, which can actually be kind of annoying on desktops. It's a hacky solution.

In many scenarios, there is no way to interpolate between the ideal layouts for large and small screens. This makes it necessary to arbitrarily select some point where the layout is rearranged as the viewport changes.

I agree that this behaviour can be annoying on desktops as the window is resized, but it’s better than the alternative - a layout that needs to work the same way for drastically different screen sizes.


The problem I have presented in the grandparent comment could be solved either with:

- let me specify the button size in real world dimensions

- let me specify that my button should be the same size as Safari/Chrome buttons in the toolbar.

Then my users can't complain that on their device the buttons are too big or too small. The page was something like a map with a toolbox on the bottom: if you make the buttons too big, the map area is reduced; if the buttons are too small, they are hard to use.

With native toolkits like Qt you have toolbox widgets; you don't define any dimensions or colors, and the user's theme will handle everything.


Right, the issue here is that we don't have enough semantic control over why an element is a certain size in CSS.

One of the underlying reasons why `em`/`rem` units tend to work better in CSS for container widths is because very often semantically you want to design a column in relation to how many characters of text it should display in a row before wrapping.

But not always; case in point touch interfaces where the reason everything is getting bigger is not because anything fundamental has changed about the user's text preferences, but because bigger elements are necessary for a bigger pointer.

So it does kind of make sense to me that we could have a semantic unit that is unrelated to text size and specifically related to a user/device defined touch size. And then you could have a mix of min/max widths and heights based on different semantic units to have something that responds equally well to both narrow windows on the desktop and mobile devices.

I am all for having more sizing units that are based on user-defined attributes that are applicable to a specific category of measurement (font-size, touch-precision, screen-brightness, whatever), not on trying to guess what specific device a user has.


If the OS reports the PPI correctly, px (1/96") and pt (1/72") give you real dimensions.


A 5-inch-wide page on a desktop and a 5-inch-wide page on a mobile device have different concerns because they're controlled by different pointer devices. It's a little irritating that they both collapse to the same touch interface.


That's the case for most ui/ux at a given point for responsive design, regardless of the ppi/size reporting.

Also, I'm referring to text/font rendering, not UX breakpoints, though I do understand the issue... the problem is, short of sniffing the user agent for "phone" or "mobile", there's no way to tell for sure... touch interfaces are unreliable, as many desktops also have touch support.

Note: I have used the phone/mobile sniffing to do some things slightly differently for desktop vs. mobile before. For the most part, it's not too big/bad of an issue.. I do know people that don't like certain sites/apps on desktop as this practice has changed though (namely, skype and discord) as a friend liked to use about 1/4 the screen for chat, but the ux now changes to phone ux when too small.


Browsers never report the real PPI, probably anti-fingerprint feature.


Assuming the PPI is set up correctly, both px (1/96") and pt (1/72") will do that... however, it's up to the device manufacturers to set up the PPI for the OS correctly.


If I have two monitors with two resolutions and I put the two screens together the text is a different size. When I'm reading the text I want it to be consistent as I move it across screens. With this approach will the text be the same size?

The main issue I have as I get older is that as the resolution increases the text size seems to get smaller


This is exactly the problem that I'm getting at. The reason your font is getting smaller with higher pixel densities is because the font size is being tied to the pixel density.

In a better native world, your font size would be completely unrelated from the monitor(s) you're using. It is the attempt to tie font sizes to physical attributes of your monitors like their resolution that is giving you this problem. And it is the need for scaling to be based on resolution that makes multi-resolution monitor setups break.

There's no reason why your OS couldn't size text differently on each monitor based on monitor-specific settings that you control, and there's no reason why the applications you're using couldn't implement responsive design and accommodate that setup.

As designers we would just need to embrace responsive design and stop trying to control everything.


> The main issue I have as I get older is that as the resolution increases the text size seems to get smaller

Most operating systems provide options for you to address that, by adjusting a scaling factor used by the display system.


If you have the PPI set per display correctly, it should be roughly the same for px/pt measurements.


I like this opinion but not sure if it translates well to building apps in the real world.

A lot of work goes into UX research and A/B testing, if fonts are too flexible none of your results will be meaningful.


Any kind of user customization will have the same effect. Any preference that the user can edit can be a confounding variable in an A/B test.

I don't want to live in a software world where as a user I'm forced into a one-size-fits-all approach to every interface just because it makes A/B testing easier.


I would posit that if rendering differences in font display are affecting the A/B testing of a workflow, the A/B testing apparatus has failed to control for confounding factors and was probably invalid anyway. If OSX is known to render differently than Windows (and it's not just fonts), why are you comparing cohort A from Windows to cohort B from OSX? That is an analysis error.


We should go back to immediate mode GUIs and bitmap fonts. Design once, render anywhere.


Bitmap fonts become illegible on high-res displays.


The main point of the article, as I see it, is that the "em" used in fonts isn't the intuitively correct em size.

It's just a control box used by the font developer, and the result is that designers can't get a consistent font size. They can know, more or less, what something is going to look like given the font they want to load, but the fallback font might have a different concept of em size and look more different than it would need to be.

So having a different metric, which is exactly capital height, makes fonts look more similarly-sized to each other at the same capital-height resolution, which he demonstrates in the article.

This seems valuable to me.


> > Specify it in pixels

> No thank you.

The article is NOT about px versus em or pt.

It is about specifying sizes in a way that makes sense, i.e. based on visible quantities rather than abstract quantities that are difficult to understand.


It’s not just difficult to understand; the em height / character shape described by the article are total nonsense. They have zero relationship to the font itself, and there's no relationship between one font's glyph heights and another's.

Which ultimately means that font sizing is entirely nonsense: the only thing you can say is font size 1 is smaller than font size 2, but it’s impossible to say by how much, or guess the difference, except when talking about one, and only one font.

That is, a font with size 16 roughly corresponding to the same size in another font is entirely defined and enforced by nothing at all — it’s purely a “happens to be true”.

Thus, the only real way to handle font sizing predictably is to never switch fonts out, because you're really dealing with undefined behavior.


This is the problem... for me, what I really want to see is 12pt actually meaning 6 lines per inch, regardless of the font specified.


Agreed, this definitely goes off on a tangent.

The article is mostly about switching two fonts with the expectation that the visual size of the characters remains consistent. That expectation fails because the em size is arbitrary with respect to the visible caps-height.

As it stands, fonts can have drastically different rendered sizes at the same specified font size. This leads to all sorts of problems, like ensuring fallback fonts have similar metrics to the one you expect to load.


> The article is mostly about switching two fonts with the expectation that the visual size of the characters remains consistent. That expectation fails because the em size is arbitrary with respect to the visible caps-height.

It would fail with caps-height, too, as other visible measures of characters (not just abstract ones like em) have no guaranteed consistent relationship to caps-height. As such, there's no guarantee that fonts with the same caps-height have the same “visual size”, or even the same visual vertical size, since cap height doesn't include descender and ascender height.


Perhaps we should have a "median-height" property, for specifying the median height of the actual glyphs in the text.


Typography is a specific skill set; nobody said it should be 'easy to understand'. Makes sense to whom, a web developer? They are not the intended audience.

If font size were specified using only cap height, then the font designer's careful work on the relationship between negative and positive space, and on line height, would be thrown out. That would be a net negative for the state of typography on the Web.

Again, putting things into fixed-size containers that don't adjust to different font faces is not good practice, and it is not what the Web was meant to be.


Fix the OS and CSS to make it possible to define real physical sizes. When you print a book, you aren't printing text with a font size of 24 pixels. Forget about pixels: the pixel density of modern displays is so high that you rarely see individual pixels anyway, so pixels are irrelevant. But the OS and web standards are still broken, frozen in a time when a 19" XGA CRT at 1024×768 px was common.


Other comments (not replying to yours specifically) have already pointed this out, but again: the "px" in CSS is just as much a device-independent, physical length unit as "pt" is. It simply means 1/96 in ≈ 0.0265 cm.


Agreed... I would love to see consistently defined font sizing. I'm not sure it should be based on cap height, but with a line-height of 12pt I definitely expect to see 6 lines per inch, and that's not what you get.


Hey just a heads up, on the web (or at least in CSS) px is resolution independent. It represents 1/96th of an inch [1].

The web has physical, font-relative, and viewport-relative lengths which all serve slightly different purposes. Like you, I find font-relative to be super useful much of the time, but the others have their place as well. I just want to point out that CSS specifically doesn't have a resolution-dependent measurement.

Using pixel (1/96 in) sizes for a font is fine if you care about the physical size of the font being displayed. Starting from the system default font-size as you suggest will also work, but you'll need to test it on all the platforms (especially if you use a custom font).
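For concreteness, the spec pins all the absolute lengths to each other (illustrative rule; the ratios are from the CSS spec):

```css
/* CSS absolute lengths are defined in fixed ratios to one another:
   1in = 96px = 72pt = 2.54cm
   so 16px = 16 * 72/96 = 12pt ≈ 0.42cm */
body {
  font-size: 16px; /* exactly equivalent to 12pt */
}
```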

[1]: https://www.w3.org/TR/css-values-4/#absolute-lengths


Yes, completely correct, and it's good that you pointed that out.

However, you also shouldn't be sizing fonts based on 1/96th of an inch units. That will also break responsive design in many cases. Not all of them, like you said there are legitimate use-cases, but most of the time you should not be trying to make a font be a specific physical size.


> …there are legitimate reasons to use pixels sometimes, but resolution independent font-size and container sizing should be the default.

Web pixels are different than device pixels, and are resolution independent.


They are independent of the screen, but also of the user's preferences. Using rem or relative sizes bases your sizing on the user's selected font size. As soon as you size something in px, you have thrown away the user's font size choice.
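A minimal sketch of the difference (illustrative selectors):

```css
/* respects the user's chosen base font size (16px by default) */
p  { font-size: 1rem; }
h1 { font-size: 2rem; }  /* scales with that choice */

/* discards it: stays 16px even if the user asked for 20px text */
/* p { font-size: 16px; } */
```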


This doesn’t seem accurate. Items sized in pixels will get increased with CTRL + “+”.


Ctrl++ is only necessary because you've "thrown away the user's font size choice".

As a user, I have my font preferences set to what I want to read. If you maintain main body text size = 1em, then those preferences are respected. If you start arbitrarily specifying font size in px (whether CSS px or physical px), then you destroy that. This can still happen with 1em, too, by opting for a font whose em box is screwed up, as described in the article, but that's the _easiest_ thing to test for, because it will be apparent on all screens (including the designer's).


I believe that that zoom is handled differently from default font size.


Yes, correct, I should have been more specific.

The resolution of the webpage as reported by the webpage, even if it doesn't line up with the literal number of pixels in a screen, should not be used for font sizing in most scenarios.


I fail to see a convincing argument here. The "px" unit has a well-defined relation to devicePixelRatio and works predictably across many devices.

> It doesn't matter if your fonts are bigger on Mac or Windows. It is intentional that platforms can size their fonts differently, and your interface designs should take that into account.

You're going to mess this up, guaranteed. I see it all the time. Font size strongly relates the sizing of all your other interface elements, which are naturally defined in logical pixels (or "px"). You can't design your way around that.


> Font size strongly relates the sizing of all your other interface elements, which are naturally defined in logical pixels (or "px"). You can't design your way around that.

A huge part of responsive design is the art of designing your way around that. In most cases (not all, but most), don't define your interface elements in pixels (or any other logical unit that is unrelated to font size).

I also see native apps fail at this, but it's not because it's an unsolvable problem; it's because a lot of native apps are behind the times on responsive design.

Normally I'd criticize native UI toolkits here as well, but the truth is that for many toolkits (including GTK and Qt) this hasn't been a problem for a while; most popular UI toolkits support font-based widths for layout. Designers just have to take advantage of those features.


The operating system sizes everything according to the equivalent of devicePixelRatio, not font size. Device manufacturers choose this value according to screen size and pixel density. It is reasonably predictable. The em-size however could be anything.

If your intent is to drive the layout with font size, using units such as "em" does the exact opposite of what you want: you're simply adding another unpredictable factor into your size calculations.

> A huge part of responsive design is the art of designing your way around that.

Using "px" for responsive design works just fine. What exactly is the advantage of "em" in this regard?


> The operating system sizes everything according to the equivalent of devicePixelRatio, not font size.

You just finished telling me that "font size strongly relates the sizing of all your other interface elements". And I agree with that, which is why it's preferable to use a unit that's tied to font size.

> The em-size however could be anything.

The `em` size is a constant based on the current font and font scale that the user/OS has selected that will scale predictably as that specific font scales.

> What exactly is the advantage of "em" in this regard?

It scales predictably alongside the currently displayed font. It almost completely solves the problem of Mac and Windows displaying fonts at different sizes. Again, you just finished telling me that handling OS font-size differences was a task that I was guaranteed to mess up. Well, `em` and `rem` units make it so you won't mess that up.

When a user changes their fonts to be 1.5x as large, you don't want to be using a logical unit for your layouts and popup window widths that completely ignores that change.


> You just finished telling me that "font size strongly relates the sizing of all your other interface elements". And I agree with that, which is why it's preferable to use a unit that's tied to font size.

It is not preferable, because em-size is decoupled from the size of other UI elements.

> The `em` size is a constant based on the current font and font scale that the user/OS has selected that will scale predictably as that specific font scales.

Anything in this chain could change em-size: The OS, the OS setting, the UI toolkit, the browser, the browser setting - independently of the rest of the UI elements. Those changes could have been made by the OS vendor, the device vendor, the browser vendor or the user. That's what makes it unpredictable and that's why it breaks.

> Again, you just finished telling me that handling OS font-size differences was a task that I was guaranteed to mess up. Well, `em` and `rem` units make it so you won't mess that up.

They don't. You probably just don't know you messed it up, because you haven't tested all the devices out there.

> When a user changes their fonts to be 1.5x as large, you don't want to be using a logical unit for your layouts and popup window widths that completely ignores that change.

Here's how you actually achieve that: Use "px" everywhere. The only reason I see for using "em" in place of "px" is that you want to decouple the scaling of fonts/labels from the overall scaling of the UI. However, you probably shouldn't want that, because it's going to mess things up.


> It is not preferable, because em-size is decoupled from the size of other UI elements.

Only if you make it that way.

You're arguing that I should size my layout in pixels because... I size it in pixels? If your em-size is decoupled from the size of your other UI elements, that's a choice you made, it's not the OS forcing you to do that. We're talking about the decision not to decouple em-size from the size of other UI elements. There are tons of popular, modern desktop UI toolkits that give you the ability to do that.

> Anything in this chain could change em-size: The OS, the OS setting, the UI toolkit, the browser, the browser setting - independently of the rest of the UI elements

No, any of those things could change the base em size of the font. Which will then scale linearly and predictably as long as you're using `em` and `rem` units everywhere, including for your layout.

> Here's how you actually achieve that: Use "px" everywhere.

You're commenting this underneath an article that explicitly complains that the pixel approach doesn't work today because OSes define fonts in points, because users can override your choices for font sizes, and because these units are not a uniform size across different operating systems.

Meanwhile, if you build a Qt app and use `em` units, it just works.

And of course it works, because regardless of whether or not you think that pixels would be great for fonts, users still get to override your font choices on native devices. The only way to solve that problem is to tie width to font size. You're complaining that anything can change the base `em` value of a piece of text, and I'm sorry to tell you this but on most modern UI Toolkits users can also change the font and size of your text content even if you set it in pixels.

Unless you're advocating that Gnome/iOS/Android/Mac should get rid of their current text scaling accessibility options, which... good luck. Pixels won't save you from that stuff, container widths based on current font size will, because for very good reasons you increasingly don't get the option on modern computers to force font to be a specific size.

> is that you want to decouple the scaling of fonts/labels from the overall scaling of the UI

I worry we're talking past each other. Don't decouple anything. Use `em` units to scale how large a container, button, margin, image should be. Use them everywhere, not just on your fonts.
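As a rough sketch of what that looks like (class names are made up):

```css
/* everything keyed to the font, so a 1.5x font preference
   scales the whole component coherently */
.dialog {
  font-size: 1rem;
  width: 24rem;        /* tracks the root font size */
  padding: 1em 1.5em;  /* tracks this element's font size */
}
.dialog button {
  padding: 0.5em 1em;
  border-radius: 0.25em;
}
```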


If you just use "em" scale for everything, you simply add another factor into the size calculation. It's the same as just using "px" directly, except it's now also larger/smaller depending on the font-scale, on top of the UI scale. Most of the time, on most devices, you won't notice that, but on some devices it will make everything unexpectedly larger or smaller, usually due to some vendor setting not matching well with devicePixelRatio.

> You're commenting this underneath an article that explicitly complains that the pixel approach doesn't work today because OSes define fonts in points, because users can override your choices for font sizes, and because these units are not a uniform size across different operating systems.

If you specify font size in "px" (instead of "pt"), font size will roughly match pixels/devicePixelRatio, which is as consistent as it gets across platforms. Using "em" instead would be less consistent.

> Unless you're advocating that Gnome/iOS/Android/Mac should get rid of their current text scaling accessibility options, which... good luck. Pixels won't save you from that stuff, container widths based on current font size will, because for very good reasons you increasingly don't get the option on modern computers to force font to be a specific size.

The intended behavior of changing the font size is to not scale the whole UI, just the fonts. If you use "em" for everything, you just defeat that feature. You might as well use "px" directly, it's more predictable and easier to think about.


Essentially, you've made svg text a special case. So diagram-based languages and frankly any graph-drawing framework that let you specify text labels go out the window.

I think svg `<glyph>` was meant to be the answer to this question, but that was an answer for people who don't care about readability. So now it is rightly deprecated.

Unfortunately, that leaves essentially no answer except guessing and checking with pixel-sized fonts to make sure font-rendering-stack development troglodytes haven't screwed up my diagrams.

So don't touch my pixel sizes, bro.


> Essentially, you've made svg text a special case. So diagram-based languages and frankly any graph-drawing framework that let you specify text labels go out the window.

Hear me out on this: a graphics format whose biggest selling point is that it's specifically designed to scale losslessly should have support for scaling with `em` units.

It's absurd that building responsive SVGs is currently harder than building responsive web pages. Instead it's got some kind of messed up, obtuse system where we either do viewport scaling and break all of our line-widths, or do percentage-based scaling and break all of our custom paths.


You can use `<foreignObject>`[1] in your SVGs to wrap text inside a nice HTML rendering (with word wrapping and other superpowers). However, I've found that there are always some issues (e.g. I need the graph to shrink in the inline direction when text is longer than X, or this rect should flow lower when I have multiple lines). I've never actually found myself using <foreignObject> in the SVG diagrams I write; I usually just listen to a resize event and manipulate everything that way (shrug).

1: https://developer.mozilla.org/en-US/docs/Web/SVG/Element/for...


Again, em's don't solve the problem of having to guess and check across the results of every rendering system. There are plenty of cases where I want text to be written between two vertical lines without overlapping either of them.


> There are plenty of cases where I want text to be written between two vertical lines without overlapping either of them.

Which is a problem in svg only because you can't space those vertical lines using `em` units.

I'm not only talking about text, SVG is terrible for responsive design in general even for charts/diagrams that contain no text at all. The entire spec is locked into a world where everything is going to be a fixed aspect ratio that never changes, where zooming in should zoom everything including both text and border widths, where the idea of spacing elements responsively but not having the elements get bigger as the viewport resizes is impossible -- which just isn't an accurate representation of how modern SVG diagrams are used outside of PDFs.

You have basically two options for responsive design[0] -- viewport units (which are terrible because many SVG engines and browsers don't have the ability to avoid applying viewport scaling to path widths), and percentages (which are flat-out unusable in paths). Don't even get me started on that, there's zero reason for paths not to have the ability to specify points using percentages. Heck, I should be able to specify paths using a mixture of viewport units and percentages in the same path.

But whatever, at this point the rant getting off topic. The point is, we're not talking about setting font using `em` and every other part of the SVG in viewport units. You should have access to `em` everywhere, including when positioning other elements and drawing paths.
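To make the complaint concrete, something like this is the situation today (a sketch):

```xml
<svg viewBox="0 0 200 60" xmlns="http://www.w3.org/2000/svg">
  <!-- font-size accepts em... -->
  <text x="10" y="25" font-size="1em">Label</text>
  <!-- ...but path data is bare user units: no em, no %, so this
       underline can't track the rendered size of the label -->
  <path d="M 10 35 L 190 35" stroke="black"/>
</svg>
```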

[0] Okay, 3 if you do a bunch of hotlinking and embedding and nested viewports and garbage that nobody should have to think about.


In fact, I think I'm underselling it. Because I think the gaming industry essentially says, "fuck it," to this same problem and simply uses images for the text.

Am I right, gaming industry?


The gaming industry isn't doing responsive design. For multiple reasons, some legitimate and some not, but the point is that I would consider them to be a separate category.

Most games on the web are going to be using Canvas I suspect -- in my experience at least, they usually won't be rendering to the DOM or using CSS at all.


I strongly disagree with this general philosophy. One should be able to use an absolute, non-relativistic unit of measure that requires no external knowledge at all. Not even to other parts of the same layout.

Relativistic units are inadequate if they are free to cause non-pixel-perfect transformations of the designer's intent from platform to platform. It is no more reasonable to re-layout a carefully designed UI than it is to reposition objects and people in a movie scene played on devices of different resolution; in fact, it is much less reasonable.

Perhaps, in some circumstances this will call for platform specific design, and that is correct and appropriate. Perhaps a UI will look like a postage stamp on a high pixel density display. Adjusting for that difference is the domain of the operating system and the user. Even the OS may be unaware of the actual physical size of the display, for example when using a projector.

Automatic layout is a convenience, but it works against, not for, good design.


Pixel-perfect design should not be a goal for the web if it means compromising usability.


It doesn't mean compromised usability; it is a guarantee of usability. Dynamic layout, reflow, and responsive design compromise usability. The utility of responsive design is not improved usability, it's improved designer productivity at the cost of usability.


> Perhaps a UI will look like a postage stamp on a high pixel density display. Adjusting for that difference is the domain of the operating system and the user

... how? If you've fixed the layout, there's no way to do this other than zooming the whole thing.

Nobody wants to re-layout movies because they generally aren't trying to read text off the screen.


Text is displayed on the movie screen all the time, so I'm unclear what your argument is here. Yes, zoom the whole thing, as opposed to zooming only the text and then changing the layout to accommodate. As a user I do this all the time in browsers, in video games, and even in display settings. It works great, because everything stays locked together and there are never any unanticipated interactions between elements. The absolute worst-case scenario is that the aspect ratio doesn't fit the device perfectly.

You know what doesn't work great? When I resize my browser window and a responsive-design breakpoint decides my 3k monitor is a phone. So much engineering effort to achieve a bad user experience, when zooming the entire UI is superior in every way.

Just use pixels. They are unambiguous units that are consistent across all digital media, scalable without changing layout, and understood by designer and user alike.


> pixels, they are unambiguous units that are consistent across all digital media

The size of a pixel and the DPI can vary by a lot. This is how we got in this mess in the first place.

> As a user I do this all the time in browsers, in video games, and even in display settings.

How do you zoom a whole video game? Isn't that rather inconvenient in that it's shrunk your effective field of view?


> absolute, non-relativistic unit of measure

:)

Another physicist?


If you have two fonts at size X, they should both display the same number of lines in Y space... The problem is the relativistic measure isn't even consistent between fonts.


> It is no more reasonable to re-layout a carefully designed UI then it is to reposition objects and people in a movie scene played on devices of different resolution

Is this unreasonable? We don't have the ability to design this kind of responsive control over video streams, but we do have the ability to speed up and slow down videos, embed captions that use native device fonts rather than bake themselves into the video file, adjust contrast and brightness, skip over segments and timestamps, and to adjust audio balance. And many players also have the ability to zoom in and cut off black bars. None of that seems like a problem?

I'm regularly thankful for the ability to auto-skip intros and adjust playback speed. Every single video player should have the ability to play at 1.5x speed.

I think in general it's important to separate practical design from art. I don't think any user has any obligation to worry about whether or not it's "reasonable" for them to adjust an interface in whatever way works best for them. The goal of design is to make it easier to accomplish a task; the designer's individual vision doesn't take precedence over that.

> Adjusting for that difference is the domain of the operating system and the user.

That's exactly why we have semantic units that are tied to a specific meaning/purpose, not ones that just represent abstract concepts like pixel width. The purpose of an `em` unit for widths is to describe the design of a column based on its intended use -- to show X characters in a line. The more specific we are about not just the width of an element, but also the intent behind that width, the easier it is for an OS and a user to adjust for size differences in specific scenarios.

If I drop a page to 300 pixels, did I do that because I want the font to be bigger? Am I scaling the entire window down including the font? Do I want the same design and font-size in a single column? Semantic units allow me as a user to communicate the context a page is in, which allows the page to respond to that context.

Absolutist design is anti-customization; it imposes the designer's will over the user's will. This matters because even in a single device/platform, it can often be impossible to design a single interface that always works. I use i3wm, so when I open a program it opens in tiled mode. You might get half of the screen, you might get 1/3rd, you might just get the upper left corner. The websites and programs on my computer that are designed responsively handle that, and the ones that aren't, don't. Thankfully, native programs have (slowly) started to get better about things like dropping out of 3-column views when they only have 300 pixels to work with.

But the answer to that problem can not be that from now on I'll just open every program in full screen mode so the designer can perfectly position every element. The answer is responsive design, and if that means that a few pixels are out of place when I jam the design into a 328x451 pixel box, then that's fine. I want the interface to work inside the box first, to fulfill its functional requirements, and then to look pretty second.


> Size your fonts with `em` and `rem` units, and size your containers based on the number of characters you want to fit in each line.

AFAIK this is not a good solution in most situations. The number of characters that you can fit in a fixed container varies greatly depending on platform / font / settings. As a consequence, you'd lose a whole lot of control over your layout if you were to scale the containers in this way.


I realize this is going to come off as dismissive, but don't use fixed container widths.

Responsive design is a paradigm shift away from perfect pixel-level control over every container size on a static-width document. It's not print design, it's a different medium.

Stop making fixed layouts that break whenever I snap a window at half-width to the side of my screen; there are legitimate reasons why someone on a computer might want their fonts to be bigger or smaller, and your column widths should adjust to accommodate that.
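For what it's worth, a fluid column like this (illustrative selector) adapts instead of breaking:

```css
/* never wider than the window, capped by a comfortable line length */
main {
  width: min(100%, 40rem);
  margin-inline: auto;
  padding-inline: 1rem;
}
```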


Agreed! This idea is presented really well in https://everylayout.dev


https://every-layout.dev

But thanks for the reminder! I meant to have a look at that.


I recently had LASIK and went through a period of a few weeks where I couldn't read small text on screens. It's absolutely infuriating that you can't just increase the text size on most websites. On most websites, increasing the text size balloons the entire design as well. I had to do so much swiping left and right on every line, because text doesn't reflow the way most designers code for the web. Em units should only be used for text, not for other aspects of your design. Let the text reflow within a fixed-size design!


I agree with pushing all aside to size by "Cap height" and "distance between baselines", but like you: totally disagree with using pixels.

Font size is a ridiculous nonsense land with so many gotchas that it deserves its own university semester, but pixels are a clear, specific, and tangible thing.

It would simply cause more issues as a certain number of designers would forget or not even be aware of physical pixels, and further cloud the space as they communicate with peers, bosses, subordinates, and the public.

We could probably get to exactly where the author (and I) want to be by changing `em` and `rem` to use cap height and distance between baselines, or by using a new measurement altogether (although: https://xkcd.com/927/)


Agreed, I would have no objection to having more typographic tools on the web around stuff like line height.

I don't want to get rid of `em` units, because they represent the font designer's vision of how the characters are best sized and positioned. I regularly do want to defer to that in my interfaces.

But yeah, lining up fonts is a pain in the neck. The author is right, it would be nice to allow line-height to be defined more consistently. It would be nice to allow me to somewhat size two fonts to be the same height or width as each other without eyeballing things.

So no, not pixels, but sure, have more ways of talking about font size based on concepts like baseline and cap height.


Agreed with only a single sticking point:

If the font designer's vision is for the character baseline to be above or below other font's baselines 'within the em', I want to override that vision.

As well: I feel like this type of standardization could actually free designers to make more interesting typographical layouts as they could freely mix fonts and sizes without cascade effects on the rest of the text.


Right. In a lot of ways it's the same principle we're talking about on the web with using `em` units. Give users control (except in this case the layout designer is the user).

I want font designers to be able to define a basic box and baseline for their font, and I want to be able to respect that if I'm just throwing the font onto a page without thinking about it. But I also want to be able to easily override it if I'm mixing fonts, and to be able to override it without manually adjusting a bunch of margins by trial and error that are specific to that font -- margins that will then immediately break if my users sub out their own fonts again.

It's good that we have controls in CSS for stuff like line height, ligatures, character spacing. Having more of those controls would be useful, and having quick options for them that can kind of "standardize" different fonts to all act the same would be very useful.


I feel like combined with https://tonsky.me/blog/font-size/ that these comment threads are fleshed out enough to go the HTML and CSS groups for discussion, refinement, and eventual implementation.

Clear, useful, reasonable, game-changing, and able to be opt-in.


> Size your fonts with `em` and `rem` units, and size your containers based on the number of characters you want to fit in each line.

I have no idea what you mean by this.


An example of a side note which should be smaller font-size and should be relatively narrow (on a maximized desktop screen):

    aside {
      font-size: 0.8em;
      max-inline-size: 30ch;
    }
This will fit approximately 30 characters at most on your aside and the font size will be 80% of the font size of the parent container (which is probably an `<article>` or a `<section>`).


Using em and ex is definitely the best way of doing relative font sizing, but the reason is something I don't think people properly understand.

1em is the width of a capital M character. 1ex is the height of the capital X character (of the font currently in use).

If you specify block element dimensions using em and ex values, then the margins and paddings will be relative to the font and it will look really nice (in my opinion).
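For example (illustrative selector):

```css
/* spacing tracks the font: change the font size and the
   box breathes proportionally */
blockquote {
  padding: 1em 2ex;
  margin-block: 1.5em;
}
```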


1em as used in fonts and browsers has nothing to do with the size of a capital M character (though the name as used historically means it should). A font is drawn using a design size box (typically 1000x1000 units in OTF, 2048x2048 in TTF) and this box is scaled to the 'em' size. How big the glyphs inside the design box are drawn varies from font to font (which is the problem that the article is talking about).

The ex unit is supposed to be the height of the _lower case_ X character (not the upper case) -- but typically is just 1ex = 0.5em


The article is about OS font size, not CSS.


This article is not about px versus em. At best, this is a tangent because the author used px as an example.


A couple people have brought this up, and I'll admit I don't really get what they mean. The article is pretty clearly critical of non-pixel based rendering.

Three of the four problems the author sees with fonts:

> Unpredictable: you can’t guess what you’ll get. 16 pt means how many pixels, exactly?

> Not practical: you can’t get what you want. Want letters to be 13 px tall? Can’t do it.

> OS-dependent. Get different rendering when opening a document on a different OS. Can’t share editor config between macOS and Windows.

Most of those complaints are relevant to what I'm talking about:

- A pt measurement unit might be different between computers, which the author considers a serious problem to solve, and the web considers to be a fact of life to be embraced.

- There's no way to specify exactly how many pixels a font takes up, which is not necessarily a problem people should be trying to solve.

- Different OSes might display your font at different sizes, which is, again, not a problem that most native apps should be worried about.

The author does have a (somewhat) legitimate complaint that `em` units are somewhat arbitrary. It's a little bit weird because it doesn't acknowledge that in this case "arbitrary" means specifically chosen by the font designer to incorporate the best spacing for their font. But whatever, I can get behind an effort to make it easier to compare fonts to each other, and it doesn't seem like there's a compelling reason to force everyone to follow the font designer's choices.

But the author's solution is to use pixels (or some other unit-based non-relative measurement that will be guaranteed to render at the same size across OSes):

> Specify cap height, not em square size.

> Specify it in pixels.

In short, I share the author's frustration that there's no way to relate different fonts to each other in a reliable way, I share their frustration over working with line-heights, but I don't see how their criticism of pts and need to have font size perfectly synced between Windows and Mac works into that. In a responsive world, it should not be an issue for your app that font size might be a different number of pixels than you set it as.


Yes!

Don't fonts provide some form of hinting as well, to improve rasterization accuracy on various display configurations?

Also, doesn't sub-pixel anti-aliasing break when using a rasterized source like the article suggests?

These are two arguments against it - pro vector.


This is a fantastic article and I welcome this conversation.

As the author states, sizing of fonts and line spacing is currently inconsistent across OS's, inconsistent across fonts, and inconsistent with other UI elements.

It's madness. There's no good reason for any of it except inertia and lack of forethought.

The author's proposal to specify cap height is really the only reasonable solution I think there is, though I'd modify it slightly:

- When defining cap height, be clear what the obvious logical equivalent is in other scripts, e.g. Chinese logogram height

- Specify cap height in whatever unit other UI elements are specified in (the author refers to pixels, but that's arbitrary)

- Then line height can also be controlled predictably and exactly in whatever unit UI elements are specified in

End result: zero inconsistencies. Text shows up exactly where it's meant to no matter what.

How do we get there? Browsers should adopt it first, with something like a CSS "font-cap-size" property that can be used instead of (and override) "font-size", as well as something like "cap-line-height" overriding "line-height". (And for units, recommend using "px", or "rem" ultimately based on a root "px", and never touching "pt".)
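A sketch of what the properties proposed above might look like (both property names are hypothetical -- neither exists in CSS):

```css
/* Hypothetical syntax, following the comment's proposal */
h1 {
  font-cap-size: 24px;   /* capital letters exactly 24 logical px tall */
  cap-line-height: 32px; /* baseline-to-baseline distance, same unit */
}
```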

Then OSes can migrate similarly with new function calls/parameters based on cap height, and also try to phase out "pt" sizing in preference of the logical "px" sizing used in the rest of interfaces.

I really think this is the way. I really hope browsers and the W3C can consider something like this. It's decades overdue.


<sarcasm heavy="true" comedy-intended="true" do-not-hate-me="true" smiley="true">

What do you mean!? Forethought!? That's BDUF! YAGNI! This is the problem with you damn tech weenies: all you want to do is fix things and think!

There is no business value to fixing this problem. Ship! Ship! Ship! Ship! We need to float the IPO or SPAC or...something so we can all get RICH! Who cares if the damn font system is broken!? I don't remember seeing a story card or epic for that in Jira.

Now, there has to be -- like -- hundreds of blockers in Jira you should be working on right now instead of futzing with the tech bullshit.

</sarcasm>

On a serious note: is it possible there's a reason why everything is just slightly shit?


Very effective tagging ;) I've trained actors for decades, and people understand character descriptions (the 'spec' in a commercial) more and more as essentially persona tags.


Completely agree. This has caused me endless frustration over the last... 20 years (wow, time flies).

Wondering, do fonts need to be adapted to support this? Is the property 'cap height' typically available in fonts? X-height is.


> wow, time flies

Ya. I wrote a baseline-aware layout manager 12+ years ago. So much futzing to make a pixel perfect, general purpose solution.

I really thought this font and UI stuff would be settled (ancient history) by now.

I'm very grateful to Nikita (tonsky) for tackling these issues.


The inconsistency between cap height and em size has existed in fonts since the 16th century. Why do you think it was unintentional? Do you think it is inertia that held font designers back for five hundred years from making cap height the measure of the font size? Why were different cap heights tolerated?


In printed text you would mostly have seen a single font used on a page, if not for the whole work. There's no inconsistency when there's a single font, especially not when the only variations are the odd piece of italics, a few (pretty much pre-defined) styles for headings, footnotes etc. and no zoom or different display devices.

If you could turn a paperback sideways and have the text flip it would look pretty bad - lots of the layout and text is either hard-coded or even manually applied based on the physical format.


The article addresses this. The physical constraints that lead to those conventions don't apply to digital typography.


I don't think that it does, to be honest. I think the author does not understand typography; he did some Googling and tinkering with font software, and that's about it.

The real story is more complicated. Yes, there is no 'em size' as a physical constraint in the digital world. Yet the concept that the 'em size' represents is very important for a font. A font is more than a collection of glyphs (shapes of letters). Things like em size and default line height may seem arbitrary, but they are important decisions made by the font designer that should be respected by users of the font.


Well, now you need to make the argument for why they need to be respected by users. I don't see why they need to be.


Agree 100%. It's also possible that a migration path could end up using the proposed "leading-trim" and "text-edge" properties as part of the solution.
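For reference, the draft syntax in the CSS Inline Layout Module Level 3 at the time looked roughly like this (the properties were experimental and unshipped, and have since been renamed):

```css
/* Trim the half-leading so the text box runs from cap height to baseline */
h1 {
  text-edge: cap alphabetic;
  leading-trim: both;
}
```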


>I see no reason to honor the so-called “default line-height”. It’s basically a font designer’s personal preference, imposed on every user of the font in every viewing condition.

Erm... To a large extent, these details of a font _are_ the designer's -personal-preference- professional judgement, and that's the point. We use different fonts because they have a distinct feel or quality, and how much spacing they have between lines is one of the many little details that add up to the gestalt of a font. If you don't like the line spacing of a font, don't use it. Or tweak it if you like. But don't pretend that declaring that all fonts should have a line spacing of 200% of their cap height isn't similarly trying to impose your personal aesthetic preferences on everyone else as well.


The problem seems to be that, should you want to change the font size or line height of text in a given font, it's currently difficult to get a consistent baseline and character size between different fonts. Having some default line height for a font is fine. Making it impossible to align the tops and bottoms of capital letters between different fonts in a sane way is not great.


Going down this road, you are about to discover that visual weights, slopes, and assorted other minutia also differ, leading inexorably to the ransom note effect. Some fonts are designed to work together; those that aren't, don't.

Visual height is the tip of the iceberg.


Sure, this isn't meant to make it impossible to shoot yourself in the foot design-wise if you intermingle fonts. It's more of a layout concern. If I change the font in an app or website, it _always_ leads to tweaking all the font sizes and adjusting vertical padding on things like buttons. All to get the baseline and vertical height at roughly the same spots between fonts. The author's proposal of a "cap height" (I don't like that term, though) would make that tweaking a thing of the past.


"The author's proposal of a "cap height" (I don't like that term, though) would make that tweaking a thing of the past."

Maybe, but that's not how I'd put my money. :-)


> The solution

> - Specify cap height, not em square size.

> - Specify it in pixels.

Specify cap height, really? How does that work for Arabic or Tibetan or Mongolian (or ..., or ...) -- so many writing systems where "cap height" is meaningless, because there are no "capital letters" and no single dimension that is common across most characters in the set? Not all the world uses English.

Specify it in pixels? So if I set my editor font size to 16 pixels, on my MacBook's display, with 226 pixels per inch, the 16-pixel tall letters will be somewhat less than 2mm tall, if I've got the calculation right. Move the window to my external FlexScan display and that same 16-pixel letter will now be 4.32mm tall.


> so many writing systems where "cap height" is meaningless

Can you name any writing system that doesn't have a logical equivalent? If you pull up all your examples, a quick eyeballing of them reveals there is a clear upper and lower (or left and right in the case of vertical scripts) "line" that is the obvious equivalent to cap height, even if ascenders or descenders sometimes go over it.

If you don't want to call it "cap height" in favor of a more inclusive term then you're welcome to suggest one, but the idea is generally applicable. It's not ignoring other scripts.

> Specify it in pixels?

This is a solved problem, the author presumably meant logical pixels (e.g. as used in CSS), not display pixels. Logical pixels map to display pixels either 1:1, 2:1, 3:1, etc. or by a non-integer ratio when other browser or OS zoom settings are used.
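One way to see this mapping from CSS itself (a sketch; `2dppx` corresponds to the 2:1 "Retina" case):

```css
/* 1 CSS px is a logical unit; on high-DPI displays it spans several
   device pixels, which can be detected with a resolution media query */
@media (min-resolution: 2dppx) {
  /* here a 16px font is rasterized across ~32 device pixels per em */
  body { font-size: 16px; }
}
```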


For people who are worried about Arabic script, or Persian, which has used the same script for centuries: you can define boundaries for the letters of these scripts. It is well studied and known to people who design fonts or typography for them. You might need to adopt different approaches for achieving a similar outcome. My native language is Persian, and by a simple search for “Anatomy of Persian Letters” in my native language, I found some easy-to-understand articles. For example, the following article mentions that the scale and boundaries are measured by wrapping “dots” of a certain size around the letters. I imagine you can count those dots to define a container size equivalent to cap sizing. The link is in Persian/Farsi but you can recognize the dots and the shapes in the pictures. https://virgool.io/@typokhat/anatomy-of-perso-arabic-letters...

I am not a typography expert but I am interested in the article and wouldn’t shoot down the idea without hearing some expert opinions just because the scripts look different.


Do you really see a clear cap-height (and baseline, to measure it from) here? https://software.sil.org/awami/design/

Or look at Thai: https://omniglot.com/writing/thai.htm. Yes, there's a pretty clear upper and lower "line", but it's much more like an equivalent to ex-height in Latin than to cap-height.

Giving it a different name doesn't help: if you put Latin text with 16px cap-height alongside Thai text with the "body" of the letters sized to 16px, they'll look awful (the Thai will be much too big).

(As someone else has already said, ex-height is usually a more useful measure for Latin script, and that would be closer to matching how something like Thai might be measured -- though it'd be a bit on the small side. And it wouldn't interoperate well with Chinese, for example.)


> Do you really see a clear cap-height (and baseline, to measure it from) here?

I absolutely do. Do you... not? You can do a quick web search for "Arabic type anatomy" that will make it crystal clear for you.

I mean it obviously has nothing to do with "capital" letters (so a more inclusive term would be better, probably), but there are clear upper and lower "bounds" beyond which ascenders/descenders poke out.

The relative sizing of fonts (and line heights) across languages is a separate issue, and is complex. But it's a disaster in interfaces today, where e.g. traditional Chinese characters sized to cap height become virtually illegible, they're so small.


> I absolutely do. Do you... not? You can do a quick web search for "Arabic type anatomy" that will make it crystal clear for you.

I also do not. When I search for that, I find articles like https://www.type-together.com/arabic-type-anatomy which say things like "Do not use ‘x-height’ and ‘cap height’ simply because there are no ‘x’ or capitals in Arabic."

So since you do see such a thing, maybe you could show us? You could give a specific link or make a drawing.


The article you linked clearly shows lines labeled "baseline", "descender", "Latin x-height" and "ascender".

And yes, it says do not use the term "cap height" but I already stated that a more inclusive term would be better. It's still obvious what's being referred to.

In current practice (e.g. put roman and Arabic fonts next to each other on the same line in your browser), the Latin "baseline" and "cap height" lines clearly are conceptually equivalent to the Arabic "descender" and "ascender" lines.

Do you see it now?

I don't understand how anyone's trying to argue that there are languages that don't have obvious logical top/bottom boundaries (or right/left for vertical scripts) for basing font metrics on.


> I don't understand how anyone's trying to argue that there are languages that don't have obvious logical top/bottom boundaries (or right/left for vertical scripts) for basing font metrics on.

I don't argue that at all. I agree that Arabic has ascent and descent. I just don't see that Arabic ascent corresponds to Latin cap height (rather than, say, Latin ascent).


The argument that the article is, essentially, trying to make is that some such correspondence obviously does exist visually - i.e. that you can write e.g. English and Arabic text in the same line, sized such that they look the same size.


> I also do not.

I might be missing something obvious here.

However, referring to your link, all the text is arranged into nice, equally spaced lines that are pleasing to the eye.

Doesn't that nice layout suggest there is some sort of 'cap height' value at play which then allows those vertical lines to work together so well?

Edit: The way I see 'cap height' is it is some sort of maximum guaranteed value. Now it turns out for English all capitals will have that exact same maximum value.

But the value is not saying all capital letters must take on that maximum value, instead it is that all the characters in the font will not be bigger than that size.


> Doesn't that nice layout suggest there is some sort of 'cap height' value at play which then allows those vertical lines to work together so well?

The em size? The thing the article is suggesting is worse than cap height?

> Edit: The way I see 'cap height' is it is some sort of maximum guaranteed value. Now it turns out for English all capitals will have that exact same maximum value.

No, it's not a maximum guaranteed value, it's the height of capital letters. In a Latin font, the ascenders of lower-case letters exceed it.

This is exactly where my difficulty is: some Arabic letters have ascenders and descenders. Depending on the font, it may or may not have a baseline. Letters that don't have ascenders and descenders have a height which I would say corresponds roughly to the x-height. The one thing that I can't see any correspondence to is the height of capital letters!


Oh come on, this entire article is obviously about Latin typography. Other scripts are simply out of scope.

You'll find that nearly all typography articles focus on a single script that way. Often that's Latin because of its widespread use in the west, but not always.

There's no racism here. Articles about pizza aren't dismissive of kebab.


> Other scripts are simply out of scope.

Luckily, the font engine developers for popular operating systems did not scope those issues out just because they were inconvenient for designers who falsely assume that "16" should mean an arbitrary part of a letter in a font is always 16 pixels high.

> There's no racism here.

No one said there was until you brought it up?


Nobody said anything about racism.

Yes, the article was obviously focused on Latin typography, but it was calling for a user interface change that would affect everyone using the affected software, no matter what language they speak/write.


For Latin, x-height is much more important than cap height, in particular for selecting compatible sets of fonts.


So basically:

"This car is only for driving in sunny weather on a perfectly smooth surface. It doesn't have lights, tyres, suspention or heating because traction, snow, rain, road bumps and darkness are out of scope."


> Oh come on, this entire article is obviously about Latin typography. Other scripts are simply out of scope.

The author proposes reworking either the way operating systems and browsers handle font rendering and scaling, or the way fonts are designed. Other scripts are very much in scope.


Forget "foreign" scripts. More pressing is the concern of properly sizing emoticons, so they don't look odd when mixed with Latin.

I sometimes wonder what problems collective human intelligence would have solved had we stopped short of encoding emoticons in Unicode.


Sorry to be pedantic, but I think you mean “emoji”.

“Emoticons” are the ones built out of letterforms, like “:)”


Technically, emojis are built out of UTF-16 letterforms.


That is an impressive amount of technical incorrectness to fit into one short sentence asserting technical correctness!


OK. Well let's come up with a solution that takes into consideration these other systems. Instead of getting angry that someone was ignorant of other languages, help them understand a way their ideas could be adjusted to be more inclusive.


Who's angry? I just pointed out that the proposed "solution" has some shortcomings that didn't appear to have been considered at all.

OK, a more inclusive solution: how about sizing fonts based on a coordinate space within which the font designer is free to draw the glyphs using as much or as little of the space as they consider appropriate for a given character. (They can even draw beyond the nominal "bounds" of the sized coordinate space, e.g. for long tails or flourishes.)

We could call it... oh, let's see... maybe the "em square".


That doesn’t solve the need of: “I have two different fonts and want to display them having the same size”. And also “I want to center a piece of text inside a box”. Both of which are basically impossible these days without special casing for each font.


"I ... want to display them having the same size" is too vague to be useful.

Do you mean having the same x-height (the main body of the lowercase letters, assuming Latin script)? Or the same cap-height? Or the same ascender height? Or the same descender depth? Or the same character widths? Or a similar overall amount of "ink" in each character?

Yes, it's impossible without special-casing for each font, because there is no clear basis for making such an equivalence, given how designs differ.


The general case would be to have text written in the same paragraph align well. To me that encompasses same baseline, same x height, and in the case of mixing scripts same cap height (in case of CJK). Basically “do what I mean”. I understand that it’s hard, but it should be at least doable. If it would require being able to specify which height to consider, so be it, make it an option.


This is why some fonts are designed to work together. Trying to force fonts that weren't is going to be ugly no matter what you do.


This is a nice solution which, however, won't solve the problem for non-updated fonts.


The author seems to be using the web notion of CSS pixels (which take PPI into account) without qualifying it. This is understandably pretty confusing to anyone who has used pixels in, well, any other context.


and em works in arabic and mongolian?


Tibetan doesn't even have a letter m.


What are M&Ms called in Tibet?


I would meditate over this question, but there is no Om.


ༀ is U+0F00 TIBETAN SYLLABLE OM.


□&□s


Cute, but I was halfway expecting something like "M&Ms are M&Ms regardless" as the response. M&Ms are still M&Ms even if the local language doesn't have an 'M' in its alphabet. Just don't ask me how they'd type it out in an email requesting more □&□s


This has basically always been the case for typography. The point size of physical type refers to the height of the slugs, not the letters on them. You can't just grab two 12pt fonts and expect them to be exactly the same size.

The standard solution is to use x-height, rather than cap-height and I agree that it would be nice to be able to select a font based on that instead of the em-size (font size).

Line height on the web is indeed broken, as it's measured from the center of the em-box and I agree it would be nice to be able to specify baseline distances instead but the catch there is that you really need to know the content of the text being typeset, as soon as you start adding words with diacritics (ÅÀÄ) your lines collide.


Agreed, I think sizing the font based on line-height would probably be more appropriate... e.g. 12pt means you get 6 lines per inch (or whatever the OS/device uses) regardless of font, as long as the font scaling shows lines per the designer's intent. There could then be some variance, but you'd get at least closer to the desired output.


Comfortable line-height usually has enough space for any diacritics. If you want to go tighter, you’ll need to know you won’t have them. But that’s true today too.


Worth noting this article is discussing OS-level native app font sizes, and is quite separate from discussions of CSS `font-size` on the web, for which this "problem" is well-known and `px` is the commonly used unit.

Also:

> P.S. VS Code seems to take editor.fontSize value directly in pixels. That’s a start!

I presume this is just the result of VS Code being Electron/HTML/CSS, rather than MS deliberately "fixing" the issue explicitly for this app.


You can't fix font size without also addressing printing. 72 may have been chosen as the PPI due to Mac monitors at the time, but it wasn't just for the on-screen size: it also meant that what you printed out on paper would be a certain size as well. The roots in physical printing are precisely why there can be a disconnect between screen size and text size: the bedrock assumptions that were baked into the system were targeting desktop publishing.

That's still the case today: 12pt font on a 13" 1080p screen may look a lot smaller than 12pt font on a 24" 1080p display, but when you hit the print button they will both be the exact same size on paper.


I think TFA is mistaken on it being Mac sizing... afaik, 72pt/inch for printing pre-dates the Mac... and 12pt meaning 6 rows per inch was pretty standard long before computers[1]

Also, if ppi is reported correctly on the device/os (usually isn't), rendering is actually pretty close most of the time.

https://en.wikipedia.org/wiki/Point_(typography)


Here's a simple issue that I've never figured out:

1. Create an svg

2. Draw an svg rectangle in there

3. Draw an svg text "Hello World" so that it fits inside the rectangle.

4. Make sure #3 above works regardless of which platform/browser you are using.

Now, I don't see it as a catastrophic problem that I can't (easily) automate everything up to step #3. I mean, maybe I could by iterating over various sizes inside an offscreen HTML5 canvas and measuring the results or something. But it isn't the end of the world to, say, just guess and check in a browser until I hit the right pixel size.

Problem is, that rendered result tells me absolutely nothing about what will render on other platforms. Maybe OSX will eat less horizontal width. Hell, maybe some troglodyte who lives deep in the guts of Gnu font rendering stack makes a change that accordions out the horizontal width so it's way out of whack with Windows and OSX. There's no spec that says a troglodyte can't do that. (This all holds for fixed-width fonts the same as variable width, btw.)

It's weird because I see all these SVG frameworks that have demos with, say, a graph where text looks to be aligned so that the end of a label doesn't overlap with a vertical line. But there's never a caveat given-- if there were, it should be that SVG cannot guarantee there won't be an overlap there, because it cannot control the underlying font engine that way. You literally have to check with your eye on every platform that you want to support[1].

Edit: clarification

Edit:

[1] You can't even simply check for each different font stack-- you have to check every extant version of that stack. Because again, a font stack can make a lot of arbitrary changes from version to version.


Wait, like this?

https://codepen.io/cperardi/pen/VwPmMrM

I would have no expectation at all that would work without specifying a specific font. Fonts just have different horizontal metrics. That doesn't seem to be a bug, so much as “fonts are different, if you want very specific space-constrained rendering, you should explicitly specify a font, and ideally serve it so all platforms get the exact same font metrics”.
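A sketch of the "serve it yourself" approach, so every platform shapes the text with the same font file (the family name and path here are hypothetical):

```css
/* Hypothetical font name/path -- the point is shipping one known font file
   so horizontal metrics don't depend on whatever the platform substitutes */
@font-face {
  font-family: "DiagramMono";
  src: url("/fonts/diagram-mono.woff2") format("woff2");
}
svg text {
  font-family: "DiagramMono", monospace; /* the fallback still risks metric drift */
}
```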


> you should explicitly specify a font, and ideally serve it so all platforms get the exact same font metrics

Oh wow, do I have news for you.

The hidden premise of my quest is that:

1. I'm shipping a fixed-width font and using it on every browser/platform combo.

2. I'm setting the same exact pixel font size that is being used on every browser/platform combo. (Well, I was until I found this discrepancy-- read below.)

3. The font metrics are different across all platforms!

Now, normally this shouldn't matter. The rendered width differences across, say, OSX and Windows are so small that we're talking a fraction of a pixel for a small test string.

But at some point in the past five years, somebody made a change in the Gnu font-rendering stack that made it an outlier in terms of rendered width. The same small test string would be greater than a pixel wider than the other systems. So for a medium to long string the width is different enough that in general you'd need to set a smaller pixel width for the recent Gnu systems if you want to get close to the same width as the other systems.

There's no spec about these metrics, so there's absolutely nothing I can do about the discrepancy.

Welcome to the Font Rendering Matrix!

I maintain a diagram-based language that has text in boxes connected by lines. On OSX and Windows the box is tight-fitting around the text. But on recent Ubuntu the box has empty space at the right-- this is because I had to choose a smaller font-size for Linux to keep the wider text from overlapping the box.

Once the renderer does all its font business with the smaller font (and I do not limit myself to integer numbers, btw), the width of the Gnu-rendered string is noticeably less wide than what would be rendered on OSX/Windows. However, I must keep the box at the same dimensions as it would display on OSX/Windows -- otherwise a user on Ubuntu could place two boxes side by side and cause an overlap when opened on OSX/Windows.

So I'm forced to make things a bit ugly on Ubuntu to retain compatibility across platforms, and introduce Ubuntu users to The Font Matrix problem if they happen to complain.

Edit: clarification


(Or convert to outlines, and then embed some sort of metadata for accessibility and machine readability. But that’s probably overkill.)


As a user, I sometimes need to select a part of the text and copy it to my clipboard.


Are you explicitly using a linked/embedded font resource, so that you control exactly what font is used to render the text? If not, there's certainly no expectation of uniformity.


Agreed. I simply would not expect this to work properly if you don't specify a font. Just compare Times New Roman and Verdana, two fonts that are broadly available, and broadly different in terms of character widths. Verdana will probably bust out of that container.

Maybe if we had container queries, and could set font size as a…container unit? I don't even know what to call it. Like vw and vh units, but relative to a parent container.
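Container query units were later specified for roughly this purpose; a sketch, assuming a browser that supports them (class names are illustrative):

```css
/* cqi = 1% of the container's inline size; requires an explicit container */
.card { container-type: inline-size; }
.card h2 { font-size: 5cqi; }
```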


would that not just be `%`?


Any decent vector drawing tool should have a mode where you save the text as shapes instead of letters, so it renders consistently. I know Adobe Illustrator does.


But then you can’t select it?


This is one of the failings of SVG as a really good graphics format. The ideal solution would be to embed the font in your graphics file so that it will always be available to the renderer, but the font’s licence may not allow this. The second best thing to do is convert the text to curves, and embed a blank font which can hold the text. This trick is pretty common in PDF files, especially scanned documents with added OCR data.


No. But the idea is that this is the version exported for end users, not an editable version. If you need the text to be accessible, you can add a description/title for screen readers.


Why would you expect this to work with the font changing? Am I missing something?



> It’s basically a font designer’s personal preference, imposed on every user of the font in every viewing condition.

Buried the lede here; but "personal preference" is really design. The article perhaps should be retitled "I don't like typographic design because it forces UI elements to adjust to typographic spacing rather than the other way around". Clearly you can make arguments for both being the "right" way.


The concept of changing "line height" seems completely broken for monospace fonts. Changing the line height of a monospace font makes line-drawing characters (lines, boxes, etc) fail to line up, either by overlapping or by having gaps between them.

This gist shows an example of the problem; GitHub's CSS breaks this property of monospace fonts: https://gist.github.com/joshtriplett/dc2446716999c54cc9a5c48...
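One mitigation is to keep code blocks at a line height matching the font's glyph cell so box-drawing characters stay contiguous -- a sketch, with the caveat that whether `1` is the right value depends on the font:

```css
/* Extra leading opens vertical gaps between ─ │ ┌ └ glyphs;
   a tighter line height keeps adjacent cells touching */
pre, code {
  line-height: 1;
}
```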


I agree, by default I would want to trust the line height of my font designer, just like I trust their letter spacing. I very rarely use multiple fonts on the same line (maybe a monospace font for inline code?).


Many fonts are designed so that you'll set a line height of at least 1.2; 1.0 barely fits the characters themselves.


Right. If you're mixing multiple fonts on the same line, the default should be the maximum line height of the fonts, unless overridden.


Those should not be drawn by the font. I believe the only way is for the app, e.g. the terminal, to intercept and draw them directly, not through the font. Otherwise you'll get horizontal gaps too: https://github.com/tonsky/FiraCode/issues/449


That looks like a rendering bug in the application. There's no fundamental reason they should have either horizontal or vertical gaps, if character cells are rendered at the correct size.


Font metrics are fractional at most sizes. You can optimize a font for one particular size, but at any other, there will be gaps.


Relatedly, it's really lame that many apps have an "Actual Size" zoom option (that's ⌘0 to you Mac Safari users) that doesn't mean anything. If I create a word processor document with a 11" tall page size and zoom it to "Actual Size," I should be able to hold a sheet of US Letter up to the screen and have it match perfectly, but it is much too small. Displays (except projectors) provide physical dimensions via EDID metadata, which can be used to compute the correct scaling factor.


Physical size might be sufficient if you are holding a paper up to the monitor but for most uses what matters is the apparent size to the user, which would also need to take the viewing distance into account.


Surely the size of a piece of paper at the same distance as the viewing distance from the monitor would end up having the same dimensions as the paper on the monitor?


>(that's ⌘0 to you Mac Safari users)

Or a lot of other Mac apps. Adobe software has had ⌘0 for longer than Safari has existed, followed by ⌘-/⌘+ for zooming out/in. FF does it as well.


Font metrics is a super-complex topic.

I have been digging into old Word documents and it's started to dawn on me why Visual Basic "worked" as a GUI builder in a time when its competitors didn't... The font metric system in Windows 95 gave great control, with the ability to position individual characters and more configuration choices (e.g. strikethrough) than most web browsers offer today.

It is way "beyond state of the art" to change the typeface in your text editor easily and have it work.

I have lately been doing a project that involves printing onto cards with bitmaps I make with Pillow, and you learn pretty quickly that there is no such thing as a "unicode font": instead you have to patch up your own "unicode font" by putting together a 日本語 font with a nice Latin font, and not worry about other languages until I have to print them... The web browser and other high-level tools do it automatically, but they don't do it well.
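The fallback logic itself is easy to sketch; the hard part is building accurate coverage tables per font and making their sizes and baselines agree. Here the coverage sets, font names, and chain (`COVERAGE`, `FALLBACK_CHAIN`) are hypothetical hard-coded stand-ins; in a real pipeline you'd extract coverage from each font's cmap table (e.g. with fontTools):

```python
# Hypothetical per-font coverage: which codepoints each font actually has.
COVERAGE = {
    "NiceLatin": set(range(0x20, 0x2500)),  # Latin, punctuation, symbols
    "JPFallback": {ord(c) for c in "日本語"} | set(range(0x3040, 0x30FF)),  # kana etc.
}
FALLBACK_CHAIN = ["NiceLatin", "JPFallback"]

def pick_font(ch):
    """First font in the chain that covers this character, else None (tofu)."""
    for name in FALLBACK_CHAIN:
        if ord(ch) in COVERAGE[name]:
            return name
    return None

def segment(text):
    """Split text into (font, run) pairs so each run is drawn with one font."""
    runs = []
    for ch in text:
        font = pick_font(ch)
        if runs and runs[-1][0] == font:
            runs[-1] = (font, runs[-1][1] + ch)
        else:
            runs.append((font, ch))
    return runs
```

Each run can then be drawn with its own `ImageFont` at a shared baseline, which is exactly where the two fonts' differing metrics start to hurt.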


The only "unicode font" I know of is the Noto family.

> Noto helps to make the web more beautiful across platforms for all languages. Currently, Noto covers over 30 scripts, and will cover all of Unicode in the future.

https://en.wikipedia.org/wiki/Noto_fonts


I have some memories of zooming into a word document and the space occupied by the text was not the same any more... I forget what OS it was running but W95 or earlier is likely.


Ah, yes... the good ol' days when changing your selected printer driver could also cause Word to reflow your entire document (because it affected the font metrics) and suddenly your carefully-tuned page breaks and figure placements were all wrong.


In other news, this guys dark theme simply blacks out the sight and only lets you read text by using the mouse cursor as a flash light.

It's hilarious.



Great minds.... think alike. And also don't look up prior art comment threads before expressing their opinions. :-)


It is a pretty amusing feature, and it's fun watching people discover it.


Heh, that's cool. But images are over the flash light making it seem strange.


I just didn’t spent much time actually implementing this. It was a quick joke. BTW emoji stay visible too!


It is, but I really wish there was an option other than that yellow. Maybe if you click it again you can get an actual dark theme? Pretty please?


Reader view in light or dark mode works fine.


Night theme, as opposed to day. It doesn't seem to support a dark theme.

Anyway, the Firefox Dark Reader extension makes them both moot.


As a game developer, I often have to write custom label layout code to make certain things fit in a certain size box across several languages.

None of us have ever figured out what the font size represented, so we just have code that tells us the size of every character and we piece it together like that. That's right, we just measure every character in order to get the pixel size. Because we use pixels. For everything. Because we kinda have to. Because that's what OpenGL uses.

This article finally explains why font sizes are so useless.
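The measure-everything-yourself approach described above boils down to summing per-glyph advance widths (plus kerning/tracking). A minimal sketch, with a hypothetical `ADVANCE`/`KERNING` table standing in for metrics you'd get by rasterizing each glyph once (e.g. via FreeType) and caching the results:

```python
# Hypothetical cached glyph metrics, in pixels.
ADVANCE = {"A": 14, "V": 13, "i": 6, " ": 7}
KERNING = {("A", "V"): -2}  # some pairs pull closer together

def text_width(text, tracking=0):
    """Pixel width of a string: sum of advances, kerning, and tracking."""
    width = 0
    prev = None
    for ch in text:
        width += ADVANCE[ch] + KERNING.get((prev, ch), 0) + tracking
        prev = ch
    return width
```

Note the nominal "font size" never appears anywhere in the computation, which matches the experience the comment describes.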


> The solution > Specify it in pixels.

No, no, no... I don't understand why the author would suggest specifying in pixels. We know pixel size varies widely between different displays. This means we need stupid hacks like having a multiplier to convert from specified pixels to actual pixels. So why pretend it's specifying pixels in the first place?

It's always annoyed me that DPI is not expected to be set correctly. All monitors should come with a DPI value that you set in your OS. Of course, a user is free to add some kind of multiplier on top of that (to account for variation in sight and preference), but the baseline would be standard.


You can set DPI correctly and match physical size, but why? Viewing distance still varies greatly, so there’s no point for a button to be 5mm tall always


Sometimes I want to:

1. size based on how big it is in the real world - mm (but accurate!)

2. size based on user preference - rem?

3. size based on how large it appears to the eye - angle subtended on the retina (displays that don't know this can assume an average distance to the eye different for desktop, mobile, jumbotron, etc and work it out from their size)

4. size based on the kind of interaction the user will have (touch / mouse / eye / point) - some kind of new unit based on Fitts law - or should this just be a bunch of special cases?

5. size based on proportion of the available space - % vh vw

I basically never need to size based on some other factors but maybe they have a use:

1. size based on the minimum resolvable size (i.e. the smallest something could be without losing definition). I guess this is much more of a concern if you're using a lot of very low resolution screens - px (but actual)

2. size based on the size of the whole document (including document that isn't visible)

Of course this is all made extra complicated by the fact that sometimes containers need to be sized based on concerns bigger than them, while sometimes they're sized based on the sizes of their contents. And whatever framework you're using is probably bad at exposing key pieces of information about e.g. 'how much of this text can I get in this width'

Unfortunately operating systems tend to have little clue about how big in the real world their display surfaces are, they are even less clueful about how far away the user is. Everything seems to be based on an early assumption that what was important was the dots on the screen, and as that assumption has proven false, instead of attacking the problem in a principled way, we've just created scaling fudges on top of pixels to take into account higher quality, user 'zoom' preference etc.

In the hypothetical ideal world, I'd set my IDE editor font setting to a subtended angle setting and switch between desktop and laptop displays, upgrade the resolution of my screen, or the size of my screen and never have to change it.


Just look at those two images with all the "Andy"s: in the first, the words all have similar visual weight; none stand out or take up considerably more or less space. In the second image, where all the cap heights are the same, some fonts make the word look huge (Courier New), others tiny (Victor Mono). That to me is a clear illustration of why this proposal is not going anywhere.


They really don't seem very similar at all in "look" here, and their measurements are all over the place.

https://s.tonsky.me/imgs/font_size.png


> I am using Sublime Text 4 on macOS

did I miss when sublime text 4 was released? I'm still on 3


Sublime Text 4 is in private alpha for users with a paid license.

https://gist.github.com/jfcherng/7bf4103ea486d1f67b7970e846b...


Oh wow had no idea.

Anyone knows what the major changes are compared to v3?


We just released a teaser on twitter with some of the major changes: https://twitter.com/sublimehq/status/1377088860836913153


Awesome thanks!


oh cool


Oh man, I'm still on 2. I wonder if I can still get discount upgrade from 2->4


I think the funniest part about this post isn't the lack of typesetting knowledge among people who work with digital typesetting (there should be no mystery among points, em, rem, and px if you do this for a living; that's 101-level stuff)... instead, it is the subtext that the latter of the two complaints that spawned this is that vertical centering is still a challenge when it really should not be. And I can't help but observe that the two are tightly related.

Also, I didn't know the original Mac screen was 72dpi. That is so delightful.


"500 years of typography is useless; let's throw everything out!"

"Also, I see no reason to honor the so-called “default line-height”. It’s basically a font designer’s personal preference, imposed on every user of the font in every viewing condition."

Yes, that would be "font design".


I typeset a scanned and OCRed copy of an old book. It used a moderately common font except with extremely narrow line spacing. None of the tools I had supported a line spacing narrower than the spacing chosen by the font designer, so I had to create a hacked version of the font to make it work.

Traditionally, the leading was separate from the type. Nowadays it isn't, and that can be a huge pain in the bum.


Is this app-development specific? In CSS the font-size and leading are easily decoupled.


I'm not sure what you're asking. As a software developer, supporting negative leading isn't hard, but as an end user (as I was in the example above) that's no help if the software developer hasn't done the work.


I believe the problem there is that the leading was specified separately from the font and was negative. In the old days, that would have required filing down each type element.


This is also a very big pet peeve of mine, Mr Author. I'm honestly more frustrated when I want to have multiple systems with different fonts at least look similar in 'space occupied,' whereas the lettering being slightly different is just... really a whatever matter.


Em was originally the width of the capital "M", not lowercase "m" as the article says.


Reading this article makes me feel nostalgic: Remember the time when text rendering meant bitblt-ing from an 8x8 pixel matrix stored in ROM.


And font design meant uploading a new set of bitmaps to the video card (if you were lucky enough to have a machine where they weren't strictly limited to what was provided in ROM).

The first non-Latin font editor, text processing and typesetting software I worked on ran on just such systems. Yes, those were very different times!


"Dude, I can't believe that your TTY renders the 'a's counters at four pixels high whereas mine at six! They're both eight-by-twelve in the bitmap."


I don't have strong opinions on this topic or any particular expertise but I looked at the "cap Height" vs "point size" examples rendered to support the thesis of the article and thought to myself, "The difference doesn't seem worth worrying about, the existing system looks like it works to me."


> Instead, macOS always uses 72 PPI to convert points to pixels.

Can't displays report their physical dimensions now? Couldn't this be used to restore using a physical dimension like mm for font sizes? (Please not inches of all things though!)


macOS already does this at the pixel step, by multiplying the entire pixel layout by a scaling term before actually drawing the widgets. So effectively, macOS does not use 72 points-per-inch, it uses 72 points-per-pixel, and then ensures 1 pixel is a particular physical measurement, for the same compatibility reasons that caused us to drop device pixels.

The reason why the author is having issues with text sizes on different size monitors is that macOS does not support fractional scaling, and its high-DPI detection is fairly limited. A non-high-DPI monitor has no scaling options whatsoever; it's locked to 1x. High-DPI monitors are locked to 2x, with the additional option to change the desktop's layout size and scale the resulting image to achieve the effect of fractional scaling. (Apple is really paranoid about introducing rounding errors in pixel-aligned layouts, which is why they handle non-integer displays this way.)

BTW, if anyone happens to know how to manually flag a monitor as HiDPI in macOS Mojave (or if I can do it in Catalina/Big Sur), please let me know. I have a 1080p NexDock Touch that I have to run at 720p because macOS won't give me the fractional scaling options on an external monitor.


I really wish I could use 1080p with a 1.5x scale factor.

It is worth noting that when Apple was initially developing this feature, around the time of Mac OS X 10.5 (Leopard), they had fractional scaling. I thought it was awesome. I can only assume that some artists felt it was not awesome. (I'm going to stop googling for this now, but I think 'Resolution Independence' is a key phrase that will help in finding it if you are interested.)

As for HiDPI mode, I know I've done it in the past for external displays, but I haven't been able to get it to work with my 1080p displays using Big Sur. I think this might've been the most useful link I've come across: https://www.reddit.com/r/hackintosh/comments/jh6pjd/hidpi_on... . If you do manage to get it to work, a toolbar app like ResolutionTab or EasyRes is then a big help for changing between modes.


The reason why Apple doesn't do fractional scaling is rounding errors: if you have a design that's aligned to the pixel grid at 1x, it's only guaranteed to remain aligned to that grid if you scale by other integers. If you aren't on the pixel grid, then the layout engine has to round sizes up or down to fit, which is dangerous. You're adding error to layout and then multiplying it by the number of elements being sized.

This gets particularly bad on the web, where a number of different tricks have been employed by browser vendors over the years to avoid rounding errors (see https://www.palantir.net/blog/responsive-design-s-dirty-litt...). If you have, say, a float-column layout with percentage widths, then rounding errors can easily wind up making things either not line up or, worse, actually overflow their containers for no reason, breaking the layout. Throwing non-integer scaling into the mix makes this even worse.

Of course, this is a tractable problem, but it was much easier for Apple to just render-and-downscale from the nearest whole-number factor instead of making the lives of any developer with custom layout engines that much harder.


I don't get it - why does alignment matter when pixels are so small you've got no hope of seeing if something is aligned down to such a small measurement or not?


Normally it doesn't, but there's edge cases where errors can stack up and cause the entire layout to break.

A related problem happens in word processors: if you just lay out and render a document at 1x, but at print time you lay out and render at a higher resolution, you will get a different layout. Lines of text that are just over the width of the page when rounded to the 1x pixel grid will be just under the width of the page at the higher-density grid of a high-DPI printer. This happens even at integer scales because fonts are designed on abstract grids that have no relation to (and are far finer than) the pixel grid that the document will be displayed or printed on.

(Complicating things, Windows intentionally renders fonts using custom rounding logic to force them onto the pixel grid as much as possible, this is known as ClearType. macOS doesn't do this nearly as heavily, which is why fonts look less sharp but more faithful on non-Retina Macs.)

AFAIK word processors either have to always round metrics down (so that layout decisions are deterministic at all scales) or always work on a 1x layout and render at higher resolutions using the 1x layout metrics.
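A toy illustration of how rounding to different pixel grids diverges (the numbers are hypothetical: a 100 px line at 1x, a glyph whose ideal advance is 10.4 px, advances floored to the grid as a stand-in for whatever rounding policy the renderer uses):

```python
import math

def chars_per_line(line_px, ideal_advance, scale):
    """How many glyphs fit when each advance is floored to the pixel grid."""
    grid_advance = math.floor(ideal_advance * scale)  # per-glyph rounding
    return (line_px * scale) // grid_advance

# At 1x: floor(10.4)  = 10 px/glyph -> 10 glyphs fit on the 100 px line.
# At 3x: floor(31.2) = 31 px/glyph -> only 9 glyphs fit on the 300 px line.
```

Same text, same nominal size, different line breaks at different scales, which is exactly the screen-vs-printer reflow problem described above.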


Apple agrees, since they’ve stopped pixel-aligning things now.


Really? Or are they just doing the same "render at a higher resolution and then downscale" trick that they did for the iPhone 6+, 7+, and 8+?

(For context: https://medium.com/we-are-appcepted/the-curious-case-of-ipho... )
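The arithmetic of that trick, as described in coverage of the Plus-model iPhones (which lay out at 414 points wide, render offscreen at 3x, then resample to a 1080 px panel), can be sketched like this:

```python
def effective_scale(points_w, render_factor, panel_w):
    """Net physical pixels per layout point after the downscale pass."""
    rendered_w = points_w * render_factor  # offscreen buffer width in pixels
    downscale = panel_w / rendered_w       # resampling factor applied per frame
    return render_factor * downscale       # pixels per point as actually shown

# iPhone Plus models: 414 pt wide, rendered at 3x (1242 px),
# resampled to the 1080 px panel -> ~2.61 px per point, i.e. a
# full-screen ~0.87 downscale: fractional scaling in disguise.
```

So "no fractional scaling" really means the fraction is applied once to the whole framebuffer rather than inside the layout engine.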


My theory is fractional scaling was hard and integer was easy, so they went the easy way. Don't think artists were asked, since I saw non-integer-scaled iPhones and non-integer scaling as the default on retina MacBooks (both implemented by rendering at a higher resolution, leading to worse results than direct fractional scaling).


They do, and in fact both X11 and Windows are capable of using that information to scale font pixel sizes appropriately. In both cases this mostly defaults to off, with a fixed DPI for UI font layout, because a non-default display DPI leads to results that look wrong/ugly (blurry text, font stroke thickness disproportionate to other UI elements, etc.) or are even unusable (there is an edge case in Win32 UI where one way to create a non-resizable dialog-style top-level application window causes its contents to scale with DPI while the size of the window itself does not, so you get UI elements that are clipped outside the window).


It could, but would usually be misused. People would design for the size of display and typical viewing distance that they like to use, forgetting that these things vary hugely.

(And that doesn't even begin to address the question of what part of the font you'd be measuring.)


A lot of people are saying that we can’t use cap height because of non-Latin characters and that was also my first thought.

But we could say that every character is defined relative to the cap height of “M” and then it is up to the font designer to choose how big an あ or a ㅕ is relative to the M.

That would however not be much of a change compared to an em, which basically just says “how big is your line compared to the tallest line available?”

Unlike the author, I haven’t created a successful font, but my feeling is that it is better to have things defined relative to a max height, than have things defined relative to “a height”.


One of the factors in the font bounding box is character accents. We don't use them much in the U.S., but other countries do. Looking at Fira Code, for example, it seems the bounding box is unnecessarily tall for a capital A. But when you look at the accented capital A, it is about right (it has a little extra padding). Still, there is some inconsistency. For example, the "Latin Small Letter L With Acute" (ĺ) in Fira Code breaks out of the bounding box (whereas the same accent on the capital L doesn't). The same with descenders and their accents: the bottom boundary accommodates most of them, but some hang just a bit below. The "Box Drawings" glyphs included in Fira Code extend well outside the bounding box horizontally and vertically (but are kiss-fit on the left and right edges).

As a designer, I agree with your basic point, that more consistency is needed. But it's worth noting that the same issues affect traditional print design. A 10 pt font in Adobe InDesign, whether in an inch, point, or mm based layout, will vary in size from one font to the next. Their cap heights and x-heights will be different, sometimes dramatically. However, the one thing that will be the same is their leading (baseline spacing). That would be a good start for a reformation with regard to browser displays.

For now, it is probably a case where we can blame the browser devs or standards organizations, but they will in turn blame the specs of the source fonts, which are in fact inconsistent as I said above.


Related HN threads on a post I made in 2012.

https://news.ycombinator.com/item?id=15639616 https://news.ycombinator.com/item?id=4236429

"All of this has happened before, and it will all happen again."


Both em and px units are meant to have an absolute size (in, let's say, millimeters or real pixels of the screen). This makes both units completely different from percentages and quite similar to each other; the rather arbitrary and unimportant cases in which they are scaled differently don't have a serious impact. On the other hand, I've wasted more than enough time to debug layouts in which using both units made sizes inconsistent; I tend to use px only for ease of arithmetic.

The important and interesting point in the article is that what should be sized (regardless of the units!) are line height and "cap height" (x-height + ascenders) instead of x-height + ascenders and descenders + arbitrary empty space. It isn't a very convincing argument: depending on the font and text, the apparent height of a line could be represented more faithfully by x-height or by accounting for descenders (and swashes), and specifying different sizes for different fonts isn't particularly difficult.


Typography nerds are the best. I admire this article - merits or demerits - people who love typography always exude passion


I can't be the only person looking at the top of the article, thinking "but if you just add the descender and ascender values together (the total height of the character, not the block space that it occupies), you get almost 32."

I'm guessing if you even slightly tweak the way the measurements are being taken, you'll get 32. even if you just assume "always round up", you get 32.

This is one of the strongest cases of manipulating data by providing a bunch of data points to obscure the one point, enabling belief in "well, they provide all this data and none of it is 32, so they must have a solid point here!"

This misleading opening really just makes me want to avoid reading the rest of the article. I hope there wasn't any real information hiding behind that facade. :x


Here’s a picture with more samples:

https://s.tonsky.me/imgs/font_size.png

There’s nothing to hide: some fonts’ height adds up to font-size, some don’t. Because it’s not a rule, you can’t rely on it, so it doesn’t change a narrative.

Fira Code is one of the fonts that don’t add up, though. 31.19 != 32, no matter how hard you massage it.

P.S. for other examples, I also chose the extreme examples to better illustrate the point. But I only chose relatively well-known fonts, so it’s not all random or manipulated.


>This misleading opening really just makes me want to avoid reading the rest of the article. I hope there wasn't any real information hiding behind that facade. :x

I think you've missed the whole point of the article (which is that this is font-specific as opposed as universal), and you've made a conspiracy theory to boot!

"This is one of the strongest cases of manipulating data" - lol.


That’s an absurd take... the article’s first paragraph is quite clear he’s going to tell you exactly where that 32 ran off to, and despite your random arithmetic that happens to find the number with no double-checking whether it works on any other font-size or font (equivalent to finding your birthday in license plates, by arbitrary manipulations of the numbers available), it turns out the 32 is in fact missing from any visible artifact of the font.


The author could have also mentioned, among the other not-32 heights, the distance from ascender to descender being 31.19 and explained how that's not it either. Instead, he just didn't mention it, as if he was trying to hide it because it was inconvenient to his narrative.


He could have also mentioned that floor(cap-height + x-height + descender) also equals 32, but isn’t the answer either. And probably a few other arbitrary arithmetic combinations would work as well.

But it seems to me it'd be much more efficient to just explain the real answer instead of telling you all the ways you could misinterpret it... which is exactly what he goes on to do.


I was working on displaying an event title across devices. Since the title can vary dramatically, from a single word to 10+ words, it's hard to size it properly so that it displays well on both phone and computer screens. Not really sure which approach I should have taken, but I ended up using if statements to change the font size based on word count. I wish there were some way of setting the font size so that it fills to the end of the screen, but no larger, so the title never runs longer than two lines. It's really easy to spot a title that's too big or too small and not aesthetically pleasing, but I don't really know how to do that in code. Seems like a font size tied to the line length might help.


Perhaps using…

http://fittextjs.com


Thanks, looks great


I have a similar issue on the web.

I have a library, which adds DOM elements and some text to the page. The library expects a certain font to be loaded on the webpage. (This is an internal project, so is usually the case, but not always.) Obviously there are fall-back fonts if not, but those fallback fonts happen to be a different size given the same font-size specification, and so everything looks like crap.

The obvious solution would be to throw away the custom font. But if we really want to keep it, the only solution is to load the page, then measure the width of the text, then we know if the right font is loaded, and if it isn't then tweak the styles so things line up better.


Font metrics are a pain. A while back I was working on trying to reliably reproduce the behaviour you get from the win32 font api using stb_truetype, but due to time constraints I just ended up having to manually tweak font sizes to match.


> 16 pt text on Windows is ⅓ larger than 16 pt text on macOS

Is this why text looks smaller on Macs compared to Windows? My parents found it difficult reading text when they got a Mac because of the small text


Back in the day Apple decided that text on the Mac should be as close to text on the page as possible, and Microsoft thought that text on screen should be larger since computer monitors are normally further away.


I'm a huge fan of the idea. I've also been raging for years because of those inconsistencies.

Though I'm not entirely sure this is a matter that can be resolved in the short term:

I'm not certain about this, but I think a character's alignment depends on how it was made in the font's files; I don't see any clean way for a browser to change that automatically.

Also, it raises another question: would font designers care if their font's default settings were tampered with? (I'd guess that yes, they would.)


The font-size-adjust CSS property makes the font-size property specify the x-height (instead of the default em size). AFAIK there's no way to specify cap height instead. And unfortunately it's only available in Firefox by default; Chrome requires a flag to be enabled.

https://developer.mozilla.org/en-US/docs/Web/CSS/font-size-a...


> Specify cap height, not em square size.

I'm curious how that would be accomplished. There is nothing that requires a font-maker to have a consistent cap height in a font.


The only font I’ve seen that reported cap height wrong is Ubuntu Mono. Nothing stops designers from misreporting this metric, sure, but in practice it is MUCH more consistent.


I long for the time when we used crisp, non-scalable, beautiful bitmap fonts. So-called "anti-aliased" fonts always look irremediably blurry.


I'm still using a custom bitmap font for monospaced things (text editor, console and such). No antialiased thing has made me change my mind - bitmap fonts just look so crisp and sharp and readable.


Agreed. I changed terminal emulators from Termite because of Pango dropping bitmap support. Alacritty and foot both support bitmap fonts and Wayland. Unfortunately, many more programs still can't use bitmap fonts now: Sway, swaybar, waybar, wofi, mako. Pango is everywhere and it sucks. I can make my terminal and emacs look crisp and clear, but a lot of other things look blurry now.


Isn't specifying font size as "display-independent pixels (dp)" supposed to deal with this?
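Partly: Android's dp (density-independent pixels) unit normalizes pixel density by defining 1 dp as one pixel on a nominal 160 dpi screen, so the conversion is px = dp × dpi / 160:

```python
def dp_to_px(dp, dpi):
    """Android-style density-independent pixels to physical pixels."""
    return dp * dpi / 160

# 16 dp is 16 px on a 160 dpi ("mdpi") screen
# and 48 px on a 480 dpi ("xxhdpi") one.
```

But it only fixes the density half of the problem; it still says nothing about which part of the glyph the resulting size actually measures, which is the article's complaint.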


> 16 pt text on Windows is ⅓ larger than 16 pt text on macOS.

px != pt

It appears that Windows and Mac browser screenshots are misleading at best.

1px in CSS is a logical unit equal to 1/96 of an inch. On any platform.

And so neither 1px nor 1pt has anything to do with screen PPI (the size of a physical pixel as measured by a ruler on the screen surface).


So a 12px character on a 14" 1080p screen is the same size as a 12px character on an 18" 1080p screen? I'm still somewhat skeptical, despite fifty posts in this thread claiming just that.


In pixels, yes. In physical size, no (pixels are much smaller on 14" than on 18")


Right... but that puts the lie to "1px in CSS is logical unit that is equal to 1/96 of inch. On any platform"

It's rarely equal to 1/96 of an inch, in reality. It bounces all around.


In browser, yes. Screenshots are of Sublime Text, which is a native app.


Perhaps use a different name for it, so we can have both?

So e.g.: "font-size" is the old behavior, and "font-height" is the new behavior.

By the way it would also be nice to be able to specify the height of lowercase letters, e.g. "font-lowercase-height".


Why not in mm, and then use the actual screen resolution to convert it to pixels?


Because not many people care (yet). Of course fonts should always be defined in physical units (be it pt, inch or mm). Pixels are irrelevant. When you are printing text, you also don't care about how many pixels are being printed: your printed page will look the same on any printer; you can print a ruler. But somehow the software industry has ignored this for decades. Why? I don't know. Probably ignorance, bad cooperation and lack of standards.


The article points to an interesting UI bug in macOS, text alignment in buttons is weird. Cf https://grumpy.website/post/0UfwgmMDe


I'd bet this was done on purpose, since the optical centering of "Revert" is improved by this change.

However, Apple should reduce this optical offset for smaller buttons with relatively larger text, e.g. those in the Bluetooth System Preferences panel.

The side-effect is that all-caps button text is no longer optically-centered, but that's a relatively-rare case other than the "OK" button.


There is an issue with the incoherence of this behavior in Big Sur. Example: on my screen, in the menu bar, the vertical alignment of menu items (File, Edit...) is not the same as the date & time widget (slightly higher). I don't know if it's because of my non-Retina screen and the lack of optimisation for it in recent macOS.


Whether the arguments or proposals are correct or better doesn't matter.

We, the people reading this blog post, lack the ability to change hundreds of years of conventions and standards across the whole world.


other interesting related article and HN thread:

- [Continuous Typography / Max Kohler](https://maxkoehler.com/posts/continuous-typography/)

- [HN Discussion](https://news.ycombinator.com/item?id=26523550)


Meh, if it were up to me webpages wouldn't have any say at all over fonts or colors. When I was 14 I would browse the web with no images, and with the colors and fonts overwritten, and it was so nice. Then image maps became a thing and it was a pain, then javascript, then menus made of small images, then flash, and it's just gotten worse and worse every time around.

Now there are webpages that don't work right on a 3ghz single core processor with 2 gig of ram, because of some reason that has nothing to do with what is best for the user.


You might like Gemini. All styling is done in the client.


Not sure if capital letter size would catch scenarios with little curly swirls that make other characters taller than the capital letters.


Silly idea. If you fix caps height it's impossible to create a font that is say deliberately small caps, or has a consciously flamboyant F


typo:

    >Caps are what human eye actually percieves as text block
should be 'perceives'

...also, cool-looking article. didn't follow the details but it def seems like fonts are always more trouble than they're supposed to be.


What about other languages than English and other alphabets?

What's the cap height of Ä? Is it higher than A? Or the same? Will your fixed line height then make Ä intersect the line above? What about あ what's the cap height of that? CJK languages don't even have capital letters... Not to mention the other scripts I'm less familiar with.

This proposal loses credibility immediately for narrowly considering only one language and ignoring the other languages of the world.

Trying to "fix" fonts isn't a one-person job. You need a committee of representatives for all modern scripts that are in use. And that's not likely to happen because frankly, it's working well enough as it is, and disturbing the status quo here is going to cause more issues than it solves, see https://xkcd.com/927/

There are other problems with the proposal, specifying sizes in pixels messes up printing where you have anything from 300 to 1200 DPI. Not to mention high DPI 4K screens. Talking about inconsistency? Try using a 96 DPI screen next to a 168 DPI screen and try to get even the same font to line up... The only way to do that is device independent units and a correct DPI setting for the screen.

I posit that it's not fonts that need fixing, it's the rest of the UI (Windows etc) that need to start correctly using device independent units instead of weird scale factors...

In fact I'm going to go on a rant here... All graphical UIs are doing this wrong, each individual display should use (by default, with override) the EDID reported DPI and all layout should be done with device independent units (RIP bitmaps). This makes everything on the display be the size (in SI units) that was specified and it's consistent between displays of different DPI. On top of that there should be a scale factor to make the entire UI larger for accessibility reasons or if you're using long viewing distances. Scale factor should auto detect if the DPI is too low and be set higher to compensate, this makes text legible by default on low DPI screens like old TVs and projectors that are used for media centers.
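The rant above boils down to simple arithmetic: lay out in device-independent physical units and convert per display using its reported DPI, with a separate accessibility scale factor on top. A minimal sketch (function and names are hypothetical, not any real toolkit's API):

```python
# Sketch of the commenter's idea: specify lengths in millimetres,
# convert to device pixels per display using that display's DPI
# (e.g. as reported via EDID), plus an optional UI scale factor.
MM_PER_INCH = 25.4

def mm_to_device_px(mm: float, dpi: float, scale: float = 1.0) -> float:
    """Convert a physical length to device pixels for one display."""
    return mm * dpi / MM_PER_INCH * scale

# The same 4 mm button height needs a different pixel count to come
# out the same physical size on a 96 DPI and a 168 DPI screen.
print(round(mm_to_device_px(4, 96)))    # 15 px
print(round(mm_to_device_px(4, 168)))   # 26 px
```

With this scheme the scale factor is the only knob that changes perceived size; the DPI conversion just keeps physical size honest across mismatched displays.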


Something any proposal of this nature tends to be missing is an inquiry into the implications for non-English languages.


You mean, like German?


ẞ (capital sharp s; added to Unicode in 2008) and even the much older ß (eszett) are not included in all modern fonts. Sometimes Eszett is simulated with a lowercase β sign even today. Often its size doesn't match the ASCII characters.


Fira Code has it!


Neither pixels nor ems are the right solution; both are a side-effect of an era of low-resolution screens. What we need is real physical units, and not just for fonts, but for every UI element. I should be able to say that a button is intended to be an inch tall (or 2cm or whatever) - the point is that it should have a height relative to reality, not relative to nothing. Imagine if anything else worked like dimensions in software: you asked for a chair 3 feet tall, but every maker of chairs had a different definition of a foot. This is how points, pixels, everything works on the web. And the most important thing to remember here is that if you still like this system, you can build it on top of one that fundamentally relies on real units at the bottom. But it is incredibly difficult to do the reverse.

CSS has "in" and "cm", but they don't mean inch or centimeter. On most browsers, "1in" is defined as 96 (CSS) pixels[1]. Except, if you print, then 1 inch DOES mean 1 inch. Which hilariously means that if you want to use CSS to lay out something to print, you can't tell how it will actually look until you print, because there's no "please just render like you would when you print" mode. This has been requested, and the classic answer of "who would ever want that" was the response. You can, with a lot of work (and JS unfortunately, unless you pre-generate a huge CSS media query for a wide range of screen resolutions), create yourself a unit such that 1em = 1 physical unit, but this has other issues when you start wanting to do other things in CSS.
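The CSS situation described above is easy to state as arithmetic: all of CSS's "physical" units are defined as fixed multiples of the CSS pixel, not of any real-world length. A small sketch of those spec-defined ratios:

```python
# CSS absolute lengths are anchored to the CSS pixel, not to physical
# reality: the spec fixes 1in at exactly 96 CSS px, regardless of the
# actual DPI of the screen doing the rendering.
CSS_PX_PER_IN = 96

def css_length_to_px(value: float, unit: str) -> float:
    """Convert a CSS absolute length to CSS pixels."""
    factors = {
        "px": 1,
        "in": CSS_PX_PER_IN,          # 96 px
        "cm": CSS_PX_PER_IN / 2.54,   # ~37.8 px
        "mm": CSS_PX_PER_IN / 25.4,   # ~3.78 px
        "pt": CSS_PX_PER_IN / 72,     # 4/3 px
        "pc": CSS_PX_PER_IN / 6,      # 16 px
    }
    return value * factors[unit]

# "1in" on screen is 96 CSS px - a true physical inch only on a
# display that works out to exactly 96 DPI after scaling.
print(css_length_to_px(1, "in"))   # 96.0
print(css_length_to_px(12, "pt"))  # 16.0
```

So "12pt" text is really just "16px" text in disguise; whether that is one sixth of an inch on your screen depends entirely on the hardware.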

Screen UI is the only aspect of design where we pretend that precision shouldn't matter. It is Stockholm syndrome. I remember how people said that it's fine (or better?!) that you can't know "for sure" what font will display on the screen on the web. MAYBE it's Times New Roman, but maybe not! You should be flexible. This is nonsense. Perhaps that's the reality we live in, but it's not desirable. We shouldn't fool ourselves into preferring that. Clearly if we could make the exact font show up, it would be a better state of affairs. Again, if you really believe font uncertainty is critical to the web experience, then by all means, add a script that randomly swaps in a different serif font occasionally. Boom, everyone is happy!

Dimensions are the same kind of problem: we're using ideas from an age when screens were 256 pixels wide. When you design for print, you think about the actual size of the produced object. Just because someone can resize the window does not mean that the height of the letter x should be some abstract unknowable quantity. And just because you can set the baseline size to a specific physical size doesn't mean users can't apply a zoom to that. All it means is that the language with which we communicate to each other is real and not completely different depending on who is reading it.


The problem with physical sizing is that 1cm on a large monitor will still look different to 1cm on a 5" mobile phone, because of the different viewing distance. If you had the same physical size, a browser window may be 20cm across on the monitor, but on the phone you'd see a few cm and a scrollbar.

Anyway, it's pretty ridiculous once you start using units without regard to their definitions, like points and pixels. You'll just end up with a big mess. If something different is wanted, then it's time to define something new.


Sure! But now we're working with what we want to work with: how many inches on mobile vs how many inches on desktop. Just like you want a different size chair for a dollhouse vs a real house, but we still agree on the units! On computers, though, we have this separate confusion of units with no fixed meaning.


(Logical) Pixels + system-wide scaling, problem solved. Windows already got it (apparently, since Windows 95 times!), macOS too, but more poorly


On Mac we definitely don’t have this. Everything is clamped to integer multiples (1x, 2x, and 3x). So the “real” size bounces around wildly screen to screen.


Speaking of font size

HN home page font too small

My browser zoom settings fix these comment pages

But not the home page


relevant XKCD: https://xkcd.com/1273/


too many pictures?


To be honest, this kind of solution throws the baby out with the bathwater. What this solution effectively proposes is to change the way fonts are designed. Font design is not just drawing letters; the cap height to em size relationship is also an important decision. It doesn't have to 'align' with anything, space around letters is important.

Again, there is a strange illustration in the "The Problem" section where lowercase 'm' from one font is compared to lowercase 'm' from another one. Yes, they have different x-height. Why is this a problem? The difference in cap height to em size ratio in different fonts is intentional, it is not random.

In my roughly 15-year design career, I have never once needed fonts from different typefaces to have the same cap height when the specified size is the same. Never have I heard from my design colleagues that this somehow bothers them. It is fine that different fonts have different "ink ratio", that's intended.

And one particularly striking part - about the default line height. In fonts that are good for large amounts of text - where you can typeset a book with them or use them for body copy on a website or in a print magazine - the default line height is that particular height that makes them the most comfortable to read. And it's perfectly fine that it is different in different fonts - it has to be. It is not random - it is intentional.

If there were no specific default line height, then designers would have to tweak it each time for each font to make it readable - or, more realistically, the quality of typography on the Web and elsewhere would decrease, because nobody would bother to do this properly.

Sometimes you need sensible defaults, sometimes you need to respect the work of type designer.

To discard hundreds of years of history and knowledge to satisfy marginal at best use cases feels barbaric to me, to be honest.


Hm. As a user, when writing a document I put down the text first, using the default font and whatever font size feels comfortable for writing. Then in the polishing step, I try out different fonts. My standard routine is to switch font, notice that it's now way "smaller" or "denser" (or the reverse) than before and adjust font size until it's roughly back to where it was. Some font switches don't visibly change the feel of the size that much, but most do.

I guess what I'm saying is that as a user, whatever property is tied to "font size" is not working for me. I don't know if caps height would work better for me than font size does, but it could hardly work worse and the examples in the article make me think it may be substantially better.

And I would like that outcome! I would like to be able to try out several different fonts without needing to adjust everything for each one. I'm sure font designers had some specific purpose or set of purposes in mind for their fonts, but I'm using them for my purpose, and in practice the sizing mechanism just isn't right for me.

We can achieve the same end result either way. You have two non-orthogonal variables you can adjust to get whatever you want. This article is arguing that one of those variables is the wrong control knob, and I'm inclined to agree.


I agree that technically you can have two non-orthogonal values, and that would actually address some of the problems users hit when switching fonts. A checkbox like 'preserve x-height when changing fonts' would make those changes less drastic.

Yet the article argues that font size is broken; my point is that the author does not understand its intended usage.

Fonts are a very specific instrument with a long tradition of knowledge associated with them, yet there is also the context of a more 'consumer' setting, like a code editor. Here maybe there is a clash between professional usage (designers should be fine with different fonts having different sizes) and more personal usage.

Fonts were generally not meant for personal usage, and there is a certain learning curve on how to use them correctly, but there is currently no 'interface' for the general type of user who just wants the text to look good.


I agree.

The article is written from a programmer’s point of view and I understand that a programmer doesn’t want to deal with font characteristics or how something is perceived.

The role of a designer is to think about font sizes and line heights, and designers have to understand how typography on the web works. Typography is not something which is based on numbers or technical rules.

I think it’s good that the job of creating good ‘online stuff’ hasn’t merged into one person doing everything. It’s still too complex, because you can’t turn everything into numbers and rules.


What's the x-height of z̷̡̡̧̛̛͕̩̯̮̯̱̱͚͎̠̳̦͉̤̙̙͔̭̩̝͉̱̯̠̘̝͙̞̟̝̦͈̻͈̱̳̯̱̦̭̱̝͇̪̬͚̹͈̉͐̃͑̒͗̆̒͂̀͂̿͑͋͂́̓͆̅̐̓̎̔̓̔̓̊̀̌̈́̀̀̍̏̈̈̑̍̈̐̓̂̆̉̑͑̒̌͛̀̀́̈́̈́̊͋̃̅̑̎͆̏͗̑̊̀̓̾̈́͗̄̆̂̿̑̌̀̅̽̑͂̒̆̓̃̔̅̒̀̓̂̄̋̐͆̔̂̓̋̽̉͌̈́͂̀́̏͌̾̎̿̑͒͑̿̀̈́̈́̏̃̑̈̐̓̈́̔͋͊̃͊̈́̎̈́̃̓́̏͛̆͗̄̓̈́̊͋̕͘̕̚̚͘̕̕͘̕͘̕͘͝͝͝͝͝͝͝͠͠͝͠͝͝͠ͅͅͅ?


Interesting character :) what is it?


It's just a "z" with a ton of combining diacritics. Which is entirely irrelevant. x-height is a dimension of the font, it's nothing to do with an individual character. (In most Latin fonts, it'll match the height of the "x" glyph, but that's not a hard and fast rule.)


Capital Ẕ̷̲̳̥̎̆͌̏̍̆́̓͠ḁ̶̡̖̲̮̫̞̘̠̳̗̯͇̪̘͊̿̐̀̍̽̂̇͌̈́̿̔͌͂͘ḷ̵̙̯̓͋̌̆̍̚̕͝ͅğ̸̡̳̠̦̥̹̝̳͕̜̆̒́̉͛͝ͅơ̸̝͉̟̜͎̜͚͑̾͘ͅ


Pixels are meaningless unless you're an artist, especially with modern graphics APIs where it's often difficult to actually draw individual pixels.

IMO: everyone who's used font size outside college knows what an em is. Just standardize a point size in terms of a physical measurement (maybe stick with 1/72 inches since everyone except Microsoft apparently uses that) and make the "font size" a coefficient that maps ems to inches.

Furthermore: cap height doesn't capture as much as an em does: there's spacing between letters which has little to do with cap height. Everything about this is wrong.
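The proposal in the comment above - anchor the point at 1/72 inch and treat "font size" as the coefficient from ems to physical length - can be sketched in a few lines (the function names are illustrative, not any existing API):

```python
# Hypothetical sketch of the proposal: a point is a real physical
# unit (1 pt = 1/72 in), and "font size" is the factor that maps
# the font's em square to inches.
PT_PER_INCH = 72

def em_to_inches(font_size_pt: float, ems: float = 1.0) -> float:
    """Physical length, in inches, of a distance measured in ems."""
    return ems * font_size_pt / PT_PER_INCH

# A 36 pt font has an em square of half an inch, on every device.
print(em_to_inches(36))        # 0.5 in
print(em_to_inches(36, 2.0))   # 1.0 in
```

Rendering would then only need the display's true DPI to turn those inches into pixels; nothing about the font's metrics changes.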


How long has the solution been around?


dork finds a thing he doesn't understand, begins to understand 25% of it, and then decides to "fix it" like it doesn't have centuries of history and learnings around it that impacted the decision, write your own metaphor


Does anyone really... care? I can't remember ever getting upset about font sizing. If it's too small to read make the font bigger, if you wanna fit more on the screen make it smaller.


Clearly you have never tried to center text inside a button


There are three font sizes with valid uses beyond designers trying to justify their existence: small, medium, and large.

Likewise there are three valid fonts: fancy (serif or non-Latin equivalent), plain (sans-serif or non-Latin equivalent), and mono-spaced.

This means only 9 font sizes need to be decided. The software can provide reasonable defaults, and a simple UI to allow to user to change them, which won't take long.

The only difficulty is when a font doesn't provide full Unicode coverage. Full coverage will have to be assembled from multiple fonts, which means the sizing won't necessarily be consistent. But in this case consistency is already lost, so I see no need to further complicate the UI. The user can still tweak things with fontconfig or equivalent.


I think this covers 99% of all practical cases. I'd go further, and say that all three sizes and all three fonts should be purely under the control of the client, not the web designer.



