Hacker News
Ligatures in programming fonts: hell no (2019) (practicaltypography.com)
157 points by susam on Dec 21, 2021 | 195 comments



Ligatures in programming look cute, and maybe, if you want to showcase a snippet of some particularly clever and elegant code, print it on some expensive paper with a tasteful palette (and none of your CMYK nonsense! Only pure pigments!), frame it, and hang it on the wall, I can see the appeal.

At work, I've tried it. I've tried hard to like it. I had to stop because Fira Code (and ligatured Iosevka) took the principle of least astonishment, threw it on the concrete floor, and kept beating it down till it stopped breathing, and then some.

I found myself spending time wondering "what characters is this made of?", and "how do I type THAT?" when viewing some existing code, and "WTF is this?" when a ligature that would have likely made sense in Ruby or Haskell (or in APL?) made it into my Perl code.

Also, "is this a ligature or some Unicode glyph?" was a question I found myself asking. In a better world, I shouldn't be wondering about it while fixing some integration with one of Intuit's molochs or somesuch. But here I was.

Switching to non-ligatured text took a lot of cognitive burden off me. At least I can tell which thing is ASCII and how I type it. And my code doesn't look like a Guillaume Apollinaire calligramme.


> I found myself spending time wondering "what characters is this made of?", and "how do I type THAT?" when viewing some existing code, and "WTF is this?" when a ligature that would have likely made sense in Ruby or Haskell (or in APL?) made it into my Perl code.

I guess everyone's got their thing, but on that paragraph I have to say my experience with ligature'd fonts in programming has led me to wonder the same thing... never.

I don't say this over some love of them; I'm fickle and change fonts about quarterly; some with ligatures some not, but I can't think of a single instance of WTF or actual confusion.


Symbol-dense languages and coding styles, especially those that differ from the mindset of the font designer, are probably more likely to trigger spurious ligatures.


Yeah, I'd imagine Perl development is pretty much the exact worst case for programming font ligatures both as a code developer and a font developer (accidentally creating ligatures the font never intended to exist in strange runs of symbols). Especially in how much Perl development exists that treats spaces as rare/unnecessary/optional.


Even without ligatures, Perl code makes you say: "WTF is this?" more often than any other language I have tried.


A classic joke I've heard is that Perl is the closest to a "write only" programming language that has escaped into mainstream usage.


I can't agree. I type up something in Fira Code - I look at it - and I smile inadvertently. It's just another one of the small, pleasant aesthetic experiences that make up programming as a whole for me.


Good for you, I'm not saying my experience is a universal truth. But this is my experience. I love all things typography, and thought it was genuinely a good idea. Turned out it wasn't for me.

But I'm glad it clicked for you.


> I found myself spending time wondering "what characters is this made of?"

Funny sidenote: I've got my Emacs configured to show any non-ASCII char in source code in bold pink on a pure black background (including zero-width spacing chars). That way the non-ASCII chars really do stand out and I rarely wonder "which char is this?".

I don't forbid them: they do just really stand out.

Most of the time the only non-ASCII chars in a source file are anyway the copyright symbol and some chars in the author's name (in a comment). Sometimes in strings.


This works as long as those characters truly are a rarity and will probably become unbearable otherwise. For example, try opening some Agda code, or reading firmware or BSP code with Chinese comments (hello, ST Microelectronics), or some other ancient or internal code with non-Latin native-language comments (Japanese is pretty common, my native Russian can also be found in the wild occasionally).

I’ve heard somewhere that Java’s support for Unicode identifiers was originally motivated not only by “oh shiny” (although I expect there’s a measure of that, too), but also by allowing the Japanese to write their code how they wanted it, and apparently did indeed help with capturing mindshare. Though, to be fair, I’ve never actually seen Java code with Japanese identifiers.


Interesting. I just check for non-ascii on-demand with a tiny macro around occur (because I can never quite remember the regexp syntax for named character classes):

    (defun find-non-ascii ()
      "Find and show all of the non-ascii characters."
      (interactive)
      (occur "[[:nonascii:]]"))
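
Outside Emacs, a rough equivalent is easy to sketch in, say, Python (purely illustrative, not something from the thread; pass the files to check as command-line arguments):

    # Print every non-ASCII character in the given files, with its position,
    # code point, and Unicode name. Illustrative sketch only.
    import sys
    import unicodedata

    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as f:
            for lineno, line in enumerate(f, start=1):
                for col, ch in enumerate(line, start=1):
                    if ord(ch) > 127:
                        name = unicodedata.name(ch, "<unnamed>")
                        print(f"{path}:{lineno}:{col} U+{ord(ch):04X} {name}")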


I've had the same experience, with ligatures applying in languages they weren't designed for. It struck me that the font is a strange place to make the decision about what character combinations should be combined. Something like emacs' prettify-symbols-mode seems like a better approach: make the decision per-language. Of course, that's not nearly so simple to implement.


I had the opposite experience.

In Sublime Text for some time, the C++ ligatures did not work. It is fixed now.

By default, no ligatures are used.


> Also, "is this a ligature or some Unicode glyph?" was a question I found myself asking.

In code editors that use monospaced fonts the ligatures will often look nicer since things like Unicode arrows will only take up a single “character space”.


This is precisely my problem. I like monospaced fonts because a single character occupies a well-defined width.

I do not like it when len('≡') == 3
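
(Whether that length comes out as 1 or 3 depends on whether the language counts code points or UTF-8 bytes; a quick Python illustration of the distinction, nothing more:)

    # '≡' (U+2261) is a single code point, but three bytes in UTF-8.
    s = '≡'
    print(len(s))                  # 1: Python 3 counts code points
    print(len(s.encode('utf-8')))  # 3: byte length, which is what some languages report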


Though if you did it differently and the ligature was in a single-character width it would mess up alignment between users who are and aren't using the ligatures... probably a worse problem.


No, that's what happens in some ligature fonts: '=', '<=' and '===' all render a single character wide. It messes up alignment and drives me up the wall


Except no ligature font I’ve ever seen does that, because that’s a terrible idea and a mark against any specific font that does that, not against ligatures in general.


I tend to read horizontally and not vertically, so that hasn't been an issue for me. The column oriented code layouts I just find so eye-rolly but people like different things.


I don’t think code should be aligned to begin with.


Many of these ligatured programming fonts look the same way, by shifting the combined characters horizontally into a single width space.


Yeah, but that "single width character" is often still in the middle of two widths worth of space. In most environments that should still often be enough of a clue.


It wasn’t, for me, trying Fira and other ligatured fonts. The cognitive overload was too much, for no clear (again to me) benefit. I love well crafted typefaces, but decided this is a place where this particular craft wasn’t for me.


It's a subtle clue still, but you pick up on it more and more with use, especially after cursoring through/deleting past the ligatures enough. It eventually becomes second nature.

Of course, there's no reason to pick up those subtle clues/habits if you don't see a benefit to the ligatures in the first place. The benefits themselves are sometimes subtle, and I'm unlikely to convince you if you've already made up your mind on that. For what it is worth, I find that they improve readability (and per the Python maxim: code is read way more often than it is written), and they do catch subtle bugs sometimes. As one example, in Fira Code (and successors), the difference between the ligatures of == and === are quite noticeable and useful in JS programming. But yeah, mostly it's just for cleaner reading: math symbols and pretty arrows. If that doesn't appeal to you, it doesn't appeal to you.


Unicode includes "double width" characters as well.


I don’t understand the relevance.


The displayed width alone is not enough information to tell a ligature from a single character.


I was under the wrong impression that “double width” was only a property of some codepoints that were effectively double width variants of the printable ASCII symbols, for use in East Asian contexts. But now I see that there is a combining codepoint, so codepoints in general can be made “double width”. So this does indeed seem relevant.
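
For what it's worth, the width class is queryable; a small Python sketch (just an illustration) showing how a few characters are categorized:

    # East Asian width classes: 'F' fullwidth, 'W' wide, 'Na' narrow, 'A' ambiguous.
    import unicodedata

    for ch in ['=', '＝', '→', '漢']:
        name = unicodedata.name(ch, '<unnamed>')
        print(f"U+{ord(ch):04X} {name}: {unicodedata.east_asian_width(ch)}")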


Check boxes and stuff occupy about a character-width-and-a-half when pasted into Notepad++ set to use Lucida Monospace.


Especially when programming, but also at most other times, I want a 1 to 1, unambiguous correspondence between the symbol I entered and the display on the screen.

Making my ":)" into an emoji is wrong. "Do what I meant" input magic bullshit is always going to be wrong about "what you meant" sometimes, and occludes the possibility of "I didn't know what I meant until I saw it" with its helpful corrections.

Have had this argument with fans of "autocomplete" systems, too. An analogy I like is a powered exoskeleton: if you had one you could run faster and jump higher and all these good things: but then when it breaks or you're not wearing it you've less native ability and no familiarity with the unassisted world, which makes you dependent on the tools.


> [this] makes you dependent on the tools.

Socrates had a similar opinion w/r/t reading and writing:

> This discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.

I feel like you and Socrates are incorrect for similar reasons.

Humanity advances when it can hide complexity. Tools help with that. To argue against tooling because it makes you dependent on the tooling is like telling the construction worker to put away the jackhammer and pick up the pickaxe. Programmers are super productive (compared to other professions) because we can build tools that make us better programmers. The leverage recurses.


> Humanity advances when it can hide complexity. Tools help with that. To argue against tooling because it makes you dependent on the tooling is like telling the construction worker to put away the jackhammer and pick up the pickaxe.

Calling replacing ">=" with a ligature for "≥" "hiding complexity" strikes me as more than a bit of a stretch. They're two different visual presentations of the same thing, neither more nor less complex in-and-of-themselves.

The ligature has some external complexity, though, in that for certain things I have to remember that that's "really" 2 characters in the file, and backspacing directly in front of ≥ will net me > instead of deleting the entire thing.

So if anything, ligatures are introducing a small bit of complexity that otherwise isn't there. Any relation to Socrates' quote seems forced.


Reading code is harder than writing. It's good to make writing harder so reading is easier.


I had autocomplete more in mind.


Socrates did actually have a point, though. This goes for all technology. Sure, it often makes us more capable in one way, but in many other ways, it reduces our capabilities.

Nobody today can memorize an epic like the Illiad like people routinely would in Socrates' day. It's a bit impressive if you can reproduce a 3-minute song text from memory. The Illiad often took days to recite.

Sure you can argue that tools (like writing) allow us to do more, to store more information, but the point is that they also cut us off from doing other things. Recitation from memory is available even when there is no paper.

This has gotten a lot worse recently, with tools like Wikipedia acting like external memory for a lot of people, granting them access to a lot of extremely superficial knowledge they don't actually possess themselves. It is certainly faster to look something up than to learn it, but a huge part of the benefit in actually knowing various things is to be able to connect what you are seeing now with something you know from beforehand, to generate a conclusion. That only works if you know the thing, you're not helped by theoretically being able to look it up.

The same with auto completion. Sure you can type faster, but it also means you, as a human, become less capable because you've never needed to hone the skills necessary to program without it. Programming without autocompletion is available in any text editor, even on paper.

Using tools is fine, but being completely reliant on them even to perform basic tasks is hobbling yourself. You turned yourself into an idiot in power armor: completely useless without your training wheels, unable to reason, unable to perform even simple menial tasks. I have a hard time seeing how that's made you, the human, more capable.


> Nobody today can memorize an epic like the Illiad like people routinely would in Socrates' day.

They can, and occasionally do. It tends to be people with a lot of spare time, or interested in memory techniques. And it probably helps if you know (ancient?) Greek, so it actually still rhymes. But it isn't super useful, so people generally don't.

If you paid people a living wage to recite the Iliad, you might get some more takers. We kinda do that, except it's actors in theatre plays or films. Sure, they don't take days to recite, but e.g. people put on several, different plays a year.

> It is certainly faster to look something up than to learn it

If it's faster to look it up, why bother learning it? Conversely, if someone had to look it up often, I bet it would be learned. Spaced repetition and all.

> The same with auto completion. Sure you can type faster, but it also means you, as a human, become less capable because you've never needed to hone the skills necessary to program without it. Programming without autocompletion is available in any text editor, even on paper.

I loathe this argument. Compilers or interpreters (computers in general) are sticklers for the exact spelling. Humans aren't; I still understood what you meant when you misspelled "Illiad". Why spend the time and brainpower knowing exactly what something's called, as long as you have a rough idea of how the code works? Or, why spend the time typing it fully if you can have it auto-complete? When do you have to program in any text editor? Or even on paper, where the exact syntax can't possibly matter? (Equally, I'm sure that if this was a necessary skill, people could pick it up - occasionally they do, like for whiteboard interviews.)

Humans are very capable, and also lazy (one of the best engineering qualities). Don't mistake having no reason to do something with not being able to do something.


On your last point I'd add that using GitHub copilot has (at times) felt like pair programming to me. It has made suggestions for something I didn't know or think of yet, but I noticed that I remember that next time. So it's possible that I'm actually learning more when aided by tools, much like a teacher looking over my shoulder.

I used to work for someone who was convinced phone GPS + maps would doom young people because if they became unavailable, everyone would lack navigation skills. I thought it was absurd. First, who was ever taught how to use a map explicitly (for basic usage)? Second, what is there to learn? I think these kinds of arguments forget that most people are pretty bright, especially in times of need. If GPS is down, I'm sure people can find their way to Burger King with a paper map and stopping to ask a few of the locals, just like everyone did intuitively back in my day. They just wouldn't have to walk uphill in the snow with onions on their belt, because the war.


> If it's faster to look it up, why bother learning it? Conversely, if someone had to look it up often, I bet it would be learned. Spaced repetition and all.

Because of what I said: knowing something means you always have access to it. Otherwise this type of hypothetical knowledge may be an unknown unknown for you; even though you could look it up, you don't know to do it. You have access to facts, not understanding.

Real knowledge enriches all areas of life. If you for example learn to write with a pen (not just to scribble crow's feet, proper calligraphy), you will never look at letters the same way again. You could go your entire life not acquiring this understanding, but your world would have fewer colors, less nuance. Handwriting also means you must think through what you want to write before you write it, which is another practical skill.

It's similar with music, if you learn to play an instrument, you will never hear melodies quite the same. Better still if you learn to dance, beyond an entirely new connection with your body, music will forever gain entirely new dimensions.

If you learn to cook, you will appreciate flavor in different ways, be able to pick out the taste of individual ingredients when tasting a soup. It isn't just "that's good", or "that's bad", but "oh, is that celery?"

It's not just about what this knowledge lets you do, that is the least of my concerns, but how it shapes your relationship with the world. If you only have access to the world through tools, you become cut off from experiencing it.

> I loathe this argument. Compilers or interpreters (computers in general) are sticklers for the exact spelling. Humans aren't; I still understood what you meant when you misspelled "Illiad". Why spend the time and brainpower knowing exactly what something's called, as long as you have a rough idea of how the code works? Or, why spend the time typing it fully if you can have it auto-complete? When do you have to program in any text editor? Or even on paper, where the exact syntax can't possibly matter? (Equally, I'm sure that if this was a necessary skill, people could pick it up - occasionally they do, like for whiteboard interviews.)

This is not my problem with autocomplete, but rather that it prompts you with suggestions for things to type. I absolutely use shorthand when I write code by hand. My beef with autocomplete is that your programming turns into a series of multiple choice questions as to what to do next, rather than something you truly drive by yourself. How questions are posed absolutely shape what sorts of answers you will come up with, and if you are not the one posing the question, you will never truly be in control.


> Real knowledge enriches all areas of life [...] if you learn to play an instrument [...] if you learn to cook

I don't disagree that there's value in knowing something. But I don't have a problem with people choosing what they want to know and invest time in (like cooking or playing instruments), and being able to ignore stuff that doesn't really matter to them (trivia, whether it's information on wikipedia, or random programming stuff).

In fact, that's one of the great things about modern times. There's more information out there than ever, so you can go really deep into an area (like calligraphy), even as a relative amateur. You can't do everything though, so it's a trade-off. Being able to forgo memorisation of things you don't want to prioritise is possibly a boon then?

I'm not sure of the equivalence of "knowledge"/rote memorisation vs doing an activity. We're getting a bit deep here. Personally, I'm not sure e.g. memorising the Iliad makes you that much better at writing epics, at least compared to trying to actually write epics (there's going to be some cross-pollination, but how large is that effect, and can you also get it by studying the Iliad without remembering it in its entirety?). At least memorising poems in school didn't make me better at writing them.

Or closer to home, is memorising random programming snippets (because of the lack of auto-complete) superior to actually just writing code? Certainly studying e.g. a well designed standard library and noting design patterns is helpful, and knowing commonly used functions will make you more efficient, but there's no substitute for actually doing the thing/activity.


But how does memorizing all 33 methods of your AbstractBeanFactoryProviderImpl enrich anything?

Learning to play an instrument is enriching because you start to understand the logic/inner workings behind the music you listen to. You need the same skill for programming, sure, but I don't see how autocomplete would replace that.


> But how does memorizing all 33 methods of your AbstractBeanFactoryProviderImpl enrich anything?

Auto-completion is no doubt a large part of what's enabled your AbstractBeanFactoryProviderImpl to get those 33 methods in the first place. Which is a big part of my point, autocompletion shapes the way you think about code, shapes how you write code.

There are also a lot of examples of functions that, by name alone, do not do what they seem to do. That shouldn't be, you might say. But even in the face of that objection, they do exist. It would be very easy to jump to the conclusion that, for example, "Boolean Boolean.getBoolean(String)" parsed the string and returned its boolean equivalent (it actually reads a system property of that name).


> Auto-completion is no doubt a large part of what's enabled your AbstractBeanFactoryProviderImpl to get those 33 methods in the first place. Which is a big part of my point, autocompletion shapes the way you think about code, shapes how you write code.

Sure, but that doesn't necessarily make it better or worse. People have written horrible code without auto completion, just like they have written horrible code with auto completion. I'm not convinced that you just magically get better at thinking by making your life harder and typing more for no real reason.

> There are also a lot of examples of functions that, by name alone, do not do what they seem to do. That shouldn't be, you might say. But even in the face of that objection, they do exist. It would be very easy to jump to the conclusion that for example "Boolean Boolean.getBoolean(String)" parsed the string and returned its boolean equivalent.

And not having auto complete doesn't help with that at all. Auto complete MIGHT also include commentary for the function and explain what it does (if that commentary exists in the first place) and at least help with the issue.


The goal of most work is not to make you, the human performing the work, more capable - even when you're performing it for yourself.

The goal of work is to achieve a goal. Whatever achieves that goal most efficiently while not creating problematic "debt" is always to be preferred.

Your example of "writing code on paper" is a perfect example of a non-problem. When you write code on paper, the name of the function doesn't matter, since this is not executable code. So, whether you call it "sort" or "order" is fully irrelevant. Even if you "call" a function that doesn't exist, as long as the intention is well captured, it's not a problem. Say, if I'm writing "C" on paper and I write `arr.Sort()`, that will not affect anyone: it's simply a way to express what I want to do. If I'm ever going to write this as a program, I may use qsort(), or implement an actual sort function, or whatever makes sense.

Overall, memorizing things that don't need to be memorized "just in case" is more a circus trick than an actual useful skill.


> Nobody today can memorize an epic like the Illiad like people routinely would in Socrates' day. It's a bit impressive if you can reproduce a 3-minute song text from memory. The Illiad often took days to recite.

AFAIK this isn't something that was very common back then. Reciting the Illiad was not a hobby but more like a paid profession. People paid for this precisely because reciting something for a few evenings from memory takes a lot of skill/training and most simply couldn't do it. Same as today.


It was an oral tradition. Recitation was how it was passed on before it was finally written down. You see the echoes of this in a lot of early Greek writing, they contain a large amount of quotes out of Homer. Being able to navigate this maze of "Darmok and Jalad"-speak with Homer quotes meant you were well educated.

Only thing I've found that comes close is some of Augustine's writing. He does the same with Bible quotes in his Confessions.


I know, I learned ancient Greek in school; the history and writing were a large part of that. Most (I'd guess > 95%) of the Greeks weren't able to recite the Illiad, if I remember that correctly, oral tradition or not. Even the best educated of them wouldn't really have been able to do so. They were familiar with the content, of course, just like a lot of people in what we consider the West are familiar with what the Bible is about. But I'd be happy to be proven wrong about that, school was a long time ago :)


I think there's nuance to that. As in I think the truth lies in the middle.

I am someone that wants to know 'enough' about how stuff works underneath. I don't like too many magic tools. I need to understand enough about it to be able to diagnose and work around failures.

I have no qualms about using the tools though. I definitely want the jackhammer and the power armor! I want auto completion. It's awesome. It makes me so much more capable to quickly be productive in an unfamiliar code base or framework (if naming is sane ;)). If I don't have it? Sure I'll read the API docs and swear under my breath. Happens every time I gotta work on our python code. Could I do it better by having it in my working memory? Sure but I don't work with that code often enough for it to make sense to keep in memory (or learn in the first place).


I wonder if it ever collapses. It's probably unlikely, the analogy of layers that strictly build upon each other might not be the best, but on every iteration where we hide complexity, some knowledge from a few layers down is lost, and we might do one thing more efficiently now, but another thing in a more convoluted way, without even realizing. So it's hard to say whether it was worth the tradeoff. I guess everyone working in tech for long enough has some pet peeve about where something was way easier 20 years ago, or stories about junior devs frequently not knowing X and doing Y in a really dumb way, and being super surprised when they learn X and then get told it had been around for decades. It's obviously a net gain after all, but if you were able to look at it from a distance with total knowledge, it's also hilariously stupid. But then that's exactly how evolution works, and why there's a nerve going from your brain to your throat taking a detour around the aorta.


When I think about the "tower of abstractions" collapsing I usually end up coming back to the thought that it won't be a total collapse. It'll be a collapse to or around a specific layer of abstraction that we need to rework from (eg: something significant changed about a previously safe assumption). Like the 1906 earthquake in SF: it knocked down most buildings whose architecture could not support the stress of an earthquake.


The best programmers I know are able to see through the abstraction and are comfortable getting into the lowest-level details. They have those skills because they've actually used them.

I'm not saying we'd be better off without optimizing compilers, but I do think every programmer should have practical experience taking the lid off and poking around inside.


There is a major difference between being able to poke into low-level details and being able to program at full efficiency in Microsoft Notepad or ed (which is what the OP seems to be advocating for).

One is a rare and occasionally extremely useful skill, the other is basically a party trick, like being able to recite the Illiad from memory.


> An analogy I like is a powered exoskeleton: if you had one you could run faster and jump higher and all these good things: but then when it breaks or you're not wearing it you've less native ability and no familiarity with the unassisted world, which makes you dependent on the tools.

Assistive technology is the result of historical progression, so why eschew auto completion because it might fail when you don’t eschew your computer, which might fail as well? Should we all just be programming on pen and paper because that’s lower down the progression stack and has less chance of failing? You know that argument was probably made in the 70s by some old school programmer when those fancy new interactive terminals started coming out (I believe Dijkstra actually said something to this effect, but can’t find a source).


> Should we all just be programming on pen and paper because that’s lower down the progression stack and has less chance of failing?

Yes. Not because of the resiliency (it's hard to imagine a failure mode where pen-and-paper programming would be the only option) but because it makes you better at your craft.


I mean, sure. We should always know the stack down to the electrons. Not only should we learn to code on paper, but build our own adders from primitive boolean logic gates. But this doesn't really follow (most of) us out of our BS computer science program (or perhaps until we have to do programming interviews).


I’m of two camps.

On one hand, I don’t use fonts with ligatures (I use Triplicate by the OP, very happy with it), and usually work without colorful syntax highlighting.

The assumption is that code is read more often than it is written, and I have no idea how my code will be read in future—thus the rule of thumb: if I understand it when it’s plain text monochrome, then the next maintainer (or future me) will be able to understand it with syntax highlighting on. I’m already biased to understand my own code more easily than anyone else, so I shouldn’t need extra visual aids if I’m not the only one who’ll work with it.

Plus, ligatures, syntax highlighting and auto-completion strike me as, in a sense, attempts to create a rudimentary GUI (TUI) for the AST on top of plain text, but because plain text offers limited capabilities to work with, it’s always semi-broken: syntax highlighting is off, ligatures have false positives, etc.

On the other hand, the idea that “we must keep using plain text” strikes me as somewhat stuck in the past.

The age of syntax-tree-driven, program-as-structured-data version control and editing capabilities must not be far off. At that point, we’ll likely get much greater flexibility—highlighting/autocompleting not just tokens but entire patterns (for adequately structured programs), exposing new refactoring operations, and more. We might start seeing cases where a genius high-level architect may not be that proficient at editing programs in plain text, and it’ll be obvious in hindsight.

(As an aside, I’m a little sour that GitHub/MS threw resources onto black-box ethically questionable Copilot rather than structured code.)

On the third hand (one foot?), there’s something to be learned from the ligature+syntax highlighting+linting text-based authoring experience as a GUI. Generalizing a little, we don’t have many other GUIs that attempt to present source data nicely while keeping it zero-effort for you to drop into low-level edit mode. I hope whatever next-gen structured code authoring experience comes along has implementations that preserve this feature.


> On the third hand (one foot?)

OTGH, "on the gripping hand". Like many 1980s-90s online idioms, it's a science fiction reference, in this case to https://en.wikipedia.org/wiki/The_Gripping_Hand


Code completion is basically the way you get structured code capabilities in a non-structured editor. Ah, but it was invented to solve an input problem in Alice Pascal (circa 1985) which was totally a structured programming environment, so in a way code completion is a product of the environments you are wishing for.


I don’t know if I’m “wishing for” such environments, I just strongly suspect they will arrive (or they technically should, if they overcome industry’s inertia).

ALICE offered structured editing in a world where graphical output was so limited (and the lack of the ability to just enter text freeform was definitely a dealbreaker). I wonder how such an IDE could look today if rethought from fundamentals after all the advances in making and displaying GUIs.


They've been arriving for the last few decades, but haven't been successful yet. No one has really thought through a fluid efficient editing experience for them that can leverage the benefits of structure without the stilted input entry that the structure seems to imply so far...and our understanding of UI hasn't gotten much better than it was before. Someone might crack it someday, I hope.


The problem is versioning code as structured data. It is not yet here, but there are very recent advances (Pijul et al). Freed from the shackles of LoC, GUIs can do all sorts of magic with structured data.


I wish people wouldn't indulge in these luddite fantasies of some ancient caste of ninja programmers. It's blatantly wrong to the point where I'm not sure if the above is not meant as a joke.

It's also insincere: do you regularly write out code with pen & paper? No, of course not. Nobody does. Just like nobody reads all the fine print, checks the emergency exits every time they enter a building, has an emergency drinking water reservoir, or a go-bag for volcano emergencies: these are teenage fantasies of what grown-up life should be like. Believing and propagating these ideas only sets you and your listeners up for failure.


I spend a lot of time thinking through problems with pen and paper, and also highlighter and printer, and whiteboard. I've found this class of tools often lets me focus on a problem with less distraction.

It's not always the right tool, and of course the code must be typed up sooner or later.

I often work in environments where the development cycle time is terribly slow for one reason or another, which puts a premium on thinking through things ahead of time.


I'm sure builders would struggle without diggers and cranes, and doctors would struggle without electricity. But why would I purposefully do away with useful modern inventions just so I can remain competent without them?


Why would you occasionally lift heavy weights, only to put them back down, repeatedly? But this activity is viewed as beneficial by so many that there's an industry providing for it.


The key there is "occasionally". Fitness is deliberate practice: you don't just carry heavy weights when you need to get something done, you explicitly go when you have time to exercise your muscles.

Similarly, if you want to occasionally exercise programming without assistive technologies, more power to you. But if you're tying one hand behind your back while your goal is to actually produce useful code, then you're just being silly. Might as well use a screen reader instead of looking at the screen, or literally tying one hand behind your back.


I think assistive tech is awesome, but we should also use it to learn. Instead of slowly looking up stuff in API docs and typing it out, then running the compiler to be told I have a typo in a long method name, I have an IDE with completion that tells me right away if I mistyped something I didn't autocomplete.

But I should use this to learn too. Learning by doing. I use autocomplete to find the method I need, and next time I already remember the name, so I don't have to find it in an autocomplete list: I type the unique part and autocomplete the rest. I just gained efficiency, and I would also be able to program without the assistive tech when needed.

Like making use of the fact that I could walk to the grocer's or bike, and use that to work out naturally while getting fresh air. But if it rains I'll take the car. No need to pay a fitness studio.


In the original case, the goal is to maximize effect in the external world. Here, tools that magnify one’s performance are desired.

In your weight lifting case, the goal is to maximize effect individually in your body. Here, the work of your body is the point.


I wonder if it would be possible to hire desk workers for 6 hours a week of manual labor that happens to also be good exercise. Surely there are people that would rather make 300 extra dollars a month for their 6 hours a week of exercise rather than pay a gym.


And one of the points of maximizing that strength-building effect in your body is to increase your body's capacity to effect change in the external world (with or without tools), is it not?


For most people, it is certainly not. The vast majority of people who exercise do it for health and aesthetic reasons.

If we could have abs and a perfect heart while eating McDonald's and never moving more than 3 steps, the vast majority of gyms would close down.


Because we have more efficient ways to lift weights, very few people get paid to do that. So, feel free to lift weights at home, but don’t expect to get a job loading a ship by manual labor.

It’s the same when writing programs: when at home, do whatever you want, but in a job, efficiency is key.

(Back to the subject at hand: I doubt whether ligatures help or hinder much, efficiency-wise)


So you can conveniently stay fit without limiting your career choices to those involving manual labor.


You and the parent comment are polar opposites, I see.

How about it's fine, as long as I can still put my cursor in between the two underlying characters (edit them separately), and the resulting digram doesn't look so different as to be distracting?


I wear glasses that help me see `better`. When I don't have them I can't see. If I stop wearing glasses I may get adapted to that, but I prefer glasses and I always wear them.

I also like my code colored/beautified/ligatured.


"I don't walk/carry groceries/exercise, instead I drive the car around. If I don't, I can't actually carry heavy stuff because of my weak body, and sure, I may get adapted to that (and may even become healthier in process, meh) but I prefer to drive the car and so I always do." That's a different analogy and it suggests a completely opposite conclusion. Which one of these analogies is correct? Maybe they both are faulty?

P.S. I too wear glasses and you know what? I'd rather be able to see well without having to wear them. Alas, the technology is not yet here (well, it is, but I was told the chances of complications that could leave me completely blind were about 20% so uh, I'd just wear glasses even though they're quite a nuisance to wear with a face mask).


Intuitively I feel there's a big difference between my glasses and your ligatures, but I'm having a hard time putting my finger on it...


I mostly agree with you - I hate when my :) is turned into an emoji, or I get smart quotes because the editor has a _different_ code point in there than what I typed. But I do love ligatures because that's just a representation on the screen that makes it easier for me to read. It doesn't change the underlying data, so I'm ok with it.


Even editors that automatically put an extra “ to close a string, or a ) when I hit (, are problematic in my mind. My workflow means that I often put “ or ( after the editor feels like I should, so I end up with ) in wrong places. It drives me nuts.


Nowadays those text editors forcefully ignore extraneous " or ): that is, you press ", the editor prints "" with the cursor between them, and when you press " a second time, it doesn't print a third "; instead the cursor just moves past the second ". It is annoying in those cases when you actually want """.


vscode doesn't and it doesn't even respect the setting to not insert " or ). Grrr


Font ligatures are always going to be wrong because their logic is both general and simple.

On the other hand, if you actually parse the input and use it to make transformations, it would be possible to show you what the tool you are inputting to believes you mean in a clearer fashion, which is very useful. Automated indenting and syntax highlighting (when done correctly) can catch errors as you type.


> Especially when programming, but also at most other times, I want a 1 to 1, unambiguous correspondence between the symbol I entered and the display on the screen.

Ligatures don't necessarily stand in the way of this. It's a matter between the font and the syntax.


I am happy for other people, even those I collaborate with, to set up their editor however they like. Ligatures aren't to my taste, but it's not really affecting me when I read the code and I'm not regularly pairing right now so whatever. Maybe someone's done a scholarly study to test editing and comprehension performance for fonts with and without, but I doubt the results would change my personal choices.

However, all that said, if you ever check in unicode identifiers that I can't easily type then you're on your own.


I'll offer a possibly valid use case for Unicode identifiers: mathematical notation. C# allows Unicode for variable names, and I wrote some code using θ (theta) as a variable for a geometric angle. I really liked having a struct for cylindrical coordinates to name the values as (r, h, θ), which flowed quite well visually, better than writing out "theta" or "angle". And since it was a member of a struct, once I had the symbol in the editor once, the Intellisense code completion would just show the θ in the picker dropdown.

(It was for a personal project and I'm not sure I'd do that in a team environment, but I'd at least ask the other developers if they'd like it. And of course one use of Unicode being good doesn't mean that all uses of Unicode are good.)
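
(For anyone curious, the same trick works outside C#; a rough Python sketch of the idea, not my original code, with a made-up Cylindrical type:)

    # Illustrative only: Python 3 also allows Greek letters in identifiers,
    # so a cylindrical coordinate type can name its angle field θ.
    import math
    from dataclasses import dataclass

    @dataclass
    class Cylindrical:
        r: float
        h: float
        θ: float  # angle in radians

        def to_cartesian(self):
            return (self.r * math.cos(self.θ), self.r * math.sin(self.θ), self.h)

    p = Cylindrical(r=2.0, h=1.0, θ=math.pi / 4)
    print(p.to_cartesian())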


Yes, this is the most common argument, but still, not in my back yard. I'm lucky enough to work with plenty of maths PhDs who don't insist on these shenanigans and still seem productive.


> However, all that said, if you ever check in unicode identifiers that I can't easily type then you're on your own.

Between copy and paste, most languages' auto-complete tools, and the emoji soft-keyboards in most operating systems today, you can "easily type" most any unicode identifier with just a little bit of knowing your tools. (The Windows emoji keyboard has almost all of the Unicode math symbols, for instance.)


If you only know these Unicode input systems, I strongly recommend looking into https://en.wikipedia.org/wiki/Compose_key systems


I do have an AltGr international key that has many Compose Key benefits. It is handy. I tend to make the other recommendations first as people can be picky about changing their keyboard, and learning something like a Compose Key can be a lot of work for some people.


Sure, I get fuzzy search for emoji names in Emacs. Still not gonna do it in code.


It should be noted for clarity that unicode in code is orthogonal to this, which is display only.


Not quite, because in both cases the issue is "what the hell is that and how do I type it?"


If I send you a file I edited with ligatures, you won't see them; that's not true of a file with Unicode-only chars.


Yes, that was my original point. But my decision on whether to use a font with ligatures, or whether to allow unicode in program text, is still motivated by the same thing, so I’m not sure I’d say they were completely orthogonal concerns.


The author points out two problems, which both are ridiculously obscure edge cases: if there’s really a Unicode arrow in code you deal with, it likely won’t matter much whether there’s an arrow or something closely resembling an arrow. The second point recommends getting rid of ligatures entirely just because they may, in some situations, not display the right thing.

Both arguments ignore the 99.9999999999% of times ligatures do what they’re supposed to, and help me identify some symbols quicker. I’ll happily accept being confused once in a year about a strange-looking arrow in a debug log output, or from a weirdly rendered arrow in some ASCII art print.

People complain about weird stuff.


Why on Earth one would want, in a character-oriented text (i.e. where every character matters separately), to have some of them collapsed into one is beyond me. Why not take it further and have the IDE render all occurrences of "cat" with an emoji.


Because in the cases ligatures are used most prominently, a "logical" character is made from multiple characters.


Nobody expects mathematicians to write all their formulae in ASCII either, they are allowed to use special markup to denote specific things. Why can't developers use more specific notation for displaying code? We use `==` for equality checks due to a limitation of typewriters, not because it's the only sensible way to do this. Give ligatures a chance for a single afternoon, and you'll see the value of purpose-specific glyphs. They aren't replaced by arbitrary hieroglyphs, but actually self-explanatory, helpful representations of the thing a bunch of characters are supposed to tell you.


But the programming language is token-oriented.


But keyboards are not. Even if I were able to convince the maintainers of my favorite programming language to also accept some Unicode glyph as a synonym for ‘==’, how do I type it? Ligatures might not be ‘the right’ solution, but they are a highly efficient approximation. Also the cost of reversing the decision to use them is basically zero.


That's what I said. To the programming language it's just a token and representing that with a single ligature is fine.

I replied to a comment, that the document is character-based and therefore ligatures are inherently wrong.


The last place where you want to have to mentally apply any decoding rules is in "debug log output".


Are we still talking about the same thing..? Fira Code has like 20 ligatures which enhance the rendering of arrows and equality signs. There are no “decoding rules” to apply. This is just way, way overblown. Are you really stumped and confused if there’s “=>” vs. “⇒” in a piece of text..?


I never understood the justification though, beyond "it's pretty". When I see '<=' in a conditional, I see one symbol and attach its meaning. I've never needed to 'encode' that either.


I think the author failed to state the primary issue I have with ligatures: code listings using ligatures tend to be harder for me to read because the symbols are unfamiliar to me.

I first ran into this issue trying to learn Haskell, where some beginner websites used fonts like Fira Code in their listings. The "weird" operators I saw in the listings were unclear to me. In some cases I simply wasn't sure how to even reproduce the code in my editor. I think it is something I could grow used to and someday perhaps even enjoy, but as a beginner it was a barrier.


In most cases "copy and paste" still just works (unless the code listing was a screenshot, which is a bad idea anyway for teaching code) and you can learn it that way.

Also most of the lists of intentionally ligatured operators are pretty small and easy to find reference materials for. For instance Fira Code's big first chart in this section: https://github.com/tonsky/FiraCode#whats-in-the-box


In the case of a "real Unicode arrow", I'd just as soon use a Unicode literal if the language supports it, like '\u2192'.


Even better if your language supports it: use named escape sequences so you don't have to look it up. "\N{RIGHTWARDS ARROW}" vs "\u2192"
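
In Python, for instance, the two spellings denote the same character; a tiny check:

    # Both escapes produce U+2192 RIGHTWARDS ARROW.
    a = "\N{RIGHTWARDS ARROW}"
    b = "\u2192"
    print(a, b, a == b)  # → → True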


Or I could type -> and get on with my life.


How does that give you the unicode character you need in the output? Or are you saying there is just never a reason to have those in strings?


If your computer is set up properly you just type <mod>-> to get →.


Unfortunately, there’s a small amount of relatively common things you might want in literals that can be very confusing in monospace display: tabs, of course, but also all the other spaces (en, em, hair, etc., legitimately needed if you want your typography to be up to 18th-century standards and not argue about the “single vs double space” typewriter-age nonsense), hyphens and dashes and minuses (aka “why I’m still using -- and --- in my TeX code”), and so on.


I use the proper dashes, too, in prose (but when using TeX I use Unicode input these days). But even though some programming languages (such as Julia) would allow one to use the different dashes for different purposes, I don’t think that would be a good idea: they look too similar, as you say, in monospace fonts.


I do use dashes and real (not pseudo-quotes, i.e. `"` and `'`) quotes in code comments but I only use the space symbol for whitespace.


The author is also very clear that they’re not talking about what you use in your local editor. It’s about when you make decisions impacting others. It’s a kind of accessibility.


It depends on the language. At least for me as a Haskell programmer, the arguments are not really valid.

* The operators do indeed mean the mathematical symbols

* Haskell does not support unicode in operators normally. The operators are ascii.

* I have yet to come across a case where the ligatures were wrong for my code, and as I do not have those unicode operators in my code (and am not really sure how to type them) they were never ambiguous.

I am happy to have ligatures in Haskell. A colleague of mine programs in Agda though, which supports unicode operators and they have their mathematical meaning there. So he's fine without ligatures and does indeed type the unicode symbols using vim. It really depends on the language.


> So what’s the problem with programming ligatures?

> (1) They contradict Unicode. [Goes on to mention U+FB01 LATIN SMALL LIGATURE FI and how a ligature for U+003D EQUALS SIGN, U+003E GREATER-THAN SIGN is indistinguishable from U+21D2 RIGHTWARDS DOUBLE ARROW]

No. Nonono. Hell no, one might say. (I’m frankly disturbed to hear this said by a typography expert, because this means something is deeply wrong, perhaps with my understanding here.)

It was never a goal of Unicode to have a code point for every ligature a typographer might ever want. For example, there is a code point for FI and for LONG S T, but not for FJ or FT. Like other “presentation forms” (a thing that Unicode is explicitly not supposed to contain), these exist only for roundtrip compatibility with old encodings (some ancient Adobe-Latin variant maybe?) and are deprecated for any other use (although I’ll be damned if I can find a clear statement to that effect on the Unicode website, there are notes in other places[1]).

The second part holds water to some extent, but given the amount of visual ambiguity afforded by allowing a text to contain arbitrary Unicode (which we must if U+21D2 is at all a consideration), no programming font can solve it, and no programming font tries.

[1]: https://twitter.com/fakeunicode/status/945017346858532864
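
(The compatibility status is at least visible programmatically; a small Python illustration:)

    # U+FB01 LATIN SMALL LIGATURE FI carries a compatibility decomposition
    # to 'f' + 'i', which NFKC normalization applies.
    import unicodedata

    fi = "\N{LATIN SMALL LIGATURE FI}"        # U+FB01
    print(unicodedata.decomposition(fi))      # '<compat> 0066 0069'
    print(unicodedata.normalize("NFKC", fi))  # 'fi' (two plain letters)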


> (some ancient Adobe-Latin variant maybe?)

As with so many problems in encoding anywhere, this one has a single well-renowned crazy source: some EBCDIC code pages had manually laid out ligatures.

(IBM's shadow remains a heavy one.)


I like the ligatures in Fira Code. I find the arrows, boolean operators and the pipe symbol "|>" a lot easier to read as single symbols instead of two or three separate characters. I can understand why some people would dislike them. I'm not sure about merging together regular letters, although I haven't actually ever noticed this.

In my mind it depends on where you are on the spectrum, between the hardcore programmers that are one with the code and want to see every letter in a terminal editor, and those that stray more towards the designer and practical side of programming. Just do what works for you, and get burned, otherwise you don't learn but endlessly follow "best practice" advice.


I'm not a fan of the ligatures. But I'm not sure about this website's "They contradict Unicode" argument. Yes, there are code points for ligatures. But it's also very reasonable for a typesetting system to encode the individual characters and just combine them into a ligature glyph for presentation. Which is what programming fonts have to do, since the ligature code points aren't part of the programming language.

I do something much weirder, I program in proportional fonts. Once you get used to it it's really lovely. Unfortunately the way I figured out how to do it requires some aggressive reformatting of code (mostly using tabs instead of spaces) which makes it hard to share code with others unless they buy in to your preferences.


My impression was that codepoints for ligatures are there for backwards compatibility reasons. In that case they wouldn’t add codepoints for brand new ligatures. So I don’t really get the point that he’s making (if I’m correct).


Yeah, most of the codepoints for ligatures that exist in Unicode are there to map compatibly from old encodings that predate modern font ligature support (EBCDIC, for example, had a bunch of manual ligature encodings). These days Unicode suggests leaving ligatures entirely to fonts and unencoded, as most people don't need to round-trip through an EBCDIC-speaking mainframe (hopefully!).


What’s weird about programming in proportional fonts? I’ve been doing that for a decade now. The auto formatter mandated at work doesn’t allow manual format directives anyway, so no one notices.

Ligatures can be nice if you can solve the edit problem: keeping an -> as two characters even if it looks like → can help a lot. Unfortunately, editors don’t do that, so they become too annoying to use (maybe when we render code like we render latex will it be accepted).


> Editors don't do that

VS Code does it at least, if I'm interpreting you correctly. You can position your cursor in the middle of the ligature and change either side.


I use vim in Windows Terminal as my daily driver, and the terminal definitely treats ligatures as separate characters (taking up multiple character widths, you edit the characters one at a time, etc). I'm mildly curious what editor doesn't, because that would lend significantly greater credence to the article's arguments.


That sounds pretty nice. Like if you do a backspace into a right pointing arrow (->), does it turn into a dash?


Yes. I use ligatures on my system (Windows) and this is true in all IDEs, editors and terminal windows, including when I SSH to a Linux system and use vim.

It really becomes seamless, and you get used to it in a similar way to syntax highlighting or high resolution screen. Yes, I can code on 768p laptop screen with laggy input and a crappy keyboard, but it's not as nice as my usual workstation.


I tried it in Visual Studio (not Code) once, and was disappointed by the experience. Perhaps they just got it wrong.

Now if only there were any proportional coding fonts with coding ligatures baked in :p.


I think NF Code does? https://github.com/sgigou/NF-Code

Input and Go Font are the two main coding proportional fonts I know. Neither advertises ligature support. There are hacks to copy ligatures from Fira Code into other fonts but I don't know if they would work for this purpose. https://github.com/ToxicFrog/Ligaturizer


Awesome! I'll definitely take a look at those.


It's OK with nVim in Terminal.app on OSX, when you set a nerd-font of some sort. Has presentational ligatures, doesn't interfere with editing.


I like programming ligatures for the same reason I like syntax highlighting: it makes it easier for my brain to quickly scan a line of code. I have yet to be bitten by some ambiguity, and the more prominent difference between == and === has prevented a few subtle bugs.


I’d like to say more than “me too,” but you’ve nailed why I love ligature fonts so much.

It’s strange to me how the OP is so laser focused on ligatures being “bad” because they violate Unicode. Coding up novel solutions is a stalwart of what it means to be a capital ‘H’ Hacker! I’m almost offended on a cultural level that they couldn’t at the very least appreciate the spirit of ligatures in programmer fonts.

Personally, I’ve never experienced a situation where Fira Code was the cause of an ambiguous syntax or Unicode issues. But if they have, why not come up with a solution instead of being a hater?


Count me in. I suspect a lot of the "confusion" oriented posts are people exaggerating to make a point, as we as developers seem to so love to do.


Count me in, too. For me it is really easier to parse ≥ than ">=", where I might even get confused about the order of the symbols. It seems more succinct and closer to the mathematical expression it embodies.


Everyone can do whatever they want in their own editor. My personal peeve is being subjected to ligature'd fonts in code examples in programming blogs. I find it particularly egregious when the blog is explaining a new language or syntax to an ostensible beginner.

Fortunately Firefox still supports user styles. Here's how to stop the madness for anyone that cares. https://winaero.com/enable-loading-userchrome-css-userconten...

Inside my userContent.css I've got this gem.

    * { font-variant-ligatures: none !important; }


In my opinion programming blogs shouldn't touch the font settings in `<pre>` and `<code>` at all. Then everyone can view the code how they prefer.


Counterpoint: I like ligatures in programming fonts.


Counterpoint to your counterpoint: the standard is still the best way, and ligatures are non-standard.


Can you point me to where in a programming language standard it says what font, hinting, and kerning must be used when editing code?


Standards matter for interop, not so much for personal preference in personal environments.


Rip rc files, no colored terminals anymore :(


I hope you use the standard Windows OS.


I bet ligatures would be unambiguously helpful in programming if they were limited to the presentation layer and controlled by the syntax highlighter.

For example, the author uses Fira Code[0] as an example of well-intentioned but problematic ligatures. The author says this is bad because (1) it contradicts Unicode and (2) the substitutions will be inappropriate sometimes.

(2) is solved by applying substitutions in semantically relevant places with the syntax highlighter (see the sketch below). This would be particularly useful when typing special sequences. If you get the ligature substitution, then you know you don't have a typo.

(1) is trickier. You want to save a unicode file, and you want to be able to copy text selections that end part way through a ligature. This requires some finesse.

[0] https://github.com/tonsky/FiraCode
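
A minimal sketch of the "semantically relevant places" idea, with Python's tokenize module standing in for the syntax highlighter (the function name and the LIGATABLE set are made up for illustration; no editor exposes exactly this API): only spans classified as operator tokens are reported as safe to ligate, so the same character pairs inside strings and comments are left alone.

    import io
    import tokenize

    # Character runs a renderer might want to draw as single glyphs.
    LIGATABLE = {"->", "==", "!=", ">=", "<=", ":="}

    def ligature_spans(source):
        """Yield (row, col, text) for operator tokens a renderer could ligate."""
        for tok in tokenize.generate_tokens(io.StringIO(source).readline):
            if tok.type == tokenize.OP and tok.string in LIGATABLE:
                yield tok.start[0], tok.start[1], tok.string

    code = 'def f(x) -> bool:\n    return x == "a == b"  # the == in here is skipped\n'
    print(list(ligature_spans(code)))
    # [(1, 9, '->'), (2, 13, '==')]

A real editor would key this off its own highlighter's token types rather than Python's tokenizer, but the effect is the same: substitution becomes context-aware instead of a dumb character-pair match.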


In what sense does it require finesse? It seems to work fine in vim at least.

Agree that it may be nice if they were more syntax aware. Although, I'm not sure what I want to have happen if I comment out some code.


I guess? I've never really encountered problems with it, but I'm fine if people don't share my tastes.


I mean I don't personally like them but I'm not gonna yuck your yum. As long as you run your code through the autoformatter before you submit, you can program in a variable width font for all I care.


Papyrus, even.


What about Comic Sans?


This only affects you reading code on MY computer. Your computer can display my code in whatever font you wish. This is much ado about nothing. It's simply a presentation layer, and it matters most to the person doing the coding, so it should be whatever they choose that makes them perform better. For me, ligatures were a big improvement. They shortened code so more is displayed on the screen with no alteration in meaning, and they made other things more obvious and intentional, and helped them stand out. And when I upload it to GitHub or anywhere else, it's displayed in a traditional font with no ligatures, because it's totally nondestructive and has nothing to do with the actual code - only how it appears (to me).


This article really irritates me, despite the fact that I broadly agree with the conclusion.

The points on Unicode and wrong substitution feel weak, because there are plenty of valid situations where Unicode isn't displayed "correctly," and plenty of situations where the characters displayed don't match their semantic meaning.

In my opinion, the main reason to avoid ligatures isn't some big moral issue on "correct meaning" or "properly using Unicode." The main issue is that ligatures can cause confusion if the reader doesn't know the ligature (which is most readers).

For what it's worth, I like using font ligatures on my own IDEs. I think they look nice. But I would never choose them for anything displayed for anyone else.


The article is probably written to irritate, whether intentionally or not. While it's easy to accept that ligatures in public code are bad, he still goes on to say:

> “What do you mean, it’s not a matter of taste? I like using ligatures when I code.” Great! In so many ways, I don’t care what you do in private. Although I predict you will eventually burn yourself on this hot mess...

Also note the ending line:

> If you don’t believe me, try it for 10 or 15 years."

Well, ligatures will soon have been in programming fonts for ten years or more.

In many ways, the article has a good point: don't use ligatures in public code.

It has a second point too, which I think is not true: if you use ligatures, you may write logic bugs because of twisted semantics. That's a claim that would require a statistical study of programmers who use ligatures and those who don't: do they write more bugs or not?

I find some ligatures very helpful for readability, and thus they actually reduce bugs, but it's all in the eye of the reader.


Ligatures are display only. They are not part of any language syntax I know of.


I don't see a problem at all. If you like ligatures in your code and they work for you, you use them, if not you don't.

It's not like they're hard coded somewhere, they're a matter of personal preference, just like spaces/tabs and fixed-width fonts.

It's just the author's preference in rant form, nothing more.


The other thing the author isn't considering is that some of us have UI additions that make the IDE display a ligature that visually replaces a known set of keystrokes. Now, what I actually typed is still there! It just looks fancy in my code editor.


spaces/tabs ARE hardcoded in your code, but otherwise, yes exactly.


Ligatures in code fonts are an interesting development. I don’t have any experience with them. The neatest thing about them is that they are very much an only-I-need-to-use-this convenience. It doesn’t really infringe on anyone.

For the most part. I used a beginner Haskell book for some course, and they used the proper symbols for some operators. But then newbies might think that the symbols used in the book are what raw Haskell looks like, which is wrong. So that’s very inappropriate.

On the other hand, this is not the same as using ligatures to display things like `==` in an editor; the former is text substitution, while the latter is purely a visual thing. And I am fine with the latter as long as it is not misleading.

These visual ligatures are the closest thing that most languages will get to using “nice-looking” (subjective) symbols instead of just every combination of all the printable ASCII symbols. Now, using Unicode in some new tech is intrusive, since that is a decision that affects all users of that tech. The author of this piece did do that in his Pollen by using U+25CA LOZENGE (`◊`) as the central metacharacter. I don’t know how that panned out since I haven’t used that very interesting typesetting language.

Modern/new languages might want to loosen up on the aversion they have towards Unicode. But it’s a chicken-and-egg problem: input methods suck, so most languages don’t want to burden their users with having to use symbols that don’t show up on their keyboards. And in turn input methods suck because we often don’t need anything better (at least those of us who are monolingual and/or from the West). Maybe the best that languages that want to use Unicode can do is to provide Unicode symbols as an alternative, making sure that some ASCII soup can always be used instead (kind of like C’s trigraphs…).


This reminds me of discussions with older Fortran programmers who argue for ALL CAPS ALWAYS.

I guess ligatures are fine by me, I use them and have not been bitten. Editors should rather focus on showing whitespace somehow, I have been bitten by tab/space/CRLF many times!


Speaking of Fortran, one thing that actually annoys me in Fira Code is that /= renders as some weird thing rather than "does not equal."

And of course it has no chance of understanding .ne. but that's fine.

In LaTeX lstlisting you can sometimes get weird little underlines to explicitly show spaces in code. It looks pretty bad!

I almost want a font that draws l as a lower case cursive l, although I guess it is probably better not to fix this problem as somebody out there will have a font that mixes them up.


I use ligatures that turn spaces into tabs.


That's pretty cool. My font doesn't have them, sadly.


What is your tabstop length preference, then? :)


Ligatures help to convey the intended meaning of the code, which outweighs the rare case when they go wrong.

Programming languages’ operators like == are actually ASCII art for the convenience of the machines, never the humans. They encode a concept that is not representable in ASCII, so it has to be encoded as a conventional combination of two characters. When reading your own or others’ code, you have to do this subtle mental labor of decoding the ASCII art back into the concept.

Programming ligatures, on the other hand, alleviate this burden. They conform to the intended meaning of the author. It's not a typographical problem; rather, it's a communication problem.

For most ligatures, you can guess the underlying ASCII codes, and if you can't guess, you only need to learn once. In an editor or on a webpage, you can copy the code and paste it into something that doesn't have the ligature font enabled to learn its components.

The ideal case is Unicode programming like Agda, but it has its own problems: it's not easy to type (or to find out how to type), it lacks some operators, it's hard to read at small font sizes, etc.

Ligatures are a balanced option between the ASCII art and the Unicode programming. Not hell yes, but neither hell no.


I personally love ligatures, but

  if you’re preparing your code for others to read—whether on screen or on paper—skip the ligatures.
I can agree with this.


I've actually caught multiple bugs using ligatures which make operators more obvious. It may not help you, but for me it makes logic expressions and various arrows a lot easier to read. On the whole unicode confusion thing, if you are naming your variables with unicode arrows and logic operators, that might just be an antipattern you want to avoid for other reasons.


I was quite confused about why I was not seeing any differences between the font examples, till I remembered I have uBlock Origin blocking remote fonts.

I also agree with another comment that I don't want my ":)"s being converted into emojis. And the ligatures look pretty good, but I wouldn't use them when programming.


It's hard enough to find a good coding font without having to disqualify candidates because of their ligatures. Many fonts overdo it, making them pretty or making them look like single characters. If they instead appeared as two typed characters that form a single operator, without much ambiguity, that could be useful. Of course you don't want to depend on the ligatures for the whitespacing to be 'right'.

Much as colored syntax can go very wrong with a poor color palette, mappings, or monitor color profile, ligatures go wrong often enough not to be worthwhile. Maybe they'll get sorted out eventually, just as we have good color defaults these days.


Discussed at the time:

Ligatures in Programming Fonts: Hell No - https://news.ycombinator.com/item?id=19805053 - May 2019 (86 comments)


I really don’t like ligatures, but I also don’t like it when my IDE auto-completes quotes/braces/brackets/etc. I try to turn all that kind of stuff off.

Basically, I find it very distracting when typing something and then seeing something else appearing around my typing. Ligatures definitely fall in that “distracting” realm for me. It may not be as pretty but I like the symbols showing exactly as I typed them.

I don’t begrudge others who like these; it’s their preference. I just prefer non-ligatured versions of coding fonts.


His list of easily confused characters is weird... I've never had problems disambiguating the bulk of them. My simple test is lI1O0, and to just stay with monospace.


> The problem is that ligature substitution is “dumb” in the sense that it only considers whether certain characters appear in a certain order. It’s not aware of the semantic context.

In many code editors (even my terminal with fish shell!) this is not true. Ligatures are broken up when the text changes style due to syntax highlighting. Syntax highlighting can consider context, so if there is a mistaken ligature, this can be fixed by changing the syntax highlighting rules.


On a side note, Triplicate Code is the only font I've ever considered buying to use on my screen. Reading the book: "Oh, those code examples look lovely, what font?" Checking the website: "$120. It's nice, but is it $120 nice?" Logically that's not too much to pay for something I stare at for 40+ hours per week, but for irrational reasons it's hard to pay that much for something so intangible.


Not exactly related, but wouldn't it be awesome to have a font with ligatures to make J or K programming have the readability benefits of APL? Random thought!


Just imagine if J or K could be as glorious as this:

https://www.youtube.com/watch?v=a9xAKttWgP4



Nah, ligatures are fine. Most of the time they’re used in situations where we are literally trying to create a new character by combining others. Like |>, =>, !=, ===, >=, ~=, etc. The individual characters don’t have semantic value on their own, only the combination does. Then it’s quite fine to display it as one ligature character. What I mean is that the meaning of |> is not a combination of the meaning of | and the meaning of >.


That really depends on the programming language.

For example, {| might be paired with |} as special quotation syntax, or at any rate some form of bracket. See, Fira Code has a ligature for those.

But in Ruby you'd see one of them in one liner blocks, e.g.: list.map {|x| x*x}

Here {| is not a thing that should be combined. It would look really confusing as you'd search in vain for a corresponding closing special bracket. { and } denote the block and | and | the argument(s).

How symbols are combined to form meaningful units depends on the syntax of a language, and this issue cannot be avoided. So your mileage may vary.

I'd be on board more if ligatures were applied in a language-specific way, aware of the meaning of tokens in context, e.g. to leave <= in string literals alone, render it ⇐ when used as an arrow (maybe in a language that has left arrows), and something like ≦ when used in a comparison context.

But the same thing being rendered in different ways may be very confusing (unless there's an easy on/off toggle).


These are rare exceptions and I think formatters can usually deal with that. I believe it’s also possible to disable specific ligatures (i.e. not disable all ligatures, but just blacklist those that don’t work in a given language).


I like how we had to figure out some new utterly inconsequential thing to argue about after vim/emacs and tabs/spaces got old


I agree with the author. I only ever look for fonts that will help add contrast to visually similar characters (i.e. i, l, 1, o, 0, etc.)


When I was required to use Eclipse, Source Code Pro was my preferred font. In January of 2020, JetBrains (which I've been able to use for the past few years professionally) came out with JetBrains Mono which is quite nice too.

It has ligatures... which I don't enable.

I've found the blog posts on the designs of each to be interesting.

https://blog.typekit.com/2012/09/24/source-code-pro/

https://www.jetbrains.com/lp/mono/


Assuming anybody else reads your code -- this will just enable you to write code that is visually ambiguous to people using normal fonts.

Ligatures, on the other hand, don't hurt anybody else.


Design Blogs With Polarizing Titles: Hell No


Great opinion. It is perfectly fine to have strong opinions in such subjective topics, and I respect the vision of others.

Also: I would probably never use your font, as I think ligatures look amazing.

And I have never encountered a single case of your objection #2.


Ligatures make things look somewhat cool, but they definitely have drawbacks. Last time, while writing Rust, I had to use the shift-assign operator a <<= 1; and it looked so ugly I had to add a space between << and =.

Does this mean I will stop using ligatures? Nope.


It is possible to only enable the specific ligatures you are interested in if your editor and font support it. https://github.com/tonsky/FiraCode/wiki/How-to-enable-stylis...
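
For instance (a sketch, not a recommendation: which feature tags a font exposes and what each one changes is font-specific, so consult the wiki linked above), VS Code's editor.fontLigatures setting accepts a CSS font-feature-settings string rather than just a boolean, which is how you opt in to particular sets:

    // settings.json (illustrative values; the ss/cv tags here are placeholders)
    {
      "editor.fontFamily": "Fira Code",
      "editor.fontLigatures": "'ss02', 'ss03', 'cv02'"
    }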


Personally I never had trouble reading code without ligatures. I tried Fira Code and other ones in the past but it felt like a purely cosmetic thing that looks ugly.


I think the ligatures in Fira Code are pretty sweet. I've been using it for over a year. But that's just my preference. Everyone rolls a different way.


The author almost went there with the photo from a printed page: ligatures could confuse people learning how to program. That's an actual point that matters to some people, one I would give a shit about. Instead they presented such mind-blowing discoveries as ligatures "contradict Unicode" and "are wrong sometimes". I don't use ligatures in anything other than code, so why are those problems?


Dictating to other people how their personal workflows should look: Hell No


He is not your boss, so chill.

But if he was, it would be OK, no?


Micromanaging the fonts your employees use for their own environments would be a pretty big red flag.


No.

Ligatures and substitutions are display only.

I can set up my IDE to display code however I want. It affects no one else.

Why should any manager dictate that to me? I'd not be working for someone like that. It feels like it would be someone on a huge power trip.

Imagine dictating what font your Devs have to use... That would be mental.


You're allowed to disagree. Vociferously if that's your desire. If you post a good counter rant, people will probably enjoy reading it even if you are just ranting. Including reasonable sounding counter arguments would be nice too, but it's not required.

It's called "discourse". Differing opinions are voiced and compared to each other, not in search of "Universal Truth", but because different things work better or worse for different people. Greater awareness of the scope of possibility allows individuals to find the things that work best for them. Society as a whole benefits, eventually; at the cost of lots of noise... which we seem to have regardless.


Font ligatures are 100% subjective aesthetic. There is no rant worth reading about it (including this blog post).

Either you like them or you don't and it doesn't affect anyone else's machine. It doesn't matter if it's pure Unicode or whatever loosely connected complex faux objective reasoning you can drum up to support your subjective view.

I didn't like them, but then I did get used to them and I prefer them for JS/TS (only FiraCode's specifically), let me stretch that into an article...


Most of the hyper-personalised things in IT tend to cause some unusual breakage at various stages.

tl;dr: you can obviously do whatever you want, but when you interact with others, having more things in common helps the transfer of information and knowledge.

Take a super-personalised 'pretty' keyboard with a custom layout. That's great if you have a desk that is yours, where only you sit, and you never sit anywhere else. The reality is that you aren't guaranteed to be the only one there, and you're also not guaranteed to always be at your own desk when you're writing. So now you either have to keep a backup keyboard or accept lower performance for yourself or for others who use your desk.

The same goes for the way you like your shell: when you configure it so that it no longer resembles a default distro shell, your examples may no longer work for others, and others' examples might no longer work for you. A shell on another system might no longer feel familiar to you, and when someone looks at your work (be it in a presentation or a co-worker at your desk) they might have a hard time understanding what you are doing at all.

This also goes for fonts, color schemes, menu configurations, even the desk, chair, monitor or network you are using.

None of that means you shouldn't use what works well for you, or that you shouldn't do whatever you want when it doesn't interact with others. But in reality you never really work in isolation, and as soon as you need to interact with something that isn't yours, or someone needs to understand what you are doing in your setup, the flow of information is heavily influenced by the presentation choices you've made.

That doesn't mean that everything and everyone needs to be the same either. If you work on a software project together with others, you probably have a coding standard and editorconfig that embody the commonalities, and then it doesn't matter if someone is using neovim, babi, IntelliJ or ed. But if you end up on stage somewhere showing your work to a larger audience, it helps to have an editor that is familiar enough not to distract or confuse, a shell that looks like one the audience might be familiar with, and a font that people don't need to "interpret".

You can replace stage with any other form of sharing, including screencasts, screenshots, books, websites etc.



