Mathematicians should stop naming things after each other (nautil.us)
218 points by abnry on Sept 5, 2020 | 398 comments



Hard disagree.

Once you get to the advanced levels of any field, terminology being "accessible" doesn't really matter, but being precise does.

Areas like philosophy and law actually suffer in my opinion when they overload common words with uncommon meanings, or descend into weird disambiguations that depend on suffixes.

For example, in philosophy there's "contractarianism" and "contractualism", and trying to remember which is the general term and which refers to a specific theory drives me nuts. (If "contractualism" were just known as "Scanlon's theory" it would be a lot easier.)

Naming things after their creator is actually super-helpful because it's really easy to disambiguate, helps situate things historically, and once you're at that level there often isn't a single unique word or phrase that can easily encapsulate the idea anyways and isn't easily confused with something else.


I disagree with you. In Computer Science we have things like "Quick Sort", "Merge Sort", "Map", "Hashtable", "LRU", etc... etc...

They are much more descriptive and easy to remember, even though the algorithms themselves can be complex. Even the name "Boolean" could be changed to "Conditional"... and be even more readable. Also, Dijkstra's algorithm can be generalized to "Shortest Path Algorithm" (there can be more than one).
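To make the "shortest path" point concrete, the idea itself fits in a dozen lines. A minimal sketch (function name and graph encoding are my own choices, not a standard API):

```python
import heapq

def shortest_path(graph, start):
    """Dijkstra's algorithm: greedily settle the closest unsettled node.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns a dict of minimal distances from start to each reachable node.
    """
    dist = {start: 0}
    pq = [(0, start)]  # priority queue of (distance so far, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

Whether you call it "Dijkstra's algorithm" or "shortest path", the greedy frontier expansion is the same.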

Math, and physics to some degree, have become self-referential to the point that they start to look like esoteric books of black magic to beginners...

While CS was born out of math, and unfortunately has adopted some of the same esoteric symbolism, I hope Computer Science doesn't follow that path in the long term; otherwise it will become divorced from day-to-day real-life applications.

Let me give you a clear example:

Now, imagine if we called doubly linked lists "Darombolo lists", after whoever invented them (I made up that name). "Doubly Linked List" is very easy to visualize and remember; "Giacomo Darombolo List" is not, and just adds to the pile of things you must cram/memorize to get work done.
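The descriptive name practically writes the code for you: each node links both ways. A minimal sketch (class and method names are mine):

```python
class Node:
    """One node of a doubly linked list: a value plus links both ways."""
    def __init__(self, value):
        self.value = value
        self.prev = None  # link backward
        self.next = None  # link forward

class DoublyLinkedList:
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, value):
        """Attach a new node at the tail in O(1)."""
        node = Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            node.prev = self.tail
            self.tail.next = node
            self.tail = node
        return node

    def to_list(self):
        """Walk head-to-tail, collecting values."""
        out, cur = [], self.head
        while cur:
            out.append(cur.value)
            cur = cur.next
        return out
```

You could read this sketch cold and know what `prev` and `next` do; a "Darombolo node" would give no such hint.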

I personally don't like "cramming" useless trivia in order to work in my field. I hope Computer Science divorces from Math, and takes its own path to more logical naming of things and less useless symbols used in it.

It is as if the whole field suffers because of the authors' narcissism, their need to name things after themselves.


Tim Sort, Hamming codes, Huffman coding, RSA keys, LZW encoding, Duff's Device, Bloom filters, Carmack's Reverse, awk, Linux, git. We have a lot of things named after those who discovered, invented, or popularized a structure or technique. Certainly nowhere near as commonly as mathematics does, I will agree.

In CS, no doubt, we often end up on the other end, where a single term means different things in different contexts and beginners may get confused at our reuse of terminology. Often the reuse gives some metaphorical understanding to the newcomer, even if it largely leads them astray in the details.


Bloom filters prove OP’s point though. The first few times I heard the term, I wondered how a Photoshop filter to blur things could possibly apply to the problem. Maybe if it was called an exclusion filter it would be less jargony, I don’t know; naming things is hard.
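For the record, the thing behind the name is tiny: a bit array plus a few hash probes, answering "definitely absent" or "probably present". A rough sketch (class name and parameters are arbitrary choices of mine):

```python
import hashlib

class BloomFilter:
    """Probabilistic set membership: false positives are possible,
    false negatives are not."""

    def __init__(self, size=1024, hashes=3):
        self.size = size        # number of bits in the filter
        self.hashes = hashes    # probes per item
        self.bits = 0           # a plain int used as the bit array

    def _positions(self, item):
        # Derive k pseudo-independent bit positions by salting one hash.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        return all((self.bits >> pos) & 1 for pos in self._positions(item))
```

"Exclusion filter" or "exclusion set" would indeed describe the "definitely absent" guarantee better than "Bloom" does.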


At least having a word like "filter" in it narrows down the choices even if it doesn't make it unique. If instead it were "Bloom's construction", "Bloom's algorithm", or "Tim's procedure", we'd be at a total loss to even guess what it was about, which is what happens with a lot of math, starting from the "Pythagorean theorem". Can anyone instantly recall "Apollonius' theorem", the "Ackermann function", or "Euler's function"? If "Fermat's last theorem" or the "Goldbach conjecture" weren't crazy famous, I wouldn't have a clue. The request to at least get a "Fourier transform", if not a "frequency spectrum", is not unreasonable.

I've lamented this for a long time, but on the other hand, I doubt mathematicians would ever get sufficient recognition if their names weren't immortalized this way, since they can't get patents on their work. They totally deserve recognition. Would you even remember Leonhard Euler if his work were named descriptively? Most of us, I'd guess, have no idea who came up with sin/cos/exp/log, etc. I'm glad for the names of these functions, but lament the loss of knowledge about the one (or many) who discovered them.

Longer names are a candidate, along the lines of "Einstein's theory of general relativity" or "Euler's relative-prime counting function", but they too will likely collapse over time, depending on familiarity.


Do they get recognition? I'm an atmospheric scientist, and though I've used the equations many, many times, I have no idea who Navier-Stokes was. Or maybe they were two people? Whatever. Presumably they invented the equations and were scientists or mathematicians or something. If real recognition only comes from inside the field, everyone else just has a pedagogically useless name to deal with.


But could you propose a better name for the math terms you mentioned? Fermat's last theorem, for example, is famous because of its history, not its significance, and I don't think any other name would be better. The Pythagorean theorem: what short, meaningful name could it have? The only option I can think of is "a squared plus b squared equals c squared", which is hardly a good name :)


The alternative "Euclidean distance" is already halfway there and is better, since we at least know it's about "distance". At this point, offering any alternative will feel alien and unfamiliar, but "linear distance" works for me if I feel the need to push Euclid out as well.

edit: If I want to talk about distance in a curved space, we already have a well named "Geodesic distance".


That sounds like a different theorem. While it coincides with sums of squares of distances in the Euclidean setting, for the case of a sphere or other manifold it is decidedly about triangles, not so much distances.


I was telling a friend about Bloom filters and he'd never looked into them, because he'd assumed from the name that they were some kind of screen space shader.


For a long time I thought the algorithm or data structure used by it had some sort of metaphorical relationship to a flower blooming.


He wasn't just assuming: Bloom shaders[1] really are a thing in computer graphics, but at least there the name has intrinsic logic ("light bloom").

1. https://en.wikipedia.org/wiki/Bloom_(shader_effect)


>Bloom filters prove OP’s point though.

That depends on your native language, though. To non-English speakers it sounds no different from "Shell sort".


I'm a non-native English speaker, and I struggle to remember that Bloom filters are named after a person. I have at times, when coming across the term, ended up wondering how they're supposed to relate to whatever "bloom" the name refers to.

I don't think it really "proves the point", though, except for the point that sometimes names are confusing. So maybe it would be nicer if someone had happened to consider whether a specific naming might be confusing, but that's not the same as "names that aren't directly descriptive are automatically bad".


And conversely, why are Photoshop filters even called filters!



The point being made is that Photoshop manipulates matrices, while filters are circles of translucent material you put on your camera; and in fact many of those photographic filters don't primarily filter light but rather distort it in some desired fashion. "Transformations" would be much more accurate.


In signal processing, those transformations have always been called filters. It goes way back to the early 1900s.


I always presumed it came from signal processing[1].

[1]: https://en.wikipedia.org/wiki/Filter_(signal_processing)


Exclusion Set


Page rank. I thought git was named by Linus, and to the extent that it is named after him, it was self-awareness of his reputation?

I remember when Duff's Device was a neat trick; does anyone still use it these days?

Edit: a neat trick, not a bear trick.


Page rank is a funny one, it's named after one of its creators, but also describes what the algorithm does. Kinda like Baker's chocolate (named after Walter Baker, but is popular for baking). I wonder if there are other examples.



Page rank is probably one of the more clever names out there. I didn't realize at first it was named after the founder, as it's a good description of how it works.


I can't believe I just realized it's named after Larry Page.


    rank (adj):
      1. (of vegetation) growing too thickly and coarsely.
      2. having a foul or offensive smell. very unpleasant.
      3. (especially of something bad or deficient) complete 
         and utter (used for emphasis).


I was today years old when I found out. I assumed it just did what it said on the tin: rank (web) pages.


I always saw it as nominative determinism.


>I thought git was named by Linus

Yes, I included that one as a joke. "git" is a British slang term. Linus once quipped that he names all his projects after himself.



Speaking of Linus, did you know the embarrassingly invalid "Linus's Law" was actually made up by Eric S Raymond, and he just blamed it on Linus?

https://en.wikipedia.org/wiki/Linus%27s_law

>In software development, Linus's law is the assertion that "given enough eyeballs, all bugs are shallow".

>The law was formulated by Eric S. Raymond in his essay and book The Cathedral and the Bazaar (1999), and was named in honor of Linus Torvalds. [...]

>Validity

>In Facts and Fallacies about Software Engineering, Robert Glass refers to the law as a "mantra" of the open source movement, but calls it a fallacy due to the lack of supporting evidence and because research has indicated that the rate at which additional bugs are uncovered does not scale linearly with the number of reviewers; rather, there is a small maximum number of useful reviewers, between two and four, and additional reviewers above this number uncover bugs at a much lower rate.[4] While closed-source practitioners also promote stringent, independent code analysis during a software project's development, they focus on in-depth review by a few and not primarily the number of "eyeballs".[5]

>The persistence of the Heartbleed security bug in a critical piece of code for two years has been considered as a refutation of Raymond's dictum.[6][7][8][9] Larry Seltzer suspects that the availability of source code may cause some developers and researchers to perform less extensive tests than they would with closed source software, making it easier for bugs to remain.[9] In 2015, the Linux Foundation's executive director Jim Zemlin argued that the complexity of modern software has increased to such levels that specific resource allocation is desirable to improve its security. Regarding some of 2014's largest global open source software vulnerabilities, he says, "In these cases, the eyeballs weren't really looking".[8] Large scale experiments or peer-reviewed surveys to test how well the mantra holds in practice have not been performed.

>Empirical support to the validity of Linus’s law [10] was obtained by comparing popular and unpopular projects of the same organisation. Organizations like Google and Facebook are known for their quality standards. Popular projects are projects with in the top 5% number of stars (7,481 stars or more). The bug identification was measured using the corrective commit probability, ratio of commits detected to fixing bug. The analysis showed that the popular projects had more bug fixing ratio (e.g., Google’s popular projects had 27% higher bug fix rate than Google’s less popular projects). Since it is unlikely that Google lowered its quality standard in it most popular projects, this is an indication of increased bug detection efficiency in popular projects.


> Organizations like Google and Facebook are known for their quality standards.

Citation needed, especially for Facebook. The parts I saw are full of vile hacks, committed by an endless stream of new, clueless devs (young developers are of course better, by the dictum of Mark Zuckerberg). I'm curious how his code looks, or if he ever wrote anything substantial.


A few other favorites: Knuth shuffle, Levenshtein distance, Dijkstra's algorithm, Kruskal's algorithm, Kosaraju-Sharir algorithm, Bellman-Ford algorithm, Knuth-Morris-Pratt algorithm, Boyer-Moore algorithm, Rabin-Karp algorithm, Turing machines...


> Levenshtein distance

Common enough (though there's a diversity of pronunciation and spelling of the name), though for any practical purpose "edit distance" probably works better. (And Hamming distance would probably be better called XOR distance or something.)
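Both proposed readings are nearly literal code. A sketch (function names are mine, chosen to match the descriptive names above):

```python
def xor_distance(a: int, b: int) -> int:
    """Hamming distance on integers really is the popcount of a XOR b."""
    return bin(a ^ b).count("1")

def edit_distance(s: str, t: str) -> int:
    """Levenshtein distance: minimum single-character insertions,
    deletions, and substitutions to turn s into t. Two-row DP."""
    prev = list(range(len(t) + 1))       # distances from "" to prefixes of t
    for i, cs in enumerate(s, 1):
        cur = [i]                         # distance from s[:i] to ""
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                 # delete cs
                           cur[j - 1] + 1,              # insert ct
                           prev[j - 1] + (cs != ct)))   # substitute
        prev = cur
    return prev[-1]
```

The descriptive names tell you what each function computes before you read a line of the body.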

Funny coincidence: there was a comment on HN the other day where someone called Euclidean distance "bird distance", and everyone agreed it was a great coinage. I think "Manhattan distance" is similarly evocative, and no less precise than any alternative.


Very notably things like the Shannon-Hartley Theorem.

Gaussian noise.

The list in physics could go on for quite a long time. Terms like "Gaussian noise" are used as short versions of something that would need an entire paragraph if described verbosely, as if the reader did not understand the fundamentals of the concept.


Calling “gaussian noise” “normal noise”, like the associated distribution, would not lose any precision


It would be harder to search for, and to a lay person it still doesn’t explain what it is. Even worse, given the ambiguity of the word ‘normal’ it could easily cause misapprehensions. Ask the person on the street what normal noise is and they’re probably going to answer in decibels.

At least Gaussian noise is easily googled.


I'd consider 'normal noise' to be 'white noise'


The value in the understanding of the term is not in how searchable it is.


What does gaussian noise have to do with a vector that points orthogonal to a surface? (Not entirely rhetorical; it'd be interesting if there was a relation there.)


I thought orthogonal surface vectors were named after Abby Normal.

https://www.youtube.com/watch?v=C9Pw0xX4DXI


To some extent that reflects that in programming we encounter a mixture of naming origins between computer science (where naming follows an academic tradition that shares its heritage with Mathematics) and software engineering (where naming follows a tradition closer to that of other engineering tooling, where naming is more like branding - think 'Duck tape' or 'Allen wrench').

Often the closest thing we do to the whole 'Thurston maps are Thurston-equivalent to polynomials, unless they have Thurston obstructions' thing is that we qualify our statements about programming entities by the programming language or platform to which the statement applies. So you might reasonably have a statement like 'A JSON value consists of an array of JSON values, a map of string keys to JSON values, or a primitive value', which sounds just as self-referential as the Thurston example, but because it's called a JSON value, not a Crockford value, it sounds less conceited.


Isn't Allen the inventor's name?


It is a brand name.


Sure. So is Ford.

But back in 1909, it was also the name of the guy who signed the patent. Whose company name was his surname.


>"where naming is more like branding"

I meant that it followed the branding pattern; it just happened that the brand was derived from the name.


I guess there's no commerce in mathematical theorems. But the use of his name as a generic (not just for products sold by the company which bought the trademark) seems similar in that it's a community decision -- we could have all decided to say "hex wrench" instead.


I like the idea that git is named after the guy who discovered it.


Git wasn’t invented, it was discovered.


The absolute worst is something I can't remember the real name of right now. I thought it was "generic programming", but that doesn't seem to be it. The name doesn't connote the meaning in the slightest, and even connoted something other than what it was. It was just storing a matrix or array of past return values of a function for reuse. It feels like I'm having a Mandela moment.


Are you thinking of dynamic programming? It's the most poorly named thing in CS, in my opinion. The name tells you absolutely nothing about what it is, and IIRC the name "dynamic" was chosen because it's difficult to use as a pejorative.


Memoization? Or dynamic programming? The latter is a funny story:

> ... Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities.

http://www.eng.tau.ac.il/~ami/cd/or50/1526-5463-2002-50-01-0...
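If the half-remembered thing upthread was specifically the "store past return values" trick, that part is memoization, which in Python is a single stdlib decorator (the name "dynamic programming" usually refers to the bottom-up, tabulated cousin of the same idea):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache every past return value, unbounded
def fib(n: int) -> int:
    # Without the cache this recursion is exponential in n;
    # reusing stored results makes it linear.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

At least "memoization", awkward as the word is, hints at the mechanism; "dynamic programming" hints at nothing, by design, per Bellman's own account linked above.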


Dynamic programming is it. Thought I'd gone crazy for a minute.


You probably mean dynamic programming. I agree the name is awful.


These are good examples, but all of them would be better and more accessible with descriptive names. I doubt anyone would approve a pull request in which a developer named the variables after team members, so why do we condone it when it comes to R&D?

Just because it's a common practice does not mean it's good.


or acronyms.


in Computer Science we have things like: "Quick Sort", "Merge Sort", "Map", "Hashtable", "LRU", etc...

Those are trivial concepts though. The article brings up "perfectoid spaces", as if to suggest their superiority to "Scholze spaces", yet neither name gives any clue as to what they are:

In mathematics, perfectoid spaces are adic spaces of special kind, which occur in the study of problems of "mixed characteristic", such as local fields of characteristic zero which have residue fields of characteristic prime p.

A perfectoid field is a complete topological field K whose topology is induced by a nondiscrete valuation of rank 1, such that the Frobenius endomorphism Φ is surjective on K°/p where K° denotes the ring of power-bounded elements. [1]

Oh, okay. See, the problem is that modern mathematical structures are built on top of centuries of prior work. Computer science, on the other hand, is still in its infancy as a field.

I hope Computer Science divorces from Math, and takes its own path to more logical naming of things and less useless symbols used in it.

That's silly. Computer science is a subfield of math. All computer scientists working in research are mathematicians by training. You won't get anywhere at all if you try to enter the field without a mathematical background.

[1] https://en.wikipedia.org/wiki/Perfectoid_space


Well, CS is more like an extension of a very tiny corner of maths. And practically speaking, most computer scientists and 99% of programmers are terrible at math, to the point that they don’t even genuinely understand basic undergraduate concepts from linear algebra.


Working computer scientists (academics) are definitely not terrible at math. There are a great deal of complicated proofs involved in areas such as complexity theory, algorithms, and data structures. Their jobs are not altogether different from those of mathematicians.


It might be complicated, but as far as I’ve seen, not in a mathematical sense. Perhaps you have some examples?


The tight performance bounds for Union-Find data structures are ridiculously complicated to prove given how simple the algorithm is, even in a mathematical sense.

There isn't a lot of theory-building in algorithms, compared to the more traditional fields of maths, but the combinatorics are formidable.
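For reference, the whole structure fits on a napkin, which makes the inverse-Ackermann amortized bound all the more striking. A sketch with path compression and union by size (class name mine):

```python
class UnionFind:
    """Disjoint-set forest with path compression and union by size.
    Amortized near-constant time per operation (inverse Ackermann)."""

    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own root
        self.size = [1] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:          # path compression:
            self.parent[x], x = root, self.parent[x]  # repoint to root
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                        # already in the same set
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra                    # smaller tree under larger
        self.size[ra] += self.size[rb]
        return True
```

A handful of lines, and yet proving the tight O(α(n)) bound took decades of work.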


Computer science is not a subfield of math. Theoretical computer science is, but there is a lot more to CS than just TCS; for example: operating systems, programming languages, human-computer interaction, etc.

I agree with your first point, though. I think the concepts named after people are usually so abstract that the hard part is really understanding the concepts, not remembering the name.


> Computer science is not a subfield of math.

I wonder what Alan Turing would say about this.

On a more serious note... most people confuse software engineering with computer science.

Computer science is a branch of mathematics, software engineering might not be.

Oh, and by the way, there's a bunch of mathematics and logic (again, a branch of mathematics) in language recognition and compilers...

Edit: consider that many prominent computer scientists were/are mathematicians: think of Donald Knuth or Claude Shannon, for example. They laid the ground for other stuff to happen.


Dijkstra wrote extensively on how mathematically illiteracy among working programmers is why software is generally of such poor quality.

He grew up in a world where he could only beg an hour a week of compute time off the Americans so his community put a remarkably high emphasis on software quality and clear semantics.

And that’s really what advanced mathematics is all about: clearly communicating otherwise intractably complex ideas by letting the symbols do the work.


> And that’s really what advanced mathematics is all about: clearly communicating otherwise intractably complex ideas by letting the symbols do the work.

I agree, and I think you nailed the point perfectly.

The point that the article misses completely.

Thank you.


> mathematically illiteracy

I rather enjoyed that!

I think there's truth to it though, even at a high level: situations where you have logic that leaves you thinking "ok, this works... but it doesn't actually make sense", where something's hard to understand or debug because it's tangled into a functional but illegible mess.


All of the important ideas in all of the fields that you mentioned have an absolute foundation in math. Just because we are able to work without using math everyday does not reduce the relevance of math to programming at all.

Programming is math. I’m not sure why this bothers people so much. When you cross a bridge and it’s able to keep standing, you don’t feel gratitude to math? When you write any program, you should feel the same gratitude.


Is baseball also a subfield of math? The problem of swinging the bat can be reduced to physiology, which can be reduced to physics, and then to math. (Sorry for the snarkiness -- reductio ad absurdum just seems like the easiest way to argue this point.)


Baseball's development had nothing to do with math.

Engineering is not a subfield of math either, it merely uses math as a tool. As a field, engineering evolved in parallel with math, only borrowing mathematical methods when their suitable applications were discovered.

Computer Science is a subfield of math because it was developed by mathematicians as a direct descendant of algebra and the study of algorithms, which date back to the ancient Babylonian and Greek methods for division, computing the GCD of two numbers, finding square roots, etc.


It seems kind of strange to me to draw the lines based on anthropology. If there were an alternate universe in which a philosopher with no mathematical training invented Turing machines and so forth, would you consider CS a subfield of math in one universe but not the other?

Your classification seems as reasonable as any, but the lines seem fairly arbitrary to me.


You can’t invent a Turing machine without math, so your question doesn’t make sense.


As so often happens: https://xkcd.com/435/


for example: operating systems, programming languages, human computer interaction, etc.

Those topics might seem to have nothing to do with math, but all of their components have mathematical underpinnings: algorithms, data structures, complexity theory, and even the physics of end-to-end latency, colour perception, etc.

There probably are some people out there, working in these fields with only a high school math background, but I'd imagine they're exceedingly rare. Anyone who's completed a CS degree has done their fair share of math.


Everyone who completed a physics degree has done at least as much math as a CS degree holder, is physics a subfield of math?

(I think there is a discussion to be had about making the distinction "CS" vs "theoretical CS" as the GP comment does, or if it should be "CS" vs some other term ("computing"? feels a bit general))


Physics is not a subfield of math. It has entirely non-mathematical origins (see Aristotelian physics [1]).

Computer science was developed by mathematicians as a study of algorithms, procedures for computing, and methods of abstraction. In the words of Hal Abelson, computer science is not a science and it's not really about computers in the same sense that geometry is not about surveying instruments [2].

[1] https://en.wikipedia.org/wiki/Aristotelian_physics

[2] https://www.youtube.com/watch?v=2Op3QLzMgSY


There is the Curry-Howard-Lambek correspondence (or should I say: the types / logic / cartesian closed categories correspondence). Curry-Howard in particular says that the act of providing a term for a type is the same as providing the proof of a theorem (modulo a few details). Note that this isn't the same as saying that writing a concrete computer program and proving a theorem in a type theory are the same kind of activity.

Numerical methods and algorithms are fields of math as old as geometry, especially if we focus on the Babylonian or Chinese styles.

Hermann Grassmann sought to formalize arithmetic, not wishing to take it as granted. In doing this, he also connected recursion, induction, and the natural numbers (he would have known of recursion from its early application in combinatorics). Peano, Dedekind, Frege, Zermelo, and many others would also work on the foundations and axiomatization of mathematics and deduction. Computing began as a side effect of attempts to formalize just how far such an approach could be taken. The Turing machine arose to tackle Hilbert's Entscheidungsproblem, and the lambda calculus as an approach to the foundations of mathematics. Functional programming languages were originally part of tools meant to study formal mathematical objects, while logic programming sought to apply ideas from formal logic and the axiomatization of mathematics to automatically search for programs.

Dedekind said: "In speaking of arithmetic (algebra, analysis) as a part of logic I mean to imply that I consider the number-concept entirely independent of notions or intuitions of space and time, that I consider it an immediate result from the laws of thought."

What we find is that computing reaches right down to the foundations of mathematics. Whenever we try to systematize thought, we end up with ideas which, seemingly inevitably, also lead to the foundations of computation.


I tried reading "the road to reality" and came to the conclusion that math is physics minus all the boring parts.

(Mostly joking. Mostly!)



It is like saying chemistry is a subfield of math because at some point chemists use numbers to describe things (mass and such)...

It is not. Applied computer science has as much in common with math as chemistry does.

I view theoretical computer science as mostly self-masturbatory, to the point that it is very, very divorced from real-life applications and benefits us very little.

Also, the market has spoken. Someone with a CS degree and 5 years of experience can command a higher salary than someone who took 5 years to get their PhD in CS. A PhD is not seen as valuable, mostly because it is not seen as beneficial and is very divorced from the realities of applied computer engineering.


I wouldn’t use money as a measure of how useful something is. Useful for getting a job, sure, but to society, not so much.

The computer science that helps big companies gain more control is the most useful by this metric. Oh look, big data and AI are popular. Programming language theory, which helps create less buggy programs, is less so.


You wouldn't have a compiler without someone having developed formal language theory. Or, at least, probably not one built on a solid theoretical foundation that actually happens to be helpful.

You wouldn't have complexity analysis of algorithms, with which most of us don't need to directly involve ourselves, but you do apply its results when choosing an algorithm based on the knowledge that was originally obtained through that analysis. Or if you're not choosing your algorithms, at the very least someone who chose them for your platform did.

You probably wouldn't have lossless data compression (and an understanding of it) at its present level without someone having done mathematically-based work on things like arithmetic coding [1] and range encoding [2]. Again, you probably don't write that code yourself (I haven't), but it's there.

The list goes on.

A PhD degree isn't really a good investment in terms of just salary in almost any field that I can think of. That just means work that's closer to (and directly applicable for) direct revenue streams tends to pay better than work that's further away from them. That doesn't directly mean work that's further away from revenue streams is less valuable down the road; it just means there's less certainty about its ability to help generate revenue, and that there are more steps, more interim work and a greater financial risk involved. While most businesses don't, and shouldn't, bother, that doesn't mean they might not benefit at some point if someone else does it. "The market has spoken" is a shortsighted way of looking at these kinds of things.

Sure, there are areas of theoretical computer science that are more similar to pure maths in terms of abstraction and applicability, and which are pretty much a pure intellectual pursuit. They are very far from engineering or applications. But theoretical work in CS is broader than that, and some of it underlies much of what we have in the practical world.

It's also true that most of software development and engineering work don't really require involvement with much of the theory, partially because someone else is already doing that work within the platform, and partially because most business software is actually theoretically more or less trivial.

Still doesn't mean the theoretical side is useless, because not all software is trivial.

[1] https://en.wikipedia.org/wiki/Arithmetic_coding

[2] https://en.wikipedia.org/wiki/Range_encoding


> You won’t get anywhere at all if you try to enter the field without a mathematical background.

Unnecessary gatekeeping. There’s a lot of engineering-oriented research that has nothing to do with math.


There is no research anywhere, in any field, that has nothing to do with math.


Now this is most certainly wrong. Example (search "history research"):

"Research in history involves developing an understanding of the past through the examination and interpretation of evidence. Evidence may exist in the form of texts, physical remains of historic sites, recorded data, pictures, maps, artifacts, and so on."


Taking a very obvious example, you can't do much in the way of historical research without at least trying to establish which things happened before which other things. It's a serious problem in ancient history.


But you can do that without any math background. Sure, there are applications of math in history research, but they are just applications.


I'm pretty sure modern historians use a lot of mathematical tools: statistics, information science, digital archives, computer imaging, etc. It's very hard to search for this, though, because all of the results concern the history of mathematics. You have to examine the tools and methods used.

Pretty much every field in the social sciences and humanities requires their undergrads to take at least one course in statistics. Sure, these students may complain about it but they need to be trained to not make common statistical errors in their publications. Unfortunately, they still do, which highlights the importance of mathematical education even in these fields.


> in Computer Science we have things like: "Quick Sort", "Merge Sort", "Map", "Hashtable", "LRU", etc... etc...

And in quicksort, we have Hoare’s and Lomuto’s partition schemes. Not that “quicksort” is actually particularly descriptive.

We also have Timsort.
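For anyone unfamiliar with the distinction, here's a minimal quicksort using the Lomuto partition scheme (an illustrative Python sketch, not a tuned implementation; Hoare's scheme instead scans inward from both ends):

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort with the Lomuto partition scheme:
    the last element is the pivot, and i tracks the boundary
    of the 'less than pivot' region."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        pivot = a[hi]
        i = lo
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        # Put the pivot into its final position.
        a[i], a[hi] = a[hi], a[i]
        quicksort(a, lo, i - 1)
        quicksort(a, i + 1, hi)
    return a
```

Note that nothing in the name "quicksort" tells you which of these partition schemes you're getting, which is rather the point.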

> Even the name "Boolean", could be changed to "Conditional"... and be even more readable

Booleans aren't conditionals; conditionals in crisp binary (or, as it is commonly known, “Boolean”) logic operate on booleans (conditionals in fuzzy or nonbinary crisp logics do not).


> We also have Timsort.

And Shellsort!


CS needs to move much farther toward math, not away from it. Naming things well is always a worthy goal, but just like DDD tells us, it’s a fool’s errand in a global namespace. The distinctions between certain concepts are so subtle that it’s arrogant to think that just choosing a better word is all it takes to make them clear.

Unfortunately, we rely on idioms and made-up terms for lots of complicated concepts, but I don’t believe that narcissism is to blame. I believe the magnitude of the number of concepts we need to know about overall is gargantuan, so much so that words would get overloaded if we tried to describe everything accurately. Which would be more confusing than it is now.

Also, you’re assuming a minimum context of knowledge when you say something like “Shortest Path” is a better name for Dijkstra’s algorithm. What if you don’t know what a graph is, or know what a graph is but don’t know what a path is? How is Shortest Path any less opaque? There’s no lowest common denominator of knowledge, so having agreed-upon terms in a given domain is the only way to remain precise.


terms like "quick sort" and "map" are actively harmful because what they mean is VERY ambiguous, and in some cases becomes wrong over time. "quick sort" is no longer the quickest sort algorithm by any standard, it happens to just be quicker than some of the algorithms that came before it. "map" tells you nothing about the properties of the data structure other than that it 'maps' keys to values, but even the nuances of that could vary (can you have duplicate keys? duplicate values? do the keys have to be non-null?)

"Boolean" and "conditional" are semantically very different, the difference is significant.

The "there can be more than one" is specifically why it's important to call the algorithm Dijkstra. It's possible to look it up in Google or in a textbook and immediately find the algorithm in question. Generic terms don't have that property.

Calling this narcissism does everyone a significant disservice.


> Calling this narcissism does everyone a significant disservice.

It’s not even really narcissism.

Here’s the original paper describing what is now called Dijkstra’s Algorithm: http://www-m3.ma.tum.de/foswiki/pub/MN0506/WebHome/dijkstra.... Note that the algorithm isn’t called that anywhere in the paper. In fact, it’s not even named.

When other people want to discuss that work, they need to call it something, and some natural solutions are “an algorithm proposed by Dijkstra” and, subsequently, “Dijkstra’s Algorithm”, as you can see in this contemporary paper: https://www.ams.org/journals/qam/1970-27-04/S0033-569X-1970-...

As that second paper shows, you can’t really call it “the Shortest Path Algorithm” because others were developing other approaches for similar problems. “Shortest Path, Assuming All Edge Weights are Non-Negative and You Can Afford to Search Blindly Without a Heuristic, Algorithm” doesn’t really roll off the tongue either.


This is true, concepts aren’t usually named by the person inventing them. But it’s mind boggling that someone could consider it unfair to give the giants that created mathematics and laid the groundwork for physics and modern technology their due.


It is also introduced as Dijkstra's shortest path algorithm if the text is expected to be read outside the domain such as in textbooks.


Sure, but

a) It'd need to be called that by Dijkstra himself to be even arguably narcissistic and

b) The qualifier "Dijsktra's" is important because there are other algorithms for finding a shortest path (Bellman-Ford, Floyd-Warshall, A*), with different trade-offs (Bellman-Ford is slower, but can handle negative weights; Floyd-Warshall gets you all pairs and may be better when the graph is dense). Accordingly, I think the grandparent's suggestion of purely descriptive names isn't feasible.
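For the curious, here is a minimal sketch of the algorithm the qualifier pins down (illustrative Python; the adjacency-list format and node names are made up for the example):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source in a graph given as
    {node: [(neighbor, weight), ...]}. Requires non-negative
    weights; a negative edge silently gives wrong answers,
    which is exactly the gap Bellman-Ford fills."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

A purely descriptive name would have to encode all of those preconditions to distinguish it from Bellman-Ford, Floyd-Warshall, and A*.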


I'd call my sorting technique "AAAAAAAAAAAAAAA Sort" so it gets listed first in the algorithm directory.


!my sort is better.... sort.


!AAAAAAAAAAAAAAAAAAAARMS RACE SORT


Yet we still have Dijkstra's algorithm vs Bellman-Ford, because their differences cannot be captured well in a single word.


Hmm. Dijkstra = Nearest Node First, Bellman-Ford = Shortest k-Edge Path.


At this point in time it is probably a better idea to simply list both Dijkstra and Bellman-Ford on a Wikipedia page for optimal pathfinding algorithms.


How about A*?


> Event the name "Boolean", could be changed to "Conditional"... and be even more readable.

Definitely not. The term conditional is used for statements of the form “if X then Y”, in computer science as well as logic/philosophy.


I’m not sure there’s a huge difference between fields: CS also has AVL trees and the RSA algorithm (both named after inventors), red-black trees (mnemonic, maybe, but not descriptive), and B-trees (Boeing? Balanced? Bayer? No one knows, not even the creators; see 16:10 here https://vimeo.com/73481096)


The mathematical objects mentioned in the article are way more abstract than something like a sorting algorithm. There aren't any words in our vocabulary which would concisely help to describe these concepts.


You could also replace the person-derived name "algorithm" with "method for solving things" and it would be more readable.


Not sure if this is what you meant, but this reminded me that "algorithm" is ultimately named after a mathematician https://en.wikipedia.org/wiki/Muhammad_ibn_Musa_al-Khwarizmi


The OP did point out that accessibility becomes less important in advanced topics. The things you've mentioned are largely elementary computer science concepts, so they're more amenable to descriptive naming, and descriptive naming matters more for them.

Consider coming up with a descriptive name for LLL basis reduction (where LLL is the initials of the 3 authors) or some other advanced algorithm.

Someone more educated in CS might be able to give better examples.


> Even the name "Boolean", could be changed to "Conditional"

ah, my pet peeve. When I invent my language, a "bool" is going to be a set, closed under the operations "union" and "intersection".

if you want conditionals, you can use zero and nonzero, the way things were always intended.


Very few concepts or techniques, whether in mathematics, CS or elsewhere, are named after people by themselves.

It's rather that others start calling the concepts by the person's name when discussing the concept or algorithm, after it has been introduced by that person in an article or elsewhere.

It would be good not to accuse others of narcissism when there's actually no narcissism involved.

Edit: omitted part that wasn't constructive


[c]onditional and Boolean both have meanings, and they are not the same.


I can't think of a concise phrase that would capture a precise mathematical construct, while differentiating it from related constructs.


I still sometimes forget that currying is named after a person, thinking there’s some analogy to composing arguments as if they’re ingredients


Meanings of words drift over time, like, litterally.


Oh wow, I know this isn't what you meant but wouldn't it be great if we could use literally and litterally to mean literally and figuratively??


We can, but everyone uses literally to mean litterally.


A digression, but I don't think this is actually the case.

I readily acknowledge that literally is often used when a sentence is figurative; that's not the same thing. For literally to be used to mean figuratively the utterer would be worried that, but for the presence of "literally", the sentence might be understood to be literal.

I contend that the role "literally" usually plays is that of an intensifier. I believe it plays that role through ordinary application of hyperbole: the utterance "X is literally Y" is usually meant as "X is very Y; X is so Y it is almost as if it were literally Y; but of course you understand that it was not, in fact, literally Y - we're all reasonable people here."

In much the same way, when someone says "You left me waiting for days" and it's been a handful of minutes, we don't say "'days' sometimes means 'minutes'" - we say that people exaggerate.

I recognize that I'm disagreeing with at least one dictionary; I believe they got it wrong.

And I won't claim that there is literally no single person who in fact uses "literally" to mean "figuratively" - but I have never encountered such an example and I believe it to be rare enough that we can consider it an error, even in a descriptivist treatment.


Ha! And punish my poor spelling even more.


I thought you were making a valid point that spelling drifts too!


Isn’t this why science deliberately chose Latin for naming? Being a dead language it won’t change meanings over time


I think that’s because, when science started to get serious, Latin happened to be the lingua franca of those in Europe who could afford to be part of it. Equivalent deal for why we use Arabic numerals.


But Latin was used specifically because it was NOT a common tongue.


I wouldn’t be surprised if I turn out to be wrong, but I was taught the causal chain was:

1. The Romans spoke Latin

2. The Catholic church, based in Rome, did everything in Latin

3. Between tithes and indulgences, the church got rich and powerful

4. The rich and powerful keep learning Latin to keep up to date with news from the other rich and powerful


Latin was used because it was the language you used to write important stuff; it was with the creation of France and Germany that other languages were upgraded in social status.


Ugh I feel both sides.


I think people have a misconception that mathematicians get together with pomp and ceremony and someone pounds a gavel and declares, "By order of the secret council of mathematicians, such-and-such theorem is hereby dubbed 'Davis's Theorem'", or something.

Rather, what really happens is that mathematicians are a community, and they refer to things in whatever way is convenient. Davis's colleague refers to such-and-such theorem as "Davis's Theorem" not because of some committee on naming, but rather because they were there at the conference where Davis announced the theorem, and everyone at said conference excitedly talked about "Davis's Theorem" for the whole rest of the conference because it was so exciting.


Even with people's names, you get bizarre confusions and mutations.

https://www.tek.com/blog/window-functions-spectrum-analyzers

>Hamming and Hanning

>These two similarly-named Hamming and Hanning (more properly referred to as Hann) window functions both have a sinusoidal shape. The difference between them is that the Hanning window touches zero at both ends, removing any discontinuity. The Hamming window stops just shy of zero, meaning that the signal will still have a slight discontinuity.

The Hamming Window is named after Richard Hamming.

https://en.wikipedia.org/wiki/Richard_Hamming

But the Hanning Window is named after Julius von Hann, and lots of people just throw in an extra "ing" to make them sound alike, but its excruciatingly correct name is Hann Window.

But it seems fitting that they're almost but not quite alike, and so is their spelling. Maybe for symmetry there should be a Halling Window that stops just below zero.

https://en.wikipedia.org/wiki/Julius_von_Hann

https://en.wikipedia.org/wiki/Window_function#Hamming_window

https://en.wikipedia.org/wiki/Hann_function

https://numpy.org/doc/stable/reference/generated/numpy.hanni...

https://numpy.org/doc/stable/reference/generated/numpy.hammi...
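The endpoint difference quoted above is easy to verify straight from the textbook formulas (a quick pure-Python sketch; the 0.5/0.5 and 0.54/0.46 coefficients are the standard ones):

```python
import math

def hann(N):
    # Hann window (the "Hanning" of numpy fame): a raised cosine
    # that touches exactly zero at both endpoints.
    return [0.5 - 0.5 * math.cos(2 * math.pi * k / (N - 1))
            for k in range(N)]

def hamming(N):
    # Hamming window: same raised-cosine shape, but the 0.54/0.46
    # coefficients leave it just shy of zero (0.08) at the ends.
    return [0.54 - 0.46 * math.cos(2 * math.pi * k / (N - 1))
            for k in range(N)]

# hann(5) starts and ends at 0.0; hamming(5) starts and ends near 0.08.
```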


I love how they named the inverse spectrum the cepstrum, which uses quefrency, saphe, alanysis, and liftering, instead of frequency, phase, analysis and filtering. It should not be confused with the earlier concept of the kepstrum, of course! ;)

https://en.wikipedia.org/wiki/Cepstrum

>References to the Bogert paper, in a bibliography, are often edited incorrectly. The terms "quefrency", "alanysis", "cepstrum" and "saphe" were invented by the authors by rearranging some letters in frequency, analysis, spectrum and phase. The new invented terms are defined by analogies to the older terms.

>Thus: The name cepstrum was derived by reversing the first four letters of "spectrum". Operations on cepstra are labelled quefrency analysis (aka quefrency alanysis[1]), liftering, or cepstral analysis. It may be pronounced in the two ways given, the second having the advantage of avoiding confusion with "kepstrum", which also exists (see below). [...]

>The kepstrum, which stands for "Kolmogorov-equation power-series time response", is similar to the cepstrum and has the same relation to it as expected value has to statistical average, i.e. cepstrum is the empirically measured quantity, while kepstrum is the theoretical quantity. It was in use before the cepstrum.[12][13]


Completely agree. And when using common terms, people also make all kind of assumptions from it, and that comes with a bias, before even getting a grasp of the concept it describes in a given field.


Going to pile on to the agreement here -- for example, much confusion can be had when discussing things like "intentionality" with laypeople. Philosophy is riddled with regular words that take non-regular meanings. It might make philosophers feel smart, but it's a detriment to the field imo.


Much, though not all, of this is due to philosophers discussing ideas from the past three thousand years originally in a multitude of languages. How should one discuss a "Platonic Idea" without confusion of a general idea?


People's names can be confused too. A few weeks ago I referred to the Legendre symbol as "Lagrange symbol" by mistake when talking to a colleague. Such things are inevitable however you name things; some words happen to have small Levenshulme-distances...


I assume your misspelling of Levenshtein distance was intentional. Figured I'd drop the correct spelling here for anyone trying to look it up.


Particularly if the names are hard to pronounce or remember because of the cultural and language difference. Few French-speaking people would confuse Legendre ('the son-in-law') and Lagrange ('the barn').


Lagrange vs. Legendre is kinda really easy to mix up, since you have Legendre polynomials and Lagrange polynomials and both are important in numerical analysis...


>Naming things after their creator is actually super-helpful because it's really easy to disambiguate

For the most part I agree with you, however there are some notable edge cases where someone's name can be heavily overloaded (e.g. https://en.m.wikipedia.org/wiki/List_of_things_named_after_L...)


To a certain extent both sides are right. What it really comes down to is communication. The real question is: who should the language be optimized for?

One side feels the intended audience is of the same field and sufficiently sophisticated enough to understand the somewhat obscure naming. Others don’t understand it because it simply isn’t for them.

The other side may come from other tangential domains with their own unique language. They don’t understand why those specialists use such obscure language meanwhile they do the same in their own field.

You see the same in any large organization. Seemingly random acronyms get created as lazy shorthand that conflict with other orgs understanding. It’s of course no consequence to those in the “in crowd” but it hampers communication between groups. Considering communication is one of the ever present hurdles between groups it seems reasonable to me to optimize communication between groups rather than within groups.


>Areas like philosophy and law actually suffer in my opinion when they overload common words

It's more that people take material from our fields and misuse them in casual contexts.

>If "contractualism" were just known as "Scanlon's theory" it would be a lot easier.)

It would also be wrong as it isn't his theory, he's just a philosopher with a recent in-vogue formulation of it. The source of the theory in modern western philosophy is Rousseau. There's no issue with discussing "Scanlon's theories"; but that term refers to his theories, not contractualism at large.


See, you're actually making my point. ;)

Social contract theory is known as contractarianism. [1] (And the source was first Plato, but is best known through Hobbes. Rousseau was then the next best-known iteration after Hobbes.)

But "contractualism" is generally used to refer to T.M. Scanlon's theory specifically. [2]

This is my point. They're too easy to mix up. ;)

[1] https://plato.stanford.edu/entries/contractarianism/

[2] https://plato.stanford.edu/entries/contractualism/


Am I though?

The preambles of the articles that you're citing do not agree with your position, neither does a number of other definitions found easily online, neither does the academic publication record (you'll easily find 500+ articles on Contractualism penned before Scanlon).

See: "There is no necessity for a contractarian about political theory to be a contractarian about moral theory, although most contemporary contractarians are both. It has been more recently recognized that there are two distinct strains of social contract thought, which now typically go by the names contractarianism and contractualism."

[...]

"Contractarianism argues that we each are motivated to accept morality “first because we are vulnerable to the depredations of others, and second because we can all benefit from cooperation with others” (Narveson 1988, 148). Contractualism, which stems from the Kantian line of social contract thought, holds that rationality requires that we respect persons, which in turn requires that moral principles be such that they can be justified to each person. Thus, individuals are not taken to be motivated by self-interest but rather by a commitment to publicly justify the standards of morality to which each will be held. Where Gauthier, Narveson, or economist James Buchanan are the paradigm Hobbesian contractarians, Rawls or Thomas Scanlon would be the paradigm Kantian contractualists. The rest of this entry will specifically pertain to the contractarian strain wherever the two diverge."

In other words, this isn't about Scanlon the person, it's about two different schools of thought regarding people and their relationship with society. To dumb them down significantly, one's about the selfish, desperate flight from the state of nature, the other is about the crafting of a persuasive encompassing rationality of co-operation.

----

Also, just as an aside, while this difference might seem like it's a small nitpick, it's actually one of the fundamental theoretical divides between continental European and American/British legal systems. So yes, there's a reason why the terms are distinct.


Could not possibly disagree more. "Contractualism" is far easier to remember for most people than the name "Scanlon". Proper names do NOT easily translate to concepts for the majority of people. If you can't properly define your concept without the use of a proper name, you aren't trying hard enough.

I think this trend needs to die, and luckily it seems to be falling out of favor.

Similar to naming companies after the founder. Tesla Motors could easily have been "Musk Motors" if it was started in the era of Ford and Lockheed.


Using suffixes and syntactic variations to distinguish concepts also does not translate, in the literal sense of translating to other languages, especially those that have different rules for suffixes. "Scanlon", or whatever other name, is much easier to carry into any other language than random unusual words.


> Areas like philosophy and law actually suffer in my opinion when they overload common words with uncommon meanings,

This is definitely true. It doesn't matter as much as a technical person learning it but at a certain point it becomes absolutely impossible to communicate with regular people not steeped in the terminology about it (e.g. object, property, event, part, substance, sort, kind, type for analytic ontology).


Absolutely. When defining new areas the most important things are not bringing along baggage of irrelevant context and being distinct from other things--anything that's unique and visibly distinct (e.g. not like Q123512). Using common words would be the worst as we have ideas about them but the analogies break down fast and get in the way.


> terminology being "accessible" doesn't really matter, but being precise does.

There is also an argument that for experts names are even more accessible than the alternative - if a descriptive name is used but is wrong, then it becomes more confusing. Math already has a problem with overloading common words with special meanings.


You can have only so many Riemann theorems before it becomes confusing, though.


http://en.wikipedia.org/wiki/List_of_things_named_after_Leon...

> Many of these entities have been given simple and ambiguous names such as Euler's function, Euler's equation, and Euler's formula.

> In an effort to avoid naming everything after Euler, some discoveries and theorems are attributed to the first person to have proved them after Euler.


You know you've made it when you need a Wikipedia article to keep track of what you've done. Gauss' is similar, if maybe a bit shorter: https://en.wikipedia.org/wiki/List_of_things_named_after_Car...


Absolutely agree. I'd go so far as to say this is one of the real strengths of mathematical naming conventions.

You are absolutely on point with the contrast to Law; I have degrees in both subjects, and this has also been my experience.

Jargon exists for a reason. The deeper the field, the better the jargon needs to be to allow proper communication. The fact that we can communicate complex mathematical ideas in language at all is a minor miracle.


Simply being able to look up a luminary's name, years of life, and location helps a lot with placing an idea in the evolution of our knowledge. If I hear of a theory made by someone in England in the 17-1800s, I already know it probably sprang out of the Enlightenment, and that I should be looking at contemporary works for the context of the thinking at that time.


I disagree with your hard disagree.

Because I don't give a shit about who discovered something allegedly first. I do care about what it means/does. Maybe it's a language problem, because some languages make it more easy to add words into a new word describing that new thing.

edit: Otherwise it's just protocol overhead, line noise, gibberish to me.


Any name given to object is necessarily going to be an incomplete description of said object. It's a tradeoff between the length of the description and the precision of the description. If you want to know what the object does you can look up its definition.


I ran into something like this with Perrow's 'Normal Accidents' - he defines common words with specific meanings, when he could have just as easily used two common words with no ambiguity.

If you use three words, you're in real danger of forming acronyms, but I think two words are a sweet spot.


> Once you get to the advanced levels of any field, terminology being "accessible" doesn't really matter, but being precise does.

If you really want to build knowledge barriers, then yes.

This might have been true decades ago when advanced academic concepts really might have been relevant for a small group of experts - but today, advanced math is the foundation of huge industries - and not everyone working with it can be assumed to have a formal education in the field: Often, you want to make use of an algorithm and simply need to know the concepts necessary to understand that algorithm.

Also, being precise doesn't have to mean being opaque - a name should give an uninformed reader at least enough information to roughly categorize the concept: Even "Timsort" is better than "Tim's algorithm", because this at least gives me a hint that I deal with a sorting algorithm.


I strongly agree that precision matters, but in no way does naming a concept after a person help with precision. Almost entirely orthogonal. What does help with precision is typed languages. You cannot use a term until you've precisely defined it in terms previously defined, which can be back-traced automatically by a computer to ones and zeroes. It must be a language both parseable by a human and a machine. At that point you have precision. That doesn't preclude using names as identifiers, but makes that largely irrelevant.

I strongly disagree that accessibility doesn't really matter. It always matters. Maybe not during draft time, but in the long run. There are strong economic incentives to make things less accessible for non-experts, but in no way does that help advance the field, or force experts to reduce elements to their most basics, it is just rent seeking.


Is this a joke? I really can’t tell


GPT-3 is leaking =)


I'm with you on the "Hard disagree", but I'd say something like: one of the great rewards to being a Mathematician is that things are named after you, and you get that legacy. God knows the pay ain't great.


> Once you get to the advanced levels of any field, terminology being "accessible" doesn't really matter, but being precise does.

And in what multiverse is "Calabi-Yau manifold" a precise terminology? Literally all that can be gleaned from that is that it's a manifold and it's the invention of some mathematicians (or maybe one mathematician with a hyphenated surname).

> If "contractualism" were just known as "Scanlon's theory" it would be a lot easier.

My disagreement would make diamonds look soft in comparison.

"Contractualism" at least conveys it might have something to do with contracts. "Scanlon's theory" tells me absolutely nothing about what it might be. That is worse by every objective measure.


But why do you have to be able to glean meaning from the phrase alone? And how often does that really happen in other situations, when last names aren’t involved?

The term “polymorphism” is a good example. It isn’t named after someone, but could anyone without a background in computer science have any clue what that term actually refers to? Sure, they could examine the root words and try to figure it out, but would they be any closer to the actual meaning of the word?

I don’t think it makes sense to make field-specific jargon accessible to the masses. Instead I think it makes more sense to make it easily researchable and distinct from more commonplace words.


> But why do you have to be able to glean meaning from the phrase alone?

Because it helps me wrap my head around what it is, what it does, and why I should care about it.

And by "me", I more importantly mean a rhetorical "me", i.e. a random layperson who happens to be a politician or someone else with disproportionate power over things like scientific endeavors. More on that in a sec...

> Sure, they could examine the root words [of "polymorphism"] and try to figure it out, but would they be any closer to the actual meaning of the word?

I mean, a little bit, yes. "Poly" = "many", "morph" = "form", "ism" = some kind of state of being, and from that someone could figure out that if something exhibits "polymorphism" it means it has many forms (and this does indeed provide at least some intuitive understanding of e.g. how a function can have many different implementations under the same name, and that implementation being decided by the form of its arguments).
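To make that concrete with a toy illustration (my own example): Python's built-in len runs a different implementation for each "form" of argument, chosen by the argument's type.

```python
# One name, many forms: the same call dispatches to a different
# implementation depending on the type ("form") of its argument.
print(len("many forms"))   # a string counts characters
print(len([1, 2, 3]))      # a list counts elements
print(len({"a": 1}))       # a dict counts keys
```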

> I don’t think it makes sense to make field-specific jargon accessible to the masses. Instead I think it makes more sense to make it easily researchable and distinct from more commonplace words.

And this is why the masses write off science and "them nerds telling us how the world works" as useless, and in turn why our planet is dying and humanity's decline into stupidity is accelerating. It's precisely why so few people trust science: because they don't understand it, because every effort seems to have been made to make it entirely opaque to anyone without decades of academic background that the vast majority of people cannot afford (schoolwork doesn't put food on the table).

Maybe - just maybe - we could instead try to remove barriers to entry, rather than making STEM some elitist Kool Kids Klub that deems laypeople unworthy because they don't have the time or energy to memorize the names of a bunch of dead white men. Maybe then we can live in the world we all want: one where science and scientists are taken seriously, and where we don't wait until it's already too late before we even start thinking about addressing a self-induced extinction event.


I've always thought there was a nice breakdown here in math; name things after what they are if they can be stated concisely, name them after discoverers as a short hand for here-there-be-dragons.


Is that the breakdown? Euclid's Algorithm, Pythagoras' Theorem, Fermat's Little Theorem are all elementary, but the example upthread of a "perfectoid field" which is a "complete topological field K whose topology is induced by a nondiscrete valuation of rank 1..." is much more abstruse.


There's no system. Abelian groups are a concept that is simple, intuitive, and easy to describe concisely.


In addition names have another huge accessibility advantage: they are easy to translate. using pedantic suffixes for precise meanings sometimes means that some languages run out of variants


The thing is, there is a trade-off between making math (at any level) accessible to outsiders, and making things easier for the heavy practitioners.

For naming things after their inventors, you are very much losing accessibility to outsiders (that includes mathematicians working outside the field). I could see the trade-off for clarity (for most people) not being big enough to offset the loss in accessibility. The easier a field is to get into, the quicker it will grow. Moreover, it also makes the field a lot more fruitful to work in.


Yeah, but after a while you'd have Smith's rule, Smith's law, Smith's corollary, ...


What about names like Page Rank?


What about it? It is just pure luck that Larry Page happens to have the perfect name for the algorithm named after him. Doesn't happen to everyone.


Yet another example: International relations theory overloads "Liberalism", "Realism" and "Idealism". It's too easy to project meaning for already overloaded words like these.

Long articles and opinion pieces are written by people who saw something with these words, and it never occurred to them that they should check what these words mean in this context.


> Imagine how much steeper the learning curve would be in medicine or law if they used the same naming conventions, with the same number of layers to peel back:

Is she kidding? Off the top of my head: diseases named after people. Parkinson's disease. McArdle's disease. Bell's palsy, Hodgkin's lymphoma, ...

https://en.wikipedia.org/wiki/List_of_human_anatomical_parts...

In Law, precedents are referred to by plaintiff and defendant names: Smith vs. Klein. There are laws named after people, e.g. in the US. Kirsten's Law; Mann Act; Wetterling Act; Sonny Bono Copyright Extension; ...


Yeah, I laughed when I read that sentence. Although some rules in law have naming conventions that tell you something about the underlying rule (for example, the merger doctrine, or assault with a deadly weapon), a huge number of rules are named after the case that created the rule--or, in other words, just a person or company's name. So in administrative law, you might apply Chevron deference, or say that a post-hoc rationalization poses a Chenery problem. And, of course, sometimes lawyers disagree about what case created or recognized a rule, so you'll have some people calling something Auer deference, with others calling it Seminole Rock deference.


Right, this is a ridiculous argument.

Wikipedia catalogues two different lists of eponyms in medicine: diseases and symptoms!

“Being awarded an eponym is regarded as an honor: "Eponymity, not anonymity, is the standard."”

https://en.wikipedia.org/wiki/List_of_eponymous_diseases

https://en.wikipedia.org/wiki/List_of_eponymous_medical_sign...


The whole argument seems to be a plea for better mnemonics, but "clearer" meanings aren't often that much clearer because of the ambiguity introduced (and often hidden).

When law does use descriptive terms it's actively damaging to lay people. Too many laws are written where common words mean something similar to, but importantly different from, what they mean in the field. So then as a layperson you think you know what you are legally required to do, but (surprise!) you don't.

This is why in programming, we're so often advised to give new things non-descriptive names. As you replace things and split things out and combine them together, you introduce tons of ambiguity if you name things too descriptively.

I'd read the evolution of math to name things how they do to be a collective choice for precision, rather than a move for people's egos.


Using names which mean something is impractical. The whole point of a name is to have a symbol so that we don't have to mention the meaning. The meaning is what the symbol invokes by association, not what it contains literally. The meaning is verbose, far more so than the symbol, and trying to capture meaning in names creates unwieldy, verbose names that still fall far short of capturing all the meaning.

We include meaning-words in names. That's why it's "Bell's palsy" and "Feigenbaum constant", and not just "Bell's" or "Feigenbaum".

Such shortenings are possible only in a narrowly established context surrounding an informal conversation.


A lot of medical and legal terms are literal in some way, but Greek or Latin, so I don't know how much that helps.


I guess historians have an impossible job!

Having not read the article, is it possible she's arguing something similar to the notion of erasing history because it's psychologically unbearable for some people?


No, she is not. Read the article and please stop fitting whatever you don't like into your world view.


Yeah, that didn't pass the laugh test.

My mother always joked you could tell that elastin got named by an assistant while the doctor was out sick.


I always wanted to propose my own "Reuben's Law" (named for me) -- no more laws shall be named after dead children.


In medicine they are often named after the first (documented) patient, not the discoverer.


"Often", here, should be read as "quite rarely".


Meh, that's like saying it's so hard to remember San Francisco from San Jose from Mountain View from Palo Alto, why can't we just name them as Big Sea City, Big South City, Middle Town, and Expensive Town.

I.e., it's a fake problem that only sounds plausible to outsiders - if you live in the Bay Area, then Mountain View being called Mountain View is the least of your problems in driving to Mountain View.


Agree, although as a Mountain View resident I'm pretty disgruntled that there is neither mountain nor view here. A town like Mammoth Lakes, CA would be a better candidate to deserve the name "Mountain View".

Jokes aside there are places in the world that name themselves much more intuitively like you described.

Beijing = northern capital

Nanjing = southern capital

Shanghai = on the sea

Hong Kong = fragrant harbor

Xi'an = western peace

Tokyo = eastern capital

Taipei = north Tai

Tainan = south Tai

Taichung = middle Tai

Taitung = east Tai

Shandong = east mountains

Shanxi = west mountains etc.


Many places, even in the West, were named after unique characteristics. Language changed though. I studied in a German town called Paderborn (we lovingly called it "Paderboring" though). A river called Pader originates there and the word "born" is old German for source or spring. Many other German towns have similar names that make sense in old German or are derived from Latin names that were more descriptive but nobody understands anymore.

In the USA there are many places like that as well, and it's more obvious, since the language hasn't changed since they were named. Ironically, in many cases the landscape has, due to human activity. Think Thousand Oaks, Walnut Creek, Mill Valley.

A sibling post makes the argument that descriptive names would eventually lose their descriptiveness as language changes. This is very much validated by the German names. However, that took hundreds if not over a thousand years in some cases.


Case in point: Pontefract*

The advantage of descriptive names is that the more you know, the more you can infer despite them being far removed from current language. On the other hand if there isn't a good candidate for a descriptive name, a surname-shaped nonce is better than a misnomer.

* Latin for Broken Bridge



In the same vein, Seoul = "capital". It used to be a correct expression to say "the Seoul of North Korea is Pyongyang," in Korean. These days I guess it will draw weird looks.


Word2Vec sees no problem with this.


there are places in the world that name themselves much more intuitively

Plenty of similar examples in the United States. West Gate, South Park, Big Ugly Creek. Sometimes it doesn't work out, like how South Charleston, West Virginia is northwest of Charleston.

Related, you can learn a lot about the history of a place by studying maps.

Hartford, New York, Throggs Neck, Kill Van Kull, Schuylkill, Gravesend, Brooklyn, The Bronx, all gain extra meaning if you know a little bit of history.


> Taipei = north Tai

> Tainan = south Tai

> Taichung = middle Tai

> Taitung = east Tai

> Shandong = east mountains

> Shanxi = west mountains etc.

Not quite. Shandong = "east of the mountain(s)", not "the mountains in the east", and similarly for the others. Note how in Shandong, dong comes second, whereas in Beijing, the north capital, bei comes first.

Shanghai is a weird one. "On the sea" should be haishang. 上海 in another context should mean something like "get onto the sea".


> 上海 in another context should mean something like "get onto the sea".

Wait, does that mean that "shanghaiing" someone is actually a literal description rather than just a reference to a prototypical destination?


Shanghai 伤害 is a Chinese verb meaning "hurt". Would that mean that "shanghaiing" someone is "actually" a euphemistic reference to the injury suffered by enslaved sailors?

It's an English word; it's got nothing to do with Chinese.


I was asking after the etymology, actually; were whoever coined the term aware of the meaning in Chinese? (Compare eg pagerank or bakers' chocolate elsewhere in thread - named after a proper noun, but the term as used is a description - but in this case it would be the reverse - named after a description, but the term as used references a proper noun.) Rereading I agree that wasn't obvious, though.


In any case, Shanghai is not the only seaside city, and there are other ports as fragrant as Hong Kong's.


Shanghai is the biggest seaside city and pretty sure Hong Kong for a long time was the principal port (wrt China, of course).


> Shanghai is the biggest seaside city

The name is much older than the size.


I'm guessing that at that point it was also the most notable one, or notable for the fact that it was on the sea.


Definitely not; Fujian is the area with the strongest historical involvement with the sea.


Most suburban areas will probably have been named to attract people buying property, rather than for any real reason otherwise.

Perhaps the biggest historical example of this is Greenland, whose name is said to have been chosen to attract more settlers to a place that is very cold in reality. (The corresponding bit that people like to tell, in which Iceland had its name sabotaged to make Greenland more attractive, is not based in reality.)


You just have to get high enough in Mountain View to see the mountains. I was walking along Steven's Creek Trail, and there was a certain point where I could see mountains in all 4 directions (north, south, east, west).


Mountain View was named that way because if you look West you will see the Black Mountain, part of the Santa Cruz Mountains.


Look to Los Altos and Los Altos Hills as examples of the best of both worlds.


I prefer Tall Stick


This is one of those articles that might sound convincing to non-mathematicians (because it is annoying to learn names for difficult concepts, and many people can relate to that experience), but will not sound all that convincing to domain experts.

It'd be hard to deny that descriptive terms are easier to memorize. Sometimes a piece of natural and/or physical intuition allows such terminology to arise.

Scientists and mathematicians do tend to think about terminology quite frequently: communicating with other scientists and mathematicians is a major part of doing science, and "the reviewers couldn't follow your argument" is a valid, and not especially rare, reason for rejected articles in mathematics. Given the amount of thought put into it, "intuitive" names do tend to come about when possible (as it happened with what we now call the Ham Sandwich theorem, the concept of Fibration, the Squeeze Lemma, and countless others).

Given that mathematicians do put thought into terminology, there's often a good reason for not having more intuitive names: maybe no good common-sense intuition was available (e.g. Chu constructions are too general for this sort of thing), or the thing comes up only in highly specialized contexts where it's not worth bothering with it (e.g. Girard's paradox), or there are too many different metaphors that one would have to invoke to describe the situation appropriately, so that it's more efficient to derive a completely new term from an associated name (e.g. Abelian became such an adjective, which now has its own associations).

It's telling that the author criticizes terms like "Calabi-Yau manifold", but doesn't suggest any alternative: coming up with an insightful name that communicates the key properties of such an object, and is easier to use and remember than "Calabi-Yau" is, let's just say, very very hard.

The same phenomenon is not limited to mathematics. Would Dijkstra's algorithm, the Haber–Bosch process or the Otto cycle be easier to learn and remember if they had snappy, insightful names? Probably. But the same concerns apply. It's hard to come up with genuinely better, more descriptive names for these processes. And even if we were successful in popularizing newer, better names, we would find that the names were not the real bottlenecks that made computer science, chemistry or mechanical engineering difficult to master.


As a counterpoint, I've a Ph.D in math and I largely agree with the article. Frankly, it's difficult enough to keep track of the names in my own field much less interact with people in other fields who would greatly benefit from these results if they were comprehensible.

While I agree that precision is necessary, is it really that hard to say something like a complete inner product space vs a Hilbert space? Should we have an argument about whether we should call the triangle inequality the Cauchy-Schwarz or the Kantorovich inequality? How many years did it take to recognize Karush for providing algebraic optimality conditions long before Kuhn and Tucker?

Anyway, outside of general griping about names: very specifically, I have to regularly interact with technical people in other fields, and memorizing names is one additional, often superfluous, barrier to their understanding. Even if it's important for someone to credit the (supposed) originator outside of a citation, giving an intuitive name for the context would go a long way in improving understanding. For example, the triangle inequality from Cauchy and Schwarz, or the algebraic optimality conditions from Karush, Kuhn, and Tucker.

By the way, I'll also mention that the majority of other mathematicians that I know don't feel this way and prefer using names after people. I understand; I just disagree.


If you have to write complete inner product space a hundred times in an article that's pretty annoying tbh


In a Physics article, we'd just call it CIPS after the first instance. It does get a bit unwieldy if there are too many acronyms or initialisms in a sentence.


Terrible excuse. Things are read much more than they are written. That’s like using short variable names rather than descriptive ones as they are easier to type. With current IDEs that isn’t even the case and I’m sure word processors could use the same autocomplete. Every writing shortcut adds cognitive load for the reader to remember.


Short variable names can be easier to read, too. Having people decode meaning from the name, rather than keeping track of the meaning separately and associating it with a symbolic name, doesn't always feel like a good use of cognitive resources.

imo, "Hilbert space" is a lot easier to read, you really only have to recognize the guy's name, it's a fairly easily recognizable word. "Complete inner product space" has me look more closely at more words, and it's all extremely boring-looking words so they don't even stand out in a block of text.


I mean long class names in Java in particular are pretty much a meme [1] because of how ridiculous they are. There is definitely a balance to be struck between succinctness and expressiveness.

[1] https://news.ycombinator.com/item?id=3215736


If only math used completely intuitive names for concepts everywhere, like class, set, group, ring, field, ideal, domain (not the same as "domain"!), fiber, sheaf, manifold, oh wait.


At first I agreed with you, but I think your analogy to algorithms actually somewhat counters your point.

The Wikipedia article for Dijkstra's algorithm gives an alternative name of "Shortest Path First", or "SPF algorithm", which I do think is a much better and more descriptive name.

It also made me think of sorting algorithms, which all have wonderfully snappy and descriptive names. I think the world would be a sadder place if – instead of quicksort, mergesort, and heapsort – we had to struggle with opaque names like Hoare sort, Neumann sort, and Williams sort.


The issue is that some concepts can't have descriptive names, unless you just explain the whole concept. For example, Lipschitz or Hölder continuous functions, or Riemannian manifolds... Whenever concepts can be described with a good name, those names tend to be used, but unfortunately a lot of the time we just can't.


> It also made me think of sorting algorithms, which all have wonderfully snappy and descriptive names.

Well, bubble, shell, insertion and merge, sure. Quick less so. And then there is Tim...


What? Shell sort isn't a descriptive name. The algorithm is named after Donald Shell, and has nothing to do with shells.


Huh. When I learned about it in high school 30+ years ago, I didn't learn any background on the name and saw an (admittedly always awkwardly strained, and I can't even clearly explain it to myself now) metaphor in which the interleaved lists were somehow the shells. I never thought it was as clear a metaphor as bubblesort's, but more descriptive than quicksort.


kind of reminds me of the Heaviside function, which is a step function. When I first learned about it in school, I thought it was named after the "heavy side" on the right, as opposed to the "light side" on the left. Turns out that Oliver Heaviside was an actual person.


Nagle's Algorithm is named after a person. It helps make sure more data is sent together, rather than split across unnecessarily many packets.

I recently learned that in Polish, "nagle" can mean "all at once" - not pronounced the same, but still a cute coincidence!
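For the curious, the coalescing idea can be sketched like this. This is a toy model, not real TCP code (the class and names are my own invention; actual stacks do this in the kernel, with timers and window checks on top):

```python
MSS = 1460  # bytes per full segment; a typical TCP maximum segment size

class NagleBuffer:
    """Toy model of Nagle-style coalescing of small writes."""

    def __init__(self):
        self.pending = b""        # small writes accumulate here
        self.unacked = False      # True while data is in flight, unacknowledged
        self.sent_segments = []   # what actually "went on the wire"

    def write(self, data):
        self.pending += data
        self._flush()

    def ack(self):
        # The receiver acknowledged everything in flight.
        self.unacked = False
        self._flush()

    def _flush(self):
        # Send only if nothing is awaiting an ACK, or if a full segment is ready.
        while self.pending and (not self.unacked or len(self.pending) >= MSS):
            seg, self.pending = self.pending[:MSS], self.pending[MSS:]
            self.sent_segments.append(seg)
            self.unacked = True
```

Three one-byte writes produce two segments instead of three: the first goes out immediately, and the next two wait in the buffer and leave together once the ACK arrives.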


You do know that Shell is a name, right?


quicksort kind of proves that the names don't have to be particularly meaningful, just somewhat descriptive and unique. It's not like mergesort isn't quick.


pivotsort would have been a better name


If you summarise the article as "John Conway was good at thinking of names. Try to be more like John Conway.", then I think many mathematicians will think it's a reasonable suggestion.


Really, "Try to be more like John Conway" is good advice all around.


> Chu constructions

Star-autonomous category constructions?

> Girard's paradox

Provability paradox?

> Calabi-Yau manifold

Smooth n-dimensional complex manifold?

> Dijkstra's algorithm

Shortest-path-first algorithm? (I'm cheating here because this is literally an already-common alternative name for it)

> Haber–Bosch process

Ammonia synthesis loop? (Also cheating here, since the "Ammonia production" Wikipedia article uses this exact terminology)

> Otto cycle

Four-stroke cycle? (Cheating yet again, since Wikipedia states: "This is why the four-stroke principle today is commonly known as the Otto cycle and four-stroke engines using spark plugs often are called Otto engines.")


The names you suggest are generic terms that already exist in the respective jargons with different, more generic (and in fact more intuitive) meanings. It's like noticing that the name "Jonathan" has no mnemonic function in the term "Jonathan apple", then suggesting the term "fruit" as a better replacement. This would not solve, but exacerbate the issues raised in the article, in that it would make all of learning, teaching and expert communication harder (compare your suggestions with the ham sandwich theorem, a snappy name which serves as an exceptionally good mnemonic device for both the theorem and its usual proof, despite not mentioning Lebesgue measure, Euclidean space, exact division, or really any specific objects involved in the statement of the theorem).

Non-Otto four-stroke cycles abound (cf. Diesel cycle); new industrial methods for ammonia synthesis come into existence every few years, and there's not much point in singling one out and calling it _the_ ammonia synthesis loop. The case of shortest-path algorithms has been thoroughly discussed upthread. Your suggestions for Calabi-Yau and Girard's paradox are not coherent with the meanings of these terms.

No big deal: giving a purer meaning to the words of the tribe is not easy, and requires deep domain expertise. And those with deep domain expertise do think about this task, have some strong incentives to think about it (teaching evaluations factor into promotion and tenure) and generally do fairly well at it.

(Startup idea: your-jargon-reform-is-bad as a service, backed by GPT-3?)


> The names you suggest are generic terms that already exist in the respective jargons with different, more generic (and in fact more intuitive) meanings.

The last three, like I mentioned, were pretty much direct from Wikipedia.

But yeah, I'm under no pretense that these are necessarily correct. My only point is that we can surely do better than "$person $thing" for nomenclature. The Otto cycle is a perfect example; sure, "four-stroke cycle" might be too broad, but "spark-ignited four-stroke cycle" (maybe abbreviated to SI4C) doesn't seem to be (there are technically other such cycles, but the vast majority of the time people talk about a spark-ignited four-stroke cycle they're talking about an Otto cycle or some evolution thereof).


I think that even an arbitrary name (e.g. quark colors blue, green, red) is easier to digest/process/memorize over a person’s name that may be from a language you are not familiar with. However that also points to another problem. How do you translate? Do you force the word from the discover’s language on all other languages or do you translate it? If you translate then you are introducing drift but if you don’t translate then the word is just as effective as a foreign name. It’s almost like you would have to come up with a table for the concept in all common languages. So while names suck I don’t know what would be better.


I think you make a great argument with the translation problem. English is a second language for me, and many of the people that studied with me at university didn't speak it at all, so classes were as much in Spanish as possible. Things like "bubble sort" were translated to things like "ordenamiento de burbuja," etc. There were two problems with that: one is that not all authors used the same translation, and the second is that information in non-English languages tends to be more outdated or not even exist at all on the web, so many times I had to try and guess the right name in English when I wanted to understand something better.

Now that I have a job a similar problem appeared with translations, Microsoft has this annoying habit of translating useful technical error messages and many times the error in Spanish doesn't yield any useful information on Google searches (if at all) so I'm again having to guess-translate terms into their original form.


Is learning the names of the concepts really the problem for non-math undergraduates? I'd say that's the only part that doesn't give them headaches. Remembering people's names is something we practice every day; understanding abstract mathematical concepts, on the other hand, is very painful for most people.


We should strive to remove all references to Teichmüller. It is abhorrent that the name of a prominent Nazi is so deeply embedded in modern mathematics.


You can start calling them Ahlfors spaces instead.

But I think it's important to keep Teichmuller around as a reminder of the important lesson (especially around here) that mathematical genius does not mean that everything one says on all topics is logically or morally correct.

As Teichmuller wrote: > I am not concerned with making difficulties for you as a Jew, but only with protecting – above all – German students of the second semester from being taught differential and integral calculus by a teacher of a race quite foreign to them. I, like everyone else, do not doubt your ability to instruct suitable students of whatever origin in the purely abstract aspects of mathematics. But I know that many academic courses, especially the differential and integral calculus, have at the same time educative value, inducting the pupil not only to a conceptual world but also to a different frame of mind. But since the latter depends very substantially on the racial composition of the individual, it follows that a German student should not be allowed to be trained by a Jewish teacher.[8]


I don't think descriptive names per se would be especially helpful. The challenge of understanding a concept typically dwarfs the challenge of remembering a name.

Some people, though, have so many things named after them that Googling for concepts can become challenging. In my thesis, I needed to use something called the Steiner point, which is sometimes also called the Steiner curvature centroid, although I didn't know about this name at first. For a convex set K, this is the limit, as R goes to infinity, of Bar(K + B(0,R)), where Bar(L) denotes the barycenter of L. This is the unique continuous map S on convex sets with the two properties

(i) S(K) is in K for all K

(ii) S(aK + bL) = a S(K) + b S(L).

It is also the map on convex sets satisfying (i) which has the smallest possible Lipschitz constant when the space of compact convex sets is endowed with the Hausdorff metric.

It took me a while to get a thorough enough grasp on the literature to learn these things, though, because when you Google "Steiner point", you mostly get stuff about a triangle center, also named after Steiner, which is a totally different concept. It's not that I thought this other triangle center was the Steiner curvature centroid, it was that I literally didn't know what to search for in order to get results on the Steiner point I was interested in.


Georg Steller (1709-1746) was like that, but in biology rather than math. Things named after him:

Steller's jay, Steller's eider, Steller's sea eagle, Steller's sea cow, Steller's sea lion, and the Stellera genus of flowering plants. Oh, also the mineral Stellerite. He's also in the scientific names of a couple of others: Cryptochiton stelleri (a marine mollusc) and Artemisia stelleriana (a plant in the sunflower family). And a school: Steller Secondary School in Anchorage, Alaska.

https://en.wikipedia.org/wiki/Georg_Wilhelm_Steller


Part of the problem is the "Googling for concepts" idea. What did Google Scholar tell you? And did adding the word "barycentre" improve that?


I don't see how your example proves your point. It is not plausible to think about curvature when discussing triangles (you may discuss curvature of constructed circles, but that's tautological to the size of the circles...) so searching for "Steiner Curvature Point" should help find what you need faster.


But at the time I didn't know the phrase "Steiner curvature centroid", I just knew "Steiner point". The definition I knew was not in terms of the curvature, or in the terms I gave above, but as a certain integral of the support function.

As an aside, the Steiner curvature centroid has a perfectly reasonable interpretation in terms of the "curvature" of a triangle. For a convex set in the plane with smooth boundary, the Steiner curvature centroid is equal to the barycenter of the probability measure on the boundary weighted proportionally to the curvature. Given a triangle, take a sequence of smooth convex sets converging in the Hausdorff metric to the triangle, and the Steiner points of these will converge to the following thing: the average of the vertices of the triangle weighted proportionally to pi minus the angle. This is the analogue of the barycenter of the curvature-weighted perimeter for triangles.
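To make the triangle case concrete, here's a small sketch (the function name is my own; it just implements the (pi − angle)-weighted vertex average described above):

```python
import math

def steiner_curvature_centroid(A, B, C):
    """Average of a triangle's vertices, each weighted by (pi - interior angle).
    The three weights sum to 3*pi - pi = 2*pi."""

    def interior_angle(P, Q, R):
        # Interior angle at vertex P of triangle PQR.
        ax, ay = Q[0] - P[0], Q[1] - P[1]
        bx, by = R[0] - P[0], R[1] - P[1]
        cosang = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        return math.acos(max(-1.0, min(1.0, cosang)))  # clamp for safety

    pts = (A, B, C)
    angles = (interior_angle(A, B, C), interior_angle(B, C, A), interior_angle(C, A, B))
    weights = [math.pi - a for a in angles]
    total = sum(weights)  # equals 2*pi
    x = sum(w * p[0] for w, p in zip(weights, pts)) / total
    y = sum(w * p[1] for w, p in zip(weights, pts)) / total
    return (x, y)
```

For an equilateral triangle all three weights are equal, so this reduces to the ordinary centroid; for a very "sharp" vertex (small angle), the weight pi − angle is large and the point is pulled toward it, matching the curvature-concentration picture.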


The thrust of the original article's point is that more descriptive names for theorems and definitions are better. "Steiner curvature centroid" is more descriptive than "Steiner point", and by the metric of being able to Google for relevant information, it is indeed better.

I see now, rereading, that you were in fact making two points. First, that understanding the definition dwarfs learning the name. (I'd argue that a better name won't make you instantly understand a definition, but it can help, as shown by the very example of Steiner point vs. Steiner curvature centroid.) Second, that sometimes multiple definitions and theorems are named for the same person, which causes confusion. So you were making a for-and-against argument.


Here we see the classic tension between synthetic names and natural names.

The best discussion I've seen of this topic is in the 1st chapter (which is fortunately freely accessible and concise) of this excellent programming book:

https://leanpub.com/elementsofclojure/read_sample

Relevant excerpt:

> Most natural names have a rich, varied collection of senses.3 To avoid ambiguity we must use synthetic names, which have no intuitive sense in the context of our code.

> Category theory is a rich source of synthetic names. ‘Monad’, to most readers, means nothing. As a result, we can define it to mean anything. Synthetic names turn comprehension into a binary proposition: either you understand it or you don’t. Between experts, synthetic names can be used to communicate without ambiguity. Novices are forced to either learn or walk away.

> Conversely, a natural name is at first understood as one of its many senses. Everyone understands, more or less, what an id is. In a large group, however, these understandings might have small but important differences. These understandings are refined, and gradually converge, through examination of the documentation and code. At the cost of some ambiguity, novices are able to participate right away.

> Natural names allow every reader, novice or expert, to reason by analogy. Reasoning by analogy is a powerful tool, especially when our software models and interacts with the real world. Synthetic names defy analogies,4 and prevent novices from understanding even the basic intent behind your code. Choose accordingly.


This falls a little flat with me. Let's consider an example from the article- a Kähler manifold. I'm not a geometer, so I looked it up on Wikipedia, and it says that a "Kähler manifold is a manifold with three mutually compatible structures: a complex structure, a Riemannian structure, and a symplectic structure." https://en.wikipedia.org/wiki/K%C3%A4hler_manifold

Two of those things are not named after a person, and none of them are understandable without special training. Naming something after a person doesn't make it any harder to understand unless there are multiple things named after that person and you can't figure out which they mean from the context.

In case you're wondering about those structures, here is what Wikipedia has to say about them:

- A Riemannian manifold "is a real, smooth manifold, M, equipped with a positive-definite inner product g_p on the tangent space T_p M at each point p."

- "A complex manifold is a manifold with an atlas of charts to the open unit disk in C^n, such that the transition maps are holomorphic."

- "A symplectic manifold is a smooth manifold, M, equipped with a closed nondegenerate differential 2-form ω, called the symplectic form."

The only thing in the above descriptions accessible to non-specialists is that the Riemannian manifold is probably named after that guy they heard of in calculus class. Let's not get rid of our ability to honor people in a failed attempt to make the communication more effective. You can call a Riemannian manifold or a Kähler manifold whatever you want, but it's not going to prevent someone from having to spend years before they are able to understand them.


This example is made even sillier when one realizes that "complex" and "symplectic" actually mean the same thing, but the former is Latin and the latter is Greek. It is essentially the same "everyday" description applied to two quite different mathematical objects.


What Wikipedia says is not the best of arguments, for the simple reason that mathematics articles on Wikipedia have long suffered, as Wikipedia writers themselves have argued about for many years, from the problem of diving straight into jargon in the first sentence. It's fairly well-trodden territory by this point, and things are a lot better than they used to be, but part of the tension is between "But one has to understand these other things before one can understand this article anyway." and "An encyclopaedia is read by people who do not understand the subject, because if they understood it they wouldn't be trying to use an encyclopaedia to find out about it.".


> Let's not get rid of our ability to honor people in a failed attempt to make the communication more effective.

I think we should honor mathematicians less with eponymous theorems (prestige culture is toxic), but I agree it shouldn't be done at the expense of worse communication.


What's so toxic about honouring a long-dead mathematician? The article praises the Ancient Greeks and how they named things after their teachers. That, to me, is far more problematic. The history and the effort that went into developing the theorems of geometry presented by Euclid's Elements are all lost. Now we only know about Euclid, Pythagoras, Archimedes, and maybe a few others.

On the other hand, we know far more about the lives of Fermat, Euler, Gauss, Riemann, and Newton. While we can't credit eponymous topics for all of the work of historians, the use of their names in everyday mathematics helps to keep their memory alive so that new generations of people may be interested in learning about the history of mathematics.


To me, learning the mathematics is more important than learning the history, although the history is very interesting.

As for long-dead mathematicians... theorems are still being named to this day for living people. I think the glory we attach to discoverer of the mathematics diminishes the glory of the mathematics.


Yep; check out how the programming community does it! We have descriptive names for things like "Apache", "React", "nginx", "Rust", and "Java"!


These are names for products though, not programming concepts. A more accurate analogy would be calling a class a Kay template or calling a for loop a Lovelace construct. And it's great that we don't do that.


Lucky us, we have Booleans, Turing machines, Boltzmann machines, Markov chains, the Liskov Substitution Principle, ISO 8601 dates, the MIT and BSD licenses (hat-tip to the GPL), and the wholly ambiguous concept of a "class" or "object-orientation".


> Booleans, Turing Machines, Boltzmann machines, Markov Chains

As someone with a degree in pure math, I can tell you, we had Turing machines, Boolean networks and Markov chains coming out of our ears. These are not programming concepts; all of these people were mathematicians/physicists who lived before computers even existed. This is just reaffirming the point that mathematicians tend to name things after each other.

> Liskov Substitution Principle

Actually, this is the only instance of a "truly" programming concept named after a person that I know of. And by "truly", I mean that a mathematician does not benefit from knowing this (would probably even write it off as "trivial"). It really is about the art of designing programs. Well, at least it has "substitution" in its name. Not like a Noetherian ring, which is just the opposite of an Artinian ring :D
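To make concrete what the Liskov Substitution Principle rules out, here is a minimal sketch (the class names are invented for illustration, not taken from Liskov's paper): code written against a base type must keep working when handed any subtype.

```python
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def area(self):
        return self.w * self.h


class Square(Rectangle):
    # A mutable Square that forced w == h on every resize would break
    # callers that set width and height independently -- the classic
    # LSP violation. Constructing it immutably from one side avoids that.
    def __init__(self, side):
        super().__init__(side, side)


def total_area(shapes):
    # Works for any Rectangle subtype that preserves area()'s contract.
    return sum(s.area() for s in shapes)


print(total_area([Rectangle(2, 3), Square(4)]))  # 6 + 16 = 22
```

The principle is behavioral, not syntactic: a subtype must honor the base type's contract, not merely reuse its interface.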

Licenses belong to law. And ISO is not run by programmers either (for Christ's sake, they have a standard for A CUP OF TEA). This goes to show how much of a polymath a programmer has to be. Reminds me of this fun little rant: https://www.usenix.org/system/files/1311_05-08_mickens.pdf


> Booleans

On-off switches. Or conditionals. Or bits (depending on implementations). Or truthy/falsey values.

> Turing Machines

Symbolic tape machines.

> Boltzmann machines

Hidden-unit binary threshold networks.

> Markov Chains

Cumulative event probability chains.

> Liskov Substitution Principle

Instance Substitution Principle.

> ISO 8601 dates

Big-endian dates (or simply YYYY-MM-DD dates).

> the MIT and BSD licenses

n-clause copycenter licenses.


> Booleans

Truth values.

> Turing Machines

Tape automata. (Though there are a lot of variants.)

> Liskov Substitution Principle

Behavioral subtyping. Even Liskov calls it this, but "SOBID" doesn't have the same ring to it. (BOIDS?)

> ISO 8601

Lexicographic dates. (ISO 8601 also specifies ordinal dates, with day of year (i.e. YYYY-DDD), but the "big endian" ordering principle is still accurate.)

Although "ISO 8601" tells you what specification describes it, which is a lot better than widgets named after people.
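A quick sketch of why "lexicographic dates" fits: ISO 8601 (YYYY-MM-DD) strings sort lexicographically in chronological order, with no parsing needed.

```python
# Big-endian, zero-padded date strings: plain string sort == date sort.
dates = ["2020-09-05", "1999-12-31", "2020-01-15"]
print(sorted(dates))  # ['1999-12-31', '2020-01-15', '2020-09-05']
```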


We do have Shor's algorithm, Deutsch-Jozsa algorithm etc.


Who is "we"? Both Shor and Jozsa are mathematicians and Deutsch is a physicist.


That's because "easy to google" is much more important in the short, medium, and long runs than "easy to guess what it does"


Indeed, and in a previous era, "to google" was "Who wrote the main papers on this, who should I look up?". Honestly, that's still true today.


I highly doubt this will work out well in the long term. Once something new comes out with a similar name, or even with the correctly spelled word that was misspelled to form the name of some JS framework, it will rise above it in the search results.


Let's start with JavaScript, which obviously should be similar to Java...


In the long term, nothing works, because the universe is larger than the language center of the human brain.


Not everything is easy to Google:

C, D, F, R, COM, .NET, node ...


Python (the snake; doesn’t show up at all for me in Google’s “All” results).

Swift (the bird; Google eventually gives me https://www.merriam-webster.com/dictionary/swift, but that page thinks it’s more common as a name of a lizard than as the name of a bird, so Google’s small preview says “Definition of swift · 1 : any of several lizards (especially of the genus Sceloporus) that run swiftly · 2 : a reel for winding yarn…”)


C and R turn up top results with the single letter, D works if you add "language", F is more niche but not hard with some Google Fu, same with the other examples. Not everything is easy to Google, but I find those to be very niche or technical topics, e.g. a random linker error that has many other similar terms with another set of related errors.


io-lang


Rust, Go, C, D, R, Flask, Rocket.


Append programming language to the query for searching.

Imagine if C++ were named "Object-oriented, generic, general-purpose programming language with C compatibility" (of course C wouldn't be named C then).


You mean "C with Classes"?


It could be called Stroustrupilian ... or "The one with Turing complete templates".


> "The one with Turing complete templates"

Java has Turing complete generics as well


Also check out "Duff's device" or "Dijkstra's algorithm" ;)


Huffman coding, Turing Machine, Amdahl's law, Boolean algebra, Currying, Ada, etc


The fact that you went to Currying and Ada shows how rare it is in the programming world.


or "bool" (Boole)


Merkle tree.


Are you telling me it's not a patchy server? http://xahlee.info/UnixResource_dir/open_source_rewrite_hist...

