Towards a Better Notation for Mathematics (2010) [pdf] (files.wordpress.com)
81 points by stared on June 16, 2017 | 32 comments



As the author admits in his update, this document is not very interesting. When I saw the title, I was mainly hoping for something like a formalism for mathematical exposition and proof notation.

The problem with the author's proposals is this: he proposes replacing well-known, well-distinguished symbols (alphabets and digits) with symbols that reflect a certain geometric interpretation.

First, the geometric symbols are difficult to read, and a small error (either in printing or in reading) can mean a huge change in meaning.

Second, there is the obvious criticism that they are too radical a departure from current convention to be widely adopted.

Most salient, however, is that they offer no benefit to the development of mathematics. The mathematical enterprise is based on abstraction, generalization, and symbolic manipulation. The symbols proposed reflect only the most banal representation of the mathematical objects they discuss.

For example, he discusses representing the natural numbers with a number line. But this is an elementary schooler's notion of natural numbers. Mathematicians would like to discuss other representations and constructions of the natural numbers: via their Peano axiomatization, as Church numerals, as an algebraic structure, as indices, as a countable set, and many more. Hence it is honestly more convenient, and less intrusive to the discussion of novel mathematical ideas and perspectives, to simply use a flavorless symbol of the alphabet rather than one that is tied to a particular geometric interpretation.

Finally, I'd like to point out that while many things have changed in mathematics since ancient times, we still follow Aristotle in using letters to denote mathematical objects, and Euclid in rigorous exposition and proof; indeed, we use these conventions today with far more frequency than the ancients ever did.


Hi, I'm the author. Please keep in mind that I wrote this when I was 16.

Thanks for your comments -- I agree with many of them. The only part of this that I still think is at all interesting is the section on quantifiers, and to a lesser extent the octal numbers. And I don't think they're that interesting. In general, while I do think there is interesting potential for improving notation -- Feynman diagrams come to mind as a somewhat modern example of notation impacting research -- I don't think this is what it looks like.

> he proposes replacing well-known, well-distinguished symbols (alphabets and digits) with symbols that reflect a certain geometric interpretation.

That's what most of it is, although the quantifiers are a more "grammatical" change. I think the best defense I can offer of the symbol proposals is that the people most affected by low-level notational choices like symbols are probably young students learning them for the first time. It seems plausible there might be significant pedagogical gains from other symbols.


Math major here. I too came away thinking that the quantifiers are the only interesting part of your paper; interesting enough, in fact, that I may start using them myself. One criticism is that there is no way to use them for variables that occur in more than one place. That is, you could not use them to write "for all x in R, x < x+1". I propose a simple addition of an optional subscript for the variable name in the quantifier.

My other criticism is that the notation seems less natural to read. With the standard notation, you can simply replace each symbol with English words and read it aloud. While a similar transliteration is likely possible in your notation, it seems more complicated. In this sense, I would probably view this notation more as a shorthand. For pedagogical purposes, I feel like separating the quantifier from the usage of the variable is a good idea.


It is not the only problem, unfortunately. For example, there is no way to specify the order of quantifiers, which is very important (if you commute existential and universal quantifiers, you in general change the meaning of a formula). So, for example, the definitions of pointwise and uniform continuity would be written identically, and that is not tolerable.
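
For concreteness, here are the two definitions in conventional notation (my rendering, for a function f on the reals); they differ only in whether the existential on delta comes before or after the universal on x:

    % Pointwise continuity: delta may depend on x
    \forall \varepsilon > 0 \; \forall x \; \exists \delta > 0 \; \forall y :
        |x - y| < \delta \Rightarrow |f(x) - f(y)| < \varepsilon

    % Uniform continuity: a single delta must work for every x
    \forall \varepsilon > 0 \; \exists \delta > 0 \; \forall x \; \forall y :
        |x - y| < \delta \Rightarrow |f(x) - f(y)| < \varepsilon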

You could probably solve this too by adding another index to quantified quantities, but at that point, are you really solving the problem of complexity? Complexity is not measured only in terms of verbosity (i.e., the length of the expressions you produce); the ability to quickly locate information and ease of manipulation are also important, and it seems to me that this formalism fails completely there. In the "traditional" formalism you separate the "final fact" you want to assert from its "conditions", given by the quantifiers; usually, you want to process those pieces of information at different times. Also, you have rules for quantifier introduction and elimination, which seem to become a nightmare with the proposed formalism.

That said, I do encourage new proposals and experimentation with notation, as with anything else in mathematics, even from young students. There have been cases in maths in which a good idea for a new notation made an entire field much easier (I am thinking, for example, of Einstein notation for tensor calculus).
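
For readers who haven't seen it, a one-line illustration of that convention (standard usage, nothing from the paper): a repeated index implies summation, so matrix-vector multiplication becomes

    % Einstein convention: the repeated index j is implicitly summed over
    y_i = A_{ij} x_j
    % which is shorthand for
    y_i = \sum_j A_{ij} x_j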


Hi, thanks for replying! I'm sorry if I come off as overly critical. I did indeed think that this was the work of a very curious teenage mind who nevertheless still had insufficient experience with academic-level mathematics.


The intention is good: e.g., 1, 2 and 3 can be written with as many strokes as they represent, and an S already looks like a sine curve (as it's depicted on, e.g., synthesizer interfaces). But after that it becomes tedious, because there is only so much space for entropy in a small area, even with printed symbols. Bigger symbols, i.e. diagrams, are already embraced in many areas. Geometric representations are embeddings of mathematical objects in the real world. Of course improved symbolism has the potential to improve learning. Another way to achieve that is a reduction of formalism; structure is much more important for avoiding repetition. Mathematical didactics is very much concerned with that, but I see no didactic arguments in the paper; it's only stylistic.


A lot of the problems with mathematical notation, I think, are solved problems in programming languages. Mathematicians could learn a lot from programmers in this regard:

- Variable Scope. In mathematical writing, the convention is that every variable is a global variable, and in a paper you quickly run out of variable names to use. There should be a way to declare the scope of a variable: "for this chapter only, x is this vector."

- Strict Typing. You skim a paper, and you see "x + y". But what is x? A matrix, a vector, a scalar, an operator, a set, a random variable? You have no idea. The "+" operator is so overloaded that x and y could be anything. Having to hunt down the definitions for every variable is a time-consuming and frankly wasteful process. Conventions have been built up around this (upper-case letters A, B at the beginning of the alphabet are matrices, but upper-case letters at the end, X, Y, are random variables), but they are mostly rather silly. There needs to be an easy way to infer the type of a variable quickly.

- Anonymous Objects. There is no way to declare a function without either giving a name to it (f(x) = x^2) or at least declaring its input symbols (x |-> x^2). But you can't write, say, (x |-> x^2)(y). Using a symbol to represent a disposable function seems a waste of a valuable global variable (see point 1 again) and is inelegant.
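
For comparison, a minimal Python sketch of what that last point asks for; the function literal can be applied in place without ever consuming a name:

    # A named function, like f(x) = x^2
    def f(x):
        return x**2

    # An anonymous function applied directly: no global name consumed
    print((lambda x: x**2)(3))   # prints 9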


> Variable scope

This goes against the very nature of mathematical variables! The whole point is that the symbols are suggestive. This kind of thing allows the reader to quickly get up to speed with what the symbols are all about. As a side-effect, an unconventional use of symbols is usually quite grating to the experienced mathematician. Usually:

- x, y, z stand for variables

- i, j, k stand for subscripts and superscripts

- m, n stand for integers

- theta, phi stand for angles

- alpha, beta stand for real numbers of some sort

- capital letters are usually sets, transformations and such

> Strict typing

Again. Mathematics is about abstraction. Here "+" means that you can add the two objects, whatever they are. That's all there is to it. Of course, they are mentally typecast to the more general type. The point of this overloading is that, again, its suggestive nature makes it easier to read and write. The abuse is so pervasive that something like 1 + A can easily mean that, yes, 1 is the identity matrix.
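
As a toy illustration of that last point, here is a sketch (my own, in Python) of a "+" that reads a bare scalar as a multiple of the identity, so that 1 + A really does mean I + A:

    # Toy 2x2 matrix whose '+' typecasts a scalar s to s*I
    class Mat2:
        def __init__(self, a, b, c, d):
            self.m = (a, b, c, d)

        def __add__(self, other):
            if isinstance(other, (int, float)):   # read the scalar s as s*I
                other = Mat2(other, 0, 0, other)
            return Mat2(*(s + t for s, t in zip(self.m, other.m)))

        __radd__ = __add__   # so that 1 + A works, not just A + 1

        def __repr__(self):
            return "Mat2%s" % (self.m,)

    A = Mat2(1, 2, 3, 4)
    print(1 + A)   # Mat2(2, 2, 3, 5), i.e. I + A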

> Anonymous Objects

I agree. Although it's much easier to, once and for all, declare "Let f(x) = x^2" and use "f" instead of the anonymous function every time.

All that this kind of thing highlights is, IMHO, the fundamental difference between being a mathematician and being a software developer (and I've said it before if you look through my comment history):

- Mathematicians manipulate a given symbol, often thousands of times, in their heads or on paper

- Software developers read a given line of code, often thousands of times, in order to understand what the code does

These differences drive, IMHO, 99% of the difference in notation.


> As a side-effect, an unconventional use of symbols is usually quite grating to the experienced mathematician.

There's a great spoof paper that demonstrates this by giving each variable the next available letter of the alphabet. Something like "Let a be a set of points in the plane. Let b, c and d be elements of a. Let e be the angle formed by..."

It becomes unreadable very quickly. Sadly I can't find the original now - anyone know what I'm talking about?


Not really disagreeing, just chiming in as a mathematician who does a lot of programming:

Mathematicians handle variable scope all the time--I would argue that a variable's scope is generally the section, subsection, lemma, theorem, etc. that it is defined in. If I write "let x be ..." in the proof of a lemma, I sure don't intend for it to be a global variable.

It's not unusual to have an index of symbols at the end of a paper or book.

Also, anonymous objects are fine and used all the time. "Applying z => z^2 to the region U, ..." or some such thing.

I'm not sure it would be very useful to try to coordinate some standard notation for all of this. Anyway, one nice thing about writing math is that you can make up whatever notation you need, as you need it, and no compiler can tell you not to :)


An editor compiling submitted papers could complain.


We've entered territory where solid programming is back at its mathematical and logical roots. After all, the lambda calculus lineage comes from there.


Mathematician here. There seems to be a sentiment here that the hard part of mathematics is the notation, and that this could be solved by just implementing all of mathematics in a programming language. I quite disagree. I believe that mathematics is hard because explaining ideas is hard. Additionally, it is quite possible to state and prove an important theorem without at all explaining where the theorem comes from, why it is important, and how on earth you might ever cook up something like it. (In fact, I think there is a great lack of good mathematical resources, even in pure mathematics.)

With regards to notation and mathematical writing in general, the problem is that it is never explicitly taught. Too many students of mathematics never learn to properly structure mathematical sentences. In studying programming languages, you have no choice but to understand the structure explicitly. I do think mathematics has all the features you mention; they're just not explicit.

> Variable Scope. In mathematical writing, the convention is that every variable is a global variable, and in a paper you quickly run out of variable names to use.

Variables usually have local scope. In a mathematical statement like "for a continuous function f ...", the letter f is not defined after the sentence ends. Another common way to "initialize" variables is with statements like "Let f be a continuous function"; while it is true that such a variable is never formally cleared, it is reintroduced when needed in another context. In some rare cases, you might start a chapter with "In this chapter, we let $H$ denote a fixed Hilbert space...", which is the mathematical equivalent of a global variable.
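
A loose analogy in Python terms (my gloss, purely illustrative):

    # "In this chapter, we let H denote a fixed Hilbert space": module level
    H = "a fixed Hilbert space"

    def lemma_proof():
        # "Let f be a continuous function": local to this proof
        f = "a continuous function"
        return "Working in %s with %s." % (H, f)

    print(lemma_proof())
    # f is out of scope here; the letter is free to be reused, as in a paper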

> Strict Typing. You skim a paper, and you see "x + y". But what is x? A matrix, a vector, a scalar, an operator, a set, a random variable? You have no idea. The "+" operator is so overloaded that x and y could be anything.

I don't understand how programming languages are different here. Most modern programming languages also have operator overloading or polymorphism which has precisely this problem. Do you prefer C-style add_object_of_type_T(x, y)?
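
For instance, in Python the very same "+" already means at least three different things, and the reader disambiguates from the operands' types, exactly as in a paper:

    print(2 + 3)        # 5      (integer addition)
    print("a" + "b")    # 'ab'   (string concatenation)
    print([1] + [2])    # [1, 2] (list concatenation)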

> Having to hunt down the definitions for every variable is a time-consuming and frankly wasteful process. Conventions have been built up around this (upper-case letters A, B at the beginning of the alphabet are matrices, but upper-case letters at the end, X, Y, are random variables), but they are mostly rather silly. There needs to be an easy way to infer the type of a variable quickly.

The common convention is to introduce variables just before you use them. That's just how I would do it in a programming language. You seem to desire an IDE for mathematics, so that you could ctrl-click any variable to jump to its definition. Though I have to say, if that's such a problem, the paper is probably horribly structured.

I don't understand your problem with conventions. Conventions are suggestive, to help you remember the definition, much in the same way we choose suggestive variable names in programming languages.

> Anonymous Objects. There is no way to declare a function without either giving a name to it (f(x) = x^2) or at least declaring its input symbols (x |-> x^2). But you can't write, say, (x |-> x^2)(y).

Sure you could write (x |-> x^2)(y); you're allowed to introduce any notation you like. But why wouldn't you just write that expression as y^2?

You could declare a function as "the function which ...", which doesn't give it a name. If you want a symbolic expression, I'm not sure how you would do that without declaring any input symbols.


Some great initial ideas here, but the focus still seems to be on notation for pencil and paper. This is the medium of the past and it's worth thinking about how mathematical ideas should be represented in a computing medium. Some potential ideas here:

- http://geometry.mrao.cam.ac.uk/

- http://worrydream.com/KillMath/


All the trigonometry symbols look too much alike (small round things with scribbles the size of an ant's leg attached to them).

Likewise, the base-8 numerals look fun. However, one benefit of the regular numerals is that they look both pleasant and distinct from each other in the flow of regular text, both in print and in handwriting (at least if you add the extra horizontal bar to the 7).

Actually, if one were to adopt all of the proposed notation, it appears that one might get lost in a forest of slightly different combinations of arrows that mean different things.

The quantifier notation is the only thing I actually might like. (It's also backwards-compatible.)


From "Surely You're Joking, Mr. Feynman!":

"While I was doing all this trigonometry, I didn't like the symbols for sine, cosine, tangent, and so on. To me, "sin f" looked like s times i times n times f! So I invented another symbol, like a square root sign, that was a sigma with a long arm sticking out of it, and I put the f underneath. For the tangent it was a tau with the top of the tau extended, and for the cosine I made a kind of gamma, but it looked a little bit like the square root sign. Now the inverse sine was the same sigma, but left -to-right reflected so that it started with the horizontal line with the value underneath, and then the sigma. That was the inverse sine, NOT sink f--that was crazy! They had that in books! To me, sin_i meant i/sine, the reciprocal. So my symbols were better."

And some examples here: https://tex.stackexchange.com/questions/274463/feynman-trig-....


My favorite new math notation is the triangle of power: https://www.youtube.com/watch?v=sULa9Lc4pck

I slightly modified it to avoid subscript and keep it closer to existing notation for powers: http://i.imgur.com/hOz9MXa.jpg


I think the author misses what actually drives innovation in notation: mathematicians are lazy.

A simple exercise, often used to teach children, is to start without notation. Do exercises like these (translated from the Hungarian way of saying numerals):

Write down the questions the teacher speaks and give answers: "Two tens three plus one ten six equals?" Three tens nine (that is, 23 + 16 = 39). "Seven tens one plus two equals?" Seven tens three (71 + 2 = 73). ... Do about five of these. Children will ask if they can write less. Do a few more with comparisons ("is greater than or equal to"), simplifying equations ("is equivalent to"), and lines of equations ("has the same solutions as"), and children understand.

This leads to fixing notation that causes errors: "6x3" can read like "6x times 3". Poor handwriting confuses "6" with "8". Pi is usually a less interesting number than 2*pi. Kinematics uses all four corners of a letter, e.g., "joint x, in frame r, at time t, ..."

There is space for better notation, or at least an appreciation of how it ends up where it does.


This might be well known, but is there a good resource for learning what the various mathematical symbols mean? I only got as far as first-year uni maths, and I find that when I look at maths papers that are relevant to a problem I want to solve, I can't make much progress on working out what all the symbols mean.

Almost all the time, once I see the code that implements the mathematics, it is easy to understand; but making sense of all the different symbols is very hard, especially when different fields seem to use the same symbols to mean totally different things.


> especially when different fields seem to use the same symbols to mean totally different things

That is the problem: there are far too few letters and symbols to describe all the objects used in modern mathematics. You need to reuse symbols, and therefore any paper must declare the symbols it uses. Sometimes authors are even forced to use the same symbol for different things, so you need some experience in the field to understand which reading is correct each time.

The symbol declaration can be done explicitly (by stating the fact, usually at the beginning or in a dedicated section; some books also have a table of notations with references to definitions) or implicitly, when the paper is targeted to scholars that are most probably already familiar with that notation. The latter is probably not very friendly to newcomers, but in most cases if you do not know what implicitly defined objects are, then you probably cannot understand the paper anyway.

In short, there are very few "short ways" into mathematics. If you want to understand things, you have to spend time and study them.


It is a huge job to bring my mathematics up to the required level :)

About the only way I have been able to short-circuit the process is to outsource to someone who can turn the paper into code -- once they do this, the maths becomes easy for me to understand.


As others have mentioned, the Wikipedia article [0] provides a good listing of common symbols. If you are trying to read something that uses symbols not in that list (or in a way different from that list), your best bet would be to find a textbook on the subject. Most math books have a glossary of symbols at the beginning or end.

Also, as long as we are on the topic of bad math notation, I would caution that some sources use the symbols in subtly different ways, so you should be sure to check the paper itself for specific conventions.

[0] https://en.wikipedia.org/wiki/List_of_mathematical_symbols



Thanks. This would make a fantastic cheat sheet project.

Edit. This project looks interesting [1].

1. https://github.com/Jam3/math-as-code/issues


A better notation is a digital one, which can be zoomed by clicking on a component. Click on the summation symbol to get to the Wikipedia article on summation: https://en.wikipedia.org/wiki/Summation Click on an item being worked on to see its origin and a history of its access and usage within the formula/proof.

TL;DR: A better mathematical notation is a VS for math.


This is a genius paper. We should think about evolving the language of mathematics just as other languages (programming, English, etc.) evolve to become more friendly and empowering. I love the idea behind it; of course the proposal has many points to be debated as to how and what, but the core idea is genius.


Replacing cosines and sines with dials makes typesetting a nightmare. IMHO, I don't see this as being helpful at all.


Not only that, but at a glance they're near identical.


Interesting thought, but Kenneth Iverson got the Turing Award for APL, which was a programming-language implementation of his new mathematical notation system. His paper "Notation as a Tool of Thought" is quite interesting. You can take a look at any APL code and see it in action. Cool stuff.


The part on quantifiers reminds me of point-free notation in Haskell. Not in a good way, either...


The title is a bit misleading, though enticing ... IMO, it is not even a dead horse ...


> Towards a Better Notation for Mathematics for Engineers



