> ... a paper that uses entirely unnecessary equations, and doesn't bother to define symbols. This practice wastes everyone's time.
I agree with you there: notation should be defined; otherwise it doesn't help explain anything.
However, consider that for people who are already used to the notation those "unnecessary" equations are actually more compact and precise than reading the accompanying text. What seems difficult to you may be easy for someone else, and vice versa.
> ... the key to it is to force mathematicians to name their variables.
I believe you overestimate how much of an improvement that would be. How much code have you seen that had variables like int num or String str? If num can be any arbitrary number, and str is just a generic string, there isn't necessarily a more descriptive name you can use.
Mathematics is full of these cases. Say some general equation involves three real numbers and a real-valued function of two real parameters. They are completely generic, so a mathematician might name them x, y, z and f and write the equation as f(x,y) = z.
What would you call them? All I can think of would be "the first parameter", "the second parameter", "the result", and "the function", which is not much more descriptive for the added verbosity.
The problem with using descriptive names in mathematics is that most entities you are dealing with are so generic that naming them doesn't help. Of course I'm not opposed to e.g. writing NormalDistribution instead of just N, since that is actually a very specific concept. But there is just no way to completely eliminate single-letter variables from mathematics without using an equally nondescriptive replacement.
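A quick sketch of the point in code (the names and types here are my own, purely illustrative): the "descriptive" version is longer, but the names can only restate the signature, because the entities themselves are fully generic.

```java
// A maximally generic "equation": a binary function applied to two values.
interface BinaryFunction {
    double apply(double x, double y);
}

class GenericNaming {
    // Terse version: reads like the mathematics, f(x, y) = z.
    static boolean satisfies(BinaryFunction f, double x, double y, double z) {
        return f.apply(x, y) == z;
    }

    // "Descriptive" version: longer, but the names only restate the types.
    static boolean satisfiesVerbose(BinaryFunction functionOfTwoReals,
                                    double firstParameter,
                                    double secondParameter,
                                    double expectedResult) {
        return functionOfTwoReals.apply(firstParameter, secondParameter)
                == expectedResult;
    }
}
```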
> However, consider that for people who are already used to the notation those "unnecessary" equations are actually more compact and precise than reading the accompanying text. What seems difficult to you may be easy for someone else, and vice versa.
A struggle for me over the years has been that there does not really seem to be any way of learning the notation, separate from the standard university education process. I'm not about to drop out of my career and go back to school just to learn to read mathematical notation, and it's basically impossible to look up the definitions for obscure symbols with unguessable names, so I simply remain ignorant.
Can you give an example of a place you've seen notation you can't understand? Typically, papers include a brief notation section where they define notation you can then google. In other cases, I've normally been able to google things like "what does the bar over <entity> mean?" successfully.
The last paper I remember trying to crack was Damas & Milner, on type systems, since I was working on a problem which appeared to be analogous to type inference. I asked a friend for help, and he very kindly translated it for me; he's since written a long series of blog posts based on those emails:
As with virtually every other CS paper using math notation, I found the Damas & Milner paper utterly incomprehensible at first, but once I'd seen the formulas translated into a notation I can actually read, I was able to go back and learn something useful from it.
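To give a flavour of the kind of translation that helped (my own illustrative rendering, not my friend's): a typing judgment like Γ ⊢ e : τ (the exact letters vary by paper) reads as "in environment Γ, expression e has type τ", which maps naturally onto a method signature.

```java
// Hypothetical stubs of my own, not the paper's definitions:
class Env {}   // Γ: maps term variables to types
class Expr {}  // e: a term of the language
class Type {}  // τ: the type assigned to the term

interface TypeInference {
    // The judgment "Γ ⊢ e : τ" read as a function:
    // given environment env and expression expr, produce the type.
    Type infer(Env env, Expr expr);
}
```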
Unfortunately, it appears to be the case that there really is no one such thing as "math notation", not in the sense that there is one programming notation called "Python" and another called "Haskell" and yet another called "C++", such that one can go read a tutorial for some specific language and thereby come to understand how its notation works. Instead, it appears that "math notation" is a huge collection of little micro-notations, all mixed up together in a more or less ad-hoc fashion by the author of each paper.
Naming is hard, no question. Context helps, though. In the realm where I live, computer science, x, y, and z likely correspond to something in the problem domain, like "rate" or "time". And even if we're dealing in pure, as opposed to applied, math, you should still be able to come up with names like "surfaceArea" or "interval".
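For instance (an invented example, using the usual distance = rate × time relation):

```java
// Illustrative only: once the variables belong to a problem domain,
// descriptive names do real work that x, y, and z cannot.
class Travel {
    static double distance(double rate, double time) {
        return rate * time;  // clearer than f(x, y) = z for this specific problem
    }
}
```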
It is impossible to do better than one-letter variable names here. This goes for most expressions in maths: to keep things simple, we use one-letter variables for everything. This forces us to learn that the names aren't important; all that matters is how the names relate to each other.
The problem with notation only arises when non-mathematicians write down formulas without properly defining what everything is. That is not a problem with mathematical notation; it is a problem with statisticians, economists, physicists, computer scientists, and so on, who each define their own global naming rules, under which P might denote probability, price, momentum, or polynomial-time problems.
The problem with the quadratic formula is that, without referring back to the polynomial or thinking through the proof, I have no way to know which coefficient corresponds to which term of the polynomial. Here's an improvement on the variable naming:
x and y are descriptive of an overwhelmingly common abstraction, the Cartesian coordinate system. They do about the best job they can do.
The coefficients (a, b, c) are opaque. They would be better generalized as (C_2, C_1, C_0), written with true subscripts rather than the ASCII _N, so that C_N represents "the coefficient of the x^N term of the polynomial".
A few extra characters that are acceptable to mathematicians (subscripts are ++good) would make the formula much more readable, and let us generalize to higher-order polynomials with similar solutions (not that those necessarily exist); the rewritten formula below shows the difference.
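Concretely, assuming the standard formula for the roots of ax^2 + bx + c = 0, the renaming would look like this:

```latex
% Standard form, with opaque coefficient names:
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}

% Same formula for C_2 x^2 + C_1 x + C_0 = 0, where each subscript
% names the degree of the term that coefficient belongs to:
x = \frac{-C_1 \pm \sqrt{C_1^2 - 4 C_2 C_0}}{2 C_2}
```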