> This is actually why I feel that mathematical texts tend to be not rigorous enough, rather than too rigorous.
The thing that mathematicians refuse to admit is that they are extremely sloppy with their notation, terminology and rigor. Especially in comparison to the average programmer.
They are conceptually/abstractly rigorous, but in "implementation" are incredibly sloppy. But they've been in that world so long they can't really see it / just expect it.
And if you debate with one long enough, they'll eventually concede and say something along the lines of "well math evolved being written on paper and conciseness was important so that took priority over those other concerns." And it leaks through into math instruction and general math text writing.
Programming is forced to be extremely rigorous at the implementation level simply because what is written must be executed. Engineering abstraction, meanwhile, is extremely conceptually sloppy, and if it works it's often deemed "good enough"; math is generally the exact opposite. Even for a simple case, take the number of symbols that have context-sensitive meanings: mathematicians will use them without declaring which context they are using, and the reader is simply supposed to infer correctly. It's actually somewhat funny because it's not at all how they see themselves.
> The thing that mathematicians refuse to admit is that they are extremely sloppy with their notation, terminology and rigor. Especially in comparison to the average programmer.
Not sure why you say that. Mathematicians are pretty open about it. The well-known essay "On Proof and Progress in Mathematics" discusses it; it was written by Fields medalist William Thurston.
This drove me mad when I had to do introductory maths at uni. Maths as written did seem pretty sloppy and not at all like a programming language whose expressions I could parse as I expected. Obv most simple algebra looks about as you'd expect, but I clearly recall feeling exactly what you describe in some cases, and I commented on it to the lecturer during a tutorial, asking why it was that way. He humoured me; he was a good guy.
But I think mathematicians probably have a point - it did evolve that way over a long time and anyone practicing it daily just knows how to do it and they're not going to do a thorough review and change now.
It's us tourists that get thrown for a loop, but so it goes. It's not meant for us.
> Maths as written did seem pretty sloppy and not at all like a programming language whose expressions I could parse as I expected.
Look at Lean's mathlib: that's what fully formal mathematics looks like. It's far too verbose to be suitable for communicating with other people; you might as well try to teach an algorithms course transistor by transistor.
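To give a sense of that verbosity, here's a minimal self-contained sketch (Lean 4, using my own toy definition rather than mathlib's): "the sum of two even numbers is even" is a one-line remark on paper, but formally every step has to be spelled out.

```lean
-- Toy definition, not mathlib's `Even`; names here are mine.
def MyEven (n : Nat) : Prop := ∃ k, n = 2 * k

-- On paper: "the sum of two even numbers is even, obviously."
theorem myEven_add {m n : Nat} (hm : MyEven m) (hn : MyEven n) : MyEven (m + n) :=
  match hm, hn with
  | ⟨a, ha⟩, ⟨b, hb⟩ =>
    -- m = 2 * a and n = 2 * b, so m + n = 2 * (a + b); the arithmetic is left to `omega`.
    ⟨a + b, by omega⟩
```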
You’re confusing syntax and semantics. Programmers write code for syntax machines (Turing machines). The computers care a lot about syntax and will halt if you make an error. They do not care at all about semantics. A computer is happy to let you multiply a temperature in Fahrenheit times a figure in Australian dollars and subtract the volume of the earth in litres, provided that these numbers are all formatted in a compatible enough way that they can be implicitly converted (this depends on the programming language but many of them are quite liberal at this).
If you want the computer to stop you from doing such nonsense, you’ve got to put in a lot of effort to make types or contracts or write a lot of tests to avoid it. But that’s essentially a scheme for encoding a little bit of your semantics into syntax the computer can understand. Most programmers are not this rigorous!
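To make that concrete, a toy Java sketch (all class and variable names are mine, purely illustrative): the compiler is perfectly happy with the nonsense over bare doubles, and only rejects it once a little semantics has been encoded into the types.

```java
// A toy sketch, assuming nothing beyond the JDK; all names are mine.
public class UnitsSketch {
    // Wrapper types: with these, "Fahrenheit times dollars" is not even expressible.
    record Fahrenheit(double value) {}
    record AustralianDollars(double value) {}

    public static void main(String[] args) {
        // Over bare doubles the compiler happily accepts semantic nonsense:
        double tempF = 98.6;           // body temperature in Fahrenheit
        double priceAud = 42.0;        // a price in Australian dollars
        double earthLitres = 1.08e24;  // volume of the Earth in litres, roughly
        double nonsense = tempF * priceAud - earthLitres;
        System.out.println(nonsense);  // runs fine, means nothing

        Fahrenheit t = new Fahrenheit(98.6);
        AustralianDollars p = new AustralianDollars(42.0);
        // double oops = t * p;        // does not compile: the nonsense is now caught
        System.out.println(t.value() + " F, " + p.value() + " AUD");
    }
}
```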
Mathematicians, on the other hand, write mathematics for other humans to read. They expect their readers to have done their homework long before picking up the paper. They do not have any time to waste in spelling out all the minutiae, much of which is obvious and trivial to their peers. The sort of formal, syntax-level rigour you prefer, which can be checked by computers, is of zero interest to most mathematicians. What matters to them, at the end of the day, is making a solid enough argument to convince the establishment within their subfield of mathematics.
But programmers are expected to get the semantics right. Sure, mismatching temperatures and dollars happens, but it's called a bug and you will be expected to fix it.
Why do mathematicians hide their natural way of thinking? They present their finished work and everyone is supposed to clap. Why can't they write longer articles about false starts, dead ends and so on? It's only through magazines like Quanta and YouTube channels that we get to feel the thinking process. Math is not hard. At least the mathematics we are expected to know isn't.
Mathematics is extremely hard. The math people are expected to know for high school is not hard, but that is such a minuscule amount of math compared to what we (humans) know, collectively.
Mathematicians do speak and also write books about the thinking process. It’s just very difficult and individualized. It’s a nonlinear process with false starts and dead ends, as you say.
But you can’t really be told what it feels like. You have to experience it for yourself.
>Even for a simple case, take the number of symbols that have context-sensitive meanings: mathematicians will use them without declaring which context they are using, and the reader is simply supposed to infer correctly.
Yes!! Like, oh, you didn't know that p-looking thing (rho) means Pearson's correlation coefficient? That theta means an angle? Well just sit there in ignorance because I'm not going to explain it. And those are the easy ones!
My experience with the average programmer is...different from yours. The software development field is exceptionally bad in this regard. Physicists are mathematically sloppy sometimes (why, yes, I will just multiply both sides by `dy` and take as many liberties with operators, harmonics/series, and vector differential operations as I care to, thanks).
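For anyone who hasn't seen that trick, the kind of manipulation I'm teasing looks like this (a standard separable ODE of my own choosing), treating the derivative as a fraction and getting away with it:

```latex
\frac{dy}{dx} = ky
\;\Rightarrow\; \frac{dy}{y} = k\,dx
\;\Rightarrow\; \int \frac{dy}{y} = \int k\,dx
\;\Rightarrow\; \ln\lvert y\rvert = kx + C
\;\Rightarrow\; y = A e^{kx}.
```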
Mathematics, like any other academic field, has jargon (and this includes notation, customary symbols for a given application, etc.), and students of the field ought to learn the jargon if they wish to be proficient. On the other hand, textbooks meant to instruct ought to teach the jargon. It's been forever since I've opened a mathematics textbook; I don't recall any being terribly bad in this regard.
Well I have a different approach. Sometimes I write and hack it to solve a particular problem. The code might be elegant or not, but if you understand the problem you can probably grok the code.
Next I generalize it a bit. Specific variables become configurable parameters. Something that happened implicitly or with a single line of code gets handled by its own function. Now it's general but makes much less sense at first because it's no longer tied to one problem, but to a whole set of problems. It's a lot less teachable and certainly not self-evident any more.
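As a toy Java sketch of what I mean (the problem and names are invented for illustration): the first version solves exactly one problem and is easy to grok; the generalized one is reusable but no longer self-evident.

```java
import java.util.List;
import java.util.function.ToDoubleFunction;

public class GeneralizeSketch {
    // First pass: written to solve exactly one problem, easy to grok.
    static double totalPriceWithGst(List<Double> prices) {
        double total = 0;
        for (double p : prices) total += p * 1.10; // 10% GST, hard-coded
        return total;
    }

    // Second pass: generalized. More reusable, but it now describes a whole
    // family of problems, so it no longer explains itself.
    static <T> double accumulate(List<T> items, ToDoubleFunction<T> measure, double factor) {
        double total = 0;
        for (T item : items) total += measure.applyAsDouble(item) * factor;
        return total;
    }

    public static void main(String[] args) {
        List<Double> prices = List.of(10.0, 20.0, 5.5);
        System.out.println(totalPriceWithGst(prices));
        System.out.println(accumulate(prices, Double::doubleValue, 1.10)); // same answer
    }
}
```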
The problem with math education is that we think the second, generalized approach is inherently superior to the first and would make a better textbook, because it's more generic. But that is not how real people learn; they would all "do" math the first way. By taking away the student's ability to do the generalization themselves, we are depriving them of the real pleasure of programming (or math).
Maybe back when paper was scarce this approach made sense but not any more.
Ideally I would love to present a million specific solutions and let them generalize themselves. That is exactly how we would train an ANN: not by regurgitating the canned solution but by giving it all sorts of data and letting it generalize for itself. So why don't we do this for human students? When it comes to education I think people have a blind spot towards how learning is actually done.
Notation and terminology? Sure, some explanations and mechanical manipulations are elided in mathematics because with context they're clear.
Rigor? Ha, you have got to be kidding me. Math is rigorous to a fault. Typical computer programming has no rigor whatsoever in comparison. A rigid syntax is not rigor. Syntax, in turn, is certainly not the difficult part of developing software.
This, really. Sometimes, when reading math papers, you find that they end up being very hand-wavy with the notation, e.g. with subscripting, because "it's understood". But without extensive training in mathematics, a lot of it is not understood.
Programmers will use languages with a lot of syntactic sugar, and without knowing the language, code can be pretty difficult to understand when it is used. But even then, you can't be sloppy, because computers are so damn literal.
> The thing that mathematicians refuse to admit is that they are extremely sloppy with their notation, terminology and rigor.
The refusal part is IMO very dependent on the person. Nearly all of my professors for theoretical CS courses just plainly said that their "unique" notation is simply the way they like it.
It's more or less just a simple approach of altering the language to fit your task. This is also not unfamiliar to programmers, who may choose a language based on the task, with, e.g., Fortran for vector-based calculations or C for direct hardware access.
Bro, I know this feel. Even books teaching algorithms written by mathematicians have errors everywhere.
They never state the type or class, there are no comments, no explanations, reads past the last index... the list goes on endlessly. When they say "let's declare an empty set for variable T", you don't know whether the thing is a list, a set, a tuple, an ndarray, a placeholder for a scalar, or a graph.
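In Java terms, that single sentence of pseudocode could legitimately mean any of these, and the reader has to guess (my own guesses below, which is exactly the problem):

```java
import java.util.*;

public class DeclareT {
    public static void main(String[] args) {
        // "Let's declare an empty set for variable T" -- but which of these?
        Set<Integer> t1 = new HashSet<>();                 // an actual set
        List<Integer> t2 = new ArrayList<>();              // a list (order/duplicates matter?)
        Deque<Integer> t3 = new ArrayDeque<>();            // a stack or a queue?
        double[] t4 = new double[0];                       // an array of numbers?
        Map<Integer, List<Integer>> t5 = new HashMap<>();  // an adjacency list, i.e. a graph?
        System.out.println(List.of(t1.size(), t2.size(), t3.size(), t4.length, t5.size()));
    }
}
```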
Some even provide actual code, but never actually run it to verify its correctness.
Try this guy then: he's got a PhD in mathematics from the California Institute of Technology with a thesis on Finite Semifields and Projective Planes, but he's written a bunch of stuff on algorithms and will write you a check for any errors you find in his work: https://en.wikipedia.org/wiki/Donald_Knuth
Church organist actually - serious enough to have a two story pipe organ built into his own home.
Enough of the No True Scotsman... it's clear as day from the introduction and opening chapters of TAOCP that he approaches programming as a mathematician.
I believe that any mathematician who has taken a differential geometry class must have already realized this: the notation is so compressed and implicit that some proofs are practically "by notation", as it can be a daunting prospect to expand a dozen indices.
The average computer scientist (not only a "programmer", as a js dev would be) has never written Lean/Coq or similar, and is not aware of the Curry-Howard correspondence and its implications.
I think you entirely missed the point. GP put it well:
>> They are conceptually/abstractly rigorous, but in "implementation" are incredibly sloppy.
Maturity in concept-space and the ability to reason abstractly can be achieved without the sort of formal rigor required by far less abstract and much more conceptually simple programming.
I have seen this first hand TAing and tutoring CS1. I regularly had students who put off their required programming course until senior year. As a result, some were well into graduate-level mathematics and at the top of their class but struggled deeply with the rigor required in implementation. Think about, e.g., missing semi-colons at the end of lines, understanding where a variable is defined, understanding how nested loops work, simple recursion, and so on. Consider something as simple as writing a C/Java program that reads lines from a file, parses them according to a simple format, prints out some accumulated value from the process, and handles common errors appropriately. Programming requires a lot more formal rigor than mathematical proof writing.
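For reference, here is roughly the exercise I have in mind, sketched under an assumed format of one integer per line, blank lines skipped, bad lines reported (the format and names are my own). Even this much implementation bookkeeping is what tripped them up.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class SumLines {
    // Reads a file with one integer per line, prints the total,
    // and handles the common failure modes explicitly.
    public static void main(String[] args) {
        if (args.length != 1) {
            System.err.println("usage: java SumLines <file>");
            System.exit(1);
        }
        long total = 0;
        try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
            String line;
            int lineNo = 0;
            while ((line = in.readLine()) != null) {
                lineNo++;
                line = line.trim();
                if (line.isEmpty()) continue;        // skip blank lines
                try {
                    total += Long.parseLong(line);   // parse the simple format
                } catch (NumberFormatException e) {
                    System.err.println("bad number on line " + lineNo + ": " + line);
                }
            }
        } catch (IOException e) {
            System.err.println("cannot read " + args[0] + ": " + e.getMessage());
            System.exit(1);
        }
        System.out.println(total);
    }
}
```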
You have a valid point, which is that we are not even being rigorous enough about the meaning of the word “rigor” in this context.
- One poster praises how programming needs to be boiled down into executable instructions as “rigor,” presumably comparing to an imaginary math prof saying “eh that sort of problem can probably be solved with a Cholesky decomposition” without telling you how to do that or what it is or why it is even germane to the problem. This poster has not seen the sheer number of Java API devs who use the Spring framework every day and have no idea how it does what it does, the number of developers using Git who do not understand what it is or how it uses the filesystem as a simple NoSQL database, or the number of people running on Kubernetes who do not know what the control plane is, do not know what etcd is, and have no idea what a custom resource definition is or when it would be useful... If we are comparing apples to apples, “rigor” meaning “this person is talking about a technique they have run across in their context and, rather than abstractly indicating that it can be used to fix a problem without exact details of how it does that, they know the technique inside and out and are going to patiently sit down with you until you understand it too,” well, I think the point more often goes to the mathematician.
- Meanwhile you invoke correctness and I think you mean not just ontic correctness “this passed the test cases and happens to be correct on all the actual inputs it will be run on” but epistemic correctness “this argument gives us confidence that the code has a definite contract which it will correctly deliver on,” which you do see in programming and computer science, often in terms of “loop invariants” or “amortized big-O analysis” or the like... But yeah most programmers only interact with this correctness by partially specifying a contract in terms of some test cases which they validate.
That discussion, however, would require a much longer and more nuanced discussion that would be more appropriate for a blog article than for an HN comment thread. Even this comment pointing out that there are at least three meanings of rigor hiding in plain sight is too long.
>> Programming requires a lot more formal rigor than mathematical proof writing.
> This is just wrong? Syntax rigour has almost nothing to do with correctness.
1. It's all fine and well to wave your hand at "Syntax rigour", but if your code doesn't even parse then you won't get far toward "correctness". The frustration with having to write code that parses was extremely common among the students I am referring to in my original post -- it seemed incidental and unnecessary. It might be incidental, but at least for now it's definitely not unnecessary.
2. It's not just syntactic rigor. I gave two other examples which are not primarily syntactic trip-ups: understanding nested loops and simple recursion. (This actually makes sense -- how often in undergraduate math do you write a proof that involves multiple interacting inductions? It happens, but isn't a particularly common item in the arsenal. And even when you do, the precise way in which the two inductions proceed is almost always irrelevant to the argument because you don't care about the "runtime" of a proof. So the fact that students toward the end of undergraduate struggle with this isn't particularly surprising.)
Even elementary programming ability demands a type of rigor we'll call "implementation rigor". Understanding how nested loops actually work and why switching the order of two nested loops might result in wildly different runtimes. Understanding that two variables that happen to have the same name at two different points in the program might not be referring to the same piece of memory. Etc.
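Concretely, something like this sketch (sizes picked arbitrarily): the two loop nests compute the same sum, but how the traversal order lines up with memory layout makes the runtimes wildly different.

```java
public class LoopOrder {
    public static void main(String[] args) {
        int n = 4000;                       // ~128 MB of doubles
        double[][] a = new double[n][n];

        // Row-major traversal: walks each row contiguously, cache-friendly.
        long t0 = System.nanoTime();
        double s1 = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                s1 += a[i][j];

        // Same loops with the order swapped: strides across rows, far more cache misses.
        long t1 = System.nanoTime();
        double s2 = 0;
        for (int j = 0; j < n; j++)
            for (int i = 0; i < n; i++)
                s2 += a[i][j];
        long t2 = System.nanoTime();

        System.out.printf("row-major: %d ms, column-major: %d ms (sums %f %f)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, s1, s2);
    }
}
```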
Mathematical maturity doesn't traditionally emphasize this type of "implementation rigor" -- even a mathematician at the end of their undergraduate studies often won't have a novice programmer's level of "implementation rigor".
I am not quite sure why you are being so defensive on this point. To anyone who has educated both mathematicians and computer scientists, it's a fairly obvious point and plainly observable out in the real world. Going on about Curry-Howard and other abstract nonsense seems to wildly and radically miss this point.
Having taught both I get what you are saying, but the rigor required in programming is quite trivial compared to that in mathematics. Writing a well structured program is much more comparable to what is involved in careful mathematical writing. It's precisely the internal semantic coherence and consistency, rather than syntactic correctness, that is hardest.
You need more rigour to prove, let's say, the Beppo Levi theorem than to write a moderately complex piece of software.
Yet you can write it in crappy English; the medium is not the goal. The ideation process, even poorly transcribed into English, needs to be perfectly rigorous. Otherwise, you have proved nothing.
> Syntax rigour has almost nothing to do with correctness.
I see your point: has almost nothing correctness with rigour do to Syntax.
Syntax rigor has to do with correctness to the extent that "correctness" exists outside the mind of the creator. Einstein notation is a decent example: the rigor is inherent in the definition of the syntax, but to a novice it is entirely under-specified and can't be said to be "correct" without its definition already being incorporated...which is the ultimate parent post's point and, I think, the context in which the post you're replying to needs to be taken.
And if you're going to argue "This is just wrong?" (I love the passive-aggressive '?'?) while ignoring the context of the discussion...QED.
but programmers don't write underspecified notational shortcuts, because those are soundly rejected as syntax errors by the compiler or interpreter
this is not about semantics (like dependent types etc) this is just syntax. it works like this in any language. the only way to make syntax accepted by a compiler is to make it unambiguous
... maybe LLMs will change this game and the programming languages of the future will be allowed to be sloppy, just like mathematics
Yep, but for any notational shortcut the compiler infers a single thing for it. It's still unambiguous as far as computers are concerned (but it may be confusing reading it without context)
It’s not only a notational shortcut as in syntactic sugar though.
It can apply a set of rewrite rules given you were able to construct the right type at some point.
It’s type inference on steroids because you can brute force the solution by applying the rewrite rules and other tactics on propositions until something (propositional equality) is found, or nothing.
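A tiny Lean 4 sketch of that search (toy goals, nothing from mathlib): `simp` churns through its registered rewrite rules until the propositional equality closes or it gives up, while `rw` applies exactly the rules you hand it.

```lean
-- `simp` keeps rewriting with its lemma set (here Nat.add_zero) until
-- both sides are equal, then closes the goal by rfl.
example (n : Nat) : n + 0 = n := by simp

-- `rw` applies the given rewrite rules in order: drop the `+ 0`,
-- then commute the addition, and the equality closes itself.
example (a b : Nat) : a + b + 0 = b + a := by
  rw [Nat.add_zero, Nat.add_comm]
```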