A Programmer's Introduction to Mathematics (pimbook.org)
386 points by __rito__ on May 3, 2023 | 78 comments



Interesting work, but I think it neglects one of the profound distinctions in mathematics that people coming from a programming/compsci background often stumble over: that between the discrete and the continuous. This looks like a good discussion:

> "The distinction between the discrete and the continuous lies at the heart of mathematics. Discrete mathematics (arithmetic, algebra, combinatorics, graph theory, cryptography, logic) has a set of concepts, techniques, and application areas largely distinct from continuous mathematics (traditional geometry, calculus, most of functional analysis, differential equations, topology). The interaction between the two - for example in computer models of continuous systems such as fluid flow - is a central issue in the applicable mathematics of the last hundred years."

http://philsci-archive.pitt.edu/16561/1/Discrete%20and%20Con...

With respect to programming and mathematics, this line is worth thinking about: "The problem is that the continuous is easier to imagine and prove results about, but the discrete is unavoidable when it comes to calculation."


You should be looking into coalgebras. God they are so obscure but so insanely useful.

https://www.youtube.com/watch?v=XqywV-wkKSE

'discrete mathematics : algebra :: continuous mathematics : coalgebra'.

It's like the missing half of math and programming. Power series? Coalgebra. Combinatorics? Coalgebra. Continuousness? Coalgebra. Bases? Coalgebra. Control flow? Coalgebra [1].
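For a concrete taste, here is a minimal sketch (mine, not from the linked talk): a coalgebra for the functor F(X) = A × X is just a step function from a state to an (output, next state) pair, and unfolding it generates the infinite stream it describes.

    # A minimal sketch (mine, not from the linked talk): a coalgebra for
    # F(X) = A x X is a step function state -> (output, next_state);
    # unfolding it generates the infinite stream it describes.
    from itertools import islice

    def unfold(step, state):
        """Run a coalgebra: repeatedly observe an output and move to the next state."""
        while True:
            output, state = step(state)
            yield output

    # The Fibonacci numbers as the stream of a coalgebra on pairs; the
    # same shape of definition works for streams of power series coefficients.
    fib = unfold(lambda s: (s[0], (s[1], s[0] + s[1])), (0, 1))
    print(list(islice(fib, 8)))  # [0, 1, 1, 2, 3, 5, 8, 13]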

I have a Discord: https://discord.cofunctional.ai

[1] https://www.cl.cam.ac.uk/~vb358/articles/mpc15.pdf


> The problem is that the continuous is easier to imagine and prove results about [...]

Why would that be true? A cyclic group of order n, or even all of Z, seems to me easier to think about than the real numbers, which employ philosophically tricky notions such as uncountable infinity, limits, convergence, etc. There are a lot of weird and counterintuitive results even in elementary analysis because of the way the real numbers work.

Of course, there are also discrete objects that are hard to reason about.


I think this is written from the perspective of a continuous mathematician or physicist, and the examples of discrete maths they have in mind are numerical simulations, which are certainly harder to reason about than their continuous counterparts.

I also suspect that this is a fairly orthodox attitude among mathematicians - in "The two cultures of mathematics" (https://www.dpmms.cam.ac.uk/~wtg10/2cultures.pdf) Tim Gowers says with more authority than I could:

> the subjects that appeal to theory-builders are, at the moment, much more fashionable than the ones that appeal to problem-solvers.

This isn't precisely the discrete/continuous split, but it's mostly aligned, and the article puts combinatorics firmly in the latter category.


I'm not sure that the "two cultures" split aligns with continuous vs. discrete maths. There's theory-building in continuous mathematics (abstract spaces, e.g. topological, metric, Banach, ... spaces) and in discrete mathematics (theory of finite fields), whereas both areas also have computational / problem-solving aspects (proving specific inequalities in the continuous case, or proving theorems about particular kinds of graphs in the discrete case).


It is true because the discrete world has the potential for more fine structure.

As an example showing that it is true, read https://math.stackexchange.com/a/362840/6708 explaining how it is possible that the first-order theory of the real numbers is complete (every first-order statement about them is either provably true or provably false) while the theory of the integers is famously incomplete (no computable set of axioms can prove everything about them).

ALL of the philosophically tricky notions that you list are only tricky when you mix in discrete notions like "integers" in your construction. And the fact that discrete mathematics gives us things like the halting problem and incompleteness WITHOUT continuity being involved shows that it is discrete mathematics that is harder.


> As an example showing that it is true, read https://math.stackexchange.com/a/362840/6708 explaining how it is possible that the first-order theory of the real numbers is complete (every first-order statement about them is either provably true or provably false) while the theory of the integers is famously incomplete (no computable set of axioms can prove everything about them).

> ALL of the philosophically tricky notions that you list are only tricky when you mix in discrete notions like "integers" in your construction.

That may be so, but people generally don't use RCF (the theory of real closed fields) exclusively. Real analysis always looks at the real numbers as an extension of the natural numbers; you can't go anywhere without sequences and limits. So it seems disingenuous to say that continuous mathematics is easier just because RCF is complete.

> And given that discrete mathematics gives us things like the Halting problem and incompleteness WITHOUT continuity being involved, shows that it is discrete mathematics that is harder.

While this is true, it's also true that a purely additive theory of the natural numbers (Presburger arithmetic) is complete, and several other theories of discrete structures are too (for example, while group theory itself is not complete, it's certainly completable, unlike Peano arithmetic).

Also, the halting problem and incompleteness aren't where weirdness ends. There's a whole other range of weirdness that happens only once you add in uncountable infinities, such as Skolem's paradox, the Banach-Tarski paradox or the undecidability of the continuum problem.


The continuous requires the law of the excluded middle, which makes it harder to imagine as it's not easy to "construct" an image of x in our mind purely from a proof of "not not x". That's why non-continuous (constructive) maths is also known as intuitionistic maths. For anyone who's not familiar with why constructive mathematics is non-continuous, it's because without the law of the excluded middle, only the computable real numbers (and analogues) exist, not the whole real number line that's present in classical mathematics.

The Banach–Tarski paradox is an example of an unintuitive result of the LEM, a paradox that doesn't exist in constructive mathematics.


I'm sorry, but classical mathematics is perfectly able to handle continuous functions. And in fact it does so more easily. Which is why most of the important theorems were proven classically first.

Sure, Errett Bishop came along with https://www.amazon.com/Foundations-Constructive-Analysis-Err... and fixed that. But it is harder. And sure, as soon as you start looking at error bounds in numerical analysis, the classical shortcuts start to take work. But it is simply wrong to assert that the continuous REQUIRES the law of the excluded middle.

In fact every mathematician has been through the classical treatments of continuity in courses on real analysis and topology. Very few can say much that is sensible about constructivism.


The current state of the metatheory of Banach–Tarski is that it follows from ZF + the ultrafilter lemma, but not from ZF + dependent choice, and has very little to do with the excluded middle.


It's common to say that constructive mathematics is continuous, as it cannot prove the existence of a discontinuous function. One can certainly reason constructively about the various things traditionally considered continuous mathematics. I find your opposite perspective on it a bit odd.


Can't you define a discontinuous function over some other set besides the reals, such as the set of all computable numbers? For instance, a step function is discontinuous at a single point. You can also define a function that's discontinuous at every point, such as the indicator function of the rationals.

I don't have any experience with constructive set theory, so maybe I'm missing something.


I wouldn't say that it cannot prove the existence of a discontinuous function. Rather it can prove that a point of discontinuity makes it not a function.


I was in a similar situation while in college, as a math major. I was asked to run a tutoring session for the "statistics for psychology majors" course. Explaining continuous distributions -- why they were needed and had their own formulas -- was prohibitively difficult. Even the students who had learned calculus could solve the exercises, but they didn't grasp the basics of continuous math unless they were really paying attention.


Oh hi!

I'm working on a second book: Practical Math for Programmers. Some details at https://pmfpbook.org/

Every weekend I livetweet my notes and research on the new book over at @j2kun@mathstodon.xyz, e.g., https://mathstodon.xyz/@j2kun/110283189611214753

Happy to entertain any ideas folks have for topics! Got a long backlog to go through, but there's always room for more.


Thank you for making this. I'm a good programmer, but was awful at math in high school and college. I've often thought that now that I'm older (40s) and more patient (most days), I would like to try again, but have never found the right resource. I don't know if this is it, but I appreciate your effort and will order the book ASAP!


Nice. Years ago, the practical maths for my programming needs consisted of Concrete Mathematics for algorithm analysis, combinatorics for data structures, queuing theory for performance analysis, mathematical stats for data analysis, and optimization theories for discrete optimizations. Calculus and linear algebra came in here and there as tools for those maths. It looks like nowadays there's less need for diving into such maths, as well-packaged systems and libraries solve most of my problems. I'm very curious what topics your book will cover for the new generation of programmers.


Awesome, I do not have or want mastodon or any social media like it, but did sign up for your newsletter. Thank you for all your hard work.

Once the next book comes out, I’ll buy a couple of copies immediately! Looking forward to it!


Mathstodon is nice, we have LaTeX.

https://mathstodon.xyz/@JordiGH/110306262198084936


I don’t have anything against Mathstodon. I just don’t have any need for social media.


Is an early access version available for purchase? I would love to see the current state of the book.

I will be purchasing your first book sometime this week. I feel like I can finally make real progress in learning mathematics with ChatGPT as an aid.


No early access, but I published a draft of a sample chapter here, which is one of the simpler applications, but demonstrative of the type of content in the rest of the book: https://jeremykun.com/2022/05/14/practical-math-preview-coll...


I bought it back in the days when it first came out, loved it a lot. Helped me a lot as a “self-made” programmer. Shortly afterwards I actually started a formal pure math education in my 30s at uni, but with a second child on the way that journey stopped after only completing the first year. In the meanwhile there’s also a third kid. Still, really fell in love with mathematics during my career as a programmer and when the kids are bigger it will be a hobby that gets more attention again.

Suffice it to say: I love the book. Will it make you a fully fledged mathematician? Of course not. Could it teach more, or teach things better? Sure, why not. But it’s a great introduction anyway, and it may inspire many who otherwise might not be inspired. Mathematics is really fun if you’re not afraid to feel uncomfortable all the time. Luckily most good programmers probably have that same feeling all the time anyway. I can only hope for more books like this; I have some space reserved for them on one of my dusty shelves! Big thanks to the author for writing this book!


I found this a great introduction to 'proper' mathematics. It gave me enough context to be able to read more mathematics without feeling so lost. It also showed me just how bad the mathematics part of my (non-computing) engineering education was.


This seems to be interesting stuff!

It seems to be heavily focused on algebra and arithmetic. My own recommendation would be to pick up Coq or Idris and use that to bridge programming, math, and logic. In my experience this is the best way to leverage the knowledge. Understanding monads and category theory will immediately help you make better API designs, architect larger applications, etc.


Any books or links you recommend?

Also, won't learning two things at once be harder than learning each independently?


Benjamin Pierce was my go-to guy: https://softwarefoundations.cis.upenn.edu/index.html

And when that is digested: https://www.cis.upenn.edu/~bcpierce/tapl/main.html

And I think the learning should be fun. It is much more like taking an adventure into these areas, especially when it is not part of a course. So learn everything at once and write about it (this is an area that is seriously in need of some good written blog articles) - maybe that will also create some serendipitous moments :)



Here's a link that was on HN before: https://www.neilwithdata.com/mathematics-self-learner

There's no quick fix in there though.


I'd throw in Agda with that list. I've enjoyed https://github.com/jespercockx/agda-lecture-notes/blob/maste...

> Also, won't learning two things at once be harder than learning each independently?

Kind of, yes. But it's kind of funny. Like, it can take a while to get your head wrapped around functional or object-oriented programming. But once you get a sense of it, you can kind of puzzle out any of that style of code. Math is written in proofs. After you prove a + (b + c) = (a + b) + c a few times, you kinda get a sense of what to look for.

I'm not saying it's easy. I am saying it's not that different from becoming fluent in another style of programming. Functional hello world is still just hello world. Deeper, more complicated programs are hard no matter the language/style. Getting a handle on writing proofs has been personally rewarding. (I've been pretty casually studying over the last few weeks/months.) I don't think it'll advance my career or anything, but it's neat.
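To make that concrete, here is what proving associativity looks like in a proof assistant: a minimal Lean 4 sketch of my own (Lean rather than the Coq/Agda mentioned above), using the standard induction on c.

    -- A minimal Lean 4 sketch (mine): associativity of addition on Nat,
    -- proved by induction on c. `Nat.add_succ` is the core-library equation
    -- n + succ m = succ (n + m).
    theorem add_assoc' (a b c : Nat) : a + (b + c) = (a + b) + c := by
      induction c with
      | zero => rfl
      | succ c ih => rw [Nat.add_succ, Nat.add_succ, Nat.add_succ, ih]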


Why did the author choose such an awful example to start with (polynomials with real coefficients)? This is just upsetting: one would hope to understand things (i.e. to derive them from first principles), but instead is given another hand-waving "trust me bro, it works" kind of start...

Why even go into anything that requires "real" numbers when talking to programmers, if no real numbers can possibly exist in computers? You don't need "real" numbers for logic, nor for algebra, neither for combinatorics and many, many more useful areas of mathematics (from programming perspective). And yet the reader is required to take on faith some "math wizardry"...

Even if the author wanted so much to have polynomials, why not take polynomials with rational coefficients? -- A much easier-to-digest concept that requires no hand-waving and works just as well for the purposes of illustration?
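To illustrate the suggestion (a minimal sketch of my own, not from the book): with rational coefficients, every evaluation is exact and nothing needs to be taken on faith.

    # A minimal sketch (mine): a polynomial with rational coefficients is
    # exactly representable; evaluation via Horner's rule stays exact.
    from fractions import Fraction

    # 1/3 - (2/7)x + 5x^2, stored lowest degree first
    coeffs = [Fraction(1, 3), Fraction(-2, 7), Fraction(5)]

    def evaluate(coeffs, x):
        result = Fraction(0)
        for c in reversed(coeffs):
            result = result * x + c
        return result

    print(evaluate(coeffs, Fraction(1, 2)))  # exactly 121/84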


For me, and I guess for programmers in general, as presumably anticipated by the author, the concept of floating point numbers as a pure construct without having to cram them into n bits was easy to grasp, even relaxing.

Perhaps that was the intention there.

As for choosing polynomials in general as a starting point, a potential answer is maybe found in the text:

> Polynomials occur with stunning ubiquity across mathematics.

That is, beyond the concept of basic arithmetic that's surely already known, it's possibly a logical place to begin.


If you read my argument... I'm not against polynomials. I'm against using "real" numbers for programming-related examples.

Floating point numbers with arbitrary precision are not the same thing as "real" numbers. They are still rational numbers. See, even you made this mistake; not surprisingly, this will confuse (or, even worse, give a false sense of understanding to) programmers reading such examples.

Arbitrary precision floating point numbers exist and are quite tangible, while, perhaps, not very common, and that is the concept that's easy to grasp. These behave like almost any other number you know: you can add them, multiply them, take a natural logarithm of them, etc.

You cannot do any of those operations on "real" numbers in computers: because of the way arithmetic operations work, you'd have to start with the least significant digit (to know if there's a carry), but there's no way to find out what the least significant digit is going to be.


Most of machine learning uses approximations to calculus & continuous functions, and those approximations make heavy use of polynomials.

As for the definition of reals - most programmers have a basic knowledge of reals and functions already; would working from first principles be any use for programming?


Please read my argument: I'm not against the use of polynomials. I'm against using "real" numbers.

The "real" numbers taught in high-school or in college are, well, basically, a thinly veiled lie. They "work well" for students who substitute memorizing the page number of a proof of a theorem for actual understanding of a theorem, but they don't work well for mathematicians who would actually want a good theory justifying their existence.

Needless to say, nothing in computers works as a "real" number. Knowing this is important to understand that you work with, as you called them, "approximations", and that those "approximations" will have pathological cases where the distinction will bite you.

Finally, it's completely unnecessary for the purpose the author is using them for to have "real" numbers. It works perfectly fine with the much simpler and more straightforward concept of rationals, which doesn't require any pretension or wink-wink, fingers-crossed explanations.

I specifically pointed this out because I remember how in my days of being a CS student the boneheaded practicum material made my blood boil because a professor would write nonsense like "let A[j] be an array of real numbers" in a... C program! And the same boneheaded professor, when told to correct that to "floating point" or "rational" would spit some more nonsense about "real" numbers.


This is THE way to learn math as a programmer:

https://softwarefoundations.cis.upenn.edu/

With the added bonus that you learn how to prove software correct.


I like that - in addition to covering the usual topics - it has a chapter about groups. Maybe this is just my bubble but in my experience math courses for non-mathematicians are often lacking in this area. In the same vein a basic introduction to topology could be a nice addition to a future version of the book.


Linear Algebra (chap 10) always reminds me of this quote: “Classification of mathematical problems as linear and nonlinear is like classification of the Universe as bananas and non-bananas.”

And its standard reply: "It's actually bananas all the way down."


Loved it up until the introduction of proofs. Still can't wrap my mind around it. I'm beginning to think it's because "proofs" don't prove anything, they are just an argument that someone might find convincing.


A computer program can also be considered a form of constructive proof. One may be confident that the program functions as intended until a bug is discovered. This is why theorem provers have such a strong type system, which reduces the likelihood of bugs compared to mainstream programming languages. However, it is important to note that bugs may still be present, just as they can be found in formal proofs on paper.


Do they have a strong type system? Maybe I just haven't gotten to that. My issue is that there don't seem to be any rules for "evaluating" a proof to see if it is valid. Everything seems subjective? Maybe I'm just missing something. But coming from programming, I'm obviously used to a situation where I can put my code through a mechanical evaluator which will tell me if it works. Or at least, will tell me if the syntax is right!


When you prove something you are essentially proving that a statement logically follows from some other statement(s) which we know to be true.

You can evaluate the validity of a proof by checking the truthfulness of every statement which was used to argue that the proof holds.

Miniature example of a proof with relaxed rigor:

Prove that 3+0=3.

Proof: The above statement is a direct consequence of the additive identity axiom, which states that x+0=x if x is a real number. So the only thing we need to check is whether 3 is a real number, and we know that it is. The statement holds, end of proof.

So to check the validity of this proof you could check that the axiom really exists and check that 3 is a real number; if either of those is false, then the proof is invalid.

Edit: Or imagine that you have a DB full of axioms, theorems, and proven statements which you can use to prove a given statement. Then you could prove something by just referencing those. E.g. in the example above, let's say that the additive identity axiom has id 3, and the fact that 3 is a real number has id 103. Then I could just say: by 3 and 103, 3+0=3.


Proofs as usually written are informal summaries, but they are supposed (in today's age) to be linked to a very rigorous series of steps, more precise than a Haskell program. Actually, it's very similar to the execution of a Turing machine (there are formal connections):

* There is a finite set of logical rules. For example, one rule is that if A => B and B => C are true, then A => C is true.

* We start with a finite set of given facts.

* At each step, we state a new fact and note it is true by combining previous facts with a rule.

* At the end, we have the claim we started with.

Example: prove 14 is an even number.

1. If a number equals two times an integer, it is even (given fact).

2. 7 is an integer (given fact).

3. 14 = 2 times 7 (laws of multiplication).

4. 14 = 2 times an integer (combining (2) and (3)).

5. 14 is even (combining (1) and (4)).

At a more complex level, e.g. proofs by induction are often no more than programs with for loops.
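For what it's worth, the same five-step proof can be written as a machine-checked term. A minimal Lean 4 sketch (the phrasing of "even" as an existence statement is mine):

    -- A minimal Lean 4 sketch (mine): "14 is even" phrased as the
    -- existence statement "there is a k with 14 = 2 * k".
    theorem fourteen_even : ∃ k : Nat, 14 = 2 * k :=
      ⟨7, rfl⟩  -- the witness 7 is given fact (2); `rfl` checks 14 = 2 * 7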


Love your comment.

What are your favorite Math and Programming books?


Lots of favorite math books! Not as many programming. Maybe I'd recommend Sipser's Intro to the Theory of Computation?


Already read most of it.

Any lists for favorite Math book that you put together?


Though I haven't yet finished it, this is one of my favorite books!

Approachable and friendly writing style reminiscent of the "3Blue1Brown" channel on YouTube.


Looking through the ToC, I wish I had known about this book earlier. Now I am looking for books on the math of more advanced topics, such as stochastic calculus, etc.


I used to do some "competitive programming" back in university - which I think helped me grasp quite a few mathematics concepts.


Which mathematical concepts would you say that you got to grasp?

Having done competitive programming also, I would not say that it made any difference for my mathematical perception.


To name a few concepts: "permutations & combinations", "proofs by induction", "probability". I did learn some of them from university courses, but writing code for them (in the form of coding problems) helped cement a few of them for me.


How did you apply induction proofs in competitive programming?

Even using recursion is rarely a good idea, as you lose control of your memory layout.

And using a language that supports the concept of proof by induction would surely leave you in the absolute last place in competitive programming, as most of the algorithms used rely on guarantees that are extremely difficult to reason about even with some of the most recent advances in formal methods.

I do see how combinatorics works with competitive programming, though, and to an extent also probability theory, though I never used any probabilistic algorithms myself.


> Even using recursion is rarely a good idea, as you lose control of your memory layout.

I've used recursion on tons of Codeforces / ICPC problems with some caching (commonly known as "dynamic programming").


I think most problem sets would be hard to solve without the use of dynamic programming, irrespective of recursion or not. But yes, recursion with memoisation/accumulation would be a good place to start.
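For readers who haven't seen the pattern, a minimal Python sketch (the example problem is mine, not from any particular contest):

    # A minimal sketch (mine) of recursion + caching, a.k.a. dynamic
    # programming: count lattice paths from (0,0) to (r,c) moving only
    # right or down. Without the cache the recursion is exponential.
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def paths(r: int, c: int) -> int:
        if r == 0 or c == 0:
            return 1  # a single way along the grid's edge
        return paths(r - 1, c) + paths(r, c - 1)

    print(paths(10, 10))  # 184756 == C(20, 10)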


> Even using recursion is rarely a good idea, as you lose control of your memory layout.

Functional programming? NOT TODAY!


It seems like Java and Python are only just starting to get a place in competitive programming :) I definitely think we will have to wait before we get stuff like Haskell or Idris in there.


What are the math prerequisites for this book?


We need something similar for quantum physics.


Again, one more "failure" at attempting to teach maths to programmers.

My expectation is always: teach maths without using any maths language (forget theorems, signatures, ...). Is this possible? Yes.

Programmers use code, algorithms, and data structures to run the code and to understand the theory, instead of having to understand maths language, which most of the time "hurts the brain".


I thought so too. When I started Geometry for Programmers, I was going to write all the formulas as SymPy snippets. I thought that was a brilliant idea because, yes, normally programmers are much more comfortable with code rather than with math notation, and math is not about notation anyway. Case in point, you can turn any SymPy expression into LaTeX with the `latex()` function. It doesn't make the expression any more mathematical, so the other way should apply just as well, right?

But the very first review showed that I was wrong. Very few people saw this as a good idea. Most wanted both formulas and code. Apparently, there is a certain "comfortable" level of math language in a math book that readers do not wish to give up.
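To make the original idea concrete, a minimal sketch (mine, not an excerpt from the book):

    # A minimal sketch (not from the book) of the SymPy-as-notation idea:
    # write the formula as code, then render conventional notation on demand.
    import sympy as sp

    x = sp.symbols('x')
    p = 3 * x**2 - 2 * x + 1   # the formula, as a SymPy expression
    print(sp.latex(p))         # -> 3 x^{2} - 2 x + 1
    print(p.subs(x, 2))        # and it still computes: 9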


> math is not about notation anyway

Well, yes and no. It's true that notation is a tool and not the actual object of interest (except when it's both), but some tools are much, much better for certain tasks than others.

Imagine, for instance, trying to teach someone about databases and having them demand that you translate everything into x86 assembly first, since that's what they're comfortable with. Once you get past the basics, this is the level of mismatch we're talking about.


I agree on yes and no. If notation mattered little, there would not be that many different notations. Leibniz and Lagrange notations for derivatives, Einstein abbreviation for sums, Iverson notation for tensors that later became APL, or any language for any computer algebra system ever. If notation mattered little, inventing your own notation would matter little too.

But if notation mattered a lot, then... there would not be that many different notations either. There would be one and only one notation that works, and no one would dare to deviate from it.

Math notation is a cultural artifact with its own significance pretty much like SQL or x86 assembly language indeed.


"...and math is not about notation anyway". It most certainly is!


English is not defined by the Latin alphabet that it uses.


Rigor and mathematical terminology make it simpler and easier, because you can say more things with fewer words and more precision. Mathematicians are not masochists. Compared with truly understanding the theorems and the concepts, the effort to understand the terminology is basically zero.


I would say that the main pro of mathematical notation is the opposite of rigor. You get to not write everything out, which is good for both the lazy reader and the lazy writer.

It is just not very good for the newcomer, who needs to learn the implicit assumptions that are not written out.


> because you can say more things with fewer words

You say it as if it was a good thing. It's not. APL, J, and K would reign supreme over all programming if brevity and conciseness were all that good for people actually understanding what the heck is happening.

Math notation is a peculiar, loosely defined, context-dependent, ambiguous syntax that requires a lot of memorization, a special keyboard when writing, and a lot of focus when reading. It only benefits people forced to write equations on blackboards all day long. I mean, I feel for them, it's a tough job, but I'm not going to be doing that.

> the effort to understand the terminology is basically zero

You say it as if it was a fact. I don't believe it's anything else than a gut feeling you have. It's trivial to find people for whom the effort needed to understand the terminology or syntax was too big of a barrier to entry. If you could show a proper, replicated study from scientists working on cognition and memory that proves this statement (zero-cost of alien terminology), it would be great. Otherwise, I see this as a gut feeling coupled with survivorship bias.


> Math notation is a peculiar, loosely defined, context-dependent, ambiguous syntax

That's the point. This is what going all-in on formality looks like:

    /-- If n is divisible by m then the set of divisors of m is a subset of the set of divisors of n -/
    lemma divisors_subset_of_dvd {m n : ℕ} (hzero : n ≠ 0) (h : m ∣ n) : divisors m ⊆ divisors n :=
    finset.subset_iff.2 $ λ x hx, nat.mem_divisors.mpr ⟨(nat.mem_divisors.mp hx).1.trans h, hzero⟩
where each of those names is a reference to another proof - the full call tree would be far, far worse.

Compare that to a handwritten proof:

Let x be a divisor of m. Then there exists some y such that m = x * y. n is divisible by m, so there exists some k such that n = m * k. Thus n = (x * y) * k = x * (y * k) and x is a divisor of n.


> You say it as if it was a good thing. It's not. APL, J, and K would reign supreme over all programming if brevity and conciseness were all that good for people actually understanding what the heck is happening.

On the flip side, try reading all your programs in assembly.

Verbosity is nice up to a point. When I look at the math I did for physics, solving a problem could take 3 pages. If we go with the GGP's approach, it would take perhaps 15-20 pages. Almost everyone would grok it quicker with those 3 pages than trying to read a more verbose 15.

It's actually why some prefer functional programming. Which is easier to understand:

"Take these student papers, and split them into two groups based on whether the student's name begins in a vowel or not."

OR

"Create two groups. Call them vowels and consonants. Take the first paper. If it begins with a vowel, put it in the vowel group. Otherwise put it in the consonant group. Once done with that paper, repeat the steps with the next paper. Keep repeating till there are no papers left."

And I won't even bother describing how one would do it in C (have a counter variable, at the end of each iteration explicitly check for termination, etc).

The difference between math formalism and verbosity is analogous to the difference between the two descriptions above. At some point, more verbosity lets you see the fine details at the expense of the big picture.
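In code, the contrast looks roughly like this (a minimal sketch, names mine):

    # A minimal sketch (names mine) of the two descriptions above.
    papers = ["Alice", "Bob", "Eve", "Carol", "Oscar"]
    VOWELS = set("aeiou")

    # Declarative: say *what* each group is.
    vowel_group = [p for p in papers if p[0].lower() in VOWELS]
    consonant_group = [p for p in papers if p[0].lower() not in VOWELS]

    # Imperative: spell out *how*, paper by paper.
    v, c = [], []
    for paper in papers:
        (v if paper[0].lower() in VOWELS else c).append(paper)

    assert (v, c) == (vowel_group, consonant_group)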

> It's trivial to find people for whom the effort needed to understand the terminology or syntax was too big of a barrier to entry.

It's almost impossible to find someone who can do, say Griffiths level electromagnetics or quantum mechanics without that formalism. Your refrain pops up all the time on HN, but I have yet to see someone do even undergrad level physics in any of the alternatives suggested.


> Verbosity is nice up to a point.

Yes, agreed. But so is brevity. Up to a point, it's good. Beyond that point, it's a needless burden that could be eliminated.

> It's almost impossible to find someone who can do, say Griffiths level electromagnetics or quantum mechanics without that formalism.

I'm not saying that the formalism is useless. I'm saying it's not zero-cost. I'm against handwaving away the difficulty of working with the specific syntax because "concepts!"

Again, show me that learning to use the notation is not a problem, objectively, and then we can talk. Otherwise, you're just saying that "you just have to learn it, I did it and it wasn't that hard". OK, but that's not a proof that it isn't hard or it isn't a barrier to entry that could be lowered.


> Again, show me that learning to use the notation is not a problem, objectively, and then we can talk. Otherwise, you're just saying that "you just have to learn it, I did it and it wasn't that hard". OK, but that's not a proof that it isn't hard or it isn't a barrier to entry that could be lowered.

It's always believable that the barrier can be lowered. However, consider that on the one hand, you have millions of people who are quite comfortable with the current notation. On the other hand, there is ... nothing.

As I said, show me the alternative notation where people are comfortable solving Griffith level EM/QM problems with it.

I've heard this complaint for years - particularly on HN. An insistence that a superior notation must exist, that decades of SW experience shows this level of brevity makes working in the field harder, etc. Yet no one has come up with an alternative where one can solve higher level physics problems with it while maintaining sanity.

The status quo is we have a widely used system working. The burden is on those who claim it can be better to come up with something better.


> The burden is on those who claim it can be better to come up with something better.

We need to agree to disagree: in my mind, it's on those who say it's the best it can be to show that it indeed, cannot be better. Because otherwise their insistence on not even looking for ways to make it better looks drastically different. If you can show me that math notation is as closely aligned with how cognition works as possible without sacrificing its usability - that's great, you're right, I concede. OTOH, if the only thing you say is that it worked for a long time, worked for you, and therefore you're not interested in doing anything for it to work better - that strikes me as simply elitist.

The other problem is that nobody who is not deeply involved with math cares enough to take a closer look. How many linguists, psychologists, cognitive scientists invested their time into researching ways of making math notation better? I bet even fewer than the ones who tried researching programming. On the other hand, mathematicians are simply not equipped with knowledge and skills required to objectively assess the notation they use (neither are programmers, BTW.)


> In my mind, it's on those who say it's the best it can be to show that it indeed, cannot be better.

Indeed. The issue is that neither I nor most people are claiming it to be the best. I explicitly pointed this out in another comment.

> OTOH, if the only thing you say is that it worked for a long time, worked for you, and therefore you're not interested in doing anything for it to work better - that strikes me as simply elitist.

How is that elitist? If it works for me, why should I spend time making it better? What do I gain from it?

And this comment doesn't even make sense. Mathematicians invent notations for their own convenience all the time. There's no committee that says "Yes, this is the official accepted notation." A mathematician uses whatever notation works for him, and if others find it useful, they adopt it.

> The other problem is that nobody who is not deeply involved with math cares enough to take a closer look. How many linguists, psychologists, cognitive scientists invested their time into researching ways of making math notation better? I bet even fewer than the ones who tried researching programming. On the other hand, mathematicians are simply not equipped with knowledge and skills required to objectively assess the notation they use (neither are programmers, BTW.)

You're not wrong, but you're also not helping. This is basically saying "Look, someone should do this!" If you think it's worthwhile, go for it. The category of professionals you have mentioned (linguists, etc.) - most of them do not see it as worthwhile. Put yourself in their shoes. Are they really going to invest a lot of effort to unseat a notation that has evolved over so many centuries, and then fight a battle to convince people to use it? That may well be a career killer.

And here is where the two of us will have to disagree: any improvement, although it may be great for newcomers and amateurs, will barely have any impact on the productivity of a professional mathematician. As people have repeatedly pointed out, notation is among the least challenging parts of math. Sure, it is a barrier to entry, but at best you're simply lowering the barrier to entry - it won't benefit people who are already good at mathematics. A better notation will not enable them to suddenly grasp concepts they couldn't. That's why mathematicians don't bother.

To be frank (and I say it in all seriousness), the English language has more problems than the mathematical one, and if we could fix those, it would have a much larger impact.


Teaching maths without "Maths Language" is like teaching programming without a Programming Language.


You don't need to understand "Theorem" to know how to use the theory.

You can replace any math definition with code.

Is this the point of "for programmers"?


> You can replace any math definition with code.

You often (but not always!) can replace a math definition with code—but either your code is sufficiently precise that it's just another way of phrasing the definition, or you're in the analogous situation to using a language defined by its implementation instead of a specification. And there's plenty of useful space for such languages—but they aren't formally specified languages. Math that isn't formally specified isn't math, in any sense that a mathematician would recognize—which is not to say that it can't be useful.


Learning the language of a field is vital if you ever want to use anything somebody else figured out. You can't read a paper to understand how something works if you don't learn the language they use in the paper.

The reason they don't write the paper in "normal English" is that normal English would make the paper thousands of pages long, with lawyer-speak listing everything they don't mean. It turns out there isn't a shortcut to learning it, and after a couple of lonely evenings slowly translating, you will eventually be able to skip over that part of the notation when you see it. Repeat for all the maths you're interested in and you will be good.


I do the opposite, I figure out the math to describe a problem, then write the program to do it.



