I want to stress before answering that everything you learn is useful if your mind is limber and willing to make connections. Some of the most valuable lessons in programming I learned from editing a philosophy journal in graduate school. I'll answer in terms of mathematical areas/concepts I've found immediately applicable in my programming career (format: what | why):
linear algebra | graphics, scientific computation, mostly graphics.
discrete probability | a whole shitload of perf solutions, understanding risk while planning
number theory, particularly factorization | modular arithmetic for crypto, a lot of very clever hacks for compressed representation of state spaces
predicate logic, basic set theory | I cannot count the number of times someone I work with has expressed something that required one elegant logical operation in a horribly convoluted way.
amortized analysis | just learn it
statistics | operational reasoning using performance metrics, how to test/alarm rationally, how to reason about your customers
calculus | marginal returns (important when managing a team and optimizing where to spend resources). also if you ever end up doing any kind of convex optimization, which you might, maybe. (I do, but I don't think that's super normal outside data science).
TL;DR: there's a reason CSCI curricula look like they do.
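To make the "compressed representation of state spaces" point concrete, here's a minimal sketch of the kind of trick I mean (the flag-set scenario is hypothetical, just for illustration): packing a set of small integers into the bits of a single machine word instead of carrying around a set object.

```python
# Hypothetical example: a set of feature flags numbered 0-63,
# packed into one 64-bit integer rather than a set() of ints.

def add_flag(state: int, flag: int) -> int:
    """Return the packed state with bit `flag` set."""
    return state | (1 << flag)

def has_flag(state: int, flag: int) -> bool:
    """Check whether bit `flag` is set in the packed state."""
    return (state >> flag) & 1 == 1

state = 0
state = add_flag(state, 3)
state = add_flag(state, 17)
assert has_flag(state, 3)
assert not has_flag(state, 5)
```

Set union, intersection, and difference then fall out for free as `|`, `&`, and `& ~`, which is where the "clever hacks" start paying rent.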
>I cannot count the number of times someone I work with has expressed something that required one elegant logical operation in a horribly convoluted way.
It would be great if you could write/blog about it, or just provide some examples here...
Of course. As I said, there's a reason most CSCI curricula look similar. That said though, most applied programming work does not require hard/deep math with a steep learning cost. It requires good instincts and precise logical reasoning.
Good instincts require a breadth of knowledge, moreso than a depth in any particular domain. Precise logical reasoning requires practice and patience in any of a very wide range of fields, not exclusive to mathematics. In both cases, I think developing a careful knowledge of history or neuroscience or any rigorous intellectual discipline is effective practice, although I think a foundation in logic is irreplaceable.
Being an excellent programmer in some specific domains requires deep domain specific knowledge in applied math, which does have a steep opportunity cost. Absolutely. But I did not believe that was the question at hand.
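Since you asked for an example: here's a toy sketch of the pattern (the shipping predicate is made up, but I see this exact shape constantly in review). The convoluted version and the one-liner compute the same thing.

```python
# Convoluted: nested branching to express a simple disjunction.
def can_ship_convoluted(in_stock: bool, is_digital: bool, backorder_ok: bool) -> bool:
    if in_stock:
        return True
    else:
        if is_digital:
            return True
        else:
            if backorder_ok:
                return True
            else:
                return False

# Elegant: the same predicate as one logical operation.
def can_ship(in_stock: bool, is_digital: bool, backorder_ok: bool) -> bool:
    return in_stock or is_digital or backorder_ok
```

The second version is trivially verifiable at a glance; the first makes a reviewer trace four branches to confirm it's just an OR. Multiply that by a few conditions and a negation or two, and a little predicate logic (De Morgan, distribution) saves everyone real time.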
Mostly that simplicity and clarity are hard to achieve but very worth it. Learning what it feels like to correct and simplify logic into something that is easy to read and easy to verify is experience that has transferred well.
From my experience:
Everything you publish in science/academia will be read by a wide, potentially hostile audience. It is very easy to be misinterpreted, and misinterpretation wastes a lot of time and effort.
Stating what you mean in clear, precise terms is far more work than writing things that make you sound 'smart'. Good writers make their work seem terribly obvious, but it comes at the labor of many, many drafts. Most of the hardest work in editing is helping an author subtract and simplify to get right to the point. The labor of being precise and explicit exposes errors that the illusion of understanding tends to conceal.
Young/inexperienced writers often have a gigantic blind spot when it comes to their own writing, and correcting that is a very painful process.
Also, sometimes the original author was right, and the only precise way to express something was really ugly and horrific and your attempts to make it simple did violence to a lot of careful thought.
The world would be a better place if everybody took the effort to make their point with precision, simplicity and clarity.
However the truth in many situations is that if person A spends an hour constructing a beautifully worded five-liner, and if person B spends the same time to produce several rambling pages, then person B's argument may well win out. Readers who have spent ten times as long wading through B's contribution are quite likely to have forgotten entirely about A's.