sum(vec) is exactly how I'd write it in Python (or pseudo-Python, for that matter). It does what you think it does (provided the operation of addition is defined for vectors). I did mention using "sum" instead of "sigma" (combined with a list comprehension) in my OP; did you miss that, or are you making a point I don't understand?
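For concreteness, something like this is what I have in mind (a throwaway sketch; the data and the particular summation are made up):

    # "sum instead of sigma": the summation of x_i squared over all i,
    # written as plain Python over an example list.
    xs = [1.0, 2.5, 4.0]           # stands in for x_1 .. x_n
    total = sum(x**2 for x in xs)  # reads close to the prose description
    print(total)                   # 23.25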
I recognise what you say about mathematics favouring conciseness. I feel that's exactly what I'm addressing: mathematics has sacrificed readability in favour of conciseness to an extreme and could benefit from some decompression. It is possible to write extremely concise code, but I feel most programmers today agree that's not actually a good thing to do. It's often better to write something a little longer to improve readability than to save those characters.
And as you write, every language has tradeoffs, and we can consider mathematical notation a language in this context. But it's not a zero-sum game: you can make a language that is better overall than another. What I like to imagine here is that perhaps mathematical notation is not the most evolved form of logical expression we have today. It's certainly not as formalised or field-tested as the big programming languages.
I think you have to ask why mathematicians make that tradeoff. I argue it's for a good reason: the long-term efficiency gains for a working mathematician are higher than if they spelled everything out all the time. To use a made-up but illustrative example, imagine reading

inducedHomology(fundamentalGroup(OnePointCompactification(Projectivization(RealAffineSpace(dimensions=2)))))

as just one object in a formula involving many such objects (like a long exact sequence, which can contain 10 or more, each with a set of functions mapping between them!). You would get lost.
Then you say "okay, we'll come up with some nice alias for Projectivization(RealAffineSpace(dimensions=2)) and just call it RP(2)". Then it's still too long so you shorten inducedHomology to a prefix "" (not unheard of for programmers, cf pointers). Then you are left with "fundamentalGroup(OnePointCompactification(RP(2))", it's still long, and you realize that you just reinvented the standard math notation, which is like H^*(pi_1(RP^2)) with a line over RP^2, and you realize that while the shortened syntax is parseable by an interpreter, it's not actually any clearer.
You don't start to see the limitations of "let's just use Python pseudocode for everything" until you actually try to use it for everything. Most math isn't done in a text editor (how would you do, say, geometry or topology?), so the vim/emacs tricks programmers rely on don't apply. This is especially true because there are far more math concepts that need distinguishing than ASCII allows for. Think about how much code it takes to express even trivial business rules. Math is orders of magnitude more complicated. It's just not worth the time.
Mathematicians also have one benefit pseudocode would destroy: the ability to point to something and declare a new semantic meaning. This is the heart of what it means to be mathematical notation, and software does everything it can to resist that.
Maybe you are right and there is some subset of mathematics so complex and lengthy that it becomes incomprehensible without a concise shorthand. I don't know enough about high level mathematics to refute that. I do know that the original article we are commenting on is not an example of that.
I do think that by switching to code we don't need to lose the ability to declare new semantic meanings. But instead of inventing new ways to draw things, we can rely on our existing language: we define new functions. That's the beauty of having a standardised, flexible language.
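To give a made-up illustration: suppose we need a new concept, say a weighted inner product. Instead of coining a new symbol for it, we name a function once and reuse it (the name and the formula here are purely hypothetical):

    def weighted_inner_product(u, v, w):
        """The 'new notation' is just one definition: sum over i of w_i * u_i * v_i."""
        return sum(wi * ui * vi for ui, vi, wi in zip(u, v, w))

    print(weighted_inner_product([1, 2], [3, 4], [0.5, 0.25]))  # 3.5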
You write that mathematicians like to declare new notation. I think that's actually a liability. As another commenter pointed out in this thread, mathematical notation is sufficiently freeform that there are different "dialects" in different fields of mathematics, and it's not trivial to understand one even if you know another. There is no formal definition, no formal grammar.
At the same time I just don't know if "H^*(pi_1(RP^2)) with a line over RP^2" is actually better than
inducedHomology(fundamentalGroup(OnePointCompactification(Projectivization(RealAffineSpace(dimensions=2))))), nor that it's desirable to compress it down to six small symbols. Obviously, if you are going to use such a formula a lot then, just like in programming, you'd "refactor" it into a parameterised function. Now you have reduced the cognitive overhead of every use of this formula going forward. As a reader, you understand that function once, and when you see it again you don't have to reparse it or scan for differences from previous similar incantations. DRY applies to mathematics too.
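A throwaway sketch of that refactoring, with placeholder constructors that just build a symbolic string (none of this is a real library; only the structure matters):

    def RealAffineSpace(dimensions):      return f"R^{dimensions}"
    def Projectivization(space):          return f"P({space})"
    def OnePointCompactification(space):  return f"{space}+"
    def fundamentalGroup(space):          return f"pi_1({space})"
    def inducedHomology(group):           return f"H^*({group})"

    def compactified_projective_homology(n):
        """One name, defined once, instead of re-reading the full incantation."""
        rp_n = Projectivization(RealAffineSpace(dimensions=n))
        return inducedHomology(fundamentalGroup(OnePointCompactification(rp_n)))

    print(compactified_projective_homology(2))  # H^*(pi_1(P(R^2)+))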
Expressing the math in code, to me, means expressing it with a small library of primitives, unambiguously and formally. (Regular mathematical notation is ambiguous, like the multiplication example I mentioned, and it can define new primitives as it goes along.)
Now combine that formal code expression with programming best practices like sensible variable naming, DRY and so on, and I really feel there should be some kind of tangible advantage. (That's before we even start to think about the possibility of unit testing parts of a greater work. Have there ever been instances of someone writing a physics paper with a calculation error buried somewhere deep inside?)
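As a minimal, made-up example of what unit testing one step of a paper could look like, assuming the derivation had been written as small functions (the formula is just a stand-in):

    def kinetic_energy(mass, velocity):
        return 0.5 * mass * velocity**2

    def test_kinetic_energy():
        assert kinetic_energy(2.0, 3.0) == 9.0    # 0.5 * 2 * 3^2
        assert kinetic_energy(0.0, 100.0) == 0.0  # no mass, no energy

    test_kinetic_energy()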
At the risk of paraphrasing you badly, you write that you just don't think it's worth the time to express all the varied mathematical concepts in code because the math is too complicated. I feel the opposite. Because it's complicated, it'd be good to take the time to express it plainly.