You only found the pseudocode readable because you know Python and don't know maths notation. It's not unreasonable to have to learn maths notation in order to read maths papers and blogs.
You aren't exceptionally lazy. You are normally lazy, and additionally not used to reading maths notation.
So, I'm not a mathematician; certainly not anymore.
But for a while I did read a lot of this stuff, and wrote things like it.
Aaand... to some extent you get used to it. But it really is simply a worse syntax. It's not just habit.
For one thing, there is no such thing as "mathematical notation". There are a bunch of different, semi-overlapping conventions, and you can't trivially read something written with the conventions of another field. In fact, I'm pretty sure the variance is larger than among most seriously used programming languages: it's easier for a pythonista to grok C++ templates than it is for someone with a modal-logic background to read physics-inspired math.
This is particularly true when it comes to conventional symbols: there just aren't enough of them, or perhaps it's history, but either way there are different symbols for the same concept and different concepts for the same symbol. Another pet peeve is subscripts. Are they formally meaningless tags to distinguish the variations the author talks about? Are they arguments of some other kind (think type arguments vs. value arguments)? Are they something else entirely? It's quite common for articles to use several of these forms, sometimes even on the same symbol. For an example (albeit with superscripts) you probably know: X^T is a transpose, but X^2 is a power.
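To make the transpose-vs-power example concrete in code (a minimal NumPy sketch, purely illustrative):

    import numpy as np

    X = np.array([[1.0, 2.0],
                  [3.0, 4.0]])

    X_transposed  = X.T                           # the "X^T" reading: swap rows and columns
    X_matrix_sq   = np.linalg.matrix_power(X, 2)  # one "X^2" reading: the matrix product X @ X
    X_elementwise = X ** 2                        # another "X^2" reading: square each entry

Three different meanings, three different spellings; the program can't quietly switch between them the way a superscript can.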
Then there are things like context sensitivity. It's not uncommon for notation to not be self-contained, in the sense that an expression depends on some variable that is never specified in the expression itself. It's taken for granted that you know which variables those are, and that can get quite tricky. It's mathematics' version of mutable global state in programming.
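A loose code analogy for that (the names here are made up just for illustration): an expression that silently depends on a variable defined elsewhere reads like a function reaching into global state, as opposed to one that declares all its inputs.

    t = 0.0  # implicit context the reader is just supposed to know about

    def f_implicit(x):
        return x * t  # depends on the global t, but the signature never says so

    def f_explicit(x, t):
        return x * t  # every dependency is visible in the expression itself

The first style is exactly the kind of thing we discourage in programs, yet it's routine in papers.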
Mathematics could definitely benefit from a more disciplined and rigorous approach when it comes to notation, at least, IMNSHO.
I agree, and this was my first thought too. Yet I feel like I do know the mathematical notation. I know what each "thing" does, and yet the whole feels needlessly dense.
I definitely agree that once you are "used" to reading the notation it will be faster. That said, just because it's possible to get faster at it, does that mean it's the best possible notation for the job?
In creating programming languages and using them for tens of thousands of man-hours per day, we have envisioned and tested a multitude of different ways to express logical ideas, formulas and algorithms.
By contrast, how many kinds of mathematical notation has humankind tried and actively used?
If the former number is substantially higher than the latter, how can we be so certain that academic mathematical notation is the best, clearest way to express algorithms and formulas?
I would wager that the amount of logic expressed in programming languages every day vastly exceeds the amount expressed in mathematical notation. More logic is written and read in programming, so the medium we express that logic in gets iterated on more rapidly.
Again, I'm not a mathematician. I'm a programmer, so of course my opinion about what is easy to read and write is subjective and biased. Yet even thinking about it objectively, in terms of which medium we have worked harder at refining, I think there's some food for thought here. Wouldn't the rapidly iterated and widely used forms seen in programming languages be more finely honed, at a higher stage of evolution in terms of understandability and writability?