That's nonsense. a/b is a float in Python 3, and even in languages with integer division, a/b gets closer (in relative terms) to the true quotient as a grows relative to b, converging to the exact value in the limit. So the four basic operations in programming generally do agree with the foundations of Algebra. But a/0 = 0 is 100% against Algebra. And it's very unintuitive: it's basically saying zero is the same as infinity, and therefore all numbers are the same, so why bother having any numbers at all?
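To make that concrete, here's a quick Python 3 sketch (the numbers are arbitrary, purely illustrative):

    # Python 3: / is true division and returns a float
    print(7 / 2)    # 3.5
    print(7 // 2)   # 3 -- floor division, the "other languages" behaviour

    # Integer division's relative error shrinks as a grows relative to b
    b = 3
    for a in (7, 70, 7_000, 7_000_000):
        rel_err = (a / b - a // b) / (a / b)
        print(a, rel_err)   # tends toward 0

    # And crucially, Python does NOT pretend a/0 == 0 -- it refuses
    try:
        print(1 / 0)
    except ZeroDivisionError as exc:
        print("ZeroDivisionError:", exc)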
Floats don't have multiplicative inverses, and floating-point operations don't give us any of the algebraic structure we expect of numbers: no associativity, no distributivity, no exact inverses. Floating-point division already abandons algebra for the sake of usefulness.
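Concretely, with ordinary IEEE 754 doubles (Python 3 shown, but any IEEE-conforming language behaves the same):

    # No exact multiplicative inverse: x * (1/x) isn't always 1
    x = 49.0
    print(x * (1 / x))          # 0.9999999999999999
    print(x * (1 / x) == 1.0)   # False

    # Addition isn't even associative
    print((0.1 + 0.2) + 0.3)    # 0.6000000000000001
    print(0.1 + (0.2 + 0.3))    # 0.6

    # And the classic
    print(0.1 + 0.2 == 0.3)     # False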
Knuth Vol. 2 has a nice discussion of floating-point operations and shows how to reason about them. Wilkinson's classic "Rounding Errors in Algebraic Processes" (1966) also has a good discussion.
If you were to define a/0, the most logical choice would be a new special value "Infinity" (which is exactly what IEEE 754 floats do for nonzero a). The second-best choice would be the maximum supported value of the type of a (int, int64, etc.). Anything else would be stupid.
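For what it's worth, the two options look roughly like this (div_to_infinity and div_saturating are made-up names, this is just a sketch):

    import math

    INT64_MAX = 2**63 - 1
    INT64_MIN = -2**63

    def div_to_infinity(a: float, b: float) -> float:
        # First choice: a/0 becomes a signed infinity (what IEEE 754 floats already do);
        # 0/0 stays undefined (NaN).
        if b == 0:
            return math.nan if a == 0 else math.copysign(math.inf, a)
        return a / b

    def div_saturating(a: int, b: int) -> int:
        # Second choice: a/0 saturates to the largest (or smallest) value of the type,
        # int64 assumed here.
        if b == 0:
            return INT64_MAX if a >= 0 else INT64_MIN
        return a // b

    print(div_to_infinity(1.0, 0.0))   # inf
    print(div_to_infinity(-1.0, 0.0))  # -inf
    print(div_saturating(1, 0))        # 9223372036854775807
    print(div_saturating(-1, 0))       # -9223372036854775808

Either of these keeps the "dividing by something tiny gives something huge" intuition; a/0 = 0 throws it away entirely.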