You typically can't "raise an error" in such contexts in formal reasoning tools. There's no "runtime" per se to raise an error from: the expressions themselves are manipulated to derive truths, and a division by zero may simply be implicit, e.g. in a statement about division by some arbitrary number.
Trying to solve the problem with e.g. dependent typechecking quickly gets complicated (both in implementation and in theory). And adding a concept such as infinity may complicate proofs, which now have to deal with infinity being a "number" that nonetheless behaves differently from every other number.
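For a flavour of the dependently-typed route, here's a minimal Lean sketch (checkedDiv is a made-up name, not a library definition): the divisor's non-zeroness becomes a proof obligation at every call site.

    import Mathlib

    -- Hypothetical "checked" division: a proof that the divisor is nonzero
    -- is part of the function's type, so a call doesn't typecheck without it.
    def checkedDiv (x y : ℚ) (_h : y ≠ 0) : ℚ := x / y

    -- Even a trivial numeric use now has to discharge that obligation:
    example : checkedDiv 6 3 (by norm_num) = 2 := by
      unfold checkedDiv
      norm_num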
Thankfully mathematics has the nice property that you can define concepts any way you want so long as they're self-consistent and match one's expectations. So rather than treating division by zero as an exceptional case -- either by disallowing it in the typesystem, "raising an error", or by introducing a "special" value -- some (most?) verification languages simply define division by zero to have a numeric value, and make it known that this is the case. Users know that they must then write expressions to be explicit about division by zero if they care about that case. (This turns out to be not much different than having to write expressions to satisfy a dependent type system.)
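This is exactly Lean/Mathlib's convention, for example. A minimal sketch (the guarded helper is hypothetical, just to show what "being explicit about the zero case" looks like):

    import Mathlib

    -- Division by zero is defined, and defined to be zero:
    example (x : ℚ) : x / 0 = 0 := div_zero x
    #eval (5 : ℚ) / 0   -- evaluates to 0

    -- Anyone who cares about the zero case states it in the expression itself:
    def guarded (x y : ℚ) : ℚ := if y = 0 then 0 else x / y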
((x / y) * y) = x is not a true statement about real numbers, regardless of whether division by zero is defined as an error or as being equal to 0. The true statement is ( y != 0 -> ((x / y) * y) = x ), and that one holds no matter how you define division by zero.
More generally, in ordinary mathematics, every theorem that involves division by some value Y which could be zero must carry the hypothesis that Y is not equal to zero. All of those theorems still work in systems like Lean where division by zero is defined to be zero, so nothing is lost.
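For instance, in Lean with Mathlib the hypothesised version goes through directly, while the unconditional one is simply not provable (a sketch):

    import Mathlib

    -- Provable: the theorem carries its usual nonzero hypothesis.
    example (x y : ℚ) (h : y ≠ 0) : (x / y) * y = x := by
      field_simp

    -- Not provable: (x / y) * y = x for all x, y would force 0 = 1
    -- when y = 0 and x = 1, even with x / 0 defined to be 0.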
> You typically can't "raise an error" in such contexts in formal reasoning tools.
You could write all your formulas in the Maybe / Option monad (which I guess is the same as 'introducing a "special" value') but that sounds painful...
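A rough Lean sketch of what that would look like (safeDiv and chain are hypothetical names): every expression downstream of a division has to be threaded through Option, which is where the pain comes from.

    import Mathlib

    -- Division as a partial function: `none` plays the role of the error.
    def safeDiv (x y : ℚ) : Option ℚ :=
      if y = 0 then none else some (x / y)

    -- Everything built on top now has to be written monadically:
    def chain (a b c : ℚ) : Option ℚ := do
      let u ← safeDiv a b   -- none if b = 0
      safeDiv u c           -- none if c = 0

    #eval safeDiv 6 0   -- none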