I am starting to come around to your argument. Having the internal representation in a different base than the written representation does produce problems. For example, printing a base-2 float in its shortest base-10 representation is not a trivial problem: the first correctly rounded solution (Dragon4) was published in 1990, and a fast one (Grisu3) only in 2010 [1].
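To see what is nontrivial here, a quick Python sketch (CPython's repr has emitted the correctly rounded shortest form since 3.1):

    x = 0.1

    print(repr(x))      # 0.1  (shortest decimal string that round-trips)
    print(f"{x:.20f}")  # 0.10000000000000000555  (the stored binary value)

    # Parsing the shortest form back recovers the exact same float:
    assert float(repr(x)) == x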
Like with random number generators: computers are so powerful now that it makes sense to make the default PRNG cryptographically secure, to avoid misuse, and to leave it to the experts to swap in a fast (not cryptographically secure) PRNG in the rare cases where the performance is needed and the unpredictability isn't, for example Monte Carlo simulations.
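Python's standard library already draws roughly this line: secrets is the unpredictable source, random is the fast, seedable, non-cryptographic opt-in. A minimal sketch (the pi estimate is just a stand-in Monte Carlo workload):

    import random
    import secrets

    # Secure by default: unpredictable, unseedable, safe wherever a
    # guessable sequence would be a bug.
    token = secrets.token_hex(16)
    die_roll = secrets.randbelow(6) + 1

    # Explicit opt-in to the fast, NOT cryptographically secure PRNG
    # (Mersenne Twister) for a Monte Carlo estimate of pi:
    rng = random.Random(42)
    n = 1_000_000
    inside = sum(rng.random() ** 2 + rng.random() ** 2 < 1.0 for _ in range(n))
    print(4 * inside / n)  # ~3.14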
One could argue that for many use cases (most spreadsheets handling money amounts, for example) computers are powerful enough now that "non-integer numbers" should default to some base-10 floating point, so that internal and user representation coincide. Experts that handle applications with "real" numbers can then explicitly switch to "weird float". It is worth a thought.
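For the flavor of that default, a sketch with Python's decimal module, which is one such base-10 floating point:

    from decimal import Decimal

    # Binary floats: the stored base-2 value differs from the typed text.
    print(0.1 + 0.2)         # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)  # False

    # Decimal floats: what the user typed is what is stored.
    print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True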
Decimal floating-point numbers are exactly as weird as binary floating-point numbers.
The only difference is which numbers are exactly representable. You still have (see the sketch after this list):
- They do not obey the mathematical laws for real numbers (addition is not even associative)
- Naive equality comparison is still unreliable
- Chains of operations can accumulate unboundedly large errors
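All three show up immediately in Python's decimal module at its default 28-digit precision; a minimal sketch:

    from decimal import Decimal, getcontext

    getcontext().prec = 28  # precision is finite, just as in binary formats

    # 1/3 has no exact representation in base 10 either:
    third = Decimal(1) / Decimal(3)
    print(third * 3)                # 0.9999999999999999999999999999
    print(third * 3 == Decimal(1))  # False, so naive equality is a trap

    # Addition is still not associative:
    a, b, c = Decimal("1e30"), Decimal("-1e30"), Decimal(1)
    print((a + b) + c == a + (b + c))  # False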
>Experts that handle applications with "real" numbers can then explicitly switch to "weird float". It is worth a thought.
The weirdness does not go away in decimal floats. It is exactly as weird.
>One could argue that for many use cases (most spreadsheets handling money amounts, for example) computers are powerful enough now that "non-integer numbers" should default to some base-10 floating point
Base-10 floats do not fix money amounts. (1/3) * 3 is not equal to 1 in base-10 floats either. You cannot correctly handle money with floats, because money is not arbitrarily divisible. Changing to base 10 does not fix that.
The core problem with money is that the division operation is not well defined: $3.333... is not an amount of money that can exist. Even the mathematically correct result is unusable here, and you cannot fix that with imperfect approximations.
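The standard fix is therefore not a better float but integer arithmetic in the smallest currency unit plus an explicit policy for the remainder. A minimal sketch (the function name and the "first shares get the extra cent" policy are illustrative, not anything standard):

    def split_cents(total_cents: int, parts: int) -> list[int]:
        # Split an amount into shares that sum exactly to the original.
        # Where the leftover cents go is a business decision, not math:
        # here the first `remainder` shares each get one extra cent.
        share, remainder = divmod(total_cents, parts)
        return [share + 1 if i < remainder else share for i in range(parts)]

    print(split_cents(100, 3))       # [34, 33, 33]
    print(sum(split_cents(100, 3)))  # 100, no cent created or destroyed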
[1] https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31....
https://www.cs.tufts.edu/~nr/cs257/archive/florian-loitsch/p...
https://en.wikipedia.org/wiki/Floating-point_arithmetic#Bina...