I still remember when I encountered this and nobody else in the office knew about it either. We speculated about broken CPUs and compilers until somebody found a newsgroup post that explained everything. Makes me wonder why we haven't switched to a better floating-point model over the last few decades. It would probably be slower, but a lot of problems could be avoided.
Unless you have a floating point model that supports arbitrary bases, you're always going to have the issue. Binary floats are unable to represent 1/10 just as decimal floats are unable to represent 1/3.
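A quick way to see the binary half of this in C, by printing more digits than a double actually carries:

    #include <stdio.h>

    int main(void) {
        /* 0.1 has no finite binary expansion, so what's stored is the
           nearest representable double, not 0.1 itself. */
        printf("%.20f\n", 0.1);        /* 0.10000000000000000555... */
        printf("%.20f\n", 0.1 + 0.2);  /* 0.30000000000000004441... */
        return 0;
    }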
And in case anyone's wondering about handling it by representing the repeating digits instead, here's the decimal representation of 1/12345 using repeating digits:
Nice example. For those who don't see why it is so long: the repeating block multiplied by the denominator must come out as all nines. E.g. 1/7 = 0.(142857) because 142857 × 7 = 999999, so 0.(142857) × 7 = 0.(999999) = 1 again. For some innocuous-looking denominators N, the smallest all-nines number divisible by N is enormous.
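For anyone who wants to play with this: after stripping the factors of 2 and 5 (which only contribute a non-repeating prefix), the period of 1/N in base 10 is the multiplicative order of 10 modulo what's left. A small sketch in C:

    #include <stdio.h>

    /* Length of the repeating part of 1/n in base 10. */
    unsigned period_length(unsigned n) {
        while (n % 2 == 0) n /= 2;   /* factors of 2 and 5 only */
        while (n % 5 == 0) n /= 5;   /* delay the repetition    */
        if (n == 1) return 0;        /* expansion terminates    */
        unsigned len = 1;
        unsigned long long r = 10 % n;
        while (r != 1) {             /* multiplicative order of 10 mod n */
            r = (r * 10) % n;
            len++;
        }
        return len;
    }

    int main(void) {
        printf("1/7:     %u repeating digits\n", period_length(7));
        printf("1/12345: %u repeating digits\n", period_length(12345));
        return 0;
    }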
> Programmable calculators manufactured by Texas Instruments, Hewlett-Packard, and others typically employ a floating-point BCD format, typically with two or three digits for the (decimal) exponent.
Then that's how they're encoding the components of the float. BCD itself is not a floating-point format; it's just a different way of encoding an integer or fixed-point number. If all you want is floating point with a wider exponent and mantissa, that's completely tangential to whether the components are stored as BCD or as regular binary values.
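To make the distinction concrete: packed BCD just stores one decimal digit per nibble, and where the radix point sits is a separate decision entirely. A toy example:

    #include <stdio.h>
    #include <stdint.h>

    /* Packed BCD: one decimal digit per four bits, so 42 encodes as 0x42.
       Nothing here says anything about a radix point. */
    uint8_t to_bcd(uint8_t n) {      /* n in 0..99 */
        return (uint8_t)(((n / 10) << 4) | (n % 10));
    }

    int main(void) {
        printf("42 -> 0x%02X\n", to_bcd(42));
        return 0;
    }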
> Binary floats are unable to represent 1/10 just as decimal floats are unable to represent 1/3.
That is true, but most humans expect 0.1 to be represented exactly while not requiring 1/3 to be, because they are used to the quirks of the decimal point (and not of the binary point).
It's not important to most people, because decimal floating point only helps if your UI precision is exactly the same as your internal precision, which almost never happens.
Seeing the occasional 0.30000000000000004 is a good reminder that your 0.3858372895939229 isn't accurate either.
One can argue that nothing is important to most people.
Correct calculations involving money, down to the last cent, are in fact important to the people who perform them or rely on them. I implemented them in financial software I wrote back in the eighties, in spite of all the software around me that used binary floating-point routines. And among computer manufacturers, at least IBM cares too:
There is no "better floating point model" because floating point will always be floating point. Fixed point always has been and always will be an option if you don't like the exponential notation.
> Fixed point always has been and always will be an option
Not really. It would be really cool if fixed point number storage were an option... but I'm not aware of any popular language that provides it as a built-in primitive along with int and float, just as easy to use and choose as floats themselves.
Yes, probably every language has libraries somewhere that let you do it, at the cost of learning a lot of function-call names.
But it would be pretty cool to have a language with it built in, e.g. for base 10, seven digits followed by two decimals:
fixed(7,2) i;
i = 395.25;
i += 0.01;
And obviously supporting any desired base between 2 and 16. Someone please let me know if there is such primitive-level support in any mainstream language!
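Until then, the usual workaround is a scaled integer; here's a minimal C sketch of the fixed(7,2) example above (all names hypothetical):

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical fixed(7,2): stored as a count of hundredths,
       so steps of 0.01 are exact. */
    typedef int64_t fixed7_2;

    #define FIXED(whole, cents) ((fixed7_2)(whole) * 100 + (cents))

    int main(void) {
        fixed7_2 i = FIXED(395, 25);   /* i = 395.25 */
        i += FIXED(0, 1);              /* i += 0.01  */
        printf("%lld.%02lld\n",
               (long long)(i / 100), (long long)(i % 100));  /* 395.26 */
        return 0;
    }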
COBOL was created to serve the interests of the financial industry, therefore COBOL has fixed point as a first class data type.
Every programming language that has come since has been designed to be a general purpose programming language, therefore they don't include fixed point as a first class data type.
Therefore the financial industry continues to use COBOL.
Every time someone tries to rewrite some crusty COBOL thing in the language du jour, they'll inevitably fuck up the rounding somewhere. The financial industry has complicated rounding rules. Or better yet, the reference implementation is buggy and the new version is right, but since the answers are different it's not accepted.
Addition and subtraction will work normally. Multiplication also works normally except you need to right-shift by FRAC_BITS afterwards (and probably also cast to a larger integer type beforehand to protect against overflow).
Division is somewhat harder, since integer division is not what you want. DOOM's solution was to cast to double, perform the division, and convert the quotient back to fixed point by multiplying by the fixed-point representation of 1.0 (FRACUNIT) before casting back to integer. This seems like cheating, since it uses floating point as an intermediate type, but it is safe because 64-bit floating point can represent every 32-bit integer exactly. As long as you're on a platform with an FPU, it's probably also faster than rolling your own division.
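Putting both together, a minimal 16.16 fixed-point sketch along those lines (names and layout illustrative, not DOOM's actual source):

    #include <stdint.h>

    #define FRAC_BITS 16
    #define FRAC_UNIT (1 << FRAC_BITS)     /* fixed-point 1.0 */

    typedef int32_t fixed_t;               /* 16.16 fixed point */

    /* Widen to 64 bits so the product can't overflow, then shift
       the extra FRAC_BITS back out. */
    static fixed_t fixed_mul(fixed_t a, fixed_t b) {
        return (fixed_t)(((int64_t)a * b) >> FRAC_BITS);
    }

    /* DOOM-style division: divide in double precision, then scale by
       fixed-point 1.0. A double holds every 32-bit integer exactly,
       so nothing is lost on the way in. */
    static fixed_t fixed_div(fixed_t a, fixed_t b) {
        return (fixed_t)(((double)a / (double)b) * FRAC_UNIT);
    }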
Floating point is fundamentally a trade-off between precision (how many numbers you can enumerate) and range (the span between the minimum and maximum representable numbers). It exists because fast operations are not possible with arbitrary-precision constructs: floating-point numbers fit in CPU/GPU registers, while arbitrary-precision numbers are, by their very nature, arbitrarily large.
For many operations this trade-off makes sense, but it's critical to understand the limitations of the model.
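A quick demonstration of that trade-off: once the exponent reaches past the mantissa's width, adjacent integers stop being representable.

    #include <stdio.h>

    int main(void) {
        /* A 32-bit float can no longer step by 1 above 2^24. */
        float big = 16777216.0f;           /* 2^24 */
        printf("%.1f\n", big + 1.0f);      /* still 16777216.0 */

        /* A double hits the same cliff at 2^53. */
        double huge = 9007199254740992.0;  /* 2^53 */
        printf("%.1f\n", huge + 1.0);      /* still 9007199254740992.0 */
        return 0;
    }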
> Makes me wonder why we haven't switched to a better floating point model in the last decades. It will probably be slower but a lot of problems could be avoided.
Pretty much all languages have some sort of decimal number type. Few or none have made it the default, because decimals are ignominiously slower than binary floating point. So much so that even languages which made arbitrary-precision integers their default have firmly kept to binary floating point.
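Even C has grown optional decimal types: GCC implements _Decimal32/64/128 as an extension on some targets, and C23 standardizes them as an optional feature. A small illustration, assuming a compiler and target that support them:

    #include <stdio.h>

    int main(void) {
        _Decimal64 a = 0.1DD, b = 0.2DD;   /* both exact in decimal */

        /* In decimal arithmetic 0.1 + 0.2 really is exactly 0.3. There's
           no portable printf specifier for decimal types, so compare: */
        printf("%s\n", (a + b == 0.3DD) ? "exactly 0.3" : "not 0.3");
        return 0;
    }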
Wait, an entire office (presumably full of programmers) didn't understand floating-point representation? What office was this? Isn't this topic covered early in every programming book or course that touches floating-point math?
> Makes me wonder why we haven't switched to a better floating point model in the last decades.
The opposite.
Decimal floating point has been available in COBOL since the 1960s, but seems to have fallen out of favor in recent decades. This might be one reason why bankers / financial data remain on ancient COBOL systems.
Fun fact: PowerPC systems still support decimal-floats natively (even the most recent POWER9). I presume IBM is selling many systems that natively need that decimal-float functionality.
Decimal floats are a lot older than COBOL. Many early relay computers (to the extent there were many such machines) used floating-point numbers with bi-quinary digits in the mantissa. https://en.wikipedia.org/wiki/Bi-quinary_coded_decimal