As the author of Fixed, I'd like to make a few comments:
1. If you are above 100 billion, you are probably dropping the 'fractions' and dealing with whole numbers. Most databases don't have their columns configured as Decimal(64,64) for 128 digits; it's not practical. So I would store amounts above 100 billion in another value - the billions unit. And if you are in the 100s of billions, I am pretty certain you're not concerned with .0000001 dollars. Most 'terminals/receipts' couldn't show/print a number that high without truncating, or the UI/representation would be all messed up.
2. I don't believe the crypto-currency space needs more than 7 places on an exchange. The CME only supports 8 decimal places in its protocol. Most of these issues are handled by reinterpreting the scale & quantity. "The satoshi is currently the smallest unit of the bitcoin currency recorded on the block chain. It is a one hundred millionth of a single bitcoin (0.00000001 BTC)." Since there can only be 21 million BTC, it easily fits: 21 million BTC at 10^8 satoshi each is about 2.1 x 10^15 units, comfortably inside a signed 64-bit integer's ~9.2 x 10^18 range - although I would probably change the code to use 8 decimal places for ease of use.
3. I might add more rounding modes. They are trivial to write, but some are pretty exotic - so implementing them all internally, rather than leaving the exotic ones to be done externally, can be problematic.
4. For the gamers who say they need float32 - there are more digits of precision in Fixed than in float32. Just change the number of places in the code if you need fewer whole-number digits and more decimals.
5. I've added integer Mul. It is about 2x faster than using floating point. If you skipped the overflow limiting (which the library doesn't), it would be closer to 10x. A rough sketch of the technique is below.
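To make the integer Mul concrete, here is a minimal sketch of the split-multiply technique - an illustration of the general idea, not necessarily Fixed's actual implementation:

```go
package main

import "fmt"

// Values are int64s scaled by 1e7, matching Fixed's 7 decimal places.
const scale = 10_000_000

// mul multiplies two scaled values. Splitting each operand into whole and
// fractional parts keeps the partial products within int64 for moderate
// magnitudes. Note: no overflow checks, and the final division truncates;
// real code needs both handled properly.
func mul(a, b int64) int64 {
	ahi, alo := a/scale, a%scale
	bhi, blo := b/scale, b%scale
	return ahi*bhi*scale + ahi*blo + alo*bhi + alo*blo/scale
}

func main() {
	x := int64(1.5 * scale)  // 1.5
	y := int64(2.25 * scale) // 2.25
	fmt.Println(mul(x, y))   // 33750000, i.e. 3.375
}
```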
> if you are 100s of billion, I am pretty certain you're not concerned with .0000001 dollars
Be careful with that logic. Salami fraud takes advantage of low precision. You need to maintain enough precision to make it not worthwhile for your transaction volume.
I work in the accounting space. The systems we develop today need to handle at least trillions (up to 14 places) and most likely quadrillions (up to 17 places). These amounts already occur, due to certain countries with high inflation levels.
The ubiquitous double has about 15 decimal digits of accuracy, which is safe for summing integers up to that size.
I think the core frustration is that values less than one are subject to about one part in 10^15 of noise, which could be tolerated, except that factors like inflation are recorded as fixed decimals.
For example, if monthly inflation is standardized at 5 significant digits, then rounding the real value 0.00123447... to 0.0012345 introduced about one part in 10^5 of noise, but the number is then considered precise. Subsequently encoding 0.0012345 as a double is some ten billion times less imprecise than the rounding to 5 significant digits was, yet it is an unwanted inconsistency that is difficult to keep securely out of the fixed-point results, even though those are kept to much lower precision.
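A quick back-of-the-envelope check of those magnitudes (the sample values come straight from the comment above; the exact figures depend on where the ulp falls):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	trueRate := 0.00123447 // the "real" value before standardization
	published := 0.0012345 // the same value rounded to 5 significant digits

	// Noise introduced by the 5-significant-digit rounding itself.
	roundingErr := math.Abs(published-trueRate) / trueRate

	// Noise introduced by then storing the published figure as a float64:
	// at most half a ulp at this magnitude.
	ulp := math.Nextafter(published, math.Inf(1)) - published
	encodingErr := (ulp / 2) / published

	fmt.Printf("rounding to 5 sd: relative error ~%.1e\n", roundingErr) // ~2.4e-05
	fmt.Printf("float64 encoding: relative error ~%.1e\n", encodingErr) // ~8.8e-17
}
```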
I'm not an expert, but a finance library without a choice of rounding modes, and one that uses floating point for division and multiplication, seems a bit strange. Why those decisions?
In complex billing systems (which are also often responsible for computing non-trivial taxes), using fixed-point math with configurable rounding rules is a must.
There are requirements-time discussions/rules that determine whether a computation has a rounding rule (normally driven by whether the calculation involves division, or whether the result must match an amount in a different currency).
Dividing amounts, for example, always comes up when there are mid-cycle cancellations/onboardings for a service; the usual allocation pattern is sketched below.
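A sketch of that allocation pattern (hypothetical code, not from any particular billing product): rather than dividing and rounding each share independently, the remainder is distributed so the parts always sum back to the original amount.

```go
package main

import "fmt"

// allocate splits an amount of integer cents into n parts that differ by
// at most one cent and always sum back to the original amount.
// Assumes non-negative inputs, for brevity.
func allocate(cents, n int64) []int64 {
	base := cents / n
	rem := cents % n
	out := make([]int64, n)
	for i := range out {
		out[i] = base
		if int64(i) < rem {
			out[i]++ // the first `rem` parts absorb one extra cent each
		}
	}
	return out
}

func main() {
	// Prorating a $100.00 charge across 3 partial periods:
	fmt.Println(allocate(10000, 3)) // [3334 3333 3333] - sums to exactly 10000
}
```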
Despite what everyone says, I've spoken to several different people at several different highly-regarded banks who say that they do, in fact, use floating point for money amounts in some of their systems.
As a concrete example, someone was running a massive online FFI for predicting the value of derivatives, and they were doing it in the real domain, using doubles on GPUs for monetary values. Were they doing it 'wrong', then? I don't know much about finance myself.
I think the problem here is that "financial computing" is too broad a term to apply a rule like that blindly. If you're making products like point-of-sale systems, ecommerce applications or account management, then it's basically malpractice to use floating point. The performance edge you get from them is so slight in the overall performance of the system and the damage bugs cause can be catastrophic. And it's very easy to get them wrong. It's pretty trivial in most languages to swap in decimal and/or rational types, so NOT doing so is just irresponsible.
However, if you're doing things like high-frequency trading systems or machine learning stuff with massive amounts of data, then performance really does matter, and using floating point is entirely reasonable. If you're making systems like that, you're hopefully aware of the limitations of floating point and know how to use them safely.
I worked in the credit union software space for a while, and using floating points for this does lead to trouble. There were several bugs in my time that were caused by using floats and were fixed by using integer math.
We do use FP numbers for performance reasons occasionally, but we are well aware of their limitations, and when to normalize.
Running option trading models with BigDecimal is unproductive. However, when the calculations are done, we store the results back as decimal representation.
I'm a bit surprised to see floating point being used for multiplication, division etc. I would have expected to see this implemented with integers as well (with each operation changing the position of the point).
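For illustration, here's what integer division at a fixed scale can look like - a hedged sketch of the general technique (the `scale` and the rounding choice here are my assumptions, not the library's):

```go
package main

import "fmt"

const scale = 10_000_000 // 7 decimal places

func abs(x int64) int64 {
	if x < 0 {
		return -x
	}
	return x
}

// div divides two scaled values, keeping the result at the same scale by
// pre-multiplying the dividend. Rounds half away from zero. The a*scale
// step can overflow int64; a real implementation must detect that or use
// a 128-bit intermediate.
func div(a, b int64) int64 {
	q := a * scale / b
	r := a * scale % b
	if 2*abs(r) >= abs(b) {
		if (a < 0) != (b < 0) {
			q--
		} else {
			q++
		}
	}
	return q
}

func main() {
	one, three := int64(1*scale), int64(3*scale)
	fmt.Println(div(one, three)) // 3333333, i.e. 0.3333333
}
```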
> If I am owed 2.575% on $3425956.57 for 245 months, why not get the correct sum
Explosion of storage space and computational complexity. When you combine real-world numbers with compound interest, every multiplication by the interest factor appends more decimal digits to the value. For your example, after 245 months of compound interest at 2.575% per month, the exact value carries over 1,200 decimal digits - that's more than 512 bytes to store and process.
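That digit count can be checked exactly with rationals; a small demo using Go's math/big (the 1,227 fractional digits it finds are in line with the estimate above):

```go
package main

import (
	"fmt"
	"math/big"
)

// $3425956.57 compounded at 2.575% per month for 245 months, computed
// exactly as a rational number.
func main() {
	amount := new(big.Rat).SetFrac64(342595657, 100) // $3425956.57
	rate := new(big.Rat).SetFrac64(102575, 100000)   // x1.02575 per month
	for i := 0; i < 245; i++ {
		amount.Mul(amount, rate)
	}
	// Print with more fractional digits than the exact expansion needs,
	// then trim the trailing zeros to count the real ones.
	s := amount.FloatString(1300)
	frac := s[len(s)-1300:]
	n := len(frac)
	for n > 0 && frac[n-1] == '0' {
		n--
	}
	fmt.Println("fractional digits:", n) // 1227
}
```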
Another reason: for the last ~700 years, finance folks have used this: https://en.wikipedia.org/wiki/Double-entry_bookkeeping_syste... With that system, coins allow for a kind of rolling checksum verification. BTW, in modern software, the coin is often $/€/whatever 0.0001.
The thing is, laws and regulations around finance and things like gas/petroleum delivery scheduling predate electronic 32 bit computers with floating point math.
This used to cause problems at a former workplace: the "math" in the scheduling regulations was defined by tables, so the functions were highly non-linear, and less math-savvy customers would complain that reversing a calculation didn't work as expected. If we had used continuous functions it would have worked out, but then we would have been going against the regulations.
They have it in COBOL, which uses binary-coded decimal to represent numbers (one decimal digit per nibble), so numbers like 0.3 can be represented without loss of accuracy. It was abandoned in favour of IEEE 754.
BCD mode was popular in a lot of early chips. It's an esoteric feature today, but on 16-bit machines you would blow out the ALU way too easily and get into messy two-word math.
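For anyone who hasn't seen it, packed BCD is simple enough to sketch in a few lines (illustrative only):

```go
package main

import "fmt"

// packBCD packs one decimal digit per 4-bit nibble, the representation the
// parent comment describes. The decimal point's position is tracked out of
// band (as COBOL's PIC clauses do), so 0.3 is stored as exact digits.
func packBCD(digits []byte) []byte {
	out := make([]byte, (len(digits)+1)/2)
	for i, d := range digits {
		if i%2 == 0 {
			out[i/2] = d << 4 // high nibble
		} else {
			out[i/2] |= d // low nibble
		}
	}
	return out
}

func main() {
	fmt.Printf("%x\n", packBCD([]byte{0, 3, 0})) // 0300: the digits of 0.30
}
```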
I always do my accounts exactly but then have to round up or down to make settlements. This can leave some "dust" in my accounts which I might eventually move to a losses account but I prefer to do things properly. For example, if I borrow money from someone I'll agree on an interest rate and calculate the interest (compounded continuously). If it turns out the interest is something like 1.01521 then I'll pay 1.02 and leave the account in the negative. I'll move it to losses if I never expect to transact with that entity again but if I do then things will work out exactly given enough time.
The question is whether computers can do arbitrary-precision arithmetic. Technically you'd need infinite memory, but if you use fractions to represent amounts, as is correct, then practically speaking, yes you can. One accounting system that uses fractions is ledger[0].
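In Go, for instance, math/big's Rat gives you exactly this kind of fraction arithmetic:

```go
package main

import (
	"fmt"
	"math/big"
)

// With exact rationals, a three-way split of $100 reassembles perfectly,
// with no dust left over.
func main() {
	total := new(big.Rat).SetFrac64(100, 1)
	third := new(big.Rat).Quo(total, big.NewRat(3, 1))

	sum := new(big.Rat)
	for i := 0; i < 3; i++ {
		sum.Add(sum, third)
	}
	fmt.Println(third.RatString())   // 100/3
	fmt.Println(sum.Cmp(total) == 0) // true: 3 x (100/3) == 100 exactly
}
```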
That's not how floating point works though. The precision is by definition floating. With a sufficiently large number, your least significant bit of precision might be in the 10s or 100s or more of dollars (or whatever currency).
Your issue then is that if you combine a large enough number with anything, then you get precision loss and money just starts disappearing (or appearing out of nowhere).
Even if your hyperinflation was over ten orders of magnitude less bad than Zimbabwe's (decimal orders of magnitude even), you would run into floating point precision loss with a 64 bit floating point number.
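Concretely, at the quadrillion scale mentioned earlier in the thread, one float64 ulp is already 0.125, so cents silently vanish:

```go
package main

import "fmt"

func main() {
	balance := 1e15 // a quadrillion dollars
	fmt.Println(balance+0.01 == balance)  // true: adding a cent does nothing
	fmt.Println(balance+0.125 == balance) // false: 0.125 is one ulp here
}
```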
If you are using floats in financial transactions you are doing it incredibly wrong. Never use floating point for that, ever. It is insanely problematic and bug-prone.
Hypothesis: eventually all programming languages either add a library for fixed-point decimal maths, or revise the language to take care of it.
Ada had proper fixed point types in 1983 already, and added additional requirements for the implementation of decimal fixed point types as part of the 1995 language revision. Other languages ... may benefit from its example.
You'll never get good performance without using assembly. GMP probably walks all over this. Even my own bignum library probably walks all over it for addition and subtraction.
It's not a bignum library. It's just 64b integer fixed-point with a scale of 10^-7. So, perf-wise it's not going to have any issues, but it's also pretty easy to exceed the representable bounds.
The github page comes right out and says "[it] is ideally suited for high performance trading financial systems". Clearly, that's the intention behind the library.
This would be essentially useless in game development (which is my field). In gamedev it's pretty much 32-bit floats all the way down, and very little else. Very small fixed point types are occasionally used in shaders (Cg provides a native 12-bit fixed point type) to represent colors, but a library like this is pretty much pointless.
As a long-ago game developer, we never used floating point - I'm sure things have changed now - but we always used fixed-point integers, with lookup tables for sin, etc.
Maybe Bob should have thought of a better name instead of trying to become the least successful prescriptivist since that guy insisting it’s “thou”, not “you”.