Hacker News
Fixed – Fixed-place decimal math library for Go (github.com/robaho)
94 points by networkimprov on Nov 29, 2018 | 67 comments



As the author of Fixed, I'd like to make a few comments:

1. If you are above 100 billion, you are probably dropping the 'fractions' and dealing with whole numbers. Most databases don't have their columns configured as Decimal(64,64) for 128 digits. It's not practical. So I would store amounts above 100 billion in another value - the billions unit. And if you are in the 100s of billions, I am pretty certain you're not concerned with .0000001 dollars. Most 'terminals/receipts' couldn't show/print a number that high without truncating, or the UI/representation would be all messed up.

2. I don't believe the crypto-currency space needs more than 7 places on an exchange. The CME only supports 8 decimal places in their protocol. Most of these issues are handled by reinterpreting the scale & quantity. "The satoshi is currently the smallest unit of the bitcoin currency recorded on the block chain. It is a one hundred millionth of a single bitcoin (0.00000001 BTC)." and since there can only be 21 million BTC, it easily fits - although I would probably change the code to use 8 decimal places for ease of use.

3. I might add more rounding modes. They are trivial, but there are some pretty exotic ones - so implementing them all internally, rather than externally, can be problematic.

4. For the gamers that say they need float32 - there are more digits of precision in Fixed than in float32 - just change the number of places in the code if you need a smaller integer range and more decimals.

5. I've added integer Mul. It is about 2x faster than using floating point. Without the overflow checking (which Fixed performs), it would be closer to 10x.
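The integer Mul mentioned above can be sketched like this. This is a hedged illustration of the split-multiply technique at Fixed's 10^-7 scale, not the library's actual code; the names are mine and, as noted in the comments, the library's overflow checking is omitted.

```go
package main

import "fmt"

// scale mirrors Fixed's 7 decimal places: a value v is stored as
// round(v * 1e7) in an int64. Sketch only; not the library's code.
const scale = 10_000_000

// mul multiplies two scaled values. Splitting each operand into
// integer and fractional parts keeps the intermediate products in
// int64 range for typical magnitudes. Overflow checks (which the
// library performs) are omitted here for brevity.
func mul(a, b int64) int64 {
	ai, af := a/scale, a%scale
	bi, bf := b/scale, b%scale
	// (ai + af/s)(bi + bf/s)*s = ai*bi*s + ai*bf + af*bi + af*bf/s
	return ai*bi*scale + ai*bf + af*bi + af*bf/scale // last term truncates
}

func main() {
	a := int64(2.5 * scale) // 2.5
	b := int64(4 * scale)   // 4.0
	fmt.Println(mul(a, b) / scale) // 10
}
```

Staying in integers avoids both the int64→float64→int64 round trips and the rounding surprises of binary floats.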


> if you are 100s of billion, I am pretty certain you're not concerned with .0000001 dollars

Be careful with that logic. Salami fraud takes advantage of low precision. You need to maintain enough precision to make it not worthwhile for your transaction volume.


Also, at least in Go, it is fairly trivial to do:

type Billions fixed.Fixed

and then add methods so that Fixed values can be added, have String() append a 'B' at the end, etc. So 0.1 is 100 million, etc.

Like I said, if you are summing into the billions, you are probably not concerned about .00001 pennies
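A minimal self-contained sketch of that wrapper idea. Fixed here is a hypothetical stand-in for fixed.Fixed (an int64 scaled by 1e7, as in the library), since a Go defined type does not inherit the underlying type's methods and they must be redeclared:

```go
package main

import "fmt"

// Fixed is a hypothetical stand-in for fixed.Fixed so the sketch is
// self-contained: an int64 scaled by 1e7, as in the library.
type Fixed int64

const scale = 10_000_000

func (f Fixed) String() string {
	return fmt.Sprintf("%d.%07d", int64(f)/scale, int64(f)%scale)
}

// Billions reinterprets the same representation in units of 1e9,
// so 0.1 means 100 million.
type Billions Fixed

// A defined type does not inherit methods from its underlying type,
// so Add and String are declared again on Billions.
func (b Billions) Add(o Billions) Billions { return b + o }

func (b Billions) String() string { return Fixed(b).String() + "B" }

func main() {
	x := Billions(scale / 10) // 0.1B = 100 million
	fmt.Println(x.Add(x))     // prints 0.2000000B
}
```

(The String method above ignores negative values; a real wrapper would handle the sign.)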


Any ideas on handling Ethereum gas, which is uint256?


Gas is limited by a per-block gas usage limit: one block cannot consume more than 8M gas.

Uint256 comes from the fact that it is the native word size for the EVM, and that is because of crypto operations.


not really, except that all of them are going to zero, so fixed.Fixed should be sufficient :)


I work in the accounting space. The systems we develop today would need to handle trillions (up to 14 places) at least, and most likely quadrillions (up to 17 places). These amounts already occur due to certain countries with high inflation levels.


The ubiquitous double float has 15 digits of accuracy, which is safe for summing integers up to that size.

I think the core frustration is how values less than one are subject to about one part in a million billion of noise, which could be tolerated except that factors like inflation are recorded as fixed decimals.

For example, if monthly inflation is standardized as 5 significant digits, then for 0.0012345 the rounding to 5 sd actually introduced about a one hundred-thousandth part of noise to whatever the real value was, but the number is then considered precise. Subsequently encoding 0.0012345 as a double is ten billion times less imprecise than writing 0.00123447... to 5 sd was, but it is an unwanted inconsistency that is difficult to securely keep out of fixed-point results, even though those are kept to much lower precision.


And in the cryptocurrency space, we need way more than 7 decimal places.


I'm not an expert, but a finance library without a choice of rounding modes, and one that uses floating point for division and multiplication, seems a bit strange. Why those decisions?


In complex billing systems (which are also often responsible for computing non-trivial taxes), using fixed point math with configurable rounding rules is a must.

There are requirements-time discussions/rules that indicate whether a computation has a rounding rule or not (normally driven by whether a calculation involves division, or whether a computation must match an amount in a different currency).

Dividing amounts, for example, always comes up when there are mid-cycle cancellations/onboardings to a service.

Currency translations cause that as well.


Despite what everyone says, I've spoken to several different people at several different highly-regarded banks who say that they do, in fact, use floating point for money amounts in some of their systems.

What experience do people who work in banks have?


I work in Finance, I don't use float, and I suggest you never do either.


As a concrete example, someone was running a massive online FFI for predicting the value of derivatives, and they were doing it in the real domain, using doubles on GPUs for monetary values. Were they doing it 'wrong' then? I don't know much about finance myself.


I think the problem here is that "financial computing" is too broad a term to apply a rule like that blindly. If you're making products like point-of-sale systems, ecommerce applications or account management, then it's basically malpractice to use floating point. The performance edge you get from them is so slight in the overall performance of the system and the damage bugs cause can be catastrophic. And it's very easy to get them wrong. It's pretty trivial in most languages to swap in decimal and/or rational types, so NOT doing so is just irresponsible.

However, if you're doing things like high-frequency trading systems or machine learning stuff with massive amounts of data, then performance really does matter, and using floating point is entirely reasonable. If you're making systems like that, you're hopefully aware of the limitations of floating point and know how to use them safely.


If you are predicting, chances are you don't care about all the decimals being correct and you'd rather get the result quickly.


Predicting is inherently approximate; accounting needs to be exact. They may use similar units, but they are different kinds of problems.


Predicting a value is not the same as accounting.


I work in finance, and i use floats (well, doubles) all the time!

I never use floats for any quantities which i need to exactly add up to a known total, of course.


I worked in the credit union software space for a while, and using floating points for this does lead to trouble. There were several bugs in my time that were caused by using floats and were fixed by using integer math.


We do use FP numbers for performance reasons occasionally, but we are well aware of their limitations, and when to normalize.

Running option trading models with BigDecimal is unproductive. However, when the calculations are done, we store the results back as decimal representation.


I used COBOL in a well known global American bank.



No, the original go-trader used it, since quickfixgo uses it, and during profiling it showed as a significant cost, so I wrote fixed.Fixed


I'm a bit surprised to see floating point being used for multiplication, division etc. I would have expected to see this implemented with integers as well (with each operation changing the position of the point).


Floating point multiplication can be faster than integer multiplication on modern CPUs. Check out this thread for lots of info!

https://stackoverflow.com/q/2550281/164234


When people consider float vs int, performance is seldom a relevant factor.

You have to ask, what do you even mean if you use floating point for representing money?

If you use floating point, the whole point is that you want the behaviour

1e100 + 1 == 1e100

This is exactly what you want for many analysis and decision making problems, but it is a dangerous default.


It's always struck me as strange that finance folk don't want correct math. They want math that matches the coins in their pocket.

If I am owed 2.575% on $3425956.57 for 245 months, why not get the correct sum instead of some rounded monthly sum added up? It's just strange.


> If I am owed 2.575% on $3425956.57 for 245 months, why not get the correct sum

Explosion of storage space and computation complexity. When you combine real-world numbers with compound interest, every time you add the interest, you’re adding decimal digits to the value. For your example, after 245 months of compound interest at 2.575% per month, the value will have 1228 decimal digits in it, that’s more than 512 bytes to store and process.

Another reason is, for the last ~700 years, finance folks use this: https://en.wikipedia.org/wiki/Double-entry_bookkeeping_syste... With that system, coins allow for some kind of rolling checksum verification. BTW, in modern software, the coin is often $/€/whatever 0.0001.


The thing is, laws and regulations around finance and things like gas/petroleum delivery scheduling predate electronic 32 bit computers with floating point math.

This used to cause problems at a former workplace, because the "math" in the regulations around scheduling used tables, so the functions were highly non-linear, and less math savvy customers would complain that reversals of calculations wouldn't work as expected. If we had used continuous functions, it would have worked out, but then we would have been going against regulations.


They have it in COBOL, which uses binary-coded decimals to represent the number (one decimal digit per nibble), so numbers like 0.3 can be represented without loss of accuracy. It was abandoned in favour of IEEE 754.
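The nibble encoding (as in COBOL's packed COMP-3 fields) can be sketched like this; packBCD is my own illustrative helper, not COBOL runtime code, and sign nibbles are left out:

```go
package main

import "fmt"

// packBCD packs a string of decimal digits into bytes, one digit
// per 4-bit nibble (two digits per byte), as in packed BCD.
func packBCD(digits string) []byte {
	if len(digits)%2 == 1 {
		digits = "0" + digits // pad to an even number of digits
	}
	out := make([]byte, len(digits)/2)
	for i := range out {
		hi := digits[2*i] - '0'
		lo := digits[2*i+1] - '0'
		out[i] = hi<<4 | lo
	}
	return out
}

func main() {
	// 0.3 stored as the scaled digit string "03" -- exactly,
	// with no binary-float rounding involved.
	fmt.Printf("% x\n", packBCD("03")) // 03
}
```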


I think that was a holdover of EBCDIC, an old IBM standard. In fact early IBM machines did BCD in their ALU.


BCD mode was popular in a lot of early chips. It's an esoteric feature today, but on 16-bit machines you would blow out the ALU way too easily and get into messy two-word math.


x86 still has BCD support, in 32 bit mode http://www.hugi.scene.org/online/coding/hugi%2017%20-%20coaa...


I always do my accounts exactly but then have to round up or down to make settlements. This can leave some "dust" in my accounts which I might eventually move to a losses account but I prefer to do things properly. For example, if I borrow money from someone I'll agree on an interest rate and calculate the interest (compounded continuously). If it turns out the interest is something like 1.01521 then I'll pay 1.02 and leave the account in the negative. I'll move it to losses if I never expect to transact with that entity again but if I do then things will work out exactly given enough time.


Because cash is obviously still a thing?

Be glad we at least have a decimal based coinage system these days. We'd all have such fun dealing with fractions.


How would you pay such a non-integer sum?


With a bank balance on a computer.


Can computer bank balances store fractional cents right now? At least before, this was not possible.


The question is can computers do arbitrary precision arithmetic. Technically you'd need infinite memory, but if you use fractions to represent amounts, as is correct, then practically speaking, yes you can. One accounting system that uses fractions is ledger[0].

[0] https://www.ledger-cli.org/


Six or seven digits of precision (like all floating point) is good enough, and far better than exactly 2 digits of decimal. Millions of times better.

Fractions do little for interest rates, which are transcendental functions and sometimes even irrational.


That's not how floating point works though. The precision is by definition floating. With a sufficiently large number, your least significant bit of precision might be in the 10s or 100s or more of dollars (or whatever currency).

Your issue then is that if you combine a large enough number with anything, then you get precision loss and money just starts disappearing (or appearing out of nowhere).

Even if your hyperinflation was over ten orders of magnitude less bad than Zimbabwe's (decimal orders of magnitude even), you would run into floating point precision loss with a 64 bit floating point number.
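A quick demonstration of that precision loss at 17-18 digit magnitudes:

```go
package main

import "fmt"

func main() {
	// At 1e17 the gap between adjacent float64 values is 16,
	// so adding a cent -- or even a whole unit -- changes nothing.
	balance := 1e17
	fmt.Println(balance+1 == balance)     // true: the 1 is absorbed
	fmt.Println(balance + 0.01 - balance) // 0: the cent vanished
}
```

(The variables matter: Go evaluates untyped constant expressions exactly, so the comparison must go through float64 values to show the effect.)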


A double has 15 digits. Which counts more money than exists anywhere added up. Not a problem I'm thinking?


I dunno, South Korea's M2 would need 17 or 18 digits to represent, I think.

Edit: Japan too.

Say South Korea has a few years of hyperinflation, not even anywhere close to Zimbabwe bad, and you'd have money disappearing right & left.


And ieee floating point still can't store 10¢ exactly.


If you are using floats in financial transactions you are doing it incredibly wrong. Never use floating point for that, ever. It is insanely problematic and bug-prone.


Yes it doesn't match the ad-hoc stuff finance people do. That's in fact my point.


It isn't enough. Suppose you have something in dollars and cents (in a float), and you want a count of pennies.

2.22 * 100.0 == 222.00000000000003, not 222.0. This is a) wrong and b) not an integer.

Either you are aware of this and manually round the result to an integer, or you use fixed-point math.


They must be able to, since brokers often use fractions of cents for penny stocks.


Hypothesis: eventually all programming languages either add a library for fixed-place decimal maths, or revise the language to take care of it.

Ada had proper fixed point types in 1983 already, and added additional requirements for the implementation of decimal fixed point types as part of the 1995 language revision. Other languages ... may benefit from its example.


You'll never get good performance without using assembly. GMP probably walks all over this. Even my own bignum library probably walks all over it for addition and subtraction.


It's not a bignum library. It's just 64-bit integer fixed-point with a scale of 10^-7. So, perf-wise it's not going to have any issues, but it's also pretty easy to exceed the representable bounds.


Where can we find your bignum library?


You can find it on github along with a hundred others. But you should use GMP. It's better than anything you or I could do alone.


No sqrt, sin, cos functions?


Probably not high priority in financial computing.


No, this lib is not suitable for financial computing. It might be good for game development.


The github page comes right out and says "[it] is ideally suited for high performance trading financial systems". Clearly, that's the intention behind the library.

This would be essentially useless in game development (which is my field). In gamedev it's pretty much 32-bit floats all the way down, and very little else. Very small fixed point types are occasionally used in shaders (Cg provides a native 12-bit fixed point type) to represent colors, but a library like this is pretty much pointless.


As a long ago game developer, we never used floating point - I'm sure things have changed now - but we always used fixed point integer, with lookup tables for sin, etc.


The author can claim anything. But sorry, it is really not suitable for trading financial systems.


[flagged]


"Go" was such a poor language name choice, it pretty much goes by "Golang" to make it easier to search, etc.


I'm not sure Nim, Crystal, Rust, D (or C, C++) are that much better for searchability.


They really are. "Go" as a word is in way heavier usage than any of those.



Golang is the accepted label used for titles and other communications for disambiguation. Note the domain name that hosts the project.


[flagged]


Golang Golang Golang Golang

...and nothing bad happens.

Maybe Bob should have thought of a better name instead of trying to become the least successful prescriptivist since that guy insisting it’s “thou”, not “you”.


One wonders, if the language had been invented at Apple, might Rob have tried to name it 'Ap'.



