
The big issue here is what you're going to use your numbers for. If you're going to do a lot of fast floating point operations for something like graphics or neural networks, these errors are fine. Speed is more important than exact accuracy.

If you're handling money, or numbers representing some other real, important concern where accuracy matters (most likely any number you intend to show to the user as a number), floats are not what you need.

Back when I started using Groovy, I was very pleased to discover that Groovy's default decimal number literal was translated to a BigDecimal rather than a float. For any sort of website, 9 times out of 10, that's what you need.

I'd really appreciate it if Javascript had a native decimal number type like that.




Decimal numbers are not conceptually any more or less exact than binary numbers. For example, you can't represent 1/3 exactly in decimal, just like you can't represent 1/5 exactly in binary.

When handling money, we care about faithfully reproducing the human-centric quirks of decimal numbers, not "being more accurate". There's no reason in principle to regard a system that can't represent 1/3 as being fundamentally more accurate because it happens to be able to represent 1/5.


Money is really best dealt with as integers: any time you'd use a non-integer number, use some fixed multiple that makes it an integer, then divide by the excess factor at the end of the calculation. For instance, computing 2.15% yearly interest on a bank account might be done as follows:

  DaysInYear = 366
  InterestRate = 215
  DayBalanceSum = 0
  for each Day in Year
    DayBalanceSum += Day.Balance
  InterestRaw = DayBalanceSum * InterestRate
  InterestRaw += DaysInYear * 5000
  
  Interest = InterestRaw / (DaysInYear * 10000)
  Balance += Interest
Balance should always be expressed in the smallest fraction of currency that we conventionally round to, like 1 yen or 1/100 dollar. Adding in half of the divisor before dividing effectively turns floor division into correctly rounded division.
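
A minimal runnable sketch of the same idea in TypeScript, assuming balances are already held in integer cents; the helper name and the sample balances are made up for illustration:

  // Fixed-point interest: everything stays an integer until the final division.
  // 2.15% is stored as 215, i.e. the rate scaled by 10,000.
  const daysInYear = 366;
  const interestRateScaled = 215;

  function yearlyInterestCents(dailyBalancesCents: number[]): number {
    const dayBalanceSum = dailyBalancesCents.reduce((acc, b) => acc + b, 0);
    let interestRaw = dayBalanceSum * interestRateScaled;
    interestRaw += daysInYear * 5_000;               // half the divisor: turns floor division into rounding
    return Math.floor(interestRaw / (daysInYear * 10_000));
  }

  // A constant balance of $1,000.00 (100,000 cents) on every day of the year:
  console.log(yearlyInterestCents(new Array(daysInYear).fill(100_000))); // 2150 cents, i.e. $21.50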


This is called fixed-point arithmetic:

https://en.wikipedia.org/wiki/Fixed-point_arithmetic

> In computing, a fixed-point number representation is a real data type for a number that has a fixed number of digits after (and sometimes also before) the radix point.

> A value of a fixed-point data type is essentially an integer that is scaled by an implicit specific factor determined by the type.


Yeah, though that notion tends to come with some conceptual shortcomings, like presuming a power-of-10 scaling factor. In the above code the scaling factor is implicitly different on leap years; applying such tricks is usually not possible with a fixed-point library or language construct.


Sounds like fractions cleanly describe what you're saying?

But that only holds in practice for a reasonable amount of simple arithmetic. Numerators and denominators tend to grow exponentially when numerical methods are applied repeatedly. This can happen if you're describing money and want to apply a complex numerical method from an economics article for whatever purpose. It might be worth it, but be careful not to carry ever-expanding fractions through your system.


This is only for dealing with actual money; generally our banking systems have rounding rules that prevent the fractions from getting out of hand.

If you are running an economic simulation you generally don't have to worry about rounding, the whole thing is only approximate anyway.


Yup. Once worked on a big project with one of the largest US exchanges. We were migrating large OTC (over the counter) CDS (credit default swap) contracts to standardized centralized contracts. We were testing with large contracts, millions of contracts worth trillions of dollars. I was off by a single penny and failed the test. Took a while to find, but it was due to a truncate-to-zero instead of a proper round. I was using a floating point type instead of a proper decimal. Don't think the language I was using had a proper decimal type at the time, though it does now, 11 years later.


>Money is really best dealt with as integers

I wish I could upvote you more than once. You are bang on.


The real lesson is, no matter what base (radix) you use, floating point math is inexact.

The value of floating point is that it can represent extremely large or extremely small values.

If you're working with currency / money, floating point is the wrong thing to use. For the entire history of human civilization, currency has been an integer quantity, possibly with a fixed decimal point; money has been counted in integers for as long as commerce has existed, long before computers.

If you're building games, or AI, or navigating to Pluto, then floating point is the tool to use.


> The real lesson is, no matter what base (radix) you use, floating point math is inexact.

This is just not true. If you add 1.5 + 4.25 with IEEE754, there is nothing inexact or rounded. That you cannot exactly represent 0.1 in base2 FP is a problem of base2, not FP.

You get inexact results with FP math for underflows, overflows, or if you don't have enough precision for the result (or an intermediate result). But the same is true for normal integer types.
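
For what it's worth, both points are easy to see from a JS console (Numbers are IEEE754 binary64 doubles):

  // Operands and result all have exact binary representations: no rounding anywhere.
  console.log(1.5 + 4.25 === 5.75);  // true
  console.log(0.5 + 0.25 === 0.75);  // true

  // 0.1 has no finite base-2 expansion, so the nearest doubles are used instead.
  console.log(0.1 + 0.2 === 0.3);    // false
  console.log(0.1 + 0.2);            // 0.30000000000000004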


I think what that commentator meant is that floating-point math is not an accurate model of rational-number arithmetic, not that there aren't certain computations that are in fact exact. (As you point out, there are: 1.5 + 4.25 is indeed exact)


> is that floating-point math is not an accurate model of rational-number arithmetic

Well, this is true. But integer math is also not an accurate model of rational-number arithmetic, yet nobody would claim that integer math is inexact.


Unsigned integer math (on typical machines) is an exact model of the ring of integers modulo 2^64. Floating point arithmetic is not an exact model of anything with nice properties that people are used to from algebra.


> Integer math (on typical machines) is an exact model of the ring of integers modulo 2^64.

And even this is only true if you restrict yourself to unsigned integers. For signed integers you have quirks (-0x8000.. = 0x8000..) or minefields (undefined overflow semantics in C, which can yield non-associativity, tests deleted by the compiler, etc.).

And I'd argue that whoever understands the ring of integers modulo 2^64, will also understand the IEEE754 semantics (which are, I agree, sometimes unfortunate. But not inexact).
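
The -0x8000.. = 0x8000.. quirk is easy to poke at by emulating 64-bit two's complement with BigInt.asIntN/asUintN (just an illustration of the wrapping behaviour, not of C's undefined-behaviour rules):

  const INT64_MIN = -(2n ** 63n);

  // Negating the most negative value wraps back to itself.
  console.log(BigInt.asIntN(64, -INT64_MIN) === INT64_MIN); // true

  // Unsigned arithmetic is plain modulo 2^64.
  console.log(BigInt.asUintN(64, -1n)); // 18446744073709551615n, i.e. 2^64 - 1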


> And even this is only true if you restrict yourself to unsigned integers

Fair point. I've edited my comment to include the word "unsigned".

> I'd argue that whoever understands the ring of integers modulo 2^64, will also understand the IEEE754 semantics

I'm an existence proof that that is not true :). Although I'm sure I could learn the IEEE754 semantics if I put enough effort into reading the spec.

But even if they don't know the word "ring", I think most programmers do understand how modulo arithmetic works, and they have algebraic intuitions about it that turn out to be true: both operations are commutative and associative, multiplication distributes over addition, equality of a formula involving * and + is true if it's true in the actual integers, and so on.


>> I'd argue that whoever understands the ring of integers modulo 2^64, will also understand the IEEE754 semantics

> I'm an existence proof that that is not true :). Although I'm sure I could learn the IEEE754 semantics if I put enough effort into reading the spec.

This was sloppy writing on my side. I wanted to say "whoever understands the ring of integers modulo 2^64, can also understand". And I'm sure you could :)

And you don't even have to read the spec. The core idea (mantissa, exponent, and sign) is super easy, and writing an FP emulation for addition and multiplication is a really nice task to understand what is actually going on. The only really unfamiliar idea is binary fractions, and I think this is a cool idea to understand on its own.
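
A small sketch along those lines, pulling the sign, exponent, and mantissa bits out of a double with a DataView (the helper name is mine; denormals and NaN are ignored):

  function decompose(x: number) {
    const view = new DataView(new ArrayBuffer(8));
    view.setFloat64(0, x);
    const bits = view.getBigUint64(0);
    return {
      sign: Number(bits >> 63n),
      exponent: Number((bits >> 52n) & 0x7ffn) - 1023, // remove the exponent bias
      mantissa: bits & ((1n << 52n) - 1n),             // 52 fraction bits (implicit leading 1)
    };
  }

  // 5.75 = 1.4375 * 2^2, and the fraction 0.4375 is 0.0111 in binary.
  console.log(decompose(5.75)); // { sign: 0, exponent: 2, mantissa: 1970324836974592n }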

> But even if they don't know the word "ring", I think most programmers do understand how modulo arithmetic works, and they have algebraic intuitions about it that turn out to be true: both operations are commutative and associative, multiplication distributes over addition, equality is true if it's true in the actual integers, and so on.

Well, that is all fine, but scrolling back to the great-great-great-grandparent: that would also be a completely wrong abstraction for modeling financial stuff. I'm not saying FP is the solution, but modulo arithmetic is for sure also not how you want to do finance :)


I think the big difference is that integers are accurate within a well-defined range, in a way that's easy to understand. Floating points work within a much larger range, but are inaccurate in most of that range, and it's harder for people to understand why.


A 32 bit floating point number can only have around 4 billion unique values, yet must cover numbers from around 10^38 down to very small decimals. 99.99999% of numbers in this range cannot be accurately represented in floating point form.

Compare that to a 32 bit integer, which can have 4 billion unique values, and supports numbers from 0 to 4 billion. It's a 1:1 mapping.
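
You can see how sparse that is with Math.fround, which rounds a double to the nearest 32-bit float:

  // float32 has 24 significant bits, so consecutive integers above 2^24 collide.
  console.log(Math.fround(16777216)); // 16777216 (2^24, exact)
  console.log(Math.fround(16777217)); // 16777216: the next integer up is not representable
  console.log(Math.fround(0.1));      // 0.10000000149011612, the nearest float32 to 0.1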


To be mathematically pedantic, 100% of numbers in that range cannot be accurately represented in floating point form.


> yet must represent numbers from 10^38

No, they don't have to represent every number in the range. I don't know where you get the idea that they must. An integer also can't represent all real numbers in its range.


There's no such thing as a "problem of base2". Base 2 is an ineffable fact of the universe, and it is neither virtuous nor problematic. All the problems you are describing are problems of floating-point arithmetic.


> There's no such thing as a "problem of base2".

That you cannot represent 1/3 as a non-periodic decimal number is a problem of base 10.

That you cannot represent 1/10 as a non-periodic binary number is a problem of base 2.

These are just mathematical facts. Maybe you don't like the word "problem", but it does not change that this is where we are.

The problem that you cannot represent 0.1 in base 2 FP, is a problem of base 2. You can represent it exactly in base 10 FP.


> Decimal numbers are not conceptually any more or less exact than binary numbers.

True but irrelevant. The problem isn't with the math fundamentals, it's the programmers.

The issue is that if you get your integer handling wrong, it usually stands out. Maybe that's because integers truncate rather than round, maybe it's because the program has to handle all those fractions of cents manually rather than letting the hardware do it, so the programmer has to think about it.

In any case, integer code that works in unit tests usually continues to work, but floating point code passing all unit tests will be broken on some floating point implementations and not others. The reason is pretty obvious: floating point is inexact, but the implementations contain a ton of optimisations to hide that inexactness, so it rarely rears its ugly head.

When it does, it's in the worst possible way. In a past day job I built cash registers and accounting systems. If you use floating point where exact results are required, I can guarantee your future self will be haunted by a never-ending stream of phone calls from auditors telling you that code which has worked solidly in thousands of installations over a decade doesn't add up. And god help you if you ever made the mistake of writing "if a == b" because you forgot a and b are floating point. Compiler writers should do us all a favour and not define == and != for floating point.

Back when I was doing this, no compiler implemented anything beyond 32 bit integer arithmetic; in fact there was no open source either. So you had to write a multi-precision library, and all expression evaluation had to be done using function calls. Despite floating point giving you hardware 56 bit arithmetic (which was enough), you were still better off using those clunky integers.

As others have said here: if you need exact results (and, yes currency is the most common use case), for the love of god do it using integers.


> If you're going to do a lot of fast floating point operations for something like graphics or neural networks, these errors are fine. Speed is more important than exact accuracy.

Um... that really depends. If you have an algorithm that is numerically unstable, these errors will quickly lead to a completely wrong result. Using a different type is not going to fix that, of course, and you need to fix the algorithm.


From your description, I fail to understand how it depends. You're saying that the algorithm is wrong, and changing the type doesn't help. If the type is not the issue, what difference does it make?


A single problem can be solved by using many different algorithms.

However, even though algorithm A and B are "correct" they can behave differently when rounding errors are introduced.

For example – if algorithm A uses

https://en.wikipedia.org/wiki/Kahan_summation_algorithm

and B uses naive summation then you can expect the end result of A to be more precise than the end result of B – even though both algorithms are correct.
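
To make the contrast concrete, here is a naive sum next to a Kahan (compensated) sum; the input is just an illustration:

  function naiveSum(xs: number[]): number {
    let sum = 0;
    for (const x of xs) sum += x;
    return sum;
  }

  function kahanSum(xs: number[]): number {
    let sum = 0;
    let c = 0;                 // running compensation for lost low-order bits
    for (const x of xs) {
      const y = x - c;
      const t = sum + y;
      c = (t - sum) - y;       // the part of y that was rounded away
      sum = t;
    }
    return sum;
  }

  const xs = new Array(10_000_000).fill(0.1);
  console.log(naiveSum(xs));   // visibly off from 1,000,000: error grows with the number of additions
  console.log(kahanSum(xs));   // within an ulp or two of 1,000,000, independent of length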


> and B uses naive summation then you can expect the end result of A to be more precise than the end result of B – even though both algorithms are correct.

Formally speaking, no. The problem can be defined precisely. At least one of the algorithms fails to solve the problem.

In practice of course, some amount of error may be acceptable.


In the world of money, it is rare to have to work past 3 decimal places. Bond traders operate in 32nds, so that might present some difficulties, but they really just want rounding at the hundredths.

Now, when you’re talking about central bank accruals (or similar sized deposits) that’s a bit different. In these cases, you have a very specific accrual multiple, multiplied by a balance in the multiple hundreds of billions or trillions. In these cases, precision with regards to the interest accrual calculation is quite significant, as rounding can short the payor/payee by several millions of dollars.

Hence the reason bond traders have historically traded in fractions of 32.

A sample bond trade:

‘Twenty sticks at a buck two and five eights bid’ ‘Offer At 103 full’ ‘Don’t break my balls with this, I got last round at delmonicos last night’ ‘Offer 103 firm, what are we doing’ ‘102-7 for 50 sticks’ ‘Should have called me earlier and pulled the trigger, 50 sticks offer 103-2’ ‘Fuck you, I’m your daughter’s godfather’ ‘In that case, 40 sticks, 103-7 offer’ ‘Fuck you, 10 sticks, 102-7, and you buy me a steak, and my daughter a new dress’ ‘5 sticks at 104, 45 at 102-3 off tape, and you pick up bar tab and green fees’ ‘Done’ ‘You own it’

That’s kinda how bonds are traded.

Ref:
Stick: million
Bond pricing: dollar price + number divided by 32
Delmonicos: money bonfire with meals served


I'm curious about the "off tape" part. Presumably this means not on a ticker or not made public somehow - how are these transactions publicized and/or hidden?


Hear, hear! It would be great if javascript had an integral type that we could build decimals, rationals, arbitrarily large integers and so on off of. It's technically doable with doubles if you really know what you're doing, but it would be so much easier with an integral type.


ES does have an arbitrarily large integer type, BigInt.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
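
A couple of lines showing what it gives you:

  const a = 9007199254740993n;        // 2^53 + 1, beyond what a Number can hold exactly
  console.log(a + 1n);                // 9007199254740994n, exact
  console.log(2n ** 128n);            // 340282366920938463463374607431768211456n, no rounding
  console.log(Number(a));             // 9007199254740992: converting back to Number drops the +1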


It's not supported everywhere though, so it's not like you could use it to actually build a library; you would need to use something that fell back to doubles anyway.


Because the double type can guarantee accurate reproduction of integer values up to the size of its significand (52 explicit bits, 53 including the implicit leading bit), you can effectively use them as integers up to that size. It would be nice to be able to just have an integer directly though, as that would be more efficient.

IIRC some JS engines are capable of detecting many circumstances where floating point is not needed, particularly for simple cases like loop counters, and their JIT compilers will produce code that uses integer values instead of floats for those purposes - but how reliable that is for cases any more complex than that I don't know.
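
Concretely (the 2^53 boundary is what Number.MAX_SAFE_INTEGER expresses):

  console.log(Number.MAX_SAFE_INTEGER);               // 9007199254740991, i.e. 2^53 - 1
  console.log(9007199254740992 === 9007199254740993); // true: both literals round to the same double
  console.log(Number.isSafeInteger(2 ** 53));         // false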


Though the lack of support in IE, current Edge, and Safari, blocks that from client-side use for many.

There are several BigInt libraries out there that you could use, though obviously this is not as convenient and even if they wrap BigInt when available will be less efficient.


Latest Edge dev preview has supported it since the switch to Chromium. The Chromium-based Edge launches on Jan 15th, at which point Edge will support it.

Safari (WebKit) actually has a fully working implementation, they just haven't shipped it yet. Search the release notes for "BigInt": https://developer.apple.com/safari/technology-preview/releas...


How is a true integer easier than just pretending a double is an integer? In both cases, you have to be aware of the range of values they can hold to prevent overflow (integers) or rounding (doubles), and you have to be careful not to perform operations that aren't valid for integers to avoid truncation (integers) or non-zero decimal places (doubles).


'Decimal' is a red herring. The number base doesn't matter. (And what are you going to do when you need currency conversions, anyway?)

Floats are a digital approximation of real numbers, because computers were originally designed for solving math problems - trigonometry and calculus, that is.

For money you want rational numbers, not reals. Unfortunately, computers never got a native rational number type, so you'll have to roll your own.
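
A bare-bones sketch of what rolling your own might look like (BigInt-backed; the names and API are made up, and a real implementation would also need subtraction, comparison, and formatting):

  type Rational = { n: bigint; d: bigint };

  const gcd = (a: bigint, b: bigint): bigint => (b === 0n ? (a < 0n ? -a : a) : gcd(b, a % b));

  function rational(n: bigint, d: bigint): Rational {
    const sign = d < 0n ? -1n : 1n;        // keep the denominator positive
    const g = gcd(n, d);
    return { n: (sign * n) / g, d: (sign * d) / g };
  }

  const add = (x: Rational, y: Rational): Rational =>
    rational(x.n * y.d + y.n * x.d, x.d * y.d);

  // 1/10 + 2/10 is exactly 3/10; no binary rounding anywhere.
  console.log(add(rational(1n, 10n), rational(2n, 10n))); // { n: 3n, d: 10n }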


Historically, it's correct-but-too-vague to say computers were for "solving math problems". Historic computer problems should be divided into two types: business problems and scientific/engineering problems. Business problems include things like tabulation and accounting. Programmable digital computers go back at least as far as UNIVAC I, in 1951 (using programmable digital computers for science doesn't go back THAT MUCH farther).

Prior to the IBM/360 (1964), mainframes sold for business purposes generally had no support for floating point arithmetic. They used fixed-point arithmetic. At the hardware level I think this is just integer math (I think?), but at a compiler level you can have different data types which are seen to be fractions with fixed accuracy. I believe I've read that COBOL had this feature since I-don't-know-how-far-back.

This sort of software fixed-point is still standard in SQL and many other places. Some languages, and many application-specific frameworks, have pre-existing fixed-point support. So it's also not accurate to say that you necessarily need to roll your own, though certainly in some contexts you'll need to.

And for money, you very much do not want arbitrary rational numbers. The important thing with money is that results are predictable and not fudgable. The problem with .1 + .2 != .3 is not that anyone cares about 4E-17 dollars, it's that they freak out when the math isn't predictable. Using rationals might be more predictable than using floats, but fixed-point is better still. And that's fixed-point base-10, because it's what your customers use when they check your work.


Agree that rational isn't it. But "reproducing the existing quirks" seems like an accurate description. If you want to pay 7% APR on month-end balances, then that's a real-number calculation, but to match what customers expect you need in addition to specify when to round off to cents.


I enjoy Haskell's approach to numbers.

The type of any numeric literal is any type of the `Num` class. That means that they can be floating point, fractional, or integers "for free" depending on where you use them in your programs.

`0.75 + pi` is of type `Floating a => a`, but `0.75 + 1%4` is of type `Rational`.


Hm... what happens if you've got a neural network trained to make decisions in the financial domain?

Is there a way to exploit the difference between numeric precision underlying the neural network and the precision used to represent the financial transactions?


Neural networks are by their very nature a bit vague, random and unpredictable. Their output is not suitable as a direct, real monetary value you can rely on. At best, they predict trends, approximations or classifications.


> I'd really appreciate it if Javascript had a native decimal number type like that.

It was proposed in the late '90s by Mike Cowlishaw, but the rest of the standards committee would have none of it.


A new proposal for adding arbitrary-precision Decimal support to JavaScript is being presented at TC39 this week.

Proposal: https://github.com/littledan/proposal-bigdecimal

Slides: https://docs.google.com/presentation/d/1qceGOynkiypIgvv0Ju8u...


I'd agree with saner defaults, especially in web development. I can understand that if you want to have strictly one number type it may make sense to opt for floating point to eke out the performance when you do need it, but I'd rather see high precision as the default (as most expect that you'd be able to write an accurate calculator app in JavaScript without much work) and opt in to the benefits of floating point operations.



