
It is possible, but it is somewhat impractical, because you need to convert numbers back and forth every time you touch them at the application level. Not worth the extra headache.



Speaking from experience, if you exclusively deal with integers internally and apply the appropriate formatting when outputting to the user (which you need to do regardless of internal representation), dealing with monetary data becomes so much easier. Integers are generally better supported, faster, and more convenient across most languages, databases, and serialization formats.
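
A minimal sketch of that split in Python (the cents-based units and the format_usd helper are illustrative assumptions, not a prescription):

    # Store amounts as integer minor units (e.g. cents) and only
    # format at the output boundary.
    def format_usd(cents: int) -> str:
        sign = "-" if cents < 0 else ""
        dollars, rem = divmod(abs(cents), 100)
        return f"{sign}${dollars:,}.{rem:02d}"

    price_cents = 1999 + 250          # arithmetic stays in integers
    print(format_usd(price_cents))    # -> $22.49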


+1 adding my anecdata to this.

If you don’t need irrational numbers (few financial applications do?), the tuple of (value, scale) can take you very far.


you never really deal with irrationals ... do you mean floating point?


Nope, specifically meant irrationals.

Floating point (up to a certain amount of absurdity) you can describe with integers (value, scale).

Rational numbers you can describe with integers (numerator, denominator, scale).

Irrationals are more difficult. That's why we try not to use them :). Hopefully your company's commission tiers are not defined in terms of circles or euclidean distances.
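
A rough sketch of that idea in Python (the function name, the use of Fraction for the rate, and the half-even rounding are my assumptions, not a standard):

    from fractions import Fraction

    # A fixed-point amount: integer `value` at `scale` decimal places,
    # i.e. value * 10**-scale.  (12345, 2) represents 123.45.
    def apply_rate(value: int, rate: Fraction) -> int:
        # Multiply by an exact rational rate, then round back to the
        # original scale (round() on a Fraction rounds half-even).
        return round(value * rate)

    # 123.45 * 1.5 = 185.175, rounded half-even to 185.18 at scale 2
    print(apply_rate(12345, Fraction(3, 2)))   # -> 18518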


How do you handle things like multiplying by some interest rate of 3.125, or computing (1 + 0.03125)^-354, for example?

Sure, you can round at the end, but then you have to worry about compounding errors.


I think that's just part of the process of deciding how small an increment each integer represents (dollars, cents, centi-cents, milli-cents).


Accounting rules often care more about consistency than precision per se. As long as the code follows accepted accounting conventions (say, banker's rounding), choosing a known precision works. GAAP and other standards often don't define every detail for, say, amortization, but once you pick a scheme, the code should always use that same method.
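
A sketch of what "pick a precision and a rounding rule, then apply it everywhere" can look like with Python's decimal module (the 2-decimal quantum, the rate, and the monthly posting schedule are made-up assumptions):

    from decimal import Decimal, ROUND_HALF_EVEN

    CENT = Decimal("0.01")

    def round_money(amount: Decimal) -> Decimal:
        # One rounding rule (banker's rounding), applied everywhere.
        return amount.quantize(CENT, rounding=ROUND_HALF_EVEN)

    balance = Decimal("1000.00")
    monthly_rate = Decimal("0.03125") / 12
    for _ in range(12):
        # Round at every posting step, consistently, instead of
        # carrying unrounded intermediates around.
        balance = round_money(balance * (1 + monthly_rate))
    print(balance)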


The numeric type, as supported by other SQL databases, does exactly this. I do not want to reinvent it at the application level.

I know how to do it, it is just not practical.


Do you then use a variable precision depending on the currency?


Not insisting, and this could be seen as bad practice, but I think I have experience worth sharing.

Unless you operate on currency polynomials of degree 2+, it may be okay to store money in minimal integer units at the application level too, adjusting it only in UI formatters. One should be careful with constants in code and external APIs, but that's nothing you can't get used to or catch with a test. We store timeouts in milliseconds, weights in grams, and so on anyway - not a new thing.

As an example, the primary database I maintain for our company can hold decimal .00 natively, but we use its fields as if they were multiplied by 1000, i.e. 12,345.00 is stored and operated on as 12.35 (and goes to ODS as int64 1235) (rounding the 5.00 off is fine for this particular accounting model). Nobody has made a three-digit mistake in a whole decade, because square/cubic money is not a thing naturally, and ratios & exchange rates work as usual. It is like storing cents, but the other way round.

Storing in-app amounts in floats has the downside of accumulating error in looped calculations and requiring periodic rounding (although it takes a really big dataset to accumulate enough error to matter). If a language has a money type and it can be programmed accordingly, the problem disappears. If not, it persists either way, in different forms.
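
A tiny illustration of that drift in Python (the payment amount and count are arbitrary):

    # Summing ten thousand payments of $0.10 each.
    float_total = 0.0
    for _ in range(10_000):
        float_total += 0.10

    int_total_cents = 10 * 10_000

    print(float_total)            # slightly off from 1000.0 in practice
    print(int_total_cents / 100)  # 1000.0 exactly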


I dunno... the only alternative seems to be storing as a float everywhere, since most coding languages don't support a decimal type... and using floats for currency honestly just feels scary to me, since they're not fixed-precision.

I mean I know there are min/max ranges in the float specification that can guarantee no loss of accuracy to two decimal places... but then I need to manually make sure there's never anywhere in the code that might ever sum up transactions to beyond that, or whatever. Or people thinking it's OK to divide and then multiply, which you're hopefully less likely to do if it's an integer.
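
For reference, that range is 2^53 for IEEE-754 doubles; a quick check in Python (whose floats are doubles):

    # Doubles represent every integer exactly up to 2**53; beyond
    # that, consecutive integers start to collide.
    limit = 2 ** 53
    print(float(limit) == float(limit + 1))   # True - precision lost
    print(limit == limit + 1)                 # False - Python ints are exact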

I feel much more peace of mind storing currency data as integers and doing the conversion. It feels quite natural really, since I'm already used to doing that with datetimes for different timezones.


Python has Decimal, JavaScript has BigInt, and so on. I disagree with you - most languages support arbitrary-precision numbers. Floats are not needed or wanted.

You do not want to invent your own arbitrary-precision numbers.

https://floating-point-gui.de/formats/integer/
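
For instance, Python's decimal can evaluate the kind of expression mentioned upthread at whatever precision you configure (the 50-digit context here is an arbitrary choice):

    from decimal import Decimal, getcontext

    getcontext().prec = 50   # 50 significant digits, chosen arbitrarily
    factor = (1 + Decimal("0.03125")) ** -354
    print(factor)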


What is the drawback of just using a string as storage? Together with something like bcmath for calculations.

Strings do not overflow & are easy to transfer.


Applications can handle integers just fine - even more of a reason to use them. 64-bit integers work everywhere seamlessly, and you only need to convert once: when showing them in the UI and/or collecting user input.



