That isn't a computerism. It is a feature of floating point arithmetic. Computers are perfectly capable of calculating correctly.
>should get the same right answer in a Jupyter notebook. The only person who should be exposed to the base 2
"Why does my notebook take hours to do a simple task"
>There are numerous social consequences of this that are harmful such as the perception that computer programmers are "grinds" and "nerds" and the idea that "idea people" are more worthy than the people that execute, etc.
The benefits of floating point arithmetic easily outweigh people having to learn it.
"Why does my notebook take hours to do a simple task"
“Because the chip it is running on doesn’t have native base-10 math”?
See also “why my notebook [is fast but] yields incorrect results”.
>The benefits of floating point arithmetic easily outweigh people having to learn it.
GP talks about saving non-low-level programmers from base 2 FP, not about removing it. CPUs could use an additional block (or mode) of base 10 exponent FP.
This and other geeky issues make programmers programmers instead of making everyone a programmer. The consequence of this is much heavier than any benefits of base 2 FP.
>“Because the chip it is running on doesn’t have native base-10 math”?
Base 10 fixes exactly zero of the problems with floating point.
Decimal floating point exists as a standard already. It is even part of the upcoming C standard.
But again, decimal arithmetic is just as weird as binary floating point arithmetic. The change of base is irrelevant, except for a few niche applications.
>The consequence of this is much heavier than any benefits of base 2 FP.
Totally false. Decimals don't fix floats; they are just as weird. Changing the base is irrelevant to the inherent properties of floats, and using base 10 instead of base 2 very likely gets you something even worse.
If you do not understand the basics of floating point arithmetic you should not be programming software. Tough world out there for people who refuse to learn, I know.
Your views are too extreme and elitist, exactly as mentioned before. I don’t think this position can be considered an argument; as far as this thread’s subject goes, it is a lost cause.
Floating point numbers are the most useful approximation to reals we have on a computer.
That isn't an "elitist" view. I don't get what you are on about; it seems ridiculous. People need to learn the systems they are working with. Any high schooler can understand what floating point numbers are and why they have flaws. It is really simple.
The problem with binary floating point is that it is input and output as decimal floating point.
Every floating point number is really a fraction of the form
    M / b^E
When you write
x = 0.1
you are really asking for 1/10. If b=2, however, you can only get denominators like 2, 4, or 8. 1/10 just doesn't exist in that number system, but there is a number A that you get when you ask for 0.1 that round-trips back to 0.1, and the same is true for 0.2 (B) and 0.3 (C). The trouble is that those substitute numbers aren't the real numbers, and
A + B != C
This has nothing to do with numeric precision, it's always going to be off a bit even if you are using 1024-bit floats or 1048576-bit floats.
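You can see those substitute numbers directly in Python: the standard `fractions` module recovers the exact value a float stores (a quick sketch, reusing the A, B, C naming from above):

```python
from fractions import Fraction

# Fraction(float) recovers the exact M / 2**E value the hardware stores.
A, B, C = Fraction(0.1), Fraction(0.2), Fraction(0.3)

print(A)                 # the true binary fraction hiding behind the literal 0.1
print(A.denominator)     # always a power of two, never a power of ten
print(A + B == C)        # False: the substitutes don't satisfy the identity
print(0.1 + 0.2 == 0.3)  # False, for exactly the same reason
```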
The problem punches above its weight because it is targeted right at two fault lines of the mind: (1) people flinch at inconsistencies; 0.1 + 0.2 = 0.3 is an identity, and when basic identities are wrong people feel uncomfortable and don't want to proceed. If you are working with an accountant, for instance, and they see something that is inconsistent in a way they've been trained has to be consistent, they will just stop until it is consistent. (2) A certain kind of laziness leads people not to get to the root of a problem like this and instead waste a huge amount of time and energy on non-solutions (rounding!) that are just like pushing a bubble around under a rug.
Note that decimal floating point does not require BCD. The mantissa and exponent of a floating point number are just integers, and unlike floats, integers don't depend on the base: a number written in base 7 or base 192 is still the same integer.
An obvious idea is to store the mantissa and exponent in base 2, but have the exponent denote a power of 10. One difficulty is the cost of sliding two numbers so they have the same exponent before you add them; for instance, to add
43
723
---
766
you have to bring the 723 and 43 to the same exponent before adding (multiply one side by a power of ten), and multiplication by powers of ten is a lot harder with base 2 math than with base 10 math. You can also represent decimal floating point numbers with a decimal mantissa and face the tradeoff of BCD numbers being wasteful of bits but the factors of 10 being easy to deal with. There is an efficiency gap between hardware binary floating point and hardware decimal floating point, but it's not as bad as you might think.
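A toy sketch of that alignment step in Python (dec_add is my own illustrative helper, not how any real hardware does it):

```python
def dec_add(m1, e1, m2, e2):
    """Add two toy decimal floats (mantissa, power-of-ten exponent)."""
    if e1 < e2:                # keep the larger exponent in (m1, e1)
        m1, e1, m2, e2 = m2, e2, m1, e1
    m1 *= 10 ** (e1 - e2)      # the costly step: scaling by a power of ten
    return m1 + m2, e2

print(dec_add(723, -2, 43, -2))  # (766, -2): same exponent, plain integer add
print(dec_add(43, -1, 723, -2))  # (1153, -2): 4.3 + 7.23 == 11.53
```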
> people flinch at inconsistencies; 0.1 + 0.2 = 0.3 is an identity, and when basic identities are wrong people feel uncomfortable and don't want to proceed. If you are working with an accountant, for instance, and they see something that is inconsistent in a way they've been trained has to be consistent, they will just stop until it is consistent.
This is an unsolvable problem. If your floats are base 10, "0.1 + 0.2 = 0.3" might work out, but that isn't going to fix "(1/3) * 3 = 1". And it gets even worse once you do anything involving π.
It is mathematically impossible for a computer to handle all* reals correctly, so something has to give. Binary floats are a reasonable approximation in practice, and anyone wanting something else is free to use arbitrary-precision libraries.
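Python's standard decimal module demonstrates both halves of this: one identity is repaired while another breaks.

```python
from decimal import Decimal

# Base 10 floats repair the decimal identity...
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# ...but 1/3 has no finite base 10 expansion, so this identity breaks instead.
third = Decimal(1) / Decimal(3)  # rounded to 28 digits by default
print(third * 3)                 # 0.9999999999999999999999999999
print(third * 3 == 1)            # False
```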
The guy you replied to was talking about base 10 floats. As you can easily see, his example has to work if the arithmetic has the "best possible" (correctly rounded) property and the rounding mode is round-to-nearest.
Python is unusable for numerics without numpy, which makes it a tiny bit less unusable.
The problems with floats are independent of the base.
>1/10 just doesn't exist in that number system
And 1/3 doesn't exist for b=10.
I really have no idea what you are on about. The number system you want does not exist; that is a mathematical theorem. Floats are the best approximation of real numbers we have, and choosing b=10 is dumb outside of specific applications.
Decimal floating point fixes nothing; it just moves the issues around to where they affect base 10 numbers less. It is exactly as broken as base 2. You still violate basic identities in b=10, just different ones.
Again, the number system you want doesn't exist, and it can't. You can never approximate the real numbers while keeping a constant number of bits, division, and arithmetic consistency.
The trouble is that we are using base 10 literals together with base 2 numbers. If you had base 35 literals and base 35 numbers, or whatever, it would be OK. All I'm asking for is literals that match the numbers I am using. If my float literals were like
431*2^-9
where 2^-9 is 1/512 there would be no semantic gap here.
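Python can already express this, incidentally: hex float literals name the stored value exactly (using the 431 * 2^-9 example):

```python
# A hex float literal spells the mantissa and exponent in base 2's own terms,
# so the literal and the stored number are the same thing.
x = float.fromhex("0x1.afp-1")  # == 431/256 * 2**-1 == 431 * 2**-9
print(x)                        # 0.841796875, exact, no surprise digits
print(x == 431 * 2**-9)         # True

# And .hex() reveals the true value behind a decimal literal:
print((0.1).hex())              # 0x1.999999999999ap-4
```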
The fact that you find it so hard to get what I am talking about isn't reflective of your intelligence; it is that this is something that strikes people where they are absolutely weakest, where the gap between the map and the territory can leave you absolutely lost. That's what is so bad about it.
Of course that is why computer professionals suffer with the "grind" and "nerd" slurs, because we tolerate things like the language that puts the C in Cthulhu.
Personally I think Cantor's phony numbers suck and it is an insult that we call them "real" numbers. It's practically forgotten that Turing discovered the theory of computation not by investigating computation but by confronting the problem that there are two kinds of "real" numbers: the ones that are really real because they have names (e.g. 4, sqrt(2), π) and the vastly larger set of numbers that can never be picked out individually (e.g. any description of how to compute a number has to be finite in length but the phony numbers are uncountable.)
I wish Steve Wolfram would grow some balls and reject the axiom of choice.
I am starting to come around to your argument. Having the internal representation in a different base than the written representation does produce problems (for example, printing a base 2 float in its shortest base 10 representation is not a trivial problem, with solutions only appearing in the late 90s; see Dragon4 and Grisu3 [1]).
It's like random number generators: computers are so powerful now that it makes sense to make the default PRNG cryptographically secure, to avoid misuse, and to leave it to the experts to swap in a fast (not cryptographically secure) PRNG in the rare cases where the performance is needed and unpredictability isn't, for example Monte Carlo simulations.
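Python's standard library already takes roughly this shape: a hard-to-misuse secure source by default in secrets, with the fast seedable generator as an explicit opt-in.

```python
import random
import secrets

# Default choice: OS-backed CSPRNG, safe for tokens and keys.
token = secrets.token_hex(16)
print(token)  # 32 hex characters of unpredictable randomness

# Explicit opt-in: fast, seedable Mersenne Twister for reproducible
# Monte Carlo work (NOT cryptographically secure).
rng = random.Random(42)
print([rng.random() for _ in range(3)])
```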
One could argue that for many use cases (most spreadsheets handling money amounts, for example) computers are powerful enough now that "non-integer numbers" should default to some base-10 floating point, so that internal and user representation coincide. Experts that handle applications with "real" numbers can then explicitly switch to "weird float". It is worth a thought.
Decimal floats are exactly as weird as binary floats.
The only difference is which numbers are representable. You still have:
- They do not obey the mathematical laws of real numbers
- Equality comparison is meaningless
- Multiple operations lead to unboundedly large errors
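The first point can be shown in a few lines with Python's decimal module; dropping the context precision to 4 digits just makes the rounding visible sooner.

```python
from decimal import Decimal, getcontext

getcontext().prec = 4  # tiny precision so the effect shows up immediately

a, b, c = Decimal("1000."), Decimal("0.4"), Decimal("0.4")

# Addition is no longer associative: each intermediate result is rounded.
print((a + b) + c)  # 1000 -- both 0.4s are rounded away
print(a + (b + c))  # 1001 -- 0.8 survives long enough to round up
```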
>Experts that handle applications with "real" numbers can then explicitly switch to "weird float". It is worth a thought.
The weirdness does not go away in decimal floats. It is exactly as weird.
>One could argue that for many use cases (most spreadsheets handling money amounts, for example) computers are powerful enough now that "non-integer numbers" should default to some base-10 floating point
Base 10 floats do not fix money amounts. (1/3) * 3 is not equal to 1 in base 10 floats. You cannot correctly handle money with floats, as money is not arbitrarily divisible. Changing to base 10 does not fix that.
The core problem with money is that the division operation is not well defined: $3.333... is not an amount of money that can exist. Even the mathematically correct operations are wrong, and you cannot fix that with imperfect approximations.
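In practice money code sidesteps division entirely: keep integer cents and allocate the remainder explicitly. A minimal largest-remainder sketch (split_cents is a made-up helper, not a library function):

```python
def split_cents(total_cents, parts):
    """Split an integer number of cents into `parts` shares that sum exactly."""
    base, remainder = divmod(total_cents, parts)
    # Hand out the leftover cents one at a time; no cent is created or lost.
    return [base + 1 if i < remainder else base for i in range(parts)]

print(split_cents(100, 3))       # [34, 33, 33] -- $1.00 three ways
print(sum(split_cents(100, 3)))  # 100
```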
>The trouble is that we are using base 10 literals together with base 2 numbers.
Now that is a really dumb ask. Designing computers around the number of fingers we have, for no reason, is surely insane. Again, b=10 DOES NOT FIX FLOATS. It has the exact same problems.
>Personally I think Cantor's phony numbers suck
Personally I think they are the greatest description of the continuum.
>I wish Steve Wolfram would grow some balls and reject the axiom of choice.
Yes. Really looking forward to such hits as "you can't split the continuum" and "points do not exist", contradicting 100% of human intuition.
I don't really want base 2 literals and base 2 numbers, I want base 10.
The point is that when the literals don't match your numbers you get particularly strange problems that I think scare people away from computers. We just lose them.
The other problems with floating point math create much less cognitive dissonance than that does.
If you do not understand base 2 and why computers use it I am glad that you are unable to effectively program a computer.
I actually hope that "we" lose the people who do not put in that tiny amount of effort to learn something so simple. They have absolutely no business developing software.
If you are unwilling to learn such absolutely basic concepts as what base 2 is, you really should be excluded.
Floats are probably the best approximation of irrational numbers near zero. Rational numbers don’t need to be approximated, unless you’re doing something like tight SIMD loops where the hardware itself limits precision, or letting an end user dictate how much precision he wants to see at the moment.
There are corners we don’t need to cut anymore, because a half century of Moore’s Law has already paid for better tools if only we would claim them.