The COMPUTABLE-REALS [0] package from the article is interesting because it lets you change the precision even after a computation is “performed”. It basically builds up lazy representations of computations that can (internally) recompute to any precision specified. You do all of your arithmetic as normal, and only when you need to do something with an actual decimal representation do you “request” it. You grab the actual approximation as a decimal number with the APPROX-R function, which provides as many digits as you please. There is no need to re-run all of the code which produced the (computable) number in the first place to re-approximate it at higher precision.
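Something like this, as a rough usage sketch (the -R operators and +PI-R+ are what I recall from the package's README; APPROX-R is the call mentioned above, and the exact meaning of its precision argument, bits vs. decimal digits, is documented there):

    (ql:quickload "computable-reals")
    (use-package :computable-reals)

    ;; Build a lazy computable real; nothing is approximated yet.
    (defvar *x* (*r (sqrt-r 2) +pi-r+))

    ;; Request an approximation only when you need one...
    (approx-r *x* 50)

    ;; ...and request a finer one later without re-running anything.
    (approx-r *x* 500)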
This all directly falls out of a mathematical formalism called “computable real numbers” [1], which are the (countable!) set of real numbers for which an algorithm exists to compute the digits to any desired precision. (The definition is usually written out in terms of binary digits.)
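Spelled out, one standard (and equivalent) way to state the definition from the linked article is a single line:

    x \in \mathbb{R} \text{ is computable} \iff \exists\, \text{computable } f\colon \mathbb{N}\to\mathbb{Q} \text{ with } |x - f(n)| \le 2^{-n} \text{ for all } n.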
Sounds similar to how Haskell handles number computations by default. When you write 1/3, for example, it leaves the type generic/ambiguous via typeclasses until something forces it to be resolved. 1/3 could be a floating-point number, a fixed-point number, a rational (a datatype holding the numerator and denominator), or any other user-defined datatype. You could put these computations in a library, keeping their types generic with typeclasses, and a user of your library could plug in their own number datatype; your library would automatically support that datatype in its number computations.
Not really: computable reals store the AST of the computation to lazily compute digits as they are needed (which is an interesting idea, but memory hungry at scale), while the Haskell approach is more akin to a form of general type deduction.
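To make the contrast concrete, here is a toy sketch of the “store the computation” idea; this is my own illustration, not how the COMPUTABLE-REALS package is actually implemented. A “computable real” is a function that, given a precision N (bits after the binary point), returns an integer A with A/2^N close to the value; arithmetic just wraps the operand functions, so the whole expression tree is retained as nested closures.

    ;; Toy sketch only -- not the package's representation.
    (defun const-r (q)            ; exact rational -> computable real
      (lambda (n) (round (* q (expt 2 n)))))

    (defun add-r (x y)            ; lazy addition
      (lambda (n)
        ;; ask the operands for two extra bits, then round back down
        (round (+ (funcall x (+ n 2)) (funcall y (+ n 2))) 4)))

    (defun approx (x n)           ; force an approximation to N bits
      (/ (funcall x n) (expt 2 n)))

    ;; (approx (add-r (const-r 1/3) (const-r 1/6)) 20)  =>  1/2 (within 2^-20 in general)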
I don't understand how this is supposed to work. For one thing, iterative code wouldn't even do the same number of iterations when you change the desired precision, so merely plugging in higher-precision numbers wouldn't give the higher-precision approximation you'd be looking for. You'd have to actually re-execute the entire code with higher-precision numbers so that intermediate control-flow decisions use those numbers too, right?
If you made decisions based on precision at some point, then you’re right: you’ve “baked” into the number some aspect of a precision decision you previously made. That doesn’t invalidate the number, since it reflects precisely the number you asked for; it just means you’re expressing your computation in a dubious manner.
Sometimes it’s unavoidable though, such as when checking equality where you have to call it quits at some point at some precision.
Generally, precision is pulled on-demand, as it is needed. If you compare two values as x < y, x and y will be evaluated to higher and higher precision until the answer is unambiguous. This can be an issue if the values are equal, since they often cannot be proven to be equal, so the evaluation may just outright fail to terminate.
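Roughly like this, using the toy closure representation sketched above (again my own sketch, not the package's code); note the cap, because if the two values are equal no amount of precision will ever separate them:

    ;; X and Y are toy computable reals: functions from a bit count N to
    ;; an integer approximation of value*2^N, off by ~1 unit at most.
    ;; Refine until the gap between approximations exceeds the error bound.
    (defun less-r (x y &key (start 8) (max-bits 4096))
      (loop for n from start to max-bits by 8
            for ax = (funcall x n)
            for ay = (funcall y n)
            ;; each side is off by at most ~1 ulp, so a gap > 2 is decisive
            when (> (abs (- ax ay)) 2)
              return (< ax ay)
            finally (error "Could not separate the two values at ~D bits." max-bits)))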
In practice, you get guarantees “up to precision”. This is totally reasonable. If you’re doubtful, you can either prove theorems, or bump up the precision.
That's interesting, because that's how humans actually do algebra. Who cares about the decimal expansion of some irrational if, in the end, we divide it by itself, for example?
Unfortunately, most infinite-precision real-number implementations (including the one here, I bet, but I haven't checked) will not recognize that you're dividing something by itself. So you'll get a super-inefficient implementation of the number 1 :-).
[EDITED to add:] I checked; the implementation here indeed won't recognize that you're dividing something by itself. For the avoidance of doubt, I don't think trying to recognize such things would be an improvement, either.
(Also, if you ask for e.g. the digit immediately preceding the decimal point, you will wait forever, because no amount of precision will let the implementation tell whether your number is just under 1, exactly 1, or just over. So don't ask it for that; ask it for a very close approximation and draw your own conclusions.)
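The workable version of that, in the toy representation sketched above: don't ask “are x and y exactly equal?”, ask “do they agree to within roughly 2^-n?” for an n you choose. That question always terminates.

    ;; Toy sketch: closeness check at a caller-chosen precision.
    ;; Each approximation is off by at most ~1 unit in the last place,
    ;; so a difference of 2 or less means agreement to within a few 2^-n.
    (defun nearly-equal-r (x y n)
      (<= (abs (- (funcall x n) (funcall y n))) 2))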
The notion of "by itself" is dubious in this context at best. You can only check up to finite precision that two numbers are equal. For something more, you need computer algebra, which is a tall order.
Even if you have computer algebra, math says it's not possible in general to check that arbitrary expressions are equal: it's an undecidable problem (Richardson's theorem). So with that, I don't fault a computable-reals package for not being able to detect that x == y in an expression x/y.
You can imagine an infinite-precision-real package that checks whether numerator and denominator are the exact same object and optimizes that case away.
(And I wonder idly from time to time about making one that does know a bit about some particularly nice classes of number -- e.g., algebraic numbers -- and does all the things you'd like it to as long as you stay within such a class. Obviously as soon as you ask it for sqrt(2)+pi all that would go out the window.)
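The object-identity check is easy to sketch as a hypothetical wrapper (not something the package does, and assuming /R is the package's division); note that EQ only catches the case where the numerator and denominator are literally the same object, not two separately built, mathematically equal expressions:

    ;; Hypothetical DIV-R* wrapper: skip building a quotient node when
    ;; numerator and denominator are the very same object.
    (defun div-r* (x y)
      (if (eq x y)
          1              ; or however the package represents the constant 1
          (/r x y)))     ; assuming /R is the package's division operator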
Could we define it as a one-to-many function that maps 0/0 to the set of all complex numbers?
If 0/0 = Q, then 0 = 0 × Q. So we get to define this operation. If we treat 0 as an integer, then we could treat Q (a set) like a 1 × ∞ matrix and we have scalar multiplication.

But 0 × Q = [0]' × Q, so 0 can be an infinite set of 0s. This is cyclical, I know. This cyclical nature leads to an interesting effect if we also define integer division. Try this around 0 in Q.
We could also define the × as an inner (dot) product, or as a cross product.
Julia (as a "scripting" language for scientific purposes) also supports them natively, but defaults to normal int/float for performance reasons, which might create some gotchas for people who use the language but are not programmers.
True, but the difference between C and Haskell hash-bangs is that GHC Haskell provides an interpreter.
A GCC hash-bang still means an edit-compile-link-debug cycle. Just as with Python and Ruby, a Haskell hash-bang accelerates the cycle by jettisoning the compile and link steps.
The Rosetta examples show a solution that only compiles the C file if it has not been previously compiled, or if it has a newer modification timestamp than the executable.
Doesn't the (necessarily loose) coupling of executables with source files create its own challenges? Or does it track the object file too, in which case we have 3 files, not 1, to manage simultaneously?
The benefit of scripts is that each file is a self-contained single source of truth.
In the case of one .c file, there doesn't have to be an object file; you can go straight to executable "cc prog.c -o prog".
Python implements a caching scheme for .pyc files; maybe it's not a scripting language ...
If I were to venture an opinion on why C (in its usual implementations) is not a scripting language it would be:
- no interactive prompt (REPL).
- no reloading of code (other than clumsy, fragile, platform-specific shared lib mechanisms).
- no execution of "top level forms": a program's entry point is a specific function (main on hosted implementations). When, e.g., Lisp code is compiled, the effect of loading the compiled file is still the same as of the source file: the top-level forms (now in a compiled form) are executed one by one. (What's a "script"? A file with instructions to be run one by one. See the sketch just after this list.)
- no late binding; in fact, no program at all, unless all external references are satisfied. Scripting languages can run code that contains references to unknown global variables, or calls to unknown functions. These can be loaded late.
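To make the top-level-forms point concrete, here is a tiny hypothetical Lisp "script":

    ;;; greet.lisp -- nothing but top-level forms (hypothetical example).
    ;;; (load "greet.lisp") runs them one by one, and loading the fasl
    ;;; produced by (compile-file "greet.lisp") has the same effect.
    (defvar *greeting* "Hello")
    (format t "~A, world!~%" *greeting*)
    (format t "2^100 = ~D~%" (expt 2 100))   ; bignums come for free, too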
> When, e.g., Lisp code is compiled, the effect of loading the compiled file is still the same as of the source file: the top-level forms (now in a compiled form) are executed one by one. (What's a "script"? A file with instructions to be run one by one.)
Fun fact: the SBCL implementation of Common Lisp actually adds a shebang to its compiled, binary Lisp files, so you can compile your Lisp script and then execute it from the shell.
I just improved this tonight. If the original program is unsuffixed, and so its hash-bang line uses the --lisp option to say “treat this as Lisp source”, that option is translated to --compiled, so that if the compiled file is also unsuffixed, it will be properly treated as compiled.
The purpose of the Decimal module is not really arbitrary-precision computation, but rather exact computation on decimal numbers wherever that matters (financial arithmetic, etc.). This module should never be used for scientific purposes.
1. Most Lisp dialects, including Scheme and ANSI Lisp, have interactive modes for evaluating expressions.
2. Most Lisp dialects have a mode of processing files in the same manner: reading individual forms and evaluating them immediately. I.e. it is not necessary to translate files into objects that have to be linked to produce an executable program.
This is only true for values calculated at compile time. Once the result is stored in a variable it is converted to a fixed-size representation. All runtime calculations and results are fixed size.
A nice thing about Common Lisp's unlimited-precision integers is that they made testing a Common Lisp compiler much easier.
One can easily generate random integer-valued Lisp forms without having to do much checking that integers are in range. In C or other languages with bounded integer sizes it's much tougher (see Regehr et al.'s Csmith). It's also tough to reduce a failing test case without introducing undefined behavior in C; in Lisp it's straightforward.
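A toy sketch of the idea (my own illustration, nothing like a real test harness): random integer expressions can't overflow in Common Lisp, so generation needs no range bookkeeping, and evaluated vs. compiled results should simply agree.

    ;; Generate a random integer-valued form of the given depth.
    (defun random-int-form (depth)
      (if (zerop depth)
          (- (random 2001) 1000)                    ; leaf: integer in [-1000, 1000]
          (list (nth (random 4) '(+ - * max))       ; integer-valued operators only
                (random-int-form (1- depth))
                (random-int-form (1- depth)))))

    ;; Differential check: EVAL the form and also compile it, then compare.
    ;; (EVAL may itself compile in some implementations; fine for a demo.)
    (defun check-one-form (depth)
      (let* ((form (random-int-form depth))
             (evaluated (eval form))
             (compiled (funcall (compile nil `(lambda () ,form)))))
        (assert (= evaluated compiled) () "Mismatch on ~S" form)))

    ;; (dotimes (i 1000) (check-one-form 6))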
[0] https://github.com/stylewarning/computable-reals
[1] https://en.m.wikipedia.org/wiki/Computable_number