Common Lisp: Numbers (lispcookbook.github.io)
148 points by tosh on June 14, 2019 | 48 comments



The COMPUTABLE-REALS [0] package from the article is interesting because it lets you change the precision even after a computation is “performed”. It basically builds up lazy representations of computations that can (internally) recompute to any precision specified. You do all of your arithmetic as normal, and only when you need to do something with an actual decimal representation do you “request” it. You grab the actual approximation as a decimal number with the APPROX-R function, which provides as many digits as you please. There is no need to re-run all of the code which produced the (computable) number in the first place to reapproximate at higher precision.
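
For a sense of the workflow, a sketch (hedged: operator names are taken from the package's README, and I'm assuming APPROX-R's second argument is the requested precision in bits):

  (ql:quickload "computable-reals")
  (use-package :computable-reals)

  ;; No digits are computed here; this just builds a lazy representation.
  (defvar *x* (+r (sqrt-r 2) (sqrt-r 3)))

  ;; Precision is chosen only at "request" time, and can be raised
  ;; later without re-running the arithmetic above.
  (approx-r *x* 50)
  (approx-r *x* 500)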

This all directly falls out of a mathematical formalism called “computable real numbers” [1] which are the (countable!) set of real numbers for which an algorithm exists to compute the digits to any desired precision. (Usually binary digits are the standard way to write out the definition of a computable number.)
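
For reference, one standard way to state the definition in [1]: a real x is computable iff some computable f : N -> Z approximates it to every precision, i.e.

  \left| \frac{f(n)}{2^{n}} - x \right| \le \frac{1}{2^{n}} \quad \text{for all } n \in \mathbb{N}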

[0] https://github.com/stylewarning/computable-reals

[1] https://en.m.wikipedia.org/wiki/Computable_number


Sounds similar to how Haskell does all number computations by default. When you write 1/3, for example, the types are left generic/ambiguous via typeclasses until something forces them to be refined. 1/3 could be a floating-point number, a fixed-point number, a rational (a datatype holding the numerator and denominator), or any other user-defined datatype. You could put these computations in a library, keeping their types generic with typeclasses, and a user of your library could plug in their own number datatypes; your library would automatically support them in your number computations.


Not really: computable reals store the AST of the computation to lazily compute digits as they are needed (an interesting idea, but memory-hungry at scale), while the Haskell approach is more akin to a form of general type deduction.

Note that I first met computable numbers in Haskell: https://wiki.haskell.org/Applications_and_libraries/Mathemat...


http://hackage.haskell.org/package/exact-real-0.12.2

I’ve used this package to get exact real computations in Haskell, which get evaluated to n bits when needed for printing, equality checking, etc.


I don't understand how this is supposed to work. For one thing, iterative code wouldn't even do the same number of iterations when you change the desired precision, so merely plugging in higher-precision numbers wouldn't give the higher-precision approximation you'd be looking for. You'd have to actually re-execute the entire code with higher-precision numbers so that intermediate control-flow decisions use those numbers too, right?


If you made decisions based on the precision at some point, then you’re right: you’ve “baked” some aspect of a previous precision decision into the number. That doesn’t invalidate the number, as it reflects precisely the number you asked for; it just means you’re expressing your computation in a dubious manner.

Sometimes it’s unavoidable though, such as when checking equality where you have to call it quits at some point at some precision.


Generally, precision is pulled on-demand, as it is needed. If you compare two values as x < y, x and y will be evaluated to higher and higher precision until the answer is unambiguous. This can be an issue if the values are equal, since they often cannot be proven to be equal, so the evaluation may just outright fail to terminate.
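
A minimal sketch of that refinement loop (a hypothetical representation, not this package's internals: here a "real" is a closure from a precision in bits to a rational within 2^-bits of the true value):

  ;; LESS-P decides x < y by refining both sides until the enclosing
  ;; intervals no longer overlap; it gives up at MAX-BITS, since equal
  ;; inputs would otherwise loop forever.
  (defun less-p (x y &key (max-bits 1024))
    (loop for bits from 4 to max-bits by 4
          for eps = (expt 2 (- bits))           ; rational 1/2^bits
          for xa = (funcall x bits)
          for ya = (funcall y bits)
          do (cond ((< (+ xa eps) (- ya eps)) (return t))
                   ((< (+ ya eps) (- xa eps)) (return nil)))
          finally (return :unknown)))

  ;; sqrt(n) in this representation, via integer square roots.
  (defun sqrt-real (n)
    (lambda (bits)
      (/ (isqrt (* n (expt 4 bits))) (expt 2 bits))))

  ;; (less-p (sqrt-real 2) (sqrt-real 3)) => T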


That doesn't address the situation I outlined?


You store the expression. Because that expression is a computable real, you can calculate it to arbitrary precision.

This is quite inefficient for things like summing over a list. Or really, any kind of accumulation.
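
Concretely (hedging: this reflects the general technique, not necessarily this package's internals), each arithmetic operator allocates a lazy node instead of doing arithmetic, so a fold over a list retains an n-deep expression that every later precision request re-walks:

  ;; Each +R call builds a node, so the result keeps O(n) nodes alive,
  ;; and every APPROX-R on the sum walks all of them again.
  (defun sum-r (reals)
    (reduce #'+r reals))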


Interesting caveat to the computable reals: equality is hard. I need to look into this, but I'm pretty sure it's undecidable.

Edit: looking into it was just reading further on Wikipedia. Even ordering is undecidable.


Undecidable in principle but I wonder how much it comes up in practice.


In practice, you get guarantees “up to precision”. This is totally reasonable. If you’re doubtful, you can either prove theorems, or bump up the precision.

In general this works well.


Good point. It's not worse than floating point numbers.


That's interesting, because that's how humans actually do algebra. Who cares about the decimal expansion of some irrational if, in the end, we divide it by itself, for example?


Unfortunately, most infinite-precision real-number implementations (including the one here, I bet, but I haven't checked) will not recognize that you're dividing something by itself. So you'll get a super-inefficient implementation of the number 1 :-).

[EDITED to add:] I checked; the implementation here indeed won't recognize that you're dividing something by itself. For the avoidance of doubt, nor do I think trying to recognize such things would be an improvement.

(Also, if you ask for e.g. the digit immediately preceding the decimal point, you will wait for ever because no amount of precision will enable the implementation to tell whether your number is just less than 1 or 1 or just over. So don't ask it for that, ask it for a very close approximation and draw your own conclusions.)


The notion of "by itself" is dubious in this context at best. You can only check up to finite precision that two numbers are equal. For something more, you need computer algebra, which is a tall order.

Even if you have computer algebra, math says it's not possible to check that arbitrary expressions are equal; it's an undecidable problem. So with that, I don't fault a computable-reals package for not being able to detect that x == y in an expression x/y.


You can imagine an infinite-precision-real package that checks whether numerator and denominator are the exact same object and optimizes that case away.
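
A hedged sketch of that special case (DIV-R is a hypothetical wrapper here, not the package's API):

  ;; Object identity (EQ) is decidable even though value equality isn't.
  (defun div-r (x y)
    (if (eq x y)
        1           ; x/x = 1, assuming x is nonzero (see the 0/0 caveat below)
        (/r x y)))  ; otherwise build the lazy quotient as usual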

(And I wonder idly from time to time about making one that does know a bit about some particularly nice classes of number -- e.g., algebraic numbers -- and does all the things you'd like it to as long as you stay within such a class. Obviously as soon as you ask it for sqrt(2)+pi all that would go out the window.)


Also note that dividing something by itself is not necessarily 1.


I dunno, I think 0/0 is 1, but I cannot be sure.

https://www.wolframalpha.com/input/?i=lim+x%2Fx+as+x-%3E0


You can say 0/0 is 1, but it's also 2 and 3 and every other number, since x*0=0 for any x. That's why it's undefined.


Could we define it as a one-to-many function that maps 0/0 to the set of all complex numbers?

If 0/0 = Q, then 0 = 0 × Q. So we get to define this operation. If we treat 0 as an integer, then we could treat Q (a set) like a 1 × ∞ matrix, and we have scalar multiplication.

But 0 × Q = [0]' × Q, so 0 can be an infinite set of 0s. This is cyclical, I know. This cyclical nature leads to an interesting effect if we also define integer division; try it around 0 in Q.

We could also define the × as an inner (dot) product, or as a cross product.

This is fun!


Is there a number set larger than the complex numbers? If so, you can’t map it to any concrete thing, as it is really anything.

And I guess it is easy to define one more dimension of number.


It is worth noting that most scripting languages today are still restricted by 64-bit integer and IEEE floating-point limitations.

Python and Ruby are probably the only popular ones that ship by default with unlimited precision integers.

For arbitrary-precision reals in Python, there is mpmath (http://mpmath.org/). There is also the standard decimal module, which "provides support for fast correctly-rounded decimal floating point arithmetic": https://docs.python.org/3/library/decimal.html



Julia (as a "scripting" language for scientific purposes) also supports them natively, but defaults to normal int/float for performance reasons, which might create some gotchas for people who use the language but are not programmers.

https://docs.julialang.org/en/v1/manual/integers-and-floatin...

If I remember correctly, Elixir uses arbitrary precision integers by default, though I'm not sure about float.


I think Haskell allows for arbitrary-precision with the Integer type. It is limited only by the available memory in the machine.


Even though the term 'scripting language' is very loosely defined, I don't think Haskell qualifies as one.


Haskell has shell hash-bang scripting capabilities just like Python and Ruby do. See

https://stackoverflow.com/questions/8676331/how-to-run-a-has...



True, but the difference between C and Haskell hash-bangs is that GHC Haskell provides an interpreter.

A GCC hash-bang still implies an edit-compile-link-debug cycle. Just like Python and Ruby, a Haskell hash-bang accelerates the cycle by jettisoning the compile and link steps.


The Rosetta examples show a solution that only compiles the C file if it has not been previously compiled, or if it has a newer modification timestamp than the executable.


Doesn't the (necessarily loose) coupling of executables with source files create its own challenges? Or does it track the object file too, in which case we have 3 files, not 1, to manage simultaneously?

The benefit of scripts is that each file is a self-contained single source of truth.


In the case of one .c file, there doesn't have to be an object file; you can go straight to executable "cc prog.c -o prog".

Python implements a caching scheme for .pyc files; maybe it's not a scripting language ...

If I were to venture an opinion on why C (in its usual implementations) is not a scripting language it would be:

- no interactive prompt (REPL).

- no reloading of code (other than clumsy, fragile, platform-specific shared lib mechanisms).

- no execution of "top level forms": a program's entry point is a specific function (main on hosted implementations). When, e.g., Lisp code is compiled, the effect of loading the compiled file is still the same as of the source file: the top-level forms (now in a compiled form) are executed one by one. (What's a "script"? A file with instructions to be run one by one.)

- no late binding; in fact, no program at all, unless all external references are satisfied. Scripting languages can run code that contains references to unknown global variables, or calls to unknown functions. These can be loaded late.


> When, e.g., Lisp code is compiled, the effect of loading the compiled file is still the same as of the source file: the top-level forms (now in a compiled form) are executed one by one. (What's a "script"? A file with instructions to be run one by one.)

Fun fact: the SBCL implementation of Common Lisp actually adds a shebang to its compiled, binary Lisp files, so you can compile your Lisp script and then execute it from the shell.


My TXR Lisp adds a shebang to the compiled file, if the source file has one.

  $ cat > foo.tl
  #!/usr/bin/txr
  (pprinl "hello")
  $ txr -e '(compile-file "foo")'
  hello
  $ cat foo.tlo
  #!/usr/bin/txr
  (5 0 nil)
  ((2 3 #b'0200012400000004 02000010' #("hello") #(usr:pprinl)))
If the source file doesn't have a hash-bang line, then one isn't output.

You'd think SBCL wouldn't need this sort of thing since it compiles to native code?


I just improved this tonight. If the original program is unsuffixed, and so its hash-bang line uses the --lisp option to say "treat this as Lisp source", that option is translated to --compiled, so that if the compiled file is also unsuffixed, it will be properly treated as compiled.


For MP work in Python, gmpy2 (https://pypi.org/project/gmpy2/) comes with a much better selection of MP functions, backed by MPFR and MPC.

Also, the built-in Decimal class is horribly slow by comparison to gmpy2.


The purpose of the Decimal module is not really arbitrary-precision computation, but rather exact computation on decimal numbers wherever it matters (financial arithmetic, etc.). This module should never be used for scientific purposes.


Also (shameless self-promotion) python-flint! http://fredrikj.net/python-flint/


> Python and Ruby are probably the only popular ones that ship by default with unlimited precision integers.

Racket has exact numbers of unlimited size.

https://docs.racket-lang.org/reference/numbers.html


In Go "Numeric constants represent exact values of arbitrary precision"

https://golang.org/ref/spec#Constants


I don't think Go qualifies as a 'scripting language', even though the term is very loosely defined.


I missed the part about scripting languages. Is LISP a scripting language?


Yes.

1. Most Lisp dialects, including Scheme and ANSI Lisp, have interactive modes for evaluating expressions.

2. Most Lisp dialects have a mode of processing files in the same manner: reading individual forms and evaluating them immediately. I.e. it is not necessary to translate files into objects that have to be linked to produce an executable program.
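
For example, with SBCL's --script mode, a file of ordinary top-level forms runs top to bottom like any script (and the integers below are unbounded):

  ;; hello.lisp -- run with: sbcl --script hello.lisp
  (format t "2^200 = ~d~%" (expt 2 200))
  (format t "100! has ~d digits~%"
          (length (princ-to-string
                   (reduce #'* (loop for i from 1 to 100 collect i)))))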


This is only true for values calculated at compile time. Once the result is stored in a variable it is converted to a fixed-size representation. All runtime calculations and results are fixed size.


BigInt works in Firefox, Chrome, Opera, and upcoming Edge at the moment. Safari preview also has at least partial support.


I also highly recommend the rtg-math Common Lisp library, loadable with (ql:quickload :rtg-math): https://github.com/cbaggers/rtg-math

Incredibly fast, with support for quaternions, spherical coordinate systems, etc.


A nice thing about Common Lisp's unlimited-precision integers is that they made testing a Common Lisp compiler much easier.

One can easily generate random integer-valued Lisp forms without having to do much checking that integers are in range. In C and other languages with bounded integer sizes, it's much tougher (see Regehr et al.'s Csmith). It's also tough to reduce a failing test case without introducing bad behavior in C; in Lisp it's straightforward.
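
A minimal sketch of that kind of generator (hypothetical code, in the spirit of such testers, not the actual one): because no operator below can overflow, any random tree of them is a valid test case.

  ;; Generate a random integer-valued form; no range checks needed,
  ;; since CL integers are unbounded.
  (defun random-int-form (depth)
    (if (or (zerop depth) (zerop (random 3)))
        (- (random 201) 100)  ; leaf: literal in [-100, 100]
        (list (elt '(+ - * gcd lcm logand logior logxor) (random 8))
              (random-int-form (1- depth))
              (random-int-form (1- depth)))))

  ;; One use: check the evaluator against the compiler.
  ;; (let ((form (random-int-form 5)))
  ;;   (= (eval form) (funcall (compile nil `(lambda () ,form)))))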



