Actually, you just need the Z ring enriched (if this is the right word) with sqrt(5). So instead of one float, you use a pair of arbitrary-precision integers. So

(a,b)+(c,d) = (a+c,b+d)

(a,b)/sqrt(5) = (b,a/5)

(a,b)(c,d) = (ac+5bd,ad+bc)

2phi = (1,1)

F(n) = ((2phi)^n - (2 - 2phi)^n) / (2^n * sqrt(5))
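
Written out in Python, these rules are only a few lines (a minimal sketch; the names are mine):

    # a pair (a, b) stands for a + b*sqrt(5), with a and b arbitrary-precision ints
    def add(x, y): return (x[0] + y[0], x[1] + y[1])
    def mul(x, y): return (x[0]*y[0] + 5*x[1]*y[1], x[0]*y[1] + x[1]*y[0])
    def divsqrt5(x): return (x[1], x[0] // 5)   # exact only when x[0] is a multiple of 5
    print(mul((1, 1), (1, 1)))   # (2phi)^2 = 6 + 2*sqrt(5)  ->  (6, 2)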

[edit] As ot said, the right word is "extended"




> Z ring enriched (if this is the right word) with sqrt(5)

I think "extended" is the right word (see http://en.wikipedia.org/wiki/Ring_extension)

It is commonly denoted as Z[sqrt(5)]


This is a great method! It avoids arbitrary-precision floating point arithmetic[1], which makes it more comparable with the method presented in the article.

As far as I can see, this algorithm is very similar to the matrix operations of the article, at least in terms of complexity.

[1] Only the very last step might involve fp arithmetic, because you'll have to convert the result pair (x,y) back to x+y*sqrt(5). However, we already know that the result has to be an integer, so its second part will always be 0 and no fp arithmetic is required.


And when you calculate this out, you internally wind up doing the same calculation as the matrix method.


How do you know that? Have you actually done the math?

Although I agree that all of this looks very similar, I'm not sure whether it really leads to exactly the same calculations.


If you compute the eigenvalue decomposition of

  A={{0,1},{1,1}}
i.e.

  A = V^-1 D V
where D is a diagonal matrix with the eigenvalues phi and 1-phi (= -1/phi) as the diagonal entries, then

  A^n = V^-1 D^n V
where D^n is diagonal with phi^n and (1-phi)^n as the diagonal entries. This gives exactly the well-known formula above.
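
A quick numerical check of that (floating point, so it's only exact for small n; the numpy calls are my own illustration, not from the article):

    import numpy as np
    A = np.array([[0, 1], [1, 1]], dtype=float)
    evals, V = np.linalg.eig(A)      # A = V D V^-1, columns of V are eigenvectors
    n = 10
    An = V @ np.diag(evals ** n) @ np.linalg.inv(V)
    print(np.round(An))              # [[34. 55.] [55. 89.]], i.e. [[F(9) F(10)] [F(10) F(11)]]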


Yes, I have actually done the math. On the surface it looks very different, but a lot of the same numbers show up in intermediate calculations.


Well, both compute the Fibonacci term at a given index, so how different could it be? Here's my implementation:

    def fib_fast2(n):
        assert n >= 0
        a, b = 2, 0 # invariant: a + b*sqrt(5) == 2*phi^k, where k = bits of n processed so far
        for bit in bits(n):
            a, b = (a*a + 5*b*b)>>1, a*b
            if bit: a, b = (a + 5*b)>>1, (a+b)>>1
        return b
Its runtime is almost identical to the one in the article - a hair slower (15.32s vs. 16.17s to compute fib 10M). They're probably related by some well-known relation between Fibonacci numbers.
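
(The bits() helper isn't shown here; something like the following, yielding the bits of n from most significant to least, matches the loop - though the article's actual definition may differ:)

    def bits(n):
        # bits of n, most significant first (assumed; the article's own helper may differ)
        return [int(b) for b in bin(n)[2:]] if n else []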


That’s very nice! And a little rearrangement will bring it from three to two “big multiplications”, making it faster than my routine:

  def fib_ring2(n):
    assert n >= 0
    a, b = 2, 0 # invariant: phi^k = (a + b*sqrt(5)) / 2, where k = bits of n processed so far
    for bit in bits(n):
        ab = a*b
        a, b = (a+b)*((a+5*b)//2) - 3*ab, ab
        if bit: a, b = (a + 5*b)//2, (a+b)//2
    return b


Oops, I should have found that. I looked and thought there was some reason it wouldn't work. Interesting post, thanks for the fun.


That method is, for me at least, really really fast.

Eleven seconds to compute the 1,000,000th Fibonacci number, compared with the naive method, which takes 143 seconds.

Code below:

    def add(a, b): return (a[0]+b[0], a[1]+b[1])
    def sub(a, b): return (a[0]-b[0], a[1]-b[1])
    def divsq5(a): return (a[1], a[0]//5)  # exact when a[0] is a multiple of 5
    def mul(a, b): return (a[0]*b[0]+5*a[1]*b[1],a[0]*b[1]+a[1]*b[0])
    def subi(x, a): return (x-a[0],-a[1])
    def pow(b, e):
        r = (1, 0)
        while e > 0:
            if e & 1 == 1:
                r = mul(r, b)
            e = e >> 1
            b = mul(b, b)
        return r

    twophi = (1,1)  # 2*phi = 1 + sqrt(5)

    def fib0(n):
        return divsq5(sub(pow(twophi, n), pow(subi(2,twophi), n)))[0]>>n

    def fib1(n):
        a = 0
        b = 1
        while n > 0:
            (a,b) = (b, a+b)
            n -= 1
        return a


Two optimizations you can apply:

1. Given how your big integer code is probably implemented, you probably want to distribute the divisions by 2 into the intermediate calculations, rather than just shifting by n at the very end.

2. Consider that (a, -b)(c, -d) = (ac + 5bd, -(bc + ad)). That means that the powers of (1,1) and (1,-1) are always related by negating the second component. Thus you only need to actually compute the power of (1,1); the difference of the two powers is just twice its second component.
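
For example, point 2 applied to your fib0, reusing pow and twophi from above (just a sketch):

    def fib0_opt(n):
        a, b = pow(twophi, n)      # (2phi)^n = a + b*sqrt(5)
        # (2-2phi)^n is (a, -b), so the difference is (0, 2b) and F(n) = 2b / 2^n
        return (2 * b) >> n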


Yeah, there are a whole lot of things I could do, but I just wanted a really quick program to see what the relative speeds were. And I have to say I was impressed.


I did this once for a Project Euler question. I called the function "awful-thing". It worked.

  ;this calculates (a+b√sqr)^n, using exponentiation by squaring
  (def awful-thing (n a b sqr)
    (afnwith (n n i a r b it 1 rt 0)
      (if (is n 0)
          (list it rt)
          (even n)
          (self (/ n 2)
                (+ (square i) (* sqr (square r)))
                (* 2 i r)
                it
                rt)
          (self (- n 1)
                i
                r
                (+ (* i it) (* sqr r rt))
                (+ (* i rt) (* r it))))))

  (def fib (n)
    (with (pow2 (expt 2 n)
           hh   (map - (awful-thing n 1 1 5) (awful-thing n 1 -1 5)))
      ;now hh = '(a b) where (fib n) = (a + b*root5)/(root5*pow2)
      ;a should = 0
      (/ (cadr hh)
         pow2)))


Thank you, that's a nice observation



