That's unconvincing. You could say the same thing about matrix multiplication with huge elements.



Yes, you could. But normally when people analyze the runtime of matrix multiplication, they assume the elements have constant size (hence constant-time arithmetic) and only the matrix dimension scales. You could also write a runtime that scales with the size of the elements: assuming integer elements of bit-length at most M, the naive algorithm does n^3 scalar multiplications, each costing roughly M log^2 M bit operations, so the total is something like O(n^3 · M log^2 M).
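To make that concrete, here is a minimal Python sketch (function names and the particular sizes are just illustrative): the same naive n^3 loop gets measurably slower as the element bit-length grows, even though n stays fixed.

    import random, time

    def matmul(A, B):
        # textbook O(n^3) algorithm; Python ints are arbitrary precision,
        # so each scalar multiply costs more as the operands get bigger
        n = len(A)
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    def random_matrix(n, bits):
        return [[random.getrandbits(bits) for _ in range(n)] for _ in range(n)]

    n = 32
    for bits in (64, 1024, 16384):
        A, B = random_matrix(n, bits), random_matrix(n, bits)
        t0 = time.perf_counter()
        matmul(A, B)
        # same n, larger elements -> noticeably larger runtime
        print(bits, "bit elements:", round(time.perf_counter() - t0, 3), "s")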

In the Fibonacci case it is not valid to assume your arithmetic is constant time, because you are doing arithmetic on integers that grow exponentially (their bit-length grows linearly in n). In general, it is not valid to assume you can do constant-time arithmetic on arbitrarily large numbers, because then you could solve many problems much faster than is possible on a word-RAM machine by encoding your problem as a real number and multiplying/adding.
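A minimal sketch of the Fibonacci point (Python, illustrative only): the textbook "O(n)" iterative loop does n additions, but F(n) has roughly 0.694·n bits, so the additions near the end each touch Θ(n) bits and the total bit-operation count is closer to quadratic.

    def fib(n):
        # the usual "linear-time" loop: n additions on ever-growing integers
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    # F(n) has about n * log2(phi) ~ 0.694 * n bits
    for n in (1000, 10000, 100000):
        print(n, "->", fib(n).bit_length(), "bits")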


> In general, it is not valid to assume you can do constant-time arithmetic on arbitrarily large numbers, because then you could solve many problems much faster than is possible on a word-RAM machine by encoding your problem as a real number and multiplying/adding.

I miss theory of computation. The reasoning was so simple and the proofs were so fun.



