
Yeah, but the same could be said about memory accesses. You can always pessimize and assume every access misses the LLC, at which point it becomes a constant factor.



Yes, you also have to take memory access into account when doing a proper big-O analysis, but not for the reason you implied. I don't think you're thinking about time complexity correctly. The question is not how long a memory access takes, but how that time scales as the number of bits to be fetched increases. The naive answer for a von Neumann architecture is O(N), where N is the number of bits, although the answers here regarding the speed-of-light bounds are interesting.

For addition, the bit complexity is O(n) in the number of bits. For multiplication, the best known bound is O(n log n) (Harvey and van der Hoeven, 2019); before that the record was Schönhage–Strassen's O(n log n log log n). Either way, it is not constant.
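To make the "not constant" point concrete, here's a minimal sketch (names like `ripple_add` are illustrative, not from any library) that counts the full-adder steps a ripple-carry addition performs: the count equals the bit width n, so the work grows linearly as the numbers get wider.

```python
def ripple_add(a_bits, b_bits):
    """Add two equal-length little-endian bit lists.

    Returns (sum_bits, op_count). op_count is the number of
    full-adder steps, which is exactly the bit width n, i.e. O(n).
    """
    carry, out, ops = 0, [], 0
    for a, b in zip(a_bits, b_bits):
        total = a + b + carry   # one full-adder step per bit position
        out.append(total & 1)   # sum bit
        carry = total >> 1      # carry into the next position
        ops += 1
    out.append(carry)           # final carry-out bit
    return out, ops
```

Doubling the bit width doubles `ops`, which is the scaling the big-O analysis cares about, independent of how fast any single full-adder step is.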

However, in most cases, such as sorting, the memory accesses and multiplications/additions operate on a fixed bit width, so we can correctly treat them as O(1).
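A quick sketch of why the fixed-width assumption makes each operation O(1): if every value fits in one machine word (modeled here by masking to 64 bits; `sum_fixed_width` is a hypothetical name), then summing N values costs exactly N word operations, regardless of the values themselves.

```python
MASK = (1 << 64) - 1  # model a fixed 64-bit machine word

def sum_fixed_width(values):
    """Sum with 64-bit wraparound.

    Each addition is one O(1) word operation, so the word-op count
    is exactly len(values): O(N) total for N elements, with the
    per-element cost constant because the bit width is fixed.
    """
    acc, word_ops = 0, 0
    for v in values:
        acc = (acc + v) & MASK  # one fixed-width word operation
        word_ops += 1
    return acc, word_ops
```

With arbitrary-precision integers, by contrast, the cost of each addition would itself grow with the operands' bit lengths, which is exactly the distinction the comment above is drawing.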



