
If you really want to make it easy to understand, make it graphical.

That is: benchmark the code at varying input sizes and plot the results. Almost anyone should be able to understand that.

This might also reveal effects that Big O notation doesn't capture, since not all algorithms with the same complexity have the same performance. But I see that as a plus.
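
For instance, a minimal C++ sketch of that workflow. std::sort is just a placeholder for whatever code you actually want to measure; the output is CSV that any plotting tool can consume:

    // Time the operation at several input sizes and print CSV for plotting.
    // std::sort stands in for the code under test.
    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        std::mt19937 rng(42);
        std::printf("n,seconds\n");
        for (std::size_t n = 1000; n <= 10000000; n *= 10) {
            std::vector<int> data(n);
            for (auto &x : data) x = static_cast<int>(rng());
            auto start = std::chrono::steady_clock::now();
            std::sort(data.begin(), data.end());  // code under test
            auto stop = std::chrono::steady_clock::now();
            std::printf("%zu,%f\n", n,
                        std::chrono::duration<double>(stop - start).count());
        }
    }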

The coefficients are what can bite you.

Also amortized analysis, e.g. appending to a std::vector is O(n) in the worst case but amortized O(1): the O(n) spikes come roughly at n = r^k * initial_size for integer k, where r is the implementation's growth factor (2 in libstdc++ and libc++, 1.5 in MSVC; the golden ratio is the theoretical cutoff below which freed blocks can be reused).
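
It's easy to see where those spikes land; a quick sketch (the growth factor is implementation-defined, so the printed sizes will vary by standard library):

    // Print the sizes at which push_back reallocates, i.e. where the
    // O(n) spikes land.
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> v;
        std::size_t cap = v.capacity();
        for (int i = 0; i < 1000000; ++i) {
            v.push_back(i);
            if (v.capacity() != cap) {
                cap = v.capacity();
                std::printf("reallocated at size %zu, new capacity %zu\n",
                            v.size(), cap);
            }
        }
    }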

It isn't uncommon for an algorithm with "worse" asymptotic complexity to outperform the "better" alternative in practice, because the constant overhead of setting up the asymptotically faster algorithm eats all the performance gains. Sequential search is faster than binary search for short lists, for example. You need to test to see where the crossover happens on your platform.
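
A rough way to measure that crossover yourself (a sketch only; the numbers are entirely platform-dependent, which is the point):

    // Compare linear (std::find) and binary (std::lower_bound) search on
    // small sorted arrays to locate the crossover point on this machine.
    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    int main() {
        const int reps = 1000000;
        long sink = 0;  // accumulate results so the optimizer keeps the loops
        std::printf("n,linear_ns,binary_ns\n");
        for (std::size_t n = 2; n <= 256; n *= 2) {
            std::vector<int> data(n);
            std::iota(data.begin(), data.end(), 0);  // sorted: 0,1,...,n-1

            auto t0 = std::chrono::steady_clock::now();
            for (int r = 0; r < reps; ++r)
                sink += std::find(data.begin(), data.end(),
                                  r % (int)n) - data.begin();
            auto t1 = std::chrono::steady_clock::now();
            for (int r = 0; r < reps; ++r)
                sink += std::lower_bound(data.begin(), data.end(),
                                         r % (int)n) - data.begin();
            auto t2 = std::chrono::steady_clock::now();

            std::printf("%zu,%.1f,%.1f\n", n,
                std::chrono::duration<double, std::nano>(t1 - t0).count() / reps,
                std::chrono::duration<double, std::nano>(t2 - t1).count() / reps);
        }
        std::printf("(checksum %ld)\n", sink);
    }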

Not necessarily set-up time: either the constants can reverse the roles, or the fact that CPUs are highly complex while the model most often used in analyzing algorithms is much simpler and doesn't map well to things like caches, branch prediction, and the like.
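
The branch-prediction effect alone is easy to demonstrate with the classic sorted-vs-unsorted loop: identical O(n) work, very different wall time. (A sketch; an optimizer may emit a branchless conditional move and hide the effect, so compile without aggressive optimization to see it.)

    // The same O(n) loop, run on unsorted then sorted data. On random
    // data the branch mispredicts ~50% of the time; on sorted data the
    // predictor learns the pattern.
    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <random>
    #include <vector>

    static double time_sum(const std::vector<int>& data) {
        auto start = std::chrono::steady_clock::now();
        long sum = 0;
        for (int x : data)
            if (x >= 128) sum += x;  // data-dependent branch
        auto stop = std::chrono::steady_clock::now();
        std::printf("  (checksum %ld)\n", sum);  // keep the loop alive
        return std::chrono::duration<double>(stop - start).count();
    }

    int main() {
        std::mt19937 rng(1);
        std::vector<int> data(50000000);
        for (auto& x : data) x = static_cast<int>(rng() % 256);

        double unsorted = time_sum(data);
        std::sort(data.begin(), data.end());
        double sorted = time_sum(data);
        std::printf("unsorted %.3f s, sorted %.3f s\n", unsorted, sorted);
    }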
