1) complexity analysis ignores constant factors, which can make a huge difference in practice, especially since real machines only ever see bounded inputs anyway
2) real life may influence the likelihood of hitting the best or worst case. I think data tends to arrive somewhat sorted in practice, so algorithms whose best case is sorted input tend to perform better than their average-case analysis suggests (see the sketch right after this list)
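To make both points concrete, here's a minimal sketch (my own toy example: the array size, input shapes, and the choice of insertion sort are arbitrary). Insertion sort is O(n^2) in the worst case, but on already-sorted input it does a single O(n) pass with tiny constants, so the two runs below should differ by orders of magnitude on most machines.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 50000

/* Textbook insertion sort: O(n^2) worst case, O(n) on sorted input,
 * tiny constant factors either way. */
static void insertion_sort(int *a, size_t n) {
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = key;
    }
}

static double time_sort(int *a, size_t n) {
    clock_t start = clock();
    insertion_sort(a, n);
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void) {
    int *sorted = malloc(N * sizeof *sorted);
    int *shuffled = malloc(N * sizeof *shuffled);
    if (!sorted || !shuffled) return 1;

    for (int i = 0; i < N; i++) {
        sorted[i] = i;            /* already sorted: hits the best case */
        shuffled[i] = rand();     /* random: roughly the average case */
    }

    printf("sorted input:  %.4f s\n", time_sort(sorted, N));
    printf("random input:  %.4f s\n", time_sort(shuffled, N));

    free(sorted);
    free(shuffled);
    return 0;
}
```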
Complexity analysis typically assumes an ideal Von Neumann machine. Systems programming embraces the discontinuities between registers, L1, L2, L3, ILP, branch prediction, and so on. (Maybe there's a good pun to be had that complexity analysis stays as far away from computer complexity as possible.) The simple model is essential, as it's the only way to create a body of proofs that will last. What's increasingly hard, though, is that there are oodles of papers of the sort demonstrating that one algorithm or data structure is better than another, yada yada, and there's usually some benchmarking in each, and that kind of empirical demonstration is now far less useful unless the paper's authors are good systems programmers and have given up on further low-level improvements.
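For what it's worth, those discontinuities are easy to see with a toy pointer-chasing loop (a sketch under my own assumptions: the working-set sizes, iteration count, and use of clock() are arbitrary, and the exact numbers vary wildly between machines). The per-access cost should step up each time the working set spills out of L1, then L2, then L3.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* rand() can be as narrow as 15 bits, so combine a few calls for big indices. */
static size_t wide_rand(void) {
    return ((size_t)rand() << 30) ^ ((size_t)rand() << 15) ^ (size_t)rand();
}

/* Average cost of one dependent load when the working set is n_elems pointers. */
static double chase_ns(size_t n_elems, size_t iters) {
    size_t *next = malloc(n_elems * sizeof *next);
    if (!next) return -1.0;

    /* Sattolo's shuffle builds a single random cycle, so every element is
     * visited and the hardware prefetcher can't guess the next address. */
    for (size_t i = 0; i < n_elems; i++) next[i] = i;
    for (size_t i = n_elems - 1; i > 0; i--) {
        size_t j = wide_rand() % i;
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    size_t pos = 0;
    clock_t start = clock();
    for (size_t i = 0; i < iters; i++) pos = next[pos];
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    if (pos == (size_t)-1) puts("unreachable");   /* keep the loop alive */
    free(next);
    return secs * 1e9 / (double)iters;            /* ns per access */
}

int main(void) {
    /* Working sets from 16 KiB (fits in L1) to 64 MiB (past most L3 caches). */
    for (size_t kib = 16; kib <= 64 * 1024; kib *= 4) {
        size_t n = kib * 1024 / sizeof(size_t);
        printf("%6zu KiB: %5.1f ns per access\n", kib, chase_ns(n, 5000000));
    }
    return 0;
}
```

The single random cycle is the important design choice here: with a plain linear traversal the prefetcher hides most of the latency, which is exactly the point made a couple of comments down.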
Exactly, it seems like the Von Neumann machine assumption is outdated, especially with modern supercomputers. Memory access itself isn't O(1); it's more like O(N^(1/3)), because space is a serious constraint: pack N memory cells at constant density into a 3D volume and the distance from a focal point to the rest of the cells grows with the cube root of N.
Yet, memory access is limited by both latency and bandwidth. Predictable access patterns (e.g. a linear scan) benefit from prefetching, which hides most of the latency that random accesses have to pay in full.
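A quick sketch of that effect (again my own toy example; the array size and timing method are arbitrary): summing the same array sequentially and then in a precomputed random order does the same O(n) work, but the linear scan lets the prefetcher stream data in while the random pass eats most of the memory latency on every miss.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (32u * 1024u * 1024u)   /* 32M ints = 128 MiB, larger than most L3s */

/* rand() can be as narrow as 15 bits, so combine a few calls for big indices. */
static size_t wide_rand(void) {
    return ((size_t)rand() << 30) ^ ((size_t)rand() << 15) ^ (size_t)rand();
}

int main(void) {
    int *data = malloc((size_t)N * sizeof *data);
    size_t *order = malloc((size_t)N * sizeof *order);
    if (!data || !order) return 1;

    for (size_t i = 0; i < N; i++) { data[i] = (int)i; order[i] = i; }

    /* Fisher-Yates shuffle: the second pass touches memory in random order. */
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = wide_rand() % (i + 1);
        size_t tmp = order[i]; order[i] = order[j]; order[j] = tmp;
    }

    long long sum = 0;
    clock_t t0 = clock();
    for (size_t i = 0; i < N; i++) sum += data[i];          /* linear scan */
    double seq = (double)(clock() - t0) / CLOCKS_PER_SEC;

    t0 = clock();
    for (size_t i = 0; i < N; i++) sum += data[order[i]];   /* random order */
    double rnd = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("sequential: %.3f s   random: %.3f s   (checksum %lld)\n",
           seq, rnd, sum);
    free(data);
    free(order);
    return 0;
}
```

Both loops are "the same" asymptotically; whatever gap shows up between them is purely the memory system at work.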