Well, this did happen in machine learning. There was a theoretical argument that neural networks "should" overfit terribly according to statistical learning theory, since they have a very large VC dimension, but in practice they don't. Practice really did race way ahead of theory in AI. Another example is the Chinchilla scaling law, which was discovered empirically rather than derived from first principles.
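
For context, the form the Chinchilla paper (Hoffmann et al., 2022) fit empirically is L(N, D) = E + A/N^alpha + B/D^beta, where N is parameter count and D is training tokens. A rough sketch of that fitted form, using the approximate constants reported in the paper (nothing here is derived; the numbers are just the published curve fit):

    # Chinchilla parametric loss fit from Hoffmann et al. (2022):
    #   L(N, D) = E + A / N^alpha + B / D^beta
    # N = model parameters, D = training tokens.
    # Constants are the approximate published fits, not derived from theory.
    E, A, B = 1.69, 406.4, 410.7
    ALPHA, BETA = 0.34, 0.28

    def chinchilla_loss(n_params: float, n_tokens: float) -> float:
        """Predicted pre-training loss for a given model size and token budget."""
        return E + A / n_params**ALPHA + B / n_tokens**BETA

    # Example: roughly Chinchilla itself (70B parameters, 1.4T tokens).
    print(chinchilla_loss(70e9, 1.4e12))

The point being that the exponents came out of fitting curves to training runs, not from any theory that predicted them in advance.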



That's different from complexity, though. We actually understand why some algorithms have better complexity (and better real-world performance) than others, whereas we mostly have no idea how intelligence works, or even how to define it.



