
As far as I can tell, mainline web search engines are mostly serving cached/canned responses these days. Those results get updated periodically, but it's not the same as the late 90s or early 2000s, when every search was run against large-scale content indexes. You can occasionally stack keywords or form a query unusual enough to force the engine to do real work, but getting that right seems to get harder over time, and the pool of content they index now seems broad but shallow.
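
To make the distinction concrete, here is a minimal sketch of the serving pattern being described: common queries are answered from a precomputed cache, and only misses fall through to the expensive index scan. All names and numbers here are illustrative assumptions, not any real engine's internals.

  import time

  CACHE_TTL_SECONDS = 24 * 60 * 60  # hypothetical: refresh canned results roughly daily
  _cache: dict[str, tuple[float, list[str]]] = {}

  def query_index(query: str) -> list[str]:
      """Stand-in for the expensive path: ranking against the full content index."""
      return [f"result for {query!r}"]

  def search(query: str) -> list[str]:
      key = " ".join(query.lower().split())  # normalize so near-duplicate queries share an entry
      cached = _cache.get(key)
      if cached and time.time() - cached[0] < CACHE_TTL_SECONDS:
          return cached[1]               # cheap path: serve the cached/canned response
      results = query_index(key)         # expensive path: actually hit the index
      _cache[key] = (time.time(), results)
      return results

Under a scheme like this, only queries odd enough to miss the cache ever touch the index, which matches the feeling that you have to work harder and harder to get "real" results.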

GitHub code search is still doing real searches and so is much more expensive to run.



