I can't begin to fathom a motivation for that. Can anyone think of a reason, or is this likely simply a mistake?



The site doesn't even seem close to being useful or complete yet. I think it's intentional.


They didn't want search traffic until the content and structure stabilize? Google et al. will keep crawling a URL for years after it starts returning 404s, so it might make sense to hold off until the first round of stabilization.
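If that was the goal, the robots.txt was presumably something like the blanket block below. I haven't seen the actual file, so this is just the standard way to tell well-behaved crawlers to skip an entire site:

  # Block all crawlers until the URL structure stabilizes
  User-agent: *
  Disallow: /

A rule like that is easy to relax later by narrowing the Disallow paths or removing the file once the site is ready for indexing.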


Maybe someone from Facebook made the robots.txt file, ayyy!


The site was in super-alpha. I think it was intentional.



