
Interestingly enough, NOLOH (http://www.noloh.com) made AJAX applications crawlable years ago. If you're interested in the specifics, see http://dev.noloh.com/#/articles/Search-Engine-Friendly/

Disclaimer: I'm a co-founder of NOLOH.




I took a look. Your approach is to detect requests from spiders and respond with plain HTML rather than the JavaScript-wrapped content a normal user would get. You address the obvious question, "But isn't that cloaking?", by saying no: the content itself is the same, so nobody should object. Fair enough; I happen to agree with you, but our opinion is irrelevant. What matters is whether Google considers this practice legitimate. Can you (or any HNer) tell me definitively whether they do or not? And has their policy changed recently with the introduction of this new crawlability spec?
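
For concreteness, the technique being described boils down to something like the following rough sketch. This is not NOLOH's actual implementation; the framework (Flask), the crawler User-Agent substrings, and the template names are all placeholder assumptions used purely for illustration.

    # Illustrative sketch only: serve known crawlers pre-rendered HTML,
    # serve ordinary visitors the JavaScript/AJAX application.
    # Flask, CRAWLER_AGENTS, and the template names are assumptions.
    from flask import Flask, request, render_template

    app = Flask(__name__)

    # Substrings commonly seen in crawler User-Agent headers (not exhaustive).
    CRAWLER_AGENTS = ("googlebot", "bingbot", "slurp", "baiduspider")

    def is_crawler(user_agent):
        ua = (user_agent or "").lower()
        return any(bot in ua for bot in CRAWLER_AGENTS)

    @app.route("/")
    def index():
        if is_crawler(request.headers.get("User-Agent", "")):
            # Same content, rendered as plain HTML with ordinary links
            # instead of JavaScript-driven navigation.
            return render_template("index_static.html")
        # Regular visitors get the AJAX application shell.
        return render_template("index_app.html")

The crux, as raised above, is that both branches must serve the same content; serving crawlers anything different from what users see is what would make this cloaking.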



