Google understood a long time ago that they could not beat SEO, and they have been fighting a losing battle ever since. I remember a research presentation from them (might have been the late 00s or early 10s) that posed the question: we have an adversary with unlimited resources who can create as many webpages and servers as they want; how do we distinguish pages in "their" internet from pages in the "real" internet? The basic answer at the end of the seminar was: you can't. There is no information-theoretic way to do it from the structure of the graph alone. Instead you need to follow chains of trust, which means you need roots of trust, which means... well, look at the web today, dominated by a handful of known platforms.
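
To make the "chains of trust" point concrete, here's a rough sketch of TrustRank-style propagation. The graph, seed set, and parameters are made up for illustration; this is not Google's actual method, just the general idea of starting from hand-vetted roots and letting trust flow along links:

    # Minimal sketch of trust propagation from a set of "root of trust" pages,
    # in the spirit of TrustRank-style personalized PageRank.
    # Graph, seeds, and damping factor below are illustrative assumptions.

    def propagate_trust(links, seeds, damping=0.85, iters=50):
        """links: dict mapping page -> list of pages it links to.
        seeds: set of manually vetted "root of trust" pages.
        Returns a trust score per page; pages reachable only from the
        adversary's own subgraph end up with (essentially) zero trust."""
        pages = set(links) | {p for targets in links.values() for p in targets}
        base = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
        trust = dict(base)
        for _ in range(iters):
            # Teleport mass goes only back to the trusted seeds, never to spam.
            new = {p: (1 - damping) * base[p] for p in pages}
            for src, targets in links.items():
                if targets:
                    share = damping * trust[src] / len(targets)
                    for dst in targets:
                        new[dst] += share
            trust = new
        return trust

    links = {
        "root.example": ["real-a", "real-b"],  # hand-vetted seed page
        "real-a": ["real-b"],
        "real-b": ["real-a"],
        "spam-1": ["spam-2"],                  # adversary's link farm
        "spam-2": ["spam-1"],
    }
    print(propagate_trust(links, seeds={"root.example"}))

The adversary can generate as many pages and links among them as they like, but if no trusted page ever links into their subgraph, no trust ever flows there. Which is exactly why everything ends up anchored to a handful of known platforms.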
