
It's too bad that there's not a vaguely-somehow-related-but-not-really and impossible-to-censor service that retains stuff that sites have excluded using robots.txt or whatever.



The good news is that the Wayback Machine stopped making robots.txt retroactive. That handles most of the cases I was running into where content I wanted was removed from the archive.
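For anyone unfamiliar with how robots.txt exclusion works mechanically, here's a minimal sketch using Python's stdlib `urllib.robotparser`. The robots.txt content and the example.com URLs are hypothetical; a real crawler would fetch the live file from the site's `/robots.txt` path.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration; a real crawler
# fetches this from https://example.com/robots.txt
rules = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant crawler checks each URL against the rules before fetching.
print(parser.can_fetch("*", "https://example.com/private/page"))  # False
print(parser.can_fetch("*", "https://example.com/public/page"))   # True
```

The old Wayback Machine behavior applied these rules retroactively, hiding already-archived pages when a site later added a Disallow; the policy change means existing captures now stay visible.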


Is this supposed to be a veiled reference to archive.is? I'm not sure what point you're making.


No, I wasn't thinking of that site per se.

I was thinking more of the '90s-era dream of uncensorable "data havens". That led to Freenet, for example, which is slow and forgets content that isn't accessed, and to Tor onion services, which are more readily taken down.

But the problem is that there's no way to know in advance whether something is about to disappear from the Internet Archive. So you'd need someone inside who'd discreetly alert the backup service.




