Came across this too the other day. For a moment I was hoping that they'd trained some machine learning algorithm on the past evolution of the sites in their archive in order to extrapolate how sites might change in the future, and thrown some futuristic design elements into the mix.

But the way this thing works is pretty satisfying too, in terms of conveying a message about our future, I mean.




Yeah, I thought it was going to be that too. They certainly have the data for it. Although that model would probably predict that almost every site just disappears in its future. Speaking of which: to imagine a world where information is inaccessible, I don't need to imagine a dystopian authoritarian future, just the shitty, haphazard one we have now, where things just disappear from the internet, which was the original battle archive.org was fighting.


Every URL eventually decays until it points to a parked domain loaded with ads.

Is that a theorem with a name attached to it already? I feel like it should be.


Imagining Google.com pointing to a parked domain with junk search results in 2065 brings a smile to my face.


I like the idea of having a model look at a page and then rework the look and style to be simple and free of ads or JavaScript.


reader mode
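
The crude, no-ML version is only a few lines. A minimal sketch in Python with BeautifulSoup (library choice and URL are my own, just to illustrate the idea):

    # pip install requests beautifulsoup4
    import requests
    from bs4 import BeautifulSoup

    html = requests.get("https://example.com").text
    soup = BeautifulSoup(html, "html.parser")

    # Drop scripts, styles, and the usual ad/nav chrome.
    for tag in soup(["script", "style", "iframe", "nav", "aside"]):
        tag.decompose()

    # What's left is roughly what reader mode renders as plain text.
    print(soup.get_text(separator="\n", strip=True))

A model could then restyle that stripped content however it likes.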


This is a great idea


That's a way more interesting idea. As it is, it feels like an email forward from grandma circa 2002.


I went in expecting Devs.


I hoped it was something that fanciful, but didn't expect it.

Your comment brought something to mind. I wonder if GPT-3 et al. could be used to invent or predict futures. I know AI is being applied to scientific domains with some success, and those spaces seem to have rules that can be followed to make new discoveries. Could we set an AI on certain social/economic/technological simulations and have it spit out various possible outcomes?

One sort of simulation that comes to mind is the Transition Integrity Project. Could an AI have arrived at realistic conclusions given the right rules?

https://en.wikipedia.org/wiki/Transition_Integrity_Project


GPT-3 and other deep models cannot predict the future. They can only generate alternative presents.


Why not? It is trained to guess the next word, and giving it a few lines of dialogue makes it continue the conversation.

The only obstacle I can see would be hitting the hard-coded input length limit.
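
The mechanic is easy to try at home. A minimal sketch using the Hugging Face pipeline, with GPT-2 standing in for GPT-3 (GPT-3 itself is API-only; the prompt and settings here are just my illustration):

    # pip install transformers torch
    from transformers import pipeline

    # Condition on a few lines of dialogue, then sample a continuation.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "A: What will the web look like in 2065?\nB:"
    out = generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.9)
    print(out[0]["generated_text"])

    # The length limit mentioned above is the model's context window:
    # 1024 tokens for GPT-2, 2048 for the original GPT-3.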


What you call the "next word" is actually the next word it has already seen in a similar context in the training set.

The novelty does not go beyond the variability of the training data.

And so deep learning cannot generate futures that are both surprising and general.

In the context of conversations, no doubt it can generate realistic answers. But they will just be regurgitations of the training data. They might seem novel to you because you haven't experienced every discussion in the training set, but they won't be novel to humankind and won't project it into an actual future.



