
It's fascinating to think about. We put almost everything we experience of the world into writing. GPT-3 demonstrates that if the model is big enough, it can produce intelligent answers, much better than GPT-2's. It can fail spectacularly, but so can humans :)

The approach has also been shown to scale, so there's no reason we couldn't make it an order of magnitude bigger if we wanted to. That future system might be a game changer for search and AI.

Then we just have to ask great questions like "what is the meaning of life, the universe, and everything" and watch the loading animation for 7.5 million years.

It's probably not that far-fetched to think of this as the Fat Man bomb of AI. China and Russia have probably already allocated resources to build their own models. The arms race is on.




> GPT-3 demonstrates that if the model is big enough, it can produce intelligent answers

You could make the same statement about a lookup table.


And it would be a fine statement :) That table would be enormous compared to GPT and far less general. GPT is way more compressed.
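
A quick back-of-envelope sketch of that scale gap (a minimal Python sketch, assuming GPT-3's public specs: ~50k BPE vocabulary, 2048-token context window, 175B parameters; the comparison itself is purely illustrative):

    import math

    # A lookup table would need one row per possible input context.
    vocab_size = 50257    # GPT-3's BPE vocabulary size
    context_len = 2048    # GPT-3's context window, in tokens
    n_params = 175e9      # GPT-3's parameter count

    # Number of distinct contexts (vocab_size ** context_len rows),
    # computed in log space to avoid overflow:
    log10_rows = context_len * math.log10(vocab_size)
    print(f"lookup table rows: ~10^{log10_rows:.0f}")            # ~10^9628

    # GPT-3 covers the same input space with a fixed weight budget.
    print(f"GPT-3 parameters:  ~10^{math.log10(n_params):.0f}")  # ~10^11

The table needs on the order of 10^9628 rows to cover every possible prompt, while GPT-3 generalizes over the same input space with ~10^11 parameters.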



