
> LLMs won't get intelligent

Even assuming that is true: LLMs aren't all that exists in AI research, and just as LLMs turned out to be amazing at language, similar breakthroughs could be made in more abstract areas that use LLMs for I/O.

If you think ChatGPT is nice, wait for ChatGPT as a frontend for another AI that doesn't have to spend a single CPU cycle on language.
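Roughly the kind of split I mean, as a toy sketch (everything here is a placeholder, not a real API): the language model only parses the question and renders the answer, and a separate model that never sees words does the work in between.

    class ToyLLM:
        # Stand-in for a language model used purely for I/O.
        def parse(self, question):
            # natural language -> structured query (hard-coded for the sketch)
            return {"op": "add", "args": [2, 3]}
        def render(self, result):
            # structured result -> natural language
            return "The answer is %s." % result

    class ToySpecialist:
        # Stand-in for the "other AI" that never touches language.
        def solve(self, query):
            return sum(query["args"]) if query["op"] == "add" else None

    llm, specialist = ToyLLM(), ToySpecialist()
    print(llm.render(specialist.solve(llm.parse("What is two plus three?"))))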




The next AI wave hasn't even started. Imagine an LLM the size of GPT-4 but trained on nothing but gene-sequence completion.
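Purely as an illustration of how that framing works (the k-mer tokenization and window size below are arbitrary choices, not how any real genomics model is built): it's the exact same next-token objective LLMs are trained on, just over nucleotide tokens instead of words.

    def kmer_tokenize(seq, k=3):
        # Split a DNA string into overlapping k-mer "tokens" (one common scheme).
        return [seq[i:i+k] for i in range(len(seq) - k + 1)]

    def training_pairs(tokens, context=4):
        # Yield (context window, next token) pairs, same objective as text LM pretraining.
        for i in range(context, len(tokens)):
            yield tokens[i-context:i], tokens[i]

    sequence = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA"
    tokens = kmer_tokenize(sequence)
    for ctx, nxt in list(training_pairs(tokens))[:3]:
        print(ctx, "->", nxt)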

All the models being used in academia are basically toys; none of those groups are running hardware at a scale that even remotely touches Azure, Meta, etc., and the current massive global shortage of GPU compute is eventually going to clear up. We know models get A LOT better when they are scaled up and fed more data, so why wouldn't the same be true for problems other than text completion?


> LLMs aren't all that exists in AI research

Frankly, I'm a bit worried about all the rest now that LLMs have proved to be so successful. We might exploit them and arrive at a dead end. In the meantime, other potentially crucial developments in AI might get less attention and funding.




