
Exactly. So why are we now trusting "research" that tries to predict the future of other "research"? The linked article is just some estimates of the error built into current LLM models.

How can we extrapolate that to "well, gosh darn, these LLMs are already played out, guess we're all done"?



