Yes, but we should expect that, the answers are in its training data.

The problem is that passing tests is an okay proxy for competence in humans, but if you think of an LLM as a giant library search engine, what it is actually competent at is identifying and regurgitating compiled phrases from its records.

Which is awesome. But it can't be a doctor.

Relatedly, a lot of the capital-R fanfic crowd points at output pretty clearly lifted out of The Metamorphosis of Prime Intellect or the like.

And it’s dope that it can do that!

But let's keep our heads about what it actually is.
