
> their own large language model LaMDA, that is apparently so good it convinced a senior AI researcher that it was sentient

It is certainly worth looking into the controversy around that particular engineer (was he actually a programmer?). There's plenty of room for interesting debate about defining and testing for sentience, and I'm glad he stirred that debate. But researchers with far stronger credentials criticized his reasoning, and I imagine that view is widely shared in NLP research.

I think the Washington Post did the initial reporting, and they covered it well, even criticizing his argument that e.g. the Turing test is a proper test of sentience. There's audio of their conversation in an episode of Post Reports.
