
> Unless an AI becomes embodied and can do the same, I have no faith that it will ever "think" or "reason" as humans do. It remains a really good statistical parlor trick.

This may be true, but if it's "good enough," why does that matter? If I can't tell whether the user on Slack/Teams who closes their tickets on time with decent code quality is an LLM, then I really don't care whether they know themselves in a transparent, prelinguistic fashion.



