
This is exactly the thing LLMs seem to be worst at, even as they improve.

LLMs are confidently wrong rather than inquisitive/interested in being right. They theoretically have infinite patience, but in practice they have no patience and accept input at face value.

I don't doubt that these can be improved, but I also don't doubt that some people will be better at interacting with LLMs than others, and that getting good at using an LLM will take onboarding time that organizations would prefer to centralize.
