From my perspective, it's not useful to dwell on the fact that LLMs are often confidently wrong, or didn't nail a particular niche or edge-case question on the first try, and then discount the entire model class. That's expecting too much. Of course LLMs frequently fail to solve a given problem. The same is true of any other problem-solving approach.

The useful comparison is between how one would try to solve a problem before versus after the availability of LLM-powered tools. And in my experience, these tools are a very effective alternative to sifting through docs or manually googling quote-enclosed phrases with site:stackoverflow.com, and they improve my ability to solve the problems I care about.
