GPT-4's solutions are usually correct the first time around, or can be corrected by telling it what it did wrong. The ask-clarify-correct loop is still faster and less effort than doing it yourself.



That holds only if people continue to learn outside of LLMs and can tell when the model is wrong so they can correct it. The slope can be slippery. For those of us who grew up before the rise of LLMs, this isn't really something we have to worry about. But the next generation will not know what it was like without LLMs, kind of like the pre/post-internet and pre/post-smartphone generational splits.


