How is this any different from the mountains of shitty code produced every hour by millions of shitty developers everywhere?

This form of argument appears in every thread about LLMs: they are fallible; what will we ever do?!

As if people haven't been fallible in both the same and different ways all along. How have we ever managed...




It's different because people who use LLMs like this don't expect the output to be malicious or wrong.


I do. So do others.

Others will come to expect the same. Some won't.

Not sure that your generalization is true here.



