There are certainly people slapping AI-generated code into production in small projects without adequately checking it, which leads to things like inadequate input validation and, in turn, open XSS and injection vectors. With too little oversight in a more significant project, it is only a matter of time before this happens somewhere that results in, for instance, a DoS that affects many people, or a personal data leak.
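
To make the "inadequate input validation" point concrete, here is a minimal sketch of the kind of hole meant (Python/Flask, with a hypothetical /greet endpoint; not taken from any real project):

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/greet")
    def greet():
        # BUG: user-controlled input is interpolated straight into HTML,
        # so /greet?name=<script>alert(1)</script> runs in the visitor's
        # browser (reflected XSS).
        name = request.args.get("name", "")
        return f"<h1>Hello, {name}!</h1>"

        # Safer: escape before rendering, e.g.
        #   from markupsafe import escape
        #   return f"<h1>Hello, {escape(name)}!</h1>"

Code like the unescaped version above looks perfectly plausible at a glance, which is exactly why it slips through when nobody reviews it.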

Given the way LLMs are trained, it might be unlikely, but it is conceivable that if they see deliberate back doors injected into enough of the training data, they'll consider them a valid part of a certain class of solution and output the same themselves. It is a big step again from deliberate back doors to active malware, but not an inconceivable one IMO if large enough chunks of code are being trusted with minimal testing.
