
Complexity and cost are just two of the things that inhibit these attacks.

Three-letter agencies knowing who's buying suspicious quantities of known precursors stops quite a lot of the others.

AI in general reduces cost and complexity; that's kind of the point of having it. (For example, a chemistry degree is expensive in both time and money.) Right now, using an LLM[0] to decide what to get and how to use it is almost certainly more dangerous for the user than for anyone else — but this is a moving target, and the question has to be "how do we delay this capability for as long as possible, and at the same time how do we prepare to defend against the capability when it does arrive?"

[0] I really hope that includes even GPT-4 as it was before the red-teaming efforts to stop it giving detailed instructions for how to cause harm.