
The text says “in a way that would be significantly more difficult to cause without access to a covered model” and in another place mentions “damage by an artificial intelligence model that autonomously engages in conduct that would violate the Penal Code if undertaken by a human” so that probably doesn’t count. Though it might be open to future misinterpretation.



I don't agree with that reading. As long as my custom chemical-weapon instructions are not otherwise publicly available, it is surely more difficult to build the weapon without access to them.

The line about autonomous actions is only item C in the list of possible harms. It is separate from item A, which covers chemical weapons and other similar acts.


If someone is so keen on doing something illegal that they go to all that trouble just to get themselves in trouble with the law, maybe that's on them.


It's not the person who does the fine-tuning I'm worried about; it's the person who releases the base model, whom the law also makes liable.

The point is that, because fine-tuning can trivially induce behavior that satisfies the standard for "hazardous capability" in any model, the law effectively makes it illegal to release any covered model.


You're missing the point. Liability here would also fall on the open source developer who created a general purpose model, which someone else then went on to fine-tune and prompt to do something harmful.



