
1. The new Frontier Model Division is just receiving information and issuing guidelines. It’s not a licensing regime and isn’t investigating developers.

2. Folks aren’t automatically liable if their highly capable model is used to do bad things, even catastrophic things. The question is whether they took reasonable measures to prevent that. This bill could have used strict liability, where developers would be liable for catastrophic harms regardless of fault, but that's not what the bill does.

3. Overall it seems pretty reasonable that if your model can cause catastrophic harms (which is not true of current models, but maybe true of future models), then you shouldn’t release it in a way that predictably allows folks to cause those harms.

If people want a detailed write-up of what the bill does, I recommend this thorough one by Zvi. In my opinion this is a pretty narrow proposal focused on the most severe risks (much narrower than, e.g., the EU AI Act). https://thezvi.substack.com/p/on-the-proposed-california-sb-...




On point #3, as far as I can tell, the bill defines a "covered model" (a model subject to regulation under this proposal) as any model that can "cause $500,000 of damage" or more if misused.

A regular MacBook can cause half a million dollars of damage if misused. Easily. So I think any model of significant size would qualify.

Furthermore, the requirement to register and pre-clear models will surely come before any public release, and that means a loss of competitive cover for startups working on new projects. I can easily see disclosure filings being monitored constantly for each new AI development, leaving startups unable to build against larger players in private.



