
This is a very, very bad article that reeks of LLM-generation. If you want to actually understand this bill, read Zvi's analysis:

https://thezvi.substack.com/p/on-the-proposed-california-sb-...




I think Zvi is missing some critical points about the bill. For example:

>Before initiating the commercial, public, or widespread use of a covered model that is not subject to a positive safety determination, limited duty exemption, a developer of the nonderivative version of the covered model shall do all of the following:

>(1) Implement reasonable safeguards and requirements to do all of the following:

>(B) Prevent an individual from being able to use the model to create a derivative model that was used to cause a critical harm.

This is simply impossible. If you give me model weights, I can surely fine-tune them to cause a covered harm (e.g., to provide instructions for the creation of chemical or biological weapons). This requirement is unsatisfiable, and you're not allowed to release a covered model without satisfying it.


From Zvi's article:

> The definition of covered model seems to me to be clearly intended to apply only to models that are effectively at the frontier of model capabilities.

> Let’s look again at the exact definition:

> (1) The artificial intelligence model was trained using a quantity of computing power greater than 10^26 integer or floating-point operations in 2024, or a model that could reasonably be expected to have similar performance on benchmarks commonly used to quantify the performance of state-of-the-art foundation models, as determined by industry best practices and relevant standard setting organizations.

> (2) The artificial intelligence model has capability below the relevant threshold on a specific benchmark but is of otherwise similar general capability.

> That seems clear as day on what it means, and what it means is this:

> 1. If your model is over 10^26 we assume it counts.

> 2. If it isn’t, but it is as good as state-of-the-art current models, it counts.

> 3. Being ‘as good as’ is a general capability thing, not hitting specific benchmarks.

> Under this definition, if no one was actively gaming benchmarks, at most three existing models would plausibly qualify for this definition: GPT-4, Gemini Ultra and Claude. I am not even sure about Claude.

> If the open source models are gaming the benchmarks so much that they end up looking like a handful of them are matching GPT-4 on benchmarks, then what can I say, maybe stop gaming the benchmarks?

> Or point out quite reasonably that the real benchmark is user preference, and in those terms, you suck, so it is fine. Either way.

>> But notice that this isn’t what the bill does. The bill applies to large models and to any models that reach the same performance regardless of the compute budget required to make them. This means that the bill applies to startups as well as large corporations.

> Um, no, because the open model weights models do not remotely reach the performance level of OpenAI?

> Maybe some will in the future.

> But this very clearly does not ‘ban all open source.’ There are zero existing open model weights models that this bans.

So no, it does not seem that anything was missed.
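
For a rough sense of scale on that 10^26 threshold, here is a back-of-envelope sketch (mine, not from the bill or from Zvi) using the common ~6 * parameters * training-tokens approximation for training FLOPs; the model sizes and token counts are illustrative assumptions, not claims about any specific released model:

    # Back-of-envelope estimate of training compute vs. the bill's 10^26
    # operations threshold, using the common approximation
    #   training FLOPs ~= 6 * parameters * training tokens.
    # The example sizes and token counts are illustrative assumptions.

    THRESHOLD_FLOPS = 1e26

    examples = {
        "7B params, 2T tokens":   (7e9,  2e12),
        "70B params, 15T tokens": (70e9, 15e12),
    }

    for name, (params, tokens) in examples.items():
        flops = 6 * params * tokens
        print(f"{name}: ~{flops:.1e} FLOPs "
              f"({flops / THRESHOLD_FLOPS:.1%} of 10^26)")

On that estimate, even a 70B model trained on 15T tokens lands around 6e24 FLOPs, more than an order of magnitude under the threshold, which is consistent with Zvi's point that essentially no current open-weights model is caught by the compute prong.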


I honestly don't understand how this is responsive to what I wrote.


> reeks of LLM-generation.

> Answer.AI is a new kind of AI R&D lab which creates practical end-user products based on foundational research breakthroughs.

It's very likely the company / author is dogfooding.


Can we please focus on the substance of the article instead of trying to derail the discussion?



