> The creator of a model can not ensure that a model is never used to do something harmful – any more than the developer of a web browser, calculator, or word processor could. Placing liability on the creators of general purpose tools like these means that, in practice, such tools can not be created at all, except by big businesses with well funded legal teams.
This matches my thoughts on why this is ultimately a bad piece of legislation. It is virtually impossible to ensure that a piece of technology will not be used for "harmful purposes". I agree that such stipulations will be just another roadblock keeping everyone except "big businesses with well funded legal teams" from working on LLMs.
As I understand it, this law does not mandate that you ensure anything. It requires you to follow best practices (to be determined), report safety incidents, etc. You are not even liable for safety incidents, you just need to report them, although it may be embarrassing. Overall, it seems highly reasonable.
> requires you to follow best practices (to be determined)
Trigger happy regulation for a field that hasn't even come into full swing. It's indicative of an over-active immune system; lawmakers with nothing better to do.
Pass laws against improper use and go after the malicious users. Don't ban the technology, the research, or even the applications. (Of which there will be abundant good uses. Many of which we've yet to even see or predict.)
Our culture has become obsessed with regulating and limiting freedom on the very principle that it might be harmful. We should be punishing actual measurable, physical and monetary harms. Not imaginary or hypothetical ones.
If California passes this, AI companies should leave California behind.
> Trigger happy regulation for a field that hasn't even come into full swing. It's indicative of an over-active immune system; lawmakers with nothing better to do.
I guess they are damned if they do and damned if they don't.
We constantly complain about slow lawmaking, "Look at how out of touch Congress are! XYZ technology is moving so fast, and they're always 10-20 years behind!" Finally, someone is actually on the ball and up-to-date with a current technology, and now the other complainers complain that they're jumping the gun and regulating too soon. Lawmakers can't win.
> Trigger happy regulation for a field that hasn't even come into full swing.
Of little concern in the US legal system. Might be problematic in the EU perhaps, but in the United States the courts have consistently been tremendously deferential to the interests of small and large businesses vs consumers.
>Pass laws against improper use and go after the malicious users.
I think they are having to deal with things like sales to countries outside of their legal reach. So, while I understand the tack here, there's probably more to it than this.
> Other relief as the court deems appropriate, including monetary damages, including punitive damages, to persons aggrieved, and an order for the full shutdown of a covered model.
> A civil penalty in an amount not exceeding 10 percent of the cost, excluding labor cost, to develop the covered model for a first violation and in an amount not exceeding 30 percent of the cost, excluding labor cost, to develop the covered model for any subsequent violation.
That's like saying we should only punish after a bridge collapses, and that until then anyone should be able to build any bridge they like. You can argue that, but not many will agree.
Because we know how to build bridges so that they don't collapse. The laws of physics that govern bridge-building are well known. The equivalent for AI systems? Not really.
We don't know if there's even any danger. All statements so far of any danger are somewhere between science-fiction stories and anthropomorphizing AI as some kind of god. The equivalent of "if the bridge breaks down, someone can be hurt", namely a real, quantifiable danger is sorely lacking here.
Best practices will say things like "you should test it". While we are ignorant, there are just many reasonable things to do. Human biology is not completely understood, but that does not mean medical checklists are useless.
Test it how? What makes it fail? The ability to tell people how to make a bomb? Being able to say what (few) good things Hitler accomplished for Germany? Giving medical advice? Where’s the line?
One thing the law explicitly requires is full shutdown capability. So it should be tested whether a model can autonomously hack computers on the internet and propagate itself. In fact, Anthropic tested this. See https://metr.org/ for more.
It's not at all like saying that.
The concept of bridges is thousands of years old at this point with well established best practices, and a dense knowledge base on what can go wrong and how much damage can occur if built incorrectly. We aren't at the stage of "bridge innovation" where we don't even know what a bridge collapse looks like.
We know very well the cost, threat to lives, even timeline that a poorly built bridge can cause.
I'm not against legislation regulating AI, but it needs to be targeted toward clear problems e.g.: stealing copyrighted material, profiling crime, face recognition, self driving vehicles, automated "targeting" however you want to interpret that.
I want to point out that the above are some awful uses of AI that are leveraged mostly by closed, proprietary entities.
Nobody has been killed by AI, unless you're arguing it impacts mental health [1].
A better-fitting analogy I'd make is that sex causes disease and other negative externalities, so we should pass laws that force people to be married and licensed in order to have sex.
In any case, this bill is the walking epitome of something a "nanny state" might produce.
[1] TikTok and Instagram have far more impact on this, and we've yet to do anything there. We seem to be of the opinion that this should be an individual responsibility.
Some of us worry that billions of people will be killed by AI in the future -- possibly without anything that you or the average decision-maker might regard as a warning. (They're likely to be killed all at the same time.)
I.e., it is more like a large asteroid slamming into the Earth than a stream of deaths over time such as that produced by the deployment of the automobile in society (except that the asteroid does not have the capability of noticing that its first plan failed to kill a group of humans over there, and then devising a second plan for killing them).
Safety and alignment stopped being about preventing AI from killing all humans a while ago. Unless you think that "don't say anything potentially offensive" is in-scope with "don't kill humans and don't take over the world by any means necessary to carry out your prompt."
Very fair point/question, I should have explicitly drawn this link because my comment was quite ambiguous and making (bad) assumptions on shared context.
The relevance is, IMHO, that this bill is largely an ossification at the government level of the safety and alignment philosophy of the big corps. I'm guessing they mainly wrote this bill. It's not the specific words "safety and alignment" that matter, it's the philosophy.
If the bill were only covering AI killing machines I'd (probably) be in agreement with it, but it seems significantly more overreaching than that.
>If the bill were only covering AI killing machines I'd (probably) be in agreement with it, but it seems significantly more overreaching than that.
Just to make sure we are on the same page: my main worry is the projects ("deployments"?) that aren't intended to kill anybody, but one of those projects ends up killing billions of people anyways. It probably kills absolutely everyone. That one project might be trying to cure cancer.
The only way of not incurring this risk of extinction (and of mass death) that I know of is to shut down all AI research now, which I'm guessing you would consider "overreaching".
It would be great if there were a way to derive the profound benefits of continuing to do AI research without incurring the extinction risk. If you think you have a way to do that, please let me know. If I agree that your approach is promising, I'll drop everything to make sure you get a high-paying job to develop your approach. There are lots of people who would do that (and lots of high-net-worth people and organizations who would pay you the money).
The Machine Intelligence Research Institute for example has a lot of money that was donated to them by cryptocurrency entrepreneurs that they've been holding on to year after year because they cannot think of any good ways to spend it to reduce extinction risk. They'd be eager to give money to anyone that can convince them that they have an approach with even a 1% probability of success.
Agreed, and I think this bill probably would help against that, although indirectly by stifling research outside of big corps. You might be winning me over somewhat - stifling research outside of big corps does feel like a pretty low price to pay against the death/destruction of all of humanity...
I guess I need to decide how high I feel the risk is of that, and that I'm less sure of. Appreciate the discussion btw!
The idea that something with greater cognitive capabilities than us might be dangerous to us occurs to many people: sci-fi writers in large numbers to be sure, but also Alan Turing and a large fraction of currently-living senior AI researchers.
What really gets me concerned is the quality of the writing on the subject of how can we design an AI so that it will not want to hurt us (just as we design bridges so that we know from first principles they won't fall down). Most leaders of AI labs have by now written about the topic, but the writings are shockingly bad: everyone has some explanation as to why the AI will turn out to be safe, but there are dozens of orthogonal explanations, some very simplistic, none of which I want to bet my life on or the lives of my younger relatives.
Those who do write well about the topic, particularly Eliezer Yudkowsky and Nate Soares of the Machine Intelligence Research Institute, say that it is probably not currently within the capabilities of any living human or group of humans to design an AI to be safe (to humans) the way we design bridges to be safe. Our best hope, they argue, is that over the next centuries humankind will become cognitively capable enough to do so, and that in the meantime people stop trying to create AIs that might turn out to be dangerously capable. Because outside of actually doing the training run we have no way of predicting the effects on capability of the next architectural improvement or the next increase in computing resources devoted to training, that basically means stopping all AI research now, worldwide, and for good measure stopping progress in GPU technology.
Eliezer has been full-time employed for over 20 years to work on the issue (and Nate has been for about 15 years) and they've had enough funding to employ at least a dozen researchers and researcher-apprentices over that time to bounce ideas off of in the office.
How do you know? If we can agree something about AI, it is that we are ignorant about AI.
We were similarly ignorant about recombinant DNA, so Asilomar was very cautious about it. Now we know more, we are less cautious. I still think it was good to be cautious and not to dismiss recombinant DNA concerns as "science fiction".
"Best practices" means something sensible in most subfields of capital-E Engineering. (In fact, I think that's where the term originates from, with all other usages being a corruption of the original concept.)
In Engineering, "best practices" are the set of "just do X" answers that will let you skip deriving every answer about what material or design to use from first principles for cases where there's a known dominant solution. For example, "for a load-bearing pillar, use steel-reinforced concrete, in a cylindrical shape, with a cross-sectional diameter following formula XYZ given the number of storeys of the building." You can (and eventually must!) still do a load simulation for the building, to see that the pillar can hold things up without cracking — but you don't have to model the building when selecting what material to use; and you don't have to randomly fiddle with the shape or diameter of the pillar until the load holds. You can slap a pillar into the design and be able to predict that it'll hold the load (while not being overly costly in material use!), because "best practices."
1. The new Frontier Model Division is just receiving information and issuing guidelines. It’s not a licensing regime and isn’t investigating developers.
2. Folks aren’t automatically liable if their highly capable model is used to do bad things, even catastrophic things. The question is whether they took reasonable measures to prevent that. This bill could have used strict liability, where developers would be liable for catastrophic harms regardless of fault, but that's not what the bill does.
3. Overall it seems pretty reasonable that if your model can cause catastrophic harms (which is not true of current models, but maybe true of future models), then you shouldn’t be releasing models in a way that can predictably allow folks to cause those catastrophic harms.
If people want a detailed write up of what the bill does, I recommend this thorough writeup by Zvi. In my opinion this is a pretty narrow proposal focused at the most severe risks (much more narrow than, e.g., the EU AI act).
https://thezvi.substack.com/p/on-the-proposed-california-sb-...
On point #3, as far as I can tell, the bill's criteria define a "covered model" (a model subject to regulation under this proposal) as any model that can "cause $500,000 of damage" or more if misused.
A regular MacBook can cause half a million dollars of damage if misused. Easily. So I think any model of significant size would qualify.
Furthermore, the requirement to register and pre-clear models will surely precede open data access, and that means a loss in competitive cover for startups working on new projects. I can easily see disclosure sites being monitored constantly for each new AI development, rendering startups unable to build against larger players in private.
Your argument is meaningless if you don't specify what threshold there should be for harm.
Otherwise you also have to complain about the stifling of open source bioagent research, open source nuclear warheads, open source human cloning protocols.
Those are also all dual-use technologies that are objectively morally neutral.
Laws should be about the outcome, not about processes that may lead to an outcome. It is already illegal in California to produce your own nuclear weapon. Instead of outlawing books, because they allow research into building giant gundam robots, just outlaw giant gundam robots.
> Laws should be about the outcome, not about processes that may lead to an outcome
They have to be about both because outcomes aren’t predictable, and whether something is an intermediate or ultimate outcome isn’t always clear. We have a law requiring indicator use on lane change, not just hitting someone while lane changing, for example.
But even this example is a ban on a specific action: changing lanes without using a legally defined indicator with a specific amount of display time.
The equivalent would be if the law simply said, "don't change lanes unsafely" but didn't define it much beyond that, and left it to law enforcement and judges to decide, so anytime someone changed lanes "unsafely" there's now extremely unknown legal risk.
Laws also should be possible (preferably easy) to implement. Why does the DMCA ban circumvention tools? Circumvention is already illegal, and arguably it is piracy that should be outlawed, not the tools that enable it. The reason is that piracy tools are considerably easier to regulate than piracy itself.
The DMCA ban on circumvention has been both stunningly useless at discouraging piracy and effective at hurting normal users, including such glorious stupidity as being used to prevent third-party ink cartridges.
> Laws should be about the outcome, not about processes that may lead to an outcome.
Some outcomes are pretty terrible, I think there are valid instances where we might also want to prevent precursor technology from being widely disseminated to prevent them.
There are certainly types of data that are already prohibited for export and dissemination. In this case, I would argue no new law is needed, the existing laws cover the export or dissemination of dual use technologies. If the LLM becomes dual-use/export-restricted/etc because it was trained on export-restricted/sensitive/etc data, it is already illegal to disseminate it. Enforce the existing law, rather than use taxpayer money to ban and police private LLM training because this might happen.
> Otherwise you also have to complain about the stifling of open source bioagent research, open source nuclear warheads, open source human cloning protocols
No, actually you don’t.
This is just a slippery slope that suggests that any of these examples are even remotely comparable to AI. There is room for nuance and it’s easy to spot the outlier among bioagent research, nuclear warheads, human cloning, and generative artificial intelligence.
Unfortunately, I think you will see this differently in a few years, that AI is not an outlier (in the fortunate case where there were enough "close calls" that we're still around to reflect on this question).
Agree that artificial intelligence is an outlier. I think it is the technology with the greatest associated risk of all technologies humans have worked on.
It’s unhelpful to the argument when you do this, and it makes our side look like a bunch of smug self entitled assholes.
The reality is that AI is disruptive but we don’t know how disruptive.
The parent post is clearly hyperbole; but let’s push back on what is clearly nonsense (ie. AI being more dangerous than nuclear weapons) in a logical manner hm?
Understanding AI is not the issue here; the issue is that no one knows how disruptive it will eventually be; not me, not you, not them.
People are playing the risk mitigation game; but the point is that if you play it too hard you end up as a Luddite in a cave with no lights because something might be dangerous about “electricity”.
I disagree. Debating gives legitimacy, especially when one begins to debate a throwaway comment that doesn't even put an argument forward. The right answer is outright dismissal.
Someone who creates very dangerous items needs to take responsibility for them. Or their production needs to be very heavily regulated. That is just a reality. We don't let companies sell grenades on street corners.
The running away from responsibility is one of the things I like least about big tech.
Sure, ultra-hazardous activities are regulated differently from other activities, including under tort law, but generic AI tools are not ultra-hazardous by nature. No piece of software is, until it is connected in some way to real world effects. Take an object-detection algorithm. There's absolutely nothing inherently dangerous about identifying objects in a video stream. But once you use the algorithm to create an automatic targeting system for a drone with a grenade strapped to it, it does become hazardous. But that's no reason to regulate the algorithm as if it were hazardous itself, at least no more so than it is to regulate the drone. As you point out, we regulate hand grenades. We do not regulate the boxes hand grenades are delivered in, or the web framework used for building a website that can be used to purchase hand grenades.
All technology has good and bad uses and you can’t hold the maker accountable for all of those. At some point you have to hold users and buyers accountable or just stop developing anything.
When a person uses a car to drive into a crowd, do we blame the automobile manufacturer? Do you blame Kali Linux when someone uses it to hack a remote system? What about Apple when an iPhone is used to call in a threat to a school?
After all of the times that I have heard this argument, I now believe that the lesser evil is allowing people to sell grenades on street corners. This logic causes complacency in users of products and removes any responsibility on the part of malicious actors who still find ways to use the "softened" version of these products badly. They will now just blame the people who didn't "soften" them properly.
So no thank you, bring back responsibility to end users of products, and allow suppliers to develop the best capabilities they can.
This is a strawman argument. LLMs, like books, are not inherently dangerous. Grenades are, and lack any legitimate purpose beyond indiscriminate killing.
LLMs are functions of their training data, nothing more. This is evidenced by how we see very different model architectures produce essentially the same result. All of that training data is out there, on the internet, in books; none of that “dangerous” knowledge is banned or regulated, nor should it be.
Given the number of AI deaths (a handful, if we're counting very generously) and gun deaths, or car deaths, or even deaths caused by refusal to vaccinate, I'm fascinated we're choosing autocomplete on steroids as a "very dangerous item".
By all means, let's have responsibility for actual outcomes. That bill is talking about imagined outcomes.
The definition of harm is buried low in the bill, here's the list:
(A) The creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties.
(B) At least five hundred million dollars ($500,000,000) of damage through cyberattacks on critical infrastructure via a single incident or multiple related incidents.
(C) At least five hundred million dollars ($500,000,000) of damage by an artificial intelligence model that autonomously engages in conduct that would violate the Penal Code if undertaken by a human.
(D) Other threats to public safety and security that are of comparable severity to the harms described in paragraphs (A) to (C), inclusive.
That means AI for drug discovery and materials science development, AI for managing electricity grids and broadband traffic, AI in the financial and health services sectors, etc. Then there's the military-industrial side, which this legislation might not even touch if only federal contracts are involved. Classified military AI development seems reckless, hasn't anyone seen War Games?
I really hate the (apparently very popular) idea that we should be shifting responsibility away from end users and toward providers and makers of tools. From playgrounds to drugs to software, our society wants to force the suppliers to make things safe by design rather than requiring and educating end users on responsible use.
The politicians are gunning extremely hard for open source AI. It's crazy.
"Soros argued that synergy like that between corporate and government AI projects creates a more potent threat than was posed by Cold War–era autocrats, many of whom spurned corporate innovation. “The combination of repressive regimes with IT monopolies endows those regimes with a built-in advantage over open societies,” Soros said. “They pose a mortal threat to open societies.”
Literally everyone out there who pursues global influence is just frothing at the mouth over AI. This is seriously tempting me to buy the 512gb Mac Studio when it comes out so I can run the big llama3 model, which will probably be banned any day now.
> The politicians are gunning extremely hard for open source AI. It's crazy.
I hope someone has / can do some investigative journalism to check out their links with commercial / closed source AI; I can imagine the investors and those that benefit from companies like "Open"AI have close links with politicians. There's probably no direct links, they've become really good at obscuring those and plausible deniability.
You'll hear about it in 10 years in an article about "why did open source Ai die? ... once blossoming along closed source AI, open source AI disappeared after XXYYZ AI ACT. Turns out, Senator X and Senator Y were both in the pocket of closed source AI (and ended up in cushy jobs in Microsoft and Xai, not a coincidence) ... "
Regardless of your perspective on Musk, X AI is currently producing open source AI with permissive licensing, and seems very likely to continue open source releases in the near future.
Microsoft, Amazon, OpenAI, others are driving regulatory capture behind the scenes. The usual suspects are dropping all sorts of money on establishing control and rent seeking - actual open source AI with end user control makes it much harder for these asshats to extract money and exert influence over people, and they desperately want both. AI, like search, will be a powerful influence vector for politics and marketing.
I don’t really understand the long term plan, or maybe I don’t believe lawmakers understand where we are going long-term with this stuff.
We’re still in the very early days.
Unless the academic community really drops the ball, in 5 or so years they’ll be training models around the quality of the current state of the art on professors’ research clusters (probably not just at R1 universities).
I’d be shocked if, in the long term, anyone who can get access to a library’s worth of text won’t be able to put together a usable model.
There’s nothing magical about our brains, so I imagine at some point you’ll be able to teach a computer to read and write with about as many books as it takes to teach a human. I mean maybe they’ll be, like, 10x as dumb as us. A typical American might read hundreds of books over the course of their life, what are they going to do, require a license to own more than a couple thousand e-books?
> maybe I don’t believe lawmakers understand where we are going long-term with this stuff.
Wait, you're saying that a bunch of legislators who believe the Earth is 6000 years old may not have a valid perspective on complex technical matters? No. Say it isn't so.
I guess it always just seems weird to me when they see something correctly as a rapid and dramatic change, but they don’t play out the obvious trajectory, and then come up with restraints that only make sense in the context of current technical limitations.
It all depends on whether the AI proponents are right or not. If they're right, then of course it's a massive destabilizing threat. Even a weaker version, where there is no autonomy at all and it's all just the result of prompts, is going to be seriously destabilizing if it delivers on its promises. We really are not ready for a world of near zero cost fake everything.
On the other hand, like existing ITAR, this will manifest in extremely weird rules that have very little to do with actual safety.
If all you want is to be able to run it, but don't care about speed, you can run it on a Dell R720, they support hundreds of gigabytes of RAM. https://ollama.com/ makes it easy to download. They're pretty cheap compared to a Mac Studio. I got an R820 for a few hundred dollars, it has 256GB of RAM, with room for much more.
Furthermore you can get used versions of these pretty cheap on ebay. I bought some years back for experimenting with openshift in my homelab and was able to get some pretty insane hardware for $600 USD. Processors are slow, but it will run.
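For a rough sense of why RAM capacity is the main constraint here, a back-of-the-envelope sketch in Python (weights only, and purely illustrative; real usage also needs room for the KV cache and runtime overhead):

    # Approximate RAM needed just to hold model weights at various precisions.
    def weight_ram_gb(params_billions, bytes_per_param):
        return params_billions * 1e9 * bytes_per_param / 1e9

    for name, params_b in [("8B model", 8), ("70B model", 70)]:
        for precision, nbytes in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
            print(f"{name} @ {precision}: ~{weight_ram_gb(params_b, nbytes):.0f} GB")

So a 70B model at fp16 wants roughly 140 GB just for the weights, which is why a 256GB server can hold it comfortably while most desktops cannot.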
>On February 7, 2024, Senator Scott Wiener introduced Senate Bill 1047 (SB-1047) – known as the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act (the Act) – into the California State Legislature. Aiming to regulate the development and use of advanced artificial intelligence (AI) models, the Act mandates developers to make certain safety determinations before training AI models, comply with various safety requirements, and report AI safety incidents. It further establishes the Frontier Model Division within the Department of Technology for oversight of these AI models and introduces civil penalties for violations of the Act.
You've got a company valued at $80 billion and you can get legislation put forward to kneecap your primary competition for the price of a 2009 Honda Accord with 150,000 miles on the odometer? What great value for money!
This typically works the other way. You find politicians who support you, due to personal views or electoral idiosyncrasies, and then give them money to boost them.
I’m worried that regulations like these will create a lock-in effect that benefits existing leading AI companies and makes it impossible for new entrants.
What do you consider the tech industry? Do you consider Wall St firms to be tech? Do you consider bigPharma to be tech? Do you consider FAANG to be tech?
Here's a link[0] with 2023 lobby spends by industry, but there is no specific "tech" listing. There's an entry for "Internet", which I'm guessing is what you mean by "tech". Another chart[1] breaks down that entry.
If you want to know when each company started to spend money, you could research their public filings.
No, the tech market is totally nice, not a bunch of cut-throat thieves who’ll backstab and steal from each other at the drop of a hat.
— Posted from my Xerox
The roots of the tech sector are PC (the libertarian dream of basically zero-cost startups), telecoms (playground of monopolies and regulatory capture), and ad guys whose main trick is outrunning society’s ability to understand their business model.
The tech industry spends just as much on lobbying as other large businesses in the US [1]. Fiduciary duty more or less forces larger corporations to engage in lobbying, considering the great value per dollar spent.
Bro given that the cost of frontier models is already at or past $100M I think that boat has already sailed. Unless you have a completely cracked team that can raise like $1B upfront you have no chance at competing.
Unbelievable. I like Scott Wiener for his housing policies but this bill is rampant overstepping from the government. Ironically, it will have the same effect as the NIMBY system he has fought for so long.
Some nuggets...
---
So we are only allowed to train what they allow us to:
This bill would require that a developer, before initiating training of a nonderivative covered model, comply with various requirements, including implementing the capability to promptly enact a full shutdown of the covered model until that covered model is the subject of a limited duty exemption.
---
Of course it comes with a new department with powers to impose fees:
This bill would also create the Frontier Model Division within the Department of Technology and would require the division to, among other things, review annual certification reports from developers received pursuant to these provisions and publicly release summarized findings based on those reports. The bill would authorize the division to assess related fees and would require deposit of the fees into the Frontier Model Division Programs Fund, which the bill would create.
---
And, obviously, we must pay consultants:
This bill would also require the Department of Technology to commission consultants, as prescribed, to create a public cloud computing cluster, to be known as CalCompute, with the primary focus of conducting research into the safe and secure deployment of large-scale artificial intelligence models and fostering equitable innovation that includes, among other things, a fully owned and hosted cloud platform.
>until that covered model is the subject of a limited duty exemption.
The second half of that sentence clarifies. It's not that you have to be able to shut down the model, everyone knows it's trivial to turn off a computer. It's that the government can force you to shut down your program until such time as they give you regulatory approval to turn it back on again.
How exactly? If it's floss software, everyone has their own copy. If there's a million people who downloaded my model weights, do I get to phone them up and ask politely? ;-)
Which states would you speculate will be best long term for AI startups? I would’ve guessed California but it’s looking more important to pick a state less likely to get over their skis with regulation. Washington doesn’t seem yet to be doing this, and no state tax with lots of AI engineers/researchers in Seattle/redmond which is why I’m here. Texas probably won’t add regulations and has similar pros if you’re in Austin. Anywhere else looking like it will crop up if California regulates away the industry?
What states have good weather year round, already have large urban centers, and better laws and taxes than California? That's basically what it amounts to. That's why cities in Texas and Florida are growing. Seattle has terrible weather. Washington gets cold and their large cities are mismanaged to the point that they're undesirable to live in for well-off families.
As someone who lives in Seattle, I can tell you at least two things: 1) the homeless population and open drug use on sidewalks in the city have gone up significantly, with no plan for addressing it. 2) the city management tends to be anti-tech: things like pushing Amazon and others out with the head tax, and the gig worker minimum hourly pay and all that, which basically shut down the use of Uber Eats and other delivery services due to a misunderstanding of economics (and which they are now scrambling to reverse since the workers themselves hate it).
The open use of drugs is offensive but rarely dangerous. I live adjacent to Seattle and have been coming here for 30 years.
The gig worker min hourly pay is fine. If it decreases the total demand for deliveries, that's ok. I wouldn't want more McJobs for the state to subsidize anyway. Those workers don't cease to exist; they just work somewhere else for someone who can actually afford to pay.
Eh, I don't want my kids around it, we shouldn't be ok with it, and I don't blame anyone who doesn't want to live near it. "Rarely dangerous" is a hell of a term if you've walked downtown at night as anyone but a large man (and even as one I am not a fan, and moved my office to Redmond from the 4th Ave area even though I live in north Cap Hill). I asked my visiting sister to avoid coming to my office in the evening when I was downtown, after she was harassed by a few individuals who were very openly doing drugs; it's quite embarrassing for a supposedly well-off city. We want the city to be safe and welcoming, not what it is today. We're looking as complacent as SF, with no plans to clean up and fix things.
The gig worker min rate has completely cut out their money, you can hear feedback directly from the gig workers and see that it's being reversed because of the backlash: https://www.newsweek.com/20-minimum-wage-law-seattle-deliver... -> "300,000 fewer orders within Seattle". I can't agree with you here at all. These are jobs people have the choice to take or not, the government here is eliminating that choice by basically making the jobs nonexistent. I know I've cut my orders significantly and will walk or drive myself nowadays to pick up food when I do get takeout.
"Those workers don't cease to exist they just work somewhere else for someone who can actually afford to pay" <- citation needed. Setting wage floors almost always get modeled out as shortages where supply of workers will no longer meet demand for jobs, and most people aren't perfectly fungible, nor are there are bunch of jobs that allow people to work for a few hours between other gigs, watching their kids, trying to be entrepreneurial, etc. Let adults decide which jobs they want to work for which pay. If there were alternative jobs that paid more don't you think they'd naturally flow there rather then the government stepping in?
Delivery for random shit was never cheap before; it's not, to ME, shocking that it should reasonably be somewhat expensive to have someone drive their $50k car to Starbucks for you.
The government disallows work that people would otherwise do all the time, for instance by instituting minimum wages, requiring benefits, or imposing regulations that drive up costs enough that marginal businesses fold. Those employees don't cease to exist; they are reallocated to other parts of the market which are more worthy. In the end, Uber Eats isn't worth anything; it loses money. It's a side show until the investors' money runs out.
There are parts of Seattle that are shady. Unfortunately those people exist and they aren't going anywhere so we are basically playing whack a mole. If we want them out of people's faces we should probably house them. Finland did and it worked for them.
State politics are important, but electricity prices and availability of real estate for new data centers also matters. California has particularly high electricity prices and makes it difficult to construct new industrial facilities (especially near bodies of water that could help with cooling requirements). While companies might locate some employees in CA, the hardware will likely run elsewhere.
They don’t seem to have any reason for there to be AI experts there, though. No big comp sci universities or existing research labs and tech companies that you can pull talent from.
I don’t think that worked when Miami was being hyped over the last few years and I don’t expect that to work in the future. Needs a base of research and big tech to start.
Florida really doesn’t seem that opposed to banning innovation for incumbents; not the kind of place I’d trust my startup to, given its preference for big, restrictive state government over small gov: https://x.com/andercot/status/1786169027007783227?s=46&t=3ZO...
I think Zvi is missing some critical points about the bill. For example:
>Before initiating the commercial, public, or widespread use of a covered model that is not subject to a positive safety determination, limited duty exemption, a developer of the nonderivative version of the covered model shall do all of the following:
>(1) Implement reasonable safeguards and requirements to do all of the following:
>(B) Prevent an individual from being able to use the model to create a derivative model that was used to cause a critical harm.
This is simply impossible. If you give me model weights, I can surely fine-tune them into doing a covered harm (e.g. provide instructions for the creation of chemical or biological weapons). This requirement is unsatisfiable, and you're not allowed to release a covered model without satisfying it.
> The definition of covered model seems to me to be clearly intended to apply only to models that are effectively at the frontier of model capabilities.
> Let’s look again at the exact definition:
> (1) The artificial intelligence model was trained using a quantity of computing power greater than 10^26 integer or floating-point operations in 2024, or a model that could reasonably be expected to have similar performance on benchmarks commonly used to quantify the performance of state-of-the-art foundation models, as determined by industry best practices and relevant standard setting organizations.
> (2) The artificial intelligence model has capability below the relevant threshold on a specific benchmark but is of otherwise similar general capability.
> That seems clear as day on what it means, and what it means is this:
> 1. If your model is over 10^26 we assume it counts.
> 2. If it isn’t, but it is as good as state-of-the-art current models, it counts.
> 3. Being ‘as good as’ is a general capability thing, not hitting specific benchmarks.
> Under this definition, if no one was actively gaming benchmarks, at most three existing models would plausibly qualify for this definition: GPT-4, Gemini Ultra and Claude. I am not even sure about Claude.
> If the open source models are gaming the benchmarks so much that they end up looking like a handful of them are matching GPT-4 on benchmarks, then what can I say, maybe stop gaming the benchmarks?
> Or point out quite reasonably that the real benchmark is user preference, and in those terms, you suck, so it is fine. Either way.
> But notice that this isn’t what the bill does. The bill applies to large models and to any models that reach the same performance regardless of the compute budget required to make them. This means that the bill applies to startups as well as large corporations.
> Um, no, because the open model weights models do not remotely reach the performance level of OpenAI?
> Maybe some will in the future.
> But this very clearly does not ‘ban all open source.’ There are zero existing open model weights models that this bans.
(f) “Covered model” means an artificial intelligence model that meets either of the following criteria:
(1) The artificial intelligence model was trained using a quantity of computing power greater than 10^26 integer or floating-point operations.
(2) The artificial intelligence model was trained using a quantity of computing power sufficiently large that it could reasonably be expected to have similar or greater performance as an artificial intelligence model trained using a quantity of computing power greater than 10^26 integer or floating-point operations in 2024 as assessed using benchmarks commonly used to quantify the general performance of state-of-the-art foundation models.
---
f.1: anyone who has taken a basic CPU arch class knows that int and float ops take significantly different computational effort. One could see an entity using this in court to greatly lower the threshold for qualification after the fact, i.e. lawyers play word/text games and one could argue something to the effect that 1 float op is 10 int ops, so the limit effectively becomes 10^26 int ops or 10^25 float ops.
f.2: future-proofing against better algorithms, based on today's benchmarks... to the point where effort no longer matters. They seem to be drawing the threshold at today's benchmarks, whether or not they are reflective of capability. I could see a small model being trained to do poorly on these benchmarks while excelling at the problems they are concerned with, like making nuclear weapons...
> 22603. (a) Before initiating training of a covered model that is not a derivative model, a developer of that covered model may determine whether ... if the covered model will have lower performance on all benchmarks
because I know how it will perform before training?
In a way this could be what gives us AGI that runs in your pocket. If there's an upper limit on what can be used, then human ingenuity will be funnelled towards whatever does fit in. Of course, this is only in the USA. China, Russia, North Korea, Iran, etc. will still be free to pursue the technology.
This entire article feels like it was written by ChatGPT. For instance, it continuously makes vague claims about the value of open source without even citing the original bill.
The quality of the article is what should drive voting and flagging.
Your personal opinion as to what level of AI tooling was used in crafting the content is a low-value signal, and posting that speculation without meaningfully engaging with the content is as low value as commenting to say "this article sucks".
Personally, I think this would be a very strange place to find pure AI-generated content. This is a personal statement that was submitted to the state, and posted under a real name on a site the poster has a professional association with. I think that any "strangeness" in formatting and wording comes from the role this text serves, as a public comment intended to affect policy.
Why would I want to read some AI-generated slop? It's no different to spam, other than techbros being okay shoving it everywhere and anywhere it doesn't belong.
> Instead of regulating the development of AI models, the focus should be on regulating their applications, particularly those that pose high risks to public safety and security. Regulating the use of AI in high-risk areas such as healthcare, criminal justice, and critical infrastructure, where the potential for harm is greatest, would ensure accountability for harmful use, whilst allowing for the continued advancement of AI technology.
I really like this proposed model. Are there good arguments against this?
mmmmmmm regulatory capture, wouldn't be america without it - nothing says freedom and open markets like sawing off the rungs from the bottom of the ladder after you've already climbed to the top
Open source AI is not at the bottom of the ladder; like closed source AI, some of the brightest, well-educated and arguably well-paid people in the world are the driving force behind it. OpenAI, arguably at the top of the ladder, was supposed to be open source as well.
It's not a perfect analogy, but it's still corporate interests trying to entrench their position to prevent disruption by less politically powerful players.
SB-1047 has real "stay in your lane" energy. Senator Wiener knows a lot about housing policy, and if he spends 100% of his legislative efforts on housing policy, people will celebrate that. He knows nothing whatsoever about computer science, and should just step away from the keyboard to avoid the temptation to legislate the impossible.
This is worrisome for open source - this isn't some far off future limit. Llama3 training at 400 TFLOPS per GPU [1] and 6.4M GPU hours [2] puts Llama3-70B at 9.2*10^24 (so 10^25) floating point ops.
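As a sanity check on that arithmetic, a quick sketch in Python using only the two figures quoted above:

    # Back-of-the-envelope training-compute estimate from throughput and GPU-hours.
    flops_per_gpu = 400e12                           # 400 TFLOP/s sustained per GPU
    gpu_hours = 6.4e6                                # 6.4 million GPU-hours
    total_flops = flops_per_gpu * gpu_hours * 3600   # 3600 seconds per hour
    print(f"total training compute: ~{total_flops:.2e} FLOPs")           # ~9.2e24
    print(f"factor below the 1e26 threshold: {1e26 / total_flops:.1f}x")  # ~10.9x

By this estimate Llama3-70B sits only about an order of magnitude below the bill's 10^26 threshold.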
This is a well-articulated response to the SB-1047 bill. I want to underscore the undercurrent of the message — that with the proposed regulations on creators, it will accomplish the opposite of what it is intended to do. The landscape will be less open and diverse, handing the power to a few.
Another point that gets buried is that AI is about the data. Without the transparency of what these models are built from, it leads to potential dangers as well as inappropriate use of materials.
My bet is on the open ecosystem if it doesn’t get legislated away.
> The landscape will be less open and diverse, handing the power to a few.
Proponents of the regulation approach probably wouldn't state it this way, but if I'm understanding their arguments correctly, I think they want that, because regulating a few very powerful corporations is easy. Regulating a ton of small people/startups is hard. When you genuinely believe that some of the output from LLMs is literally dangerous to some people, it's not unreasonable to decide that the "freedom" of people to run and develop models to compete is unimportant compared to protecting society from dangerous text or images.
The law seems to only care about models that take at least 10^26 floating point ops or models that achieve comparable benchmark performance.
This is a truly absurd number! I’m ok with organizations with that sort of compute capacity being subject to regulatory oversight and reporting and liability.
This is not very far off from Llama3 training. 400 TFLOPS per GPU [1] and 6.4M GPU hours [2] puts Llama3 70B at 9.2*10^24 (so 10^25) floating point ops.
It's dumb to create laws based on arbitrary technological limits. Computing power is still increasing exponentially; yesterday's supercomputer is tomorrow's gaming GPU.
It's not an absurd number, depending on how it is calculated.
I've seen that number thrown around, and by some calculations it would already apply to open source models like Llama.
No, we don't need to ban or regulate Llama or any existing open source models. If someone wants to be worried about GPT-6, fine. But there is no need to regulate the stuff that's already out there.
No, it’s not a good point. It could be 10^100000000 flops and it wouldn’t fucking matter. There’s no evidence that would do anything at all. “AGI” may not even be possible with those flops. You’re talking about what would require unprecedented control over computing to ease fears over a scary robot fanfic. None of the “safety” concerns are real. A GPT4 open source would do nothing but hurt Sam Altman’s bottom line - YOU ARE BEING DUPED.
If you’re not being serious, I appreciate your sarcasm. If you are, this may be one of the worst things I have ever read. It’s fucking math people. Math. It will not create some scary golem. It will create marginally better chatbots. You are arguing for totalitarian control over computing to line Microsoft’s pockets. Shame. Shame shame shame shame shame.
A few things that I’m seeing folks in the comments misunderstanding about the bill (full disclosure: I’ve been one of a group of folks advising Senator Wiener on SB 1047)
1. The new Frontier Model Division is focused on receiving information and issuing guidelines. It’s not a licensing regime and isn’t investigating developers.
2. Folks aren’t automatically liable if their highly capable model is used to do bad things, even catastrophic things. The question is whether they took reasonable measures to prevent that. This bill could have used strict liability, where developers would be liable for catastrophic harms regardless of fault, but that's not what the bill does.
3. The bill requires developers to test their models and report whether they have hazardous capabilities (and the answer can obviously be yes or no). Even if the model does have hazardous capabilities, the developer can still deploy it if they take reasonable precautions, as outlined in the bill. For perjury, you would need to intentionally lie—good faith errors would not be covered. I get that models can have unforeseen capabilities, but this isn’t about that. If you are knowingly releasing something that could have demonstrably catastrophic consequences, it seems fair to have consequences for that. Some things which already require folks to certify under penalty of perjury:
lobbying disclosures, companies’ financial disclosures, immigration compliance forms.
4. Overall it seems pretty reasonable that if your model can cause catastrophic harms (which is not true of current models, but maybe true of future models), then you shouldn’t be releasing models in a way that can predictably allow folks to cause those catastrophic harms.
If people want a writeup of what the bill does I recommend this one by the law firm DLA Piper (https://www.dlapiper.com/en/insights/publications/2024/02/ca...). In my opinion this is a pretty narrow proposal focused at the most severe risks (much more narrow than, e.g., the EU AI act).
The entire notion of “safety” and “ethics” in AI is simply a Trojan horse for injecting government control and censorship over speech and expression. That’s what the governments get out of it. The big AI players like OpenAI, Microsoft, Amazon, Google, etc. are incentivized to go along with it because it helps them through regulatory capture and barriers to competition. They also make some friends with powerful legislators to avoid pesky things like antitrust scrutiny.
Legislation should not restrict the development or operation of fundamental AI technologies. Instead laws should only be built on the specific uses that are deemed illegal, irrespective of AI.
> Regulate the use of AI in high-risk areas such as healthcare, criminal justice, and critical infrastructure, where the potential for harm is greatest
This suggests, for example, that image generation should be unregulated, but the potential for harm from deepfakes is great. In general, regulation needs to be feasible to implement, and even if it is ideal to regulate use rather than development, it can make sense to regulate development due to feasibility concerns.
Deepfakes are already possible and entirely convincing with existing technology, I don't see the benefits of stifling the development of open source transformer models for fear of deep fakes.
It’s generally not possible for California state legislature to regulate deepfakes; closing the metaphorical doors now doesn’t make sense as the metaphorical horses weren’t even contained in their barn to begin with. (I’d argue it’s not desirable for the US to try regulating the creation of these tools rather than their use at a federal level either but that’s another discussion)
It won't stifle anything. It will take about 2 months for all AI companies to abandon California. Forcing them out of SF will make their social lives worse but lower their costs 80%. The end result is that they will spend less time drinking $17 espresso, more time working, and will be able to hire more engineers. AI research accelerates dramatically.
"This could inadvertently criminalize the activities of well-intentioned developers working on beneficial AI projects."
It wouldn't be inadvertent. It's a control tactic.
"Placing liability on the creators of general purpose tools like these mean that, in practice, such tools can not be created at all, except by big businesses with well funded legal teams."
...
"These requirements could disproportionately impact open-source developers who often lack the resources of larger corporations to navigate complex regulatory processes."
...
"The proposed regulations create significant barriers to entry for small businesses and startups looking to innovate in the AI space."
That's the idea. The government likes a small number of big businesses that they can control.
How about some liability placed on members of government for their hopeless legislating? As developers of law shouldn’t they accept responsibility for the harms they cause?
Honest question -- if models (much like code) are open-sourced anonymously, what can governments and politicians do about it? This sounds like the "export-restricted cryptography" foolishness of the 90's all over again.
Seize your domain name. Force anyone to refuse to do business with you, and/or give up enough info to find you and put you in prison anywhere in the US or in any country with an extradition treaty, which is most of the better places to live.
this feels a lot like Uber/Lyft whining about rideshare regulation, or Musk whining about self-driving regulation. Howard does a miserable job attempting to explain what, exactly, this legislation will do to impact open source development, and clearly hasn't read the specifics for open-source development in the bill. Things like dual use/ITAR apply explicitly to commercial products, but he attempts to conflate them with open source instead and uses inference to try to define them. They are very well defined trade concepts at the federal level.
from the bill, the specific applications to open source are:
Appoint and consult with an advisory committee for open-source artificial intelligence that shall do all of the following:
(A) Issue guidelines for model evaluation for use by developers of open-source artificial intelligence models that do not have hazardous capabilities.
(B) Advise the Frontier Model Division on the creation and feasibility of incentives, including tax credits, that could be provided to developers of open-source artificial intelligence models that are not covered models.
(C) Advise the Frontier Model Division on future policies and legislation impacting open-source artificial intelligence development.
Nowhere does it state that open source developers need "required shutdowns" or burdensome reporting for open source. The state's position to regulate trade is sacrosanct, and as such, the bill applies almost entirely to commercial products. It would affect Jeremy's business, and as a business owner, he doesn't like that.
While this article makes some valid points, it basically just ignores the reasons why the law is being passed, that is, the potential for open models to enable bio-attacks, cyberattacks, election manipulation, automated personalised scams, and who knows what else.
One might question why that is. Perhaps it's the case that Jeremy has an excellent response to these points which he has somehow neglected to raise. Or perhaps it's because these threats are very inconvenient for an open source developer.
I'm sure he'd say that open-sourcing models means that all actors have access to defensive systems and that the good guys outnumber the bad guys and it'll all work out well.
And that could be true. Or it could be false. It's not like we really know that everything would work out fine. It's not as if we've run the experiment. I mean, maybe it works out like that, or maybe one guy creates a virus and then it doesn't really matter how many folks are on the other side, and we still get kind of screwed because we can only produce vaccines so fast. Is that what's going to happen? I don't really know, but it's at least plausible. I mean, maybe we'll automate all aspects of vaccine production and be able to respond much faster, but that depends on when we develop that technology vs. when AI starts significantly helping with bioweapons, with someone then using it for an attack. And at that point it's all so uncertain and up in the air that it seems rather strange for someone to suggest that it'll all be fine.
As someone who has studied both computer science and molecular biology at postgraduate level I can tell you that the chance of LLMs leading to higher probability of a “bio-attack” compared with a quick Google search is zero.
Do you know how much skill, practice, resourcing and time it takes to develop bio-anything?
You imagine some extremist could somehow use llama version 11 to print viruses from his printer?
LLMs are not intelligent; they predict text based on what they were trained on. If an LLM could somehow build new viruses or weapons, it would mean the internet already has MANY such pieces of information for it to predict something useful from, so maybe those websites, scientific papers, and blog posts need to be deleted, because some extremist or state-sponsored group can use them directly, plus Natural Intelligence, plus good laboratories.
But tell me how I can make my next LLM so it would help with, say, fighting biological weapons and creating vaccines, but refuse to make evil stuff, keeping in mind that jailbreaking is always possible (scientifically proven).
This is our sign to start boycotting OpenAI - they're behind the lobbying of this.
Instead of writing your senator who is owned by OpenAI - just throw away your `OPENAI_API_KEY` and use one of the many open models like mistral or llama3.
ollama run mistral
It's very easy to get started, right in your Terminal.
And there are cloud providers like https://replicate.com/ and https://lightning.ai/ that will let you use your LLM via an API key just like you did with OpenAI if you need that.
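If your code is already written against the OpenAI client library, switching is mostly a matter of changing the base URL. A minimal sketch in Python, assuming Ollama is running locally and exposing its OpenAI-compatible endpoint on the default port (11434); check the current Ollama docs for exact details:

    # Point the standard OpenAI Python client at a local Ollama server
    # instead of api.openai.com; the API key is ignored by the local server.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    resp = client.chat.completions.create(
        model="mistral",
        messages=[{"role": "user", "content": "Say hello from a local model."}],
    )
    print(resp.choices[0].message.content)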
There is no such thing as AI safety. AI is far too dangerous. The only thing that exists with regard to AI is "distracting the population so they think AI benefits them" or "AI is too amusing so I don't want to think about the consequences".