California Senate Passes SB 1047
Some additional notes on the most important AI legislation in the country
California’s State Senate has passed SB 1047, the sweeping bill to regulate AI that I’ve written about here before. In honor of this silly and sad milestone, I thought I’d provide a few more details about what SB 1047 would do. Here’s the tl;dr for those of you who already know the basics:
The California government is preparing for the “increased incarceration costs” and other expenses associated with criminalizing AI development, should this bill pass;
The Frontier Model Division has wide latitude to shape the perjury charges the bill attaches to developers for end-user misconduct, including by weighing “the rigor and detail” of a developer’s safety protocols in model jury instructions;
The Frontier Model Division can change the threshold for covered models, conceivably widening its jurisdiction over time.
For those who want the details, read on.
First, let’s review the basics. SB 1047 is a bill in California designed to regulate frontier AI models. Here’s how it works:
The bill creates a regulator called the Frontier Model Division under the California government’s Department of Technology;
Any model that is trained with more than 10^26 flops (floating-point operations) is subject to regulation;
So is any model trained with less compute but matching a 10^26-flop model on “relevant benchmarks” (the bill doesn’t say which);
So is any model trained with less compute AND lower benchmarks, so long as it is of “similar general capability” (we do not know what this means); a rough sketch of this three-pronged test appears after this list;
If you want to train a model that could conceivably fall into any of those three categories, you have to sign a document under penalty of perjury (a felony) promising the Frontier Model Division that it is safe;
Safe specifically means the model cannot be used for, or play a significant role in, a “hazardous capability,” which the bill defines as a chemical/biological attack, a cyberattack causing more than $500 million in damage, or “similar” offenses;
This doesn’t just apply to the developer’s own conduct. It applies to downstream users. As a developer, you would be responsible for the conduct of anyone who downloads your model off Hugging Face, a platform for open-source AI models;
If a developer of a model doesn’t want to sign the piece of paper making them criminally liable for user conduct beyond their control, they can still train their model; they just have to implement a series of precautions and steps that could take months or longer;
After doing that work, they can train their model. If they still don’t want to sign the criminal liability paperwork (known as a “limited duty exemption,” by the way), they can release their model, but they have to be able to monitor users’ use of the model, “shut down” the model if need be, and suspend specific users;
Open-source AI at today’s frontier, to say nothing of tomorrow’s, is effectively impossible without the limited duty exemption, since no developer can monitor or shut down copies of a model that others have already downloaded; but obtaining the exemption means certifying, under penalty of perjury, a model whose downstream uses the developer cannot control. It’s a catch-22;
Startups are immensely burdened as well, whether or not they intend to open-source their models;
The bill does not just apply to language and multimodal models like ChatGPT; robotics and self-driving car models in particular would also be affected;
Because California has a $60 billion budget deficit this year, regulation by the Frontier Model Division is a service AI developers would pay for. The division’s staff, likely in the dozens (to start, at least), and all of its other operations would be funded through fines and fees levied on AI developers. It seems fair to assume that the cost of submitting the criminalization paperwork, for instance, would be in the tens of thousands of dollars, if not more.
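To make the definitional vagueness concrete, here is a rough sketch, in Python, of the three-pronged covered-model test from the top of the list above. It is purely illustrative and not drawn from the bill’s text; the names are mine, the 10^26-flop threshold is the bill’s, and the benchmark and “similar general capability” prongs are left as stubs because the bill never specifies how to evaluate them.

```python
# Illustrative sketch only: SB 1047's three-pronged "covered model" test,
# as summarized in the list above. The compute threshold comes from the bill;
# the other two prongs are stubs because the bill does not define them.

COMPUTE_THRESHOLD_FLOPS = 1e26  # training-compute threshold named in the bill


def matches_benchmarks_of_threshold_model(model) -> bool:
    """Prong 2: does the model match a 10^26-flop model on 'relevant benchmarks'?
    The bill does not say which benchmarks, so there is nothing to implement."""
    raise NotImplementedError("SB 1047 does not specify the benchmarks")


def has_similar_general_capability(model) -> bool:
    """Prong 3: is the model of 'similar general capability' to a covered model?
    The bill does not define the phrase."""
    raise NotImplementedError("SB 1047 does not define 'similar general capability'")


def is_covered_model(training_flops: float, model) -> bool:
    """A developer trying to apply the test: prong 1 is checkable;
    prongs 2 and 3 raise because the statute leaves them undefined."""
    return (
        training_flops > COMPUTE_THRESHOLD_FLOPS
        or matches_benchmarks_of_threshold_model(model)
        or has_similar_general_capability(model)
    )
```

The stubs are the point: two of the three prongs cannot be evaluated by a developer in advance, yet coverage is what triggers the certification and perjury machinery described above.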
None of this will be a surprise to those of you who have read my past work on SB 1047 or the work of others. Defenders of the bill have written that it “doesn’t ban open-source AI.” It’s true that no current model would be unlawful under this bill; any future model at or near the frontier of performance, however, would be.
Some, including Senator Scott Wiener, the author of the bill, have also pushed back against the idea that SB 1047 imposes criminal penalties for AI development. Wiener has said, for example, that the perjury charge would be used only in extreme cases of “lying to the government.” This is, after all, the definition of perjury. But what does lying to the government mean in the context of certifying that AI models are impossible to misuse? Thankfully, SB 1047 gives us some insight.
The bill allows the Frontier Model Division to write model jury instructions for cases in which developers are charged with perjury under the bill. Jury instructions set the terms by which a jury must assess a defendant’s guilt: they explain the applicable burden of proof and the factors the jury must consider. Here’s what the FMD is required to consider when drafting those instructions:
(B) In developing the model jury instructions required by subparagraph (A), the Frontier Model Division shall consider all of the following factors:
(i) The level of rigor and detail of the safety and security protocol that the developer faithfully implemented while it trained, stored, and released a covered model.
(ii) Whether and to what extent the developer’s safety and security protocol was inferior, comparable, or superior, in its level of rigor and detail, to the safety and security protocols of comparable developers.
(iii) The extent and quality of the developer’s safety and security protocol’s prescribed safeguards, capability testing, and other precautionary measures with respect to the relevant hazardous capability and related hazardous capabilities.
(iv) Whether and to what extent the developer and its agents complied with the developer’s safety and security protocol, and to the full degree that doing so might plausibly have avoided causing a particular harm.
(v) Whether and to what extent the developer carefully and rigorously investigated, documented, and accurately measured, insofar as reasonably possible given the state-of-the-art, relevant risks that its model might pose.
Safety and security best practices in AI are a live area of research, subject to dispute and debate among world-class practitioners. We don’t even know the safety and security practices of top-tier AI companies like OpenAI, DeepMind, Meta, and Anthropic. Does this sound to you like the limited, unambiguous statute Senator Wiener makes it out to be?
The California Senate’s Appropriations Committee seems to think not. The committee is tasked with estimating how much proposed bills will cost the California government. In its analysis of the bill, it cited “increased incarceration costs” as one of the leading cost centers of SB 1047 (thanks to Lauren Wagner for pointing this out).
The Frontier Model Division also has the power to change (expand) the scope of models under its jurisdiction:
(A) On or before July 1, 2026, issue guidance regarding both of the following:
(i) Technical thresholds and benchmarks relevant to determining whether an artificial intelligence model is a covered model, as defined in Section 22602 of the Business and Professions Code.
(ii) Technical thresholds and benchmarks relevant to determining whether a covered model is subject to a limited duty exemption under paragraph (2) of subdivision (a) of Section 22603 of the Business and Professions Code.
The bill now goes to the California Assembly.