Introduction
SB 1047 was vetoed yesterday by California Governor Gavin Newsom. The bill would have imposed a liability regime for large AI models (proponents would say it clarifies existing liability), mandated Know Your Customer (KYC) rules for data centers, created an AI safety auditing industry and an accompanying regulator to oversee that industry, granted broad whistleblower protections to AI company staff, initiated a California-owned public compute infrastructure, and more.
It was a sweeping bill, no matter how many different ways proponents found to say it was “light touch.” Indeed, of the major provisions listed above, really only the first (liability) was the subject of significant public debate. The fact that a major issue like data center KYC barely warranted discussion is a signal of just how ambitious SB 1047 was.
Governor Newsom is therefore wise to have vetoed the bill; at the end of the day, it was simply biting off more than it could chew.
If you got your information purely from X, you would assume that Governor Newsom has abandoned AI regulation and is keen to let a thousand flowers bloom. The reality, though, is much more complicated. First, by his own count, the Governor signed 17 other AI-related bills from this legislative session alone. Second, in his veto message, Governor Newsom was clear about his intention to regulate AI even more in the future, including for California to serve as America’s main AI regulator if need be (emphasis added):
Let me be clear - I agree with the author - we cannot afford to wait for a major catastrophe to occur before taking action to protect the public. California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable… To those who say there's no problem here to solve, or that California does not have a role in regulating potential national security implications of this technology, I disagree. A California-only approach may well be warranted - especially absent federal action by Congress.
It would not surprise me for California to remain the primary “theater” of rhetorical combat on American AI policy for at least the next year. Without a doubt, there will be a subsequent chapter in this story. So what should come next?
The Specter of “Use-Based” AI Regulation
One plausible next step would be to focus on “uses” of AI. Rather than regulating AI models, the idea here is to regulate individual uses of AI by downstream “deployers,” that is, the people who use AI to do things in the world. In its extreme form, use-based regulation would involve government-created rules for every industrial use of AI. Given that businesses are still in the early days of figuring out what AI is useful for (and that AI’s utility changes by the month), this would obviously be premature. Yet it is the direction suggested by the EU’s AI Act and the Biden Administration’s non-binding AI Bill of Rights.
A less extreme version of this would be mandating “algorithmic impact assessments” for businesses that use AI (n.b.: the proper phrasing would be “algorithm impact assessment,” since we are assessing the impact of an algorithm, not performing an impact assessment that is itself algorithmic—this perpetually annoys me, but I will stick with the conventional phrasing). Often, these are aimed at “high stakes” decisions.
The trouble with this approach, though, is in the definition of “high stakes.” In contemporary American policymaking, “high stakes” often means practically everything valuable. For example, you would think that “critical infrastructure” would be a subset of infrastructure—but it is in fact closer to a superset. “Critical infrastructure” includes essentially everything one would think of as infrastructure (highways, power plants, etc.) and quite a bit more: financial services, communications services, casinos, ballparks, and much else. Thus, bills aimed at “essential services” or “critical infrastructure” end up, in practice, covering huge swaths of our economy.
Fundamentally, the flaw with “use-based” regulation is that it leads to a kind of regulatory neuroticism: government becomes obsessed with documenting, assessing, risk-mitigating, and writing best practices for AI uses, regardless of whether any of that oversight adds value.
Imagine if you had to file paperwork with the government every time you wanted to use a computer to do something novel. Imagine if you had to certify to the government that your computer use has no adverse impact on “decisions that materially impact access to, or approval for, housing or accommodations, education, employment, credit, health care, and criminal justice”—every time you wanted to do something novel with a computer, and regardless of whether your computer use had anything to do with those things.
This is not a hypothetical. It is the reality for contractors in the State of California today, thanks to one of Governor Newsom’s “use-based” regulations: an Executive Order he issued that requires would-be government contractors to document all their uses of generative AI.
I fear this is the direction that Western policymakers are sleepwalking toward if there is not a concerted effort to push back. It is just all too easy to lay on yet another disclosure form, impact assessment, or box-checking requirement. This is what bureaucracies eat for breakfast. But every sensible person, I think, understands that this is no way to run a civilized economy.
Or do they?
The Future of AI Safety
In response to the veto, some SB 1047 proponents seem to be threatening a kind of revenge arc. They failed to get a “light-touch” bill passed, the reasoning seems to be, so instead of trying again, perhaps they should team up with unions, tech “ethics” activists, disinformation “experts,” and other, more ambiently anti-technology actors for a much broader legislative effort. Get ready, they seem to be warning, for “use-based” regulation of epic proportions. As Rob Wiblin, one of the hosts of the Effective Altruist-aligned 80,000 Hours podcast put it on X:
Having failed to get up a narrow bill focused on frontier models, should AI x-risk folks join a popular front for an Omnibus AI Bill that includes SB1047 but adds regulations to tackle union concerns, actor concerns, disinformation, AI ethics, current safety, etc?
This is one plausible strategic response the safety community—to the extent it is a monolith—could pursue. We even saw inklings of this in the final innings of the SB 1047 debate, after bill co-sponsor Encode Justice recruited more than one hundred members of the actors’ union SAG-AFTRA to the cause. These actors (literal actors) did not know much about catastrophic risk from AI—some even dismissed the possibility and supported SB 1047 anyway! Instead, they had a more generalized dislike of technology, and of AI in particular. This group likes anything that “hurts AI,” not because they care about catastrophic risk, but because they do not like AI.
The AI safety movement could easily transition from being a quirky, heterodox, “extremely online” movement to being just another generic left-wing cause. It could even work.
But I hope they do not. As I have written consistently, I believe that the AI safety movement, on the whole, is a long-term friend of anyone who wants to see positive technological transformation in the coming decades. Though they have their concerns about AI, in general this is a group that is pro-science, techno-optimist, anti-stagnation, and skeptical of massive state interventions in the economy (if I may be forgiven for speaking broadly about a diverse intellectual community).
To forfeit that posture in favor of a more ambient “anti-technology” stance would be a loss for the world. This is a community, after all, that saw the future coming long before many others, yours truly included. Though I think their model of technological change and of intelligence itself is often hopelessly naïve, I would be kidding myself to pretend that they have no wisdom to offer.
It is legitimate to have serious concerns about the trajectory of AI: the goal is to make heretofore inanimate matter think. We should not take this endeavor lightly. We should contemplate potential future trajectories rather than focusing exclusively on what we can see with our eyes—even if contemplating the future does not mean we should regulate it preemptively. We should not assume that the AI transformation “goes well” by default. We should, however, question whether and to what extent the government’s involvement helps or hurts in making things “go well.”
I hope that we can work together, as a broadly techno-optimist community, toward some sort of consensus. One solution might be to break SB 1047 into smaller, more manageable pieces. Should we have audits for “frontier” AI models? Should we have whistleblower protections for employees at frontier labs? Should there be transparency requirements of some kind on the labs? I bet if the community put legitimate effort into any one of these issues, something sensible would emerge.
The cynical, and perhaps easier, path would be to form an unholy alliance with the unions and the misinformation crusaders and all the rest. AI safety can become the “anti-AI” movement it is often accused of being by its opponents, if it wishes. Given public sentiment about AI, and the eagerness of politicians to flex their regulatory biceps, this may well be the path of least resistance.
The harder, but ultimately more rewarding, path would be to embrace classical motifs of American civics: compromise, virtue, and restraint.
I believe we can all pursue the second, narrow path. I believe we can be friends. Time will tell whether I, myself, am hopelessly naïve.
It does seem like AI safety is getting subsumed politically into a bigger-tent faction that is simply “anti-AI.” There are not really that many doomers. There are many more people who are either anti-tech, anti-capitalism, or (especially in California) in a legacy industry like music or movies or Uber driving that is specifically threatened by AI.
The problem with the AI safety movement is an intellectual problem: none of the theories behind AI safety have worked yet. It's not like structural engineering, where there are clear dangers like bridges collapsing, where we developed better and better models of how those failures happen, and where we discovered principles like load factors that are effective at preventing them, so that we can now regulate load factors on bridges. With AI, we don't agree on what the dangers are, and the core doomer fear that “the AIs will replace us” has not been modeled.
Many AI safety people see AI progress as the biggest existential threat to humanity. They see preventing this catastrophe as the highest calling they could personally hope for: to prevent our doom would make them true heroes and humanity’s saviors.
Once a person thinks that way, very little else matters. They won’t shun strange bedfellows, they won’t worry about a bit of exaggeration here or there. Maybe even the occasional lie. After all, it’s for the greater good, you see.
Doomers are dangerous.