
I invest in European AI startups, and the recent EU guidelines have European AI companies all contemplating (and most likely executing) a move to the US before they have to comply with them. It is a lead pipe cinch that a regulation like this will likewise push companies to someplace without such rules (most likely Dubai or Singapore). So such regulations are beyond stupid: even if the goal is to (attempt to) provide for safer AI development, they will ensure the opposite.


This may be a bad bill, but that doesn't mean there aren't concerns about increasingly powerful AI models.

What's your positive vision for regulation here? How would you propose society take steps to preclude dangerous AI from being developed and/or being misused?


Great question! While articulating a positive vision was not the point of this piece, it is the broader goal of my Substack. I think working with industry and scientists to develop standards and protocols (for media validation/authentication, for DNA synthesis equipment, for AI models themselves, for agent-agent communication) is first on the list. Cybersecurity defenses are also essential for key government services; there is a great deal of public-private collaboration that should happen (and is happening) in this area. Finally, governments should integrate AI into their own operations so that they can better understand the technology, improve government services, and be ready to scale law enforcement responses if certain malicious AI-enabled behavior proliferates.

That’s just the start, but there is definitely much more.


Ultimately, though, there is probably no such thing as “precluding dangerous AI from being developed.” We don’t control the world. Instead, in my mind, policy should be focused on making society more resilient. For biorisk, for example, an AI model that can “make bioweapons” is not a coherent concept. Making a bioweapon ultimately requires manufacturing, and that is highly nontrivial. There are many bottlenecks in that manufacturing process that we can police more aggressively (and the federal government is starting to do so).

So I think it’s about having a very precise and grounded risk model and then countering those risks in the most realistic way possible. Because of the serious implications associated with policing the distribution of software on the Internet (global surveillance of digital communication being just one), the AI model itself is rarely the most productive or efficient thing to target.


Unless we regulate it, AI is going to get to a point where a user can say "AI, tell me how to produce anthrax, give me a detailed plan to buy the machines and materials I need", and the user will get back an actionable plan. This capability is almost certainly less than 3 years away.

This is concerning, and I want us to take regulatory steps to curtail these capabilities. I agree with you that we should more aggressively regulate bio labs and bio-manufacturing services, which seem like a big source of potential harms, but IMO the best regulatory approach needs to be multi-faceted.

This isn't just about bioweapons, either. I would really rather not have an open-source GPT-5 that's happy to tell users how to get away with murder.

I work in the AI industry in California. And I work for a smaller company that wants to compete with OpenAI and Anthropic, so I'm very sensitive to concerns that regulation will hamper smaller companies while entrenching larger companies. But it costs hundreds of millions of dollars today to train models that are covered by this bill, and any company that can do that can afford to hire a compliance team.


I hear that completely. A major problem is that we all know it won't cost hundreds of millions of dollars to train models of this kind in the not-so-distant future, absent an exogenous shock such as a war in Taiwan or a widespread rise in energy costs. Today's dynamics won't persist forever, and ultimately models of the kind you describe are going to exist in the world.

My guess is someone with the wherewithal and resources to buy all the equipment needed to make anthrax probably does not need GPT-n to tell them how to do it. I live in Washington DC; people get away with murder here on a weekly basis. They don't need GPT-5 to tell them how to do that, either.

I do hear your broader point. I think the world you describe, where AI models can help people do dangerous things, is an inevitable one. I also think that knowledge of how to do dangerous things is only one small part of actually achieving the dangerous thing; this is something auto-didactic, reasonably intelligent, terminally online people (almost everyone who reads or writes AI-focused Substacks, me included) tend to underrate. But in general, it's true that various kinds of bad behavior will be more achievable by people who are so inclined. The goal we should all have is to keep up: to ensure that AI can be used for defense more than it can be used for offense. I do not see how 1047 helps in that regard.


Like you, I admire Scott Wiener's leadership on pro-housing policy. Unfortunately, he's obviously listening to the wrong people on this one and, like many on the left, is instinctively pro-regulation despite what regulation has done to the housing market.

I will also note that in long-standing Sacramento lingo, a legislator is not said to "sponsor" a bill but to "carry" it, the implication always being that he or she is doing the work of interest groups.


The good news is when all the AI developers flee California, the grid will stay up a little longer before it collapses.


You make a lot of great points here, though I disagree with some of them. But more importantly, I'm curious how you feel about the latest version of the bill: https://legiscan.com/CA/text/SB1047/2023

I believe it addresses all of your concerns (possibly because of your efforts here) and the bill is much better because of it. In particular:

* The definition of a covered model has changed and no longer refers to "similar performance on benchmarks" (the new thresholds are sketched in code after this list):

(e)

(1) “Covered model” means either of the following:

(A) Before January 1, 2027, “covered model” means either of the following:

(i) An artificial intelligence model trained using a quantity of computing power greater than 10^26 integer or floating-point operations, the cost of which exceeds one hundred million dollars ($100,000,000) when calculated using the average market prices of cloud compute at the start of training as reasonably assessed by the developer.

(ii) An artificial intelligence model created by fine-tuning a covered model using a quantity of computing power equal to or greater than three times 10^25 integer or floating-point operations.

(B)

(i) Except as provided in clause (ii), on and after January 1, 2027, “covered model” means any of the following:

(I) An artificial intelligence model trained using a quantity of computing power determined by the Frontier Model Division pursuant to Section 11547.6 of the Government Code, the cost of which exceeds one hundred million dollars ($100,000,000) when calculated using the average market price of cloud compute at the start of training as reasonably assessed by the developer.

(II) An artificial intelligence model created by fine-tuning a covered model using a quantity of computing power that exceeds a threshold determined by the Frontier Model Division.

(ii) If the Frontier Model Division does not adopt a regulation governing subclauses (I) and (II) of clause (i) by January 1, 2027, the definition of “covered model” in subparagraph (A) continues to be in effect until the regulation is adopted.

(2) On and after January 1, 2026, the dollar amount in this subdivision shall be adjusted annually for inflation to the nearest one hundred dollars ($100) based on the change in the annual California Consumer Price Index for All Urban Consumers published by the Department of Industrial Relations for the most recent annual period ending on December 31 preceding the adjustment.

* All of the text about “positive safety determination”, “hazardous capability”, and "covered guidance" is gone.

* Your example about a hacker with poor English skills no longer applies because the bill states:

“Critical harm” does not include either of the following:

(A) Harms caused or enabled by information that a covered model outputs if the information is otherwise publicly accessible from sources other than a covered model.

Since correcting the wording in an email is possible without a covered model, it wouldn't be considered a critical harm caused by the model.

* The bill makes it clear that it does not "effectively outlaw all new open source AI models" by specifically referring to models "controlled by a developer." An open-source model that is modified by someone else would no longer be controlled by a developer.
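To make the thresholds in the first point concrete, here is a minimal sketch in Python (my own illustration, not language from the bill) of how the pre-2027 "covered model" tests read. The FLOP counts and the $100 million figure come from the quoted text; the function names and example numbers are hypothetical.

```python
# Thresholds quoted from the bill text above.
TRAINING_FLOP_THRESHOLD = 1e26           # greater than 10^26 integer or floating-point operations
TRAINING_COST_THRESHOLD_USD = 100_000_000  # cost exceeds $100,000,000 (developer-assessed cloud compute)
FINE_TUNE_FLOP_THRESHOLD = 3e25          # equal to or greater than 3 x 10^25 operations


def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """Pre-2027 test for a model trained from scratch: both the compute
    and the assessed training cost must exceed the thresholds."""
    return (training_flops > TRAINING_FLOP_THRESHOLD
            and training_cost_usd > TRAINING_COST_THRESHOLD_USD)


def is_covered_fine_tune(fine_tune_flops: float, base_is_covered: bool) -> bool:
    """Pre-2027 test for a fine-tuned derivative: the base model must itself
    be covered, and the fine-tuning compute must meet the 3 x 10^25 floor."""
    return base_is_covered and fine_tune_flops >= FINE_TUNE_FLOP_THRESHOLD


if __name__ == "__main__":
    # Hypothetical example: a 2e26-FLOP run costing $150M is covered,
    # and a 5e25-FLOP fine-tune of that model is also covered.
    base_covered = is_covered_model(2e26, 150_000_000)
    print(base_covered)                              # True
    print(is_covered_fine_tune(5e25, base_covered))  # True
```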


Julius,

Thanks for this! You’re right that the bill has been improved a great deal since the original version.

I’m still a critic, primarily because of the unclear safety standards, unpredictable liability, and the FMD (you can read my criticisms of FMD here: https://www.hyperdimensional.co/p/the-political-economy-of-ai-regulation). That said, I do expect the bill to undergo another round of edits, possibly as soon as tomorrow. There has been some promising discussion, but ultimately, it will depend on the specific text. I’ll certainly write more about any major updates.


The bill has been introduced, not approved?


Yes, that’s right. It’s being considered by the California legislature.


The Beach Boys haven't had a hit in years, and neither has California.


OpenAI lobbyists did a terrific job drafting this bill.
