I share all those concerns but I fear that we may need minimal regulation now to fend off much worse regulation.

Suppose there is some kind of big scandal or public concern about AI in CA. Maybe someone discovers a teacher has leveraged AI to help cover up raping their students, or some people find it hilarious to modify an OSS model (or trick a commercial one) into doing something super racist and upsetting -- and maybe an interest group plays it up out of economic interest.

Yes, absolutely, the 1047 agency would bend a bit to be seen as doing something in that situation.

But if that agency doesn't exist, politicians aren't going to shrug and say it's fine -- they'll take the only other option they have and call for new AI regulation right then. And I fear the regulation passed in the wake of that kind of moral panic will be worse and broader, while being subject to all the same bad incentives.

I'd prefer broad principles written into federal law preventing this kind of thing, but short of that, minimal regulation may head off greater regulation.

author

That’s a reasonable concern, and why I think both industry and policy analysts need to get out ahead of these risks and be proactive about mitigating them. I just think there are much lower-cost ways of doing that than a permanent regulatory agency.

I absolutely agree, but looking at history I'm not hopeful.

Basically every big new kind of media creates a moral (and sometimes economic) panic, and there are very strong calls to restrict it. For the most part we've been saved by the 1st Amendment -- pretty much never did coalitions arise at the time saying "ohh c'mon, that's not really a big enough concern to justify that kind of regulation." And unfortunately, I think that's pretty much baked into things. Until people are personally comfortable with a new technology, art form, etc., it doesn't feel familiar enough to laugh at the people panicked about the harms, and until that happens it's really hard to be politically successful by not taking action -- especially because it's much easier to tell the scary narrative than to explain why it won't happen. That's what is tripping up Yudkowsky et al., and it's the same thing that happened in the 90s when the internet was new -- so easy to tell stories about how little Johnny would be warped by hardcore porn or how discontents would learn to make bombs, but so hard to sell a narrative where it's no big deal.

As such, I tend to think this goes best if the courts adopt the legal theory that what AI models say amounts to protected speech by the corporation releasing it, and that you can no more regulate an AI for suggesting how to search for vulnerabilities in some system -- absent some strong form of developer intent -- than you could regulate a book on network vulnerabilities. And if that succeeds, yes, I would very much prefer no agency.

Sure, that's not the concern of the AI safety people, and it doesn't stop all regulation, but I tend to suspect that's where the panic is going to be, just as it has been in the past.

author

I think the 1st Amendment is going to play a big role in AI policy and is currently underrated by many policy analysts working in the field. I need to write something in-depth on this--probably law review article length, then boiled down to Substack length. One of these days soon!

Jun 23 · Liked by Dean W. Ball

Dean, thanks for writing this. I’ve been seriously underwhelmed by the level of analysis in the “AI safety” world by thinkers who are otherwise quite sophisticated (e.g., Scott Alexander). Much like the domain-specific stupidity one sees in the Woke left today, or in the Christian right of 20 years ago (or 40 years ago), the implausible claims to which the “AI safety” / “existential risk” crowd seems committed are a telltale sign of religious ideology.

Your piece might be the first thing I’ve read about AI regulation that reads like it was written by a grownup. So again: thank you.

author

That’s remarkably kind of you to say. Thank you!

Jun 20 · Liked by Dean W. Ball

Reminds me of how CEQA is used to sue all sorts of things. "College students making noise is a type of pollution!"

Soon the lawyers will be claiming that bicycle lanes are a type of dangerous artificial intelligence.

author

Yes—the more general purpose and ambient something is, the worse these political economy problems can be if you get the regulatory structure wrong.

Jun 20 · Liked by Dean W. Ball

While not optimal, this isn't a totally awful scenario. It may take time for a culture like ours to adapt to a transformative technology. But AI will, I assume, just keep getting better; the performance difference between it and the status quo will keep growing. As it does, the assumptions of the culture will change. We'll see.

author

It's very similar, in my mind, to what happened with nuclear energy--where we got all the bad stuff (weapons capable of killing all humanity many times over) and not nearly enough of the good stuff (clean, abundant energy).

Jun 20 · Liked by Dean W. Ball

There is this difference: with AI a user is interacting directly with the technology. (Almost everyone I know is using it right now as a general-purpose search upgrade; I have friends who use it to write summaries of meetings, etc.) So the benefits of the technology are easier to access and therefore appreciate.

author

True!

I suspect you’ll be linking to this piece many times in the next few years, sadly. Good to spell it out.

I guess I agree it's not clear, but making predictions about the future is hard!

I broadly agree that the general scenario you point out is plausible. But I'm a little unsure who this is supposed to convince. I support SB 1047 (and pretty strong regulation of foundation models in general). I strongly agree that in worlds where catastrophic risk doesn't materialize, these bills make the world worse by making innovation and deployment of useful new technology more difficult. I think that's a reasonable price to pay to guard against catastrophic risks. I don't know that anybody is arguing these regulations are a free lunch.

I guess it's valuable to see some of the political economy specifics worked out, but this doesn't change my position: I already expected regulatory capture.

author

If you expected regulatory capture and bad political economy outcomes, and were OK with those things, for a powerful new general-purpose technology, then I don't think there is much I can do to convince you specifically.

But many people, including, I think, Sen. Wiener himself, are under the impression that this bill will not entail any (or many) costs.

Great article and really helpful. I’m curious: what would be ways to avoid that dynamic? Perhaps being very detailed about the specific scenarios where the regulator would intervene?

author

Yes, but I think ultimately narrow interventions would be best done by the Attorney General or other law enforcement entities. The issue with 1047 is that it creates people who do nothing but work on regulating advanced AI models all day.

The problem with slippery slopes is that they can often slip both ways. Teachers unions hold a lot of political sway; so does the tech nexus at the forefront of AI innovation and funding. There are going to be lobbyists pushing in exactly the opposite direction, and if the threat of a slow hemorrhage of public school students is politically motivating, wouldn't the possibility of a sudden exodus of that Bay Area tech nexus to a more politically welcoming environment be far more so? Can't they send a lot of lobbyists and campaign donations to ensure that the Frontier Model Division does jack diddly?

author

Well yes and no. No, because exit from CA isn’t an option with SB 1047. The bill is designed to apply to anyone distributing a model in CA, which at the very least means all American companies.

Yes, in the sense that what you are describing is textbook regulatory capture. Given that FMD staff will be *paid* by the AI companies under the current version of 1047, this seems quite likely.

Regulatory capture is also bad for innovation and competition, and an equally serious risk to be concerned about. I just didn’t write about it here because I think the topic has been well-covered by many others.

I'm not convinced about that. A company could certainly choose not to let people in CA access their model. That might be an expensive choice, but it's also a powerful move to pressure CA to change policy.

Are you suggesting that the mere fact that engineers in CA worked on the code (but never the ultimate weights or their outputs) is enough? I think that would raise some hard legal questions about jurisdiction.

And while SCOTUS did slap down the Proposition 12 challenge, it does seem there is some judicial appetite to limit states' ability to effectively implement extraterritorial regulation.

I'm not saying that you're wrong, but I'm not convinced that you're making the correct comparison. What happens if you instead compare against the US military, waiting for the Catastrophic Risk to materialize? You could still make a pretty strong argument that things could go wrong in how the Regulatory Agency is influenced by different groups in the society. But _not_ guarding against the threat and (in metaphor) letting the police handle any invading forces that commit crimes doesn't seem like a good idea.

To a large extent I think this boils down to how people view catastrophic AI risks. SB 1047 draws _the lower line_ at a certain number of lost lives or amount of destroyed value, and if you look only at that limit, I think most people would agree that existing (or other, more specialized) regulation would suffice. But if you look at those limits not as worst-case scenarios, but as examples of what a "too powerful" AI model can and will do in the wrong hands, it's another story. We don't want tools like that going around unregulated, and while such destructive actions are illegal, the existing authorities most likely aren't equipped to handle a world where such actions become common.

I'm (honestly) not sure what I think about catastrophic AI risks, and I try to learn more about the risks and how to handle them. Thanks for a really good read.

author

Thank you for the kind words! I think that the national security apparatus (and federal/state law enforcement) will play a major role in the prevention/policing of catastrophic risks from AI--as they do with catastrophic risks from every other technology.

If we saw the kind of catastrophic risks envisioned by SB 1047, I suspect the political tenor of the whole AI issue would change quite significantly and that additional regulatory/enforcement powers would be given either to a new or existing federal agency.

Meanwhile, though, models continue to scale (just today--a new generation from Anthropic), and we're not seeing the risks we've been told are imminent for years now. I'm not saying they'll never manifest. I'm only arguing that it's not clear we should create a new regulator (especially at the state level) to handle risks that we aren't sure even exist. If they don't manifest, we'll just have two dozen government-employed "LLM police," and at the very least that's a waste of money.
