13 Comments

Dean, great summary of the rule; I agree in most places, except for the idea that all of this is somehow necessary to prevent China from leading in AI, misusing AI, or using AI against "us." That, of course, is the whole rationale for the rules, even though there is no evidence that AI would be decisive in any future conflict, which will still be dominated by old-fashioned firepower and the ability to bring it to bear en masse and accurately. Software and "AI" are already doing this, and gen AI in particular, which is the target of these rules, seems unlikely to yield some decisive military advantage. Research in other areas of AI is already pointing toward much different approaches to achieving something like AGI, and these approaches may not rely on massive GPU clusters to make progress.

In addition, and I have written a lot about this, you failed to mention the critical Achilles' heel of the whole US approach (you alluded to it but did not tackle it head on): the entire hardware basis for AI is located 100 miles from the Chinese coast, in a country Beijing considers to be a part of China. As I have written, it is naive to believe that the US and allies can run ahead toward AGI/ASI, with the explicit goal of "winning" the AI "arms race" over China and containing China's ability to develop advanced AI for economic growth and all the good stuff, while this is still the case, which it will be for the next decade. The dangers here are stark and growing, and no one seems to want to acknowledge them, least of all the authors and drivers of these rules, who do not understand the global technology industry or the risks inherent in this approach. The thrust of these rules will also work to exclude China from participating in much-needed global efforts to develop safety and risk frameworks around AI model development, yet another massive risk of this approach.

I address some of this in a Wired piece this week with Alvin Graylin: https://www.wired.com/story/why-beating-china-in-ai-brings-its-own-risks/. Happy to be on your podcast at some point to discuss these issues in further detail. Again, great summary of the rules!


I do agree with a lot of this! Certainly I think these rules make conflict over Taiwan more likely and generally set us down a path that could lead to many bad outcomes.

One retort: I didn't intend to argue that the rules are necessary for reasons of AI safety or military supremacy. Certainly others do, and I am skeptical of such views. I only meant that the rules seem unavoidable as a political reality.

Would be great to have you on the podcast sometime!


Well put.


Interesting and thought-provoking piece. I liked the historical analogy to British trade secrets on textile manufacturing. But here's a counter-example: Chinese dynasties successfully kept silk-making a secret from the rest of the world for a thousand years. https://en.wikipedia.org/wiki/History_of_silk

There may be many historical examples where export controls did or did not work. The key dynamic is likely striking the right balance: imposing rules strong enough to protect your cartel's monopoly without overextending beyond your ability to enforce them (overplaying one's hand, as the British did with textile manufacturing).

In the current situation, with the US and allies controlling high-end GPUs and frontier AI models, and with China's semiconductor industry lagging far behind, we have a very strong hand to play, and the rules are just now catching up to that reality. The rules will have to adapt to shifts in the industrial power structure, and will always lag behind because government is slow. But we have a good shot at locking in our lead for at least the next decade, and that seems like a hand worth playing.


I am not so sure how strong our hand is, because the semiconductor industry is subject to rapid swings and cruel outcomes from ever-so-slightly wrong decisions.

I do not see nearly enough investment in the US in next-generation leaps in this area, and we know the Chinese are very much pursuing such things.


Fresh idea, still in the half-baked stage:

Instead of top-down government regulation... what about decentralized voluntary regulation via privacy-preserving mutual inspections, powered by temporary AI instances that get deleted after making their reports?

This would let companies agree to use a particular safety framework that carries some 'alignment tax', and confirm that their competitors were also using it. The net effect would be greater safety for both, without fear of falling behind their top competitors.

I've been thinking through the details of how this might work but haven't really settled on them yet (a rough sketch is below). It just seems like a situation where the rules would be made better by the tech experts themselves. They need only ask themselves: "What would I be willing to give up if I could be sure my competitors would also give it up?"

I don't think this applies to all possible safety concerns, but I think it hits at least a few pretty well. Particularly in the regime of 'loss of control'.
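To make this slightly more concrete, here is a minimal sketch of what one round of such a mutual inspection might look like, with everything hypothetical: the framework checks, thresholds, and config field names are illustrative stand-ins, and a shared HMAC key stands in for whatever hardened attestation a real version would need.

```python
import hashlib
import hmac
import json
import secrets

# Hypothetical safety framework: each named commitment maps to a
# predicate over a company's private training configuration.
# Every check, field name, and threshold below is illustrative.
FRAMEWORK_CHECKS = {
    "compute_cap_respected": lambda cfg: cfg["training_flops"] <= 1e26,
    "dangerous_capability_evals_run": lambda cfg: cfg["evals_run"],
    "incident_reporting_enabled": lambda cfg: cfg["incident_reporting"],
}


class EphemeralInspector:
    """One-shot inspector: sees the private config once, emits only
    per-check pass/fail results, and is then discarded."""

    def __init__(self, shared_key: bytes):
        self._key = shared_key

    def inspect(self, private_config: dict) -> dict:
        # Only booleans leave this scope; the raw config never does.
        results = {name: bool(check(private_config))
                   for name, check in FRAMEWORK_CHECKS.items()}
        report = {"nonce": secrets.token_hex(8), "results": results}
        payload = json.dumps(report, sort_keys=True).encode()
        # The HMAC lets the counterparty verify the report came from
        # an inspector holding the mutually agreed key.
        report["mac"] = hmac.new(self._key, payload,
                                 hashlib.sha256).hexdigest()
        return report


def verify_report(shared_key: bytes, report: dict) -> bool:
    """Check the report's integrity tag without seeing any raw data."""
    payload = json.dumps({k: v for k, v in report.items() if k != "mac"},
                         sort_keys=True).encode()
    expected = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(report["mac"], expected)


# Usage: company B inspects company A without retaining A's config.
key = secrets.token_bytes(32)
inspector = EphemeralInspector(key)          # the "temporary AI instance"
report = inspector.inspect({"training_flops": 5e25,
                            "evals_run": True,
                            "incident_reporting": True})
del inspector                                # instance discarded after reporting
print(verify_report(key, report), report["results"])
```

The property the sketch tries to capture is that only boolean pass/fail results (plus an integrity tag) leave the inspection scope; the inspected company's raw configuration is never persisted or transmitted.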


Interesting. Write stuff like this up! AI policy isn't creative enough.


Leaving aside the chips, the weights protection doesn't make sense to me. I assume this is intended to prevent leaks of the weights of models such as o1 and Sonnet 3.5. But:

- Such weights haven't been leaked, so it's questionable to introduce all this complexity to address a hypothetical scenario.

- If such weights do leak, that's not catastrophic. These models have a lifespan of at most around half a year before they're replaced by an update anyway.

- Foundation model companies are already naturally incentivized to protect their weights, which probably has something to do with why we haven't seen any significant leaks so far. (The biggest I can think of is the Miqu leak from Mistral, and it's hard to imagine that had any significant negative impact on the company.)

- It's questionable how much government-level interest there'd be in exfiltrating these big generalist models, anyway. (They are not designed for military use.) Maybe that's why it hasn't happened yet.

- Unlike chips, which are physical, model weights are just information, and leaks of information are hard to prevent. If a model's weights really were essential to national security, the protective measures required to prevent leaks would be extremely stringent, well beyond what is reasonable for models with primarily civilian uses. So "model is big" is too inclusive a standard for deciding which weights to protect.

- The exemption for open-weight models is good but also muddles the logic of the rule. If open-weight models aren't covered, yet are nearly as good as the top proprietary models (which is the current state of things), then what's the point of the heavy restrictions on the latter?


Absolutely. And as far as I know, inasmuch as weights have been stolen, it's come from cyberattacks rather than from physical access to servers. In general, I think DC obsesses over model weights to an unhealthy degree.


One thought exercise I have been doing: if the US unilaterally peeled back all of these rules, how might different players respond?

Given the capital intensity of the research, a belief that powerful AI is near, and a supply-constrained GPU market, it seems straightforward that China would bid for as much of it as it could. For what it's worth, Microsoft alone is putting about $80B into CapEx this year, while China put about $96B into its semiconductor manufacturing initiatives, so the orders of magnitude are not far apart. NVIDIA GPU prices would go even higher, and hyperscalers would face a tough choice: take the subsidy and build in China, or watch the subsidy go to a competitor. Enterprises may be willing to pay more for supply outside China, but it's hard not to think that training and consumer inference would go wherever they're cheapest (and where the CCP has the most control).

There are a lot of "ifs" in that scenario, and I'm not sure that's the only way it goes, but it seems plausible enough to give me pause about outright criticizing these rules. Coincidentally, my mind also went immediately to the mercantilist system you referenced: is this just meant to enshrine the hyperscalers as the gatekeepers of AI? In some ways, it feels like pseudo-nationalizing them, as the federal government did to the banks by making them responsible for implementing Know Your Customer and Anti-Money Laundering rules.

All of this rests on how powerful you think AI may be, by when. It's an event horizon for the world.


It is totally pseudo-nationalizing them! I suspect we will see more of this in the years to come.

Overall, though, I do agree with your reasoning; much as I am uncomfortable with these rules, they seem difficult to escape as a political matter.


This is the best summary of the rules I have seen. It's nice to see general-equilibrium thinking in action. Through a policy lens, the rules can also be seen as a violation of Tinbergen's Rule (one policy instrument per policy target). The US is trying to achieve far too many goals with a single instrument, export controls, and such attempts usually end with none of the goals achieved. The complexity and the myriad exemptions are partly a result of this confusion.

As for the geopolitics of this move, imposing ever-expanding export controls can win the US followers but not partners. Everyone will try to find domestic alternatives and collaborate less.


Thank you and agreed!
