6 Comments

I also don’t love the term "decentralized training" for what is really multi-data-center training (which is what it should be called).

Fully decentralized training on laptops is very different from that.

author

Agree they're very different, but I wanted to cover both possibilities (though I think the latter is unlikely).

Great article! I had not been following the regulatory side of this. I'm so glad I read this now. You break down the issue very well. Thank you for sharing this.

author

Thank you!

I think this post is correct that advancements in decentralized training pose a problem for compute thresholds. They also pose a problem for basically all "compute governance" (governance methods that focus on AI chips), which includes nearly all of the AI governance methods that have any teeth or enforcement. These advancements aren't just bad for compute thresholds; they are bad for most of the reasonable and good approaches that national or international powers might use to govern AI development in an adversarial situation (i.e., assuming people don't just comply).
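
To make the threshold concern concrete, here's a rough back-of-the-envelope sketch (my own illustration, not from the post). It assumes the widely used ~6·N·D estimate for dense-transformer training FLOPs and the 10^26-operation reporting threshold from the Executive Order; the parameter count, token count, and number of sites are hypothetical.

```python
# Sketch: an aggregate training run can cross a compute threshold while each
# site's share of a decentralized run stays under it.
# Assumptions: the common ~6 * N * D FLOP estimate for dense transformer
# training, and the 10^26-operation reporting threshold from the Executive
# Order. The model size, token count, and site count are hypothetical.

THRESHOLD_FLOP = 1e26  # reporting threshold on total training compute

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute with the 6 * N * D rule of thumb."""
    return 6 * n_params * n_tokens

total = training_flops(n_params=5e11, n_tokens=4e13)  # hypothetical: 500B params, 40T tokens
print(f"aggregate run:  {total:.1e} FLOP -> over threshold: {total > THRESHOLD_FLOP}")

# Shard the same run across loosely coupled sites: each share sits well below
# the threshold even though the aggregate exceeds it.
n_sites = 20
per_site = total / n_sites
print(f"per-site share: {per_site:.1e} FLOP -> over threshold: {per_site > THRESHOLD_FLOP}")
```

The specific numbers don't matter; the point is that any rule keyed to what a single facility runs gets much harder to apply once the compute is spread across many smaller sites.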

It's also possible that we're differing a bit in what we're trying to accomplish with AI governance. I believe one of the key functions of AI governance should be to prevent large-scale harms from AI once we get AIs competent enough to cause such harm. Accomplishing this goal might include some combination of:

- Frontier AI developers assess for dangerous capabilities and have to implement guardrails to prevent misuse of API-hosted models if said capabilities are present.

- Models with dangerous capabilities are centralized (i.e., not open sourced, not leaked to nation states) for as long as possible to reduce the risk of catastrophic misuse by bad actors (and to limit race dynamics).

- By the time models-with-dangerous-capabilities are widespread, the world is much more prepared for them, including substantial investment in e.g., cybersecurity.

- Reduce dangerous race dynamics in AI development and deployment when there are highly competent AIs around (i.e., slow the intelligence explosion and reduce military racing), so that we have more time to prepare society and solve technical safety problems.

Intervening on most of these intermediate objectives looks to me like "ask somebody to do something, and then hope they do." But that might not be enough. Law needs enforcement: there need to be ways to prevent people from breaking rules and to respond to violations. And at the level of enforcement, compute is by far the nicest intervention point. A company is refusing to comply with safety testing? It would be far nicer to shut off their data center than to throw their employees in jail (obviously both of these are well down the line, after other enforcement has failed). A country is racing ahead in violation of an AI arms agreement? Its AI development is basically the combination of people, ideas, and compute. Ideas are very hard to control, and people working for foreign militaries are also hard to control; compute is just GPUs, which don't have fundamental rights that need to be respected.

From my current perspective, compute is the main node for enforcing AI governance, especially internationally. Decentralized training could significantly change how regulable compute actually is.

What are your thoughts on regulation based on "compute thresholds," given the threshold-based approach in Biden's Executive Order?
