I wrote an op-ed in Just Security, co-authored with Keegan McBride, on the geostrategic value of open-source AI to the United States.
I know many are deeply displeased about the election outcome. Please consider the essay below with an open mind.
Introduction
There is a reasonably high chance that Donald Trump will lead the United States federal government when “AGI” is developed. And setting aside the myriad problems with defining terms like AGI (a term I disfavor), there seems to be a very high chance that the most rapid AI progress yet seen will take place during the second Trump administration, regardless of whether that progress culminates in “AGI.”
Many in the AI safety community perceive this as a disaster—they think that Republicans are “anti-AI-safety,” and that the Trump campaign’s commitment to repeal the Biden Executive Order on AI signals a fully laissez-faire approach to come, even on matters of catastrophic risk.
I think this misapprehends both the attitude of the Republican Party and the broader politics of AI safety. Indeed, I think that the GOP and Trump are far better positioned to take major AI risks seriously than the Democrats. Let me briefly explain why.
One note up front: I am sure many of you are concerned about Trump administration policies that have nothing to do with AI. I am sure others among you believe that some orthogonal Republican policy commitment (trade policy, say, or immigration) will spell disaster for AI regardless of what the Trump administration does on “AI Safety.” This isn’t a post about those issues. Nor is it even a post about AI policy under the Trump administration more broadly—which I do think will have a regulation-skeptical bent to it. Instead, this is focused on how I think the Trump administration could approach the narrow question of researching, evaluating, and if need be mitigating major AI risks. The conditional is essential. Nothing about this is a “prediction”; it is an analysis, and an exposition of an opportunity that I believe is on the table.
The Trouble with Biden’s Approach to AI
The Biden administration represents (hopefully) the high-water mark of what I call “everything is everything” liberalism, and what New York Times columnist Ezra Klein calls “Everything-Bagel” liberalism. This is a political approach that sees every government action as an opportunity to advance nearly every cultural and political priority the administration has. How could we ever just aim to build semiconductors—the most complex physical items ever conceived by man—in American factories, this line of reasoning goes, when there is climate change to defeat and systemic racism to contend with and union jobs to create? No problem can be assessed on its own, and no solution can be pursued independent of all the other problems facing the world. Everything, after all, is everything.
Of course, this makes organizational success more difficult to achieve, and it probably helps explain why the Biden administration lagged in building electric car chargers, in disbursing CHIPS Act funding (though in fairness, the TSMC Arizona facility does appear to be running smoothly), and in achieving many of its other ambitious infrastructure objectives.
But it also turns everything into a political and cultural issue. It makes narrow bipartisan agreement, and serious engagement with complex technical issues, far more difficult than they need to be (and these things already are far from easy).
Unfortunately, this mentality is a pervasive feature of mainstream Democratic public policy—so pervasive, in fact, that one wonders whether there are structural factors behind it. My friend Sam Hammond recently wrote about what these might be:
As the political scientists Matt Grossman and David Hopkins argue in their book, Asymmetric Politics, the Democratic Party is best understood as a “coalition of social groups” while the Republican Party is a “vehicle for an ideological movement.” This explains why Republican leaders “prize conservatism and attract support by pledging loyalty to broad values” while Democratic leaders “seek concrete government action, appealing to voters' group identities and interests by endorsing specific policies.” There are ideological currents in the Democratic Party as well, but the raw power of ideas is usually subordinated to the interests of the major party factions, from teachers’ unions to the plaintiffs bar.
The “everything is everything” approach has characterized the Biden administration’s AI policy since the beginning. With this approach, no measured, technically focused prioritization is possible. Instead, you get documents like the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF), and passages like these:
NIST has identified three major categories of AI bias to be considered and managed: systemic, computational and statistical, and human-cognitive. Each of these can occur in the absence of prejudice, partiality, or discriminatory intent. Systemic bias can be present in AI datasets, the organizational norms, practices, and processes across the AI lifecycle, and the broader society that uses AI systems. Computational and statistical biases can be present in AI datasets and algorithmic processes, and often stem from systematic errors due to non-representative samples. Human-cognitive biases relate to how an individual or group perceives AI system information to make a decision or fill in missing information, or how humans think about purposes and functions of an AI system. Human-cognitive biases are omnipresent in decision-making processes across the AI lifecycle and system use, including the design, implementation, operation, and maintenance of AI.
Another passage recommends that both AI developers and corporate users talk to “trade associations, standards developing organizations, researchers, advocacy groups, environmental groups, civil society organizations, end users, and potentially impacted individuals and communities” about “the tradeoffs needed to balance societal values and priorities related to civil liberties and rights, equity, the environment and the planet, and the economy” before they release or begin using AI.
So, talk to everyone about everything that could go wrong—including issues relating to the planet, “society,” and struggles that have persisted for all of human history—with your use of a general-purpose technology that is changing constantly. Got it. Sounds like a recipe for success, does it not?
The AI RMF is frequently cited in other administration documents, from the Executive Order on AI to the administration’s AI “Bill of Rights” to the Office of Management and Budget’s recent guidance to agencies on their use of AI. It has also been referenced, often explicitly, in state legislation like SB 1047 and the AI bias laws from Colorado, Connecticut, Virginia, and Texas (with more probably on the way), which I have written about recently. In some of those proposed and enacted laws, the RMF is cited as a minimum standard for compliance—even though, on paper, the NIST RMF is voluntary. As a voluntary guidance document, the RMF is nothing to complain about—sure, it is overbroad and unfocused, but so are a lot of government documents. But as the basis for a policy approach, it is a disaster.
If you believe, as I do, that major risks from AI, and associated technical topics like interpretability and alignment, merit serious scientific study, this approach should worry you greatly. There seems to me a vanishingly small chance that such an abstract and broad risk management mentality will be successful. Scaling this mentality to actual laws, with actual enforcement mechanisms, is a wide-open invitation for the government to involve itself in a shockingly broad range of technological and commercial activity. And on top of that, I would contend that this approach has already polarized a huge swath of Republicans against AI safety—to the point that some even want to disband the US AI Safety Institute.
Nothing about this puts those concerned about AI catastrophic risks on the path to success.
A Better Way
Republican politics is not inclined toward this “everything bagel” style of policymaking. Instead, as Matt Grossman and David Hopkins (quoted by Hammond above) argue, Republicans “attract support by pledging loyalty to broad values.” Or, as the GOP’s 2024 Platform (rumored to have been heavily edited by Trump himself) put it with regard to AI: “Republicans support AI Development rooted in Free Speech and Human Flourishing.”
AI catastrophic and alignment risk is still (mostly) theoretical, not observed. It is a scientific concern, not a cause for imminent strict regulation. Republicans, if they can be persuaded of the merits of studying major AI risks, could be inclined to hire good technical talent and let them cook, and to forge productive and mature partnerships with frontier AI companies to make progress on these issues (and many others).
Donald Trump can read the polls just as well as anyone else; he knows that voters are concerned about AI, and in several interviews he has expressed concerns of his own. His proximity to Elon Musk means that he has almost certainly heard well-articulated cases for concern about major AI risk. Ivanka Trump’s recent promotion of Leopold Aschenbrenner’s Situational Awareness essay series is yet another vector for such ideas to reach President Trump. There is a reasonable chance that Trump himself, and many around him, are, indeed, situationally aware.
By contrast, Vice President Harris last year described things like deepfakes and facial recognition systems as an “existential” threat from AI. Everything, alas, is everything.
I’m not saying this is what will happen “by default.” Many Republicans have been turned off by “the AI safety movement” because they see it as a progressive cultural cause, or as another effort at “big tech censorship.” And right now, they are not wholly wrong. AI safety and risk management “guidance” from the administration, and some proposed and enacted state laws, really do push a progressive cultural agenda on AI. That’s what happens when you make everything about everything else.
It will take effort, statesmanship, and, probably, compromise to achieve anything like the outcome I’m describing. But the political dynamics under a Republican administration permit focused work on major AI risks in a way that they simply do not under a Democratic administration. Whether the AI safety movement, which is largely mood-affiliated with the left, can seize that opportunity is another question altogether. Can they forge narrow and tactical alliances on specific issues? Or is everything about everything to them, too?