I am pleased to inform you that I will be taking on a consulting role, as a fellow, with Fathom, a new nonprofit working on AI governance and the organizer of the recent (and excellent) Ashby Workshops. In that capacity, I will be researching private governance of AI—that is, standards, best practices, and other mechanisms for the governance of frontier AI that are developed outside of formal governments (though with oversight from government). If you’ve been reading my work for the last several months, my interest in this topic may not come as a surprise. While I have only hinted at this direction publicly, I have been thinking about private governance for a long time behind the scenes. What I’d like to do today is explain my motivation for pursuing this research.
Before I do that, though, a few notes: this new affiliation does not affect my role as a fellow at either the Mercatus Center or the Foundation for American Innovation. I will be devoting substantial time to private governance research, but I will continue to research and write about state and federal AI policy, AI and science, and other topics that interest me. And most importantly, none of my affiliations affect my fundamental orientation to you, the reader of Hyperdimensional. I will continue to publish here weekly, and there will be no changes to this simple fact: my job is to tell you, as clearly and honestly as I can, what I think is happening in AI, and what I think should be done (and not done) about it.
On to this week’s essay.
On Private Governance
Imagine that you are in a spacecraft, on mankind’s maiden voyage to Mars. You are founding a colony. You and your crewmates will have to figure out how you’re going to produce energy, food, and water. You are going to have to build habitats. You will have to figure out if humans can successfully reproduce on Mars, and figure out some sort of workaround if they cannot. There is no return flight.
All this work may sound overwhelming, and perhaps you suggest, reasonably, that we should do something to buy ourselves more time to investigate all these questions. But if you change the velocity of your spacecraft too much, you could easily fall off your flight path and end up careening into the utter blackness of space.
You have, in short, a lot of work to do, and you are under some serious constraints. You probably will want to get to work before you arrive at your destination.
It would be bizarre, in this situation, to argue that our first priority should be establishing an environmental permitting regime for our would-be colony. That would seem, to most sensible people, like a bridge that you and your fellow colonists can cross later. There are more important things that need to be done.
At the very least, you probably do not want the question of how to handle environmental permitting on the Martian colony to be too foundational to the process of constructing your extraterrestrial civilization. It is not obvious how much you will need environmental permitting on Mars, or what kind you will need.
Much of AI policy today amounts to just this: a debate over environmental permitting while we are on a spaceship hurtling toward an alien planet. Mechanized, industrial-scale cognition will, in the fullness of time, fundamentally challenge our present-day logic, abstractions, heuristics, intuitions, and assumptions in all sorts of unpredictable ways. So, too, will it challenge our institutions. From year to year, there will probably always be more continuity than discontinuity, but when we look back on 2025 from 2035, we will feel that radical changes have been wrought.
We have seen only a glimpse of what it is possible to do with industrial-scale machine intelligence. Institutions public and private will need to adapt, and they will probably need to do so quickly. Some, assuredly, will fail.
Yet instead of putting serious thought into how institutions should adapt, or what kind of new institutions should be built, “AI policy” today mostly amounts to cramming AI into regulatory and institutional frameworks from the past. Sometimes, these are literally the frameworks from America’s famously disastrous environmental permitting regime.
We will not succeed with this threadbare intellectual infrastructure. Yet those of us who want something new will not succeed merely by explaining why the existing proposals are subpar. The burden is on us to propose something better. Similarly, if you believe, as I do, that solving AI governance is an essential part of “making the AI transformation go well,” then it also follows that America must be an exporter of not just technological ideas, but also governance ideas. If America does not lead in governance, others will. America has the natural opportunity to lead, but the world will not wait around for too long. The burden, again, is on us.
We must seek a new path.
I want to convince you of a simple thesis: that, for now, America’s approach to AI should rely at least as much on private governance as on laws, rules, and regulations promulgated by governments. That may sound chaotic to you, but throughout this essay, I’ll try to show you why I believe it is the best path to take.
I define governing as the intellectual task of setting rules for how systems and resources should be used. Governing also entails additional work, such as establishing mechanisms for enforcing those rules, creating processes for resolving disputes about how the rules should apply, and ensuring accountability if someone disobeys the rules.
To say that something is governed is to make a specific set of claims. It is not to say, in general, that it is controlled absolutely by the entities that govern it. Instead, governing is a specific kind of relationship between the thing that is governed and those entities that do the governing.
Private governance is merely all the governing activity that happens outside of formal governments.
Private governance can be small-scale and informal—families are largely self-governing units. Or it can be sprawling and quite formal: financial markets and the internet have substantial private governance institutions. It is often done in close collaboration with formal governments. Insurance, for example, is both a mechanism of private governance and a heavily regulated industry. The collaboration need not always entail regulation. For instance, many private industry standards-setting bodies work in partnership with the National Institute of Standards and Technology, which has no regulatory powers.
Advocating for the private governance of AI does not mean that there is no role for formal laws and regulations with respect to AI. Nor does it mean there is nothing for government to do in the short term. Indeed, there are many pressing AI-related challenges that government should address with alacrity, including bolstering America’s cyberdefense (and offense) capabilities, ensuring that the country has ample data center, semiconductor manufacturing, and mineral mining and refining capacity (and the energy we’ll need to power it all), and much else. Nothing about my argument rejects the importance of the state.
Finally, private governance of AI does not mean industry governance of AI. To be sure, there is no realistic way to solve the unanswered questions in AI governance without the organizations building frontier AI playing a role. But a private governance institution can, and in my view should, incorporate the perspectives of a diverse range of societal actors. Similarly, private governance institutions need not form a monopoly: there can be different organizations serving complementary roles, or even organizations competing to serve the same role. Part of the benefit of private governance is the far wider range of institutional options it affords.
Institutions are not simply organizations. They are, as the economist Douglass North has defined them, “humanly devised constraints that structure political, economic, and social interactions.” All institutions are technologically contingent; institutions are enabled and defined by the tools human beings have at their disposal. It would be shocking if a technological revolution like mechanized intelligence did not create new opportunities for institutional entrepreneurship.
I advocate for private governance for a simple reason: we do not currently know how to govern AI, and we should generally avoid using government and public policy to accomplish objectives we cannot define. We do not know how to formally evaluate the performance, safety, or reliability of AI systems. We do not know how to assess the quality of an AI system’s outputs. We have a vague intuition that successful adoption of AI will require “human-machine collaboration,” and automation with “humans in the loop” overseeing things. But we have little idea of what this means in practice. We have little idea of how to articulate, in anything but the vaguest terms, what our goals even are, much less devise a system of rules for ensuring that our goals are met.
It is even possible that AI will change the optimal structure of organizations. The modern managerial corporation itself only arose in the last industrial revolution, when new technologies like the railroad necessitated the development of professional managers and larger corporate bureaucracies. It seems quite likely that mechanized intelligence will also change how organizations are structured. It is challenging, of course, to craft rules for new forms of organizations that do not exist today. But the problem becomes even more daunting when you realize that regulatory agencies themselves are large institutions whose optimal size, structure, and techniques may be changed profoundly by AI.
Given all this uncertainty, one thing seems sure: we are unlikely to get the rules right on the first try. They may become outdated quickly or be unworkable in practice. The optimal rules are likely to vary considerably between different industries, and between different use cases within the same organization. The solution, then, is ensuring that governance is flexible, agile, iterative, technologically savvy, and open to experimentation. These are not qualities that anyone associates with government.
To be sure, I believe we need to improve the quality of government along all of these vectors. But often, the slowness and inflexibility of government are features, not bugs. Because governments hold a monopoly on legitimate violence, we have devised ways to constrain their powers. Indeed, constraining the state is a founding principle of America, and one that has served us well over the decades. One such constraint is the rule of law—the notion that government should enforce the law consistently, and that people and businesses should be able to easily predict what is and is not against the rules. Because governments have the right to seize people’s property, imprison people, and even take their lives, we probably do not want to give them sweeping power to change the rules in real time in the interest of flexibility and experimentation.
Private governance, on the other hand, can be flexible, agile, iterative, technologically savvy, and open to experimentation, because the people and organizations involved in private governance do not have the sovereign right to imprison or kill you if you do not follow their rules.
Moreover, private governance allows us to discover, collectively, the best way of governing AI, rather than having it dictated by centralized government directive. As that discovery process unfolds, the ideal role of government will become clearer. Perhaps government will simply codify the best practices that self-governance discovers, as has happened frequently in American history. Or perhaps we will want to maintain self-governing institutions and establish formal oversight of them. As with many things, we do not know today what the optimal outcome looks like.
Private governance requires trust. To some extent, it requires us to trust the AI field—the industry, its investors, academia, nonprofits, and others—to play a meaningful role in governing itself. That is, indeed, a great deal of power, and many policymakers and citizens are understandably uncomfortable with the proposition. Because of the tremendous benefits self-governance can deliver, I believe this is a risk worth taking, but I would be lying if I did not acknowledge that it is a risk. Thus, it will be important to establish clear transparency rules to ensure that the field can be held accountable by the public, as well as clear government oversight of private governance institutions. Determining the relationship between private governance organizations and the formal government is an essential part of the research I wish to conduct.
The task of governing a society transformed by AI will be one of the grand challenges of our time. Meeting it successfully will require experimentation, intellectual breakthroughs, and institutional innovation. The best structure in which to foster experimentation, breakthroughs, and innovation is a private governance regime.
America faces two distinct challenges in the governance of AI. One is bolstering state capacity to handle the task in the first place. The other is inventing the new institutions that AI will both require and enable. The former problem is one I have studied in various ways for most of my professional career. The latter problem is one I have come to more recently. Yet as I’ve juxtaposed them, I’ve come to wonder whether these problems are as distinct as I originally imagined. That is, could it be that successfully devising robust governance mechanisms for transformative AI will itself give us important clues about how we need to reform our government more broadly? Could AI governance be a kind of institutional R&D for the broader governance changes we must make over the coming decades? I do not know for sure, but after exploring this question for the past year, I’ve come to believe that the answer may well be yes.
Scholars have argued that American history is characterized by three “foundings”—moments when we reimagined our legal and civic life, such as the Reconstruction Era and the New Deal. Perhaps the coming of transformative AI—this distinctly American invention, this thing we will bring into the world—will constitute another founding. Perhaps, for a country that has stumbled in recent decades, it will herald a rebirth.
The challenge, then, is not so much to identify what laws need to be passed. “Real reform does not begin with a law,” as Calvin Coolidge said, “it ends with a law.” Instead, the challenge is to identify the fundamental changes to the mechanisms of government we will need, and in so doing, perhaps, to trace the contours of a new American founding.
In 1975, Lawrence Sanders wrote a near-future dystopian novel called The Tomorrow File. Your invocation of another founding reminded me of its frontispiece, a quote from the 1988 second inaugural address of the fictional president Harold Morse:
"We can no longer afford an obsolete society of obsolete people."
Be careful what you wish for, in other words.
I agree that agility is underrated compared to specific policy plans.
It's interesting and I'd love to see it applied in other contexts, but AI is moving too fast for this plan to be viable.
Once created, any private governance orgs would attempt to preserve their own existence and ward off government regulation. That could be disastrous when things get real and we need serious action, rather than lowest-common-denominator governance in which orgs either opt out entirely or pick the most lax private governance org to safety-wash their actions.