
The proposal below reflects some of the work I have done in my capacity as a Fellow with Fathom, a new non-profit working on private governance in AI. Fathom is exploring policies similar to those discussed in this piece and may pursue state policy on these topics. I have had no role in any political efforts on any legislation, nor have I drafted, edited, or otherwise reviewed the text of any draft bills. My analysis here reflects my independent research and views.
Introduction
Every AI policy researcher has their favorite analogy. No AI policy discussion is complete without at least one participant bringing up the way we regulate cars, or airplanes, or nuclear weapons, or electricity, or books, or the internet. Implicit in these analogies is the idea that we should regulate AI like we regulate those things—that we should take some existing regulatory or legal framework off the shelf and apply it, with a fresh coat of paint, to the governance of digital minds.
There’s nothing wrong with reasoning by analogy; I do it myself regularly. Yet I’ve come to believe that these analogies can be dangerous. Not necessarily because they mislead us, but because they constrain our imaginations. There’s the obvious fact that mechanized intelligence is not very much like any of those earlier technologies, but there’s a deeper point, too: AI is, itself, a governance technology.
The people doing the governance of advanced AI will themselves have access to advanced AI. And we do not know exactly what governance capabilities advanced AI will enable. Given that governance is a cognitive activity, however—and that AI is mechanized cognition—it would be surprising if advanced AI did not enable at least some novel governance capabilities.
Asking our current policymaking apparatus to conceptualize a governance regime for advanced AI is like asking the ancient Greeks to write a Beethoven piano sonata. For all their wisdom and artistry, the ancient Greeks could never have imagined such sounds, because technology had not yet made them possible. But soon, our horizons will expand. Soon, all sorts of novel things will be possible. New tones, new octaves, new techniques, new ideas.
I have written before that AI governance will require institutional entrepreneurship. Like all entrepreneurship, it will be difficult to do quickly within government itself. So, I believe, we need private governance—organizations overseen by traditional government that can employ novel approaches to the evaluation, standards-setting, and oversight of AI systems.
I have proposed a few broad ideas for what private governance specifically might be able to achieve. Dynamically generated contracts with automated adjudication mechanisms, perhaps. Or maybe technical communications protocols for AI agents.
But I merely gesture. I, too, have never heard the AGI-piano, let alone played one. We need experimentation to facilitate the discovery of the right governance system.
My ideal AI law, then, would be a foundation that allows experimentation with different kinds of governance. But we don’t just live in a world of ideals. If you wanted to build such a foundation in the real world, you’d need to grapple with practical realities. Such a law would need to be passed in the near term; if we do not pass it soon, some older governance framework will gain a foothold. Such a law would need to be politically feasible—recognizing, for example, that Congressional action in the near term is unlikely. Indeed, such a law would need not just to be plausible for state governments to implement, but designed to be resilient to a state-by-state patchwork. And such a law would need to provide policymakers and the public with reasonable assurance that societal order will be maintained in the face of one of the largest, fastest-moving technological revolutions in history.
Perhaps most importantly, my ideal AI law would be an accelerant to AI adoption throughout the economy. As I have argued before, many existing laws and liability frameworks already could be (and in some cases, are being) applied quite broadly to today’s AI systems. One of the biggest obstacles facing AI developers and commercial users of AI is the sheer uncertainty of how, exactly, existing laws will be applied to AI systems of the future, especially agents. So my ideal law would provide clarity and certainty for AI firms of all sizes.
It is a tall order. No law can satisfy all these constraints perfectly. But I think, after many months of work—and a lot of help from many friends and colleagues—there is something that just might work.
I do not mean to suggest this idea is completely novel. It is heavily influenced, for example, by the work of Jack Clark and Gillian Hadfield in their regulatory markets paper. Many of the basic functions this governance structure would perform are recommended in the recently released report of the Joint Policy Working Group on AI Frontier Models, which California Governor Gavin Newsom commissioned after the veto of SB 1047.
Nonetheless, this proposal is a fundamentally different way of thinking about AI governance than the vast majority of policies being considered today.
I am eager to hear your feedback, criticisms, and questions. As with everything I write here, consider this a work in progress.
The Proposal
The idea is relatively simple:
A state legislature authorizes a government body—the Attorney General, or a commission of some sort—to license private AI standards-setting and regulatory organizations. These licenses are granted to organizations with technical and legal credibility, and with demonstrated independence from industry.
AI developers, in turn, can opt in to receiving certifications from those private bodies. The certifications verify that an AI developer meets technical standards for security and safety published by the private body. The private body audits each developer annually to confirm that the developer is, in fact, meeting those standards.
In exchange for being certified, AI developers receive safe harbor from all tort liability in that state. This means that a huge variety of legal risks for AI developers are taken off the table.
The authorizing government body (the Attorney General, the commission, etc.) periodically audits and re-licenses each private regulatory body.
If an AI developer behaves in a way that would legally qualify as reckless, deceitful, or grossly negligent, the safe harbor does not apply.
The private governance body can revoke an AI developer’s safe harbor protections for non-compliance.
The authorizing government body has the power to revoke a private regulator’s license if the regulator is found to have behaved negligently (for example, by ignoring instances of developer non-compliance).
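To make the moving pieces easier to hold in mind, here is a minimal sketch, in Python, of how the roles above interact. Everything in it is an illustrative assumption on my part (the class names, the idea of modeling certification and licensure as simple flags), not a specification of how a real statute or regulator would work.

```python
# A hypothetical sketch of the incentive structure described above.
from dataclasses import dataclass, field


@dataclass
class PrivateRegulator:
    """A licensed private standards-setting and certification body."""
    name: str
    licensed: bool = True
    certified: set = field(default_factory=set)  # names of certified developers

    def certify(self, developer: "Developer") -> None:
        # Certification follows an annual audit against the body's published standards.
        if self.licensed:
            self.certified.add(developer.name)
            developer.certifier = self.name

    def revoke_certification(self, developer: "Developer") -> None:
        # The private body can pull certification for non-compliance,
        # which removes the developer's liability shield.
        self.certified.discard(developer.name)
        developer.certifier = None


@dataclass
class Developer:
    name: str
    certifier: str | None = None  # None = the developer stays in the legal status quo

    def has_safe_harbor(self, regulators: dict, reckless_or_deceitful: bool = False) -> bool:
        # Safe harbor requires a currently licensed certifier and conduct that is
        # not reckless, deceitful, or grossly negligent.
        if reckless_or_deceitful or self.certifier is None:
            return False
        certifier = regulators.get(self.certifier)
        return certifier is not None and certifier.licensed


def state_revokes_license(regulator: PrivateRegulator) -> None:
    # If the authorizing body finds the regulator negligent, every developer it
    # covers loses safe harbor at once.
    regulator.licensed = False


# Walking through the lifecycle:
aro = PrivateRegulator("ARO")
dev = Developer("ExampleLab")
aro.certify(dev)
regulators = {"ARO": aro}
assert dev.has_safe_harbor(regulators)       # certified by a licensed body
state_revokes_license(aro)
assert not dev.has_safe_harbor(regulators)   # license gone, shield gone
```

The point the sketch is meant to capture is that the liability shield is conditional at every layer: on the developer’s continued certification, on the developer’s own conduct, and on the private body keeping its license.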
The first thing worth expanding on is that this system is entirely opt-in for developers. If developers prefer the legal status quo, they are welcome to forgo protections from tort liability. And they can withdraw from oversight by a private governance body at any time, though doing so would mean losing their liability shield. Thus, from the perspective of a developer, this proposal is all upside. The worst that can happen is that they choose to exit the system.
In that way, this proposal hinges on how much of a benefit the liability shield really proves to be for developers, and on how much better society is served by a certification system than by a litigation-based system. My own research has led me to believe that tort liability is a clear and present danger to anyone operating at the frontier of AI research—especially anyone, startup or not, interested in developing agentic AI systems. This is a substantial update to my thinking from one year ago, when I would not have counted tort liability among the major risks to AI innovation. Now, I see it as among the most severe risks. I could, of course, be wrong about this—I’ll leave that for you to decide.
Nor is tort liability bad only for innovation; I also believe it is not an optimal system for ensuring safety or security. First, juries and judges are not always well-suited to making complex technical determinations in matters of product design or development. That has historically been true even for far simpler consumer goods, and it could be truer still for the comparatively alien technology of frontier AI. Indeed, this was one of my biggest objections to SB 1047: tort liability for major frontier AI risks means that judges and juries will be adjudicating disputes about scientific and technical questions that even the world’s foremost experts cannot agree on today.
Second, the risk of tort liability can actually incentivize firms to ignore certain risks. Say that a developer discovers a new threat model during development—one that no one in the external research community has found. If the developer views the threat as a tail risk (very low likelihood of happening, but large consequences if it does), and if mitigating it is very expensive, tort liability pushes the firm toward staying quiet about it, so that the company can later argue in court that the risk was not “reasonably foreseeable,” and hence that it was not negligent in failing to mitigate it.
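To see the shape of that incentive in rough numbers, consider a deliberately hypothetical back-of-the-envelope calculation. All of the figures below are invented for illustration: a tail risk with a 0.5 percent chance of ever materializing, a $50 million mitigation, and a $2 billion judgment that attaches only if a court finds the risk was reasonably foreseeable.

```python
# All numbers below are hypothetical, chosen only to illustrate the incentive.
mitigation_cost = 50e6       # cost of the expensive technical fix
p_harm = 0.005               # estimated chance the tail risk ever materializes
damages_if_liable = 2e9      # expected judgment if harm occurs AND the risk
                             # is found to have been "reasonably foreseeable"
p_foreseeable_anyway = 0.3   # chance a court finds foreseeability even without
                             # the developer's own documentation

# Option A: document the threat model and pay to mitigate it.
expected_cost_transparent = mitigation_cost                               # $50,000,000

# Option B: stay quiet and argue, if the harm ever occurs, that the risk
# was not reasonably foreseeable.
expected_cost_silent = p_harm * p_foreseeable_anyway * damages_if_liable  # $3,000,000

print(f"{expected_cost_transparent:,.0f} vs {expected_cost_silent:,.0f}")
```

Under those assumptions, staying quiet looks an order of magnitude cheaper in expectation. That is arguably the distortion a safe harbor conditioned on meeting published standards would remove, since a certified developer no longer has an incentive to game the foreseeability question in court.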
Another virtue of this proposal is that it is resilient to AI federalism—the idea that, like it or not, America is likely to see state governments pass AI laws of their own absent major Congressional action. The private governance bodies would be non-profits—entities that could easily operate across state lines, essentially creating a national standard. States tend to be mimetic; as the recent cascade of algorithmic discrimination laws makes clear, state legislatures often simply copy statutes from other states. If this proposal is implemented in a few states, and seems to work, it could very well spread to many more within a couple of years. And in every state that adopts it, the same private governance bodies could operate.
The private governance bodies could even operate across international lines, allowing American AI governance practices to be exported to countries all over the world. This would be particularly true if AI governance comes to rely largely on technical solutions, since there may be network effects or economies of scale in extending such approaches to other countries. A well-implemented AI communications protocol, for example, may be something companies in other countries want to be a part of.
A final advantage of this system is that it allows flexibility and institutional innovation. No longer are governance practices determined by the fiat of a monopoly regulator. A new entrant can join the market with a technologically differentiated solution, and compete with the established governance bodies.
What’s more, this system can scale to all the different domains of the AI industry. Could there be a private governance organization for startups or open source? Absolutely. Could a private governance organization tailored to AI applications in one industry, like healthcare, try to establish itself? Sure. Conceivably, even firms that heavily modify open-source models might wish to have the option of receiving liability protections in exchange for meeting standards of transparency and security. And as new AI industries are born—such as generalist household robotics—new private governance institutions could grow alongside them.
The Tradeoffs
Like any governance system, this one has reasonably foreseeable failure modes and tradeoffs. For example, if multiple governance bodies compete to provide certifications for frontier AI systems, it is easy to imagine AI developers defaulting to the lowest common denominator—the private governance body with the most relaxed policies.
For this reason, the proposal laid out above includes a mechanism whereby the authorizing government body can revoke a private regulator’s license if that regulator is found to have been negligent. Doing so would eliminate safe harbor protections for all developers covered by that private governance organization. The authorizing government body could perform such investigations during its periodic re-licensing of the private regulators, or if a major harm is found to have occurred with an AI system covered by a private regulator. Thus, if AI developers all find themselves attracted to one “lowest common denominator” private regulator, they know they are assuming the risk of losing their safe harbor.
Let’s make this concrete. Say that two AI developers—we’ll call them Ganthropic (one of my cats is named Ganymede, or “Gan” for short) and BlopenAI (no explanation for this one)—choose to be certified by a private governance body called ARO (AI Regulatory Organization). ARO is suspected by some of being overly lax in its duties, and Zvi Mowshowitz has published numerous blog posts about it.
One day, Ganthropic releases a new model that is easily susceptible to a well-known prompt injection for which there are robust, but computationally expensive, technical solutions. The prompt injection causes the system to exfiltrate all user data in its 50-million-token context window to an attacker, and hundreds of millions of users’ data is stolen.
Such an event undoubtedly raises eyebrows in the offices of numerous state Attorneys General, and they jointly launch an investigation into the practices of ARO. ARO is found to have engaged in handshake deals with Ganthropic to let a few things slide—after all, Ganthropic is widely known in the industry to be more compute-constrained than its competitor BlopenAI.
In this scenario, ARO would lose its license, and every company covered by ARO would lose its protection from tort liability. BlopenAI, not having engaged in any misconduct, could find another private regulatory body. But Ganthropic would likely face greater difficulties. Setting aside any liability it might face for this debacle, it might struggle to find another private governance body willing to take on the risk of covering it and its models. In that way, the system itself incentivizes good conduct.
Another reasonable critique is that this system would lead to an altogether new kind of patchwork. If, indeed, I am right that advanced AI will itself enable many innovative new forms of governance, then wouldn’t this system eventually result in a patchwork of many wildly different AI governance regimes?
On the one hand, the answer to this may be, “yes, and so what?” If companies can choose which regime they wish to participate in, then the system only looks like a patchwork to an external observer. For any participant in the market, there is just one regulatory system—the one they have chosen.
On the other hand, it is true that some potential AI governance solutions may benefit from, or even depend upon, near-universal participation by frontier AI developers. For example, a communications protocol for agents may have little use if it is not effectively universal, just as the internet would be a much more confusing place if there were multiple competing sets of web protocols (HTTP, TCP/IP, etc.) for website owners to choose among.
Perhaps, in the latter example, lawmakers and the public would find it wise to simply mandate use of that protocol or other governance mechanism. If that comes to pass, that would be a win for the system I am proposing. The point of this system is to experiment with different approaches to AI governance. If one approach is discovered to be the “victor,” and lawmakers wish to codify that victory in statute, that would be an example of this proposal working as intended.
Finally, you might raise objections as to the competence of the actors involved. Will private governance bodies really be up to the job? Will state governments really be able to license private governance bodies effectively? These are reasonable questions. I would agree, for example, that a federal agency would likely be better positioned to handle the task of licensing private governance organizations than any state government would be. I am envisioning this proposal as a state-led policy simply because of political realities; there is no doubt it would be better if the federal government did this.
But in the final analysis, all governance comes down to the quality of the persons who are tasked with its execution. I believe that, if the incentives are structured well, America can produce AI governance mechanisms that don’t just keep AI systems safe—they will also make them more robust, more quickly adopted, and more globally competitive than they otherwise would be.
Conclusion
In short, this proposal:
Eliminates, or at least dramatically lowers, a large category of existing legal risks to future AI development (tort liability);
Furthers the adoption and evolution of technical AI security and safety standards;
Provides a foundation for experimentation in AI governance practices;
Sits alongside existing legal systems (because it is opt-in for developers), in essence competing with the status quo; and
Accommodates the political reality that state governments, rather than the federal government, are likely to set AI policy (though there is no reason the federal government could not adopt this policy!).
It is an imperfect proposal in ways I have anticipated and, I am sure, in many ways that I have not. I hope, at the very least, you’ll see the direction I am suggesting, and understand this proposal to be an essay in the classic, French sense of the word—essayer, to try.
We cannot answer the many profound questions about the direction of AI and what it will mean for us humans. We cannot know what will come of all this dynamism. We cannot hear the piano until we’ve built it. All we are left to do, as ever, is to try.
But in so doing, we should not seek to further complicate life. Improving institutions means creating institutions that make the world simpler by solving problems, by making the world work better.
This is what is meant by government for the people. Not a government that hogs your attention, nor a government that solves all your problems on its own. Not a superhero, but a steward. A government that you can depend upon for the rule of law, national security, and basic welfare. A government that fulfills these duties while remaining humble, simple, and efficient. A government that fosters ambition by virtue of simplicity. A government that recognizes when its hand is needed to achieve a goal, and recognizes when the hands of others will do. A government that “imposes orderliness without directing enterprise” as Michael Oakeshott said, “so that room is left for delight.”
A republic.
Res publica, from the Latin. The public thing.
I believe that America has the talent to build new governance institutions for AI and the inventions it will enable. I believe we can be imaginative. I believe we are a country with talented and loyal private citizens willing to take on some of the burden of building these new institutions.
So let us handle AI, let us grapple with it, let us govern it, as a republic. Let us channel private effort toward the public good—the public thing. With any luck, some room will be left for delight.