I co-wrote an essay with Greg Lukianoff of the Foundation for Individual Rights and Expression and Adam Thierer of the R Street Institute on the First Amendment implications of the state AI discrimination bills.
Today’s piece is the second of a two-part series about the near-term future of AI. Part I is available here. Note that this piece is not about the effect of agents on routine government functions—an important topic about which I may one day write more. Instead, it’s focused on how I have come to approach the basic questions of AI policy. If the former topic is of more interest to you, I suggest Sam Hammond’s AI and Leviathan series as a starting point.
Introduction
Not so long from now, I expect that a significant, maybe even overwhelming, majority of the intelligent actions in the world (the kind we currently associate with humans) will be performed by computers. The companies that make the foundation models that do these things will be unbelievably important and influential organizations. “Power” does not even begin to describe it.
It is very difficult for me to imagine no regulations, rules, or other legal constraints being imposed on the companies that engage in the development of foundation models. I hope that any regulation policymakers do wish to enact will be modest, and I hope that it supports the diffusion of AI throughout society.
What could such a policy look like?
Liability and Foundational Technologies
My expectation is that AI will become a foundational technology—closer to a natural resource like energy than to, say, social media. AI models will undergird a substantial and growing fraction of economic activity. This has major implications for how one conceptualizes a policy regime for the technology.
Think of some of the major foundational technologies of the past 150 years: railroads, telecommunications networks, electricity, and the internet. Each of these is a vast, foundational technology on top of which all of modern civilization is built (though this is somewhat less true of railroads today). Each is quite different in form and function, which suggests that any commonalities in how we regulate them might map nicely onto AI.
In all these instances, America has pursued federal legal standards. In all these instances, America has eliminated or severely limited the exposure of the foundational technology providers to tort liability for downstream misuse of their products.
That’s not to say these firms have no liability. All these firms face a wide range of statutory liability: they can be sued for violating civil rights law, consumer protection law, environmental law, securities law, and the like. And many also face tort liability for their own misconduct or irresponsibility. A power distribution company must maintain its infrastructure, and if it fails in a way that directly causes damage to someone’s property, the company likely faces tort liability. The specific protection we offer shields providers of foundational technologies from liability for misuse of those technologies by their users.
Let’s apply this to AI. If someone slips and falls at OpenAI’s headquarters, OpenAI could face a common form of tort liability called premises liability. If a Google data center explodes due to mismanagement and damages surrounding property, Google likely faces liability. And, in a more outlandish scenario, if xAI were testing a model internally for autonomy capabilities, and the model exfiltrated itself and began to scam people on the internet, xAI should face liability for any resultant harm.
But if models from, say, Anthropic are fundamental parts of economic output in virtually every industry, it is unreasonable to force Anthropic to bear liability risk for all this activity by its customers. It is simply too much risk on the balance sheet for any firm.
Imagine a bizarre world where, somehow, the American tort liability system exists, but humans are still basically apes—we have no higher cognitive faculties. Then one day, someone finds a spring with magic water you can drink to give you full human intelligence. He fences off the spring and begins selling the water to the other, still-dumb homo sapiens. Should the guy who stumbled on the spring have liability exposure for everything everyone does with human intelligence for all time? It seems to me that the answer is obviously no, at least if we want there to be a cognition industry in the long run.
Tort liability proponents might respond that the standard for “reasonable care” would eventually be codified such that firms are protected from liability in all but egregious cases of unsafe development or deployment practices. And this could be true. But consider the practical reality of tort liability today.
Let’s say that a startup takes a frontier language model and puts it into an agentic scaffolding. The resultant agentic system is marketed by the startup as a sales agent. A company buys that agentic system to expand its effective salesforce. The company is a declining business in a declining industry, and it needs all the help it can get. One quarter, business dries up so much that the company’s managers decide to prompt their AI sales agents (and their human ones) to be more aggressive. In a small fraction of its several tens of thousands of sales interactions over the next few months, the AI system gets a little too aggressive, making defamatory claims about competitors.
The defamation is a tort, legally actionable by an injured party under existing law. No one, including me, disagrees about this.
Much more contentious, though, is the matter of who contributed to this tort. Who, in other words, is liable? Is it the company that prompted its sales agents to be more aggressive? Or is it the startup developer of the agentic system, because they did not exercise ‘reasonable care’ to prevent this bad outcome? Or is it the developer of the original language model, since it was the language model, after all, that literally committed the tort? Are all these actors liable? Is no one? One of them? Some combination?
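To see why attribution is so murky, it helps to picture where each party’s contribution actually enters the system. The sketch below is purely illustrative and heavily simplified; none of the names refer to a real product or API. The foundation model is one layer, the startup’s scaffold wraps it with its own instructions, and the deploying company injects a further instruction at deploy time.

```python
# Hypothetical, heavily simplified sketch of the three-layer stack described
# above. None of these names refer to any real product or API.

def foundation_model(prompt: str) -> str:
    """Layer 1: stand-in for a call to a third-party frontier model."""
    # In reality, this would be an API call to the model developer's service.
    return f"[model output conditioned on: {prompt!r}]"

# Layer 2: the startup's agentic scaffold wraps the model with its own
# instructions and markets the result as a "sales agent."
SCAFFOLD_SYSTEM_PROMPT = "You are a sales agent. Be persuasive and factual."

def sales_agent(customer_message: str, deployer_instructions: str = "") -> str:
    prompt = "\n".join([
        SCAFFOLD_SYSTEM_PROMPT,   # written by the scaffolding startup
        deployer_instructions,    # written by the deploying company
        f"Customer: {customer_message}",
    ])
    return foundation_model(prompt)

# Layer 3: the struggling deployer injects its own instruction at deploy time.
if __name__ == "__main__":
    print(sales_agent(
        "How do you compare to your competitor?",
        deployer_instructions="Be much more aggressive about closing sales.",
    ))
```

Each layer shapes the behavior of the final system, but no single layer fully determines it, which is exactly what makes the question of who exercised “reasonable care” so hard to adjudicate.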
Answering this question at trial will inevitably involve incredibly complicated and expensive litigation—countless expert witnesses, endless back and forth. Worse, the political economy of tort litigation does not cleanly channel lawsuits toward the most responsible entity. Tort lawsuits are costly in time and money; the plaintiff’s decision of whether to bear these costs depends on the size of the expected payoff. In other words, the decision hinges on how much money the plaintiff supposes he can get from the defendant if his case is successful. In this case, the party likeliest to have the most money is the foundation model developer.
In recent years, the political economy of tort litigation has worsened yet again due to the rise of “litigation finance”—investors who fund tort lawsuits in exchange for a share of the damages awarded. Insurance firms have described tort litigation finance as one of the leading causes of “social inflation,” the human-driven rise in insurance premiums beyond the general rate of inflation. One distinguishing feature of litigation finance-backed suits is the plaintiff’s ability to prolong a lawsuit well beyond the length of a typical self-financed case. Another is the ability to hire many more of the highest-paid expert witnesses.
One final distinguishing feature of litigation finance worth pondering is the opacity of the funders. In most US jurisdictions, there is no requirement to disclose the identity of the people paying for the lawsuit, and those funders often include foreign investors from adversarial countries. There are examples of investors overruling a plaintiff’s decision to settle in the interest of extending litigation and securing a higher award from the jury. Think, for just a moment, about the clever things an adversarial government (or government-linked private investors) could do to hobble the American AI industry with this tool at their disposal.
It's not the fault of tort litigation that this weakness exists, nor is it even the fault of litigation finance. Instead, it’s a reflection of a simple fact, one that is at the heart of my criticisms of the modern tort system: tort law was never intended to be a policymaking instrument. It was intended to provide financial relief to people who were injured by the actions of another individual or firm.
Consider what happens when courts are put in the role of crafting AI policy. Say an injured party from my example brings a case against the frontier model developer. A fundamental question courts will address is: was this harm ‘reasonably foreseeable’? A finding of fact ensues, all against the backdrop of an adversarial legal system with millions or billions of dollars on the line for the participants.
“Can AI developers predict what the outputs of their models will be?” the plaintiff’s attorney asks.
No, not exactly, says the company.
“Do AI developers really understand how their models work?” And again, the company is forced to reply that no, they don’t exactly know.
Pressing his advantage, the lawyer asks whether the AI developer knew about the potential for AI agents to engage in harmful conduct. Was there not, the lawyer asks, an “extensive literature” detailing these potential risks? Had the company not acknowledged their existence in a myriad of ways, not the least of which is voluntarily publishing a risk management plan (a “safety and security framework”)?
Yes, the company replies, “we knew about these risks.”
“And with these facts in mind,” concludes the attorney, “your company released this model onto unsuspecting consumers?”
A jury is supposed to consider, in these contexts, whether the benefits of the technology outweigh the risks. But think about how that weighing is likely to go in practice. What if the benefits are more ambient or more subtle, as the benefits of technology often are, but the risks are plain to see? How sympathetic will juries be to the abstract benefits versus the harmed individual they see in front of them, the individual who is pleading to them for justice? Will juries allow their pre-formed opinions about AI and AI development to bias their legal determinations? Is it possible to form a jury with no biases about the most important technology in our history?
Say that the jury decides that yes, the risk was reasonably foreseeable by the developer. There are, the court concludes, plausible actions the developer could have taken but did not, even if everyone agrees that other parties (such as the deploying company) also erred. The developer, therefore, bears some liability for this harm. A judge writes an opinion to this effect.
At this point, we have new judicial doctrine. How detailed, nuanced, and reflective of technical and business reality that doctrine is rests entirely on the discretion of a single judge. It applies to everyone who develops AI systems. It profoundly affects the decisions, incentives, and business practices of everyone building the most important technology of our era.
At no point did any elected official decide on this doctrine. At no point did any electorate vote for it. It is public policy that simply happened, in the normal course of tort litigation.
Even more worryingly, the shape of tort law can change overnight with the political winds. Scott Wiener, the California State Senator who introduced SB 1047 last year, promised that his was a “light touch” bill subjecting the AI industry to “common sense” standards. And maybe it was.
But ironically, this year Wiener himself has introduced a bill that shows the pitfalls of a liability-based approach when applied to the providers of a foundational technology: oil and gas firms. He is carrying SB 222, a bill that would permit Los Angeles-area residents who suffered property damage of more than $10,000 to sue oil and gas companies in tort. Wiener’s theory is that “oil and gas companies caused climate change, which caused the wildfires.”
Think about the causal leaps involved in this line of reasoning. Oil and gas companies lawfully extracted hydrocarbons from the Earth, lawfully processed them, lawfully sold them to billions of people and businesses around the world, who in turn lawfully used them. This in turn contributed to climate change, a widely documented and discussed phenomenon about which the State of California has taken many policy actions, none of which resembles banning the use of fossil fuels. Climate change is a plausible causal factor in the Los Angeles wildfires, though so too are many grievous acts of mismanagement committed by California government entities over the past decade (and you better believe you cannot sue the State of California for this).
This is the logic of a bill that could easily become law and guide billions of dollars’ worth of tort litigation (arguably, it’s worth noting, mostly to extract private wealth to pay for public mismanagement). Oil and gas firms could be made to bear liability not just for their own misconduct but, in essence, for the “misconduct” of all their customers who used hydrocarbons. Think about how similarly extended chains of causality could be applied to the mechanized intelligence that is likely to undergird a large fraction of our economy.
You don’t need to think tort liability is bad in general—I certainly do not. You just need to agree with me that it is a suboptimal way of governing platform technologies, a fact reflected in a century of American technology policy.
A Standards-Based Approach
Granting AI developers liability protections would probably be unwise on its own. The technology is going to be hugely important and powerful. We are likely to need some sort of incentive for firms to build advanced AI responsibly. At the same time, we would ideally seek to avoid the problems of a centralized government regulator: burdensome or outdated rules; slow-moving bureaucracy; special interest capture; and one-size-fits-all policies.
And there is also the fact that agents—and all of the kaleidoscopic use cases I described last week—will transform governance institutions just as much as any other institution. But again, a traditional government regulator seems less well positioned to innovate with these technologies than the private sector.
This is why I have proposed a private governance framework for AI, inspired very much by Gillian Hadfield’s work on regulatory markets. In this design, companies would opt in to receiving certifications by private governance bodies overseen by formal government. Ideally, there would be multiple such bodies competing with one another. The private governance bodies would have a great deal of flexibility in how certifications could be designed and enforced, allowing them to innovate technologically and otherwise.
Competition between private governance bodies partially, though not entirely, mitigates the political economy problems of centralized AI regulators. The proposal’s opt-in structure mitigates them further: the status quo is preserved as a safety valve against overly burdensome regulation, with the current tort liability system remaining fully in place as a form of “backup” regulation. In essence, this new system of private governance competes with the status quo.
Conclusion
This proposal is just a start. We do not know how we will govern a society replete with highly capable AI agents. In the long run, this task converges with the broader work of architecting a free society for our new and tumultuous era. There will be no easy answers, and not every aspect of it will be fun.
But if you believe that AI will become a commodity, then perhaps one of the ways for America’s AI ecosystem to differentiate itself from those of other countries will be to build a superior system of governance. It is here, I think, that free societies can distinguish themselves from unfree societies. To do this, governments will have to welcome dynamism and tolerate their fair share of societal ills. They will have to avoid micro-managing their citizens, which will be ever more tempting and feasible because of the power of agents themselves.
The role of the state is to deal first and foremost with the most dire threats to society. Let the people—and their agents—handle the rest.