I’d like to issue a correction regarding last week’s essay. I wrote that strict liability can attach to product manufacturers regardless of whether the issue with the product is a “manufacturing” or “design” defect. This was true for several decades, but the 1998 Restatement (Third) of Torts: Products Liability (not official law but guidance for judges published by the American Law Institute, though often effectively treated as statute) moved design defects to a “risk-utility” framework, closer to negligence than to strict liability. This error is ultimately downstream of a version control oversight on my part. It has been corrected, with a supplemental note, in the original post. I regret the mistake.
On to this week’s essay—part II of my series on AI liability.
Introduction
The landscape of American innovation is indelibly shaped by where liability lies, and where it does not. Innovative technologies are very often the technologies whose tort liability exposure, for one reason or another, is limited. The COVID vaccine? Waiver. SpaceX rocket launches? Waiver. Cell phone towers? Waiver. Internet communications and social media? Waiver. Other innovative business models, such as ridesharing services, were designed in part to minimize tort liability exposure.
Most important of all, though, is software, which for the most part enjoys a miraculous existence, seemingly unbothered by the liability rules that hold back everyone else in our economy. But this had already begun to change before powerful AI came to market—and AI may well be the straw that breaks the camel’s back. Software developers—and especially AI developers—may not be free from tort liability for much longer.
Software has been the primary driver of American innovation and growth for at least the last two decades. Tort liability could bring that dynamism to a dramatic halt. The AI community must face this problem head on. Here are some thoughts about how to do so.
Software’s Liability Wall
For a long time, software has had an implicit liability wall. It is hard, in general, to sue software developers, because most tort liability applies to harms that take place in the physical world. In particular, it is often difficult to prove that software, as an intangible good, led to the kind of real-world harms that invoke strict liability. On top of this, lawyers have struggled to develop negligence liability theories for software development, because it is hard to say what constitutes the “standard of care” for developers.
In software, then, contract, that ancient form of risk sharing I discussed last week, still prevails. Software is offered as-is, and developers disclaim all liability in the lengthy contracts (variously referred to as end user license agreements, terms of service, etc.) consumers must sign to use almost all modern software tools and services. Disclaimers of this kind once governed physical products, but they were thrown out by courts during the strict liability revolution. For software, though, the wall still stands.
Just as with the old product disclaimers, consumers have little to no power to negotiate the terms of these software contracts. And, much like the tort revolutionaries of the mid-20th century, many legal scholars and other thinkers find this unfair. They have sought to change it. Some want to do so because they believe it will improve cybersecurity. Others are motivated by the noble causes of consumer protection and accountability. And some want to do it because they are trial lawyers, and trial lawyers, like all businessmen, enjoy expanding their total addressable market.
These efforts have been ongoing for years. In the European Union, unsurprisingly, they have succeeded (though the European Union does not share America’s exceptionally broad product liability regime, and it is worth noting that the new rules apply neither to open-source software nor to AI systems, the latter initiative having recently been walked back by the Europeans). But similar efforts to pass laws that explicitly assign generally applicable liability to software developers have mostly faltered in the US.
The Emerging Cracks in the Wall
At the end of the day, though, an explicit law creating liability for software developers may not be necessary. This is because tort liability is an ambient legal phenomenon: it always exists in the background, at least in principle, unless a law explicitly says otherwise. And there is no law that protects software from liability. Section 230 shields website owners from liability relating to content posted by third-party users, but it says nothing about software writ large.
Because of this, some cracks in software’s liability wall are beginning to emerge. Just a few years ago, Apple paid hundreds of millions of dollars in settlements and fines related to alleged problems with its iPhone battery management software. And Snap has been exposed to tort liability for a design flaw in a pure software feature (its “speed filter,” which used smartphone sensors to show how fast a user was moving and was implicated in gruesome automobile accidents), after a federal appeals court allowed negligent design claims against the company to proceed.
What about AI, though? AI, depending on how you define it, might be difficult to distinguish in practice from old-fashioned software. But it is new as a concept—exotic, even. That concept has not quite rooted itself into the world as firmly as software has. And modern AI is both powerful and, quite plausibly, more dangerous than traditional software.
The liability advocates see an opportunity. Most of them are neither craven nor cynical; like the tort revolutionaries of earlier generations, they believe they are doing good. They believe liability is necessary to create a “race to the top” for safety and security in AI. And just like those earlier revolutionaries, they have a point.
Even if you are, rightly, uncertain about the capabilities trajectory of frontier AI, it seems plausible that AI systems will soon be automating significant amounts of work that is today done by knowledge workers. As their reliability improves, and even with “humans in the loop,” it is difficult to imagine that firms will not allow AIs to make progressively more, and progressively more consequential, decisions.
Should all these risks truly be borne by the users, including individuals and small businesses with little or no technical expertise? What if an AI system errs in some way that a user could not have foreseen (say, because of a prompt injection attack) and causes financial harm to some third party who had no part in the matter to begin with? What if the AI hallucinates in a way that misrepresents the intentions of the company employing it? What if an AI sales agent, trying to persuade a prospective customer, defames another business (remember: defamation is a tort)?
Is the user of the system always to blame? If not, and developers cannot be sued, is no one to blame? Is there a negligent way to release an AI system capable of performing such a vast swath of cognitive labor? Should we simply revert to the ice-cold system of contract born in the days of our tough and distant ancestors, whose extreme poverty gave them little time or inclination to think about helping the hapless? Or can we do better?
No matter your answers, the reality is that these questions will be asked by increasingly large numbers of your countrymen in the coming years and decades. The liability shield for software in America has stood strong for decades, but it is already beginning to show its cracks. Will AI be the thing that shatters it? Quite possibly yes. Just as I published the first part of this series last week, Microsoft CEO Satya Nadella echoed the same idea on Dwarkesh Patel’s podcast (text in brackets and emphasis added):
I think the one biggest rate limiter to the power [of AI] here will be… how does the legal infrastructure evolve to deal with this?…
Today, you cannot deploy these intelligences unless and until there's someone indemnifying it as a human…
To your point, I think that's one of the reasons why I think about even the most powerful AI is essentially working with some delegated authority from some human… This AI takeoff problem may be a real problem, but before it is a real problem, the real problem will be in the courts. No society is going to allow for some human to say, “AI did that.”
To put it bluntly: liability is a major problem for AI developers and the AI field more broadly. This is true whether or not a new liability law is passed by state governments, and it is true whether the AI community wishes to acknowledge it or not.
What To Do
Those of us who care about innovation and civilizational progress can mount our defense at software’s liability wall. But we are few in number; failure seems likely. SB 1047, a tort liability bill (though not a strict liability one), came within an inch of becoming law. And, much to my surprise, negligence liability for AI that is significantly broader than SB 1047 is well on its way to becoming law throughout the United States. Yet we do not observe the same outcry from developers, academics, and others that SB 1047 generated. Some of the tech industry is even supporting it.
I doubt the wall can stand much longer, and candidly, I am not sure how much longer it should remain standing—for AI, at least (for traditional software it undoubtedly should remain, and on this I will fight anyone tooth and nail).
Fundamentally, there are three options: do nothing and fight for the status quo of “no liability”; forge some kind of reasonable compromise; or build an altogether new kind of system for sharing responsibility for harms. Let’s examine each in turn.
Do Nothing and Fight for the Status Quo
This route will involve playing whack-a-mole across 50 state legislatures and the federal government as the temperature of AI rhetoric only rises, whether the heat comes from prosaic tech ethics concerns, labor market worries, intrinsic distrust of the new, or fears of AI catastrophe (probably not from all four at once, but almost certainly from at least one). AI optimists seem mostly to think they are doing well at this so far, though I believe we are mostly failing, given the large number of liability bills making their way through the states.
Even if optimists play legislative whack-a-mole perfectly, we will still have the issue of the courts. Remember that the entire system of strict liability I described last week was assembled, by and large, without any votes by legislatures or electorates. The tort liability system was built by judges, juries, and legal scholars. There is absolutely nothing you can do about an adverse decision in courts, short of passing a law to shield AI from liability. If the dam breaks, liability suits will proliferate. If the dam really breaks, it will be because judges decided to apply strict liability to AI in some form or another. Tort liability cases have been brought against AI developers already, most notably against Character.AI for their models’ alleged role in the suicide of a Florida boy.
Doing nothing and begging policymakers to let AI developers get by under the status quo is therefore the highest risk approach.
Compromise
A plausible compromise on liability could take many forms, but I have thought the most about two of them.
One would be to accept regulation by a centralized regulator—perhaps like the FDA—in exchange for either complete safe harbor or significant and unwavering limits on liability for AI developers. The regulator could be a traditional government agency, or it could be a private body, like FINRA (which regulates securities brokers). In an ideal world, it would be free from political interference and promulgate technical standards that are easy enough for all AI developers to meet.
Another compromise would be to accept something resembling the ancient form of negligence liability based on the “reasonable person” standard, which I described above. In this scenario, companies themselves—or perhaps a nonregulatory but still powerful public or private standards body—would establish industry best practices for security and risk mitigation. Meeting these best practices would grant a company effective safe harbor from liability.
The advantage of this compromise, vis-à-vis a formal regulator and from the perspective of innovation, is that a company could choose not to follow the best practices and still offer its products. Perhaps some startup comes up with an innovation that makes it confident it can avoid most harms, and it lets its liability insurance handle the remainder. This approach would allow, if not necessarily encourage, that outcome.
Both are plausible compromises firmly rooted in familiar legal and governance concepts. They may well be prudent, if not exactly the easiest pills for developers to swallow. There are numerous downsides to these compromises, as there are with any. For the first, the most glaring is probably the inherent political economy problem of centralized regulators. For the second, the biggest is that, even if the regime is carefully crafted to avoid excessive lawsuits, the history of liability is one of gradually eroding definitions and conceptual dams. This is how we ended up with the broad liability system we have today, after all.
Build Something New
Here, we can allow ourselves some fun (if you find liability fun). This is just one idea of many, of course, and I encourage others to develop proposals of their own.
I am inclined to believe that the old-fashioned contract is the optimal way for transacting parties to apportion liability for foreseeable harms among themselves. Though I understand why contracts came to be seen as subpar by the tort liability reformers, I wonder if it is time for contracts to make a comeback. The system I am going to outline below, it is worth noting, is intended to cover almost all risks from AI systems—but importantly, it probably does not, on its own, cover catastrophic risks.
What if, for example, commercial use of agentic AI were governed by a contract dynamically generated and negotiated by AI systems? The contract could specify different risk-sharing terms based on the inherent risks of the activity under consideration, themselves modeled quantitatively by teams of AI actuaries and economists. It could consider the likelihood of third-party harms (harms caused by an AI system to a bystander uninvolved in the contract) and include provisions for compensating third parties.
If, for example, a business is using an agentic AI system for routine process automation in a lightly regulated industry, the AI company could readily offer generous terms—indemnifying the customer (that is, covering expenses associated with harms) for millions of dollars beyond what the customer’s own insurance would typically cover. Indeed, cloud computing companies routinely indemnify low-risk uses of their services today, often via self-funded insurance rather than a separately purchased policy.
Riskier AI applications, on the other hand, might face more onerous terms. The AI company might be willing to share much less risk with, say, a hospital than it would with a florist. Each customer could, in principle, have its own AI that reviews and negotiates the terms of the contract with the company. Because both sides of the negotiation would be AIs, it could happen rapidly. The parameters of these contracts could be constrained by lawmakers in a diverse range of ways, and the contracts themselves could be reviewed, enforced, and perhaps even generated by private governance institutions such as automated adjudication bodies (and standard civil courts, if need be). Of course, the terms of these contracts could themselves become a vector for competition in the AI industry. In exchange for participating in a contract-based system, AI companies would receive a liability shield from tort claims against them.
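To make this more concrete, here is a minimal, purely hypothetical sketch, in Python, of how risk-tiered terms might be generated programmatically. The tiers, multipliers, and dollar figures are invented for illustration only, not a proposal for actual contract values:

```python
# Purely illustrative sketch of risk-tiered contract term generation.
# All tiers, multipliers, and dollar figures below are invented assumptions.

from dataclasses import dataclass


@dataclass
class ContractTerms:
    indemnification_cap_usd: float  # maximum the AI company will cover for the customer
    third_party_fund_usd: float     # set-aside for compensating uninvolved bystanders
    human_review_required: bool     # whether high-stakes actions need human sign-off


# Hypothetical base cap and risk tiers, standing in for the quantitative
# models that teams of AI actuaries and economists would actually produce.
BASE_CAP_USD = 5_000_000
RISK_TIERS = {
    "routine_automation": 0.2,  # e.g., back-office workflows in lightly regulated industries
    "customer_facing": 1.0,     # e.g., sales or support agents
    "safety_critical": 5.0,     # e.g., clinical or industrial settings
}


def generate_terms(use_case: str, expected_third_party_exposure_usd: float) -> ContractTerms:
    """Draft indemnification terms for a given use case.

    `expected_third_party_exposure_usd` is a (hypothetical) actuarial estimate
    of annual expected harm to third parties from this deployment.
    """
    risk = RISK_TIERS[use_case]
    return ContractTerms(
        # Riskier uses get a lower indemnification cap from the developer...
        indemnification_cap_usd=BASE_CAP_USD / risk,
        # ...but require a larger set-aside to compensate potential bystanders.
        third_party_fund_usd=expected_third_party_exposure_usd * risk,
        human_review_required=risk >= 1.0,
    )


if __name__ == "__main__":
    print(generate_terms("routine_automation", expected_third_party_exposure_usd=10_000))
    print(generate_terms("safety_critical", expected_third_party_exposure_usd=10_000))
```

In a real system, of course, the negotiating AIs on each side would haggle over exactly these kinds of parameters, within whatever bounds lawmakers and governance institutions set.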
An approach like this has a few distinct advantages. First, it allows risk to be both determined and shared according to the mutual agreement of transacting parties, rather than according to the arbitrary whims of disconnected regulators or courts. Second, it provides corporate customers of AI services with a guarantee of financial compensation in the event that an AI model malfunctions in some way that causes harm. This is far better than the casino dynamics that predominate in tort liability cases, where only the luckiest plaintiffs win large settlements (and indeed, in many states, as part of tort reform efforts, legislators have capped maximum damages in tort cases, so the payouts from a contract-based system could plausibly be larger than those from a tort-based system).
Adopting a contract-based system need not be an all-or-nothing decision. For example, policymakers can set minimum standards for contracts to ensure a baseline level of protection for all users—just as they do today for insurance contracts. Contracts could be regulated in a variety of other ways, and one beneficial side effect is that AI policy might come to focus more on regulating contracts (familiar instruments to the legal system) rather than AI models (practical aliens to the legal system).
Such a system would probably need to be paired with, at the least, minimum transparency requirements for AI model developers—something akin to a model card, containing basic information on model training, reliability, security, and evaluation performance. This would be essential so that customers can accurately assess the strengths, weaknesses, and risks of models they are considering. Even better, though, would be for a contract-based system to be complemented by a rich model evaluation ecosystem. I wrote a bit about what such an ecosystem could look like a few months ago:
Even more mundane aspects of AI governance—such as ensuring that AI models comply with existing regulations in industrial use cases—are, at their core, evaluation problems. If a doctor wants to know whether his use of an AI model in patient diagnostics is a medical malpractice risk, his inquiry will not be meaningfully aided by a law that says “do not use AI models irresponsibly.” Instead, he will need high-quality evaluations to gauge whether different models are appropriate for his envisioned use cases.
It is conceivable that an entire ecosystem of AI model evaluators—composed of both existing and altogether new organizations—could be a major part of the long-term solution to AI governance.
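Returning to the transparency requirement mentioned just before that excerpt: as a purely hypothetical illustration (the field names and values below are invented, not drawn from any existing model card standard or real model), the minimal disclosure might look something like this:

```python
# Hypothetical illustration of the kind of minimal, machine-readable disclosure
# a model-card-style transparency requirement might mandate.
# Every field name and value here is invented for illustration only.

MODEL_DISCLOSURE = {
    "model_name": "example-model-v1",  # hypothetical identifier
    "training_summary": "Web text plus licensed datasets (described at a high level)",
    "intended_uses": ["document drafting", "routine process automation"],
    "known_limitations": ["hallucination under ambiguous prompts"],
    "reliability": {"task_success_rate": 0.92},                 # from developer-run evaluations
    "security": {"prompt_injection_testing": "performed; summary published"},
    "evaluations": [{"benchmark": "internal-agentic-suite", "score": 0.87}],
}


def has_minimum_disclosure(disclosure: dict) -> bool:
    """Check that a disclosure covers the basics a customer would need to
    assess a model's strengths, weaknesses, and risks for their use case."""
    required = {"training_summary", "known_limitations", "reliability",
                "security", "evaluations"}
    return required.issubset(disclosure)


assert has_minimum_disclosure(MODEL_DISCLOSURE)
```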
As mentioned above, the contract-based system I’ve sketched is intended to cover non-catastrophic harms from AI (though it could certainly include individual or localized cases of physical injury, loss of property, and the like). It would not be well-suited to address the risks of, say, an accidental pandemic caused by a pharmaceutical company’s research with an AI scientist.
But the reality is that empirical evidence (as well as deductive arguments I find persuasive) suggests that liability alone only weakly incentivizes companies to mitigate these tail risks. The costs associated with such harms are so extreme that managers tend simply to put them aside rather than actively invest in mitigations tailored for catastrophic risks. We currently lack high-quality (or really, any) quantitative models for catastrophic AI risks—in many discussions, we lack even a definition of these risks in the first place.
As these risks become better understood, collective insurance schemes—such as the nuclear power industry’s federally supported insurance system for catastrophic risk—may be a long-term solution. Alternatively, the contract-based system could be complemented by one of the approaches described in the “compromise” section of this essay, with technical standards narrowly tailored to catastrophic risk mitigation.
Conclusion
The AI community underestimates the risk that the tort liability system could be rigorously applied to its products. Indeed, in all likelihood, tort liability already applies to AI systems; the legal system is just figuring out how to apply it. Though courts move slowly, they do move—one day soon, AI developers could find themselves on the receiving end of massive tort liability exposure.
But in a way, the expansiveness and high cost of America’s tort liability system is an incentive for the AI field to innovate. If unrestricted tort liability poses such an enormous risk to AI—as I believe it does—then the field should be strongly motivated to forge a better path. The most prudent near-term path may well be one of the compromises I have outlined above. I can’t help but wonder, though, if an institutional innovation out of left field—one that blends the best parts of our legal traditions with our most cutting-edge technology—might be just the thing we need in the long run.
One thing, however, seems certain: pretending as though this problem does not exist is no solution at all.
Hey Dean. As always, I appreciate your thoughts on these topics.
I worry that these posts miss the central question of the liability debate. It seems like most of your arguments are in support of the proposition that, as between a transacting AI developer and a consumer, liability should be mostly derived from contract and not from tort law.
But it seems to me like the main question raised by 1047 and other liability proposals is about what to do as between an AI developer who is at fault (e.g., negligent) and an injured third party, when there is no contract between them.
Contracts, of course, are voluntary. I am under no background obligation to contract with AI providers as to any injuries their products may cause me as a third-party bystander. So the terms I would be willing to agree to would of course depend on where tort liability would lie in the absence of a contract.
It seems to me like skeptics of 1047 and other liability proposals want the answer to be: if an AI developer fails to take reasonable care and thereby causes a third party harm (in the legally relevant sense),* the third party should simply bear the costs themself (even when there is no contract between them and the third party is not also blameworthy).† It seems very hard to me to justify this position. The loss must be allocated between the two parties; the decision to let the loss lie with the plaintiff is still a policy choice. Morally, it seems inappropriate to let losses lie with the less-blameworthy party. But more importantly, it is economically inefficient from the perspective of incentivizing the proper amount of care. After all, the developer could much more easily invest additional resources in the safety of their products; the third party could not.
Maybe this argument is wrong in some way. But the arguments about the viability of contract simply have very little relevance. More generally, it would be good to identify where you agree with and diverge from mainstream tort law and theory.
The argument about litigation costs is more on-point. But note that this cuts both ways: it also makes it harder for the injured party to vindicate her rights. And indeed, given the nature of these things, litigation costs will probably be much more painful to her than to the developer. If litigation costs are the main problem, I don’t think the right answer is to simply erase tort liability for negligent developers: the more tailored answer is just to figure out how to reduce such costs. There are plenty of proposals on how to do this, and I think there is widespread consensus on the need for reform here. (E.g., https://judicature.duke.edu/articles/access-to-affordable-justice-a-challenge-to-the-bench-bar-and-academy/). (Also, I am hopeful that AI lawyers will dramatically decrease litigation costs, if we can keep bar associations from getting in the way!)
* Cf. https://www.law.cornell.edu/wex/cause#:~:text=In%20tort%20law%2C%20the%20plaintiff,proximate%20cause%20of%20the%20tort.
† In cases where the third-party could have prevented the harm with reasonable care, standard tort doctrine is to either absolve the developer of liability entirely, or partially offset the developer’s liability (https://www.law.cornell.edu/wex/comparative_negligence).
I wonder if tort liability also contributed to the rise of offshoring - essentially companies trying to "hide" in other jurisdictions to escape the grasping hands of tort lawyers.