During the SB 1047 debate, I noticed that there was a great deal of confusion—my own included—about liability. Why is it precisely that software seems, for the most part, to evade America’s famously capacious notions of liability? Why does America have such an expansive liability system in the first place? What is “reasonable care,” after all? Is AI, being software, free from liability exposure today unless an intrusive legislator decides to change the status quo (preview: the answer to this one is “no”)? How does liability for AI work today, and how should it work? It turned out that to answer those questions I had to trace the history of American liability from the late 19th century to the present day.
Answering the questions above has been a journey. This week and next, I’d like to tell you what I’ve found so far. This week’s essay will tell the story of how we got to where we are, a story that has fascinating parallels to current discussions about the need for liability in AI. Next week’s essay will deal with how the American liability system, unchecked, could subsume AI, and what I believe should be done.
Introduction
“The fact that [the plaintiff] restricted its contractual liability to [the defendant] is immaterial… Regardless of the obligations [plaintiff] assumed by contract, it is subject to strict liability in tort.”
-Justice Roger J. Traynor, Supreme Court of California, majority opinion in Vandermark v. Ford Motor Company (1964) (emphasis added)
In April 2014, Dewayne Johnson, a groundskeeper for the Benicia Unified School District in Solano County, California, was performing a routine application of the herbicide Roundup to school grounds. The hose he was using to spray the chemical broke, and Johnson quickly became covered in a product whose packaging clearly stated it should never be brought into direct contact with the human body. A few months later, Johnson was diagnosed with Non-Hodgkin Lymphoma.
There is almost no evidence that exposure to Roundup, whose active ingredient is glyphosate, causes cancer in quantities like this. The European Commission, the World Health Organization, and, most importantly for purposes of American law, the Environmental Protection Agency all rate the herbicide as an unlikely carcinogen.
Nonetheless, Johnson sued Monsanto in California court, and a San Francisco jury awarded him and his lawyers $289 million, a figure that ultimately dropped to $21 million after several rounds of appeal.
There’s nothing especially noteworthy about this case, or about the 154,000 other Roundup-related lawsuits against Monsanto (and its parent company, Bayer) that followed it and have cost that company more than $10 billion in settlements and payouts. This is a fairly typical example of tort litigation in the United States. Yet cases like this have always struck me as strange. What if the EPA had classified Roundup as carcinogenic, and a San Francisco jury had decided that, no, in fact, glyphosate was safe? We would all agree that would be absurd.
Why is it that juries and judges routinely find safety problems in products that safety regulators with specialized expertise have deemed safe?
The answer lies in the system of modern American tort liability, conceived by a handful of jurists and legal scholars in the middle of the 20th century. It is a system that shapes your life in ways you almost certainly have never realized (unless you are a tort lawyer). It is a system that shapes what you do and do not eat, what kind of medicines you can and cannot take, and even the educational opportunities available to children. It is a system that profoundly affects nearly every product, public space, and even many services you encounter—and prevents many products, public spaces, and services from existing altogether. It is a system that is, as far as I know, unique to the United States. It is a system that came to be, for the most part, without even a single law passing, without a single vote by any electorate or legislature. And in a similar fashion, without so much as a vote in a single state legislature, it is a system that could come one day soon to AI.
American liability did not always work this way. It was, at one point not so long ago, downright Spartan; now it is the most expansive liability system in the world. The transformation from then to now took place in a shockingly brief period of time. It was understood by those who effected it as a technocratic response to modern capitalism and technology. It is a story of smart, analytical, and utilitarian people—people who desired social and material progress—doing what they believed was right to fix legitimate problems. Unsurprisingly, it is also a story of the unintended consequences of their fundamentally pro-social design.
Here is what happened.
The Liability Revolution
Liability is the assignment of responsibility for harms. For most of American history, this was primarily done through the legal instrument of the contract: parties wishing to transact with one another would agree to a contract, which would specify who would be responsible for doing what, who would be responsible if things went wrong, and what they would do in that event. A key function of courts is to enforce contracts, though as we will see, this role is perhaps less central today than it once was.
For centuries, courts also entertained various forms of tort liability. A tort is a wrong; the word comes from the Latin tortum, meaning “twisted” or “injustice.” If I am negotiating a contract with someone, and I punch them in the face, I have committed a tort. There are, in general, two kinds of tort liability, both of which have existed for centuries: fault-based and fault-free.
A tort requiring proof of fault means that the plaintiff must demonstrate that the defendant acted carelessly. In the United States, we call this “negligence” liability, and it is based principally upon the concept of a “reasonable person.” Everyone owes a “duty of care” to society to not cause harm, and this duty is defined in the context of a particular case by asking what a “reasonable person” would have done. This is the origin of the legal term of art “reasonable care,” which any follower of SB 1047 will likely remember.
Then there are fault-free torts. If harms of this kind occur, the defendant is responsible regardless of whether they intended the harm, and regardless of how careful they were to avoid it. If my neighbor’s dog sneaks through the fence onto my property and damages my car, my neighbor is liable, regardless of how well-constructed his fence was and regardless of any other precautions he took. Today, we call this strict liability.
For a long time, contracts predominated in the assignment of liability, and both forms of tort liability I have described lived on the fringes, filling in the blanks left by contracts.
But with the rise of modern industrial capitalism, the modern large corporation, mass-manufactured goods, and high technology, the contract system began to break down. Life was becoming safer on the whole, but our technology was becoming more overtly dangerous: a liability system born out of disputes between farmers over livestock grazing in medieval Europe seemed ill-suited to grapple with automobile accidents, airplane crashes, tainted chemicals in drugs and foods, and electrical appliances that caught fire.
No longer were products sold under a contract negotiated by buyer and seller; if there was a contract at all, the buyer increasingly faced take-it-or-leave-it terms. How was “the little guy” supposed to negotiate with the multinational private corporation, still a relatively new character on the world-historical stage? And on top of that, proving negligence by a manufacturer was often near-impossible: under the old negligence standard, if the plaintiff had done even a single thing that might have contributed to the harm, the case could be dismissed. The world that birthed these old standards was quite alien to our modern one: it was harsh, comparatively poor, and had little time for luxuries like protecting the little guy. If the buyer consented to the terms of sale, that was, more often than not, that.
These problems had been recognized by judges, here and there, throughout the 19th and early 20th centuries, but few had conceptualized a wholesale reform to the system. In the mid-20th century, a group of scholars and judges reimagined liability for the era of industrial capitalism. Among their leaders were Roger Traynor, Guido Calabresi, and William Prosser.
Their idea was straightforward: liability should be designed to structure the incentives of corporations to prioritize consumer protection and safety. The corporations would invest in safety and liability insurance, and, in their imagining, there would ensue a “race to the top” on the safety and reliability of products throughout the economy. With this goal in mind, they expanded negligence liability to incorporate a broader standard of care for defendants (and relaxed the restrictions on plaintiffs). Most importantly, though, they began to apply strict liability to products. Suddenly, the tables were turned: so long as a consumer could prove that a harm occurred and that it was caused by the product in question, the manufacturer was to blame.
There were many problems with this, but two bear mentioning, especially in the context of AI liability.
The first problem, then and now, was what precisely “caused” meant. Did the vaccine “cause” the illness the mother noticed in her child in the weeks after it was administered? Did poor road design “cause” the car crash? Did Roundup “cause” Mr. Johnson’s Non-Hodgkin Lymphoma? Did an industrial facility that someone lived near or worked in years or decades ago “cause” the illness the person now has? These are complex questions, often requiring specialized expertise and contested even among the experts. In the new liability regime, they would very often be decided, as our Constitution guarantees us, by “a jury of our peers.”
The second problem lies in strict liability itself. Strict liability can obtain regardless of how much care the manufacturer took to ensure safety. It can obtain regardless of whether the harm was “caused” by a manufacturing defect (i.e., an abnormality in the product’s assembly that made it uniquely dangerous) or by a design defect (something the manufacturer could, in theory, have done differently, but did not). It can obtain regardless of whether remedying that design defect was feasible at the price the manufacturer intended to charge for the product. It can obtain regardless of how many other products on the market had that same design defect. It can obtain, in many cases, regardless of whether a regulator said the product was safe.
Most importantly, strict liability can obtain regardless of whether a contract between parties specified otherwise. Some of the foundational strict liability cases involve the explicit undoing of contracts by courts, often with little basis in preexisting law. In so doing, strict liability attacked at its core the way that transacting parties had apportioned risk and responsibility among themselves for centuries, at least in many domains of economic life.
The Aftermath
It is ultimately impossible to say how many lives the system of strict liability saved; what evidence we have is mixed. But what we do know, with at least reasonable confidence, is that the liability revolution fundamentally changed American life—often for the worse.
First, we know that the expansion of liability massively increased litigation and litigation costs—but this the reformers intended. We also know that tort liability incentivizes lawsuits against the wealthiest plausible alleged doer of harm, rather than the most obviously culpable. The person who crashed into your car may have no insurance and no assets to seize in court; but the car company whose design decisions could plausibly have made the accident worse? Their coffers are comparatively bottomless. Thus strict liability initiated—begged for, really—a crusade against large American companies, regardless of how much harm they actually caused. But this too, at least arguably, was what the tort liability advocates intended.
Yet we also have reason to believe that tort liability led to unintended consequences. As workplace harm lawsuits mounted, discrimination in employment against pregnant women, the disabled, and other protected classes became common, since these groups were, for various reasons, at higher risk of generating tort liability exposure (in some occupations, pregnant women were banned from employment until the practice was made unlawful). Vaccines became almost impossible to release without a federal carve-out from liability (as was required for the Covid vaccines). Many vaccine companies went out of business or exited the US market. Healthcare became scarcer in high-risk fields—often, fields affecting especially vulnerable populations like HIV patients and pregnant women. Healthcare costs began to rise dramatically as doctors practiced “defensive medicine,” ordering unnecessary tests to avoid liability.
The reformers imagined that the insurance system would be the bedrock of their new liability regime. Instead, their system brought parts of the insurance industry close to collapse. Medical malpractice insurance rates climbed 30% per year between the late 1970s and early 1980s. Child daycares were rendered almost de facto unlawful by skyrocketing insurance rates. Municipalities across the country could no longer afford insurance, and in some cases could not obtain it at all. This in turn meant that public services were eliminated and public amenities made both scarcer and more barren (no more diving boards at swimming pools; recreation programs cancelled). Foster care was curtailed and in some cases simply eliminated.
Everywhere, innovation was put on notice: you had better be careful. Anything new will have all sorts of novel supposed design flaws for lawyers to identify and dissect. Not to mention, new goods are inherently more difficult to insure, since historical data about accidents is by definition unavailable. Companies became more conservative, and not just in product decisions. One of the most common sources of tort liability was workplace hazards of various kinds. Coupled with environmental regulations imposed around the same time, the liability system made owning any kind of factory in the US a massive vulnerability. We were not light-touch compared to the rest of the world in this regard; we were, and largely remain, among the most challenging jurisdictions for heavy industry in the developed world.
If you look at the bewildering charts on the meme website “WTF Happened in 1971?,” documenting the shocking decline of America since that time, it is hard to spot a trend to which the new liability system was not a plausible contributor—keeping in mind, of course, that the early 1970s is when the liability revolution got into full gear.
And in exchange for all this, plaintiffs have always been unlikely, on average, to succeed in tort liability cases. The result is, as Peter Huber describes it in his book Liability: The Legal Revolution and Its Consequences, a casino. Most people cannot obtain justice for legitimate harms they suffer; a few win big. The system leaves us with the worst of both worlds, enriching a few lucky plaintiffs and their lawyers, and leaving everyone else with little but scarcer goods and higher prices.
The liability revolution was a failure. Despite valiant, and occasionally successful, efforts at broad-based reform, the tort liability system I have described remains, in its essentials, the law of the land today.
As I read the writings (often eloquent and meticulously reasoned) of the fathers of the tort revolution, I found myself thinking about the similarities to AI policy discussions today. The arguments are often precisely the same. The market will not incentivize safety and security in AI. But government cannot feasibly regulate every part of this fast-moving technology, and even if it could, we have good reason to expect its regulations would be flawed. So we need liability to restructure the incentives of the AI industry and create—you guessed it—a race to the top. The companies can bear the costs, we are told, and the insurance system will help them.
We have tried this before. We failed then. Perhaps this time we will learn our lesson. Perhaps instead of overreliance on strict liability, we should look to negligence and its much more lenient “reasonable care” standard. This is what many of today’s reformers desire, as evidenced by SB 1047 (though plenty believe we need strict liability). But we should be inherently skeptical of any reform whose arguments so closely mirror the rhetoric used to justify one of the most staggering policy failures of modern American history.
Regardless of whether the AI liability reformers succeed, the liability system will affect AI. I have always believed that any successful AI revolution will require us to go beyond the world of bits and transform the physical world. It’s not just environmental permitting and other “regulations” that bedevil us: any entrepreneur with ambitions to remake the physical world will run headlong into the jaws of our freeze-it-in-place liability system. They will struggle to do the things futurists dream of, like putting robots in every home, without being bombarded by lawsuits.
They will be especially surprised if they come from Silicon Valley, which, for the most part, has managed to escape the hooks of tort liability (except for the few remaining Silicon Valley firms that specialize in consumer hardware). Why is that the case? And, more worryingly, might this happy status quo be on the verge of changing, even for purely digital instantiations of AI?
Next week’s essay will explain why it is that software has avoided the brunt of liability litigation. It will also explain why, in the era of AI, I expect this to begin changing—regardless of whether a bill like SB 1047 passes or not. Finally, it will outline a few options for what I believe can be done to avert this would-be crisis.