8 Comments

Hey Dean. As always, I appreciate your thoughts on these topics.

I worry that these posts miss the central question of the liability debate. It seems like most of your arguments are in support of the proposition that, as between a transacting AI developer and a consumer, liability should be mostly derived from contract and not from tort law.

But it seems to me that the main question raised by SB 1047 and other liability proposals is what to do as between an AI developer who is at fault (e.g., negligent) and an injured third party, when there is no contract between them.

Contracts, of course, are voluntary. I am under no background obligation to contract with AI providers as to any injuries their products may cause me as a third-party bystander. So the terms I would be willing to agree to would of course depend on where tort liability would lie in the absence of a contract.

It seems to me that skeptics of 1047 and other liability proposals want the answer to be: if an AI developer fails to take reasonable care and thereby causes a third party harm (in the legally relevant sense),* the third party should simply bear the costs themselves (even when there is no contract between them and the third party is not also blameworthy).† It seems very hard to me to justify this position. The loss must be allocated between the two parties; the decision to let the loss lie with the plaintiff is still a policy choice. Morally, it seems inappropriate to let losses lie with the less-blameworthy party. But more importantly, it is economically inefficient from the perspective of incentivizing the proper amount of care. After all, the developer could much more easily have invested additional resources in the safety of its products; the third party could not have.
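A minimal arithmetic sketch of that efficiency point, in the spirit of the classic Learned Hand formula (the specific figures below are hypothetical illustrations, not drawn from the comment above):

    B < P × L  (negligence when the burden of precaution is less than the expected loss)
    B = $1,000     (developer’s cost of the precaution)
    P = 0.05       (probability of the harm absent the precaution)
    L = $100,000   (magnitude of the harm)
    P × L = $5,000 > B = $1,000

Because the precaution costs less than the expected loss it averts, taking it is socially efficient: a negligence rule makes the developer internalize the $5,000 expected loss and spend the $1,000, while a no-liability rule leaves the loss on a bystander who had no precaution available at any comparable cost.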

Maybe this argument is wrong in some way. But the arguments about the viability of contract simply have very little relevance. More generally, it would be good to identify where you agree with and diverge from mainstream tort law and theory.

The argument about litigation costs is more on point. But note that this cuts both ways: it also makes it harder for the injured party to vindicate her rights. And indeed, given the nature of these things, litigation costs will probably be far more painful for her than for the developer. If litigation costs are the main problem, I don’t think the right answer is to simply erase tort liability for negligent developers: the more tailored answer is to figure out how to reduce such costs. There are plenty of proposals on how to do this, and I think there is widespread consensus on the need for reform here. (E.g., https://judicature.duke.edu/articles/access-to-affordable-justice-a-challenge-to-the-bench-bar-and-academy/.) (Also, I am hopeful that AI lawyers will dramatically decrease litigation costs, if we can keep bar associations from getting in the way!)

* Cf. https://www.law.cornell.edu/wex/cause#:~:text=In%20tort%20law%2C%20the%20plaintiff,proximate%20cause%20of%20the%20tort.

† In cases where the third party could have prevented the harm with reasonable care, standard tort doctrine is either to absolve the developer of liability entirely or to partially offset the developer’s liability (https://www.law.cornell.edu/wex/comparative_negligence).


Not really sure we disagree.

Safety advocates myopically obsess over third-party harm because it relates most closely to their preexisting concerns. It is an important issue, but it is discussed to the exclusion of nearly everything else. Why are you so sure third-party harms are such an urgent issue to resolve, particularly of the kind that affects truly uninvolved bystanders? Contracts absolutely can help with some kinds of third-party harms, and I am frustrated by the consistent stance of the AI safety world that they cannot (this flies in the face of 1,000 years of history).

All that said, the post does acknowledge that some kinds of harms, including the catastrophic kind AI safety people think about, are not addressed by contract. The same could be true for some, I suspect relatively rare, kinds of third-party tortious harms. That is why I also outline a proposal that is… not that different from SB 1047, and note that contract and tort are of course fully compatible.

Tort liability’s role *should* be narrowly circumscribed by the law. We rely on it overmuch, and it is a burden businesses everywhere face. We should not repeat the mistakes of the past merely because it is, as you say, “mainstream,” to favor those mistakes for one reason or another.

The entire meta-point of my post is that the liability debate is hopelessly focused on extreme tail risks that nobody really understands. So we go around and around in circles, playing rhetorical games, while actual businesses in the actual economy, and their insurers, scratch their heads about what risks they face today. These risks are mostly uninteresting to the AI safety community, which is fine, but they are not uninteresting to me. This post is about addressing those. Not every matter of AI policy needs to be about prompt-engineered COVID and the like.

But while we are on the topic of catastrophic risks, I wonder how you react to the wide empirical literature that casts quite serious doubt on whether tort liability really works at all in incentivizing companies to adopt mitigations. It seems to me everyone just assumes negligence will work, and I think this is wishful thinking.

If the AI safety community had its way, we’d have tort liability for catastrophic risks and probably little else, and very quickly, enterprising lawyers would find a way to erode the definitional walls such that a huge range of normal things would become “critical harms.” This has happened often in the history of tort liability in the US, and I think the safety community ignores it. We cannot, and should not, entrust decisions on these highly technical matters to random judges and juries. It’s just no way to run a civilization, and the fact that many people believe otherwise is a sign of just how thoroughly the status quo blinds people.


Thanks Dean! I think our positions probably are reasonably close, but maybe farther apart than you think?

My comment was not limited to tail risks, which is why it mentions them only in passing and indirectly (with respect to 1047). And I think I could have done a better job acknowledging the various negligence-friendly aspects of your posts. But I guess I am still left unclear as to where you come out on general negligence suits for third-party harms. On my reading, you hold all of the following: (a) overall skepticism of the net value of today’s liability regime, especially strict liability and malpractice, but seemingly also including much of negligence(?); (b) general disapproval of the liability provisions of 1047 and similar proposals; but also (c) openness to negligence-based liability more generally?

If I have understood this all correctly, I guess the main confusion is: if my reading of (b) is correct, does that also imply that you think negligence liability in non-catastrophe cases is too aggressive? (This would seem to follow a fortiori.) Put differently: what tort liability principles should govern third-party harms more generally, pre-contract, not limited to catastrophe cases?

I take your proposal to be the second option under “Compromise” (after “Another compromise…“), modifiable by contract. I suppose I object to calling this a “compromise” with people who want more liability because, under my interpretation, it is in fact a deviation away from the background negligence standard in industry’s favor. It sounds like you want voluntary compliance with industry standards to automatically give the defendant an “effective safe harbor from liability.” But under existing negligence doctrine, companies can already argue that compliance with industry standards is evidence of due care. Crucially, though, under current law, juries are free to disagree: compliance is an argument defendants can raise, not a preclusive safe harbor. This aspect of the status quo seems perfectly appropriate, given that the standards are set by self-interested parties (even assuming they are acting in good faith and want to take reasonable care). After all, why should an unelected, non-democratic, financially conflicted group get to preclusively set its own standard of due care? (The argument is different if we do the public–private thing you then suggest as an alternative, which does indeed look much better. See also: https://arxiv.org/abs/2304.04914.) Under market pressure, and in the absence of an external forcing function to ensure that safety levels approximate some social optimum, we should expect such standards to tend toward a rubber stamp.

In any case, as you recognize, to make that proposal work, you still need a background liability rule for parties that intentionally don’t abide by the standard or fail to meet it. If that is the negligence/reasonable-care standard, then why establish a safe harbor with the effect of saying that compliance with industry standards is conclusive evidence of reasonable care, without foreknowledge of what the actual standards are? On what theory is that justifiable? Is it that companies will do a better job than juries of picking a reasonable-care standard, even in the absence of public oversight and in light of their financial self-interest (and, indeed, fiduciary duty to maximize shareholder value)?

I hope I’m understanding this all correctly; let me know if not. If I am, then I suppose what threw me off is all of the discussion of contract. I agree this can in principle displace malpractice and products-liability forms of second-party liability. I think it’s also reasonable for you to not want to focus on tail risks, but the negligence backstop is central to all cases, not just tail risks.

On the empirical literature: My general take is that markets are always less efficient than theory would predict, but that price signals (like liability) are nevertheless one of the better tools society has discovered for achieving its desired ends. (To be clear, I am also somewhat skeptical of negligence being that effective in managing tail risks—it’s one of the central differences between my all-things-considered view and Gabe’s).

It seems like you think I am arguing for the status quo. I am not. I think you and I share the same goal of incentivizing the optimal level of care in practice, net of transaction costs, regardless of what the status quo is or what traditional tort theory says. But I also find it difficult to track where, with respect to negligence suits, you think we should deviate from the status quo (assuming negligence applies), and why, even for non-catastrophe harms. In part this is because your strongest arguments against expansive liability do not tend to address the negligence question. (Or, to the extent that they do aim at negligence, they tend to assume implicitly, without argument, that simply telling faultless injured plaintiffs that they have no remedy is a better alternative.)


Cullen said it perfectly. I’ll just add that I’m basically fine with contract law handling harms to users. The real issues are (1) who, if anyone, is liable when a system harms a third party in a way that the user could not have reasonably foreseen, and (2) whether developers/deployers are on the hook in misuse cases of third-party harm where the user is judgment-proof.

My own view, as you know, is that (1) the developer/provider (or, in some cases, a third-party fine-tuner/scaffolder) should be liable, and (2) it’s complicated, but developers should not be totally off the hook for misuse, especially where the user is judgment-proof.

Contract-based solutions simply cannot handle these cases because there’s no privity between the developer and the third-party victim. I understand lots of AI-related liability talk is not about this sort of case, and it’s fine for you to push back on that stuff, but that still leaves these two key issues unaddressed.


I wonder if tort liability also contributed to the rise of offshoring: essentially, companies trying to "hide" in other jurisdictions to escape the grasping hands of tort lawyers.


Not just offshoring, but also many decisions about corporate structure (for example, oil tanker companies often create a separate corporation for each ship so that there is no deep-pocketed entity to sue).


This is food for thought, thanks. Have you spoken to people thinking about liability and AI before (Gabriel Weil, Catherine Sharkey, etc.)? I feel like more flow of discussion on this is pretty important at the moment.

It seems a bit hard for me to imagine courts recognising contracts negotiated between AI systems as legally binding, especially when those same contracts might shield companies from traditional liability, but I haven't opened a contract law textbook in nearly three years.


I know Gabe and am very familiar with his proposals!

You would definitely need a statute to put a framework like the one I described in place.
