
Thanks John! There are definitely ways to soften the liability protections. For example, above a certain size of harm, the protection could switch from a safe harbor to a rebuttable presumption that the developer met the standard of care. Another option is to make companies retroactively liable for harms egregious enough to trigger revocation of their certification.

As I see it, there are two problems with approaches like this. First, if you pursued something like the first option, you would need a way to estimate damages ex ante, since damages are usually determined during litigation. This is a problem SB 1047 also faced, but it is probably solvable.

Second, and more generally, I am not sure that negligence liability as an incentive on AI lab behavior will do what the safety community wants it to do. Catastrophic harms would be existential financial threats for most of the relevant players (everyone except Big Tech). When liability exposure is that broad, there is empirical evidence from other industries that firms tend to ignore such risks entirely: the potential losses are so large that they are not seen as worth doing anything about. It is a bit like what TSMC has said about how they think about planning for a Taiwan invasion: "we'd be so royally, fractally, hopelessly screwed in that situation that it is not worth us really thinking about all that much."

And then there is the question of how courts would react to tort lawsuits over lesser harms. This is a whole can of worms, but my basic conclusion is that we should not trust courts to make the right decisions here. They could issue rulings that badly harm AI development for very little gain. Happy to explain my reasoning on this, if helpful.
