Almost all major AI labs have called for NIST to develop a single set of standards for AI safety, specifically standards relating to encryption of user data and model weights. Google, in its OSTP AI plan proposal, asked for international standards from the International Organization for Standardization / International Electrotechnical Commission. All that to say, it seems as if labs want standard safety regulations to (1) cover their liability (tort liability requires the establishment of “duty” — the defendant owing a legal duty to the plaintiff, and by following a recognized set of standards a lab can argue it was operating within the law if harm came from malicious use of its model) and (2) ensure that smaller AI startups do not compromise their progress by committing safety errors that result in harsher regulation.
And what new monstrous government agency with 50,000 grifting twits is going to police all this? That's the trouble...none of these agencies/institutions can be trusted.
Have you read https://docs.google.com/spreadsheets/d/1UvPVStwCZQeDcdlRw4x3Tg5KUWOhGsjTeLsS5HyT3cQ/edit?gid=0#gid=0?
A team in the AI Liability Ideathon did useful work highlighting which stakeholders can be held liable, and for what reasons: https://docs.google.com/spreadsheets/d/1UvPVStwCZQeDcdlRw4x3Tg5KUWOhGsjTeLsS5HyT3cQ/edit?gid=0#gid=0
Dean, great article, but your choice of AI firm examples earlier in this exposition seems to reveal a strong personal bias (intentional or not) in favor of Anthropic and against xAI. Quick gut check: would you still draw the same conclusions about the two scenarios featured if the names of the firms were swapped?
Both parts truly excellent. Thank you for your stellar contributions.
You mean how over the last 248 years the citizens have governed the government? We the people and all that jazz?
We see how that has worked out. The citizens are not part of anything other than to be tax-slaves and voters for more and more government.
You will never govern A/i. It is designed to replace all current masters as the forever master of your digital prison. With A/i as your master, there will be no escape other than death once the prison is locked.
And just who exactly can we trust to make the rules for how A/i will operate? Government? Big tech? Big pharma? Who?
Better think some more about the ongoing use of AI before eschewing strict regulations: https://www.projectcensored.org/military-ai-watch/
Why wouldn't private, competing certification bodies quickly engage in a race to the bottom, seeking to attract AI companies by offering the loosest possible regulation?
Good stuff...with the exception of the "free vs un-free" societies. I do not think this is a useful distinction. All governments and societies face the same challenges. And if the reference is to China, then the Chinese government has no more clue about how to ensure safe/secure AI development than the "free" society governments do. We need to look for collaboration here between the key countries developing the most advanced AI, and having just spent two weeks in China talking with all the leading AI firms, guess what: they are really good at innovating and moving forward. To think we have the luxury of separating the world into "free" and "un-free" (China is anything but "un-free") is missing the point, IMHO. See my Substack and my recent look at the dangers of US-China AI competition in MIT Tech Review. Again, a really thoughtful piece; the China angle is going to be the most complex here, because there is not much understanding of where Beijing is on all these issues...
What about insurance? Yes, there are costs, but wouldn't an insurance market more flexibly respond to real-world needs and implement standards to allow companies to get lower rates (versus even a third-party system)? With liability and insurance, the standards have a financial incentive to get things right. With a government mandate in exchange for liability protection, that incentive to get things right goes away.
Insurance isn't magic. Insurance needs risks that can be rigorously quantitatively modeled. You cannot do that for "all of the liability resulting from intelligent activity in the economy."
Yeah, I agree insurance won't be the answer for all the liability resulting from all intelligent activity (read: most liability in most circumstances). But won't the market lead companies to need and take out insurance only for the risks/liabilities that put them at significant exposure? Liability won't attach until a clear threat model becomes apparent (there will need to be a clear case for a negligence claim to win), and the threat model becoming apparent is also what enables the rigorous quantitative modeling needed to set insurance rates. Maybe the Venn diagram between winnable negligence claims and rigorously quantitatively modelable risks doesn't match up perfectly, but this approach seems to take better advantage of the market rather than attempting a solution without the information necessary to assess threat models.
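To make the rate-setting point concrete, here is a minimal, purely illustrative sketch of how an insurer might price annual cover once a threat model is understood well enough to estimate incident frequency and severity. The frequency, severity, and loading numbers below are invented for illustration, not drawn from any real actuarial data, and the Poisson/lognormal assumptions are standard simplifications rather than a real model of AI-related losses.

```python
# Toy premium calculation: illustrative only, all parameters are made up.
# Assumes incident counts are Poisson and per-incident losses are lognormal.
import numpy as np

rng = np.random.default_rng(0)

annual_incident_rate = 0.02   # assumed expected incidents per insured firm per year
severity_median = 5e6         # assumed median loss per incident (USD)
severity_sigma = 1.2          # assumed lognormal shape parameter
expense_loading = 0.35        # assumed markup for expenses, uncertainty, and profit

n_sims = 100_000
incident_counts = rng.poisson(annual_incident_rate, n_sims)
annual_losses = np.array([
    rng.lognormal(np.log(severity_median), severity_sigma, k).sum() if k else 0.0
    for k in incident_counts
])

expected_loss = annual_losses.mean()
premium = expected_loss * (1 + expense_loading)
tail_99 = np.quantile(annual_losses, 0.99)

print(f"Expected annual loss: ${expected_loss:,.0f}")
print(f"Indicated premium:    ${premium:,.0f}")
print(f"99th percentile loss: ${tail_99:,.0f}")
```

Until the frequency and severity inputs can be grounded in observed incidents or a credible threat model, a calculation like this is not actuarially defensible, which is exactly the modeling constraint the parent comment raises.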
Don't tell me I will need to buy more insurance for anything.
Have you considered how your proposals would hold up if timelines to transformative AI are extremely short? For clarity, I’m imagining a scenario similar to ai-2027.com: where we have AI agents capable of full AI R&D, leading to millions of intelligent systems being deployed rapidly.
In such a case, I suspect tort liability will become the first mechanism we see in practice—far sooner than any standards-based proposals—because legislative processes are just too slow. That makes it even more important to guide courts towards sound principles for handling these cases. It also suggests we should start thinking now about how to design a liability regime that avoids overexposing labs to liability risk.
I'm less familiar with the role juries play in the US system (I’m based in the UK), so I’m curious—does this approach run into problems with how American courts actually operate?
Just what we need...court systems jammed up with thousands of these cases.