So it sounds like big businesses are pushing regulation to prevent competition from small businesses. The SBA, chambers of commerce, etc., need to speak up here.
I suppose the idea of "harmful regulation" requires considering the question: harmful to whom? The problem for the tech industry is that the history of the unrestrained development of social media has imposed significant harms, whether through algorithmic promotion of suicidal ideation and eating disorders, or of course the promotion of violence and terrorist ideology and the normalisation of "alternative facts".
The impact on society, in particular in liberal democracies, has been unequivocally negative.
It is incumbent on the tech industry to figure out how to propose AI regulation that provides appropriate checks and balances against harm to society, and to make clear that it is not reliant on IP theft, so that it can then work with democratically elected officials to create the right framework for enjoying the benefits of innovation whilst mitigating real harm.
As I mentioned above, the problem is that the tech industry has a poor reputation, which it needs to work hard to repair before whingeing about a "safety first" approach to regulation.
This is an incredible summary. Thanks!
I followed your link to the proposed NY State law (because I live here). By my reading, the law isn’t sufficiently specific, and could add regulatory burdens to, say, a blind man using an AI-based reader to read resumes. That’s because the law regulates AI systems that are a “substantial factor” in making a “consequential decision.” Surely reading a resume is a substantial factor. And acceptance to a job is a consequential decision.
The law does exclude systems that perform a “narrow procedural task,” so maybe a judge or future regulation will classify text-to-speech as “narrow.”
Worse, though, perhaps that text-to-speech system should be regulated. What if it can pronounce “David Smith” better than it can pronounce “Subhadeep Chattopadhyay”? Couldn’t that injure Bengali Americans? Even the software used to produce resumes could need regulation. What if the software automatically corrects a misspelling of Brandeis University (which I can never remember how to spell even though I taught there) but not the Technion (where I studied computer science)? Couldn’t that injure minorities?
Worst of all, to the extent that AI is an empowering technology (and I believe it is: https://ancientwisdommodernlives.com/p/in-the-ai-of-the-storm ), this regulation may end up keeping AI from the very people who need it most. Some people could be able to do a job very well with the help of AI, but, if AI becomes too regulated, those people will be banned from the jobs they otherwise could do.
As with so many things, in the end this seems like a good idea badly implemented.
Thanks for writing about it.
Thank you! Fully agree--the trouble with these bills really does lie in how they define "substantial factor" and "consequential decision." "Narrow procedural task" is just as hard to define--and these laws largely do not define it.
I'm from Brazil, and the situation here is pretty much the same: we have a bill being discussed in our Congress that resembles the EU AI Act. It seems to me that this victory of the Brussels Effect can also be understood as a lack of alternatives to the AI Act being proposed by academia, institutions, and individuals who would have the responsibility to do so. Is it still possible to take a different approach? Is there a viable alternative at the moment? I don't think having no regulation is the answer either. And of course, it doesn't seem to me that you're proposing that.
I think you could plausibly make a law that requires, and to some extent standardizes, RSPs (responsible scaling policies) and have that as a national standard. Indeed, it could become global, since the other part of the AI Act I don't cover here--the Code of Practice--is essentially a beefed-up version of an RSP requirement.
It seems to me that the bill that came closest to this was SB 1047. I found it more focused on the concerns that RSPs address.
I think you are on to something with the lack of alternatives explanation.
I wonder if people understand that this problem is already covered by existing law.
Is it in Brazil?
I think that if we think about discrimination or liability, yes. But if you want to establish obligations for companies that develop or use AI, then you need a specific law.
Well-intended leftist ideas have been a disaster in practice. Illegal immigration, crime, homelessness, and drug use have only been made worse by them.
Regulating AI before we even know how the current AI wave will turn out is wildly premature. If any of these bills makes it through, the hope is that it would not be as onerous as some of the numbers cited in the article suggest, but go figure.
We need the feds to preempt state legislation as Clinton did with the internet. Is it possible?
It is possible but hard--you need 60 votes in the Senate, and some plausible national policy to preempt with.
Not sure Section 230 has aged well, though.
Thanks for this very comprehensive post, and I love this line at the end: "we are well on our way to imposing a version of EU AI policy, inflected with American center-left quirks (“disparate impact” theories of discrimination)", which, I think, sums up this moment in American regulation and discourse quite well (constantly searching for disparate impact and treating any difference in outcomes as a matter of disparate impact).
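To make “disparate impact” concrete: the standard operational test in US employment contexts is the EEOC's four-fifths rule, which flags adverse impact whenever a group's selection rate falls below 80% of the highest group's rate. A minimal sketch of that check (the group names and counts here are invented for illustration):

```python
# Hypothetical hiring outcomes per group: (number selected, number of applicants).
outcomes = {
    "group_a": (45, 100),
    "group_b": (30, 100),
}

# Selection rate for each group.
rates = {group: selected / total for group, (selected, total) in outcomes.items()}
highest = max(rates.values())

# Four-fifths (80%) rule: flag any group whose rate is below 0.8x the highest rate.
for group, rate in rates.items():
    ratio = rate / highest
    print(f"{group}: selection rate = {rate:.0%}, ratio to highest = {ratio:.2f}, "
          f"adverse impact flag = {ratio < 0.8}")
```

Note that the test looks only at outcome rates, not at how the decision was made, which is exactly why any difference in outcomes can end up being framed as discrimination.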
I hang out on the fringes of the "academics, the activists," etc. types you mention who really want AI (however we define it) to be regulated. But I suspect that these people would answer your question "Do we have evidence that algorithmic discrimination is such a significant problem?" (cost-benefit analysis is not really their thing) with a very emphatic yes. There is an entire conference (attended by statisticians and machine learning researchers but also some social scientists) called FAccT (Fairness, Accountability, and Transparency) that has been running since at least 2014 and has only grown in size. I myself have taught units in my classes where I've assumed that algorithmic discrimination is, if not widespread, at least a hard problem to solve (the COMPAS algorithm is great teaching material: https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas/). The Biden Administration's Executive Order on AI came out of the same circles.
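To illustrate why it's "a hard problem to solve": the COMPAS controversy turned partly on the fact that a risk score can be equally well calibrated for two groups and yet, if the groups' base rates differ, the same score threshold produces very different false positive rates; in general you cannot equalize both at once. A small synthetic sketch (not COMPAS data; the group names, base rates, and threshold are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(mean_risk, n=100_000):
    """A perfectly calibrated score: each person's score IS their true risk."""
    risk = np.clip(rng.normal(mean_risk, 0.2, n), 0.0, 1.0)  # true probability of reoffending
    y = rng.random(n) < risk                                  # realized outcome
    flagged = risk > 0.5                                      # same "high risk" threshold for everyone
    return y, flagged

def false_positive_rate(y, flagged):
    # Share of people who did NOT reoffend but were labelled high risk.
    return flagged[~y].mean()

# Two groups that differ only in their (synthetic) underlying base rate.
for name, mean_risk in [("group_a", 0.3), ("group_b", 0.5)]:
    y, flagged = simulate_group(mean_risk)
    print(f"{name}: base rate = {y.mean():.2f}, "
          f"false positive rate = {false_positive_rate(y, flagged):.2f}")
```

A score of 0.6 means the same thing in either group, yet the higher-base-rate group ends up with far more non-reoffenders flagged as high risk; that is essentially the ProPublica/Northpointe dispute, and it's why "remove the discrimination" is not a single well-defined requirement.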
Indeed, as I understand their argument, they are saying that existing protections like anti-discrimination laws are NOT sufficient once we have AI in the mix, and that is precisely why it needs to be regulated. The technical people among them might even say that fixing this is very difficult precisely because so much depends on training data and so forth (https://harinisuresh.medium.com/the-problem-with-biased-data-5700005e514c). I suspect you're aware of all these arguments.
So I guess, I have two questions: (a) how do we resolve the question of whether discrimination or disparate impact -- or even just wrong decisions like deciding someone is a risk when they are not; or failing to diagnose someone with a disease when they do in fact have it -- are frequent or negligible? (b) And whatever the answer to (a), is there any kind of regulatory framework that can replace the compliance-based Brussels one that you would favor? Or do you think it's best to just let AI develop and then let the courts figure things out, which is the classic American model. (To take the copyright issue for LLMs, why, for instance, is it better for OpenAI to just train its models on available but copyrighted text and then eventually hash out in court what licensing arrangements will look like rather than having the federal government get all parties at the table and forcing them to hash out something that's a trade-off for all of them?)
On a separate note, I'm also curious when you say that "in the area of privacy regulation, the “Brussels Effect” worked." By "worked," do you mean that American states adopted the European model? Or do you mean that the compliance-model of the GDPR worked in the sense of producing some beneficial outcomes. If the latter, can you say more about how? From what I can tell, all the GDPR has done for me as a consumer is increase the number of boxes I have to click when I go on any website.
Great questions! Yes, I think the answer is to let AI improve on its own. Discrimination is just one of many problems with the early algorithms that are often cited in the ADM literature. It happens to be a problem academics are primed to find, but it is far from the only problem. I think just letting the technology improve, under the looming threat of enforcement of existing civil rights laws, will do the trick.
Re: GDPR, yes, I meant that states have adopted GDPR-like laws, not that they or the GDPR itself have worked. Indeed, I think the opposite: GDPR is mostly a compliance burden and does little to genuinely protect privacy.
Appreciate the reply. Thank you. I would love to see a detailed post (or any recommended readings you've liked) arguing that the problem of discrimination is not as acute with algorithmic systems (or even that the problem is not "discrimination" but something else that requires a different solution than disparate impact analysis).
Great article. It shows that the “AI Regulation Divide” between the USA and Europe is less profound than one might have thought after hearing Vice President Vance's speech at the Artificial Intelligence Action Summit in Paris. Some of the "states that matter" (California, New York, Texas...) are not 100% aligned with a position of unbridled AI production. Narrowing the gap between positions is necessary to avoid two-speed economies in a world already suffering from numerous social and economic fractures. Faced with the lightning pace of technological evolution, everyone seems to agree on the need for a regulatory framework, as this is a guarantee of democracy. The debate concerns how regulation should be applied: anticipating the risk before it occurs, as the AI Act proposes, versus imposing sanctions after the harm occurs, as we saw recently with the ruling in Thomson Reuters v. Ross Intelligence.
Factually, it is impossible to reconcile the rapid advancement of exponential technologies with the enforcement of the laws that must govern them. The brake on deployment imposed by legislation acts as a safeguard. Some eminent researchers consider AI to be potentially dangerous for humanity: for example, Yoshua Bengio (Turing Award 2018) and Geoffrey Hinton (often called a godfather of AI; Turing Award 2018 and Nobel Prize in Physics 2024). We're talking about the most impactful technology of all time for civilization as we know it today (even more so when we add agents and robots to the perimeter of AI itself). So it's only natural to ask a few questions about its reasonable use. Legislation is one way, perhaps the only way, of defending a part of humanity against AI's impacts: the emergence of an analytical capacity superior to that of the brain, the elimination or replacement of jobs, the control of information and the manipulation of thoughts, the reduction of freedoms, the rapid change of civilizational paradigm... The dangers of AI are multifaceted, including the destruction of humanity in worst-case scenarios. At the same time, the exponential development of virtuous AI is enabling incredible advances in health, physics, chemistry, biology, and climate change management. And legislation is not putting the brakes on these virtuous use cases.
Without a legislative framework, no company today has the time to worry about the risks of its AI tools; the priority is on learning these technologies and making them profitable. Mapping AI applications is the starting point for implementing the AI Act. These mappings will be useful for detecting at-risk applications and managing remediation should a risk materialize. Implementing compliance with the AI Act is going to be onerous, as equivalent regimes such as carbon emissions reporting and GDPR personal data compliance have shown. It will therefore be up to companies to apply the legislation with agility and pragmatism, notably by automating the process.
Although the cost of legislation is high, it is the price to pay to reconcile the innovation stemming from exponential technologies with social progress, which takes time to absorb, if we are to avoid fractures and foster cohesion.
This is a sharp take on AI regulation in the U.S.
We do not need ANY AI regulatory agency or special regulation. Application of existing laws (tailored as needed) is all that's required. Lawyers, regulators, and non-profit groups are going to push for a thicket of time-wasting, value-destroying regulations. We must push back hard.
I feel like I don't understand the objection here. Is it that these regulations will be too costly for big businesses? These companies make huge profits and seem like they would easily be able to afford such costs. Wouldn't you want to err on the side of caution, even at some cost, when you are initially implementing legislation?
I also don't understand the implication that big AI companies are the ones pushing for this regulation. Is the insinuation that they are trying to put up barriers to entry for smaller companies, and if so, what's the evidence for that?
AI companies do not make “huge profits”; in fact, basically none of them make any profit at all. And the costs I am describing affect not just AI companies but any company trying to use AI.
I do not imply that AI companies want this regulation; I literally state “I do not assert that they support these bills.” It’s true that these companies support FPF (they shouldn’t, imo), but there is no evidence they support this legislation, and I have never once asserted that they do.
I see, thanks for explaining.
I feel like I basically just don't know enough about what is normal as far as regulatory overshoot goes, or which specific companies would be impacted (with example companies), to evaluate whether the costs are overly burdensome.
You do give a lot of interesting context, so, thanks for that.