Introduction
At the AI Action Summit in Paris this week, Vice President J.D. Vance delivered a broadly optimistic message on AI. He chastised the European Union for moving too quickly with preemptive regulations and indicated that the Trump Administration would not be following the EU’s lead.
I applaud the Vice President for his clear-eyed and optimistic message, but I am not sure he’s right on the facts: as we speak, more than a dozen US states are considering laws that look strikingly similar to the EU’s AI Act. Regular readers of this newsletter will know that I am referring to a set of laws focused on algorithmic discrimination in “automated decision systems.”
Each of these bills is complex, and frustratingly, they vary considerably between states. That has made it a difficult story for me to cover. So what I’d like to do today is step back and try to provide wider context: How did these laws come to be? What problem do they purport to solve? In what ways are they like the EU’s AI Act? Is that resemblance intentional? What would be the effect of these bills passing?
I think what you will find is that America is on the verge of creating a vast regulatory apparatus for AI, regardless of Vice President Vance’s admirable skepticism of such things. And there isn’t much time to push back: most of the states considering these laws will have ended their legislative sessions within the next three to four months. Given that, let’s dive in.
About the bills
What do these bills do? In most cases, the bill title says it all: their purpose is to prevent “algorithmic discrimination.” In broad strokes, they do this by regulating the use of “AI,” broadly defined to include much modern software, to automate decision-making in “high-risk” use cases. More specifically, they create preemptive paperwork and risk management requirements whenever “AI” is used as a “substantial factor” in making a “consequential decision.” A consequential decision is any decision that affects a consumer’s access to, terms of, or price of services in areas like employment, education, legal services, financial services, insurance, utilities, and government services. Covered industries vary by state, and some bills cover much more, but almost all of them cover these areas.
Any covered business that uses an AI system as a substantial factor in a consequential decision must write and implement a risk management plan and create an algorithmic impact assessment for every covered use case of AI. Developers who make AI products or services that could be used as substantial factors in consequential decisions are subject to transparency requirements (so that businesses using their products can write their compliance documents), monitoring requirements, and risk management plans of their own. In some states, small businesses would be exempted; in others, the law would apply to everyone, including individual users acting in a personal capacity. All parties, developer and deployer alike, also face the possibility of negligence liability for any instance of algorithmic discrimination. And, of course, there are fees and penalties that state governments can impose.
As you can imagine, quite a bit hinges on how “substantial factor” and “consequential decision” are defined and interpreted. One could imagine definitions of those terms that restrict the law to situations where AI makes the final decision. If, for example, a business owner wanted to use an AI system to decide whom to hire, the law would apply. This is what New York City did with Local Law 144 in 2021, but that law has been criticized by advocates for being too narrow: by conducting even cursory human review of the AI system’s hiring recommendations, businesses can avoid having to comply with the law. So, for these new algorithmic discrimination bills, advocates are pushing for more expansive definitions of “substantial factor” and “consequential decision.”
Here's where the trouble begins. Say that I want to use ChatGPT to filter resumes I have received. In this instance, the AI system is not making the final decision, but it is structuring the information environment for the person making the final decision. Is that a substantial factor in a consequential decision? Maybe! Or say that I want to promote a job listing on a social media platform. In that case, algorithms maintained by the social media platform are “deciding” who sees the job description in the first place. What if the algorithms decide to show my job listing primarily to white and Asian men, or to young people rather than old people? Is that discrimination? Is that a “substantial factor” in a “consequential decision”? Would I, as the owner of the business, need to write a risk management plan and algorithmic impact assessment for my use of a social media advertising system?
The short answer is that we have very little idea. The only state where a version of this law has passed is Colorado (SB 205, passed in May 2024). Governor Jared Polis was a skeptic of the bill, and when he signed it he bemoaned its “complex compliance regime” and urged the legislature to use the current legislative session to simplify the bill before it goes into effect in early 2026. How is that amendment process going?
By all accounts, it is a mess. The legislature has created a “Colorado AI Impact Task Force” to review the bill and come up with recommendations for improving it. In the months since SB 205 passed, the task force has held many meetings with advocates, AI industry representatives, and groups representing the broader business community in Colorado.
The Task Force released its final recommendations on January 30. Their report noted that stakeholders had “apparent consensus” on “a handful of relatively minor proposed changes.” The definition of “substantial factor,” however, was apparently an area of “firm disagreement on approach” requiring “creativity” to solve; so, too, was defining the legal duty of care firms would have to meet to avoid negligence liability for algorithmic discrimination.
After nearly a year of meetings, Colorado does not know how to implement this law. Rather than exercising similar restraint, however, other states are piling in. Here is a list of states considering bills substantially similar to SB 205 this year:
California (forthcoming legislation and agency regulations)
Connecticut (SB 2)
Iowa (no bill number yet)
Illinois (SB 2203)
Maryland (SB 936)
Nebraska (LB 642)
New Mexico (HB 60)
Oklahoma (HB 1916)
Texas (HB 1709, also known as “TRAIGA”)
Virginia (HB 2094)
I’ve heard rumors of more states coming soon. That is an awful lot of very similar laws. You might suspect that this can’t all have happened by chance, that there must have been some coordination going on. And you would be right.
Where did these laws come from?
There are two answers to this question: a proximate cause and a longer-term cause. The longer-term answer is that concerns about algorithmic discrimination have been raised for many years. Indeed, this was once perhaps the hottest issue in AI policy, back before ChatGPT and other language models upended our notion of what AI could be in the first place.
Back in those pre-ChatGPT days, we were told by academia, the media, and a smattering of activists that our primary concern with AI should be facial recognition algorithms that lead to false arrests, predictive policing systems that route police disproportionately to black and Hispanic neighborhoods, hiring algorithms that discriminate against the elderly or women, and healthcare pricing algorithms that create barriers to care for African-Americans. During this period, from roughly 2015 to 2022, academia, the media, and activists talked a great deal about discrimination of all kinds; perhaps it is not a surprise that their primary interest in AI policy was also discrimination.
I’ll return a bit later to the question of how concerned we should be about algorithmic discrimination; for the moment, suffice it to say that this topic was in the intellectual climate across Western industry, government, academia, and NGOs—especially among the left. It should come as no surprise, then, that when the Biden Administration took office in 2021, this issue was a top priority: it features heavily in Biden-era AI policy documents such as the now-rescinded Executive Order on AI, the Blueprint for an AI Bill of Rights, the Office of Management and Budget’s memo on federal agency AI use, and the National Institute of Standards and Technology’s AI Risk Management Framework.
Before this was the hot topic, the primary focus of tech policy was privacy regulation, spurred on by the European Union’s General Data Protection Regulation. While efforts to pass a federal privacy law failed, American regulation advocates realized that states could lead the way on privacy legislation—within a few years after the GDPR, states like California, Illinois, and Texas had all passed their own privacy laws. In addition to showing advocates that states could be a more productive avenue for their efforts, these EU and US privacy laws created a new industry of “privacy compliance professionals” who wrote the “data protection impact assessments” and risk management plans these laws required (are you noticing any similarities?).
When algorithmic discrimination, and AI more broadly, became the new trendy topic (and again, this was before ChatGPT), many people in this community—the academics, the activists, the legislators, the tech company lobbyists, the compliance consultants—coalesced around a similar playbook. The European Union would lead with a splashy and comprehensive AI regulation (the AI Act), and American states would follow suit with somewhat narrower, but structurally similar, laws of their own. And thereby, tech policy would be harmonized throughout the West.
This brings us to the proximate cause of these bills appearing throughout the United States so rapidly: a non-profit known as the Future of Privacy Forum (FPF) and its “Multistate AI Policymaker Working Group.” The group is funded by nearly all of Big Tech and many Fortune 500 companies, as well as smaller AI companies like OpenAI and Anthropic (I do not assert that any of these funders support the algorithmic discrimination bills). That FPF has a prominent presence in the European Union suggests a fundamental truth: in many ways, FPF’s role is to facilitate the export of European technology regulation to American state legislatures.
FPF now disclaims any involvement or coordination in these bills. But nearly every state legislator on the “steering committee” for FPF’s AI policy working group has introduced a bill in their state focusing on algorithmic discrimination in AI systems that are a substantial factor in making consequential decisions. The bills all share similar mechanisms and frequently use identical language. Members of FPF’s staff have written broadly supportive op-eds about the bills. They have been thanked in public by legislators on the working group for their efforts. It is impossible to say who “led” the creation of these bills, but the facts in evidence make it hard for me to believe that FPF played the role of an entirely neutral facilitator.
In the area of privacy regulation, the “Brussels Effect,” where aggressive European regulations end up as the de facto worldwide standard, worked. But in AI, many would agree the dynamics are different: American elites on both the left and the right have expressed concerns about avoiding EU regulation this time around, and some Europeans themselves have admitted that the AI Act may have gone too far. It would be a surprise, and a disappointment, then, for the Brussels Effect to work this time. And yet that is precisely what appears to be happening.
The algorithmic discrimination bills and the AI Act
Just like the algorithmic discrimination bills, the AI Act takes a risk-based approach, highlighting industries and use cases where preemptive compliance steps are required, as well as uses of AI that are outright prohibited. Covered industries include financial services, education, employment, utilities, government services, and law enforcement (sound familiar?). Developers that produce such systems are subject to both risk management requirements and transparency disclosures (to ensure that deployers can write their compliance documents—sound familiar?). Deployers of high-risk systems must write and implement a risk management plan and conduct a “fundamental rights impact assessment” (no way this sounds familiar, right?).
Here are some of the requirements for that impact assessment:
(a) a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;
(b) a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used;
(c) the categories of natural persons and groups likely to be affected by its use in the specific context;
(d) the specific risks of harm likely to have an impact on the categories of natural persons or groups of persons identified pursuant to point (c) of this paragraph, taking into account the information given by the provider pursuant to Article 13;
(e) a description of the implementation of human oversight measures, according to the instructions for use;
(f) the measures to be taken in the case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms.
Here are the requirements for “algorithmic impact assessments” in Virginia’s version of the algorithmic discrimination bill:
1. A statement by the deployer disclosing (i) the purpose, intended use cases and deployment context of, and benefits afforded by the high-risk artificial intelligence system and (ii) whether the deployment or use of the high-risk artificial intelligence system poses any known or reasonably foreseeable risk of algorithmic discrimination and, if so, (a) the nature of such algorithmic discrimination and (b) the steps that have been taken, to the extent feasible, to mitigate such risk;
2. For each post-deployment impact assessment completed pursuant to this subsection, whether the intended use cases of the high-risk artificial intelligence system as updated were consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system;
…
6. A description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; and
7. A description of any post-deployment monitoring performed and user safeguards provided concerning such high-risk artificial intelligence system, including any oversight process established by the deployer to address issues arising from deployment or use of such high-risk artificial intelligence system as such issues arise.
These provisions are not precisely the same, but it would be hard to deny the similarities between them. Sometimes, the laws use identical text. Here, for example, is the language the EU AI Act uses to exempt AI systems from “high-risk” status:
(a) the AI system is intended to perform a narrow procedural task;
(b) the AI system is intended to improve the result of a previously completed human activity;
(c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or
(d) the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.
Here’s the exemption language in Virginia’s version:
A system or service is not a "high-risk artificial intelligence system" if it is intended to
(i) perform a narrow procedural task,
(ii) improve the result of a previously completed human activity,
(iii) detect any decision-making patterns or any deviations from pre-existing decision-making patterns, or
(iv) perform a preparatory task to an assessment relevant to a consequential decision.
None of this is specific to Virginia; all of the algorithmic discrimination bills bear these structural similarities to the AI Act, and many use language directly borrowed from the AI Act in at least some provisions. Texas goes a step further, borrowing heavily from the EU’s list of prohibited AI uses for its version of the bill, the Texas Responsible AI Governance Act. It is not a conspiracy theory to allege that these algorithmic discrimination bills are importations of major parts of the AI Act; it is a fact.
Is any of this worth it?
Given the similarities these algorithmic discrimination bills have to the AI Act, it seems reasonable to use estimates of the AI Act’s compliance costs as a ballpark for the costs these bills would impose on American businesses. The Center for European Policy Studies, roughly the European analog of the Brookings Institution, estimated that the AI Act could add as much as 17% to corporate spending on AI (whether on development or deployment), with a more typical compliance cost in the range of 5-15% of overall AI spending. It is worth pointing out that this estimate was produced before ChatGPT hit the market. In other words, it assumed that most AI systems would have narrow purposes (hiring algorithms, facial recognition, and so on) rather than millions of conceivable general-purpose uses. The widespread use of language models could meaningfully raise these compliance cost estimates.
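To put rough numbers on this, here is a minimal sketch in Python that applies those percentages to a hypothetical firm’s annual AI budget. The $2 million figure and the function name are my own illustrative assumptions, not figures from the CEPS report:

```python
# Back-of-the-envelope sketch (my own, not from the CEPS report): apply the
# 5-15% "typical" and 17% "upper bound" compliance-cost rates to a
# hypothetical firm's annual AI budget.

def compliance_cost_range(ai_spend_usd: float) -> dict:
    """Return the added compliance cost implied by the CEPS percentage estimates."""
    rates = {"typical_low": 0.05, "typical_high": 0.15, "upper_bound": 0.17}
    return {label: ai_spend_usd * rate for label, rate in rates.items()}

# Hypothetical example: a firm spending $2 million per year on AI development and deployment.
for label, cost in compliance_cost_range(2_000_000).items():
    print(f"{label}: ${cost:,.0f}")
```

Even at the low end, that is $100,000 per year in added costs for a single mid-sized deployer, before accounting for any litigation risk.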
Therefore, in pure economic terms, these algorithmic discrimination bills would be among the most significant pieces of technology policy passed in the US during my lifetime, even if compliance costs come in at the low end of these estimates.
Do we have evidence that algorithmic discrimination is such a significant problem that it is worth paying these costs? I believe the answer is no. Take facial recognition: in a survey of police departments, the Washington Post found eight instances of false arrest due to a flawed facial recognition algorithm. Eight, across thousands of arrests. In all cases, the charges were dropped after the error was recognized. False arrests are, of course, a serious problem, but they have happened for a long time. I am aware of no studies that attempt to compare the rate of AI-assisted false arrests to the rate of purely human false arrests. If I had to bet money, I’d bet that AI systems are less prone to false arrests than humans are.
There is a wealth of other literature on algorithmic bias, and some of it is quite damning. Hospital systems, for example, have used a machine-learning-based system to recommend care for patients. The system was trained on historical data, which reflected the fact that, in general, less money has been spent on the care of black patients than on the care of white patients. So the system recommended lower levels of care for black patients than for patients of other races. This is a poorly designed recommendation system, to be sure, but does it merit the imposition of the AI Act on the American economy?
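To make that mechanism concrete before answering the question, here is a toy sketch, with invented numbers and no connection to the actual hospital system, showing how a model trained on historical spending as a proxy for health need reproduces the historical disparity:

```python
import numpy as np

# Toy illustration only: invented numbers, not the actual hospital system.
rng = np.random.default_rng(0)
n = 10_000

need = rng.normal(50, 10, n)                    # underlying health need (stand-in for clinical features)
group = rng.integers(0, 2, n)                   # 1 = group that historically received less care
spend = need * np.where(group == 1, 0.7, 1.0)   # historical spending understates group 1's need

# Train a simple linear model to predict spending from observable features,
# including a proxy (e.g., geography) that is correlated with group membership.
proxy = group + rng.normal(0, 0.1, n)
X = np.column_stack([need, proxy, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, spend, rcond=None)
predicted_spend = X @ coef

# Flag the top 20% by predicted spending for extra care management.
flagged = predicted_spend >= np.quantile(predicted_spend, 0.8)

for g in (0, 1):
    members = group == g
    print(f"group {g}: share flagged = {flagged[members].mean():.1%}, "
          f"mean need among flagged = {need[members & flagged].mean():.1f}")
```

Even with health need measured perfectly, ranking patients by predicted spending flags far fewer members of the historically underfunded group, and only its sickest ones.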
In many cases, instances of algorithmic discrimination are discovered during testing of the system. In others, they are discovered only after deployment, after which they can be remedied by enforcing our country’s extensive base of existing consumer protection and civil rights law. Thus, these new algorithmic discrimination laws are not novel in making algorithmic discrimination a legally actionable offense; instead, their novelty lies in their preemptive approach, attempting to stop algorithmic discrimination before it happens rather than enforcing the law after the fact.
In terms of compliance costs, it is worth noting that, in some meaningful ways, our versions of these laws could be worse than the European Union’s. The algorithmic discrimination bills almost all contain provisions imposing negligence liability on developers of AI systems (often regardless of size, and often including open-source developers). The EU itself recently backed away from its own attempt to impose liability for AI harms, so in that sense these laws go a step further than even the EU. On top of this, America is a far more litigious place than the EU, so the likelihood of extensive litigation is high (especially in states like Virginia and New Mexico, where these laws have a private right of action, meaning anyone can sue).
Reasonable people can argue about this, but I believe the compliance costs of these laws are high enough that the modest benefits they might deliver do not justify them. And this is to say nothing of the many other ways these laws could harm both innovation and technology diffusion, creating a weaker American AI ecosystem overall.
Conclusion
America is not safe from harmful regulation because venture capitalists declare that to be the case on social media, or even because our federal government is led by avowed skeptics of AI regulation. Quite the opposite is true: we are well on our way to imposing a version of EU AI policy, inflected with American center-left quirks (such as “disparate impact” theories of discrimination). At this point, I am genuinely uncertain whether this outcome can be avoided. If these laws pass, federal preemption will likely become harder, not easier, because federal representatives of states with these laws will be reluctant to take power away from the states they represent.
I wish I had a more positive story to tell you here, but unfortunately I do not. This is simply the reality, as I see it, of where things stand. The Brussels Effect seems to be winning, regardless of what the “vibes” seem to be telling us. Absent some kind of major course correction, the path ahead is clear: within a year or two, artificial intelligence will be the most heavily regulated general-purpose digital technology in American history.