Please forgive the long essay this week. Today’s topic is a complex and broad AI policy proposal from Texas, and I want to give it an appropriately thorough analysis.
In other news from me, I had three reports released last week. One is on what I think state governments should be doing on AI policy, one is on how I think governments should respond to deepfakes, and the third is a proposal on preemption coauthored with Brian Chau and Van Lindberg of the Alliance for the Future.
On to the main event.
Introduction
There is nothing more difficult to plan, more doubtful of success, nor more dangerous to manage than the creation of a new order of things.
Niccolò Machiavelli, The Prince
After SB 1047 was vetoed, I speculated a bit on some potential future directions of American AI policy. Some, for example, have suggested that instead of regulating models, we should preemptively regulate uses of AI. I’ve long argued that this could be among the very worst ways of regulating AI. For example:
Fundamentally, the flaw with “use-based” regulation is that it leads to a kind of regulatory neuroticism: government becomes obsessed with documenting, assessing, risk-mitigating, and writing best practices for AI uses, regardless of whether it is valuable.
Imagine if you had to file paperwork with the government every time you wanted to use a computer to do something novel. Imagine if you had to tell the government how your computer has no adverse impact on “decisions that materially impact access to, or approval for, housing or accommodations, education, employment, credit, health care, and criminal justice”—every time you wanted to do something novel with a computer, and regardless of whether your computer use had anything to do with those things.
If this sounds appealing to you, I have good news: legislators in the State of Texas, and soon, numerous other states, are working to make this a reality for many uses of AI in huge swaths of the American economy.
I am referring to the Texas Responsible AI Governance Act, a new draft bill authored by Representative Giovanni Capriglione. It imposes an exceptionally broad liability standard based on the idea of “disparate impact” discrimination—that is, holding companies that use or develop AI legally responsible for negative effects of their products on civil rights-protected groups, even if there is no evidence that anyone intended to discriminate against those groups. A single instance of a business using an AI system in a way that has a disparate impact on any protected group could result in that system being taken off the market entirely until the problem is somehow remedied.
On top of that, as currently drafted, it could effectively outlaw frontier AI models like GPT-4o, and it creates a centralized regulator with broad power to regulate the “ethical development and deployment” of AI.
And not unlike SB 1047, it applies to a wide range of businesses—anyone “doing business in” the State of Texas. Its jurisdiction is probably not quite as broad as SB 1047’s was intended to be, but the bill almost certainly applies to most of the leading AI companies, as well as the vast majority of large-cap US companies.
The bill draft has not been formally introduced in the legislature—Texas’ legislature does not go into session until January 2025—so for now it has no bill number I can use to easily refer to it. In the meantime, I’ll use the unfortunate acronym of TRAIGA.
Before I get into the details of TRAIGA, some context is necessary. This bill is part of a broader, multi-state effort to regulate AI, spearheaded by a group called the Future of Privacy Forum. The plan is to use, in essence, civil rights law to create sweeping regulatory powers for state bureaucracies. Something similar to TRAIGA, stemming from this same effort, became law in Colorado earlier this year (even though the Governor basically admitted it was a bad bill when he signed it), though it does not go into effect until 2026. Lookalike bills were also introduced in Connecticut and Virginia, but they failed. I will not be surprised if other states soon float similar legislation.
Thus, what I am about to describe has a decent chance of becoming the dominant paradigm of AI regulation in the United States within the next few years. The core of this bill was drafted not by some intern in a legislator’s office, but by a well-funded, sophisticated organization that works specifically on technology policy. Keep that in mind throughout. Every couple of paragraphs, I suggest pausing briefly to reflect on this fact, and what it suggests about the state of technology policymaking in the United States.
TRAIGA in Brief
The basic structure of TRAIGA is simple enough: for certain high-stakes decisions, AI developers, “distributors,” and users (“deployers” in TRAIGA’s EU-inflected patois) should consider the risks that the use of AI might pose for groups protected by civil rights law (age, sex, race, etc.). The way TRAIGA operationalizes this is by creating rules for the use of AI by a large subset of American society, namely:
Anyone developing an AI model of any kind (there is an exemption for open-weight models, but only if the developer “takes reasonable steps” to ensure that the open models “cannot be used as a high-risk AI system,” steps the bill devotes precisely no space to defining—so I am not sure that any open-weight models are exempted in practice).
Anyone who distributes AI models that they do not develop themselves (e.g. HuggingFace or a hyperscaler like Amazon Web Services).
Any business using AI for high-stakes activities (I’ll define this below), assuming that business is not considered a “small business” by the federal Small Business Administration. According to the SBA, this very roughly means that TRAIGA applies to any business with greater than $7.5 million in annual revenue, though in reality the SBA’s definition of a small business varies by industry.
TRAIGA, first of all, requires organizations meeting this description to write various reports. Developers need to write a report about all the ways their models may cause harm to protected groups, what they did to prevent that, and what “deployers” should do to prevent that. Every time they update their model (what precisely constitutes an update is not defined), they must write the report again. If the developer thinks that a protected group is being harmed by a “high risk” use of their model by some downstream user, they must do whatever it takes, up to and including taking the model off the market, to stop the discrimination from happening as soon as possible.
If you distribute models, you mostly just need to keep an eye out for discrimination involving AI in high-stakes decisions, somehow. It’s not exactly clear how the authors of TRAIGA would like the distributors to do this. But if a distributor thinks, or has reason to think, that discrimination may be happening anywhere in Texas, they also must stop distributing the discriminatory model as soon as possible.
Deployers, also known as “groups of humans trying to use AI to do stuff,” need to write long reports about their “high stakes” uses of AI, but mostly to themselves, and then stick those reports in a drawer somewhere in their office. This report is known as an “algorithmic impact assessment.” If the AI system or model being used is updated by its developer, the deployer needs to write another report. And even if nothing changes, users still need to write the report again “semiannually.” If the government ever asks to see your reports, you need to produce them, and you can be sued for all manner of noncompliance. What deployers say in these reports can and will be used against them, if push comes to shove.
The law would be enforced by a combination of the Texas Attorney General and the Texas AI Council, a new regulator created by the bill with broad powers to set rules for both the use and development of all AI.
And by the way, AI, for these purposes, likely means a massive amount of software that we have all taken for granted for decades, because of the way the bill defines AI. So everything I just described could easily apply not only to frontier AI systems but also to far more basic software tools.
There is much more in this bill, including things that I am not going to address here (one of which is a quite positive “regulatory sandbox” model to allow experimentation with AI), but the above is a decent sketch. Now, let’s go into the details.
The Details
As ever, it’s important to understand how legislation defines key concepts. In this case, purely by virtue of alphabetic sorting, the bill’s very first definition tells us a great deal about the author’s intentions:
(1) "Algorithmic discrimination" means any condition in which an artificial intelligence system when deployed creates an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, national origin, race, religion, sex, veteran status, or other protected classification in violation of the laws of this state or federal law.
This is a fairly typical “disparate impact” definition. What this means is that “algorithmic discrimination” is deemed to have occurred purely if a differential impact on a protected group can be shown. If you are suing a business on the basis of TRAIGA, therefore, you do not necessarily need to prove that the person or business intended to discriminate with an algorithm. You simply need to show that the algorithm had a disparate impact on a group defined by any of the protected traits and statuses described above (some Texas cities, such as Austin, also add protected classes of their own, which presumably would be covered by this law as well—though I am not sure of this).
It's not just any use of an algorithm that the bill covers, however. Instead, it’s uses of algorithms in what the bill calls “consequential decisions,” which are defined as follows:
…a decision that has a material legal, or similarly significant, effect on a consumer’s access to, cost of, or terms of:
(A) a criminal case assessment, a sentencing or plea agreement analysis, or a pardon, parole, probation, or release decision;
(B) education enrollment or an education opportunity;
(C) employment or an employment opportunity;
(D) a financial service;
(E) an essential government service;
(F) electricity services;
(G) food;
(H) a health-care service;
(I) housing;
(J) insurance;
(K) a legal service;
(L) a transportation service;
(M) surveillance or monitoring systems; or
(N) water.
(m) elections
(To the bill drafters, if you are reading: the lines above are precisely copied from the PDF online—(M) through (m) has some typos)
If you are an electrician and you use a language model to, say, write bids for customers, does that conceivably “affect a consumer’s access to” your services? What about using AI for customer service? What about using AI to assist in diagnosing problems at customers’ homes? It’s a fundamentally speculative question: the law is forcing business owners to ask whether use of a transformative technology could ever conceivably have a disparate impact of some kind. If the answer is even plausibly yes, I suspect many businesses will feel compelled to comply with this law—with a separate pile of paperwork for each use of AI that could be considered “consequential.”
And the use cases I’ve described involve only generalist chatbots like Claude or ChatGPT. TRAIGA, however, affects a far broader range of software applications than just these. So what kinds of algorithms, exactly, does the bill cover? This turns out to be a difficult question to answer, requiring the reader to jump between several different subsections of the bill. I’m going to simplify things a bit here for readability’s sake.
Here’s how the bill defines “artificial intelligence system”:
… a machine-based system capable of:
(A) perceiving an environment through data acquisition and processing and interpreting the derived information to take an action or actions or to imitate intelligent behavior given a specific goal; and
(B) learning and adapting behavior by analyzing how the environment is affected by prior actions.
The bill does not define the terms “perceiving,” “environment,” “data acquisition,” “data processing,” “intelligent behavior,” “learning,” or “adapting,” all of which would be useful to understand what exactly this bill is aiming to regulate. By some interpretations, this could cover only a very small subset of AI systems. By others, it could cover basic operations in Microsoft Excel.
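To make that breadth concrete, here is a minimal, hypothetical sketch (the sales figures are invented for illustration) of the kind of trendline forecast Excel has offered for decades. Under a loose reading of the definition, even this arguably “perceives an environment through data acquisition and processing” and “learns” by analyzing prior data to take an action toward a goal:

```python
import numpy as np

# Invented monthly sales figures ("perceiving an environment through data
# acquisition and processing," on a loose reading of the definition).
months = np.array([1, 2, 3, 4, 5, 6])
sales = np.array([100.0, 110.0, 125.0, 130.0, 150.0, 160.0])

# Fit a trendline from past observations ("learning and adapting behavior
# by analyzing" prior data), the same operation as an Excel trendline.
slope, intercept = np.polyfit(months, sales, 1)

# "Take an action ... given a specific goal": forecast next month's sales.
print(f"Forecasted month-7 sales: {slope * 7 + intercept:.1f}")
```

Whether a court would read the definition that broadly is anyone’s guess; the text offers little guidance either way.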
There are some exceptions to this nebulous definition. Namely, the following things are not covered by TRAIGA:
(i) anti-malware;
(ii) anti-virus;
(iii) calculators;
(iv) cybersecurity;
(v) databases;
(vi) data storage;
(vii) firewall;
(viii) internet domain registration;
(ix) internet website loading;
(x) networking;
(xi) spam- and robocall-filtering;
(xii) spell-checking;
(xiii) spreadsheets;
(xiv) web caching;
(xv) web hosting or any similar technology; or
(xvi) any technology that solely communicates in natural language for the sole purpose of providing users with information, making referrals or recommendations, and answering questions and is subject to an accepted use policy that prohibits generating content that is discriminatory or harmful…
That last one (xvi) sounds like language models, right? Only kind of, unfortunately—in practice, I’m not sure any mainstream consumer chatbot really fits this definition (Gemini, Claude, and ChatGPT all are designed to do more than the things described in this provision). And there’s a bigger problem: the above technologies are exempted from regulation only if they are not used for a consequential decision! So, if you use a spreadsheet to collect information about job applicants, or you offer a college application using “internet website loading” technologies, your use of those things is, at least arguably, regulated by TRAIGA.
I regret to inform you that there is one more thing. The bill takes a page out of the EU AI Act playbook by specifying a variety of “prohibited uses.” And these, unfortunately, are a doozy. Some selections:
Regardless of the intended use or purpose, an artificial intelligence system shall not be developed or deployed that infers, or is capable of inferring, the emotions of a natural person without the express consent of the natural person.
Every language model in the world today is capable of recognizing a user’s emotions; indeed, the ability of very early predecessors of ChatGPT to recognize the emotions latent in a passage of text is one of the things that motivated OpenAI to invest heavily in language models. Furthermore, every language model trained using Reinforcement Learning from Human Feedback has learned to model users in more sophisticated ways. And this is to say nothing of models like GPT-4o, which can directly look at an image of a user or listen to the tone of their voice to infer their emotions.
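To illustrate how low the bar for “capable of inferring” is, here is a minimal sketch, assuming the standard OpenAI Python client (the prompt, and the use of gpt-4o as the model name, are purely illustrative); essentially any general-purpose chat model will produce an emotional read on ordinary text when asked:

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key in the OPENAI_API_KEY environment variable

# Ask a general-purpose chat model to infer the writer's emotional state
# from an ordinary sentence. No special "emotion recognition" feature is used.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any capable chat model behaves similarly
    messages=[{
        "role": "user",
        "content": "In one word, what emotion is this writer feeling? "
                   "'My flight got cancelled again and nobody at the desk will help me.'"
    }],
)
print(response.choices[0].message.content)  # e.g., "frustration"
```

The capability falls out of ordinary language modeling, which is why a prohibition keyed to what a system is capable of sweeps in essentially every model on the market.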
Another prohibited use under TRAIGA is:
An artificial intelligence system shall not be developed or deployed that infers or interprets, or is capable of inferring or interpreting, sensitive personal attributes of a person or group of persons using biometric identifiers, except for the labeling or filtering of lawfully acquired biometric identifier data.
“Sensitive personal attribute” is defined as “race, political opinions, religious or philosophical beliefs, or sex.” If I use GPT-4o’s live audio chat functionality, I am making a recording of myself (a “biometric identifier,” according to a recent Federal Trade Commission policy statement). Under this law, it would be flatly illegal for GPT-4o to infer that I am a man. Indeed, if I make a recording of myself saying “I, Dean Ball, am a communist,” and I send that recording to GPT-4o, it would be interpreting a sensitive personal attribute about me from a biometric record. It’s not just that the act of interpreting is forbidden by TRAIGA; it’s that being capable of interpreting such information is illegal.
In other words, the bill, as currently drafted, seems to accidentally ban at least one current frontier AI system, if not all language models and perhaps many more AI tools besides.
It is shockingly common for AI policy proposals to stumble into absurdities like this. And again I would ask you to consider: what does this mean about the people who write AI policy? How well do they understand, at even the most basic level, the technology they are so sure requires their deft regulatory touch?
We are told the AI companies are the irresponsible actors, but if OpenAI or Anthropic released a policy document as half-baked as this, it would be a scandal among the AI safety community, the tech media, and much of the general public. When an elected official does the same, it is not treated as “hilariously, outrageously, irresponsibly, get-the-hell-out-of-here” levels of bad—it is treated by established policy organizations, even those that generally oppose digital technology regulation, as “a good start.”
AI of epoch-defining capabilities could be coming soon. Without a doubt, the wheels of history are turning. It is time to hold our policymakers to a far higher standard than we currently do. This bill is not approximately, not remotely, not even in the same galaxy as, “a good start.” We should not lie for the sake of politeness; I certainly will not.
The Compliance Burden
Amazingly, all we’ve done so far is talk about how the bill is scoped. We haven’t really gone into detail on the substantive requirements the bill places on developers, distributors, and deployers (users) of AI systems, or the fact that the bill creates a centralized AI regulator with authority to create rules about any aspect of the development or deployment of AI, anywhere in the Texas economy. I’ll present some examples, with minimal analysis from me for the sake of brevity.
Paperwork requirements on AI developers:
(1) a statement describing how the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a substantial factor in making, a consequential decision;
(2) any known limitations of the system, the metrics used to measure the system’s performance, and how the system performs under those metrics in its intended use contexts;
(3) any known or reasonably foreseeable risks of algorithmic discrimination, unlawful use or disclosure of personal data, or deceptive manipulation or coercion of human behavior arising from its intended or likely use;
(4) a description of the type of data used to program or train the high-risk artificial intelligence system;
(5) the data governance measures used to cover the training datasets and their collection, the measures used to examine the suitability of data sources, possible unlawful discriminatory biases, and appropriate mitigation; and
(6) appropriate principles, processes, and personnel for the deployers’ risk management policy.
Keep in mind this must be repeated anytime the model is “substantially or intentionally modified.”
Paperwork requirements on deployers/users:
(1) a statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system;
(2) an analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks;
(3) a description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces;
(4) if the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system;
(5) any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system;
(6) a description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that the high-risk artificial intelligence system is in use when the high-risk artificial intelligence system is in use;
(7) a description of the post-deployment monitoring and user safeguards provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk artificial intelligence system; and
(8) a description of cybersecurity measures and threat modeling conducted on the system.
Keep in mind that businesses must write a separate one of these for each “high stakes” use case of AI, that they must re-do this “semiannually,” and that they must also re-do it every time a model is “substantially or intentionally modified” by its developer.
TRAIGA also creates a centralized AI regulator not entirely dissimilar from SB 1047’s Frontier Model Division, except that TRAIGA’s regulator is far more powerful than the FMD ever was in even the most expansive versions of 1047. It’s called the Texas AI Council, and its most important powers are detailed here:
Sec. 553.102. RULEMAKING AUTHORITY. (a) The council may adopt rules necessary to administer its duties under this chapter, including:
…
(2) standards for ethical artificial intelligence development and deployment;
(3) guidelines for evaluating the safety, privacy, and fairness of artificial intelligence systems.
(b) The council’s rules shall align with state laws on artificial intelligence, technology, data security, and consumer protection.
The rules that the TAIC can produce have the force of law, meaning that a single agency has the power to regulate effectively all aspects of the development and use of a general-purpose technology—at least, for companies with any economic ties to Texas. As I have written before, this is a political economy disaster.
Conclusion
Why does this bill exist? What problems is it solving? Why have similar versions of it been introduced in four states, and why will it likely continue to be introduced in other states? Who benefits from something like this?
There are some obvious answers:
The state governments, for whom these bills represent a power grab of astounding proportions, forcing many of their citizens to make shockingly broad commitments;
The industry of lawyers, consultants, auditors, and others who would surely emerge to write all of the paperwork mandated by this bill;
Anyone who simply does not want AI to be developed much further, and who especially wishes to halt the diffusion of AI into everyday life and economic activity.
TRAIGA, as I mentioned earlier, emerged out of a multistakeholder process led by the Future of Privacy Forum, an organization whose members include a large swath of American industry, including Anthropic, Apple, Google, Meta, Microsoft, and OpenAI (though not Nvidia). Many blue-blooded academics, lawyers, and others (including some friends of mine) sit on their Board of Advisors.
The specific process that led to this bill is called the “multistate AI policymaker working group.” FPF bills itself as a “neutral facilitator” in this working group, and I could find little information about which policymakers, academics, advocacy organizations, think tanks, corporations, consultants, law firms, auditors, and other organizations participated in it. But four members of the steering committee (whose members are publicly disclosed) have introduced strikingly similar bills in the past year:
Virginia House Bill 747, introduced by Representative Michelle Maldonado (did not pass)
Colorado Senate Bill 205, introduced by Senator Robert Rodriguez (passed and signed by Governor Polis)
Connecticut Senate Bill 2, introduced by Senator James Maroney (did not pass, and also was, incidentally, the least offensive of this family of bills)
And now TRAIGA, authored by Representative Capriglione
It’s worth noting that we have seen these bills supported and introduced by both Republicans and Democrats, everywhere from regulation-loving Connecticut to Texas, the ostensible land of the free.
It is sad and baffling that bills of such poor quality have made it as far as they have, with, undoubtedly, the help of many sophisticated contributors behind the closed doors of FPF’s working group. Many smart people looked at this and decided it was good. Not only that: of all the regulatory approaches to AI, the one embodied by these bills currently has the highest chance of becoming America’s default approach.
I don’t quite know how to explain this. Maybe America’s broader policymaking community is really this ignorant about AI. Or perhaps the cynical explanation is better: bills like this, rather than SB 1047, are the product of America’s inner NIMBY, the status quo’s immune system—not just the forces of civilizational stagnation, but the people who benefit from it. And if that is true, perhaps TRAIGA represents the shape of things to come far more than SB 1047 ever did. If you wanted to kill AI in its cradle, a few large states imposing laws like this would probably do the trick.
Whatever the explanation, this is an alarming state of affairs for American AI policy. We have genuine challenges to contend with, and bills like TRAIGA are not just unhelpful: they actively make things worse.