Happy new year, everyone.
Introduction
The Texas Responsible AI Governance Act (TRAIGA) has been formally introduced in the Texas legislature, now bearing an official bill number: HB 1709. It has been modified from its original draft, improving it in some important ways and worsening it in others. In the end, TRAIGA/HB 1709 still retains most of the fundamental flaws I described in my first essay on the bill. It is, by far, the most aggressive AI regulation America has seen with a serious chance of becoming law—far more so than even SB 1047, the California AI bill that was the most-discussed AI policy of 2024 before being vetoed in September.
This bill is massive, so I will not cover all its provisions comprehensively. Here, however, is a summary of what the new version of TRAIGA does.
TRAIGA in Brief
The ostensible purpose of TRAIGA is to combat algorithmic discrimination, or the notion that an AI system might discriminate, intentionally or unintentionally, against a consumer based on their race, color, national origin, gender, sex, sexual orientation, pregnancy status, age, disability status, genetic information, citizenship status, veteran status, military service record, and, if you reside in Austin, which has its own protected classes, marital status, source of income, and student status. It also seeks to ensure the “ethical” deployment of AI by creating an exceptionally powerful AI regulator, and by banning certain use cases, such as social scoring, subliminal manipulation by AI, and a few others.
Precisely like SB 1047, TRAIGA accomplishes its goal by imposing “reasonable care” negligence liability. But TRAIGA goes much further. First, unlike SB 1047, TRAIGA’s liability is very broad. SB 1047 created an obligation for developers of AI models that cost over $100 million to exercise “reasonable care” (a common legal term of art) to avoid harms greater than $500 million. TRAIGA requires developers (both foundation model developers and fine-tuners), distributors (cloud service providers, mainly), and deployers (corporate users who are not small businesses) of any AI model regardless of size or cost to exercise “reasonable care” to avoid “algorithmic discrimination” against all of the protected classes listed above. Under long-standing legal precedent, discrimination can be deemed to have occurred regardless of discriminatory intent; in other words, even if you provably did not intend to discriminate, you can still be found to have discriminated so long as there is a negative effect of some kind on any of the above-listed groups. And you can bear liability for these harms.
On top of this, TRAIGA requires developers and deployers to write a variety of lengthy compliance documents—“High-Risk Reports” for developers, “Risk Identification and Management Policies” for developers and deployers, and “Impact Assessments” for deployers. These requirements apply to any AI system that is used, or could conceivably be used, as a “substantial factor” in making a “consequential decision” (I’ll define these terms in a moment, because their definitions have changed since the original version). The Impact Assessments must be performed for every discrete use case, whereas the High-Risk Reports and Risk Identification and Management Policies apply at the model and firm levels, respectively—meaning that they can cover multiple use cases. However, all of these documents must be updated regularly, including when a “substantial modification” is made to a model. In the case of a frontier language model, such modifications happen almost monthly, so both developers and deployers who use such systems can expect to be writing and updating these compliance documents constantly.
In theory, TRAIGA contains an exemption for open-source AI, but it is weak—bordering on nonsensical: the exemption only applies to open models that are not used as “substantial factors” in “consequential decisions,” but it is not clear how a developer of an open-source language model could possibly prevent their model from being used in “consequential decisions,” given the very nature of open-source software. Furthermore, the bill defines open-source AI differently in different provisions, at one point allowing only models that openly release training data, code, and model weights, and at another point allowing models that release weights and “technical architecture.” If you are an open-source developer, the odds are that every provision, including the liability, applies to you.
On top of this, TRAIGA creates the most powerful AI regulator in America, and therefore among the most powerful in the world: the Texas Artificial Intelligence Council, a new body with the ability to issue binding rules regarding “standards for ethical artificial intelligence development and deployment,” among a great many other things. This is far more powerful than the regulator envisioned by SB 1047, which had only narrow rulemaking authority.
The bill comes out of a multistate policymaker working group convened by the Future of Privacy Forum, a progressive non-profit focused on importing EU-style technology law into the United States. States like California, Connecticut, Colorado, and Virginia have introduced similar regulations; in important ways, they resemble the European Union’s AI Act, with that law’s focus on preemptive regulation of the use of technology by businesses.
All of this is touted by its sponsor, Representative Giovanni Capriglione, a Republican, as a model for “red state” AI legislation—in the months after Donald Trump ran a successful presidential campaign based in part on the idea of broad-based deregulation of the economy. Color me skeptical that Representative Capriglione’s bill matches the current mood of the Republican Party; indeed, I would be hard-pressed to come up with legislation that conflicts more comprehensively with the priorities of the Republican Party as articulated by its leaders. Perhaps you view this as a virtue, perhaps you view it as a sin; I view it as a fact.
All of this has been the thrust of TRAIGA since the beginning. But how has the bill changed since it was previewed in October?
Changes to TRAIGA
AI System and Algorithmic Discrimination
TRAIGA contains a new definition of “AI system.” It is still quite broad, but more sensible than the one used in the original draft. You can see, however, that the bill would apply to everything from linear regressions to image classifiers to language models:
"Artificial intelligence system" means the use of machine learning and related technologies that use data to train statistical models for the purpose of enabling computer systems to perform tasks normally associated with human intelligence or perception, such as computer vision, speech or natural language processing, and content generation.
The new version of TRAIGA also changes the definition of algorithmic discrimination to remove some of the disparate impact language from the draft. The old definition read:
"Algorithmic discrimination" means any condition in which an artificial intelligence system when deployed creates an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, national origin, race, religion, sex, veteran status, or other protected classification in violation of the laws of this state or federal law.
Whereas the new version is:
"Algorithmic discrimination" means any condition in which an artificial intelligence system when deployed creates an unlawful discrimination of a protected classification in violation of the laws of this state or federal law.
This is a positive step, but ultimately, disparate impact-based theories of discrimination are still permissible under long-standing judicial precedent. Thus, by creating the reasonable care obligation for developers, distributors, and deployers in the first place, TRAIGA likely opens up those parties to disparate impact-based litigation regardless of the bill’s specific definition of algorithmic discrimination—unless that definition were to explicitly exclude disparate impact theories.
Indeed, in a sense, TRAIGA and its lookalike bills in other states depend upon disparate impact to operate in the first place. If the law only applied in situations where discriminatory intent could be proven in court, there would be little point in the bill’s preemptive measures (in other words, most of the bill). There would be no point in forcing companies to write compliance documents that simply say “I am not intentionally discriminating against people.” In that way, these bills rely on disparate impact-based theories of discrimination—among the most culturally contentious issues of the past decade, overlapping heavily with concerns about “DEI” and “wokeness”—to even be worth writing in the first place.
It is worth noting, however, that the new version eliminates the original draft’s private right of action, meaning that only organs of the Texas government (the Attorney General, state agencies, and the new regulator, about which more below) can sue or otherwise enforce the law. This is a positive development.
“Substantial Factor” and “Consequential Decision”
TRAIGA applies primarily to AI systems that are “high risk,” which means systems whose outputs are a “substantial factor” in making “consequential decisions.” The term “substantial factor” was, crucially, undefined in the first draft of TRAIGA. That oversight has since been corrected. In the current version, “substantial factor” means:
… a factor that is:
(A) considered when making a consequential decision;
(B) likely to alter the outcome of a consequential decision; and
(C) weighed more heavily than any other factor contributing to the consequential decision.
And “consequential decision” means:
… any decision that has a material, legal, or similarly significant, effect on a consumer’s access to, cost of, or terms or conditions of:
(A) a criminal case assessment, a sentencing or plea agreement analysis, or a pardon, parole, probation, or release decision;
(B) education enrollment or an education opportunity;
(C) employment or an employment opportunity;
(D) financial service;
(E) an essential government service;
(F) residential utility services;
(G) health-care service or treatment;
(H) housing;
(I) insurance;
(J) a legal service;
(K) a transportation service;
(L) constitutionally protected services or products; or
(M) elections or voting process.
The new definition of “substantial factor” meaningfully narrows the scope of TRAIGA, particularly with the provision that the output of an AI system must be weighed more heavily than any other factor. I applaud this move, but the bill is still far too expansive. The bill treats “decisions” as though they are made only rarely—but in reality, people and software alike make countless decisions per day. Let’s take a concrete example: employment.
Say a business covered by TRAIGA (which is to say, any business with any operations in Texas not deemed a small business by the federal Small Business Administration) wants to hire a new employee. What might be some uses of AI covered by TRAIGA, even under this new definition of “substantial factor”? Assume, for our purposes, that each of the uses below is fulfilled by a different algorithmic system, each with its own distinct developer:
Writing job descriptions: if you use a language model simply to write a job description, that would likely constitute a “substantial factor” for the purposes of TRAIGA, because it could have a material effect on an applicant’s “access to, or terms or conditions of,” a job. Therefore, as the employer, you would likely need to write an algorithmic impact assessment for the language model you used to draft the job description;
Advertisements: if an employer wants to use social media-based advertising to promote the open role, they would need to write algorithmic impact assessments for the algorithms used by all social media platforms with whom they advertise;
Applicant screening: say that the employer has the job application on their website, and they use an algorithmic system of some kind or another to try to identify bots and other malicious actors (which will themselves become more prevalent and capable thanks to AI); this algorithm surely could affect a legitimate applicant’s access to the job application, so it, too, would require an algorithmic impact assessment;
Applicant filtering: if the employer received hundreds or thousands of applications, they may want to use an algorithmic system to filter through applications, showing only the candidates most qualified for the role; you better believe this would require an algorithmic impact assessment;
Interview scheduling and candidate outreach: if an employer wanted to use an automated system to schedule interviews with potential candidates, or reach out to them in any way, this would require an algorithmic impact assessment;
Resume parsing: if an algorithmic system of any kind is used to extract salient information from candidate resumes or other application materials, this system would require its own algorithmic impact assessment;
Salary determination: if an employer wants to use an algorithmic system to determine the optimal salary for a prospective hire, that system would require an algorithmic impact assessment.
In each of these examples, “the algorithm” is making some kind of “decision” (whom job listing ads should be shown to, which candidates’ applications are filtered, what is said to prospective candidates during interview scheduling, etc.). Thus, in all cases, these rather mundane uses of AI are “substantial factors” in making “consequential decisions,” requiring impact assessments and risk management plans from both developers and deployers, and imposing reasonable care negligence liability on all parties (as well as on any cloud computing providers involved in the distribution of these algorithms).
Keep in mind that all of these would require the deployer—that is, any company in the economy not considered a small business—to write these algorithmic impact assessments. Each of these use cases would likely require a distinct algorithmic impact assessment, and the assessments must be re-written annually, and also re-written every time there is a “substantial modification” made to any of the systems mentioned above. You can see why the consultants who write compliance documents for a living would salivate over this bill.
The scare tactic that supporters of bills like this frequently use involves fears of “AI systems making hiring decisions.” But even if an employer does not do that, they still would be massively burdened by these laws.
While we’re talking about enforcement, it is also worth pointing out that the new version of TRAIGA massively increases the financial penalties that can be imposed on companies. Violations of most of the bill’s provisions now incur fines of between $50,000 and $100,000 per violation, up from between $5,000 and $10,000 in the original draft. In the employment examples I gave above, if a corporate user failed to write algorithmic impact assessments (because, say, they did not know about the law), or if they were found to have not followed their risk management plans and/or impact assessments, they could be looking at fines of between $350,000 and $1,400,000, assuming that the algorithmic systems I mentioned above were the only ones in violation of TRAIGA. Other fines and penalties have increased massively as well.
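To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch in Python. It assumes that each of the seven hiring-related systems above can give rise to one or two violations (a missing impact assessment, plus a failure to follow the risk management plan or assessment), each fined within the $50,000 to $100,000 range; the one-to-two-violations-per-system reading is my own illustrative assumption, not language taken from the bill.

# Illustrative estimate of fine exposure for the seven hiring-related AI
# systems described above. The $50,000-$100,000 per-violation range is the
# one discussed in this section; treating each system as producing one to
# two violations is an assumption made for illustration only.

MIN_FINE, MAX_FINE = 50_000, 100_000   # per-violation fine range, in dollars
NUM_SYSTEMS = 7                        # job ads, screening, filtering, etc.

low_end = NUM_SYSTEMS * 1 * MIN_FINE   # one violation per system, minimum fine
high_end = NUM_SYSTEMS * 2 * MAX_FINE  # two violations per system, maximum fine

print(f"Potential exposure: ${low_end:,} to ${high_end:,}")
# Potential exposure: $350,000 to $1,400,000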
Texas Artificial Intelligence Council
TRAIGA’s extremely powerful regulator had its mandate changed somewhat in the new version. It still has broad rulemaking authority to ensure the “ethical development and deployment of AI” (which means, in essence, “this regulator can do whatever it wishes.”) This mandate alone effectively guarantees all sorts of negative political economy outcomes; in short, this regulator is sure to become captured by special interests who will lobby for socially suboptimal policies. I have written at length about the problems with centralized AI regulators here, and suggest this piece if you would like to learn more about these problems.
Given this, I could not help but smile at some of the new mandates for the AI Council. For example:
(2) identify existing laws and regulations that impede innovation in artificial intelligence development and recommend appropriate reforms;
You mean, like TRAIGA itself?
There’s also this:
(5) investigate and evaluate the influence of technology companies on other companies and determine the existence or use of tools or processes designed to censor competitors or users;
Just so we are all clear, creating “reasonable care” negligence liability for language models will guarantee, beyond a shadow of a doubt, that AI companies heavily censor their model outputs to avoid anything that could possibly be deemed offensive by anyone. If you thought AI models were HR-ified today, you haven’t seen anything yet. Mass censorship of generative AI is among the most easily foreseeable outcomes of bills like TRAIGA; it is comical that TRAIGA creates a regulator with the power to investigate companies for complying with TRAIGA.
But finally, there is my favorite one:
(4) investigate and evaluate potential instances of regulatory capture, including undue influence by technology companies or disproportionate burdens on smaller innovators;
It would be as if the European Union passed a law saying, “it is illegal for our regulations to cause our economy to stagnate.” Chef’s kiss.
Conclusion
TRAIGA, despite its improvements, remains a Lovecraftian regulatory nightmare—just like its brethren in numerous other states. In many ways, it is a caricature of what some opponents of SB 1047 claimed that bill would do. It is also a great example of the “Brussels Effect,” where the European inclination to regulate early and heavily causes other countries to adopt European standards simply by virtue of institutional momentum.
I believe America can do better. America leads the world in AI development; we must also lead the world in AI governance. Make no mistake: this means that America needs to pass some kind of AI policy, if only to fill the vacuum that will otherwise be filled by European regulations—and that we must do so soon. TRAIGA, however, is not an example of leadership. It merely follows in the footsteps of failure.