Great writeup! Do you have a sense for how likely this is to pass?
thank you! From what I understand, the Speaker of the Texas House supports the bill, as do most legislators I've heard from in the state. And Capriglione himself is quite powerful, particularly on tech issues. And Texas has quite a compressed legislative calendar. So I think the odds are at least as high as they were for SB 1047.
Here we go again… but did this also come from AI safety insiders?
not sure I would call Scott Wiener an "AI Safety insider"
Dan Hendrycks was heavily involved and very much an insider
this is flat out wrong and I suggest you get informed about how SB-1047 came to be
Dan Hendrycks was not involved in the drafting of the intent bill published by Scott Wiener in Sep 2023
there were multiple co-sponsors for SB-1047, and the Center for AI Safety Action Fund (not the Center for AI Safety) was one of them, but Dan Hendrycks was not "heavily involved" in the work that the co-sponsors did
he was one of the most popular accounts tweeting updates, which explains how people who have mostly been following updates on Twitter (alongside various conspiracy theories) got this wrong
I'm just trying to ask how different AI communities would react to this bill; no need to be so aggressive.
Love people trying to regulate all of AI, predictive AI and language models alike, before either category has been regulated well. 😶🌫️
Has Trump or any other fed-level person agitated for federal regulation to preempt state AI regs? Seems like this should be his highest priority.
Agreed, but no; in general, preemption is a bit of a third rail. Everyone knows it's a good idea, but political incentives stop it from becoming a political reality. Still, there are other things the incoming admin can do.
It is so hard to reconcile this bill with the deregulation rhetoric
Truly!
For what it's worth, I passed this on to my state senator here in TX. A month later, I received the response "On behalf of Senator Kolkhorst thank you for sharing the information below about the HB 1709 AI regulation legislation. She appreciates Dean Ball's analysis and we will certainly take this into consideration as the bill moves to the Senate. Your message made some great points and we will share this with the senator. Thank you again. Please feel free to email our Chief of Staff Chris Steinbach at chris.steinbach@senate.texas.gov with any further thoughts on the AI legislation during this legislative session."
You intend your examples as satire, but on its face I don't think most of them are that unreasonable to want to regulate. Advertising, scheduling, and outreach, fine, but most of the other use cases you mention have a literal history of discrimination: the classic ML systems trained on Amazon's hiring data, which showed how discriminatory its hiring had always been, for example, or early research on ChatGPT's resume parsing discriminating against parents. If an algorithm suggests paying you less because you live in a less wealthy zip code, or because the ethnicity implied by your name is correlated with lower average comp, should that be legal?
We can object to how the regulation is being implemented, but the idea that regulating these things at all isn't worth the overhead because they happen so often misses the point: it's important BECAUSE they happen so often, and that discrimination has serious negative impacts.
First Amendment issues aside, I can't help but suspect that once they go down the road of protecting citizens from "Algorithmic Discrimination," US conservatives are going to find themselves enmeshed in a kudzu of regulatory dilemmas that pit various factions of the right against both "progressives" and each other, as AI's "algorithmic discrimination" rapidly comes to include (for example) "unbiased" AI evaluations of the "truth" of political speech.
"Hey... don't blame us for that Fox News score, CHAT-TRUTH4 does not have political "opinions" - let alone "biases" - it just evaluates "truthfulness" based on evaluations of statements against "benchmark" consensus weighted by its evaluation of reliability..."
For example, how would Lara Trump sue an AI for slander based on its ratings of her truthfulness during the 2028 presidential debates?
Does she sue its creators?
For what, exactly?
"We didn't "program" CHAT-TRUTH4 to give Trump's statement that "The US and Russia have always been at war with the EU" a trust rating of -114; the algorithms operate in a similar manner no matter who the user asks it to evaluate. If Ocasio-Cortez had said the same during the same debate instead of Laura Trump, it would have given Ocasio-Cortez a similar score..."
Can you dodge the bill by blocking access in Texas?
developers could theoretically do this, but any company that uses AI and offers products in Texas would also be affected, so the developer's customers in other states would still be covered by the law. Plus, half a dozen other states are pursuing similar laws.
Wait, so if I use OpenAI's system to provide a service to customers in California, do I have to submit a usage doc to a regulator in Texas? That would be absurd.
I think the technical workaround for developers is to require IP-address checks for API access by the developer's customers as well, thereby shutting off access not just to your site but to all enterprise development services for anyone in Texas. Coupled with a change to the T's & C's for API usage, this could be used to shift some liability onto the enterprise developers for putting reasonable safeguards in place to block Texas-customer access too, which in turn would almost certainly have second-order negative effects on Texas businesses and innovation. Simply put, they would be at a competitive disadvantage relative to businesses in any other state without such laws in place.
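For concreteness, a minimal sketch of what that IP-based cutoff could look like at the API layer. Everything here is an illustrative assumption (the Flask framing, the `region_of_ip` stub, the blocked-region set, the test IP range); a real deployment would sit at a gateway and use a commercial GeoIP database rather than this toy lookup:

```python
# Minimal sketch of the IP-geoblocking idea described above. The
# region_of_ip helper is a hypothetical stand-in; a real deployment would
# use a commercial GeoIP database (e.g., MaxMind GeoIP2) behind a gateway.
from flask import Flask, abort, request

app = Flask(__name__)

BLOCKED_REGIONS = {"US-TX"}  # jurisdictions where all API access is cut off


def region_of_ip(ip: str) -> str:
    """Hypothetical lookup mapping an IP address to an ISO 3166-2 region code.

    Stubbed here with a documentation-only test range (203.0.113.0/24);
    swap in a real GeoIP lookup.
    """
    return "US-TX" if ip.startswith("203.0.113.") else "UNKNOWN"


@app.before_request
def enforce_geoblock():
    # Runs before every route: consumer site traffic and enterprise API
    # calls alike, matching the "shut off all enterprise development
    # services" idea above.
    client_ip = request.headers.get("X-Forwarded-For", request.remote_addr)
    first_hop = client_ip.split(",")[0].strip()
    if region_of_ip(first_hop) in BLOCKED_REGIONS:
        abort(451)  # HTTP 451: Unavailable For Legal Reasons
```

Applying the same check to enterprise API traffic, not just the consumer site, is what would push the blocking obligation down onto the developer's customers as described above.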
This is unfortunately precisely what many (dare I say most) AI companies will do should this become law. They simply will not be able to operate in a patchwork quilt of onerous regulations in every state — Texas being just one — that wants to "regulate safety" into AI systems. The legal risk will be too great.
This could be true, but remember that at least half a dozen states, including big ones like California, are considering bills like TRAIGA. These companies may find it hard or impossible to block such a large region of America; they can only afford to narrow their market so much. I'm not so sure that simply blocking Texas-based users is what the labs will do.