Great analysis. The impact on discriminatory intent makes sense to me, though I am presuming there is enough legal distinction on protected classes. It gets especially confusing if you think about using AI to optimize a service business oriented towards a protected class.
Regardless, if the AI is that powerful, could the concerns be addressed "in situ" by having the AI monitor impact across protected classes and propose remedies, perhaps annually, that better meet the intent of the impact assessment? There would have to be a presumption of goodwill for anyone doing that. This doesn't seem like it needs to be a capability of the foundation model; it could be a service bundled in at the application layer for businesses.
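To make the "monitor impact across protected classes" idea a bit more concrete, here is a minimal sketch of what such an application-layer check could look like. It assumes a hypothetical decision log of (group, outcome) pairs and uses the EEOC "four-fifths" rule of thumb as the flagging heuristic; none of this is a methodology that the laws in question actually prescribe.

```python
from collections import defaultdict

def adverse_impact_report(decisions, threshold=0.8):
    """decisions: iterable of (group, selected) pairs from a hypothetical decision log."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected_count, total_count]
    for group, selected in decisions:
        counts[group][1] += 1
        if selected:
            counts[group][0] += 1
    rates = {g: s / t for g, (s, t) in counts.items() if t > 0}
    if not rates:
        return {}
    best = max(rates.values())
    # Flag any group whose selection rate falls below `threshold` times the best rate
    # (the EEOC "four-fifths" rule of thumb when threshold=0.8).
    return {
        g: {"selection_rate": round(r, 3), "flagged": best > 0 and r / best < threshold}
        for g, r in rates.items()
    }

# Example with made-up log entries: group "B" is selected less often and gets flagged.
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
print(adverse_impact_report(log))
```

A real service would obviously need to handle sample-size caveats, intersectional groups, and the remedy-proposal side, but the core monitoring loop is little more than periodic bookkeeping over decision logs.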
I am sure all sorts of strange compliance strategies will be possible with AI, but I'm also worried about these impact assessments being used as the basis for intentionally slowing down the development or deployment of AI, much as NEPA is used. And then you are just dealing with litigation delays, which you cannot meaningfully accelerate.
Impact assessments are designed for the slow-moving, pre-LLM era of AI, tbh. They can be revisited when we have a better understanding of the future.
Exactly! In the pre-LLM era it was much likelier that you were designing a fully custom or highly customized model for some very narrow purpose: higher cost, a longer timeline, and an easier time isolating what the system's "decision" is. But those assessments make zero sense for LLMs.
It's the same as the NAIRR and other issues I've discussed. People are understandably confused.
I for one don't want to write the predictive vs. generative AI articles, but someone needs to.
Any employer with 15 or more employees already has to generate plans and write policies to demonstrate compliance with:
- Title VII of the Civil Rights Act of 1964
- Americans with Disabilities Act
- Equal Pay Act of 1963
- Age Discrimination in Employment Act
- Civil Rights Act of 1991
- Public accommodation nondiscrimination rules
- State-level nondiscrimination laws
- Local-level nondiscrimination laws
- Nondiscrimination poster requirements
These requirements have been in place for decades. At this point, no business should be unaware of its obligations to develop policies and plans substantiating its nondiscrimination.
If this were the first time a law had been written that imposed requirements related to Civil Rights compliance, I would definitely support your argument. But it isn't. These AI laws are instead a continuation of a well-established and broadly nonburdensome process, one that businesses across America should not have undue difficulty adapting to.
Really? Businesses have had to write algorithmic impact assessments for decades? Have you read the laws in question?
Could you perhaps point me to an analogous law where some technology required a specific civil rights assessment before use? Did businesses in the 1990s have to document whether their use of computers specifically would be nondiscriminatory in every way?
Facts, please, not assertions of opinion based on an appeal to authority. I'm open to hearing the former, not so much the latter.
Hiring is a salient example. Businesses must have plans and procedures in place that explain how they avoid violating Civil Rights law when hiring and promoting. There's an obvious similarity with AI: before a business can take an action, it needs to document how it will do so in a manner compliant with the law.
In the past, the action taken was hiring an employee; in the future, it will be deploying an AI system. Either way, a business is expected to know how to document that its actions are nondiscriminatory as a basic requirement to operate.