9 Comments

Great work!!! Keep it coming!!!

Thank you!

It seems the primary issue is that anything with even a whiff of AI falls into the bucket of an all-encompassing algorithmic assessment review.

Wouldn’t a more reasonable policy approach be to provide specific criteria for when an experience warrants an in-depth algorithmic review, along with the corresponding evaluation criteria?

That feels like a middle ground between completely stifling innovation and letting the “deployers” put things out there without consequence.

In principle that would work, but I don’t think it’s realistic to expect it to happen in practice.

Indeed, I think we have different intuitions. It’s not that you’d be able to “put things out there without consequence.” The United States has *millions* of pages of law. If you genuinely discriminate against people or otherwise violate an existing law using AI, you can be charged under that law. There is no need for a new law just for AI.

So this is really about ex post versus ex ante enforcement. Take computers. I can imagine that people might say: “Should a business just be allowed to deploy computers of *any* processing power to their employees? Think of the potential harms! We need a process to ensure that innovation can flourish, but with guardrails.”

I think that’s dumb, and the better way to handle this is to enforce the law against people after they are suspected of committing crimes using AI.

You’ll get no argument from me there: for these types of concerns (ethics, discrimination, etc.), holding AI to the same standards as humans makes perfect sense. Holding “algorithms” to a higher and almost impossible standard, given the breadth of the algorithm definition, helps no one.

By the same token, shouldn’t these algorithmic experiences be held to the same standard as humans in proving credentials to perform certain functions? For example, I know what I need to do to become an MD, and there are clear penalties if I try to practice without those credentials.

It seems like the logical place for regulation to step in would be to find new ways to evaluate algorithmic experiences and grant the same type of certification. This has the benefit of consumer protection, but it also gives businesses a path to operate in areas that might otherwise be deemed too radioactive/risky. I think this would start with the curation of datasets to enable that evaluation. It’s not this document’s intention, but the recent White House memorandum highlights potential areas on pages 31-32: https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf

The fundamental problem with "disparate impact" and "equity" as concepts is that they dictate an outcome whereby, if a group that constitutes 13% of the population commits 55% of murders (to pull some numbers out of a hat), the framework of "disparate impact" and "equity" requires that no more than 13% of people serving prison sentences for murder be drawn from this group. This can either be achieved by allowing the majority of murderers from the 13% group to walk free, or imprisoning innocent people from other groups -- there is no other mechanism for satisfying "disparate impact" and "equity". Nobody likes to confront this ugly fact, but it follows directly from the principles of "disparate impact" and "equity".
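To make that concrete, here is a quick back-of-the-envelope sketch using the same made-up numbers (13% of the population, 55% of murders) plus an assumed total of 1,000 murderers; all figures are purely illustrative.

```python
# Illustrative arithmetic only; every number here is hypothetical.
population_share = 0.13   # group's share of the overall population
murder_share = 0.55       # group's share of murders committed
total_murderers = 1000    # assumed total number of murderers

group_murderers = murder_share * total_murderers      # 550
other_murderers = total_murderers - group_murderers   # 450

# Option 1: imprison all 450 other murderers, then cap the group at 13%
# of the resulting prison population:
#   group_prisoners / (group_prisoners + 450) = 0.13
group_cap = population_share / (1 - population_share) * other_murderers  # ~67
walk_free = group_murderers - group_cap                                  # ~483

# Option 2: imprison all 550 group murderers, then pad the prison
# population with people from other groups until the group is only 13% of it.
required_total = group_murderers / population_share    # ~4,231 prisoners
innocents_needed = required_total - total_murderers    # ~3,231 innocents

print(f"Option 1: ~{walk_free:.0f} of {group_murderers:.0f} murderers walk free")
print(f"Option 2: ~{innocents_needed:.0f} innocent people would need to be imprisoned")
```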

I don’t think these assessments are necessarily bad; it’s just that, in this political climate, they will almost surely be bad.

Yeah, if you’re a large business deciding whether to use AI, it absolutely makes sense to have a process for weighing tradeoffs.

But that’s true of any new tool or business process, and exactly how to do it depends a lot on the particulars of a specific business. So mandating impact assessments will probably end up in a myopic focus on politically convenient issues, which is both burdensome and partially defeats the point of the assessments in the first place.

I think mandating it for narrow applications is okay. Like Clearview.ai.
