14 Comments

Who would be the auditor assessing the annual risk mitigation report? That item seems like an extremely large and ambiguous component of this bill.

author

One among many! I imagine you'd hire a fairly traditional technical auditing firm. Certainly the creation of specialized firms would be incentivized if this law passed.


Yeah, I'd expect traditional firms to struggle with the probabilistic nature of LLMs. All the regulation I've seen seems to hyper-focus on generalized use cases and broader doom-like risks; measuring whether specialized experiences actually work just isn't on the radar. E.g., does an LLM-based mental health coach actually provide effective coaching?

author

The cynical take would be: measuring LLM effectiveness is bad policy, because entrenched interest groups might not like the results.


I think that's really on the nose. My counter-take would be that the lack of a means to establish credibility either stifles innovation through risk aversion or creates a chaotic "trust us" environment that doesn't protect consumers. Thanks for this article and all the work summarizing these bills!

author

Entirely possible! You’re very welcome.

Jul 30 · Liked by Dean W. Ball

I notice that you have failed to include an indelible watermark showing which portions of this essay were generated by AI (e.g., spell check, autocorrect, Grammarly) and are deceptively similar to human-generated content. Deepfake!

This goes double for any photographs you might have included in this blog post, which would be generated by multiple layers of AI (autofocus, autoexposure, auto filters of various kinds applied to every photograph taken by a modern smartphone). Deepfakes!

Deepfakes! Deepfakes! Everything is deepfakes! We've been living in a post-truth era ever since photographers learned how to doctor photographs in the 1910s! Deepfakes! Microsoft Word makes deepfakes!

It's all so goddamn tiresome to live through the same moral panic every ten years.


Enjoyed this post. Apposite to the content: let me know if you are OK with this being shared online. I have "narrated" the post through ElevenLabs for those like me who find audio more accessible.

https://askwhocastsai.substack.com/p/californias-other-big-ai-bill-by

author

Absolutely! I always encourage sharing, so long as you link back to the original post (as you have).

I should really start doing AI reads of my pieces…


I'm always in favour of more audio versions of everything! Let me know if I can help advise in any way on AI narration workflow/process stuff.

Jul 29 · Liked by Dean W. Ball

Good piece. I can only imagine the concerns at Adobe, Microsoft, Apple, and every social media platform.


Gosh, I really need to write the piece "watermarking AI content is technically infeasible." We can only watermark human verification, and only until the first meaningful edit.
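
To make the "only until the first meaningful edit" point concrete, here's a minimal sketch, not tied to any real watermarking standard (the key and function names are illustrative): a provenance tag bound to the exact bytes of a piece of content stops verifying the moment a single word changes.

```python
import hashlib
import hmac

# Hypothetical signing key held by whoever vouches for the content
# (purely illustrative; not part of any actual provenance scheme).
SECRET_KEY = b"publisher-provenance-key"

def issue_credential(content: bytes) -> str:
    """Bind a provenance tag to the exact bytes of the content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_credential(content: bytes, tag: str) -> bool:
    """Check that the content is byte-for-byte what was originally vouched for."""
    return hmac.compare_digest(issue_credential(content), tag)

original = b"A human-written paragraph about California's AI bill."
tag = issue_credential(original)

print(verify_credential(original, tag))                  # True: untouched content verifies
print(verify_credential(original.replace(b"bill", b"law"), tag))  # False: one edit breaks the binding
```

The exact-bytes binding is what makes the tag trustworthy, and it's also why the tag dies at the first edit.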

author

This is such an important point. We should concentrate technical effort on watermarking *the scarce thing,* which in the long term is going to be human-generated content and/or images produced by actual photons hitting actual sensors in the actual world.
