Who would be the auditor assessing the annual risk mitigation report? That item seems like an extremely large and ambiguous component of this bill
One among many! I imagine you’d hire a fairly traditional technical auditing firm. Certainly specialized firm creation would be incentivized if this law passed.
Yeah, I’d expect traditional firms to struggle with the probabilistic nature of LLMs. All the regulation I’ve seen hyper-focuses on generalized use cases and broader doom-style risk; measuring whether specialized experiences actually work just isn’t on the radar. E.g., does an LLM-based mental health coach actually provide effective coaching?
The cynical take would be: measuring LLM effectiveness is bad policy, because entrenched interest groups might not like the results.
I think that’s really on the nose. My counter-take: without a means to establish credibility, you either stifle innovation through risk aversion or create a chaotic “trust us” environment that doesn’t protect consumers. Thanks for this article and all the work summarizing these bills!
Entirely possible! You’re very welcome.
I notice that you have failed to include an indelible watermark showing which portions of this essay were generated by AI (e.g., spell check, autocorrect, Grammarly) and are deceptively similar to human-generated content. Deepfake!
This goes double for any photographs you might have included in this blog post, which would be generated by multiple layers of AI (autofocus, autoexposure, auto filters of various kinds applied to every photograph taken by a modern smartphone). Deepfakes!
Deepfakes! Deepfakes! Everything is deepfakes! We’re living in a post-truth era ever since photographers learned how to doctor photographs in the 1910s! Deepfakes! Microsoft Word makes deepfakes!
It’s all so goddamn tiresome to live through the same moral panic every ten years.
Enjoyed this post. Apposite to the content: I have "narrated" the post through ElevenLabs for those like me who find audio more accessible. Let me know if you are OK with it being shared online.
https://askwhocastsai.substack.com/p/californias-other-big-ai-bill-by
Absolutely! I always encourage sharing, so long as you link back to the original post (as you have).
I should really start doing AI reads of my pieces…
I'm always in favour of more audio versions of everything! let me know if I can help advise in any way on AI Narration workflow / process stuff.
Use my open source code ;) https://github.com/natolambert/blogcaster
Runs https://podcast.interconnects.ai/ and https://www.youtube.com/@interconnects
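For anyone wiring this up by hand rather than using the repo, here is a minimal sketch of the core text-to-speech call such a workflow automates. It assumes the ElevenLabs v1 HTTP endpoint; the API key, voice ID, and model name below are placeholders, not anything from this thread:

```python
# Minimal blog-to-audio narration step (a sketch, not blogcaster's code).
# Assumes the ElevenLabs v1 text-to-speech endpoint; key/voice are placeholders.
import requests

ELEVEN_API_KEY = "YOUR_API_KEY"  # placeholder
VOICE_ID = "YOUR_VOICE_ID"       # placeholder

def narrate(text: str, out_path: str = "post.mp3") -> None:
    """Send post text to ElevenLabs and save the returned audio."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": ELEVEN_API_KEY},
        json={"text": text, "model_id": "eleven_multilingual_v2"},
        timeout=120,
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)  # response body is the raw audio bytes

narrate("Hello, and welcome to the audio version of this post.")
```

A real pipeline would also chunk long posts and strip markdown before sending, but the request/response shape is the whole trick.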
Good piece. I can only imagine the concerns at Adobe, Microsoft, Apple, and every social media platform.
Gosh, I really need to write the piece "watermarking AI content is technically infeasible": we can only watermark human verification, and only until the first meaningful edit.
This is such an important point. We should concentrate technical effort at watermarking *the scarce thing,* which long term is going to be human-generated content and/or images produced by actual photons hitting actual sensors in the actual world.
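To make that concrete, here is a minimal sketch of provenance-at-capture in the C2PA spirit: the device signs the sensor bytes, and any subsequent edit, even a single byte, breaks verification. This assumes Ed25519 via Python's `cryptography` package; the device key and image bytes are stand-ins:

```python
# Sketch of "watermarking the scarce thing": sign bytes at capture time,
# so authenticity survives exactly until the first edit.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # stand-in for a camera's hardware key
public_key = device_key.public_key()       # what a verifier would hold

image_bytes = b"\x89PNG...raw sensor output..."  # placeholder capture
signature = device_key.sign(image_bytes)         # attached at capture time

def is_authentic(data: bytes, sig: bytes) -> bool:
    """Check that the bytes are exactly what the device signed."""
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(image_bytes, signature))             # True: untouched capture
print(is_authentic(image_bytes + b" edit", signature))  # False: any edit breaks it
```

A real scheme like C2PA signs a manifest rather than raw bytes and chains signatures across edits, but the underlying fragility is the same: a signature attests to specific bytes, not to "human-ness."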