[Please note: I have modified this post to remove a mostly positive discussion on Connecticut’s SB 2, a bill that was under debate at the time of writing and later was abandoned by the legislature. My analysis was a little too rosy, and I likely underestimated the effect the bill would have. In short: I was wrong. -DB 12/7/2024]
Most of what I want to flag in this section ended up being the focus of this piece, so I’ll leave you with just two links which caught my attention this week and which were not covered much elsewhere.
Also, I appeared on The Cognitive Revolution, Nathan Labenz’s excellent podcast, to discuss neural technology and brain-computer interfaces, a pet issue of mine. It was a fun deep dive.
Quick Hits
CRISPR-GPT is a new agentic AI system from researchers at Princeton, Stanford, and DeepMind intended to automate the design of CRISPR-based gene editing experiments. It’s important to keep in mind that these agentic systems are often built from scaffolding and sophisticated prompt engineering around frontier language models; as soon as a better language model comes along, it can be dropped into the existing agentic structure to yield a much better system. Because this scaffolding now exists for many different domains, we may see agents take off faster than many non-specialists expect—more of a phase transition than a gradual ramp. This is just a guess on my part, though.
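To make the “drop in a better model” point concrete, here is a minimal sketch of what this kind of scaffolding can look like. It is purely illustrative: the class, prompt, and tool names are hypothetical, and nothing here describes how CRISPR-GPT is actually built.

```python
# Illustrative sketch only: a toy agent scaffold in which the underlying
# language model is a pluggable component. All names are hypothetical and
# do not describe CRISPR-GPT's actual implementation.
from dataclasses import dataclass, field
from typing import Callable, Dict

# Treat a "model" as any function from prompt text to completion text.
ModelFn = Callable[[str], str]

@dataclass
class ExperimentDesignAgent:
    model: ModelFn            # the swappable frontier model
    system_prompt: str        # the domain-specific prompt engineering
    tools: Dict[str, Callable[[str], str]] = field(default_factory=dict)

    def run(self, task: str) -> str:
        # The scaffolding (prompting, tool routing, validation) stays fixed;
        # only `model` changes when a better base model ships.
        prompt = f"{self.system_prompt}\n\nTask: {task}\nRespond with a step-by-step plan."
        plan = self.model(prompt)
        for name, tool in self.tools.items():
            plan += f"\n[{name} check] {tool(plan)}"
        return plan

# Upgrading the whole system is then a one-line change:
#   agent = ExperimentDesignAgent(model=better_model, system_prompt=SAME_PROMPT, tools=SAME_TOOLS)
```

The domain expertise lives in the prompts and the tools; the model itself is just a parameter you can replace.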
The Permitting Council, a federal agency designed to streamline the environmental review and permitting process, has announced $30 million in grants to a variety of federal agencies to use AI and other advanced software to speed up this perennial bottleneck in infrastructure construction. A small but positive step.
The Main Idea
There was some public pushback this week on SB 1047, California’s sweeping proposed AI law. I wrote about 1047 back when it was introduced, and was able to chip in this week with some factual analysis of what this bill really does—and does not—do.
For all the needless complexity it would generate, SB 1047 is a simplistic law at its core: “we are worried about AI models being used to do very bad things,” say California’s legislators, “so let’s make it illegal for anyone to make an AI model that could possibly be used to do very bad things.”
The problem, of course, is the general-purpose nature of powerful tools like AI. “We are worried people could use computers to do very bad things,” one could imagine them saying, “so let’s make it illegal to manufacture a computer that could be used to do very bad things.”
No amount of hand-waving from the bill’s proponents about being “pro-innovation” or “common sense,” no number of 100-page “AI risk assessment frameworks” furnished by well-funded AI safety non-profits, nothing, can erase the fundamental simple-mindedness of this line of thinking.
Criticizing bad legislation is an important part of policy scholarship, and it tends to be the part that generates the most attention. Yet it is my least favorite part of the job. I prefer building to tearing down. To that end, I’d like to highlight a couple of government actions—at the state and federal levels—that I think do move the ball forward in a positive manner. I’ll talk about two very different things: new guidance on nucleic acid synthesis from the Biden administration, and an effort in the State of Ohio to use AI to improve the quality of (non-AI) regulation. Neither is likely to generate viral threads on X or prominent op-eds; both are good-faith efforts to productively incorporate AI into our society.
Biden Administration Guidance on Nucleic Acid Synthesis
One of my earliest posts on Hyperdimensional focused on AI and biorisk. While current language models do not present a particularly salient risk for the creation of biological or chemical weapons, there are now promising DNA foundation models. The transformer architecture that underpins modern language models does not lend itself well to DNA modeling, but models like the Arc Institute’s Evo take advantage of new hybrid architectures like StripedHyena to tackle this problem more effectively. In the long term, this could enable the creation of novel forms of life (bacteria, mostly), with the entire genome predicted by an AI model.
The promise is great, but the potential for something like this to be used to dangerous ends is obvious. The good news is that software has not been, and will not be, the primary bottleneck to synthesizing DNA or RNA—for either beneficial or nefarious purposes. Instead, as I wrote in “AI Biorisk: A Dose of Reality,” hardware is a bigger, and much easier to police, bottleneck:
Fortunately, creating a novel biological or chemical weapon requires more than just the exact recipe needed to make it. Of course, specialized scientific know-how, as opposed to knowledge, is essential. ChatGPT may be able to tell me the steps to make hare à la royale, but that doesn’t mean I’ll know remotely how to do it, much less be able to do so. But more importantly for these purposes, it also requires specialized tools, such as RNA/DNA synthesis machines in the case of a virus or bacteria, and substantial laboratories in almost all cases.
…
Strangely enough, even though it is AI that makes this risk more salient, it is best addressed by focusing on almost every part of the production chain other than AI. This is because this is specialized, physical, equipment whose use can be reasonably regulated, even strictly, without impinging overmuch on freedom of thought, speech, or scientific inquiry.
Just a few examples of these controls include:
Regulating the export of DNA/RNA synthesis equipment;
Mandatory screening mechanisms for DNA/RNA synthesis, as described here and as put forward in President Biden’s Executive Order on AI.
With the “Framework for Nucleic Acid Synthesis Screening,” the Biden Administration’s National Science and Technology Council is doing precisely this. Specifically, the framework requires that:
Manufacturers of DNA synthesis equipment must screen all orders longer than 200 nucleotides against “sequences of concern,” a database of sequences from federally regulated pathogenic agents (a toy sketch of this kind of check follows this list);
Manufacturers must also screen any purchaser of nucleic acid synthesis equipment to validate the purchaser’s identity and associated institution;
Potentially dangerous sequence synthesis must be reported to the FBI;
In the long term, manufacturers must integrate the ability to perform this screening directly into the synthesis equipment;
Any federally funded life sciences research relying upon nucleic acid synthesis must use equipment from one of the approved manufacturers, who must conform to the standards above.
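To illustrate what the first requirement amounts to in practice, here is the toy screening sketch promised above. It is deliberately simplified: real screening pipelines match orders against curated government databases and use homology search rather than exact substring matching, and the function and variable names are hypothetical.

```python
# Toy illustration only: flagging a synthesis order that contains a
# "sequence of concern." Real screening uses curated databases and
# homology search, not exact substring matching.
from typing import List

def screen_order(order_seq: str, sequences_of_concern: List[str], min_len: int = 200) -> bool:
    """Return True if the order should be held for manual review."""
    order_seq = order_seq.upper()
    if len(order_seq) <= min_len:
        return False  # the screening requirement applies above the length threshold
    return any(fragment.upper() in order_seq for fragment in sequences_of_concern)

# Hypothetical usage: a flagged order would be held and, if warranted, reported.
if screen_order("ATG" * 100, sequences_of_concern=["ATGATGATG"]):
    print("hold order for manual review")
else:
    print("clear to synthesize")
```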
Presto! A major medium- and long-term risk from AI, meaningfully addressed by policy without oppressive regulations on the distribution of software on the Internet and without a wide-ranging AI licensing regime. Kudos to all involved.
Ohio Cleans Up (Non-AI) Regulations Using AI
This is a pretty simple one. Starting in 2020, the State of Ohio conducted a review of its 17-million-word administrative code to identify duplicative or outdated regulations. The system, called RegExplorer, cut 2.2 million words of unnecessary regulation. A great next step would be to use language models to produce more human-readable versions of existing regulatory legalese.
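As a sketch of what that next step might look like, the snippet below shows the shape of a plain-language rewriting pass over sections of administrative code. Here `call_llm` is a placeholder for whichever language-model API an agency actually uses, and nothing about this reflects how RegExplorer itself works.

```python
# Minimal sketch of a plain-language rewriting pass over regulatory text.
# `call_llm` is a placeholder for a real language-model client; the prompt
# and function names are hypothetical.
PLAIN_LANGUAGE_PROMPT = (
    "Rewrite the following section of administrative code in plain English, "
    "at roughly a high-school reading level, without changing its legal meaning. "
    "Flag any ambiguity you cannot resolve.\n\n{section}"
)

def call_llm(prompt: str) -> str:
    raise NotImplementedError("swap in a real model client here")

def simplify_section(section_text: str) -> str:
    return call_llm(PLAIN_LANGUAGE_PROMPT.format(section=section_text))
```

The key design choice in any such effort would be to keep the legal text authoritative and treat the plain-language version as an annotation rather than a replacement.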
All of these steps, taken by a variety of government entities, are reasonable: they prioritize the diffusion and use of AI, work to minimize foreseeable downsides, and do so without stifling a fast-moving and deeply promising emerging technology. We can do more work of this kind. Despite what some alarmists might tell you, we do not need to invent a new regulatory regime from scratch. We can proceed the same way a prompt engineer might instruct a language model: “take a deep breath, and think step-by-step.”