8 Comments

You might appreciate this podcast on the regulation deception:

https://spotifyanchor-web.app.link/e/ASYUrdDKEJb

You've correctly distinguished use-based from conduct-based regulation and identified the problem with the EU approach (which I've also criticized), but given how you opened the piece, I was expecting you to make the case against model- or input-based ways of triaging oversight. Instead you illustrated my exact point, i.e. that the EU's use-based approach is ridiculously over-broad! I agree a conduct-based approach would be better, but that's still broader in scope than, and tangential to, the case for using compute thresholds to pick out frontier labs for oversight. So how does this represent a misunderstanding on my part?

I have written about the fundamental problems with model regulation extensively elsewhere. I post frequently, so not every piece is necessarily meant to stand on its own.

Ultimately, pre-approval for models above a certain compute threshold carries a serious risk of creating a central planning agency. The long-term difference between model-based and use-based regulation is in fact not clear to me; it seems that the former inevitably devolves into the latter. It is, to me, an extremely large leap to suggest that model-based regulation is in any sense “light touch” if it involves NRC-style pre-approval. In fact, because it would represent the biggest change from the status quo, it’s most certainly the heaviest regulatory option on the table.

I think we agree that reporting requirements are fine. They will probably seem rather dumb in a couple of years, so I am glad they were imposed via an EO, which can be easily rolled back once the current moment of uncertainty ends. As a temporary measure, though, they seem fine.

I took your post, and Zvi’s, as conflating use- and conduct-based approaches to regulation. Perhaps I was mistaken, but it seems to me that my interpretation was fair.

Context is scarce in a format that restricts each thesis to a single sentence, but my piece explicitly referred to "use-based" regulation as over-broad, which you agreed with. If I had said "conduct-based" and then given an example of "use-based," you'd be fair in accusing me of an incorrect conflation.

I also explicitly state that I'm *against* statutory regulation of frontier models, including at the state level. My theses only recommend *oversight* of frontier labs under defense / natsec authorities, not a formal pre-approval process.

I'm also not an AI pessimist in any sense. At most I'm an *institutional* pessimist.

I identified the problem because both you and Zvi jumped on the “use-based” approach without suggesting that an alternative, third path might exist. I wrote this post to clarify that distinction, which I suspect is non-obvious to most people.

I’m not sure what oversight under natsec authorities exactly means, but it sounds scary to me! I am comfortable with these companies remaining subject to the market and legal oversight they’re already under, with an informal process for alerting governmental authorities if something problematic emerges during frontier training. Perhaps we agree less re: the EO provisions than I thought. I also think the compute threshold should be raised over time, since everyone knows that 10^26 FLOPs is just a made-up number with no scientific basis. That’s fine, but as we pass various capability levels, I think there is a strong case for raising it.

I do apologize, however, if I’ve misread your stance re: AI pessimism. If I had to summarize my perception of it briefly: you are a relative bull on AI capabilities and a pessimist about the impact those capabilities will have on society. Is that incorrect?

I think the benefits are very positive on net in the short term, mixed but mostly positive in the medium term, and overwhelmingly positive in the long term, provided we navigate the subset of truly catastrophic risks. Beyond 2030, I think regime change becomes quite likely, which will make for an incredibly destabilizing and fraught period geopolitically, but alas there's no way out but through. I've tried to express that forecast in descriptive rather than normative terms. Longer term, I am quite pessimistic about an eventual posthuman transition, but I say that as a human chauvinist who thinks all meaning and value is ultimately endogenous to our peculiar evolutionary niche.

I think we are in broad agreement about the potential for instability, and we share a skepticism about post-humanism. I am much more bullish on the US than you are, though instability throughout the globe is a certainty, and such shocks can wreak nonlinear havoc on our shores.

I think America needs to do everything in its power to set global standards (both technical and soft) for the use of advanced AI, which means, among other things, maintaining the lead. Lots of people fetishize language and biology models, where we lead. I think the near- to medium-term power of those technologies is likely being overstated, but I readily admit I could be wrong about that. We are likely behind (or at least not clearly ahead of) China in other things that matter, notably industrial applications of in-factory RL, computer vision, and robotics.

To maintain our lead, and to catch up where we are behind, openness is essential. That comes with tradeoffs, but it will be worth it. If you think of LLMs alone as a kind of OS (à la Karpathy), the long-term benefits of openness become clear, even for very powerful models. It’s easy to overindex on the present frontier, where vertical integration pays dividends (agentic models, voice assistants, etc.). Over the long arc of history, though, modularity usually wins. That is what we want, *especially* if you are right in your optimism about future model capabilities and scaling.

I doubt I’ll convince you here, but I hope this at least illustrates why, for me, the open vs. closed debate is about a lot more than Meta.
