6 Comments

Thanks for writing this. It's sometimes a little unclear which doomers exactly you're responding to, but I suspect you largely agree with many of the most reasonable people worried about AI x-risk (e.g. Bengio, Russell, etc.).

This kind of level-headed engagement and dialogue is really valuable. If you were interested in writing more on this, it could be helpful to see you respond to a particular piece. For example, you write that "many AI doomers advocate for a domestic or global “kill switch”", but without a link to a source or proposal I'm not sure you're actually arguing against something anyone serious has put forward (I could be wrong here, but I don't recognize this as a proposal I've heard my colleagues discuss).

Instead I'd love to read your thoughts on the governance measures Bengio et al. write about in their latest Science piece (https://www.science.org/doi/10.1126/science.adn0117). These include whistleblower protection, incident reporting, model registration, external audits, liability, "if-then" and responsible scaling policies - along with flexibility to strengthen or loosen regulations as AI advances.

There are a few other points where I feel like, by responding to an undefined doomer case, you end up taking down a strawman. For example: "The entire premise that an ultra-high-intellect AI will necessarily want to dominate the world is faulty." The premise isn't that intelligence will *lead to* a takeover urge. The premise is instead that an AI could have faulty or harmful goals *and* be extremely intelligent, allowing it to potentially escape human control and do massive societal damage. Stuart Armstrong has a short article on this, the Orthogonality Thesis, which might be of interest. Though I'll also say I don't think many "modern" AI risk arguments rely on this much, at least not in a very strict sense.

Finally, you write: "I don’t dismiss the AI doom arguments completely; I just see it as extremely unlikely". Again, many (though certainly not all) AI risk researchers would agree with this, depending on what you mean by "extremely". It seems to me, though, that once one assigns any credence to AI doom arguments, it's difficult to assign extremely low credence to them (say, less than 1 in 10,000 or so). Very small probabilities imply very high confidence, and as you say, we just don't have enough data or certainty to be that confident here. But this leads pretty naturally to a strong argument for lots of caution and scrupulousness in future AI development. There's lots to gain, to be sure, but to me it seems fine to do lots of risk assessment and testing as we advance to ensure we realise those gains without losing everything. Realising the benefits of advanced AI slightly later, after we develop robust technical and legal safety frameworks, in order to avoid a 1 in 10,000 (or so) chance of losing control of our future seems sensible to me.

author

It’s certainly not the case that I’m responding to a straw man. Every proposal mentioned is something I have seen formally published.

That said, I think you have more or less entirely missed the point of this article. Your fixation is on the risk: the philosophy behind the risk, the mechanics, the scenarios. My central point, however, was about the nature of intelligence itself, and suggests a fundamentally different model of the current situation.

The policy outcome you’ve described sounds nice. It is not what would be likely to actually happen if we enacted the policies you’re gesturing at. “After we develop robust technical and legal safety frameworks” is doing a lot of work for you. It implies, first of all, that “safety frameworks” are things that will be designed, refined, and implemented like a software library. I think if this tech is as powerful as we both suspect, one needs to recognize that the reality will be much more complex.

Ultimately, finding reliable, safe, and productive uses for generalist AI is a product design question. We do not know how to design a generally intelligent assistant. It’s a design challenge at least as complex as inventing the GUI. Do we have any reason to believe that this design challenge will go *better* if government is involved? What are some recent examples of situations where the government has inserted itself into complex product design decisions? Does it usually go well when the government does that, or does government involvement usually make it worse? What does that tell you about how it would likely go this time?

You think I didn’t engage with a wide enough range of AI risk theories. That’s fine and probably fair (keep in mind word limits—I’m not in the business of writing 8000 word pieces on this site). I am contending, however, that your beliefs are insufficiently engaged with reality.


Interesting piece thank you.

I tend to agree. There are so many wild assumptions baked into doomsday scenarios. One of the major ones is that AI would be motivated to kill us for some reason. There seems to be an assumption that it would become sentient and be subject to the same biological/evolutionary pressures, and the resulting emotions/motivations, as biological creatures.

It is a really strange assumption. We just do not know what creates sentience and consciousness, and we don't have any good leads for finding out. For all we know, the first transistor calculator in 1954 was conscious in some way. But that still doesn't give it the motivation (or ability) to destroy all humans.

And the ability bit is really key, imo. If we give control of the nuclear button to AI, it is obviously a threat to all humanity if it goes wrong for any reason. So maybe don't do that. And the AI drone army — maybe don't do that either. For AI to defeat humanity, it needs a mechanism. And that is where I think restrictive controls might be mandated.


Great read - I do wish people would not sensationalize AI and bioweapons, etc. The more likely dystopian scenario, and the one I think will come first, is that the idiots running insurance and financial firms will use a half-baked version of an LLM and cut all of us off from getting medicine or life-saving medical procedures, or they will sell us complicated financial products leading to another economic recession.

It will be a degradation of middle-class living standards, keeping us more firmly on the work hamster wheel.

This is infinitely more likely to happen, but it's a non-sexy scenario, not worthy of the social media algorithms, so nobody talks about it. Plus, the above scenario does not get clicks. 😁


Dean Ball's piece on AI 'X-Risk' is a stellar read — well-argued and compelling. But it doesn't quite alleviate my concerns about AI's potential for existential risk, which remain modest but persistent. There's a nuance often missed in these discussions: those wary of AI aren't Luddites. In fact, it's the creators and innovators of AI who are often most vocal about these risks. We're not fearmongers; we're realists. Our interest in AI is deep, our love for its potential, immense. And yet, we can't turn a blind eye to the 'what-ifs', even if they seem remote.

The article's focus on hostile superintelligence is a bit narrow, though. AI's potential risks are more varied, particularly its role in democratizing bio-weaponry. What's really intriguing is the idea that centralized AI could be our Achilles heel — that our strength might lie in decentralization and diversity of development. I'm not sure if it is true, but it's an angle worth exploring, and I plan to, with all due credit to Ball.

author

Hey Harrison! Really appreciate the thoughtful comments. I definitely agree with your point that hostile superintelligence (the "Terminator" scenario) is far (very far) from the only major AI risk. It's just one common one that I worry is skewing lots of people's thinking. But I certainly don't mean to use it as a straw man for all AI risk arguments. I, for one, worry what might happen in a world thickly populated with agentic AI--might they begin to communicate with one another in ways that no one really grasps, leading to unexpected failures in key systems? We've already seen a primitive example of this happen occasionally with algorithmic trading in financial markets. Bioweapons, too, are a big potential threat, and one I plan to do an in-depth post on in the future.

I think it's a good thing that the creators of AI are some of the foremost people talking about risk. I take folks like Ilya Sutskever and Geoff Hinton with the utmost seriousness. I don't think there is a general-purpose technology in history whose risks and tradeoffs have been so thoroughly contemplated from the outset. I just don't want fear about risks to crowd out the awesome potential of the technology.
