Discussion about this post

Stephen Clare:

Thanks for writing this. Sometimes it's a little unclear which doomers exactly you're responding to, but I suspect that you actually don't disagree with at least many of the most reasonable people worried about AI x-risk (e.g. Bengio, Russell, etc.).

This kind of level-headed engagement and dialogue is really valuable. If you were interested in writing more on this it could be helpful to see you respond to a particular piece. For example you write "many AI doomers advocate for a domestic or global “kill switch”" but without a link to a source or proposal I'm not sure you're actually arguing against a proposal anyone serious has really put forward (I could be wrong here, but I don't recognize this as a proposal I've heard my colleagues discuss).

Instead I'd love to read your thoughts on the governance measures Bengio et al. write about in their latest Science piece (https://www.science.org/doi/10.1126/science.adn0117). These include whistleblower protection, incident reporting, model registration, external audits, liability, "if-then" and responsible scaling policies - along with flexibility to strengthen or loosen regulations as AI advances.

There are a few other points where I feel like, by responding to an undefined doomer case, you end up taking down a strawman. For example: "The entire premise that an ultra-high-intellect AI will necessarily want to dominate the world is faulty." The premise isn't that intelligence will *lead to* a takeover urge. The premise is instead that an AI could have faulty or harmful goals *and* be extremely intelligent, allowing it to potentially escape human control and do massive societal damage. Stuart Armstrong has a short article on this, the Orthogonality Thesis, which might be of interest. Though I'll also say I don't think many "modern" AI risk arguments rely on this much, at least not in a very strict sense.

Finally, you write: "I don’t dismiss the AI doom arguments completely; I just see it as extremely unlikely". Again, many (though certainly not all) AI risk researchers would agree with this, depending on what you mean by extremely. It seems to me, though, that once one assigns any credence to AI doom arguments, it's difficult to assign extremely low credence to them (say, less than 1 in 10,000 or something). Very small probabilities imply very high confidence, and as you say here, we just don't have enough data or certainty to be that confident. But this leads pretty naturally to a strong argument for lots of caution and scrupulousness in future AI developments. There's lots to gain, to be sure, but to me it seems fine to do lots of risk assessment and testing as we advance to ensure we realise those gains without losing everything. Realising the benefits of advanced AI slightly later, after we develop robust technical and legal safety frameworks, in order to avoid a 1 in 10,000 (or something) chance of losing control of our future seems sensible to me.

Ed P:

Interesting piece, thank you.

I tend to agree. There are so many wild assumptions baked into doomsday scenarios. One of the major ones is that AI would be motivated to kill us for some reason. This seems to assume it would become sentient and be subject to the same biological/evolutionary pressures, and the resulting emotions/motivations, as biological creatures.

It is a really strange assumption. We just do not know what creates sentience and consciousness and don't have any good leads to find out. It could be that the first transistor calculator in 1954 was conscious in some way. But that still doesn't give it the motivation (or ability) to destroy all humans.

And the ability bit is really key imo. If we give control of the nuclear button to AI, it is obviously a threat to all humanity if it goes wrong for any reason. So maybe don't do that. And the AI drone army — maybe don't do that either. For AI to defeat humanity, it needs a mechanism. And that is where I think restrictive controls might be mandated.

