Discussion about this post

Steve Newman:

> This is radically distinct from the view of many in the AI safety community, who instead often see the technology developing toward a singular, almost God-like “superintelligence” that, by its own, commands much of the productive activity in the world. I believe that the debate about open versus closed AI, and about much else in AI policy, often comes down to where one stands on this question.

I agree that many debates in AI policy hinge on differing views on this question – this is an important point and doesn't get enough attention. (Often neither party explicitly states their view, treating it as an obvious fact; as a result, we often see debates between people working from differing-but-unstated worldviews, which I think explains some of the dysfunction in the conversation.)

> Which of those two visions sounds more appealing to you? Which sounds more realistic, based on the technologies you have seen rise and fall during your lifetime? Which sounds more competitive? Which sounds more human-enriching? Which sounds more *dangerous*?

Are you saying that "many in the AI safety community" actively *hope for* a takeover by a singular superintelligence, and advocate for policies which they believe will help to bring this about? My understanding is that most folks who are concerned about x-risk fear that a takeover scenario is the default path (unless some other catastrophe intervenes first); they *predict* this outcome, but are striving to *avoid* it.

I'm aware that there are some people who do in fact advocate that the way to avoid disaster is to ensure takeover by a singular, carefully aligned entity. But I don't think this is a mainstream view among the safety community? I have yet to personally interact with anyone who espouses this view.

Are there pointers you could share? This could be fodder for one of the panel discussions I've been putting together.

> With Llama 3.1, and especially its 405 billion parameter variant, we are now thoroughly in the territory of models that the AI safety community assured us, in the recent past, would present a major danger to society.

Similarly here: I'm sure there are folks who have said such things, but I believe it was a minority view? Certainly most people I follow have explicitly stated that they don't see GPT-4 class models as posing catastrophic risks.

M Flood:

This is a really good piece.

My speculation is that Meta's embrace of open source is not about building a moat, but about dumping earth to fill in the frontier labs' moats. The competition then shifts to delivering services built on top of the models, a place where Meta arguably has a lead given its UX and UI experience, rather than to owning the model itself. This also keeps Meta from having to license the tech from others.

Not sure whether it will work, but it's a viable strategy.

