9 Comments

> This is radically distinct from the view of many in the AI safety community, who instead often see the technology developing toward a singular, almost God-like “superintelligence” that, by its own, commands much of the productive activity in the world. I believe that the debate about open versus closed AI, and about much else in AI policy, often comes down to where one stands on this question.

I agree that many debates in AI policy hinge on differing views on this question – this is an important point that doesn't get enough attention. (Often neither party explicitly states their view, treating it as an obvious fact; as a result, we often see debates between people working from differing-but-unstated worldviews, which I think explains some of the dysfunction in the conversation.)

> Which of those two visions sounds more appealing to you? Which sounds more realistic, based on the technologies you have seen rise and fall during your lifetime? Which sounds more competitive? Which sounds more human-enriching? Which sounds more *dangerous*?

Are you saying that "many in the AI safety community" actively *hope for* a takeover by a singular superintelligence, and advocate for policies which they believe will help to bring this about? My understanding is that most folks who are concerned about x-risk fear that a takeover scenario is the default path (unless some other catastrophe intervenes first); they *predict* this outcome, but are striving to *avoid* it.

I'm aware that there are some people who do in fact advocate that the way to avoid disaster is to ensure takeover by a singular, carefully aligned entity. But I don't think this is a mainstream view among the safety community? I have yet to personally interact with anyone who espouses this view.

Are there pointers you could share? This could be fodder for one of the panel discussions I've been putting together.

> With Llama 3.1, and especially its 405 billion parameter variant, we are now thoroughly in the territory of models that the AI safety community assured us, in the recent past, would present a major danger to society.

Similarly here: I'm sure there are folks who have said such things, but I believe it was a minority view? Certainly most people I follow have explicitly stated that they don't see GPT-4 class models as posing catastrophic risks.

author

All great questions!

It seems to me that many in the safety world see alignment as the path to solving this problem. But even an aligned superintelligence would need to be kept under centralized control, at least to some meaningful extent. Certainly superintelligence could never be a broadly distributed consumer good. And since some degree of centralized control remains, one is merely transferring the alignment problem from one entity (the AI itself) to another (the people who control it).

Many quite mainstream organizations were saying that models in this class should be considered dangerous, often before or immediately after the release of GPT-4. The community has since softened, but none of the examples I selected are from especially long ago (some are within the last nine months).

Jul 24 · Liked by Dean W. Ball

This is a really good piece.

My speculation is that Meta's embrace of open source is not about building a moat, but about dumping earth to fill in the frontier labs' moats. The competition then becomes about delivering services built on top of the models, an area where Meta arguably has a lead given their UX and UI experience, rather than about owning the model itself. This also keeps Meta from having to license the tech from others.

Not sure whether it will work, but it's a viable strategy.


It’s extremely simple: AI commoditizes the creation of content. When the creation process in any industry gets cheaper and the technical and talent moats decline, the amount of content produced rises rapidly.

The primary business problem therefore moves from creating (and paying creators) to selling (advertising). If you want your game, movie, etc. to compete with the rest, you’ll have to do it via digital marketing. Eyeballs are fixed, and they are owned by Facebook, Google, and Amazon.

Meta is pouring accelerant on markets that rely on digital advertising to get their products adopted.


> Does anyone think that the world will be a more dangerous place because a 405 billion parameter version of Llama 3 is on Hugging Face?

I do, and so do my fellow coworkers who are working with me to evaluate the risks of these models. Even Llama 2 was dangerous; Llama 3 is much more so. The trouble is, we can't publish the details of these dangers without basically putting up a billboard saying, "Look here if you want to cause massive harm to the world." You're working off a false negative because of this. Perhaps you won't believe me; I just thought I should let you know. My reports are for the federal government's eyes, not the public's. Keep in mind that high-up decision-makers are working with information you don't have access to.

author
Jul 26 · edited Jul 26

This is a really condescending response that assumes an awful lot about what I do and do not know! It definitely doesn’t help your argument to go around saying stuff like this, and btw, people who advertise “I know something you don’t know” often don’t know anything that interesting, in truth. I find it difficult to believe that you know some hidden capability of Llama 2, when that model has been poked and prodded by millions of people around the world. But perhaps you do! If that’s the case, more power to you. But probably just don’t talk about it; writing things like this only makes you sound rude.


"This is radically distinct from the view of many in the AI safety community, who instead often see the technology developing toward a singular, almost God-like “superintelligence” that, by its own, commands much of the productive activity in the world. I believe that the debate about open versus closed AI, and about much else in AI policy, often comes down to where one stands on this question."

I disagree with the second sentence. I think there are many credible criticisms of open source on economic grounds, such as some of the arguments made by Trae Stephens at Founders Fund. As you know, I assign zero credibility to the superintelligence claims. National security arguments made by people who see AI as important, but not eschatological, are also worth taking seriously, even if I disagree with them. On policy grounds, this matters because even after policymakers are informed enough to reject the crazies, I expect there will still be disagreements about open source.

author

Oh yes, I think there are reasonable strategic/business arguments to be had about OSS, and I’m not a zealot about that.

But when it comes to policy, I do think both of those arguments boil down to that question. The natsec arguments, for instance, are often predicated on the idea that the weights of specific AI models are a kind of sacred, precious thing. Whereas in the alternative view, models become commoditized, fast following is relatively easy, and so the costs of restrictionist policies are not worth it. The natsec arguments are VERY influenced by the doomers, whether either party knows it or not.


I wonder how long it’ll take people to see how little lock-in Meta has. They don’t own any of the infrastructure people use to run the models, so a competing open model is one click away.
