8 Comments
Sep 30 · Liked by Dean W. Ball

It does seem like AI safety is getting subsumed politically into a bigger-tent faction that is simply "anti-AI." There are not really that many doomers. There are many more people who are either anti-tech, anti-capitalism, or (especially in California) in a legacy industry like music, movies, or Uber-driving that is specifically threatened by AI.

The problem with the AI safety movement is an intellectual one. None of the theories behind AI safety have worked yet. It's not like structural engineering, where there are clear dangers like bridges collapsing, we developed better and better models of how those failures happen, and we discovered principles like load factors that are effective in preventing them, so now we can regulate load factors on bridges. With AI, we don't agree on what the dangers are, and the core doomer fear that "the AIs will replace us" has not been modeled.


I don’t think most AI safety people are anti-AI, though there may be anti-AI people using the same arguments against the bill.

author

I agree! I hope it stays that way.

Oct 5 · Liked by Dean W. Ball

Many AI safety people see AI progress as the biggest existential threat to humanity. They see preventing this catastrophe as the highest calling they could personally hope for: to prevent our doom would make them true heroes and humanity’s saviors.

Once a person thinks that way, very little else matters. They won’t shun strange bedfellows, and they won’t worry about a bit of exaggeration here or there. Maybe even the occasional lie. After all, it’s for the greater good, you see.

Doomers are dangerous.

Oct 1 · Liked by Dean W. Ball

Consider the regulation of automobiles compared to nuclear power: the majority of Americans own a vehicle on which they depend for transport, but I would bet most Americans could not tell you whether the majority of their power comes from a nuclear power station. This makes it difficult to do things like mandate changeovers to electric vehicles on short timelines, but easy to regulate nuclear power nearly out of existence (though I'm happy to see data center power needs may be changing that).

My hope is that widespread and increasing use of AI technology will create a buffer against overreaching or overzealous regulation. If you're using something regularly, you're skeptical of people telling you what you can and cannot do with it, and you expect respectable arguments before accepting restrictions.

author

Exactly! And once it is more widely adopted, there will also be more organic forms of "self-governance" that inevitably emerge (just as traffic laws weren't originally written from the top down--they emerged organically).


“Serve as America’s lead AI regulator” - cries in we have no fed


The AI safety movement should continue to decide its strategy primarily on its own, while collaborating with other interest groups when there are shared interests. This approach provides the best of both worlds.
