15 Comments

Hi Dean, While this is not my area, I'm glad I subscribed a bit ago as I'm finding your writing very interesting. Hope you have a nice holiday season!

Thank you!

As an AI Safety researcher who is freaked out about catastrophic risks (e.g. bioweapons) from AI (sometimes to the point of rudeness, sorry): this is the best take on AI regulation I've yet seen, by far.

You really hit the nail on the head with:

"But how do we regulate an industrial revolution? How do we regulate an era?

There is no way to pass “a law,” or a set of laws, to control an industrial revolution."

The Narrow Path essay and Max Tegmark's Hopium essay both seem to suggest that, because the AI industrial revolution is scary, we should ask the US Federal government to wave a magic wand and make it not happen. That simply isn't an option on the table, and any realistic plan must start by facing up to that.

Rather than trying to tell people not to use AI for science or AI for improving AI, we should aim to channel it. Offer rewards (e.g. subsidized compute) for researchers in exchange for operating in a loosely supervised setting. Focus on preventing only the very worst civilization-scale risks rather than micromanaging. Trying to rein in developers with overly stringent rules will just drive research underground. We can't stop the tide of technology.

Thank you! And 100% in agreement. I wish more people in policy making communities grokked this.

Nice to have it all in one place. The hard part is rank ordering them.

Manifesto gang 👀.

Lots of overlap here in the areas relevant to me: NAIRR, DoE, model specs. Nice.

Your model spec idea has been quite influential on me. Next big step is: how do we define this in statute? (Something we should discuss sometime!)

Will help when my friend Johny finally gets Amanda to write Anthropic's. They're next. Google is always last on policy things, unless we can convince them it's good for developers.

I continue to find your writing really worthwhile, so I converted this through ElevenLabs to audio again:

https://open.substack.com/pub/askwhocastsai/p/heres-what-i-think-we-should-do-by

thank you!

The rise of AI means the replacement of humanity by something else. Do you have a policy for that?

I obviously do not share your degree of certainty about this matter even remotely.

Is there anything that humans will always be better at?

Transparent to whom?

And when government labs, intelligence, and military orgs set up projects, these regulations will not apply to them?

To the public, and correct.

The risks go beyond the obvious - one should be looking at other ways that proliferation of AI could endanger humankind. For one, devolution has species-altering potential if not worse. We (and children) are increasingly relying on AI instruments for the simplest of tasks like writing essays, arithmetic calculations, and so on. We are devolving biologically faster than we have evolved in the last million years. Brains are shrinking fast enough to show significant decline in a few decades.

One can’t just sit back and wonder how an industrial revolution can be contained, or suggest that it can’t. Instead, we need to find ways to enforce regulations, and advocates need to come forward and act. I wonder what the annual conventions run by the other section of leaders and billionaires (AI opponents) are doing and saying about this.

On the flip side, the recent elections could perhaps work in favour of the cause. Although I dread the far right and its implications, and think it doesn’t bode well for world progress, Trump’s alliance with Musk could actually help foster AI regulation to a great extent.
