I want to bring your attention to an op-ed I wrote with Daniel Kokotajlo about the need for frontier lab transparency. Daniel and I disagree about many things relating to AI regulation and the trajectory of AI, but we agree about this. Over several conversations with Daniel, our agreement on this issue became clear, but so, too, did something else—something that reminded me of an essay called “The Voice of Poetry in the Conversation of Mankind,” by one of my favorite philosophers, Michael Oakeshott:
In conversation, facts appear only to be resolved once more into the possibilities from which they were made; certainties are shown to be combustible, not by being brought in contact with other certainties or with doubt, but by being kindled by the presence of ideas of another order; approximations are revealed between notions normally remote from one another. Thoughts of different species take wing and play around one another, responding to each other’s movements and provoking one another to fresh exertions.
Thanks to Daniel, and the other new friends I have made over these past few months. Thank you all for combusting certainties, for provoking me to fresh exertions, and for reminding me that ideas of another order lurk all over. And thank you, most of all, for the conversation. It means more than I can say.
On to this week’s essay.
The following is the conclusion of a lecture I gave to undergraduates at Mercer University entitled “What is AI, and How Do We Govern It?” After delivering it, I realized it may be of some interest to you as well. I hope you enjoy. Thanks to Antonio Saravia, Andres Marroquin, and Mercer’s Center for the Study of Economics and Liberty for graciously hosting me.
…
So far, we’ve mostly talked about how government can regulate AI. In other words, how government can set prescriptive rules for how AI should be developed and used. Maybe you think near-term regulation is a great idea. Maybe, like me, you question the wisdom of policymakers trying to preemptively shape an industrial revolution that has not happened yet.
Regardless of where you come down on that question, though, there’s an entirely different way of thinking about government’s role in AI—one that I fear is little-discussed. And that is the government’s role in building capabilities and in providing the public goods that it is best-positioned to create.
What might that look like?
AI is already blurring the boundary between man and machine in digital environments, and it will only do so more in the future. Perhaps, then, we need digital public infrastructure for validating personhood and identity online. Should the government build it, just as it created our physical identification system?
The AI industry is in desperate need of safety and reliability evaluations, as well as technical standards for generalist AI. We are far from building mature versions of either of these, but we could be moving much faster than we are today. What role should the government play in this? Is it obvious that the private sector will produce optimal standards and evaluations on its own?
Whatever the capabilities trajectory of AI turns out to be, I think it is a safe bet that models will become very competent programmers—and one country’s competent programmer is another’s cyberattacker. Should cyberdefense efforts receive much more funding and staff time within the Department of Homeland Security, the Cybersecurity and Infrastructure Security Agency, or other federal agencies?
These are just a few examples of things that our government could be building, or helping to build. They have little to do, at least superficially, with “regulation.” Indeed, they are probably harder than regulation. Building is harder than complaining, and it is harder than declaring rules from the top down.
Building new capabilities well would require government to devote resources to them, which in our current era of severe fiscal constraints means defunding other programs. It would mean forging alliances with Big Tech and other companies, which will surely bring to these challenges technical expertise that the federal government does not have on its own. It would involve a completely different posture than the ones our governments, by and large, are currently assuming.
Rather than rushing to make paternalistic pronouncements about which AI uses are “good” and which are “bad,” I wonder if our policymakers shouldn’t spend more time in the trenches, with the rest of us, trying to contend with reality rather than trying to mold it. I do not believe we created our government to mold reality, and such worship of the state is, I think, inconsistent with the values of our republic.
Think about our history. Is there a law the government passed that made the civilization-defining technology of electricity “go well”? Not really, no.
Despite this, governments—local, state, and federal—did play a major role in the introduction of electricity to the factory, the office, and the home. They worked with private industry to build the vast amounts of infrastructure necessary to make it so that houses and places of business could have electric light, cooking and cleaning appliances, and all the other myriad benefits of electricity.
Yet for the most part, government’s role did not involve legislating a maximum brightness for lightbulbs, or a maximum size of electric generators. Even one of the biggest fights of the time—the conflict between alternating and direct current—was resolved by private industry, and only later was it codified by government in law.
Government helped diffuse this technology throughout society, recognizing the importance of spreading the benefits of electricity to as many Americans as possible.
Is the early electricity rollout an example of a “laissez-faire” government? In some ways, perhaps. But I prefer to think of it as humble government. Humble government does not, necessarily, mean inactive government. It means government that understands the limitations of its power. It means leaders who focus on building a better future rather than dictating to us mere mortals what the future should be.
Humility, more than anything, is what we need to guide us through the coming technological revolution.
Finally, I want to talk a bit about what AI means for you. I won’t lie to you: people graduating from college in the late 2020s may bear the brunt of the labor market impacts of this technology. AI systems will surely automate many junior-level jobs that you otherwise may have taken. Even if AI cannot do everything you can do, it may make it so that employers simply do not hire as much for junior roles. Indeed, we may be starting to see this already in software engineering.
My generation—the millennials—is stereotyped for spending our 20s in a kind of extended adolescence. I am not sure you will have that same luxury. I would be lying if I said that I think the next decade will be easy—it will be hard, in some ways, for all of us. But it could be especially hard for new college graduates.
Yet I am not among those who believe that AI will eliminate all human labor, or make work “optional.” Indeed, I believe the opposite: the early years of the AI transformation could present one of the greatest economic opportunities for young people in history.
Step back for a moment and think: what is intelligence? Does intelligence explain why Apple built the iPhone under Steve Jobs, but Microsoft—a far bigger company at the time—struggled with mobile phone software? Does intelligence explain why OpenAI, a small startup, led the way with language models, even though Google—a juggernaut—invented the transformer architecture that undergirds them?
How much of these successes is explained by intelligence per se, and how much by entirely different factors? How many problems in life are “solved” purely by intelligence, and how many require other things—inspiration, luck, creativity, serendipity, perseverance, charisma, charm, money, power, connections? How many smart people do you know who have thought about a problem their whole life, yet made little tangible progress? How many human creations worth their salt were made by thinking alone?
AI will solve some of our problems, but far from all of them. The most interesting problems in the world are the ones that are bottlenecked by something other than, or in addition to, intelligence. Ample machine intelligence can help with those problems, but it cannot solve them altogether. Those are the problems really worth your time. And those are the problems that I believe the next generation will be able to solve more readily than anyone else.
Because you, more than anyone else, are unencumbered by the past. You, more than anyone else, don’t have “traditional ways of doing things” that blind you to new possibilities. You, more than anyone else, are positioned to take AI and its related technologies and build things with them. And there are all sorts of things worth building.
I do not believe that AI will mean that everything will be “solved,” as others do. I do not believe in utopia. Instead, I believe AI will open up entirely new worlds of problems to be solved.
I believe that in the next decade or so, we will cure many diseases once and for all, and find far better treatments for many others. We will build the technologies that let us solve climate change. We will invent new materials and new means of transport. And we will do even broader things. AI will change the structure of businesses and other organizations. It will change the way our government works, and could enable novel forms of civic engagement. Industries that have been dying, like local journalism, could get a new lease on life thanks to this extra dose of abundant machine intelligence. And these are just the easy things—the things from our current world I can imagine being made better by AI. But you, better than anyone else, can invent entirely new things—businesses, intellectual endeavors, hobbies, things for which we do not even yet have words.
You will probably be tempted, by our overly negative culture, to be cynical about this technological change. To mutter things about “late capitalism,” or to disregard the wonders being created by the week as “hype” or “plagiarism” or something similarly simplistic. It’s tempting because it sounds smart—it sounds savvy. But it isn’t. The cynics and the pessimists often win the day, but they rarely win the decade. And they almost never win the century. They may look smart today, but one day, they will be forgotten. So I encourage you to set their easy cynicism aside, and forge your own path.
There will be turbulence along the way, but overall we are headed to a far better world. It is not wrong to be optimistic. It is not wrong to be excited.
All of us will piece this transformation together collectively, imperfectly, and over time. But I believe that you, as young people, will play a starring role. Though all of us will change the way we do things, you will be the ones with the fewest attachments to the past. Though bold new ideas will come from all over, yours, I think, will be the boldest.
So my advice is not to worry too much about what AI means for your prospects as a software engineer or as a lawyer or anything else. Instead, think about what you, uniquely, could do with 1,000 tireless geniuses working for you day and night. Think about what you want to build. Cultivate wisdom. Learn from as many disciplines as you can. Be curious. Be broad. Make friends. Forge relationships and alliances. Get obsessed. Create.
Remember that an AI system, no matter how smart, is only as good as the questions it is asked. In a few years, you’ll have a cognitive tool that can do a staggering range of intellectual tasks. In some ways, you’ll have at your disposal as much cognitive power as only the wealthiest and most powerful people alive today can command. The people who use these capabilities well will be the ones who thrive. So spend these few years you have in college pondering: what is it that you will do?
Thank you.