When I started this newsletter in January, I wrote some foundational posts outlining my basic appraisal of the AI situation. Since then, I’ve added well over 1,000 subscribers, and have updated my views considerably. So, I thought it would be useful to briefly outline my current position on the major issues implicated by AI. These include: what AI means in a philosophical and macrohistorical sense, what kind of technological transformation AI is likely to be, how I expect AI to be integrated into society, timelines to AGI, and how policy should react.
This is obviously a ton of ground to cover, so please understand that I will have to speak in very broad strokes.
A Note on Style
I write Hyperdimensional from a first-person perspective. This is not usually considered best practice for analytical prose, and indeed, nearly all my writing outside Hyperdimensional avoids the first person. I do so here for a few reasons. First, because this is in some sense a deeply personal project, even if it doesn’t always show. I try to bring everything I have learned in my life—every book, every personal failing, every success, every work of art—into the writing I do here. There is more blood in my writing here than there might be in a think tank report, or an op-ed. Second, because I find it important to remind you—and myself—that everything I write is simply one man’s perspective. I do not wish to pretend to an authority that I do not have.
The Big Big Picture
At its core, AI is a philosophical endeavor. Nearly all our current debates about AI—even ostensibly prosaic and technocratic matters—boil down to one’s beliefs about the nature of reality. I’ll briefly lay out my perspective, with minimal philosophical jargon.
I believe that everything is constructed over time. What’s more, I believe that everything is being constructed in real time. Some of these constructions are less interesting to us than others. A rock is being constructed in real time in the microscopic sense that, at root, it is the emergent product of the interaction of quantum fields (as far as we currently know), and in the macroscopic sense that it is subject to the forces of nature (erosion, wind, the sun). But the microscopic forces are mostly invisible to us because of their size, and the macroscopic forces are mostly invisible to us because of the long time periods over which they act. Neither set of forces is especially relevant in day-to-day life, despite their immense complexity.
Most interesting things in life, though, are more volatile. This is why the behavior of all living things, the meaning of words, the stories we tell, the climate, and most other things we care about are all context-dependent. Their context matters to us not merely because they change, but because they change over periods of time that register in our day-to-day perception.
But at root, all things are constructed in real time, and the universe is likely a hierarchy of emergent orders—orders that arise spontaneously because of the interaction of independent, largely uncoordinated forces—all the way down. The only “ground truth,” I suspect, is mathematics. And that’s unsatisfying in a sense, because ultimately mathematics can merely be used to represent these inscrutable emergent orders with greater and greater fidelity (also known as modeling reality).
The implication of this view, though, is that reality is constructed in real time at every instant. Thus my philosophical perspective tends to be heavily focused on the here and now—what I am doing in this precise moment. Yet the past is also deeply important to me, because to understand and act appropriately within a given moment, I need to understand the scaffolding on top of which I am standing.
This also means that the future is constructed. It does not exist. We have to get from here to there, and the path there is probabilistic and is itself also constructed in real time. Everything—every single thing—is a process.
Life, to paraphrase Michael Oakeshott, is a ceaseless improvisatory adventure. Ceaseless. I suspect that many of my disagreements with others in the AI policy world stem from my belief that there is no firm ground on which to walk, no solid ontologies on which to rest. We are in an endless and bottomless sea. There is no anchor. So if you want to achieve something—AI safety, say—you have to build it. You cannot just declare it from the top down. That doesn’t work. Nature does not care about your declarations. You can try to cast an anchor into the bottomless sea, but the ship will just start sinking, inch by inch.
The Smaller Big Picture
AI is a transformational technology. I do not believe, as some do, that it represents a new kind of organism per se, or that its effect will be like that of photosynthesis—a sudden and dramatic change to the habitability of Earth for human beings.
At the same time, the distinction between technology and organism is not so simple. Language, for example, is a kind of technology, yet it also resembles a form of life. It likely co-evolved with the human brain, adapting itself, like a microbe, to our physiological and psychological quirks. At the simplest level, for example, all language must be producible by human vocal cords. At a higher level, language likely adapted itself to cohere with pre-existing spatiotemporal circuits within the human brain. It is nearly impossible to speak about anything—even purely non-physical things—without reference to implicitly or explicitly physical metaphors. I suspect these evolutionary adaptations explain this.
Language is, of course, not conscious. It didn’t hatch some plot to dominate our minds. And we didn’t hatch some plot to invent it. Language evolved in much the same way that other forms of life evolved, which means that it has key properties of life. I tend to think language should be understood as a kind of life, but not in the way that we typically define life (oh, look at that—a limitation of language!).
Yet symbiotic evolution is bidirectional: humans also adapted to the newfound tool of language. Indeed, it is not clear whether humans or language adapted more. After all, language defines the limits of our ability to reason about the world. It is a box—or a prison, I suppose, if you are pessimistic. A box that radically expanded our ability to understand and grapple with the world (particularly after the invention of writing), but a box nonetheless, and a box that no human invented or even asked for. Such was the beginning of man’s fusion with technology, and such will be its path for all time.
I see AI as another step along the path of mankind’s co-evolution with technology—the biggest one humans have taken since at least the printing press. Let me try to make that more specific.
Where We Are Going
AI allows humans to model increasingly complex aspects of reality. Because reality is a series of emergent orders, many of its interesting processes do not conform to rules we can easily write down. Humans want to be able to write down the rules of, say, protein folding, because the limits of our language are the limits of our world. Yet even the rules of language itself cannot be written down. Try as we might, we have consistently failed, and we will likely keep failing.
Scientists in various domains have long understood this, which is why they’ve been using computational models of reality for some time. Many of those models have enabled great progress. Yet as mankind continues in its quest to control more and more aspects of nature with greater precision, these computational approaches have started to show their limits. In some cases, the equations that undergird the models prove imprecise, because they are lossy approximations of the thing being studied. Other times, the equations are fine but demand infeasibly vast computational resources to run at the necessary level of detail.
Deep learning gives us, finally, a new chapter in this story. To make things exceptionally, stupidly simple: some people realized that the human brain could readily process many things that we found impossible to accomplish with computers. So they designed a series of architectures very loosely inspired by the human brain, as well as by an eclectic mix of philosophical, mathematical, and scientific traditions, which can broadly be called neural networks. At first, these networks showed promise but didn’t work all that well. Then, around 2012, chips designed to render video games turned out to be ideal for exactly the kind of math neural networks require, and the approach started to work. As we added more data, made the networks bigger, and added more of the gaming chips, one intractable computational problem after another fell. Image recognition. Superhuman board game performance. Natural language. Protein structure prediction.
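To make that contrast concrete, here is a minimal, purely illustrative sketch in PyTorch. It is my own toy example, not anyone’s production system: instead of writing down the rule that maps inputs to outputs, we invent a “hidden” process, show a small network examples of it, and let the network infer the rule for itself.

```python
# Toy illustration: learn a rule from examples instead of writing it down.
import torch
import torch.nn as nn

# A "hidden" process standing in for some part of reality whose rule we
# pretend we cannot write down.
def hidden_process(x):
    return torch.sin(3 * x) + 0.5 * torch.sign(x)

x = torch.linspace(-2, 2, 512).unsqueeze(1)   # observations of the world
y = hidden_process(x)                         # what the world actually does

# A small multilayer perceptron: layers of simple units, loosely brain-inspired.
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training: nudge the network's parameters until its outputs match the data.
for step in range(2000):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()

# A small error means the rule was learned, never written down.
print(f"final fit error: {loss.item():.4f}")
```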
These are all areas where the limits of our language defined the limits of our science, which in turn defined the limits of our ability to create satisfying computational models. Suddenly, with deep learning models, some of those limits were shattered. The limits of our language may still be the limits of most peoples’ worlds, but for those operating at the frontiers of human knowledge, this is no longer the case. For the first time in a long time, we have a new, general-purpose, cognitive tool at our disposal. And right now, it improves at an exponential rate.
Timeline to AGI
AGI is a loosely defined term, one that often confuses more than it clarifies. Yann LeCun’s Advanced Machine Intelligence (AMI, pronounced “ami,” or friend, in French) strikes me as among the most rigorous proposals for what AGI might look like. LeCun has clearly thought hard, from a wide variety of angles, about what human intelligence is and how it might be achieved. I have no idea whether he is correct, and more to the point, I see no current signs that we are close to achieving this vision or anything like it.
With that said, I expect that the more prosaic definition of AGI—a system capable of automating most of the tasks that currently define most of the productive labor done on computers—will be achieved within the next decade or so. Current LLMs do not resemble the rigorous conception of intelligence that LeCun has, but with brute force and a lot of ingenuity, they can be made smart enough to do a lot of the tasks in current-day knowledge work. In some ways, they’ll be smarter than the smartest human (arguably Claude 3.5 already is), and in other ways they’ll be dumber than almost every human.
Everything I’m about to say is predicated on a few assumptions: compute performance (Moore’s Law, at root) continues to improve at roughly its current pace, AI capital investment (data center construction) is maintained, sufficient energy capacity to power AI compute is built, and no major destabilizing events (most importantly, a war in Taiwan) occur. Each of these is a highly combustible assumption.
My guess as to how this will be done is not so much that we will create a singular digital mind capable of doing all this work. Rather, we will make current generalist models—things like GPT 4o and Claude 3.5—more capable and better at formal reasoning and planning. We’ll give them tools (environments in which to execute code, search engines, APIs to access various web services, etc.), and we’ll change the digital world to make it easier for them to operate within. We’ll change our work habits and the structure of our organizations. We’ll co-evolve, just as we did with language.
Eventually, we’ll parallelize these generalist agents. You’ll talk to an orchestrating agent, and other agents will be deputized on the fly to do research, write code, etc. At first it will be just a few agents, but there is no reason in principle that this parallelization cannot continue indefinitely. It will be possible to ask a question, and, in essence, create a company of digital minds exclusively for the purpose of solving your specific task. The company will exist as long as it needs to: a day, a week, indefinitely.
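For the sake of concreteness, here is a hypothetical sketch of what that orchestration might look like in code. The roles, prompts, and function names are my own inventions, and call_model is a stand-in for whatever model API one might use; nothing here reflects any particular provider’s interface.

```python
# Hypothetical sketch of a "company of digital minds": one orchestrator,
# several parallel workers, a synthesis step at the end.
import asyncio

async def call_model(role: str, prompt: str) -> str:
    # Placeholder for a real LLM call: sleep briefly, return a canned answer.
    await asyncio.sleep(0.1)
    return f"[{role}] answer to: {prompt}"

async def worker(role: str, subtask: str) -> str:
    # Each worker is a generalist model given a narrow role (and, in a real
    # system, its own tools: code execution, search, APIs).
    return await call_model(role, subtask)

async def orchestrate(task: str) -> str:
    # The orchestrating agent decomposes the task on the fly...
    subtasks = {
        "researcher": f"gather background for: {task}",
        "coder": f"prototype anything computational in: {task}",
        "critic": f"list the weakest assumptions in: {task}",
    }
    # ...runs its ad hoc "company" in parallel, then synthesizes the results.
    results = await asyncio.gather(
        *(worker(role, sub) for role, sub in subtasks.items())
    )
    return await call_model("orchestrator", "synthesize: " + " | ".join(results))

if __name__ == "__main__":
    print(asyncio.run(orchestrate("estimate global demand for desalination by 2035")))
```

The point of the sketch is only the shape: one coordinating process, many disposable workers spun up for as long as the task requires, and a synthesis step at the end.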
This “company” idea is what I suspect OpenAI means when they refer to concepts such as “superintelligence.” Many people, however, believe that “superintelligence” instead refers to the creation of a singular mind that is, by sheer virtue of its overwhelming intellect, able to do things like discover the nature of dark matter or figure out how to manufacture nanomachines. I doubt this is possible on philosophical grounds, as explained above: I believe everything, including knowledge, is constructed in a dynamic process that necessarily requires real-time feedback from the world. That need for feedback means that the value of pure intellect has serious limits, if only because waiting on the world slows the thinking process down to something like a human timescale.
It probably is the case that, at sufficient scales, these systems will be able to perform a human lifetime’s worth of thinking about an intellectual problem in a matter of minutes or seconds—by parallelizing the task to many, near-zero marginal cost digital agents.
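The arithmetic behind that claim is crude but easy to check. Every number below is an illustrative assumption of mine, not a measurement.

```python
# Back-of-envelope arithmetic for "a lifetime of thinking in minutes."
# All figures are illustrative assumptions, not measurements.

lifetime_thinking_hours = 40 * 50 * 40   # ~40 years x 50 weeks x 40 hrs/week = 80,000 hours
agent_speedup = 10                       # assume each agent "thinks" 10x faster than a person
num_agents = 100_000                     # near-zero marginal cost lets us parallelize widely
parallel_efficiency = 0.1                # most intellectual work does not parallelize cleanly

effective_hours_per_wall_clock_hour = num_agents * agent_speedup * parallel_efficiency
wall_clock_hours = lifetime_thinking_hours / effective_hours_per_wall_clock_hour

print(f"lifetime of thinking: {lifetime_thinking_hours:,} hours")
print(f"wall-clock time at these assumptions: {wall_clock_hours * 60:.0f} minutes")
```

Even with a heavy penalty for how poorly deep intellectual work parallelizes, the assumed numbers compress eighty thousand hours of thinking into under an hour of wall-clock time.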
This doesn’t scare me as much as it seems to scare other people, though to be clear I believe it will be transformative. Yet it seems obvious to me that there are clear limits on what mere thinking can achieve. Almost no human creation worth its salt was made with pure thought. And I know people who have thought about problems their entire lives and made no real progress on them. Indeed, that’s probably true for all of us.
And even for the company-like superintelligence the AI industry currently seems to be driving at, there will be limits. Most interesting things require the expenditure of capital to accomplish, particularly if you accept the premise that one cannot think one’s way to greatness. Even if AI causes the cost of many goods to drop, there will remain some bottleneck: whatever AI cannot accelerate becomes relatively scarcer and more expensive as everything around it gets cheaper. This is Baumol’s Cost Disease, and I see no reason for it to go away. If AI leads to a significant surge in scientific discovery, I expect there to be a similar surge in demand for capital. That means interest rates will go up, perhaps by a lot. In my view, the biggest threat from AI facing the West is not AI-generated deepfakes or bioweapons, not war with China, and not the loss of the “human element.” It is a debt crisis provoked by this surge in interest rates.
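To give a sense of the stakes, here is a back-of-envelope calculation. The figures are round numbers chosen purely for illustration, not forecasts.

```python
# Crude illustration of why a rate surge matters for a heavily indebted government.
# Round illustrative figures only, not forecasts.

debt_stock_trillions = 35.0    # rough order of magnitude for US federal debt
avg_rate_today = 0.03          # assume ~3% average interest on the stock
avg_rate_after_surge = 0.07    # assume AI-driven capital demand pushes it to ~7%

interest_today = debt_stock_trillions * avg_rate_today
interest_after = debt_stock_trillions * avg_rate_after_surge

print(f"annual interest at 3%: ${interest_today:.2f} trillion")
print(f"annual interest at 7%: ${interest_after:.2f} trillion")
print(f"increase: ${interest_after - interest_today:.2f} trillion per year")
```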
Regardless of the debt spiral, though, my point is that there will be some bottleneck, some scarce resource, over which competition is waged. And that, too, will slow down the “superintelligence.”
Regulatory Approaches
AI is a general-purpose technology that, like electricity, will become so embedded into the world within a couple of decades that it will be nearly impossible to imagine life without it. Most technologies of that kind, especially information technologies, are not regulated in a centralized manner.
Regulating computers in this centralized way would be a radical change to the status quo, and as a dispositional conservative who sees much to like about the way computers are regulated today, this strikes me as an immensely costly decision. We probably shouldn’t do it, and at the very least, we should be damn sure it’s what we want before we do it. We should not be rushed into action by a small group of anxious zealots who have been on a jihad against AI since the early aughts.
Nor should we let the generalized society-wide anxiety guide us overmuch. We live in a republic for a reason. Our founders were deeply skeptical of the raw will of the people, and we should be too. Democracy is an ingredient, but the difference between our system and the pure democracy many seem to imagine we have is the difference between a glass of Burgundy and a bottle of Everclear. The alcohol is great in moderation—poisonous in excess.
Instead, we should take a careful approach. We should reason rigorously about what capabilities we think will be beneficial to us in light of technological and other developments, and then build those capabilities with alacrity. Those capabilities can include technical standards for AI, a coherent way to reason about AI liability, public computing infrastructure, digital public infrastructure for combatting deepfakes, a better military, fresh approaches to scientific inquiry, and much, much else.
Regulation is likely to lead to path dependency that may well exacerbate the problems we are trying to solve and lock in currently entrenched economic actors (not just in technology—the political economy of AI regulation is a nightmare). Regulating digital information (AI models) is very hard. Rather than starting there and assuming the massive costs that come with it, we should mitigate AI risk by regulating other parts of the risk production chain. For example, rather than regulating biological foundation models that might help with some steps of making a virus, thereby grinding academic use of those models to a halt, policymakers should regulate the machines used to synthesize nucleic acids, which are far easier to regulate by virtue of being physical, rather than digital, goods.
We should remember that, for the exact same reasons that we needed to invent neural networks in the first place—the difficulty of modeling complex aspects of reality with top-down, first-principles rationality—laws governing anything, and especially a general-purpose technology, are immensely difficult to write well. One should therefore be very skeptical of our ability to write a regulatory regime from scratch for AI. Because of the realities of public choice economics—that government agencies tend to grow in power and size over time whether or not they should—we should be skeptical of anything that pushes in the direction of centralized, top-down regulation.
Instead, we should prioritize the application of existing regulation, law, social norms, and precedent—the collective wisdom humans have constructed over time—to AI. It will not be easy, nor will it be a universally satisfying process. But evolution never is, and evolution is far preferable to sudden regime change. Evolution never happens from the top down.
In general, I expect the good uses of AI to massively outweigh the bad, though I also expect that the bad use cases will be quite tangible, and that the good ones will be more ambient.
None of this is to deny that the technologies I am describing will be a revolution in human affairs, or that they may provoke unpredictable emergent consequences, including a financial crisis or a war. Nor is it to deny that there is a realistic scenario in which many humans get left behind. Such is the human predicament of this era. Such is life on the boundless sea.