“I love those who do not know how to live, except by going under, for they are those who cross over.”
Friedrich Nietzsche, Thus Spoke Zarathustra
I often marvel at the fact that writing is a technology. It is awe-inspiring, to me, that something so elemental, so intrinsic to the human experience, is something humans invented. There was a time when no one wrote anything down. What must that have been like?
There’s the obvious fact that knowledge was inescapably evanescent, always contingent, always subject to the vagaries of human memory. But the nature of knowledge itself must have been different too. A thought is transformed by the ability to look at it. Before writing, complex causal chains must have been exceedingly difficult to establish. With no way to keep track of one’s previous thoughts, all but the simplest deductions must have been out of bounds. Logic and reason themselves, therefore, must have been profoundly constrained.
These higher-dimensional thoughts were impossible without writing, so practically speaking, they didn’t exist until the technology of writing made them thinkable. Writing, then, altered the human mind. It is, quite literally, a thought technology, a colonization of the human mind by the human mind, a partnership between human biology and man’s quest for knowledge and order, an early and radical step in the fusion of tool and toolmaker. The people who invented writing couldn’t have realized that they were doing all of that, but it’s obvious in retrospect.
Lots of people liken AI to electricity. I have done that myself many times; it’s a valuable analogy. But these days, I find myself wondering whether AI might also be a bit like writing.
The Promise and the Peril
In the not-so-distant future (perhaps really not-so-distant), everyone in the world will have the opportunity to use a personal AI assistant with expertise in every technical field and the ability to use a vast range of digital tools. Soon after, they’ll have something like a ten-person company at their fingertips. Then a 100-person company, then a 1,000-person one. Each time, the virtual headcount will grow, and so too will the intelligence of each “employee.” I believe there will be limits on how far this intelligence can grow, though I do not dispute that each “employee” may be smarter than the average human in at least some novel ways. The quantity, on the other hand, is likely to be far less bounded. And as Stalin is purported to have said, “quantity has a quality all its own.”
If this pattern sounds a little like Moore’s Law (the heretofore inexorable improvements in the performance of semiconductors), that’s because, at root, it is Moore’s Law. Advances in AI are inextricably tied to advances in semiconductors. It has become harder to sustain Moore’s Law, because it’s not a law of nature; it’s a law that humans enforce. But remember that AI systems themselves will be able to help design semiconductors (in fact, they already do this) and, at some point, even create better, more computationally efficient AI. No exponential continues forever, of course: If governments don’t intervene, the laws of physics or economics eventually will. But we have a lot of room to run.
Every human, then, will be like the CEO of a vast organization that has technical expertise in every domain of human knowledge, never sleeps, has no internal politics, and can dynamically reconfigure itself to suit any task it is assigned. One’s limits will no longer be determined by the technical skills one possesses, but instead by the quality and number of the goals one seeks to pursue—and by perennial things such as charisma, ambition, looks, access to energy, and personal connections. Our AI systems will work for us, but in so doing, they may interact with one another in complex ways that may well escape our detailed understanding.
This will change the world, of course, but I believe it will also change our minds. Like writing, I believe it will unlock higher dimensions of cognition, ultimately allowing us to operate at an elevated level of abstraction. The trouble is that nobody knows exactly what that means.
All this sounds a little chaotic, doesn’t it? How will we avoid losing control of things? Won’t we be overwhelmed? Won’t the world rapidly spiral out of control? The future I am describing feels deeply alien, almost impossible, yet all facts in evidence suggest it is possible, and perhaps even close. OpenAI thinks it is possible within a decade. Whether “close” means 15 years or 5 matters little: something here does not compute. It is this puzzle that keeps almost everyone who thinks seriously about AI up at night. The real elephant in the room is not whether AI itself will seek to harm humanity—it’s how on Earth humanity is going to solve this bizarre conundrum.
Part of the answer must lie in the reality that these capabilities will become less extraordinary than they sound to us now once everyone possesses them. My mentor and friend Bob Paquette reminded me recently that Alexis de Tocqueville once wrote:
“When all the prerogatives of birth and fortune are destroyed, when all professions are open to all, and when one can reach the summit of each of them by oneself, an immense and easy course seems to open before the ambition of men, and they willingly fancy that they have been called to great destinies. But that is an erroneous view corrected by experience every day. The same equality that permits each citizen to conceive vast hopes renders all citizens individually weak: It limits their strength in all regards at the same time that it permits their desires to expand.”
Tocqueville’s wisdom notwithstanding, though, it remains a baffling thing to think about. At some point, one imagines, there will need to be artificial limits placed on this exponential growth in intelligence, but when, and of what kind? Does Sam Altman know? Joe Biden? Gina Raimondo? Josh Hawley? Donald Trump? Xi Jinping? The European Union? The National Institute of Standards and Technology? I have seen no evidence that any of them do.
Keep in mind also that, at least with our current capabilities, any hard limits on this technology would ultimately entail a commitment by governments to employ violence anywhere in the world to police the distribution of software—a good that can be replicated infinitely at zero marginal cost. If that sounds infeasible, consider that the other options would be to place all AI data centers around the world under the control of a global governing institution of some kind, or to simply destroy them. To say the least, these limits would necessitate an unprecedented degree of government surveillance and coercion over economic and personal activity.
I suspect that we are, for better or worse, going to roll the dice—for now, at least.
What is to be done?
It’s hard to imagine how humans will maintain our collective grasp on the world without a suite of technological breakthroughs distinct from artificial general intelligence, though no doubt related to it. One of those breakthroughs, I believe, will be the accelerated fusion of man and machine.
If I am right, we may be approaching the next stage not just of technological evolution, but of human evolution. OpenAI CEO Sam Altman calls it the merge, and he thinks it’s inevitable (or at least he did in 2017). “My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like,” he wrote. “It would be good for the entire world to start taking this a lot more seriously now.”
The merge is almost certainly not going to be a singular event or a binary choice; like most things, it will be a complex spectrum. Rather, the merge denotes a step-change in the relationship between human beings and their most sophisticated tools, something far more visceral than the man-machine fusion we have known to date. For regulatory, logistical, ethical, medical, cybersecurity, and economic reasons, it may well require nothing more than non-invasive devices. Such devices will be my focus, though one should be clear-eyed: they are not the only path.
Like electrification, the merge will be the aggregate product of innumerable sets of decisions made by people all over the world. We probably do not face an either-or decision, but it is helpful sometimes to think of it as one so that we can better comprehend the tradeoffs at play.
You don’t hear Altman, or other leading figures in AI, talking much about the merge these days (though Altman did recently in an interview with Joe Rogan). Perhaps he feels that it’s simply too much to put on the public’s plate. Most people I know in the Valley think that the merge is likely, but similarly feel uncomfortable discussing it too much. After all, it’s a big leap.
Will we want to take that leap? Will we land on our feet?
In some sense, we’re already midair. The fusion began long ago—perhaps even before writing—and like a lot of things, it’s accelerated over time. It seems appropriate, in a Biblical sense, that a company called ‘Apple’ has led the charge during my lifetime, first with the Macintosh and then the iPhone.
Apple is seen as an AI laggard, but don’t count them out as an actor on this stage. As I write this, I’m wearing a pair of AirPods Pro. I’m not listening to anything, yet the silicon inside my AirPods is hard at work. It’s sampling noise from the outside environment dozens of times per second, adapting in real time to soften any sudden, harsh noises and filter out mechanical hums and other white noise. It can distinguish between a person shouting, a jackhammer, and a siren, making decisions about how to mix each one into its rendering of my surroundings. I find the silicon’s version of my auditory reality more pleasant than what my ears create on their own. Apple calls this “Adaptive Transparency.” I call it a new co-producer of the movie that goes on in my mind.
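For the technically inclined, here is a toy sketch of the kind of adaptive gain control such a feature might perform: measure the loudness of each short frame of audio and turn down the sudden, harsh ones. It is purely illustrative, with made-up parameters, and bears no resemblance to whatever Apple actually runs on its silicon.

```python
# A toy, purely illustrative sketch of adaptive loudness control.
# All numbers are invented; real earbuds do far more sophisticated DSP.
import numpy as np

SAMPLE_RATE = 16_000          # samples per second (assumed)
FRAME = 160                   # 10 ms frames, i.e. ~100 "looks" per second
THRESHOLD_DB = -20.0          # levels above this get softened
ATTACK, RELEASE = 0.5, 0.05   # gain falls quickly, recovers slowly

def soften(audio: np.ndarray) -> np.ndarray:
    """Attenuate sudden loud frames while passing quiet ones through."""
    out = np.copy(audio)
    gain = 1.0
    for start in range(0, len(audio) - FRAME + 1, FRAME):
        frame = audio[start:start + FRAME]
        rms_db = 20 * np.log10(np.sqrt(np.mean(frame ** 2)) + 1e-12)
        # Target gain: unity below the threshold, reduced above it.
        target = 1.0 if rms_db < THRESHOLD_DB else 10 ** ((THRESHOLD_DB - rms_db) / 20)
        # Smooth the gain so it reacts fast to spikes and relaxes gently.
        rate = ATTACK if target < gain else RELEASE
        gain += rate * (target - gain)
        out[start:start + FRAME] = frame * gain
    return out

# Example: a quiet tone interrupted by a loud burst (a stand-in for a jackhammer).
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
signal = 0.05 * np.sin(2 * np.pi * 440 * t)
signal[6000:8000] += 0.8 * np.sin(2 * np.pi * 1000 * t[6000:8000])
processed = soften(signal)
```

The specifics don’t matter; what matters is that a machine is making dozens of small editorial decisions per second about what I hear.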
Take a look also at the soon-to-be-released Vision Pro, Apple’s “spatial computing” platform. In the broadest sense, it’s the same idea as the AirPods, but for what you see rather than what you hear. Specifically, the ambition of the Vision Pro is to create a digital twin of your visual reality using a networked array of 13 cameras, six microphones, and many other sensors. It is a wearable machine that, in real time, takes in photons, deconstructs them, and reassembles them into a convincing 3D rendering of the world around you that can then be used as a computing environment. “12 millisecond photon-to-photon latency,” as Apple describes it, in what somehow feels simultaneously like science fiction and a profound understatement.
One of the many implications of this technical capability is that these renderings can be recorded, not just observed in real time. Apple has named these recordings “spatial videos.” John Gruber of Daring Fireball described them as such:
“Nothing you’ve ever viewed on a screen, however, can prepare you for the experience of watching these spatial videos, especially the ones you will have shot yourself, of your own family and friends. They truly are more like memories than videos. The spatial videos I experienced yesterday that were shot by Apple looked better — framed by professional photographers, and featuring professional actors. But the ones I shot myself were more compelling, and took my breath away. There’s my friend, Joanna, right in front of me — like I could reach out and touch her — but that was 30 minutes ago, in a different room.
Prepare to be moved, emotionally, when you experience this.”
Are these not neural interfaces of a kind? Are they not pointing toward the fusion of the physical and digital worlds, of man and technology?
Like Apple, most of the companies working on merge-ish technology don’t tend to talk about their work as such. It’s still a bit too far down the road, and at the end of the day, it’s just a little off-putting. The work, however, is proceeding apace.
Electroencephalography (EEG) is the science of recording and interpreting electrical signals produced by neural activity. Compared to alternative methods like magnetoencephalography (MEG—the same idea as EEG, but for the magnetic fields generated by cognition), or fMRI (functional magnetic resonance imaging), EEG is coarse. It can be distorted by other electrical signals from your body or the environment. It struggles to pick up activity in the deeper regions of the brain. On the plus side, though, it’s relatively inexpensive and portable. EEG is already in shipping products, like headbands used to monitor fatigue in workers using heavy machinery. Could it one day be miniaturized into, say, a pair of AirPods? The folks in Cupertino are, it would seem, on the case. Obviously, others are pursuing this notion as well.
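To make the idea concrete, here is a minimal sketch, assuming a 256 Hz sampling rate and a single channel of synthetic data, of how raw EEG is commonly distilled into usable features: the power in a handful of frequency bands. Real pipelines involve many channels, artifact rejection, and much more.

```python
# Illustrative only: simple band-power features from one synthetic EEG channel.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz

def band_power(eeg: np.ndarray, low: float, high: float) -> float:
    """Approximate spectral power of one channel in the [low, high] Hz band."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)
    mask = (freqs >= low) & (freqs <= high)
    return float(np.sum(psd[mask]) * (freqs[1] - freqs[0]))

# Synthetic stand-in for ten seconds of a single EEG channel.
rng = np.random.default_rng(0)
eeg = rng.normal(size=FS * 10)

features = {
    "theta (4-8 Hz)":  band_power(eeg, 4, 8),    # tends to rise with drowsiness
    "alpha (8-12 Hz)": band_power(eeg, 8, 12),   # relaxed, eyes-closed states
    "beta (12-30 Hz)": band_power(eeg, 12, 30),  # active concentration
}
print(features)
```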
EEG devices produce a crude signal compared to other forms of brain activity monitoring. AI, however, excels at extracting surprising amounts of meaning from crude signals. AI systems can detect respiratory diseases by listening to a single cough. They can detect type 2 diabetes by listening to a patient’s voice. They can unravel ancient scrolls that were buried in volcanic ruins for nearly two millennia (it remains to be seen whether they can read them, but I am hopeful). AI has been used in combination with EEG readings to screen for cognitive dysfunction, analyze sleep states, and interpret a person’s emotional state.
Sometimes, this work resembles more literal forms of mind-reading. Researchers at Meta have recently used AI and MEG data to accurately guess what objects the subjects in their study were picturing. Variations on this theme have been accomplished by other teams. fMRI data has been used to decode language subjects thought about, but never spoke aloud. EEG, which is far easier to integrate into consumer hardware than fMRI, has also been used to decode language processing inside the brain, though in more primitive ways.
Brain monitoring allows devices to interpret what a user is thinking, either at a very high level (“the user is currently anxious”) or at a granular level (“the user wants to write the words ‘Hello world!’”). Neither is science fiction: Both things have been accomplished, though products with such capabilities have yet to penetrate the mass market.
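As a rough illustration of the “very high level” case, here is a sketch of a classifier that maps band-power features, like those computed above, to a coarse mental state. The data and labels are invented; real systems are trained on labeled recordings and are considerably more subtle.

```python
# Deliberately simplified: classifying a coarse mental state from synthetic
# band-power features. Everything here is made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Invented features per ten-second window: [theta, alpha, beta] band power.
relaxed = rng.normal(loc=[1.0, 3.0, 1.0], scale=0.3, size=(200, 3))
focused = rng.normal(loc=[1.0, 1.0, 3.0], scale=0.3, size=(200, 3))

X = np.vstack([relaxed, focused])
y = np.array([0] * 200 + [1] * 200)  # 0 = relaxed, 1 = focused

clf = LogisticRegression().fit(X, y)

new_window = [[1.1, 2.8, 0.9]]  # a window that looks "relaxed"
label = "focused" if clf.predict(new_window)[0] == 1 else "relaxed"
print(label, clf.predict_proba(new_window))
```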
Closer fusion between AI and human beings may make each more capable than they are on their own. One of the things that may limit AI is how much human knowledge is tacit: knowledge we rely on but cannot fully articulate. Tacit knowledge is a crucial part of how we accomplish almost everything. Can we truly achieve AGI without it? Are we just supposed to write down all of our tacit knowledge for our AI systems? Are we even consciously aware of all of it? Fusion may enable AI systems to make use of our tacit knowledge, thereby making them more useful tools.
An AI system with sufficient access to brain data might be able to articulate a user’s desires better than a user can. Think about an essay you might want to write—think about how it manifests in your mind. There are probably high-dimensional connections between the themes of that essay and other topics you’ve thought about, some of which you may not even be fully cognizant of. Imagine an AI system that could understand those connections, turn the whole web of connections into a prompt, and have a next-generation AI model research and write the essay you have in mind—in a few seconds. How would this capability change the way you think? How would it change the way you spend your time?
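In code, the thought experiment might look something like the sketch below. The web of associations is written out by hand here, standing in for what a brain interface might one day surface automatically, and draft_essay() is a hypothetical placeholder for whatever text-generation model you prefer.

```python
# A thought experiment in code. The associations dictionary is a hand-written
# stand-in for the web of connections a brain interface might surface, and
# draft_essay() is a hypothetical placeholder, not a real API.
THEME = "writing as a technology that reshaped human thought"

associations = {
    "oral cultures": ["the limits of memory", "simple causal chains"],
    "the printing press": ["diffusion of knowledge", "the Enlightenment"],
    "AI assistants": ["higher levels of abstraction", "the merge"],
}

def build_prompt(theme: str, web: dict[str, list[str]]) -> str:
    lines = [f"Write an essay on: {theme}.", "Weave in these connections:"]
    for idea, links in web.items():
        lines.append(f"- {idea}: " + ", ".join(links))
    return "\n".join(lines)

def draft_essay(prompt: str) -> str:
    # Hypothetical: call whatever text-generation model you prefer here.
    return f"[model output for a {len(prompt)}-character prompt]"

print(draft_essay(build_prompt(THEME, associations)))
```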
It is said that some people speak in sentences, while others speak in paragraphs. Perhaps the merge will allow us all to communicate, even think, in still larger units of meaning.
Neurofeedback, which engages the brain proactively rather than merely observing it, is already shipping in some products too. Where brain monitoring is about understanding and acting on human cognition, neurofeedback is about enhancing it. Take the essay example from the previous paragraph: what if an AI system could make you aware of the subtle connections between different ideas in your mind? Making connections between disparate things is the essence of human creativity. What if we could mechanize that, not just in a discrete AI model but within our minds? We’re in the early stages, but people are working on it.
I’m not endorsing any specific technological path or company. I’m neither a neuroscientist nor an engineer, and I’m not equipped to say which of these is most likely to succeed as a consumer product. Taking an accomplishment in a lab to the market is a famously challenging task, and many promising technologies die somewhere between an academic paper and marketing copy.
The point, though, is that there are many options, and many well-funded groups are working on this problem (Apple, Google, and Meta all employ teams of neuroscientists, and there are dedicated firms such as Neuralink). It’s a question of when, not if, this technology will come to mainstream consumer products. I suspect it’ll be sooner than a lot of people think.
Maybe it sounds dystopian to you. Maybe it just sounds weird. I don’t necessarily disagree, but I am nonetheless cheering for the people developing this technology. Augmentation of our brains is likely the only way we’ll be able to keep up with the flood of information, invention, and complexity that AI will unleash.
It may not end with gadgets, either: Bioengineering of various kinds may play a role as well, though the safety and ethical considerations are obviously larger than those raised by the non-invasive devices I’ve chosen to focus on. (This seems like as good a time as any to mention that ByteDance, the team that brought you TikTok, has an in-house, AI-enabled drug discovery department. Caveat emptor.)
It might also be simpler than I am suggesting. After all, writing changed our brains quite profoundly, with no additional assembly required. Maybe all we need is some kind of breakthrough in user interface design. The merge does not necessarily mean that we will all become cyborgs, or anything similarly outlandish. Reality often ends up being both stranger and more mundane than we expect.
Whatever technology or combination of technologies proves most fruitful, we have no idea whether it will allow us to keep pace with AI; but we may well lose our primacy if we do not try. The implications of this, obviously, are profound in many dimensions. I want to focus on what this might mean for our laws and our government.
The Role of the State
What does this mean for the future of governance? Quite a lot in the long term, but frustratingly little today. It’s not currently possible—at least not for me—to articulate a detailed proposal for laws to pass or amend. At the same time, the merge helps to illuminate the stakes at play in present-day debates about AI policy and regulation. If an AI model were to be in direct dialogue with your neural circuitry, would that change the way you think about the regulation of AI models? Would you come to see regulation of AI as something closer to the regulation of writing or of thought than it is to the regulation of, say, an airplane? It thus becomes clear that this is a debate about something much larger than deep fakes, algorithmic bias, or misinformation. The price of bad policy is higher than just a hobbled new technology industry—though that alone is a high price to pay.
Whenever one finds oneself on new terrain, one has only experience and the wisdom of one’s ancestors on which to draw. Each of us, then, needs to think about the principles that we want to guide us. Here are mine.
I believe we are where we are—facing this unfathomable wave of potential and peril, and asking ourselves if we want to learn to surf—primarily because of free markets, individual freedom, limited government, the rule of law, private property, and the widespread diffusion of knowledge—because, basically, of the classically liberal ideas of the Enlightenment and the American founding. Contending with this wave will be hard, but I’d rather be doing that than grappling with how to orchestrate a managed decline of the human species.
At its core, the merge is an intensely personal matter. I trust people to make the right decisions (in the aggregate, at least), but my trust isn’t what matters: I believe it is self-evidently the individual’s right to decide for themselves with minimal intervention from the top. I doubt that the merge looks the same for everyone—in fact, I hope there is a great diversity of outcomes. For this reason, I am deeply skeptical of regulation, especially early on; the more regulated parts of our society tend to be the ones where options are fewest, and where path dependency is hardest to overcome.
Thinking about the merge invites us to confront the limits of the state head-on. Don’t think about an idealized or abstract state—think about the one we have now. Beyond basic product and public safety matters, how involved do you want that group of people to be in normative decisions about the use of this advanced technology? Is the provision of basic government services going so well that policymakers have free time to think about the technological destiny of mankind?
I’m not sure any government, no matter how legitimate, is really the dispositive authority figure on subjects such as these, but I’m quite sure that the United States federal government of 2024 is not. I want government to be a productive part of this process, but to do that it must not pretend that it possesses a unique moral authority on these issues.
Setting aside the personal decisions, there can be little doubt that the mechanisms of government itself will need to be updated in light of the merge, yet we have only faint ideas about how. Policymakers, at this early stage, should seek to create knowledge rather than rules, to use their convening powers for insight rather than jumping to oversight.
In this regard, a good starting point would be for senior government officials to stop bragging about the fact they don’t understand the technology they are trying to regulate. “Who wants to work on tech policy if you actually have to understand how these microscopic things work? But you don’t,” said Bruce Reed, a senior advisor on technology policy to President Biden and a key figure in the drafting of the administration’s recent Executive Order on AI. This is the attitude of a monarch, not a public servant, and such arrogance has no place in the adult conversations of a democratic republic. We have little time for decadence.
Where specifically should we direct that insight-gathering process? For me, a few things are top of mind.
Because of how intimately these models will be interwoven with our day-to-day cognitive labor (regardless, by the way, of whether the merge ends up being a useful or realistic concept), I believe that code and mathematics should be considered (within limits) First Amendment-protected self-expression, and that we need robust private property protections for one’s computing tools. But what should the limits of those protections be?
The merge is also an important part of why I believe that we need stronger legal protections for our personal data—it’s not just web browsing history or GPS location I’m thinking about, but data about what is going on within one’s mind. What do subpoenas look like in a world rich with this kind of data? What about warrants? What should evidentiary rules be in courts of law?
Finally, I hope there is a way for people to opt out of the merge, either in part or altogether, for medical, familial, religious, and other personal reasons. Just as how one merges is a matter of basic human liberty, so too is whether one does so. This is the thing that worries me most, because I’m not sure if it’s possible. I doubt that it is possible through market forces alone: markets are, at the end of the day, evolutionary processes, and those who merge are likely to outcompete those who do not—perhaps comically so. Thus it would seem, in the long term, that government intervention of some kind will be required to protect those who elect to opt out. I hope this is something government takes seriously whenever it comes to the fore. I hope that our political leaders do not get distracted by trivia, as they so often do with matters of technology’s role in society.
Aside from these principles, though, it’s difficult to speculate about the specifics of something that is still so nebulous. Indeed, that’s exactly what I criticize many people in the AI safety community for doing. Tempting though it may be, we should not make policy based purely on our reckons. And in any case, we have nearer-term issues to address.
At the same time, the merge will always be on my mind in some way or another. It’s devilishly difficult to predict how much time we have before this becomes an urgent issue. It may be decades. It may be just a few years. Like every technological progression, it will be a gradual process, though our understanding of the word “gradual” may change as the accelerando of human history beats on. Perhaps it is changing already.
Outlandish though the idea may seem, then, I agree with Altman: The concept of the merge is something that everyone should spend time seriously contemplating.
Does this make you want to run for the hills? It might not be time to put on your running shoes, but I’d keep them by the front door.
Nobody said progress would always be fun. Or as one of the protagonists of For All Mankind, the techno-optimist prestige TV show produced by Apple (yep), put it, “Progress is never free. There is always a cost.”
Enlarge the place of your tent,
Stretch your tent curtains wide,
Do not hold back;
Lengthen your cords,
Strengthen your stakes.
For you will spread out to the right and to the left;
Your descendants will dispossess nations
And settle in their desolate cities.
Do not be afraid.
Isaiah 54:2-4
Housekeeping: I said last week that this week’s post would be about AI bioweapon risk. I still plan to write on the topic. Lesson learned: don’t pre-commit to a timeline for something you haven’t finished drafting. I apologize. Also, I’m posting this on a Monday because I’m traveling for work this week; in general, I plan to publish later in the week.