This reads like a closing post for the year, but there will be at least one more post after this one in 2024.
I created Hyperdimensional nearly one year ago—on January 9, 2024. Since then, I have published here at least once per week, for a total so far of 58 posts (including this one). With a year of experience under my belt, I’d like to reflect on the work thus far and offer some thoughts about where this project might go next.
The Origins of Hyperdimensional
I created Hyperdimensional on a lark.
By the summer of 2023, I had decided that I wanted to go into AI policy as a full-time writer. The first half of 2023 was not a happy time in AI policy; doom narratives predominated, and shockingly broad assertions of state authority over the future of computing were on the table. My career in public policy had shown me the many ways in which state planning and regulation had sapped America of its ability to innovate in the physical world. I was terrified that AI would be used as a pretext for a similar extension of government power over the digital world, thereby robbing our country of its last major source of dynamism. This, I resolved, must be stopped.
Writing was a major transition for me, since my previous work at think tanks had largely concentrated on managing research teams, not conducting research and writing myself (though you cannot successfully manage a research team without understanding the research at a deep level).
I had spent a decade observing think tank scholars, seeing what worked and what didn’t. I had a good sense of what my differentiated point of view would be on AI policy topics. So I made a plan: I would write a paper about AI, political theory, and philosophy for a small political theory conference, outlining my core views. I would present it, refine it, and adapt it into something that would be useful to policymakers. I did all that in September of 2023, and this eventually became one of my first public essays about AI. So far, so good.
I continued to write op-eds, slowly building my writing portfolio in established media outlets while I worked a full-time management job at a think tank. I leveraged my network to discuss jobs with think tank executives. I had one prospective role that looked extremely promising—a dream job, especially considering that I had no formal background in policy writing. Again—so far, so good.
But then the plan fell apart. I was turned down for that job on January 8, 2024 (no hard feelings, by the way!). Frustrated, I realized that the only way to advance a career in writing about AI policy was to… write about AI policy—weekly, in a medium I alone controlled. Hyperdimensional was born less than 24 hours later.
It has been more successful than I expected. The publication began with 40 subscribers, and as I write today there are well over 3,200. But the numbers aren’t what matter most: the audience of Hyperdimensional is exceptional. Every week, readers write to me with feedback, questions, and thoughts. They never cease to impress me with their insight, and it blows my mind that people of such intellectual caliber take time out of their week to read my work. Whether you have written to me or not, I want to thank all of you from the bottom of my heart for reading these essays.
Please know that I will always write for you—my job is to give you my honest apprehension of what is happening in this fast-moving and chaotic world of AI. My perspective is limited, and probably wrong in at least some important ways, but I will always see it as my duty to tell you the truth as I understand it.
To that end, I think I owe my readers an honest self-reflection on my work so far. What follows will be self-critical, but I don’t mean to sound entirely negative about my work; on the contrary, I am proud of the work I’ve done here, and believe I got quite a bit of analysis right. It’s just that patting myself on the back isn’t an interesting exercise for me, or, I suspect, for you. There is much more value in identifying the interesting ways in which I erred.
SB 1047
A great deal of my writing here has focused on critiquing or otherwise commenting on public policy proposals written by others. Many of you likely came to know my work through my commentary on SB 1047, which was largely published here. Even Dario Amodei, CEO of Anthropic, cited my writing on SB 1047 as something that influenced his own views on the bill. My writing on the bill also produced some of the first posts on Hyperdimensional, and the first to gain significant attention. I think it is fair to say that I would probably not have had the modest success I’ve enjoyed so far if SB 1047 had never existed, or if I had never written about it. It seems logical, then, to focus my self-critique on this body of work.
I was convinced, way back in February 2024, that the eye of the state had turned toward AI, that fears about AI would be used to justify state intervention at massive scale into digital life, and that the AI safety community was at best composed of useful idiots for this intrusion, and at worst was actively enthusiastic about the prospect. I had been a silent observer of AI safety discussions on places like LessWrong and the AI Alignment Forum for years, so I did understand their concerns, and was even sympathetic to some of them.
Principally, though, I viewed the AI safety community as an enemy. And too often, I treated its members like one—especially on X. I contributed to, and perhaps even helped to create, an unhealthy partisan divide on SB 1047. I presented the choice between SB 1047 and “not SB 1047” as a stark, civilizational fork in the road.
I believe this was a mistake, and it is one I have been trying to correct in recent months. I’ve come to understand that while the AI safety community is, as my friend Richard Ngo put it, “structurally power-seeking,” it is not the enemy I once apprehended it to be.
I started to realize my mistake in May. For example:
I am here to tell you that the current debate over AI, no matter its flaws (and there are many), is among the most elevated and nuanced I have seen during my career in public policy. I am here to tell you that my intellectual “opponents”—those who worry immensely about AI catastrophic risks—are, by and large, honest and good faith people. I believe they are wrong, that they are sometimes anti-empirical, and that their proposed policies could be ruinous, but that is beside the point.
It took a while after this post for that new attitude—my own attitude!—to filter into all of my writing. I maintained an overly combative posture toward SB 1047 for months to come. By the end of the summer, I had developed friendships with numerous people from the AI safety world, including Daniel Kokotajlo, with whom I coauthored a piece on what we believe could be the source of a fruitful compromise: transparency in frontier AI.
Ultimately, I remained a critic of SB 1047 to the end—though I noted that the final version of the bill was much improved. I stand by that position in every way. Still, I often wonder whether a different outcome might have been possible if I had taken a less adversarial approach a bit earlier in the process. Probably not: I do not want to overstate my influence, which is small. But I wonder, nonetheless.
To be clear, I do believe the state has turned its eye toward AI, and I do worry that fears about AI will be used to attempt large-scale state intrusion into economic activity and private life. Those concerns about state power are healthy. But I was wrong to make SB 1047 the dominant prism through which I expressed those concerns. SB 1047 was the most prominent AI policy debate of 2024, but it was not, in the final analysis, the best exemplar of my worries.
What I Missed
Instead, those concerns are best typified by another bill that was quietly working its way through a different legislature alongside SB 1047: Colorado’s SB 205, the algorithmic discrimination-based framework I have now covered extensively. I was tracking the Colorado bill, and I wrote a short op-ed criticizing it just as it headed to Governor Polis’ desk (indeed, my op-ed was published the day he signed it into law—encapsulating just how late to the game I was).
At the time, I wanted to believe this framework could be the source of a workable compromise; I even forced myself to write mostly kind words about SB 2, a similar bill in Connecticut that did not pass. I now retract those words; I was entirely wrong. I erred, I think, because I had an unsophisticated taxonomy of AI policies: I believed AI policy could be model-based (bad!) or “application”-based (good!). We could regulate the models themselves, or the uses of the models.
This was naive and overly simplistic. I soon realized my mistake, and within a month or so began developing more nuanced ways of categorizing different approaches to AI policy. I have since come to realize that while model-based regulation is indeed often problematic, “use-based” regulation is often more so—especially when it is preemptive, forcing users of AI to go through bureaucratic hoops before they can even use AI. Regulatory mechanisms like this have a tendency to result in the same kind of process-based vetocracy that characterizes, say, environmental permitting in the United States.
This is exactly what Colorado’s discrimination-based regulation does. This is the projection of state power I feared the most. The law exposes developers and businesses seeking to use AI to sweeping liability under vague “disparate impact”-based theories of “algorithmic discrimination,” while creating a massive compliance burden for those businesses. Sadly, it is going to be replicated in the coming months in Connecticut, Virginia, Texas, California, and likely a few other states. One version of the bill (the draft proposed in Texas) even creates a centralized regulator with broad power to ensure the “ethical and responsible deployment and development of AI”—far broader powers than the centralized regulator envisioned by SB 1047.
Here is how I summarized my thoughts on the bills in Pirate Wires last month:
None of these government actions are sexy. None of them are about Skynet, or killer robots, or exotic bioweapons. Instead, they represent perhaps the ugliest thing American domestic statecraft has to offer: the immune system of the status quo — not just the bureaucracy, but the attendant mix of lobbyists, lawyers, compliance experts, consultants, and auditors that undergird and profit from the bureaucracy — seeking to devour a powerful new general-purpose technology. The same ideas that make it near-impossible to build new things in the physical world, coming now for the digital. And it is, at least for now, a bipartisan effort.
In some ways, this is simply the result of bureaucratic momentum. Many of the ideas in these documents predate generalist AI models, and the machine that is known as “the policymaking community” moves far too slowly to keep up with the AI industry. The identity politics-inflected priorities, too, seem far more threadbare today than they would have even two years ago, when they were first conceived. But no matter: the bureaucrats have settled on their frameworks, and so now, the ship barrels ahead, engines at full tilt.
But in other ways, this is something more base: a power grab, plain and simple. A circuitous process like this, far more than a singular bill that draws the world’s attention, is how you assert control over the most promising emerging technology in a generation. You do it before it’s popular, before people or businesses will notice too much. You do it quietly, behind closed doors in working groups and workshops and steering committees with trays of stale brownies lining the wall and a hotel-branded water bottle at every seat. You do it with the active participation of every “stakeholder” you can think of — except for the startups too new to earn a seat at the table, or the ones that haven’t even been founded yet. You do it through endless paragraphs of meandering gobbledygook, through flow charts and Venn Diagrams. This is how the technocracy rolls. This is how you kill an industrial revolution.
It has been remarkable to me to see this discrimination-based framework come together. I do not see American individuals or businesses calling out for this. The “woke” mindset behind these bills went out of style two years ago. The EU AI Act, which these bills resemble in important ways, is widely derided in America as an example of Europe going too far with regulation. Nobody wants these bills, except for the bureaucracy itself and the Lovecraftian apparatus of consultants, lawyers, lobbyists and academics who are adjacent to the bureaucracy. It is a regulatory regime that is practically assembling itself. The system wills it, not the electorate.
This is, ultimately, why I find myself skeptical of claims by some that we need AI regulation so that AI development can be “democratic.” There is nothing especially “democratic” about these bills. I don’t believe anyone is intending to be antidemocratic; it’s just that a modern technocratic state is only kind of a democracy. Yes, Americans have the right to vote. But we also live beneath a massive apparatus of state power that has grown up over decades and that, increasingly, our elected officials seem able only to poke at rather than steer for themselves. No one is in control of the system. It is not obvious to me that handing power over to this system is the path to “democratic” AI development.
During a discussion about these issues, a friend sent me this quote from Vaclav Havel:
No matter what position individuals hold in the hierarchy of power, they are not considered by the system to be worth anything in themselves, but only as things intended to fuel and serve this automatism. For this reason, an individual's desire for power is admissible only in so far as its direction coincides with the direction of the automatism of the system.
It’s not that the system is bad per se. It’s that it has grown beyond anyone’s control, including the people who “run our country.” It’s not that we should do away with regulation or bureaucracy—these things are necessary. There are people trying to improve bureaucracy, and I applaud these efforts; indeed, I count myself among the reformers. But if we do succeed, I suspect our success will be marginal. This is not an indictment—it is instead a recognition of just how expansive and firmly entrenched this system has become.
There is more to this system than just “regulation.” Nor is it necessarily a matter of left or right, Republican or Democrat. Like any system, it is built upon processes, logic, language, epistemics, mentalities, and worldviews. And like any system, it is in these things that its true power lies. It is in these things that one discovers the system’s unending desire to bring progressively more domains of human thought and culture and activity under its jurisdiction. It is here that its insatiable appetite for power is laid bare.
I hope that my writing can shed light on these subtler aspects of the thing we call “bureaucracy” or “government.” But this system is a hard thing to tell you about, because from where I stand, it looks like a gigantic blue bird soaring above, stretching 10,000 miles in every direction, and it is all too easy to mistake it for the sky.
Conclusion
There is another area I wish I had focused on more in 2024: a proactive agenda for AI policy. It is easy enough to criticize; saying what you think should be done is a different thing altogether. I did some of this throughout the year, but not enough. In 2025, I intend to write much more about what AI policy should look like. I welcome criticism and support alike for these ideas.
2025 will be a year of putting pen to paper on frontier AI policy. Compromises on policy will likely be necessary, but I intend not to compromise on the things that matter to me the most. I will try my best to get the analysis and the details right, but I will surely err in at least some ways. No matter the errors, I hope you’ll trust me that I am always doing my best to give you the truth as I see it. In my analysis and writing, my first loyalty will always be to you, the reader.
It seems fitting to close with some words I wrote back on January 9, about what matters most to me:
Technological progress can only be slowed so much, the diffusion of knowledge cannot ultimately be stopped, and we cannot through sheer force of regulatory will invent a general-purpose technology that is impossible to abuse. Transformational change is a near certainty; the question is whether it will be for good or for ill.
This newsletter will describe those transformational changes. It will argue that we should embrace many of them, tolerate some of them, and combat others. By and large, it will accept those changes as inevitable facts. Yet my work will be conservative at its core. America has reached this point of profound technological potential because of freedom of thought and speech and action, because of private property and free enterprise, because of limited and republican government. Indeed, I believe that we require these things to reach a positive outcome. I will argue for the restoration of these qualities in the many cases where they have begun to atrophy. No matter how much about our society and our government must change in the coming decades, we must work diligently to preserve the things that matter most. That is our central challenge, and it will be the primary concern of my writing.
Thank you all again for taking the time out of your weeks over this past year to read my work. I cannot say how much it means to me. Merry Christmas, happy holidays—and talk to you again soon.