I’m going to experiment with a new format. My newsletter will now start with things that caught my attention in the past week, followed by the main article. Thanks again for being a subscriber and (I hope!) a reader.
Quick Hits
Google’s Gemini, a family of multimodal AI models, has been refusing to produce historically or demographically accurate depictions of reality: when asked to depict families in 1820s Germany, medieval British kings, the American founders, or modern-day Swedes, it insists on a “diverse” representation, so an otherwise ignorant person would walk away believing, for example, that Germans and Swedes are mostly non-white. In at least one instance it also refused to depict a white person at all, while complying with equivalent requests for other races. This is a good example of the AI safety/bias mentality gone awry, and yet another demonstration of why we need a competitive marketplace and open-source AI.
ChatGPT had a schizophrenic meltdown on Tuesday, resulting in almost Joycean prose in response to normal user prompts. OpenAI says this is now fixed and has provided an explanation for what went wrong. This is a positive development compared to prior ChatGPT problems, where OpenAI has been much less forthcoming with technical explanations.
Apple introduced a new, supposedly quantum-proof encryption method for its iMessage platform. I hope this comes quickly to other encrypted data on Apple products. I would also note that the move comes as the Senate is close to passing the Kids Online Safety Act, which could create substantial security and privacy problems (I won’t go into it, but see here). While Apple enhances its already robust protections of personal privacy, the federal government is (inadvertently, I think/hope) working to weaken digital privacy.
Onto the main event.
AGI, Political Unrest, and How to Deal with It
Artificial general intelligence (AGI) is getting close to fruition. The US government, already facing a serious legitimacy crisis, will struggle to deal with the upheaval wrought by AI, particularly on such a short timeline. Onerous AI regulations, in addition to being extremely challenging to enforce, would make the US less competitive internationally, damage our economic growth, and potentially impose serious limitations on the personal liberty of Americans. Because of this, regulating AI could, especially at this still-early stage, exacerbate the legitimacy and competence problems of the federal government. Incorporating AI into existing law will be challenging enough, and that path is probably the better way to accomplish most ‘AI regulation.’
It is probably going to be a turbulent decade no matter what, but what can governments (federal, state, and local) do to help stabilize things without counterproductive regulation? I believe they can start by focusing on providing the basics that are currently in poor shape: public safety, sound budgets, and reliable, abundant energy. Let’s dive in.
AGI is Probably Close
There is no widely accepted definition of ‘AGI,’ but for these purposes let’s use OpenAI’s definition: “highly autonomous systems that outperform humans at most economically valuable work.” For the time being, I would qualify that as “economically valuable work that takes place in the digital realm.”
The leading AGI labs (OpenAI, Anthropic, etc.) no longer publish as much revealing research as they used to (Google’s DeepMind being something of an exception), but based on what we do know, it seems safe to conclude they are close to striking breakthroughs:
Google/DeepMind recently announced Gemini 1.5 Pro, a new language model able to reason over huge inputs (up to 10 million tokens, or about 7.5 million words) all at once. This is similar to how a human reader can hold a few paragraphs or pages of text in active ‘working memory,’ except that Gemini 1.5 Pro can do it for, say, a three-hour movie or the entire United States tax code. And it can do so with far fewer of the confabulations we have grown used to in generative AI models.
Two recent papers, also from DeepMind, fused LLMs with deterministic reasoning systems to achieve new capabilities in mathematics. In essence, the LLM generates ideas, which are then assessed by a deterministic evaluator program. The ideas that don’t work are discarded, while those that do are fed back into the LLM for further ideation and iteration (a rough code sketch of this loop appears below). Applying this broad approach, DeepMind researchers developed FunSearch, which broke new ground on frontier mathematical problems, and AlphaGeometry, which achieved top-tier performance on International Mathematical Olympiad geometry problems. These are big leaps in advanced reasoning, and I suspect they are just the tip of the iceberg.
OpenAI’s Sora model for video generation has made critical advancements in 3D world simulation, which requires a physics-based world model (how good this model really is remains to be seen; the model is not publicly available). This grounded understanding of the real world is seen by many researchers as a necessary step toward AGI.
OpenAI employees have been more or less explicitly stating that AGI is close, with one even calling it ‘imminent.’
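To make the generate-and-evaluate loop described above concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the propose and evaluate callables stand in for an LLM call and a deterministic checker, and the code illustrates the general pattern rather than DeepMind’s actual FunSearch or AlphaGeometry implementation.

```python
# Illustrative sketch of an LLM-plus-deterministic-evaluator loop.
# All names here are hypothetical; this is not DeepMind's actual code.

def generate_and_test(problem, propose, evaluate, rounds=10, keep=5):
    """Ask a model for candidate solutions, score them deterministically,
    discard failures, and feed the best candidates back for further iteration."""
    best = []  # (score, candidate) pairs that passed evaluation
    for _ in range(rounds):
        # 1. The LLM proposes new candidates, conditioned on the problem
        #    statement and the best candidates found so far.
        candidates = propose(problem, [cand for _, cand in best])

        # 2. A deterministic evaluator scores each candidate; ideas that
        #    fail (score of None) are discarded.
        for cand in candidates:
            score = evaluate(problem, cand)
            if score is not None:
                best.append((score, cand))

        # 3. Keep only the top-scoring candidates to seed the next round.
        best = sorted(best, key=lambda pair: pair[0], reverse=True)[:keep]

    return best
```

The key design point is the division of labor: the language model supplies creative candidates, while a conventional program provides ground truth about which of them actually work.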
Many people assumed that the first AGI would be roughly equivalent to an average person. These developments open the possibility that the first AGI will in fact far exceed the vast majority of humans, or even all humans, in some important respects. The first AGI will have its flaws and rough edges; all first versions do. But it may be closer to ‘superintelligence’ than many had been expecting.
If I am right, all of the disruption and tumult wrought by AI could happen on a more compressed timeline than I had expected, say, one year ago.
To make matters worse, the speed with which we seem to be approaching this technological milestone suggests that it will be duplicable by others as compute becomes more plentiful. “Fast following” seems inevitable here, though it is important to understand that OpenAI is not simply shoveling more and more data into larger and larger compute clusters: beyond having arguably the finest concentration of talent in the field, they have also invested serious effort into amassing outstanding datasets. Still, the first AGI will be an existence proof, and other entities, both within the US and in other countries, will surely aim to emulate it as soon as they can.
I want this technology to exist and to be widely dispersed throughout the world. I created this Substack partly to push back on regulations that I worry will constrain AI before it can find its place in society. But I have also consistently said that AGI/ASI is likely to be the most challenging, and potentially destabilizing, technological transformation humanity has ever faced.
People who write and think about AI often say they feel a bit like people warning about COVID did in January 2020, when the virus was obviously real but its impact was not yet apparent to most people. AI writers and technologists have been telling people to “buckle up” for a while now, most especially in the past year.
But really, buckle up. It’s going to get bumpy.
The Challenge for Government
Policymakers, especially at the federal level, might find themselves in a particularly tough spot: government is facing a serious legitimacy crisis. Already, half of the electorate seems unwilling to recognize the legitimacy of presidents who come from the opposing political party. The right does this explicitly, via election denial, references to the “Biden regime,” etc. The left does it somewhat more subtly: through theories that the 2016 election was decided by Russian interference, social media algorithms, or both, for example, or through discursive tactics that attempt to define the scope of “acceptable” policy beliefs (every nationally successful conservative politician, for example, is always ‘far right,’ never simply right wing, whether it is Javier Milei or Viktor Orban).
In part, this legitimacy crisis is deserved. Government often fails to deliver on its obligations. Many US cities have seen crime increase in recent years, driven in part by city governments, prosecutors, and police departments collectively deciding to weaken various aspects of law enforcement and criminal justice. The resulting rise in criminality in America’s largest cities is hard to capture through numbers alone: shootings and homicides increased, to be sure. But, to give one personal example, many shelves in chain stores near my home in Washington, DC are empty because of near-daily robbery. I am not sure how well this shows up in the statistics, since store employees do not consistently report these crimes: they know the police will not respond. Some local governments have responded effectively, but the fact that this increase in crime was partly the predictable result of government policy is itself telling.
At the federal level, the government’s persistent unwillingness and/or inability to secure the southern border has been widely observed. Our defense industrial base is unable to produce munitions in sufficient quantity for a war we are not even directly fighting. Every political thinker with major influence in the American founding would agree that a government unable to provide for public safety is a government that lacks legitimacy. John Locke, Montesquieu, Washington, Madison, Hamilton—all of them concur on this point.
Because these legitimacy issues are widely understood by the public (though perhaps not articulated as such), government is particularly susceptible to crises and other shocks. Perhaps it will be a war or a pandemic. Perhaps it will be a slow-moving crisis such as widespread blackouts caused by the increasing fragility of our electrical grid, or a fiscal crisis caused by our unsustainable federal debt. Or perhaps it will be some combination of these things.
Regardless of the specifics, a crisis could very well happen while AI is transforming the labor market (potentially resulting in at least temporary mass unemployment), radically empowering individuals with capabilities that were previously available only to well-resourced corporations, and further disrupting our collective truth-finding procedures. That does not sound like a recipe for political stability.
Policymakers speak often about the dangers of AI bias, misinformation, concentration of power, and malicious use. I suspect they are genuinely concerned about these things. Some, however, understand the threat they face to their legitimacy and their status, and use these high-minded concerns to conceal their desire for self-preservation.
Regardless of the motivation, it is far from obvious what broad-based regulation of AI models is both feasible and desirable. I have spent a lot of time elsewhere arguing the specifics of this, but I will very briefly summarize my views: I am in favor of reporting requirements for large training runs, safety standards for models used in specific high-risk industries, and efforts to keep advanced AI hardware out of the hands of foreign adversaries for as long as possible. Beyond that, I would suggest that government do everything it can to establish the exact location of as many AI data centers around the world as possible: catastrophic risks such as mass cyber- or bio-attacks are acts of war, so we should be prepared to use violence to stop sufficiently malicious conduct. I do not support much else, because I believe most of the potential harms from misuse of AI are already illegal, and because it is impossible to legislate a positive outcome from AI when no one knows what a positive outcome is.
Also consider the fact that the debt issue, perhaps the thorniest policy problem facing America, will be far easier to address if economic growth over the next decade exceeds our recent historical norm. While the economic potential of AI is debated, it is hard to think of a near-term technological breakthrough with more economic promise than AI. Viewed in the context of America’s fiscal problems, it is questionable whether we can really ‘afford’ AI regulation if the cost of regulation is reduced productivity gains from AI. One can imagine a world in which reduced growth is a luxury we can afford—it just isn’t the world we live in.
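To make the growth point concrete, here is a back-of-the-envelope illustration. All figures are hypothetical round numbers chosen for simplicity, not actual projections; the point is only that, with deficits held constant as a share of the economy, faster growth mechanically bends the debt-to-GDP curve downward.

```python
# Back-of-the-envelope illustration of why growth matters for the debt burden.
# All figures are hypothetical round numbers, not real fiscal projections.

def debt_to_gdp_after(years, debt, gdp, deficit_share, growth_rate):
    """Project the debt-to-GDP ratio assuming a constant annual deficit
    (as a share of GDP) and a constant nominal growth rate."""
    for _ in range(years):
        debt += gdp * deficit_share  # new borrowing this year
        gdp *= 1 + growth_rate       # the economy grows
    return debt / gdp

# Start with debt equal to GDP and deficits of 6% of GDP per year (hypothetical).
slow = debt_to_gdp_after(10, debt=1.0, gdp=1.0, deficit_share=0.06, growth_rate=0.04)
fast = debt_to_gdp_after(10, debt=1.0, gdp=1.0, deficit_share=0.06, growth_rate=0.07)
print(f"Debt/GDP after a decade at 4% nominal growth: {slow:.2f}")
print(f"Debt/GDP after a decade at 7% nominal growth: {fast:.2f}")
```

Under these made-up assumptions, the slower-growth path leaves the debt ratio higher than where it started, while the faster-growth path leaves it lower; the same logic is why forgone productivity gains from AI would carry a real fiscal cost.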
The better path for policymakers to take, it seems to me, would be to focus on the basics, the things that almost everyone agrees are government responsibilities and that governments, at least in theory, know how to do. Is our country building energy infrastructure to power the next several decades of US economic growth? It is not. Is the federal government in a sound financial position? Not remotely.
The good news is that these are eminently addressable problems, unlike the technical, legal, and feasibility challenges associated with AI regulation. We can reduce regulation on nuclear fission reactors and invest in geothermal energy development, and we can reform entitlements. And by doing these things, government can better uphold its end of the social contract.
There is no shortage of ideas for addressing each of these problems. Better yet, the scholars who work on these issues don’t have to blanket their research with caveats like “further research is required to determine if our proposal is legal or possible,” as AI policy writers often do. The catch: these problems, like many others, become harder and more painful to address each day policymakers kick the can down the road. The time for action is now.