Introduction
On Monday, the Chinese AI company DeepSeek released r1, its reasoning model and a competitor to OpenAI’s o1 (and, soon, o3). Alongside the main r1 model, the company also released distilled versions as small as 1.5 billion parameters—small enough to run locally on a modern iPhone.
r1 should be seen as a reaffirmation of something that has long been obvious: open-source frontier AI is going to be relevant for the foreseeable future, if not much longer, and it is going to be an important vector in the broader technological and economic competition between the US and China. Open source is, therefore, an important part of American competitiveness and national security. Some people will probably still try to deny this reality, but r1 makes their job even harder.
I prefer to think of the DeepSeek release as an invitation rather than a threat. America needs to think bigger and more boldly about the things our AI systems—closed and open alike—might make possible at home and around the world.
No longer should we pretend that open source is something that can be willed away through regulation or top-down control. Nor should we indulge the simplistic idea that open source involves “giving away our technology to China.” And no longer should we pretend that a fierce competition with China is something we can “avoid” due to fears of “AI arms race dynamics.”
By the end of 2025, the capabilities of frontier AI systems will start to push past human performance, the economic potential will become palpable to all, and America will have probably established the foundations of its AI policy regime. It’s time to set aside the platitudes and simplistic arguments. Things are starting to become serious, and the gravity of our moment requires more from us than we have so far given. I include myself in that criticism.
But it is also time to get specific. Supporters of open-source AI need to transcend vague proclamations about “decentralization of power.” What does a world with powerful open-source AI look like, really?
I’d like to give you my own perspective on this—a fragmentary and incomplete glimpse, to be sure, but a start nonetheless. Far more people should be writing concrete, positive expositions of what the world could look like under all manner of AI scenarios. Whether you like my ideas or not, I encourage you to think of this as an invitation for you to write your own vision.
Everything is a Computer with a PhD
What if every electronic device around you had common sense? What if every electronic device around you were not “smart” in the sense of being connected to the internet, but “smart” in the human sense? This could enable all kinds of interesting things: What if your smoke detector could call 911 itself when it detects a problem—while also calling you and cross-referencing what it is sensing with security camera footage in your home?
What if my TV “knew” what I was watching because a private, fully local AI system was running on it? What if that system could offer live commentary, educating me in the finer points of basketball strategy using examples from the game I am watching in real time? What if every computing device around you (keeping in mind that steadily more things are becoming computing devices) had a PhD in everything?
We can only dimly grasp how this would transform familiar home appliances and other devices, and what entirely new kinds of devices it might enable. Sure, these devices could make API calls to AI systems in the cloud, but at a cost to cybersecurity, privacy, and flexibility. Fully local AI has real benefits already, even with the still-primitive LLMs we have today. Those benefits will only grow from here.
The web of intelligent devices I am describing is simply impossible without open-weight AI models being broadly available; otherwise, every device manufacturer would have to train a custom AI system from scratch. These needn’t be the best models to be useful. For anything the local model could not handle, the AI on board could always make a request itself to a much larger and more capable cloud-based model.
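To make that pattern concrete, here is a minimal sketch of the local-first, cloud-fallback arrangement. The stubbed model calls and the confidence threshold are illustrative assumptions on my part, not any real device API:

```python
# A minimal sketch of the "local-first, cloud-fallback" pattern.
# Both model calls are stubs; a real device would wire them to an
# on-device runtime and a remote endpoint, respectively.
from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    confidence: float  # 0.0 to 1.0, self-reported by the local model


def ask_local_model(prompt: str) -> Answer:
    """Run a small distilled model entirely on-device (stubbed here)."""
    return Answer(text="(draft answer from the on-device model)", confidence=0.42)


def ask_cloud_model(prompt: str) -> str:
    """Escalate to a much larger cloud-hosted model (stubbed here)."""
    return "(answer from the large cloud model)"


def answer(prompt: str, threshold: float = 0.7) -> str:
    """Prefer the private, local model; escalate only when it is unsure."""
    local = ask_local_model(prompt)
    if local.confidence >= threshold:
        return local.text  # nothing leaves the device
    return ask_cloud_model(prompt)  # a deliberate, visible exception


if __name__ == "__main__":
    print(answer("Why is my smoke detector chirping?"))
```

The point is the routing logic: the private, on-device model is the default, and the cloud is an explicit exception rather than the other way around.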
AI can be thought of as a substrate of intelligence—supremely high-quality intelligence—that will eventually permeate nearly everything you touch, in one way or another.
Open-Source AI as a Governance Technology
AI is not just a personal consumer technology—it is also a governance technology. By radically lowering the cost of intelligence, it will enable countries of all sizes and budgets to provide world-class public services. And because AI is an industrial output, it means that governance itself—or at least a significant part of it—could one day also be thought of as a product or a commercial service. Government services become disentangled from the political process and instead become something offered on a competitive global marketplace—dare I say it, software-as-a-service.
Imagine an AI-enabled urban first responder service. Its patrolmen might be dense networks of cheap, autonomous drones, or perhaps one day, robots. They could patrol for crime, identify and help put out fires, get cats out of trees, and do many of the other things we traditionally associate with first responders in a first-world country. Each would of course run local models capable not just of autonomous navigation and locomotion but also of planning and reasoning at the level of very intelligent humans (or greater). Undergirding it all could be a centralized “brain”—perhaps a much larger model.
You can imagine that larger model being a closed-source AI system accessed via API, but you can also imagine that a developing country’s government might not want to send all of its data to a foreign company. It also might not be allowed to, since many governments around the world have data security policies that prohibit such things. There is a very good chance, then, that the only way any of this could work—from the super-smart local models on drones to the centralized brain-in-the-cloud—is with open-source models.
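For the curious, a toy sketch of that two-tier design might look like the following. The severity scale, the triage rule, and the stubbed central model are all assumptions for illustration; the property that matters is that escalation goes to a self-hosted, open-weight model inside the deploying government’s own boundary:

```python
# A toy sketch of the two-tier first-responder design: small on-board
# models handle routine incidents, and anything harder is escalated to
# a self-hosted central "brain" rather than a foreign API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Incident:
    description: str
    severity: int  # 1 (cat in a tree) .. 5 (structure fire)


def onboard_triage(incident: Incident) -> Optional[str]:
    """The drone's local model resolves routine incidents by itself."""
    if incident.severity <= 2:
        return f"handled locally: {incident.description}"
    return None  # too complex for the on-board model


def central_brain(incident: Incident) -> str:
    """A larger open-weight model, self-hosted in-country (stubbed here)."""
    return f"dispatch plan drafted for: {incident.description}"


def respond(incident: Incident) -> str:
    return onboard_triage(incident) or central_brain(incident)


print(respond(Incident("cat stuck in a tree", severity=1)))
print(respond(Incident("apartment fire, third floor", severity=5)))
```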
Perhaps you will object and call what I am describing “Skynet.” But consider that in many developing countries, the police are feared; in others, they are mocked. In very few do they exhibit the comparative competence of emergency services in the first world (but for the record, American police and fire departments would be improved by such a service as well).
There is no doubt, though: such a capability will surely be possible, and it very well could become Skynet. Whether it does or does not—whether services of this kind enrich or deplete the human experience—will depend upon the values of the people who deploy those technologies, and of the people who build them.
Consider also that many developing countries do not have universal primary public education in the way that we do in the United States. Perhaps one day, there will be AI schools that serve as a form of universal public education in the developing world. Think much more broadly than “a chatbot children can ask educational questions.” Think about a system that can produce entire curricula, diagrams, movies, educational games, and much more, perfectly tailored for the needs of each child—on the fly.
Any system like that, too, will fundamentally depend on the values of the people who deployed it and the people who built it. Whose technology—and hence, whose values—should inform the tools that could one day serve as the foundation for the education of billions?
Trust and Security
Economic, legal, and social life often requires a trusted intermediary, and such entities are in short supply. Perhaps a company’s ownership is considering selling to a prospective buyer, and the buyer wants to verify information about the business and its operations. The sellers, though, are often rightly wary of disclosing too much proprietary information to the buyer; after all, the buyer could just learn all the company’s secrets, go off, and start his own venture.
An AI system could one day sit between the buyer and seller, verifying information about both parties and reporting back. If it were carefully designed, it could reveal only the results of its analysis, without divulging the underlying proprietary information. And because it may well be smarter than any human, its analysis could be quite worthwhile indeed.
Whether you’re buying a home, negotiating a contract, or engaging in a legal dispute, intermediaries of this kind are often essential. There is no reason that AI systems could not one day do all of this intermediation, and perhaps even arbitrate what we now think of as legal disputes in civil court.
Often, fruitful collaboration between businesses is impeded by data-related obstacles—even more so as a vast data privacy regulatory regime proliferates across many US states. Medical researchers and economists alike spend enormous amounts of their time prying the data they need for their work out of highly regulated entities with complex data access and retention policies. The necessary agreements can take years to negotiate, and very often they prove simply impossible. The amount of research that never happens—and knowledge that is never created—because of these barriers is staggering. The same dynamics are often at play even among government agencies attempting to share information with one another, and this ends up being a practical hindrance to all sorts of positive public policy outcomes.
Again, an AI system could sit between the various parties to this data transaction, performing analysis on the data (since it will, after all, be quite a gifted quantitative analyst) at researchers’ request while guaranteeing the owner of the data that the researchers cannot access the underlying sensitive records. The researchers could have access to the model’s chain-of-thought (or whatever fancier monitoring mechanism we have by then) to ensure it is doing precisely what they desire. And if an unscrupulous researcher tried to reverse engineer the data out of the model with leading questions, remember that the model is very intelligent and adversarially robust, and it will know what the researcher is trying to do.
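Here is a deliberately simplified sketch of that intermediary pattern. The vetted query set and the toy records are my assumptions; in the real version, the gatekeeping would be done by the model itself, monitored through its chain-of-thought, rather than by a hard-coded allowlist:

```python
# A toy sketch of the data intermediary: the analysis runs inside the
# data owner's boundary, and the researcher receives only aggregate
# results, never the raw records.
from statistics import mean

SENSITIVE_RECORDS = [  # never leaves the data owner's environment
    {"age": 34, "outcome": 1.2},
    {"age": 51, "outcome": 0.7},
    {"age": 47, "outcome": 0.9},
]

ALLOWED_QUERIES = {"mean_outcome", "count"}  # stand-in for the model's judgment


def intermediary(query: str) -> float:
    """Answer aggregate questions; refuse anything row-level."""
    if query not in ALLOWED_QUERIES:
        raise PermissionError(f"query {query!r} could expose raw records")
    if query == "count":
        return float(len(SENSITIVE_RECORDS))
    return mean(record["outcome"] for record in SENSITIVE_RECORDS)


# The researcher sees only the result of the analysis.
print(intermediary("mean_outcome"))
```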
Just as with the other use cases, one can readily imagine that many participants in exchanges or processes like this might not want to share their data with a closed model provider, and in some cases, might not legally be able to do so. Open-weight models, combined with novel private governance institutions we have yet to create, could be the enablers for the technology I am describing.
Conclusion
Imagine the soft power that might accrue to the companies and countries that build the most commonly used versions of each of these technologies worldwide.
Open-weight models are not without unique risks, particularly as the capabilities of models get stronger. Ultimately, someone with access to model weights can, with sufficient effort, disable any developer-created safeguards against misuse. Right now, “misuse” does not mean that much, because models are not yet that capable. But that could change. We do not know when, or whether, model capabilities will carry sufficient danger that open-weight models become problematic. Very few coherent policy responses to this problem have been put forth.
To a large extent, the answer is that society will need to be made more robust against AI-enabled threats—something I hope I have shown you that AI itself, and open-weight AI in particular, can help us achieve. I will be the first to tell you that this is an incomplete answer. More politically and intellectually challenging policy solutions are likely to be necessary. One thing I can assure you, however, is that “banning open source” is neither desirable nor, if we are being truthful, possible.
But our cultural obsession with risk has caused us to miss the immense opportunities here—opportunities that, if seized, could make the risks look small even if those risks do materialize. It is worth noting that the majority of the use cases I’ve imagined here—policing, legal services, data privacy, and education—are the target of US and EU regulation that assumes AI will always be a source of harm in these domains.
It is not that our policymaking class lacks imagination; they dream up exotic forms of hypothetical AI harm daily. It is that, owing to some strange civilizational sickness, some chemical in our intellectual water, very few of them seem able to imagine anything good happening.
Right now, only one American company stands a chance of making the open-weight foundation models needed to undergird the uses I have described: Meta. They should be applauded for their efforts to do so, but the truth is that one company is not enough. Even as we consider policy measures to incentivize developers to mitigate major foreseeable risks (which we should), I wonder if we also need to consider ways to incentivize more open-source models. What is stopping Anthropic and OpenAI, for example, from open-sourcing older models? The answer, in short, is probably something like “it’s not a priority internally” combined with “a ton of potential legal risk.” Should we try to lessen those legal risks as a starting point? After all, we know these models are safe to release as open weights.
Might there come a time when some kinds of open-weight models are “too dangerous” to release? Of course; anyone expressing certainty about that is fooling you or themselves. And if that happens, we may need to take policy measures to disincentivize open-weight models from being released. But if that day does come, my guess is that China and most other countries will have already taken draconian measures to limit the proliferation of open models. For now, though, we are on the plain of open competition, and I suspect that we are losing, at least with respect to open source.
It is naïve to think that simply having the best AI models is the path to technological supremacy. DeepSeek’s recent wave of impressive models shows that the moat there is too shallow to rely upon. But that does not mean there is no supremacy to be achieved through AI and related technologies. There is an entirely new worldwide substrate of digital intelligence—and a physical infrastructure to underpin it—that needs to be built. New institutions, both enabled and necessitated by AI, will need to be imagined, fought over, and brought to fruition. Trust will need to be established, and alliances forged. The United States is in pole position to lead, if only we choose to do so.
Those who build these things will thrive in the century to come. Those who fret and focus on writing down rules about their neuroses will flounder.
A note on smart appliances and other devices becoming AI-enabled: while I suspect this is a directionally accurate prediction, I expect that many appliance manufacturers will seize upon the trend and try to pivot to being AI/tech companies, akin to how Ford and GM have tried, and failed, to become electric car companies. Not for nothing are the leading EV companies tech-native.
"Imagine an AI-enabled urban first responder service. Its patrolmen might be dense networks of cheap, autonomous drones, or perhaps one day, robots. They could patrol for crime," the AI-citizen-arrest, what's not to like about that.