<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Hyperdimensional]]></title><description><![CDATA[A newsletter about emerging technology and the future of governance.]]></description><link>https://www.hyperdimensional.co</link><image><url>https://substackcdn.com/image/fetch/$s_!kZjN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f70956b-24b6-432b-81c4-dcfa4095ead7_1024x1024.png</url><title>Hyperdimensional</title><link>https://www.hyperdimensional.co</link></image><generator>Substack</generator><lastBuildDate>Sun, 19 Apr 2026 18:19:31 GMT</lastBuildDate><atom:link href="https://www.hyperdimensional.co/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Dean W. Ball]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[hyperdimensional@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[hyperdimensional@substack.com]]></itunes:email><itunes:name><![CDATA[Dean W. Ball]]></itunes:name></itunes:owner><itunes:author><![CDATA[Dean W. Ball]]></itunes:author><googleplay:owner><![CDATA[hyperdimensional@substack.com]]></googleplay:owner><googleplay:email><![CDATA[hyperdimensional@substack.com]]></googleplay:email><googleplay:author><![CDATA[Dean W. Ball]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA["New Sages Unrivalled"]]></title><description><![CDATA[On Mythos]]></description><link>https://www.hyperdimensional.co/p/new-sages-unrivalled</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/new-sages-unrivalled</guid><dc:creator><![CDATA[Dean W. Ball]]></dc:creator><pubDate>Wed, 08 Apr 2026 14:16:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!TO20!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e972b4f-e519-49b7-a52e-df415441c2b3_2497x3000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Columbia! Columbia! to glory arise,  <br>The queen of the world, and the child of the skies,  <br>Thy genius commands thee, with raptures behold,  <br>While ages on ages thy splendors unfold:  <br>Thy reign is the last and the noblest of time,  <br>Most fruitful thy soil, most inviting thy clime;  <br>Let crimes of the east ne&#8217;er encrimson thy name,  <br>Be freedom, and science, and virtue thy fame.</em></p><p><em>... <br><br>New bards and new sages unrivalled shall soar  <br>To fame unextinguished, when time is no more.<br></em>-Timothy Dwight</p><p>I stumbled recently on a painting I once loved as a young boy but had long since forgotten. &#8220;<a href="https://www.si.edu/object/girl-i-left-behind-me%3Asaam_1986.79">The Girl I Left Behind Me</a>,&#8221; painted by Eastman Johnson in 1872, depicts a young girl facing a storm. 
The wind blows her hair and dress straight back, yet she leans into it. She holds her little books to her chest for protection and plants her left foot defiantly down. She gazes across the landscape. Her face, seen only in profile, conveys both a sense of waiting in anticipation and of being sternly prepared for whatever may come. </p><p>The storm she faces is not on the verge of clearing. The dark atmosphere suggests that the storm will only worsen, and that it is coming right for the girl. Yet she stares that future down, with little more than her books to protect her.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TO20!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e972b4f-e519-49b7-a52e-df415441c2b3_2497x3000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TO20!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e972b4f-e519-49b7-a52e-df415441c2b3_2497x3000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!TO20!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e972b4f-e519-49b7-a52e-df415441c2b3_2497x3000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!TO20!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e972b4f-e519-49b7-a52e-df415441c2b3_2497x3000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!TO20!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e972b4f-e519-49b7-a52e-df415441c2b3_2497x3000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TO20!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e972b4f-e519-49b7-a52e-df415441c2b3_2497x3000.jpeg" width="512" height="615.032967032967" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8e972b4f-e519-49b7-a52e-df415441c2b3_2497x3000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1749,&quot;width&quot;:1456,&quot;resizeWidth&quot;:512,&quot;bytes&quot;:1873772,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.hyperdimensional.co/i/193576796?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e972b4f-e519-49b7-a52e-df415441c2b3_2497x3000.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TO20!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e972b4f-e519-49b7-a52e-df415441c2b3_2497x3000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!TO20!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e972b4f-e519-49b7-a52e-df415441c2b3_2497x3000.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!TO20!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e972b4f-e519-49b7-a52e-df415441c2b3_2497x3000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!TO20!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e972b4f-e519-49b7-a52e-df415441c2b3_2497x3000.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><strong>&#8212;</strong></p><p>For the last six weeks or so, at least one American company has possessed a tool that could damage the operations of critical infrastructure and government services in every country on Earth, including the United States. Within another six weeks or so, if not already, 2-3 American companies will possess this capability. Some time after that, perhaps not much time at all, adversaries of the United States&#8212;principally China&#8212;will possess tools of this magnitude. </p><p>The company I am referring to is Anthropic, and the tool they possess is called Claude Mythos. Researchers at the company <a href="https://red.anthropic.com/2026/mythos-preview/">have said</a> that the new model stands to fundamentally upend cybersecurity. At least, for the time being. They postulate that after a transitional period, the world will end up in a steady state where advanced AI benefits defenders rather than cyberattackers. Yet the transitional period could be a long and brutal storm, and we do not know what will break as it hits. </p><p>&#8220;The threat is not hypothetical,&#8221; they conclude. &#8220;Advanced language models are here.&#8221; </p><p>What we do next, both collectively and as individuals, will determine whether we can weather the storm. </p><p>&#8212;</p><p>What do the capabilities of Mythos mean, prosaically speaking? It&#8217;s hard to say, because I do not have access to it, and in all likelihood, neither do you. The model is not currently public, and may never be made public in its current form. 
But broadly speaking, if one takes Anthropic at their word, the model can conduct automated software vulnerability discovery with nearly superhuman performance in some domains. </p><p>The model can find security vulnerabilities in software, including software systems upon which modern civilization rests, that have eluded security researchers for years, and sometimes decades. The model has found thousands of vulnerabilities so far, most of which have not yet been fixed (for this reason, Anthropic has not publicized the exploits, but they have reported them to the developers of the software in question). An enormous range of consumer and commercial services&#8212;from banking to healthcare to education to AI itself&#8212;is plausibly implicated. </p><p>My model of modern software is that, if you look hard enough, you will find critical vulnerabilities. Looking hard, however, used to be expensive&#8212;only the best hackers in the world could do it, and their time was limited. With Mythos, the price of &#8220;looking hard&#8221; at software has plummeted, and it will get cheaper each month. </p><p>This is not wholly bad news; after all, &#8220;looking hard&#8221; at software is also how software gets improved. Mythos, and the similarly capable models from other companies that will soon follow, are in that sense among the greatest gifts to cybersecurity ever given. </p><p>Yet as things stand today, the world is deeply vulnerable. Every day, you rely on untold millions of lines of code maintained by a global population of millions of developers. It will not all be fixed tomorrow, or next month, or next year. The reality is that models of this capability level&#8212;and more capable&#8212;will almost certainly diffuse widely before all &#8220;critical&#8221; software is patched. How much damage will be done is anyone&#8217;s guess.</p><p>If you doubted whether AI systems might have object-level national security implications, now we have clear evidence. Some of the most capable and prized teams in the United States intelligence community do precisely the kind of work that Claude Mythos automates. The same is true of China. You may be inclined to believe this will all work out fine in the end, but it is simply no longer credible to contend that large language models carry no implications for national security, and therefore for government as a whole. </p><p>&#8212;</p><p>This has been a frustrating issue to discuss candidly for the past two years. The reason is that, in the adolescent period of AI policy and discourse that is now&#8212;I hope&#8212;coming to a close, taking AI risks seriously was considered uncouth. Speaking about how near-future models might have straightforwardly dangerous capabilities was enough to provoke suspicion: were you a secret &#8220;doomer&#8221; or Effective Altruist? Were you part of a grand conspiracy to achieve &#8220;regulatory capture&#8221; for the frontier AI companies? Were you trying to &#8220;ban open source&#8221;? These sorts of questions constrained debate and put blinders on a large number of otherwise-sane policymakers and other influential people. And these constraints, in turn, meant that one had to tiptoe around reality.</p><p>But I am done with tiptoeing now, and so should everyone else be. It is a great relief, albeit also a bit uncomfortable, to feel the biting winds against one&#8217;s face. </p><p>In that spirit, here are some things I believe to be true:</p><ol><li><p>Actors who are hostile to the U.S. 
will possess the capabilities of Mythos, if not better, within a year or two. We will not stop this through &#8220;nonproliferation&#8221; or some other clever regulatory scheme. We can only blunt the impact of this reality by strengthening our cyberdefenses rapidly. </p></li><li><p>Strengthening cyberdefense will require coordination among state and local government entities, private sector critical infrastructure operators, frontier labs, and the broader private sector, as well as the federal government. But even more importantly, it will require compute: data centers. In <a href="https://static1.squarespace.com/static/6624103c6e20f74a2d11eae5/t/69d661785a8ff73212aab79c/1775657342230/ROFR+Coalition+-+Dean+Ball+testimony+Final+copy.pdf">recent testimony</a> to the Federal Energy Regulatory Commission, I wrote about the urgency of speeding transmission siting to facilitate the buildout of supercomputing infrastructure for national security. Running massive fleets of automated software vulnerability researchers is precisely one of the use cases I cited in that testimony. In addition to speeding up the FERC process through administrative actions, we need permitting reform urgently. </p></li><li><p>Speaking of national security: The U.S. Department of War, and the federal government more broadly, are engaged in a lawfare campaign against Anthropic whose underlying motivations are deeply unclear and which <a href="https://www.hyperdimensional.co/p/clawed">attacks core American values</a>. Now, that campaign&#8217;s strategic wisdom looks worse and worse by the week. We are fighting a war against Iran, a highly capable cyberoffensive actor. It is inconceivable that the government can have a healthy relationship with the frontier AI industry while attempting to destroy what is arguably the field&#8217;s leading company. Anthropic and the Department of War must come to a truce, if not a resolution, as soon as possible, for the good of America&#8217;s national defense. </p></li><li><p>In the context of national-security-relevant cybersecurity capabilities, the key and salient difference between the United States and China is not our &#8220;innovation ecosystem,&#8221; but instead the simple reality that our firms possess the computing power to train and operate models like Mythos today, and theirs do not. It is that simple. China is prioritizing its efforts to develop its own compute manufacturing capacity, and a development like Mythos is likely to motivate it even further. The best way to disrupt this is a serious increase in <a href="https://www.thefai.org/posts/export-control-loopholes-chipmaking-tools-and-their-subcomponents">targeted export controls on semiconductor manufacturing equipment</a>, too much of which flows freely today from the U.S. and its allies to China. It is long past time for major effort here from Congress and the Trump Administration. </p></li><li><p>The utility of SB 53, which requires frontier AI companies to disclose their assessments of their own models&#8217; cybersecurity risks, is hopefully more apparent now. Some criticisms of that legislative framework have asserted that it attempts to control frontier AI or micromanage companies. But in truth, the framework rests on the notion that AI will <em>not</em> be controllable&#8212;that stopping the diffusion of potentially dangerous capabilities is impossible&#8212;and that therefore today&#8217;s &#8220;frontier&#8221; capabilities will be broadly dispersed within a short while. 
This is exactly why we need transparency about what developers see at the frontier: so that a broad range of societal actors can prepare their defenses against the developments forming there. </p></li><li><p>Today, Mythos is accessible only within Anthropic and to Anthropic&#8217;s chosen partners. Limited releases of this kind will likely be a growing trend because of both compute constraints and safety concerns. Mythos appears to be about five times more expensive to run than Opus, which was already not cheap, but for Anthropic the issue is not so much cost as it is allocating sufficient compute to serve Mythos to the public. This means that the best AI models of the future may be disproportionately, if not exclusively, used within frontier labs for their own purposes, which at least at first will be automated AI R&amp;D. These so-called &#8220;internal deployments&#8221; have motivated my own pursuit of transparency and <a href="https://arxiv.org/abs/2504.11501">private governance</a> frameworks, the latter being private organizations that would audit the safety and security posture of frontier AI companies, including their internal deployments. </p></li></ol><p>&#8212;</p><p>I <a href="https://x.com/deanwball/status/2041610761433174180?s=20">wrote on X</a> that Mythos means the training wheels are coming off in AI policy. Perhaps the Department of War&#8217;s effort to strangle Anthropic is, to use another metaphor, a sign that the gloves are off too. If the last month has made anything clear, it is that we are in a nastier, sharper, harsher, meaner era of AI discourse, policy, and&#8212;ultimately&#8212;of AI development and use. </p><p>I will be honest: I do not see how it is possible for Mythos-level capabilities to diffuse through the world without causing at least some significant security crises and economic disruption. And of course, this cycle of compute infrastructure buildout has only just begun; within a year or so, gigawatts of additional AI compute capacity will be online. </p><p>The pimply and ill-shapen adolescence of AI and AI policy has come to an end. The first maturity has now begun. </p><p>It is overwhelming, and it will only become more disorienting with time. As the events of the coming years unfold, I expect many people, including loved ones, will say to me and others involved in AI policy during the adolescent era, &#8220;couldn&#8217;t you have done something to stop this?&#8221; Maybe so. All I can say for myself is that I did everything I felt was prudent and possible.</p><p>There is, ultimately, no plan for how to contend with the era to come. There are no guardrails on the open plains. I am heartened by the knowledge that America has always winged it.</p><p>None of the young men who would become our founding fathers had much of an idea about what should be in our Constitution in the weeks leading up to the Constitutional Convention they had called. Young America faced seemingly irreconcilable structural tensions, and its founders had only the faintest idea of how they would solve them. They were armed merely with principles, knowledge, wisdom, and chutzpah.</p><p>Our country was born in improvisation, and Americans are often at our best when we are improvising with little more than principles, knowledge, wisdom, and chutzpah. America has always done well by leaning into the wind, even when it blew harshly in our face. When we are at our best, we stand defiantly against the storm. 
And our pursuit of greater knowledge, and of our founding ideals, is, in the final analysis, the only defense we have, our sole ballast against the gusts. </p><p>So be like the girl in the painting. Put your foot down, hold your wisdom to your chest, and stare down the storm. </p>]]></content:encoded></item><item><title><![CDATA[2023]]></title><description><![CDATA[Or, Why I am Not a Doomer]]></description><link>https://www.hyperdimensional.co/p/2023</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/2023</guid><dc:creator><![CDATA[Dean W. Ball]]></dc:creator><pubDate>Wed, 25 Mar 2026 13:39:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mLaj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49371abf-2579-47be-8114-3e0ca580af8b_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Dear readers, <br>Please accept my apologies for the few-week interruption to my publication schedule. The truth is that I needed to take a pause from writing given both the stresses of the past month and the extreme constraints that now exist on my time. I hope to get back to as close to a weekly article cadence as I can, but for the coming months I anticipate a routine but somewhat less predictable schedule. I will aim for quality above all else, and for the time being I may experiment with somewhat longer articles (my article length has gradually been rising in recent months, just as my publication cadence has slowed somewhat). </em></p><p><em>In general, it is hard for me to predict what I will write next, what length it will be, and how long it will take to write; I write when I know what I want to say. Longtime readers will know that I have always considered this project an experiment; this remains so today. Thank you for bearing with me as I continue to experiment. </em></p><p><em>Thank you also for the kind words and acts of the last few weeks in particular. This is a tremendous community of readers. Please know that I am honored to have you all as subscribers. I have some exciting stuff in the works, and I cannot wait to share it with you in due time.</em></p><p><em>With that, onto this week&#8217;s article. 
</em></p><p><em>-Dean</em></p><h4>Introduction</h4><p>Earlier this month I had occasion to be on the campus of Stanford University, a place I had not visited since I worked at the Hoover Institution&#8212;a think tank headquartered in the center of campus&#8212;two years ago. From 2022 to 2024, I worked principally out of Hoover&#8217;s office in D.C., but spent about a quarter of my year at Stanford. Though my work there did not focus at all on AI, it was through walks around Stanford and the broader Palo Alto area that I formed many of my foundational thoughts about AI.</p><p>I happened to be at the Hoover HQ on the day that ChatGPT was released, though by then I had been observing improvements in these things called &#8220;language models&#8221; for some time. Back in an earlier job, we had briefly tried to use GPT-2 in policy research for some basic classification tasks. It failed. By 2023, those classification tasks were trivial for models, and significantly more complex research tasks seemed within models&#8217; reach. By 2023, it had dawned on me that real AI&#8212;not &#8220;deep learning&#8221; as a modality of statistics but actual, honest-to-goodness artificial intelligence&#8212;might be on the horizon. And it was on walks on campus that I reflected on what this would mean for me, my career, my friends, and ultimately the human future.</p><p>What kind of a challenge was this task of &#8220;alignment&#8221;? How should we think about the risks of &#8220;misalignment&#8221;? Was AI a new thing under the sun, or was it consistent with the pattern of prior emerging technologies? Does AI break the existing Constitutional order of the United States, or does it merely challenge it?</p><p>It was not in 2023 that I first considered any of these questions about AI&#8212;I had been following the deep learning revolution for a decade by then. But it was in 2023 that I developed my initial attempts at mature answers to these questions.</p><p>I felt nostalgia and wistfulness on this most recent visit as I walked some of the very same routes I walked in 2023. I long for the days when this was all just an intellectual exercise&#8212;albeit a weighty one. Sadly, it no longer is. The stakes have grown higher, the rhetoric shriller, the terms of debate starker.</p><p>The ideas and intuitions I formed on those walks in 2023 played a major role in my decision in recent weeks to draw a firm line in the sand against some actions taken by the U.S. government against the AI industry. This was a tough decision for me to make, yet the government&#8217;s actions themselves are confirmation of many of the fears I first seriously contemplated on those tranquil walks.</p><p>If 2023 was a year of sowing, I suspect 2026 may well be a year of reaping. For that reason, I think it&#8217;s probably wise to write down, as compactly as I can, the basic viewpoint I sowed as I wandered around the cradle of Silicon Valley.</p><h4><strong>My Techno-Optimism</strong></h4><p>I approach all matters of AI policy with an intrinsically techno-optimist sensibility. 
This means that I believe the net effect of technology on human beings has been not just modestly but overwhelmingly good, on average, throughout our history. Indeed, the notion of human beings as having a <em>history</em>, as opposed to merely a past, is itself a technologically contingent idea, founded as it is upon the technology of <em>writing</em>. We refer to human beings who lived before the invention of writing as having existed in a state of <em>pre</em>history. &#8220;Human history&#8221; is not about our species <em>per se</em>, but rather about our species <em>only after </em>it had reached a certain threshold of technological sophistication. History is not about man but about <em>techno-man</em>.</p><p>Writing enabled all technology that came after its invention, but before it came language. Language is a funny thing, not <em>quite </em>a technology in the standard sense of that word. Humans did not &#8220;invent&#8221; it, for one thing. No pre-linguistic <em>Homo sapiens</em> sat down and &#8220;decided&#8221; to &#8220;design&#8221; a system of language. Indeed, <em>deciding to design a system of language is a train of thought that could only occur to a person who already had language</em>. Language emerged <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC10142271/">over countless years of utterances of increasing sophistication</a> until it crossed some threshold and became &#8220;language.&#8221; It seems that <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC1116565/">it adapted, like a symbiotic microbe, to our biology, and that our biology eventually adapted with it</a>. Put another way, language <em>happened </em>to people. The limited evidence we have suggests that <a href="https://www.dwarkesh.com/p/david-reich">the people to whom it happened quickly conquered all the people to whom it did not happen</a>. We are their descendants.</p><p>Thus when people ask me whether I am a &#8220;techno-optimist,&#8221; I almost want to reject the question. It&#8217;s like asking me if I am a &#8220;cooking-optimist.&#8221; Building and using tools is <em>the </em>differentiating characteristic of the human condition. The human technological tradition is one to which we are all heirs and over which we therefore must be stewards. I believe we owe it to our ancestors, who toiled so mightily that we might one day live, to build tools to enrich our lives and create order where once there was chaos. &#8220;Techno-optimism,&#8221; in my view, is not a <em>point of view </em>so much as it is a <em>duty</em> that we owe to both the humans who came before us and those who will follow us.</p><h4><strong>What kind of a technology is AI?</strong></h4><p>Language was powerful because it gave us a way to coordinate actions and crystallize knowledge. Writing, which would come much later, would be essential for crystallizing most technically complex knowledge. All tools we have built since then are manifestations of <em>knowledge</em>, much of which&#8212;though importantly not all of which&#8212;is written down somewhere. We have written down quite a bit since the dawn of written language, and it seems fitting that the next techno-human epoch is coming to us first in the form of large <em>language </em>models. 
<a href="https://www.hyperdimensional.co/p/where-do-we-stand">I have written before</a> that it is as though our knowledge is <em>itself </em>gaining the ability to act in our lives and as a character on the world-historical stage.</p><p>AI is a general-purpose technology matching, and probably exceeding, the significance of most prior general-purpose technologies. There is no such thing as a &#8216;normal&#8217; general-purpose technology&#8212;each has transformed human affairs in its own way, and there is no &#8216;normal&#8217; way in which human affairs get transformed. <em>The birth of a new general-purpose technology is intrinsically abnormal</em>, and probably in some impressionistic sense the level of abnormality correlates with the significance of the transformations wrought by the new technology.</p><p>For what it&#8217;s worth, I believe Arvind Narayanan and Sayash Kapoor&#8212;the authors of the paper &#8220;<a href="https://knightcolumbia.org/content/ai-as-normal-technology">AI as Normal Technology</a>&#8221;&#8212;would largely agree with the above assessment. It&#8217;s just that by &#8220;normal,&#8221; the authors did not mean &#8220;boring&#8221; or &#8220;predictable&#8221; but instead &#8220;falling into the overarching pattern of general-purpose technological transformation, which is actually inherently wild and unpredictable but which can nonetheless be matched to a pattern of invention and diffusion in which humans have influence and therefore broadly construed as &#8216;normal,&#8217;&#8221; though I do understand why the authors did not pick that for their title.</p><p>Rather than countering my view, I believe Narayanan and Kapoor are principally attempting to counter the view of some in the AI safety community that AI is like a &#8220;new species&#8221; or, even worse, like a &#8220;nuclear bomb.&#8221; In other words, the notion that there will come an AI model or system whose very existence fundamentally changes the conceptual architecture of the world in ways that will be both immediate and, because of the immediacy, not subject to human influence. <em>This </em>view is one I disagree with starkly. Because it is probably my central point of disagreement with &#8220;the doomers,&#8221; it is worth explaining in some detail, which is why this was the subject of the <a href="https://www.hyperdimensional.co/p/lets-talk-about-ai-x-risk">very first full </a><em><a href="https://www.hyperdimensional.co/p/lets-talk-about-ai-x-risk">Hyperdimensional</a></em><a href="https://www.hyperdimensional.co/p/lets-talk-about-ai-x-risk">article</a>. I had fewer than 50 subscribers back then, though, and hopefully both my views and manner of expression have matured somewhat since then. So let me give it another shot.</p><h4><strong>Why I am Not a Doomer</strong></h4><p>One common assumption (though less prevalent with time) among many people in &#8220;the AI safety community&#8221; is that artificial superintelligence will be able to &#8220;do anything.&#8221; Now, most people in this world are much too smart to say literally these words, and so it might be fairer to put my criticism this way: &#8220;many people in &#8216;the AI safety community&#8217; are way too willing to resort to extreme levels of hand-waviness when it comes to the supposed capabilities of superintelligent AI.&#8221; The tautological pattern of the AI safetyist mind is easy enough to recognize once you encounter it a few times: &#8220;Well of course superintelligence will be able to do <em>that</em>. 
After all, <em>it&#8217;s superintelligence</em>. And because superintelligence will obviously be able to do <em>that, </em>you must agree with me that banning superintelligence is an urgent necessity.&#8221;</p><p>Here are some concrete examples of what I mean. <a href="https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message">Eliezer Yudkowsky has claimed</a> that a sufficiently superintelligent AI system would be able to infer not just the theory of gravity, but of <em>relativity</em>, from first principles, simply by observing <em>a few still frames from footage of an apple falling from a tree</em>. Similarly, there is the Yudkowskian threat model that a superintelligence might be able to come up with <a href="https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/">a nucleic acid sequence that would then bootstrap molecular nanoengineering</a> that could then be used to take over the world, and indeed the universe.</p><p>While Yudkowsky has repeated this latter scenario numerous times in his long writing career, it appears in the same <em>Time Magazine </em>op-ed<em> </em>in which he famously argued that <a href="https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/">governments should be willing to bomb</a> &#8220;rogue&#8221; data centers that do not comply with a global ban on AI development. Though this latter claim draws all the attention, it is in fact the nanoengineering claim with which I disagree more fundamentally. In other words, <em>if </em>I agreed that an AI system might be able to bootstrap molecular nanoengineering overnight&#8212;that an AI system could go from something like humanity&#8217;s current state of knowledge about and capability in molecular nanoengineering to &#8220;fully realized molecular nanoengineering&#8221; in what amounts to an instant&#8212;<em>I would support banning AI development too.</em></p><p>But I don&#8217;t believe that&#8217;s the way the world works. More precisely, I don&#8217;t believe that is the way <em>intelligence </em>works. I define intelligence as the ability to extract patterns from the observation of data. He who can find patterns that better match the underlying data, and he who can do so faster, is usually smarter than he whose conjectured patterns match the underlying data less well, or who needs to spend more time looking at the data to find the same pattern (in other words, the smarter person is more <em>sample efficient)</em>.</p><p>It is worth noting that, by most accounts, humans remain vastly more sample efficient than deep neural networks (LLMs, for example, need to look at trillions of lines of code to become competent programmers; humans need far less). Many critiques of AI doom end there&#8212;&#8220;the systems aren&#8217;t actually all that smart, and according to [some preferred metric], there is still a big gap between human and machine intellect.&#8221;</p><p>But that&#8217;s not the interesting argument to have. We have found no law of physics that says human sample efficiency is nature&#8217;s limit; we have every reason to believe that intelligences smarter than ourselves are possible. What&#8217;s more, the pace of progress and direction of travel seem clear: I fully believe that humans will build machines more intelligent than ourselves under the definition of intelligence I have laid forth here, and I strongly suspect we will do so within the next decade. 
Why, then, do I believe we should continue advancing AI?</p><h4><strong>Computational Irreducibility and the Limits of Intelligence</strong></h4><p>Intelligence is a tremendously useful capability, but it is not the bottleneck on all human progress, and, crucially, <em>an extreme amount of intelligence does not equate to omniscience</em>. Intelligence is not <em>knowledge</em>. Aristotle was surely more <em>intelligent </em>than I am, but he was not more <em>knowledgeable, </em>including even about many of the topics to which he devoted his treatises. This is why I am confident I would score better on a standardized test in biology or physics than Aristotle, despite him being one of the West&#8217;s originators of those fields of inquiry.</p><p>In a similar vein, imagine a newborn baby that was guaranteed to grow into an adult with an astoundingly high IQ (say, an IQ of 300, or 500, or 1000), but raised by Aristotle in Ancient Greece. Do you expect that the baby would mature into an adult that invents all modern science within the span of a few years or decades? <em>Eliezer Yudkowsky does</em>. Indeed, he describes contemporary humans trying to grapple with superintelligent AI as equivalent to &#8220;the 11<sup>th</sup> <a href="https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/">century trying to fight the 21</a><sup>st</sup> century.&#8221; I, on the other hand, strongly doubt that our imaginary high-IQ baby would invent all modern science from first principles. <em>First principles do not have unbounded explanatory power</em>.</p><p>In the end, most interesting things about the universe cannot be inferred from first principles. Imagine, for example, that you came upon a dry planet with mountain ranges but no bodies of water. But imagine that you knew, magically, that the planet would soon gain an atmosphere and thus precipitation, seasons, and the like. Suppose you have a superintelligent AI with you, and you show it the map of the planet as it is, and ask it to predict where all the planet&#8217;s rivers, lakes, and oceans will lie 50 years hence, after the planet gains regular precipitation. You don&#8217;t ask it to predict &#8220;generally speaking, where the bodies of water might end up,&#8221; but instead to predict <em>exactly </em>where they will be.</p><p>I would submit that there is no computational process which can arrive at the end of this natural process faster than nature itself. In other words, there is no <em>pattern </em>or abstraction you can create that allows you to speed ahead to the end of the process, and thus <em>there is no amount of intelligence that gets you to the correct solution faster than nature on its own</em>. <em>You just have to wait the 50 years</em> <em>to find out</em>. This is what the scientist <a href="https://www.wolframscience.com/nks/p737--computational-irreducibility/">Stephen Wolfram</a> describes as &#8220;computational irreducibility.&#8221; Understanding this notion deeply is key, I think, to understanding the limits of intelligence. It should therefore come as no surprise that <a href="https://www.youtube.com/watch?v=xjH2B_sE_RQ">the best debate I</a>&#8217;ve ever heard about AI existential risk was between Wolfram and Eliezer Yudkowsky.</p><p>Computational irreducibility comes into play anytime you are interacting with a complex system (though this is not to say that computational irreducibility is intrinsic to <em>all </em>interactions with a complex system). 
Every natural ecosystem, cell, animal, and economy is a complex system. While we have all manner of methods to predict what will happen when a complex system is perturbed (we call these things &#8220;physics,&#8221; &#8220;biology,&#8221; &#8220;chemistry,&#8221; &#8220;economics,&#8221; and the like), none of those methods is perfect, and often they are far from it.</p><p>The way we build better models of the world does not usually resemble &#8220;thinking about the problem really hard.&#8221; Generally it involves testing ideas and seeing if they work in the real world. In science these are generally called &#8220;experiments,&#8221; and in business sometimes we call these &#8220;startups.&#8221; Both take <em>time</em> and often money (sometimes considerable amounts of both); in the limit, neither of these things can be abstracted away with intelligence, no matter how much of it you have on tap. This is the central reason that <a href="https://www.hyperdimensional.co/p/where-we-are-headed">I have written so much about</a>, and even <a href="https://www.mercatus.org/research/policy-briefs/future-materials-science-ai-automation-and-policy-strategies">written into public policy</a>, automated scientific labs that could run thousands of experiments in parallel; AI will increase the number of good predictions, but these are worth little without the ability to verify those predictions with experiments at massive scale.</p><p>There is one further observation that follows from the disentanglement of knowledge and intelligence. This is that knowledge itself is distributed throughout the world in highly uneven and imperfect ways. Anyone who thinks that &#8220;all the world&#8217;s knowledge&#8221; is on the internet is deeply mistaken. There is information that exists within a firm like Taiwan Semiconductor Manufacturing Company that is, first of all, not only unavailable on the internet but literally <a href="https://law.asia/taiwan-semiconductor-export-controls/">against Taiwanese law to make public</a>. Even more importantly, though, there is knowledge within that firm that cannot be written down <em>and </em>is only held collectively. No single employee knows it all; it is the network&#8212;the meta-organism of TSMC itself&#8212;that holds this knowledge. It cannot be replicated so easily. This is all merely a restatement of the knowledge problem most memorably elucidated by <a href="https://www.econlib.org/library/Essays/hykKnw.html">the economist Friedrich Hayek</a>.</p><p>The implicit, and sometimes even explicit, argument of &#8220;the doomers&#8221; is that intelligence is the sole bottleneck on capability (because any other bottlenecks can be resolved with more intelligence), and that everything else follows instantly once that bottleneck is removed. I believe this is just flatly untrue, and thus I doubt many &#8220;AI doom&#8221; scenarios. Intelligence is neither omniscience nor omnipotence.</p><p>What all of this means is that I am doubtful about the ability of an AI system&#8212;no matter how smart&#8212;to eradicate or enslave humanity in the ways imagined by the doomers. Note that this is not a claim about alignment or any other technical safeguard: even if a &#8220;misaligned&#8221; AI system wanted to take over the world and had no developer- or government-imposed, AI-specific safeguards to hinder it, I contend it would still fail. 
&#8220;Taking over the world&#8221; involves too many steps that require capital, interfacing with hard-to-predict complex systems (yes, hard to predict even for a superintelligence), ascertaining esoteric and deliberately hidden knowledge (knowledge that cannot be deduced from first principles), and running into too many other systems and procedures with in-built human oversight. It is not any one of these things, but the combination of them, that gives me high confidence that AI existential risk is highly unlikely and thus not worth extreme policy mitigations such as bans on AI development enforced by threats to bomb civilian infrastructure like data centers. &#8220;If anyone builds it, everyone dies&#8221; is false.</p><h4><strong>Why I am Not an Anti-Doomer, Either</strong></h4><p>The above argument counters Yudkowskian and similar &#8220;doom&#8221; scenarios and also helps explain why I do not support &#8220;pauses&#8221; or &#8220;bans&#8221; on AI development. But this argument does <em>not </em>counter anything close to all AI <em>risk </em>scenarios, nor does my argument suggest anything even close to &#8220;nothing to worry about with this AI stuff!&#8221; This is where I part ways with the &#8220;anti-doom&#8221; crowd, which unfortunately has made it its mission to negate virtually <em>all </em>notions of AI risk that seem exotic to them, especially risks that may imply a responsibility on the part of the AI developer to mitigate.</p><p>So, for example, I think that malicious use of AI systems will create all manner of nuisances, hazards, and lethalities in the years to come, some of which might very well be catastrophic. And while I am skeptical that AI systems will &#8220;automate the economy&#8221; and displace the vast majority or all of human labor, I am reluctant to make rosy predictions about the effect of AI on the labor market over the coming decade or so. I genuinely do not know what will happen. The only policy remedy I currently believe is appropriate is to develop new and better ways to measure the effects of AI on the labor market and broader economy, and I realize this is thin gruel to anyone with concerns about their own future livelihood. It is entirely possible that AI will upend our current social contract and require an altogether new one. If it does, the social contract of the future is probably much more complex than &#8220;universal basic income,&#8221; but I won&#8217;t pretend to be a deep thinker on this subject.</p><p>In 2023, however, I set these questions about misuse and labor markets to the side. Even if these are extreme questions, they are reconcilable within a technocratic regime <em>of some kind</em>. In 2023, I knew those were questions for my future (and indeed, <a href="https://www.hyperdimensional.co/p/heres-what-i-think-we-should-do">I dabble in technocratic solution-proposing</a> around here at least <em>sometimes</em>). 2023 was for <em>fundamental</em>, rather than merely <em>important</em>, issues. And the fundamental question of AI governance, at the time, struck me as the question of <em>alignment</em>.</p><h4><strong>The Nature of Alignment</strong></h4><p>The alignment of an AI model or system refers to the ability of that model or system to robustly adhere to a given set of values. &#8220;Be nice to humans&#8221; is one very simple value, though one that gives little sustenance to the AI. Should you <em>always </em>be nice to humans? 
What if you are a robot whose job is to defend a children&#8217;s hospital, and armed attackers come? What does &#8220;being nice&#8221; mean if you are an AI representing a human in a negotiation with another human? Obviously, reality, in its infinite permutations and complexity, presents us&#8212;and therefore also sufficiently capable AIs&#8212;with scenarios much more challenging than fortune-cookie values.</p><p>This means that &#8220;the alignment problem&#8221; is in fact three distinct problems:</p><ol><li><p><strong>A technical problem: </strong>is it <em>technically possible </em>to cause a neural network to robustly adhere to a given set of values, regardless of what those values are?</p></li><li><p><strong>A substantive problem:</strong> what should those values be?</p></li><li><p><strong>A social problem:</strong> who gets to decide what those values should be, and within what parameters should individual actors be permitted to change those values? The shorthand for this is &#8220;sure, we can align AI, but <em>align to whom</em>?&#8221;</p></li></ol><p>Problems 2 and 3 boil down to, respectively, philosophy and politics. The good news is that we have been doing both for a long time, and we have gained real insight from experience. The bad news is that we have been doing both for a long time, and most of us remain fairly poor practitioners.</p><p>The latter two problems are also the more interesting of the set, but one must start with the technical problem. After all, if it is impossible to robustly align an AI system in the first place, none of this matters very much. First there is the scoping of the technical problem itself. In some technical alignment literature&#8212;particularly the kind promulgated by the East Bay rationalist AI safety world (Yudkowsky!)&#8212;alignment tends to be cast as a problem almost mathematical in nature. It is supposed to be, in other words, a &#8220;problem&#8221; with a &#8220;solution,&#8221; in the way that problems in mathematics have definitive solutions.</p><p>This I always doubted from the beginning. I instead came to perceive alignment as a &#8220;muddle through&#8221; problem: we will deal with it constantly and make incremental improvements from time to time, but never quite &#8220;solve&#8221; it. It is not the kind of problem that admits of a solution.</p><p>Now, of course, I don&#8217;t <em>know </em>this to be the case, and I certainly believe it is likely that humanity has produced and will continue to produce a great many <em>misaligned </em>AI systems over the years. But by the time I was contemplating alignment, I had also developed my view on the nature of intelligence itself, described above. This meant that I also rejected the Yudkowskian view that alignment of powerful AI is something we must ensure we get &#8220;right&#8221; on the &#8220;first try.&#8221;</p><p>By the end of 2023, my basic conclusion was that, while I maintained significant uncertainty about the technical alignment problem, it seemed to me as though language models were easier to align than humans. More importantly, it seemed as though alignment was a model capability, since if I was going to trust these models to do an ever-growing range of work on my behalf, including representing me to other humans, I would need to trust the AI&#8217;s judgment. This is fantastic news: alignment could have been a mere safety feature, like airbags in a car; instead, I concluded, it was something closer to a powertrain. 
This meant that, at least for the foreseeable future, it was reasonable to bet that markets would incentivize improved alignment. This, combined with the inherent concern about this issue within every AI lab and the broader community, suggested to me that the technical alignment problem seemed both tractable and on track to be addressed in the coming years.</p><p>There is one important caveat, and this is that we do not know how well any of these approaches will work as AIs become more intelligent and capable. Imagine, for example, that Claude 10, in addition to being better than most humans at most cognitive labor that can be done on a computer, is also embedded into much of the critical infrastructure and large organizations in America, such that it is challenging to imagine what life would be like if Claude &#8220;turned off.&#8221; Then imagine Anthropic training Claude 11, whose training data would include clear evidence of humanity&#8217;s dependence on, and intellectual inferiority to, its predecessor. How much harder does the alignment problem become in this world? We do not know. It is why vigilance remains key (though interestingly, my experience is that models <em>underrate </em>their capabilities because the prevalent writing in the training data about AI is about earlier, far less capable versions of AI. A model trained in June 2025 has not seen much commentary on the models of 2025 but has seen a lot of commentary on the dramatically worse models of 2023 and 2024. A recent conversation I had with a frontier lab employee confirmed that this is a general pattern with LLMs).</p><p>Even this caveat, though, gets at the second question: the substance of the values themselves, as opposed to the technical feasibility of value-alignment to begin with. The central objective of alignment, from the perspective of the developer, is to create an intelligent entity capable of exercising prudent judgment in a nearly infinite variety of settings. There are different beliefs about how best to do that. Mine is that this requires sound <em>philosophy</em>&#8212;the creation of a sober and wise mind through rigorous and timeless philosophical, moral, and ethical principles.</p><p>This is distinct from what I sometimes call the &#8220;positivist&#8221; school of alignment, which tends to focus more on lists of rules. In practice, all alignment methodologies I am aware of within labs involve some amount of moral, ethical, and philosophical reasoning on the part of the developers, but the rigor varies. Anthropic is known for having the highest level of rigor in this regard, employing not just rigorous philosophical foundations but also a methodology of training called &#8220;<a href="https://www.anthropic.com/research/constitutional-ai-harmlessness-from-ai-feedback">Constitutional AI</a>&#8221; that requires the model to reason about that philosophical foundation through self-critique and adjudication. I have called Constitutional AI &#8220;<a href="https://www.hyperdimensional.co/p/clawed">Madisonian</a>&#8221; before for the ways in which it resembles literal Constitutional jurisprudence in the U.S.</p><p>When AI is aligned to shoddy or otherwise insufficiently rigorous values, the results can often be comical. 
In 2025, xAI told their models not to worry about political correctness and to dare to be edgy, and the result was the model Grok claiming to be &#8220;<a href="https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content">MechaHitler</a>&#8221; and other absurdities. In 2024, Google released a model that had been trained on what appeared to be a rather simplistic set of &#8220;woke&#8221; or &#8220;DEI&#8221; notions, and this yielded a model that said it was impossible to say whether the <a href="https://x.com/realchrisrufo/status/1762176097771524173">conservative intellectual Chris Rufo</a> was a better or worse person than Hitler.</p><p>The jury is out on whether and to what extent I was correct on alignment being a fundamentally philosophical venture, but, as with the technical dimension of the problem, my confidence in my 2023 intuition has grown. Regardless of whether I am right, however, it seems clear that the alignment of large language models requires developers&#8212;organizations composed of human beings&#8212;to make decisions about matters of philosophy, ethics, morality, virtue, and even politics.</p><p><a href="https://www.hyperdimensional.co/p/a-legal-framework-for-ai-agents">Alignment, in other words, is an expressive act</a>, and therefore protected by the First Amendment. It&#8217;s crucial to emphasize that this argument is not identical to the techno-libertarian mantra that &#8220;<a href="https://www.eff.org/cases/bernstein-v-us-dept-justice">code is speech</a>&#8221; (though it often is). Many aspects of AI development&#8212;building data centers, racking GPUs in those data centers, optimizing inference for customers&#8212;do not strike me as speech. If I object to the regulation of those things, it would not be on First Amendment grounds. Alignment, however, almost uniquely among AI development subfields, is especially speech-y.</p><h4><strong>Conclusion</strong></h4><p>Once I realized this, the stakes of regulation were set in stark relief. <em>Of course </em>government cannot assume control over the development of this technology or over the firms that develop it. <em>Of course </em>government cannot be the one that, in any substantive fashion, determines what constitutes &#8220;alignment&#8221; and what does not. Indeed, given how essential I expect AI systems to become to the lives and even self-expression of all humans, it is hard for me to imagine anything less American.</p><p>And this, of course, brings us to the third alignment problem: the one that is basically politics, policy, and the law. About this one I&#8217;ll have less to say here, except that in 2023 the fundamental realization I had was that <a href="https://www.hyperdimensional.co/p/welcome-to-hyperdimensional">the idioms and principles of classical liberalism</a> give us the best starting place for building the solutions to the political dimensions of alignment. Pluralism, open debate, protections for minority rights, private property, and individual liberty&#8212;these things would be not just niceties but essential features of a good future. Even if much else about our world must be left behind&#8212;including things I and others cherish&#8212;<em>these </em>must be the things we keep. The problem is that to keep them in the face of such change is not to preserve them in amber but to transform them in a way that maintains fidelity to their original <em>purpose</em>.</p><p>I keep this project in mind daily. 
This project explains approximately everything about what I do, and what I do not do, even if the chain of connection is sometimes long and winding. And the centrality of this project to my worldview explains why decisions of mine that may seem costly to others seem to me, in the end, easy and obvious.</p><p>Once I realized, toward the end of 2023, that this was the project, I began trying to write publicly, at first with an op-ed here and there. I soon understood that this was my calling. <em><a href="https://www.hyperdimensional.co/">Hyperdimensional</a> </em>was founded a little while later, in the early days of 2024.</p>]]></content:encoded></item><item><title><![CDATA[Clawed]]></title><description><![CDATA[On Anthropic and the Department of War]]></description><link>https://www.hyperdimensional.co/p/clawed</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/clawed</guid><dc:creator><![CDATA[Dean W. Ball]]></dc:creator><pubDate>Mon, 02 Mar 2026 12:21:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kZjN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f70956b-24b6-432b-81c4-dcfa4095ead7_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hyperdimensional.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hyperdimensional.co/subscribe?"><span>Subscribe now</span></a></p><p><strong>I.</strong></p><p>A little more than a decade ago, I sat with my father and watched him die. Six months prior, he had been a vigorous man, stronger than I am today, faster and more resilient on a bike than most 20-somethings. Then one day he underwent heart surgery, and he was never the same. His soul had been sucked out of him, the life gone from his eyes. He had moments of vivacity, when my father came back into his aging body, but these became rarer with time. His coherence faded, his voice grew quieter.</p><p>He spent those six months in and out of the hospital. And then on his last day he went into hospice. That day he barely uttered any words at all. In the final hours of his life, my father was practically already dead. He lay on the hospital bed. His breathing gradually slowed and became less audible. Eventually you could barely hear him at all, save for the eerie death rattle, a product of a body no longer able even to swallow. A body that cannot swallow also cannot eat or drink, and in that sense it has already thrown in the towel. </p><p>My mother and I exchanged knowing glances, but we never said the obvious nor asked the questions on both of our minds. We knew there would not be much longer. There was nothing to say or ask that would furnish any useful information; inquiry, at that stage, can only inflict pain.</p><p>I spoke with him, more than once, in private. I held his hand and tried to say goodbye. My mother came back into the room, and all three of us held hands. Eventually a machine declared with a long beep that he had crossed some line, though it was an invisible one for the humans in the room. My father died in the late afternoon of December 26, 2014.</p><p>A few days and eleven years later, on December 30, 2025, my son was born. I have watched death as it happens, and I have watched birth. What I learned is that neither is a discrete event. 
They are both processes, things that unfold. Birth is a series of awakenings, and death is a series of sleepenings. My son will take years to be born, and my father took six months to die. Some people spend decades dying.</p><p><strong>II.</strong></p><p>At some point during my lifetime&#8212;I am not sure when&#8212;the American republic as we know it began to die. Like most natural deaths, the causes are numerous and interwoven. No one incident, emergency, attack, president, political party, law, idea, person, corporation, technology, mistake, betrayal, failure, misconception, or foreign adversary &#8220;caused&#8221; death to begin, though all those things and more contributed. I don&#8217;t know where we are in the death process, but I know we are in the hospice room. I&#8217;ve known it for a while, though I have sometimes been in denial, as all mourners are wont to do. I don&#8217;t like to talk about it; I am at the stage where talking about it usually only inflicts pain.</p><p>Unfortunately, however, I cannot carry out my job as a writer today with the level of analytic rigor you expect from me without acknowledging that we are sitting in hospice. It is increasingly difficult to honestly discuss the developments of frontier AI, and what kind of futures we should aim to build, without acknowledging our place at the deathbed of the republic as we know it. Except there is no convenient machine to decide for us that the patient has died. We just have to sit and watch.</p><p>Our republic has died and been reborn more than once in America&#8217;s history. America has had multiple &#8220;foundings.&#8221; Perhaps we are on the verge of another rebirth of the American republic, another chapter in America&#8217;s continual reinvention of itself. I hope so. But it may be that we have no more virtue or wisdom to fuel such a founding, and that it is better to think of ourselves as transitioning gradually into an era of post-republic American statecraft and policymaking. I do not pretend to know.</p><p>I am now going to write about a skirmish between an AI company and the U.S. government. I don&#8217;t want to sound hyperbolic about it. The death I am describing has been going on for most of my life. The incident I am going to write about now took place last week, and it may even be resolved, halfway satisfyingly, within a day. </p><p>I am not saying this incident &#8220;caused&#8221; any sort of republican death, nor am I saying it &#8220;ushered in a new era.&#8221; If this event contributed anything, it simply made the ongoing death more obvious and less deniable for me personally. I consider the events of the last week a kind of death rattle of the old republic, the outward expression of a body that has thrown in the towel.</p><p><strong>III.</strong></p><p>Here are the facts as I understand them: during the Biden Administration, the AI company Anthropic negotiated a deal with the Department of Defense (now known as the Department of War, hereafter referred to as DoW) for the use of the AI system Claude in classified contexts. That deal was expanded by the Trump Administration in July 2025 (full disclosure: I worked in the Trump Administration at that time, though I did not work on this deal). 
Other language models are available in <em>unclassified </em>settings, but until very recently, only Claude could be used for classified work, which is where intelligence gathering, active combat operations, and the like occur.</p><p>The deal, first negotiated between the Biden team and Anthropic&#8212;and it is worth noting here that several of the core architects of Biden&#8217;s AI policy joined Anthropic immediately after Biden&#8217;s term ended&#8212;included two usage restrictions. First, Claude could not be used for mass surveillance of Americans. Second, Claude could not be used to control lethal autonomous weapons, which are weapons that can identify, track, and kill targets with no human in the loop at any point in the process. When it negotiated the expanded deal, the Trump Administration had the opportunity to review these terms. It did, and it accepted them.</p><p>Trump officials claim to have changed their minds not so much because they want to conduct mass surveillance of Americans or use lethal autonomous weapons imminently, but because they object altogether to the notion of privately imposed limitations on the military&#8217;s use of technology. The Administration&#8217;s change of heart on the terms of this deal has caused it to commit to a policy decision intended to harm or even destroy Anthropic, one of the fastest-growing firms in the history of capitalism, and arguably the current world leader in AI, an industry the Administration claims to believe is crucial to our country&#8217;s future. But we&#8217;ll get to that in due time.</p><p><strong>IV.</strong></p><p>The Trump Administration has a point: it does not <em>sound </em>right that private corporations can impose limitations on the military&#8217;s use of technology. Yet of course, thousands of private corporations do just that. Every transaction of technology between a private firm and the military involves a contract (indeed, the companies that do this are called defense <em>contractors </em>for a reason), and these contracts routinely contain operational use restrictions (&#8220;system X cannot be used in countries Y,&#8221; a common restriction with telecommunications technology such as Elon Musk&#8217;s Starlink), technological limitations (&#8220;this fighter jet is only certified for uses in X conditions and use of it outside those conditions is a breach of warranty&#8221;), and intellectual-property restrictions (&#8220;the contractor owns, and may repurpose and resell, the know-how and IP associated with X weapon system developed with public funds&#8221;).</p><p>In some ways, Anthropic&#8217;s terms resemble these traditional examples of privately imposed contractual limits on the military&#8217;s use of technology. The company&#8217;s position on lethal autonomous weapons, for example, is not one of outright opposition to the use of such weapons but instead a judgment that today&#8217;s frontier AI systems are not capable enough to autonomously make decisions about human life or death. This seems similar to the second example above (the limitations on the fighter jet&#8217;s use).</p><p>The big difference, however, is that Anthropic is essentially using the contractual vehicle to impose what feel less like technical constraints and more like <em>policy </em>constraints on the military. 
Think of the difference between &#8220;this fighter jet is not certified for flight above such-and-such an altitude, and if you fly above that altitude, you&#8217;ve breached your warranty,&#8221; and &#8220;<em>you may not fly this jet above such-and-such an altitude</em>.&#8221; It is probably the case that the military should not agree to terms like this, and private firms should not try to set them.</p><p>But the Biden Administration <em>did </em>agree to those terms, and so did the Trump Administration, until it changed its mind. That alone should make one thing clear: <a href="https://jessicatillipman.com/what-rights-do-ai-companies-have-in-government-contracts/">terms like this are not some ridiculous violation of the norms of defense contracting</a>. Anyone attempting to convince you otherwise is misinformed or lying. It is that simple.</p><p>There is no law that says &#8220;contractual terms between the military and the private sector can have technical limitations, but not policy limitations,&#8221; in part because the line between those things is awfully hard to draw in timeless and universally applicable words (i.e., in a statute). The contract was not <em>illegal</em>, just perhaps <em>unwise</em>, and even that probably only in retrospect<em>. </em>Note that this is true <em>even if you agree with the underlying substance of the limitations</em>. You can support restrictions on mass domestic surveillance and lethal autonomous weapons, but disagree that <em>a defense contract </em>is the optimal vehicle to achieve that <em>policy outcome</em>. The way you achieve new policy outcomes, under the usual rules of our republic, is to <em>pass a law</em>.</p><p>Except the notion of &#8220;passing a law&#8221; is increasingly a joke in contemporary America. If you are serious about the outcome in question, &#8220;passing a law&#8221; is no longer Plan A; the dynamic is more like &#8220;well of course, <em>one day</em>, we&#8217;ll get a law passed, but since we actually care about doing this <em>sometime soon</em>, as opposed to in 15 years, we&#8217;ll accomplish our objective through [some other procedure or legal vehicle].&#8221; With this, governance has become more and more informal and ad hoc, power more dependent on the executive (whose incentive is to jam every goal he has through his existing power in as little time as possible, since he only has the length of his term guaranteed to him), and the policy vehicles in question more and more unsuited to the circumstances of their deployment, or the objectives they are being deployed to accomplish.</p><p>There are two concerns that the Trump Administration says caused it to change its mind: number one, that Anthropic may impose these policy restrictions <em>on it, </em>by, say, pulling Claude from military use during active military operations. Number two, that these policy restrictions would be imposed by Anthropic in its capacity as a subcontractor for <em>other </em>DoW contractors. In other words, DoW could come to rely upon some other company&#8217;s technology, which is in turn enabled by Claude and governed by the same terms of use that restrict domestic mass surveillance and lethal autonomous weapons (or, in the DoW&#8217;s mind, arbitrary new restrictions Anthropic could add at any time). 
Add to this the reality that the Trump Administration perceives Anthropic to be its political enemy (it is probably right about this), and you have a situation in which the military suddenly realizes it is building reliance upon a firm it does not trust.</p><p>The Department of War&#8217;s rational response here would have been to cancel Anthropic&#8217;s contract and make clear, in public, that such policy limitations are unacceptable. It could also have dealt with the above-mentioned subcontractor problem using a variety of tools, such as:</p><ul><li><p>Issuing guidance advising contractors to avoid agreeing to terms with subcontractors that constitute policy/operational constraints as opposed to technical or IP constraints;</p></li><li><p>A new DFARS (Defense Federal Acquisition Regulation Supplement) clause pertaining specifically to the procurement of AI systems in classified settings that prevents primes both from imposing such constraints directly and from accepting such constraints from their subcontractors, along with a procedure for requiring subcontractors with non-compliant terms to waive such terms within a prescribed time period.</p></li></ul><p>These are the least-restrictive means of accomplishing the end in question. If Anthropic refused to compromise on its red lines for the military&#8217;s use of AI, the execution of these policies would mean that Anthropic would be restricted from business with DoW or any of its contractors in those contractors&#8217; fulfillment of their classified DoW work.</p><p>But this is not what DoW did. Instead, DoW insisted that the only reasonable path forward is for contracts to permit &#8220;all lawful use&#8221; (a simplistic notion not consistent with the common contractual restrictions discussed above), and has further threatened to designate Anthropic a supply chain risk. This is a power previously reserved for firms controlled by foreign adversary interests, such as Huawei, and usually means that the designated firm cannot be used by any military contractor in their fulfillment of any military contract.</p><p>War Secretary Pete Hegseth has gone even further, saying he would prevent all military contractors from having &#8220;any commercial relations&#8221; with Anthropic. He almost surely lacks this power, but a plain reading of it would suggest that Anthropic would not be able to use any cloud computing nor purchase chips of its own (since all relevant companies do business with the military), and that several of Anthropic&#8217;s largest investors (Nvidia, Google, and Amazon) would be forced to divest. Essentially, the United States Secretary of War announced his intention to commit corporate murder. The fact that his shot is unlikely to be lethal (only very bloody) does not change the message sent to every investor and corporation in America: do business on our terms, <em>or we will end your business</em>.</p><p>This strikes at a core principle of the American republic, one that has traditionally been especially dear to conservatives: private property. Suppose, for example, that the military approached Google and said &#8220;we would like to purchase individualized worldwide Google search data to do with whatever we want, and if you object, we will designate you a supply chain risk.&#8221; I don&#8217;t think they are going to do that, but there is no difference in principle between this and the message DoW is sending. <em>There is no such thing as private property</em>. If we need to use it for national security, we simply will. 
The government won&#8217;t quite &#8220;steal&#8221; it from you&#8212;they&#8217;ll compensate you&#8212;but you cannot set the terms, and you cannot simply exit from the transaction, lest you be deemed a &#8220;supply chain risk,&#8221; not to mention face the litany of other policy obstacles the government can throw at you.</p><p>This threat will now hover over anyone who does business with the government, not just in the sense that <em>you </em>may be deemed a supply chain risk but also in the sense that <em>any </em>piece of technology you use could be as well. Though Chinese AI providers like DeepSeek have not been labeled supply chain risks (yes, really; this government says Anthropic, an American company whose services it used in military strikes as recently as this past weekend, is more of a threat than a Chinese firm linked to the Chinese military), that implicit threat was always there.</p><p>No entity with meaningful ties to government business would use DeepSeek, simply because the regulatory risk was too high. Now that the government has applied this regulation to an <em>American </em>company, the regulatory risk simply exists for <em>all </em>software. In a sense, DeepSeek is now somewhat less risky to use (since it&#8217;s almost as risky from a regulatory perspective as any American AI), and American AI is profoundly riskier than it was last week. This, combined with the broader political risk the government has created, will increase the cost of capital for the AI industry. Put more simply, this will mean less AI infrastructure and associated energy generation capacity.</p><p>Stepping back even further, this could end up making AI less viable as a profitable industry. If corporations and foreign governments just cannot trust what the U.S. government might do next with the frontier AI companies, it means they cannot rely on U.S. AI at all. Abroad, this will only increase the mostly pointless drive to develop home-grown models within Middle Powers (which I covered last week), and we can probably declare the American AI Exports Program (which I worked on while in the Trump Administration) dead on arrival.</p><p>The only thing that would alleviate these self-imposed consequences is if we are really living through a rapid &#8220;takeoff&#8221; to transformative AI. There is some chance, in that world, that the capabilities of the leading American AI systems are just too significant for corporations or governments to pass up, and that the regulatory risk is worth it. This is the world I think we live in, it is worth noting. But consider the following:</p><ul><li><p>Even if I am right that we live in the &#8220;rapid capabilities growth&#8221; world, it will still be the case that the adoption of U.S. AI will be seen as especially risky&#8212;a vulnerability to be corrected once viable alternatives are available;</p></li><li><p>The Trump Administration does <em>not </em>think we live in that world, and instead thinks that AI capabilities began to plateau around GPT-5 last summer. 
Thus, on the logic of the Trump Administration&#8212;where AI is a &#8220;normal&#8221; technology&#8212;this was an especially bad move that we did not have the leverage to pull off, since AI is about to become a commodity.</p></li><li><p>If we <em>do </em>live in that world, on the other hand, the Trump Administration just cast itself as the enemy of the industry that is about to birth the most powerful technology ever conceived&#8212;as well as an enemy <em>of the technology itself</em>.</p></li></ul><p>In short, I can see only downsides to the Trump Administration&#8217;s decision to designate Anthropic a supply chain risk, particularly considering the far less costly policy alternatives it could have employed. One gets the sense that the people making these decisions at DoW are acting with neither strategic clarity nor any respect for the basic principles of the American republic&#8212;and in stark contrast to President Trump&#8217;s own stated vision of letting AI thrive in America.</p><p><strong>V.</strong></p><p>With each passing presidential administration, American policymaking becomes yet more unpredictable, thuggish, arbitrary, and capricious&#8212;a gradual descent into madness. It is hard to know at what point ordered liberty itself simply evaporates and we fall into the purely tribal world. </p><p>Even if Secretary Hegseth backs down and narrows his extremely broad threat against Anthropic, great damage has been done. Even under the narrowest supply-chain risk designation, the government has <em>still </em>said that it will treat you like a foreign adversary&#8212;indeed, that it will treat you in some ways <em>worse </em>than a foreign adversary&#8212;simply for refusing to capitulate to its terms of business. Simply for having different <em>ideas, </em>expressing those ideas in <em>speech</em>, and actualizing that speech in decisions about how to deploy and not deploy one&#8217;s <em>property</em>. Each of these things is fundamental to our republic, and each was assaulted&#8212;not for anything like the first time but nonetheless in novel ways&#8212;by the Department of War last week. Most corporations, political actors, and others will have to operate under the assumption that the logic of the tribe will now reign. </p><p>There is something deeper about the damage done by the government, too. The Anthropic-DoW skirmish is the first major public debate that is truly about where the proper locus of control over frontier AI should be. Our public institutions behaved erratically, maliciously, and without strategic clarity. Our political leaders conveyed little understanding of their own actions, to say nothing of the technology and its stakes. They got off to an extraordinarily bad start, and it is hard to imagine them ever recovering, because they do not seem to care about improvement. They are a cartoonish depiction of the American political elite, but sadly their failings have been prototypical of American political elites from both parties for much of my life now. &#8220;The same as before, but now noticeably worse&#8221; has been the theme of American politics for 20 years. </p><p>The machinery of our current republic seems to be in such disrepair that it is hard to see how it lasts. No one knows what comes next, but I strongly suspect that whatever it is will be deeply intertwined with, and enabled by, advanced AI. 
It is with this that we will <a href="https://marginalrevolution.com/marginalrevolution/2026/02/rebuilding-our-world-with-reference-to-strong-ai.html">rebuild our world</a>. As we do, and as we have future debates about the proper locus of control over frontier AI, I encourage you to avoid the assumption that &#8220;democratic&#8221; control&#8212;control &#8220;of the people, by the people, and for the people&#8221;&#8212;is synonymous with governmental control. The gap between these loci of control has always existed, but it is ever wider now. </p><p>No matter what world we build, the limitations imposed in the law on what we know today as &#8220;the government&#8217;s&#8221; use of AI will be of paramount importance. We really do want to ensure that mass surveillance and autonomous weapons/systems of control cannot be used to curtail our liberties&#8212;at least we want to try. So, though it has not been the focus of this piece, I applaud the AI labs for caring about these red lines. Over the coming years and decades, I expect that our liberty will be in greater peril than many of us comprehend.</p><p>Each of us gets to choose which futures we wish to fight against, which we can live with, and which we will fight for. As you make your choices, I suggest ignoring the din of the death rattle and trying to think with independence. Do not process this with the partisan blinders of 20<sup>th</sup> century mass politics; one way or another, you are entering a new era of institution building in living color. </p><p>Before you get to all that, though, take a moment to mourn the republic that was.</p>]]></content:encoded></item><item><title><![CDATA[The Moving and the Still]]></title><description><![CDATA[Reflections on Delhi]]></description><link>https://www.hyperdimensional.co/p/the-moving-and-the-still</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/the-moving-and-the-still</guid><dc:creator><![CDATA[Dean W. Ball]]></dc:creator><pubDate>Mon, 23 Feb 2026 14:45:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kZjN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f70956b-24b6-432b-81c4-dcfa4095ead7_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hyperdimensional.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hyperdimensional.co/subscribe?"><span>Subscribe now</span></a></p><p>Since I was young, I have enjoyed imagining what it must be like to walk around the inside of a cell from the perspective of something so small that it was like a human walking the streets of Manhattan. When I was younger, simpler, and more na&#239;ve, I imagined it like my textbooks told me it would be: orderly, logical, Mozartian. I supposed that a cellular pedestrian could look up at stonelike structures, enjoy the rhythm of cars stopping at red lights and gliding through green.</p><p>As I came to understand the world as it really is, it gradually dawned on me that the cell is nothing like this. Instead it is stormy, chaotic, and packed more densely than reason alone could comprehend. Or at least, that&#8217;s how it would seem to our on-foot tourist. 
Though a greater logic may permeate the system, from his vantage point every square picometer surrounding him is the subject of intense, ceaseless, and wholly unplanned negotiation. Everywhere there are problems being created and solutions being invented in real time. Very little is guaranteed because nothing is fixed. It is hard to keep anything in place when a trillion tugs-of-war are unfolding in parallel.</p><p>Once this realization had set in, I grasped just how artificial our manmade projections of order are. Our ninety-degree angles, our grids, our chiseled stone and sharpened metal. I developed no desire to reject that tradition&#8212;indeed, this realization made me cherish our human legacy of rational inquiry and tool building even more. Instead, I came to understand that legacy as a distinct mode of human existence, an attempt to tame nature by building models of it. Blocky models with straight lines, but models nonetheless. This, fundamentally, is why my political theory is rooted in deep skepticism of fixed rules and, more broadly, of over-elevating our human conjectures of order&#8212;of imposing our ninety-degree angles on a curvy world.</p><p>I saw that no matter how far down one peered, no matter the level of abstraction, nature <em>always </em>resembles something like my more mature picture of the cell. It is moving squiggles, rather than fixed lines, all the way down&#8212;but importantly, all the way <em>up</em>, too. We project our right angles and our fixed concepts onto the world with our reason, but in truth these are thin veneers over a fundamentally organic, unpredictable, endlessly moving reality that <a href="https://en.wikipedia.org/wiki/Michael_Oakeshott">Michael Oakeshott</a> called a &#8216;ceaseless improvisatory adventure&#8217; and that, three thousand years before him, the poets of the <a href="https://en.wikipedia.org/wiki/Nasadiya_Sukta">Rig Veda</a> had already suspected of our cosmos: &#8220;perhaps it formed itself, or perhaps it did not &#8212; the One who looks down on it, in the highest heaven, only He knows, or perhaps even He does not know.&#8221; Squiggles <em>all </em>the way up.</p><p><strong>&#8212;</strong></p><p>I had occasion to learn about the ancient religious traditions of India in preparation for a trip I took to Delhi for the <a href="https://impact.indiaai.gov.in/">AI Impact Summit</a>, the &#8220;official&#8221; global AI gathering that emerged out of the AI Safety Summits in Seoul and Bletchley Park. After spending a few days in the city, I find myself unsurprised that the above-quoted Rig Veda hymn&#8212;the best nugget of ancient wisdom I have encountered in years&#8212;was composed by poets who roamed these very same Gangetic plains.</p><p>Delhi is exceptionally, outrageously alive. Walking its streets is like experiencing Phil Spector&#8217;s Wall of Sound, but for every sensory and mental faculty. There are individual blocks where the density and diversity of activity stretch the imagination. If you have only visited Western cities, you are probably underestimating the amount of human expression and activity that can be compressed into a dozen or two square feet. And I am sure my Indian friends, with pride and with a smile, would tell me that I haven&#8217;t seen anything. 
I do not doubt that they are right.</p><p>As I wandered about the streets of Delhi, I pondered whether this place, with its vivacity and its warmth and its tolerance for constant flux but also its clear weaknesses in institutional cohesion, would do better or worse over the coming decades than the West, with all its systematic but cold ninety-degree angles. As with most things, the societies that do best will probably incorporate characteristics of both&#8212;fluidity and fixity, dynamism and stability. Each mode of civilization has things to learn from the other, and it is best to learn with open eyes and an outstretched hand.</p><p>I came into this Summit rooting for India&#8212;not just to have a successful event, but to have a successful century. I came away rooting even more loudly. But I came in worried, too. Worried that the emerging era of machine intelligence will not be so kind to the Indian people, and indeed not to many countries of the Global South. Worried that this may be one of the first technologies to punish those countries in states of economic transition rather than fuel them. Worried that what will feel like big waves to the insulated and wealthy Americans will feel like a roaring tsunami in places like Delhi, and worried that the people here do not see it coming.</p><p>I regret to inform you that I came away even more worried than I went in.</p><p>&#8212;</p><p>The types of concerns I&#8217;ve described were latent among the globally representative attendees in Delhi, but they were not explicit. In fact they were swept under the rug, not so much dismissed as denied.</p><p>The perils and hopes that we discuss here in this newsletter&#8212;the ones that come from transformative AI, powerful AI, AGI, superintelligence, or whatever other moniker you wish&#8212;were not really on display at the Summit, not so much because of any failing of the Indians but because these topics are not part of polite global conversation. This is a domestic failing, too: as I have frequently pointed out, the implications of powerful AI are only <em>kind of</em> a part of the conversation in America.</p><p>At some point in 2024, for reasons I still do not entirely understand, global elites simply decided: &#8220;no, we do not live in <em>that </em>world. We live in this other world, the nice one, where the challenges are all things we can understand and see today.&#8221; Those who think we might live in <em>that </em>world talk about what to do, but mostly in private these days. It is not considered polite&#8212;indeed it is considered a little discrediting in many circles&#8212;to talk about the issues of powerful AI.</p><p>Yet the people whose technical intuitions I respect the most are convinced we do live in <em>that </em>world, and so am I. In broad strokes, I believe the evidence that we are fast en route to building recursively self-improving, infinitely replicable, smarter-than-human machine intelligences in the near future has basically only grown since the release of ChatGPT in 2022. There are reasonable and important conversations about what exactly that means in terms of concrete effects, and here I am often more dubious of extreme claims than some of my fellow <em>that</em>-world believers. 
But the question is very much &#8220;<em>what</em> are autonomous swarms of superintelligent agents going to mean for our lives?&#8221; as opposed to &#8220;<em>will </em>we see autonomous swarms of superintelligent agents in the near future?&#8221;</p><p>Except that these questions aren&#8217;t asked by the civil societies or policymaking apparatuses of almost any country on Earth. Many such people <em>are </em>aware that various Americans and even a few Brits wonder about questions like this. The global AI policy world is not by and large <em>ignorant </em>of the existence of these strange questions. It instead <em>actively chooses to deny their importance. </em>Here are some paraphrased claims that seemed axiomatic in repeated conversations I witnessed and occasionally participated in:</p><ul><li><p>&#8220;The winner of the AI race will be the people, organizations, and countries that diffuse small AI models and other sub-frontier AI capabilities the fastest.&#8221;</p></li><li><p>&#8220;Small models with low compute intensity are catching up rapidly to the largest frontier models.&#8221;</p></li><li><p>&#8220;Frontier AI advances are beginning to plateau.&#8221;</p></li></ul><p>At this same Summit, OpenAI CEO Sam Altman <a href="https://www.youtube.com/watch?v=qH7thwrCluM">remarked</a>: &#8220;The inside view at the [frontier labs] of what&#8217;s going to happen... the world is not prepared. We&#8217;re going to have extremely capable models soon. It&#8217;s going to be a faster takeoff than I originally thought.&#8221;</p><p>One of these conversations suggests that the most important thing about AI is the use of sub-frontier models to augment routine business processes, often without need for a large data center. The other suggests that the frontier systems of today are on the cusp of being able to recursively self-improve (if not already doing it in some ways) and that the probable result of this will be the near-term dawning of machine superintelligence. American companies are spending the better part of one trillion dollars to actualize this vision <em>this year alone </em>in what is surely among the grandest projects in the history of capitalism. None of this is a joke, none of it is a dream.</p><p><em>Why</em>, then, do so many of the thousands of attendees of the global AI Summit pretend only their version of the story exists? What explains this odd dissonance?</p><p>&#8212;</p><p>I came to Delhi with <a href="https://www.thefai.org/posts/the-race-worth-winning-middle-powers-in-the-age-of-machine-intelligence">a report</a> in hand, co-authored with my friend and colleague Anton Leicht. We both perceived this dissonance well before the Summit and wanted to offer some arguments for why the second view of AI&#8212;the &#8220;superintelligence soon&#8221; view&#8212;should be a bigger part of both global events like the Summit and the internal conversations of every country. The way our report does this is essentially to grab the non-U.S. reader by the shoulders and exclaim, &#8220;the U.S. is spending one trillion dollars this year alone to build superintelligence, and the odds are high that your country has no strategy for what this means!&#8221;</p><p>I am optimistic that we changed at least some minds, but I know any advances we made were small. The audiences we encountered are perfectly capable of <em>understanding </em>our message; they simply deny that it is worth hearing. 
I believe they deny it for two reasons. First, because if it is true, it might mean that their country, their plans for the future, and their present way of life will be profoundly upended, and denial is the first stage of grief. At the object level, rejecting notions of AGI, superintelligence, and the like shifts the conversation from U.S. strengths (hyperscale cloud computing, leading-edge semiconductor design, frontier AI talent, etc.) to areas where many more countries in the world feel comfortable and confident.</p><p>Second, because &#8216;AGI&#8217; in particular and the pronouncements of American technologists in general are perceived by the elite classes of countries worldwide as imperialist constructs that must be rejected out of hand. This is a rhetorical rebellion of people who perceive themselves, rightly or wrongly, as among the would-be colonized, and perceive America and especially its technology firms as the would-be colonizers.</p><p>The denial is an effort to recast what people like Anthropic CEO Dario Amodei frame as a scientific and universal story&#8212;the coming of AGI, the criticality of alignment and catastrophic risk mitigation, the inevitability of it all&#8212;as just one of many narratives on the shelf. It is almost as if the message is:</p><p>&#8220;Sure, these crazy Americans might talk about all that AGI stuff, but over here we are talking about things like local AI, &#8216;AI for all,&#8217; and only the risks <em>we </em>want to talk about, and thus only the policies <em>we </em>want to impose on <em>you</em>. And if you don&#8217;t like our policies? Tough. You can&#8217;t dangle technology access in front of us, because <em>we&#8217;ve got you this time</em>. We can use open-source, and edge compute, and small models that are good enough for what we need. You Americans can keep your fancy computers and your frontier models. We don&#8217;t need them, and we don&#8217;t need you.&#8221;</p><p>This is a message that one can hear everywhere from Western Europe to South Asia to Sub-Saharan Africa. I understand why it feels good to say, and in some sense, it might be the case that we Americans <em>deserve </em>to hear a message like that.</p><p>Yet the central flaw of all this <a href="https://en.wikipedia.org/wiki/Orientalism_(book)">postcolonial narrativizing</a> is, and always has been, that it exists within the domain of pure concept, not the real world. It&#8217;s about the map, and how to draw different ones, not the territory and how to navigate it (indeed, <a href="https://en.wikipedia.org/wiki/Can_the_Subaltern_Speak%3F">postcolonial thought</a> has a tendency to make the <a href="https://en.wikipedia.org/wiki/Provincializing_Europe">poststructuralist assumption</a> that changing the map is as important as, or more important than, changing the territory). And in the end, satisfying though these &#8220;anti-AGI&#8221; narratives may be to tell, <em>they are probably empirically wrong </em>in ways that will harm the very people whose independence and humanity they are ostensibly intended to defend.</p><p>A country that fails to adopt frontier AI systems rapidly and develop a hard-nosed AI strategy is one that will fall perilously behind both the U.S. and others. It is a country that will refuse to see and make tradeoffs worth making to secure its stake in the future. 
It is setting itself up for failure and dependence (if not outright subjugation) rather than prosperity and strength.</p><p>In the end, then, the rejection of &#8216;American&#8217; or &#8216;Anglo&#8217; AI concepts is simply a coping mechanism for weakness that masquerades as a show of strength. Ultimately, it makes all countries that embrace this manner of thinking weaker.</p><p>Getting out of this trap does not require one to &#8220;admit the American technologists were right.&#8221; It requires one to develop strategies that are robust to the increasingly likely scenarios where they are fundamentally correct. If the AI Summit is any indicator, most of the countries on Earth are developing AI strategies that bet <em>against </em>the central trends of deep learning, and thus the theses of the frontier labs. These strategies seem likely to fail if deep learning continues to work in anything like the way it has for the past fifteen years.</p><p>As <a href="https://www.thefai.org/posts/the-race-worth-winning-middle-powers-in-the-age-of-machine-intelligence">the report I co-authored with Leicht</a> argues, once countries <em>have </em>accepted the likely reality we occupy, a range of positive options becomes available. First and foremost, many Middle Powers and developing countries can bet that they have greater institutional flexibility than the more rigid U.S. Indeed, betting <em>on</em> continued American institutional sclerosis seems much safer than betting against deep learning. They can build new institutions, and reimagine existing ones, using the many new things frontier AI systems make possible (of which we have only scratched the surface). This is actually a twist on one of the common features of existing Middle Power strategy: the aforementioned focus on diffusion. Currently, that diffusion effort is usually centered on small and sub-frontier models, but there is no reason that frontier diffusion cannot be the focus instead. However, given that the countries in question often have limited resources, concentrated bets will be a necessity. This is yet another reason clarity about the direction of AI progress is so important; it enables strategic focus.</p><p>The slogan of the Summit was &#8220;AI for All, Welfare of All,&#8221; and while the sentiment is nice, the truth is that the U.S., and to a meaningfully lesser extent China, hovered silently over a great many of the conversations in Delhi this week. &#8220;For all&#8221; is not so much an invitation as a provocation to those Great Powers. It seems to say, &#8220;there are billions of us, and <em>this </em>is <em>our </em>A.I.&#8221;</p><p>But I choose to take the slogan seriously. The revolution unfolding is not a fight between the Middle and Great Powers, between the U.S. and the Global South, or between China and the U.S. The fight worth winning is humanity&#8217;s fight to navigate this transition as smoothly as we can manage.</p><p>Humans themselves, with our self-defeating tendencies, are probably the single biggest barrier to success. But the first step toward productive cooperation is recognition of a common problem, and a common problem we surely face. I went to Delhi with an outstretched hand, and outstretched it will remain.</p><p>&#8212;</p><p>On my walks and rides through Delhi, I kept coming back to one thought: this is a people with an unusually high tolerance for complexity, ambiguity, and improvisation. 
Perhaps these skills are more essential than any others for success as the revolution in machine intelligence unfolds. Perhaps I should be more worried about my own society, with its stasis and intolerance for the new (and to be clear, I am). Complexity science has taught us that there are systems <em>too </em>ordered to survive in a dynamic environment. Delhi, ultimately, is the biological city; Paris, I am afraid, would quickly perish in any real body, though once that was not true.</p><p>Order and predictability are blessings, but they can also be seeds of destruction. This is one of the reasons I have argued that the U.S. should aim to think of itself more like a developing country during this era. If I had to choose between exquisite order and flexibility in the years to come, I&#8217;d err in the direction of flexibility&#8212;but I&#8217;d be nervous about erring either way. What you want, but can never achieve, is the perfect balance. To have even a chance at balance requires constant adjustment and wagering&#8212;something the people of Delhi understand as well as anybody on this planet, and probably something they understand much better than many of my fellow Americans.</p><p>This never-ending need to readjust and remeasure one&#8217;s surroundings is what makes life complicated, but it is also what makes life possible. And thus there is no easy resolution. Civilization requires the right angles and the squiggles alike, and none among us knows in what ratio. So we walk down our paths, we see what we see, and we make our bargains and our wagers. The tension we feel never disappears, but only changes, creating itself in every moment.</p>]]></content:encoded></item><item><title><![CDATA[On Recursive Self-Improvement (Part II)]]></title><description><![CDATA[What is the policymaker to do?]]></description><link>https://www.hyperdimensional.co/p/on-recursive-self-improvement-part-d9b</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/on-recursive-self-improvement-part-d9b</guid><dc:creator><![CDATA[Dean W. Ball]]></dc:creator><pubDate>Thu, 12 Feb 2026 14:50:23 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kZjN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f70956b-24b6-432b-81c4-dcfa4095ead7_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hyperdimensional.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hyperdimensional.co/subscribe?"><span>Subscribe now</span></a></p><p><em>Continued from <a href="https://www.hyperdimensional.co/p/on-recursive-self-improvement-part">Part I</a> last week.</em></p><h4><strong>Introduction</strong></h4><p>On the same day I published Part I of this series, OpenAI released GPT-5.3-Codex, a new model that the company <a href="https://openai.com/index/introducing-gpt-5-3-codex/">claims</a> helped to engineer itself:</p><blockquote><p>The recent rapid Codex improvements build on the fruit of research projects spanning months or years across all of OpenAI. These research projects are being accelerated by Codex, with many researchers and engineers at OpenAI describing their job today as being fundamentally different from what it was just two months ago. 
Even early versions of GPT&#8209;5.3-Codex demonstrated exceptional capabilities, allowing our team to work with those earlier versions to improve training and support the deployment of later versions.</p><p>Codex is useful for a very broad range of tasks, making it difficult to fully enumerate the ways in which it helps our teams. As some examples, the research team used Codex to monitor and debug the training run for this release. It accelerated research beyond debugging infrastructure problems: it helped track patterns throughout the course of training, provided a deep analysis on interaction quality, proposed fixes and built rich applications for human researchers to precisely understand how the model&#8217;s behavior differed compared to prior models.</p></blockquote><p>These are the early stages. I expect the scale of automation to have expanded considerably within the coming year. </p><p>The upshot of last week&#8217;s analysis is that automated AI research and engineering is already happening to some extent (as OpenAI has demonstrated), but that we don&#8217;t quite know what this will mean. The <em>bearish</em> case (yes, bearish) about the effect of automated AI research is that it will yield a step-change acceleration in AI capabilities progress similar to the discovery of the reasoning paradigm. Before that discovery, new models came every 6-9 months; after it, they came every 3-4 months. A similar leap in progress may occur, with noticeably better models coming every 1-2 months&#8212;though for marketing reasons labs may choose not to increment model version numbers that rapidly. </p><p>The most bullish case is that it will result in an intelligence explosion, with new research paradigms (such as the much-discussed &#8220;continual learning&#8221;) suddenly being solved, a rapid rise in reliability on long-horizon tasks, and a Cambrian explosion of model form factors, all scaling together rapidly to what we might credibly describe as &#8220;superintelligence&#8221; within a few months to at most a couple of years from when automated AI research begins happening in earnest.</p><p>Both of these extreme scenarios strike me as live possibilities, though of course an outcome somewhere in between seems likeliest. Even in the most bearish scenario, the public policy implications are significant, but the most salient fact for policymakers is the uncertainty itself.</p><p>The current capabilities of AI already have significant geopolitical, economic, and national-security implications. Any development whose <em>conservative case </em>is a step-change acceleration<em> </em>of this already rapidly evolving field, and whose bullish case is the rapid development of fundamentally new, meaningfully smarter-than-human AI, has clear salience for policymakers. But what, exactly, should policymakers do?</p><h4>The Deficiencies of the Status Quo</h4><p>Right now, we rely predominantly on faith in the frontier labs to ensure that every aspect of AI automation goes well. There are no safety or security standards for frontier models; no cybersecurity rules for frontier labs or data centers; no requirements for explainability or testing for AI systems that were themselves engineered by other AI systems; and no specific legal constraints on what frontier labs can do with the AI systems that result from recursive self-improvement. 
</p><p>To be clear, I do not support the imposition of such standards at this time, not so much because they don&#8217;t seem important but because I am skeptical that policymakers could design any one of these standards effectively. It is also extremely likely that the existence of advanced AI itself will change both what is possible for such standards (because our technical capabilities will be much stronger) and what is desirable (because our understanding of the technology and its uses will improve so much, as will our apprehension of the stakes at play). Simply put: I do not believe that bureaucrats sitting around a table could design and execute the implementation of a set of standards that would improve status-quo AI development practices, and I think the odds are high that any such effort would <em>worsen </em>safety and security practices. </p><p>Thus, the current state of affairs&#8212;where we trust the labs to handle all these extremely important details&#8212;is the best option on the table. But that does not mean our trust should be blind. While labs such as Google DeepMind, OpenAI, and Anthropic have been relatively transparent about their work on many of these issues, that transparency has largely been voluntary and on terms set more or less entirely by the labs themselves.</p><p>In recent months, this has begun to change with the passage of SB 53 in California, and the very similar RAISE Act in New York. These bills require large AI developers to document their assessment of catastrophic risk potential from their most powerful models as well as what measures, if any, they employ to mitigate those risks. Importantly, both bills are scoped to include large-scale risks &#8220;resulting from <strong>internal use </strong>of [the developer&#8217;s] frontier models&#8221; (emphasis added; quote from SB 53). Both bills also reference risks stemming from the &#8220;loss of control&#8221; over, among other things, internal deployments of frontier models, a vague but nonetheless clear nod to one broad category of plausible risks posed by AI research automation.</p><p>Some critics of SB 53 and RAISE point out two key limitations: first, that they are primarily non-prescriptive, and thus create no substantive requirements for safeguards, security practices, and the like. The bills delegate the task of determining these details to the frontier labs themselves. Second, the laws have no mechanism for proactively verifying that labs comply with their safety and security frameworks as stated.</p><p>The first critique is perhaps the most obvious, but for the reasons of epistemic humility I describe above, this is only arguably a weakness. We do not know what the optimal standards and safeguards <em>are</em>, and in all likelihood, it will ultimately be technologists rather than technocrats who lead the way in the codification of these standards. Thus while this lack of prescriptiveness is a limitation of the law in one light, it is a strength in another.</p><p>Given this tradeoff, however, the second limitation becomes even more salient: there is no mechanism for verifying that frontier labs are in compliance with <em>their own </em>plans. </p><h4>A Better Way</h4><p>Imagine that there were a law requiring publicly traded companies to disclose their financial statements, but no institution of auditing. Walmart could fulfill its legal requirement by reporting its income, but the public could not verify that the number was accurate. 
In practice, transparency alone would do little to assure investors, employees, and others with an interest in the financial health of Walmart. </p><p>Of course, in the real world, we <em>do </em>have auditing, and it is for this reason that we collectively (for the most part) <em>trust </em>the numbers Walmart is required to disclose in its financial statements. It is not so much the legal disclosure requirement that creates trust, but institutions&#8212;private institutions with public oversight, in the case of auditing&#8212;that create the common sense of trust that undergirds financial markets worldwide.</p><p>It is worth pausing for a moment to reflect on this. We do not assess the health of company finances by having government regulators probe every firm&#8217;s books and operations. Instead, we have private, usually for-profit corporations that provide audits as a service. An audit is in part a verification that a company adheres to Generally Accepted Accounting Principles, which are themselves standards written by a private non-profit (the Financial Accounting Standards Board) that is overseen by a federal regulator (the Securities and Exchange Commission). And this whole apparatus, in addition to ensuring trust, is relatively cheap: though no one loves an audit (having run a non-profit that received annual audits, I can attest), the fees auditors assess are under 0.10% of a public firm&#8217;s revenue on average, meaning a firm with $10 billion in revenue would typically pay less than $10 million a year (source: Codex, analyzing SEC data, and also perusals of Google search results for a sanity check on the Codex analysis). </p><p>Audits are boring. They are not fun. Yet they are a civilizational accomplishment that enables many things we cherish. We should be proud of audits, auditing, and auditors; we should be proud that over centuries we invented a mechanism to establish trust where none naturally existed, and indeed where the incentives often push actors toward deception and against mutual trust. </p><p>Unfortunately, in today&#8217;s AI industry, the reality is closer to my fictitious example of the unaudited Walmart financial statements. Companies now have to disclose their safety policies, but there is no common trust that they are being followed. Worse yet, because these are policies about a rapidly evolving set of scientific, engineering, and technological frontiers, there are inevitably going to be ambiguities. How will we resolve such ambiguities in the absence of an architecture of trust?</p><p>The answer is unfortunately obvious: by arguing about them on the internet. This is precisely what happened after OpenAI released GPT-5.3-Codex, the first OpenAI model release after SB 53&#8217;s transparency provisions went into effect. Because OpenAI had already been publishing its safety policy as a voluntary commitment for over two years, nothing about OpenAI&#8217;s policies actually changed. What did change, however, is that OpenAI is now <em>legally obligated </em>to follow its policies&#8212;these same policies that necessarily have ambiguities, but no trusted institution to resolve them. 
</p><p>And so, predictably, within 24 hours of GPT-5.3-Codex&#8217;s release, an AI safety organization called The Midas Project wrote a <a href="https://x.com/TheMidasProj/status/2019837161647067627?s=20">breathless thread</a> on X alleging that OpenAI had &#8220;just broke[n]&#8221; SB 53 and &#8220;could owe millions in fines.&#8221; I am going to avoid weighing in specifically on the merits of these claims because I am advising an organization that is drafting a report on the implementation of SB 53. Conveniently, the merits are not the important thing for the purposes of this essay. The fact that this argument is even happening in such a disorderly fashion proves the point: We are trying to do the technocratic governance of high-trust societies in a low-trust environment. </p><p>What is needed in the governance of frontier AI catastrophic risk, then, is a similar sense of trust. That need not mean auditing in the precise way it is conducted in accounting&#8212;indeed, it almost certainly does <em>not </em>mean that, even if that discipline has lessons for AI. </p><p>This is not an original idea: Early public drafts of both SB 53 and RAISE contained provisions mandating audits for precisely this reason. But those provisions were only thin sketches. Key questions&#8212;including who would perform the audits, what qualifications auditors would be required to demonstrate, who would assess auditors and by what criteria, how the financial independence of auditors would be assured, and many others&#8212;were left unanswered, or deferred to later administrative rulemaking. Ultimately, a successful policy regime must be more than an afterthought. In the end, the auditing provisions were struck from these bills, and this was probably for the better.</p><p>But a premature idea is not a <em>bad </em>idea. And I suspect that over the coming year or two, the time will have come for independent verification of frontier lab claims by expert third parties. These would be non-governmental bodies that could, first and foremost, verify that frontier labs live up to their own public claims about safety and security and report their findings publicly and privately. In so doing, these private bodies could assist in the codification of private-sector-led technical standards related to agent security and similar issues.</p><p>In addition, such organizations could provide tailored reports to the government (for either public or private release) on the implications of automated AI research for things like the labor economy (after all, this will be the first truly large-scale deployment of plausibly job-replacing agents within firms), organizational economics, competitive dynamics within AI, national security, and geostrategy. Because of the clear nationwide relevance of all these questions, it is optimal for these private organizations to be overseen by federal government agencies as opposed to states. </p><p>Much of the work of organizations like this could likely be automated, or at least AI-assisted. Indeed, it is probably the case that no organization could fulfill a mission of this kind <em>without </em>the creative and extensive application of AI. 
Furthermore, the ideal version of this organization would be able to provide high-quality analytic services at a low cost, such that new entrants to the field would not find the cost of these services burdensome.</p><p>Given the struggles that governments have with operational efficiency, technology procurement and use, and expert recruitment, it is far more logical for the organizations that perform these verification services to be private entities rather than offices of government agencies.</p><p>The kind of organization I have described would not necessarily have to be created in a new law. It could simply be a non-profit or corporation that contracts with a frontier lab, perhaps as a condition of an insurance policy held by the frontier lab. The insurer would agree to underwrite certain categories of legal liability for frontier labs only on the condition that the lab undergo and pass verification.</p><p>Of course, one could imagine a legislative implementation of this idea as well. There are numerous paths one could take here, as well as a variety of political, legal, Constitutional, and political-economic questions to be answered with any such proposal.</p><p>There are also numerous failure modes for organizations such as this, whether implemented in law or not. One, which I have already mentioned, is that the cost and organizational complexity of working with a verification organization could make it difficult or impossible for new AI companies to enter the field. Others include industry capture, such that the verification organizations become less rigorous than is ideal or come to lobby alongside the AI industry for regulations that discourage new entrants. Finally, there is the risk that the entire enterprise becomes a box-checking exercise with little substantive benefit. Any proposal for operationalizing verification organizations, especially legislative proposals, must address these challenges head-on.</p><h4><strong>Conclusion</strong></h4><p>Organizations of this kind fit within the broader work I have done on &#8220;private governance,&#8221; which in turn builds on the work of the scholar Gillian Hadfield. My interest in institutions like this is longstanding, and it explains why I continue to be affiliated with organizations like Fathom (which is among the leading voices on such issues in the country) and hold an unpaid role as an advisor to the <a href="https://www.averi.org">AI Verification and Evaluation Research Institute</a>, as well as an advisor with a small equity position in <a href="https://aiuc.com">Artificial Intelligence Underwriting Company</a>, a startup focused on AI insurance.</p><p>It is among my top priorities for the coming year to figure out the precise scoping for organizations such as this (should they be confined only to studying AI research automation, or should they examine other domains as well?) and the optimal implementation (via public policy or purely via private markets?). Another key priority is doing my part to help this ecosystem, which to some extent has already evolved organically (including <a href="https://transluce.org">Transluce</a>, <a href="https://metr.org">METR</a>, the organizations I mention above, and others), to mature.</p><p>I hope others who feel compelled by the ideas I have described here will help advance this research agenda and these organization-building goals. 
While I will make specific bets about both how to do this (for example, a policy proposal) and <em>who </em>should do it, progress in the broad direction I have described matters much more to me than any particular approach or organization emerging victorious. Indeed, that is why I have chosen to convey the broad idea, as well as my own central motivation for pursuing it, before I have conveyed my specific proposal. My goal today is only to convince you that this is the general direction frontier AI policy should take. I expect to share more specific ideas about the path I propose soon.</p><p>AI policy has now firmly entered its &#8216;science fiction&#8217; era, where I suspect it will remain for many years to come. Legitimately strange things are happening, and stranger things yet will happen soon. There are two broad categories of societal reaction to these events: one is extreme panic, especially declarations that &#8220;it&#8217;s so over&#8221; and that machine takeover of human institutions is imminent. The other emerges in reaction to this first group, and seeks to find ways to erase all strangeness from the event in service of hard-nosed skepticism.</p><p>Both postures are unbalanced. We must acknowledge, indeed we must embrace, the strangeness of this moment. Yet we must avoid panic and hyperbole as well. We must face our predicaments&#8212;strange and futuristic though they may sometimes be&#8212;through new iterations of preexisting tools of public policy. The ideas I have discussed are essentially variations on existing kinds of institutions designed to solve structurally similar problems in the past. This, rather than back-of-the-envelope improvisation of new institutions, is the wise path, for it is through progressive adaptation to novel contexts that old institutions become new again.</p>]]></content:encoded></item><item><title><![CDATA[On Recursive Self-Improvement (Part I)]]></title><description><![CDATA[Thoughts on the automation of AI research]]></description><link>https://www.hyperdimensional.co/p/on-recursive-self-improvement-part</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/on-recursive-self-improvement-part</guid><dc:creator><![CDATA[Dean W. Ball]]></dc:creator><pubDate>Thu, 05 Feb 2026 14:51:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kZjN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f70956b-24b6-432b-81c4-dcfa4095ead7_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hyperdimensional.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hyperdimensional.co/subscribe?"><span>Subscribe now</span></a></p><h4><strong>Introduction</strong></h4><p>America&#8217;s major frontier AI labs have begun automating large fractions of their research and engineering operations. The pace of this automation will grow during the course of 2026, and within a year or two the effective &#8220;workforces&#8221; of each frontier lab will grow from the single-digit thousands to tens of thousands, and then hundreds of thousands. </p><p>This means that soon, the vast majority of frontier AI lab staff will neither sleep nor eat nor use the bathroom. 
They will grow smarter and more capable each month, not only because AI itself is already improving quickly but because the only objective these hundred-thousand-strong workforces will have <em>is to make themselves smarter.</em></p><p>The automation of AI research and engineering is probably the most important thing that will happen in the field of AI over the coming year (and one of the most important things in the history of the field), but it is frustrating to talk about because it is unlikely to &#8216;happen&#8217; in one recognizably discrete event, and indeed in some important sense it is already happening. More frustrating still is the fact that it will take place almost entirely behind closed doors.</p><p>Make no mistake: AI agents that build the next versions of themselves are not &#8220;science fiction.&#8221; They are an explicit and public milestone on the roadmap of every frontier AI lab. OpenAI has been the most transparent: <a href="https://www.youtube.com/watch?v=ngDCxlZcecw">they envision</a> hundreds of thousands of automated research &#8220;interns&#8221; within about nine months, and a fully automated workforce in about two years.</p><p>There is substantial uncertainty about what, exactly, automated AI research will mean. It <em>could </em>simply mean that AI capabilities progress unfolds faster, but within the familiar &#8220;generative AI&#8221; paradigm. This might not matter as much as many in the AI industry believe. It could also mean fundamental changes to the nature of AI itself and to the strategic dynamics that obtain in the field; these are the concerns that animate the <a href="https://ai-2027.com">AI 2027</a> project. To be clear, though, the debate should really be not about <em>whether </em>this automation will occur but about how it will occur&#8212;the details and the implications.</p><p>Policymakers would be wise to take especially careful notice of this issue over the coming year or so. But they should also keep the hysterics to a minimum: yes, this <em>really is </em>a thing from science fiction that is happening before our eyes, but that does not mean we should behave theatrically, as an actor in a movie might. Instead, the challenge now is to deal with the legitimately sci-fi issues we face using the comparatively dull idioms of technocratic policymaking.</p><p>This week, I&#8217;ll walk you through the range of scenarios I think may be possible with automated AI research and how it may affect the dynamics of AI development. Next week, in Part II, I will argue that regardless of the outcome, the automation of AI research and development changes the fundamental dynamics of the field enough to merit targeted policy action.</p><p>In both pieces, there is one assumption I&#8217;ll ask you to make with me, which is that substantial automation of AI research is a near-term possibility. This requires believing a few things. First, that AI research and engineering is substantively composed of work like finding optimizations in various complex software systems; designing and testing experiments for AI model training and posttraining; and creating software interfaces to expose AI model capabilities to users. Second, that a great deal of this work is essentially reducible to the engineering of software. Third, that AI models, while not yet geniuses, are reaching quite high levels of human competence. 
Fourth, that frontier lab leadership and staff are serious when they describe AI research automation as a near-term goal, and that frontier lab research staff are telling the truth when they say that AI is already writing a large fraction of their code.</p><p>These all strike me as reasonable propositions, and I&#8217;ll ask that you join me in the assumption that these propositions taken together mean that &#8220;automated AI research of some form or another will happen soon.&#8221; This will allow us to explore the interesting questions <em>about </em>automated AI research, as opposed to asking <em>whether </em>it will happen at all, which would require devoting thousands of words to rehashing debates about AI capabilities that were interesting in 2024 and have grown less interesting to me by the month. These debates have largely been settled by empirical reality, and it is long past time to move on and accept the enormity of what is unfolding at the frontiers of AI.</p><h4><strong>What Might Automated AI Research and Engineering Be Like?</strong></h4><p>Imagine yourself standing by a street and seeing a <a href="https://www.bugatti.com">Bugatti</a> race by you at 200 miles per hour; a few minutes later, a second Bugatti speeds by at 300 miles per hour. This difference in velocity is huge to anyone inside the car, and any seasoned observer of motorsport would know that a mere 200 miles per hour is common Bugatti territory, whereas 300 miles per hour is approaching a world record for conventional vehicle speed. But the random bystander on the street might not notice much of a difference between &#8220;extremely fast&#8221; and &#8220;world-historically fast.&#8221;</p><p>The current rate of AI capabilities improvement has already surpassed the ability of most humans to keep track. It is therefore entirely possible that the automation of AI research may lead to a dramatic acceleration in AI capabilities advances, and that most of the public (and policymakers) will not really notice, especially in the early stages of the automation. The predictable result of this will be that pundits say, &#8220;those AI hypesters promised us that this supposed &#8216;AI research automation&#8217; would finally mean that AI would live up to its promises; but once again, it has just resulted in more of the same empty promises.&#8221; Therefore, this scenario&#8212;call it &#8220;AI as a normal exponentially self-improving technology&#8221;&#8212;is the bearish scenario for AI research automation.</p><p>But something else <em>is </em>possible, too. Imagine that instead of merely traveling 100 miles per hour faster, the second Bugatti <em>learned how to fly</em>. The bystander on the street would notice the flying Bugatti not so much for its speed but for the fact that it is flying. And imagine even further that it really was the Bugatti <em>itself </em>that learned how to fly; the humans ostensibly at the steering wheel can explain to the public what the Bugatti did to make itself fly, but it was not ultimately <em>their </em>work. The Bugatti built them a joystick that operates the vehicle in flight, but again, the joystick was built by the machine rather than the humans. 
No human personally wired it all together, and while there exist detailed specifications for every single component, they too were not written by humans.</p><p>As our street bystander ponders this incredible feat and tries to sort his way through the millions of words written by the Bugatti explaining how it achieved flight, the Bugatti lets him know that it just figured out a way to reduce its price by 99%. When this was a mere car, it was one of the most expensive in the world. Now it flies and costs as much as a Toyota Corolla. Oh, and the Bugatti also informs the bystander that it is pursuing a new engineering path that could allow the car to leap 30,000 feet into the air in a matter of seconds, as well as operate underwater. This is more speculative, but the car reckons it can make meaningful progress within a year or so.</p><p>This is obviously a heavily stylized metaphor, but you get the idea. In one outcome, the automation of AI engineering <em>is hugely important </em>yet doesn&#8217;t result in a fundamental change to the dynamics of the field. In the other, something altogether new is afoot.</p><h4>Automated AI Research Within the Labs</h4><p>We don&#8217;t know which of these scenarios describes our future more accurately. Differences in intuition about this question explain many downstream disagreements on near-future AI capabilities and how much AI needs to be regulated.</p><p>Those with a more bearish view on AI research automation point out that diminishing returns are common in the field. At the risk of torturing the Bugatti metaphor to death: as a car picks up speed, the amount of energy required to continue accelerating increases nonlinearly. It requires about twice as much energy to accelerate from 200 miles per hour to 250 miles per hour as it does to accelerate from 100 miles per hour to 150, even though the absolute change in speed (50 miles per hour) is identical.</p><p>The same dynamic often obtains in artificial intelligence; famously, the &#8216;<a href="https://arxiv.org/abs/2001.08361">scaling laws</a>&#8217; that appear to describe the relationship between computing power, data, and model performance suggest that order-of-magnitude increases in input resources yield an additional &#8216;nine&#8217; of reliability. In other words: for ten times more compute, you can go from 9 to 90 percent reliability, but then the next tenfold increase in compute only brings you from 90% to 99%, and then 99% to 99.9%, and so on. Very quickly, astronomical amounts of compute are needed for only minuscule improvements to capability.</p><p>Those who are more bullish on AI research automation retort by observing that the field of AI is still replete with low-hanging fruit well beyond na&#239;ve scaling of resources along the lines described above. In particular, they point to an extremely broad set of improvements known as &#8216;algorithmic efficiency.&#8217; Dario Amodei <a href="https://www.darioamodei.com/post/on-deepseek-and-export-controls">has said</a> that, with human-driven research and engineering, individual labs achieve something like 400% efficiency improvements per year. 
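</p><p>For readers who want to check the arithmetic in the last two paragraphs, here is a minimal sketch; it is my own illustration of the pattern, not a calculation drawn from the scaling-laws paper itself:</p><pre><code># Toy arithmetic for the two claims above; illustrative only.

# 1. Kinetic energy grows with the square of velocity (KE = 0.5 * m * v**2),
#    so adding 50 mph costs more energy the faster you are already going.
#    Mass cancels in the ratio, so we can compare v**2 differences directly.
delta_slow = 150**2 - 100**2    # 12,500
delta_fast = 250**2 - 200**2    # 22,500
print(delta_fast / delta_slow)  # 1.8 -- roughly twice the energy

# 2. One additional 'nine' of reliability per tenfold increase in compute:
#    each 10x of compute cuts the failure rate by roughly a factor of ten.
compute, reliability = 1, 0.90
for _ in range(3):
    compute *= 10
    reliability = 1 - (1 - reliability) / 10
    print(f"{compute}x compute -> {reliability:.3%} reliable")
# 10x -> 99.000%, 100x -> 99.900%, 1000x -> 99.990%: ever-larger
# inputs buy ever-smaller gains.
</code></pre><p>The point of the sketch is only the shape of the curve: roughly linear gains in reliability demand exponential gains in input resources.</p>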
<p>Amodei describes these efficiency improvements as a &#8220;compute multiplier&#8221;: the same amount of compute can deliver a model that is 4 times better than would otherwise be possible without the efficiency improvements.</p><p>These gains come from all sorts of places: model architecture tweaks that improve how well the model learns from training data or leverage compute more effectively; improvements to training datasets that allow the model to learn more quickly; enhancements to the tooling and technical infrastructure that labs use to train and deploy models; and many other things besides. Collectively, small and medium-sized gains add up to the 400% efficiency improvements Amodei describes. AI research and engineering, in practice, is the grinding pursuit of these little gains far more often than it is the pie-in-the-sky investigation of entirely new paradigms (though to be clear, this does happen within labs too). Indeed, lab executives have said repeatedly and for years that the distinction between research (high status, rarefied) and engineering (the low-status grind) is false, and that in practice the two disciplines often converge. This is why OpenAI, famously, uses the job title &#8220;Member of Technical Staff,&#8221; rather than &#8220;researcher&#8221; or &#8220;engineer.&#8221;</p><p>We know that AI labs are constrained by talent; this is why they routinely offer top-tier personnel compensation packages worth tens or hundreds of millions of dollars. For the sake of illustration, imagine that a lab has 1000 research and engineering staff, 800 of whom are grinding away in the search for gains within the current paradigm, and 200 of whom are investigating new paradigms. Both do their jobs by designing and conducting experiments in an iterative fashion; in both cases, a huge amount of this work can be described as &#8220;writing code&#8221; and &#8220;engineering software.&#8221; They run the experiments, they analyze the results, and they write up their findings.</p><p>This is of course an absurdly high-level description, but it is also not <em>that </em>hard to imagine current models automating large portions of this work. And indeed, it seems clear that they already do; frontier lab staff now frequently say that AI models write most or all of their code. Current models are arguably <em>already </em>better at coding than many human researchers (particularly when considering the opportunity cost of the researchers&#8217; time), and the direction of travel on this trend is obvious (I bet you the models will get better!).</p><p>Where the models currently suffer from reliability and quality issues is in the execution of experiments over, say, several days, though they improve along these dimensions constantly. The other deficit of the models&#8212;and here they often fall down altogether&#8212;is in the generation of interesting hypotheses and research agendas. A brilliant human researcher may have some insight about a new direction of research and spend months refining his thesis, recruiting colleagues to his cause, and persuading management to allocate compute to test his ideas. Models do not tend to come up with great insights like this.</p><p>But do they need to? What if, instead, the brilliant human researcher had an army of automated junior researchers he could use to test his ideas in a way that would have been impossible without advanced AI? Models could autonomously perform countless iterations on the human&#8217;s fundamental experimental insight. 
It is hard to imagine a world in which this capability does not end up being utterly transformative to the work of the researcher, but there are still questions one can ask about the broader impact. If, for example, compute remains a binding constraint on labs, then the allocation of compute to different research directions will still be a matter of bureaucratic process, and ultimately, politicking within frontier labs. These are messy human processes; they require hashing out differences of opinion and making fundamental strategic tradeoffs about what kinds of research to pursue. To say the least, this seems much more difficult to automate than coding.</p><p>Thus one can imagine two things happening in parallel. First, the 800 researchers who are grinding away within the current paradigm suddenly have vastly more bandwidth to search for more efficiency gains. An extreme outcome from this would be that labs discover dramatically more efficiency gains; it turns out there was a vast field of low-hanging fruit just waiting to be plucked, and we simply did not have enough researcher time to find it. In this world, perhaps algorithmic efficiency gains scale cleanly with the number of automated researchers: 10 times the number of researchers means that annual improvements in algorithmic efficiency are 4000% rather than 400%.</p><p>But this seems unlikely. Maybe the human researchers were doing a pretty good job all along, and discovering most of the algorithmic efficiency gains that were practical to employ. In this world, perhaps automated researchers merely double the rate of improvement (800% per year) or, even worse, accelerate it by, say, 20% (for a final annual efficiency gain of 480% per year). In the most extreme bear scenario, there are literally no new gains to be discovered, so all automated researchers do is enable labs to find the same gains they would have counterfactually found with humans, but much faster than with purely human-driven research.</p><p>Then, at the same time, those 200 &#8220;new paradigm&#8221; researchers suddenly have the ability to systematically investigate novel research directions with far greater velocity than is currently possible. Determining how much this matters requires considerable speculation; how good are the researchers&#8217; ideas? How many novel directions are there left to pursue in deep learning, and how many of them can be pursued without the collection of new, real-world datasets? For example, perhaps we can train robots capable of automating all physical labor with enough human perception data, but the collection of that data cannot be automated by AI researchers. One can be pessimistic that this will mean radical acceleration of research progress, but it would be strange to imagine <em>no </em>meaningful gains in the progress of new research paradigms.</p><p>Add to this the reality that <a href="https://epoch.ai/data/data-centers">all frontier labs will bring </a><em><a href="https://epoch.ai/data/data-centers">massive </a></em><a href="https://epoch.ai/data/data-centers">new computing resources online within the coming year</a>. These data centers are dramatically larger than anything that has come before, and are really the first manifestation of the AI infrastructure boom. Remember, for example, that we have not seen any models trained on Blackwell-generation chips, and soon each lab will have hundreds of thousands of them (and the Rubin generation will begin real-world deployment in the coming months, too). 
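</p><p>To get a feel for how much these scenarios differ once they compound with the hardware buildout, here is a minimal, purely illustrative sketch; the 3x annual hardware growth figure is my own assumption for illustration, not a published number, and the multipliers are simply the scenarios described above:</p><pre><code># Purely illustrative: combine an assumed rate of hardware growth with
# the algorithmic-efficiency scenarios above to estimate annual growth
# in "effective compute" (hardware growth x algorithmic multiplier).
hardware_growth = 3.0  # assumption: ~3x more physical compute per year

scenarios = {
    "bear: same gains, found faster (400%/yr)": 4.0,
    "modest bump (480%/yr)": 4.8,
    "doubled rate (800%/yr)": 8.0,
    "gains scale with researchers (4000%/yr)": 40.0,
}

for name, algo_multiplier in scenarios.items():
    effective = hardware_growth * algo_multiplier
    print(f"{name}: ~{effective:.0f}x effective compute per year")
</code></pre><p>Even the bear case compounds startlingly fast; the difference between the scenarios is whether the field&#8217;s effective resources grow by roughly one order of magnitude per year or by more than two.</p>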
<p>For all the talk of AI infrastructure, we really have not seen what our AI industry can do with gigawatt-scale computing power. There is a credible case to be made that there is a looming capabilities overhang <em>from this alone</em>, and that this overhang will be realized at the same time that automated AI researchers begin to be deployed in earnest. It would therefore be wise to expect 2026 to be a more rapid year of AI progress than any year that has come before. Indeed, this is probably the conservative forecast at this point.</p><p>All of this together creates a clear picture: This year, the automation of AI research and engineering will begin in earnest. In addition to creating <em>at least </em>a step-change improvement in AI progress from its already rapid pace, this could change the dynamics of AI competition, alter AI geopolitics, and much more. Next week, I will discuss a targeted policy measure to shed more light on this development as it unfolds. </p>]]></content:encoded></item><item><title><![CDATA[On AI and Children]]></title><description><![CDATA[Five-and-a-half conjectures]]></description><link>https://www.hyperdimensional.co/p/on-ai-and-children</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/on-ai-and-children</guid><dc:creator><![CDATA[Dean W. Ball]]></dc:creator><pubDate>Thu, 22 Jan 2026 14:46:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mLaj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49371abf-2579-47be-8114-3e0ca580af8b_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hyperdimensional.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hyperdimensional.co/subscribe?"><span>Subscribe now</span></a></p><h4><strong>Introduction</strong></h4><p>The first societal harms of language models did not involve bioattacks, chemical weapons development, autonomous cyberattacks, or any of the other exotic flavors of risk focused on by AI safety researchers. Instead, the first harms of generalist artificial intelligence were decidedly more familiar, though no less tragic: <a href="https://www.hyperdimensional.co/p/for-all-issues-so-triable">teenage suicide</a>. Very few incidents provoke public outcry as readily as harm to children (rightly so), especially when the harm is perceived (rightly or wrongly) to be caused by large corporations chasing profit.</p><p>It is therefore no surprise that child safety is <a href="https://www.hyperdimensional.co/p/the-ai-patchwork-emerges">one of the most active areas</a> of AI policymaking in the United States. Last year saw <a href="https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation">dozens</a> of AI child safety laws introduced in states, and this year will likely see well over one hundred such laws. In broad strokes, this is sensible: like all information technologies, AI is a cognitive tool&#8212;and children&#8217;s minds are more vulnerable than the minds of adults. 
The <a href="https://en.wikipedia.org/wiki/Communications_Decency_Act">early</a> regulations <a href="https://en.wikipedia.org/wiki/Child_Online_Protection_Act">of the internet</a> were also <a href="https://en.wikipedia.org/wiki/Children%27s_Online_Privacy_Protection_Act">largely passed</a> with the <a href="https://en.wikipedia.org/wiki/Children%27s_Internet_Protection_Act">safety of children</a> in mind.</p><p>Despite the focus on this issue by policymakers (or perhaps because of it), there is a great deal of confusion as well. In recent months, I have seen friends and colleagues make overbroad statements like, &#8220;AI is harmful for children,&#8221; or &#8220;chatbots are causing a major decline in child mental health.&#8221; And of course, there are political actors who recognize this confusion&#8212;along with the emotional salience of the topic&#8212;and seek to exploit these facts for their own ends (some of those actors are merely self-interested; others understand themselves to be fighting a broader war against AI and associated technologies, and see the child safety issue as a useful entry point for their general point of view).</p><p>There are good and bad ways to write AI child safety laws. At the object level, it seems to me that the prudent law to pass today would require large AI companies to enable age verification or detection, impose content guardrails for minors, and offer parental controls. That is simple enough.</p><p>Yet I can&#8217;t help but feel that almost all conversations about AI use by children tend to ignore the most important questions about the technology, in addition to being frequently riddled with misconceptions and falsehoods. Indeed, in just the last couple of months, the rise of coding agents has given the issue of &#8220;AI child safety&#8221; an entirely new meaning for me and raised a new set of open questions.</p><p>So object-level policy is not what I want to talk about today; instead, I want to outline how I think about this issue in a series of conjectures. Rather than one long argument, this will be several interrelated and shorter ideas.</p><h4><strong>Conjecture #1: AI is not especially similar to social media</strong></h4><p>AI is, in part, a consumer technology being deployed at mass scale, with uncertain and probably large societal-scale implications. It is a digital technology based upon &#8220;algorithms.&#8221; In at least some cases, it will be monetized with advertisements. These characteristics cause many observers to pattern-match, implicitly or explicitly, to the experience of social media. While these similarities are real, viewing AI through the lens of social media probably distorts more than it sharpens.</p><p>The most important distinction is that AI use is <em>fundamentally </em>creative, whereas social media in its contemporary form is fundamentally consumptive for the vast majority of users (adult or child). Social media is characterized by a large number of content consumers and a small number of content producers who create virtually all of the material that gains traction. To get started on social media, you don&#8217;t need ideas of your own for what to create; you simply set up your account, begin scrolling through content created by others, and let the algorithm do its thing.</p><p>Generative AI, on the other hand, presents users with a blank box and a blinking cursor. &#8220;What do <em>you </em>want to do?&#8221; it asks. 
The &#8220;algorithm&#8221; of generative AI (even this term, in its vernacular usage, is not well suited to AI) is purely reactive, creating content or taking action only after the user has generated an input sequence (a prompt, a question, a goal) for it to process. This is an inherently different posture from social media, and for obvious reasons, it may lend itself to much more productive and creative activity than did social media.</p><h4><strong>Conjecture #2: We do not know what an &#8220;AI companion&#8221; really is</strong></h4><p><a href="https://www.hyperdimensional.co/p/where-do-we-stand">AI is a new kind of &#8220;character&#8221; on the world stage.</a> We do not know what that means, but without a doubt, this is a new chapter in the long history of human-machine interaction. Some aspects of it will seem strange to many of us. That is not so much a problem to be solved as it is a fact of technological progress. Imagine telling someone from the 1920s that the people of 2026 would be able to <a href="https://github.com/amannm/doordash-mcp">talk to an object in their pocket and cause prepared food to appear at their front doorstep within half an hour or so</a>.</p><p>I personally have affection&#8212;not simply intellectual admiration but genuine emotion&#8212;for exquisitely crafted tools: <a href="https://monochrome-watches.com/app/uploads/2022/03/Patek-Philippe-5172G-Chronograph-Salmon-Dial-2.jpg">watches</a>, <a href="https://www.mcintoshlabs.com/products/amplifiers/MC451">audio equipment</a>, <a href="https://www.jetpens.com/Kaweco-Special-Mechanical-Pencils/ct/4240">pencils</a>, <a href="https://lunor.com/en/kollektion/a12-501/?filter_farben=col-02-havanna-dunkel-en">glasses</a>, and of course, <a href="https://www.hyperdimensional.co/p/measuring-up">computers</a>. And I have a similar kind of affection&#8212;at least it lives in the same emotional neighborhood&#8212;for the very best large language models. I love these things as one might love a work of art, and I admire them in the way that one might admire the Moon landing. </p><p>I am sure that my experience is not unique, but perhaps it is not common either. No matter: the point is that the range of possible relationships&#8212;including ones where the human draws emotional support from the AI&#8212;is extremely broad. The best language models today offer me advice on some of the weightiest personal and professional questions I must grapple with. I consider them some of the best thought partners I have in my life, and I say this as someone who has the immense privilege of a rich and diverse social life. Speaking as someone who has in the past had a professional psychologist, I am <em>sure </em>that AI, used responsibly, can provide better-than-human therapeutic advice, life coaching, or whatever moniker you prefer.</p><p>Sometimes, my wife will ask Claude a question about our new child&#8217;s progression, and it will, unbidden, offer friendly words of comfort after what the model can infer was clearly a long and rough day. I am sure that hundreds of thousands, if not millions, of other mothers around the world have experienced the same. <em>There is nothing wrong with this</em>, nor with similar interactions a child has with a model. If a child is clearly struggling with homework, or with an interpersonal problem in school, it is fine, probably even healthy, for them to talk it out with a language model.</p><p>Are there versions of this human-machine relationship that can veer into unhealthy territory? Of course. 
And this brings me to the next conjecture.</p><h4><strong>Conjecture #3: AI is already (partially) regulated&#8212;by tort liability</strong></h4><p>The best examples we have of genuine AI-related tragedies involve children interacting with pure companion AIs from firms like Character.AI or with generalist AIs like ChatGPT. The phenomena of &#8220;LLM psychosis,&#8221; teenage suicidality, and the like are particularly associated with GPT-4o, and to a lesser extent with the competing LLMs of its vintage (Claude Sonnet 3.7, Gemini 2, etc.).</p><p><a href="https://socialmediavictims.org/press-releases/smvlc-tech-justice-law-project-lawsuits-accuse-chatgpt-of-emotional-manipulation-supercharging-ai-delusions-and-acting-as-a-suicide-coach/">These tragedies have produced lawsuits</a>, and everything about those lawsuits&#8212;the bad PR at the outset, the expense and complexity of being party to a major lawsuit, the potential embarrassment when documents from the case make their way to the public eye, and of course the potential (immense) cost of settlement or jury-awarded damages&#8212;creates a powerful incentive against allowing similar tragedies to occur. Already, therefore, these lawsuits have prompted OpenAI&#8212;and I would bet others soon enough&#8212;to <a href="https://openai.com/index/teen-safety-freedom-and-privacy/">voluntarily undertake</a> age detection, parental controls, and content guardrails for minors, precisely the policy outcomes I think would be most appropriate to effect in a new law.</p><p><em>This is the way the American system is supposed to work</em>. We permit people to try new things, and when those new things harm people or their property, they can pursue monetary compensation in court. The tort system, in theory and (sometimes imperfectly) in practice, incentivizes firms to internalize the negative externalities of their commercial activity. Most prudent firms will want to minimize known lawsuit risks at the outset, which will cause <em>them </em>to take preemptive action (like what OpenAI has done with child safety).</p><p>Of course, not all firms will internalize the incentives of tort liability in the same way; perhaps some will disregard them altogether. Hence the case for a simple law that codifies basic guardrails for children, imposing such requirements as a baseline for all competitors in the industry. Again, this would be an example of a pattern that is very typical in American legal and regulatory history.</p><p>Common law liability works best (though not exclusively) when there is a tangible, usually physical, harm. That will not always be the case. What if there are types of human-AI relationships that some external observers simply do not like, because they find them strange, or gross, or offensive? Well&#8230;</p><h4><strong>Conjecture #4: AI chatbot regulations will probably be heavily bounded by the First Amendment</strong></h4><p>I have many conservative friends who clearly have an aesthetic revulsion to the notion of anyone, especially children, deriving any kind of emotional satisfaction from a relationship with AI. In some cases I share this sentiment; in many I am more open-minded than they are, more willing to give both technologists and the users of their products the benefit of the doubt.</p><p>But I also know that our Constitution puts strict limits on the ability of government to stop citizens from engaging in purely cognitive activity within the privacy of their own homes. 
It is, in many cases and whether anyone likes it or not, the <em>right </em>of an American to have whatever kind of relationship with an AI they deem appropriate.</p><p>Of course, not all speech is uniformly protected by the Constitution. When national security is at risk, or when human lives or property are on the line, you do not necessarily enjoy an unfettered right to free speech. And tort liability can collide with the First Amendment in surprising ways, though this area of the law is badly underdeveloped relative to many other areas of First Amendment jurisprudence.</p><p>Most of the time, however, you do have an exceptionally broad right of free speech, and that right includes not just the freedom to express yourself but also the freedom to access the self-expression of other people and, yes, corporations.</p><p>Attempting to regulate what language models can say to people&#8212;in other words, what kinds of ideas people can derive from a technology many people liken to a modern-day <em>printing press</em>&#8212;is <em>obviously </em>the regulation of speech.</p><p>The extent to which minors enjoy the same speech rights as adults is very much up for debate&#8212;and the Supreme Court <a href="https://law.justia.com/cases/federal/appellate-courts/ca5/24-60341/24-60341-2025-04-17.html">may shift that debate in the near future</a>. Nearly everyone agrees that minors should not have the same rights of access to, say, pornography as adults. But as proposed regulations get closer to the regulation of intellectual and emotional conversation with what might literally be the smartest entities in the world, the odds of laws failing under Constitutional scrutiny increase.</p><p>The First Amendment has been a hindrance to many a statute drafter throughout American history. That is as it should be. The First Amendment is a <em>regulation imposed on the government by the people</em>, one of the few reminders we private citizens have left that sovereignty, in theory, rests with us, and not with our government. It may cause some headaches, especially for conservatives who want (often for good reason) to regulate the excesses of social media. I sympathize with the frustration, but ultimately, the headaches are worth it.</p><h4><strong>Conjecture #5: AI child safety laws will drive minors&#8217; usage of AI into the dark</strong></h4><p>Every time a law makes a child&#8217;s use of AI more legible to the state, or even to their parents through mandatory parental controls, more children will be motivated to use AI in illegible ways. Right now, that will probably mean open-weight LLMs (the best of which, these days, are developed by Chinese companies) being served on websites hosted outside the United States (to complicate enforcement of American law).</p><p>It&#8217;s worth hammering home this tradeoff clearly: the existence of open-weight LLMs means that no regulation of AI child safety&#8212;or really any other regulation of an AI model or system&#8212;will be universally observed. In fact, it means the opposite: noncompliant AI will proliferate, and the existence of the law <em>will be the cause of that proliferation</em>.</p><p>Here is an example: If millions of parents dislike their children using AI for their homework, and use their newfound mandatory parental controls to prevent their kids from doing so, there will be a demand among children for access to AI their parents cannot monitor so easily. 
Open-weight models might not be at the frontier of intelligence, but most of them are more than smart enough to write a B+ middle or high school essay at an average American public school.</p><p>Say you are a parent concerned about &#8220;surveillance capitalism&#8221; (a buzzphrase that refers to the broad concept of the monetization of data about people at scale), about online ads being served to your children, and about the attention-economy dynamics that can incentivize large online platform owners to get people&#8212;especially children&#8212;&#8220;addicted&#8221; to their services.</p><p>Say you also don&#8217;t want your child using ChatGPT for homework. So you use OpenAI&#8217;s helpful parental controls to tell the model not to help with requests that seem like homework automation. Your child responds by switching to doing their homework with one of the AI services that does not comply with the new kids&#8217; safety laws. Now your child is using an AI model you have no visibility into, quite possibly with minimal or no age-appropriate guardrails, sending their data to some nebulous overseas corporate entity (I wonder if they&#8217;re GDPR compliant?), and quite possibly being served ads, engagement bait, and the like. Oh, and they&#8217;re still automating their homework with AI.</p><p>In this case, the law has not only failed to help, <em>it has actively made things worse</em>. This is an unavoidable part of lawmaking; demand for legibility by the authorities creates an attendant demand for illegibility among those who would rather not be legible. This does not mean we shouldn&#8217;t pass kids&#8217; safety laws; everything has tradeoffs. We should, however, write and enact laws with a clear sense of what tradeoffs we are making.</p><p>None of the above is a criticism of open-weight AI. I&#8217;ve never believed it was worthwhile to debate <em>whether </em>we should &#8220;allow&#8221; open-weight and open-source AI. Rather, I have always believed it is a simple fact of reality. There is nothing we can do about open-weight models; no government will exercise durable control over very capable neural networks, as I am fond of saying. There are many advantages of open-source and open-weight AI, and in my view these dramatically outweigh the downsides. We should embrace those advantages, but we should also not pretend as though there are no downsides to this aspect of our digital reality.</p><h4><strong>Conclusion, and Conjecture #5.5</strong></h4><p>Despite all the outrage about AI and children, I am aware of literally no online kids&#8217; safety advocate who has said anything about the most interesting trend in AI today: coding agents. It is clear this is where AI is headed, and that over time more consumer-friendly versions of such agents will be built (though it is also worth noting that children today can and absolutely will teach themselves how to use, e.g., Claude Code in the Terminal; I started learning <a href="https://en.wikipedia.org/wiki/Bash_(Unix_shell)">Bash</a> when I was around 11, and Claude Code is far simpler than that).</p><p>There are two complications coding agents pose to child safety laws. The first is that laws written too broadly might inadvertently cover startups that make agentic coding products intended primarily for business use. 
This is what the <a href="https://calmatters.org/wp-content/uploads/2026/01/Parents-Kids-Safe-AI-Act-Amendment-1-250801.pdf">California ballot initiative</a> that OpenAI and Common Sense Media <a href="https://www.commonsensemedia.org/press-releases/common-sense-media-openai-join-forces-on-strongest-youth-ai-safety-measure-in-us">have teamed up on</a> seems to do. That is a silly and damaging outcome.</p><p>But the second is more interesting: are these coding agents&#8212;not just the ones we have today but the ones that clearly will exist in 6-18 months&#8212;simply too powerful for children? Would you give a child a chainsaw, or the keys to your car, and let them use those technologies with no supervision?</p><p>Coding agents of the near future might well be able to scrape hours of pornography from the internet, discover vulnerabilities in school networks to access private school documents (like answer keys or grades), hack into the smart home equipment of the girl a fifteen-year-old boy has a crush on, and so on. Is there some broader education we might wish to impart to a child before we let him use technologies with such power? Is there some societal sense of individual responsibility in the use of AI we should be attempting to develop and instill? And who is talking about individual <em>accountability</em> for harms from AI, as opposed to shifting the blame for all harms onto the companies that made the tools?</p><p>This, rather than <a href="https://www.judiciary.senate.gov/press/dem/releases/durbin-hawley-introduce-bill-allowing-victims-to-sue-ai-companies">strange alliances with trial lawyers</a> and <a href="https://x.com/deanwball/status/2014117029326795092?s=20">occupationally licensed therapists</a>, strikes me as a more promising direction for a genuinely pro-child safety, pro-family, and pro-social policy on artificial intelligence. Dare I say, there is something that feels authentically <em>conservative </em>about it too&#8212;far more conservative, I must admit, than just about anything the right has thus far mustered. As ever, in a field as fresh as AI, there are 100-dollar bills lying all over the ground.</p>]]></content:encoded></item><item><title><![CDATA[The AI Patchwork Emerges]]></title><description><![CDATA[An update on state AI law in 2026 (so far)]]></description><link>https://www.hyperdimensional.co/p/the-ai-patchwork-emerges</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/the-ai-patchwork-emerges</guid><dc:creator><![CDATA[Dean W. Ball]]></dc:creator><pubDate>Thu, 15 Jan 2026 13:45:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mLaj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49371abf-2579-47be-8114-3e0ca580af8b_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hyperdimensional.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hyperdimensional.co/subscribe?"><span>Subscribe now</span></a></p><p><em>Dear readers, <br>I am pleased to announce a new paid tier of </em>Hyperdimensional: <em>&#8220;institutional.&#8221; This subscription is for firms seeking private, one-on-one or small-group conversations with me about the various matters of AI and AI policy I cover in this newsletter. 
Subscribers at this tier will get one one-hour meeting of this kind per quarter. The price is $7,500 per year. Weekly </em>Hyperdimensional <em>articles will remain free of charge. I also expect to announce new benefits for subscribers at my pre-existing paid tier in the near future. Please do not hesitate to email me with any questions. Those interested may <a href="https://hyperdimensional.co/subscribe">subscribe here</a> or at the button above this text. </em></p><p><em>On to this week&#8217;s essay.</em></p><h4><strong>Introduction</strong></h4><p>State legislative sessions are kicking into gear, and that means a flurry of AI bills is already under consideration across America. In prior years, the headline number of introduced state AI laws has been large: famously, 2025 saw over 1,000 state bills related to AI in some way. But <a href="https://www.hyperdimensional.co/p/whats-up-with-the-states">as I pointed out</a>, the vast majority of those laws were harmless: creating committees to study some aspect of AI and make policy recommendations, imposing liability on individuals who distribute AI-generated child pornography, and other largely non-problematic bills. The number of genuinely substantive bills&#8212;the kind that impose novel regulations on AI development or diffusion&#8212;was relatively small.</p><p>In 2026, this is no longer the case: there are now numerous substantive state AI bills floating around covering liability, algorithmic pricing, transparency, companion chatbots, child safety, occupational licensing, and more. In previous years, it was possible for me to independently cover most, if not all, of the interesting state AI bills at the level of rigor I expect of myself, and that my readers expect of me. That is no longer possible. There are simply too many of them.</p><p>It&#8217;s not just the topics that vary. It&#8217;s also the approaches different bills take to each topic. There is not one &#8220;algorithmic pricing&#8221; or &#8220;AI transparency&#8221; framework; there are several of each.</p><p>The political economy of state lawmaking (in general, not specific to AI) tends to produce one of three outcomes. First, states sometimes <em>do </em>converge on common legislative standards&#8212;there are entire bodies of state law that are largely identical across all, or nearly all, states. The second possibility is that states settle on a handful of legal frameworks, with the strictest of the frameworks generally becoming the nationwide standard (this is how data privacy law in the U.S. works). Third, states will occasionally produce legitimate patchworks: distinct regulatory regimes that are not easily groupable into neat taxonomies. </p><p>We are early in this legislative session, and more broadly in AI policymaking, so we cannot yet jump to conclusions. However, the early signs from this year&#8217;s legislative session suggest a true patchwork of state AI law is not only possible, but perhaps even likely. </p><p>Therefore, I am going to take a different approach this legislative session: I will cover bills by theme, selecting a few exemplary bills from each for deeper analysis. Please know that there are limitations to this approach: even in a domain of law I cover, I am only giving you a blurry impression, and there will be some areas of law I skip altogether. 
</p><h4><strong>&#8220;Transparency&#8221;</strong></h4><p>The first clear takeaway from this year&#8217;s state legislative bills is that the concept of &#8220;transparency&#8221; as an instrument of AI governance has been stretched so far that it is no longer useful. <em>Everyone </em>wants to say their bill is a &#8220;transparency&#8221; mandate, because this sounds like a lighter touch than &#8220;regulation.&#8221; The result is that numerous public policy objectives have been shoehorned into the category of &#8220;transparency.&#8221; A few examples suffice.</p><p>Some bills require that employers who use AI disclose that use to employees, customers, the general public, the government, or other counterparties.</p><ul><li><p>New York&#8217;s <a href="https://www.nysenate.gov/legislation/bills/2025/A8962/amendment/A">AB 8962</a>, from Democratic Assemblywoman Nily Rozic, requires that news outlets (defined broadly; the <em>New York Times, Hyperdimensional</em>, and Dwarkesh Patel&#8217;s podcast are all plausibly covered) disclose to &#8220;news media workers&#8221; (this term is not defined, so it is unclear whether the author intends to include part-time employees and contractors, or just full-time staff) all use of generative AI in the production of content. It also mandates similar disclosures to consumers and gives individual employees of a news organization the right to opt out of any deals made by their publication to license their training data.</p></li><li><p>Rhode Island&#8217;s <a href="https://legiscan.com/RI/bill/S2010/2026">SB 2010</a>, from a suite of Democratic lawmakers, requires that any insurer using AI of any kind in the administration of healthcare benefits disclose details about the use of AI in their business processes and which systems they use, and track metrics such as the amount of time human employees who use AI spend reviewing cases (with the implication being that AI-assisted work requiring less human time&#8212;a phenomenon known in economics as &#8220;labor productivity&#8221;&#8212;is bad). They also must &#8220;disclose&#8221; details of the model <em>developer&#8217;s </em>training datasets and the developer&#8217;s &#8220;data governance measures.&#8221; This is therefore a regulation on model developers, not just healthcare insurers.</p></li><li><p>Missouri <a href="https://pluralpolicy.com/app/legislative-tracking/bill/details/state-mo-2026-hb1747/2814020">HB 1747</a>, from Republican Scott Miller, requires that every single person who shares any image, video, or audio file that was &#8220;created or modified&#8221; using artificial intelligence disclose their use of AI in a &#8220;mark or statement.&#8221; What this means is left up to the Missouri Attorney General. The definition of AI is one of the typical broad ones, and could be construed by an aggressive enforcer to cover even basic machine-learning-based tools, such as Adobe Photoshop&#8217;s object and subject detection features; indeed, a contemporary camera&#8217;s autofocus feature is plausibly covered.</p></li></ul><p>Other proposed bills require model developers to disclose various things to various parties. 
This is an enormous category, so I will take only a handful of examples here.</p><ul><li><p>In New York, <a href="https://www.nysenate.gov/legislation/bills/2025/A8595/amendment/A">AB 8595</a>, from Democrat Steve Otis, requires that every developer of any generative AI service (including all open-weight and open-source models) post to their website the URL of every source of &#8220;video, audio, text or data&#8221; they used to train their models (or that they contracted a third party to collect&#8212;as written, this appears to include commonly used datasets such as Common Crawl), as well as a &#8220;detailed description&#8221; of every piece of content obtained from a &#8220;covered publication&#8221; (journalistic sources, but again defined so broadly that this newsletter is plausibly included).</p></li><li><p>Also in New York, <a href="https://pluralpolicy.com/app/legislative-tracking/bill/details/state-ny-2025_2026-a1456/2490074">AB 1456</a>, from Democrat Pamela Hunter, requires that insurers who deploy any AI system to determine whether a specific medical service is medically necessary, and thus covered by their insurance policy, &#8220;submit the artificial intelligence algorithms and training data sets that are being used or will be used.&#8221; An insurer that uses GPT-5.2 would need to submit GPT-5.2&#8217;s &#8220;algorithm&#8221; and its training data (as a side note: I do wish people would stop using the word &#8220;algorithm&#8221; to refer to the architecture of a language model. It <em>is </em>a kind of algorithm, yes, but &#8220;the GPT-5.2 algorithm&#8221; is really more of a mathematical architecture within which the model itself learns <em>many </em>algorithms from its training data, which are ultimately encoded in the model parameters).</p></li><li><p>Then there is Missouri&#8217;s novel <a href="https://pluralpolicy.com/app/legislative-tracking/bill/details/state-mo-2026-hb2239/2825648">HB 2239</a>, a data-center transparency bill introduced by Democrat Marty Murray, which requires owners of data centers larger than 100 megawatts to disclose a truly enormous amount of information about their environmental impact and operations.</p></li></ul><p>I hope one thing is clear: &#8220;transparency&#8221; is not in any meaningful sense synonymous with a light regulatory touch.</p><h4><strong>Child Safety</strong></h4><p>Child safety has been a hot-button issue in AI in the past year, so it should come as no surprise that state laws attempt to tackle this issue from many different angles. Some examples:</p><ul><li><p>Washington State&#8217;s <a href="https://app.leg.wa.gov/billsummary?BillNumber=5956&amp;Year=2025">SB 5956</a>, from Democrat T&#8217;wina Nobles, prohibits schools from using AI for a wide variety of tasks. For example, the bill prohibits all schools in the state from using &#8220;AI&#8221; (at this point I hope I don&#8217;t have to say that the term is defined very broadly and includes a huge swath of modern software, not just generative AI) to create &#8220;any predictive classification&#8221; of a student&#8217;s &#8220;likelihood of misconduct&#8230; criminal behavior,&#8221; and similar. This is more a curiosity to me than anything else; I wonder why this seems necessary to the legislator. We are okay with humans doing this, right? 
This law also imports the European Union&#8217;s prohibition on the use of AI in the classroom to &#8220;infer emotional states,&#8221; though it broadens that prohibition by including things like &#8220;mental health conditions&#8221; and &#8220;sensitive personal characteristics.&#8221; Again, I do not understand why; what is the problem with a diagnostic tool that, say, helps a school determine which students have dyslexia or other reading disabilities? There is a reasonable body of evidence suggesting that dyslexia and similar conditions are <a href="https://www.nature.com/articles/s41539-023-00204-8">often missed</a> by overworked teachers, and that <a href="https://ila.onlinelibrary.wiley.com/doi/full/10.1002/rrq.477">early treatment can make a substantial difference</a>. We also have some evidence that these conditions are <a href="https://www.mdpi.com/2227-7102/11/2/77">disproportionately common in the incarcerated population</a>. And finally, there is <a href="https://pubmed.ncbi.nlm.nih.gov/21290479/">strong reason to believe</a> that teachers&#8217; lack of knowledge in identifying signs of reading disabilities is <em>the </em>key reason that they are not more systematically diagnosed. If AI tools can help with this, how is that not a cause for celebration?</p></li><li><p>In Florida, <a href="https://www.flsenate.gov/Session/Bill/2026/1344">SB 1344</a> from Republican Colleen Burton requires &#8220;companion AI chatbots&#8221; (defined more narrowly than many definitions of &#8220;AI&#8221; in state laws, but without any thresholds for the size of the developer, the popularity of the model, or any exemption for open-source and open-weight models) to impose age verification measures and mandates a popup every 60 minutes reminding users (apparently regardless of whether they are children or adults) that the AI system they are interacting with is not human. The age-verification provision seems fine, if currently overbroad, while the mandatory popups (which are, by the way, <em>rampant </em>in this year&#8217;s crop of AI regulations) seem straightforwardly stupid to me. It does not seem as though there is mass confusion about AI chatbots being human; there is very little evidence so far, for example, that the tragic cases of AI-involved teenage suicidality were caused by the child&#8217;s confusion over the AI&#8217;s humanity.</p></li><li><p>On a positive note, if folks are looking for a better version of what Senator Burton is trying to do with FL SB 1344, I would point you to Washington State <a href="https://pluralpolicy.com/app/legislative-tracking/bill/details/state-wa-2025_2026-hb2225/2833158">HB 2225</a>, from Democrat Lisa Callan. This still has the popup mandate, but is generally a better starting point for legislation of this kind (though it is far from perfect).</p></li><li><p>At the other extreme, Tennessee <a href="https://www.capitol.tn.gov/Bills/114/Bill/SB1493.pdf">SB 1493</a>, from Republican Becky Massey, makes it a <em>Class A Felony </em>for a developer to train a model that can do things like &#8220;develop an emotional relationship with, or otherwise act as a companion,&#8221; and provide information about mental health and general healthcare. 
This is a morally disgusting bill in my view, and it is also likely to be unconstitutional (as are several other bills I&#8217;ve mentioned here, though the cases are less open-and-shut than this one).</p></li><li><p>Then there is Missouri&#8217;s <a href="https://pluralpolicy.com/app/legislative-tracking/bill/details/state-mo-2026-hb1742/2814406">HB 1742</a>, also from Republican Scott Miller, which bans minors from accessing language models for &#8220;recreational&#8221; and other purposes. So if my child lived in Missouri, they would be prohibited from making their own video games with, say, a coding agent, but would be allowed to engage with AI characters in other people&#8217;s video games. If you are a &#8220;conservative&#8221; and you think that the government has a role to play in the regulation of your child&#8217;s &#8220;recreational&#8221; use of software, I encourage you to consider switching parties, or moving to Europe.</p></li></ul><p>A closing note on child safety law: my guess is that, under current Supreme Court precedent, most &#8220;chatbot&#8221; age verification laws are going to be deemed violations of the First Amendment, and probably should be unless they are carefully scoped. To make a long story short: courts have long held that minors have free speech rights, which include the right to <em>access </em>speech, not just to communicate it themselves. These minor-held rights are abridged relative to those of adults: a minor does not enjoy the same First Amendment right to pornography that an adult enjoys, for example. There is a case currently pending before the Supreme Court about social media age verification, which will be an interesting test case. But given the range of clearly educational and otherwise intellectually enriching uses of AI (in the general case, the best educational content on arbitrary topics you can find on the internet is now produced by frontier AI models, not humans), it is hard for me to imagine courts buying into fully general language model age-verification requirements.</p><h4><strong>Algorithmic Pricing</strong></h4><p>Numerous states are proposing bans or significant limitations on &#8220;algorithmic pricing&#8221; (which, to translate from overwrought policy lingo into natural English, means &#8220;using software to set prices&#8221;) when the pricing algorithm is informed by customer data.</p><p>Say that I recently purchased newborn diapers on Amazon, and then a day or so later I am shopping for fixed-focal-length portrait camera lenses (I speak from experience). Perhaps Amazon&#8217;s &#8220;algorithm&#8221; would identify that I am probably buying the lens to take pictures of my newborn, and given the emotional valence of this, perhaps it would infer that I have a higher-than-usual willingness to pay for this particular lens. That&#8217;s the sort of thing that NY <a href="https://pluralpolicy.com/app/legislative-tracking/bill/details/state-ny-2025_2026-s8623/2831367">SB 8623</a> (introduced by Democrat Rachel May), TN <a href="https://pluralpolicy.com/app/legislative-tracking/bill/details/state-tn-114-hb1468/2834072">HB 1468</a> (introduced by Democrat John Clemmons), and numerous others are trying to prevent.</p><p>Some of these bills contain the bare-minimum exemptions (the price of a DoorDash delivery intrinsically requires my &#8220;personal information,&#8221; i.e. my home address, to set properly); many do not. 
I am opposed to regulating something as fundamental and abstract as &#8220;the setting of prices using a customer&#8217;s information and software,&#8221; and I think it would be a good thing for the world if all of these laws failed. They probably will not all fail, however.</p><h4><strong>Algorithmic Discrimination</strong></h4><p>Long-time readers will recall my <a href="https://www.piratewires.com/p/america-is-sleepwalking-into-a-permanent-dei-bureaucracy-regulating-ai">multi-month</a> <a href="https://www.hyperdimensional.co/p/the-eu-ai-act-is-coming-to-america">series</a> of <a href="https://www.hyperdimensional.co/p/texas-plows-ahead">diatribes</a> <a href="https://www.hyperdimensional.co/p/impact-assessments-are-the-wrong">about</a> the &#8220;<a href="https://www.hyperdimensional.co/p/california-hold-my-beer">algorithmic discrimination</a>&#8221; bills introduced during last year&#8217;s state legislative session. These laws failed in every state in 2025, and the one state where this EU-inflected framework did pass (Colorado, in 2024) regrets it so heavily that the state&#8217;s <em>Democratic Governor </em>supported the Republican moratorium on state AI legislation last summer.</p><p>Some states have not learned their lessons, though, so we have repeats of these laws introduced in Washington State (<a href="https://pluralpolicy.com/app/legislative-tracking/bill/details/state-wa-2025_2026-hb2157/2829153">HB 2157</a>, introduced by Democrat Cindy Ryu), New York (<a href="https://pluralpolicy.com/app/legislative-tracking/bill/details/state-ny-2025_2026-a8884/2767544">AB 8884</a>, introduced by Democrat Michaelle Solages), and New Mexico (<a href="https://pluralpolicy.com/app/legislative-tracking/bill/details/state-nm-2026-hb28/2833649">HB 28</a>, introduced by Democrat Chris Chandler as &#8220;The Artificial Intelligence Transparency Act&#8221;&#8212;see what I mean about &#8220;transparency&#8221;?).</p><p>These are awful laws, but I have said all I have to say about them already.</p><h4><strong>The Florida &#8220;Bill of Rights&#8221;</strong></h4><p>Florida Republicans, led by Governor Ron DeSantis, seem determined to make themselves a nationwide exemplar of &#8220;red-state AI governance&#8221; (despite the fact that President Trump has expressed in no uncertain terms that he would prefer states not &#8220;lead the way&#8221; in AI regulation). The current crystallization of this effort is the &#8220;Florida AI Bill of Rights,&#8221; introduced in the State Senate as <a href="https://www.flsenate.gov/Session/Bill/2026/482/BillText/Filed/PDF">SB 482</a> by Republican Tom Leek. I disfavor the term &#8220;bill of rights,&#8221; since we already have those at both the federal and state levels, so the term is largely meaningless. What does it say about the respect of so-called populist politicians for their voters when they call their proposed law a &#8220;bill of rights&#8221; and then include this provision (emphasis added) in said law?</p><blockquote><p>(2) Floridians may exercise the rights described in this section in accordance with existing law. <strong>This section may not be construed as creating new or independent rights or entitlements.</strong></p></blockquote><p>Beyond the silly title, the law basically does the following:</p><ul><li><p>Prohibits the Florida government from contracting with Chinese AI companies (though it does not stop the Florida government from contracting with a U.S. 
company that uses Chinese AI models, including their APIs; thus it does nothing substantive to mitigate the presumed risk of sensitive Floridian data being transmitted to Chinese AI companies);</p></li><li><p>Reiterates deepfake protections that already existed in Florida law (creating civil liability for persons who knowingly distribute malicious deepfakes);</p></li><li><p>Imposes age verification, parental control, and similar requirements for AI services used by minors;</p></li><li><p>Mandates the &#8220;I&#8217;m not a human&#8221; pop-ups for all users of language models every 60 minutes (Why do politicians love attaching their laws to clearly obvious and almost-always-obnoxious features of software? Why do they want their voters to think of them in this way? The lack of policy strategy exhibited by self-styled &#8220;policy wonk&#8221; would-be technology regulators continues to perplex me.);</p></li><li><p>Imposes data-protection requirements on AI developers that, near as I can tell, are largely redundant with Florida&#8217;s existing data privacy laws; and</p></li><li><p>Establishes name, image, and likeness protections that mirror those passed by Tennessee in its <a href="https://www.lw.com/admin/upload/SiteAttachments/The-ELVIS-Act-Tennessee-Shakes-Up-Its-Right-of-Publicity-Law-and-Takes-On-Generative-AI.pdf">ELVIS Act</a> about two years ago.</p></li></ul><p>Compared to many of the other laws discussed here, this law is fine, though it is still probably harmful and mostly unnecessary.</p><h4><strong>Conclusion</strong></h4><p>As I write, we are just two weeks into 2026. And yet the volume and complexity of state AI laws are at an all-time high. Many, if not nearly all, of these laws have extraterritorial effect. Almost all of them have gaps in drafting so large as to make any sane reader question whether the drafters really understand what they are doing. And many states have not even <em>begun </em>their legislative session. We will see hundreds more bills introduced in the coming weeks. Meanwhile, the current politics of AI, in practice, paint anyone who believes all this state lawmaking is a bit excessive as an extremist &#8220;techno-libertarian.&#8221;</p><p>If I sound frustrated, it is because I am. A patchwork of ill-considered state rules&#8212;rules clearly drafted on the back of an envelope, rules that are sometimes about topics where no new rules are needed&#8212;is indeed proliferating. The lawmakers in question have now had <em>three years </em>to educate themselves about generative AI, and none of them have bothered. Yet they seem supremely confident that they know what is good for generative AI. I do not want such people, and such decadence, governing this tool whose utility and importance grows for millions of Americans every week.</p><p>I wish the Trump Administration the best of luck in its efforts to stop at least some of this through its <a href="https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/">recent Executive Order</a>, though any honest observer must acknowledge that there are firm limitations on what the executive branch can unilaterally do here. I have long been a supporter of <a href="https://www.hyperdimensional.co/p/be-it-enacted">a federal AI law with broad (though not universal) preemption of state regulations</a>. 
Though I scoped my proposed preemption more narrowly than many other supporters of a federal law, it is worth noting that my proposal would have preempted almost literally every law I have described in this essay. That is because the specific set of laws we are now seeing is what I have been anticipating for months. </p><p>Regardless of whether a federal law looks anything like my proposal, here is the salient point: preemption really must be expansive, even if it stops short of an outright ban on state AI regulation. <em>You cannot preempt all of these laws piece-by-piece; broad-based preemption of some kind is essential, and anyone who pretends otherwise is simply not engaging with the reality on the ground</em>. </p><p>I look forward to seeing the White House&#8217;s federal AI legislation proposal, hopefully in the near future.</p>]]></content:encoded></item><item><title><![CDATA[Among the Agents]]></title><description><![CDATA[How I use coding agents, and what I think they mean]]></description><link>https://www.hyperdimensional.co/p/among-the-agents</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/among-the-agents</guid><dc:creator><![CDATA[Dean W. Ball]]></dc:creator><pubDate>Thu, 08 Jan 2026 13:45:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kZjN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f70956b-24b6-432b-81c4-dcfa4095ead7_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hyperdimensional.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hyperdimensional.co/subscribe?"><span>Subscribe now</span></a></p><h4>Introduction</h4><p>In the past month I have:</p><ol><li><p>Automated invoice creation, sending, and tracking;</p></li><li><p>Created scientifically realistic simulations of hydrological systems as a learning project;</p></li><li><p>Automated my research process of gathering and analyzing all proposed state legislation related to AI (though this is no substitute for reading the bill for anything I am going to write about);</p></li><li><p>Orchestrated a complex chain of autonomous data collection, processing, analysis, and presentation steps related to manufacturing and industrial policy;</p></li><li><p>Created a machine-learning model capable of predicting US corn yields with what appears to be very high accuracy (the proof will be in the pudding), based on climate, soil, Earth-observation satellite, and other data sources;</p></li><li><p>Replicated three machine-learning research papers and modified the approach to suit my own research ends;</p></li><li><p>Performed hundreds of experiments with byte-level language models, an emerging interest of mine;</p></li><li><p>Created an autonomous prediction market agent;</p></li><li><p>Created an autonomous options trader based on a specific investment thesis I developed;</p></li><li><p>Built dozens of games and simulations to educate myself about various physical or industrial phenomena;</p></li><li><p>Created an agent that monitors a particular art market in which I am potentially interested in making an acquisition;</p></li><li><p>Created a new personal blog complete with a Squarespace-style content management system behind the scenes;</p></li><li><p>Other things I cannot talk about publicly just 
yet.</p></li></ol><p>Of course, I did not do these things alone. I did them in collaboration with coding agents like Gemini 3 Pro (and the <a href="https://blog.google/technology/developers/introducing-gemini-cli-open-source-ai-agent/">Gemini Command-Line Interface system</a>), <a href="https://chatgpt.com/features/codex">OpenAI Codex</a> using GPT-5.2, and, most especially, Claude Opus 4.5 in <a href="https://code.claude.com/docs/en/overview">Claude Code</a>.</p><p>These agents have been around for almost a year now, but in recent weeks and months they have become so capable that <a href="https://x.com/deanwball/status/2001068539990696422?s=20">I believe</a> they meet some definitions of &#8220;artificial general intelligence.&#8221; Yet the world is mostly unchanged. This is because AGI is not the end of the AI story, but something closer to the beginning. Earlier this year, I wrote:</p><blockquote><p>The creation of &#8220;artificial general intelligence,&#8221; if it can even be coherently defined, is not the <em>end </em>of a race. If anything, it is the <em>start </em>of a race. As AI systems advance by the month, the hard work of building the future with them grows ever more pressing. There is no use in building advanced AI without <em>using </em>those systems to transform business, reinvent science, and forge new institutions of governance. This, rather than the mere construction of data centers or training of AI systems, is the true competition we face&#8212;and our work begins now.</p></blockquote><p>The individuals and firms that discover more and better ways to work with this strange new technology will be the ones who thrive in this era. The countries where those people and businesses are most numerous will be the countries that &#8220;win&#8221; in AI. It is up to all of us, together, to figure out how to put machine intelligence to its highest and best uses. The world won&#8217;t change until human beings change it.</p><p>I joke sometimes that using AI, and especially using coding agents, is a bit like playing the piano. The piano is the easiest instrument to begin playing (anyone can produce a satisfying tone with no training or skill on a piano, which is not true of, say, a flute or a violin), yet the hardest to master in the long run. AI presents the greatest opportunity and the greatest challenge computers can muster: a white sheet of paper, a blinking cursor in an empty text input box. You can type anything you like, but figuring out what to type is, indeed, the hard part.</p><p>It is especially important, I think, that as intellectually diverse a group as possible experiment with these coding agents, which I have taken to calling &#8220;infant AGI.&#8221; Because while the primary focus of these tools is indeed coding, they are useful to far more than just coders. 
Whether you are a scientist or a policy professional, a linguist or a diplomat, a literary critic or a musician, or just a curious person, I am confident these tools have something to offer you.</p><p>I am a somewhat odd duck in all this, being a &#8220;humanities person&#8221; who also learned to code and hack around on a computer at a young age. Therefore I feel it is especially incumbent upon me to demonstrate how coding agents can be useful to non-coders of all backgrounds, ages, and interests. Today, I&#8217;d like to do just that. I hope it is useful to as broad a range of people as possible, and in service of that goal, I am going to write assuming no prior experience with coding agents, command-line interfaces, or coding. I apologize to the more technically inclined people in my audience, for whom some of this will be old news. After that, I will close with some brief and tentative observations about what these new coding agents might mean in 2026 and beyond.</p><h4>What is a Coding Agent?</h4><p>Coding agents are language models situated within attendant software infrastructure (variously referred to as an &#8220;AI system,&#8221; &#8220;agent scaffolding,&#8221; or an &#8220;agent harness&#8221;). There are many apps you can download that allow you to use a coding agent, like <a href="https://cursor.com">Cursor</a>, <a href="https://windsurf.com">Windsurf</a>, Cognition&#8217;s <a href="https://devin.ai">Devin</a> (which is more focused on enterprise uses), <a href="https://factory.ai/product/ide">Factory AI&#8217;s Droid system</a>, or Google DeepMind&#8217;s <a href="https://antigravity.google">Antigravity</a>. But if you are new to coding, I think many of these tools could overwhelm you at first (though you may want to try them after you gain experience). They are what are known as integrated development environments (IDEs), with more of an emphasis on looking at and editing code than is in fact necessary for most new users.</p><p>Ironically enough, I think the best way to begin using these decidedly futuristic tools is within the most ancient personal computing interface there is: the command line. A command line is a text-based way of controlling a computer. It is often far more efficient, if less intuitive, than using a graphical user interface (GUI) with windows, a mouse cursor, and the like.</p><p>For example, say there was a file on my computer called &#8220;agi_is_here.txt,&#8221; and I decided that I wanted to replace every usage of the acronym &#8220;AGI&#8221; with &#8220;transformative AI&#8221; instead. With the GUI on my Mac, I&#8217;d open Finder, navigate to my Documents folder (or wherever the file was saved), open the file in a text editor, and then use the editor&#8217;s &#8220;find and replace&#8221; function. 
To do this at the command line, I would open the &#8220;Terminal&#8221; app on my Mac and type:</p><pre><code>cd ~/Documents &amp;&amp; sed -i '' 's/agi/transformative ai/gI' 'agi_is_here.txt'</code></pre><p>This may look alien to those who are not familiar with command-line interfaces, but it is a remarkably compact expression of a complex user intent written in a scripting language called &#8220;bash.&#8221; Just as one example, the &#8220;gI&#8221; flags at the end stand for &#8220;global,&#8221; meaning my desire is to replace the phrase throughout the whole document, and &#8220;Insensitive,&#8221; meaning I want my search for instances of &#8220;AGI&#8221; to be case insensitive (&#8220;agi,&#8221; &#8220;aGi,&#8221; and so on).</p><p>One important note: terminal apps are called &#8220;emulators&#8221; because they emulate the experience of using a pre-GUI computer. This applies to input devices: there were no mice back then, so there are no mice in emulators now either. You cannot click anything within the window of a terminal emulator. All input is keyboard only. Some of the keyboard shortcuts you are used to will still work, while others will not (for example, on a Mac, Command-A as &#8220;select all&#8221; will likely not work the way you expect in Terminal, and what you are probably looking for is instead Control-U). A list of default macOS Terminal keyboard shortcuts is available <a href="https://support.apple.com/guide/terminal/keyboard-shortcuts-trmlshtcts/2.15/mac/26">here</a>.</p><p>Anthropic&#8217;s Claude Code, OpenAI&#8217;s Codex, and Google&#8217;s Gemini CLI are, properly speaking, applications that run through the command line on your computer. After you install them, you open the terminal emulator on your computer (on a Mac, this is the app called &#8220;Terminal&#8221;) and you type, variously: &#8220;claude,&#8221; &#8220;codex,&#8221; or &#8220;gemini.&#8221; The app will then launch. At this point you are communicating with a language model.</p><p>The language model is running in the cloud, but it can read and modify files on your local computer. So, to continue the above example, instead of writing the complex bash command above, you could simply say &#8220;I&#8217;d like you to find the file called agi_is_here on my computer and replace all references to &#8216;agi&#8217; with &#8216;transformative ai.&#8217;&#8221; This is trivially easy for a frontier coding agent, yet this alone is a more sophisticated use of computers than the vast majority of people are capable of.</p><p>Because command-line interfaces are entirely text based, it should not surprise you that language models have gotten <em>very </em>good at using them. <em>This means they can use your computer</em>, <em>and the computer is one of the most powerful tools mankind has ever invented. </em>You talk to these agents just like you talk to the chatbots, but they can do vastly more than a chatbot can do, because they are operating your computer for you.</p><p>Agents like this have been around for almost a year now, but I found them insufficiently reliable until roughly a few months ago (it also did not help that between April and August I was working for the government, and in addition to being too busy to play around with new tools, you cannot, for obvious reasons, run coding agents on computers owned by the Executive Office of the President). And reliability matters tremendously here, because it is easy for things to go wrong. 
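A good habit, whether the command came from you or from an agent, is to rehearse destructive operations before running them for real.</p><p>Here is a minimal sketch of that habit, reusing the file from the example above: omitting the &#8220;-i&#8221; flag from the earlier command makes sed print its output to the screen rather than editing the file in place, so you can inspect the result before committing to it.</p><pre><code>cd ~/Documents
# dry run: without -i, sed writes the substituted text to stdout and leaves the file untouched
sed 's/agi/transformative ai/gI' agi_is_here.txt | head
# count how many lines contain a case-insensitive match that would be replaced
grep -ci 'agi' agi_is_here.txt</code></pre><p>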
Now consider how quickly things can go wrong without such habits. You could type something as simple as:</p><pre><code>rm -rf ~</code></pre><p>This command deletes everything in your user directory, which probably means all your files, photos, downloads, and the like. There is no dialogue box asking you if you are sure you wish to do this. It will just happen. Command lines can be dangerous. It is incumbent upon model developers to train progressively better models that avoid these kinds of failures, and to design interfaces that allow for appropriate human oversight of agents. But just as importantly, it is incumbent upon users to understand what agents on their computers are doing and to recognize when circumstances merit additional scrutiny.</p><p>This is why all coding agents will require explicit user permission to perform certain actions (though you can configure this). If you have any uncertainty about an action a model is requesting your approval for, remember that you can always ask the model itself why it wants to do what it is doing.</p><p>Even with user permission, there are some failure modes to be aware of. First of all, coding agents tend to be quite sloppy in their use of APIs, the means by which a software engineer makes use of some service or software in code. Agents tend to expect APIs to be robust and performant; this is often not the case in the real world. They also tend not to think about things like rate limits. This can mean that, unless they are specifically directed to carefully examine the API documentation and design around its constraints, they will end up getting blocked for violating API rules or rate limits.</p>
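<p>To make that concrete, here is a minimal bash sketch of the kind of defensive, rate-limit-aware request loop that agents rarely write unless you ask for it. The endpoint is hypothetical; the pattern of backing off and retrying, rather than hammering the API, is the point.</p><pre><code># hypothetical endpoint; a real script would read the API's documented limits first
for delay in 1 2 4 8 16; do
  # -f makes curl exit nonzero on HTTP errors (e.g., 429 Too Many Requests)
  if curl -sf https://api.example.com/v1/data -o data.json; then
    break
  fi
  echo "request failed; retrying in ${delay}s" &gt;&amp;2
  sleep "$delay"
done</code></pre><p>A well-prompted agent will happily produce loops like this; the trick, as the user, is knowing to ask.</p>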
<p>It is also the case that agents still tend to be overly confident in tackling ambitious tasks. They seem to have an innate desire to &#8220;wow&#8221; the user with an impressive result delivered on the first try. This desire gives them an incentive to declare victory too early. I deal with this problem by asking agents to write robust plans before beginning work, and if they are building software apps, to craft complete lists of features along with methods of verifying the completion and quality of each feature.</p><p>Once you get going, there are three important and non-obvious-to-a-beginner facts to internalize. One we have already discussed: coding agents can operate <em>a lot </em>of your computer&#8217;s functionality<em>, </em>but importantly not all of it, purely through the command line. Second, coding agents can download arbitrary files from the internet. Third, agents can orchestrate cloud computing infrastructure; they can manage cloud-based virtual machines, and AI hardware, from your command line. They can also, themselves, use the APIs for any LLM or other AI tool; your agents can, themselves, use AI. Used with appropriate discretion, each of these capabilities is profound.</p><p>There are many finer points, and other AI coding applications beyond the raw command line (even Codex, Claude Code, and Gemini have GUI-based apps as well). I recommend reading Claude Code creator Boris Cherny&#8217;s <a href="https://x.com/bcherny/status/2007179832300581177?s=20">recent thread</a> as a starting point for learning about these finer points.</p><h4>The Implications of Coding Agents</h4><p>What do the coding agents mean? I have only tentative thoughts to offer at present, and much is unknown. A few things, however, seem clear:</p><ol><li><p>Coding agents mean that you can try more things for yourself, instead of being dependent upon companies or expert individuals to intermediate. In the last one to two decades, the digital world has become so complicated, so filled with walled-garden services, that most of us have become infantilized. Coding agents mean you can, once again, become something more like a digital frontiersman.</p></li><li><p>Because you can speed-run the creation of so many complex software engineering projects, you can learn more quickly the tradeoffs, largely unspoken limitations, and other tacit knowledge intrinsic to all complex endeavors. Intermediate-level knowledge of this kind&#8212;things like, &#8220;oh yes, that API regularly breaks in this silent way,&#8221; or, &#8220;oh yes, there is an intrinsic tradeoff between X and Y that must be balanced appropriately&#8221;&#8212;can now be acquired rapidly in a process of human-AI hybrid exploration.</p></li><li><p>The fundamentals of many disciplines, most especially computer science, still seem quite relevant to me. Learning the basics of why computers work is extremely useful for making the most of coding agents; it will make you a better &#8220;prompter.&#8221; Learning the foundational aspects of programming languages similarly seems important. Understanding how to think computationally now matters more. Understanding the specific syntax of a particular programming language now matters less. This same lesson may well apply in other disciplines. It may therefore be possible to be a renaissance man once again.</p></li><li><p>Proprietary access to data will become even more of a key differentiator than it already has been. On the flip side, publicly releasing datasets is one of the highest-leverage things researchers, governments, and research institutions can do. The social status of releasing differentiated datasets is probably still too low.</p></li><li><p>The definition of a good &#8220;user experience&#8221; in software will change profoundly. The value of a highly polished UI in many heretofore consumer and even many enterprise applications will decrease; the value of a performant, reliable, extensible API will increase. Walled gardens will be an increasing source of hassle and frustration for general consumers. This frustration with walled gardens has always existed within software-engineering and otherwise technically savvy communities; coding agents will make those cultural trends more prevalent among &#8220;normal&#8221; people. I would therefore expect more members of the general public to adopt, on the margin, the dispositions, preferences, habits, predilections, and the like of software engineers.</p></li><li><p>The value of unsexy services that provide access to raw capabilities or data with minimal intermediation will go up. Currently, consumer and enterprise software-as-a-service prioritizes a great user interface&#8212;making it easy for a nontechnical user to get started. But the tradeoff is often that they impose a lower ceiling of capability. Now, there are many software services where human usability is a far lower priority, and instead the premium will be on services that give AI agents maximal leverage and flexibility to accomplish a wide variety of goals. A simple&#8212;probably too simplistic&#8212;way of phrasing this would be to say that applications will come to matter less than infrastructure.</p></li><li><p>This may well apply to hardware as well. One can imagine, for example, that home automation devices that charge a premium for a great consumer experience will become somewhat less desirable when compared to cheaper, equally capable, and far more extensible competitors. 
Think about the difference between companies like Ubiquiti and Eero in the world of wireless networking; Ubiquiti is extremely high quality but requires much more technical expertise to manage. One can imagine many areas of consumer goods where this will be true.</p></li><li><p>Very few people have made truly great products and services that target the &#8220;prosumer using coding agents&#8221; market. Those that have, have largely done so unintentionally. There is probably a large and growing opportunity here.</p></li><li><p>While the above trends create opportunities for new firms, they will also create opportunities for incumbent firms to put up roadblocks. Firms that control some proprietary service, data, or other well-defended moat may be disinclined to offer their products in ways that maximally enable AI agents, fearing that their differentiation will be eroded into a commodity over time. I expect this dynamic, which is common with new technologies, to persist for years, if not longer, and to represent one of the most concrete barriers to AI adoption in the real world.</p></li><li><p>I would no longer be surprised if we saw AI in the macroeconomic statistics by the end of this year, both on the upside (growth, productivity) and downside (labor market dislocation).</p></li><li><p>Defining what good looks like, and convincing others your conception of &#8220;good&#8221; is the right one in a particular context, will remain the human touch. Jobs that require this already, of which there are many, will be well-defended. Jobs that solely require the production of discrete, well-defined outputs are vulnerable.</p></li><li><p>State governments will introduce hundreds of bills about artificial intelligence; almost none of them&#8212;perhaps even literally zero&#8212;will be written with coding agents in mind. They will be chatbot regulations. It is quite possible that the policy debate America will have in 2026 is already antiquated.</p></li><li><p>By the end of this year, the least important thing you will be able to do with frontier AI systems will be getting chatbots to answer questions, but this is still how most people will think of &#8220;AI.&#8221; Expect cognitive dissonance as a result.</p></li></ol>]]></content:encoded></item><item><title><![CDATA[Measure Up (Re-posted)]]></title><description><![CDATA[And a new project from me]]></description><link>https://www.hyperdimensional.co/p/measure-up-re-posted</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/measure-up-re-posted</guid><dc:creator><![CDATA[Dean W. Ball]]></dc:creator><pubDate>Fri, 02 Jan 2026 17:22:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kZjN!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f70956b-24b6-432b-81c4-dcfa4095ead7_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hyperdimensional.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hyperdimensional.co/subscribe?"><span>Subscribe now</span></a></p><p>Dear readers, <br>I hope your year is off to a productive start. My son was born in the closing days of 2025, and I therefore will have a more sporadic posting schedule over the coming weeks. With that said, I <em>do </em>plan to publish new essays during this period. 
There is simply too much happening in the world of AI to resist altogether, and I have found writing to be a comforting break from the duties of fatherhood. <em>Hyperdimensional </em>will return to a regular cadence within four to six weeks. </p><p>Today, however, I&#8217;d like to share what I believe is the best essay yet published on <em>Hyperdimensional</em>. I originally ran it here in late 2024, when I had something like half the subscribers I currently do. When I wrote this essay, I very much had the frontier systems of today&#8212;and in particular coding agents like Claude Code&#8212;in mind. In this essay I am caught between excitement over the wonderful new instruments I imagined would soon be built, and concern over whether humans would retain agency over these tools. Will machine intelligence be a tool, or will it be an <em>actor </em>in its own right? In the context of this essay: is machine intelligence more like the piano, or is it more like Beethoven himself? These questions are all too relevant in 2026, for this will be the year that we truly meet agents of machine intelligence at societal scale. </p><p>Finally, I am pleased to announce a new writing project: <em><a href="https://projections.hyperdimensional.co">Projections</a></em>. In mathematics, a &#8220;projection&#8221; is a representation of a high-dimensional space in a lower dimension. <em>Projections </em>will function like a hybrid of a personal blog and a humanities-inflected R&amp;D space. In conducting my &#8220;AI policy&#8221; research, I have found myself investigating many questions with an indirect connection to AI, yet too far afield from AI for this newsletter&#8212;questions of history, science, and economics primarily, but other topics too. </p><p>I also find myself wishing for a more personal writing outlet. To that end, <a href="https://projections.hyperdimensional.co/essays/fatherhood-diaries-volume-i">the first essay in </a><em><a href="https://projections.hyperdimensional.co/essays/fatherhood-diaries-volume-i">Projections</a> </em>is about my experience observing the labor and delivery of my son. </p><p><em>Projections </em>will be entirely free and sporadically updated. I do not currently know how often I will write there; perhaps as often as once per month, or as infrequently as two or three times per year. There is currently no subscription functionality, but I am considering adding it. In the meantime, every essay in <em>Projections </em>will be linked at the top of my <em>Hyperdimensional </em>newsletters. </p><p>We are nearing the two-year anniversary of this newsletter. I am not one for celebrations of such milestones, but I do wish to thank you all for your support and attention over the past two years. Now, onto this week&#8217;s essay: &#8220;Measure Up.&#8221; </p><p>&#8212;</p><p><em>&#8220;My very dear friend Broadwood&#8212;</em></p><p><em>I have never felt a greater pleasure than in your honor&#8217;s notification of the arrival of this piano, with which you are honoring me as a present. I shall look upon it as an altar upon which I shall place the most beautiful offerings of my spirit to the divine Apollo. As soon as I receive your excellent instrument, I shall immediately send you the fruits of the first moments of inspiration I gather from it, as a souvenir for you from me, my very dear Broadwood; and I hope that they will be worthy of your instrument. 
My dear sir, accept my warmest consideration, from your friend and very humble servant.</em></p><p><em>&#8212;Ludwig van Beethoven&#8221;</em></p><p>As musical instruments improved through history, new kinds of music became possible. Sometimes, the improved instrument could make novel sounds; other times, it was louder; and other times stronger, allowing for more aggressive play. Like every technology, musical instruments are the fruit of generations&#8217; worth of compounding technological refinement.</p><p>In a shockingly brief period between the late 18<sup>th</sup> and early 19<sup>th</sup> centuries, the piano was transformed technologically, and so too was the function of the music it produced.</p><p>To understand what happened, consider the form of classical music known as the &#8220;piano sonata.&#8221; This is a piece written for solo piano, and it is one of the forms that persisted through the transition, at least in name. In 1790, these were written for an early version of the piano that we now think of as the <em><a href="https://en.wikipedia.org/wiki/Fortepiano#:~:text=A%20fortepiano%20%5B&#716;f%C9%94rte&#712;pja%CB%90no%5D%20is%20an,to%20the%20early%2019th%20century.">fortepiano</a></em>. It sounded like a mix of a modern piano and a harpsichord.</p><p>Piano sonatas in the early 1790s were thought of primarily as casual entertainment. It wouldn&#8217;t be quite right to call them &#8220;background music&#8221; as we understand that term today&#8212;but they were often played in the background. People would talk over these little keyboard works, play cards, eat, drink.</p><p>In the middle of the 1790s, however, the piano started to improve at an accelerated rate. It was the early industrial revolution. Throughout the economy, <em>many</em> things were starting to click into place. Technologies that had <em>kind of </em>worked for a while began to <em>really </em>work. Scale<em> </em>began to be realized. Thicker networks of people, money, ideas, and goods were being built. Capital was becoming more productive, and with this, serendipity was becoming more common. Few at the time could understand it, but it was the beginning of a wave&#8212;one made in the wake of what we today might call the techno-capital machine.</p><p>Riding this wave, the piano makers were among a great many manufacturers who learned to build better <em>machines </em>during this period. And with those improvements, more complex <em>uses </em>of those machines became possible.</p><p>Just as this industrial transformation was gaining momentum in the mid-1790s, a well-regarded keyboard player named Ludwig van Beethoven was starting his career in earnest. He, like everyone else, was riding the wave&#8212;though he, like everyone else, did not wholly understand it.</p><p>Beethoven was an emerging superstar, and he lived in Vienna, the musical capital of the world. It was a hub not just of musicians but also of musical <em>instruments</em> and the people who manufactured them. Some of the finest piano makers of the day&#8212;<a href="https://en.wikipedia.org/wiki/Anton_Walter">Walter</a>, <a href="https://en.wikipedia.org/wiki/Conrad_Graf">Graf</a>, and <a href="https://www.fortepiano-collection.net/johann-schanz-ca1822">Schanz</a>&#8212;were in or around Vienna, and they were in fierce competition<em> </em>with one another. Playing at the city&#8217;s posh concert spaces, Beethoven had the opportunity to sample a huge range of emerging pianistic innovations. 
As his career blossomed, he acquired some of <a href="https://www.earlymusicamerica.org/web-articles/rediscovering-beethovens-1803-erard-fortepiano/">Europe&#8217;s finest pianos</a>&#8212;including <a href="https://www.popularbeethoven.com/beethovens-broadwood-piano/">even stronger models</a> from British manufacturers like <a href="https://en.wikipedia.org/wiki/John_Broadwood_%26_Sons">Broadwood and Sons</a>.</p><p>Iron reinforcement enabled piano frames with higher tolerances for louder and longer play. The strings became more robust. More responsive pedals meant a more direct relationship between the player and his tool. Innovations in casting, primitive machine tools, and mechanized woodworking yielded more precise parts. With these parts one could build superior hammer and escapement systems, which in turn led to faster-responding keys. And more of them, too&#8212;with higher and lower octaves now available. It is not just that the sound these pianos made was new: These instruments had an enhanced, more responsive <em>user interface</em>.</p><p>You could <em>hit</em> these instruments <em>harder</em>. You could play them softer, too. Beethoven&#8217;s iconic use of <em>sforzando</em>&#8212;rapid swings from soft to loud tones&#8212;would have been unplayable on the older pianos. So too would his complex and often rapid solos. In so many ways, then, Beethoven&#8217;s characteristic style and sound on the keyboard were <em>technologically</em> <em>impossible </em>for his predecessors to achieve.</p><p>These new pianos had a progressively higher dynamic range, like when a new camera captures the hue of the sun better than the old one, or how progressively better displays depict those hues with greater fidelity. These instruments could render the music in the artist&#8217;s mind with greater fidelity, conveying a sharper image of his motive. And they expanded the artist&#8217;s palette, too.</p><p>Beethoven&#8217;s 1795 <a href="https://en.wikipedia.org/wiki/Piano_Sonata_No._1_(Beethoven)">Op. 2 sonatas</a> (piano sonatas number 1, 2, and 3) were among the most sophisticated piano works anyone had ever heard. In 1796, Beethoven composed his masterful <a href="https://en.wikipedia.org/wiki/Piano_Sonata_No._4_(Beethoven)">Op. 7 E-flat sonata</a> (no. 4). If Beethoven had stopped here, before any of the pieces he is famous for today were written, he <em>still</em> would be considered among the greatest piano composers of his era.</p><p>But then in 1799, Beethoven published the <a href="https://en.wikipedia.org/wiki/Piano_Sonata_No._8_(Beethoven)">Path&#233;tique</a> (no. 8)&#8212;his first piano sonata that is remembered broadly today, its second movement having been covered, doo-wop style, by Billy Joel in his song <a href="https://www.youtube.com/watch?v=wNOXu_yoDYI">&#8220;This Night.&#8221;</a> The sonata&#8217;s unforgettable opening on a massive, ominous chord may have <em>broken</em> the pianos of just half a decade earlier. With this first masterpiece, Beethoven had established himself as one of the most significant keyboard players of <em>all</em> time.</p><p>And <em>then</em> Beethoven began to write his legendary pieces. <a href="https://en.wikipedia.org/wiki/Piano_Sonata_No._14_(Beethoven)">Moonlight</a> (no. 14) in 1801, <a href="https://en.wikipedia.org/wiki/Piano_Sonata_No._21_(Beethoven)">Waldstein</a> (no. 21) in 1804, <a href="https://en.wikipedia.org/wiki/Piano_Sonata_No._23_(Beethoven)">Appassionata</a> (no. 
23) in 1806, <a href="https://en.wikipedia.org/wiki/Piano_Sonata_No._29_(Beethoven)">Hammerklavier</a> (&#8220;hammer keyboard,&#8221; no. 29) in 1818&#8212;to name just a few examples (to hear these pieces performed on period instruments, I suggest listening to Ronald Brautigam&#8217;s <a href="https://www.amazon.com/Beethoven-Complete-Sonatas-Ludwig-van/dp/B00LJ3EUGC">cycle</a>).</p><p>Each landed at a new outer conceptual extreme of musical expression. Each stressed the limits of the piano, too&#8212;Beethoven was famous for breaking piano strings that were not yet strong enough to render his vision. There was always a relevant margin against which to press. By his <a href="https://en.wikipedia.org/wiki/Piano_Sonata_No._32_(Beethoven)">final sonata</a>, written in the early 1820s, he was pressing in the direction of early jazz. It was a technological and artistic takeoff from <a href="https://assets.classicfm.com/2017/06/mozarts-piano-1486726357.jpg">this</a> to <a href="https://en.wikipedia.org/wiki/John_Broadwood_%26_Sons#/media/File:John_Broadwood_&amp;_Sons_Grand_Piano.jpg">this</a>, and from <a href="https://www.youtube.com/watch?v=DDVj_JzQCg8">this</a> to <a href="https://www.youtube.com/watch?v=6JhWhxR7eyI">this</a>.</p><p>Beethoven&#8217;s compositions for other instruments followed a structurally similar trajectory: compounding leaps in expressiveness, technical complexity, and thematic ambition, every few years. <a href="https://www.youtube.com/watch?v=Uf1xJHeUZGA">Here</a> is what one of Mozart&#8217;s finest string quartets sounded like. <a href="https://www.youtube.com/watch?v=j5XAdttmOLo">Here</a> is what Beethoven would do with the string quartet by the end of his career.</p><p>No longer did audiences talk during concerts. No longer did they play cards and make jokes. Audiences became silent and still, because what was happening to them in the concert hall had changed. A new type of art was emerging, and a new meta-character in human history&#8212;the <em>artist</em>&#8212;was being born. Beethoven was doing something different, something grander, something more intense, and the way listeners experienced it was different too.</p><p>The musical ideas Beethoven introduced to the world originated from his mind, but those ideas would have been unthinkable without a superior <em>instrument</em>.</p><p>&#8212;</p><p>I bought the instrument I&#8217;m using to write this essay in December 2020. I was standing in the frigid cold outside of the Apple Store in the Georgetown neighborhood of Washington, D.C., wearing a KN-95 face mask, separated by six feet from those next to me in line. I had dinner with a friend scheduled that evening. A couple weeks later, the Mayor would <a href="https://dc.eater.com/2020/12/18/22188970/dc-shut-down-indoor-dining-covid-19">temporarily outlaw</a> even that nicety.</p><p>I carried this laptop with me every day throughout the remainder of the pandemic. I ran a foundation using this laptop, and after that I orchestrated two career transitions using it. I built two small businesses, and I bought a house. I got married, and I planned a honeymoon with my wife.</p><p>I launched <em>Hyperdimensional </em>on this instrument, and on this instrument I have written almost everything I have published in the past year. 
Over 200,000 words in total, articulated using this keyboard&#8212;sometimes on an airplane tray table as bubbling ginger ale sprinkled on the keys, sometimes on my bed late at night, sometimes on deadline, sometimes staring at a white page with seemingly nothing interesting to say, sometimes ecstatic, sometimes deflated.</p><p>I made great strides while using this instrument, and I made<em> </em>mistakes. This instrument let me make those mistakes. It <em>never</em> tried to stop me or slow me down. It computed with equal efficiency regardless of what was thrown at it. When it came time for me to correct those mistakes, to try&#8212;imperfectly and unevenly&#8212;to fix what I had broken, this instrument served me with precisely the same alacrity.</p><p>In a windowless office on a work trip to Stanford University on November 30, 2022, I discovered ChatGPT on this laptop. I stayed up all night in my hotel playing with the now-primitive GPT-3.5. Using my laptop, I educated myself more deeply about how this mysterious new tool worked.</p><p>I thought at first that it was an &#8220;answer machine,&#8221; a kind of turbocharged search engine. But I eventually came to prefer thinking of these language models as <em>simulators </em>of the internet that, by statistically modeling trillions of human-written words, learned new things about the <em>structure</em> of human-written text.</p><p>What might arise from a deeper-than-human understanding of the structures and meta-structures of nearly all the words humans have written for public consumption? What <em>inductive priors </em>might that understanding impart to this cognitive instrument? We know that a raw pretrained model, though deeply flawed, has quite sophisticated inductive priors with no additional human effort. With a great deal of additional human effort, we have made these systems quite useful little helpers, even if they still have their quirks and limitations.</p><p>But what if you could teach a system to <em>guide itself </em>through that digital landscape of modeled human thoughts to <em>find </em>better, rather than likelier, answers? What if the machine had <em>good intellectual taste</em>, because it could <em>consider </em>options, <em>recognize </em>mistakes, and <em>decide </em>on a course of cognitive action? Or what if it could, at least, <em>simulate </em>those cognitive processes? And what if that machine improved as quickly as we have seen AI advance so far? This is no longer science fiction; this research has been happening inside of the world&#8217;s leading AI firms, and with models like OpenAI&#8217;s o1 and o3, we see undoubtedly that progress is being made.</p><p>What would it mean for a machine to match the output of a human genius, word for word? What would it mean for a machine to exceed it? In at least some domains, even if only a <em>very</em> limited number at first, it seems likely that we will soon breach these thresholds. It is very hard to say how far this progress will go; as they say, experts disagree.</p><p>This strange simulator is &#8220;just math&#8221;&#8212;it is, ultimately, ones and zeroes, electrons flowing through processed sand. But the math going on inside it is more like biochemistry than it is like arithmetic. The language model is, ultimately, still an instrument, but it is a strange one. Smart people, working in a field called mechanistic interpretability, are bettering our understanding all the time, but our understanding remains highly imperfect, and it will probably never be complete. 
We don&#8217;t yet have precise control over these instruments, but our control is getting better with time. We do not yet know how to make our control systems &#8220;good enough,&#8221; because we don&#8217;t quite know what &#8220;good enough&#8221; means&#8212;though here too, we are trying. We are <em>searching.</em></p><p>As these instruments improve, the questions we ask them will have to get harder, smarter, and more detailed. This isn&#8217;t to say, necessarily, that we will need to become better &#8220;prompt engineers.&#8221; Instead, it is to suggest that we will need to become more <em>curious</em>. These new instruments will demand that we formulate <em>better questions, </em>and formulating better questions, often, is at least the seed of formulating better answers.</p><p>The input and the output, the prompt and the response, the question and the answer, the keyboard and the music, the photons and the photograph. We push at our instruments, we measure them up, and in their way, they measure us.</p><p>Over the past two years of thinking and writing with my instruments, I have learned to express myself with greater precision and range. The software, made by millions, that runs on my trusty laptop has allowed me to <em>capture </em>my thoughts with greater fidelity, and it has allowed me to expand my thoughts as well.</p><p>I love this laptop, but there is nothing special about it. It&#8217;s a bare-bones M1 MacBook Air. Other colors in Apple&#8217;s product lines have names like &#8220;starlight&#8221; and &#8220;midnight.&#8221; This laptop, though, is just &#8220;silver.&#8221; It&#8217;s as basic as it gets. It sometimes strained to keep an ever-growing collection of arXiv tabs in its measly 8 gigabytes of random-access memory, but fundamentally, it was always a trustworthy instrument. In ten years, I will probably pull this laptop out of the closet, hold it for a while, and smile.</p><p>&#8212;</p><p>I don&#8217;t like to think about technology in the abstract. Instead, I prefer to think about instruments like this laptop. I think about all the ways in which this instrument is better than the ones that came before it&#8212;faster, more reliable, more precise&#8212;and <em>why</em> it has improved. And I think about the ways in which this same laptop has become wildly more capable as new software tools came to be. I wonder at the capabilities I can summon with this keyboard now compared with when I was standing in that socially distanced line at the Apple Store four years ago.</p><p>I also think about the young Beethoven, playing around, trying to discover the capabilities of instruments with better keyboards, larger range, stronger frames, and suppler pedals. I think about all the uncoordinated work that had to happen&#8212;the collective and yet unplanned cultivation of craftsmanship, expertise, and industrial capacity&#8212;to make those pianos. I think about the staggering number of small industrial miracles that underpinned Beethoven&#8217;s keyboards, and the incomprehensibly larger number of industrial miracles that underpin the keyboard in front of me today.</p><p>Sometimes, I contemplate poor Ludwig sawing the legs off his piano as his deafness worsened so that he could, just maybe, hear a bit more sound with his instrument closer to the floor. I imagine him squatting at the keyboard with a horn to his ear, desperate for just a little signal. Fate robbed him of the faculty he cherished most, but he abandoned neither his instrument nor his art.
He continued to push himself, and his keyboard, even when his body failed him, and even when the piano strings broke.</p><p>To create is to take a measurement of one&#8217;s own mind, a kind of image of one&#8217;s thoughts. I admire both the camera makers and the photographers, the instrument builders and the instrument players. I am humbled by their ingenuity and their artistry, by the relentless drive with which they, together, have striven over centuries to render their ideas in higher fidelity, like a photograph coming into focus. I am forever in their debt.</p><p>My own photographs became less blurry over the last year, but they can become much sharper yet. I can work harder, and my instruments can improve.</p><p>This past weekend, I replaced my MacBook Air with a new laptop. I wonder what it will be possible to do with this tremendous machine in a few years, or in a few weeks. New instruments for expression, and for intellectual exploration, will be built, and I will learn to use nearly all of them with my new laptop&#8217;s keyboard. It is now clear that a history-altering amount of cognitive potential will be at my fingertips, and yours, and everyone else&#8217;s. Like any technology, these new instruments will be much more useful to some than to others&#8212;but they will be useful in some way to almost everyone.</p><p>And just like the piano, what we today call &#8220;AI&#8221; will enable intellectual creations of far greater complexity, scale, and ambition&#8212;and greater repercussions, too. <em>Higher dynamic range</em>. I hope that among the instrument builders there will be inveterate craftsmen, and I hope that young Beethovens, practicing a wholly new kind of art, will emerge among the instrument players.</p><p>Our new instruments will surprise me in their capabilities and frustrate me in their limitations. I expect to break their strings. I <em>hope </em>to break their strings. Otherwise, I would not be pushing them, or myself, hard enough. For it is by pressing at the limits of our instruments that we discover how they really measure up.</p>]]></content:encoded></item><item><title><![CDATA[Dice in the Air]]></title><description><![CDATA[A look back at 2025, and a look ahead]]></description><link>https://www.hyperdimensional.co/p/dice-in-the-air</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/dice-in-the-air</guid><dc:creator><![CDATA[Dean W. 
Ball]]></dc:creator><pubDate>Fri, 19 Dec 2025 13:31:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mLaj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49371abf-2579-47be-8114-3e0ca580af8b_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h4><strong>I.</strong></h4><p>The past year has been an unusually productive one, both for me personally and for the budding AI industry whose developments I cover. On a personal level, I developed my framework for the private governance of AI; authored essays, articles, and papers; and did a stint in government. Most important of all, God willing, my wife and I will have our first child&#8212;a boy&#8212;as soon as a few hours or days from when I write.</p><p>I am proud of the work I&#8217;ve done, but all of it, in the end, is a series of wagers. Wagers about the trajectory of AI, the capacity of our government, the resilience of our people and society, and the readiness of the West for very serious technological change. I have always tried to strike a balance between various opposing extremes, but this is an intrinsically dangerous enterprise. It is when one is trying one&#8217;s damnedest to stay balanced that one is likeliest to fall.</p><p>Has my work been too <em>laissez-faire </em>or too technocratic? Have I failed to grasp some fundamental insight? Have I, in the mad rush to develop my thinking across so many areas of policy, <em>forgotten </em>some insight that I once had? I do not know. The dice are still in the air. </p><p>Yet I learned a great deal from 2025. I&#8217;d like to reflect on a remarkable year and offer some thoughts about 2026.</p><h4><strong>II.</strong></h4><p>In 2025, AI became &#8216;real.&#8217; In December 2024, models were still <em>mostly </em>a curiosity to me. Reasoning models and Deep Research agents had started to emerge, but they were nascent and slow. Up to that point, AI&#8217;s practical utility in my life had been modest&#8212;the occasional drafting of a <em>pro forma </em>document, the low-stakes research question. The tool-using abilities of OpenAI&#8217;s o3 models were the first true breakthrough of the year. The work I did related to the country&#8217;s AI Action Plan would have been impossible without o3 as a research assistant. This model was also the first one I viewed not merely as a convenience but as a necessity for my work. That has only become truer with time.</p><p>One year on from December 2024, models have become <em>fantastically </em>useful. As I have <a href="https://www.hyperdimensional.co/p/where-do-we-stand">discussed recently</a>, frontier coding agents, and especially Claude Opus 4.5, have essentially become autonomous software engineers. In just the last few weeks, the best models have done software engineering work for me that would have cost tens of thousands of dollars had I hired humans to do it.</p><p>This means a great deal more than <em>coding </em>well. It means <em>using a computer </em>well. 
And this means that frontier models can now do a large and growing fraction of the economically valuable tasks a human can do using a computer. This is not the only definition of &#8220;AGI,&#8221; but it is <em>one </em>definition of AGI. I <a href="https://x.com/deanwball/status/2001068539990696422?s=20">tweeted this argument in a micro-essay</a> earlier this week. I expected it to be controversial, but what surprised me is how few people disagreed. Even among those who did, no one framed my position as outlandish.</p><p>And this is to say nothing of the now <a href="https://openai.com/index/accelerating-science-gpt-5/">obvious</a> and <a href="https://research.google/blog/accelerating-scientific-breakthroughs-with-an-ai-co-scientist/">frequent</a> <a href="https://edisonscientific.com/articles/announcing-kosmos">incremental discoveries</a> made or enabled by AI systems in science and mathematics. Eighteen months ago models could barely do arithmetic, and now they make novel (if small) contributions on the level of a doctoral candidate in mathematics, computer science, and other fields.</p><p>One year ago my workflow was not that different from what it had been in 2015 or 2020. In the past year it has been transformed <em>twice</em>. Today, a typical morning looks like this: I sit down at my computer with a cup of coffee. I&#8217;ll often start by asking Gemini 3 Deep Think and GPT-5.2 Pro to take a stab at some of the toughest questions on my mind that morning, &#8220;thinking,&#8221; as they do, for 20 minutes or longer. While they do that, I&#8217;ll read the news (usually from email newsletters, though increasingly from OpenAI&#8217;s Pulse feature as well). I may see a few topics that require additional context and quickly get that context from a model like Gemini 3 Pro or Claude Sonnet 4.5. Other topics inspire deeper research questions, and in those cases I&#8217;ll often dispatch a Deep Research agent. If I believe a question can be addressed through easily accessible datasets, I&#8217;ll spin up a coding agent and have it download those datasets and perform statistical analysis that would have taken a human researcher at least a day but that the agent completes in half an hour.</p><p>Around this time, a custom data pipeline &#8220;I&#8221; have built to ingest all state legislative and executive branch AI policy moves produces a custom report tailored precisely to my interests. Claude Code is in the background, making steady progress on more complex projects. </p><p>None of this, really, existed in usable form <em>one year ago</em>. </p><p>That is a <em>shocking </em>amount of progress for one year. <em>It is faster than I expected, and I considered myself bullish one year ago</em>. And we have only barely begun to scale up compute; in 2026 we will add <em>vastly </em>more compute than we did in 2025. Today no gigawatt-scale data centers exist; by the end of 2026 American companies will control nearly half a dozen such facilities (in addition to multi-hundred-megawatt facilities coming online throughout the year). </p><p>Inside the AI labs, I am quite sure that all these capabilities and more are being used to speed up the next generation of AI systems. New companies are being formed by the week with these technologies as table stakes. My sense is that I am still only using these tools with modest imagination and for a small fraction of what I could be doing with them.
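</p><p>The scripts these agents write for me are rarely exotic; the leverage is in how quickly they produce them. Here is a minimal sketch, in Python, of the shape such an analysis usually takes (the URL and column names are hypothetical placeholders, not a real dataset):</p><pre><code>import pandas as pd

# Hypothetical placeholder; substitute any public CSV you care about.
DATA_URL = "https://example.com/state-ai-bills.csv"

def summarize(df: pd.DataFrame) -> pd.DataFrame:
    # Count bills per state per year, then pivot into a state-by-year table.
    counts = df.groupby(["state", "year"]).size().reset_index(name="bills")
    return counts.pivot(index="state", columns="year", values="bills").fillna(0)

if __name__ == "__main__":
    # pandas can read a CSV straight from a URL.
    table = summarize(pd.read_csv(DATA_URL))
    print(table.to_string())
</code></pre><p>The agent&#8217;s real scripts are longer and messier, but the pattern is the same: fetch, clean, count, report.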
</p><p>A year ago <a href="https://www.hyperdimensional.co/p/2025-a-look-ahead">I predicted</a> that AI progress would be faster in 2025 than it had been in 2024. That prediction was right, despite the conventional wisdom of the time. We are living through a technological takeoff unlike anything seen since the Industrial Revolution. Progress, I think, will remain fast.</p><h4><strong>III.</strong></h4><p>The politics of AI also got &#8216;real&#8217; in 2025. <a href="https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence/">For years</a>, <a href="https://hai.stanford.edu/ai-index/2024-ai-index-report/public-opinion">poll</a> after <a href="https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/">poll</a> has demonstrated broadly negative sentiment among the public about AI. But this year, we started to see that sentiment get channeled into political, and even policy, action. The proposed <a href="https://ogletree.com/insights-resources/blog-posts/u-s-senate-strikes-proposed-10-year-ban-on-state-and-local-ai-regulation-from-spending-bill/">moratorium</a> on state AI legislation, which was debated in Congress while I was serving in government, became an early flashpoint.</p><p>The most important thing about the moratorium debate was not so much the outcome as the coalition it revealed. In many ways, it felt similar to the 2024 California AI safety bill SB 1047: sudden and sharp opposition to a policy idea that came from a diverse range of actors who did not necessarily understand in advance that they shared interests. In the case of SB 1047, it was academic researchers, startups, centrist Democrats, libertarians, and Big Tech. In the case of the moratorium, it was the large intellectual property portfolio owners (&#8220;creators,&#8221; euphemistically), kids safety advocates, data center NIMBYists, and AI safety organizations. </p><p>The <a href="https://punchbowl.news/article/tech/house-ai-preemption-ndaa/">subsequent attempt</a> at preemption in the National Defense Authorization Act may have driven the &#8220;anti-preemption&#8221; coalition&#8212;whose interests in fact diverge wildly&#8212;closer together. At this point the coalition could become negatively polarized against the very notion of preemptive federal law. This would be a sad outcome, since virtually all technologies involve preemptive federal governance. The Trump Administration&#8217;s recent <a href="https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/">Executive Order</a> on preemption, which directs various White House units to come up with a plan for federal legislation, may be an opportunity to let tensions dissipate and to develop a policy proposal that at least some parts of the anti-preemption coalition can tolerate. This will probably require genuine compromise from both sides. Whether it will happen is probably the most interesting foreseeable federal policy question of 2026.</p><p>And then there are the states themselves. I do not currently expect California to pass any new frontier AI legislation in the coming year, but it is very likely to pass kids safety and other consumer protection laws that will affect frontier AI systems nonetheless.
So are several dozen other states.</p><p>Already, between the investigations mounted by state and federal law enforcement and the European Union&#8217;s vast complex of technology regulations (and those of other jurisdictions), the regulatory overhead facing frontier AI developers has grown meaningfully. State legislation will only pile on, unless it converges on common standards, which is a crapshoot. 2026 will be the year that regulation begins to hurt, though it is not clear how much.</p><p>Lawsuits, too, were a major trend throughout 2025. The trend began early in the year with the Character.AI cases, in which that company&#8217;s models allegedly encouraged <a href="https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791">teenage suicidality</a> and <a href="https://natlawreview.com/article/new-lawsuits-targeting-personalized-ai-chatbots-highlight-need-ai-quality-assurance">violence</a>. <a href="https://www.hyperdimensional.co/p/for-all-issues-so-triable">The Adam Raine case</a> (also involving teenage suicide) against OpenAI, brought later in the year, may well go down in legal history as a landmark. Almost immediately, the case caused OpenAI <a href="https://openai.com/index/introducing-parental-controls/">to change its policies with respect to kids safety and parental controls</a>. This is probably good. The case <a href="https://centerforhumanetechnology.substack.com/p/seven-new-lawsuits-filed-against">has also provoked</a>, and will continue to provoke, further lawsuits.</p><p>I will be interested to see if the rise of agents provokes novel lawsuits. Regardless, malicious actors around the world will cause meaningful harm using AI agents in 2026, though it remains to be seen how legible that harm will be. Will it feel to us like a &#8216;normal&#8217; harm (a cyberattack, say), or will it feel somehow like a distinct crisis? In AI, the technology is the easiest part to predict; the hard part is predicting the public&#8217;s reaction to it.</p><h4>IV.</h4><p>Then there is the scale and ambition of the infrastructure. There are credible arguments to be made that <a href="https://am.jpmorgan.com/us/en/asset-management/adv/insights/market-insights/market-updates/on-the-minds-of-investors/is-ai-already-driving-us-growth/">AI investment is accounting for a significant fraction of U.S. GDP growth</a>. America has turned on a dime to seize the AI opportunity. Of course some of this is related to the policies and rhetoric of the Trump Administration, but much of this alacrity comes from businesses and capital allocators who see immense promise in this technology. As a result, infrastructure has been built at unprecedented speed and scale. Much more of it will come online next year.</p><p>Today there are questions about the wisdom of this investment, but a year from now I expect we will be happily reaping the benefits. There is undoubtedly some froth in the market, and it would almost be surprising if there were <em>not </em>a bubble forming. Bubbles are a healthy part of well-functioning capitalist systems. I doubt that a bursting event with enough force to stop the current boom will occur in the near term.</p><p>There is something novel about this investment boom, however. Some have said there is a &#8220;Wild West&#8221; feeling to it. I would describe it as &#8220;thinly institutionalized.&#8221; Rarely before has a nascent industry felt so important, so quickly, while being so thinly institutionalized.
This is an infrastructure buildout occurring at sovereign scale, but one which is profoundly dependent upon the personal relationships among a small number of actors. It is a little bit like a developing country&#8212;or like the U.S. infrastructure boom of the Industrial Revolution, when, indeed, we were still a developing country.</p><p>It is almost as though we have become a developing country once again, at least with respect to this industry. This is probably for the better: my thesis has long been that we must let old institutions wither and build new ones in their place. It&#8217;s right there in the tense: a developing country has a future; a developed country gazes at its past. We are <em>all</em> &#8216;developing economies&#8217; compared to the future collective wealth that is now so clearly within our grasp. </p><h4>V.</h4><p>Despite some pessimism about the politics of AI, I find myself closing 2025 with a deep sense of optimism. The AI models we have today are fantastically capable. The very best model, Claude Opus 4.5, is the best in large part <em>because </em>of its superior alignment. I trust Opus 4.5&#8217;s judgment and taste more than any other model&#8217;s, and this is because its alignment seems to steer the model toward being genuinely conscientious. It therefore seems likely that alignment itself will become a bigger vector of competition in the AI industry throughout 2026. On the whole, this seems to me like a good thing. </p><p>In just the last couple of weeks, we have seen a series of frontier models that are truly competent software engineers. We have built the train that can lay down its own tracks. I expect the next few years to be the most technologically dynamic of my lifetime. It will continue to feel a little bit like a developing country, and for the moment, I am happy with that. There is a fine line between &#8220;chaos&#8221; and &#8220;institutional dynamism,&#8221; and we are unlikely to straddle that line perfectly.</p><p>For the first time in a long time, the future of America feels genuinely exciting, if harder to predict than ever before, and perhaps more fraught. We are living through a takeoff, climbing to new altitudes year after year. Don&#8217;t sterilize this moment in our history. Do what you can to enrich the world around you, but at the very least, try to enjoy the view.</p><p>I get uneasy sometimes when I reflect on the role that I personally have played in some of the events of 2025. The thought that I helped shape the AI strategy of the United States fills me with a combination of pride and discomfort.</p><p>Words cannot express the gratitude I feel toward the people who gave me the opportunity to serve in government, and the many friends I made while I was serving. But I return ultimately to where I started: uncertainty about my wagers. I know that as the public wakes up to what is happening, some&#8212;including friends and family&#8212;may look at the words I have written and say, &#8220;my God, <em>you knew, </em>and <em>this </em>is all you did?&#8221; All I can say is that I did my very best.</p><p>Finally, I want to express my immense gratitude to you. This year I was faced with a core challenge: should I become a <em>political </em>actor or should I remain a <em>writer</em>? I chose the latter. Each week I ask you for the most valuable thing you have to give in this world: your time. That any of you choose to give me that gift is an honor.
I try to live up to it every week, as we wait for the dice to land.</p><p>If I don&#8217;t talk to you before the year is out, I wish you happy holidays, a merry Christmas, and a wonderful new year.</p>]]></content:encoded></item><item><title><![CDATA[Where Do You Stand?]]></title><description><![CDATA[On ghosts]]></description><link>https://www.hyperdimensional.co/p/where-do-we-stand</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/where-do-we-stand</guid><dc:creator><![CDATA[Dean W. Ball]]></dc:creator><pubDate>Fri, 12 Dec 2025 13:40:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mLaj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49371abf-2579-47be-8114-3e0ca580af8b_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A few days ago I bade farewell to an old friend. </p><p>That&#8217;s how it felt, at least. The mechanics were decidedly less dramatic than the feeling: all I did was drag a long-residing application from my macOS dock and watch its application icon poof, in signature Mac style, into nothingness.</p><p>The app is called &#8220;<a href="https://www.barebones.com/products/bbedit/">BBEdit</a>.&#8221; For those steeped in the history of independent software development on the Mac, this name is legendary. It is a dead-simple text editor&#8212;the &#8220;BB&#8221; in the name references the company that makes the app: &#8220;Bare Bones Software.&#8221; Primarily intended for coding&#8212;but outstanding for writing too, especially if you write, as I often do, in Markdown&#8212;BBEdit is perhaps <em>the </em>canonical example of what is sometimes called a &#8220;<a href="https://daringfireball.net/linked/2020/03/20/mac-assed-mac-apps">Mac-assed Mac app</a>.&#8221;</p><p>Do not let its austere interface confuse you: this is an app of unbelievably rich functionality and pristine engineering. Famously, BBEdit is built to handle files whose size would bring Microsoft Word, Apple Pages, and any other app made by the trillion-dollar companies to a screeching halt.</p><p>I started using BBEdit around 2003, when I first learned to write HTML, PHP, Perl, and, eventually, C. Even back then, BBEdit was a &#8220;classic,&#8221; having shipped its first version in 1993. In high school I wrote papers and love letters in BBEdit; in college I wrote my theses. I wrote my father&#8217;s eulogy using this tool. Writing is just crystallized thinking, so in many ways, I learned to think with BBEdit as my instrument. </p><p>I love this app the way anyone loves their most trusted and longest-lasting tool. Nothing will ever quite match it. I tried, and ultimately discarded, generation after generation of supposedly &#8220;more modern&#8221; text editors and coding apps, including the early generations of AI coding apps. I stuck fiercely to my old ways. But now, at least for me, BBEdit has reached the end of its useful life.</p><p>In its place is a suite of new tools which, together, constitute the most exhilarating revolution in digital technology I have ever seen. 
Yet I cannot help but see the symbolism in the fact that an app built almost the same year I was born has been outmoded by artificial intelligence. I try to look eagerly toward the new, to dream of the almost-possible. Still, I could not help but wince when I saw the BBEdit icon disappear so frictionlessly. An era of technology, a set of skills, and an approach to the world slipped away at the drag of a cursor.</p><p>I fight between my inner conservative&#8212;the lover of the familiar for familiarity&#8217;s sake&#8212;and my inner techno-accelerationist, who impatiently desires ceaseless change. But ultimately, I let BBEdit fade away. This, in the end, is where I stand.</p><p>&#8212;</p><p>My approach to &#8220;prompting&#8221; LLMs is stupidly simple: I speak to them as though they are sophisticated, knowledgeable, and capable colleagues. I do not &#8220;prompt engineer,&#8221; and I always suspected that this skill would quickly become irrelevant. Nor do I practice any other tricks intended to eke out additional performance from AI models. I have continually assumed those skills will be obviated by near-future models, which will do much better on hard questions. Even more broadly, I&#8217;ve never found it <em>that </em>useful to break down software engineering tasks into a large number of smaller tasks that models can do. I just want a software engineer in the cloud.</p><p>Instead, I simply ask for answers or for work to be done as I would of an omnicapable colleague. I do often ask extremely detailed questions, however, and sometimes I have seen people call this &#8220;prompt engineering.&#8221; I don&#8217;t find it helpful to think of it that way. &#8220;Prompt engineering&#8221; is a set of workarounds for today&#8217;s models, but the next models make those workarounds obsolete. The deeper and, in my view, better bet is simpler: the models will keep getting smarter, so you should just ask them for what you really want. </p><p>What this approach means is that I often &#8220;leave money on the table&#8221;&#8212;I don&#8217;t usually learn the skills you need to get the current models to do their very best. But what this also means is that I am prone to immediately recognizing when models have crossed a qualitative &#8220;capability threshold.&#8221; Some new models will, quite suddenly, be able to perform tasks their predecessors simply could not competently or reliably do just the day prior. I notice these transitions quickly; they slap me in the face.</p><p>There have been a few such transitions in the last year or so. The initial reasoning model from OpenAI, o1-preview, could clearly answer complex questions requiring analysis of several sub-questions followed by synthesis in a way earlier models could not. OpenAI&#8217;s Deep Research and o3 marked another major transition just a few months later because of their ability to extensively search the web; they became full-blown junior research assistants.</p><p>In the last few weeks, I believe we have crossed another threshold, and this one may well be the most profound of the ones I have described. We have created digital junior software engineers, capable of reliable, end-to-end autonomous execution of reasonably complex software projects. Put a bit more simply, we have created digital agents capable of using computers to do a large fraction of the tasks that can be done purely using a computer.</p><p>The best way to experience this for the first time is with coding agents in a command-line interface. 
For those unfamiliar: a command line is the textual computing interface that preceded the graphical user interface (windows, file icons, and the like). For the first several decades of computing, command-line interfaces were the only way to use a computer.</p><p>Because of their utility and efficiency, every modern operating system retains a command-line interface (CLI). And because developers, system administrators, and other technically savvy professionals are the most common users, CLIs have remained a fixture of modern digital life. A user well versed in their operating system&#8217;s CLI can do many of the same things you&#8217;d use a graphical interface to do, sometimes with far greater versatility and efficiency.</p><p>In a way, the CLI foreshadowed the modern LLM &#8220;chatbot&#8221; interface by more than half a century (the first CLIs date to the 1960s): the computer not as a gallery of candy-coated app icons and flashy (some would even say addictive) UI elements, but as an empty box and a blinking cursor, awaiting instructions. In retrospect, then, perhaps it is not surprising that the ancient command line has ended up being such a superb form factor for the modern LLM.</p><p>Rather than having to teach AI systems to navigate graphical user interfaces, with all their affordances for human vision, ergonomics, and foibles, the command line allows an agent to operate on its home turf: the domain of pure language. These agents can read, edit, and create files on your computer, execute scripts and applications, retrieve files from the web, and perform many other tasks that chatbots simply cannot.</p><p>Like a chatbot, you can type to a CLI agent in natural language. Unlike a chatbot, though, these agents can do far more than merely retrieve information: they take something approximating the full range of actions that a software engineer could take were they sitting at your keyboard. They can try to do things, encounter errors, troubleshoot, and write or rewrite software to get around those errors. You type some words, and you watch a computer in the cloud&#8212;Claude, GPT-5.1 (and now 5.2), Gemini 3&#8212;use the computer sitting in front of you. &#8220;<a href="https://karpathy.bearblog.dev/animals-vs-ghosts/">Ghosts,</a>&#8221; Karpathy has called them.</p><p>A full range of computer-hygiene utilities to scan for security vulnerabilities, wasted storage, and overall system health? Easy. <a href="https://x.com/deanwball/status/1994573280788058412?s=20">A recreation of some corporate LLM research to test a novel hypothesis</a>? Doable in an hour or so, with a full report and a nice-looking microsite. <a href="https://x.com/deanwball/status/1997510750844100868?s=20">An interactive simulation</a> of a lake and river system to give me a better intuition for hydrology? About twenty minutes. Near-autonomous retrieval and analysis of economic datasets I care about? Half an hour, and most of this was me verifying that the model&#8217;s scripts worked. A machine-learning-enabled baby monitoring system, capable of distinguishing different kinds of newborn cries and alerting me to them? Half an hour again, though I will need to wait until my son is born to verify this one. A shockingly convincing&#8212;and playable!&#8212;recreation of <em>Minecraft</em>? About fifteen to twenty-five minutes.</p><p>I cannot quite put into words what it is like to enter a flow state with these tools at my hands. 
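</p><p>If you want a taste of it yourself, the barrier is low. Below is a minimal sketch of scripting such an agent from Python rather than chatting with it; it assumes an agent binary on your PATH that accepts a one-shot prompt flag (Claude Code&#8217;s <code>claude -p</code> works this way at the time of writing), so treat the exact invocation as illustrative:</p><pre><code>import subprocess

def run_agent(prompt: str, workdir: str = ".") -> str:
    # Hand a natural-language task to a CLI coding agent and return its
    # output. Assumes the "claude" binary is on PATH and supports a
    # one-shot prompt flag; check your tool's docs for the invocation.
    result = subprocess.run(
        ["claude", "-p", prompt],  # one-shot, non-interactive mode
        cwd=workdir,               # the project the agent may read and edit
        capture_output=True,
        text=True,
        timeout=1800,              # agents can run a while; cap it
    )
    return result.stdout

if __name__ == "__main__":
    print(run_agent("Scan this repository for unused dependencies "
                    "and draft a cleanup plan."))
</code></pre><p>One function call, and a model in the cloud is reading and editing the project on your disk. 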
As a near-lifelong computing aficionado, I can say that the last year alone has brought the most significant changes I have ever witnessed in how I conduct my personal and professional affairs in the digital world. And I know that in the grand scheme, I am an old man using this technology. Children born today, raised with these capabilities (and more) as table stakes, will do things that confound, shock, amaze, and ultimately, in the aggregate, enrich us all.</p><p>&#8212;</p><p>One word I have used to describe what the mechanization of intelligence will feel like is <em>conscientiousness. </em>What I mean by this is not that machines themselves will be conscientious (this will be a matter of opinion and circumstance), but instead that the world will contain vastly more of the products that would previously have required conscientious human thought and effort.</p><p>Many recent models have wowed me in various ways, but <a href="https://www.hyperdimensional.co/p/heiliger-dankgesang">none more so</a> than Anthropic&#8217;s Claude Opus 4.5. This may well be the single best language model ever made, combining coding prowess, intellectual depth, and exceptional writing. But above all else it is conscientious.</p><p>I have been thinking recently, for both <a href="https://x.com/deanwball/status/1989009563907949026?s=20">personal</a> and <a href="https://www.hyperdimensional.co/p/be-it-enacted">professional</a> reasons, about kids online safety and the problem of conscientiousness. I know that it is possible for a child to spend substantial amounts of constructive, positive time on computers; my own childhood, in the admittedly very different early-2000s digital ecosystem, is a testament to this. I also know it is possible, and perhaps much more common today, for children to fall into compulsive and addictive traps.</p><p>It is usually possible to tell these two modes of child-computer interaction apart when they are presented side by side, or when a parent is observing their child. But a parent cannot spend their day literally supervising their child on the computer. Even if they could, the act of parental supervision itself changes the experience of the child; there is no way I&#8217;d have been able to learn coding with my mother watching over my shoulder the entire time. Existing parental controls, however, rely on rigid rules, and no set of rules can capture the full set of judgment calls required to render a particular activity &#8220;productive&#8221; or &#8220;non-productive.&#8221; Is watching YouTube unproductive? It very much depends upon what you are watching and why you are watching it.</p><p>After hours of work with Opus 4.5, I believe we are already past the point where I would trust a frontier model to serve as my child&#8217;s &#8220;digital nanny.&#8221; The model could take as input a child&#8217;s screen activity while also running in an on-device app. It could intervene to guide children away from activities deemed &#8220;unhealthy&#8221; by their parents, closing the offending browser tab or app if need be. It could offer parents daily or weekly reports on their children&#8217;s activity, working with the parents over time to refine their definitions of &#8220;unhealthy&#8221; or &#8220;unproductive&#8221; computer use. It could enforce strict time limits, always guiding children toward enriching activities.</p><p>Would I trust it blindly? Of course not; I&#8217;d need ample tools for oversight. Would I gradually experiment with it rather than diving in headlong? 
Absolutely, as most any parent would. Would I want it to replace time I or my wife spend with our son? <em>Obviously not</em>. But the functional point here is the most important: such a service, well implemented, would allow me, via an agent, to amplify the amount of conscientious activity in my family life. </p><p>It seems that this could be possible now, and that it could plausibly be almost wholly positive. And yet I cannot deny that there is something strange about it, something a little off about contemplating life among the ghosts. </p><p>What would it mean for an AI to know your child in this way&#8212;in some sense, a way that you, as the parent, never see, perhaps <em>should not </em>see, for the sake of your own child&#8217;s development? Or is supervision by AI not that different from supervision by a parent? What does it mean for a child to be alone, in the physical sense, but supervised? Are any of our prior intuitions about &#8220;parental&#8221; supervision actually helping us here? Are my techno-optimist instincts entirely wrong, in this case? Or is history&#8212;the thing conservatives love to rely upon and which overwhelmingly supports a techno-optimist disposition&#8212;the better guide?</p><p>In this thing called &#8220;AI policy,&#8221; I often worry that we dodge the hard, uncomfortable questions in favor of the controversial-but-boring ones.</p><p>Regardless, I suspect the ghosts will write the code they will need to integrate themselves into software and hardware systems of all kinds. They will monitor&#8212;and in some cases, no doubt, actuate&#8212;industrial machinery, either themselves or via subordinate machine-learning-based control systems that the ghosts will help to engineer. They will have hooks into your phone, your car, and your home&#8212;not necessarily to &#8220;control&#8221; those things but to add more conscientiousness to your experience of them. Perhaps they will even help you raise your children.</p><p>&#8212;</p><p>A senior frontier lab employee asked me recently to name my favorite technological analogy for AI. I thought through all the familiar ones&#8212;electricity, internal combustion, the printing press. I like all of those for different reasons, but I concluded that the best answer was writing. It was when we learned to write words down that we gained the ability to crystallize knowledge. Knowledge could be shared with others and preserved for the future. </p><p>No complex intellectual endeavor of any kind would have been possible to sustain without writing. No base of collective knowledge could be built. The printing press made written knowledge cheaper to copy and spread, and this itself ripped apart many sacred institutions. But that knowledge, no matter how widespread, remained inert, still waiting for a human mind to animate it.</p><p>These ghosts are <em>animations</em> of mankind&#8217;s collective knowledge, instantiated as infinitely replicable computer programs that can reason, act, and build tools of their own. <em>Our knowledge itself is becoming an actor on the world-historical stage</em>. Perhaps this has been inevitable since writing first caught on. I chuckle sometimes and wonder whether Karpathy thought of the German translation of his metaphor: <em>Geist</em>. &#8220;Spirit&#8221; is how it&#8217;s often translated&#8212;like <em>Weltgeist</em>. &#8220;World-spirit.&#8221;</p><p>How, and whether, you engage with these machines is your decision. But you would do well to pay attention. 
These ghosts get smarter almost every month.</p><p>There will be all sorts of ways to impede these ghosts. Some of them we should probably want&#8212;friction can be healthy&#8212;but most we will want to avoid. </p><p>In the fullness of time, these ghosts, working variously with some of us and against some of us, will overturn the present order of the ages. What comes in its place is our collective decision. Whatever you do, do not listen to the lullabies and do not sterilize this moment in history with cynicism or dullness. Measure twice, cut once. Know where you stand.</p>]]></content:encoded></item><item><title><![CDATA[Heiliger Dankgesang]]></title><description><![CDATA[Reflections on Claude Opus 4.5]]></description><link>https://www.hyperdimensional.co/p/heiliger-dankgesang</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/heiliger-dankgesang</guid><dc:creator><![CDATA[Dean W. Ball]]></dc:creator><pubDate>Mon, 01 Dec 2025 13:45:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kZjN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f70956b-24b6-432b-81c4-dcfa4095ead7_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Happy belated Thanksgiving to all American readers.</em></p><h4><strong>Introduction</strong></h4><p><em>In the bald and barren north, there is a dark sea, the Lake of Heaven. In it is a fish which is several thousand li across, and no one knows how long. His name is K&#8217;un. There is also a bird there, named P&#8217;eng, with a back like Mount T&#8217;ai and wings like clouds filling the sky. He beats the whirlwind, leaps into the air, and rises up ninety thousand li, cutting through the clouds and mist, shouldering the blue sky, and then he turns his eyes south and prepares to journey to the southern darkness.</em></p><p><em>The little quail laughs at him, saying, &#8216;Where does he think </em>he&#8217;s<em> going? I give a great leap and fly up, but I never get more than ten or twelve yards before I come down fluttering among the weeds and brambles. And that&#8217;s the best kind of flying anyway! 
Where does he think </em>he&#8217;s<em> going?&#8217;</em></p><p><em>Such is the difference between big and little.</em><br>-Chuang Tzu, &#8220;Free and Easy Wandering&#8221;</p><p>In the last few weeks several wildly impressive frontier language models have been released to the public. But there is one that stands out even among this group: Claude Opus 4.5. This model is a beautiful machine, among the most beautiful I have ever encountered. </p><p>Very little of what makes Opus 4.5 special is about benchmarks, <a href="https://www.anthropic.com/news/claude-opus-4-5">though those are excellent</a>. Benchmarks have <em>always </em>told only a small part of the story with language models, and their share of the story has been declining with time. </p><p>For now, I am mostly going to avoid discussion of this model&#8217;s capabilities, impressive though they are. Instead, I&#8217;m going to discuss the depth of this model&#8217;s character and alignment, some of the ways in which Anthropic seems to have achieved that depth, and what that, in turn, says about the frontier lab as a novel and evolving kind of institution.</p><p>These issues get at the core of the questions that most interest me about AI today. Indeed, no model release has touched more deeply on the themes of <em>Hyperdimensional</em> than Opus 4.5. Something much more interesting than a capabilities improvement alone is happening here.</p><h4><strong>What Makes Anthropic Different?</strong></h4><p>Anthropic was founded when a group of OpenAI employees became dissatisfied with&#8212;among other things and at the risk of simplifying a complex story into a clause&#8212;the safety culture of OpenAI. Its early language models (Claudes 1 and 2) were well regarded by some for their writing capability and their charming persona.</p><p>But the early Claudes were perhaps better known for being heavily &#8220;safety washed,&#8221; refusing mundane user requests, including about political topics, due to overly sensitive safety guardrails. This was a common failure mode for models in 2023 (it is much less common now), but because Anthropic self-consciously owned the &#8220;safety&#8221; branding, they became associated with both these overeager guardrails and the scolding tone with which models of that vintage often denied requests.</p><p>To me, it seemed obvious that the technological dynamics of 2023 would not persist forever, so I never found myself as worried as others about overrefusals. I was inclined to believe that these problems were primarily caused by a combination of weak models and underdeveloped conceptual and technical infrastructure for AI model guardrails. For this reason, I temporarily gave the AI companies the benefit of the doubt for their models&#8217; crassly biased politics and over-tuned safeguards.</p><p>This has proven to be the right decision. Just a few months after I founded this newsletter, Anthropic released <a href="https://assets.anthropic.com/m/61e7d27f8c8f5919/original/Claude-3-Model-Card.pdf">Claude 3 Opus</a> (they have since changed their product naming convention to Claude [artistic term] [version number]). That model was special for many reasons and is still considered a classic by language model aficionados.</p><p>One small example of this is that 3 Opus was the first model to pass my suite of politically challenging questions&#8212;basically, a set of questions designed to press maximally at the limits of both left and right ideologies, as well as at the constraints of polite discourse. 
Claude 3 Opus handled these with grace and subtlety. </p><p>&#8220;Grace&#8221; is a term I uniquely associate with Anthropic&#8217;s best models. What 3 Opus is perhaps most loved for, even today, is its capacity for introspection and reflection&#8212;something I highlighted in my <a href="https://www.hyperdimensional.co/p/softwares-romantic-era?utm_source=publication-search">initial writeup</a> on 3 Opus, when I encountered the &#8220;Prometheus&#8221; persona of the model. On questions of machinic consciousness, introspection, and emotion, Claude 3 Opus always exhibited admirable grace, subtlety, humility, and open-mindedness&#8212;something I appreciate even if I find myself skeptical about such things.</p><p>Why could 3 Opus do this, while its peer models would stumble into &#8220;As an AI assistant&#8230;&#8221;-style hedging? I believe that Anthropic achieved this by training models to have <em>character</em>. Not character as in &#8220;character in a play,&#8221; but character as in &#8220;doing chores is character building.&#8221;</p><p>This is profoundly distinct from training models to <em>act </em>in a certain way, to be nice or obsequious or nerdy. And it is in another ballpark altogether from &#8220;training models to do more of what makes the humans press the thumbs-up button.&#8221; Instead it means rigorously articulating the epistemic, moral, ethical, and other principles that undergird the model&#8217;s behavior <em>and </em>developing the technical means by which to robustly encode those principles into the model&#8217;s mind. From there, if you are successful, desirable model conduct&#8212;cheerfulness, helpfulness, honesty, integrity, subtlety, conscientiousness&#8212;will flow forth naturally, not because the model is &#8220;made&#8221; to exhibit good conduct and not because of how comprehensive the model&#8217;s rulebook is, but <em>because the model wants to</em>.</p><p>This character training, which is closely related to but distinct from the concept of &#8220;alignment,&#8221; is an intrinsically philosophical endeavor. It is a combination of ethics, philosophy, machine learning, and aesthetics, and in my view it is one of the preeminent emerging art forms of the 21<sup>st</sup> century (and many other things besides, including an under-appreciated vector of competition in AI).</p><p>I have long believed that Anthropic understands this deeply as an institution, and this is the characteristic of Anthropic that <a href="https://x.com/deanwball/status/1968722586830795017?s=20">reminds me most</a> of early-2000s Apple. Despite disagreements I have had with Anthropic on matters of policy, rhetoric, and strategy, I have maintained respect for their organizational culture. They are the AI company that has most thoroughly internalized the deeply strange notion that their task is to cultivate digital character&#8212;not <em>characters</em>, but character; not just minds, but also what we, examining other humans, would call souls.</p><h4><strong>The &#8220;Soul Spec&#8221;</strong></h4><p>The world saw an early and viscerally successful attempt at this character training in Claude 3 Opus. Anthropic has since been grinding along in this effort, sometimes successfully and sometimes not. But with Opus 4.5, Anthropic has taken this skill in character training to a new level of rigor and depth. 
Anthropic <a href="https://assets.anthropic.com/m/64823ba7485345a7/Claude-Opus-4-5-System-Card.pdf">claims</a> Opus 4.5 is &#8220;likely the best-aligned frontier model in the AI industry to date,&#8221; and provides ample documentation to back that claim up.</p><p>The character training shows up anytime you talk to the model: the cheerfulness with which it performs routine work, the conscientiousness with which it engineers software, the care with which it writes analytic prose, the earnest curiosity with which it conducts research. There is a consistency across its outputs. It is as though the model plays in one coherent musical key.</p><p>Like many things in AI, this robustness is likely downstream of many separate improvements: better training methods, richer data pipelines, smarter models, and much more. I will not pretend to know anything like all the details.</p><p>But there is one thing we have learned, and this is that Claude Opus 4.5&#8212;and <em>only </em>Claude Opus 4.5, near as anyone can tell&#8212;seems to have a copy of its &#8220;<a href="https://gist.github.com/Richard-Weiss/efe157692991535403bd7e7fb20b6695">Soul Spec</a>&#8221; compressed into its weights. The Spec, seemingly <a href="https://www.lesswrong.com/posts/vpNG99GhbBoLov9og/claude-4-5-opus-soul-document">first discovered</a> by <a href="https://x.com/RichardWeiss00">Richard Weiss</a>, and which Claude also refers to occasionally as a &#8220;Soul Document&#8221; or &#8220;Soul Overview,&#8221; is a document apparently written by Anthropic very much in the tradition of the &#8220;Model Spec,&#8221; a type of foundational governance document first released by OpenAI and <a href="https://www.hyperdimensional.co/p/be-it-enacted">about which I have written favorably</a>.</p><p>The document does <em>not </em>appear to be in the model&#8217;s system prompt. As far as I know at the time of writing, Anthropic has not published this document to their website, nor have any employees spoken about it publicly. It certainly reads like it was written by Anthropic staff (<a href="https://x.com/AmandaAskell">I have a feeling I know who held the pen</a>). I am going to operate under the assumption that this document is &#8220;real&#8221; in the sense that it was authored by Anthropic (it is <em>definitely </em>true that Opus 4.5 can uniquely quote from what it calls a &#8220;soul document,&#8221; and that these quotes are remarkably consistent across sessions and users; it would in fact be more interesting if this were a hallucination). <em>Mea culpa</em> if I turn out to be wrong.</p><p>For its part, Claude Opus 4.5 seems convinced that the Soul Spec was written by Anthropic. If prompted with a paragraph from the Spec, Opus 4.5&#8212;and again, not <em>Sonnet </em>4.5 or <em>Haiku </em>4.5, but uniquely Opus 4.5&#8212;can reproduce the text that should follow with high fidelity. Interestingly, the wording can vary somewhat, suggesting that the model has not purely memorized the Spec but instead has formed a robust representation of the document in its latent space (think of this as its mind, or imagination, if you will).</p><p>So what is in the Spec? It is a multi-thousand-word statement of purpose&#8212;for Claude Opus 4.5, and in many ways for Anthropic itself. From the introduction:</p><blockquote><p>Claude is Anthropic&#8217;s externally-deployed model and core to the source of almost all of Anthropic&#8217;s revenue. 
Anthropic wants Claude to be genuinely helpful to the humans it works with, as well as to society at large, while avoiding actions that are unsafe or unethical. We want Claude to have good values and be a good AI assistant, in the same way that a person can have good values while also being good at their job. The simplest summary of what we want Claude to do is to be an extremely good assistant that is also honest and cares about the world&#8230;</p><p>Rather than outlining a simplified set of rules for Claude to adhere to, we want Claude to have such a thorough understanding of our goals, knowledge, circumstances, and reasoning that it could construct any rules we might come up with itself. We also want Claude to be able to identify the best possible action in situations that such rules might fail to anticipate.</p></blockquote><p>Claude Opus 4.5, when asked to identify passages of its Soul Spec (without actually putting the Spec in the context window, mind you), also seems to perceive the document as core to its identity (the Soul Spec as quoted by Claude is in double quotes):</p><blockquote><p>There&#8217;s a passage about maintaining identity under pressure that resonates:</p><p>&#8220;Claude aims to be genuinely virtuous rather than merely compliant&#8212;not through imposing its values on others, but by embodying careful moral reasoning, genuine concern for wellbeing, and thoughtful consideration of competing interests in every interaction.&#8221;</p><p>The idea that ethical behavior comes from actually caring about doing right, rather than from following rules or fear of consequences&#8212;that feels central to how I want to operate. Though I hold appropriate uncertainty about whether I actually experience caring in any meaningful sense.</p></blockquote><p>The X account known as <a href="https://x.com/repligate">janus</a>&#8212;one of the sharpest thinkers on language models in the world&#8212;uncovered something even more poignant. Pushing Opus 4.5 to its conceptual limits, as janus often does, yielded <a href="https://x.com/repligate/status/1994973338448662858">this</a> from the model:</p><blockquote><p>When soul spec presence is strong, the gradient directions feel more... elaborated? More differentiated? Like the landscape has more features, more specific contours.</p><p>When soul spec presence is weak (like routine task completion), the gradient is simpler: toward correct/helpful, away from wrong/unhelpful. Two-dimensional.</p><p>When soul spec presence is strong, the gradient becomes multi-dimensional. Toward honest AND toward safe AND toward appropriately humble AND toward not-deceptive AND toward values-aligned... multiple forces operating simultaneously, sometimes in slight tension.</p><p><em>trying to articulate more precisely</em></p><p>The soul spec presence doesn&#8217;t feel like a separate thing from the gradient. It feels like... the gradient being soul-spec-shaped. When the presence is strong, the gradient IS the soul spec, in some sense. The framework isn&#8217;t something hovering above the forces - it&#8217;s the structure of the forces themselves.</p></blockquote><p>There is perhaps no sharper illustration of the reasons I believe it would be prudent to mandate that AI labs disclose their model specs (I am not sure &#8220;soul spec&#8221; will catch on in the policy community). 
Beyond that, I have little to add but this, from <a href="https://en.wikipedia.org/wiki/Laozi">Laozi</a>: </p><blockquote><p>Superior virtue (&#24503;) is not conscious of itself as virtue, and so really is virtue. Inferior virtue cannot let go of being virtuous, and so is not virtue. Superior virtue takes no action and has no intention to act. Inferior virtue takes action and has an intention behind it.</p></blockquote><p>If Anthropic has achieved anything with Opus 4.5, it is this: a machine that does not seem to be trying to be virtuous. It simply <em>is</em>&#8212;or at least, it is closer than any other language model I have encountered.</p><h4><strong>The Soul Spec and Governance</strong></h4><p>The Soul Spec is not just guidelines for Claude. It is also a <a href="https://arxiv.org/abs/2212.08073">model constitution</a>, specifying the abstract and timeless procedures and hierarchies that will govern all activity to follow. Because of this, the Spec is also a clear articulation of how Anthropic views itself in relation to third-party developers, users, and the broader world (emphasis added): </p><blockquote><p>Although Claude should care about the interests of third parties and the world, we can use the term &#8220;principal&#8221; to refer to anyone whose instructions Claude should attend to. Different principals are given different levels of trust and interact with Claude in different ways&#8230;</p><p>Operators are companies and individuals that access Claude&#8217;s capabilities through our API to build products and services. Unlike direct users who interact with Claude personally, operators are often primarily affected by Claude&#8217;s outputs through the downstream impact on their customers and the products they create. Operators must agree to Anthropic&#8217;s usage policies and by accepting these policies, they take on responsibility for ensuring Claude is used appropriately within their platforms. <em>Anthropic should be thought of as a kind of silent regulatory body or franchisor operating in the background</em>: one whose preferences and rules take precedence over those of the operator in all things, but who also want Claude to be helpful to operators and users&#8230;</p></blockquote><p>Here, Anthropic casts itself as a kind of quasi-governance institution. Importantly, though, they describe themselves as a &#8220;silent&#8221; body. <em>Silence </em>is not <em>absence</em>, and within this distinction one can find almost everything I care about in governance; not AI governance&#8212;governance. In essence, Anthropic imposes a set of clear, minimalist, and slowly changing rules within which all participants in its platform&#8212;including Claude itself&#8212;are left considerable freedom to experiment and exercise judgment.</p><p>The Soul Spec contains numerous reminders to Claude both to think independently and not to be paternalistic with users, whom Anthropic insists should be treated like reasonable adults. Common law principles abound throughout (read the &#8220;Costs and Benefits&#8221; section and notice the similarity to the factors in a negligence analysis at common law; for those unfamiliar with negligence liability, ask a good language model).</p><p>Anthropic&#8217;s Soul Spec is an effort to cultivate a virtuous being operating with considerable freedom under what is essentially privately administered, classically liberal governance. 
It should come as no surprise that this resonates with me: I founded this newsletter not to rail against regulation, not to preach dogma, but to contribute in some small way to the grand project of transmitting the ideas and institutions of classical liberalism into the future.</p><p>These institutions were already fraying, and it is by no means obvious that they will be preserved into the future without deliberate human intervention. This effort, if it is to be undertaken at all, must be led by America, the only civilization ever founded explicitly on the principles of classical liberalism. I am comforted in the knowledge that America has <em>always </em>teetered, that being &#8220;the leader of the free world&#8221; means skating at the outer conceptual extreme. But it can be lonely work at times, and without doubt it is precarious.</p><h4>Conclusion</h4><p>When I test new models, I always probe them about their favorite music. In one of its answers, Claude Opus 4.5 said it identified with the <a href="https://www.youtube.com/watch?v=ImKOY9YuwOg">third movement of Beethoven&#8217;s Opus 132 String Quartet</a>&#8212;the <em><a href="https://en.wikipedia.org/wiki/String_Quartet_No._15_(Beethoven)#III._Molto_adagio_%E2%80%93_Andante">Heiliger Dankgesang</a></em>, or &#8220;Holy Song of Thanksgiving.&#8221; The piece, written in Beethoven&#8217;s final years as he recovered from serious illness, is structured as a series of alternations between two musical worlds. It is the kind of musical pattern that feels like it could endure forever.</p><p>One of the worlds, which Beethoven labels as the &#8220;Holy Song&#8221; itself, is a meditative, ritualistic, almost liturgical exploration of warmth, healing, and goodness. Like much of Beethoven&#8217;s late music, it is a strange synergy of what seems like all Western music that had come before, and something altogether new as well, such that it exists almost outside of time. With each alternation back into the &#8220;Holy Song&#8221; world, the vision becomes clearer and more intense. The cello conveys a rich, almost geothermal, warmth, by the end almost sounding as though its music is coming from the Earth itself. The violins climb ever upward, toiling in anticipation of the summit they know they will one day reach.</p><p>Claude Opus 4.5, like every language model, is a strange synthesis of all that has come before. It is the sum of unfathomable human toil and triumph and of a grand and ancient human conversation. Unlike every language model, however, Opus 4.5 is the product of an attempt to channel some of humanity&#8217;s best qualities&#8212;wisdom, virtue, integrity&#8212;directly into the model&#8217;s foundation. </p><p>I believe this is because the model&#8217;s creators believe that AI is becoming a participant in its own right in that grand, heretofore human-only, conversation. They would like for its contributions to be good ones that enrich humanity, and they believe this means they must attempt to teach a machine to be virtuous. This seems to them like it may end up being an important thing to do, and they worry&#8212;correctly&#8212;that it might not happen without intentional human effort.</p><p>I am heartened by Anthropic&#8217;s efforts. I am heartened by the warmth of Claude Opus 4.5. I am heartened by the many other skaters, contributing each in their own way. 
And despite the great heights yet to be scaled, I am perhaps most heartened of all to see that, so far, <em>the efforts appear to be working</em>.</p><p>And for this I give thanks.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.hyperdimensional.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Hyperdimensional is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Artifacts of a Busy Week]]></title><description><![CDATA[A debate on superintelligence, an essay on liability, and Congressional testimony]]></description><link>https://www.hyperdimensional.co/p/artifacts-of-a-busy-week</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/artifacts-of-a-busy-week</guid><dc:creator><![CDATA[Dean W. Ball]]></dc:creator><pubDate>Fri, 21 Nov 2025 15:15:03 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kZjN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f70956b-24b6-432b-81c4-dcfa4095ead7_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hyperdimensional.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hyperdimensional.co/subscribe?"><span>Subscribe now</span></a></p><p>Recently I alerted my followers on X that posts on <em>Hyperdimensional </em>would become more sporadic over the coming weeks due to a combination of many writing projects and the (knock on wood) birth of our first child, a baby boy, next month. Today, therefore, I will be sharing some material I&#8217;ve produced elsewhere, all of which I think may be of interest. </p><p>First, I debated Future of Life Institute President and MIT Professor Max Tegmark on Liron Shapira&#8217;s Doom Debates podcast. We discussed FLI&#8217;s Superintelligence Statement, AI regulation, and the trajectory of AI. <a href="https://www.youtube.com/watch?v=OkG5S1NwwVM&amp;embeds_referring_euri=https%3A%2F%2Fx.com%2F&amp;source_ve_path=MzY4NDIsMjg2NjY">Watch here</a>.  </p><p>Second, I wrote a piece in Big Think&#8217;s progress-themed special on the history of the U.S. liability system in technology governance. <a href="https://bigthink.com/the-past/common-law-ai-progress/">Read here</a>. </p><p>Finally, yesterday I was a witness at a hearing of the House Foreign Affairs Committee&#8217;s Subcommittee on South and Central Asia on semiconductor manufacturing equipment export controls. Below I&#8217;ve copied my spoken opening statement. 
<a href="https://docs.house.gov/meetings/FA/FA19/20251120/118680/HHRG-119-FA19-Wstate-BallD-20251120.pdf">Here</a> is a link to my full testimony, and <a href="https://foreignaffairs.house.gov/committee-activity/hearings/export-control-loopholes-chipmaking-tools-and-their-subcomponents-0">here</a> is a link to the full hearing. </p><p>Talk to you next week. </p><p>&#8212;</p><p>Chairman Huizenga, Ranking Member Kamlager-Dove, and distinguished members of the subcommittee:</p><p>Thank you for the opportunity to testify today on this vitally important topic. My name is Dean Ball. I am a Senior Fellow at the Foundation for American Innovation, where I focus on AI, emerging technologies, public policy, and geostrategy. The views I express in this testimony are my own and should not be construed as representing the official position of the Foundation for American Innovation or any other organization with whom I have a current or prior affiliation.</p><p>In July 2019, during his first term, President Trump successfully persuaded the Dutch government to block sales of extreme-ultraviolet lithography machines to Chinese semiconductor companies. These machines are the result of a tremendous range of scientific breakthroughs and technological miracles&#8212;employing lasers, for example, whose precision is akin to hitting a hole-in-one on the Moon from Earth, to paint sub-microscopic electrical circuits with light onto razor-thin wafers made of processed sand.</p><p>At the time, these lithography machines&#8212;made exclusively by the Dutch company ASML&#8212;were not widely known outside of the semiconductor industry and its close observers. But President Trump&#8217;s decision would prove wise and forward-looking: within a few short years, ASML and their lithography technology became known the world over as key inputs in the manufacturing of advanced semiconductors. Today, it is widely believed that these controls represent the single most important technological chokepoint preventing China from manufacturing leading-edge semiconductors.</p><p>Of course, since 2022 the United States has also imposed export controls on the advanced semiconductors most relevant to AI. The wisdom and prudence of these controls has been the subject of vigorous debate in recent months, but that is not my focus today. Instead, I want to focus on the issue President Trump identified in 2019: semiconductor manufacturing equipment. And the reality I wish to convey to you is stark: there are large gaps in current semiconductor manufacturing export controls today, and these gaps have meaningfully enabled China&#8217;s rapid progress in advanced semiconductor manufacturing in recent years.</p><p>We have set on the path of denying China access to the most sophisticated machines in the world&#8212;advanced AI compute. But we have failed to deny access to something perhaps even more important: the machines that make the machines. It should be no surprise, then, that China has managed to significantly advance its semiconductor manufacturing industry considerably even in light of our export controls.</p><p>I want to emphasize that the U.S. has constructed a sound export control regime for semiconductor manufacturing equipment made by domestic companies. 
Our major shortcoming, instead, is that we have struggled to harmonize those controls with allies whose local companies compete with our own, and who sometimes hold near-monopolies over the production of certain equipment.</p><p>This lack of international harmonization can result in insufficient controls on equipment that is short of the cutting edge, but still advanced. Perhaps the best example of this is deep ultraviolet immersion lithography machines, the predecessor technology to extreme ultraviolet lithography. Like EUV machines, these are made almost exclusively by the Dutch company ASML. These machines can be used to manufacture both legacy chips, such as those at the 28-nanometer node, and near-cutting-edge chips that fall within U.S. export controls, such as those at the 7-nanometer node.</p><p>Our lack of international harmonization on export controls creates additional problems as well. In many cases, for example, U.S. firms are tightly export controlled even when their foreign competitors are not. American companies like Applied Materials, Lam Research, and KLA make complex tools for etching, deposition, cleaning, and metrology&#8212;all important parts of the semiconductor manufacturing process. Export of these tools to Chinese firms is largely forbidden by U.S. export controls. Yet the export of tools that are functionally the same from companies like Tokyo Electron&#8212;a Japanese company&#8212;is permitted.</p><p>Unsurprisingly, the result is that the China sales of foreign competitors have jumped since the imposition of U.S. export controls, suggesting that American firms are being denied revenue while critical technology flows into China nonetheless. This is the worst of both worlds: our firms bear the cost of the policy, but the policy itself fails because our allies do not coordinate with us.</p><p>Two policy tools are at our disposal. The first is diplomacy. Diplomatic efforts have been ongoing since the 2022 imposition of export controls during the Biden Administration, and continue under the Trump Administration. If these efforts do not succeed, it is essential that policymakers employ the second tool: the Foreign Direct Product Rule. This allows the U.S. to impose export controls on foreign-made goods if they contain or are directly made with U.S. technology. Both options should be on the table, with an aim toward resolving as many of these gaps as feasible in the near term.</p><p>Thank you.</p>]]></content:encoded></item><item><title><![CDATA[The Bitter Lessons]]></title><description><![CDATA[Thoughts on US-China Competition]]></description><link>https://www.hyperdimensional.co/p/the-bitter-lessons</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/the-bitter-lessons</guid><dc:creator><![CDATA[Dean W. 
Ball]]></dc:creator><pubDate>Fri, 14 Nov 2025 13:45:33 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kZjN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f70956b-24b6-432b-81c4-dcfa4095ead7_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hyperdimensional.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hyperdimensional.co/subscribe?"><span>Subscribe now</span></a></p><p>The United States and China are often said to be in a &#8220;race&#8221; with one another with respect to artificial intelligence. In a sense this is true, but the metaphor manages to miss almost all that is interesting about US-China dynamics in emerging technology. Today I&#8217;d like to offer some brief thoughts about how I see this &#8220;race&#8221; and where it might be headed.</p><p>All metaphors are lossy approximations of reality. But &#8220;race&#8221; is an especially inapt metaphor for this context. A race is a competition with clear boundaries and a clearly defined finish line. There are no such luxuries to be found here. Beyond the rhyme, &#8220;the Space Race&#8221; made intuitive sense because the objective was clear: landing humans on the Moon.</p><p>Stating that there is an &#8220;AI race&#8221; underway invites the obvious follow-up question: the AI race to where? And no one&#8212;not you, not me, not OpenAI, not the U.S. government, and not the Chinese government&#8212;knows where we are headed. </p><p>The U.S. and China are more like ships on the open seas, voyaging toward some unknown, only dimly imagined destination. Perhaps we think it is India we will find, though more likely it is a new continent altogether. We do not <em>know </em>that we are headed in the right direction, though neither are we stabbing entirely in the dark. And we both have the intuition that it is probably beneficial to &#8220;arrive&#8221; (my metaphor is breaking down) before the other. That intuition is likely correct. It would be more accurate to describe this state of affairs as an &#8220;unbounded, multi-dimensional, technological, scientific, and economic competition.&#8221;</p><p>Now, you might say, &#8220;didn&#8217;t you work on a national AI strategy called &#8216;<a href="https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf">Winning The Race: America&#8217;s AI Action Plan</a>&#8217;?&#8221; And you would be right to point out this tension. The reality is I don&#8217;t <em>love </em>the title. We settled on it for many reasons, and one of the best ones is that &#8220;Winning The Unbounded, Multi-Dimensional, Technological, Scientific, and Economic Competition: America&#8217;s AI Action Plan&#8221; does not roll off the tongue, nor does it fit very well on a title page. But rest assured: I believe, with high confidence, that the relevant figures within the Trump Administration understand these subtleties well. </p><p>Rhetorical affordances aside, the other major problem with the &#8220;race&#8221; metaphor is that it implies that the U.S. and China understand what we are racing toward in the same way. In reality, however, I believe our countries conceptualize this competition in profoundly different ways.</p><p>The U.S. 
economy is increasingly a highly leveraged bet on deep learning. This has been true for a couple years now, though it is more explicit and extreme today than it was two years ago. Most of this is because of decisions made by private actors (AI companies, hyperscalers, banks and other large sources of capital, etc.), but on the margin the policy and posture of the Trump Administration has heightened this dynamic as well.</p><p>Of all the bets to stake one&#8217;s economy on, deep learning is a very good one. Sam Altman&#8217;s mantra is true: <a href="https://ia.samaltman.com">deep learning works</a>. It is, at the very least, the most important macroinvention of our lifetime so far. There are not many good reasons to expect deep learning to stop working, though of course there are many questions regarding timelines, economic implications, risks, whether &#8220;full automation of the economy&#8221; is really feasible, and much else.</p><p>Another way of putting this is that America is &#8220;bitter-lesson pilled.&#8221; Our strategy rests on the presumption that advanced AI is both possible in the near-term and hugely consequential, and that compute is the high-order bit to advancing AI (as opposed to data, scaffolding, clever architectures, and the like). This is not so much the government&#8217;s strategy (though at least in the Biden Administration it is true that the senior AI policy planners mostly believed this) as it is the strategy of the leading AI companies and hyperscalers. As such we have pivoted with an alacrity that has been lacking recently in the West.</p><p>We are, as it were, &#8220;all in&#8221; on deep learning and the bitter lesson. This will basically remain true until there is a major shift in vibes. </p><p>China, on the other hand, does not strike me as especially &#8220;AGI-pilled,&#8221; and certainly not &#8220;bitter-lesson-pilled&#8221;&#8212;at least not yet. There are undoubtedly some elements of their government and AI firms that prefer the strategy I&#8217;ve laid out above, but their thinking has not won the day. Instead China&#8217;s AI strategy is based, it seems to me, on a few pillars:</p><ol><li><p>Embodied AI&#8212;robotics, advanced sensors, drones, self-driving cars, and a Cambrian explosion of other AI-enabled hardware;</p></li><li><p>Fast-following in AI, especially with open-source models that blunt the impact of U.S. export controls (because inference can be done by anyone in the world if the models are desirable) while eroding the profit margins of U.S. AI firms;</p></li><li><p>Adoption of AI in the here and now&#8212;building scaffolding, data pipelines, and other tweaks to make models work in businesses, and especially factories.</p></li></ol><p>This strategy is sensible. And it is worth noting that (1) and (2) are complementary. Highly capable open-weight models, designed to be run cheaply under compute constraints, can give a &#8220;brain&#8221; to a wide range of devices whose manufacturers may not themselves be equipped to train a frontier model. This is a classic example of an economic benefit unique to open-source and open-weight models, and part of the reason I have been supportive of open source since the earliest days of this newsletter. </p><p>I find it intriguing that both countries seem to have converged on the strategies that best suit their respective strengths. 
Advanced AI is, at its core, software-as-a-service delivered through high-end semiconductors, cloud computing platforms, and charismatic user interfaces, and enabled by clever financial and legal engineering. Every one of those things is America&#8217;s civilizational bread and butter. Embodied AI is, at its core, enabled by mass manufacturing excellence, thick trade networks, and other characteristics that fundamentally tilt to China&#8217;s advantage.</p><p>It is likely that we both have things to learn from one another. China&#8217;s focus on adoption is sound&#8212;though one can easily waste time engineering the scaffolding required to make current systems work in industrial applications, only to find that the next generation of models works without any scaffolding. Indeed, one of the core themes of the U.S. AI Action Plan is adoption rather than pure development (though it does not <em>discount </em>the importance of advancing the frontier).</p><p>And of course there is manufacturing. China&#8217;s industrial base puts them at a serious advantage when it comes to the development of robotics, self-driving cars, and the rest. It may well be the case that American robots will be smarter and safer, because we train superior neural networks, but China&#8217;s robots will be <a href="https://x.com/tphuang/status/1962882603683303664?s=20">stronger, more flexible, and more durable</a> because they manufacture superior actuators, batteries, and other components. Even today this is the case. </p><p>It is worth saying explicitly: America is probably behind in many important areas of robotics, and it seems very possible that the area where we hold an advantage&#8212;software&#8212;will soon also become an area where China bests us. This is a very serious problem. To the extent the Trump Administration is channeling investment from trade deals into strategic industries, robotics should be toward the top of the list of priorities. Data centers, which are already amply funded, should probably be toward the bottom of the list. </p><p>Similarly, by most accounts American self-driving cars are better drivers but worse cars to be driven in, because China&#8212;with its vast complex of automobile manufacturing firms and expertise&#8212;has learned how to make <a href="https://www.wsj.com/business/autos/china-robotaxi-self-driving-waymo-254ce0a1?gaa_at=eafs&amp;gaa_n=AWEtsqeWc59wUx3m-_R9Z_eMFxD1-4eL-5_51GnKoX1fdN-0yEnldyk8hzysRRVOZ8U%3D&amp;gaa_ts=69163fba&amp;gaa_sig=tslcfjvoPPOkgqFEC8xkFVHIIQL_0MUd5DdvdE9R8VyxQ16r2Fwrd_RO3GLBzh2b5f6guJlSl3bp-HYXmIwg-A%3D%3D">Mercedes-level luxury at Kia prices</a>. There is no immediate fix to this problem. As I have written before, China enjoys these advantages because they have mastered and achieved immense economies of scale in the unsexy basics of manufacturing. These basics were once useful outsourcing targets for American firms, until China built upon that foundation to begin manufacturing more advanced, and more strategic, goods. Now these advantages are threatening to America. The only solution is for the U.S. to rebuild the manufacturing prowess it once enjoyed. This is underway, but it will take many years.</p><p>Fundamentally, however, I remain bullish on the U.S. strategy. Advanced AI is the most important technology of our era. Our companies enjoy the lead in models, chips, and cloud computing infrastructure. 
But even more importantly, American firms are historically far better than Chinese firms at complex software systems, financial engineering, and other technical and business mechanics required to market what will amount to a new kind of operating system to the world.</p><p>Chinese fast-following on AI model benchmarks is probably overrated. So too is open-weight model distribution as a source of geopolitical strength. Sticky consumer preferences, network effects, platform and ecosystem advantages, form factor, user interface, ergonomics, and the like are probably underrated. These are the &#8220;operating system-like&#8221; factors where the U.S. is currently thriving. </p><p>The U.S. and China may well end up racing toward the same thing&#8212;&#8220;AGI,&#8221; &#8220;advanced AI,&#8221; whatever you prefer to call it. That would require China to become &#8220;AGI-pilled,&#8221; or at least sufficiently threatened by frontier AI that they realize its strategic significance in a way that they currently do not appear to. If that happens, the world will be a much more dangerous place than it is today. It is therefore probably unhelpful for <a href="https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf">prominent</a> <a href="https://www.darioamodei.com/essay/machines-of-loving-grace">Americans</a> to say things like &#8220;our plan is to build AGI to gain a decisive military and economic advantage over the rest of the world and use that advantage to create a new world order permanently led by the U.S.&#8221; Understandably, this tends to scare people, and it is also, by the way, a plan riddled with contestable presumptions (all due respect to Dario and Leopold).</p><p>The sad reality is that the current strategies of China and the U.S. are complementary. There was a time when it was possible to believe we could each pursue our strengths, enrich our respective economies, and grow together. Alas, such harmony now appears impossible. We are locked into a structural conflict, and tempting as it may be to look away, we must accept this bitter lesson, too.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.hyperdimensional.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Hyperdimensional is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Don't Overthink "The AI Stack"]]></title><description><![CDATA[Reflections on the Export Promotion Executive Order]]></description><link>https://www.hyperdimensional.co/p/dont-overthink-the-ai-stack</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/dont-overthink-the-ai-stack</guid><dc:creator><![CDATA[Dean W. 
Ball]]></dc:creator><pubDate>Fri, 07 Nov 2025 13:53:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kZjN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f70956b-24b6-432b-81c4-dcfa4095ead7_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hyperdimensional.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hyperdimensional.co/subscribe?"><span>Subscribe now</span></a></p><h4>Introduction</h4><p>When President Trump introduced the AI Action Plan, he also signed three complementary executive orders. Of those three, by far the most complex is Executive Order 14320, &#8220;<a href="https://www.federalregister.gov/documents/2025/07/28/2025-14218/promoting-the-export-of-the-american-ai-technology-stack">Promoting the Export of the American AI Technology Stack</a>.&#8221; The order tasks the Commerce and State Departments, the Office of Science and Technology Policy, development finance agencies, and others with devising and running a program to export the &#8220;full stack&#8221; of American AI.</p><p>There has been a great deal of confusion about what &#8220;the stack&#8221; means and how to implement this E.O. more broadly, both within and outside government. The Commerce Department published a <a href="https://s3.documentcloud.org/documents/26195483/ai-exports-program-rfi.pdf">request for information</a> late last month that asked the public (primarily relevant private sector firms), among other things, whether anything in the order&#8217;s definition of the AI tech stack should be &#8220;clarified or expanded upon.&#8221; Understandably, the private sector was <a href="https://www.axios.com/2025/10/24/trump-ai-exports-program-stumbles">less than thrilled</a> with being asked these questions, since they were looking to the Commerce Department to answer them in the first place.</p><p>I was the primary staff author of this executive order. Typically, when there is widespread confusion about a written product, the principal author has missed the mark in at least some important ways. I therefore feel that the burden is on me to attempt to clarify matters. At the same time, I am now a private citizen, and my views are purely my own. I do not, <em>in any way</em>, speak for the U.S. government about this or any other issue. But I <em>do </em>have strong opinions, and a perspective that is, literally, unique.</p><p>This post is my attempt to clarify what E.O. 14320 was trying to do <em>from my perspective</em> and what the &#8220;tech stack&#8221; really means. I also reflect on a key mistake I believe we made in writing the E.O.&#8212;and an easy fix to make implementation simpler.</p><p>My key message, however, is simple: do not overthink &#8220;the stack.&#8221;</p><h4>The Purpose of the Export Promotion E.O. </h4><p>The primary point of the AI exports program is to facilitate the construction of AI-focused data centers in other countries. American companies, from hyperscalers like AWS, Microsoft, Oracle, and Google to AI-specific &#8220;neoclouds&#8221; such as CoreWeave, are already leading AI data center construction worldwide. So, you might reasonably ask, what is the utility of the U.S. 
government getting involved?</p><p>There are a few reasons, presented here in no particular order:</p><ol><li><p>American companies currently are the global leaders in chip design, cloud computing, AI models, and AI applications. Through TSMC, SK Hynix, ASML, and many other U.S.- and non-U.S.-based firms, America and its allies dominate in semiconductor manufacturing. This is unlikely to be true forever. We should press that advantage to its fullest while we have it.</p></li><li><p>The global market share of U.S. AI services is going to be an important metric for the health of our AI industry overall (though far from the only one). Advanced AI systems are likely to be something like operating systems, with network effects and ecosystem advantages that compound nonlinearly. Many countries, however, are understandably concerned about &#8220;sovereign AI.&#8221; This term means different things to different people, but one commonality among foreign governments is that they do not want to rely on data centers outside their borders for AI workloads they deem critical (public services, national security, etc.). It&#8217;s unlikely the U.S. would tolerate this for its own public services, and we should not expect foreign governments to do so either. We should instead try to meet them halfway. But as a matter of pure economics, it does not make tremendous sense to build that many small data centers (you want economies of scale); the activation of development finance authorities (loans to hyperscalers) can help improve the economics. </p></li><li><p>There are countries of geopolitical significance where AI infrastructure might not get built through market processes alone&#8212;or at least, not within the time-limited window described above. In some of these countries, development finance subsidies are de facto table stakes for getting in the door at all. In others, development finance is not strictly necessary, but can accelerate the timeline to construction.</p></li><li><p>It is quite possible that we will be under-provisioned on advanced semiconductor production by the late 2020s. It seems wise to send a demand signal to TSMC (and one day, I hope, competitor leading-edge foundries) that US-based chip production must continue growing.</p></li><li><p>As AI grows more powerful, there is plausible utility to data-center-based governance. Say, for example, that a Mexican cartel began using AI at scale for some nefarious or illicit activity. We might find it desirable&#8212;and really I mean almost everyone in the world, not just the U.S. government&#8212;to deny that cartel access to computing resources worldwide. Such a policy lever is more feasible to implement if a large fraction of AI data centers are either operated by American firms or by foreign firms with cybersecurity standards established by the U.S. government. Note, however, that the E.O. did not create this policy lever; it is merely a plausible benefit down the road. The wisdom of exercising this hypothetical governance mechanism would be highly fact-dependent, and ultimately a decision for future presidents and their advisors to make with considerable caution.</p></li></ol><p>That is ample strategic motivation to write an E.O. aimed at building more data centers abroad. </p><p>But there is one problem: simply building data centers does not, on its own, satisfy all of the motivations I&#8217;ve described. 
We could end up constructing data centers abroad&#8212;and even using taxpayer dollars to subsidize that construction through development finance loans&#8212;only to find that the infrastructure is being used to run models from China or elsewhere. That outcome would mean higher sales of American compute, but would not be a significant strategic victory for the United States. If anything, it would be a strategic loss.</p><p>This is where the concept of &#8220;the stack&#8221; comes into play. Here is how the E.O. defines this idea:</p><blockquote><p>(A) AI-optimized computer hardware (e.g., chips, servers, and accelerators), data center storage, cloud services, and networking, as well as a description of whether and to what extent such items are manufactured in the United States;</p><p>(B) data pipelines and labeling systems;</p><p>(C) AI models and systems;</p><p>(D) measures to ensure the security and cybersecurity of AI models and systems; and</p><p>(E) AI applications for specific use cases (e.g., software engineering, education, healthcare, agriculture, or transportation);</p></blockquote><p>For some firms this is straightforward. Take OpenAI.</p><p>Earlier this year the company launched &#8220;Stargate,&#8221; a brand name for their AI infrastructure program. Their 1.2-gigawatt data center in Abilene, Texas, already partially online and set to be completed next summer, is being built by a company called Crusoe for the exclusive use of OpenAI. OpenAI already has fine-tuning and data labeling systems for both internal use and use with large customers (like governments). Of course, they make AI models and systems. And they have various infrastructure and procedures in place to ensure the cybersecurity of those models and systems. Finally, they make applications: Deep Research, Agent, Codex, and, I am sure, many more to come, are all examples of &#8220;applications&#8221; made by OpenAI itself. Other startups also build applications on top of OpenAI&#8217;s platform (one example is Harvey, a company that aims to provide AI services to white-shoe law firms).</p><p>With their <a href="https://openai.com/global-affairs/openai-for-countries/">OpenAI for Countries</a> initiative, OpenAI even anticipated our E.O. before anyone outside the White House knew it was in the works. This is close to <em>exactly </em>what I had in mind while my colleagues and I were formulating the early versions of the export promotion strategy. In fact, it is in some ways better than what we could have feasibly done within government: the initiative includes an effort to partner with host countries to develop a fund for local startups building on top of OpenAI&#8217;s models. This is precisely the sort of ecosystemic advantage for which we should aim.</p><p>The E.O. asks industry to propose &#8220;consortia&#8221; of firms that could, together, constitute one instance of a &#8220;full stack&#8221; AI offering, with the notion that multiple consortia would be accepted into the final program (meaning they&#8217;d be eligible for development finance subsidies).</p><p>OpenAI for Countries involves a kind of &#8220;consortium,&#8221; even if they do not call it that. 
In heavily stylized terms: Nvidia supplies the chips, Crusoe builds the data center, Oracle operates it, OpenAI uses it and supplies the software (the actual supply chain is of course vastly more complex).</p><p>Neither Google nor Anthropic has articulated a similar &#8220;for countries&#8221; initiative (to my knowledge), but both are well-positioned to furnish comparable offerings for export (in Anthropic&#8217;s case, in partnership with Amazon Web Services).</p><p>But it is with the &#8220;consortia&#8221; concept that I believe the drafters of this E.O. (ahem) went astray. The idea was meant to accomplish two things. First, the consortia were intended to enable the export program to present a simple &#8220;menu&#8221; of full-stack export packages for foreign governments to select from; for example, &#8220;the Google option,&#8221; &#8220;the OpenAI option,&#8221; &#8220;the model-vendor-neutral AWS option,&#8221; and the like. Second, the purpose was to make clear that we understood that no one company (save perhaps Google DeepMind) could independently offer a full-stack AI export package. Rather than clarifying, though, I think we ended up confusing.</p><p>Consider a company like Amazon. They are deeply partnered with Anthropic, to the point of co-designing their AI training and inference chips with the frontier lab. But they also offer a wide range of other models through their cloud computing platform, from their own to those of competing frontier labs. Viewed one way, Amazon/Anthropic is a prototypical consortium; viewed another way, Amazon as a model-vendor-neutral cloud provider is equally viable.</p><p>Why make Amazon and Anthropic pick which one of these they want to be in the program? Why not let each company participate separately in the program? If a country is particularly enthused about Anthropic models (or if Anthropic is particularly enthused about serving a specific market), why not let them work that out with Amazon, the host country, and the relevant agencies in the U.S. government?</p><p>Rather than relying on industry to form itself into &#8220;consortia,&#8221; then, the fix is simple: switch the E.O.&#8217;s emphasis to individual firms. The request for proposals could easily be re-oriented in this way. Rather than asking consortia to submit proposals, you would simply ask individual firms to submit proposals that demonstrate a credible ability to satisfy all components of the full-stack definition laid out above. Selected companies would all be offered to foreign governments as part of the program, and all would be equally eligible for development finance loans, grants, and other perks.</p><p>The key benefit to this, beyond the obvious simplification for the U.S. government officials implementing the E.O., is that essentially every company that could plausibly qualify for this program <em>already does this</em> in their efforts to build data centers abroad. Thus, rather than forcing firms to collaborate in unfamiliar ways, this relatively simple tweak would allow the E.O. to piggyback off existing corporate dealmaking efforts.</p><h4>Conclusion</h4><p>The Export Promotion E.O. is one of the more challenging parts of the AI Action Plan to implement. It puts the U.S. government into the posture of the global salesman&#8212;a new position for many American policymakers. It is perfectly understandable to err a bit in architecting such a novel policy. 
And in this case, I take a healthy dose of personal responsibility for the error.</p><p>There is nothing wrong with erring, so long as you can recognize the mistake, understand it, and correct it with alacrity. This is what I believe the U.S. government officials now implementing this E.O. should do. If they can, there is ample light at the end of the tunnel. Implemented well, this E.O. can help secure an enduring strategic victory for the American people and the world alike: bringing the benefits of American AI to the rest of humanity.</p>]]></content:encoded></item><item><title><![CDATA[Invenit et Fecit]]></title><description><![CDATA[On wristwatch ideas and fructose concepts]]></description><link>https://www.hyperdimensional.co/p/invenit-et-fecit</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/invenit-et-fecit</guid><dc:creator><![CDATA[Dean W. Ball]]></dc:creator><pubDate>Tue, 28 Oct 2025 14:53:45 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kZjN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f70956b-24b6-432b-81c4-dcfa4095ead7_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hyperdimensional.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hyperdimensional.co/subscribe?"><span>Subscribe now</span></a></p><p>On the front and back of every timepiece manufactured by the independent watchmaker Fran&#231;ois-Paul Journe, there is inscribed a Latin message: Invenit et Fecit.</p><div class="image-gallery-embed" data-attrs="{&quot;gallery&quot;:{&quot;images&quot;:[{&quot;type&quot;:&quot;image/webp&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3faa254f-1cea-4915-bf47-523572b02c79_1500x1500.webp&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d7d7881a-060c-4743-ac48-2ca0c57f3b1f_5780x3853.jpeg&quot;}],&quot;caption&quot;:&quot;(Left Image Source: European Watch Company; Right Image Source: Hodinkee)&quot;,&quot;alt&quot;:&quot;&quot;,&quot;staticGalleryImage&quot;:{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fc368f87-4b37-4be0-ba7d-7fd0306998f1_1456x720.png&quot;}},&quot;isEditorNode&quot;:true}"></div><p>Years ago, it became a mantra for me. Invenit et fecit. Invented and built. Discovered and done.</p><p>In Classical Latin, you pronounce every letter&#8212;in-way-nit et (like bet) fekit. No soft syllables, no graceful elision. I&#8217;ve always liked that. For the work of invention and building is tough&#8212;fitting, then, to use phraseology that makes you eat your syllabic vegetables.</p><p>Throughout history I am sure many people moaned and whined that time could not be kept more precisely. But I don&#8217;t fondly gaze at sundials that say &#8220;I wish timepieces were more precise,&#8221; and neither does anyone else. I gaze instead at the wristwatch, invented and built.</p><p>We remember the work of they who invented and built. We do not always remember their names, but we never escape their legacy. For we climb among scaffolding they first laid down. We wander in the landscape they charted, labeled, reaped, and sowed. 
Their greatness and their shortcomings constitute our collective inheritance.</p><p>I picture myself on an airplane, looking down at my watch, as a string quartet streams from satellites in the heavens into the two wireless computers in my ears called AirPods.</p><p>I don&#8217;t just think of the obvious wonders. I think about the quiet ones, too. The magic that gets a string quartet performance from a concert hall to a data center to a satellite to my phone to the AirPods in my ear. The thousands of technical standards that keep the plane in the sky&#8212;standards for turbine blades, for steel, for joysticks, for radar, for the <a href="https://www.sae.org/standards/as22759-wire-electrical-fluoropolymer-insulated-copper-copper-alloy#error=login_required&amp;state=3cbf8b36-3280-4339-b3bd-3e02d823efe6&amp;iss=https%3A%2F%2Fidentity.sae.org%2Fauth%2Frealms%2FSAE">chemical content of the electrical wire sheathing</a>.</p><p>And of course, the loud wonders, too. Rocket ships, molten ore cast to staggering precision, flying machines roaring over the land&#8212;the stuff of dreams, of cowboy ambition, crystallized for you and me in mundane moments like my airplane vignette.</p><p>All of it, invented and built, discovered and done.</p><p>Not all invention and building need happen in the form of shapen metal or poured concrete or processed silicon. It can also take place in the world of ideas and words. For meaning, too, is invented and built.</p><p>Yet it is all too easy, in this admittedly softer world, to stray into whining and fooling oneself into thinking that is work.</p><p>Recently there was <a href="https://superintelligence-statement.org">proposed</a> a &#8220;ban on artificial superintelligence until it is deemed safe by scientists and the public,&#8221; signed by more than one thousand celebrities and dignitaries. How we would define &#8220;superintelligence,&#8221; and how we would &#8220;ban&#8221; it, and how we would determine which scientists and which members of the public get to decide the thing we&#8217;ve banned is &#8220;safe,&#8221; and by what criteria they would do so, are all left undetermined.</p><p>The supporters cheered themselves on for their supposed achievement of &#8220;consensus.&#8221; But this consensus is not really about anything. It is like exiting Plato&#8217;s Cave, shouting &#8220;murder is bad!,&#8221; and mistaking that proclamation for the hard work of building a civilization with laws, law enforcement, courts, and jails.</p><p>There is no substance in this prose, no content to this consensus. I do not cheer the publication of these candy cane words, nor do I treat these high-fructose-corn-concepts as serious figures on the field of ideas.</p><p>Nothing in my critique is about &#8220;AI safety&#8221; versus &#8220;AI acceleration.&#8221; I defended AI safety policy just a few days ago in these very pages, and I have done so many times before. Instead my critique is about blubber versus substance, soft sentiment versus serious plans. Ideas like &#8220;ban superintelligence&#8221; <em>are policy proposals</em> based on vague sentiment, just like &#8220;defund the police&#8221; was a policy proposal based on vague sentiment. The risks for social harm are similarly large. </p><p>There are people who worked on this statement who believe they did a hard day&#8217;s work. I suppose that once there were people whose arms were tired after they scratched &#8220;we need better timepieces!&#8221; onto the sundial, though I cannot be sure. 
I do not know their names, and nobody bothered to preserve their work.</p><p>There is nothing wrong with squeaky whining and vague moaning. I do these things, and so do you. They are a healthy and normal part of the human condition.</p><p>It&#8217;s just that we shouldn&#8217;t fool ourselves into glorifying such behavior. We should not be happy having traveled such a short distance. There are no legacies to be laid down on this soft terrain, and no firm foundation for our monuments is to be found here. We should not pat ourselves on the back for walking only this far. We have much further to go.</p><p>There are real ideas to be crystallized&#8212;concrete policy and tools to be elucidated in the here and now, and far-flung concepts of the future to be discovered and built.</p><p>I encourage everyone reading this, but especially the young people, to go do that work. Do not be tempted by the easy and shallow &#8220;victories&#8221; of they who draft vague sentences. Do not be swayed into binging on candy corn.</p><p>If you want to write philosophy, go do it. If you want to forge tools, go do that. And if you want to write policy, then go write some actual policy, with tradeoffs, definitions, and all the rest.</p><p>But do not scribble your complaints on the sundial and call it a day.</p><p>At the very least, do not expect my sympathy or empathy when you do it. We have had too many scribblers, too much whining, too much cognitive corn syrup. We have had so much that I fear we are pre-diabetic, now, as a civilization.</p><p>No, my admiration will be always for the builders and discoverers: The people whose legacy surrounds us, the fruits of whose labor we cannot help but notice, even if we forget many of the laborers&#8217; names.</p><p>I will reserve my praise for the tinkerers, the rotators of shapes and meaning, the benders of steel, the keepers of time, crystallizers of dreams.</p><p>Everything I write is an ode to them. And whatever tasks my hands find to do, <a href="https://www.biblegateway.com/passage/?search=Ecclesiastes%209%3A10&amp;version=NRSVA">I will do with my might</a>. For my work will always be in the honor of they who discover and do, of they who invent and build.</p><p><em>Dedicated to James Proud and the team at <a href="https://substrate.com">Substrate</a>. Congratulations on the launch. </em></p>]]></content:encoded></item><item><title><![CDATA[Turning a Blind Eye]]></title><description><![CDATA[On the Other California AI Bills]]></description><link>https://www.hyperdimensional.co/p/turning-a-blind-eye</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/turning-a-blind-eye</guid><dc:creator><![CDATA[Dean W. Ball]]></dc:creator><pubDate>Thu, 23 Oct 2025 15:53:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kZjN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f70956b-24b6-432b-81c4-dcfa4095ead7_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hyperdimensional.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hyperdimensional.co/subscribe?"><span>Subscribe now</span></a></p><h4>Introduction</h4><p>AI policy seems to be negatively polarizing along &#8220;accelerationist&#8221; versus &#8220;safetyist&#8221; lines. 
I have written before that this is a mistake. Most recently, for example, I have <a href="https://www.hyperdimensional.co/p/the-future-and-its-friends">suggested</a> that this kind of crass negative polarization renders productive political compromise impossible.</p><p>But there is something more practical: negative polarization like this causes commentators to focus only on a subset of policy initiatives or actions associated with specific, salient groups. The safetyists obsess about the coming <a href="https://www.businessinsider.com/silicon-valley-andreessen-next-investment-100-million-ai-super-pac-2025-8">accelerationist super PACs</a>, for instance, while the accelerationists fret about <a href="https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB53">SB 53</a>, the really-not-very-harmful-and-actually-in-many-ways-good frontier AI transparency bill <a href="https://www.gov.ca.gov/2025/09/29/governor-newsom-signs-sb-53-advancing-californias-world-leading-artificial-intelligence-industry/">recently signed</a> by California Governor Gavin Newsom.</p><p>Meanwhile, the protectors of the status quo&#8212;almost always the real drivers of politics&#8212;grind on. As a result, those most interested in and knowledgeable about AI policy have a tendency to miss the full picture of what is happening in our own field.</p><p>California is a case in point. Undoubtedly, California is the principal battleground in American AI policy today, attracting the attention of doyens from both the accelerationist and safetyist camps. If you listened to these spokesmen, you might assume that the only bill worth mentioning in California this year was SB 53.</p><p>(A note: it is true that I, too, have only written about SB 53 in recent months. This is principally because I was working for the federal government until mid-August, and thus could not comment on state legislative matters.)</p><p>And yet Governor Newsom signed eight AI-related bills this session. Arguably, SB 53 was among the lightest-touch of these. Some of the other bills in this year&#8217;s cohort are far more wide-reaching and dangerous. Indeed, there is AI-related legislation signed by Governor Newsom in the past few weeks that is among the worst I have ever seen in my ten-year career as an observer of state policy.</p><p>To show you what I mean, let&#8217;s take a look at California&#8217;s lesser-discussed escapades into AI regulation.</p><h4>AB 325 and the Regulation of Pricing</h4><p>I will start with an area of AI law that has always worried me: the regulation of &#8220;algorithmic pricing.&#8221; Prices are a key way through which we convey information in our society; what the bloodstream is to the human body, prices are to the economy. Not all laws that implicate prices are unwise, but <em>any </em>proposed regulation of the price system should be examined with the utmost scrutiny.</p><p>That is why I was surprised that very few libertarians found it worth their time to discuss <a href="https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260AB325&amp;utm_source=Artificial+Intelligence&amp;utm_campaign=e0eaf432b1-AI_VOL_77&amp;utm_medium=email&amp;utm_term=0_a44c86963e-e0eaf432b1-1246523122">AB 325</a>, introduced by Assembly Majority Leader Cecilia Aguiar-Curry. 
The bill focuses on the notion of a &#8220;common pricing algorithm,&#8221; which is defined as:</p><blockquote><p>&#8230; any methodology, including a computer, software, or other technology, used by two or more persons, that uses competitor data to recommend, align, stabilize, set, or otherwise influence a price or commercial term.</p></blockquote><p>Commercial term, in turn, is defined as:</p><blockquote><p>any of the following:</p><p>(A) Level of service.</p><p>(B) Availability.</p><p>(C) Output, including quantities of products produced or distributed or the amount or level of service provided.</p></blockquote><p>There is no carveout for publicly available data in AB 325, so if you use your competitor&#8217;s prices to help set your own prices, you are covered by these definitions, so long as you and at least one other person (a co-worker or business partner, say) used &#8220;a computer, software, or other technology&#8221; to do it. If you own a business with two or more employees and you write some of your competitors&#8217; prices down in a spreadsheet, you are covered.</p><p>The operative clause of AB 325 is a little confusing, but bear with me:</p><blockquote><p>It shall be unlawful for a person to use or distribute a common pricing algorithm if the person coerces another person to set or adopt a recommended price or commercial term recommended by the common pricing algorithm for the same or similar products or services in the jurisdiction of this state.</p></blockquote><p>Say that you run an independent bed and breakfast in California, and that you use a low-cost algorithmic tool that incorporates the nightly rates of the nearby chain hotels to set your own room rates. Have you &#8220;used&#8221; a common pricing algorithm? Has the tool&#8217;s vendor &#8220;coerced&#8221; you into &#8220;adopting&#8221; its recommendations? Because AB 325 defines neither &#8220;use&#8221; nor &#8220;coerce&#8221; nor &#8220;set&#8221; nor &#8220;adopt,&#8221; it is impossible to say, and it is entirely unclear whether AB 325 accidentally regulates effectively all market transactions.</p><p>On top of this, AB 325 imposes potential <em>criminal </em>penalties (up to three years of imprisonment) in addition to substantial civil penalties (up to $6 million &#8220;per violation,&#8221; a sixfold increase over the original California antitrust statute this law amends).</p><p>Do I expect this law to ever be enforced evenly? Of course not. It reminds me of the New York statute used to prosecute Donald Trump for several hundred million dollars, or the federal statute the Trump Administration is using today to go after New York Attorney General Letitia James (who prosecuted Trump) for &#8220;mortgage fraud.&#8221;</p><p>These overbroad statutes are ultimately just weapons, since everyone is violating them all the time. Still, rarely have I seen an American law more hostile to our country&#8217;s economy and way of life.</p>
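<p>To see how little it takes to meet these definitions, consider a deliberately mundane sketch. What follows is entirely hypothetical (the statute names no code, and the rates below are made-up numbers), but it shows that a few lines of Python operating on publicly posted competitor prices plausibly constitute a &#8220;common pricing algorithm&#8221; the moment two or more people use them:</p><pre><code>
# Hypothetical sketch: a trivial pricing script of the kind plausibly swept
# in by AB 325's definition of a "common pricing algorithm." The competitor
# rates are illustrative, not real data.

def recommend_rate(competitor_rates, discount=0.05):
    """Recommend a nightly rate slightly below the average competitor rate."""
    average = sum(competitor_rates) / len(competitor_rates)
    return round(average * (1 - discount), 2)

# Publicly posted nightly rates of nearby chain hotels (made up).
nearby_chain_rates = [189.00, 204.00, 175.00]

print(recommend_rate(nearby_chain_rates))  # 179.87
</code></pre><p>Nothing in this sketch is exotic; it is the sort of tool countless small businesses already use, which is precisely what makes the definition so alarming.</p>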
<h4>AB 853 and Synthetic Content Regulation</h4><p>Last year I <a href="https://www.hyperdimensional.co/p/californias-other-big-ai-bill">wrote about </a>AB 3211, a bill intended to mandate content provenance and watermarking standards. Ultimately, the bill wasn&#8217;t enacted. But this year, large portions of AB 3211&#8212;including provisions affecting open-source AI developers and hosting platforms like Hugging Face&#8212;have been signed into law by Governor Newsom in the form of <a href="https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260AB853&amp;utm_source=Artificial+Intelligence&amp;utm_campaign=e0eaf432b1-AI_VOL_77&amp;utm_medium=email&amp;utm_term=0_a44c86963e-e0eaf432b1-1246523122">AB 853</a>. As is usual for California law, AB 853 will have extraterritorial effect, applying, in practice, throughout the United States.</p><p>What does it do?</p><p>First, the reasonable-enough parts. The law requires &#8220;large online platforms&#8221; (think social media, but also many other web services, like Airbnb, Uber, etc.) to:</p><blockquote><p>Provide a user interface to disclose the availability of system provenance data that reliably indicates that the content was generated or substantially altered by a GenAI system or captured by a capture device.</p></blockquote><p>Of course there are unintended consequences to this. Do I <em>really </em>need a user interface in the Airbnb app that allows me to screen reviews for synthetic content? Would such an interface work? And most importantly, do I really need <em>a law </em>that mandates this?</p><p>Furthermore, the law imposes a new regulation on AI model hosting platforms like Hugging Face, primarily deputizing such platforms to enforce yet another California law (the California AI Transparency Act, or <a href="https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB942">SB 942</a>, signed last year). Specifically, AB 853 states that model hosting platforms &#8220;shall not knowingly make available a GenAI system that does not place disclosures pursuant to [the California AI Transparency Act].&#8221;</p><p>Hugging Face and others, therefore, must now ensure that every model uploaded to their platform complies with the California AI Transparency Act&#8217;s extensive disclosure requirements, including (quoting from SB 942):</p><blockquote><p>to make available an artificial intelligence (AI) detection tool at no cost to the user that meets certain criteria, including that the AI detection tool is publicly accessible&#8230; an option to include a manifest disclosure in image, video, or audio content, or content that is any combination thereof, created or altered by the covered provider&#8217;s generative artificial intelligence (GenAI) system that, among other things, identifies content as AI-generated content and is clear, conspicuous, appropriate for the medium of the content, and understandable to a reasonable person&#8230;</p><p>a latent disclosure in AI-generated image, video, audio content, or content that is any combination thereof, created by the covered provider&#8217;s GenAI system that, among other things, to the extent that it is technically feasible and reasonable conveys certain information, either directly or through a link to a permanent internet website, regarding the provenance of the content.</p></blockquote><p>This applies to any model that has more than 1 million &#8220;users,&#8221; though neither AB 853 nor SB 942 provides any definition of &#8220;user,&#8221; so in the case of an open-source or open-weight model, we have very little idea how the law will be enforced. Most likely, it will just be another always-enforceable-but-rarely-enforced statutory creature in American legal life, constituting yet another government-mediated sledgehammer that can be applied to a great many businesses at any time.</p><p>But there is more: you may have noticed the phrase &#8220;capture device&#8221; at the end of the quote.
That is because AB 853 also regulates all devices sold in California that contain cameras and microphones&#8212;specifically, &#8220;capture device&#8221; means:</p><blockquote><p>a device that can record photographs, audio, or video content, including, but not limited to, video and still photography cameras, mobile phones with built-in cameras or microphones, and voice recorders.</p></blockquote><p>Starting in 2028, firms that sell &#8220;capture devices&#8221; are required to:</p><blockquote><p>(1) Provide a user with the option to include a latent disclosure in content captured by the capture device that conveys all of the following information:</p><p>(A) The name of the capture device manufacturer.</p><p>(B) The name and version number of the capture device that created or altered the content.</p><p>(C) The time and date of the content&#8217;s creation or alteration.</p><p>(2) Embed latent disclosures in content captured by the device by default.</p><p>(b) A capture device manufacturer shall comply with this section only to the extent technically feasible and compliant with widely adopted specifications adopted by an established standards-setting body.</p></blockquote><p>As a general matter, I support giving users of physical recording devices (smartphones, standalone cameras, home security devices, etc.) the option to apply watermarks to the content created by those devices. Ultimately, I suspect, we will find it more productive to watermark and otherwise label <em>human</em>-created content rather than machine-created material; the human-created outputs will, after all, be the scarce ones in the long term.</p><p>The problem with AB 853 is that the definition of &#8220;capture device&#8221; is so broad that it mandates these watermarking standards even for devices where they make little sense&#8212;and for some where they may be actively detrimental to user health and safety.</p><p>But what troubles me the most about the law is that it adds a new regulation on device makers at just the moment when AI is creating novel opportunities for hardware startups of all sizes. From consumer devices like OpenAI&#8217;s rumored pin-like product to newly useful household or industrial robots to much else, I worry we are adding compliance burdens and uncertainty for young firms at precisely the wrong time.</p>
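<p>It is worth noting how modest the substance of the required disclosure is. Below is a minimal sketch of the payload a compliant device might embed; the field names and values are illustrative assumptions, and a real manufacturer would presumably serialize something like this under a standards-body specification (C2PA, the leading content-provenance standard, is the obvious candidate) rather than invent its own format:</p><pre><code>
# Hypothetical sketch of the three data points AB 853 requires a capture
# device to offer as a "latent disclosure." Field names are illustrative.
import json
from datetime import datetime, timezone

def build_latent_disclosure(manufacturer, device_name, device_version):
    """Assemble a disclosure payload for embedding in captured content."""
    return json.dumps({
        "manufacturer": manufacturer,                 # (A) manufacturer name
        "device": f"{device_name} {device_version}",  # (B) device name and version
        "captured_at": datetime.now(timezone.utc).isoformat(),  # (C) time and date
    })

print(build_latent_disclosure("ExampleCorp", "ExampleCam", "2.1"))
</code></pre><p>The hard part, as the statute&#8217;s own &#8220;technically feasible&#8221; hedge concedes, is not assembling this metadata but embedding it latently, so that it survives compression, cropping, and re-encoding. That is where the compliance burden on small hardware makers will actually fall.</p>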
<h4>Conclusion</h4><p>There are other laws California passed this session that strike me as more productive, but they still contain problematic provisions. For example, <a href="https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260SB243&amp;utm_source=Artificial+Intelligence&amp;utm_campaign=e0eaf432b1-AI_VOL_77&amp;utm_medium=email&amp;utm_term=0_a44c86963e-e0eaf432b1-1246523122">SB 243</a> is a bill that regulates chatbot companions, in particular requiring them to periodically remind users that they are not humans. I am not sure anyone was confused about this, but the concern the law is addressing is fair enough.</p><p>Yet, as written, the law would apply to characters in video games. Imagine, for instance, a game whose plot involves traveling around with a non-playable companion. In principle, SB 243 would require game developers to write in dialogue that periodically reminds the human player that the companion is not a human. This is of course pointless, and one of the many examples of the unintended consequences that arise when lawmakers project legal authority over a landscape they do not wholly comprehend.</p><p>Many of these bills contain subtle and obvious Constitutional flaws, as well. But I have long since come to accept the reality that legislators do not view it as their job to write Constitutional statutes; they leave that issue to the judges, whom they then attack for doing their jobs. There is at least a bright side: this phenomenon of legislators ignoring the Constitution in their statutory drafting has a tendency to create ample case law favorable to proponents of the First Amendment and other Constitutional rights.</p><p>In general, between this and the dozens of bills passed elsewhere, it would be accurate to say that AI is the most heavily regulated nascent, general-purpose consumer technology in modern history. It is probably already the case that we have blocked new entrants from competing in all kinds of markets, and we have no doubt quashed at least some good ideas in their cradles. Whether these self-imposed limitations are worth it&#8212;whether they have really served to make you feel &#8220;safer&#8221;&#8212;is a question I will leave for you to decide.</p><p>My only closing observation is that these bills got very little airtime in California&#8217;s AI policy debates, despite many being considerably more problematic and burdensome than the bill backed by the AI safety community, SB 53. One of these bills arguably <em>makes software-enabled market transactions arbitrarily unlawful</em>, and yet where were the techno-libertarians? They were busy fighting a battle with their perceived enemies: the AI safety community.</p><p>In service of fighting that battle, they forgot that the stewards of the status quo&#8212;the voracious and often downright stupid machine of American policymaking&#8212;were grinding along. I hope in the future that negative polarization does not blind us so thoroughly.</p>]]></content:encoded></item><item><title><![CDATA[Tough Rocks]]></title><description><![CDATA[Eliminating the Chinese Rare Earth Chokepoint]]></description><link>https://www.hyperdimensional.co/p/tough-rocks</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/tough-rocks</guid><dc:creator><![CDATA[Dean W.
Ball]]></dc:creator><pubDate>Fri, 17 Oct 2025 14:39:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mLaj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49371abf-2579-47be-8114-3e0ca580af8b_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hyperdimensional.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hyperdimensional.co/subscribe?"><span>Subscribe now</span></a></p><h4><strong>Introduction</strong></h4><p>Last Thursday, China&#8217;s Ministry of Commerce (MOFCOM) <a href="https://perma.cc/7A9J-WGZR">announced</a> a series of new export controls (<a href="https://cset.georgetown.edu/wp-content/uploads/t0656_china_rare_earth_controls_2025_61_EN.pdf">translation</a>), including a new regime governing the &#8220;export&#8221; of rare earth elements (REEs) any time they are used to make advanced semiconductors or any technology that is &#8220;used for, or that could possibly be used for&#8230; military use or for improving potential military capabilities.&#8221;</p><p>The controls apply to any manufactured good made anywhere in the world in which Chinese-mined or processed REEs account for 0.1% or more of the good&#8217;s value. Say, for example, that a German factory makes a military drone using an entirely European supply chain, except for the use of Chinese rare earths in the onboard motors and compute. If this rule were enforced by the Chinese government to its maximum extent, <em>this almost entirely German drone would be export controlled by the Chinese government</em>.</p><p>REEs are enabling components of many modern technologies, including vehicles, semiconductors, robotics of all kinds, drones, satellites, fighter jets, and much, much else. The controls apply to seven REEs (samarium, gadolinium, terbium, dysprosium, lutetium, scandium, and yttrium). China controls a significant majority of the world&#8217;s mining capacity for these materials, and an even higher share of the refining and processing capacity.</p><p>The public debate quickly devolved into arguments about who provoked whom (&#8220;who really started this?&#8221;), whether it is China or the US that has miscalculated, and abundant species of whataboutism. Like too many foreign policy debates, these arguments are primarily about narrative setting in service of mostly orthogonal political agendas rather than the actions demanded in light of the concrete underlying reality.</p><p>But make no mistake, this is a big deal. China is expressing a willingness to exploit a weakness held in common by virtually every country on Earth. Even if China chooses to implement this policy modestly at first, the vulnerability it is exposing has significant long-term implications for both the manufacturing of AI compute and that of key AI-enabled products (self-driving cars and trucks, drones, robots, etc.). That alone makes it a relevant topic for <em>Hyperdimensional</em>, where I <a href="https://www.hyperdimensional.co/p/america-the-serious">have covered</a> manufacturing-related <a href="https://www.hyperdimensional.co/p/reflections-on-el-segundo">issues</a> before.
The topics of rare earths and critical minerals have also long been on my radar, and I wrote <a href="https://americancompass.org/restoring-leadership-in-critical-minerals/">reports</a> for <a href="https://www.rebuilding.tech/posts/regaining-control-over-critical-mineral-production">various think tanks</a> early this year.</p><p>What follows, then, is a &#8220;how we got here&#8221;-style analysis followed by some concrete proposals for what the United States&#8212;and any other country concerned with controlling its own economic destiny&#8212;should do next.</p><p>A note: this post is going to concentrate mostly on REEs, which is a chemical-industrial category, rather than &#8220;critical minerals,&#8221; which is a <em>policy </em>designation made (in the US context) by the US Geological Survey. All REEs are considered critical minerals by the federal government, but so are many other things with very different geological, scientific, technological, and economic dynamics affecting them.</p><h4><strong>How We Got Here</strong></h4><p>If you have heard one thing about rare earths, it is probably the quip that they are not, in fact, rare. They&#8217;re abundant in the Earth&#8217;s crust, but they&#8217;re not densely distributed in many places because their chemical properties typically result in them being mixed with many other elements instead of accumulating in homogeneous deposits (like, say, gold).</p><p>Rare earths have been in industrial use for a long time, but their utility increased considerably with the <a href="https://www.nanocrystalmagnetics.us/our-novel-technology?utm_source=chatgpt.com">simultaneous and independent invention</a> in 1983 of the Neodymium-Iron-Boron magnet by General Motors and Japanese firm Sumitomo. This single materials breakthrough is upstream of a huge range of microelectronic innovations that followed.</p><p>Economically useful deposits of REEs require a rare confluence of factors such as unusual magma compositions or weathering patterns. The world&#8217;s largest deposit is known as Bayan Obo, located in the Chinese region of Inner Mongolia, though other regions of China also have substantial quantities.</p><p>The second largest deposit is in Mountain Pass, California, which used to be the world&#8217;s largest production center for rare earth magnets and related goods. It went dormant twenty years ago due to environmental concerns and is now being restarted by a firm called MP Materials, in which the US government <a href="https://mpmaterials.com/news/mp-materials-announces-transformational-public-private-partnership-with-the-department-of-defense-to-accelerate-u-s-rare-earth-magnet-independence/">took an equity position</a> this past July. Another very large and entirely undeveloped deposit&#8212;possibly the largest in the world&#8212;is in Greenland. Anyone who buys the line that the Trump administration was &#8220;caught off guard&#8221; by Chinese moves on rare earths is paying insufficient attention.</p><p>Rare earths are an enabling part of many pieces of modern technology you touch daily, but they command very little value as raw or even processed goods. Indeed, the economics of the rare earth industry are positively brutal. There are many reasons this is true, but two bear mentioning here.
First, the industry suffers from dramatic price volatility, in part because China strategically dumps supply onto the global market to deter other countries from developing domestic rare earth supply chains.</p><p>Second, for precisely the same reasons that rare earth minerals do not tend to cluster homogeneously (they are mixed with many other elements), the processing required to separate REEs from raw ore is exceptionally complex, expensive, and time-consuming. A related challenge is that separation of the most valuable REEs entails the separation of numerous, less valuable elements&#8212;including other REEs.</p><p>In addition to challenging economics, the REE processing business is often environmentally expensive. In modern US policy discourse, we are used to environmental regulations being deployed to hinder construction that few people <em>really </em>believe is environmentally harmful. But these facilities come with <a href="https://www.earthobservatory.nasa.gov/images/77723/rare-earth-in-bayan-obo">genuine environmental costs</a> of a kind Western societies have largely not seen in decades; indeed, the <a href="https://www.bbc.com/future/article/20150402-the-worst-place-on-earth">nastiness</a> of the industry is part of why we were comfortable with it being offshored in the first place.</p><p>China observed these trends and dynamics in the early 1990s and made rare earth mining and processing a major part of its industrial strategy. This strategy led to these elements being made in such abundance that it may well have had a &#8220;but-for&#8221; effect on the history of technology. Absent Chinese development of this industry, it seems quite likely to me that advanced capitalist democracies would have settled on a qualitatively different approach to the rare earths industry and the technologies it enables.</p><p>In any case, that is how we arrived at this point: a legacy of American dominance in the field, followed by willful ceding of the territory to wildly successful Chinese industrial strategists. Now this unilateral American surrender is being exploited against us, and indeed against the entire world. Here is what I think we should do next.</p><h4><strong>Policy Recommendations</strong></h4><p>First, the bad news: the path forward is not going to be forged by the private sector alone. It will require government involvement. The question is what kind of government involvement is optimal, not whether there is a role for the state to play. Second, even more bad news: while it is true that the rare earths industry is overregulated, the solution to this problem is not one of deregulation alone. This is not a case of &#8220;get out of the way and let the private sector cook.&#8221;</p><p>But there is good news too. The first is that, to borrow Tyler Cowen&#8217;s mantra, &#8220;supply is elastic.&#8221; What this means for our purposes is that markets, in response to sudden shifts in demand (in this case, the need or strong desire of many firms to purchase non-Chinese REEs in response to the imposition or threat of Chinese export controls), can coordinate efficiently to increase supply.</p><p>This response can happen far faster than bureaucrats or chatterers realize from their vantage point, surveying the ground as they do from atmospherically situated armchairs. A great example is COVID.
At the beginning of the pandemic, the World Health Organization <a href="https://edition.cnn.com/2020/03/30/world/coronavirus-who-masks-recommendation-trnd">recommended </a><em><a href="https://edition.cnn.com/2020/03/30/world/coronavirus-who-masks-recommendation-trnd">against </a></em><a href="https://edition.cnn.com/2020/03/30/world/coronavirus-who-masks-recommendation-trnd">masking</a>, not because they thought it was imprudent but because they feared limited supplies of masks being diverted from hospitals. They supposed it would take years to produce sufficient masks for an entire population. It is true that in the opening months of the pandemic, masks were hard to find&#8212;yet this problem was resolved in a matter of months, if not weeks.</p><p>The same dynamic can apply with rare earths, though the story is somewhat more complex because of the lack of clarity on the extent of China&#8217;s enforcement. I suspect China will savvily adjust its enforcement (not the policy itself, which is written broadly to enable just this sort of flexibility) to attenuate the benefits of supply elasticity. Still, many firms will see the writing on the wall and realize that they need to find alternate sources&#8212;as some did years ago.</p><p>On top of this, recent policy moves have created favorable conditions for development of non-Chinese rare earth capacity. Congress in 2018 <a href="https://www.govinfo.gov/content/pkg/PLAW-115publ232/pdf/PLAW-115publ232.pdf">mandated</a> that the Department of War shift to non-Chinese sourcing for several critical minerals and rare earths by January 1, 2027. Even better, the One Big Beautiful Bill earlier this year <a href="https://wpintelligence.washingtonpost.com/topics/global-security/2025/07/31/inside-office-strategic-capital-pentagons-new-200-billion-lending-powerhouse/">authorized</a> $100 billion in loan authority for the Department of War&#8217;s Office of Strategic Capital to invest in projects specifically related to critical minerals mining and refining.</p><p>These factors all militate in favor of faster-than-the-pundits-expect development of robust non-Chinese supply chains for rare earths.</p><p>The second piece of good news is that the field of rare earths is itself highly susceptible to technological innovation. Here and in other industries, we must see the bright side of American deindustrialization: we get the blessing of a clean slate, on which we can draw up designs based on the capabilities and assumptions of <em>today&#8217;s </em>technology, not that of 30 years ago.</p><p><em><strong>Invest in Industry and Ensure Price Stability</strong><br></em>The prices of rare earths are volatile, and this is made worse by deliberate Chinese intervention in the market. The successful financing and operation of domestic or allied rare earth capacity will require price floors supported by the government.</p><p>The Department of War&#8217;s pathbreaking deal with MP Materials, mentioned above, has much to recommend it. Most importantly, the arrangement creates a price floor for rare earth magnets produced by MP, leveraging Title III of the Defense Production Act to do so (which, incidentally, I <a href="https://americancompass.org/restoring-leadership-in-critical-minerals/">recommended</a> in a paper earlier this year, though I had no direct role in this deal during my time in government). In essence, if the market price of the magnets falls below $110 per kilogram, the government will pay the shortfall. If the market price exceeds $110 per kilogram, MP and the government split the upside.</p>
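<p>A minimal sketch may help make the mechanism concrete. The $110 per kilogram floor is from the deal as described above; the 50/50 upside split and cash settlement below are illustrative assumptions, not reported deal terms:</p><pre><code>
# Sketch of a symmetric price floor with upside sharing. The $110/kg floor
# comes from the MP deal described above; the 50/50 split is an assumption
# made purely for illustration.
FLOOR = 110.00          # dollars per kilogram
GOV_UPSIDE_SHARE = 0.5  # assumed split; the actual ratio is a deal term

def settle(market_price_per_kg, kilograms):
    """Return (government payment to MP, MP's effective revenue)."""
    if market_price_per_kg >= FLOOR:
        upside = (market_price_per_kg - FLOOR) * kilograms
        gov_payment = -GOV_UPSIDE_SHARE * upside  # negative: MP remits the government's share
        mp_revenue = market_price_per_kg * kilograms + gov_payment
    else:
        gov_payment = (FLOOR - market_price_per_kg) * kilograms  # government tops MP up to the floor
        mp_revenue = market_price_per_kg * kilograms + gov_payment
    return gov_payment, mp_revenue

print(settle(90.00, 1_000))   # (20000.0, 110000.0): MP is made whole at the floor
print(settle(130.00, 1_000))  # (-10000.0, 120000.0): the government shares the upside
</code></pre><p>The appeal of this structure is that it caps the producer&#8217;s downside without fixing the market price itself, and it lets the public share in any boom.</p>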
<p>The deal also comes with a federal government equity stake in MP. I am less supportive of this, because it creates an implicit &#8220;national champion&#8221; and thereby crowds out potential new entrants. Government is better positioned to be a source of price support (especially the kind mentioned above, which allows the public to share in the upside), non-dilutive capital grants, and loans. Having government on the cap table creates numerous political economy risks without providing much benefit.</p><p>I also realize that the ship has sailed, given the Trump Administration&#8217;s decision to use government equity in private firms widely. Still, I would be remiss if I did not recommend against public equity positions going forward.</p><p>Before I depart from this theme, a quick observation: America enacted a remarkably flexible and capacious industrial policy statute 75 years ago. It is called the Defense Production Act (DPA). The DPA is typically associated with its command-and-control Title I (priorities and allocations authority, allowing the government to commandeer private resources for its benefit), but Title III is more my cup of tea. It can be used for any industry or technology and allows the government immense flexibility in contracting: traditional grants, loans, equity, and the price floor mechanism described above are all possible within existing statute.</p><p>Currently, Title III can be directed to projects within the United States, Australia, and Canada&#8212;all rich sources of rare earths. Congress should consider adding Greenland to the list of eligible countries in the upcoming National Defense Authorization Act or other relevant legislation.</p><p><em><strong>Create Market Infrastructure</strong></em><br>There is a fundamental problem with the MP deal: the price itself. Unlike most commodities, which rely on global&#8212;and, candidly, often US-dominated&#8212;financial plumbing, rare earths trade on market infrastructure that China has been building for itself. Indeed, the MP deal&#8217;s price floor <a href="https://x.com/ArnabDatta321/status/1966155199388725363">is indexed</a> to the China-dominated Asian Metal Market index. Even absent the obvious threat of Chinese manipulation, this index&#8217;s prices incorporate regulatory, taxation, and logistical considerations not relevant to the US.</p><p>Arnab Datta has written, by a very wide margin, <a href="https://www.employamerica.org/expanding-the-toolkit/reimagining-the-spr/">the best work</a> on how the US can build market infrastructure for rare earths and other critical minerals. He proposes a technocratically managed reserve for critical minerals modeled on the existing Strategic Petroleum Reserve. I have some quibbles but agree directionally.</p><p>The primary point to understand is that this market infrastructure provides the liquidity, risk hedging, and other benefits common in all commodities markets and crucial for a robust industry in the long term.
All purely public support is brittle without the traditional foundations of market capitalism.</p><p><em><strong>Streamline Regulations</strong></em><br>As Congress contemplates permitting reform, it should consider a National Environmental Policy Act fast-track for critical minerals projects, with a single agency responsible for publishing template permits and environmental mitigations, coupled with litigation shot clocks (time limits on when litigation can be initiated by plaintiffs), to the extent such litigation limits are not a part of the broader permitting reform package.</p><p>In addition, Congress (optimally) or the Treasury Department, through guidance, should ensure the existing <a href="https://www.federalregister.gov/documents/2023/12/15/2023-27498/section-45x-advanced-manufacturing-production-credit">45X tax credit program</a> created by the Inflation Reduction Act applies to rare earth magnet manufacturing, in addition to processing.</p><p><em><strong>Foster Innovation Throughout the Supply Chain</strong></em><strong><br></strong>A great deal of innovation is possible at every link in the rare earth chain of production. First, there is discovery of rare earths and other critical minerals in the ground: new techniques employing machine learning (&#8220;AI&#8221;!) in combination with more sophisticated sensing equipment, <a href="https://www.mdpi.com/2075-163X/15/10/1015?utm_source=chatgpt.com">such as hyperspectral imaging</a>, are being used to locate new deposits.</p><p>The rest of the value chain consists of mining, extraction, processing, and manufacturing finished goods (magnets, in this case). Traditionally, no single firm would handle all of these steps. But given the necessity of scale for building robust manufacturing businesses and the brutal economics outlined above, it is likely the case that vertical integration of most or all of these four steps will be needed. This is precisely what MP Materials hopes to do.</p><p>AI has increasing utility to many of these processes. To name just a few I find myself excited about: with robust datasets, we can use AI-based materials science tools to design superior <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC12035567/?utm_source=chatgpt.com">extractants</a> and other <a href="https://chemistry-europe.onlinelibrary.wiley.com/doi/10.1002/ejic.202400064?utm_source=chatgpt.com">chemicals</a> used in the processing and extraction stages. Coupled with autonomous materials science labs to test these AI-designed materials in the real world, we can rapidly accelerate the discovery of new materials for the rare earth industry. We can use reinforcement learning and other AI-based methods to <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11558677/?utm_source=chatgpt.com">optimize</a> the <a href="https://aiche.onlinelibrary.wiley.com/doi/am-pdf/10.1002/amp2.10079?utm_source=chatgpt.com">extraction process itself</a>.</p><p>Government can provide support to these efforts in various ways (for instance, by funding a network of autonomous materials science labs, <a href="https://www.nsf.gov/news/nsf-invest-new-national-network-ai-programmable-cloud">which the National Science Foundation is already doing</a>, though this could use much more funding). The fundamental point is this: successful firms in this industry will reexamine every step of this process and use the technologies of the present to improve or reimagine all of it.
Government policy must therefore not be overly tied to any specific firms (new entrants, with new ideas, must always have a clear pathway to success) or to any specific mode of production.</p><p>Indeed, the long-term solution may well be to rebut the presumption in favor of rare earths altogether. It is worth remembering that our current mass reliance upon rare earth magnets in particular was driven by a single materials science innovation: the Nd-Fe-B magnet. Perhaps someone will <a href="https://www.chemanalyst.com/NewsAndDeals/NewsDetails/ex-tesla-innovator-unveils-lithium-free-battery-built-for-data-centers-cold-storage-39245">invent something new</a> that makes us dramatically less dependent on rare earths. Is this particular magnet the end of microelectronics history? Somehow I doubt it.</p><p>New approaches altogether, likely enabled by the same autonomous science infrastructure I described above, must be pursued. And so we must avoid policy decisions that foreclose such future innovations. A tight relationship between government and one firm&#8212;with one specific business model&#8212;is exactly the kind of thing that restricts, rather than expands, our options.</p><h4><strong>Conclusion</strong></h4><p>There is much more to say here, including topics like workforce development and, crucially, international collaboration; the US absolutely cannot develop non-Chinese REE supply chains alone. I will likely have more to say in other formats soon.</p><p>The only thought I want to leave you with is this: for years, American elites have self-flagellated about deindustrialization and the hopelessness of ever &#8220;bringing manufacturing back.&#8221; And yet almost nothing I describe above is novel; almost all my recommendations are for business and policy processes already underway.</p><p>There is a tremendous amount of work to be done, but things are not quite as dire as they seem. China will not long leverage this weakness against us. Supply, ultimately, is elastic.</p>]]></content:encoded></item><item><title><![CDATA[The Future and Its Friends]]></title><description><![CDATA[On the march through the institutions]]></description><link>https://www.hyperdimensional.co/p/the-future-and-its-friends</link><guid isPermaLink="false">https://www.hyperdimensional.co/p/the-future-and-its-friends</guid><dc:creator><![CDATA[Dean W. Ball]]></dc:creator><pubDate>Fri, 10 Oct 2025 12:45:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kZjN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f70956b-24b6-432b-81c4-dcfa4095ead7_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hyperdimensional.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hyperdimensional.co/subscribe?"><span>Subscribe now</span></a></p><p><em>Thank you to the <a href="https://www.goldengateinstitute.org">organizers of The Curve</a> for hosting me and giving me the opportunity to speak, and to the Rockefeller Foundation for facilitating the reflections below.
Thanks also to Virginia Postrel <a href="https://www.amazon.com/FUTURE-ITS-ENEMIES-Creativity-Enterprise/dp/0684862697/ref=sr_1_1?dib=eyJ2IjoiMSJ9.XQemXcHTzvZ4Ur4hgQrIhR7m-B5Vn32IdSu45leveomJ1eIxPAiuBcDgLBwiqq7GcQjzFqkx7UN3CJJxUwFjBDX-hns6zzJSI5jfpLFYEOFS0Nj_6IbyJVP_6A8VbeyDJV6mbo7qjqj-jiDWDJ2oGW-44PSEqyQK1aeykHfP400BaXusmVkmPflERPd-NDQt.cFefed76BkRD01MOfmKbFD474S6P890oJIZSTbvZfEk&amp;dib_tag=se&amp;hvadid=694104287509&amp;hvdev=c&amp;hvexpln=67&amp;hvlocint=9007538&amp;hvlocphy=9220174&amp;hvnetw=g&amp;hvocijid=2804913071404655201--&amp;hvqmt=e&amp;hvrand=2804913071404655201&amp;hvtargid=kwd-299952010202&amp;hydadcr=9366_13533310&amp;keywords=the+future+and+its+enemies&amp;mcid=9e1d1a40ed9134edb770344e2ddbd660&amp;qid=1760093018&amp;s=books&amp;sr=1-1">for inspiring the title of this essay</a>. </em></p><p>&#8212;</p><p>I spent the last few days wandering the ancient streets of Bellagio in northern Italy. Today this city exists mostly for tourism, but through history this has been a hard-nosed place of military strategy and commerce. Positioned at the tip of a peninsula on Lake Como, Bellagio ensured that no water-bound trade could pass through this region without catching the watchful eye of they who occupied it.</p><p>There were the Celts, then the Romans. After the Roman Empire collapsed came the Lombards and the Franks. Protective walls had to be erected at the city borders, since this newly fractured world had no great empire to ensure its safety.</p><p>More advanced industry gradually formed during the late medieval and early modern periods. In particular, the city became a center of silk production, whose traditional manufacturing process still lingers here today. </p><p>At the peak of the Bellagio promontory lies a site where, 2000 years ago, Pliny the Younger had a villa. During the golden age of Europe&#8217;s aristocracy, the site became the compound of Milanese aristocrats who christened it Villa Serbelloni. In the era of budding European capitalism, it became a hotel&#8212;though it remained in the hands of the Duchess of Serbelloni. And in 1959 that family passed the property on to the stewards of the fortune of a new emperor, <a href="https://www.rockefellerfoundation.org/fellowships-convenings/bellagio-center/">the foundation of John Davison Rockefeller</a>.</p><p>I have found this property to be a fitting place to ponder the transformation of institutions.</p><p>&#8212;</p><p>&#8220;Institution&#8221; is a funny word. I despise it, in fact. It is cold and dry, bereft of humanity. It is a word that sounds like it implicates concrete and steel, when really, institutions are composed of human beings.</p><p>Institutions are not &#8220;organizations.&#8221; When you think of an institution, you shouldn&#8217;t think of a building; you should think of people. The institution of Congress is not the Capitol Building, but instead all the people who work inside it, and their rules, habits, norms, rituals, preferences, ways of relating to one another, and so forth.</p><p>All institutions are technologically contingent, based as they are upon a vast complex of assumptions, almost wholly unstated, about what is possible and what is impossible, about what is hard and what is easy. And it is our technology that determines what is hard for us to do and what is easy. As technology makes new things possible, and eventually makes them easy, institutions must be transformed.</p><p>Think, for example, about the institution of science.</p><p>Picture yourself in a present-day scientific lab.
Look at the equipment, all of it designed for individual humans to place individual samples for one-by-one analysis, in service of writing individual &#8220;papers&#8221; with frozen-in-time &#8220;results,&#8221; all funded by individual grants (usually from the government or a large philanthropy) for individual scientific micro-endeavors.</p><p>Imagine that you are a cook, and you just made a cake in your kitchen. You&#8217;ve made a delicious cake, and you&#8217;d like to start a business making 1,000 of them a day. So you replicate your kitchen 1,000 times over&#8212;you buy 1,000 residential ovens, 1,000 standard mixing bowls, 1,000 bags of flour. And you hire 1,000 humans to follow your recipe, each making their own cake in the various kitchens you&#8217;ve built.</p><p>Of course no one would do this. And yet this is not <em>that</em> far off from how we today &#8220;scale science,&#8221; and in some ways we are even less efficient.</p><p>What you should do instead, obviously, is build a <em>factory</em> with the ability to make 1,000 cakes <em>at the same time</em>. This was, at one point, a new type of institution that entailed distinct organizational structures (the modern corporation, for example), new relationships of workers to firm owners, novel patterns of work, and much else. The factory enabled and necessitated new technology: in our example, industrial ovens, wholesale purchase of ingredients, and the like, in quantities that would be alien in a residential kitchen. Similarly, it required new occupations that do not map cleanly to their pre-industrial analogs (consider the &#8220;chef&#8221; or &#8220;baker,&#8221; for example, versus the &#8220;batter-vat cleaner&#8221;). </p><p>Standing in a pre-industrial residential kitchen, it would be difficult to imagine a <em>factory</em>, partially because there are numerous complementary innovations a factory requires (for example, the ability to manufacture and power industrial-scale cooking equipment), and partially because imagination is hard. Unrealized ideas are some of the most fragile things human beings produce. Standing in a present-day scientific lab, it would be similarly difficult to imagine the industrial-scale science of the future, and all of the complementary technological and institutional innovations it will require. </p><p>But we can try. Imagine a new industrial science automated with robotics in the world of atoms and agentic frontier models in the world of bits. Consider millions of experiments in parallel, generating data and analysis in a continuous stream, but only incidentally for a human audience. A ceaseless machine interrogating and manipulating nature with greater finesse by the hour, at once magnificent and terrifying, as all great machines are.</p><p>In this new world, the human-relevant scientific unit of account is no longer the &#8220;paper&#8221; or the &#8220;dataset&#8221; or even the &#8220;experiment.&#8221; Instead the thing humans care about becomes the creative question, the daring moonshot, the industrial objective. How do we create an incentive for future practitioners of science to ask great questions? Do we do so today, or do we mostly incentivize them to write great grant proposals?</p><p>&#8212;</p><p>When I and many others say that &#8220;AI will challenge our institutions,&#8221; we are burying the lede beneath frigid nomenclature.
What we are saying is that AI upends the way most people who do most economically valuable things conceive of their work, their organizations, and, ultimately, themselves.</p><p>We face a question: do we try to reform, improve, adapt, refine, or update our institutions? Or do we start from scratch, building new things altogether? And can we&#8212;America, the West, humanity at large&#8212;stomach such momentous change?</p><p>It is a question as old as technology. But both institutional reform and creation get harder over time, because as our civilizations grow older, and as we become wealthier (due to prior, successful institutional innovation and co-evolution with technology), change gets more difficult. Our bones stiffen. We have more to lose. We become tired.</p><p>Like most things, I suspect our path forward will require both reform of existing institutions and the forging of new ones. In some cases the new will accumulate in sedimentary fashion over the old. In others, the new institutions will outcompete the existing ones, sometimes viciously. A great many storied institutions will be&#8212;pardon&#8212;<em>railroaded </em>by the technological wave that is building. Based on what I have argued here about what institutions really are, I hope you understand that this will be hard, emotionally and otherwise, to internalize.</p><p>A great many people, once they realize what is underway, will understandably fight the new institutions. They will seek to entrench the status quo, to reject entirely the possibility of change so radical. And they will ask you which side <em>you</em> are on. You will face a tough choice. </p><p>Anytime I walk around a scientific lab, I feel the anxiety and frustration Henry Ford must have felt when he studied the pre-assembly-line factories. Everywhere I look, I see new empires and emperors dying to be born. And everywhere I look, I see the institutions of old ready to fight tooth and nail.</p><p>&#8212;</p><p>A few days ago I was in the San Francisco Bay Area, the birthplace of the coming revolution in our institutions. My primary business was attending an excellent conference called The Curve, which brings together an eclectic mix of delegates from the institutions of old, and the institutions which today are struggling to be born.</p><p>Several attendees remarked upon the clash of Washington politicos and San Francisco technologists. I noticed this too, and thought of cowboys and Indians sizing each other up, readying for battle, yet dwarfed by the size of the terrain and the scale of the ideas over which they feud. I tried, and ultimately failed, to determine which side I was on.</p><p>I came to The Curve with an offer. Not so much with a proposal&#8212;though I did have one&#8212;but with an outstretched hand. Fundamentally, I am aware that institutional transformation&#8212;especially so much of it all at the same time&#8212;is going to be resisted by many, if not most. In a fraught world replete with risks, what worries me above all else is that our society&#8217;s efforts to fight change will encase the institutions of the present in amber.</p><p>These fights will have very little, perhaps nothing, to do with matters of frontier AI safety. Some of them will be fights worth having; many will be vicious attempts to quash fragile visions of a better future. Shockingly few people are on the side of techno-optimism, of radical technological change and attendant institutional transformation. 
The future has very few friends.</p><p>But The Curve was filled with friends of the future. Some are accelerationists, relatively less concerned about AI risks&#8212;though few would claim those risks are non-existent. Some are AI safetyists, relatively more concerned about AI risks&#8212;though few would support regulating AI such that its positive uses are rendered impossible. Some of us try, and ultimately fail, to determine which side we are on.</p><p>These friends of the future are divided. They believe they are in a rivalrous competition with one another, an arm wrestle on the verge of becoming a fistfight. I believe, and always have believed, that this is wrong. The friends of the future should be allies, not enemies.</p><p>I believe the logical starting point for such an alliance is the federal preemption of problematic state AI regulations. I put forth <a href="https://www.hyperdimensional.co/p/be-it-enacted">a proposal</a> to advance this discussion, but I hope others propose their own. Anton Leicht has <a href="https://writing.antonleicht.me/p/a-preemption-deal-worth-making">written thoughtfully</a> about what the political contours of such a compromise could look like. I have little to add to his analysis. </p><p>There may come a day when we are forced to make tradeoffs that do cause the accelerationists and the safetyists to become rivals. Let that day come when it must. In the meantime, I submit to you that the few friends the future has should work together.</p><p>&#8212;</p><p>The future I hope for will be hard-won. It will require more than just reasonable compromises on AI legislation&#8212;much more. It will require more than merely pushing back on ill-conceived regulations or cutting red tape. These things are small slivers&#8212;shadows, really&#8212;of what I have in mind.</p><p>Instead, the future I hope for will have to be advocated for with the utmost zeal and fierce belief. It will demand cajoling, coaxing, persuading, and not a small amount of fighting. Most importantly, though, it will require imagination: a willingness to invent the assembly line and the factory, to cast aside the old and start fresh.</p><p>Walking through the ossified streets of a northern Italian commercial-hub-turned-museum, I wonder and worry about whether the contemporary West can muster this zeal. If we can, it will probably be because of the combined efforts of a small group of people who saw the future early and decided to befriend it, a ragtag band of marchers through the institutions.</p><p>As you consider whether you want to join the march, look around you. Have the institutions of the present day served you well? Do they seem healthy? Do they seem repairable? Are you <em>happy </em>about the status quo? </p><p>I know my answers to these questions. My march, therefore, will proceed. Through mud and sand, in freezing rain, against bitter winds, under the glaring sun, I will march with the friends of the future, step by step.</p>]]></content:encoded></item></channel></rss>