Apologies for posting two days in a row, but with the end of the Supreme Court term, there’s some valuable and timely analysis to be done. For new subscribers: as a general matter, Hyperdimensional publishes once, and only once, per week. The next post in this series will be published next week.
The Supreme Court on Monday issued a decision in two major social media cases: Moody v. NetChoice and NetChoice v. Paxton. Proponents of online speech are celebrating the court’s opinion, about which more below. But the decision also reveals the Supreme Court’s early thoughts on AI, and here, the Justices’ perspective is more complex.
Indeed, almost every Justice expressed skepticism about whether the use of AI tools can be considered Constitutionally protected speech, at least in this context. In doing so, the Justices are questioning the legal basis for the notion of AI as a tool—that is, as an extension of a user’s will. And because AI is already pervasive in curating social media feeds, and will only become more so, the decision may not be as positive for online speech as some believe.
In short: I’m not sure the argument that the First Amendment will thwart onerous AI regulation is going to hold up very well in practice.
To be clear, when I say early thoughts on AI, I do mean early. AI’s First Amendment status was not an explicit part of this case, and the Court’s only substantive ruling was that the record is underdeveloped. Still, the Justices seemed eager to offer broader thoughts, not just on AI but also, in Justice Barrett’s case, seemingly on the TikTok ban. It does seem as though the Court took this opportunity to telegraph its bigger-picture thinking, and that alone deserves analysis.
In this two-part post, I’ll explain the NetChoice cases, outline what they reveal about the Court’s early thoughts on AI, and, in the second part, consider the implications of it all. I hope you enjoy.
Introduction: Explaining the NetChoice Cases
The cases centered on regulations passed by Florida and Texas affecting any sufficiently large website that hosts user-generated content (100 million monthly active users or $100 million in annual revenue in the case of Florida; 50 million monthly active users in the case of Texas). The laws prohibit such websites from exercising viewpoint discrimination when moderating user-generated content.
The laws were explicitly intended to prevent Facebook, Twitter, YouTube, and other major platforms from censoring conservative political views. But as written, the laws would also make it so that these companies—and smaller companies such as Etsy, Yelp, and AllTrails—could not remove genuinely vile content of all kinds (at least, not if they removed it on the grounds of “viewpoint”).
NetChoice, an industry group, sued to stop these laws through what is called a “facial” challenge, meaning it asserted that the laws were, on their face, unconstitutional in all conceivable applications. Yet the litigation itself focused primarily on the effect of these laws on the most popular social media feeds, like the Twitter timeline. Other potentially affected services, from smaller platforms to direct messaging services, were not a major focus of the arguments in court.
The Court unanimously agreed that this was not good enough—if you’re going to argue a law is unconstitutional in all applications, you have to prove that claim for all applications, or at least many more applications than the litigants covered. The Court sent the cases back to the lower levels of the federal judiciary from whence they came, saying to all parties, essentially, “think about this both more comprehensively and more carefully.”
But Justice Kagan, writing the majority opinion, did not stop there. She also suggested that the Texas and Florida laws were flatly unconstitutional with respect to social media feeds because they compel the companies that distribute those products to disseminate speech against their will. In other words, every online company, regardless of size, has free speech rights that it can exercise as it chooses. As she writes:
We have repeatedly held that laws curtailing their editorial choices must meet the First Amendment’s requirements. The principle does not change because the curated compilation has gone from the physical to the virtual world.
Justice Kagan’s opinion is a win for those who favor free expression online. Critics of the decision might argue that the Texas and Florida laws aimed to increase the viewpoint diversity of online conversations. As Justice Kagan points out, however, once government has gotten into the business of regulating speech online… it’s in the business of regulating speech online, and that crosses a Constitutional red line:
But a State may not interfere with private actors' speech to advance its own vision of ideological balance. States (and their citizens) are of course right to want an expressive realm in which the public has access to a wide range of views. But the way the First Amendment achieves that goal is by preventing the government from “tilt[ing] public debate in a preferred direction.”
…
To give government that power is to enable it to control the expression of ideas, promoting those it favors and suppressing those it does not. And that is what the First Amendment protects all of us from.
A win, then, for proponents of free speech online. Is it time to bust out our Cyberspace Declaration of Independence?
Not so fast, I think. AI was lurking in the background throughout the Court’s decision, and I am not sure the news on that front is so straightforwardly positive.
The AI Caveat
At issue is the concept of the “algorithm.” Most of the Justices’ opinions focused on content moderation decisions made in accordance with published “community guidelines”—the rules social media platforms have about what is and is not allowed on their platforms.
But periodically, the Justices acknowledged that AI algorithms automate both content moderation itself and the curation of content within a user’s feed independent of platform rules (i.e., the algorithmic timelines that show each user a personalized feed). And about these algorithms, they are more reserved. Justice Kagan clarifies, in a footnote, that her opinion is not about algorithmic content curation based on user preferences:
We therefore do not deal here with feeds whose algorithms respond solely to how users act online—giving them the content they appear to want, without any regard to independent content standards.
Chief Justice Roberts and Justices Kavanaugh, Sotomayor, and Barrett all signed on to this opinion (so did Justice Jackson, but she did not sign on to the specific part of the opinion from which this footnote is drawn). It is reasonable to infer, then, that at least five of the Justices agreed to exclude algorithmic content curation from the analysis (so long as that prioritization is not connected to enforcing community rules).
The vast majority of decisions regarding the placement of content on major social media platforms are of this preference-driven variety, so the exclusion is noteworthy. The majority opinion applies only to content moderation decisions based on community rules, not to the content curation decisions that define most social media feeds.
What if an algorithm does both? That is, what if it primarily curates based on user preferences, but also screens content for violations of community guidelines? Are all actions taken by the algorithm safe as long as it does a little bit of moderation? Or are only the moderation decisions safe from regulation? I think it’s fair to say that the answer is unclear.
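To make that ambiguity concrete, here is a deliberately simplified sketch, in Python, of a feed pipeline that does both things at once: it enforces a community rule and then ranks whatever survives by predicted user interest. Everything in it (the banned-phrase list, the engagement scores, the function names) is my own hypothetical illustration, not any platform’s actual system.

```python
# Hypothetical illustration only -- not any platform's real pipeline.
# Each post is a dict with the post text and a precomputed engagement score
# predicting how much this particular user would want to see it.

BANNED_PHRASES = ["example slur"]  # stand-in for a community guideline

def violates_guidelines(post):
    """The 'moderation' step: an editorial rule applied regardless of what the user wants."""
    return any(phrase in post["text"].lower() for phrase in BANNED_PHRASES)

def build_feed(posts):
    """One algorithm doing both jobs the Court treats differently."""
    allowed = [p for p in posts if not violates_guidelines(p)]            # moderation
    return sorted(allowed, key=lambda p: p["engagement"], reverse=True)   # curation

posts = [
    {"text": "A perfectly ordinary post", "engagement": 0.42},
    {"text": "A post containing an example slur", "engagement": 0.97},
]
print(build_feed(posts))  # the higher-engagement post is gone: moderated, not merely ranked
```

The two steps live in the same function and produce a single output, which is precisely why the unanswered question matters: a regulation aimed at the ranking step inevitably touches the filtering step, and vice versa.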
Justice Barrett, in a concurring opinion, offers more explicit skepticism about whether the use of AI in social media content moderation can be considered protected speech:
And what about AI, which is rapidly evolving? What if a platform’s owners hand the reins to an AI tool and ask it simply to remove “hateful” content? If the AI relies on large language models to determine what is “hateful” and should be removed, has a human being with First Amendment rights made an inherently expressive “choice . . . not to propound a particular point of view”?
I’m not totally sure what “an AI relying on large language models” means, but let’s assume that Justice Barrett is referring to an LLM that does content moderation, which is very much already happening. Justice Barrett is unsure whether any decision made by an LLM, even one operating under instructions from humans (as LLMs always are), counts as First Amendment expression.
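For readers unfamiliar with how this works in practice, here is a minimal sketch of LLM-based moderation. The policy text and the call_llm stand-in are hypothetical, and I’ve left the actual model call abstract; the point is simply that a human writes the instructions the model applies to every post.

```python
# Hypothetical sketch of LLM-based content moderation. The policy below is a
# human-authored instruction; the model only applies it to each post.

MODERATION_POLICY = (
    "You are enforcing a platform's community guidelines. "
    "Answer REMOVE if the post is hateful under those guidelines, otherwise answer KEEP."
)

def moderate(post_text, call_llm):
    """Return True if the post should be removed under the human-written policy.

    call_llm is a stand-in for whatever chat-style model API the platform uses;
    it takes a system prompt and a user message and returns the model's reply.
    """
    verdict = call_llm(MODERATION_POLICY, post_text)
    return verdict.strip().upper().startswith("REMOVE")
```

Whether the removal that results reflects “a human being with First Amendment rights” making an expressive choice, or something else entirely, is exactly the question Justice Barrett is raising.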
Justices Alito, Gorsuch, and Thomas, in their joint concurring opinion, are similarly skeptical:
And when AI algorithms make a decision, “even the researchers and programmers creating them don't really understand why the models they have built make the decisions they make.” Are such decisions equally expressive as the decisions made by humans? Should we at least think about this?
Indeed, they should—and so should all of us.
By my count, eight of nine Supreme Court Justices have expressed some degree of skepticism about whether AI content curation and/or moderation decisions should be considered protected speech under the First Amendment. Alito, Gorsuch, Thomas, and Barrett all have expressed doubts about whether any use of AI, whether for curation or moderation, should be protected. Kagan, Roberts, Sotomayor, and Kavanaugh, on the other hand, simply excluded AI-driven curation from their analysis, while seeming to support the notion that AI-enabled moderation is protected speech.
Given the “algorithmic design” laws that have recently passed in states like California (already under legal review), New York, and Maryland, I suspect this is not the last we will hear from the Court on these issues. There are years of litigation still to come, and I’m not sure the early signs look especially favorable to those who are inclined to see algorithms as a form of speech with sweeping Constitutional protections.
This has profound implications for AI as a whole. If algorithms used to automatically curate expressive content for hundreds of millions of Americans are not speech, what does that mean for ChatGPT? What does that mean for Evo, the Arc Institute’s DNA foundation model? And what does that mean for the government’s ability to impose regulations, such as export controls, that would effectively ban open-source AI?
Consider also that AI regulations are unlikely to be explicitly premised on boosting one side of the political divide, as the Texas and Florida laws both were. Instead, such regulations will be premised on national security and protecting against catastrophic harms, areas of the law where courts are often inclined to grant the government more leeway.
This is all very preliminary, so everything here should be taken with a big grain of salt. But the point stands: anyone hoping that the First Amendment is a backdoor to stopping AI regulation should think twice. Wishful thinking, my own included, does not make it so.
But even more intriguingly, the Justices’ questions about AI suggest an alternative path forward. In the next piece, I’ll explore what that might be.