Men on a river, on a vessel afloat, oft see the land move, not their boat.
Mathurin Régnier
I’ve long admired prediction markets as an epistemic tool. They allow individuals to leverage the wisdom of crowds in the same way stock markets do. To name one recent, notable example: prediction markets anticipated the serious prospect of President Biden dropping out of the election well before most experts did. The latent intelligence embedded in prices has been understood for a long time now, though it remains underrated.
Despite this, I’ve always hesitated to use prediction markets myself. In practice, for the kinds of things I am interested in betting on—mostly predictions related to the issues I write about—I have found prediction markets to be wanting for a few reasons:
Many prediction markets are thinly traded, meaning that the prices they compute lack the latent intelligence that makes markets epistemically useful. This is true for the same basic reason that selling a house in a small village with few interested buyers is suboptimal compared to selling a house in a dense city: the city house will likely be easier to price, both because the seller will have more comparable sales on which to base his price, and because there will be more potential buyers. (A toy illustration of this liquidity effect follows this list of reasons.)
It is difficult to operationalize bets about the high-dimensional and complex issues that interest me most. In practice, bets on these things reduce to bets on epiphenomena that seem to me almost random. Recently, for example, I was asked to bet on whether the passage of SB 1047 would cause Meta’s stock to go down. This isn’t a bad question to ponder, but I fail to see how having a financial stake in that particular outcome gives me meaningful “skin in the game,” or accountability, in the broader set of implicit and explicit predictions I have made about SB 1047. I think SB 1047 would be bad for far more interesting reasons than what it would do to Meta’s stock price. In theory, I could identify hundreds of bets like this, Meta’s stock price perhaps being one, but there isn’t enough time in anyone’s day to examine all of my implicit and explicit SB 1047 predictions and operationalize a portfolio of bets on related epiphenomena.
On the flip side, some markets are too broad to be useful on their own. For example, a market on timelines to AGI is interesting, but not, in and of itself, all that useful for rigorously reasoning about that specific question, or about the broader question of what AGI even is (an answer to that question is simply assumed in the way each bet’s resolution rules are codified).
It’s not obvious to me that having a personal financial stake in my predictions would make me a better analyst or writer. In fact, I could argue the opposite: that taking the time to place bets related to the things I write about could result in a kind of intellectual path dependency, making me feel overly committed to opinions I should be holding more loosely. This is also why I ceased actively trading stocks when I began writing.
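To make the first of these problems, thin trading, concrete, here is a toy sketch using Hanson’s logarithmic market scoring rule (LMSR), an automated market maker used by many prediction markets. Nothing here refers to a real venue; the liquidity parameter b simply stands in for market depth. In a thin market (small b), a single modest bet swings the implied probability wildly; in a thick one, it barely registers.

```python
import math

def lmsr_price(q_yes: float, q_no: float, b: float) -> float:
    """Implied probability of YES under the LMSR with liquidity parameter b."""
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

# The same 50-share YES purchase, in a thin market versus a thick one.
for b, label in [(10.0, "thin  (b=10)"), (1000.0, "thick (b=1000)")]:
    before = lmsr_price(0.0, 0.0, b)    # both markets start at 50%
    after = lmsr_price(50.0, 0.0, b)    # ...then one trader buys 50 YES shares
    print(f"{label}: {before:.2%} -> {after:.2%}")

# thin  (b=10):   50.00% -> 99.33%
# thick (b=1000): 50.00% -> 51.25%
```

A price that one small trader can push from 50% to 99% is not aggregating anyone’s knowledge; it is just recording the last trade.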
But the fact remains: prediction markets are an excellent epistemic tool. I want there to be many thickly traded markets on world events, especially those related to my writing. How could we square this circle?
What if an LLM read all my writing, listened to all my podcast appearances, and perhaps even to some of my private or semi-private conversations, and then placed hundreds of micro-bets for me, updating them as my own thinking evolved? What if LLMs did this for everyone who cares about AI, or any other topic? The money I would gain or lose needn’t be significant. If the bets were small, it could be a modest income stream, similar to what most artists get from streaming royalties, or what many mid-sized X accounts receive in revenue sharing. That way, any losses would not be the end of the world for most people. The real value would be the knowledge society could construct.
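Here is a minimal sketch of what that agent loop might look like. To be clear, everything in it is an assumption: the LLM and MarketAPI interfaces, the prompt, the output format, and the five-point disagreement threshold are stand-ins for services and design choices that do not (yet) exist in this form.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Position:
    market_id: str      # an operationalized question, e.g. an SB 1047 outcome
    probability: float  # the probability implied by the author's writing

class LLM(Protocol):
    def complete(self, prompt: str) -> str: ...

class MarketAPI(Protocol):
    def current_price(self, market_id: str) -> float: ...
    def place_order(self, market_id: str, side: str, amount: float) -> None: ...

def extract_positions(llm: LLM, corpus: list[str]) -> list[Position]:
    """Distill a body of writing into explicit probabilities on
    operationalized questions, one 'market_id,probability' pair per line."""
    positions: list[Position] = []
    for document in corpus:
        answer = llm.complete(
            "List every implicit or explicit prediction in this text, "
            "one per line, formatted as market_id,probability:\n\n" + document
        )
        for line in answer.splitlines():
            market_id, sep, prob = line.strip().rpartition(",")
            if not sep:
                continue
            try:
                positions.append(Position(market_id, float(prob)))
            except ValueError:
                continue  # skip lines the model formatted badly
    return positions

def rebalance(market: MarketAPI, positions: list[Position], budget: float) -> None:
    """Place micro-bets wherever the author's implied probability diverges
    from the market price; rerun daily as the corpus (and thinking) evolves."""
    per_bet = budget / max(len(positions), 1)   # keep every stake small
    for p in positions:
        price = market.current_price(p.market_id)
        if abs(p.probability - price) > 0.05:   # trade only on real disagreement
            side = "YES" if p.probability > price else "NO"
            market.place_order(p.market_id, side, per_bet)
```

The interesting design questions are in the rebalancing rule: how aggressively the agent should update as my published views shift, and how to size stakes so that no single bad bet matters.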
What if the debate over the capabilities trajectory of AI, for example, were also operationalized in thousands of prediction markets, thickly traded via micro-bets made on behalf of millions?
And what if other LLMs also surveyed the broader media environment and placed their own bets? If you think of my writing and thinking (or yours) as a kind of one-man intellectual hedge fund, these latter groups would be something like funds of funds.
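For that fund-of-funds layer, the core operation is pooling many agents’ probabilities on the same market into one. A minimal sketch, assuming weighted averaging in log-odds space (a standard way to aggregate probability forecasts; the example numbers and the idea of weighting by track record are invented):

```python
import math

def pool_forecasts(probs: list[float], weights: list[float] | None = None) -> float:
    """Combine several agents' probabilities on one market by weighted
    averaging in log-odds space, then mapping back to a probability.
    Assumes every p satisfies 0 < p < 1."""
    if weights is None:
        weights = [1.0] * len(probs)  # equal weight; could reflect track records
    mean_log_odds = sum(
        w * math.log(p / (1.0 - p)) for p, w in zip(probs, weights)
    ) / sum(weights)
    return 1.0 / (1.0 + math.exp(-mean_log_odds))

# Three single-author agents disagree; the meta-agent bets the pooled estimate.
print(round(pool_forecasts([0.9, 0.6, 0.7]), 2))  # 0.76
```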
What if we could simulate financial markets for every question about the future that concerns us? And what if it cost next to nothing to do? What if, after the work of setting it up was complete, all of this simply carried on each day, with few humans needing to devote much time to maintaining it or thinking about it?
(Financial markets nerds: I took some effort to scope this out, and I think it is feasible, but I fully admit that the exact scheme I’ve laid out might not actually be ideal, or possible; think of this more as a sketch than as a blueprint.)
You don’t need superintelligence, or even AGI, to do this. LLMs with the requisite context windows and intelligence could well be right around the corner. They may even exist now. Indeed, some people are already taking baby steps toward the idea I’ve just described, and Dan Hendrycks announced a new “superhuman” AI forecaster just this week (though I am skeptical that it is meaningfully superhuman, and I suspect “superhuman” is a word we’ll want to use judiciously in the coming years).
This is why technology diffusion is both the most difficult and the most rewarding part of technology development. Innovation—the LLMs, in this case—is great, but it means little if it is not used. Diffusion requires human beings to ask themselves: what is it that I want to be different about the world and what tools can I use to make that change happen? This has always been a difficult question to answer. It is made more difficult because with a new technology, the answer often involves entirely novel things, not one-to-one replacements of existing things.
I suspect that AI diffusion will frequently look like this. It’s not so much that we can automate the work of an investment bank analyst, even though we can likely automate much of it (importantly, though, being able to automate all of it is an altogether different challenge). Instead, the power comes in being able to do things like simulate millions of traders in heretofore untradeable goods.
Things like this, and their second- and third-order consequences, are, in my view, likelier sources of radical technological change over the next 15 years than Dyson spheres or intergalactic travel.
So the real “what if?” to ponder is: what if the idea I’ve just described is one of thousands, or millions, of similarly (or much more) impactful ideas that will come to fruition over the coming decade or two?
All of a sudden, a $7 trillion bill for construction of data centers, semiconductor manufacturing facilities, and electricity generation seems less expensive than it may have at first glance.
And, to invoke SB 1047, what might happen if someone used an LLM to manipulate this nascent market of LLMs, where money is at stake? What if an aggressive prosecutor argues, perhaps plausibly, that additional billions or trillions of dollars of other capital allocation decisions (say, in the stock market) are partially based on these new prediction markets? Could a market manipulation here be considered a critical harm under SB 1047? Remember that the markets only exist because of LLMs, so almost anything bad that happens within these markets was probably “materially enabled” by AI.
All of a sudden, $500 million seems smaller than it may have at first glance.
Remember, too, that the Commodity Futures Trading Commission (CFTC) could simply outlaw this idea, and maybe others, with the stroke of a pen, with minimal public pushback save a few angry X threads, for reasons that superficially seem wise, or at least justifiable (though the CFTC did suffer a setback in court this week in its effort to forbid betting on Congressional elections). Just as there are thousands (or more) of live opportunities with AI, there are thousands (or more) of ways those opportunities could be killed without anyone even noticing. What might have been killed in the roughly 3,000 agency rules (essentially laws) added to the Code of Federal Regulations in 2023 alone? This is to say nothing of the 800 AI bills that have been proposed at the state level, and of the rules that state agencies can also promulgate.
Anyone who thinks American government is insufficiently active does not understand, in a fundamental way, how to analyze American government. The problems of our governments do not stem from a lack of activity.
Technology diffusion, when it is done well, is the kaleidoscopic process by which we discover the purposes of our inventions. But it is also fragile. We cannot measure the cost of ideas that were never tried. The stock market does not crash when ideas no one has yet had are prohibited from existence. There is no death toll for the things that never existed, or the people who were never born.