9 Comments

I think one aspect to consider here, though, is that prediction markets are zero-sum. So most losers would have an incentive to drop out. I still think the market would run, but it'd be a bit weird. I'm not sure exactly what the equilibrium would be.


Not all prediction markets are binary outcomes. There are scalar markets, like the Iowa Electronic Markets' presidential vote-share markets, where a contract pays out in proportion to the candidate's final vote share rather than a fixed amount on a yes/no question.

Sep 17 · Liked by Dean W. Ball

Several tools are also moving towards this goal: Presagio, PredX, Konsensus, and custom GPTs.

Sep 17 · Liked by Dean W. Ball

1. "Many prediction markets are thinly traded, meaning that the prices they compute lack the latent intelligence that makes markets epistemically useful" -- a single great trade by a superforecaster can contain a lot of latent intelligence. (though agree that the incentive for the effort required by that trade is lacking)

2. Good point. Prediction markets, much like most rationalist thinking, exist only in the realm of what can plausibly be measured in an unbiased manner. It is hard to exit that realm without inviting bad faith and bad actors. An okay solution might be to peg prediction markets to decisions by randomly selected juries.

3. I think the main problem with these markets isn't so much the broadness of the definition, but rather the consequentiality of something many would describe as AGI. Many, including myself, have difficulty modeling the financial or reputational upside of having correctly predicted AGI in a post-AGI world.

4. Curious how you would have an LLM distinguish between your loosely held and strongly held opinions when making trades. Also, I'm sensing the main point is that you want reputational skin in the game rather than financial skin in the game, and an LLM making micro-bets would be a good way to rigorously enforce that without exposing you to the distraction of massive financial risk. Am I reading that right, and if so, why not just trade play money?

author

Agree with you on points 1-3.

Re: point 4, I think an LLM would probably natively be pretty good at grokking what I feel strongly about and what I am less certain of (to the extent I communicated that properly), and could also run its assessments by me for verification. You are reading the financial vs. reputational point correctly, so in principle there is nothing wrong with trading play money. However, I think the prospect of some financial return would probably be a useful incentive, even if it is small.

In practice, I could be completely wrong about that in either direction--maybe people will just enjoy using play money. Or maybe a small return isn't sufficient.
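To make that concrete, here is a rough sketch of how conviction and divergence from the market might translate into capped micro-bets. This is purely illustrative -- nothing in the post specifies an implementation, the belief-extraction step (the LLM actually reading my writing and podcasts) is stubbed out, and every name and parameter below is an assumption of mine:

```python
# Hypothetical sketch: sizing small prediction-market bets from beliefs an
# LLM has extracted, weighted by how strongly each view is held. All names,
# thresholds, and numbers are illustrative, not from the post.

from dataclasses import dataclass

@dataclass
class Belief:
    market: str          # prediction-market question the belief maps to
    p_estimate: float    # my implied probability, as inferred by the LLM
    conviction: float    # 0.0 (loosely held) .. 1.0 (strongly held)

def size_micro_bet(belief: Belief, market_price: float,
                   bankroll: float = 100.0, max_stake: float = 5.0) -> float:
    """Stake a small amount proportional to edge and conviction.

    Fractional-Kelly-style rule: bet only when my estimate diverges from the
    market, scale by conviction, and cap the stake so no single position is
    a meaningful financial risk.
    """
    edge = belief.p_estimate - market_price
    if abs(edge) < 0.05:          # ignore tiny disagreements with the market
        return 0.0
    # Kelly fraction for buying "yes" is edge / (1 - price); for "no", |edge| / price.
    kelly_fraction = abs(edge) / (1 - market_price if edge > 0 else market_price)
    stake = bankroll * kelly_fraction * belief.conviction * 0.1  # 10% of Kelly
    return min(stake, max_stake) * (1 if edge > 0 else -1)  # sign = buy vs. sell

# A strongly held view that diverges from the market gets a small long position;
# a loosely held view near the market price gets skipped.
strong = Belief("Frontier lab ships an agentic model by 2026", 0.70, 0.9)
weak = Belief("US passes federal AI licensing in 2025", 0.35, 0.3)
print(size_micro_bet(strong, market_price=0.50))  # small positive stake
print(size_micro_bet(weak, market_price=0.33))    # 0.0, no trade
```

The cap and the 10%-of-Kelly scaling are the point: each position stays at "reputational skin in the game" size, with only a token financial return, rather than becoming a real financial exposure.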


"What if an LLM read all my writing, listened to all my podcast appearances, and perhaps even to some of my private or semi-private conversations, and then placed hundreds of micro-bets for me, updating them as my own thinking evolved?"

I love this idea, though presumably all the alpha is in your private and semi-private conversations. Anything public could be fed into any single LLM trader -- no reason for it to be your personal LLM, except insofar as you think the curation of your personal media diet has more information value than the superset of all relevant media.

author

Well, I do think the portfolio of ideas represented by my own public writing and speaking carries a kind of alpha over the bets implied by the broader set of writing about AI (it would be weird if I took the time to write and *didn't* believe that, right?) -- if I could make an allocation to the Hammond portfolio, I would too!


My only point is that those writings are public and available to any other trader. If the LLM trader that gets fed Dean Ball pieces outperforms, everyone will start including Dean Ball pieces in their dataset. That would help reveal which analyst is most worth listening to, but to the extent it's published publicly, the analyst themself doesn't capture the excess return. In equilibrium, the best analysts may therefore stop sharing their insights publicly, or paywall / delay release to non-subscribers, just as hedge funds and many deeply researched trade publications do now.

author

True--though I wonder to what extent this already happens (my suspicion is that in AI, the people who know the most don't speak in public that often, and when they do, it's at a very high level). Plus, the LLM trader need not publicly reveal what is in its portfolio!
