A Penny For Your Thoughts
March 17, 2026
For at least a little while longer, many people’s jobs will revolve around information: retrieving it, synthesizing it, distilling it, packaging it up and depositing it on their manager’s desk wrapped in a bow. But what’s any of that actually worth?
What Is Information Anyway?
Suppose I tell you that the Sun is going to rise tomorrow. You probably won’t feel particularly inclined to pay me for this insight. Why? Claude (Shannon) would say it’s because I didn’t surprise you. He went ahead and defined surprisal as follows:

$$s(x) = -\log p(x)$$

Here, $p$ is a probability density function describing a random variable $X$, and $x$ represents some specific realized value of $X$. Since $-\log 1 = 0$, a guaranteed outcome has a surprisal of zero. A fair coin, by contrast, has a surprisal of $-\log_2 \tfrac{1}{2} = 1$ bit regardless of how it lands: either outcome is equally surprising. To compute the expected surprisal, we can simply add up the surprisal for each possible $x$, weighted by its probability:

$$H(p) = -\sum_x p(x) \log p(x)$$

This quantity is known as the Shannon entropy.1 We can imagine $H(p)$ as representing the unavoidable surprisal associated with $X$: after all, we’ve assumed the existence of a perfect model $p$.
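These definitions are easy to check directly. A minimal sketch in Python (the function names are mine; base-2 logarithms, so the units are bits):

```python
import math

def surprisal(p: float) -> float:
    """Surprisal of an outcome with probability p, in bits."""
    return -math.log2(p)

def entropy(dist: list[float]) -> float:
    """Shannon entropy: expected surprisal of a discrete distribution, in bits."""
    return sum(p * surprisal(p) for p in dist if p > 0)

print(surprisal(1.0))       # a guaranteed outcome: 0 bits
print(surprisal(0.5))       # either face of a fair coin: 1 bit
print(entropy([0.5, 0.5]))  # expected surprisal of a fair coin: 1 bit
```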
Surprise, Surprise… The Model Is Wrong
If the world is surprising and hard to predict with a good model, it’s even harder to predict with a bad one. Suppose I have a coin which I think has a probability $q$ of landing heads, but the true probability is $p$. When the coin does in fact land heads, I have a surprisal $-\log q$, and when it lands tails, I have $-\log(1 - q)$. But my model is wrong! In the more general case, where $p$ and $q$ represent distributions over possible outcomes, my expected surprisal is now

$$H(p, q) = -\sum_x p(x) \log q(x)$$

Luckily, this quantity has a helpful name! It is called the cross-entropy of $p$ and $q$. The difference between the cross-entropy and the true entropy tells us how much extra surprisal our wrong model costs us:

$$D_{\mathrm{KL}}(p \,\|\, q) = H(p, q) - H(p) = \sum_x p(x) \log \frac{p(x)}{q(x)}$$

This is the Kullback-Leibler divergence, and it is always non-negative — an incorrect model can never make results less surprising than a correct one. Furthermore, if you’re willing to accept Shannon’s notion of surprisal as an accounting scheme for information, then we’re getting close to an answer. The value (in ethereal information units) of receiving the correct model $p$ when you previously had only $q$ is precisely $D_{\mathrm{KL}}(p \,\|\, q)$.
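The relationship between these quantities is easy to verify numerically. A short sketch (function names are mine; logarithms base 2 again):

```python
import math

def cross_entropy(p: list[float], q: list[float]) -> float:
    """Expected surprisal, in bits, when outcomes follow p but we model them with q."""
    return sum(pi * -math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

def entropy(p: list[float]) -> float:
    """Shannon entropy is just the cross-entropy of p with itself."""
    return cross_entropy(p, p)

def kl_divergence(p: list[float], q: list[float]) -> float:
    """Extra surprisal paid for using the wrong model q: H(p, q) - H(p)."""
    return cross_entropy(p, q) - entropy(p)

p = [0.7, 0.3]  # the coin's true distribution
q = [0.5, 0.5]  # my mistaken model
print(kl_divergence(p, q))  # positive: the wrong model costs extra surprisal
print(kl_divergence(p, p))  # the correct model costs nothing
```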
Buckle Up, It’s Prediction Market Time
A prediction market trading at a price $p$ is many things, one of which is a probabilistic model of reality. For anybody making a decision impacted by the market’s resolution, the accuracy of that model matters. Traders who make the price more accurate should, ideally, be compensated in proportion to the value they provide. We now know how to measure informational contributions! What’s left is to design a market structure that enforces fair compensation. Luckily, Robin Hanson already did. In 2002, he proposed a logarithmic market scoring rule (LMSR) as a mechanism for information aggregation. Trading affects the marginal price of the market via a simple formula:

$$p(q) = \frac{e^{q/b}}{1 + e^{q/b}}$$

Here $q$ represents the signed quantity of contracts which have been bought or sold by market participants, and $b$ is a liquidity parameter. The average price for $\Delta q$ contracts from an initial market state $q_0$ is then simply:

$$\bar{p} = \frac{C(q_0 + \Delta q) - C(q_0)}{\Delta q}, \qquad C(q) = b \log\left(1 + e^{q/b}\right)$$

Suppose a trader wants to move the market from an initial price $p_0$ to a fair price $p^*$. Sparing you the gory details, they will need to purchase exactly the following number of contracts:

$$\Delta q = b\left[\log\frac{p^*}{1 - p^*} - \log\frac{p_0}{1 - p_0}\right]$$

They will do so at the following total cost:

$$C(q_0 + \Delta q) - C(q_0) = b \log\frac{1 - p_0}{1 - p^*}$$

Such a trader would then own assets worth $p^* \Delta q$ acquired for $b \log\frac{1 - p_0}{1 - p^*}$, giving them the following PnL:

$$\mathrm{PnL} = p^* \Delta q - b \log\frac{1 - p_0}{1 - p^*} = b\, D_{\mathrm{KL}}(p^* \,\|\, p_0)$$

It’s worth highlighting that this is really the expected value of their PnL in the future. Much as Bayesian statisticians might wish it were otherwise, there’s no way to go back and retroactively declare that $p^*$ was the correct market prior. If the market resolves to $0$, our clever trader made $b\, D_{\mathrm{KL}}(p^* \,\|\, p_0)$ in expectation only to lose $b \log\frac{1 - p_0}{1 - p^*}$ in reality.2
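The identity between expected PnL and the KL divergence can be checked numerically. Below is a minimal sketch of a binary-outcome LMSR (natural logarithms throughout, so the divergence comes out in nats; the liquidity parameter and prices are arbitrary choices for illustration):

```python
import math

b = 100.0  # liquidity parameter (arbitrary for this example)

def cost(q: float) -> float:
    """LMSR cost function C(q) = b * log(1 + exp(q / b))."""
    return b * math.log1p(math.exp(q / b))

def price(q: float) -> float:
    """Marginal price implied by q net contracts outstanding, in (0, 1)."""
    return math.exp(q / b) / (1.0 + math.exp(q / b))

def quantity(p: float) -> float:
    """Inverse of price(): net contracts outstanding at market price p."""
    return b * math.log(p / (1.0 - p))

def kl(p: float, q: float) -> float:
    """KL divergence, in nats, between Bernoulli(p) and Bernoulli(q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

p0, p_star = 0.5, 0.7                    # initial and fair prices
dq = quantity(p_star) - quantity(p0)     # contracts the trader must buy
paid = cost(quantity(p_star)) - cost(quantity(p0))
expected_pnl = p_star * dq - paid        # each contract pays 1 with prob. p*
print(expected_pnl, b * kl(p_star, p0))  # the two values agree
```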
No More Second Opinions
Time for a thought experiment. Suppose you’re concerned that you may have a serious but rare disease. You consult your primary care physician, who estimates a probability $\pi$ of infection, but emphasizes her own lack of experience with the condition. You seek a second opinion from a leading expert in the field. He consults the available data and informs you that exactly a fraction $\pi$ of cases like yours are true positives. Have you gained any valuable information? Not on an LMSR you haven’t. And in a narrow sense, we can prove that having received the second opinion won’t change any (rational) decision making process. Let $D$ denote the presence of the disease, $T$ the treatment strategy you select, and let $U(T, D)$ be your subjective utility. You should presumably choose $T$ such that

$$T = \arg\max_{T'} \sum_d P(D = d)\, U(T', d)$$

This would suggest that uncertainty in the value of $P(D)$ itself shouldn’t influence your actions. But is that really right?
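Since expected utility is linear in $P(D)$, two belief states with the same mean probability of disease select the same treatment. A toy sketch makes this concrete (all utility numbers and probabilities here are invented for illustration):

```python
# Hypothetical utilities U[treatment][disease_present]
U = {
    "treat": {True: -10.0, False: -3.0},  # treatment works but has side effects
    "wait":  {True: -50.0, False:  0.0},  # missing the disease is very costly
}

def best_treatment(p_disease: float) -> str:
    """Choose the treatment T maximizing expected utility under P(D) = p_disease."""
    return max(U, key=lambda t: p_disease * U[t][True]
                                + (1 - p_disease) * U[t][False])

confident = 0.10                     # you are certain P(D) = 10%
uncertain = 0.5 * 0.05 + 0.5 * 0.15  # P(D) is 5% or 15%, averaging to 10%
print(best_treatment(confident), best_treatment(uncertain))  # same decision
```

Because the maximization only ever sees the mean of your belief over $P(D)$, the confident and uncertain patients act identically.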
Second Order Uncertainty
Confidence feels good. Knowing what will happen is definitely better than knowing the probability that something happens. And it also feels better to know the probability than to wildly guess at it. There are at least three different cases in which that intuition can be justified mathematically:
- You’re a market maker, a geopolitical strategist, or any other participant in a multiplayer environment. Your biggest fear is that somebody else has a different prior than you and their prior is better. Knowing that there does not exist information which would change your prior is therefore valuable.
- The outcome you actually care about depends on multiple random events, many of which are conditionally dependent upon the same unknown parameter $\theta$. Uncertainty over $\theta$ won’t affect the expected value or the variance of a single coin flip, but it will affect the variance of multiple flips of a fixed coin.
- A signal with realized informational value $0$ could still have been valuable in expectation. Your primary care physician presumably believed that the expert would, on average, also report a probability of $\pi$. If she believed anything else, she should have adjusted her own prior. But your physician is far from certain as to what the expert will say, and a response other than $\pi$ would have contributed information legible to an LMSR. By asking, you obtain a signal with positive expected information.
In other words, you should probably still get a second opinion.
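The last point can be made concrete with a small sketch. The numbers are hypothetical: suppose your prior matches the physician’s estimate, and you expect the expert to report one of two values that average out to that same prior. The expected informational value of hearing his answer is still positive:

```python
from math import log

def kl_bernoulli(p: float, q: float) -> float:
    """Information value, in nats, of updating a probability from q to p."""
    return sum(a * log(a / c) for a, c in ((p, q), (1 - p, 1 - q)) if a > 0)

prior = 0.10                      # hypothetical: your prior matches the physician's
reports = {0.05: 0.5, 0.15: 0.5}  # what the expert might say, with your weights

mean_report = sum(r * w for r, w in reports.items())
expected_info = sum(w * kl_bernoulli(r, prior) for r, w in reports.items())

print(mean_report)    # on average the expert simply agrees with the prior
print(expected_info)  # yet the consultation carries positive expected information
```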
What’s Next
Stay tuned for an upcoming article outlining practical strategies for integrating with the Talarion API.
Unless I can’t help myself and write the entropy article first.
1 Physics enthusiasts will note that this differs only by a factor of Boltzmann’s constant $k_B$ from the definition of entropy in statistical physics. I could barely restrain myself from talking about this here, and will likely be unable to restrain myself from writing an article about it in the future.
2 This risk exposes a subtle issue for LMSRs (or any other type of market) as a tool for information aggregation. Assuming that traders have any risk aversion, they will be incentivized to stop short of their true fair value.