
There have been some discussions about prediction markets on the EA Forum, and in general, prediction markets seem pretty popular in EA circles. So I thought people on the EA Forum might find this blog post by Dynomight interesting; I think it articulates an important issue we face when trying to interpret conditional prediction markets (the fact that conditionality does not necessarily imply causality) — as well as some potential solutions. The post was written for a general audience, and, as it says at the top (in the real post, not this link-post), people more familiar with conditional prediction markets might want to skip to section 3 or even to section 6.

(Please note that I haven't read the post carefully.)

Here are some excerpts (shared with permission): 

Examples of conditionality not implying causality

2. 

People worry about prediction markets for lots of reasons. Maybe someone will manipulate prices for political reasons. Maybe fees will distort prices. Maybe you’ll go Dr. Evil and bet that emissions will go up and then go emit a gazillion tons of CO₂ to ensure that you win. Valid concerns, but let’s ignore them and assume markets output “true” probabilities.

Now, what would explain the odds of emissions going up being higher with the treaty than without? The obvious explanation is that the market thinks the treaty will cause emissions to go up:

Treaty becomes law → Emissions go up

Totally plausible. But maybe the market thinks something else. Maybe the treaty does nothing but voters believe it does something, so emissions going up would cause the treaty to be signed:

Emissions go up → Climate does scary things → People freak out → People demand treaty → Treaty becomes law

In this chain of events, the treaty acts as a kind of “emissions have gone up” award. Even though signing the treaty has no effect on emissions, the fact that it became law increases the odds that emissions have increased. You could still get the same probabilities as in a world where the treaty caused increased emissions.

3. 

Here’s a market that actually exists (albeit with internet points instead of money): “Conditional on NATO declaring a No-Fly Zone anywhere in Ukraine, will a nuclear weapon be launched in combat in 2022?”

This market currently says

P[launch | declare] = 18%,

P[launch | don’t declare] = 5.4%.

Technically there is no market for P[launch | don’t declare] but you can find an implied price using (1) the market for P[launch] (2) the market for P[declare] and (3) the ᴘᴏᴡᴇʀ ᴏꜰ ᴍᴀᴛʜ. [...]
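For concreteness, here is the rearranged law of total probability that produces the implied price (a minimal sketch; the values for P[declare] and P[launch] below are hypothetical, chosen only so the output matches the quoted 5.4%):

```python
# Law of total probability:
#   P[launch] = P[launch|declare] * P[declare]
#             + P[launch|don't declare] * (1 - P[declare])
# Rearranged to back out the implied price P[launch|don't declare].

p_launch_given_declare = 0.18  # quoted market price
p_declare = 0.10               # hypothetical price for P[declare]
p_launch = 0.0666              # hypothetical price for P[launch]

p_launch_given_not_declare = (
    p_launch - p_launch_given_declare * p_declare
) / (1 - p_declare)

print(round(p_launch_given_not_declare, 3))  # 0.054, i.e. 5.4%
```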

So launch is 3.3x more likely given declare than given don’t declare. The obvious way of looking at this would be that NATO declaring a no-fly zone would increase the odds of a nuclear launch:

NATO declares no-fly zone → NATO and Russian planes clash over Ukraine → Conflict escalates → Nuclear weapon launched

That’s probably the right interpretation. But not necessarily. For example, do we really know the mettle of NATO leaders? It could be that declaring a no-fly zone has no direct impact on the odds of a launch, but the fact that NATO declares one reveals that NATO leaders have aggressive temperaments and are thus more likely to take other aggressive actions (note the first arrow points up):

NATO declares no-fly zone ← NATO leaders are aggressive → NATO sends NATO tanks to Ukraine → NATO and Russian tanks clash in Ukraine → Nuclear weapon launched

This could also explain the current probabilities.

[...]

A mid-post summary of the argument (up to that point)

So far, this article has made this argument:

  1. You can use conditional prediction markets to get the probability of outcome B given different actions A.
  2. But just because changing the value of A changes the conditional probability of B doesn’t mean that doing A changes the probability of B.
  3. For that to be true, you need a particular causal structure for the variables being studied. (No causal path from B to A, no variable C with a causal path to both A and B)
  4. You can guarantee the right causal structure by randomizing the choice of A. If you do that, then conditional prediction market prices do imply causation.

Basically: If you run a prediction market to predict correlations, you get correlations. If you run a prediction market to predict the outcome of a randomized trial, you get causality. But to incentivize people to predict the outcomes of a randomized trial you have to actually run a randomized trial, and this is costly.
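To make steps 2–4 concrete, here is a toy simulation (my own sketch, not from the post): a hidden "scare" variable drives both treaty adoption and rising emissions, while the treaty itself does nothing. Conditioning on the treaty overstates its effect; randomising it removes the illusion:

```python
import random

def world(randomize_treaty: bool):
    scare = random.random() < 0.5                             # confounder C
    if randomize_treaty:
        treaty = random.random() < 0.5                        # A set by coin flip
    else:
        treaty = random.random() < (0.8 if scare else 0.2)    # C -> A
    emissions_up = random.random() < (0.8 if scare else 0.2)  # C -> B; no A -> B
    return treaty, emissions_up

def p_up_given_treaty(randomize: bool, n: int = 100_000) -> float:
    draws = [world(randomize) for _ in range(n)]
    treated = [up for treaty, up in draws if treaty]
    return sum(treated) / len(treated)

print(p_up_given_treaty(randomize=False))  # ~0.68: correlation without causation
print(p_up_given_treaty(randomize=True))   # ~0.50: randomisation recovers the truth
```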

Some potential solutions to the problem

  1. "Get the arrows right." Find careful markets to run such that the causal structure is ok (no reverse causality, no confounders, and you have a safe conclusion — explained in the post)
  2. "Commit to randomization" — "randomize decisions sometimes, at random." (Explained in the post.) (There's also a sketch of a proposal for getting lots of information about the world at the cost of running a few very expensive RCTs.) 
  3. "Bet re-weighting" (explained in the post)
  4. "Natural experiments" (explained in the post)
  5. "The arrow of time" (explained in the post, resolves reverse causality)
  6. "Controlled conditional prediction markets" — trying to add all relevant control variables about the possible confounders — explained in the post. 


Comments (19)

I'm surprised the author doesn't offer "the market decides" as a solution to this. The original idea of decision markets is that the actions are taken on the basis of market prices, and under this structure causality seems like it might be handled just fine. 

I don't have a rigorous proof of this - proof is difficult because decision theories tend to have vague "I know it when I see it" definitions to begin with. However, we can at least see that the original author's objections are answered. Suppose that the market prices express expectations E[Y|a] and E[Y|b] for some outcome Y and some pair of options a and b. The author worries that whether a or b is chosen might be informed by some other events or states of the world which, if they transpired or were known to hold, would modify E[Y|a] and E[Y|b]. But if the choice is determined by the closing price of the market, then there obviously cannot be any events or states of the world that inform the choice but not the closing price.

It's not obvious to me that such markets can successfully integrate all of the available information by the time it closes. The closing price can, in general, reflect information about the world not reflected by the price before closing, and the price before closing is trying to anticipate any such developments. It seems like it usually ought to converge, but I can imagine there might be some way to bake self-reference into the market such that it does not converge. Also, once it becomes clear that one choice is preferred to another, there's little incentive to trade the loser, but this might not be much of a problem in practice. If convergence is a problem, adding some randomisation to the choice might help.

Also, there's always a way to implement "the market decides". Instead of asking P(Emissions|treaty), ask P(Emissions|market advises treaty), and make the market advice = the closing prices. This obviously won't be very helpful if no-one is likely to listen to the market, but again the point is to think about markets that people are likely to listen to.
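A minimal sketch of that wiring (my own illustration; the price numbers are made up): define the advice as whichever conditional market closes higher, so nothing can inform the choice except through the closing prices.

```python
def advice(closing_prices: dict[str, float]) -> str:
    """The market's advice is the action whose conditional market closes highest."""
    return max(closing_prices, key=closing_prices.get)

# Hypothetical closing prices for P(Emissions | market advises X):
prices = {"treaty": 0.41, "no treaty": 0.35}
chosen = advice(prices)
print(chosen)  # "treaty": only the market conditioned on this advice activates
```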

Certainly, if your decision is a deterministic function of the final market price, then there's no way that any hidden information can influence the decision except via the market price. However, what I worry about here is: Do investors in such a market still have the right incentives—will they produce the same prices as they would if the decision was guaranteed to be made randomly? That might be true—and I can't easily come up with a counterexample—but it would be nice to have an argument. Do I correctly understand your second to last paragraph as meaning that you aren't sure of this either?

Just a quick note: I wrote a post on issues with Futarchy a while back. (I haven't read it in months, have changed my mind on a number of things since then — some of which would probably affect my arguments in that post, and don't know how much of it I'd still endorse, but am sharing it in case it's useful.)

Yeah, I’m also not sure. The main issue I see is whether we can be confident that the loser is really worse without randomising (I don’t expect the price of the loser to accurately tell us how much worse it is).

Edit: turns out that this question has been partially addressed. They sort of say “no”, but I’m not convinced. In their derivation of incompatible incentives, they condition on the final price, but traders are actually going to be calculating an expectation over final prices. They discuss an example where, if the losing action is priced too low, there’s an incentive to manipulate the market to make that action win. However, the risk of such manipulation is also an incentive to correctly price the loser, even if you’re not planning on manipulation.

I think it definitely breaks if the consequences depend on the price and the choice (in which case, I think what goes wrong is that you can’t expect the market to converge to the right probability).

E.g. there is one box, and the market can open it (a) or not (b). The choice is 75% determined by the market prices and 25% determined by a coin flip. A “powerful computer” (jokes) has specified that the box will be filled with $1m if the market price favours b, and nothing otherwise.

So, whenever the market price favours b, a contracts are conditionally worth $1m (or whatever). However, b contracts are always seemingly worthless, and as soon as a contracts are worth more than b they’re also worthless. There might be an equilibrium where b gets bid up to $250k and a to $250k − ε, but this doesn’t reflect the conditional probability of outcomes, and in fact a is the better outcome in spite of its lower price.

I’m playing a bit loose with the payouts here, but I don’t think it matters.

OK I tried to think of an intuitive example where using the market could cause heavy distortions in incentives. Maybe something like the following works?

  • Suppose that we are betting on if a certain coin will come up heads if flipped. If the market is above 50% the coin is flipped and bets activate. If the market is below 50% the coin is not flipped and bets are returned.
  • I happen to know that the coin either ALWAYS comes up heads or ALWAYS comes up tails. I don't know which of these is true, but I think there is a 60% chance the coin is all-heads and a 40% chance the coin is all-tails.
  • Furthermore, I know that the coin will tomorrow be laser scanned and the laser scan published. This means that after tomorrow everyone will realize the coin is either all-heads or all-tails.
  • Ideally, I would have an incentive to buy if the market price is below 60% and sell if the market price is above 60% (to reveal my true probability).
  • But in reality, I would be happy to buy at a price up to 99%. Because: even at 99%, if the coin is revealed to be all-tails, the market price will collapse below 50%, the coin won't be flipped, and my bets will be returned, so I lose nothing.

If I've got that right, then having the market make decisions could be very harmful. (Let me know if this example isn't clear.)
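A quick expected-value check of the example above (a sketch; prices are per $1 contract):

```python
p_all_heads = 0.60  # my belief that the coin is all-heads
price = 0.99        # price paid per $1 "heads" contract

# All-heads revealed: market stays above 50%, coin is flipped, contract pays $1.
# All-tails revealed: market collapses below 50%, no flip, the stake is refunded.
expected_profit = p_all_heads * (1 - price) + (1 - p_all_heads) * 0
print(round(expected_profit, 3))  # 0.006 > 0: buying at 99c is +EV
```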

In this case, either the price finalises before the scan and no collapse happens, or it finalises after the scan and so the information from the scan is incorporated into the price at the time that it informs the decision. So as long as you aren’t jumping the gun and making decisions based on the non-final price, I don’t think this fails in a straightforward way.

But I’m really not sure whether or not it fails in a complicated way. Suppose if the market is below 50%, the coin is still flipped but tails pays out instead (I think this is closer to the standard scheme). Suppose both heads and tails are priced at 99c before the scan. After a scan that shows “heads”, there’s not much point to buy more heads. However, if you shorted tails and you’re able to push the price of heads very low, you’re in a great spot. The market ends up being on tails, and you profit from selling all those worthless tails contracts at 99c (even if you pay, say, 60c for them in order to keep the price above heads). In fact, if you’re sure the market will exploit this opportunity in the end, there is expected value in shorting both contracts before the scan - and this is true at any price! Obviously we shouldn’t be 100% confident it will be exploited. However, if both heads and tails trade for 99c prior to the scan then you lose essentially nothing by shorting both, and you therefore might expect many other people to also want to be short both and so the chance of manipulation might be high.

A wild guess: I think both prices close to $1 might be a strong enough signal of the failure of a manipulation attempt to outweigh the incentive to try.

I was thinking about a scenario where the scan has not yet happened, but the scan will happen before prices finalize. In that scenario at a minimum, you are not incentivized to bid according to your true beliefs of what will happen. Maybe that incentive disappears before the market finalizes in this particular case, but it's still pretty disturbing—to me it suggests that the basic idea of having the market make the choices is a dangerous one. Even if the incentives problem were to go away before finalization in general (which is unclear to me) it still means that earlier market prices won't work properly for sharing information.

In this case it would be best to use the language of counterfactuals (aka potential outcomes) instead of conditional expectations. In practice, the market would estimate E[Y(a)] and E[Y(b)] for the two random functions Y(a) and Y(b), and you would choose the option with the highest estimated expected value. There is no need to put conditional probability into the mix at all, and it's probably best not to, as there is no obvious probability to assign to the "events" a and b.

You can bet not on probabilities but on utility, see e.g. the futarchy specification by Hanson (Lizka's summary and notes).

Phrasing it in terms of potential outcomes could definitely help the understanding of people who use that approach to talk about causal questions (which is a lot of people!). I’m not sure it helps anyone else, though. Under the standard account, the price of a prediction market is a probability estimate, modulo the assumption that utility = money (which is independent of the present concerns). So we’d need to offer an argument that conditional probability = marginal probability of potential outcomes.

Potential outcomes are IMO in the same boat as decision theories - their interpretation depends on a vague “I know it when I see it” type of notion. However we deal with that, I expect the story ends up sounding quite similar to my original comment - the critical step is that the choice does not depend on anything but the closing price.

a and b definitely are events, though! We could create a separate market on how the decision market resolves, and it will resolve unambiguously.

Potential outcomes are very clearly and rigorously defined as collections of separate random variables, there is no "I know it when I see it" involved. In this case you choose between two options, and there is no conditional probability involved unless you actually need it for estimation purposes.

Let's put it a different way. You have the choice of flipping one of two coins, either a blue coin or a red coin. You estimate the expected probability of heads as E[p_blue] and E[p_red]. You base your choice of which coin to toss on which probability is larger. There is actually no need to use scary-sounding terms like counterfactuals or potential outcomes at all, you're just choosing between random outcomes.

We could create a separate market on how the decision market resolves, and it will resolve unambiguously.

That sounds like an unnecessarily convoluted solution to a question we do not need to solve!

However we deal with that, I expect the story ends up sounding quite similar to my original comment - the critical step is that the choice does not depend on anything but the closing price.

Yes, I agree. And that's why I believe we shouldn't use conditional probabilities at all, as doing so makes this kind of confusion possible.

The definition of potential outcomes you refer to does not allow us to answer the question of whether they are estimated by the market in question.

The essence of all the decision theoretic paradoxes is that everyone agrees that we need some function options -> distributions over consequences to make decisions, and no one knows how exactly to explain what that function is.

Sorry, but I don't understand what you mean.

Here's the context I'm thinking about. Say you have two options a and b. They have different true expected values E[Y(a)] and E[Y(b)]. The market estimates their expectations as Ê[Y(a)] and Ê[Y(b)]. And you (or the decider) choose the option with the highest estimated expectation. (I was unclear about estimation vs. true values in my previous comment.)

Does this have something to do with your remarks here?

Also, there's always a way to implement "the market decides". Instead of asking P(Emissions|treaty), ask P(Emissions|market advises treaty), and make the market advice = the closing prices. This obviously won't be very helpful if no-one is likely to listen to the market, but again the point is to think about markets that people are likely to listen to.

I believe we agree on the following: we evaluate the desirability of each available option a by appealing to some map F from options to distributions over the consequences of interest Y.

We also both suggest that maybe F should be equal to the map a ↦ Q(a), where Q(a) is the closing price of the decision market conditional on a.

You say the price map is equal to the map a ↦ E[Y(a)], I say it is equal to a ↦ E[Y | a], where the expectation is with respect to some predictive subjective probability.

The reason why I make this claim is due to work like Chen 2009, which finds, under certain conditions, that prediction market prices reflect predictive subjective probabilities, and so I identify the prices with predictive subjective probabilities. I don’t think any similar work exists for potential outcomes.

The main question is: is the price map really the right function F? This is a famously controversial question, and causal decision theorists say you shouldn’t always use subjective conditional probabilities to decide what to do (see Newcomb etc.). On the basis of results like Chen’s, I surmise that causal decision theorists at least don’t necessarily agree that the closing prices of the decision market define the right kind of function, because they are subjective conditional probabilities (but the devil might be in the details).

Now, let’s try to solve the problem with potential outcomes. Potential outcomes have two faces. On the one hand, Y(a) is a random variable equal to Y in the event a (this is called consistency). But there are many such variables - notably, Y itself. The other face of potential outcomes is that Y(a) should be interpreted as representing a counterfactual variable in the event not-a. What potential outcomes don’t come with is a precise theory of counterfactual variables. This is the reason for my “I know it when I see it” comment.

Here’s how you could argue that Q(a) = E[Y(a) | Q]: first, suppose it’s a decision market with randomisation, so the choice A is jointly determined by the price and some physical random signal R. Assume Y(a) ⟂ R - this is our “theory of counterfactual variables”. By determinism, we also have A = d(R, Q), where Q is the closing price of the pair of markets. By contraction Y(a) ⟂ A | Q, and the result follows from consistency (apologies if this is overly brief). Then we also say F is the function a ↦ E[Y(a) | Q] and we conclude that indeed F(a) = Q(a).
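For reference, here is a compact version of that identification step in my notation (a sketch, assuming the independence Y(a) ⟂ A | Q argued above):

```latex
\begin{align*}
E[Y \mid A = a, Q]
  &= E[Y(a) \mid A = a, Q] && \text{(consistency: } Y = Y(a) \text{ on } \{A = a\}\text{)} \\
  &= E[Y(a) \mid Q]        && \text{(randomisation: } Y(a) \perp A \mid Q\text{)}
\end{align*}
```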

This is nicer than I expected, but I figure you could go through basically the same reasoning, but with F directly. Assume F(a) ⟂ R and A = d(R, Q) (and similarly for b). Then by similar reasoning we get F(a) = Q(a) (noting that, by assumption, the choice depends on nothing but R and Q).

I’ll get back to you

I agree.

Please may I include this in the Prediction Markets section of the EA Forum wiki? Not as a tag, but summarised in the main body.

Likewise, I'd like to include some other similar issues. (eg those listed here https://docs.google.com/document/d/10_NXeK042lgoFzUrcBOQoRENDRAF0eWnQ5c57i9RIK8/edit#heading=h.7smpime4d3i6 )

For those confused as to why I ask, the Forum wiki has an explicit norm that wiki posts are short introductions rather than attempted summaries. 

For the traditional firing-the-CEO case, you can solve this by firing the CEO 1% of the time at random, and having people bet on the outcome for the company in that case.

You could even make it so that you have 100 fire the CEO markets and you'll always choose one and only one to fire, which might be more attractive to market participants. 
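A toy sketch of that variant (my own illustration; the resolution rule is my assumption): run N conditional fire-the-CEO markets, commit to executing exactly one firing chosen uniformly at random, activate bets in the chosen market, and refund the rest.

```python
import random

N = 100
markets = [f"fire_CEO_{i}" for i in range(N)]
executed = random.choice(markets)  # the one firing that actually happens

for market in markets:
    if market == executed:
        print(market, "-> fires: bets activate (a genuinely randomised intervention)")
    else:
        print(market, "-> no firing: bets refunded")
```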

Ignoring the exponential blowup, one could have a prediction market over all the causal models to elicit the best one (including an option "all of these are wrong/important variables are missing").

[on reflection, this seems hard unless you commit to doing a bunch of experiments or otherwise have a way to get the right outcome]

Then, with a presumptively trustworthy causal model, the "make adjustments to observational data" approach to estimating effects from other markets would be more reliable.
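For instance, if the elicited model said C is the only confounder, the adjustment would be the standard backdoor formula (a sketch with made-up numbers):

```python
# Backdoor adjustment: P(B=1 | do(A=1)) = sum_c P(B=1 | A=1, C=c) * P(C=c)
p_c = {0: 0.5, 1: 0.5}             # P(C = c), e.g. from a market on C
p_b_given_a1_c = {0: 0.2, 1: 0.8}  # P(B=1 | A=1, C=c), from controlled markets

p_b_do_a1 = sum(p_b_given_a1_c[c] * p_c[c] for c in p_c)
print(p_b_do_a1)  # 0.5: the causal effect after adjusting for C
```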

However, it feels like it could be the case that trying to do both of these things at once might screw up the incentives -- in other cases there are sometimes impossibility results like this.

"How can we design mechanisms to elicit causal information, not just distributional properties" seems like a really interesting question that seemingly hasn't received much attention.
