Can one estimate the uncertainty over the estimated outcome probabilities of an event?


Say I'm making the following bet: I flip a coin. If it's heads, I win \$2; if it's tails, I lose \$1. The coin is not necessarily fair: it comes up heads with some probability $p\in[0,1]$.

If I know $p$, I can compute my average return on such a bet and decide whether it's a good idea to wager. Suppose I have some information that allows me to estimate $p$; however, I am not certain that my analysis of this information is accurate.

Is there a way to formalise this sort of situation? That is, if I estimate the outcome probabilities of some event, is there a way to meaningfully take into account how (not) confident I am in my own analysis?

For example, say the real probability, as estimated by someone else who tossed the coin many times, is $p=0.1$. Having no information about $p$, Alice (who is deciding whether the bet is profitable) would take a Bayesian prior of $p=1/2$, and thus conclude that the bet is profitable on average (the win reward being larger than the loss cost). Similarly, she might not be very good at evaluating the available information and wrongly estimate $p$ to be $1/2$, leading to the same conclusion. Such a conclusion would be wrong, and she would end up losing money on average.

Now Bob, who is more careful, knows he is not that great at evaluating the available information, and wants to take this into account for his own estimate. Is there a general way to do this sort of thing?

Of course, if the bet is repeated multiple times, one can update one's prior using the observed outcomes. Here, however, I'm considering the "first-shot" scenario: what one would estimate before tossing any coin.
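For concreteness, the expected return at fixed $p$ is $2p - (1-p) = 3p - 1$; a minimal sketch of the arithmetic behind Alice's mistake (the function name is my own, not from the question):

```python
# Expected return of the bet at a known heads-probability p:
# win $2 with probability p, lose $1 with probability 1 - p.
def expected_return(p: float) -> float:
    return 2 * p - 1 * (1 - p)  # simplifies to 3p - 1

# Alice's naive estimate p = 1/2 makes the bet look profitable...
print(expected_return(0.5))  # 0.5
# ...but at the true p = 0.1 the average return is negative.
print(expected_return(0.1))  # -0.7
```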


There are 2 best solutions below

Answer 1:

You can assume a probability distribution for $p$. At fixed $p$ you know how to compute the expected outcome; you then average that expected outcome over $P(p)$ to decide whether the bet is worth the risk.
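A minimal sketch of this recipe, assuming (purely as an example, not from the answer) a Beta prior on $p$. Since the return $3p-1$ is linear in $p$, the Monte Carlo average should agree with $3\,\mathbb{E}[p]-1$:

```python
import random
from statistics import mean

def expected_return(p):
    return 3 * p - 1  # win $2 w.p. p, lose $1 w.p. 1 - p

def bet_value(a, b, n=100_000, seed=0):
    """Average the fixed-p expected return over a Beta(a, b) prior on p."""
    rng = random.Random(seed)
    draws = (rng.betavariate(a, b) for _ in range(n))
    return mean(expected_return(p) for p in draws)

# Linearity means this equals 3 * a / (a + b) - 1 up to sampling noise.
# A prior concentrated near p = 0.1 makes the bet look bad on average:
print(bet_value(1, 9))  # about -0.7
```

The parameters $a, b$ are where one's (lack of) confidence enters: a flatter Beta spreads the prior out, while a peaked one commits to a point estimate.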

Answer 2:

I think what you're describing is called ambiguity. When we know that something is random and we know the distribution, that is called uncertainty. When we know that something is random but do not know the distribution, that is called ambiguity.

There are several ways to deal with (i.e., make decisions under) ambiguity. The simplest approach is Bayesian: assume there is some subjective distribution of the parameter. Other approaches to decision-making under ambiguity that I am aware of are maximin and minimax regret. Google "decision making under ambiguity" for more details.

Let me illustrate the latter two criteria. Suppose we know ex ante that $p\in[\underline{p}, \bar{p}]$ with $0\le \underline{p} \le \bar{p} \le 1$. Also, suppose that we choose $\delta\in[0, 1]$, the probability with which we play the game you've described, so that the expected payoff at fixed $p$ is $\delta(3p-1)$. Then the maximin criterion looks as follows:
$$\delta^{MM} \in \arg \max_{\delta \in [0, 1]} \; \min_{p \in [\underline{p}, \bar{p}]} \delta(3p-1)$$
The solution would be
$$\delta^{MM} = \begin{cases} 0, & \underline{p} < \frac{1}{3}\\ 1, & \underline{p} \ge \frac{1}{3} \end{cases}$$
The minimax regret criterion would be
$$\delta^{MMR} \in \arg \min_{\delta\in[0, 1]} \; \max_{p \in [\underline{p}, \bar{p}]} \underbrace{\max_{\delta'\in[0, 1]} (3p-1)(\delta'-\delta)}_{\text{Regret}}$$
and the solution is
$$\delta^{MMR} = \begin{cases} 0, & \underline{p} \le \bar{p} \le \frac{1}{3}\\ \frac{3\bar{p} - 1}{3(\bar{p} - \underline{p})}, & \underline{p} \le \frac{1}{3} < \bar{p}\\ 1, & \frac{1}{3} \le \underline{p} < \bar{p} \end{cases}$$
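A brute-force numerical check of these two rules (a sketch: the grid resolution and the example interval $[0.1, 0.5]$ are my choices, not from the answer):

```python
# Grid-search versions of the maximin and minimax-regret rules above.
def payoff(delta, p):
    return delta * (3 * p - 1)

def regret(delta, p):
    # The payoff is linear in delta', so the hindsight-best delta'
    # is at an endpoint: 1 if 3p - 1 > 0, else 0.
    best = max(payoff(d, p) for d in (0.0, 1.0))
    return best - payoff(delta, p)

def grid(lo, hi, n=200):
    return [lo + (hi - lo) * i / n for i in range(n + 1)]

def maximin(p_lo, p_hi):
    return max(grid(0, 1),
               key=lambda d: min(payoff(d, p) for p in grid(p_lo, p_hi)))

def minimax_regret(p_lo, p_hi):
    return min(grid(0, 1),
               key=lambda d: max(regret(d, p) for p in grid(p_lo, p_hi)))

# With p in [0.1, 0.5]: maximin refuses to play (p_lo < 1/3), while
# minimax regret mixes, matching (3*0.5 - 1) / (3 * (0.5 - 0.1)) = 5/12.
print(maximin(0.1, 0.5))         # 0.0
print(minimax_regret(0.1, 0.5))  # close to 5/12 ≈ 0.4167
```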