Say I'm making the following bet: I flip a coin. If it's heads, I win \$2; if it's tails, I lose \$1. The coin is not necessarily fair: there is some probability $p\in[0,1]$ that it comes up heads.
If I know $p$, I can compute my average return on such a bet and decide whether it's a good idea to wager. Suppose I have some information that would allow me to estimate $p$. However, I am not certain that my analysis of this information is accurate.
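For concreteness, with the payoffs above the expected return at a known $p$ is linear in $p$:
$$E[R \mid p] = 2p - 1\cdot(1-p) = 3p - 1,$$
so the bet has positive expectation exactly when $p > 1/3$.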
Is there a way to formalise this sort of situation? That is, if I estimate the outcome probabilities of some event, is there a way to meaningfully take into account how (not) confident I am in my own analysis?
For example, say the real probability, as estimated by someone else by tossing the coin many times, is $p=0.1$. Having no information about $p$, Alice (who is deciding whether the bet is profitable) would take a uniform Bayesian prior on $p$, whose mean is $1/2$, and thus she would conclude that the bet is profitable on average (the win reward being larger than the loss cost). Similarly, she might not be very good at evaluating the available information and wrongly estimate $p$ to be $1/2$, leading to the same conclusion. Such a conclusion would be wrong, and she would end up losing money on average.
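To make the numbers explicit, here is a minimal sketch checking both cases (the \$2/\$1 payoffs are from the setup; everything else is plain arithmetic):

```python
def expected_return(p, win=2.0, loss=1.0):
    """Expected return of one bet: win $2 with probability p, lose $1 otherwise."""
    return p * win - (1 - p) * loss

# At Alice's estimate p = 1/2 the bet looks profitable (+0.5 per bet)...
print(expected_return(0.5))
# ...but at the real p = 0.1 it loses money on average (about -0.7 per bet).
print(expected_return(0.1))
```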
Now Bob, who is more careful, knows he is not that great at evaluating the available information, and wants to take this into account for his own estimate. Is there a general way to do this sort of thing?
Of course, if the bet is repeated multiple times, one can update one's prior using the observed outcomes. Here, however, I'm considering the "first-shot" scenario, that is, what one would estimate before tossing any coin.
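For completeness, the repeated-bet updating mentioned above has a standard conjugate form: with a $\mathrm{Beta}(\alpha,\beta)$ prior on $p$, observing $h$ heads and $t$ tails gives a $\mathrm{Beta}(\alpha+h,\beta+t)$ posterior. A minimal sketch (the uniform prior $\mathrm{Beta}(1,1)$ and the 10-toss sample are assumptions for illustration):

```python
def update_beta(alpha, beta, heads, tails):
    """Conjugate Bayesian update of a Beta(alpha, beta) prior on p
    after observing `heads` heads and `tails` tails."""
    return alpha + heads, beta + tails

def posterior_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution: alpha / (alpha + beta)."""
    return alpha / (alpha + beta)

# Start from a uniform prior Beta(1, 1); observe 1 head in 10 tosses.
a, b = update_beta(1, 1, heads=1, tails=9)
# Posterior mean is 2/12 = 1/6, already drifting toward the true p = 0.1.
print(posterior_mean(a, b))
```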
You can assume a probability distribution for $p$. At fixed $p$, you know how to compute the expected outcome; you then average that expected outcome over $P(p)$ to decide whether the bet is worth the risk.
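This suggestion can be sketched directly: put a distribution on $p$ (a Beta family is assumed here purely for illustration) and average the fixed-$p$ expected return over it. Since the return $3p-1$ is linear in $p$, the average depends only on the prior mean $E[p]$:

```python
def expected_return_given_p(p, win=2.0, loss=1.0):
    """Expected return at a known p: win $2 w.p. p, lose $1 otherwise."""
    return p * win - (1 - p) * loss

def expected_return_under_prior(alpha, beta, win=2.0, loss=1.0):
    """Average the fixed-p expected return over a Beta(alpha, beta) prior on p.
    By linearity, only the prior mean E[p] = alpha / (alpha + beta) matters."""
    mean_p = alpha / (alpha + beta)
    return expected_return_given_p(mean_p, win, loss)

# A sceptic unsure of his own analysis might choose a prior concentrated at
# small p, e.g. Beta(1, 5) with E[p] = 1/6: the averaged return is
# 3 * (1/6) - 1 = -0.5, so he declines the bet.
print(expected_return_under_prior(1, 5))
```

A wider or narrower Beta with the same mean gives the same answer here only because the payoff is linear in $p$; for a nonlinear utility, the full shape of the prior (i.e. how confident one is) would matter, which is exactly the lever Bob is after.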