Let $p$ be a random variable, uniformly distributed in $[0,1]$. Two players $A$ and $B$ play the following game:
Starting with $A$, a player gets a random value $p(\omega)\in[0,1]$ and has two choices:
i) He can flip a coin with probability $p(\omega)$ of heads. If he gets heads he wins the game; otherwise the other player plays with the same distribution $p$.
ii) He can pass the turn to the other player, but give him a penalized distribution: namely, $p$ is replaced, for that turn, by a uniform distribution on $[0,1-p(\omega)]$.
Suppose that both players play optimally, i.e. each chooses whichever of (i) and (ii) gives the higher probability of winning.
What is the probability of winning for the first player?
EDIT: let me try to clarify how the game is played. At each turn, the current player gets a random probability $p$ in the following way: if in the previous turn his adversary flipped the coin (without getting heads, since otherwise the game would have ended), he takes $p$ uniformly in $[0,1]$. If his adversary didn't flip the coin, he takes $p$ uniformly in $[0,1-\tilde{p}]$, where $\tilde{p}$ is the probability his adversary played with during the previous turn. Now the current player can choose whether to flip the coin (with winning probability $p$) or to pass the turn.
The optimal strategy is to always flip the coin.
First, assume that the first player always flips the coin. Then the second player always draws $p$ from $[0,1]$ and has two options. If she flips, her winning probability is $p+\frac12(1-p)E$, where $E$ is her winning probability averaged over $p\in[0,1]$; the first term is the probability that she wins immediately, and the second is the probability that she wins after the first player flips and fails. If she passes, her winning probability is $(1-\frac12(1-p))E$, the probability that she wins after the first player (who now draws from $[0,1-p]$) flips and fails. The difference is $p+\frac12(1-p)E-(1-\frac12(1-p))E=p(1-E)\ge0$. Thus, if the first player always flips, it is optimal for the second player to always flip.
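The algebra in the difference $p(1-E)$ is easy to check numerically. Here is a small sketch (the function names are mine, chosen to mirror the two options in the text):

```python
# Win probabilities for the second player when the first player always
# flips, as functions of her drawn p and her averaged win probability E.
def win_if_flip(p, E):
    return p + 0.5 * (1 - p) * E       # win now, or after A flips and fails

def win_if_pass(p, E):
    return (1 - 0.5 * (1 - p)) * E     # win after A (drawing from [0,1-p]) fails

# The difference should reduce to p*(1-E), which is >= 0 for any E in [0,1].
for E in (0.0, 0.5, 2 / 3, 1.0):
    for k in range(101):
        p = k / 100
        diff = win_if_flip(p, E) - win_if_pass(p, E)
        assert abs(diff - p * (1 - E)) < 1e-12
        assert diff >= -1e-12
```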
It follows that the first player's winning probability when both players always flip is a lower bound for the first player's optimal winning probability. If the first player draws $p$, his winning probability if both players always flip is $p+\frac12(1-p)E$ (where $E$ is again his winning probability averaged over $[0,1]$), and integrating over $[0,1]$ yields $E=\frac12+\frac12(1-\frac12)E$, so $E=2/3$. Thus, if both players always flip, the first player's winning probability, having drawn $p$, is $p+\frac12(1-p)\frac23=\frac13+\frac23p$. Thus the optimal winning probability $E_p$ having drawn $p$ satisfies $\frac13+\frac23p\le E_p\le1$.
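The value $E=2/3$ for the always-flip strategy can be confirmed by a quick Monte Carlo sketch (the function name and trial count are my own choices):

```python
import random

def simulate_always_flip(trials=200_000, seed=0):
    """Estimate the first player's winning probability when both players
    always flip: each turn draws a fresh p uniform on [0,1] and flips a
    coin with heads probability p; heads ends the game."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        turn = 0  # 0 = first player, 1 = second player
        while True:
            p = rng.random()           # fresh p, uniform on [0,1]
            if rng.random() < p:       # heads: current player wins
                if turn == 0:
                    wins += 1
                break
            turn ^= 1                  # tails: other player's turn
    return wins / trials
```

With 200,000 trials the standard error is about $0.001$, so the estimate should land within $0.01$ of $2/3$.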
Now we can use this bound to show that it is in fact optimal to always flip. Having drawn $p$, a player can flip to get a winning probability
$$ 1-(1-p)\int_0^1E_s\,\mathrm ds $$
or pass to get a winning probability
$$ 1-\frac1{1-p}\int_0^{1-p}E_s\,\mathrm ds\;. $$
The difference is
\begin{align} &-(1-p)\int_0^1E_s\,\mathrm ds+\frac1{1-p}\int_0^{1-p}E_s\,\mathrm ds\\ ={}&\left(\frac1{1-p}-(1-p)\right)\int_0^{1-p}E_s\,\mathrm ds-(1-p)\int_{1-p}^1E_s\,\mathrm ds\\ \ge{}&\left(\frac1{1-p}-(1-p)\right)\int_0^{1-p}\left(\frac13+\frac23s\right)\,\mathrm ds-(1-p)\int_{1-p}^11\,\mathrm ds\\ ={}&\frac13\left(\frac1{1-p}-(1-p)\right)\left((1-p)+(1-p)^2\right)-p(1-p)\\ ={}&\frac13p\left(p^2-p+1\right)\\ \ge{}&0\;. \end{align}
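As an independent numerical check, one can solve for the optimal $E_s$ by value iteration on a discretized grid and verify both that flipping is never worse than passing and that $E_s=\frac13+\frac23s$ (which the argument above implies, since under optimal play both players always flip). This is a sketch under my own discretization choices:

```python
import numpy as np

def solve_game(n=2001, iters=200):
    """Value iteration for the optimal win probability E_s of the player
    to move, having drawn s. Returns (grid, E, flip_always_optimal)."""
    s = np.linspace(0.0, 1.0, n)
    h = s[1] - s[0]
    E = np.full(n, 0.5)                        # initial guess
    for _ in range(iters):
        # cum[i] ~ integral of E over [0, s_i] (trapezoid rule)
        cum = np.concatenate(([0.0], np.cumsum((E[1:] + E[:-1]) * h / 2)))
        mean_E = cum[-1]                       # integral of E over [0, 1]
        flip = 1 - (1 - s) * mean_E            # value of flipping at s
        tail = cum[::-1]                       # integral of E over [0, 1-s]
        with np.errstate(divide="ignore", invalid="ignore"):
            opp = np.where(s < 1, tail / (1 - s), E[0])  # opponent's value
        pass_val = 1 - opp                     # value of passing at s
        E = np.maximum(flip, pass_val)
    return s, E, bool(np.all(flip >= pass_val - 1e-6))
```

The iteration converges geometrically (the recursion for the mean of $E$ has contraction factor $\tfrac12$), and the computed $E_s$ matches $\frac13+\frac23s$ to within discretization error.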