Imagine a simple Markov chain modelling a game where the objective is to win two rounds. Let the probability of Player 1 winning any given round be $p$, so Player 2 wins a round with probability $1 - p$. The game lasts at most three rounds, and the possible final scores are $2:0$, $2:1$, $1:2$, and $0:2$. Now suppose we know the probability of Player 1 winning the whole game, say $0.55$. Then we can solve for $p$, the probability of Player 1 winning any particular round:
$$ P(\text{P1 wins}) = P(2:0) + P(2:1) = p^2 + 2p^2(1-p) = 0.55 $$
So, after simplifying: $$ \begin{align} -2p^3 + 3p^2 &= 0.55 \\ \implies p &\in \{-0.3822,\; 0.5334,\; 1.3489\} \end{align} $$
Of course, we know that the actual $p$ parametrising the Markov chain must lie in $[0,1]$, so it must be $0.5334$. However, I've programmatically built and solved these polynomials for much longer games (e.g., first to 20 round-wins), and there is always exactly one real root in $[0, 1]$. Intuitively this makes sense, but I am struggling to find a proof that it must be the case. Can anyone think of a way to show this for an arbitrary number of rounds?
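For reference, here is a sketch of the procedure I use (function names are my own). The game ends on Player 1's $n$-th round-win, preceded by $k = 0, \dots, n-1$ Player 2 wins, which gives the polynomial $\sum_k \binom{n-1+k}{k} p^n (1-p)^k$; NumPy's `roots` then finds all roots of $f(p) - 0.55$:

```python
import numpy as np
from math import comb

def win_poly(n):
    """Coefficients (highest degree first) of P(P1 is first to n round-wins),
    as a polynomial in the round-win probability p.

    P1's n-th round-win ends the game, preceded by k = 0..n-1 P2 wins, so
    P(win) = sum_k C(n-1+k, k) * p^n * (1-p)^k.
    """
    coeffs = np.zeros(2 * n)  # degree is at most 2n - 1
    for k in range(n):
        term = np.array([float(comb(n - 1 + k, k))])
        term = np.polymul(term, [1.0] + [0.0] * n)  # multiply by p^n
        for _ in range(k):
            term = np.polymul(term, [-1.0, 1.0])    # multiply by (1 - p)
        coeffs[-len(term):] += term
    return coeffs

# First-to-2 example from above: solve f(p) = 0.55.
poly = win_poly(2)   # [-2, 3, 0, 0], i.e. -2p^3 + 3p^2
poly[-1] -= 0.55     # roots of f(p) - 0.55 = 0
roots = np.roots(poly)
in_unit = [r.real for r in roots if abs(r.imag) < 1e-9 and 0 <= r.real <= 1]
print(in_unit)       # exactly one root lies in [0, 1], approximately 0.5334
```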
The reason has already been stated in a comment: the winning probability is $0$ at $p=0$, $1$ at $p=1$, and strictly increasing in between, so it takes every value in $(0,1)$ exactly once. To see strict monotonicity, note that a first-to-$n$ game has the same winner as if all $2n-1$ rounds were always played, so the winning probability is $P(\mathrm{Bin}(2n-1,p) \ge n) = I_p(n,n)$, the regularized incomplete beta function, whose derivative $p^{n-1}(1-p)^{n-1}/B(n,n)$ is positive on $(0,1)$.
If instead, e.g., Player $1$ wins by finishing exactly one round ahead of Player $2$, then the winning probability is not a monotonic function of $p$, and you can get more than one value of $p$ for a given winning probability.