Two players $A$ and $B$ are flipping a coin. $A$ starts with $6$ points and $B$ has $4$ points. They flip a coin and if it’s a head, then $A$ gets a point from $B$. If it’s a tail, then $B$ gets a point from $A$. What’s the probability that $A$ is the first to $10$ points?
The answer is $\frac{6}{10}$, which can be verified by numerical simulation. One way to obtain this result mathematically is to sum over all sequences of flips in which $A$ wins:
$$ P(A_{\text{wins}}) = \sum_{i \ge 0} \left(\frac{1}{2}\right)^{2i + 4} N_i, $$
where $N_i$ is the number of sequences of $2i+4$ flips ($i$ tails, $i+4$ heads) in which $A$ reaches $10$ points for the first time on the final flip, without $B$ having reached $10$ earlier. (Note $N_i \ne {2i+4 \choose i}$ in general: the binomial coefficient also counts sequences in which the game would already have ended.)
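The numerical simulation mentioned above can be sketched as follows (a minimal Monte Carlo check; the function names are my own):

```python
import random

def a_wins_once(rng, a=6, target=10):
    """Play one game: each fair flip moves one point to A (heads) or away from A (tails).
    Since the total is fixed at `target`, only A's score needs tracking.
    Returns True if A reaches `target` points before dropping to 0."""
    while 0 < a < target:
        a += 1 if rng.random() < 0.5 else -1
    return a == target

def estimate_p_a_wins(trials=100_000, seed=0):
    """Monte Carlo estimate of P(A wins); should come out near 6/10."""
    rng = random.Random(seed)
    return sum(a_wins_once(rng) for _ in range(trials)) / trials
```

With $100{,}000$ trials the standard error is about $0.0015$, so the estimate lands close to $0.6$.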
However, the source where I found this question also states that the result can be obtained straightforwardly by a symmetry argument: observe that the players have equal probability of winning if the first toss is a tail, since after that toss each player has $5$ points and is equally likely to reach $10$ first.
Although I can see this is true, I can't see how to get from this starting point to the probability $\frac{6}{10}$ of an $A$-win. Can anyone see what the necessary further arguments are?
If you are familiar with Markov chains, then a few standard concepts let us state a theorem and use it to solve this problem. The technique is referred to as $\color{green}{\mathit{first\ step\ analysis}}$.
A "stopping time" of a Markov chain is a function from the "path space" to time, which tells us when to stop the chain and observe its result. For example, you could build a Markov chain out of coin tosses (heads and tails) and decide to stop when you get three consecutive heads.
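That three-consecutive-heads example can be sketched directly (a hypothetical helper of my own, assuming a fair coin):

```python
import random

def three_heads_stopping_time(rng):
    """Flip a fair coin until three consecutive heads occur;
    return the number of flips used (the realized stopping time)."""
    flips, run = 0, 0
    while run < 3:
        flips += 1
        # rng.random() < 0.5 plays the role of "heads"
        run = run + 1 if rng.random() < 0.5 else 0
    return flips
```

Averaged over many runs, the stopping time comes out near the known expected value $2 + 4 + 8 = 14$ flips.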
Now, you can observe a "function of the current state" at a stopping time. For example, when you stop, you could check whether you are in a "good" set of states or a "bad" set. If you are gambling and stop after some time, the "good" states would be those in which you have gained money, and the "bad" states those in which you have lost money.
First step analysis deals specifically with the question: let $f(x)$ be the expected value of a state function at a stopping time, given that I start at the state $x$. Can I calculate $f(x)$? Indeed, it turns out that $f(x)$ depends on $f(x')$ for the states $x'$ reachable in one step: this gives rise to a system of equations which can be solved.
In our case, the Markov chain runs on the set of all states of the game. Let $(a,b)$ mean that $A$ has $a$ points and $B$ has $b$ points. Since points are only transferred, $a+b = 10$ always, so the state space is $\{(0,10),(1,9),(2,8),\dots,(9,1),(10,0)\}$.
On this, we define the stopping time as: stop when you hit either $(0,10)$ or $(10,0)$. We also define the function of state $f(x)$ as the probability that, starting from $x$, you hit $(10,0)$ before you hit $(0,10)$.
You could also write this as follows: if the function $g(y)$ is $1$ at $(10,0)$ and $0$ elsewhere, then the expected value of $g$ at the stopping time, given that you start at $x$, is $f(x)$. (Figure this out yourself.)
According to first step analysis, we "step forward and see how the stopping time is reached from there". Let us take a simple example: suppose I want the probability that from $(9,1)$ I reach $(10,0)$ before $(0,10)$. Look at the next step: with probability $\frac 12$ the flip takes me to $(10,0)$, in which case the probability of reaching $(10,0)$ before $(0,10)$ is $f((10,0)) = 1$ (obviously). On the other hand, with probability $\frac 12$ I move to $(8,2)$, and then the probability of reaching $(10,0)$ before $(0,10)$ is $f((8,2))$.
In other words: $$ f((9,1)) = \frac{f((10,0)) + f((8,2))}{2} = \frac{1 + f((8,2))}{2} $$
Very similarly, for $1 \le x \le 9$ we can write: $$ f((x,10-x)) = \frac{f((x+1,9-x)) + f((x-1,11-x))}{2} $$
along with the boundary conditions $f((10,0)) = 1$ and $f((0,10)) = 0$. That is eleven unknowns and eleven equations (nine recurrences plus the two boundary values), and we know the pattern: the recurrence says each $f$ value is the average of its two neighbours, so $f$ is linear in $x$; with the boundary values this gives $f((x,10-x)) = \frac{x}{10}$, and hence $f((6,4)) = \frac{6}{10}$. Look up "gambler's ruin" to see a generalization of this phenomenon, along with how to solve this kind of equation.
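The system of equations can also be checked numerically (a minimal sketch with a function name of my own; I use plain Gauss–Seidel sweeps rather than solving the equations symbolically):

```python
def hitting_probabilities(target=10, sweeps=20_000):
    """Solve f(x) = (f(x-1) + f(x+1)) / 2 with f(0) = 0, f(target) = 1
    by repeated averaging sweeps. Here x is A's point count, which
    determines the state (x, target - x); f(x) is the probability of
    hitting (target, 0) before (0, target)."""
    f = [0.0] * (target + 1)
    f[target] = 1.0
    for _ in range(sweeps):
        for x in range(1, target):
            f[x] = 0.5 * (f[x - 1] + f[x + 1])
    return f
```

The iteration converges to the linear solution $f(x) = x/10$, so `hitting_probabilities()[6]` comes out as $0.6$.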