I looked around and found many questions related to two-player variants of Gambler's Ruin but could not find what I had in mind. I needed clarification on a certain step while deriving the expression.
The following is the question:
Consider the following game. Two players flip a coin.
Player A has $i$ units and player B has $N - i$ units.
If a head is flipped, player A gets a unit from player B.
If tails, player B gets a unit from player A.
Assume the coin flips are independent and that the probability of heads is $p$.
Find the probability that A wins all the units.
After following the derivation mentioned below, it was relatively simple to find the expression:
$$ P(E_{i}) = \frac{1 - \left(\dfrac{1-p}{p}\right)^{i}}{1 - \left(\dfrac{1-p}{p}\right)^N} $$
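(As a quick sanity check, not part of the original question: the closed form can be verified with exact rational arithmetic against the recurrence $P(E_i) = p \, P(E_{i+1}) + (1-p) \, P(E_{i-1})$ derived in the steps that follow, together with the boundary conditions $P(E_0) = 0$ and $P(E_N) = 1$. A minimal Python sketch — `win_prob` is my own name:)

```python
from fractions import Fraction

def win_prob(i, N, p):
    # Closed-form P(E_i) for a biased coin. Requires p != 1/2;
    # at p = 1/2 the ratio r equals 1 and the formula degenerates.
    r = (1 - p) / p
    return (1 - r**i) / (1 - r**N)

# Exact check with p = 3/5 and N = 10.
p, N = Fraction(3, 5), 10
vals = [win_prob(i, N, p) for i in range(N + 1)]

assert vals[0] == 0 and vals[N] == 1   # boundary conditions
for i in range(1, N):                  # recurrence P(E_i) = p*P(E_{i+1}) + (1-p)*P(E_{i-1})
    assert vals[i] == p * vals[i + 1] + (1 - p) * vals[i - 1]
print("closed form satisfies the recurrence and boundary conditions")
```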
I have marked in bold the two lines of the derivation where I could not understand how the conceptual leap was made between them.
Let $ E_{i} $ be the event where player A wins starting with $ i $ units.
$ P(E_{i}) = P(E_{i} \ | \ H) P(H) + P(E_{i} \ | \ T) P(T) \ \ $ by the law of total probability, conditioning on the first flip.
$ \therefore \ P(E_{i} \ | \ H) \cdot p + P(E_{i-1}) \cdot(1-p) = P(E_{i}) \ \ $ since $ \ P(E_{i} \ | \ T) = P(E_{i-1}) $ (after a tail, A holds $i-1$ units).
$ \therefore \ P(E_{i} \ | \ H) \cdot p - P(E_{i}) = - P(E_{i-1}) \cdot (1-p)$
$ \therefore \ \mathbf{ p \cdot P(E_{i+1}) - P(E_{i}) = - \ (1-p) \cdot P(E_{i -1})} \ \ $ since $ \ P(E_{i} \ | \ H) = P(E_{i+1}) $
$ \therefore \ \mathbf{ p \cdot \ [ \ P(E_{i+1}) - P(E_{i}) \ ] = (1-p) \cdot (P(E_{i}) - P(E_{i-1}) ) } $
$ \therefore \ P(E_{i+1}) - P(E_{i}) = \frac{(1-p)}{p} (P(E_{i}) - P(E_{i-1})) $
This gives us the telescoping sum:
$ \require{cancel} P(E_{2}) - P(E_{1}) = \frac{(1-p)}{p} ( \ P(E_{1}) - \cancelto{0}{P(E_{0})} \ ) $
$ P(E_{3}) - P(E_{2}) = \frac{(1-p)}{p} ( \ P(E_{2}) - P(E_{1}) \ ) = \left(\frac{1-p}{p}\right)^{2} \cdot P(E_{1}) $
$ \vdots \qquad \qquad \vdots \qquad \qquad \vdots $
$ P(E_{i}) - P(E_{i-1}) = \left(\frac{1-p}{p}\right)^{i-1} \cdot P(E_{1}) $
Summing these differences telescopes the left-hand side:
$ P(E_{i}) - P(E_{1}) = \sum\limits_{j=1}^{i-1} \left(\frac{(1-p)}{p}\right)^{j} \cdot P(E_{1}) $
$ \boxed{ P(E_{i}) = \sum\limits_{j=0}^{i-1} \left(\frac{(1-p)}{p}\right)^{j} \cdot P(E_{1}) }$
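(To close the gap between the boxed sum and the expression quoted at the top: the sum is geometric, and the boundary condition $P(E_N) = 1$ pins down $P(E_1)$, assuming $p \neq \tfrac12$.)

$$ P(E_{i}) = P(E_{1}) \sum\limits_{j=0}^{i-1} \left(\frac{1-p}{p}\right)^{j} = P(E_{1}) \cdot \frac{1 - \left(\frac{1-p}{p}\right)^{i}}{1 - \frac{1-p}{p}} $$

Setting $i = N$ and using $P(E_N) = 1$,

$$ 1 = P(E_{1}) \cdot \frac{1 - \left(\frac{1-p}{p}\right)^{N}}{1 - \frac{1-p}{p}} \implies P(E_{1}) = \frac{1 - \frac{1-p}{p}}{1 - \left(\frac{1-p}{p}\right)^{N}} $$

and substituting back,

$$ P(E_{i}) = \frac{1 - \left(\dfrac{1-p}{p}\right)^{i}}{1 - \left(\dfrac{1-p}{p}\right)^{N}} $$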
In case @joriki's "mere arithmetic rearrangement" is still unclear:
\begin{align*} \mathbf{P}[E_{i+1}]\cdot p-\mathbf{P}[E_i]&=-(1-p)\cdot\mathbf{P}[E_{i-1}]\\ \mathbf{P}[E_{i+1}]\cdot p+\bigl((p-1)-p\bigr)\cdot\mathbf{P}[E_i]&=(p-1)\cdot\mathbf{P}[E_{i-1}] \end{align*}
Basically we've just added and subtracted a $p$ in the coefficient of $\mathbf{P}[E_i]$ on the LHS. Factoring out a $p$,
$$p\cdot\left(\mathbf{P}[E_{i+1}]-\mathbf{P}[E_i]\right)+(p-1)\mathbf{P}[E_i]=(p-1)\cdot\mathbf{P}[E_{i-1}]$$
Now subtracting $(p-1)\mathbf{P}[E_i]$ from both sides and factoring out $-1$ on the right,
$$p\cdot\left(\mathbf{P}[E_{i+1}]-\mathbf{P}[E_i]\right)=(1-p)\cdot\left(\mathbf{P}[E_i]-\mathbf{P}[E_{i-1}]\right)$$
It's just some subtle rearranging. As you work through more probability theory, these rearrangements become easier to spot. Please comment if you still have questions.
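(If you'd rather confirm the rearrangement mechanically, here is a short symbolic check — my own sketch using sympy, with placeholder symbols $a = \mathbf{P}[E_{i-1}]$, $b = \mathbf{P}[E_i]$, $c = \mathbf{P}[E_{i+1}]$:)

```python
import sympy as sp

p, a, b, c = sp.symbols('p a b c')  # a = P(E_{i-1}), b = P(E_i), c = P(E_{i+1})

# Write each equation as (LHS - RHS) and take their difference:
# it expands to zero iff "c*p - b = -(1-p)*a" and
# "p*(c - b) = (1-p)*(b - a)" are the same identity rearranged.
diff = (c*p - b + (1 - p)*a) - (p*(c - b) - (1 - p)*(b - a))
assert sp.expand(diff) == 0
print("the two forms of the recurrence are equivalent")
```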