Suppose we start with $n$ dollars and make a sequence of bets. On each bet, we win $1$ dollar with probability $p$ and lose $1$ dollar with probability $1-p$. We quit either when we go broke, in which case we lose, or when we reach $n+m$ dollars, that is, when we have won $m$ dollars. From what I understand, the probability that we reach $n+m$ dollars before going broke, given that we start with $n$ dollars, denoted by $P_n$, is given by
$$P_n = \Big( \frac{p}{1-p} \Big) ^m $$
However, this expression exceeds $1$ whenever $p > 0.5$. In other words, $P_n$ is not confined to the interval $[0,1]$, as a probability must be. So how is this answer correct?
paper for reference: https://web.mit.edu/neboat/Public/6.042/randomwalks.pdf
The exact formula is $$ P_n = \frac{\alpha^n - 1}{\alpha^{m+n}-1} $$ where $\alpha = \frac{1-p}{p}$. When $\alpha > 1$ (so $p < 0.5$) and $n,m+n$ are large, $\alpha^n$ and $\alpha^{m+n}$ are much bigger than $1$, so $P_n \approx \frac{\alpha^n}{\alpha^{m+n}} = \alpha^{-m} = (\frac{p}{1-p})^m$, but this is only an approximation.
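A quick numerical check (my own, not from the linked notes) makes the point concrete: for $p < 0.5$ and moderately large $n$, the exact formula and the approximation $(\frac{p}{1-p})^m$ agree to several decimal places. The function names `p_exact` and `p_approx` are just illustrative.

```python
# Compare the exact gambler's-ruin formula with the approximation
# (p/(1-p))^m for p < 0.5 and large n.

def p_exact(p, n, m):
    a = (1 - p) / p                      # alpha = (1-p)/p
    return (a**n - 1) / (a**(m + n) - 1)

def p_approx(p, m):
    return (p / (1 - p))**m              # the formula from the question

p, n, m = 0.4, 50, 10
print(p_exact(p, n, m))   # exact probability of winning m before going broke
print(p_approx(p, m))     # approximation; close because alpha^n >> 1 here
```

For $p = 0.4$, $n = 50$, $m = 10$ both values come out near $0.017$; shrink $n$ and the two start to diverge, since $\alpha^n$ is no longer much bigger than $1$.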
In general (for $p \neq \tfrac12$, so that $\alpha \neq 1$), the formula for $P_k$ must have the form $C_1 \alpha^k + C_2$ in order to satisfy the recurrence $P_k = p P_{k+1} + (1-p) P_{k-1}$, obtained by conditioning on the outcome of the first bet, together with the boundary conditions $P_0 = 0$ and $P_{m+n} = 1$. The boundary condition $P_0 = 0$ tells us that $C_1 + C_2 = 0$, or $C_2 = -C_1$, so $P_k$ is proportional to $\alpha^k - 1$. In order to get $1$ when $k = m+n$, the constant $C_1$ must be $\frac1{\alpha^{m+n}-1}$, giving us $P_k = \frac{\alpha^k-1}{\alpha^{m+n}-1}$.
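The derivation above can be verified directly: a short script (a sketch of my own, with arbitrarily chosen $p$, $n$, $m$) checks that the closed form satisfies both the recurrence and the boundary conditions.

```python
# Verify that P_k = (alpha^k - 1) / (alpha^(m+n) - 1) satisfies
# P_k = p*P_{k+1} + (1-p)*P_{k-1} with P_0 = 0 and P_{m+n} = 1.
p, n, m = 0.3, 4, 6
alpha = (1 - p) / p
N = m + n
P = [(alpha**k - 1) / (alpha**N - 1) for k in range(N + 1)]

assert abs(P[0]) < 1e-12 and abs(P[N] - 1) < 1e-12  # boundary conditions
for k in range(1, N):
    # one step of the random walk: up with prob p, down with prob 1-p
    assert abs(P[k] - (p * P[k + 1] + (1 - p) * P[k - 1])) < 1e-12
print("recurrence and boundary conditions hold")
```

The recurrence holds precisely because $\alpha = \frac{1-p}{p}$ is a root of the characteristic equation $p x^2 - x + (1-p) = 0$ (the other root is $1$, which contributes the constant $C_2$).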