I'm struggling with solving the following problem numerically:
Given $\sigma>0$, $X_0 \in (0,1)$, and a Markov chain defined by $X_{i+1}=X_{i}+N(0,\sigma^2)$ for all $i \in \mathbb{Z}^{\geq 0}$, what is the probability that $\min\{i : X_i < 0\} < \min\{i : X_i > 1\}$? In other words, what is the probability that this Markov chain hits $0$ before it hits $1$?
I've managed to reduce the problem to solving the following integral equation:
Define the event $E = \{\min\{i : X_i < 0\} < \min\{i : X_i > 1\}\}.$
Write $$f(x) = \mathbb{P}(E \mid X_0 = x)$$ and let $p(y)$ denote the density of $X_1$ given $X_0 = x$. Then we can say that $$f(x) = \int_{-\infty}^{\infty}\mathbb{P}(E \mid X_0=x, X_1=y)\, p(y)\, dy \\ = \int_{-\infty}^{0} 1 \cdot p(y)\, dy + \int_{0}^{1} f(y)\, p(y)\, dy + \int_{1}^{\infty} 0 \cdot p(y)\, dy \\ = \Phi\!\left(\frac{-x}{\sigma}\right) + \frac{1}{\sqrt{2\pi\sigma^2}}\int_{0}^{1} f(y)\exp\!\left(-\frac{(y-x)^2}{2\sigma^2}\right) dy.$$
So we have a Fredholm equation of the second kind.
Unfortunately, I've been trying to get Mathematica to solve this numerically (for a given $\sigma$), but I cannot obtain any reasonable numerical solutions. (I am expecting a decreasing function with $f(0)=1$ and $f(1)=0$.)
So I have three questions:
- Is my logic up to the point of the integral equation correct?
- Are there any tricks I'm missing that would simplify this equation?
- How can I solve this numerically?
One numerical approach goes like this. I like this kind of method because it gives an exact solution to a related finite-dimensional problem.
Given an integer $n \geq 1$, consider a finite state Markov chain on $n+2$ states, which we identify with $(-\infty,0),[0,1/n),[1/n,2/n),\dots,[(n-1)/n,1),[1,\infty)$. We'll zero-index the states. We treat state $0$ (identified with $(-\infty,0)$) and state $n+1$ (identified with $[1,\infty)$) as absorbing states.
Starting from one of the other states $i$, define the probability of going from $i$ to $j$ to be the exact probability that the original process goes from the center of the $i$th interval to anywhere in the $j$th interval.
Then the transition probabilities are given as
\begin{align} p_{i,j} & =F \left ( \frac{j}{n} - \frac{2i-1}{2n} \right ) - F \left ( \frac{j-1}{n} - \frac{2i-1}{2n} \right ) \quad i,j=1,2,\dots,n \\ p_{i,0} & = F \left ( -\frac{2i-1}{2n} \right ) \quad i=1,2,\dots,n \\ p_{i,n+1} & = 1-F \left ( 1-\frac{2i-1}{2n} \right ) \quad i=1,2,\dots,n \end{align}
where $F(x)=\Phi(x/\sigma)$.
Finally, let $u_i$ be the probability of hitting state $0$ before state $n+1$ starting from state $i$, and assemble the $p_{i,j}$ into an $(n+2) \times (n+2)$ matrix $P$. Then the desired system of equations reads
\begin{align} (Pu)_i - u_i & = 0 \quad i=1,2,\dots,n \\ u_0 & = 1 \\ u_{n+1} & = 0. \end{align}
You can then construct an approximate solution to the original problem by e.g. linear interpolation.
Interestingly, these numerics show a discontinuity at the boundaries, which actually makes sense: no matter how close you get to one boundary, there is a chance, however small, that you will jump past the other boundary in the very next step. This chance goes to zero as $\sigma \to 0$, of course, and is already very small once $\sigma$ is, say, $1/3$, but it is there as long as $\sigma>0$.
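The size of that discontinuity can be bounded from a single step: starting just above $0$, the probability of overshooting past $1$ in one step is $1-\Phi(1/\sigma)$, so $1 - f(0^+) \geq 1-\Phi(1/\sigma)$. A quick stdlib-only check of this bound:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Lower bound on the gap 1 - f(0+): one-step overshoot past 1 from x near 0
def jump(sigma):
    return 1.0 - Phi(1.0 / sigma)
```

Here `jump(1/3)` is about $0.00135$ while `jump(1)` is about $0.159$, matching the remark that the gap is tiny for $\sigma = 1/3$ yet nonzero.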