Continuous limit of a discrete stochastic process


Suppose that I divide the time interval $[0,T]$ into $n$ subintervals with end points $t_k = \frac{k}{n}T$ for $k=0,1,\dots,n$, and consider the process $X^n_t = X^n_{t_k}$ for $t_k \leq t < t_{k+1}$, where \begin{equation} X^n_{t_{k+1}} = \begin{cases} X^n_{t_k} + \delta X^n_{t_k}(1-X^n_{t_k}), &\qquad \text{with probability $\frac{1}{2}$,}\\ X^n_{t_k} - \delta X^n_{t_k}(1-X^n_{t_k}), &\qquad \text{with probability $\frac{1}{2}$,} \end{cases} \end{equation} for some constant $\delta > 0$ and an initial condition $X^n_0 \in [0,1]$. If I set $\delta = \sqrt{\frac{T}{n}}$ and send $n\rightarrow +\infty$, does the process $\{X^n_t\}$ converge (in distribution, or in any other sense) to the continuous-time process \begin{equation} X_T = X_0 + \int_0^T X_t(1-X_t)\,dW_t, \end{equation} where $W_t$ denotes a standard Brownian motion? From my understanding, if instead $X^n_{t_{k+1}} = X^n_{t_k} + \delta$ or $X^n_{t_{k+1}} = X^n_{t_k} - \delta$ with probability $\frac{1}{2}$ each, then the limit would be standard Brownian motion itself (by Donsker's theorem). But I'm not sure how to proceed in the case where we obtain an SDE.
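For intuition, here is a minimal Python sketch (the function name and interface are my own) that simulates one path of the discrete process $X^n$ with $\delta = \sqrt{T/n}$:

```python
import random

def simulate_discrete(x0, T, n, rng=None):
    """Simulate one path of the discrete process X^n on [0, T].

    Each step moves by +/- delta * x * (1 - x) with probability 1/2 each,
    where delta = sqrt(T / n). Returns the values at t_k = kT/n.
    """
    rng = rng or random.Random()
    delta = (T / n) ** 0.5
    path = [x0]
    x = x0
    for _ in range(n):
        sign = 1 if rng.random() < 0.5 else -1
        x = x + sign * delta * x * (1 - x)
        path.append(x)
    return path
```

Note that once $n \geq T$ (so that $\delta \leq 1$), every sampled path starting in $[0,1]$ stays in $[0,1]$, since $x \pm \delta x(1-x)$ lies between $x^2$ and $x(2-x)$.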


If $\delta^n = \sqrt{\frac{T}{n}}$, then $\left\{\left(X^n_t\right)_{t\in\left[0,T\right]}\right\}_{n\in \mathbb{N}}$ converges in law, also known as convergence in distribution, to a continuous-time process satisfying $$ X_t = X_0 + \int_0^t X_s \left(1 - X_s\right)dW_s.$$

We can show this by a classic tightness and identification-of-the-limit argument; for the details of this style of argument see Billingsley's Convergence of Probability Measures. The idea is that a tight sequence of random variables admits a convergent subsequence (Prokhorov's theorem). If we can then identify a unique limit point for any subsequence that converges, we may conclude that the whole sequence converges. This last step follows from a simple result from analysis: if a sequence in a metric space is such that every subsequence admits a convergent sub-subsequence, and each of these sub-subsequences converges to the same point, then the sequence converges to that point.

Let $\mathbb{D}_T := \mathbb{D}\left(\left[0,T\right], \mathbb{R}\right)$ be the Skorohod space of right-continuous real-valued functions on $\left[0, T\right]$ with left limits (càdlàg functions); see section 12 of Billingsley. In what follows, we write $\Delta t^n := \frac{T}{n}$ and $\delta^n := \sqrt{\frac{T}{n}}$.

For the identification of the limit, we have that the $X^n$'s and $X$ are Markov processes and $X$ is characterised by its generator $\mathcal{L} = \frac{1}{2}\left(x\left(1-x\right)\right)^2 \partial_x^2$, with domain $C^2\left(\left[0,1\right]\right)$. The fact that $X$ is characterised by its generator can be deduced either from general results on the martingale problem, see for example theorem 4.1 in Ethier and Kurtz's Markov Processes, or from the fact that the SDE satisfied by $X$ has Lipschitz coefficients and therefore admits a unique strong solution. We denote by $P_n$ the transition kernel of $X^n$, given by $$P_nf(x) = \mathbb{E}\left[f\left(X_{t_1}^n\right)\left| X_0^n = x\right.\right], $$ for functions $f$ of a suitable class (we will take that class to be $C^2\left(\left[0,1\right]\right)$). We will show that the discrete generators of $X^n$, $$\mathcal{L}_n = \frac{P_n - I}{\Delta t^n},$$ where $I$ is the identity operator (i.e. $If = f$ for any function $f$), converge to $\mathcal{L}$. Let $f \in C^2\left(\left[0,1\right]\right)$. Then \begin{gather*} P_nf(x) = \mathbb{E}\left[f\left(X_{t_1}^n\right)\left| X_0^n = x \right.\right] = \frac{1}{2} f\left(x + \delta^n x\left(1-x\right)\right) + \frac{1}{2}f\left(x - \delta^n x\left(1-x\right)\right)\\ =\frac{1}{2} \left[f(x) + f'\left(x\right)\left(\delta^n x(1-x)\right) + \frac{1}{2}f''(x)\left(\delta^n x(1-x)\right)^2 + f(x) - f'(x) \left(\delta^nx(1-x)\right) + \frac{1}{2}f''(x) \left(\delta^nx(1-x)\right)^2\right] + O\left(\left(\Delta t^n\right)^{3/2}\right)\\ = f(x) + \Delta t^n\frac{1}{2}f''(x) \left(x(1-x)\right)^2 + O\left(\left(\Delta t^n\right)^{3/2}\right), \end{gather*} where the first-order terms cancel because the two moves are symmetric (the stated remainder requires $f \in C^3$; for $f \in C^2$ it is $o\left(\Delta t^n\right)$, which suffices). This entails that for $f\in C^2\left(\left[0,1\right]\right)$, $$ \mathcal{L}_nf(x) = \frac{1}{2}\left(x(1-x)\right)^2f''(x) + O\left(\left(\Delta t^n\right)^{1/2}\right),$$ and $\mathcal{L}_n f \to \mathcal{L}f$ as $n \to \infty$, which is what we wanted to show.
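As a quick numerical sanity check on this computation (the helper names below are my own), one can evaluate the discrete generator directly and compare it with $\frac{1}{2}(x(1-x))^2 f''(x)$. For a quadratic $f$ the symmetric two-point average makes the two agree exactly, and for smooth $f$ the gap shrinks as $n$ grows:

```python
import math

def discrete_generator(f, x, T, n):
    """Apply the discrete generator L_n = (P_n - I) / (T/n) to f at x.

    P_n f(x) averages f over the two equally likely successor states
    x +/- delta * x * (1 - x), with delta = sqrt(T/n).
    """
    dt = T / n
    a = math.sqrt(dt) * x * (1 - x)
    return (0.5 * (f(x + a) + f(x - a)) - f(x)) / dt

def limit_generator(f2, x):
    """The limiting generator L f = 1/2 (x(1-x))^2 f''(x), given f''."""
    return 0.5 * (x * (1 - x)) ** 2 * f2(x)
```

For example, with $f(x)=x^2$ the Taylor expansion terminates, so `discrete_generator` matches `limit_generator` up to rounding for every $n$; with $f = \sin$ the discrepancy decays like $\Delta t^n$.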

For tightness, we will use a standard criterion, see section 13 of Billingsley. For $x\in \mathbb{D}_T$ and $A \subset \left[0,T\right]$, define $\omega\left(x, A\right) = \sup_{s,t \in A}\left|x(s) - x(t)\right|$ and for $\eta > 0$, $$\omega'\left(x, \eta\right) = \inf\left\{\max_{0\leq i \leq r-1}\omega\left(x, \left[t_i, t_{i+1}\right[\right): r \in \mathbb{N}^*, \; 0=t_0 < t_1< \dots < t_r = T, \; t_i - t_{i-1} \geq \eta \; \forall i\right\}.$$ A sequence of random variables $\left(Y_n\right)_{n\geq 1}$ is tight in $\mathbb{D}_T$ iff

  1. The sequence $\left(\sup_{t \in \left[0,T \right]}\left|Y_n(t)\right|\right)_{n\geq 1}$ is tight in $\mathbb{R}$
  2. For any $\varepsilon_1, \varepsilon_2 > 0$, there exists $\eta > 0$ such that $\limsup_{n\to \infty} \mathbb{P}\left(\omega'\left(Y_n, \eta\right) \geq \varepsilon_1\right) \leq \varepsilon_2$.

For the first point, note that if $x \in \left[0,1\right]$ and $\delta^n \leq 1$, then $x - \delta^n x(1-x) \geq x^2 \geq 0$ and $x + \delta^n x(1-x) \leq x(2-x) \leq 1$; hence $X_0^n \in \left[0,1\right]$ implies that $X_{t_k}^n \in \left[0,1\right]$ for all $k = 1, \dots, n$. The sequence $\left\{\sup_{t \in \left[0, T\right]}\left|X^n_t\right|\right\}_{n\in \mathbb{N}^*}$ is therefore bounded by $1$, and bounded sequences of real random variables are tight.
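This invariance of $[0,1]$ under one step of the scheme is easy to verify on a grid; a minimal sketch (function names are mine) is:

```python
def step_states(x, delta):
    """Return the two possible successor states of x under one step,
    namely x -/+ delta * x * (1 - x)."""
    d = delta * x * (1 - x)
    return (x - d, x + d)

def stays_in_unit_interval(delta, grid=1000):
    """Grid check: do both successors of every x in [0,1] stay in [0,1]?"""
    return all(
        0.0 <= lo and hi <= 1.0
        for lo, hi in (step_states(k / grid, delta) for k in range(grid + 1))
    )
```

The check passes for any $\delta \leq 1$, which covers $\delta^n = \sqrt{T/n}$ for all $n \geq T$, but fails for large $\delta$ (e.g. $\delta = 3$ sends $x = \frac{1}{2}$ to $\frac{5}{4}$).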

For the second, we will control $\limsup_{n\to\infty}\mathbb{P}\left(\omega'\left(X^n, \Delta t^m\right) \geq \varepsilon_1\right)$, using the partition $t_i^m = \frac{i}{m}T$, $i = 0, \dots, m$, whose spacing is exactly $\Delta t^m$. We fix a specific interval $\left[t_i^m, t_{i+1}^m\right[$. Let $n > m$ and let $j$ be the smallest index such that $t_j^n \in \left[t_i^m, t_{i+1}^m\right[$; then for $k$ such that $t_{j+k}^n \in \left[t_i^m, t_{i+1}^m\right[$, $$X_{t_{j + k}^n}^n = X_{t_j^n}^n + \sum_{l = 1}^k Y_l \,\delta^n X_{t_{j + l - 1}^n}^n\left(1 - X_{t_{j + l - 1}^n}^n\right),$$ where $\left(Y_l\right)$ is an iid sequence such that $\mathbb{P}\left(Y_1 = 1\right) = \mathbb{P}\left(Y_1 = -1\right) = \frac{1}{2}$. Furthermore, since $x(1-x)$ attains its maximum $\frac{1}{4}$ over $\left[0,1\right]$, we have (heuristically; a rigorous version controls the running maximum of the martingale partial sums with Doob's inequality) $$ \left|X_{t_{j + k}^n}^n - X_{t_j^n}^n\right| \lesssim \frac{1}{4}\delta^n\left|\sum_{l = 1}^{\left\lfloor \frac{n}{m}\right\rfloor} Y_l\right|,$$ and twice the right-hand side bounds $\left|X_t^n - X_s^n\right|$ for any $s, t \in \left[t_i^m, t_{i+1}^m\right[$. By the central limit theorem, for $n$ large enough $$ \frac{1}{4}\delta^n\left|\sum_{l = 1}^{\left\lfloor \frac{n}{m}\right\rfloor} Y_l\right| = \frac{\sqrt{T}}{4\sqrt{m}}\left|\sum_{l = 1}^{\left\lfloor \frac{n}{m}\right\rfloor} \frac{Y_l}{\sqrt{\frac{n}{m}}}\right| \approx \frac{\sqrt{T}}{4 \sqrt{m}} \left|G\right|,$$ where $G \sim \mathcal{N}\left(0,1\right)$. Thus with a union bound, we have that $$\limsup_{n \to \infty}\mathbb{P}\left(\omega'\left(X^n, \Delta t^m\right) \geq \varepsilon_1\right) \leq \limsup_{n \to \infty}\mathbb{P}\left(\max_{0 \leq i \leq m - 1}\omega\left(X^n, \left[t^m_i, t^m_{i+1}\right[\right) \geq \varepsilon_1 \right) \leq m\, \mathbb{P}\left(\left|G\right| \geq \frac{2\sqrt{m}}{\sqrt{T}}\varepsilon_1\right).$$ Since the Gaussian tail decays faster than $\frac{1}{m}$, we can make the right-hand side as small as we want by taking $m$ large enough, which shows the second point. With that we have proved that $X^n$ converges to $X$, the solution of $$X_t = X_0 + \int_0^t X_s \left(1- X_s\right)dW_s. $$
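To illustrate the convergence empirically, one can compare samples of $X_T^n$ from the two-point scheme with samples from an Euler-Maruyama discretisation of the limiting SDE. The sketch below (all names are mine, and this is a heuristic check rather than a proof) compares the two endpoint distributions through their first two moments:

```python
import random
import statistics

def discrete_endpoint(x0, T, n, rng):
    """One sample of X^n_T from the two-point scheme with delta = sqrt(T/n)."""
    delta = (T / n) ** 0.5
    x = x0
    for _ in range(n):
        sign = 1 if rng.random() < 0.5 else -1
        x += sign * delta * x * (1 - x)
    return x

def euler_endpoint(x0, T, n, rng):
    """One sample of X_T from an Euler-Maruyama scheme for
    dX = X(1-X) dW, using Gaussian increments of variance T/n."""
    dt = T / n
    x = x0
    for _ in range(n):
        x += x * (1 - x) * rng.gauss(0.0, dt ** 0.5)
    return x

rng = random.Random(42)
a = [discrete_endpoint(0.3, 1.0, 200, rng) for _ in range(4000)]
b = [euler_endpoint(0.3, 1.0, 200, rng) for _ in range(4000)]
```

Both samples should have mean close to $X_0 = 0.3$ (the limit process is a martingale) and closely matching variances, consistent with convergence in distribution.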