Karatzas and Shreve's use of the localization technique to prove convergence in probability of the quadratic sums of $X$ to $\langle X \rangle_t$.


This is the proof, from Karatzas and Shreve, of the convergence in probability of the quadratic variation of a continuous martingale to $\langle X \rangle_t$. I do not understand the final sentence of the proof.

The first part proves the convergence in probability of $V_t^{(2)}(\Pi) = \sum_{k=1}^m |X_{t_k} - X_{t_{k-1}}|^2$ to $\langle X \rangle_t$, the natural increasing process in the Doob-Meyer decomposition of $X^2$, under the assumption that both $X$ and $\langle X \rangle$ are bounded. They then introduce the technique of localization. I can see that, given the stopping times $T_n$, we get the desired convergence for the stopped processes. But how do we combine the final two facts, namely $\lim_{n \to \infty} P[T_n < t] = 0$ and $$\lim_{\Vert \Pi \Vert \to 0} E\left[\sum_{k=1}^m (X_{t_k \wedge T_n} - X_{t_{k-1} \wedge T_n})^2 - \langle X \rangle_{t \wedge T_n}\right]^2 = 0,$$ to prove that $V_t^{(2)}(\Pi)$ converges to $\langle X \rangle_t$ in probability?


Best Answer

Some notation first, to abstract away the irrelevant details. Since $t$ is fixed, we drop it from the new notation altogether.

$$Z_{m,n} = \sum_{k=1}^m\left(X_{t_k\wedge T_n} - X_{t_{k-1}\wedge T_n}\right)^2 $$ $$Z_{m} = \sum_{k=1}^m\left(X_{t_k} - X_{t_{k-1}}\right)^2 $$ You can think of $m$ as a partition index: $m \to \infty$ corresponds to $\Vert\Pi\Vert \to 0$.

$$Y_n = \langle X\rangle_{t\wedge T_n}$$ $$Y = \langle X\rangle_{t}$$

We have $Z_{m,n} \to Z_{m}$ and $Y_n \to Y$ almost surely as $n \to \infty$, for every fixed $m$; indeed, on the event $\{T_n \ge t\}$ the stopped and unstopped quantities coincide. Also, $Z_{m,n} \to Y_{n}$ in $L^2$ as $m \to \infty$, for every fixed $n$.

What we need to show is that $Z_{m} \to Y$ in probability.

Fix some $\varepsilon > 0$. Then

$$P\{\lvert Z_{m}-Y\rvert > \varepsilon\} \leq P\{\lvert Z_{m}-Z_{m,n}\rvert > \varepsilon/3\} + P\{\lvert Y-Y_n\rvert > \varepsilon/3\} + P\{\lvert Y_{n}-Z_{m,n}\rvert > \varepsilon/3\}$$
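This bound is the union bound applied to the triangle inequality; spelled out (my elaboration, not part of the original answer):

```latex
\[
\lvert Z_m - Y\rvert
\le \lvert Z_m - Z_{m,n}\rvert + \lvert Y_n - Z_{m,n}\rvert + \lvert Y - Y_n\rvert ,
\]
% so if the left side exceeds eps, at least one of the three summands
% must exceed eps/3, giving the inclusion of events
\[
\{\lvert Z_m - Y\rvert > \varepsilon\}
\subseteq \{\lvert Z_m - Z_{m,n}\rvert > \varepsilon/3\}
\cup \{\lvert Y - Y_n\rvert > \varepsilon/3\}
\cup \{\lvert Y_n - Z_{m,n}\rvert > \varepsilon/3\}.
\]
```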

The first two terms on the right-hand side are each at most $P\{T_n < t\}$, since on $\{T_n \ge t\}$ we have $Z_{m,n} = Z_m$ and $Y_n = Y$; crucially, this bound does not depend on $m$, so we can fix $n$ large enough to make both terms as small as we want. With $n$ fixed, the $L^2$ convergence together with Chebyshev's inequality lets us refine the partition (increase $m$) until the third term is as small as we want. This finishes the proof.
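As a numerical sanity check of the statement being proved (not part of Karatzas and Shreve's argument): for Brownian motion $X = W$ we have $\langle X \rangle_t = t$, so the quadratic sums $V_t^{(2)}(\Pi)$ should concentrate around $t$ as the mesh $\Vert\Pi\Vert$ shrinks. A minimal Monte Carlo sketch, assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
t, n_paths = 1.0, 2000

def quadratic_sums(n_steps):
    """Sum of squared increments over a uniform partition of [0, t]."""
    dW = rng.normal(0.0, np.sqrt(t / n_steps), size=(n_paths, n_steps))
    return (dW ** 2).sum(axis=1)

# Empirical P(|V_t^(2)(Pi) - <X>_t| > 0.2) for finer and finer partitions;
# this should shrink toward 0, illustrating convergence in probability.
fracs = [float(np.mean(np.abs(quadratic_sums(n) - t) > 0.2))
         for n in (10, 100, 1000)]
print(fracs)
```

The fractions decrease as the partition is refined, which is exactly convergence in probability of $V_t^{(2)}(\Pi)$ to $\langle X \rangle_t$.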