More preliminaries of the Martingale Convergence Theorem


[Two screenshots of the lemma and its proof, not reproduced here.]

Really struggling with this lemma.

Not sure about the general structure of the proof. Why have we chosen $g$ to be orthogonal to all functions of the form (4.3.1)?

Why should $G(\lambda)=0$? Does it have something to do with the inner product on $L^2$?

What justifies the analytic extension, and why would we want to do this?

I cannot find any justification in my book that if $G=0$ on $\mathbb{R}^n$ then, since $G$ is analytic, $G$ would be equal to $0$ on $\mathbb{C}^n$.

Next, I cannot understand why we are using Fourier inversion.

Finally, I cannot see how the last paragraph justifies that the subset is dense. Why is $g=0$ significant? I think this may help me understand the general structure of the proof.

Best Answer

The proof is based on this standard fact from functional analysis:

Theorem. Let $H$ be a Hilbert space and let $E \subset H$ be a linear subspace. Let $E^\perp := \{ x \in H : \langle x,y \rangle = 0 \text{ for all } y \in E\}$. Then $E$ is dense in $H$ iff $E^\perp = \{0\}$.

So for this proof, $H = L^2(\mathcal{F}_T, P)$ and $E$ is the linear span described in the statement. We choose an arbitrary $g \in E^\perp$ and show we must have $g=0$. This will show that $E^\perp = \{0\}$ and hence $E$ is dense.

Next verify that if $h \in L^2([0,T])$ is a step function, having a jump of size $\lambda_i$ at the point $t_i$, $i=1,\dots,n$, and constant in between, then $\int_0^T h(t)\,dB_t = \lambda_1 B_{t_1} + \dots + \lambda_n B_{t_n}$. (This is immediate from the definition of the stochastic integral.) Then $$\exp\left(\int_0^T h(t)\,dB_t - \frac{1}{2} \int_0^T h(t)^2\,dt\right) = \exp\left(-\frac{1}{2} \int_0^T h(t)^2\,dt\right) \cdot \exp(\lambda_1 B_{t_1} + \dots + \lambda_n B_{t_n})$$ which is a scalar multiple of $\exp(\lambda_1 B_{t_1} + \dots + \lambda_n B_{t_n})$, which is therefore in $E$. Since $g$ was assumed to be in $E^\perp$, we have $$\int_\Omega \exp(\lambda_1 B_{t_1} + \dots + \lambda_n B_{t_n}) g\,dP = 0$$ which is (4.3.2).
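To see the pathwise identity concretely, here is a small numerical sketch (my own illustration, not from the book), taking $h(t) = \sum_i \lambda_i \mathbf{1}_{[0,t_i)}(t)$: the Itô sum of $h$ against a Brownian path over a partition equals $\sum_i \lambda_i B_{t_i}$ exactly, path by path, provided the $t_i$ lie on the grid.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1.0
N = 1000                          # number of grid cells
dt = T / N

# Illustrative jump times (as grid indices) and sizes lambda_i
t_idx = np.array([200, 500, 900])
lam = np.array([0.7, -1.3, 2.0])

# One Brownian path on the grid: B[k] = B_{k*dt}
dB = rng.normal(0.0, np.sqrt(dt), N)
B = np.concatenate(([0.0], np.cumsum(dB)))

# Step function h(t) = sum_i lam_i * 1_{[0, t_i)}(t), constant on each cell
h = np.zeros(N)
for lam_i, k in zip(lam, t_idx):
    h[:k] += lam_i

# Ito sum  sum_k h(t_k)(B_{t_{k+1}} - B_{t_k})  vs  sum_i lam_i * B_{t_i}
ito_sum = np.sum(h * dB)
rhs = np.sum(lam * B[t_idx])
print(ito_sum, rhs)               # equal up to floating-point rounding
```

The two quantities agree exactly (not just in the limit), because $\int_0^T \mathbf{1}_{[0,t_i)}\,dB = B_{t_i}$ is a telescoping sum of increments.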

The fact that $G$ has a complex analytic extension should be explained in Exercise 2.8(b). But rather than using facts about real analytic functions, it is probably easier to verify explicitly that $G$ is complex differentiable by differentiating under the integral sign (hopefully familiar from a measure theory course). You will need some estimates to justify this which are presumably the same ones from Exercise 2.8(b). Alternatively, use differentiation under the integral sign to show that $G$ satisfies the Cauchy-Riemann equations.

Now it is a standard fact from complex analysis that if $G$ is analytic on $\mathbb{C}^n$ and vanishes on $\mathbb{R}^n$, then $G=0$. When $n=1$ you can say more: if the zeros of $G$ have a limit point (which $\mathbb{R}^1$ certainly does) then $G=0$. You can find this as Theorem 6.9 in Bak and Newman. Our statement for general $n$ can then be proved by induction. Suppose it holds for $n-1$, and suppose $G : \mathbb{C}^n \to \mathbb{C}$ is analytic and vanishes on $\mathbb{R}^n$. Fix $w \in \mathbb{R}$ and consider the function $G_w(z_1, \dots, z_{n-1}) = G(z_1, \dots, z_{n-1}, w)$. Then $G_w : \mathbb{C}^{n-1} \to \mathbb{C}$ is analytic and vanishes on $\mathbb{R}^{n-1}$ so by the inductive hypothesis $G_w = 0$. Next fix $z_1, \dots, z_{n-1} \in \mathbb{C}$ and let $f(u) = G(z_1, \dots, z_{n-1}, u)$, so that $f : \mathbb{C} \to \mathbb{C}$ is analytic. For $u \in \mathbb{R}$ we have $f(u) = G_u(z_1, \dots, z_{n-1}) = 0$, so $f$ vanishes on $\mathbb{R}$ and by the $n=1$ case we have $f = 0$. Thus $G(z_1, \dots, z_{n-1}, u) = 0$ for all $u \in \mathbb{C}$. But $z_1, \dots, z_{n-1}$ were arbitrary so we must have $G=0$.

Finally, why do we use the Fourier transform? Well, because it works. Once we have the complex extension of $G$, we know that for any $y_1, \dots, y_n$ we get $\int_\Omega \exp\left(i y_1 B_{t_1} + \dots + i y_n B_{t_n}\right) g\,dP = 0$. But, roughly speaking, the Fourier inversion formula says that any reasonable function $\phi$ of $B_{t_1}, \dots, B_{t_n}$ is an "infinite linear combination" of functions of the form $\exp(\lambda_1 B_{t_1} + \dots + \lambda_n B_{t_n})$ (think of the integral in the inverse Fourier transform as a sort of infinite sum; in fact, it really is the limit of finite sums). So it is not surprising that we then get $\int_\Omega \phi\left(B_{t_1},\dots, B_{t_n}\right) g\,dP = 0$.
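To make the "infinite linear combination" idea tangible, here is a small numerical sketch (my own illustration): the inverse Fourier integral of a Gaussian, discretized as a finite Riemann sum, really is a finite linear combination of exponentials $x \mapsto e^{iy_k x}$, and it reproduces the function to high accuracy.

```python
import numpy as np

# Target function and its closed-form Fourier transform:
#   phi(x)     = exp(-x^2/2)
#   phi_hat(y) = ∫ phi(x) e^{-ixy} dx = sqrt(2*pi) * exp(-y^2/2)
phi = lambda x: np.exp(-x**2 / 2)
phi_hat = lambda y: np.sqrt(2 * np.pi) * np.exp(-y**2 / 2)

# Fourier inversion as a *finite* sum:
#   phi(x) ≈ (1/2π) Σ_k phi_hat(y_k) e^{i x y_k} Δy
L, M = 10.0, 2000
y = np.linspace(-L, L, M)
dy = y[1] - y[0]

x = np.linspace(-3.0, 3.0, 50)
approx = (phi_hat(y)[None, :] * np.exp(1j * np.outer(x, y))).sum(axis=1) * dy / (2 * np.pi)

err = np.max(np.abs(approx.real - phi(x)))
print(err)   # tiny: the finite sum of exponentials matches phi
```

In the proof the same principle applies with $x$ replaced by the random vector $(B_{t_1},\dots,B_{t_n})$: since each exponential integrates to $0$ against $g\,dP$, so does the limit of their finite linear combinations.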

On the other hand, it had previously been shown (presumably Lemma 4.3.1) that the subspace $F$ spanned by functions of the form $\phi\left(B_{t_1},\dots, B_{t_n}\right)$, where $n$ ranges over all positive integers, is dense in $L^2(\mathcal{F}_T, P)$, and the conclusion of my previous paragraph says that $g \in F^\perp$. By the other direction of the theorem at the top of this answer, we conclude that $g=0$, which is what we needed.