How does the construction of the stochastic integral rely on predictability of the integrand?


Consider the stochastic integral of a process $H$ with respect to the local martingale $M$: $$ (H\bullet M)_t = \int_{[0,t]} H_s\,\mathrm d M_s. $$

We know that when $H$ is predictable and sufficiently integrable, then $H\bullet M$ is a local martingale. Moreover, it is also well-known that when $H$ is not predictable, then $H\bullet M$ need not be a local martingale. This answer gives a nice example demonstrating this fact. On the other hand, when $M$ also happens to be continuous, then we are able to also define $H\bullet M$ for progressive processes $H$ (cf. Karatzas and Shreve).

This naturally raises the question of where exactly, in the construction of the stochastic integral, predictability of the integrand is used. Unfortunately, I can't see where predictability plays a role. Can anyone help clarify this?


Context and Background

A typical construction of the stochastic integral is to first define the integral for simple predictable processes. It is straightforward to show that when $H$ is simple predictable, then $H\bullet M$ is a local martingale. Standard arguments also show that any sufficiently integrable predictable process is the limit (in a suitable norm) of simple predictable ones.

Then, for a general predictable process $H$ (again, assuming sufficient integrability), we fix a sequence of simple predictable processes $\{ H^n\}$ with $H^n \to H$, and define the integral $H\bullet M=\lim H^n \bullet M$. (One can show that $H \bullet M$ does not depend on our choice of approximating sequence and is thus well-defined.) $H\bullet M$ inherits the (local) martingale property from its approximating sequence.
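As a concrete sanity check of this limiting procedure (my own illustration, not from any textbook construction): in the special case $M = W$ a Brownian motion and $H = W$, the limit has the closed form $\int_0^T W\,\mathrm dW = \tfrac12(W_T^2 - T)$, so one can verify numerically that the left-endpoint simple approximations converge in $L^2$ as the grid is refined. The grid sizes and path counts below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_paths, n_fine = 1.0, 10_000, 2**10

# Simulate Brownian paths on a fine grid (W_0 = 0).
dW = rng.standard_normal((n_paths, n_fine)) * np.sqrt(T / n_fine)
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

# Left-endpoint (simple predictable) approximation of int H dW with H = W:
# sum_i W_{t_i} (W_{t_{i+1}} - W_{t_i}) on a subgrid of the fine grid.
def left_endpoint_sum(Wg):
    return np.sum(Wg[:, :-1] * np.diff(Wg, axis=1), axis=1)

exact = 0.5 * (W[:, -1] ** 2 - T)  # Ito: int_0^T W dW = (W_T^2 - T)/2

errors = []
for k in (4, 6, 8):                              # 2^k subintervals
    approx = left_endpoint_sum(W[:, ::2 ** (10 - k)])
    errors.append(np.mean((approx - exact) ** 2))
    print(2 ** k, errors[-1])                    # L2-error shrinks as the grid refines
```

(The mean-square error here is exactly $\tfrac12\operatorname{Var}\bigl(\sum_i (\Delta W_i)^2\bigr)/2 = T^2/2^{k+1}$, halved with each extra bit of grid resolution.)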

It seems to me that this procedure works just as well even if $H$ is not predictable but merely a càdlàg adapted process, even for general (i.e. not necessarily continuous) local martingales.

What am I missing?

I know I am glossing over quite a few details here, since I don't want to make this post much longer than necessary. I can fill in the details as needed. For reference, the construction I have in mind is the one in Cohen and Elliott (2015).


There are 2 answers below.

BEST ANSWER

It might be helpful to take a look at the discrete martingale transform.

Given a martingale $(M_k)_{k \in \mathbb{N}_0}$ with respect to a filtration $(\mathcal{F}_k)_{k \in \mathbb{N}_0}$ and a process $(C_k)_{k \in \mathbb{N}_0}$, define the discrete martingale transform by

$$(C \bullet M)_n := \sum_{j=1}^n C_j (M_{j}-M_{j-1}), \qquad (C \bullet M)_0 := 0.$$

If $(C_k)_{k \in \mathbb{N}_0}$ is predictable, i.e. $C_k$ is $\mathcal{F}_{k-1}$-measurable for each $k$, then $C \bullet M$ is a martingale (assuming that everything is sufficiently integrable). This corresponds, essentially, to the fact that the stochastic integral of a simple predictable process with respect to a (continuous-time) martingale is a martingale (again, provided that everything is sufficiently integrable). If the process is not predictable, then the martingale property fails to hold in general. Since martingales have constant expectation, the condition

$$0 = \sum_{j=1}^n \mathbb{E}(C_j (M_j-M_{j-1}))$$

is necessary for $C \bullet M$ to be a martingale. Since this needs to hold for all $n$, we actually need

$$0 = \mathbb{E}(C_n (M_n-M_{n-1})), \qquad n \in \mathbb{N}.$$

If $C$ is not predictable, there is no reason why this should hold. The difference $M_n-M_{n-1}$ has expectation zero, but since we are multiplying it by something that can be correlated with $M$, the product will, in general, fail to have zero expectation. For instance, we could choose $C_n := \frac{1}{2} (M_{n-1}+M_n)$ and see that

$$\mathbb{E}\big(C_n (M_n-M_{n-1})\big) = \frac{1}{2}\big( \mathbb{E}(M_n^2)-\mathbb{E}(M_{n-1}^2)\big) = \frac{1}{2} \big(\mathbb{E}\langle M \rangle_n-\mathbb{E}\langle M \rangle_{n-1}\big),$$

where $\langle \cdot \rangle$ denotes the (predictable) quadratic variation. The expression on the right-hand side is, in general, strictly positive. For instance, if $M$ is a "discretized" Brownian motion, then it equals $1/2$. This is exactly the phenomenon which we observe while studying the Stratonovich integral (see the comment by @TheBridge).
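Both effects are easy to see in a quick Monte Carlo sketch (my own illustration; the predictable choice $C_j = \operatorname{sign}(M_{j-1})$ is arbitrary, any $\mathcal F_{j-1}$-measurable integrand would do):

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 50_000, 10

# Discretized Brownian motion: i.i.d. N(0,1) increments, M_0 = 0.
dM = rng.standard_normal((n_paths, n_steps))
M = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dM, axis=1)], axis=1)

# Predictable integrand: C_j = sign(M_{j-1}) is F_{j-1}-measurable.
pred_mean = np.cumsum(np.sign(M[:, :-1]) * dM, axis=1).mean(axis=0)

# Non-predictable "midpoint" integrand: C_j = (M_{j-1} + M_j)/2 peeks at M_j.
mid_mean = np.cumsum(0.5 * (M[:, :-1] + M[:, 1:]) * dM, axis=1).mean(axis=0)

print(np.round(pred_mean, 2))  # approximately 0 at every step: constant expectation
print(np.round(mid_mean, 2))   # approximately 0.5, 1.0, ..., 5.0: drift of 1/2 per step
```

The midpoint transform drifts upward by exactly $\tfrac12$ per step, matching the computation above, while the predictable transform keeps the constant expectation of a martingale.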


saz's excellent answer helped me realise the following point. While predictability of the integrand does not seem to play an essential role in the proof that the stochastic integral of a simple predictable process is a martingale, it is nevertheless crucial in the definition of the stochastic integral of a simple process, and it is this definition that guarantees the martingale property. I will try to illustrate this assertion below.

Denote by $\Lambda$ the space of simple predictable processes. That is, $H\in\Lambda$ whenever there is a finite sequence of stopping times $0=t_0 <t_1<t_2<\cdots<t_n<t_{n+1}=\infty$ and a family $\{H^i\}_{i=0}^n$ of bounded random variables such that $H^i$ is $\mathcal F_{t_i}$-measurable for each $i$, and $$ H_0 =H^0 \quad \text{and} \quad H_t = H^i \; \text{for} \; t\in(t_i,t_{i+1}]. $$ Such processes are bounded and left-continuous, hence predictable.

For a square-integrable martingale $M$ (the local martingale case is similar) and $H \in \Lambda$, the stochastic integral $H\bullet M$ is defined as $$ (H\bullet M)_t = H_0M_0 + \sum_i H^i (M_{t_{i+1}\wedge t}-M_{t_i\wedge t}). \tag{$\star$}\label{1} $$

It is easy to verify that the expression on the right-hand side is also a square-integrable martingale. This verification uses only that each $H^i$ is $\mathcal F_{t_i}$-measurable; it makes no use of the left-continuity of the paths of $H$. However, the fact that $H$ is predictable is important for the definition \eqref{1} itself. If $H$ were simple but not necessarily predictable, then \eqref{1} would no longer be the appropriate definition of the stochastic integral of $H$.

We want our stochastic integral to behave like a classical integral whenever a classical definition is applicable. This means that if $H$ were simple, right-continuous, and adapted instead of predictable (that is, if $H_t = H^i$ for $t\in[t_i,t_{i+1})$), then the correct definition of $H\bullet M$, by analogy with Stieltjes integration, would be $$ (H\bullet M)_t = H_0M_0 + \sum_i H^i (M_{t_{i+1}\wedge t-}-M_{t_i\wedge t-}), $$ where $X_{t-}= \lim_{s\uparrow t} X_s$.

In this case, we can no longer guarantee that $H \bullet M$ is a martingale, unless, for example, $M$ happens to be continuous, so that $M_t = M_{t-}$ and the proof of the martingale property for \eqref{1} goes through unchanged. (I believe this is also indicative of why we can define the stochastic integral for a much larger class of integrands when the integrator is continuous.)
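A toy discrete analogue of this failure (my own construction, not from the answers above): take a compensated Bernoulli walk as the jump martingale $M$, and the adapted integrand $H_j = \mathbb 1_{\{j \ge T\}}$ with $T$ the first jump time. Since $H_j$ "sees" the jump at time $j$, evaluating the integrand at the current time rather than through its left limit creates a drift:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, p = 50_000, 30, 0.1

# Compensated Bernoulli jump martingale: dM_j = xi_j - p, xi_j ~ Bernoulli(p).
xi = (rng.random((n_paths, n_steps)) < p).astype(float)
dM = xi - p

# H_j = 1_{j >= T}, T = first jump time: adapted, but H_j depends on xi_j
# itself, so it is NOT predictable.
H = np.cumsum(xi, axis=1) >= 1
naive = np.cumsum(H * dM, axis=1)            # evaluates H *at* the jump time

# Predictable (left-limit) version: H_{j-} = 1_{j-1 >= T}.
H_prev = np.concatenate([np.zeros((n_paths, 1), dtype=bool), H[:, :-1]], axis=1)
pred = np.cumsum(H_prev * dM, axis=1)

print(naive.mean(axis=0)[-1])  # clearly positive: the naive integral drifts upward
print(pred.mean(axis=0)[-1])   # approximately 0: martingale property preserved
```

The drift of the naive version comes entirely from the single term where $H$ and $M$ jump together, exactly as in the continuous-time discussion above; the left-limit (predictable) version keeps zero expectation.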