About a step in the proof of Doob's $L^{p}$-maximal inequality


We recently covered the theorem mentioned in the title in class.

Assume that $(X_n)$ is a nonnegative submartingale. Then for all $n \in \mathbb{N}$ and all $p \ge 2$, $$ \| \max_{k \le n}X_k \|_p \, \le \, \frac{p}{p-1} \|X_n\|_p . $$
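As a quick numerical illustration (not part of the proof), the following Monte Carlo sketch estimates both sides of the inequality for $p = 2$, using the nonnegative submartingale $X_k = |S_k|$ with $S$ a simple symmetric random walk; the submartingale and all parameter values are my own illustrative choices.

```python
import numpy as np

# Monte Carlo sanity check of Doob's L^p maximal inequality for p = 2,
# using the nonnegative submartingale X_k = |S_k| (absolute value of a
# simple symmetric random walk). All parameter choices are illustrative.
rng = np.random.default_rng(0)
n_paths, n_steps, p = 20_000, 50, 2.0

steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
S = np.cumsum(steps, axis=1)     # martingale paths S_1, ..., S_n
X = np.abs(S)                    # nonnegative submartingale

lhs = np.mean(np.max(X, axis=1) ** p) ** (1 / p)          # || max_k X_k ||_p
rhs = (p / (p - 1)) * np.mean(X[:, -1] ** p) ** (1 / p)   # (p/(p-1)) ||X_n||_p
print(lhs, "<=", rhs)
```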

In the middle of the proof there is one step I can't follow. We introduced a stopping time $\tau$ given by

$$ \tau = \inf \{n : X_n \ge K \} $$

for $K < \infty$, and then deduced

$$ \mathrm{E} \left[ \max_{k \le n}\left( X_{\tau \land k } \right)^p \right] \le \mathrm E \left[ \max_{k \le n-1} \left( X_{\tau \land k } \right) ^p \right] + \mathrm E \left[ \left( X_n \right)^p \right] \le K^p + \mathrm E \left[ \left( X_n \right)^p \right] .$$

I could see that this is true, for example, for a Brownian motion, but why does it also hold in discrete time? Any help is appreciated very much.
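The discrete-time statement can at least be checked numerically. The following Monte Carlo sketch (my own sanity check, under the illustrative choice $X_k = |S_k|$ for a simple symmetric random walk) estimates both sides of the bound $\mathrm{E}[\max_{k \le n}(X_{\tau \land k})^p] \le K^p + \mathrm{E}[(X_n)^p]$.

```python
import numpy as np

# Monte Carlo sanity check, in discrete time, of the step in question:
# E[max_k (X_{tau ^ k})^p] <= K^p + E[X_n^p], with X_k = |S_k| and
# tau = inf{k : X_k >= K}. All parameter choices are illustrative.
rng = np.random.default_rng(1)
n_paths, n_steps, p, K = 20_000, 50, 2.0, 4.0

S = np.cumsum(rng.choice([-1.0, 1.0], size=(n_paths, n_steps)), axis=1)
X = np.abs(S)                    # nonnegative submartingale

hit = X >= K
tau = np.where(hit.any(axis=1), hit.argmax(axis=1), n_steps)  # first index with X >= K

# stopped paths X_{tau ^ k}: freeze each path from index tau onwards
idx = np.minimum(np.arange(n_steps), tau[:, None])
X_stopped = np.take_along_axis(X, idx, axis=1)

lhs = np.mean(np.max(X_stopped, axis=1) ** p)   # E[max_k (X_{tau ^ k})^p]
rhs = K ** p + np.mean(X[:, -1] ** p)           # K^p + E[X_n^p]
print(lhs, "<=", rhs)
```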

Best answer

Assume without loss of generality that $p=1$ (apply the case $p=1$ to the nonnegative submartingale $(X_k^p)$ with level $K^p$; this is a submartingale since $x\mapsto x^p$ is convex and nondecreasing on $[0,\infty)$) and consider $M_n=\max\limits_{1\leqslant k\leqslant n}X_{\tau\wedge k}$; then $$M_n=M_n\mathbf 1_{\tau\geqslant n+1}+\sum_{k=1}^nX_k\mathbf 1_{\tau =k}$$

  • On the event $\{\tau\geqslant n+1\}$, $X_{\tau\wedge k}=X_k<K$ for every $1\leqslant k\leqslant n$ hence $$M_n<K$$

  • For each $1\leqslant k\leqslant n$, the event $\{\tau=k\}$ belongs to the sigma-algebra $\sigma(X_\ell; 1\leqslant \ell\leqslant k)$ and the process $X$ is a submartingale hence $$E(X_k\mathbf 1_{\tau =k})\leqslant E(X_n\mathbf 1_{\tau =k})$$

Summing these yields $$E(M_n)\leqslant KP(\tau\geqslant n+1)+E(X_n\mathbf 1_{\tau\leqslant n})\leqslant K+E(X_n)$$
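The pathwise decomposition at the start of this answer can be verified numerically: on $\{\tau = k\}$ the running maximum of the stopped path equals $X_k = X_\tau$, and on $\{\tau \geqslant n+1\}$ it stays below $K$. The submartingale $X_k = |S_k|$ and the parameters below are illustrative choices of mine, not part of the answer.

```python
import numpy as np

# Pathwise sanity check of M_n = M_n * 1{tau >= n+1} + sum_k X_k * 1{tau = k}:
# on {tau = k} the running max of the stopped path equals X_k, and on
# {tau >= n+1} it stays below K. X_k = |S_k| is an illustrative choice.
rng = np.random.default_rng(2)
n_paths, n_steps, K = 5_000, 50, 4.0

S = np.cumsum(rng.choice([-1.0, 1.0], size=(n_paths, n_steps)), axis=1)
X = np.abs(S)                                                 # nonnegative submartingale

hit = X >= K
tau = np.where(hit.any(axis=1), hit.argmax(axis=1), n_steps)  # first index with X >= K
idx = np.minimum(np.arange(n_steps), tau[:, None])            # freeze each path at tau
M = np.max(np.take_along_axis(X, idx, axis=1), axis=1)        # M_n of the stopped path

stopped = tau < n_steps
X_tau = X[np.arange(n_paths), np.minimum(tau, n_steps - 1)]
print(np.allclose(M[stopped], X_tau[stopped]))                # M_n = X_tau on {tau <= n}
print((M[~stopped] < K).all())                                # M_n < K on {tau >= n+1}
```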

Another answer

In the meantime I have tried to write everything up in one piece. I would be very thankful for feedback.

Assume $(X_{n})$ is a nonnegative submartingale, let $T = \inf \{n : X_n \ge K \}$ for $K < \infty$ (a stopping time), and let $p \ge 2$. We want to show \begin{align*} \mathbb{E} \left[ \max_{1 \le k \le n}\left( X_{T \land k } \right)^p \right] \le K^p + \mathbb{E} \left[ \left( X_n \right)^p \right] . \end{align*} Without loss of generality we can assume $p = 1$ (apply the case $p = 1$ to the nonnegative submartingale $(X_k^p)$ with level $K^p$). So we will show that \begin{align*} \mathbb{E} \left[ \max_{1 \le k \le n}\left( X_{T \land k } \right) \right] \le K + \mathbb{E} \left[ X_n \right] \end{align*} holds. We have \begin{align}\label{eq3} \max_{1 \le k \le n}\left( X_{T \land k } \right) = \max_{1 \le k \le n}\left( X_{T \land k } \right) \cdot 1_{\{T \ge n+1 \}} + \max_{1 \le k \le n}\left( X_{T \land k } \right) \cdot 1_{\{T \le n \}} . \end{align} We take a closer look at the first summand of (\ref{eq3}). Since for every $k$ with $1 \le k \le n$ \begin{align*} X_{T \land k } \cdot 1_{\{T \ge n+1 \}} &= X_{k} \cdot 1_{\{T \ge n+1 \}} \le K \cdot 1_{\{T \ge n+1 \}} \end{align*} (on $\{T \ge n+1\}$ we have $X_k < K$ for all $k \le n$, and off this event both sides vanish), we also have for the maximum \begin{align} \max_{1 \le k \le n}\left( X_{T \land k } \right) \cdot 1_{\{T \ge n + 1\}} \le K \cdot 1_{\{T \ge n + 1\}} . \end{align} Now we consider the second summand of (\ref{eq3}). We have \begin{align*} \max_{1 \le k \le n}\left( X_{T \land k } \right) \cdot 1_{\{T \le n \}} &= \max_{1 \le k \le n}\left( X_{T \land k } \right) \cdot \sum_{l = 1}^{n} 1_{\{T = l \}} \\ &= \sum_{l = 1}^{n} \max_{1 \le k \le n}\left( X_{T \land k } \right) \cdot 1_{\{T = l\}} \\ &= \sum_{l = 1}^{n}X_{l} \cdot 1_{\{T = l\}} , \end{align*} where the last step uses that on $\{T = l\}$ the stopped path satisfies $X_{T \land k} = X_k < K \le X_l$ for $k < l$ and $X_{T \land k} = X_l$ for $k \ge l$, so its maximum equals $X_l$. So in total for (\ref{eq3}) we get \begin{align}\label{eq2} \max_{1 \le k \le n}\left( X_{T \land k } \right) \le K \cdot 1_{\{T \ge n + 1\}} + \sum_{k = 1}^{n}X_{k} \cdot 1_{\{T = k\}} \, . \end{align} For each $k$ with $1\le k \le n$ we have \begin{align}\label{eq4} \{T = k\} \in \sigma\left( X_{1}, X_{2}, \ldots , X_{k}\right) \end{align} as $T$ is a stopping time.
As $(X_{n})$ is a submartingale, we have for every $k$ with $1\le k \le n$ \begin{align*} \mathbb{E} \left[ X_{n} \mid \sigma \left( X_{1}, X_{2}, \ldots , X_{k} \right)\right] & \ge X_{k} \\ \Rightarrow \quad 1_{\{T = k\}} \cdot \mathbb{E} \left[ X_{n} \mid \sigma \left( X_{1}, X_{2}, \ldots , X_{k} \right)\right] & \ge X_{k} \cdot 1_{\{T = k\}} \end{align*} (multiplying by the nonnegative indicator preserves the inequality), and with (\ref{eq4}) we get \begin{align*} \mathbb{E} \left[ 1_{\{T = k\}} \cdot X_{n} \mid \sigma \left( X_{1}, X_{2}, \ldots , X_{k} \right)\right] & \ge X_{k} \cdot 1_{\{T = k\}} \, . \end{align*} Taking expectations on both sides gives \begin{align}\label{eq1} \mathbb{E} \left[ X_{n} \cdot 1_{\{T = k\}} \right] \ge \mathbb{E} \left[ X_{k} \cdot 1_{\{T = k\}} \right] . \end{align} So, taking expectations on both sides of (\ref{eq2}), we have \begin{align*} \mathbb{E} \left[ \max_{1 \le k \le n}\left( X_{T \land k } \right)\right] &\le \mathbb{E} \left[ K \cdot 1_{\{T \ge n+1 \}} \right] + \sum_{k = 1}^{n} \mathbb{E} \left[ X_{k} \cdot 1_{\{T = k\}} \right] , \end{align*} and since $ \mathbb{E} \left[ K \cdot 1_{\{T \ge n+1\}} \right] = K \cdot \mathbb{P} \left( T \ge n+1 \right)$, together with (\ref{eq1}) this gives \begin{align*} \mathbb{E} \left[ \max_{1 \le k \le n}\left( X_{T \land k } \right)\right] &\le K \cdot \mathbb{P} \left( T \ge n+1 \right) + \sum_{k = 1}^{n} \mathbb{E} \left[ X_{n} \cdot 1_{\{T = k\}} \right] .
\end{align*} Finally, using \begin{align*} \sum_{k = 1}^{n} \mathbb{E} \left[ X_{n} \cdot 1_{\{T = k\}} \right] = \mathbb{E} \left[ \sum_{k = 1}^{n} X_{n} \cdot 1_{\{T = k\}} \right] = \mathbb{E} \left[ X_{n} \cdot 1_{\{T \le n\}} \right] , \end{align*} we get \begin{align*} \mathbb{E} \left[ \max_{1 \le k \le n}\left( X_{T \land k } \right)\right] &\le K \cdot \mathbb{P} \left( T \ge n+1 \right) + \mathbb{E} \left[ X_{n} \cdot 1_{\{T \le n\}} \right] , \end{align*} which, since $\mathbb{P}(T \ge n+1) \le 1$ and $X_n \ge 0$, results in \begin{align*} \mathbb{E} \left[ \max_{1 \le k \le n}\left( X_{T \land k } \right)\right] &\le K + \mathbb{E} \left[ X_{n} \right] . \end{align*}
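The full chain for $p = 1$ can also be sanity-checked by simulation. The sketch below estimates $\mathbb{E}[M_n]$, the intermediate bound $K \cdot \mathbb{P}(T \ge n+1) + \mathbb{E}[X_n \cdot 1_{\{T \le n\}}]$, and $K + \mathbb{E}[X_n]$; the submartingale $X_k = |S_k|$ and all parameters are illustrative choices of mine.

```python
import numpy as np

# Monte Carlo sanity check of the final chain for p = 1:
# E[M_n] <= K*P(T >= n+1) + E[X_n * 1{T <= n}] <= K + E[X_n],
# again with the illustrative submartingale X_k = |S_k|.
rng = np.random.default_rng(3)
n_paths, n_steps, K = 50_000, 50, 4.0

S = np.cumsum(rng.choice([-1.0, 1.0], size=(n_paths, n_steps)), axis=1)
X = np.abs(S)                                              # nonnegative submartingale

hit = X >= K
T = np.where(hit.any(axis=1), hit.argmax(axis=1), n_steps) # {T >= n+1} <-> T == n_steps
idx = np.minimum(np.arange(n_steps), T[:, None])           # freeze each path at T
M = np.max(np.take_along_axis(X, idx, axis=1), axis=1)     # max_k X_{T ^ k}

left = M.mean()                                            # E[M_n]
mid = K * np.mean(T == n_steps) + np.mean(X[:, -1] * (T < n_steps))
right = K + X[:, -1].mean()
print(left, "<=", mid, "<=", right)
```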