What is the expectation of the positive part of a stochastic integral?


How can I compute the integral of only the positive part of a Brownian motion process? I would like to compute $$\int_{\{W(t)>0\}}W(t) dW(t)$$ In the end I want the expectation of the above integral.

The probability that the process is below 0 in this interval is 0.5 from the reflection principle. In terms of step functions this seems to be something like only keep half of them but I don't really know how to translate that into a rigorous argument.

I haven't been able to find a similar example anywhere. Is it possible to compute this integral?
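As a quick sanity check on the symmetry intuition (a sketch I added; it only assumes $W(t)\sim\mathcal N(0,t)$ at a fixed time $t$), the claim that the process is below $0$ with probability $0.5$ at any fixed time is easy to verify by simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
t = 0.7  # an arbitrary fixed time
# W(t) ~ N(0, t), so P(W(t) < 0) = 1/2 by symmetry of the normal distribution
samples = rng.normal(0.0, np.sqrt(t), size=100_000)
frac_below = np.mean(samples < 0)
print(frac_below)  # ≈ 0.5
```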

From simulating it in Python, it seems to look like this. I corrected the simulation, since I wasn't multiplying by dw in each step. Now it seems clear that it is a martingale (thanks to Stratos supports the strike for pointing out that it's a martingale).

[simulation plot]

import numpy as np
import matplotlib.pyplot as plt
# Set the parameters for the simulation
T = 1.0  # Total time
N = 1000  # Number of steps
dt = T / N  # Time step size
M = 1000  # Number of realizations (simulations)

# Create an array to store the realizations
g_t_matrix = np.zeros((M, N + 1))

# Simulate M realizations of the stochastic process
for i in range(M):
    # Generate a Wiener process realization: W(t) for each step
    dw = np.random.normal(0, np.sqrt(dt), N + 1)
    W_t = dw.cumsum()

    # Apply the function g(t) to the Wiener process
    g_t = np.maximum(0, W_t)
    g_t_matrix[i] = g_t*dw

# Calculate the average of g(t) across all realizations
g_t_average = g_t_matrix.mean(axis=0)


BEST ANSWER

Since the integral is a square-integrable martingale, its expectation is zero, as pointed out by other users.

Going further, consider the function $f(x) = \frac{1}{2}\max\{0, x\}^2$. Although this is not a $C^2$-function, it can be verified that we can still apply Itô's lemma to obtain

\begin{align*} \mathrm{d}f(W_t) &= f'(W_t) \, \mathrm{d}W_t + \frac{1}{2}f''(W_t) \, \mathrm{d}t \\ &= \max\{0, W_t\} \, \mathrm{d}W_t + \frac{1}{2} \mathbf{1}[W_t \geq 0] \, \mathrm{d}t. \end{align*}

(For instance, we may apply Itô's Lemma to $f_{\epsilon}(x) = \frac{1}{4}x(x + \sqrt{x^2 + \epsilon^2})$ and then take limit as $\epsilon \to 0$.) Consequently, it follows that

$$ Y_T := \int_{0}^{T} \max\{0, W_t\} \, \mathrm{d}W_t = \frac{1}{2}\max\{0, W_T\}^2 - \frac{1}{2} \int_{0}^{T} \mathbf{1}[W_t \geq 0] \, \mathrm{d}t. $$
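The smoothing step can be sanity-checked numerically (a sketch; the closed-form derivatives below were computed by hand from $f_{\epsilon}(x) = \frac{1}{4}x(x + \sqrt{x^2 + \epsilon^2})$): for small $\epsilon$, $f_{\epsilon}'$ is close to $\max\{0, x\}$ and $f_{\epsilon}''$ is close to $\mathbf{1}[x > 0]$ away from the kink at $x = 0$.

```python
import numpy as np

def f_eps_prime(x, eps):
    # d/dx of (1/4) x (x + sqrt(x^2 + eps^2)), computed by hand
    s = np.sqrt(x**2 + eps**2)
    return 0.25 * (2 * x + s + x**2 / s)

def f_eps_second(x, eps):
    # second derivative, computed by hand
    s = np.sqrt(x**2 + eps**2)
    return 0.25 * (2 + 3 * x / s - x**3 / s**3)

x = np.array([-2.0, -0.5, 0.5, 2.0])  # test points away from the kink at 0
eps = 1e-8
print(f_eps_prime(x, eps))   # ≈ max(0, x) = [0, 0, 0.5, 2]
print(f_eps_second(x, eps))  # ≈ 1[x > 0] = [0, 0, 1, 1]
```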

Below is the probability histogram of $10^5$ simulated samples of $Y_1$:

[histogram]
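The identity for $Y_T$ can also be checked directly by simulation (a sketch I added): the two sides agree pathwise up to discretization error, and, consistently with $\mathbb{E}[Y_1] = 0$, each term on the right has expectation $1/4$ when $T = 1$, since $\mathbb{E}[\frac{1}{2}\max\{0, W_1\}^2] = \frac{1}{4}\mathbb{E}[W_1^2] \cdot 2 \cdot \frac{1}{2} = \frac{1}{4}$ by symmetry and $\mathbb{E}[\frac{1}{2}\int_0^1 \mathbf{1}[W_t \geq 0]\,dt] = \frac{1}{2}\int_0^1 \mathbb{P}(W_t \geq 0)\,dt = \frac{1}{4}$.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, M = 1.0, 500, 5000
dt = T / N

# Brownian increments and left-endpoint values W(t_i) (the adapted choice)
dW = rng.normal(0.0, np.sqrt(dt), size=(M, N))
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((M, 1)), W[:, :-1]])

# Left side: Riemann-Ito sum for Y_T = int_0^T max(0, W) dW
Y = np.sum(np.maximum(0.0, W_left) * dW, axis=1)

# Right side: (1/2) max(0, W_T)^2 - (1/2) * time spent in [0, inf)
rhs = 0.5 * np.maximum(0.0, W[:, -1]) ** 2 \
      - 0.5 * np.sum(W_left >= 0, axis=1) * dt

print(np.mean(Y))                # ≈ 0 (martingale)
print(np.mean(np.abs(Y - rhs)))  # small discretization error
```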


Here is a corrected simulation in Python, fixing the errors in OP's code:

import numpy as np
import matplotlib.pyplot as plt

# Set the parameters for the simulation
T = 1.0  # Total time
N = 1000  # Number of steps
dt = T / N  # Time step size
M = 1000  # Number of realizations (simulations)

# Create an array to store the realizations
Xt_list = np.zeros((M, N))

# Simulate M realizations of the stochastic process
for k in range(M):
    # Generate a Wiener process realization: W(t) for each step
    dW = np.random.normal(0, np.sqrt(dt), N)
    W = np.zeros(N)
    W[1:] = dW.cumsum()[:-1]

    Xt_list[k] = np.cumsum(np.maximum(0, W) * dW)

# Calculate the average of X_t across all realizations
Xt_avg = Xt_list.mean(axis=0)

# Plot the first 6 sample paths
fig, axs = plt.subplots(3, 2, figsize=(10, 10))

t_list = np.linspace(0, T, N)
for k in range(6):
    i = k // 2
    j = k % 2
    axs[i, j].plot(t_list, Xt_list[k])
    axs[i, j].set_title(f"Sample path #{(k+1)}")

plt.tight_layout()
plt.show()

Sample results


And here is a simulation using Mathematica:

SimulateIntegral[T_, dt_] := Module[{n, dW, W},
    n = Floor[T/dt];
    dW = RandomVariate[NormalDistribution[0, Sqrt[dt]], n];
    W = Take[FoldList[Plus, 0, dW], n];
    FoldList[Plus, 0, (Max[#, 0] & /@ W)*dW]
];

GraphicsGrid[
    Partition[
        Table[
            ListPlot[
                SimulateIntegral[1, 0.001],
                DataRange -> {0, 1},
                Joined -> True,
                PlotRange -> All,
                Frame -> True,
                Axes -> False
            ], 
        {6}],
    2]
]

Sample results


This is to expand a bit on my comment.

There are different ways to see that the process $Y_t := \int_0^t W(s)\mathbf 1\{W(s)>0\}dW(s)$ is a martingale, but a very useful one is given by the following theorem, which can be found e.g. in chapter 3 of Øksendal's book:

Theorem: Denote by $(\Omega,\mathcal F,\mathbb P)$ the probability space. If $f:[0,\infty)\times\Omega\to\mathbb R$ is such that

  1. $f$ is $\mathcal B\times\mathcal F$-measurable, where $\mathcal B$ is the Borel $\sigma$-algebra of $[0,\infty)$,
  2. $\big[f(t,\cdot)\big]_t$ is adapted to the filtration $\big[\mathcal F_t\big]_t$ generated by $W$, i.e. $\mathcal F_t = \sigma\big(W(s) : s \leq t\big)$,
  3. $\mathbb E\left[\int_0^T f(t,\cdot)^2\ dt\right] <\infty$.

Then $\int_0^t f(s,\cdot)\ dW(s)$ is a martingale on $[0,T]$.

In the case of your process $Y_t$, it is easy to see that the function $f:(t,\omega)\mapsto W(t)\mathbf 1\{W(t)>0\}$ satisfies the requirements of the theorem: the first two points are clear, and for the third one note that $$\begin{align}\mathbb E\left[\int_0^T W(t)^2\mathbf 1\{W(t)>0\}\ dt\right]&\le\mathbb E\left[\int_0^T W(t)^2\ dt\right]\\ &=\int_0^T\mathbb E\left[W(t)^2\right]\ dt\\ &=\frac{T^2}{2}<\infty\ \ \forall T>0\end{align} $$ where we used Tonelli's theorem and the fact that $W(t)$ has mean zero and variance $t$.
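For what it's worth, the third condition can also be estimated by Monte Carlo (a sketch I added): by symmetry $\mathbb E\left[W(t)^2\mathbf 1\{W(t)>0\}\right] = t/2$, so the expectation actually equals $T^2/4$, comfortably below the bound $T^2/2$ used above.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, M = 1.0, 500, 5000
dt = T / N

dW = rng.normal(0.0, np.sqrt(dt), size=(M, N))
W = np.cumsum(dW, axis=1)

# Monte Carlo estimate of E[ int_0^T W(t)^2 1{W(t) > 0} dt ]
estimate = np.mean(np.sum(W**2 * (W > 0), axis=1) * dt)
print(estimate)  # ≈ T^2/4 = 0.25
```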

All of this tells us that $\mathbb E[Y_t] = Y_0 = 0$, as desired.