Derive PDE from $n^{\text{th}}$ moment of stopping time of SDE


Assume $X_t$ is an Ito process that satisfies the SDE

$$dX_t = b(X_t)dt + \sigma(X_t) dW_t, \hspace{1cm} X_0 \in D\subset \mathbb{R}^d$$

where $D$ is a bounded open set with a smooth boundary $\partial D$. Define the stopping time

$$\tau := \inf\{t > 0: X_t \in \partial D\}$$

Derive the elliptic boundary value problem satisfied by

$$T_n(x) = E^x[\tau^n], \hspace{1cm} T_n(x) = 0 \ \ \forall\, x \in \partial D$$

($E^x$ is expectation conditioned on $X_0 = x$.)

We covered the case $n = 1$, but I didn't understand the proof well, and my attempts at the general moment case honestly don't appear to go anywhere. I know that for $n = 1$ one needs to work with the backward equation, but I haven't had much luck here.


Overview

Boundary value problems for conditional expectations involving a stopping (hitting) time can be handled via the stationary Feynman-Kac formula. In the time-dependent case, this formula relates the conditional expectation $$ \mathbb{E}\left[\Phi(X_T)\exp\left(\int_t^Tq(X_s,s){\rm d}s\right)+\int_t^T\Psi(X_s,s)\exp\left(\int_t^sq(X_r,r){\rm d}r\right){\rm d}s\Bigg|X_t=x\right] $$ with respect to the Ito process $$ {\rm d}X_t=b(X_t,t){\rm d}t+\sigma(X_t,t){\rm d}W_t $$ to the solution of the following terminal value problem: \begin{align} \frac{\partial u}{\partial t}(x,t)+b(x,t)\frac{\partial u}{\partial x}(x,t)+\frac{1}{2}\sigma^2(x,t)\frac{\partial^2u}{\partial x^2}(x,t)+q(x,t)u(x,t)+\Psi(x,t)&=0,&&t<T,\\ u(x,t)&=\Phi(x),&&t=T. \end{align} A sketch of this proof can be found here. The time-independent case, however, admits some variants, which are explained below.
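As a quick sanity check of the time-dependent formula, one can verify a closed-form example symbolically. The sketch below is a toy case of my own (not part of the original argument): $X$ is a standard Brownian motion ($b=0$, $\sigma=1$), $q=\Psi=0$ and $\Phi(x)=x^2$, for which $u(x,t)=\mathbb{E}[X_T^2\mid X_t=x]=x^2+(T-t)$.

```python
import sympy as sp

# Toy check of the time-dependent Feynman-Kac PDE (assumed setting:
# b = 0, sigma = 1, q = 0, Psi = 0, Phi(x) = x^2, X a standard BM,
# so u(x, t) = E[X_T^2 | X_t = x] = x^2 + (T - t)).
x, t, T = sp.symbols('x t T')
u = x**2 + (T - t)

# Backward PDE residual: u_t + (1/2) u_xx (the b, q, Psi terms vanish here)
residual = sp.diff(u, t) + sp.Rational(1, 2) * sp.diff(u, x, 2)
print(sp.simplify(residual))   # 0
print(u.subs(t, T))            # x**2, the terminal condition Phi(x)
```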

The following paragraphs explain

  • How to target an appropriate stochastic process $Y_t$,
  • How to relate the conditional expectation $\mathbb{E}\left(Y_t|X_0\right)$ to some elliptic equation, and
  • How to apply this result to the original question.

Specifically, the answer to the original question is highlighted at the end of this post. You may scroll down to that part to get a glimpse of it before diving into the somewhat involved derivations.

Target an appropriate $Y_t$

Consider the following stochastic process $$ Y_t=\Phi(X_t)\exp\left(\int_0^tq(X_s){\rm d}s\right)+\int_0^t\Psi(X_s)\exp\left(\int_0^sq(X_r){\rm d}r\right){\rm d}s, $$ where $q$, $\Phi$ and $\Psi$ are appropriate $\mathbb{R}$-valued functions on $D\subseteq\mathbb{R}^d$, to be determined.

This target process may look unintuitive; it is a conventional construction rather than something derived from first principles, and it plays an instrumental role in finance and physics.

Compute ${\rm d}Y_t$

Differentiating $Y_t$ yields $$ {\rm d}Y_t={\rm d}\left[\Phi(X_t)\exp\left(\int_0^tq(X_s){\rm d}s\right)\right]+\Psi(X_t)\exp\left(\int_0^tq(X_s){\rm d}s\right){\rm d}t, $$ where \begin{align} &{\rm d}\left[\Phi(X_t)\exp\left(\int_0^tq(X_s){\rm d}s\right)\right]\\ &={\rm d}\Phi(X_t)\exp\left(\int_0^tq(X_s){\rm d}s\right)+\Phi(X_t){\rm d}\exp\left(\int_0^tq(X_s){\rm d}s\right)+\\ &\qquad{\rm d}\left<\Phi(X_{\cdot}),\exp\left(\int_0^{\cdot}q(X_s){\rm d}s\right)\right>_t\\ &={\rm d}\Phi(X_t)\exp\left(\int_0^tq(X_s){\rm d}s\right)+\Phi(X_t){\rm d}\exp\left(\int_0^tq(X_s){\rm d}s\right)\\ &={\rm d}\Phi(X_t)\exp\left(\int_0^tq(X_s){\rm d}s\right)+\Phi(X_t)q(X_t)\exp\left(\int_0^tq(X_s){\rm d}s\right){\rm d}t, \end{align} where the quadratic covariation term vanishes because the exponential factor is a continuous process of finite variation. Moreover, with Einstein summation notation adopted instead of vectors and matrices, $$ {\rm d}\Phi(X_t)=\partial_j\Phi(X_t){\rm d}X_t^j+\frac{1}{2}\partial_j\partial_k\Phi(X_t){\rm d}\left<X^j,X^k\right>_t. $$ Provided that $$ {\rm d}X_t^j=b^j(X_t){\rm d}t+\sigma_k^j(X_t){\rm d}W_t^k, $$ the last line reads \begin{align} {\rm d}\Phi(X_t)&=\partial_j\Phi(X_t)\left[b^j(X_t){\rm d}t+\sigma_k^j(X_t){\rm d}W_t^k\right]+\frac{1}{2}\partial_j\partial_k\Phi(X_t)\sigma_r^j(X_t)\sigma_s^k(X_t){\rm d}\left<W^r,W^s\right>_t\\ &=\partial_j\Phi(X_t)\left[b^j(X_t){\rm d}t+\sigma_k^j(X_t){\rm d}W_t^k\right]+\frac{1}{2}\partial_j\partial_k\Phi(X_t)\sigma_r^j(X_t)\sigma_s^k(X_t)\rho^{rs}{\rm d}t\\ &=\left[b^j(X_t)\partial_j\Phi(X_t)+\frac{1}{2}\rho^{rs}\sigma_r^j(X_t)\sigma_s^k(X_t)\partial_j\partial_k\Phi(X_t)\right]{\rm d}t+\partial_j\Phi(X_t)\sigma_k^j(X_t){\rm d}W_t^k. \end{align} Here $\rho^{rs}$ is defined via ${\rm d}\left<W^r,W^s\right>_t=\rho^{rs}{\rm d}t$; for independent Brownian components, $\rho^{rs}=\delta^{rs}$. With these results, $$ {\rm d}Y_t=\left(H(X_t){\rm d}t+\partial_j\Phi(X_t)\sigma_k^j(X_t){\rm d}W_t^k\right)\exp\left(\int_0^tq(X_s){\rm d}s\right), $$ where $$ H(x)=b^j(x)\partial_j\Phi(x)+\frac{1}{2}\rho^{rs}\sigma_r^j(x)\sigma_s^k(x)\partial_j\partial_k\Phi(x)+q(x)\Phi(x)+\Psi(x). $$

Construct $\mathbb{E}\left(Y_{\tau}|X_0\right)$

Suppose $q$, $\Phi$ and $\Psi$ are such that $$ H(x)=0 $$ for all $x\in D$. Thanks to this assumption, $$ Y_u-Y_0=\int_0^u{\rm d}Y_t=\int_0^uK_k(t){\rm d}W_t^k $$ holds for all $u\in\left[0,\tau\right)$, where $$ \tau=\inf\left\{t>0:X_t\in\partial D\right\} $$ is the given stopping time, while $$ K_k(t)=\exp\left(\int_0^tq(X_s){\rm d}s\right)\partial_j\Phi(X_t)\sigma_k^j(X_t). $$

Provided that (here $T>0$ is a sufficiently large fixed constant) $$ \int_0^{\tau}K_k(t){\rm d}W_t^k=\int_0^{\tau\wedge T}K_k(t){\rm d}W_t^k=\int_0^TK_k(t)\cdot\mathbb{1}_{\left\{t<\tau\right\}}{\rm d}W_t^k, $$ where the last term is a genuine stochastic integral with respect to a Wiener process, it follows immediately that $$ \mathbb{E}\left[\int_0^{\tau}K_k(t){\rm d}W_t^k\Bigg|X_0\right]=\mathbb{E}\left[\int_0^TK_k(t)\cdot\mathbb{1}_{\left\{t<\tau\right\}}{\rm d}W_t^k\Bigg|X_0\right]=\mathbb{E}\left[\int_0^TK_k(t)\cdot\mathbb{1}_{\left\{t<\tau\right\}}{\rm d}W_t^k\right]=0. $$ Thanks to this fact, $$ \mathbb{E}\left(Y_{\tau}|X_0\right)-\mathbb{E}\left(Y_0|X_0\right)=\mathbb{E}\left(Y_{\tau}-Y_0|X_0\right)=\mathbb{E}\left[\int_0^{\tau}K_k(t){\rm d}W_t^k\Bigg|X_0\right]=0, $$ which indicates that $$ \mathbb{E}\left(Y_{\tau}|X_0\right)=\mathbb{E}\left(Y_0|X_0\right). $$ Moreover, by the definition of $Y_t$, $$ Y_0=\Phi(X_0). $$ Thus $$ \mathbb{E}\left(Y_0|X_0\right)=\mathbb{E}\left(\Phi(X_0)|X_0\right)=\Phi(X_0). $$ Consequently, $$ \mathbb{E}\left(Y_{\tau}|X_0\right)=\Phi(X_0). $$
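The identity $\mathbb{E}\left(Y_{\tau}|X_0\right)=\Phi(X_0)$ can be checked by simulation in the simplest setting. The sketch below is my own illustration, and its dimension, domain and coefficients are assumptions: $d=1$, $b=0$, $\sigma=1$, $q=\Psi=0$ and $\Phi(x)=x$ on $D=(0,1)$, so that $H=0$, $Y_{\tau}=X_{\tau}$, and $\mathbb{E}^x[X_{\tau}]$ should be close to $x$.

```python
import numpy as np

# Monte Carlo check of E^x[Y_tau] = Phi(x) (assumed toy setting: d = 1,
# b = 0, sigma = 1, q = Psi = 0, Phi(x) = x, D = (0, 1); then H = 0 and
# Y_tau = X_tau, so the exit value of Brownian motion has mean x).
rng = np.random.default_rng(0)
x0, dt, n_paths = 0.3, 1e-3, 10_000

x = np.full(n_paths, x0)
alive = np.ones(n_paths, dtype=bool)
while alive.any():
    # Euler-Maruyama step, applied only to paths still inside D
    x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
    alive &= (x > 0.0) & (x < 1.0)
x = np.clip(x, 0.0, 1.0)   # snap small overshoots back onto the boundary

print(x.mean())   # close to x0 = 0.3
```

The estimate carries both Monte Carlo and time-discretization error, so only rough agreement with $x_0$ should be expected.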

Summary: Stationary Feynman-Kac formula

From the derivations above, one draws the following conclusion.

Let $X_t$ be an $\mathbb{R}^d$-valued Ito process that satisfies $$ {\rm d}X_t^j=b^j(X_t){\rm d}t+\sigma_k^j(X_t){\rm d}W_t^k, $$ with $X_0\in D\subseteq\mathbb{R}^d$. Let $\tau$ be a stopping time defined as $$ \tau=\inf\left\{t>0:X_t\in\partial D\right\}. $$ Let $Y_t$ be an $\mathbb{R}$-valued stochastic process defined as $$ Y_t=\Phi(X_t)\exp\left(\int_0^tq(X_s){\rm d}s\right)+\int_0^t\Psi(X_s)\exp\left(\int_0^sq(X_r){\rm d}r\right){\rm d}s, $$ where $q$, $\Phi$ and $\Psi$ are appropriate $\mathbb{R}$-valued functions in $D$. With these settings, $$ \mathbb{E}^xY_{\tau}:=\mathbb{E}\left(Y_{\tau}|X_0=x\right)=\Phi(x) $$ holds for all $x\in D$, provided that $\Phi$ complies with $$ b^j(x)\partial_j\Phi(x)+\frac{1}{2}\rho^{rs}\sigma_r^j(x)\sigma_s^k(x)\partial_j\partial_k\Phi(x)+q(x)\Phi(x)+\Psi(x)=0. $$

This conclusion, however, is not yet satisfactory. One may observe that, in order to obtain an expression for $\mathbb{E}^xY_{\tau}$, the above derivation introduces $Y_t$ defined for all $t>0$; for $t\in\left(0,\tau\right)$, $X_t$ could be anywhere in $D$, so all of $q$, $\Phi$ and $\Psi$ must be functions on $D$. Yet if one focuses on $Y_{\tau}$, i.e., $$ Y_{\tau}=\Phi(X_{\tau})\exp\left(\int_0^{\tau}q(X_s){\rm d}s\right)+\int_0^{\tau}\Psi(X_s)\exp\left(\int_0^sq(X_r){\rm d}r\right){\rm d}s, $$ keeping in mind that $X_{\tau}\in\partial D$ by the definition of $\tau$, one finds it sufficient for $\Phi$ to be defined on $\partial D$ rather than on the whole of $D$.

Fortunately, for arbitrarily given functions $q$ and $\Psi$ in $D$ and $\tilde{\Phi}$ on $\partial D$, one may solve the following PDE with respect to $u$: \begin{align} b^j(x)\partial_j u(x)+\frac{1}{2}\rho^{rs}\sigma_r^j(x)\sigma_s^k(x)\partial_j\partial_ku(x)+q(x)u(x)+\Psi(x)&=0,&&x\in D,\\ u(x)&=\tilde{\Phi}(x),&&x\in\partial D. \end{align} Suppose $u=u(x)$ is its solution. This $u(x)$ can play the role of $\Phi(x)$ in the above derivation. That is, one may extend the time domain of $Y_{\tau}$ from $\left\{\tau\right\}$ to $\left[0,\tau\right]$ by introducing $$ \tilde{Y}_t=u(X_t)\exp\left(\int_0^tq(X_s){\rm d}s\right)+\int_0^t\Psi(X_s)\exp\left(\int_0^sq(X_r){\rm d}r\right){\rm d}s. $$ In particular, \begin{align} \tilde{Y}_{\tau}&=u(X_{\tau})\exp\left(\int_0^{\tau}q(X_s){\rm d}s\right)+\int_0^{\tau}\Psi(X_s)\exp\left(\int_0^sq(X_r){\rm d}r\right){\rm d}s\\ &=\tilde{\Phi}(X_{\tau})\exp\left(\int_0^{\tau}q(X_s){\rm d}s\right)+\int_0^{\tau}\Psi(X_s)\exp\left(\int_0^sq(X_r){\rm d}r\right){\rm d}s\\ &=Y_{\tau}. \end{align}

With this understanding, one eventually obtains the stationary Feynman-Kac formula.

Let $X_t$ be an $\mathbb{R}^d$-valued Ito process that satisfies $$ {\rm d}X_t^j=b^j(X_t){\rm d}t+\sigma_k^j(X_t){\rm d}W_t^k, $$ with $X_0\in D\subseteq\mathbb{R}^d$. Let $\tau$ be a stopping time defined as $$ \tau=\inf\left\{t>0:X_t\in\partial D\right\}. $$ Let $Y_{\tau}$ be an $\mathbb{R}$-valued random variable defined as $$ Y_{\tau}=\Phi(X_{\tau})\exp\left(\int_0^{\tau}q(X_s){\rm d}s\right)+\int_0^{\tau}\Psi(X_s)\exp\left(\int_0^sq(X_r){\rm d}r\right){\rm d}s, $$ where $q$ and $\Psi$ are appropriate $\mathbb{R}$-valued functions in $D$, while $\Phi$ is an appropriate $\mathbb{R}$-valued function on $\partial D$. With these settings, $$ \mathbb{E}^xY_{\tau}:=\mathbb{E}\left(Y_{\tau}|X_0=x\right) $$ defined for all $x\in D$ is precisely the solution of the following PDE for $u$: \begin{align} b^j(x)\partial_ju(x)+\frac{1}{2}\rho^{rs}\sigma_r^j(x)\sigma_s^k(x)\partial_j\partial_ku(x)+q(x)u(x)+\Psi(x)&=0,&&x\in D,\\ u(x)&=\Phi(x),&&x\in\partial D. \end{align}

Note that in this last formula, all of $q$, $\Phi$ and $\Psi$ can be chosen arbitrarily (under suitable regularity conditions, of course).

Application: Moments of stopping time

It is now possible, although not entirely straightforward, to derive the governing boundary value problem for any moment of the stopping time $\tau$ by using the above stationary Feynman-Kac formula.

First of all, determine the moment-generating function $$ v(x,\theta)=\mathbb{E}^x\exp\left(\theta\tau\right), $$ where $\theta\in\mathbb{R}$ is an auxiliary parameter. Simply set \begin{align} q(x)&=\theta,&&x\in D,\\ \Phi(x)&=0,&&x\in\partial D,\\ \Psi(x)&=\theta,&&x\in D. \end{align}

With these settings, the random variable $Y_{\tau}$ in the stationary Feynman-Kac formula reads $$ Y_{\tau}=\exp\left(\theta\tau\right)-1, $$ which implies that $$ \mathbb{E}^xY_{\tau}=\mathbb{E}^x\exp\left(\theta\tau\right)-1=v(x,\theta)-1. $$
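That reduction is a one-line computation: with $\Phi\equiv 0$ the boundary term vanishes, and $\int_0^{\tau}\theta e^{\theta s}\,{\rm d}s=e^{\theta\tau}-1$. A trivial symbolic confirmation (added purely for illustration):

```python
import sympy as sp

# With Phi = 0 and q = Psi = theta, only the integral term of Y_tau survives:
# Y_tau = int_0^tau theta * exp(theta * s) ds = exp(theta * tau) - 1.
theta, s, tau = sp.symbols('theta s tau', positive=True)
Y_tau = sp.integrate(theta * sp.exp(theta * s), (s, 0, tau))
print(sp.simplify(Y_tau - (sp.exp(theta * tau) - 1)))   # 0
```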

Thus according to Feynman-Kac, $$ w(x,\theta)=v(x,\theta)-1 $$ would satisfy \begin{align} b^j(x)\partial_jw(x,\theta)+\frac{1}{2}\rho^{rs}\sigma_r^j(x)\sigma_s^k(x)\partial_j\partial_kw(x,\theta)+\theta w(x,\theta)+\theta&=0,&&x\in D,\\ w(x,\theta)&=0,&&x\in\partial D. \end{align}

As a result, $v(x,\theta)$ complies with \begin{align} b^j(x)\partial_jv(x,\theta)+\frac{1}{2}\rho^{rs}\sigma_r^j(x)\sigma_s^k(x)\partial_j\partial_kv(x,\theta)+\theta v(x,\theta)&=0,&&x\in D,\\ v(x,\theta)&=1,&&x\in\partial D. \end{align}
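For intuition, this problem has a closed form in the simplest concrete case. The following is my own worked example, not part of the original answer: for $d=1$, $b=0$, $\sigma=1$ and $D=(0,1)$, the equation $\frac{1}{2}v''+\theta v=0$ with $v=1$ on the boundary is solved by $v(x,\theta)=\cos\!\big(\sqrt{2\theta}\,(x-\tfrac12)\big)/\cos\!\big(\sqrt{2\theta}/2\big)$, and its first $\theta$-derivative at $0$ recovers the well-known mean exit time $T_1(x)=x(1-x)$.

```python
import sympy as sp

# Closed-form v(x, theta) for standard BM on D = (0, 1) (assumed toy case):
# (1/2) v'' + theta v = 0 with v(0) = v(1) = 1.
x, theta = sp.symbols('x theta', positive=True)
v = sp.cos(sp.sqrt(2 * theta) * (x - sp.Rational(1, 2))) / sp.cos(sp.sqrt(2 * theta) / 2)

# Check the ODE and the boundary values
ode = sp.Rational(1, 2) * sp.diff(v, x, 2) + theta * v
print(sp.simplify(ode))                                        # 0
print(sp.simplify(v.subs(x, 0)), sp.simplify(v.subs(x, 1)))    # 1 1

# First theta-derivative at 0 should be T_1(x) = E^x[tau] = x(1 - x)
T1 = v.series(theta, 0, 2).removeO().coeff(theta)
print(sp.simplify(T1 - x * (1 - x)))                           # 0
```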

Finally, provided that $$ T_n(x)=\mathbb{E}^x\tau^n=\frac{\partial^nv}{\partial\theta^n}(x,0) $$ and that (one may verify this inductively for all $n\ge 1$) $$ \frac{\partial^n}{\partial\theta^n}\left[\theta v(x,\theta)\right]=n\frac{\partial^{n-1}v}{\partial\theta^{n-1}}(x,\theta)+\theta\frac{\partial^nv}{\partial\theta^n}(x,\theta), $$ one draws, by applying $\left(\partial^n/\partial\theta^n\right)\big|_{\theta=0}$ to the governing elliptic problem for $v$ above, the following conclusion:
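The inductive identity for $\partial_\theta^n\left[\theta v\right]$ is just the Leibniz rule with all but two terms vanishing; it can be confirmed symbolically for the first few $n$ (a small check I added, with $v$ an abstract smooth function of $\theta$):

```python
import sympy as sp

# Verify d^n/dtheta^n [theta * v] = n * v^(n-1) + theta * v^(n) for n = 1..4,
# where v is a generic smooth function of theta.
theta = sp.symbols('theta')
v = sp.Function('v')(theta)
for n in range(1, 5):
    lhs = sp.diff(theta * v, theta, n)
    rhs = n * sp.diff(v, theta, n - 1) + theta * sp.diff(v, theta, n)
    assert sp.simplify(lhs - rhs) == 0
print("identity holds for n = 1..4")
```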

The conditional expectation of the $n$-th moment of the stopping time $\tau$, denoted as $$ T_n(x)=\mathbb{E}^x\tau^n $$ for all $n\ge 1$, complies with the following iterative boundary value problem: \begin{align} b^j(x)\partial_jT_n(x)+\frac{1}{2}\rho^{rs}\sigma_r^j(x)\sigma_s^k(x)\partial_j\partial_kT_n(x)+nT_{n-1}(x)&=0,&&x\in D,\\ T_n(x)&=0,&&x\in\partial D. \end{align} Here, for the sake of consistency, artificially define $$ T_0=\mathbb{E}^x\tau^0=1, $$ so that the $n=1$ case makes sense.
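To see the recursion in action, here is a minimal numerical sketch (my own example; the coefficients and domain are assumptions): for standard Brownian motion on $D=(0,1)$ the problem reduces to $\frac{1}{2}T_n''=-nT_{n-1}$ with $T_n=0$ at the endpoints, whose first two solutions are $T_1(x)=x(1-x)$ and $T_2(x)=(x^4-2x^3+x)/3$. A finite-difference solve reproduces them:

```python
import numpy as np

# Iterative BVP for moments of the exit time of standard BM from (0, 1)
# (assumed 1-d case, b = 0, sigma = 1): (1/2) T_n'' = -n T_{n-1}, T_n = 0 on {0, 1}.
N = 200
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]

# Second-difference operator on the interior nodes (homogeneous Dirichlet BCs)
L = (np.diag(-2.0 * np.ones(N - 1))
     + np.diag(np.ones(N - 2), 1)
     + np.diag(np.ones(N - 2), -1)) / h**2

T_prev = np.ones(N + 1)                      # T_0 = E^x[tau^0] = 1
for n in (1, 2):
    T_n = np.zeros(N + 1)                    # boundary values stay 0
    T_n[1:-1] = np.linalg.solve(L, -2.0 * n * T_prev[1:-1])  # T_n'' = -2n T_{n-1}
    T_prev = T_n

# Compare with the closed form T_2(x) = (x^4 - 2x^3 + x)/3
err = np.max(np.abs(T_prev - (x**4 - 2 * x**3 + x) / 3))
print(err)   # small O(h^2) discretization error
```

Solving for $T_1$ first and feeding it into the right-hand side for $T_2$ mirrors exactly how the iterative boundary value problem is meant to be used.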

That's it! I hope this very long, and perhaps also tedious, explanation is somewhat helpful to you :-)