Computing the limit of the expectation of a function of a stochastic process (phew!)


I state my problem in a few lines, then describe what I have already done.

I have a quite simple stochastic differential equation (SDE):

$dx=-2x \, dt+\sqrt{1-x^2} \, dW$ with $W$ a Brownian motion.

I want to compute $\displaystyle{\lim_{t\to 0}}~\mathbb{E}\left[B_t\tanh\left(A_t\frac{x(t)-x(0)}{t}\right)|x(0)\right]$ and can't manage to do it.

I want to describe a given phenomenon obeying my SDE, so the factors $B_t$ and $A_t$ will depend on $t$. This is to ensure that, as I decrease the time increment $t$ through which I approximate my continuous phenomenon by a "discrete" growth rate (one cannot differentiate a Brownian motion), I converge towards a given value. It is analogous to the normalisation that must be applied to a random walk's diffusion coefficient when one wants to converge to the "underlying" Brownian motion. EDIT: $A_t\propto \sqrt{t}$ and $B_t \propto \frac{1}{\sqrt{t}}$.

This is my problem; any suggestions are welcome. Below I expand on where I am and how I approach the problem:

Let $\phi(x,t)$ be a twice differentiable function. Itô's lemma (via the backward Kolmogorov equation) yields that if $\phi(x,t)$ is a solution of $$(1)~\frac{\partial \phi }{\partial t}+2x\frac{\partial \phi }{\partial x}-\frac{1-x^2}{2}\frac{\partial^2 \phi}{\partial x^2}=0\textrm{, with }\phi(x,0)=\Phi(x)\textrm{ as initial condition,}$$ then $\phi(x,t)=\mathbb{E}[\Phi(x(t))|x(0)=x]$.

Noting that the partial differential equation (1) can be rewritten as $\partial_t\phi=M[\phi]$ with $M[\cdot]$ a linear differential operator, any function of the form $$\phi(x,t)=f(x)+\displaystyle{\sum_{n=1}^\infty}\frac{t^n}{n!}M^n[f]$$ is (at least formally) a solution of the PDE (1), satisfying the initial condition $\phi(x,0)=f(x)$.
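This series representation can be sanity-checked numerically. Below is a Python sketch (the polynomial encoding, the truncation order, and the test function $f(x)=x^2$ are my own choices), taking $M$ to be the generator $-2x\,\partial_x+\frac{1-x^2}{2}\,\partial_x^2$, the convention consistent with $\mathbb E[x(t)|x(0)]=x(0)e^{-2t}$. The truncated series for $f(x)=x^2$ is compared with the closed form $\mathbb{E}[x(t)^2|x(0)]=\frac15+\left(x(0)^2-\frac15\right)e^{-5t}$, which follows from Itô applied to $x^2$, giving $\frac{d}{dt}\mathbb{E}[x(t)^2]=1-5\,\mathbb{E}[x(t)^2]$:

```python
from math import exp, factorial

def apply_M(poly):
    """Apply M = -2x d/dx + (1 - x^2)/2 d^2/dx^2 to a polynomial
    stored as {power: coefficient}."""
    out = {}
    for p, c in poly.items():
        out[p] = out.get(p, 0.0) - p * (p + 3) / 2 * c              # x^p term
        if p >= 2:
            out[p - 2] = out.get(p - 2, 0.0) + p * (p - 1) / 2 * c  # x^(p-2) term
    return out

def phi_series(f_poly, x, t, order=25):
    """Truncated series phi(x, t) = sum_{n=0}^order t^n / n! * M^n[f](x)."""
    total, poly = 0.0, dict(f_poly)
    for n in range(order + 1):
        total += t**n / factorial(n) * sum(c * x**p for p, c in poly.items())
        poly = apply_M(poly)
    return total

x0, t = 0.5, 0.3
series_val = phi_series({2: 1.0}, x0, t)          # f(x) = x^2
closed_form = 0.2 + (x0**2 - 0.2) * exp(-5 * t)   # solves dm/dt = 1 - 5m
series_x = phi_series({1: 1.0}, x0, t)            # f(x) = x
closed_x = x0 * exp(-2 * t)
```

The two values agree to truncation error, which supports the formal series as an effective way to compute conditional moments here.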

Taking $\Phi(x)=x$, for example, and with $t>0$, yields $$\mathbb{E}[x(t)|x(0)]=x(0)e^{-2t}$$ thus $\displaystyle{\lim_{t\to 0}}~\mathbb{E}\left[\frac{x(t)-x(0)}{t}|x(0)\right]=-2x(0)$
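This limit can also be sanity-checked by Monte Carlo. The sketch below uses an Euler-Maruyama discretisation of the SDE; the step count, sample size, and the clipping of paths to the state space $[-1,1]$ are my own choices, not part of the argument:

```python
import random
from math import sqrt

random.seed(0)

def sample_x_t(x0, t, n_steps=20):
    """One Euler-Maruyama path of dx = -2x dt + sqrt(1 - x^2) dW."""
    dt = t / n_steps
    x = x0
    for _ in range(n_steps):
        x += -2 * x * dt + sqrt(max(0.0, 1 - x * x)) * random.gauss(0.0, sqrt(dt))
        x = max(-1.0, min(1.0, x))  # keep the path in the state space [-1, 1]
    return x

x0, t, n_paths = 0.5, 0.02, 20_000
drift_estimate = sum((sample_x_t(x0, t) - x0) / t for _ in range(n_paths)) / n_paths
# drift_estimate should sit near -2 * x0 = -1 for small t
```

For small $t$ the estimate hovers near $-2x(0)$; the exact conditional mean of the increment ratio is $x(0)\frac{e^{-2t}-1}{t}$, so the residual bias is $O(t)$ plus Monte Carlo noise.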

My problem is that $B_t\tanh\left(A_t\frac{x(t)-x(0)}{t}\right)$ is undefined at $t=0$, so I cannot use the same approach.

I tried to compute the characteristic function, since I can compute all the moments, and then Fourier-transform it to get the distribution, but it doesn't seem to give any meaningful result. I guess this comes from the fact that the distribution tends towards pathological objects (Dirac distributions) as $t\to 0$, so I am no longer allowed to swap the limit and the expectation operator.

I can expand if need be, but I think I'll already be lucky if anyone reads this far :).

Thanks in advance


EDIT

I also tried to Taylor expand the $\tanh$ (not bothering about the radius of convergence to start with) and then apply the expectation operator, which I can compute when applied to any power of $x$. By $b_{2n}$ I denote the coefficients of the Taylor expansion of $\tanh$. \begin{eqnarray} B_t\tanh\left(A_t\frac{x(t)-x(0)}{t}\right)&=&B_t\tanh\left(A_t\frac{\Delta x}{t}\right)\\ &=&B_t\displaystyle{\sum_{n=1}^\infty}b_{2n}\frac{A_t^{2n-1}}{t^{2n-1}}\Delta x^{2n-1}\\ &=&B_t\displaystyle{\sum_{n=1}^\infty}b_{2n}\frac{A_t^{2n-1}}{t^{2n-1}} \displaystyle{\sum_{k=0}^{2n-1}}(-1)^k\binom{2n-1}{k}x(0)^k x(t)^{2n-1-k} \end{eqnarray}

For simplicity I denote $x(0)=x$ and $x(t)=x_t$, and I apply the expectation operator: $$B_t\displaystyle{\sum_{n=1}^\infty}b_{2n}\frac{A_t^{2n-1}}{t^{2n-1}} \displaystyle{\sum_{k=0}^{2n-1}}(-1)^k\binom{2n-1}{k}x^k \mathbb{E}[x_t^{2n-1-k}|x]$$ \begin{eqnarray} =&B_t\displaystyle{\sum_{n=1}^\infty}b_{2n}\frac{A_t^{2n-1}}{t^{2n-1}}\displaystyle{\sum_{k=0}^{2n-1}}(-1)^k\binom{2n-1}{k}x^k \displaystyle{\sum_{m=0}^{\infty}}\frac{t^m}{m!}M^m[x^{2n-1-k}]\\ \end{eqnarray} Moreover, computations give that $$\forall(n,k)\in\mathbb{N}^2,~M^n[x^k]=\displaystyle{\sum_{i=0}^{\lfloor k/2\rfloor}}\alpha_{i,n}(k)\,x^{2i+\delta_k}$$ with $\delta_k=0$ if $k$ is even, $\delta_k=1$ if not, $$\alpha_{i,n+1}(k)=-\bigl(2i^2+3i+\delta_k(2i+2)\bigr)\,\alpha_{i,n}(k)+\bigl(2i^2+3i+1+\delta_k(2i+2)\bigr)\,\alpha_{i+1,n}(k)$$ and initial condition $\alpha_{i,0}(k)=\delta_{i,\lfloor k/2 \rfloor}$, with $\delta_{i,j}$ Kronecker's symbol. We can rearrange the sum:

\begin{eqnarray} S&=&B_t\displaystyle{\sum_{n=1}^\infty}b_{2n}\frac{A_t^{2n-1}}{t^{2n-1}}\displaystyle{\sum_{m=0}^{\infty}}\frac{t^m}{m!}\displaystyle{\sum_{k=0}^{2n-1}}(-1)^k\binom{2n-1}{k}x^k \displaystyle{\sum_{i=0}^{\lfloor (2n-1-k)/2\rfloor}}\alpha_{i,m}(2n-1-k)\,x^{2i+\delta_{k+1}}\\ &=&B_t\displaystyle{\sum_{n=1}^\infty}b_{2n}\frac{A_t^{2n-1}}{t^{2n-1}}\displaystyle{\sum_{m=0}^{\infty}}\frac{t^m}{m!}\displaystyle{\sum_{k=0}^{2n-1}}(-1)^k\binom{2n-1}{k}\displaystyle{\sum_{i=\lfloor k/2\rfloor}^{n-1}} x^{2i+1}\, \alpha_{i-\lfloor k/2\rfloor,m}(2n-1-k) \end{eqnarray}

In addition $x^{2i+1}\alpha_{i,m}(k)$ is a polynomial in $k$ of degree $d\leq 2m$, therefore its sum over $i$ has a maximal degree of $2m$ too. This implies that $$\forall m< n, ~\displaystyle{\sum_{k=0}^{2n-1}}(-1)^k\binom{2n-1}{k}\displaystyle{\sum_{i=\lfloor k/2\rfloor}^{n-1}} x^{2i+1} \alpha_{i,m}(2n-1-k)=0$$ We are left with : \begin{eqnarray} S&=&B_t\displaystyle{\sum_{n=1}^\infty}b_{2n}\frac{A_t^{2n-1}}{t^{2n-1}}\displaystyle{\sum_{m=n}^{\infty}}\frac{t^m}{m!}\displaystyle{\sum_{i=0}^{n-1}} x^{2i+1}\displaystyle{\sum_{k=0}^{2i+1}}(-1)^k\binom{2n-1}{k} \alpha_{i-\lfloor k/2\rfloor,m}(2n-1-k) \end{eqnarray}

If we take $A_t\propto t$ and $B_t\propto \frac{1}{t}$, then as $t\to0$ only the $n=1$ term survives, so we obtain a result linear in $x$, which neglects all the terms in $x$ coming from $M^n[x^k]$. If $A_t\propto \sqrt{t}$ and $B_t \propto \frac{1}{\sqrt{t}}$ we keep all the terms $m=n$. The expansion then reduces to $$S=\displaystyle{\sum_{k=0}^{\infty}}(-1)^{k+1} a_k C^{2k+1}x(1-x^2)^k$$ where $\lfloor k/2 \rfloor ! \leq a_k \leq k!$ numerically (I calculated the first 100 terms...). Thus this expansion diverges.

This seems normal to me (now): $\tanh$ has a finite radius of convergence, and as $t\to 0$, $\frac{\Delta x}{\sqrt{t}}$ gets larger and larger for some values of $x_t$. When taking the expected value, those divergences are tamed by the distribution of $x_t$, whose weight on such values tends to $0$, but the Taylor expansion assumes that the argument of $\tanh$ stays within the radius of convergence, which is not the case.
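The coefficients $\alpha_{i,n}(k)$ used above can be cross-checked by applying $M$ directly to monomials. Below is a pure-Python sketch; the dictionary encoding and the recursion as I rederive it from $M[x^p]=\frac{p(p-1)}{2}x^{p-2}-\frac{p(p+3)}{2}x^p$ are my own:

```python
def apply_M(poly):
    """Apply the generator M = -2x d/dx + (1 - x^2)/2 d^2/dx^2 to a
    polynomial stored as {power: coefficient}."""
    out = {}
    for p, c in poly.items():
        out[p] = out.get(p, 0.0) - p * (p + 3) / 2 * c              # x^p term
        if p >= 2:
            out[p - 2] = out.get(p - 2, 0.0) + p * (p - 1) / 2 * c  # x^(p-2) term
    return out

def alpha(i, n, k):
    """Coefficient alpha_{i,n}(k) of x^(2i + delta_k) in M^n[x^k],
    computed by the recursion rederived from M (delta_k = k mod 2)."""
    d = k % 2
    if i < 0 or i > k // 2:
        return 0.0
    if n == 0:
        return 1.0 if i == k // 2 else 0.0
    return (-(2 * i * i + 3 * i + d * (2 * i + 2)) * alpha(i, n - 1, k)
            + (2 * i * i + 3 * i + 1 + d * (2 * i + 2)) * alpha(i + 1, n - 1, k))

# Compare M^n[x^k] computed directly with the recursion, e.g. k = 5, n = 3
k, n = 5, 3
poly = {k: 1.0}
for _ in range(n):
    poly = apply_M(poly)
recursed = {2 * i + k % 2: alpha(i, n, k) for i in range(k // 2 + 1)}
```

As a small hand check, $k=1$ gives $\alpha_{0,n}(1)=(-2)^n$, i.e. $M^n[x]=(-2)^n x$, consistent with $\mathbb E[x(t)|x(0)]=x(0)e^{-2t}$.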

ANSWER

One could begin with the decomposition $$ \frac{x(t)-x(0)}t=a(t)x(0)+b(t)y(0)W(t)+b(t)z(t), $$ where $$ a(t)=\frac{\mathrm e^{-2t}-1}t,\quad b(t)=\frac{\mathrm e^{-2t}}t,\quad y(s)=\mathrm e^{2s}\sqrt{1-x(s)^2} $$ and $$ z(t)=\int_0^t(y(s)-y(0))\mathrm dW(s). $$

When $t\to0$, $a(t)\to-2$, $b(t)\sim1/t$, $W(t)=\sqrt{t}V$ in distribution where $V$ is standard normal, and $z(t)$ is centered with variance $O(t^2)$, hence $$ \frac{x(t)-x(0)}t=y(0)\frac1{\sqrt{t}}V+U(t), $$ where $U(t)$ is a family of random variables bounded in $L^2$. Now, for every function $u$ such that $|u(t)|\ll1/\sqrt{t}$ when $t\to0$, $$ \tanh\left(A\left(y(0)\frac1{\sqrt{t}}V+u(t)\right)\right)\to\mathrm{sign}(AV). $$ If made rigorous, all this would yield $$ \mathbb E\left(\tanh\left(A\frac{x(t)-x(0)}t\right)\right)\to\mathbb E\left(\mathrm{sign}(AV)\right)=0. $$

Edit: It appears that $A=A_t$ should depend on $t$, for example with $A_t\propto\sqrt{t}$. If $A_t=a\sqrt{t}+o(\sqrt{t})$, the above suggests that $$ \mathbb E\left(\tanh\left(A_t\frac{x(t)-x(0)}t\right)\right)\to\mathbb E\left(\tanh\left(a\sqrt{1-x(0)^2}V\right)\right)=0. $$

Edit 2: The decomposition we started with follows from Itô's formula since $$ \mathrm d(\mathrm e^{2t}x(t))=2\mathrm e^{2t}x(t)\mathrm dt+\mathrm e^{2t}\mathrm dx(t)=y(t)\mathrm dW(t), $$ hence $$ \mathrm e^{2t}x(t)=x(0)+\int_0^ty(s)\mathrm dW(s), $$ which is equivalent to $$ x(t)=\mathrm e^{-2t}x(0)+\mathrm e^{-2t}\int_0^ty(s)\mathrm dW(s), $$ that is, $$ x(t)-x(0)=(\mathrm e^{-2t}-1)x(0)+\mathrm e^{-2t}z(t)+\mathrm e^{-2t}y(0)W(t). $$
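The claimed limit can be sanity-checked by simulation, comparing the two expectations directly. A Python sketch follows; the Euler-Maruyama discretisation, the clipping of paths to $[-1,1]$, and the choice $A_t=a\sqrt t$ with $a=1$ are my own assumptions:

```python
import random
from math import sqrt, tanh

random.seed(1)

def sample_x_t(x0, t, n_steps=20):
    """One Euler-Maruyama path of dx = -2x dt + sqrt(1 - x^2) dW."""
    dt = t / n_steps
    x = x0
    for _ in range(n_steps):
        x += -2 * x * dt + sqrt(max(0.0, 1 - x * x)) * random.gauss(0.0, sqrt(dt))
        x = max(-1.0, min(1.0, x))  # keep the path in the state space [-1, 1]
    return x

x0, t, a, n_paths = 0.5, 0.01, 1.0, 20_000

# E[tanh(A_t (x(t) - x(0)) / t)] with A_t = a * sqrt(t)
lhs = sum(tanh(a * sqrt(t) * (sample_x_t(x0, t) - x0) / t)
          for _ in range(n_paths)) / n_paths

# Claimed limit E[tanh(a * sqrt(1 - x0^2) * V)], V standard normal (= 0 by symmetry)
rhs = sum(tanh(a * sqrt(1 - x0**2) * random.gauss(0.0, 1.0))
          for _ in range(n_paths)) / n_paths
```

With $x(0)=0.5$ the second average sits at $0$ up to Monte Carlo noise, and the first agrees with it up to a small finite-$t$ correction coming from the drift term $a(t)x(0)$, consistent with the symmetry argument above.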