Is this Stochastic Process bounded from above?


I'm dealing with a problem in my thesis that involves proving the boundedness of a stochastic process. This question is related to another one I asked recently, but sadly the results found there didn't help me enough.

Let $X_t$ be a stochastic process satisfying $\text{d}X_t= f(t,X_t) \text{d}t +\text{d}W_t$ for times $t\ge0,$ such that $X_0=0$ and $f(t,x)<a<0$ for all $t\ge0$ and $x>k$, for a certain constant $k$.

Prove that $$\mathbb{P}[\exists C>0: X_t<C \ \forall t\ge 0]=1.$$

In other words, I wish to show that if the function $f$ is negative when $X_t$ goes above a certain value $k$, then the stochastic process is bounded from above, because the drift part is going to be negative.

I know that $Y_t=at+W_t$ is bounded from above if $a<0$, but what I can't manage to deal with is the fact that we cannot control the moments when $X_t$ goes above $k$: every time it goes above $k$ it might reach a higher peak, since the bound on $Y_t$ is not uniform in $\omega$, and the sequence of those peaks might be unbounded.

Here's a simulation of the SDE $\text{d}Y_t=-Y_t\text{d}t+\text{d}W_t$, which seems to fit these hypotheses: the path stays bounded even though the simulated time horizon is quite long.
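Since the plot itself isn't reproduced here, a minimal Euler–Maruyama sketch of that simulation (all parameter values illustrative) would look like:

```python
import numpy as np

def simulate_ou(T=100.0, dt=1e-3, seed=0):
    """Euler-Maruyama discretization of dY_t = -Y_t dt + dW_t, Y_0 = 0."""
    rng = np.random.default_rng(seed)
    n = round(T / dt)
    y = np.empty(n + 1)
    y[0] = 0.0
    dw = rng.normal(0.0, np.sqrt(dt), size=n)  # Brownian increments
    for i in range(n):
        y[i + 1] = y[i] * (1.0 - dt) + dw[i]
    return y

path = simulate_ou()
print(path.max())  # running maximum over [0, 100]
```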

EDIT: my SDE, in particular, is $$\text{d}X_t=X_t\frac{D(X_t)-S}{D(X_t)+S}\text{d}t+X_t\sigma\text{d}W_t,$$ with $D$ being $D(x)=D_0\exp(-\alpha x),$ for some $D_0>S>0$, and $X_0>0$. Thanks to the substitution $Y_t=\log(X_t)$, this SDE is equivalent to $$\text{d}Y_t=\left(\frac{D(\exp(Y_t))-S}{D(\exp(Y_t))+S}-\frac{\sigma^2}2\right)\text{d}t+\sigma\text{d}W_t;$$ this second equation is the one I was referring to. Actually, $S$ is not a constant but a bounded function, which breaks the Markov property, but I think that if the result fails for constant $S$, then it also fails for bounded $S$.
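For concreteness, here is a minimal Euler–Maruyama sketch of this SDE, reading the noise term as $X_t\sigma\,\text{d}W_t$ and taking $S$ constant; all parameter values are illustrative, not taken from the thesis:

```python
import numpy as np

def simulate_x(x0=1.0, D0=2.0, S=1.0, alpha=0.5, sigma=0.3,
               T=50.0, dt=1e-3, seed=1):
    """Euler-Maruyama for dX = X*(D(X)-S)/(D(X)+S) dt + sigma*X dW,
    with D(x) = D0*exp(-alpha*x).  All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    n = round(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    dw = rng.normal(0.0, np.sqrt(dt), size=n)
    for i in range(n):
        D = D0 * np.exp(-alpha * x[i])
        drift = x[i] * (D - S) / (D + S)
        x[i + 1] = x[i] + drift * dt + sigma * x[i] * dw[i]
    return x

path = simulate_x()
```

Since $D_0>S$, the drift is positive for small $x$ and negative above $x^*=\alpha^{-1}\log(D_0/S)$, so the path mean-reverts around that level.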

Any help would be immensely appreciated.


BEST ANSWER

This might not be good news for you.
I don't think your setting will work in general, and the reason your simulation seems to confirm your hunch might be your limited simulated time interval.
The main reason I came to that conclusion is the Markov nature of your SDE (when there is no $t$ in $f$).
Now, let us examine your setting together to see whether I have made any mistake.

Some assumptions I made on $f$:

  • (i) $f$ is time-stationary, that is, $f(t,x)=f(x)$
  • (ii) $f(0)= 0$
  • (iii) $f(x) \ge 0$ if $x \le 0$, and $f(x) \le 0$ otherwise
  • (iv) $f$ is bounded below by a value $b<0$, that is, $f(x) \ge b$

Let $\tau_1, \gamma_M$ denote:

  • $ \tau_1 := \inf \{ t \ge 1: X_t=0\}$, the first time after $1$ that $X$ revisits $0$;
  • $ \gamma_M:= \inf \{ t \ge 0: X_t= M\}$, the first hitting time of a level $M>0$.

So the desired conclusion is that: $$\lim_{M \rightarrow +\infty} \mathbb{P}\left( \gamma_M < \infty \right) =0$$

What I will show is that, under these assumptions, it seems that: $$ \mathbb{P}\left( \gamma_M < \infty \right) =1 \quad \forall M>0$$

Heuristically, under my third assumption (iii), $f$ acts as a restoring force that drags $X$ back to $0$, and I don't think it is hard to prove that: $$ \mathbb{P}( \tau_1 < \infty)=1 $$

Then, since $\tau_1$ is a.s. finite, we have: $$\mathbb{P}\left( \gamma_M< \infty \right) =\mathbb{P}\left( \gamma_M< \tau_1 \right)+\mathbb{P}\left( \tau_1 \le \gamma_M<\infty \right)$$ $$ \underbrace{=}_{ \text{strong Markov property}} \mathbb{P}\left( \gamma_M< \tau_1 \right)+\mathbb{P}\left( \tau_1 \le \gamma_M \right)\mathbb{P}\left( \gamma_M <\infty \right)$$ which is equivalent to: $$\left[ \mathbb{P}\left( \gamma_M< \infty \right) -1 \right]\mathbb{P}\left( \gamma_M< \tau_1 \right)=0$$ $$\Leftrightarrow \mathbb{P}\left( \gamma_M< \infty \right)=1$$

The last step holds because the fourth assumption gives a clear reason why $\mathbb{P}\left( \gamma_M< \tau_1 \right) >0$. Indeed, since $X_0=0$ and $\tau_1 \ge 1$, on the event $\{X_1>M\}$ the continuity of paths forces $\gamma_M<1\le\tau_1$, and $f \ge b$ gives $X_1 \ge W_1+b$, so: $$\mathbb{P}\left( \gamma_M< \tau_1 \right) \ge \mathbb{P}\left( \gamma_M< 1 \right) \ge \mathbb{P}\left( X_1 > M \right) \ge \mathbb{P}\left( W_1+b > M \right) >0$$
**QED**

*Discussion*: so more than mere control over the negativity of $f$ is needed for your result to hold.
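As a numerical sanity check of this answer's claim (not a proof), one can track the running maximum of the mean-reverting example $dX_t=-X_t\,dt+dW_t$ over increasingly long horizons; under the argument above it should keep creeping upward, just very slowly:

```python
import numpy as np

# Track the running maximum of dX_t = -X_t dt + dW_t over a long horizon.
# Per the argument above, every level M is eventually reached, so the
# maximum should keep growing (roughly like sqrt(log t)), just very slowly.
rng = np.random.default_rng(2)
dt = 1e-2
n = 200_000                      # horizon T = 2000
dw = rng.normal(0.0, np.sqrt(dt), size=n)
x, running_max = 0.0, 0.0
max_at = {}                      # running maximum recorded at selected steps
for i in range(1, n + 1):
    x += -x * dt + dw[i - 1]
    running_max = max(running_max, x)
    if i in (20_000, 200_000):   # i.e. at t = 200 and t = 2000
        max_at[i] = running_max

print(max_at)
```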


This post is meant to explain why your toy example also fails, and why I think your simulation is not a good illustration here.
Your toy example is an Ornstein-Uhlenbeck process (I too had a little trouble solving it, until I realized how simple it is while writing this answer), with solution: $$ Y_t= e^{-t} \underbrace{ \int_{0}^t e^sdW_s}_{=: A_t}$$

Moreover, $A_t$ is just a time-changed Brownian motion: since $\langle A \rangle_t = \int_0^t e^{2s}ds = \frac{e^{2t}-1}{2}$, there is a Brownian motion $B$ such that: $$ (A_t)= \left( B_{(e^{2t}-1)/2}\right)$$ and clearly, $(Y_t)$ is then a.s. unbounded.
Remark: the same manipulation works for suitable SDEs of the form $$dY_t= -g(t)Y_tdt+dW_t$$ or even $$dY_t= -g(t,Y_t)Y_tdt+dW_t$$ under some conditions on $g$.
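The time change can be sanity-checked numerically: by the Itô isometry, $A_t=\int_0^t e^s\,dW_s$ is centered Gaussian with variance $\int_0^t e^{2s}\,ds=(e^{2t}-1)/2$, which a Monte Carlo estimate should reproduce (sample sizes are illustrative):

```python
import numpy as np

# Ito isometry check: A_t = \int_0^t e^s dW_s is centered Gaussian with
# variance \int_0^t e^{2s} ds = (e^{2t} - 1)/2, so (A_t) is a Brownian
# motion B run at the clock (e^{2t} - 1)/2.
rng = np.random.default_rng(3)
t, m, n = 1.0, 20_000, 500            # horizon, sample paths, time steps
dt = t / n
s = np.arange(n) * dt                 # left endpoint of each subinterval
dw = rng.normal(0.0, np.sqrt(dt), size=(m, n))
A_t = (np.exp(s) * dw).sum(axis=1)    # Riemann-Ito sum approximating A_t

empirical = A_t.var()
theoretical = (np.exp(2 * t) - 1) / 2
print(empirical, theoretical)         # should agree within a few percent
```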

To the mods: since this topic has become more of a discussion, I would like to break my long comment into multiple posts for readability. Please kindly understand.


So, let's come back to our initial equation, but with some modifications to suit our analysis of the impact of volatility:
$$ dX_t = \left( f(t,X_t)-\frac{\sigma^2}{2} \right)dt +\sigma dW_t $$ with initial condition $X_0=x_0$.
Here are some additional assumptions we make on $f$ (which are also satisfied by our actual $f$):

  • $f$ is bounded (1)
  • $f$ is decreasing in its second variable (2)
  • $f$ is locally Lipschitz (3)

Let $\tilde{X}$ denote the solution of our SDE when there is no volatility, that is, $\tilde{X}$ solves the ODE $$d\tilde{X}_t = f(t,\tilde{X}_t)\,dt$$ with initial value $\tilde{X}_0=x_0$.
The local Lipschitz condition (3) and the boundedness of $f$ (1) guarantee the existence and uniqueness of $\tilde{X}$.
By Itô's formula, we have: $$d(X_t-\tilde{X}_t)^2=2(X_t-\tilde{X}_t)\left[ f(t,X_t)- f(t,\tilde{X}_t) \right]dt + \sigma^2 \left[ 1-\left( X_t-\tilde{X}_t\right)\right]dt+\underbrace{2\sigma (X_t-\tilde{X}_t) dW_t}_{=:dM_t}$$

Let's make some simple observations:

  • $(X_t)$ is bounded in $L^2$ (a direct consequence of the boundedness of $f$);
  • $(M_t)$ is a local martingale and, in fact, an $L^2$ martingale (thanks to the $L^2$-boundedness of $(X_t)$);
  • the first $dt$-coefficient on the RHS of the equation above is nonpositive (due to the monotonicity of $f$).

So we can deduce the following inequality for all $t \ge 0$:

$$g(t):= \mathbb{E}\left( (X_t-\tilde{X}_t)^2 \right) \le \sigma^2t- \int_{0}^t \sigma^2 \mathbb{E}(X_s-\tilde{X}_s)ds$$

Thus, since $|\mathbb{E}(X_s-\tilde{X}_s)| \le \sqrt{g(s)} \le 1+g(s)$, $$ 0 \le g(t) \le 2\sigma^2 t +\int_{0}^t \sigma^2 g(s)ds $$ So, by Grönwall's lemma, $$ 0 \le g(t) \le 2\sigma^2 t e^{\sigma^2t}$$ that is: $$ \mathbb{E}\left( (X^{\sigma}_t-\tilde{X}_t)^2 \right) \le 2\sigma^2 t e^{\sigma^2t}$$
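A quick Monte Carlo check of this Grönwall bound, using the illustrative drift $f(t,x)=-\tanh(x)$ (bounded, decreasing in $x$, and locally Lipschitz, as the assumptions require; not the actual $f$ of the question):

```python
import numpy as np

# Monte Carlo check of E[(X^sigma_T - X~_T)^2] <= 2 sigma^2 T e^{sigma^2 T}
# for the illustrative drift f(t, x) = -tanh(x).
rng = np.random.default_rng(4)
sigma, T, dt, m = 0.3, 1.0, 1e-3, 20_000
n = round(T / dt)

x = np.full(m, 0.5)    # paths of dX = (f(t, X) - sigma^2/2) dt + sigma dW
x_det = 0.5            # the noiseless ODE solution dX~ = f(t, X~) dt
for _ in range(n):
    dw = rng.normal(0.0, np.sqrt(dt), size=m)
    x += (-np.tanh(x) - sigma**2 / 2) * dt + sigma * dw
    x_det += -np.tanh(x_det) * dt

empirical = ((x - x_det) ** 2).mean()
bound = 2 * sigma**2 * T * np.exp(sigma**2 * T)
print(empirical, bound)  # the empirical mean-square gap sits below the bound
```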

The last inequality implies most of the modes of convergence we know, as $\sigma \to 0$:

  • strong convergence, in the Euler-Maruyama sense;
  • (as a consequence) finite-dimensional convergence (in law and in $L^2$);
  • convergence in the weak topology of continuous processes (with values in $\mathcal{C}[0,T]$); tightness comes for free thanks to the boundedness of $f$.

Discussion:

  • The local Lipschitz condition (3) is actually unnecessary, as existence and uniqueness can be derived by the same method (the decreasing monotonicity of $f$, however, is essential).
  • The second assumption (2) plays the central role in our analysis.
  • We can give a convergence speed if we want.
  • Some modifications can be made to have the same result in a more general setting of SDE.
  • We can craft a sort of almost sure convergence as follows:
    Choose a sequence $(\sigma_n , n \ge 1)$ such that: $$\sum_{n \ge 1} {\sigma_n}^2 <\infty $$ (for example $\sigma_n= \frac{1}{n}$).
    Then there is a sequence of positive real numbers $(\epsilon_n , n \ge 1)$, decreasing to $0$, such that: $$ \sum_{n \ge 1} \frac{\sigma_n^2}{\epsilon_n^2} <\infty $$ Thus, by Chebyshev's inequality and the Borel-Cantelli lemma, we conclude that for each fixed $t \ge 0$: $$ X^{\sigma_n}_t \xrightarrow{n \rightarrow +\infty} \tilde{X}_t \text{ almost surely.}$$ Then, thanks to the boundedness of $f$ and Kolmogorov's continuity theorem, we obtain the "almost sure" convergence of paths, that is, almost surely: $$ \forall t,\ X^{\sigma_n}_t \rightarrow \tilde{X}_t$$
  • The SDE can be viewed as an ODE driven by a Hölder-continuous noise, but I don't have enough knowledge of that field to give an adequate proof.
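One concrete (illustrative) choice for the sequences in the Borel-Cantelli step is $\sigma_n=1/n$ and $\epsilon_n=n^{-1/4}$, which can be verified numerically:

```python
import numpy as np

# Illustrative choice for the construction above: sigma_n = 1/n and
# eps_n = n^{-1/4}.  Then eps_n decreases to 0 while
# sigma_n^2 / eps_n^2 = n^{-3/2}, whose series converges (to zeta(3/2)).
n = np.arange(1, 1_000_001, dtype=float)
sigma_n = 1.0 / n
eps_n = n ** -0.25
ratio = (sigma_n / eps_n) ** 2     # equals n ** -1.5 term by term

partial_sum = ratio.sum()
print(partial_sum)                 # approaches zeta(3/2) ~ 2.612
```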

To the mods: please kindly understand that we are having a discussion.

Almost Sure Convergence

(This is the continuation of my discussion in the above post)

Let $(\sigma_n ; n \ge 1)$ and $(X^{\sigma_n} ; n \ge 1)$ be the sequence of volatilities and their respective stochastic processes as defined above.
As argued above, we can show the almost sure convergence of $X^{\sigma_n}$ to $\tilde{X}$ on a countable dense subset, say $\mathbb{Q}^+$, of our time horizon $\mathbb{R}^+$; that is: almost surely, for all $t \in \mathbb{Q}^+$, we have: $$\lim_n X^{\sigma_n}_t = \tilde{X}_t$$ Now fix a positive real number $T$. We will see that, for almost all $\omega \in \Omega$, the sequence of continuous functions $(X^{\sigma_n}(\omega) ; n \ge 1) \subset \mathcal{C}([0,T]) $ is equicontinuous.
Indeed, we can prove this fact either by Kolmogorov's continuity theorem (a bit of overkill here) or simply by observing that: $$ X^{\sigma_n}_t=x_0 +\underbrace{ \int_{0}^t \left( f(s,X^{\sigma_n}_s) -\dfrac{\sigma_n^2}{2} \right) ds }_{A^{(n)}_t} +\underbrace{\sigma_n W_t}_{B^{(n)}_t}$$

  • The part $A^{(n)}$ gives a Lipschitz continuous function, with a Lipschitz constant that is clearly bounded uniformly in $n$ (by our choice of $\sigma_n$ and the boundedness of $f$). Hence, $(A^{(n)}(\omega))$ is a family of equicontinuous functions for every $\omega$.

  • The part $B^{(n)}$ clearly gives a family of equicontinuous functions, as each term is the product of a bounded scalar $\sigma_n$ and a single continuous function $W(\omega)$ shared by all $n$.

(Note: the two arguments above are made with all the processes restricted to the compact subset $[0,T]$ of the time horizon.)

Thus, for almost every $\omega$, the family of functions $$( X^{\sigma_n}(\omega); n \ge 1 )$$ is a relatively compact subset of $\mathcal{C}([0,T])$ by the Arzelà-Ascoli theorem. So from every subsequence we can extract a further subsequence that converges uniformly to some limit function.
However, our very first result, the almost sure convergence at rational times, tells us that if such a limit exists, the limit function must be identical to $\tilde{X}$.
So, a.s., $X^{\sigma_n}(\omega)$ converges uniformly on $[0,T]$ to $\tilde{X}(\omega)$.
Thus the conclusion. $\square$
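The uniform convergence on $[0,T]$ can be illustrated numerically by driving $X^{\sigma_n}$ with one shared Brownian path for shrinking $\sigma_n$, using the stand-in drift $f(t,x)=-\tanh(x)$ (all values illustrative):

```python
import numpy as np

# Drive X^{sigma_n} with one shared Brownian path for shrinking sigma_n and
# measure the sup-distance on [0, T] to the noiseless solution.  The drift
# f(t, x) = -tanh(x) is an illustrative stand-in satisfying the assumptions.
rng = np.random.default_rng(5)
T, dt, x0 = 1.0, 1e-3, 0.5
n = round(T / dt)
dw = rng.normal(0.0, np.sqrt(dt), size=n)    # one Brownian path, reused

def euler_path(sigma):
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] + (-np.tanh(x[i]) - sigma**2 / 2) * dt + sigma * dw[i]
    return x

x_det = euler_path(0.0)                      # sigma = 0 recovers the ODE
sup_dist = [float(np.abs(euler_path(s) - x_det).max()) for s in (0.4, 0.2, 0.1)]
print(sup_dist)  # sup-norm distances for sigma = 0.4, 0.2, 0.1
```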

Comments:

  • In the end, we even have the a.s. convergence of $X^{\sigma_n}$ to $\tilde{X}$ under the usual metric of $\mathcal{C}( \mathbb{R_+})$ (uniform convergence on compact sets).
  • I may have rambled a lot in my arguments; the central idea is simply the equicontinuity of the paths when restricted to $[0,T]$.
  • Some relaxation of the boundedness of $f$ is possible if needed. While replacing it with an upper bound on $|f|$ depending only on $t$ is easy, a bound depending also on $x$ is not as straightforward.