Boundedness in probability of a stochastic process


This question is related to some similar ones I asked recently.

Let $X_t$ be a stochastic process defined through the equation $$\text{d}X_t=f(X_t,t)\text{d}t+\text{d}W_t,$$ where $f$ is a twice differentiable function such that $f(x,t)<a<0$ for all $x>k$, and $X_0<k$.

What I am wishing to show is:

For all $\epsilon>0$ there is $M>0$ (which we can WLOG assume to be bigger than $k$) such that for all $t\ge0$ it holds that $$\mathbb{P}[X_t>M]\le\epsilon.$$

What I tried to do is define $\tau_t:=\sup\{0<s<t: X_s=k\}$, i.e. the last hitting time of the level $k$ before time $t$, where the sup is set to $0$ if the set $\{0<s<t: X_s=k\}$ is empty. This should be a stopping time, even though last hitting times in general are not.

Then, we can manipulate the given probability in this way: \begin{align*}\mathbb{P}[X_t>M]&=\underbrace{\mathbb{P}[X_t>M,\tau_t=0]}_{=0}+\underbrace{\mathbb{P}[X_t>M,\tau_t\ne0, X_t<k]}_{=0}+\mathbb{P}[X_t>M,\tau_t\ne0, X_t>k]\\ &\le\mathbb{P}[X_t-X_{\tau_t}>M-k,\tau_t\ne0]\\ &\le \mathbb{P}[a(t-\tau_t)+W_t-W_{\tau_t}>M-k, \tau_t\ne0]\\ &\le \mathbb{P}[a(t-\tau_t)+W_t-W_{\tau_t}>M-k].\end{align*}

Since $a<0$ we can bound this probability uniformly over $t$, and the claim should be proven. However, I think I have been too slick with some steps, and I want to make sure they are correct (in particular, $\tau_t$ does not seem to be a stopping time). Can someone help me find any potential errors? Also, is there a more straightforward way to prove this? Maybe I just didn't see an easier solution.
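As a sanity check on the final bound, one can evaluate $\sup_{u>0}\mathbb{P}[au+W_u>M-k]$ numerically; a minimal sketch with illustrative values $a=-1$, $k=0$ (assumptions for the sketch, not taken from the problem):

```python
from math import erfc, sqrt

# For fixed u = t - tau, a*u + (W_t - W_tau) ~ N(a*u, u), so
# P(a*u + W_u > c) = (1/2) * erfc((c - a*u) / sqrt(2*u)).
a, k = -1.0, 0.0  # illustrative values, not from the problem

def sup_tail(M, n_grid=5000, u_max=50.0):
    """Approximate sup over u > 0 of P(a*u + W_u > M - k) on a grid."""
    c = M - k
    us = [u_max * (j + 1) / n_grid for j in range(n_grid)]
    return max(0.5 * erfc((c - a * u) / sqrt(2 * u)) for u in us)

# Since a < 0, the sup is finite, uniform in t, and shrinks as M grows.
print(sup_tail(3.0), sup_tail(6.0))
```

Because the Gaussian tail is explicit, the sup is attained near $u=(M-k)/|a|$ and vanishes as $M\to\infty$, which is exactly the uniform-in-$t$ control the argument needs.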

EDIT: the $f$ in my specific problem satisfies the hypothesis $|f(x,t)-k_0|<\theta\cdot |x|+ \mu$ for some constants $k_0$, $\theta$ and $\mu$: is there some kind of comparison principle for this absolute-value bound? I can't find it online, but if it exists, then I would be done, as the Ornstein–Uhlenbeck process is bounded in the sense I am looking for.
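For what it's worth, the Ornstein–Uhlenbeck intuition is easy to check empirically; a minimal Euler–Maruyama sketch, where all parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not from the question):
# the OU drift f(x) = -theta * x satisfies f(x) < -theta * k < 0 for x > k.
theta, k = 1.0, 1.0
dt, n_steps, n_paths = 0.01, 1000, 5000
M = 3.0                          # threshold, M > k

X = np.zeros(n_paths)            # X_0 = 0 < k
max_prob = 0.0                   # empirical sup over t of P(X_t > M)
for _ in range(n_steps):
    X += -theta * X * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    max_prob = max(max_prob, np.mean(X > M))

# The stationary law is N(0, 1/(2*theta)), so P(X_t > 3) is on the order
# of 1e-5 here, and the empirical sup stays far below any reasonable epsilon.
print(max_prob)
```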



I'll just give some partial answers to the original question, but my work is rather long, so I doubt I can type it all up in one sitting.

Organisation of my post

  • I'll start by showing some trivial facts about $\mathbb{E}(X_t^2)$, to show that "at least we have something".
  • Then, I'll introduce two sequences of stopping times that divide the time horizon in a nice way; from these, I present a tighter bound for $\mathbb{E}(X_t^2)$.
  • In the end, even though I have a strong feeling that $\sup_t \mathbb{E}(X_t^2)$ can be controlled even without the time-homogeneity of $f$, I'll assume that condition nonetheless to give a sure partial answer.

Remark: To be honest, it is not really $\mathbb{E}(X_t^2)$ that we will work with, but rather $\mathbb{E}((X_t)_+^2)$, where $(x)_+:=\max(x,0)$.

A weak bound for $\mathbb{E}((X_t)_+^2)$

Theorem 1 If $f(t,x)<0$ for all $x \ge m$, we have:
$$\mathbb{E}((X_t)_+^2) \le 2m^2+2t$$ $\square$

Demonstration 2
By Itô's formula, we have: $$ \text{d}(X_t-m)_+^2=2(X_t-m)_+\,\text{d}X_t+ 1_{\{X_t \ge m\}}\,\text{d}t$$ Thus (by some standard local-martingale and Fatou arguments, which I omit) $$ \mathbb{E}((X_t-m)_+^2) \le \underbrace{2\int_{0}^t \mathbb{E}((X_s-m)_+f(s,X_s))\,ds}_{ \le 0}+ \mathbb{E}\left( \int_{0}^t 1_{\{X_s \ge m\}}\,ds \right)$$

Hence $$ \mathbb{E}((X_t-m)_+^2) \le t$$ Since $(X_t)_+ \le (X_t-m)_+ + m$ (we may take $m>0$), we get $(X_t)_+^2 \le 2(X_t-m)_+^2 + 2m^2$, and the conclusion follows. $\square$

Corollary 2
If $f(t,x)<0$ for all $x\ge m$, then by Markov's inequality applied to $(X_t)_+^2$ we have: $$\mathbb{P}(X_t > M) \le \frac{\mathbb{E}((X_t)_+^2)}{M^2} \le \frac{2m^2+2t}{M^2}$$ for all $M>0$ $\square$

A better bound for $\mathbb{E}((X_t)_+^2)$

1. Some setup

So the above theorem provides a pretty nice bound for $\mathbb{P}(X_t > M)$ when we have some control over the negativity of $f$. Naturally, we should expect a better bound for $\mathbb{P}(X_t > M)$ when we have more control. I first repeat the initial assumption on $f$:

Condition C1: $f(t,x) < a <0 $ for all $x>k$ $\square$.


Let's define the following two sequences of stopping times, given two real numbers $k<m_1<m_2$ (from now on, we only consider $M>m_2$):
  • $\sigma_1:= \inf\{ t \ge 0 : X_t \ge m_1\}$ (well defined because $X_0<k<m_1$)
  • $\tau_n := \inf\{ t \ge \sigma_n : X_t \ge m_2\}$
  • $\sigma_{n+1} := \inf\{ t \ge \tau_n : X_t \le m_1\}$

Remark 3: $(\sigma_n)$ and $(\tau_n)$ are nothing mysterious; they are just stopping times which delimit the upcrossings and downcrossings of $X$. In particular, $X$ on $[\tau_n,\sigma_{n+1}]$ is a downcrossing, while $X$ on $[\sigma_n,\tau_n]$ is an upcrossing.
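On a discretized path (e.g. from a simulation), these stopping times and $N(t)$ below can be computed with a simple scan; a minimal sketch using indices in place of times:

```python
def crossing_times(path, m1, m2):
    """Scan a discrete path for the alternating stopping times of the post:
    sigma_1     = first index with path >= m1,
    tau_n       = first index >= sigma_n with path >= m2,
    sigma_{n+1} = first index >= tau_n with path <= m1."""
    sigmas, taus = [], []
    i, n = 0, len(path)
    while i < n and path[i] < m1:        # wait for sigma_1
        i += 1
    while i < n:
        sigmas.append(i)
        while i < n and path[i] < m2:    # climb towards m2 (upcrossing)
            i += 1
        if i == n:
            break
        taus.append(i)
        while i < n and path[i] > m1:    # fall back to m1 (downcrossing)
            i += 1
    return sigmas, taus

def N_of_t(sigmas, t_index):
    """N(t) = inf{k : sigma_k > t}, with 1-based k as in the post."""
    for k, s in enumerate(sigmas, start=1):
        if s > t_index:
            return k
    return len(sigmas) + 1

# A toy path crossing [m1, m2] = [1, 2] twice:
path = [0.0, 1.5, 2.5, 1.5, 0.5, 2.6, 0.4]
sigmas, taus = crossing_times(path, 1.0, 2.0)
print(sigmas, taus, N_of_t(sigmas, 3))   # -> [1, 4, 6] [2, 5] 2
```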

Assumption A1: $\mathbb{P}( \tau_n<+\infty) =1$ for all $n$.

Remark 4: In fact, this assumption is a bit superfluous, because on the event where $\tau_n$ is infinite, $X$ never returns above $m_2$, which itself yields an upper bound for $X_t$. However, since more elaborate work is needed to get rid of that case, I decided to just assume the equality.

Remark 5: If I remember correctly, I proved this equality under some assumptions in a previous post.

For any $t$, we define: $$N(t):= \inf \{ k : \sigma_k >t\}$$

2. Main result

I now present the main theorem for this section

Theorem 6
Under the condition C1 and the assumption A1, we have: $$ \mathbb{E}( (X_t-m_1)_+^2 ) \le \mathbb{E}( t- \sigma_{N(t)-1})$$ $\square$

Remark 7: I lost my notes on this part, so this is not really the "true" theorem; however, the essentials are presented.

3. Demonstration

Again, by Fatou and local-martingale arguments, we have: $$ \mathbb{E}\left( \left(X_{ \sigma_{n+1} \wedge ( t \vee \sigma_n)}- X_{\sigma_n}\right)_+^2 \right) \le \underbrace{ \mathbb{E}\left(2\int_{\sigma_n}^{\sigma_{n+1} \wedge ( t \vee \sigma_n)} (X_s-m_1)_+f(s,X_s)\,ds\right)}_{ \le 0}+ \mathbb{E}\left( \int_{\sigma_n}^{\sigma_{n+1} \wedge ( t \vee \sigma_n)} 1_{\{X_s \ge m_1\}}\,ds \right)$$ (Note that $X_{\sigma_n}= m_1$.)

Thus, $$ \mathbb{E}\left( \underbrace{ X_{ \sigma_{n+1} \wedge ( t \vee \sigma_n)}- X_{\sigma_n}}_{ =X_t-X_{\sigma_n} \text{ if } t \in [\sigma_{n},\sigma_{n+1}] \text{ and } 0 \text{ otherwise} } \right)_+^2 \le \mathbb{E}\left( \underbrace{ \sigma_{n+1} \wedge ( t \vee \sigma_n)-\sigma_n}_{ = t-\sigma_n \text{ if } t \in [ \sigma_n,\sigma_{n+1}] \text{ and } 0 \text{ otherwise}}\right) $$ By varying $n$ and adding up all the resulting inequalities, we deduce that: $$ \mathbb{E}( (X_t-m_1)_+^2 ) \le \mathbb{E}( t- \sigma_{N(t)-1}) $$

4. Discussion

  • This bound is indeed close enough.
  • The condition C1 is mostly needed to ensure, in some sense, that the assumption A1 is reasonable.

(the continuation is in the next post)


Small discussion before presenting the main result

  • Clearly, from the above work, we obtain the desired conclusion if we can prove the following estimate: $$\limsup_{t\to\infty} \mathbb{E}(t- \sigma_{N(t)-1}) <+\infty$$
  • Also, our question is now converted from a question on the behaviour of $X_t$ (for a specific $t$) to a question on the behaviour of sequences of stopping times.
  • IMHO, the main advantages of this conversion are threefold: 1) the nice form of the expression we now have to control; 2) if things go well, we can provide an $O(1/M^2)$ upper bound for $\sup_t \mathbb{P}(X_t>M)$; 3) the Markov property.

After all the lengthy work above, I think it is nice to have an affirmative answer in some cases.
Here I'll present those "some cases"; this is also the central fruit of my work.

A uniform bound on $\mathbb{E}((X_t)_+^2)$

1. Main result
In this section, I'll take advantage of some probabilistic (and analytic) tools in their purest form by assuming the following.

Condition C2: $f$ is time-homogeneous, i.e. $f(t,x)=f(x)$ $\square$

Here is the main result:

Theorem 8
Under the assumption A1 and the conditions C1 and C2, there is a constant $C$, which does not depend on $t$, such that:

$$ \mathbb{E}( (X_t-m_2)_+^2) < C \quad \forall t$$ $\square$

Remark 9: Yes, it is $m_2$.

2. Preliminaries

Theorem 9 (A different bound for $(X_t)_+$): Under the condition C1 and the assumption A1, we have: $$ \mathbb{E}(( X_t-m_2)_+^2) \le \mathbb{E}( t\vee \tau_{N(t)-1}-\tau_{N(t)-1})$$ where $N(t)$ is the same as above. $\square$

Remark 10: I have to replace the upper bound in Theorem 6 by this bound because it is not straightforward to bound $t-\sigma_{N(t)-1}$. The underlying reason is that we have little to no information about $f(x)$ when $x<k$. Thankfully, for all $n$, $X$ is bigger than $m_1$ (hence bigger than $k$) on $[\tau_n,\sigma_{n+1}]$.

Demonstration (summarized): Just add some more technical manipulations to the proof of Theorem 6.

Let:

  • $A(t) := \mathbb{E}( t\vee \tau_{N(t)-1}-\tau_{N(t)-1})$
  • $\gamma_1 \sim \tau_1-\sigma_1$, i.e. the random time that $X$ needs to reach $m_2$ when it starts from $m_1$ (under the assumption of homogeneity of $f$)
  • $\gamma_2 \sim \sigma_{2}-\tau_1$, i.e. the random time that $X$ needs to reach $m_1$ when it starts from $m_2$ (under the assumption of homogeneity of $f$)
  • $F$ and $G$ are the distributions of $\gamma_1$ and $\gamma_2$ respectively.

Remark 11 $F*G(t) = P(\gamma_1+\gamma_2 \le t)$

Theorem 12: Under the conditions C1 and C2 and the assumption A1, and with $X_0=m_1$ (i.e. $\sigma_1=0$), we have: $$A(t)=\int_{0}^t A(t-s)dF*G(s)+\underbrace{\int_{t}^{\infty} B(t,s)dF*G(s)}_{=:h(t)} $$ where $B(t,s)= \mathbb{E}( t\vee \gamma_1-\gamma_1 \mid \gamma_1+\gamma_2=s)$

Demonstration (omitted)

Message: I am getting a little tired (of all the technical details); please feel free to ask me to fill in details at any point of my work that seems obscure. By all means, I can always be wrong or making blunders.

Remark 13 In short, $A$ satisfies the renewal equation : $$ A=h+ A*K$$ where $K=F*G$
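The renewal equation $A=h+A*K$ can also be solved numerically by a discretized convolution, which gives a concrete feel for the boundedness claim; a sketch with hypothetical choices $\gamma_1,\gamma_2\sim\text{Exp}(1)$ (so $K=F*G$ is Gamma$(2,1)$) and $h(t)=e^{-t}$ standing in for the true forcing term:

```python
import numpy as np

# Hypothetical ingredients (not the true h, F, G of the post):
# gamma_1, gamma_2 ~ Exp(1), so K = F * G has density k(s) = s * exp(-s),
# and h(t) = exp(-t) is a bounded, integrable stand-in forcing term.
ds, T = 0.01, 30.0
s = np.arange(0.0, T, ds)
h = np.exp(-s)
k = s * np.exp(-s)

# Solve A = h + A * K step by step: A(t_i) depends only on A(t_j), j < i.
A = np.zeros_like(s)
for i in range(len(s)):
    # right-endpoint Riemann sum for (A * K)(t_i) = int_0^{t_i} A(t_i - u) k(u) du
    conv = np.dot(A[i - 1::-1], k[1:i + 1]) * ds if i > 0 else 0.0
    A[i] = h[i] + conv

# Key renewal theorem prediction: A(t) -> (1 / E(gamma_1 + gamma_2)) * int_0^inf h
# = (1/2) * 1 = 0.5, so sup_t A(t) stays bounded.
print(A.max(), A[-1])
```

With these choices the exact solution is $A(t)=(1+e^{-2t})/2$, so the numerical $A$ stays bounded by $1$ and approaches $\frac{1}{\mathbb{E}(\gamma_1+\gamma_2)}\int_0^\infty h=\frac12$, as the key renewal theorem predicts.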

Remark 14: To be honest, $h(t)= \mathbb{E}( t\vee \gamma_1-\gamma_1 ,\gamma_1+\gamma_2 \ge t)$

Theorem 15 (Regularity of $\gamma_2$): $\mathbb{E}(\gamma_2^2)<+\infty$. Demonstration (summarized): As noted, during a downcrossing (an interval of length $\gamma_2$), $X_t \ge m_1> k$, so our SDE behaves nicely because we have control over its drift. What is left is some technical argument. $\square$

3. Demonstration
Clearly, $$h(t) = \mathbb{E}( t\vee \gamma_1-\gamma_1 ,\gamma_1+\gamma_2 \ge t) \le \underbrace{\mathbb{E}( \gamma_2, \gamma_1+\gamma_2 \ge t )}_{=:h^*(t)} \le \mathbb{E}(\gamma_2) $$ Hence $h \in L^{\infty}$, and by renewal theory the convolution part of the solution is

$$H(t) = \int_{0}^t h(t-s)dU(s),$$ where $U(s) = \mathbb{E}(N_s)$ is the renewal function.
Thus, $$H(t) \le \int_{0}^t h^*(t-s)dU(s)$$ Besides, $h^*$ is clearly decreasing and $\int_{0}^{\infty} h^*(t)dt<+\infty$, so by the key renewal theorem we have: $$\lim_{t\to\infty} \int_{0}^t h^*(t-s)dU(s) = \frac{1}{\mathbb{E}(\gamma_1+\gamma_2)} \int_{0}^{\infty} h^*(t)dt<+\infty, $$ whence our conclusion. $\square$

Remark 16: $\mathbb{E}(\gamma_1)$ can be infinite, and to be honest, most of the work in this section is to get around that uncertainty.

Remark 16.1: The solution above is only applicable when $\mathbb{E}(\gamma_1) <+\infty$; if $\mathbb{E}(\gamma_1) =+\infty$, some technical manipulations are required.

Remark 17: We have assumed $X_0= m_1$; in fact this condition can easily be relaxed, but I decided to omit that part.


Summary

Consider a stochastic process $(X)$ driven by a Brownian motion $(W)$ which satisfies the following SDE: $$dX_t = f(t,X_t)\,dt +dW_t$$

then

Theorem 1:
If $f(x,t) \le 0 \quad \forall x \ge m$, then for all $t$ we have: $$\mathbb{E}((X_t)^2_+) \le 2m^2+2t$$ $\square$

Theorem 2
If:

  • $f(x,t) \le 0$ for all $ x \ge m$, and
  • there are two numbers $m<m_1<m_2$ such that $X$ upcrosses the interval $[m_1,m_2]$ infinitely often,

then $$\mathbb{E}((X_t-m_2)_+^2 ) \le \mathbb{E}( t\vee \tau_{N(t)-1} -\tau_{N(t)-1})$$ where $\tau_{N(t)-1}$ is the first time $X$ upcrosses $m_2$ such that the next time $X$ downcrosses $m_1$ is after $t$.

$\square$

Theorem 3
If:

  • $f(x,t) \le a <0 $ for all $ x \ge m$,
  • there are two numbers $m<m_1<m_2$ such that $X$ upcrosses the interval $[m_1,m_2]$ infinitely often, and
  • $f(t,x)=f(x)$,

then there is a constant $C$ not depending on $t$ such that: $$\mathbb{E}((X_t-m_2)_+^2) \le C$$

$\square$

Discussion

  • Perhaps there is a simpler way to obtain the same result, but I would like to choose the one whose rigour I can be sure about. It would be nice if anyone could help me expand my humble knowledge.
  • The conditions on the upcrossings and the homogeneity seem relaxable, yet the relaxing work can quickly become complicated with my approach.
  • (Exclamation) This work is much longer than I expected (it is not really long in my draft), even though the solution here is also a shortened version.