This question is related to some similar ones I asked recently.
Let $X_t$ be a stochastic process defined through the equation $$\text{d}X_t=f(X_t,t)\text{d}t+\text{d}W_t,$$ where $f$ is a twice differentiable function such that $f(x,t)<a<0$ for all $x>k$ and all $t\ge0$, and $X_0<k$.
What I wish to show is:
For all $\epsilon>0$ there is $M>0$ (which we can WLOG assume to be bigger than $k$) such that for all $t\ge0$ it holds that $$\mathbb{P}[X_t>M]\le\epsilon.$$
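To get a feel for the claim before any proof, here is a minimal Euler–Maruyama Monte Carlo sketch in Python; the drift $f(x,t)=1-x$ (so that $f(x,t)<a=-1$ for $x>k=2$), the starting point $X_0=0<k$, the level $M=3$ and all discretisation constants are illustrative assumptions, not part of the problem:

```python
import numpy as np

# Monte Carlo sanity check of the claim, via Euler-Maruyama.
# Illustrative assumptions: f(x, t) = 1 - x (so f < -1 for x > 2),
# X_0 = 0, M = 3; step size and path count are arbitrary choices.
rng = np.random.default_rng(0)

def tail_probability(T, M=3.0, dt=0.01, n_paths=10_000):
    """Estimate P[X_T > M] for dX = (1 - X) dt + dW, X_0 = 0."""
    x = np.zeros(n_paths)
    for _ in range(int(T / dt)):
        x += (1.0 - x) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    return (x > M).mean()

for T in (1.0, 10.0, 50.0):
    print(f"T = {T:5.1f}:  P[X_T > 3] ~ {tail_probability(T):.4f}")
```

In this example the estimated tail probabilities stay small uniformly in $T$, which is exactly the behaviour the claim asserts in general.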
What I tried to do is define $\tau_t:=\sup\{0<s<t: X_s=k\}$, i.e. the last hitting time of the level $k$ before time $t$, where the sup is set equal to $0$ if the set $\{0<s<t: X_s=k\}$ is empty. I hoped this would be a stopping time, even though last hitting times in general are not.
Then, we can manipulate the given probability in this way: \begin{align*}\mathbb{P}[X_t>M]&=\underbrace{\mathbb{P}[X_t>M,\tau_t=0]}_{=0}+\underbrace{\mathbb{P}[X_t>M,\tau_t\ne0, X_t<k]}_{=0}+\mathbb{P}[X_t>M,\tau_t\ne0, X_t>k]\\ &\le\mathbb{P}[X_t-X_{\tau_t}>M-k,\tau_t\ne0]\\ &\le \mathbb{P}[a(t-\tau_t)+W_t-W_{\tau_t}>M-k, \tau_t\ne0]\\ &\le \mathbb{P}[a(t-\tau_t)+W_t-W_{\tau_t}>M-k].\end{align*}
Since $a<0$, we can bound this probability uniformly over $t$, and the claim should be proven. However, I think I've been too slick with some steps, and I want to make sure they are correct (in particular, $\tau_t$ seems not to be a stopping time). Can someone help me find any potential errors? Also, is there a more straightforward way to prove this? Maybe I just didn't see an easier solution.
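For completeness, the uniform-in-$t$ estimate I would like to invoke at the end is the standard tail bound for the running maximum of a Brownian motion with negative drift: for a standard Brownian motion $B$, $a<0$ and $x>0$, $$\mathbb{P}\Big[\sup_{s\ge0}\big(as+B_s\big)\ge x\Big]=e^{2ax},$$ which would make the last probability above at most $e^{2a(M-k)}$, smaller than any $\epsilon$ for $M$ large, uniformly in $t$. The catch is that this step treats $(W_{\tau_t+s}-W_{\tau_t})_{s\ge0}$ as a Brownian motion independent of $\tau_t$, which is exactly where the fact that $\tau_t$ is not a stopping time bites.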
EDIT: the $f$ in my specific problem satisfies the hypothesis $|f(x,t)-k_0|<\theta\cdot |x|+ \mu,$ for some constants $k_0, \theta$ and $\mu$: is there some kind of comparison principle with the absolute value bound? I can't find it online, but if it exists, then I would be done, as the Ornstein-Uhlenbeck process is bounded in the sense I am looking for.
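For reference, the classical comparison theorem for one-dimensional SDEs driven by the same Brownian motion, with unit diffusion coefficient and at least one of the two drifts Lipschitz in $x$ (see e.g. Ikeda and Watanabe, *Stochastic Differential Equations and Diffusion Processes*, Ch. VI), reads $$f_1(x,t)\le f_2(x,t)\ \ \forall x,t\quad\text{and}\quad X_0^1\le X_0^2\qquad\Longrightarrow\qquad\mathbb{P}\big[X_t^1\le X_t^2\ \text{for all }t\ge0\big]=1.$$ Note, however, that it requires a one-sided bound: the absolute-value hypothesis above only yields the upper drift $k_0+\theta|x|+\mu$, which is not the drift of a mean-reverting Ornstein-Uhlenbeck process; an upper bound of the form $f(x,t)\le c-\lambda x$ with $\lambda>0$ is what would be needed to dominate $X$ by an OU process.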
I'll just give some partial answers to the original question, but my work is rather long, so I highly doubt I can type it all in one sitting.
Organisation of my post
I'll start with some easy observations about $\mathbb{E}(X_t^2)$, to show that "at least we have something".
Then, I'll introduce two sequences of stopping times to divide the time horizon in a nice way, from which I derive a tighter bound for $\mathbb{E}(X_t^2)$.
In the end, even though I have a strong feeling that $\sup_t \mathbb{E}(X_t^2)$ can be controlled well even without the time-homogeneity of $f$, I'll assume that condition nonetheless to give a sure partial answer.
Remark: to be honest, it's not really $\mathbb{E}(X_t^2)$ that we will bound, but rather $\mathbb{E}((X_t)_+^2)$.
A weak bound for $\mathbb{E}((X_t)_+^2)$
Theorem 1: If $f(t,x)<0$ for all $x \ge m$, where $m\ge k$ (so that $X_0<m$), we have:
$$\mathbb{E}((X_t)_+^2) \le 2m^2+2t$$ $\square$
Demonstration
By Ito's formula (the function $x\mapsto(x-m)_+^2$ is $C^1$ with Lipschitz derivative, so the standard extension of Ito's formula applies), we have: $$d(X_t-m)_+^2=2(X_t-m)_+\,dX_t+1_{\{X_t\ge m\}}\,dt$$ Thus (by some standard local martingale + Fatou arguments which I omit, and noting that $(X_0-m)_+=0$ since $X_0<k\le m$): $$\mathbb{E}((X_t-m)_+^2)\le\underbrace{2\int_{0}^t\mathbb{E}\big((X_s-m)_+f(s,X_s)\big)\,ds}_{\le 0}+\mathbb{E}\Big(\int_{0}^t 1_{\{X_s\ge m\}}\,ds\Big)$$
Hence $$\mathbb{E}((X_t-m)_+^2)\le t.$$ Since $(X_t)_+\le m+(X_t-m)_+$, we get $(X_t)_+^2\le 2m^2+2(X_t-m)_+^2$, and the conclusion follows. $\square$
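As a quick numerical illustration (not part of the proof), here is a Monte Carlo check of the intermediate bound $\mathbb{E}((X_t-m)_+^2)\le t$; the drift $f(s,x)=1-x$ (negative for $x\ge m$ with $m=1.5$), the starting point $X_0=0$ and the discretisation constants are illustrative assumptions:

```python
import numpy as np

# Monte Carlo check of the intermediate bound E[(X_t - m)_+^2] <= t.
# Illustrative assumptions: drift f(s, x) = 1 - x (negative for x >= 1.5),
# X_0 = 0, m = 1.5; step size and path count are arbitrary choices.
rng = np.random.default_rng(1)

t_final, dt, n_paths, m = 5.0, 0.01, 50_000, 1.5
x = np.zeros(n_paths)
for _ in range(int(t_final / dt)):
    x += (1.0 - x) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)

lhs = np.mean(np.maximum(x - m, 0.0) ** 2)
print(f"E[(X_t - m)_+^2] ~ {lhs:.4f}  vs  t = {t_final}")
```

In this example the left-hand side settles far below $t$, consistent with (and much stronger than) the bound.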
Corollary 2
If $f(t,x)<0$ for all $x\ge m$ (with $m\ge k$), then by Markov's inequality applied to $(X_t)_+^2$: $$\mathbb{P}(X_t > M) \le \frac{2m^2+2t}{M^2}$$ for all $M>0$ $\square$
A better bound for $\mathbb{E}((X_t)_+^2)$
1. Some set-up
So the above theorem provides a pretty nice bound for $\mathbb{P}(X_t > M)$ when we have some control on the negativity of $f$.
Naturally, we should expect a better bound for $\mathbb{P}(X_t > M)$ when we have more control.
I first recall the initial assumption on $f$:
Condition C1: $f(t,x)<a<0$ for all $x>k$. $\square$
Let's define the following two sequences of stopping times, given two real numbers $k<m_1<m_2$ (from now on, we only consider $M>m_2$): $$\sigma_1:=\inf\{t\ge0:X_t=m_1\},\qquad\tau_n:=\inf\{t\ge\sigma_n:X_t=m_2\},\qquad\sigma_{n+1}:=\inf\{t\ge\tau_n:X_t=m_1\}.$$
Remark 3: $(\sigma_n)$ and $(\tau_n)$ are nothing mysterious; they are just the stopping times which delimit the upcrossings and downcrossings of $X$. In particular, $X_{[\tau_n,\sigma_{n+1}]}$ is a downcrossing, while $X_{[\sigma_n,\tau_n]}$ is an upcrossing.
Assumption A1: $\mathbb{P}(\tau_n<+\infty)=1$ for all $n$. $\square$
Remark 4: In fact, this assumption is a bit superfluous, because on the event $\{\tau_n=+\infty\}$ the process never returns to $m_2$, which itself yields an upper bound for $X_t$. However, more elaborate work is needed to handle that case, so I decided to just assume that equality.
Remark 5: If I remember correctly, I have proven that equality, under some assumptions, in a previous post.
For any $t$, we define: $$N(t):=\inf\{n:\sigma_n>t\},$$ with the convention $\sigma_0:=0$.
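To make the construction concrete, here is a purely illustrative Python sketch that recovers the crossing times $\sigma_n$, $\tau_n$ and the index $N(t)$ from a discretized path; the drift $f(x)=1-x$ and the levels $m_1=1$, $m_2=1.5$ are assumptions chosen only so that crossings actually occur:

```python
import numpy as np

# Illustrative reconstruction of sigma_n, tau_n and N(t) along one
# discretized path, following the definitions above:
#   sigma_1     = first hit of m1,
#   tau_n       = first hit of m2 after sigma_n,
#   sigma_{n+1} = first return to m1 after tau_n,
#   N(t)        = inf{ n : sigma_n > t }.
rng = np.random.default_rng(2)

def crossing_times(path, times, m1, m2):
    """Alternating hitting times (sigmas, taus) of the levels m1 < m2."""
    sigmas, taus = [], []
    state = "hit_m1"                      # the path starts below m1
    for x, t in zip(path, times):
        if state == "hit_m1" and x >= m1:
            sigmas.append(t); state = "hit_m2"
        elif state == "hit_m2" and x >= m2:
            taus.append(t); state = "return_m1"
        elif state == "return_m1" and x <= m1:
            sigmas.append(t); state = "hit_m2"
    return sigmas, taus

def N_of_t(t, sigmas):
    """N(t) = inf{n : sigma_n > t} (restricted to the observed sigmas)."""
    return next((n for n, s in enumerate(sigmas, 1) if s > t), len(sigmas) + 1)

# one path of dX = (1 - X) dt + dW, X_0 = 0, on [0, 20]
dt, T = 0.001, 20.0
times = np.arange(0.0, T, dt)
path = np.empty_like(times)
x = 0.0
for i in range(times.size):
    path[i] = x
    x += (1.0 - x) * dt + np.sqrt(dt) * rng.standard_normal()

sigmas, taus = crossing_times(path, times, m1=1.0, m2=1.5)
print("first sigma_n:", [round(s, 2) for s in sigmas[:4]])
print("first tau_n:  ", [round(s, 2) for s in taus[:3]])
print("N(10.0) =", N_of_t(10.0, sigmas))
```

(The discrete state machine only approximates the continuous hitting times, of course, but it makes the alternation of up- and downcrossings visible.)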
2. Main result
I now present the main theorem of this section.
Theorem 6
Under the condition C1 and the assumption A1, we have: $$ \mathbb{E}( (X_t-m_1)_+^2 ) \le \mathbb{E}( t- \sigma_{N(t)-1})$$ $\square$
Remark 7: I lost my notes on this part, so this is not really the "true" theorem; however, the essentials are presented.
3. Demonstration
Again, by Fatou and local martingale arguments, we have: $$\mathbb{E}\Big(\big(X_{\sigma_{n+1}\wedge(t\vee\sigma_n)}-X_{\sigma_n}\big)_+^2\Big)\le\underbrace{2\,\mathbb{E}\Big(\int_{\sigma_n}^{\sigma_{n+1}\wedge(t\vee\sigma_n)}(X_s-m_1)_+f(s,X_s)\,ds\Big)}_{\le 0}+\mathbb{E}\Big(\int_{\sigma_n}^{\sigma_{n+1}\wedge(t\vee\sigma_n)}1_{\{X_s\ge m_1\}}\,ds\Big)$$ (Note that $X_{\sigma_n}=m_1$.)
Thus, $$\mathbb{E}\Big(\big(\underbrace{X_{\sigma_{n+1}\wedge(t\vee\sigma_n)}-X_{\sigma_n}}_{=\,X_t-X_{\sigma_n}\text{ if }t\in[\sigma_n,\sigma_{n+1}],\ =\,0\text{ if }\sigma_n>t\text{ or }\sigma_{n+1}\le t}\big)_+^2\Big)\le\mathbb{E}\Big(\underbrace{\sigma_{n+1}\wedge(t\vee\sigma_n)-\sigma_n}_{=\,t-\sigma_n\text{ if }t\in[\sigma_n,\sigma_{n+1}],\ =\,\sigma_{n+1}-\sigma_n\text{ if }\sigma_{n+1}\le t,\ =\,0\text{ if }\sigma_n>t}\Big)$$ By varying $n$ and adding up all the produced inequalities, we obtain: $$\mathbb{E}\big((X_t-m_1)_+^2\big)\le\mathbb{E}\big(t-\sigma_{N(t)-1}\big)$$
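One step perhaps worth making explicit: on the left-hand side, at most one summand is nonzero for each fixed $t$ (the one with $n=N(t)-1$), since $X_{\sigma_{n+1}}=X_{\sigma_n}=m_1$ whenever $\sigma_{n+1}\le t$, and the increment vanishes whenever $\sigma_n>t$. Hence $$\sum_{n\ge1}\big(X_{\sigma_{n+1}\wedge(t\vee\sigma_n)}-X_{\sigma_n}\big)_+^2=(X_t-m_1)_+^2\,1_{\{\sigma_1\le t\}},$$ while on $\{\sigma_1>t\}$ we have $X_t<m_1$ by continuity (recall $X_0<k<m_1$), so the left-hand side of Theorem 6 is recovered in all cases. Controlling the sum of the right-hand sides by $\mathbb{E}(t-\sigma_{N(t)-1})$ is precisely the part to which the caveat of Remark 7 applies.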
4. Discussion
+ This bound is indeed tight enough for our purposes.
+ The condition C1 is mostly needed to ensure, in some sense, that the assumption A1 is reasonable.
(continued in the next post)