Proving that a random walk using a maximum likelihood estimator can diverge to infinity


Consider a sequence of continuous random variables $\{X_n\}^\infty_{n=1}$ that are independent and identically distributed with probability density function $f_\theta (x)$, where $\theta \in [\ell ,u]$ is an unknown but deterministic parameter, $\ell < 0 < u$, and $\theta \neq 0$. Consider the random sequence:

\begin{equation} S_n = \sum_{i=2}^n \log\frac{f_{\hat{\theta}_{1:i-1}}(X_i)}{f_0(X_i)} \end{equation}

where $\hat{\theta}_{1:i-1}$ is the maximum likelihood estimate of the parameter $\theta$ based on the observations $\{X_n\}^{i-1}_{n=1}$, and $S_0 \triangleq S_1 \triangleq 0$. Let $ \tau =\inf\{n \geq 2 : S_n \leq 0\}$. I am trying to show that, with non-zero probability, $S_n$ diverges to infinity without ever becoming non-positive, i.e., \begin{equation} \mathbb{P}(\tau = \infty) >0. \end{equation} My intuition says that this should hold because $\hat{\theta}_{1:i-1}$ converges to $\theta$ almost surely as $i \rightarrow \infty$. Should this result hold when the maximum likelihood estimator is used to estimate the parameter $\theta$? I would really appreciate it if anyone could point me to results that could be useful for showing this claim. Thanks!
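For concreteness, here is a minimal simulation sketch, assuming a Gaussian location family $f_\theta = \mathcal{N}(\theta, 1)$ with true parameter $\theta = 0.5$ and $[\ell, u] = [-1, 1]$ (all of these choices are illustrative and not part of the question). The MLE is the running sample mean clipped to $[\ell, u]$, and the fraction of paths on which $S_n$ stays positive up to a finite horizon serves as a crude proxy for $\mathbb{P}(\tau = \infty)$:

```python
import numpy as np

# Simulation sketch under an assumed Gaussian location family
# f_theta = N(theta, 1) with true theta = 0.5 and [l, u] = [-1, 1].
# The MLE of theta from the first i-1 samples is their mean, clipped
# to [l, u]. For N(theta, 1):
#   log f_theta(x) - log f_0(x) = theta * x - theta**2 / 2.

rng = np.random.default_rng(0)
theta, l, u = 0.5, -1.0, 1.0
n_paths, n_steps = 1000, 2000

never_nonpositive = 0
for _ in range(n_paths):
    x = rng.normal(theta, 1.0, size=n_steps)
    # running means of x_1..x_k for k = 1..n_steps
    cum_mean = np.cumsum(x) / np.arange(1, n_steps + 1)
    # theta_hat_{1:i-1} for i = 2..n_steps, clipped to [l, u]
    theta_hat = np.clip(cum_mean[:-1], l, u)
    # log-likelihood-ratio increments for i = 2..n_steps
    increments = theta_hat * x[1:] - theta_hat**2 / 2
    S = np.cumsum(increments)  # S_2, S_3, ..., S_{n_steps}
    if np.all(S > 0):
        never_nonpositive += 1

# crude lower-bound proxy for P(tau = infinity)
print(never_nonpositive / n_paths)
```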



Best answer:

Is your question whether this can happen, or whether it must happen? It can happen, but it doesn't have to. Your maximum likelihood estimators $\hat\theta_{1:i-1}$ don't even have to converge to $\theta$: without some conditions on how $f_{\theta}$ depends on $\theta$, you can't say very much for sure. (For instance, if $f_{\theta}$ is actually independent of $\theta$, then $S_n$ is identically zero and $\tau = 2$.)
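Conversely, here is a sketch of why it can happen under enough regularity, using for illustration the Gaussian location family $f_\theta = \mathcal{N}(\theta, 1)$ (an assumption not made in the question): once $\hat\theta_{1:i-1}$ is close to $\theta$, the expected increment of $S_n$ is approximately the Kullback–Leibler divergence \begin{equation} \mathbb{E}_\theta\left[\log\frac{f_\theta(X)}{f_0(X)}\right] = D(f_\theta \,\|\, f_0) = \frac{\theta^2}{2} > 0, \end{equation} so $S_n$ eventually behaves like a random walk with strictly positive drift, and such a walk stays above its starting level forever with positive probability. Turning this into a proof of $\mathbb{P}(\tau = \infty) > 0$ still requires controlling the early steps, where $\hat\theta_{1:i-1}$ may be far from $\theta$.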