On the convergence in probability of the maximum statistic under a mixture of uniform and triangular distributions


Set up

Consider the example in section 2 of Ferguson (1982).

Let $X_1, \ldots, X_n$ be i.i.d. with a distribution which, with probability $\theta$, is $U(-1, 1)$ and, with probability $(1-\theta)$, is the triangular distribution on $(\theta - c(\theta), \theta + c(\theta))$; the p.d.f. is $$ f(x\mid\theta) = \frac{1-\theta}{c(\theta)}\left(1-\frac{|x-\theta|}{c(\theta)}\right)\mathbb{I}_{|x-\theta|\leq c(\theta)} + \frac{\theta}{2}\mathbb{I}_{|x|\leq 1}, $$ where $c(\theta)$ is a continuous decreasing function of $\theta$ with $c(0) = 1$ and $0<c(\theta)\leq 1-\theta$ for $0<\theta<1$, and with $c(\theta)\to 0$ as $\theta\to 1$.
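To make the setup concrete, here is a small sketch of the density and the sampling scheme. The particular $c(\theta) = (1-\theta)e^{1-(1-\theta)^{-4}}$ below is a hypothetical choice of mine, not necessarily Ferguson's; it satisfies all the stated conditions ($c(0)=1$, continuous, decreasing, $0 < c(\theta) \le 1-\theta$, and very fast decay as $\theta \to 1$).

```python
import numpy as np

def c(theta):
    # A *hypothetical* choice of c(theta), assumed for illustration only:
    # continuous, decreasing, c(0) = 1, 0 < c(theta) <= 1 - theta, and
    # c(theta) -> 0 very fast as theta -> 1, as the setup requires.
    return (1.0 - theta) * np.exp(1.0 - (1.0 - theta) ** -4)

def f(x, theta):
    """Mixture p.d.f. f(x | theta): triangular part plus uniform part.
    (theta should be bounded away from 1 here, so that c(theta) > 0.)"""
    ct = c(theta)
    tri = np.where(np.abs(x - theta) <= ct,
                   (1.0 - theta) / ct * (1.0 - np.abs(x - theta) / ct),
                   0.0)
    unif = np.where(np.abs(x) <= 1.0, theta / 2.0, 0.0)
    return tri + unif

def sample(n, theta, rng):
    """n i.i.d. draws: U(-1,1) w.p. theta, triangular w.p. 1 - theta."""
    ct = c(theta)
    use_unif = rng.random(n) < theta
    return np.where(use_unif,
                    rng.uniform(-1.0, 1.0, n),
                    rng.triangular(theta - ct, theta, theta + ct, n))

rng = np.random.default_rng(0)
xs = sample(10_000, 0.3, rng)
```

Note that since $c(\theta) \le 1-\theta$, the triangular support $(\theta - c(\theta), \theta + c(\theta))$ always lies inside $(2\theta-1, 1) \subset (-1, 1)$, so every observation satisfies $|X_i| \le 1$.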

Theorem

Let $\hat{\theta}_n$ denote the MLE of $\theta$. Then, for any true $\theta < 1$, $$ \hat{\theta}_n \overset{P}{\to} 1 \neq \theta, $$ so that $\hat{\theta}_n$ is an inconsistent MLE.
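The inconsistency can be seen numerically. The sketch below is mine, under assumptions: true $\theta_0 = 0.3$, the hypothetical fast-decaying $c(\theta) = (1-\theta)e^{1-(1-\theta)^{-4}}$ (the setup only requires that $c$ decay sufficiently fast), and maximization of $l_n$ over a coarse grid augmented with $X_{(n)}$, where the likelihood spike sits. The maximizer lands at $X_{(n)} \approx 1$, far from $\theta_0$.

```python
import numpy as np

def log_c(theta):
    # log of the *hypothetical* c(theta) = (1-theta) * exp(1 - (1-theta)^(-4));
    # computed in log form because c itself underflows for theta near 1.
    return np.log1p(-theta) + 1.0 - (1.0 - theta) ** -4

def log_lik(theta, x):
    """Log-likelihood l_n(theta) of the uniform/triangular mixture."""
    lc = log_c(theta)
    c_theta = np.exp(lc)          # may underflow to 0.0 for theta near 1
    total = 0.0
    for xi in x:
        u = abs(xi - theta)
        if u == 0.0:              # observation exactly at the triangular peak
            log_tri = np.log1p(-theta) - lc
        elif u < c_theta:
            log_tri = np.log1p(-theta) - lc + np.log1p(-u / c_theta)
        else:
            log_tri = -np.inf     # outside the triangular support
        total += np.logaddexp(log_tri, np.log(theta / 2.0))
    return total

rng = np.random.default_rng(1)
theta0, n = 0.3, 1000
c0 = np.exp(log_c(theta0))
mix = rng.random(n) < theta0      # True -> uniform component
x = np.where(mix,
             rng.uniform(-1.0, 1.0, n),
             rng.triangular(theta0 - c0, theta0, theta0 + c0, n))

# maximize over a coarse grid augmented with X_(n)
candidates = np.append(np.linspace(0.01, 0.99, 99), x.max())
theta_hat = max(candidates, key=lambda t: log_lik(t, x))
print(f"true theta = {theta0}, X_(n) = {x.max():.4f}, grid MLE = {theta_hat:.4f}")
```

The single observation $x = X_{(n)}$ contributes $\log\frac{1-\theta}{c(\theta)} \approx (1-X_{(n)})^{-4}$ to $l_n(X_{(n)})$, which dwarfs every bounded term, so the grid maximizer is $X_{(n)}$ itself.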

Sketch of proof:

Consider the log-likelihood function $l_n(\theta)$. It can be shown that for any $0 < \alpha < 1$ there exists a constant $K(\alpha)$ such that $$ \max_{0\leq\theta\leq\alpha}\frac{1}{n}l_n(\theta) \leq K(\alpha) \quad\mathrm{for\ all\ }n. $$ On the other hand, it turns out that $$ \color{red}{X_{(n)} := \max{(X_1,\ldots, X_n)} \overset{P}{\to} 1} $$ and that $$ \color{red}{\frac{1}{n}l_n(X_{(n)})\overset{P}{\to}\infty}, $$ provided $c(\theta)\to 0$ sufficiently fast as $\theta\to 1$. Since $$ \max_{0\leq\theta\leq 1}\frac{1}{n}l_n(\theta) \color{red}{\geq} \frac{1}{n}l_n(X_{(n)}) \overset{P}{\to}\infty, $$ we get $$ \max_{0\leq\theta\leq 1}\frac{1}{n}l_n(\theta) \overset{P}{\to}\infty. $$ Combining this with the bound on $[0,\alpha]$, we find $$ \mathbb{P}\left(\hat{\theta}_n > \alpha\right) \to 1 $$ for any $0 < \alpha < 1$, and hence $\hat{\theta}_n\overset{P}{\to}1$.
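For reference, here is the kind of argument I would expect for the red claims, sketched under the stated assumptions (this is my own reconstruction, not Ferguson's argument verbatim; $\theta_0 \in (0,1)$ denotes the true parameter):

```latex
% Claim 1: X_(n) -> 1 in probability.
% The uniform component puts mass theta_0 * eps / 2 on (1 - eps, 1], so
% F(1 - eps | theta_0) <= 1 - theta_0 * eps / 2 < 1, and therefore
\[
  \mathbb{P}\!\left(X_{(n)} \le 1-\varepsilon\right)
    = F(1-\varepsilon \mid \theta_0)^{\,n}
    \le \left(1 - \tfrac{\theta_0 \varepsilon}{2}\right)^{\!n}
    \longrightarrow 0.
\]
% Claim 2: (1/n) l_n(X_(n)) -> infinity in probability.
% At theta = X_(n) the observation x = X_(n) sits at the peak of the
% triangular component, and f(x | theta) >= theta/2 for all |x| <= 1, so
\[
  \frac{1}{n}\, l_n\!\left(X_{(n)}\right)
    \ge \frac{1}{n}\log\frac{1 - X_{(n)}}{c\!\left(X_{(n)}\right)}
      + \frac{n-1}{n}\log\frac{X_{(n)}}{2}.
\]
% The same argument as in Claim 1 gives 1 - X_(n) = O_P(1/n), so the
% first term diverges whenever -log c(theta) grows faster than
% (1 - theta)^{-1}, e.g. for c(theta) <= exp(-(1-theta)^{-2})
% (one sufficient rate; an assumption on "sufficiently fast").
%
% Claim 3 (the inequality): by Claim 1, X_(n) lies in [0, 1] with
% probability tending to 1, and on that event l_n(X_(n)) is trivially
% dominated by the maximum of l_n over [0, 1].
```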

Problem

I don't understand how the three claims highlighted in red above can be established.

These steps are between the lines in Ferguson's paper, and it is not clear to me how to fill them in. What approach would you suggest?

Note that Ferguson works with almost-sure convergence, whereas here we only need convergence in probability.