Showing that the expected log-likelihood is uniquely maximised at the true population parameter value


Consider the linear model $Y_i=X_i\beta_0+\epsilon_i$ with $\epsilon_i|X_i\sim N(0,\sigma^2_0)$, $\sigma^2_0>0$ and $\{Y_i, X_i\}_{i=1}^n$ i.i.d.

Let $\theta_0:=(\beta_0,\sigma^2_0)\in \Theta\subset \mathbb{R}^2$, $\Theta$ compact. So $\theta_0$ is the true parameter value, while $\theta$ denotes a generic parameter value.

Consider the (conditional) log likelihood of each observation $$m(Y_i, X_i;\theta):= -\frac{1}{2}\log(2\pi)-\log(\sigma)-\frac{1}{2\sigma^2}(Y_i-X_i\beta)^2$$
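(For reference, this is just the log of the conditional normal density implied by the model:
$$f(y \mid x; \theta) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\Big(-\frac{(y-x\beta)^2}{2\sigma^2}\Big), \qquad m(y,x;\theta) = \log f(y \mid x;\theta).)$$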

By WLLN, $\forall \theta\in \Theta$ $$ \frac{1}{n}\sum_{i=1}^n m(Y_i, X_i;\theta)\rightarrow_p E( m(Y_i, X_i;\theta)) $$

Question: How do I show that $E( m(Y_i, X_i;\theta))$ is uniquely maximised at $\theta_0$?

Best answer:

Let $(X,Y)$ be a generic version of $(X_i,Y_i)$, write $v \equiv \sigma^2$, and let us show that $\ell(\theta;x) = E[ m(Y,X;\theta) \mid X = x]$ is uniquely maximized at $\theta_0$. We have

\begin{align*} \ell(\theta;x) &= - \frac12 \Big( \log v + \frac1v E(Y - x \beta)^2\Big) \\ &= - \frac12 \Big( \log v + \frac1v E(x\beta_0 + \epsilon - x \beta)^2\Big)\\ &= - \frac12 \Big( \log v + \frac1v \big[x^2(\beta-\beta_0)^2 + \sigma_0^2\big]\Big), \end{align*} where the additive constant $-\frac12\log(2\pi)$ has been dropped since it does not affect the maximizer. For any fixed $v$, the unique maximizer over $\beta$ is clearly $\beta_0$, assuming that $x \neq 0$. Setting $\beta = \beta_0$, we then need to minimize $v \mapsto \log v + \sigma_0^2/v$, which is uniquely minimized at $v = \sigma_0^2$.
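To verify the last claim, a quick first-order check on $g(v) = \log v + \sigma_0^2/v$:
$$g'(v) = \frac{1}{v} - \frac{\sigma_0^2}{v^2} = 0 \iff v = \sigma_0^2,$$
with $g'(v) < 0$ for $v < \sigma_0^2$ and $g'(v) > 0$ for $v > \sigma_0^2$, so $v = \sigma_0^2$ is the unique global minimizer of $g$ on $(0,\infty)$.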

Let $\ell(\theta) = E[m(Y,X;\theta)] = E[\ell(\theta;X)]$. Assuming that the distribution of $X$ puts positive mass on $\{X \neq 0\}$, the result follows by noting that $\ell(\theta;X) \le \ell(\theta_0;X)$ always, and $\ell(\theta;X) < \ell(\theta_0;X)$ on a set of positive measure whenever $\theta \neq \theta_0$. (You could also compute $\ell(\theta)$ directly, which amounts to replacing $x^2$ with $E X^2$ in the expression above; this gives the result assuming $E X^2 \neq 0$.)
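As a sanity check, one can approximate $E[m(Y,X;\theta)]$ by a sample average (invoking the WLLN) and confirm numerically that it is largest at $\theta_0$. A minimal sketch, with arbitrary illustrative values for $\beta_0$ and $\sigma_0^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative true parameters (hypothetical choices, not from the post)
beta0, sigma2_0 = 1.5, 2.0
n = 200_000

X = rng.normal(size=n)
Y = X * beta0 + rng.normal(scale=np.sqrt(sigma2_0), size=n)

def avg_loglik(beta, sigma2):
    """Sample average of m(Y_i, X_i; theta); approximates E[m] by the WLLN."""
    return np.mean(-0.5 * np.log(2 * np.pi) - 0.5 * np.log(sigma2)
                   - (Y - X * beta) ** 2 / (2 * sigma2))

# The average log-likelihood at the truth should exceed nearby parameter values.
at_truth = avg_loglik(beta0, sigma2_0)
assert at_truth > avg_loglik(beta0 + 0.3, sigma2_0)
assert at_truth > avg_loglik(beta0, sigma2_0 + 0.5)
assert at_truth > avg_loglik(beta0 - 0.2, sigma2_0 - 0.5)
```

With $n$ this large, the Monte Carlo error is small relative to the gap $\ell(\theta_0) - \ell(\theta)$ at these perturbations, so the comparisons hold reliably.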