Gaining an intuitive understanding of the variance of a point estimator


I had a homework question for my theoretical stats class that I have already solved, and I want to gain a better understanding of how to solve it using intuition (if possible). Here is the problem: 8.6 b)

I have already solved both parts and I am interested in part b). I solved it by finding where

$\frac{d}{da}\left(a\,\sigma_{1}^2+(1-a)\,\sigma_{2}^2\right) = 0$

which is at

$a = \frac{\sigma_{2}^2}{\sigma_{1}^2+\sigma_{2}^2}$

I want to know if there is a way to solve this without calculus? Or at least a framework to understand this problem in a statistics context rather than abstracting it to the variance as functions and minimizing it with a derivative.
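As a quick numerical sanity check (assuming the usual setup, where the combined estimator is $a\hat{\theta}_1 + (1-a)\hat{\theta}_2$ with independent parts, so its variance is $a^2\sigma_1^2 + (1-a)^2\sigma_2^2$; the variance values below are chosen arbitrarily for illustration), a grid search recovers the same optimal weight as the closed form:

```python
# Minimise Var(a) = a^2*s1 + (1-a)^2*s2 over a grid of weights a,
# and compare with the closed-form answer a = s2 / (s1 + s2).
s1, s2 = 4.0, 1.0  # sigma_1^2 and sigma_2^2, chosen arbitrarily

grid = [i / 100_000 for i in range(100_001)]          # a in [0, 1]
best_a = min(grid, key=lambda a: a**2 * s1 + (1 - a)**2 * s2)

closed_form = s2 / (s1 + s2)   # a = sigma_2^2 / (sigma_1^2 + sigma_2^2)
print(best_a, closed_form)     # both 0.2 for these values
```

The optimal weight always lies in $[0, 1]$, so restricting the grid to that interval loses nothing.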

1 Answer

There are some minor errors in your question. What I suspect you actually did was minimise $a^2 \sigma^2_1 +(1-a)^2\sigma^2_2$: taking the derivative and setting $2a \sigma^2_1 -2(1-a)\sigma^2_2=0$ gives $a = \frac{\sigma_{2}^2}{\sigma_{1}^2+\sigma_{2}^2}$ and a minimum value of $\frac{\sigma_{1}^2\sigma_{2}^2}{\sigma_{1}^2+\sigma_{2}^2}$.

Without calculus, you could "complete the square" by saying $$a^2 \sigma^2_1 +(1-a)^2\sigma^2_2 \\ = a^2 (\sigma^2_1 + \sigma^2_2) -2a\sigma^2_2 + \sigma^2_2 \\ = (\sigma^2_1 + \sigma^2_2) \left( a^2 -2\frac{\sigma^2_2}{\sigma^2_1 + \sigma^2_2}a + \left(\frac{\sigma^2_2}{\sigma^2_1 + \sigma^2_2}\right)^2\right) + \sigma^2_2-(\sigma^2_1 + \sigma^2_2) \left(\frac{\sigma^2_2}{\sigma^2_1 + \sigma^2_2}\right)^2 \\ = (\sigma^2_1 + \sigma^2_2) \left( a -\frac{\sigma^2_2}{\sigma^2_1 + \sigma^2_2}\right)^2 + \frac{\sigma^2_1 \sigma^2_2}{\sigma^2_1 + \sigma^2_2} $$

  • the first term, $(\sigma^2_1 + \sigma^2_2) \left( a -\frac{\sigma^2_2}{\sigma^2_1 + \sigma^2_2}\right)^2$, is non-negative and equals zero exactly when $a = \frac{\sigma^2_2}{\sigma^2_1 + \sigma^2_2}$
  • the second term, $\frac{\sigma^2_1 \sigma^2_2}{\sigma^2_1 + \sigma^2_2}$, does not depend on $a$
  • so $a^2 \sigma^2_1 +(1-a)^2\sigma^2_2 \ge \frac{\sigma^2_1 \sigma^2_2}{\sigma^2_1 + \sigma^2_2}$, with equality if and only if $a = \frac{\sigma^2_2}{\sigma^2_1 + \sigma^2_2}$
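The completed-square identity can also be spot-checked numerically at a few points (a minimal sketch; the test values of $a$, $\sigma_1^2$, $\sigma_2^2$ are arbitrary):

```python
# Check that a^2*s1 + (1-a)^2*s2 equals the completed-square form
# (s1+s2)*(a - s2/(s1+s2))^2 + s1*s2/(s1+s2) for several a, s1, s2.
def lhs(a, s1, s2):
    return a**2 * s1 + (1 - a)**2 * s2

def rhs(a, s1, s2):
    m = s2 / (s1 + s2)                       # the minimising weight
    return (s1 + s2) * (a - m)**2 + s1 * s2 / (s1 + s2)

for a in (0.0, 0.25, 0.5, 0.9, 1.3):         # a need not lie in [0, 1]
    for s1, s2 in ((1.0, 1.0), (4.0, 1.0), (0.5, 3.0)):
        assert abs(lhs(a, s1, s2) - rhs(a, s1, s2)) < 1e-12
```

Note that at $a = \frac{\sigma_2^2}{\sigma_1^2+\sigma_2^2}$ the square vanishes, so `lhs` reduces to the constant `s1*s2/(s1+s2)`, matching the minimum value above.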

A hand-waving intuition could be that the combined variance is minimised when the two parts contribute equally at the margin: increasing $a$ raises the first term's variance exactly as fast as it lowers the second's, which happens when $a\sigma_1^2 = (1-a)\sigma_2^2$. Equivalently, the optimal weights are proportional to the precisions $1/\sigma_1^2$ and $1/\sigma_2^2$, so the more precise estimator gets the larger weight. But this may not be immediately obvious.
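A small Monte Carlo sketch can make this concrete (the Normal model, the mean 10, and the standard deviations 2 and 1 are assumptions purely for illustration):

```python
import random

# Two independent unbiased estimators of the same quantity theta,
# combined as a*X1 + (1-a)*X2; compare empirical variances of weights.
random.seed(0)
theta, sd1, sd2 = 10.0, 2.0, 1.0
s1, s2 = sd1**2, sd2**2
a_opt = s2 / (s1 + s2)                       # inverse-variance weight = 0.2

def sample_var(a, n=100_000):
    # Empirical variance of the combined estimator for weight a.
    draws = [a * random.gauss(theta, sd1) + (1 - a) * random.gauss(theta, sd2)
             for _ in range(n)]
    mean = sum(draws) / n
    return sum((d - mean)**2 for d in draws) / (n - 1)

print(sample_var(a_opt))   # close to s1*s2/(s1+s2) = 0.8
print(sample_var(0.5))     # close to 0.25*(s1+s2) = 1.25, clearly worse
```

The naive equal-weight average is noticeably worse here precisely because it gives the noisier estimator as much say as the precise one.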