Seber, Ex. 1.b.3 - Calculating variance and showing unbiasedness


Exercise 3 of Seber's *Linear Regression Analysis* states the following: [image: the exercise statement]

I tried to solve both problems but only managed to (kinda) solve the first one, and I have no idea how to tackle the second one. First, however, let me post the solutions: [image: the posted solutions]

For 1), I tried to find the minimum variance by minimizing with the Lagrangian that is also given in the solution. I proceeded as follows:

$\operatorname{var}(\bar X_w) = \operatorname{var}\bigl(\sum_i w_iX_i\bigr) = \sum_i \operatorname{var}(w_iX_i) = \sum_i w_i^2 \operatorname{var}(X_i) = \sum_i w_i^2 \sigma_i^2$, using the independence of the $X_i$.

This I minimized subject to the constraint $\sum_i w_i = 1$, which gave the Lagrangian $L = \sum_i w_i^2 \sigma_i^2 - \lambda\Bigl(\sum_i w_i - 1\Bigr)$.

Setting up the first-order conditions gave me $2w_i\sigma_i ^2 = 0$ and the Lagrange multiplier equation itself. What I got out of this is that $2w_i\sigma_i ^2 = 0 \implies w_i \propto \frac{1}{\sigma_i ^2}$. However, the solution states that the proportionality factor is $a$. The other thing is that I have no idea what they could mean by substituting $\sum_{i=1}^{n-1} w_i = w_n$ instead of using the Lagrange multiplier.

With b), I don't even know how to start. I tried rewriting it multiple times, but it led me nowhere, unfortunately. To summarize, my questions are:

  1. Why did Seber use proportionality factor $a$ instead of the $\frac{1}{2}$ that I got?
  2. What could he mean by substituting $\sum_{i=1}^{n-1} w_i = w_n$?
  3. Why do the equalities in the solution to exercise b) hold true?


On BEST ANSWER
  1. $2w_i \sigma_i^2 = 0$ is clearly wrong since that would imply $w_i = 0$. How did you conclude $w_i \propto 1/\sigma_i^2$? Applying the Lagrange multiplier theorem directly without making any simplifications, the Lagrange multiplier equations are \begin{align} 2\sigma_i^2w_i &= \lambda \\ \sum_{i}w_i &= 1. \end{align} From the first equation, $w_i = \frac{\lambda}{2\sigma_i^2}$, then from the second equation, $\frac{\lambda}{2} = \frac{1}{\sum_{i}\frac{1}{\sigma_i^2}}$, so $w_i = \frac{1}{\sum_{j}\frac{1}{\sigma_j^2}}\frac{1}{\sigma_i^2}$.
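A quick numerical sanity check of this closed form (a sketch with made-up variances $\sigma_i^2$; the claim is that no other weight vector summing to one achieves a smaller variance):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = np.array([1.0, 4.0, 0.25, 2.0])  # hypothetical variances sigma_i^2

# Closed-form optimal weights: w_i proportional to 1/sigma_i^2
w_opt = (1 / sigma2) / np.sum(1 / sigma2)
assert np.isclose(w_opt.sum(), 1.0)

def var_of_mean(w):
    """Var(sum_i w_i X_i) = sum_i w_i^2 sigma_i^2 for independent X_i."""
    return np.sum(w**2 * sigma2)

# Any other nonnegative weight vector summing to 1 gives a larger variance
for _ in range(1000):
    w = rng.random(len(sigma2))
    w /= w.sum()
    assert var_of_mean(w) >= var_of_mean(w_opt) - 1e-12
```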

  2. That substitution is a way to frame what looks like a constrained optimization problem into an unconstrained problem. You could minimize the function $v(w_1, \dots, w_{n - 1}) = \sum_{i = 1}^{n}\sigma_i^2w_i^2$ over the region $\{(w_1, \dots, w_{n - 1}) : w_1, \dots, w_{n - 1} \geq 0, w_1 + \dots + w_{n - 1} \leq 1\}$. Here $w_n = 1 - (w_1 + \dots + w_{n - 1})$, so the problem is fully specified.
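The substitution approach can be carried out concretely: after eliminating $w_n$, the objective is an unconstrained quadratic in $(w_1, \dots, w_{n-1})$, and setting its gradient to zero gives a linear system. A minimal sketch (same hypothetical variances as above) showing it recovers the Lagrange-multiplier answer:

```python
import numpy as np

sigma2 = np.array([1.0, 4.0, 0.25, 2.0])  # hypothetical variances sigma_i^2
s_head, s_n = sigma2[:-1], sigma2[-1]
m = len(s_head)

# After substituting w_n = 1 - (w_1 + ... + w_{n-1}), setting the gradient of
#   v(w_1, ..., w_{n-1}) = sum_{i<n} sigma_i^2 w_i^2 + sigma_n^2 (1 - sum_i w_i)^2
# to zero yields the linear system (diag(sigma_i^2) + sigma_n^2 J) w = sigma_n^2 1,
# where J is the all-ones matrix.
A = np.diag(s_head) + s_n * np.ones((m, m))
w_free = np.linalg.solve(A, s_n * np.ones(m))
w_hat = np.append(w_free, 1.0 - w_free.sum())

# Agrees with the closed form w_i proportional to 1/sigma_i^2
w_opt = (1 / sigma2) / np.sum(1 / sigma2)
assert np.allclose(w_hat, w_opt)
```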

  3. The key here is repeated use of the identity $E(f^2) = Var(f) + E(f)^2$, which holds for every probability space and every random variable $f$. Note that $\sum_{i}w_i(X_i - \bar{X}_w)^2$ is the variance of $(X_1, \dots, X_n)$ with respect to the probability mass function $(w_1, \dots, w_n)$. Hence \begin{align} \sum_{i}w_i(X_i - \bar{X}_w)^2 &= \sum_{i}w_iX_i^2 - \bar{X}_w^2. \end{align} He goes on to take expectations of both sides and use the identity again, then does algebra to arrive at the final answer.
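The pmf interpretation can be verified numerically. A small sketch with arbitrary sample values and weights, checking that the weighted sum of squared deviations equals $E(X^2) - E(X)^2$ under the pmf $(w_1, \dots, w_n)$:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=5)           # arbitrary sample values X_1, ..., X_n
w = rng.random(5)
w /= w.sum()                     # weights forming a probability mass function

xbar_w = np.sum(w * x)           # weighted mean = E(X) under the pmf (w_i)

# Var(X) = E(X^2) - E(X)^2 under the pmf (w_i)
lhs = np.sum(w * (x - xbar_w)**2)
rhs = np.sum(w * x**2) - xbar_w**2
assert np.isclose(lhs, rhs)
```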