Understanding the weighted linear least squares problem


I am having difficulty understanding weighted linear least squares. Could anybody explain to me why, instead of minimizing the residual sum of squares, we need to minimize the weighted sum of squares? Further, I would like to understand the term "weighted". I have gone through some wiki notes, but I am not able to understand.

Thank you very much for the help.

Best answer:

Take a look at a simple problem: $$ax=c\\ bx=d$$ where '$=$' is meant as an optimization goal, not as exact equality. So we want to find $x$ such that the error of both equalities is minimized in some optimal sense. If you choose a least squares criterion we get the following error function:

$$\epsilon = (ax-c)^2+(bx-d)^2$$

Now we can decide that, for some reason, the error in the first equation is more important than the error in the second equation, so we can add a weight $w^2$ (with $w>1$) to the error component of the first equation:

$$\hat{\epsilon} = w^2(ax-c)^2+(bx-d)^2$$

Minimizing $\hat{\epsilon}$ will result in a smaller error for the first equation at the expense of the error of the second equation. This is the basic idea of a weighted least squares error criterion.
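This trade-off is easy to check numerically. The sketch below uses assumed example values for $a$, $b$, $c$, $d$, and $w$ (they are not from the answer) and the closed-form minimizers derived later in the answer:

```python
# Assumed illustrative values, not from the answer.
a, b, c, d = 2.0, 3.0, 5.0, 4.0
w = 3.0

# Closed-form minimizers of the unweighted and weighted criteria.
x_unweighted = (a*c + b*d) / (a**2 + b**2)
x_weighted = (w**2*a*c + b*d) / (w**2*a**2 + b**2)

err1 = lambda x: abs(a*x - c)  # error of the first equation
err2 = lambda x: abs(b*x - d)  # error of the second equation

# The weighted solution fits the first equation better,
# at the expense of the second equation.
print(err1(x_weighted) < err1(x_unweighted))  # True
print(err2(x_weighted) > err2(x_unweighted))  # True
```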

If you solve the original (unweighted) problem by solving $\frac{d\epsilon}{dx}=0$ you get the optimal solution:

$$x_o=\frac{ac+bd}{a^2+b^2}$$
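As a sanity check, this closed form agrees with a generic least-squares solver applied to the $2\times 1$ system; the numbers below are assumed examples:

```python
import numpy as np

# Assumed illustrative values.
a, b, c, d = 2.0, 3.0, 5.0, 4.0

# Closed-form minimizer from setting d(epsilon)/dx = 0.
x_o = (a*c + b*d) / (a**2 + b**2)

# Cross-check against NumPy's least-squares solver on [a; b] x = [c; d].
A = np.array([[a], [b]])
y = np.array([c, d])
x_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.isclose(x_o, x_lstsq[0]))  # True
```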

If you solve the weighted least squares problem by solving $\frac{d\hat{\epsilon}}{dx}=0$ you get

$$\hat{x}_o=\frac{w^2ac+bd}{w^2a^2+b^2}$$
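One way to verify this formula: weighted least squares is equivalent to ordinary least squares after scaling the first row of the system by $w$, since $(wax - wc)^2 = w^2(ax-c)^2$. A sketch with assumed example values:

```python
import numpy as np

# Assumed illustrative values.
a, b, c, d, w = 2.0, 3.0, 5.0, 4.0, 3.0

# Closed-form weighted minimizer from the answer.
x_hat = (w**2*a*c + b*d) / (w**2*a**2 + b**2)

# Equivalent ordinary least squares on the row-scaled system.
A = np.array([[w*a], [b]])
y = np.array([w*c, d])
x_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.isclose(x_hat, x_lstsq[0]))  # True
```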

From this you can see that if the weight $w$ is chosen very large, the solution $\hat{x}_o$ becomes close to $\frac{c}{a}$, which is simply the exact solution of the first equation, ignoring the second equation entirely. Obviously, by using a weight $w>1$ you give more importance to the first equation.
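The limit is visible numerically: with assumed example values, the gap between $\hat{x}_o$ and $c/a$ shrinks as $w$ grows.

```python
# Assumed illustrative values.
a, b, c, d = 2.0, 3.0, 5.0, 4.0

def x_hat(w):
    """Closed-form weighted least-squares solution from the answer."""
    return (w**2*a*c + b*d) / (w**2*a**2 + b**2)

# As w grows, x_hat(w) approaches c/a, the exact solution of the
# first equation alone.
for w in (1.0, 10.0, 100.0):
    print(abs(x_hat(w) - c/a))  # gap shrinks toward 0
```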