Why is there an $(n-r)$-dimensional set of vectors $x$ for the rank-deficient least squares problem?


I saw this theorem in lecture slides (http://www2.aueb.gr/users/douros/docs_master/Least_Square_pr.pdf) on the topic of the rank-deficient least squares problem:

Given $A \in R^{m\times n}$ with $m > n$, $r = \operatorname{rank}(A) < n$, and $b \in R^m$, suppose we want to find $x$ minimizing $\Vert Ax-b \Vert_2$. Then there is an $(n-r)$-dimensional set of vectors $x$ that minimize $\Vert Ax-b \Vert_2$.

What's the reason behind the "$n-r$" theorem?


I think I get it now. Given a matrix $A \in R^{m\times n}$, $m>n$, and $b \in R^m$, if we want to find a least-squares solution $\mathbf{\hat{x}}$ for the system $A\vec{x} = \vec{b}$, we are actually solving the normal equations $$A^T A \vec{x} = A^T \vec{b}$$
And we also know that

$R(A^T) = R(A^T A)$ and $N(A) = N(A^T A)$ (both are subspaces of $R^n$)

So if $A$ is rank-deficient, $r < n$, then $\operatorname{nullity}(A^T A) = n-r$ by the rank-nullity theorem, which means the solution space of $A^T A \vec{v} = \vec{0}$ is a non-trivial subspace of dimension $n-r$. The normal equations $A^T A \vec{x} = A^T \vec{b}$ therefore have infinitely many solutions.
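As a quick numerical sanity check of $\operatorname{nullity}(A^T A) = n - r$, here is a NumPy sketch; the matrix $A$ below is a made-up rank-deficient example, not one from the slides:

```python
import numpy as np

# Hypothetical example: m = 4, n = 3, but rank r = 2
# (the third column is the sum of the first two).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 0.0, 2.0]])
m, n = A.shape
r = np.linalg.matrix_rank(A)                      # r = 2 < n

# nullity(A^T A) = n - rank(A^T A), and it matches n - r.
nullity_AtA = n - np.linalg.matrix_rank(A.T @ A)
print(r, nullity_AtA)                             # 2 1
```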

Let $\vec{v} \in N(A^T A)$. Then $$\vec{v} = \sum_{i=1}^{n-r} \alpha_i \mathbf{n_i}$$ where $N(A^T A) = \operatorname{span}\{\mathbf{n_1}, \cdots, \mathbf{n_{n-r}}\}$

And let $\vec{w}$ be any particular solution of the normal equations (for instance the minimum-norm solution, which lies in $R(A^T)$). Then every least-squares solution can be written as $$\mathbf{\hat{x}} = \vec{w} + \alpha_1 \mathbf{n_1} + \cdots + \alpha_{n-r} \mathbf{n_{n-r}},$$
which is an $(n-r)$-dimensional affine set, as the theorem states.
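To see this solution set concretely, here is a NumPy sketch (the matrix and right-hand side are hypothetical): shifting a least-squares solution along any null-space direction of $A$ leaves the residual $\Vert A\hat{x} - b \Vert_2$ unchanged.

```python
import numpy as np

# Hypothetical rank-deficient system: n = 3, r = 2, so the set of
# minimizers is a 1-dimensional affine set.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 0.0, 2.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])

# One particular least-squares solution (for rank-deficient A,
# lstsq returns the minimum-norm solution).
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

# Basis for N(A) = N(A^T A): right singular vectors whose singular
# values are (numerically) zero.
_, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))        # numerical rank, here r = 2
null_basis = Vt[r:]               # shape (n - r, n)

# Shifting x_hat along the null space does not change the residual.
res0 = np.linalg.norm(A @ x_hat - b)
res1 = np.linalg.norm(A @ (x_hat + 3.7 * null_basis[0]) - b)
print(np.isclose(res0, res1))     # True
```

Any choice of coefficients $\alpha_i$ on the null-space basis vectors produces another minimizer, which is exactly the $(n-r)$-dimensional family in the theorem.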