Given $\mathbf A$ ($m\times n$, with $m>n$), we compute the economy QR factorization:
$\mathbf A=\mathbf {QR}$, where $\mathbf Q$ is $m\times n$ and $\mathbf R$ is $n\times n$.
How can I use this factorization to find $\mathbf x^*$ in the least squares problem?
I tried:
$$
E = \min_{\mathbf x} \|\mathbf{Ax}-\mathbf{b}\|^2= \min_{\mathbf x} \| \mathbf{QRx}-\mathbf{b}\|^2=\dots
$$
In particular, how can I show that $J=(\mathbf{ Q b}_1-\mathbf{b})^\top(\mathbf{ Q b}_1-\mathbf{b})$, where $\mathbf b_1=\mathbf{Q}^\top \mathbf{b}$?
Input
Start with a matrix $\mathbf{A} \in \mathbb{C}^{m\times n}_{\rho}$ that has full column rank $\rho = n$ and is overdetermined, $m>n$. Assume also that the matrix does not need to be postmultiplied by a permutation matrix to reorder the columns in order to complete the factorization.
$\mathbf{Q}\mathbf{R}$ factorization
The $\mathbf{Q}\mathbf{R}$ factorization provides an orthogonal decomposition for the column space $$ \mathbb{C}^{m} = \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} \oplus \color{red} {\mathcal{N} \left( \mathbf{A}^{*} \right)}. $$ The general factorization is $$ \mathbf{A} = \mathbf{Q}\,\mathbf{R} = \left[ \begin{array}{cc} \color{blue}{\mathbf{Q}_\mathcal{R}} & \color{red} {\mathbf{Q}_\mathcal{N}} \end{array} \right] % \left[ \begin{array}{cc} \mathbf{R}_{T} \\ \mathbf{0} \end{array} \right] % $$ with $$ \color{blue}{\mathbf{Q}_\mathcal{R}} \in \mathbb{C}^{m\times \rho}, \quad \color{red}{\mathbf{Q}_\mathcal{N}} \in \mathbb{C}^{m\times (m-\rho)}, \quad\mathbf{R}_{T} \in \mathbb{C}^{\rho\times n}, \quad \mathbf{0} \in \mathbb{C}^{(m-\rho)\times n}. $$ The matrix $\mathbf{Q}$ is unitary: $$ \mathbf{Q}^{*} \mathbf{Q} = \mathbf{Q} \, \mathbf{Q}^{*} = \mathbf{I}_{m} $$ The thin, or reduced, or economical, factorization dispenses with the $\color{red}{null}$ space elements: $$ \mathbf{A} = \color{blue}{\mathbf{Q}_\mathcal{R}} \mathbf{R}_{T} $$
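A quick NumPy sketch of the full versus thin factorizations (the random real matrix here is just a stand-in for a full-column-rank $\mathbf{A}$):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 3
A = rng.standard_normal((m, n))      # full column rank with probability 1

# Full factorization: Q is m x m unitary, R is m x n with a zero block below row n.
Q_full, R_full = np.linalg.qr(A, mode="complete")

# Economy (thin) factorization: Q_R is m x n, R_T is n x n upper triangular.
Q_R, R_T = np.linalg.qr(A, mode="reduced")

assert np.allclose(Q_R @ R_T, A)                         # A = Q_R R_T
assert np.allclose(Q_R.T @ Q_R, np.eye(n))               # orthonormal columns
assert np.allclose(R_full[n:, :], np.zeros((m - n, n)))  # zero block of R
```

The first $n$ columns of the full `Q_full` span $\color{blue}{\mathcal{R}(\mathbf{A})}$ and coincide (up to sign) with `Q_R`; the remaining $m-n$ columns span $\color{red}{\mathcal{N}(\mathbf{A}^{*})}$.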
Method of least squares
Given a suitable data vector $b\in\mathbb{C}^{m}$, the linear system $$ \mathbf{A} x = b $$ admits the solution $$ x_{LS} = \left\{ x\in\mathbb{C}^{n} \colon \lVert \mathbf{A} x - b \rVert_{2}^{2} \text{ is minimized} \right\} \tag{1} $$ Notice that after defining the residual error vector, $$ r(x) = \mathbf{A} x - b, $$ the least squares problem can be restated as minimizing $r^{2}(x) = \lVert r(x) \rVert_{2}^{2},$ the sum of the squares of the residual errors.
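The objective in $(1)$ can be sketched numerically; here `np.linalg.lstsq` plays the role of a black-box minimizer, and the random `A` and `b` are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)

def residual_norm_sq(x):
    """Sum of squared residuals, ||A x - b||_2^2."""
    r = A @ x - b
    return float(r @ r)

# np.linalg.lstsq returns the minimizer of ||A x - b||_2^2.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# No perturbation of x_ls can decrease the objective.
assert residual_norm_sq(x_ls) <= residual_norm_sq(x_ls + 0.1)
assert residual_norm_sq(x_ls) <= residual_norm_sq(x_ls - 0.1)
```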
$\mathbf{Q}\mathbf{R}$ and least squares
Begin by expressing the residual in terms of the decomposition: $$ \lVert \mathbf{A} x - b \rVert = \lVert \mathbf{Q}\,\mathbf{R} \, x - b \rVert $$ The $2-$norm is invariant under unitary transformations: $$ \lVert \mathbf{Q}\,\mathbf{R} x - b \rVert = \lVert \mathbf{Q}^{*} \left( \mathbf{Q}\,\mathbf{R} x - b \right) \rVert = \lVert \mathbf{R} \, x - \mathbf{Q}^{*}\,b \rVert $$ Let's focus on the thin decomposition of the question, and pry apart the $\color{blue}{range}$ and $\color{red}{null}$ space components of the total error: $$ \lVert \mathbf{R} \, x - \mathbf{Q}^{*}b \rVert = \Bigg\lVert \left[ \begin{array}{c} \mathbf{R}_{T} \\ \mathbf{0} \end{array} \right] \, x - \left[ \begin{array}{c} \color{blue}{\mathbf{Q}^{*}_\mathcal{R}} \\ \color{red} {\mathbf{Q}^{*}_\mathcal{N}} \end{array} \right] \,b \Bigg\rVert $$ The total error becomes $$ r^{2}(x) = \Bigg\lVert \left[ \begin{array}{c} \mathbf{R}_{T} \\ \mathbf{0} \end{array} \right] \, x - \left[ \begin{array}{c} \color{blue}{\mathbf{Q}^{*}_\mathcal{R}} \\ \color{red} {\mathbf{Q}^{*}_\mathcal{N}} \end{array} \right] b \Bigg\rVert^{2}_{2} % = % \underbrace{ \Big\lVert \mathbf{R}_{T}x - \color{blue} {\mathbf{Q}^{*}_\mathcal{R}} b \Big\rVert^{2}_{2} }_{\text{control with } x} % + % \underbrace{ \Big\lVert \color{red} {\mathbf{Q}^{*}_\mathcal{N}} b \Big\rVert^{2}_{2}}_{\text{no control}} $$
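The split of the total error into a controllable range-space part and an untouchable null-space part can be verified numerically for any trial vector $x$ (random real data here, for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 6, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x = rng.standard_normal(n)               # arbitrary trial vector

Q, R = np.linalg.qr(A, mode="complete")
R_T = R[:n, :]                           # n x n upper triangular block
Q_range, Q_null = Q[:, :n], Q[:, n:]     # bases for R(A) and N(A*)

total      = np.linalg.norm(A @ x - b) ** 2
range_part = np.linalg.norm(R_T @ x - Q_range.T @ b) ** 2
null_part  = np.linalg.norm(Q_null.T @ b) ** 2

# r^2(x) = ||R_T x - Q_R* b||^2 + ||Q_N* b||^2
assert np.isclose(total, range_part + null_part)
```

Note that `null_part` does not depend on `x` at all, which is exactly why it survives the minimization.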
Least squares solution via $\mathbf{Q}\mathbf{R}$
There are two pieces. One piece can be controlled by adjusting the solution vector $x$; in fact, this piece can be driven to $0$ via $$ \mathbf{R}_{T}x - \color{blue} {\mathbf{Q}^{*}_\mathcal{R}} b = 0 \qquad \Rightarrow \qquad \boxed{ \color{blue} {x_{LS}} = \color{blue} { \mathbf{R}^{-1}_{T} \mathbf{Q}^{*}_\mathcal{R} b}} $$ The residual error, the error that cannot be removed, is $$ r^{2}\left( x_{LS} \right) = \Big\lVert \color{red} {\mathbf{Q}^{*}_\mathcal{N}} b \Big\rVert^{2}_{2} $$ This is the magnitude of the portion of the data vector extending into the $\color{red}{null}$ space $\color{red} {\mathcal{N} \left( \mathbf{A}^{*} \right)}$.
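A NumPy sketch of the boxed formula (in practice one solves the triangular system $\mathbf{R}_T x = \mathbf{Q}^{*}_\mathcal{R} b$ rather than forming $\mathbf{R}^{-1}_T$; the random data is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 6, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

Q, R = np.linalg.qr(A, mode="complete")
R_T, Q_range, Q_null = R[:n, :], Q[:, :n], Q[:, n:]

# x_LS = R_T^{-1} Q_R* b, computed by solving the triangular system.
x_ls = np.linalg.solve(R_T, Q_range.T @ b)

# Agrees with NumPy's own least squares solver ...
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_ls, x_ref)

# ... and the irreducible residual is exactly ||Q_N* b||^2.
assert np.isclose(np.linalg.norm(A @ x_ls - b) ** 2,
                  np.linalg.norm(Q_null.T @ b) ** 2)
```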