The following least-squares optimization problem is commonly used:
$$\min_x \|Ax - b\|^2$$
Here, $A$ is a matrix and $x, b$ are vectors. I am curious how reliable the solution $x$ obtained from the above optimization is.
For example, suppose I add some very small noise $\delta(A)$ to $A$ and $\sigma(b)$ to $b$. The new optimization is
$$\min_y \|(A+ \delta(A))y - (b+ \sigma(b))\|^2$$
and the solution $y$ is obtained by solving this perturbed objective.
My question is: how large is the difference between $x$ and $y$ due to the small noise on $A$ and $b$?
Are there any references on the sensitivity/reliability of solutions of linear least-squares problems?
Thanks a lot.
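For concreteness, here is the kind of experiment I have in mind (a NumPy sketch; the sizes and the noise scale `1e-8` are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))   # example sizes, arbitrary
b = rng.standard_normal(50)
eps = 1e-8                          # noise scale, arbitrary

# Solve the clean and the perturbed least-squares problems
x = np.linalg.lstsq(A, b, rcond=None)[0]
y = np.linalg.lstsq(A + eps * rng.standard_normal(A.shape),
                    b + eps * rng.standard_normal(b.shape),
                    rcond=None)[0]

diff = np.linalg.norm(y - x)
print(diff)  # small, on the order of eps for a well-conditioned A
```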
The perturbations $\delta A$ and $\sigma b$ here play the role of backward errors; the resulting change in the solution is the forward error we want to bound.
Suppose that $x$ solves $\min \|Ax-b\|^2$, and $x^{*}$ solves $\min \|(A+\delta A)x-(b+\sigma b)\|^2$.
We are interested in quantifying the error $\Delta x = x^{*} - x$. Usually one looks at the error sources individually, i.e. finds the contribution of the perturbation $\delta A$ to $\Delta x$, then the contribution of $\sigma b$, and sums them. So let us look at $\delta A$ (i.e. take $\sigma b = 0$).
We know that $x$ and $x^{*}$ solve the normal equations (assuming $A$ and $A+\delta A$ have full column rank):
$x = (A^T A)^{-1} A^T b$,
$x^{*} = ((A+\delta A)^{T} (A+\delta A))^{-1} (A+\delta A)^T b$.
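As a sanity check, the normal-equation formula agrees with a standard least-squares solver (a NumPy sketch with arbitrary example sizes; in practice one solves via QR/`lstsq` rather than forming $A^T A$, which squares the condition number):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))  # example sizes, arbitrary
b = rng.standard_normal(20)

# Solution via the normal equations: x = (A^T A)^{-1} A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Solution via a standard least-squares solver (SVD-based)
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]

ok = np.allclose(x_normal, x_lstsq)
print(ok)
```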
Subtracting the first normal equation $A^T A x = A^T b$ from the second, written as $(A+\delta A)^T (A + \delta A)x^{*} = A^T b + \delta A^T b$, gives $(A+\delta A)^T (A + \delta A)x^{*} - A^T A x = \delta A^T b$.
Expanding the left-hand side, $A^T A (x^{*}-x) = \delta A^T b - A^T \delta A\, x^{*} - \delta A^T A\, x^{*} - \delta A^T \delta A\, x^{*}$.
Writing $\delta = \|\delta A\|$ and taking norms, very roughly: $\|\Delta x\| \leq \|(A^T A)^{-1}\| \left( \delta \|b\| + 2\,\delta \|A\|\,\|x^{*}\| + \delta^2 \|x^{*}\| \right)$.
So you can see the error as a function of the size $\delta$ of your input perturbation, amplified by $\|(A^T A)^{-1}\|$, i.e. by the conditioning of $A$. We would neglect the second-order term in $\delta$.
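This first-order bound can be checked numerically. A sketch (spectral norms, random data, perturbation of $A$ only, all sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 8))          # example sizes, arbitrary
b = rng.standard_normal(30)
dA = 1e-6 * rng.standard_normal(A.shape)  # perturbation of A only (sigma b = 0)

x = np.linalg.lstsq(A, b, rcond=None)[0]
x_star = np.linalg.lstsq(A + dA, b, rcond=None)[0]

# Bound: ||dx|| <= ||(A^T A)^{-1}|| (d ||b|| + 2 d ||A|| ||x*|| + d^2 ||x*||)
d = np.linalg.norm(dA, 2)                 # spectral norm of the perturbation
bound = np.linalg.norm(np.linalg.inv(A.T @ A), 2) * (
    d * np.linalg.norm(b)
    + 2 * d * np.linalg.norm(A, 2) * np.linalg.norm(x_star)
    + d**2 * np.linalg.norm(x_star)
)
err = np.linalg.norm(x_star - x)
holds = err <= bound
print(err, bound, holds)
```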
As mentioned, Golub and Van Loan's *Matrix Computations* is the go-to text on this topic.