Residual proof for numerical computation


Suppose we want to solve a system $Ax=b$ using Matlab.

>>format long e
>>A=rand(1000);
>>b=rand(1000,1);
>>xapp=A\b;
>>r=b-A*xapp

I'm asked to prove that if the residual $r=0$ then $x_{app}$ is the exact solution to the system. I'm assuming this is more of a mathematical proof, since it has nothing to do with coding.

I know that the error is $\varepsilon = x - x_{app}$, where $x$ is the exact solution, and that a small $r$ does not necessarily mean that $x_{app}$ is near the solution. But other than this I have no idea where to start.
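That second point can be seen numerically. Here is a sketch in Python with NumPy (the MATLAB commands `cond`, `norm`, and `\` are analogous; the matrix and tolerance below are illustrative choices, not from the question): a nearly singular system whose computed residual is tiny even though the condition number is so large that the computed solution cannot be trusted to full accuracy.

```python
import numpy as np

# Nearly singular 2x2 system; kappa(A) is on the order of 1/delta.
delta = 1e-12
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + delta]])
x_exact = np.array([1.0, 1.0])
b = A @ x_exact

x_app = np.linalg.solve(A, b)
r = b - A @ x_app

print("condition number:", np.linalg.cond(A))                 # huge, ~1/delta
print("residual norm   :", np.linalg.norm(r))                 # tiny
print("error norm      :", np.linalg.norm(x_app - x_exact))   # can be many orders
                                                              # larger than the residual
```

The residual stays at rounding-error level because LU-based solvers are backward stable, but the forward error is amplified by the condition number.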

EDIT: Basically, we're trying to show that $r=0 \implies \varepsilon=0$.
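For the exact-arithmetic version of the claim, the argument is just invertibility of $A$ (a `rand(1000)` matrix is invertible with probability 1):

```latex
r = b - Ax_{app} = 0
\;\Longrightarrow\; Ax_{app} = b = Ax
\;\Longrightarrow\; A\varepsilon = A(x - x_{app}) = 0
\;\Longrightarrow\; \varepsilon = A^{-1}\,0 = 0.
```

In floating point this only holds up to rounding: the computed $r$ being exactly zero means the true residual is at the level of the rounding errors made in evaluating $b - Ax_{app}$, which is what the answer below quantifies.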


BEST ANSWER

If $Ax=b$ is solved to maximal accuracy with floating point solution $x_{app}$, then general error theory says that the error $\varepsilon = x - x_{app}$ is of size $\kappa(A)\cdot\mu\cdot\|x\|$, where $\kappa(A)$ is the condition number of $A$ and $\mu$ is the machine constant (unit round-off) of the floating point format.

As explained in the comments, if $0 = r = b - Ax_{app}$ in the numerical evaluation, then also $A\varepsilon = 0$ up to rounding, which means that the floating point components of $\varepsilon$ are of the magnitude $$ \|A^{-1}\|\cdot\mu\cdot\|\varepsilon\| \sim \kappa(A)\cdot\|A^{-1}\|\cdot\mu^2\cdot\|x\|. $$ So if the condition number is not too bad, more exactly if $\kappa(A)\cdot\|A^{-1}\|\cdot\mu<1$, then the error in $x_{app}$ is smaller than the machine constant, which means that $x_{app}$ is a floating point representation of the exact solution.
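The estimate $\|\varepsilon\| \sim \kappa(A)\cdot\mu\cdot\|x\|$ can be sanity-checked numerically. A Python/NumPy sketch (the safety factor on the bound is a loose illustrative choice to absorb dimension-dependent constants, not part of the theory):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.random((n, n))        # analogue of MATLAB's rand(n)
x_exact = np.ones(n)
b = A @ x_exact

x_app = np.linalg.solve(A, b)

mu = np.finfo(float).eps      # machine constant
kappa = np.linalg.cond(A)
err = np.linalg.norm(x_exact - x_app)
bound = kappa * mu * np.linalg.norm(x_exact)

print("kappa(A)        :", kappa)
print("error  ||eps||  :", err)
print("kappa*mu*||x||  :", bound)
# For a backward-stable solver, err stays within a modest
# (dimension-dependent) multiple of the kappa*mu*||x|| estimate.
```

Running this for several seeds and sizes shows the observed error tracking the $\kappa(A)\cdot\mu\cdot\|x\|$ estimate, usually well below it.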

Note that if $x$ has components that are zero or very small, then the corresponding components of $x_{app}$ need not be zero, since the relative error estimated above holds globally for the full vector, not componentwise.