Condition number of matrix inversion with respect to spectral norm


I would like to show that "the condition number for inversion of $A$, with respect to the spectral norm, is $k(A)=\rho(A)\rho(A^{-1})$" for a nonsingular normal matrix $A\in M_n$. Can anyone confirm the following proof?

By definition we have:

  1. $\rho(A)=\max\{|\lambda|:\lambda \in \sigma(A)\}$. (spectral radius).
  2. If $\lambda$ is an eigenvalue of $A$ with eigenvector $x$, then $Ax=\lambda x$.
  3. $k(A)=|||A|||\:|||A^{-1}|||$ (condition number).
  4. $|||A|||=\max_{||x||=1}||Ax||$ (induced matrix norm).

Now, since $A$ is normal, the spectral theorem gives an orthonormal basis of eigenvectors of $A$, so the maximum of $||Ax||$ over unit vectors is attained at a unit eigenvector. Hence we can write: \begin{gather} |||A|||=\max_{||x||=1}||Ax||=\max_{||x||=1}||\lambda x||=\rho(A), \\ |||A^{-1}|||=\max_{||x||=1}||A^{-1}x||=\max_{||x||=1}||\lambda^{-1} x||=\rho(A^{-1}). \end{gather} So the condition number is $k(A)=|||A|||\:|||A^{-1}|||=\rho(A)\rho(A^{-1})$.
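As a quick numerical sanity check of the claimed identity, the sketch below (assuming `numpy` is available) builds a random normal matrix as $Q\,\mathrm{diag}(\lambda)\,Q^*$ with $Q$ unitary, so its eigenvalues are known exactly, and compares the spectral-norm condition number with $\rho(A)\rho(A^{-1})$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Build a random normal matrix A = Q diag(lam) Q* with Q unitary,
# so A is normal and its eigenvalues are exactly the entries of lam.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
lam = rng.uniform(0.5, 4.0, size=n) * np.exp(1j * rng.uniform(0, 2 * np.pi, size=n))
A = Q @ np.diag(lam) @ Q.conj().T

# Condition number in the spectral norm (ord=2)...
kappa = np.linalg.cond(A, 2)

# ...versus the spectral-radius product rho(A) * rho(A^{-1}).
rho_A = np.abs(lam).max()
rho_Ainv = (1.0 / np.abs(lam)).max()

assert np.isclose(kappa, rho_A * rho_Ainv)
```

The eigenvalue moduli are kept in $[0.5, 4]$, so $A$ is guaranteed nonsingular.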


There are 2 best solutions below


Taking the Euclidean norm into account (the matrix norm induced by the Euclidean vector norm), we have \begin{equation*} k(A)= \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)} \end{equation*} where $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$ are the maximal and minimal singular values of $A$. So, if $A$ is normal, its singular values are the absolute values of its eigenvalues, and then \begin{equation}\label{eq8_1} k(A)=\left|\frac{\lambda_{\max}(A)}{\lambda_{\min}(A)}\right|, \end{equation} where $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ denote the eigenvalues of largest and smallest modulus. On the other hand, \begin{equation}\label{eq8_2} \rho(A)=\max\{|\lambda|:\lambda \in \sigma(A)\}=|\lambda_{\max}(A)|, \end{equation} and if $A$ is invertible and has $\lambda$ as an eigenvalue, then $A^{-1}$ has the eigenvalue $\frac{1}{\lambda}$, so we have \begin{equation}\label{eq8_3} |\lambda_{\min}(A)|=\frac{1}{|\lambda_{\max}(A^{-1})|} \Rightarrow \frac{1}{|\lambda_{\min}(A)|}=|\lambda_{\max}(A^{-1})|=\rho(A^{-1}). \end{equation} Now substituting (\ref{eq8_3}) and (\ref{eq8_2}) into (\ref{eq8_1}) yields \begin{equation*} k(A)=\left|\frac{\lambda_{\max}(A)}{\lambda_{\min}(A)}\right|=\rho(A)\rho(A^{-1}). \end{equation*}
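The singular-value formulation above can be checked numerically. The sketch below (assuming `numpy` is available) uses a real symmetric, hence normal, matrix and confirms that $\sigma_{\max}/\sigma_{\min}$ coincides with $|\lambda|_{\max}/|\lambda|_{\min}$:

```python
import numpy as np

# A real symmetric matrix is normal; its singular values are the
# absolute values of its eigenvalues. This one is positive definite,
# hence nonsingular.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

sigma = np.linalg.svd(A, compute_uv=False)        # singular values, descending
lam = np.linalg.eigvalsh(A)                       # real eigenvalues

kappa_sv = sigma.max() / sigma.min()              # sigma_max / sigma_min
kappa_ev = np.abs(lam).max() / np.abs(lam).min()  # |lambda|_max / |lambda|_min

assert np.isclose(kappa_sv, kappa_ev)
```

For a non-normal matrix the two quantities can differ, which is exactly why normality is assumed in the answer.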


If $\boldsymbol{A}$ is a normal matrix, then $$\kappa_2(\boldsymbol{A})=\frac{\max_{\lambda \in \sigma(\boldsymbol{A})}|\lambda|}{\min_{\lambda \in \sigma(\boldsymbol{A})}|\lambda|}.$$ But if the matrix $\boldsymbol{A} \in \mathcal{M}_n(\mathbb{R})$ is merely nonsingular and $\|\cdot\|$ is a subordinate matrix norm, then by $\textbf{Theorem 1.4-3}$, part 1, of *Introduction to Numerical Linear Algebra and Optimisation* by Philippe Ciarlet, $$\rho(\boldsymbol{A})\leq \|\boldsymbol{A}\|,$$ and the same proof applies to $\boldsymbol{A}^{-1}$: if $(\lambda, \boldsymbol{v})$ is an eigenpair of $\boldsymbol{A}$, then $\boldsymbol{A}^{-1}\boldsymbol{v}=\frac{1}{\lambda }\boldsymbol{v}$. There exists $\boldsymbol{w} \in \mathbb{C}^n \setminus\{\mathbf{0}\}$ such that $\boldsymbol{v} \boldsymbol{w}^* \neq \boldsymbol{\Theta}$ (this is easy to check). Then

\begin{align*} \left(\boldsymbol{A}^{-1} \boldsymbol{v}\right) \boldsymbol{w}^* =\left(\frac{1}{\lambda } \boldsymbol{v}\right) \boldsymbol{w}^* &\Rightarrow \frac{1}{\lambda }\left(\boldsymbol{v} \boldsymbol{w}^*\right) =\boldsymbol{A}^{-1}\left(\boldsymbol{v} \boldsymbol{w}^*\right) \\ &\Rightarrow\left\|\frac{1}{\lambda }\left(\boldsymbol{v} \boldsymbol{w}^*\right)\right\| =\left\|\boldsymbol{A}^{-1}\left(\boldsymbol{v} \boldsymbol{w}^*\right)\right\| \leq\|\boldsymbol{A}^{-1}\| \| \boldsymbol{v} \boldsymbol{w}^* \| \\ & \Rightarrow \left|\frac{1}{\lambda }\right| \| \boldsymbol{v} \boldsymbol{w}^*\|\leq\| \boldsymbol{A}^{-1}\|\| \boldsymbol{v} \boldsymbol{w}^*\| \\ &\Rightarrow\left|\frac{1}{\lambda }\right| \leq\| \boldsymbol{A}^{-1} \| \end{align*}

In this way, for every $\lambda \in \sigma(\boldsymbol{A})$ we have $\left|\frac{1}{\lambda }\right| \leq \|\boldsymbol{A}^{-1}\|$; since $\sigma(\boldsymbol{A}^{-1})=\left\{\frac{1}{\lambda} : \lambda \in \sigma(\boldsymbol{A})\right\}$, this is equivalent to saying \begin{equation} \rho(\boldsymbol{A}^{-1}) \leq\|\boldsymbol{A}^{-1}\|. \end{equation} Multiplying the two inequalities gives

\begin{equation} \rho(\boldsymbol{A}^{-1}) \rho(\boldsymbol{A}) \leq ||\boldsymbol{A}^{-1}|| ||\boldsymbol{A}|| = \kappa(\boldsymbol{A}). \end{equation} Finally, since for a normal matrix the spectral norm equals the spectral radius, \begin{align*} \|\boldsymbol{A}\| &= \rho(\boldsymbol{A})=\max_{\lambda \in \sigma(\boldsymbol{A})} |\lambda|, \\ \left\|\boldsymbol{A}^{-1}\right\| &=\rho\left(\boldsymbol{A}^{-1}\right)=\max_{\lambda \in \sigma(\boldsymbol{A})} \frac{1}{|\lambda|}=\frac{1}{\min_{\lambda \in \sigma(\boldsymbol{A})}|\lambda|}. \end{align*} Substituting into the inequality above, \begin{equation} \frac{1}{\min_{\lambda \in \sigma(\boldsymbol{A})}|\lambda|} \max_{\lambda \in \sigma(\boldsymbol{A})} |\lambda| =\rho(\boldsymbol{A}^{-1}) \rho(\boldsymbol{A}) \leq \kappa(\boldsymbol{A}). \end{equation} Thus, for any nonsingular $\boldsymbol{A}$, \begin{equation*} \kappa(\boldsymbol{A}) \geq \frac{\max _{\lambda \in \sigma(\boldsymbol{A})}|\lambda|}{\min _{\lambda \in \sigma(\boldsymbol{A})}|\lambda|}, \end{equation*} with equality in the normal case.
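The inequality $\rho(\boldsymbol{A})\rho(\boldsymbol{A}^{-1}) \leq \kappa(\boldsymbol{A})$ holds for any nonsingular matrix, and it can be strict when $\boldsymbol{A}$ is not normal. A minimal sketch (assuming `numpy` is available) using a Jordan-like non-normal matrix:

```python
import numpy as np

# A non-normal, nonsingular matrix: both eigenvalues equal 1,
# so rho(A) * rho(A^{-1}) = 1, yet the matrix is badly conditioned.
A = np.array([[1.0, 100.0],
              [0.0, 1.0]])

lam = np.linalg.eigvals(A)          # eigenvalues of A (both are 1)
rho_A = np.abs(lam).max()           # spectral radius of A
rho_Ainv = (1.0 / np.abs(lam)).max()  # spectral radius of A^{-1}
kappa = np.linalg.cond(A, 2)        # spectral-norm condition number

# The general inequality holds, and here it is very strict.
assert rho_A * rho_Ainv <= kappa + 1e-9
```

Here $\rho(A)\rho(A^{-1})=1$ while $\kappa_2(A)$ is on the order of $10^4$, illustrating why the spectral-radius product characterizes the condition number only in the normal case.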