Is my reasoning right for the eigenvalues of the following matrix?


I have a matrix $$\textbf{A}=\alpha_1 \textbf{I}+\alpha_2 \textbf{B},$$ where $\alpha_1<0$, $\alpha_2>0$, and $\textbf{I}$ (the identity matrix) and $\textbf{B}$ are of size $2\times 2$. Further, $\textbf{B}$ is symmetric and positive semidefinite (its determinant is zero and its $(1,1)$ entry is $\geq 0$, so its eigenvalues are nonnegative). Now I want to know the lowest possible value that the eigenvalues of $\textbf{A}$ can achieve. In my opinion it cannot be lower than $\alpha_1$. My reasoning is as follows: the eigenvalues of $\textbf{A}$ are the sums of the eigenvalues of $\alpha_1 \textbf{I}$ and $\alpha_2 \textbf{B}$, and since the eigenvalues of $\alpha_2 \textbf{B}$ cannot be smaller than $0$, the lower bound on the eigenvalues of $\textbf{A}$ is $\alpha_1$. Is this reasoning right or not? Your comments will be much appreciated; thanks in advance.


There is 1 answer below.

Best answer

For any matrix $C$ with eigenvalue $\lambda$ corresponding to eigenvector $\vec v \ne 0$,

$C \vec v = \lambda \vec v, \tag 1$

we have

$(\alpha C + \beta \mathbf I) \vec v = \alpha C \vec v + \beta \mathbf I \vec v = (\alpha \lambda + \beta) \vec v, \tag 2$

which shows that $\alpha \lambda + \beta$ is an eigenvalue of $\alpha C + \beta \mathbf I$, also with eigenvector $\vec v$. Conversely, if $(\alpha C + \beta \mathbf I) \vec v = (\alpha \lambda + \beta) \vec v$, we may deduce that

$\alpha C \vec v + \beta \vec v = \alpha \lambda \vec v + \beta \vec v \tag 3$

or

$\alpha C \vec v = \alpha \lambda \vec v \Longrightarrow C\vec v = \lambda \vec v, \tag 4$

provided $\alpha \ne 0$. Thus, if this condition is met, we have that $\alpha \lambda + \beta$ is an eigenvalue of $\alpha C + \beta \mathbf I$ if and only if $\lambda$ is an eigenvalue of $C$. Taking

$C = \mathbf B, \; \alpha = \alpha_2 > 0, \; \beta = \alpha_1, \tag 5$

we see that the eigenvalues of

$\mathbf A = \alpha_2 \mathbf B + \alpha_1 \mathbf I \tag 6$

must be bounded below by $\alpha_1$ as $\mathbf B$ varies over the set of symmetric, positive definite $2 \times 2$ matrices, since the eigenvalues of such matrices are positive, and any two positive reals may occur as the eigenvalues of such a $\mathbf B$; we can see the bound is sharp by allowing the eigenvalues of $\mathbf B$ to be arbitrarily small positive real numbers. (If $\mathbf B$ is merely positive semidefinite, as when $\det \mathbf B = 0$, then $\mathbf B$ has a zero eigenvalue and the bound $\alpha_1$ is actually attained.)
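As a sanity check (not part of the original argument), a quick Monte-Carlo experiment with NumPy confirms the bound: for randomly generated symmetric positive definite $\mathbf B$, every eigenvalue of $\mathbf A = \alpha_1 \mathbf I + \alpha_2 \mathbf B$ stays above $\alpha_1$. The particular values of $\alpha_1$ and $\alpha_2$ below are illustrative.

```python
# Monte-Carlo check (illustrative values, not from the answer): with
# alpha1 < 0 and alpha2 > 0, every eigenvalue of
# A = alpha1*I + alpha2*B exceeds alpha1 whenever B is
# symmetric positive definite.
import numpy as np

rng = np.random.default_rng(0)
alpha1, alpha2 = -3.0, 2.0

for _ in range(1000):
    M = rng.standard_normal((2, 2))
    B = M @ M.T + 1e-6 * np.eye(2)   # symmetric positive definite
    A = alpha1 * np.eye(2) + alpha2 * B
    assert np.all(np.linalg.eigvalsh(A) > alpha1)
```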

This result may be seen even more clearly if we cast equation (6) into diagonal form, which is always possible since $\mathbf B$ is symmetric, obtaining

$\mathbf A = \alpha_2 \begin{bmatrix} \beta_1 & 0 \\ 0 & \beta_2 \end{bmatrix} + \alpha_1 \mathbf I = \begin{bmatrix} \alpha_2 \beta_1 + \alpha_1 & 0 \\ 0 & \alpha_2 \beta_2 + \alpha_1 \end{bmatrix}. \tag 7$
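Equation (7) can also be verified numerically; the following sketch (with arbitrary illustrative values for $\alpha_1$, $\alpha_2$, and $\mathbf B$) checks that the eigenvalues of $\alpha_2 \mathbf B + \alpha_1 \mathbf I$ are exactly $\alpha_2 \beta_i + \alpha_1$, where $\beta_i$ are the eigenvalues of $\mathbf B$.

```python
# Verify the shift property behind (7): the spectrum of
# alpha2*B + alpha1*I equals alpha2*eig(B) + alpha1.
# B and the alphas are illustrative, not taken from the post.
import numpy as np

alpha1, alpha2 = -3.0, 2.0
B = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric positive definite

betas = np.linalg.eigvalsh(B)        # eigenvalues of B: [1.0, 3.0]
eigs_A = np.linalg.eigvalsh(alpha2 * B + alpha1 * np.eye(2))

# alpha2 > 0 preserves the ascending order, so the arrays match entrywise.
assert np.allclose(eigs_A, alpha2 * betas + alpha1)  # both are [-1.0, 3.0]
```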

The OP's reasoning is thus correct.