$A+3I$ is nonsingular


If $A^2=A+2I$, show that $A+3I$ is nonsingular. If $p$ is a polynomial that annihilates $A$, how are the minimal polynomial of $A$ and $p$ related?

Can someone help me with this? I'm not quite familiar with annihilating polynomials.


There are 5 best solutions below

---

Hint: if $p(A) = 0$, where $p$ is a polynomial, then all eigenvalues of $A$ are roots of $p$.

---

Since $A^2 - A - 2I = 0$, we have that the polynomial $x^2 - x - 2 =(x+1)(x-2)$ annihilates $A$.

Therefore, the minimal polynomial of $A$ divides $(x+1)(x-2)$ so the only options are $x+1$, $x-2$ and $(x+1)(x-2)$.

The roots of the minimal polynomial are the eigenvalues of $A$, so we conclude $\sigma(A) \subseteq \{-1, 2\}$.

$A+3I$ is singular if and only if $0 \in \sigma(A+3I) = \sigma(A) + 3 \subseteq \{2, 5\}$, which cannot be true.
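As a numerical sanity check of this spectrum-shift argument, one can build a hypothetical non-diagonal $A$ satisfying $A^2 = A + 2I$ (by conjugating $\operatorname{diag}(-1, 2)$ with an invertible $P$; both the matrix and $P$ are my choices, not from the question) and confirm that $\sigma(A + 3I) \subseteq \{2, 5\}$:

```python
import numpy as np

# Hypothetical example: a non-diagonal A with A^2 = A + 2I, built by
# conjugating D = diag(-1, 2) with an invertible P of our choosing.
P = np.array([[1.0, 1.0], [0.0, 1.0]])
D = np.diag([-1.0, 2.0])
A = P @ D @ np.linalg.inv(P)

I = np.eye(2)
assert np.allclose(A @ A, A + 2 * I)      # A is annihilated by x^2 - x - 2

eigs = np.linalg.eigvals(A + 3 * I)
print(sorted(eigs.real))                  # eigenvalues of A + 3I lie in {2, 5}
print(np.linalg.det(A + 3 * I))           # nonzero determinant => nonsingular
```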

---

Let's find the inverse of $A+3I$ in the form $aA+bI$: $$ I = (A+3I)(aA+bI)=aA^2+(3a+b)A+3bI=(4a+b)A+(2a+3b)I, $$ using $A^2 = A + 2I$ in the last step. Solving $4a+b=0$, $2a+3b=1$ gives $a = -1/10$, $b = 4/10$, and so $$ (A+3I)^{-1} = \frac{1}{10}(-A+4I). $$
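This closed-form inverse can be checked numerically; here I use a hypothetical matrix $A = \begin{pmatrix} -1 & 3 \\ 0 & 2 \end{pmatrix}$ of my own choosing, which satisfies $A^2 = A + 2I$:

```python
import numpy as np

# Check the formula (A + 3I)^{-1} = (4I - A)/10 on a hypothetical A
# that satisfies A^2 = A + 2I.
A = np.array([[-1.0, 3.0], [0.0, 2.0]])
I = np.eye(2)
assert np.allclose(A @ A, A + 2 * I)            # A is annihilated by x^2 - x - 2

inv_candidate = (4 * I - A) / 10
assert np.allclose((A + 3 * I) @ inv_candidate, I)   # it really is the inverse
print("inverse formula verified")
```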

---

If $A^2-A-2I=0$, then the possible eigenvalues of $A$ are $-1$ and $2$. Indeed, if $Av=\lambda v$, with $v\ne0$, then $$ (\lambda^2-\lambda-2)v=0 $$ by direct computation. Since $-3$ is not an eigenvalue, we can conclude that $A+3I$ is nonsingular.

If $p$ is a polynomial that annihilates $A$ and $m$ is the minimal polynomial, then $p(x)=m(x)q(x)$ for some $q(x) \in F[x]$. This is because $m(x)$ is, by definition, the monic polynomial of least possible degree that annihilates $A$.

If $I$ is the set of polynomials that annihilate $A$, then $I$ is an ideal of $F[x]$ (where $F$ is the base field). By well-known facts about polynomials over a field, $F[x]$ is a principal ideal domain, so $I$ is generated by a single polynomial, and the monic element of least degree in $I$ is precisely the unique monic generator of $I$. (Note that $I \ne \{0\}$: by the Cayley–Hamilton theorem, the characteristic polynomial of $A$ always lies in $I$.)

---

If $A$ is a square matrix of size $n$ over a field $F$, and $\mu \in F$ is an eigenvalue of $A$, so that there is some vector $0 \ne \vec v \in F^n$ with

$A \vec v = \mu \vec v, \tag 1$

then

$A^2 \vec v = A(A \vec v) = A(\mu \vec v) = \mu A \vec v = \mu \mu \vec v = \mu^2 \vec v; \tag 2$

similarly,

$A^3 \vec v = A(A^2 \vec v) = A(\mu^2 \vec v) = \mu^2 A \vec v = \mu^2 \mu \vec v = \mu^3 \vec v; \tag 3$

one is led in the light of (1)-(3) to guess that

$A^k \vec v = \mu^k \vec v, \; 0 \le k \in \Bbb Z; \tag{4}$

we may validate such a guess by noting that (4) implies

$A^{k + 1} \vec v = A(A^k \vec v) = A(\mu^k \vec v) = \mu^k(A \vec v) = \mu^k \mu \vec v = \mu^{k + 1} \vec v; \tag 5$

we see that (1)-(5) indeed form, in essence, an inductive demonstration that (4) binds; we also see from (4) that, for $a \in F$,

$aA^k \vec v = a\mu^k \vec v; \tag 6$

therefore, if

$p(x) = \displaystyle \sum_{i = 0}^{\deg p} p_i x^i \in F[x], \tag 7$

then

$p(A)\vec v = \displaystyle \left (\sum_{i = 0}^{\deg p} p_i A^i \right ) \vec v = \sum_{i = 0}^{\deg p} p_i A^i \vec v = \sum_{i = 0}^{\deg p} p_i \mu^i \vec v = \left (\sum_{i = 0}^{\deg p} p_i \mu^i \right) \vec v = p(\mu) \vec v; \tag 8$

it then follows that if (1) holds, $p(\mu)$ is an eigenvalue of $p(A)$, also with eigenvector $\vec v$; thus

$p(A) = 0 \Longrightarrow p(\mu) = 0, \; \text{where} \; A\vec v = \mu \vec v; \tag 9$
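The spectral-mapping principle (8)-(9) is easy to spot-check numerically. Here I use a hypothetical eigenpair and test polynomial of my own choosing: $A = \begin{pmatrix} -1 & 3 \\ 0 & 2 \end{pmatrix}$ with $\mu = 2$, $\vec v = (1, 1)$, and $p(x) = x^3 + x + 1$:

```python
import numpy as np

# Spot check of (8)-(9): if A v = mu v, then p(A) v = p(mu) v.
# Hypothetical data: A has the eigenpair mu = 2, v = (1, 1), and
# p(x) = x^3 + x + 1 is an arbitrary test polynomial.
A = np.array([[-1.0, 3.0], [0.0, 2.0]])
v = np.array([1.0, 1.0])
mu = 2.0
assert np.allclose(A @ v, mu * v)                    # (1): v is an eigenvector

def p_mat(M):                                        # p(A) for p(x) = x^3 + x + 1
    return np.linalg.matrix_power(M, 3) + M + np.eye(2)

def p_scalar(x):                                     # p(mu)
    return x**3 + x + 1

assert np.allclose(p_mat(A) @ v, p_scalar(mu) * v)   # (8): p(A) v = p(mu) v
print("spectral mapping check passed")
```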

We apply the principle outlined in (1)-(9) above to the present case with

$A^2 = A + 2I, \tag{10}$

or

$A^2 - A - 2I = 0; \tag{11}$

now the eigenvalues of $A$ must satisfy

$x^2 - x - 2 = 0, \tag{12}$

the roots of which are $-1$ and $2$; thus $\mu \in \{-1, 2 \}$ and there are no other possibilities. Now let

$q(x) = x + 3 \in F[x]; \tag{13}$

it then further follows from what we have done above that the eigenvalues of

$q(A) = A + 3I \tag{14}$

must be of the form $q(\mu) = \mu + 3$, so they must lie in the set

$\{ 2 = -1 + 3, 5 = 2 + 3 \}; \tag{15}$

since

$0 \notin \{ 2, 5 \}, \tag{16}$

i.e., $0$ is not an eigenvalue of $A + 3I$, it follows that this matrix is nonsingular.

In general, if

$p(x) \in F[x] \tag{17}$

annihilates a matrix $A$ as does $x^2 -x - 2$ in the present example, that is, if

$p(A) = 0, \tag{18}$

then the minimal polynomial

$m_A(x) \in F[x] \tag{19}$

of $A$ must divide $p(x)$:

$m_A(x) \mid p(x); \tag{20}$

this follows easily from the division algorithm for polynomials, which asserts that

$p(x) = q(x)m_A(x) + r(x), \; q(x), r(x) \in F[x], \tag{21}$

where

$r(x) = 0 \; \text{or} \; 0 \le \deg r(x) < \deg m_A(x); \tag{22}$

indeed, since $p(A) = 0$ and $m_A(A) = 0$, evaluating (21) at $A$ gives

$r(A) = p(A) - q(A) m_A(A) = 0; \tag{23}$

if $r(x) \ne 0$, this contradicts the minimality of $m_A(x)$, since $\deg r(x) < \deg m_A(x)$; thus we must have $r(x) = 0$ and

$p(x) = q(x) m_A(x), \tag{24}$

and so (20) binds.

To summarize what we have just done: The minimal polynomial $m_A(x)$ of any matrix $A$ divides any $p(x) \in F[x]$ which annihilates $A$, that is, $p(A) = 0$.
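This divisibility can be checked directly with polynomial division. As a hypothetical example, take $m_A(x) = x^2 - x - 2$ (the annihilator from the question) as the minimal polynomial, and build a larger annihilator $p(x) = (x + 5)\,m_A(x)$ (the factor $x + 5$ is an arbitrary choice of mine):

```python
import numpy as np

# Division-algorithm check of (20)-(24): dividing an annihilator p by the
# minimal polynomial m_A leaves zero remainder, so m_A | p.
m = np.array([1.0, -1.0, -2.0])            # coefficients of m_A(x) = x^2 - x - 2
p = np.polymul(np.array([1.0, 5.0]), m)    # p(x) = (x + 5)(x^2 - x - 2)

q, r = np.polydiv(p, m)                    # p = q * m_A + r
assert np.allclose(r, 0.0)                 # remainder vanishes, so m_A | p
assert np.allclose(q, [1.0, 5.0])          # quotient recovers x + 5
print("m_A divides p")
```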