Basis for Matrix A


Find a basis for all $2\times2$ matrices $A$ for which $A\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$ = $\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$.

Maybe I'm missing something, but isn't $A$ just the $0$ matrix? In that case, isn't the basis simply the $0$ matrix as well?

Four answers follow.

---

Your guess is very intuitive, but let's check this rigorously:

$$\begin{bmatrix}a&b\\c&d\end{bmatrix}\begin{bmatrix}1&1\\1&1\end{bmatrix}=\begin{bmatrix}0&0\\0&0\end{bmatrix}$$

and we end up with a system of equations $$\begin{cases}a+b=0\\a+b=0\\c+d=0\\c+d=0\end{cases}$$

Do you think you can find a basis for the solution space?
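As a quick sanity check of that system (a sketch using NumPy, not part of the original answer; the specific entries are illustrative), any $A$ satisfying $a+b=0$ and $c+d=0$ should annihilate the all-ones matrix:

```python
import numpy as np

B = np.ones((2, 2))          # the all-ones matrix from the problem

# Rows of A each sum to zero: a = -b, c = -d.
A = np.array([[3.0, -3.0],
              [-7.0, 7.0]])

print(A @ B)                 # all zeros
```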

---

The $0$ matrix is a trivial solution in this case, but that doesn't mean there are no other solutions.

Notice that if we define $B = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$, we have $AB = 0$ with $\det(B) = 0$, so $B$ is not invertible. If it were, we could multiply both sides on the right by $B^{-1}$ and conclude that the $0$ matrix is the only solution.

However, in this case, let $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ and find some equations and parameterize them to get to the basis. These equations are $$a+b = 0$$ $$c+d = 0$$

Here, we have two pivot variables and two free variables, so you can set $b = t$ and $d = s$ and solve for $a$ and $c$ in terms of $t$ and $s$, respectively, to obtain the general form $A = \begin{bmatrix} -t & t \\ -s & s \end{bmatrix}$.
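To verify the parameterization numerically (a sketch using NumPy; the sampled $(t, s)$ values are arbitrary), every matrix of this form should satisfy $AB = 0$:

```python
import numpy as np

B = np.ones((2, 2))

# Every A(t, s) = [[-t, t], [-s, s]] should give A @ B = 0.
for t, s in [(1.0, 0.0), (0.0, 1.0), (2.5, -4.0)]:
    A = np.array([[-t, t], [-s, s]])
    assert np.allclose(A @ B, 0)
```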

---

If you let $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ then your equation says,

$$ \begin{pmatrix} a + b & a + b \\ c+d & c+d \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$

i.e. $a = -b$ and $c = -d$, which means we can rewrite $A$ as,

$$\begin{pmatrix} a & -a \\ -d & d \end{pmatrix} = a \begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix} + d \begin{pmatrix} 0 & 0 \\ -1 & 1 \end{pmatrix}$$

And so you are interested in the subspace $S = \textbf{span}\left\{\begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix},\begin{pmatrix} 0 & 0 \\ -1 & 1 \end{pmatrix}\right\} $
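Since both spanning matrices annihilate $B$, so does every linear combination of them. A quick check (a NumPy sketch; the coefficients $a, d$ are arbitrary samples):

```python
import numpy as np

B = np.ones((2, 2))
E1 = np.array([[1., -1.], [0., 0.]])
E2 = np.array([[0., 0.], [-1., 1.]])

# Any element of span{E1, E2} annihilates B.
a, d = 2.0, -5.0
A = a * E1 + d * E2
print(A @ B)   # all zeros
```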

---

No, there are infinitely many matrices $A$ satisfying the stated property.

For $2\times2$ matrices it's easy enough to find a basis, but let's try to make the problem more general. Let $U$ be the $n\times n$ matrix with all entries equal to $1$; let $u$ be the $n\times 1$ column with all entries equal to $1$.

Then, for an $n\times n$ matrix $A$, the following conditions are equivalent:

  1. $AU=0$ (the null $n\times n$ matrix)
  2. $Au=0$ (the null $n\times 1$ column)

This is because $U=[u\ u\ \dots\ u]$ and so $AU=[Au\ Au\ \dots\ Au]$ (as block matrices).

So your problem actually is to find all matrices $A$ such that $Au=0$. Write $A$ as a row of $n\times 1$ columns: $$ A=\begin{bmatrix} a_1 & a_2 & \dots & a_n \end{bmatrix} $$ Then, by definition of matrix multiplication, $$ Au=a_1+a_2+\dots+a_n $$ so your condition can be written $a_n=-(a_1+a_2+\dots+a_{n-1})$.
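The equivalence of the two conditions can be checked numerically (a sketch using NumPy; $n = 4$ and the random matrix are illustrative). We enforce the column condition $a_n = -(a_1 + \dots + a_{n-1})$ and confirm both $Au = 0$ and $AU = 0$:

```python
import numpy as np

n = 4
U = np.ones((n, n))
u = np.ones((n, 1))

rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
# Enforce a_n = -(a_1 + ... + a_{n-1}): the last column cancels the row sums.
A[:, -1] = -A[:, :-1].sum(axis=1)

# The two conditions agree: A u = 0 exactly when A U = 0.
print(np.allclose(A @ u, 0), np.allclose(A @ U, 0))
```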

In the case of $n=2$, you simply have $a_2=-a_1$, so the matrices satisfying the property are all those of the form $$ \begin{bmatrix} a & -a \\ b & -b \end{bmatrix} $$ and a basis is given by $$ \left\{ \begin{bmatrix} 1 & -1 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 1 & -1 \end{bmatrix} \right\} $$ For $3\times 3$ matrices, you get $$ \begin{bmatrix} a & b & -(a+b) \\ c & d & -(c+d) \\ e & f & -(e+f) \end{bmatrix} $$ and you get a basis by setting each of the “free” coefficients to $1$ and all others to $0$: $$ \left\{ \begin{bmatrix} 1 & 0 & -1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 & -1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & -1 \\ 0 & 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & -1 \end{bmatrix}, \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & -1 \end{bmatrix} \right\} $$
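The "set one free coefficient to $1$" construction above can be automated for any $n$ (a sketch using NumPy; for $n = 3$ it reproduces the six basis matrices listed):

```python
import numpy as np

n = 3
U = np.ones((n, n))

# Build the n*(n-1) basis matrices: a 1 in a "free" slot,
# balanced by a -1 in the last column of the same row.
basis = []
for i in range(n):
    for j in range(n - 1):
        E = np.zeros((n, n))
        E[i, j] = 1.0
        E[i, n - 1] = -1.0
        basis.append(E)

print(len(basis))                                 # 6 = n*(n-1)
assert all(np.allclose(E @ U, 0) for E in basis)

# Linear independence: the flattened matrices have full rank.
M = np.array([E.flatten() for E in basis])
print(np.linalg.matrix_rank(M))                   # 6
```

So the solution space has dimension $n(n-1)$, matching the $n-1$ free columns in each of the $n$ rows.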