Matrix rank considering a real parameter


Consider the linear system:

\begin{align} & (1+\lambda)x+y+z=1 \\ & x+(1+\lambda)y+z=\lambda \\ & x+y+(1+\lambda)z=\lambda^2 \end{align}

for which I need to determine the rank of the matrix of coefficients, defined as \begin{equation} A=\pmatrix{1+\lambda & 1 & 1 \\ 1 & 1+\lambda & 1 \\ 1 & 1 & 1+\lambda} \end{equation} where $\lambda \in \mathbb{R}$. Well, it seems quite obvious that the rank is equal to $3$, since the rows/columns are linearly independent, but I have to prove it. Therefore, in an attempt at echelon form I have reached:

\begin{align} A=\pmatrix{1 &1+\lambda &1 \\ 0 &1 &-1 \\ 0 &0 &-\left(\frac{\lambda+3}{\lambda+2} \right)} \end{align} for $\lambda \neq -3$. First of all, I am not quite sure this is correct, and secondly, is it enough? There is also a second question, which asks for which values of $\lambda$ the system has exactly one solution, infinitely many solutions, or no solution, respectively. Do I proceed with the determinant?
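As a quick sanity check of such a reduction, one can compute the determinant of $A$ symbolically (a sketch assuming SymPy is available; not part of the original post):

```python
# Sanity-check the coefficient matrix symbolically.
# Assumes SymPy is installed; illustrative sketch only.
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[1 + lam, 1, 1],
               [1, 1 + lam, 1],
               [1, 1, 1 + lam]])

# The determinant factors cleanly, which pins down where the rank can drop.
det = sp.factor(A.det())
print(det)  # equals lambda**2 * (lambda + 3)
```

Since $\det A = \lambda^2(\lambda+3)$, the rank can only drop at $\lambda = 0$ and $\lambda = -3$.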

Lastly, can someone direct me to a complete example of how to find the rank of a matrix with the use of minor determinants?
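On the minors method: the rank is the size of the largest non-vanishing minor. A small illustration (a sketch assuming SymPy; the matrix shown is $A$ evaluated at $\lambda = -3$):

```python
# Rank via minor determinants, illustrated on A with lambda = -3.
# Assumes SymPy; this sketch is not from the original thread.
import sympy as sp

A = sp.Matrix([[-2, 1, 1],
               [1, -2, 1],
               [1, 1, -2]])

# The only 3x3 minor is det(A) itself; it vanishes, so rank < 3.
print(A.det())          # 0
# One nonzero 2x2 minor suffices to show rank >= 2.
print(A[:2, :2].det())  # 3
# Hence rank(A) = 2, in agreement with SymPy's built-in rank().
print(A.rank())         # 2
```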

Thank you!


There are 2 answers below.

On BEST ANSWER

Notice that $A$ can be written as $$A = \lambda I - B,$$ where $$B = \begin{pmatrix}-1 & -1 & -1 \\ -1 & -1 & -1 \\ -1 & -1 & -1\end{pmatrix} = -ee^T, \quad e = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.$$ Hence $\det(A) = \det(\lambda I - B)$ vanishes exactly when $\lambda$ is an eigenvalue of $B$; that is, $\det(A) \neq 0$ if and only if $\lambda \notin \sigma(B)$, where $\sigma(B)$ denotes the set of eigenvalues of $B$. Since $B$ has rank $1$, clearly $0 \in \sigma(B)$ (with multiplicity $2$). On the other hand, since for any matrices $X$ and $Y$ of compatible sizes $XY$ and $YX$ have the same nonzero eigenvalues, writing $B = (-e)e^T$ shows that its unique nonzero eigenvalue equals the sole entry of the $1 \times 1$ matrix $e^T(-e) = -3$. Hence $\sigma(B) = \{0, -3\}$.
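The spectrum of $B$ is easy to confirm numerically (a sketch assuming NumPy is available; not part of the original answer):

```python
# Numerical check of sigma(B) for the rank-one matrix of all -1 entries.
# Assumes NumPy; illustrative only.
import numpy as np

e = np.ones((3, 1))
B = -e @ e.T                  # the 3x3 matrix with every entry equal to -1
vals = np.linalg.eigvalsh(B)  # B is symmetric, so eigvalsh applies
print(vals)                   # approximately [-3, 0, 0], in ascending order
```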

In summary, if $\lambda \notin \{0, -3\}$, we have $\det(A) \neq 0$, and thus $\text{rank}(A) = 3$.

If $\lambda = 0$, then every entry of $A$ equals $1$, so clearly $\text{rank}(A) = 1$. If $\lambda = -3$, then $A$ contains the submatrix $\begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}$ with nonzero determinant, and since $\det(A) = 0$ we conclude $\text{rank}(A) = 2$.

Regarding the number of solutions, it is convenient to take the point of view of the column space: the system has a solution if and only if $b = (1, \lambda, \lambda^2)^T$ belongs to the space spanned by the column vectors $a_1 = (1 + \lambda, 1, 1)^T$, $a_2 = (1, 1 + \lambda, 1)^T$, $a_3 = (1, 1, 1 + \lambda)^T$.

If $\lambda = 0$, then $b = (1, 0, 0)^T$, which cannot be expressed as a linear combination of $a_1 = a_2 = a_3 = (1, 1, 1)^T$, since any such combination has three equal entries. Therefore, the system has no solution.

If $\lambda = -3$, then $b = (1, -3, 9)^T$. Every row of $A$ sums to $(1 + \lambda) + 1 + 1 = \lambda + 3 = 0$, so $e^T A = 0$ for $e = (1, 1, 1)^T$. If the system had a solution $u$, then $e^T b = e^T A u = 0$; but $e^T b = 1 - 3 + 9 = 7 \neq 0$. Equivalently, the augmented matrix $(A \mid b)$ has rank $3 > 2 = \text{rank}(A)$. Therefore the system has no solution.

If $\lambda \notin \{0, -3\}$, then $A$ is invertible, hence the system has the unique solution $(x, y, z)^T = A^{-1}b$. In particular, there is no value of $\lambda$ for which the system has infinitely many solutions.
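The case analysis can be verified with a Rouché–Capelli rank comparison of $A$ against the augmented matrix $(A \mid b)$ (a NumPy sketch, assumed available; the sample values of $\lambda$ are illustrative):

```python
# Rouche-Capelli check: compare rank(A) with rank([A | b]) for sample lambdas.
# Assumes NumPy; illustrative sketch, not from the original answer.
import numpy as np

def ranks(lam):
    A = np.array([[1 + lam, 1, 1],
                  [1, 1 + lam, 1],
                  [1, 1, 1 + lam]], dtype=float)
    b = np.array([[1.0], [lam], [lam ** 2]])
    return np.linalg.matrix_rank(A), np.linalg.matrix_rank(np.hstack([A, b]))

for lam in (1.0, 0.0, -3.0):
    print(lam, ranks(lam))
# lambda = 1:  rank(A) = rank(A|b) = 3 -> unique solution
# lambda = 0:  rank(A) = 1 < rank(A|b) = 2 -> no solution
# lambda = -3: rank(A) = 2 < rank(A|b) = 3 -> no solution
```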

On

What's obvious isn't always true.

When $\lambda = 0$, the columns are identical, and the rank is $1$.

You might also want to look at $\lambda = -2$ and $\lambda = -3$: your reduction divides by $\lambda + 2$, so it is not valid at $\lambda = -2$, and at $\lambda = -3$ the last pivot $-\frac{\lambda+3}{\lambda+2}$ vanishes, so the rank drops to $2$ there.
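Those two values can be inspected directly (a NumPy sketch, assumed available; not part of the original answer):

```python
# Determinant and rank of A at the two suggested parameter values.
# Assumes NumPy; illustrative only.
import numpy as np

def check(lam):
    A = np.array([[1 + lam, 1, 1],
                  [1, 1 + lam, 1],
                  [1, 1, 1 + lam]])
    return round(np.linalg.det(A), 6), np.linalg.matrix_rank(A)

for lam in (-2.0, -3.0):
    print(lam, check(lam))
# lambda = -2: det = 4, full rank 3 (the reduction merely broke down formally)
# lambda = -3: det = 0, rank drops to 2
```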