Optimum way to find the determinant of the following matrix.


Given a matrix like the following, is there a way to find its determinant using only elementary row and column operations? I know it would be easy to compute it directly or to look for a pattern, but is there a faster way using that method?

The matrix goes like:

  • $1$, if $i = j = 1$;
  • $\lambda$, if $i = j > 1$;
  • $k$, if $i \neq j$.

Where $i$ and $j$ indicate the row and the column, respectively.
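For concreteness, here is a small numerical sketch (numpy, with arbitrary sample values for $n$, $\lambda$, $k$ that are my own choices, not from the question) that builds this matrix and computes its determinant directly:

```python
import numpy as np

def build_matrix(n, lam, k):
    """Entries: 1 at position (1,1); lam on the rest of the
    diagonal; k everywhere off the diagonal."""
    M = np.full((n, n), float(k))
    np.fill_diagonal(M, float(lam))
    M[0, 0] = 1.0
    return M

# Arbitrary sample values, just to see a concrete determinant
M = build_matrix(5, lam=3.0, k=2.0)
print(np.linalg.det(M))
```

This brute-force value is useful for checking whatever closed form the row-reduction method produces.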


$$M=\begin{bmatrix}1&k&k&k&\dots&k\\ k&\lambda&k&k&\dots&k\\ k&k&\lambda&k&\dots&k\\ \vdots&&\ddots&&\vdots\\ k&k&k&\dots&k&\lambda \end{bmatrix}_{n\times n}$$

Subtract the first row from all the others, giving $$M'=\begin{bmatrix}1&k&k&k&\dots&k\\ k-1&\lambda-k&0&0&\dots&0\\ k-1&0&\lambda-k&0&\dots&0\\ \vdots&&\ddots&&\vdots\\ k-1&0&0&\dots&&\lambda-k \end{bmatrix}_{n\times n}$$

Subtract the last row from each of rows $2$ through $n-1$, giving

$$M''=\begin{bmatrix}1&k&k&k&\dots&k&k\\ 0&\lambda-k&0&0&\dots&0&k-\lambda\\ 0&0&\lambda-k&0&\dots&0&k-\lambda\\ \vdots&&\ddots&&\vdots&&\vdots\\ k-1&0&0&\dots&&0&\lambda-k \end{bmatrix}_{n\times n}$$

Of course $\det(M'')=\det(M)$, since row operations of this kind preserve the determinant.
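These two row operations can be checked numerically; a quick sketch (numpy, with arbitrary sample values for $n$, $\lambda$, $k$ of my own choosing):

```python
import numpy as np

# Arbitrary sample values for the sketch
n, lam, k = 5, 3.0, 2.0

M = np.full((n, n), k)
np.fill_diagonal(M, lam)
M[0, 0] = 1.0

# M': subtract the first row from all the others
Mp = M.copy()
Mp[1:] -= Mp[0]

# M'': subtract the last row from rows 2..n-1 (indices 1..n-2)
Mpp = Mp.copy()
Mpp[1:-1] -= Mpp[-1]

# Both operations leave the determinant unchanged
print(np.linalg.det(M), np.linalg.det(Mpp))
```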

Expanding $\det(M'')$ by minors along the first column,
$$ \det(M'')=(\lambda-k)^{n-1}-(-1)^n(k-1)\left\lvert\begin{matrix}k&k&k&\dots&k&k\\ \lambda-k&0&0&\dots&0&k-\lambda\\ 0&\lambda-k&0&\dots&0&k-\lambda\\ \vdots&&\ddots&&\vdots&\vdots\\ 0&0&\dots&&\lambda-k&k-\lambda\\ \end{matrix}\right\rvert_{(n-1)\times(n-1)}\\ =(\lambda-k)^{n-1}-(-1)^n(k-1)k(\lambda-k)^{n-2} \left\lvert\begin{matrix}1&1&1&\dots&1&1\\ 1&0&0&\dots&0&-1\\ 0&1&0&\dots&0&-1\\ \vdots&&\ddots&&\vdots&\vdots\\ 0&0&\dots&&1&-1\\ \end{matrix}\right\rvert_{(n-1)\times(n-1)} $$
(in the second step, $k$ was factored out of the first row of the minor and $\lambda-k$ out of each of its other $n-2$ rows), and the problem reduces to computing the last determinant. I leave that to you, but I have a couple of remarks. If you move the second row into the last spot (each adjacent row swap flips the sign of the determinant), we get a matrix like $M''$: upper triangular except for a $1$ in the first column of the last row, so we can expand by minors along the first column easily. A few experiments lead me to believe that the determinant of an $n\times n$ matrix of this form is $(-1)^{n+1}n$. This looks easy to prove by induction, I think.
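Those experiments are easy to reproduce; here is a numpy sketch that builds the last matrix for small sizes (I use $m$ for its size, to keep it distinct from the original $n$) and compares each determinant against $(-1)^{m+1}m$:

```python
import numpy as np

def conjecture_matrix(m):
    """m x m matrix: first row all ones; row i (1-based, i >= 2) has
    a 1 in column i-1 and a -1 in the last column, zeros elsewhere."""
    D = np.zeros((m, m))
    D[0, :] = 1.0
    for i in range(1, m):       # 0-based rows 1..m-1
        D[i, i - 1] = 1.0
        D[i, m - 1] = -1.0
    return D

for m in range(2, 9):
    d = round(np.linalg.det(conjecture_matrix(m)))
    print(m, d, (-1) ** (m + 1) * m)
```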

I hope I've typed all this correctly. Check carefully. I've made several mistakes today.


The direct way would be to compute its eigenvalues.
We want the eigenvalues of the $n \times n$ matrix $A= \begin{pmatrix}1&k&\ldots &k\\k&\lambda&\ldots&k\\\vdots&&\ddots&\vdots\\k&\ldots&k&\lambda\end{pmatrix}$. Consider $B=\dfrac{A-(\lambda-k)I}{k}$, which has rank at most $2$ and therefore at most two nonzero eigenvalues; since $A=kB+(\lambda-k)I$, the eigenvalues of $A$ are $\lambda-k$ (with multiplicity at least $n-2$) together with $kr+\lambda-k$ for each nonzero eigenvalue $r$ of $B$. The case $\lambda=1$ is easy (then $A=(1-k)I+kJ$ with $J$ the all-ones matrix), so the eigenvalues of $A$ are $(kn+1-k,1-k,\ldots,1-k)$. Suppose $\lambda\neq 1$; we compute a nonzero eigenvalue $r$ of $B$. Set $a=\dfrac{1-\lambda+k}{k}-r$ and $b=1-r$, so that $r$ is an eigenvalue of $B$ exactly when the determinant of the following matrix vanishes:

$B-rI=\begin{pmatrix}a&1&\ldots &1\\1&b&\ldots&1\\\vdots&&\ddots&\vdots\\1&\ldots&1&b\end{pmatrix}$

To do this, we look for a nonzero vector $(x_1,\ldots,x_n)$ in the kernel of $B-rI$; writing out the rows (with $2\le j\le n$) gives the system:

$$\begin{cases}x_1a+\sum\limits_{i=2}^nx_i=0\\\sum\limits_{i\neq j}x_i+bx_j=0\end{cases}$$ Subtracting the first equation from each of the others gives $(1-a)x_1+(b-1)x_j=0$, so all the $x_j$ with $j\ge 2$ are equal, say to $x$. Finally you solve $x_1+(n-2)x+bx=0$ with $x_1=\dfrac{-(n-1)x}{a}$ (from the first equation).

This gives a second-degree polynomial in $r$. The calculations are cumbersome but doable.
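The eigenvalue picture can be sanity-checked numerically. A numpy sketch (sample values for $n$, $\lambda$, $k$ are my own, with $\lambda\neq 1$; the shift $\lambda-k$ is chosen so that the diagonal entries of $B-rI$ come out as the $a$ and $b$ above):

```python
import numpy as np

# Arbitrary sample values with lam != 1
n, lam, k = 5, 3.0, 2.0

A = np.full((n, n), k)
np.fill_diagonal(A, lam)
A[0, 0] = 1.0

# Shift and rescale: B has 1's everywhere except the (1,1) entry,
# which becomes (1 - lam + k)/k, so B has rank at most 2.
B = (A - (lam - k) * np.eye(n)) / k
print("rank of B:", np.linalg.matrix_rank(B))

# Eigenvalues of A: lam - k repeated n-2 times, plus k*r + (lam - k)
# for each of the two nonzero eigenvalues r of B.
eigs = np.linalg.eigvalsh(A)   # A is symmetric, so eigvalsh applies
print("eigenvalues of A:", np.round(eigs, 6))
print("det(A):", np.linalg.det(A), "product of eigenvalues:", np.prod(eigs))
```

The product of the eigenvalues reproduces the determinant, which is what the question was after.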