I want to find all real $x$ that satisfy
$$ \det X = \begin{vmatrix} x &2 &2 &2\\ 2 &x &2 &2\\ 2 &2 &x &2\\ 2 &2 &2 &x \end{vmatrix} = 0. $$
My teacher does this by adding the three bottom rows to the top row:
$$ \det X = \begin{vmatrix} x+6 &x+6 &x+6 &x+6\\ 2 &x &2 &2\\ 2 &2 &x &2\\ 2 &2 &2 &x \end{vmatrix} $$
and then factoring $(x+6)$ out of the top row and subtracting a row of $2$'s (that is, $2$ times the new top row) from each of the bottom three rows:
$$ \det X = (x+6) \begin{vmatrix} 1 &1 &1 &1\\ 0 &x-2 &0 &0\\ 0 &0 &x-2 &0\\ 0 &0 &0 &x-2 \end{vmatrix}. $$
The answer is
$$ x\in \{-6,2\}. $$
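(To convince myself the claimed answer is right, I checked numerically that the determinant really factors as $(x+6)(x-2)^3$, which vanishes exactly at $x=-6$ and $x=2$. A quick sketch with exact rational arithmetic; the helper names `det` and `X` are my own:)

```python
from fractions import Fraction

def det(M):
    # Determinant by recursive cofactor (Laplace) expansion along the first row.
    n = len(M)
    if n == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def X(x):
    # The matrix from the question: x on the diagonal, 2 everywhere else.
    return [[x if i == j else Fraction(2) for j in range(4)] for i in range(4)]

# det X should equal (x+6)(x-2)^3 for every x, hence be 0 at x = -6 and x = 2.
for x in [Fraction(-6), Fraction(2), Fraction(0), Fraction(5)]:
    assert det(X(x)) == (x + 6) * (x - 2) ** 3
```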
I think I understand the operations themselves (subtracting an arbitrary row of numbers from a row of a determinant is something I've never seen before, but I don't see why it wouldn't be allowed; it seems just like subtracting the same quantity from both sides of an equation, right?). My main issue is why they are performed:
- Why can't I subtract a row of $2$'s from the three bottom rows of the first determinant in the same way? If I do that, I get a different answer.
- I know I want a column of all zeroes except one element, but why do I need to perform the first operation beforehand? Is it somehow necessary for all the top-row elements to be the same, $(x+6)$?
I think you should take a look at Gaussian elimination.
You can only perform certain manipulations on the matrix without changing its determinant (or only changing its sign, or scaling it by some known factor). These manipulations are:

- Swapping two rows multiplies the determinant by $-1$.
- Multiplying a row by a nonzero scalar $c$ multiplies the determinant by $c$.
- Adding a multiple of one row to another row leaves the determinant unchanged.
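These are the standard elementary row operations. A quick numerical check of their effect on the determinant, using exact fractions (a sketch; the helper `det3` and the sample matrix are my own):

```python
from fractions import Fraction

def det3(M):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row.
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[Fraction(v) for v in row] for row in [[1, 2, 3], [0, 1, 4], [5, 6, 0]]]
d = det3(A)

# 1) Swapping two rows flips the sign of the determinant.
swapped = [A[1], A[0], A[2]]
assert det3(swapped) == -d

# 2) Multiplying one row by a scalar c multiplies the determinant by c.
scaled = [[3 * v for v in A[0]], A[1], A[2]]
assert det3(scaled) == 3 * d

# 3) Adding a multiple of one row to another leaves the determinant unchanged.
added = [[a + 2 * b for a, b in zip(A[0], A[1])], A[1], A[2]]
assert det3(added) == d
```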
The algorithm for Gaussian elimination goes something like this:
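A minimal sketch in Python, assuming exact rational arithmetic via `fractions` (the function name `det_by_elimination` is my own). It reduces the matrix to upper-triangular form using only row swaps (each flips the sign) and row additions (which change nothing), then multiplies the diagonal:

```python
from fractions import Fraction

def det_by_elimination(M):
    """Determinant via Gaussian elimination: reduce to upper-triangular form,
    tracking sign flips from row swaps, then multiply the diagonal entries."""
    A = [[Fraction(v) for v in row] for row in M]
    n = len(A)
    sign = 1
    for col in range(n):
        # Find a nonzero pivot in this column, at or below the diagonal.
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)      # no pivot: the determinant is zero
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            sign = -sign            # a row swap flips the sign
        for r in range(col + 1, n):
            # Subtract a multiple of the pivot row; determinant unchanged.
            factor = A[r][col] / A[col][col]
            A[r] = [a - factor * p for a, p in zip(A[r], A[col])]
    result = Fraction(sign)
    for i in range(n):
        result *= A[i][i]
    return result

# The question's matrix at x = 5: det should be (5+6)(5-2)^3 = 297.
X5 = [[5, 2, 2, 2], [2, 5, 2, 2], [2, 2, 5, 2], [2, 2, 2, 5]]
assert det_by_elimination(X5) == 297
```

Running it on the question's matrix at $x=-6$ or $x=2$ returns $0$, matching the factorization $(x+6)(x-2)^3$.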