Consider the following matrix, where $k$ is a real parameter:
$$\begin{pmatrix} 1 & k & 1 \\ k & 1 & 1 \\ 1 & 1 & k \end{pmatrix}$$
I know I could study the zeroes of the determinant and so on, but I want to use Gaussian elimination to study the rank.
So I start with $R_2 - k R_1$ (under the condition $k \neq 0$) and $R_3 - R_1$, obtaining
$$\begin{pmatrix} 1 & k & 1 \\ 0 & 1-k^2 & 1-k \\ 0 & 1-k & k-1 \end{pmatrix}$$
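This first reduction step can be double-checked symbolically, e.g. with SymPy (the check is my own addition, not part of the question):

```python
# Sanity check of the first reduction step: R2 <- R2 - k*R1, R3 <- R3 - R1.
from sympy import Matrix, symbols

k = symbols('k')
B = Matrix([[1, k, 1],
            [k, 1, 1],
            [1, 1, k]])

B[1, :] = B.row(1) - k * B.row(0)   # R2 <- R2 - k*R1
B[2, :] = B.row(2) - B.row(0)       # R3 <- R3 - R1

expected = Matrix([[1, k, 1],
                   [0, 1 - k**2, 1 - k],
                   [0, 1 - k, k - 1]])
print(B == expected)  # True
```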
Now I can proceed as usual with $(1+k)R_3 - R_2$ (which implicitly assumes $k \neq -1$; that case would need a separate check) to obtain
$$\begin{pmatrix} 1 & k & 1 \\ 0 & 1-k^2 & 1-k \\ 0 & 0 & k^2 + k - 2 \end{pmatrix}$$
And I observe that the last row is null iff $k = 1$ or $k = -2$, since $k^2 + k - 2 = (k-1)(k+2)$. So I can conclude that the rank is two for $k = -2$, one for $k = 1$ (where the second row vanishes as well), and three for every other value. So far so good.
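As a numerical sanity check of these conclusions (SymPy here is my own addition, not part of the original working), the ranks at the special values of $k$ come out as claimed:

```python
from sympy import Matrix

def A(k):
    # The matrix from the question, with a concrete value of k plugged in
    return Matrix([[1, k, 1],
                   [k, 1, 1],
                   [1, 1, k]])

print(A(1).rank())    # 1  (all three rows are equal)
print(A(-2).rank())   # 2  (each row sums to zero, so rank < 3)
print(A(5).rank())    # 3  (a generic value: full rank)
```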
- My big question is this: let's say I don't think much, and I notice that, right after the very first reduction, I could multiply both the second and the third row by $\frac{1}{1-k}$, assuming $k \neq 1$. I would then get
$$\begin{pmatrix} 1 & k & 1 \\ 0 & 1+k & 1 \\ 0 & 1 & -1 \end{pmatrix}$$
But this time there is no way for the rank to drop below two (for $k = -2$ I still get rank two).
So dividing by $1-k$ apparently makes the result wrong.
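For reference, here is a small SymPy comparison (my own addition, using the matrices as stated in the question) of the rank of the original matrix against the rank of the matrix obtained after dividing by $1-k$, at a few sample values with $k \neq 1$:

```python
from sympy import Matrix, Rational

def original(k):
    return Matrix([[1, k, 1],
                   [k, 1, 1],
                   [1, 1, k]])

def after_division(k):
    # The matrix obtained by dividing R2 and R3 by (1 - k);
    # only defined for k != 1
    return Matrix([[1, k, 1],
                   [0, 1 + k, 1],
                   [0, 1, -1]])

for k in [-2, -1, 0, 2, Rational(1, 2)]:
    print(k, original(k).rank(), after_division(k).rank())
```

At every sampled value with $k \neq 1$ the two ranks agree; the only value the divided form cannot say anything about is $k = 1$ itself, which was excluded by assumption.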
Why is that? How should I know to avoid this operation, which seems just as legitimate to me as the $R_2 - k R_1$ step with $k \neq 0$ that I did at the beginning?
Thank you so much!