I was studying linear algebra from David C. Lay, fourth edition. On page 57, in the discussion of linear independence of column vectors:
Let $\overrightarrow{A}=0\hat{i}+\hat{j}+5\hat{k}=\begin{pmatrix}0\\ 1\\ 5\end{pmatrix}$, $\overrightarrow{B}=\hat{i}+2\hat{j}+8\hat{k}=\begin{pmatrix}1\\ 2\\ 8\end{pmatrix}$, and similarly $\overrightarrow{C}=\begin{pmatrix}4\\ -1\\ 0\end{pmatrix}$.
$\overrightarrow{A}$, $\overrightarrow{B}$, and $\overrightarrow{C}$ are linearly independent $\Leftrightarrow$ $H\alpha=\begin{pmatrix}0 & 1 & 4\\ 1 & 2 & -1\\ 5 & 8 & 0\end{pmatrix}\begin{pmatrix}\alpha_{1}\\ \alpha_{2}\\ \alpha_{3}\end{pmatrix}=0 \Longrightarrow \alpha_{1}=\alpha_{2}=\alpha_{3}=0$.
$H\sim H_{RRE}$ (where $H_{RRE}$ is the reduced row echelon form) $\Longrightarrow$ $H\alpha=0$ and $H_{RRE}\alpha=0$ have the same solution set.
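This claim is easy to check numerically. Below is a quick sanity check in plain Python (a sketch, not from the book; `rref` is a minimal exact-arithmetic helper I wrote for illustration): row-reducing $H$ yields the identity, so $H\alpha=0$ forces $\alpha=0$ and the three vectors are independent.

```python
from fractions import Fraction

def rref(rows):
    """Reduce a matrix (list of rows) to reduced row echelon form, exactly."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot_row = 0
    for col in range(len(m[0])):
        # find a row at or below pivot_row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        p = m[pivot_row][col]
        m[pivot_row] = [x / p for x in m[pivot_row]]      # scale pivot row to 1
        for r in range(len(m)):
            if r != pivot_row and m[r][col] != 0:          # clear the rest of the column
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

H = [[0, 1, 4], [1, 2, -1], [5, 8, 0]]
print(rref(H) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # True: only the trivial solution
```

Since every column of $H_{RRE}$ has a pivot, there are no free variables, which confirms the independence of $\overrightarrow{A}$, $\overrightarrow{B}$, $\overrightarrow{C}$.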
My problem is this:
In the process of reducing $H$ to RRE form, many authors use row operations. To me this seems just as unjustified as applying column operations in the following example.
Example: to solve $-3x_{1}+6x_{2}-x_{3}=0$, $9x_{1}+18x_{2}+3x_{3}=0$, $6x_{1}+12x_{2}+2x_{3}=0$, note that this system is equivalent to $Q\alpha=\begin{pmatrix}-3 & 6 & -1\\ 9 & 18 & 3\\ 6 & 12 & 2\end{pmatrix}\begin{pmatrix}\alpha_{1}\\ \alpha_{2}\\ \alpha_{3}\end{pmatrix}=0$.
As we can see, if we apply the column operation $C_{3}\rightarrow C_{3}-\frac{C_{1}}{3}$ to this matrix, we get $\begin{pmatrix}-3 & 6 & 0\\ 9 & 18 & 0\\ 6 & 12 & 0\end{pmatrix}\begin{pmatrix}\alpha_{1}\\ \alpha_{2}\\ \alpha_{3}\end{pmatrix}=0$. Now, in my view, the system is completely destroyed: you can no longer express the pivotal variables in terms of the non-pivotal variables.
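One concrete way to see the damage (a sketch in plain Python; `matvec` is a helper name of my own): after the column operation, $\alpha=(0,0,1)$ solves the modified system, but it does not solve the original one.

```python
def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(a * b for a, b in zip(row, v)) for row in M]

Q = [[-3, 6, -1], [9, 18, 3], [6, 12, 2]]
# Q after the column operation C3 -> C3 - C1/3 (third column zeroed out):
Q_col = [[-3, 6, 0], [9, 18, 0], [6, 12, 0]]

alpha = [0, 0, 1]
print(matvec(Q_col, alpha))  # [0, 0, 0]   -- solves the modified system
print(matvec(Q, alpha))      # [-1, 3, 2]  -- fails the original system
```

So column operations on $Q$ genuinely change the solution set of $Q\alpha=0$.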
On the other hand, if we apply only row operations, we get $\begin{pmatrix}-3 & 6 & -1\\ 0 & 36 & 0\\ 0 & 0 & 0\end{pmatrix}\begin{pmatrix}\alpha_{1}\\ \alpha_{2}\\ \alpha_{3}\end{pmatrix}=0 \Longrightarrow \alpha=\begin{pmatrix}-\frac{k}{3}\\ 0\\ k\end{pmatrix}=k\begin{pmatrix}-\frac{1}{3}\\ 0\\ 1\end{pmatrix}$.
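And this solution does satisfy the original system, which we can verify directly (a sketch using exact rational arithmetic, taking $k=1$):

```python
from fractions import Fraction

Q = [[-3, 6, -1], [9, 18, 3], [6, 12, 2]]
alpha = [Fraction(-1, 3), 0, 1]   # the solution found by row reduction, with k = 1
residual = [sum(a * b for a, b in zip(row, alpha)) for row in Q]
print(residual == [0, 0, 0])      # True: row operations kept the solution set intact
```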
Now we can see that in the case of matrix $Q$, the equations were arranged in rows, and column operations changed the solution set. Similarly, in matrix $H$, where the vectors are arranged as columns, shouldn't any row operation change the solution set? Why do many authors use row operations on matrices like $H$?
It can be proved that the row rank of a matrix always equals its column rank.
So if we are only interested in the dimension of the column space, we can instead determine the dimension of the row space by performing row operations and obtaining the reduced row echelon form. However, in this process we do not get a basis for the column space: the vectors appearing in the rows of the reduced matrix are not necessarily in the column space.
If you want to obtain the dimension and a basis of the column space at the same time, the only way is to perform column operations, or equivalently, row operations on the transpose of your matrix.
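The rank equality above can be illustrated on the $Q$ from the question (a sketch in plain Python; `rank` is a minimal forward-elimination helper I wrote for illustration, and transposing $Q$ turns row operations into column operations on the original):

```python
from fractions import Fraction

def rank(rows):
    """Rank via forward elimination over exact rationals (count pivot rows)."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        pr = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

Q = [[-3, 6, -1], [9, 18, 3], [6, 12, 2]]
Qt = list(map(list, zip(*Q)))   # transpose: row ops here = column ops on Q
print(rank(Q), rank(Qt))        # 2 2 -- row rank equals column rank
```

Either reduction reports dimension 2, but only the reduction of $Q^{T}$ (i.e., column operations on $Q$) leaves you with vectors that actually lie in the column space of $Q$.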