Until now I have been using elementary row operations to do Gaussian elimination and to compute the inverse of a matrix.
When I started learning Laplace expansion to calculate the determinant of an $n \times n$ matrix, I noticed that the book uses elementary column operations. I tried to use column operations to do Gaussian elimination and to solve a system $Ax = b$, but it didn't work (it gives a wrong answer).
I'm getting confused!
Example:
Let
$$A=\left[\begin{array}{rrr}
2 & 1 & 3 \\
4 & 4 & 2 \\
1 & 1 & 4 \\
\end{array}\right], \qquad b= \left[\begin{array}{r}10\\8\\16\end{array}\right],$$
where the columns of $A$ correspond to the unknowns $x_1, x_2, x_3$.
In this case, if I interchange two rows, add one row to another, or multiply a row by a nonzero scalar, the answer always comes out as
$$x = \begin{bmatrix}-2\\2\\4\end{bmatrix},$$
but if I interchange, for example, the columns of $x_1$ and $x_2$, or add one column to another, I get a different answer.
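For example, if I swap the columns under $x_1$ and $x_2$ and then eliminate (assuming I haven't made an arithmetic slip), I get
$$\left[\begin{array}{rrr}
1 & 2 & 3 \\
4 & 4 & 2 \\
1 & 1 & 4 \\
\end{array}\right] y = \left[\begin{array}{r}10\\8\\16\end{array}\right], \qquad y = \begin{bmatrix}2\\-2\\4\end{bmatrix},$$
i.e., the values of $x_1$ and $x_2$ come out exchanged.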
So why do column operations work for some tasks (like computing determinants) and not for others? How do you know when it is safe to use column operations?
It would be great if anybody could help!
Reminder:
Elementary row/column operations (each can be written as multiplication by an elementary matrix, sketched below):
1. Interchanging two rows (or columns),
2. Adding a multiple of one row (or column) to another,
3. Multiplying any row (or column) by a nonzero scalar.
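Each of these operations amounts to multiplication by an invertible elementary matrix $E$, obtained by applying the operation to the identity matrix: a row operation on $A$ is the same as forming $EA$, and the corresponding column operation is the same as forming $AE$. For instance, with
$$E = \left[\begin{array}{rrr}
0 & 1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 1 \\
\end{array}\right],$$
$EA$ swaps the first two rows of $A$, while $AE$ swaps its first two columns.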
Manipulating rows amounts to manipulating the equations of the system without changing the order of the unknowns, so interchanging rows has no effect on the solution of the system. Interchanging columns, however, is equivalent to changing the order of the variables: the same values come out, but in permuted positions, so to read off the original solution you have to swap the corresponding entries of $x$ back. In the same manner one can see that adding one column to another is not a legitimate move when solving: adding the column of $x_1$ to the column of $x_2$ turns the first equation from $2\alpha + 1\beta + 3\gamma = 10$ into $2\alpha + 3\beta + 3\gamma = 10$ (taking $\alpha, \beta, \gamma$ to be the variables in that order), and these two equations agree only when $\beta = 0$, which is not the case here.
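To make the bookkeeping explicit, here is the standard matrix-level argument (spelled out for this question; it is not specific to this example). A row operation multiplies the system on the left by an invertible elementary matrix $E$, turning $Ax = b$ into $(EA)x = Eb$, which has exactly the same solution set. A column operation instead multiplies $A$ on the right:
$$Ax = b \iff (AE)\,(E^{-1}x) = b,$$
so solving the column-operated system $(AE)y = b$ produces $y = E^{-1}x$ rather than $x$, and the true solution must be recovered as $x = Ey$. For a column swap, $E^{-1} = E$, which is why the solution merely comes out with two entries exchanged; for a column addition, $E^{-1}$ is a column subtraction, so the entries get mixed together. This also explains why your book can freely use column operations for determinants: since $\det(A^{\mathsf T}) = \det(A)$, every column operation changes the determinant in exactly the same way as the corresponding row operation.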