What do column operations mean?


In Chapter 1 of Hoffman and Kunze (HK), row operations are motivated by way of the correspondence between a matrix (perhaps augmented) and the system of linear equations which it represents (namely, the row operations correspond to taking linear combinations of the equations in the original system). At the very end of the chapter, they then assert in passing that one can perform corresponding column operations and that the theory developed carries over nicely:

It must have occurred to the reader that we have carried on a lengthy discussion of the rows of matrices and have said little about the columns. We focused our attention on the rows because this seemed more natural from the point of view of linear equations. Since there is obviously nothing sacred about rows, the discussion in the last sections could have been carried on using columns rather than rows. If one defines an elementary column operation and column-equivalence in a manner analogous to that of elementary row operation and row-equivalence, it is clear that each $m \times n$ matrix will be column-equivalent to a ‘column-reduced echelon’ matrix. Also each elementary column operation will be of the form $A \to AE$, where $E$ is an $n \times n$ elementary matrix-and so on.

But why should this be so? In terms of the correspondence between a matrix and the system of linear equations which it represents, performing column operations seems to utterly destroy the solution set corresponding to a given matrix (say, in its corresponding homogeneous problem). For example, in the system $$4x_1 + 2x_2 = 0$$ $$4x_1 + 2x_2 = 0,$$ represented by $$\begin{bmatrix} 4 & 2\\ 4 & 2 \end{bmatrix},$$ if I multiply the second column by 2, then I clearly change the solution set, don't I?
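A quick numerical check bears this out (using NumPy for the matrix arithmetic; the test vector $(1, -2)$ is just one illustrative solution of the original system):

```python
import numpy as np

# Coefficient matrix of the original homogeneous system 4x1 + 2x2 = 0 (twice).
A = np.array([[4.0, 2.0],
              [4.0, 2.0]])

# x = (1, -2) solves A x = 0, since 4*1 + 2*(-2) = 0.
x = np.array([1.0, -2.0])
print(A @ x)   # [0. 0.]

# Doubling the second column yields a different system...
B = A.copy()
B[:, 1] *= 2   # B = [[4, 4], [4, 4]], i.e. 4x1 + 4x2 = 0

# ...which x no longer solves: 4*1 + 4*(-2) = -4.
print(B @ x)   # [-4. -4.]
```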

I am looking for an answer at the matrix-theoretical level, as HK have (as yet) not introduced the machinery of abstract vector spaces and their corresponding theory.


There is 1 answer below.


You can represent an elementary row operation on a matrix $M$ as left-multiplication by the elementary matrix $E$ representing that operation: $M \to EM$. If you instead multiply by that same matrix on the right, $M \to ME$, you get the corresponding column operation. In general, the transposes of the elementary row-operation matrices execute the same manipulations on columns, since $(E M^{\mathsf T})^{\mathsf T} = M E^{\mathsf T}$. Because the column theory differs from the row theory only by transposes and the order of multiplication, and is otherwise identical, column operations are not discussed much.
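A small sketch of this point (the matrices are arbitrary examples, not from HK): the elementary matrix is obtained by applying the desired operation to the identity, and then $EM$ acts on rows while $ME$ acts on columns.

```python
import numpy as np

M = np.array([[1, 2],
              [3, 4]])

# Elementary matrix for "multiply the second row by 2":
# apply that row operation to the 2x2 identity.
E = np.array([[1, 0],
              [0, 2]])

print(E @ M)   # row operation:    [[1, 2], [6, 8]]
print(M @ E)   # column operation: [[1, 4], [3, 8]]

# Transpose relation: E2 below adds 3 * (row 1) to row 2 when used
# on the left; its transpose, used on the right, adds 3 * (column 1)
# to column 2.
E2 = np.array([[1, 0],
               [3, 1]])
print(M @ E2.T)   # [[1, 5], [3, 13]]
```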