Once we perform row reduction on a matrix, to put it in row echelon form, what does this tell us? How should we interpret the results?
This question is intended as an FAQ. While plenty of questions are answered about this on MSE, I could not find a good, general, definitive answer on this topic. Please feel free to edit the answer below if you feel it needs tweaking or updating.
Systems of Linear Equations
This is the most common context in which you will perform row reduction to row echelon form, and need to interpret the result.
We will assume you know that a system of linear equations in the variables $x_1, x_2, \ldots, x_n$ takes the form: $$\begin{matrix} a_{11}x_1&+&a_{12}x_2&+&\cdots&+&a_{1n}x_n&=&b_1\\ a_{21}x_1&+&a_{22}x_2&+&\cdots&+&a_{2n}x_n&=&b_2\\ \vdots&&\vdots&&\ddots&&\vdots&&\vdots\\ a_{m1}x_1&+&a_{m2}x_2&+&\cdots&+&a_{mn}x_n&=&b_m, \end{matrix} \tag{1}$$ where the $a_{ij}$s and $b_i$s are constants, and that such a system is represented by an augmented matrix: $$\left(\begin{array}{cccc|c} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{array}\right).$$ To solve this system, you will have applied row reduction to this augmented matrix, which we assume you're capable of doing. The question is, how do we interpret the results?
Step 1: Check for consistency
Start by looking at the rows that are $0$ to the left of the augmented column. If the matrix has a zero row with a non-zero value in the augmented column, then stop: the system is inconsistent. That is, there are no solutions to the system, regardless of any other rows in the matrix.
If the matrix has no zero rows, or all the zero rows extend to zeros in the augmented column, then the matrix is consistent, meaning there is either a unique solution, or infinitely many solutions, and you should proceed to the following steps.
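This consistency check can be sketched in code. Here is a minimal example using the sympy library (the particular system is just an illustration): a system is consistent exactly when no zero row has a non-zero augmented entry, which is equivalent to the rank of the coefficient matrix matching the rank of the augmented matrix.

```python
from sympy import Matrix

# An illustrative coefficient matrix A and right-hand side b
A = Matrix([[1, -1], [-2, 2]])
b = Matrix([2, 3])

# The system is consistent iff no row of the reduced augmented matrix
# is zero on the left but non-zero in the augmented column.
# Equivalently: rank(A) == rank([A | b]).
augmented = A.row_join(b)
consistent = A.rank() == augmented.rank()
print(consistent)  # False: this particular system is inconsistent
```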
Step 2: Check for free variables/uniqueness of the solution
While this step is not strictly necessary, it's a good habit to get into, and a quick way of determining if there are infinitely many solutions.
Find the pivots of the row-echelon augmented matrix. Is there a pivot in every column of the coefficient part (i.e. every column to the left of the augmented column)? If so, then there is a unique solution to your system. If not, then there are infinitely many solutions: each column without a pivot corresponds to a free variable. Make a note of which variables are free, and which ones are not.
Note 1: The number of zero rows in the row-echelon matrix does not impact whether or not the system has a unique solution (this is a common misconception). It's perfectly possible for a system with more equations than variables, producing a "tall" rectangular matrix, to have a pivot in each column and also have zero rows.
Note 2: Systems with more variables than equations, on the other hand, produce coefficient matrices with more columns than rows, and given there's at most one pivot per row, there must always be columns without pivots, i.e. free variables. Such systems can still be inconsistent (see step 1), but if they are consistent, they must have infinitely many solutions.
Note 3: If you have the same number of equations as variables, zero rows will guarantee columns without pivots and vice-versa. So, in this specific case, the system has infinitely many solutions if the row echelon matrix contains a zero row. But this logic only works in the specific case where the numbers of equations and variables are the same.
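If you have access to a computer algebra system, the pivot bookkeeping in this step can be automated. A small sketch with sympy (the matrix is an arbitrary example): `rref()` returns the reduced matrix together with the indices of the pivot columns, and any coefficient column missing from that list is a free variable.

```python
from sympy import Matrix

# Augmented matrix of a consistent system in 3 variables
M = Matrix([[1, 2, 0, 1],
            [0, 0, 1, 3]])

rref, pivot_cols = M.rref()   # pivot_cols lists the pivot column indices
n_vars = M.cols - 1           # exclude the augmented column
pivots_in_coeff = [c for c in pivot_cols if c < n_vars]
free_vars = [c for c in range(n_vars) if c not in pivots_in_coeff]
print(free_vars)  # [1]: x_2 is free, so there are infinitely many solutions
```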
Step 3: Find the general solution
To make this step easier, we deal with some special cases separately: specifically when there are no free variables (i.e. the solution is unique), and when the matrix is not just in row-echelon form, but reduced row-echelon form (RREF), a type of row-echelon form that will make this step easier (found by Gauss-Jordan elimination).
Case a: In RREF and there are no free variables
In this case, your reduced row echelon form matrix will look like this: $$\left(\begin{array}{cccc|c} 1 & 0 & \cdots & 0 & c_1 \\ 0 & 1 & \cdots & 0 & c_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & c_n \\ 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 0 & 0 \end{array}\right).$$ There may or may not be $0$ rows underneath. As a system of equations, it reads: \begin{align*} x_1 &= c_1 \\ x_2 &= c_2 \\ & \;\vdots \\ x_n &= c_n, \end{align*} as well as a bunch of $0 = 0$ equations from the $0$ rows. Essentially, this is the solution laid out for you: each $x_i$ has a unique value. This is the easiest case to deal with.
Case b: Not in RREF and there are no free variables
Like in the previous case, the solution is still unique, but we have to work a little harder to find it. Our matrix looks like this: $$\left(\begin{array}{cccccc|c} 1 & * & * & \cdots & * & * & c_1 \\ 0 & 1 & * & \cdots & * & * & c_2 \\ 0 & 0 & 1 & \cdots & * & * & c_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & * & c_{n-1} \\ 0 & 0 & 0 & \cdots & 0 & 1 & c_n \\ 0 & 0 & 0 & \cdots & 0 & 0 & 0 \\ 0 & 0 & 0 & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 0 & 0 \end{array}\right).$$ Here, the $*$s can be any value, and once again, the $0$ rows are optional. To find the unique solution, you start with the last non-zero equation, which reads $$x_n = c_n.$$ This tells you the unique value of $x_n$. The next equation looks like: $$x_{n-1} + {*x_n} = c_{n-1}.$$ You can then substitute the value of $x_n$ to find the unique value of $x_{n-1}$. The next equation up involves $x_{n-2}$, $x_{n-1}$ and $x_n$, and using the two known values, you can determine $x_{n-2}$. Keep going in this fashion, until you have the unique values of all the variables.
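The back substitution described above can be sketched as follows, using sympy for exact arithmetic. The matrix is a small hypothetical example, and the sketch assumes the coefficient part of the row-echelon matrix is square with all pivots equal to $1$ on the diagonal (i.e. no zero rows are included).

```python
from sympy import Matrix

# A row-echelon (not reduced) augmented matrix with a unique solution:
#   x1 + 2*x2 = 5
#        x2   = 2
M = Matrix([[1, 2, 5],
            [0, 1, 2]])

n = M.cols - 1   # number of variables
x = [0] * n
# Work upward from the last row, substituting the values found so far.
for i in range(n - 1, -1, -1):
    x[i] = M[i, n] - sum(M[i, j] * x[j] for j in range(i + 1, n))
print(x)  # [1, 2]
```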
Case c: In RREF and there are free variables
Write out the equations from the non-zero rows, and move all but the first term (the term corresponding to the pivot entry, i.e. the first non-zero number in the row) to the other side of the equation. If the pivot lies in the $i$th column, you get $x_i$ on the left, equal to a constant (from the augmented column) minus whatever terms used to be on the left side of the equation. This expresses the non-free variable $x_i$ in terms of the free variables. This is the best you can do: simply express the non-free variables in terms of the free variables, and allow the free variables to take any value. This gives you a full solution, parameterised by the free variables.
For example, consider the following augmented matrix in RREF: $$\left(\begin{array}{cccc|c} \color{red}1 & -1 & 0 & 3 & 3 \\ 0 & 0 & \color{red}1 & 0 & 3 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right).$$ Note that the pivots are coloured red. The columns without pivots correspond to free variables: $x_2$ and $x_4$, while $x_1$ and $x_3$ are not free. As equations, we get two non-zero equations: \begin{align*} x_1 - x_2 + 3x_4 &= 3 \\ x_3 &= 3. \end{align*} Clearly, the value of $x_3$ is unique. On the other hand, we have $$x_1 = 3 + x_2 - 3x_4,$$ which expresses $x_1$ in terms of the free variables $x_2$ and $x_4$. So, our full solution is $$(x_1, x_2, x_3, x_4) = (3 + x_2 - 3x_4, x_2, 3, x_4),$$ where $x_2$ and $x_4$ can take any value.
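As a sanity check, sympy's `linsolve` can reproduce the parameterised solution directly from this augmented matrix; when passed the variable symbols, it expresses the solution set in terms of the free variables.

```python
from sympy import Matrix, symbols, linsolve

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
A = Matrix([[1, -1, 0, 3],
            [0,  0, 1, 0],
            [0,  0, 0, 0],
            [0,  0, 0, 0]])
b = Matrix([3, 3, 0, 0])

# The solution set comes out parameterised by the free variables x2 and x4.
solution = linsolve((A, b), x1, x2, x3, x4)
print(solution)
```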
Case d: Not in RREF and there are free variables
This is a mix between cases b and c. Like in case b, we start with the last non-zero row, and like in case c, we express the first (pivot) term in terms of the other variables in the equation (which all must be free variables). This gives us our last non-free variable in terms of the free variables after it.
The next equation up can also be solved in terms of other variables, but the terms may include the previous non-free variable that we solved for, in addition to other free variables. Like in case b, back substitution is necessary to eliminate the non-free variable. Continuing in this fashion, i.e. solving for the pivot variable, and back-substituting previous expressions for non-free variables, we can obtain the general solution, in much the same fashion as in case c.
Another example:
$$\left(\begin{array}{cccc|c} \color{red}1 & -5 & 2 & 1 & 6 \\ 0 & 0 & \color{red}1 & 2 & -2 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right).$$ This matrix is in row echelon form, but not reduced row echelon form. Once again, $x_1$ and $x_3$ are not free (since there are pivots in column $1$ and $3$), and $x_2$ and $x_4$ are free (no pivots in columns $2$ and $4$). The equations become: \begin{align*} x_1 - 5x_2 + 2x_3 + x_4 &= 6 \\ x_3 + 2x_4 &= -2. \end{align*} We solve the last equation for the pivot variable $x_3$: $$x_3 = -2 - 2x_4,$$ which expresses $x_3$ in terms of free variables (just $x_4$, in this case). The next equation up (the first equation) becomes: $$x_1 = 6 + 5x_2 - 2x_3 - x_4,$$ which expresses $x_1$ in terms of the free variables $x_2$ and $x_4$ and the non-free variable $x_3$. From our previous work, we can substitute the expression for $x_3$ in to get $$x_1 = 6 + 5x_2 - 2(-2 - 2x_4) - x_4 = 10 + 5x_2 + 3x_4$$ This gives us what we need: $x_1$ in terms only of free variables $x_2$ and $x_4$. As in case c, we can turn this into a general solution parameterised only by free variables.
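We can verify the hand computation by substituting the parameterised solution back into the matrix equation. This sketch uses sympy, with the free variables kept as symbols:

```python
from sympy import Matrix, symbols

x2, x4 = symbols('x2 x4')
A = Matrix([[1, -5, 2, 1],
            [0,  0, 1, 2],
            [0,  0, 0, 0]])
b = Matrix([6, -2, 0])

# The general solution derived above, parameterised by x2 and x4:
x = Matrix([10 + 5*x2 + 3*x4, x2, -2 - 2*x4, x4])

residual = (A * x - b).expand()
print(residual)  # Matrix([[0], [0], [0]]): the solution satisfies the system
```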
FAQs
What if a column contains only zeros?

A column with only zeros is still a column without a pivot. This makes the corresponding variable free. What's confusing is that, when you write out the equations in step 3 (cases c or d), that particular variable will not appear anywhere! What this means is that this free variable does not affect the value of any non-free variable: it can take whatever value it wants, without any other variable depending on it. It does not mean you can ignore the variable, or that the variable's value is $0$; it is still a free variable, and you will get infinitely many solutions.
For example, if a system of equations produces the row echelon form $$\left(\begin{array}{ccc|c} 0 & 1 & 2 & 0 \\ 0 & 0 & 1 & 3 \end{array}\right),$$ then we would find $x_3 = 3$ and $x_2 = -6$, but $x_1$ is a free variable: it has no pivot, but also does not affect the value of the other variables. This system has infinitely many solutions, and the general solution is $$(x_1, x_2, x_3) = (x_1, -6, 3),$$ where $x_1$ is a free variable.
What if the row echelon form is entirely zero?

This is only really possible if you start with the zero matrix. What it means is that every variable is free, and no variable affects any other! In other words, every possible combination of values of the variables is a solution.
What if I get two different expressions for the same variable?

This probably means you're not finished row reducing your matrix. When you get to row echelon form, each pivot should be strictly to the right of the previous one. If you're getting two expressions for $x_4$ (or anything else), it means you have two pivots on top of each other, and you should subtract a multiple of one row from the other to eliminate the redundant pivot in that column.
What if there is no augmented column?

You can perform row reduction on any matrix; the augmented column is not necessary. As for what such an operation signifies, or how to interpret the result, it depends on the question. In the context of systems of linear equations, the augmented column is often suppressed when you have a homogeneous system, i.e. a system of linear equations where the right hand sides of all the equations (i.e. the numbers that go in the augmented column) are all $0$. The reason is that no elementary row operation can change this column of $0$s to anything else, so it's a waste of space to keep track of it. You can either add the augmented column of zeros yourself, or remember to put zeros on the right hand side of the equations when you get to step 3.
What about matrix equations?

Matrix equations are essentially just another way to think about systems of linear equations, one which is more helpful in certain circumstances. A matrix equation takes the form $$Ax = b,$$ where $A$ is some $m \times n$ matrix, $x$ is an unknown $n \times 1$ column vector, and $b$ is a known $m \times 1$ constant vector (i.e. not depending on $x$). Note that, if we multiply an arbitrary $m \times n$ matrix by an $n \times 1$ column vector, we get the following: $$ \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\ \vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n \end{pmatrix}. $$ The resulting vector is the left hand side of $(1)$. If we set it equal to an $m \times 1$ column vector $b = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix}$, then equating each entry of the vectors yields precisely the linear system $(1)$.
So, we can change between systems of linear equations and matrix equations easily: we let $A$ be the matrix of coefficients of the linear system, $b$ be the vector of constants on the right hand side of the equations, and let $x$ be a vector of unknowns. To get from matrix equations to augmented matrices, we simply augment the column $b$ onto the matrix $A$ (and then row reduce as normal).
What if I have to solve several systems with the same coefficients?

Sometimes you'll find yourself needing to solve two systems of linear equations that are identical except for the entries in the augmented column (i.e. the constants on the right hand sides of the equations). Fortunately, there's an easy and efficient method: simply augment more columns onto the matrix. Each extra augmented column does not affect how you row reduce the original matrix, so to interpret the result for one system, simply pretend the other augmented columns are not there, and interpret as normal.
For example, let's say we wanted to solve two systems: $$\begin{aligned} x_1 - x_2 &= 2 \\ -2x_1 + 2x_2 &= -4 \end{aligned} \qquad \text{ and } \qquad \begin{aligned} x_1 - x_2 &= 1 \\ -2x_1 + 2x_2 &= 3. \end{aligned}$$ We can form the (doubly) augmented matrix $$\left(\begin{array}{cc|cc} 1 & -1 & 2 & 1 \\ -2 & 2 & -4 & 3 \end{array}\right).$$ It row reduces to: $$\left(\begin{array}{cc|cc} 1 & -1 & 2 & 1 \\ 0 & 0 & 0 & 5 \end{array}\right),$$ from which we can conclude that the first system is consistent (ignoring the second augmented column, it passes Step 1), while the second system is inconsistent (due to the augmented $5$ in the zero row).
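The multi-column trick is easy to mechanise. A sketch with sympy, using the two systems above:

```python
from sympy import Matrix

A = Matrix([[1, -1], [-2, 2]])
B = Matrix([[2, 1], [-4, 3]])      # the two right-hand sides, as columns

doubly_augmented = A.row_join(B)   # [A | b1 b2]
rref, pivots = doubly_augmented.rref()

# Interpret each system separately: consistent iff rank(A) == rank([A | b_i]).
first_consistent = A.rank() == A.row_join(B.col(0)).rank()
second_consistent = A.rank() == A.row_join(B.col(1)).rank()
print(first_consistent, second_consistent)  # True False
```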
Another common use of multiple augmented columns is in finding inverses of matrices; see the section entitled "Finding the inverse of a matrix".
Other uses
Here, we list some other situations where we may have to interpret the row-echelon form of a matrix.
Computing a basis of the rowspace/columnspace
A common linear algebra question involves taking an arbitrary matrix, and finding a basis for the rowspace or columnspace. In either case, row reducing the matrix to row echelon form can help.
If you want a basis for the rowspace, simply read off the non-zero rows of the reduced matrix. These are automatically linearly independent (because the pivots shift to the right with each new row), and since elementary row operations preserve the rowspace, the non-zero rows form a basis for it.
If you want a basis for the columnspace, look at the columns which contain pivots. The corresponding columns of the original matrix form a basis for the columnspace. In general, elementary row operations change the columnspace of a matrix, so the pivot columns of the reduced matrix may not even lie in the columnspace of the original matrix, let alone form a basis for it. However, as it turns out, if you take the columns in those same positions from the original matrix, you will obtain a basis for the columnspace of the original matrix.
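Both recipes can be sketched with sympy (the matrix below is an arbitrary rank-$2$ example). Note that the columnspace basis is drawn from the original matrix, not the reduced one:

```python
from sympy import Matrix

M = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])

rref, pivot_cols = M.rref()
# Basis for the rowspace: the non-zero rows of the reduced matrix.
row_basis = [rref.row(i) for i in range(rref.rows) if any(rref.row(i))]
# Basis for the columnspace: the pivot columns of the ORIGINAL matrix.
col_basis = [M.col(c) for c in pivot_cols]
print(len(col_basis))  # 2: the matrix has rank 2
```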
Determine if a set of vectors in $\Bbb{R}^n$ are linearly (in)dependent
We can form a matrix, either by placing these vectors as rows or as columns, and row reduce. If we placed them as rows, then the vectors are linearly dependent if and only if the row echelon form contains at least one zero row. If we placed them as columns, the vectors are linearly independent if and only if every column contains a pivot; equivalently, they are dependent if and only if some column lacks a pivot.
Note: As we noted previously in Step 2 from the Systems of Linear Equations section, if the matrix has more columns than rows, a pivot cannot occur in each column. This implies the well-known and important fact that, if you take more than $n$ vectors from $\Bbb{R}^n$, you cannot possibly have a linearly independent set.
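A sketch of the columns version of this test with sympy (the four vectors are an arbitrary example chosen to illustrate the note above):

```python
from sympy import Matrix

# Four vectors in R^3 can never be linearly independent.
vectors = [Matrix([1, 0, 2]), Matrix([0, 1, 1]),
           Matrix([1, 1, 3]), Matrix([2, 0, 0])]

M = Matrix.hstack(*vectors)       # place the vectors as columns
_, pivot_cols = M.rref()
independent = len(pivot_cols) == len(vectors)
print(independent)  # False: fewer pivots than vectors
```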
Finding the inverse of a matrix
A common method for finding the inverse of an $n \times n$ square matrix $A$, should one exist, is to augment the $n \times n$ identity matrix to it, to form the matrix $(\begin{array}{c|c}A&I\end{array})$, and row-reduce to reduced row-echelon form.
If the resulting matrix takes the form $(\begin{array}{c|c}I&B\end{array})$ for some $n \times n$ matrix $B$, then $A$ is invertible, and its inverse is $B$. Otherwise, the matrix is not invertible (and the block on the left hand side of the line will contain a zero row, hence fewer than $n$ pivots).
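The $(\begin{array}{c|c}A&I\end{array})$ recipe, sketched with sympy on a small invertible example:

```python
from sympy import Matrix, eye

A = Matrix([[2, 1],
            [1, 1]])

M = A.row_join(eye(2))   # form (A | I)
rref, _ = M.rref()

# A is invertible exactly when the left block reduces to the identity.
assert rref[:, :2] == eye(2), "A is not invertible"
B = rref[:, 2:]          # the right block is then the inverse
print(B)                 # Matrix([[1, -1], [-1, 2]])
```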
Finding the nullspace of a matrix
The nullspace of a matrix is the set of solutions to $Ax = 0$, where $A$ is an $m \times n$ matrix, and $x$ is an unknown $n \times 1$ column vector. This is a matrix equation (see the FAQs above), which is easily interpreted as a system of linear equations. Such a system is always consistent (its solutions include the zero vector at least), but one can follow step 2 to determine whether the nullspace is trivial (i.e. contains only the zero vector), or non-trivial (contains non-zero vectors, and hence infinitely many). Finding the nullspace explicitly involves following step 3 from the Systems of Linear Equations section.
Moreover, the number of columns without pivots (i.e. the number of free variables) counts the dimension of the nullspace, also known as the kernel.
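Both claims can be checked with sympy, whose `nullspace` method returns a basis directly (the rank-$1$ matrix below is an arbitrary example):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])

basis = A.nullspace()          # a basis for the nullspace of A
_, pivot_cols = A.rref()
n_free = A.cols - len(pivot_cols)
print(n_free, len(basis))      # 2 2: free-variable count equals the dimension
```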
Proving eigenvalues/computing eigenspaces
A square matrix $A$ has an eigenvalue $\lambda$ if and only if there is a non-zero $x$ such that $(A - \lambda I)x = 0$, that is, if and only if the nullspace of $A - \lambda I$ contains a non-zero vector. The eigenspace of $\lambda$ is precisely the nullspace of $A - \lambda I$. So, you can prove $\lambda$ is an eigenvalue of $A$ by row reducing $A - \lambda I$, and you can find the eigenspace by computing the nullspace, as in the previous section.
Also, as in the previous section, the geometric multiplicity of the eigenvalue $\lambda$, which is the dimension of this nullspace, can be computed by counting the columns without pivots.
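A short sketch with sympy, using a matrix whose eigenvalue $\lambda = 2$ has algebraic multiplicity $2$ but geometric multiplicity $1$ (the matrix is an illustrative assumption):

```python
from sympy import Matrix, eye

A = Matrix([[2, 1],
            [0, 2]])
lam = 2                         # a known eigenvalue of this matrix

eigenspace = (A - lam * eye(2)).nullspace()  # basis of the eigenspace
print(len(eigenspace))          # 1: the geometric multiplicity of lambda = 2
```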
Note: Row reduction will change the non-zero eigenvalues of a matrix, so it is not possible to row reduce first, and determine the eigenvalues later.