I continue to be amazed that row reduction is still, by far, the single most useful technique I've learned for solving linear algebra problems. Even as I do practice problems for my qualifying exam, my first thought is, "Is there something I can row reduce?" So I thought it might be fun/interesting to come up with something like a "complete" list of applications for row reduction, and I'd like to ask if you have anything to add!
Here's what I've got:
Basic Applications
- Find a basis for the row space
- Find a basis for the span of a set of vectors
- Find a basis for the column space (i.e. range of a transformation)
- Find the rank of a matrix
- Find a basis for the null space
- Solve $Ax = 0$
- Solve $Ax = b$
- Find the inverse of a matrix
- Calculate $\det A$
- Check if a set of vectors is linearly independent
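As a concrete illustration (my own sketch, with a made-up example matrix, not taken from any particular text), a minimal rational-arithmetic RREF routine answers several of the basic items at once: the pivot columns give the rank and locate a basis for the column space, and the reduced form exhibits the null space.

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form over the rationals; returns (R, pivot_columns)."""
    R = [[Fraction(x) for x in row] for row in rows]
    m, n = len(R), len(R[0])
    pivots, r = [], 0
    for c in range(n):
        if r == m:
            break
        # find a row at or below position r with a nonzero entry in column c
        pr = next((i for i in range(r, m) if R[i][c] != 0), None)
        if pr is None:
            continue                        # no pivot in this column
        R[r], R[pr] = R[pr], R[r]           # swap it into position
        pivot = R[r][c]
        R[r] = [x / pivot for x in R[r]]    # scale the pivot to 1
        for i in range(m):                  # clear the rest of the column
            if i != r and R[i][c] != 0:
                f = R[i][c]
                R[i] = [a - f * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

R, pivots = rref([[1, 2, 0, 3],
                  [2, 4, 1, 7],
                  [0, 0, 1, 1]])
print(pivots)   # [0, 2] -> rank 2; columns 0 and 2 are a column-space basis
```

Solving $Ax = b$ and finding $A^{-1}$ are then the same routine applied to the augmented matrices $[A \mid b]$ and $[A \mid I]$.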
Advanced or Niche Applications
- Compute the LU, LDU, or LDL decompositions of a matrix
- Per Linear Algebra Done Wrong, diagonalize the matrix of a quadratic form
- For symmetric/Hermitian matrices, find the numbers of positive, negative, and zero eigenvalues (by Sylvester's law of inertia, these match the signs of the pivots)
- By extension of the previous item, test whether a symmetric/Hermitian matrix is positive definite
- Per Hoffman and Kunze, show that the characteristic polynomial of a companion matrix equals its minimal polynomial
- Per Hoffman and Kunze, one can use row reduction (and properties of multilinear functions) to show that $\det \begin{bmatrix} A & B \\ 0 & C \end{bmatrix} = \det A \det C$ for a block triangular matrix
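To make the pivot-sign item above concrete, here is a small sketch of my own (the example matrices are mine, not from the books cited): symmetric Gaussian elimination with no row swaps, assuming all leading principal minors are nonzero so that every pivot is nonzero. By Sylvester's law of inertia the pivot signs then match the eigenvalue signs, which also gives the positive-definiteness test.

```python
from fractions import Fraction

def pivot_signs(rows):
    """Signs of the pivots of a real symmetric matrix under Gaussian
    elimination with no row swaps (assumes all leading principal minors
    are nonzero, so every pivot is nonzero)."""
    A = [[Fraction(x) for x in row] for row in rows]
    n = len(A)
    signs = []
    for k in range(n):
        p = A[k][k]
        signs.append(1 if p > 0 else -1)
        for i in range(k + 1, n):           # eliminate below the pivot
            f = A[i][k] / p
            A[i] = [a - f * b for a, b in zip(A[i], A[k])]
    return signs

print(pivot_signs([[2, 1], [1, 3]]))   # [1, 1]  -> positive definite
print(pivot_signs([[1, 2], [2, 1]]))   # [1, -1] -> indefinite
```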
It is applications like the last six that I am hoping to add to this list (along with any basic applications I have missed). I have done a lot of googling, but unfortunately most sources one finds are aimed at very new linear algebra students, and I thought Stack Exchange could supply some interesting/surprising/advanced applications I hadn't thought of. Thanks for your input!
Another case not in the list:
The computation of a product of the form $CA^{-1}B$, where $C, A, B$ have respective dimensions $m \times n$, $n \times n$, and $n \times p$.
(Such a product occurs in particular in the computation of a Schur complement.)
This computation is done by Gaussian elimination on the following matrix:
$$\begin{pmatrix}A & B \\ -C & 0\end{pmatrix} \to \begin{pmatrix}A' & B' \\ 0 & D'\end{pmatrix}$$
using row operations that zero out the first $n$ columns below $A$, giving $D' = CA^{-1}B$.
Reference: Lemma 10.6 p. 269 of "Fundamental Problems of Algorithmic Algebra" by Chee Keng Yap, Oxford University Press, 2000.
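A small numerical sketch of the lemma (the matrices below are my own toy example, and I assume no row swaps are needed): forward-eliminate the first $n$ columns of the bordered matrix and read $CA^{-1}B$ off the bottom-right block.

```python
from fractions import Fraction

# Toy example: n = 2, m = 2, p = 3.
A = [[2, 1], [1, 3]]        # n x n, invertible
B = [[1, 0, 2], [0, 1, 1]]  # n x p
C = [[1, 1], [2, 0]]        # m x n
n, m, p = 2, 2, 3

# Border the blocks as [[A, B], [-C, 0]].
M = [[Fraction(x) for x in Arow + Brow] for Arow, Brow in zip(A, B)]
M += [[-Fraction(x) for x in Crow] + [Fraction(0)] * p for Crow in C]

# Forward elimination on the first n columns (no row swaps needed here):
for k in range(n):
    for i in range(k + 1, n + m):
        f = M[i][k] / M[k][k]
        M[i] = [a - f * b for a, b in zip(M[i], M[k])]

# The bottom-left block is now 0 and the bottom-right m x p block is C A^{-1} B.
D = [row[n:] for row in M[n:]]
print(D)
```

Note that no inverse is ever formed explicitly; the elimination multiplies the matrix on the left by a block unit lower triangular factor whose effect on the bottom row of blocks is exactly adding $CA^{-1} \cdot [\,A \ \ B\,]$ to $[\,-C \ \ 0\,]$.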