What are some interesting examples of nontrivial applications of linear algebra to other areas of mathematics?

When motivating the study of linear algebra, authors typically state that linear algebra is ubiquitous in almost every branch of mathematics, and science in general.

The canonical example is the system of linear equations: physical/mathematical systems can often be described by a set of linear equations, and linear algebra gives us the tools to solve them.

To me this example is trivial and unconvincing, because linear algebra hardly brings anything new to the table besides a systematic notation and a marginal gain in computational efficiency. Indeed, given a set of $n$ equations in $n$ unknowns, I could just as well solve it the old-fashioned way via Gaussian elimination without uttering the word "matrix" once, with a complexity of $O(n^3)$. On the other hand, the best known algorithms for inverting matrices run in $O(n^{2.376})$ time, which is an asymptotic improvement but hardly groundbreaking in everyday practice. Besides, these algorithms are impractical to carry out by hand.
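For concreteness, the "old-fashioned" elimination referred to above can be written out as a short routine; this is a minimal sketch with a made-up $2 \times 2$ system, not a production solver:

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by elimination with partial pivoting: O(n^3) operations."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n):
        # Partial pivoting: bring the largest remaining pivot to row k.
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        # Eliminate the entries below the pivot.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gaussian_elimination(A, b))  # matches np.linalg.solve(A, b)
```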

This brings me to my question: what are some examples of applications of linear algebra to branches of mathematics that genuinely shed a new light on a problem, that allow us to do things that would have been impossible otherwise?

There are 2 best solutions below

---

We don't use linear algebra to solve equations because that is all it can do. We use it to gain a new perspective on a familiar problem, with concise notation and an entirely different ruleset for manipulation.

You say that rewriting a system of linear equations in the form $Ax = b$, finding $A^{-1}$, and solving via $x = A^{-1}b$ is a trivial reformulation? Perhaps, but there is a lot of non-trivial equation solving in linear algebra as well.

For instance, if you have more equations than variables (which is quite common in applications), so that the coefficient matrix $A$ has more rows than columns, then you can't simply invert $A$. However, you can multiply both sides on the left by $A^T$ to get $A^TAx = A^Tb$, and invert $A^TA$ instead (which is possible whenever $A$ has full column rank). The fact that $x = (A^TA)^{-1}A^Tb$ is a least-squares approximation is definitely not trivial.
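The normal-equations recipe above can be sketched in a few lines of NumPy; the overdetermined system here is made up for illustration:

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns (hypothetical data).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Normal equations: x = (A^T A)^{-1} A^T b
x_normal = np.linalg.inv(A.T @ A) @ A.T @ b

# The same least-squares solution via NumPy's dedicated routine.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(x_normal, x_lstsq)  # the two agree
```

In practice one would use `np.linalg.lstsq` (or a QR factorization) rather than explicitly inverting $A^TA$, which is numerically less stable; the explicit form is shown only to mirror the formula in the text.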

If you want an application that is not equation solving, consider, for instance, the singular value decomposition: applied to a matrix of greyscale pixel values, it lets you split an image into a sum of rank-one layers, with the largest singular values capturing the dominant large-scale structure and the smaller ones the fine detail and noise, which is the basis of low-rank image compression. Or consider the adjacency matrix of a graph, which gives us the entire branch of spectral graph theory, where properties of a graph are encoded in the eigenvalues and eigenvectors of that matrix.
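The SVD idea can be demonstrated on a tiny synthetic "image" (a smooth gradient plus a sharp edge, chosen so that the matrix has low rank); this is an illustrative sketch, not an image-processing pipeline:

```python
import numpy as np

# A small synthetic greyscale "image": a horizontal gradient plus a step edge.
# By construction it is a sum of two rank-one matrices, so its rank is 2.
img = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((64, 1))
img[:, 32:] += 1.0

U, s, Vt = np.linalg.svd(img, full_matrices=False)

# Rank-k approximation: keep only the k largest singular values.
k = 2
approx = (U[:, :k] * s[:k]) @ Vt[:k, :]

err = np.linalg.norm(img - approx) / np.linalg.norm(img)
print(f"relative error of rank-{k} approximation: {err:.2e}")
```

For a real photograph the error would not vanish, but it typically drops quickly with $k$, which is exactly why keeping only the top singular triples compresses the image well.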

---

I suspect that you misunderstood what the ubiquity of linear algebra means. Or perhaps it is more accurate to say: its ubiquity is precisely what gives us the huge benefit of economy of thinking and learning.

Basically linearity is everywhere.

  1. In multivariable calculus the nature of a critical point (local maximum/minimum, saddle point) is revealed by studying the eigenvalues of the Hessian.
  2. Actually in all of multivariable calculus and differential geometry we have the maxim: derivative (=best linear approximation) is useful because it linearizes the phenomenon we want to study. For example the implicit function theorem can be reformulated by stating that (in general position) the intersection of (hyper)surfaces can be approximated by the intersection of their linear approximations (tangent planes, lines, whatnot).
  3. Big chunks of the theory of differential equations can be more quickly absorbed by using tools of vector spaces of functions.
  4. The same applies to a lot of advanced analysis: problems are best set up in vector spaces of functions (and/or their dual spaces).
  5. Elsewhere, the theory of error-correcting codes (e.g. digital tv, cellular phones, QR-codes) relies heavily on linear algebra over finite fields.
  6. The inner working of 3D-graphics routines in computer graphics and games depends on our ability to combine rotations and such using matrices (or quaternions).
  7. IIRC huge chunks of quantum mechanics are written using operators on vector spaces (they appear e.g. via differential equations or Lie groups).
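Point 1 of the list can be made concrete in a few lines; the function $f(x, y) = x^2 - y^2$ and its Hessian at the origin are a standard textbook example:

```python
import numpy as np

# f(x, y) = x^2 - y^2 has a critical point at the origin,
# where its Hessian (matrix of second partial derivatives) is:
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])

# The Hessian is symmetric, so eigvalsh (for Hermitian matrices) applies.
eigvals = np.linalg.eigvalsh(H)

if np.all(eigvals > 0):
    kind = "local minimum"
elif np.all(eigvals < 0):
    kind = "local maximum"
elif np.all(eigvals != 0):
    kind = "saddle point"
else:
    kind = "inconclusive (degenerate critical point)"

print(kind)  # saddle point
```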

Et cetera.

Learn it once, and then use it. You don't have to relearn it every time you encounter a new area written in a similar language.

My point is that you don't need to understand linear algebra inside out to do any single one of the above (or whatever others will add to the list). You can always try to cop out and learn just the bare minimum required for the job. Maybe you can even learn to use a tool without understanding the underlying linear algebra? But then you run the risk of becoming a mathematical broiler (= someone trained and raised for a single purpose only, not unlike what some chicken farmers do to the fowls in these parts).