Practical uses of matrix multiplication

Usually, matrix multiplication is first motivated by examples from computer graphics: scalings, translations, rotations, and so on. Then come more in-depth examples, such as counting the number of walks between nodes in a graph using powers of the graph's adjacency matrix.

What are other good examples of using matrix multiplication in various contexts?

There are 7 answers below.

---

Linear discrete dynamical systems, aka recurrence relations, are best studied in a matrix formulation $x_{n+1} = A x_n$. The solution of course is $x_n = A^n x_0$, but the point is to exploit the properties of $A$ to allow the computation of $A^n$ without performing all multiplications. As an example, take the Fibonacci numbers. The formula for them comes directly from this matrix formulation (plus diagonalization).
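As a sketch of the idea above, here is the $O(\log n)$ computation of Fibonacci numbers by repeated squaring of the matrix $\begin{pmatrix}1&1\\1&0\end{pmatrix}$ (function names and the convention $F_0 = 0$, $F_1 = 1$ are illustrative choices):

```python
# Sketch: Fibonacci numbers via powers of the companion matrix,
# assuming the convention F(0) = 0, F(1) = 1.

def mat_mult(a, b):
    # Multiply two 2x2 matrices given as nested lists.
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def mat_pow(m, n):
    # Exponentiation by squaring: O(log n) matrix multiplications
    # instead of n - 1 of them.
    result = [[1, 0], [0, 1]]  # 2x2 identity
    while n > 0:
        if n & 1:
            result = mat_mult(result, m)
        m = mat_mult(m, m)
        n >>= 1
    return result

def fib(n):
    # [[F(n+1), F(n)], [F(n), F(n-1)]] = [[1, 1], [1, 0]]^n,
    # so F(n) is the off-diagonal entry of the n-th power.
    return mat_pow([[1, 1], [1, 0]], n)[0][1]
```

Diagonalizing the same matrix instead of powering it numerically yields Binet's closed-form formula mentioned above.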

Don't forget the origins of matrix multiplication: linear change of coordinates. See, for instance, section 3.4 of Meyer's book (page 93) at http://web.archive.org/web/20110714050059/matrixanalysis.com/Chapter3.pdf.

See also http://en.wikipedia.org/wiki/Matrix_multiplication#Application_Example.

---

Matrix multiplication, and more specifically taking powers of a given matrix $A$, is a useful tool in graph theory, where the matrix in question is the adjacency matrix of a graph or a directed graph.

More generally, one can interpret matrices as representing (possibly weighted) edges in a directed graph which may or may not have loops, and products of matrices as specifying the total number (or total weight) of all the walks with a given structure, between pairs of vertices.
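A minimal sketch of the unweighted case: entry $(i,j)$ of the $k$-th power of the adjacency matrix counts the walks of length $k$ from vertex $i$ to vertex $j$ (the example graph below is an arbitrary illustrative choice):

```python
# Sketch: counting walks of length k between vertices via powers of
# the adjacency matrix.

def mat_mult(a, b):
    # Multiply two square matrices given as nested lists.
    n = len(a)
    return [[sum(a[i][t] * b[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def count_walks(adj, k):
    # Entry (i, j) of adj^k is the number of walks of length k
    # from vertex i to vertex j.
    result = adj
    for _ in range(k - 1):
        result = mat_mult(result, adj)
    return result

# Path graph 0 - 1 - 2: there are two walks of length 2 from vertex 1
# back to itself (via 0 or via 2).
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
```

Replacing the 0/1 entries with edge weights makes the same product compute total walk weights instead of counts.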

---

Matrix multiplication plays an important role in quantum mechanics, and all throughout physics. Examples include the moment of inertia tensor, continuous-time descriptions of the evolution of physical systems using Hamiltonians (especially in systems with a finite number of basis states), and the most general formulation of the Lorentz transformation from special relativity.

General relativity also makes use of tensors, which are a generalization of the sorts of objects that row-vectors, column-vectors, and matrices all are. Very roughly speaking, row- and column-vectors are 'one-dimensional' tensors, having only one index for their coefficients, and matrices are 'two-dimensional' tensors, having two indices for their coefficients, of two different 'kinds' representing rows and columns (input and output, if you prefer). Tensors in general may have three or more indices, and may have more than one index of the same 'kind'.

---

A fundamental example is the multivariate chain rule. A basic principle in mathematics is that if a problem is hard, you should try to linearize it so that you can reduce as much of it as possible to linear algebra. Often this means replacing a function with a linear approximation (its Jacobian), and then composition of functions becomes multiplication of Jacobians. But of course there are many other ways to reduce a problem to linear algebra.
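A small numerical sketch of this: the Jacobian of a composition $g \circ f$ equals the product of the Jacobians of $g$ and $f$. The functions, evaluation point, and finite-difference step below are all illustrative choices:

```python
# Sketch: the multivariate chain rule as Jacobian multiplication,
# checked numerically with forward differences.
import math

def f(x, y):  # f: R^2 -> R^2
    return (x * y, x + y)

def g(u, v):  # g: R^2 -> R^2
    return (math.sin(u), u * v)

def jacobian(func, point, h=1e-6):
    # Forward-difference approximation of the Jacobian at `point`.
    base = func(*point)
    cols = []
    for i in range(len(point)):
        shifted = list(point)
        shifted[i] += h
        out = func(*shifted)
        cols.append([(out[r] - base[r]) / h for r in range(len(base))])
    # Transpose the column list into a row-major matrix.
    return [[cols[j][i] for j in range(len(cols))] for i in range(len(base))]

def mat_mult(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

p = (0.5, 0.25)
composed = lambda x, y: g(*f(x, y))

# Chain rule: J_{g o f}(p) = J_g(f(p)) * J_f(p).
J_direct = jacobian(composed, p)
J_chain = mat_mult(jacobian(g, f(*p)), jacobian(f, p))
```

The two matrices agree up to the finite-difference error, which is the chain rule in action.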

---

High-dimensional problems in statistical physics can sometimes be solved directly using matrix multiplication; see http://en.wikipedia.org/wiki/Transfer_matrix_method. The best-known example of this trick is the one-dimensional Ising model (http://en.wikipedia.org/wiki/Ising_model), where an $N$-particle system can be 'solved' by computing the $N$-th power of a $2 \times 2$ matrix, which is (almost) trivial; otherwise, one would have to compute a sum over $2^N$ terms to get the same result.
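A sketch of that comparison for the zero-field 1D Ising model with periodic boundary conditions: the partition function is computed once by brute force over all $2^N$ spin configurations and once as the trace of the $N$-th power of the $2 \times 2$ transfer matrix $T_{s s'} = e^{\beta J s s'}$ (the parameter values are illustrative):

```python
# Sketch: partition function of the zero-field 1D Ising model with
# periodic boundary conditions, computed two ways.
import itertools
import math

def mat_mult(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(m, n):
    # Exponentiation by squaring for 2x2 matrices.
    result = [[1, 0], [0, 1]]
    while n > 0:
        if n & 1:
            result = mat_mult(result, m)
        m = mat_mult(m, m)
        n >>= 1
    return result

def z_brute_force(n, beta_j):
    # Sum exp(beta*J * sum_i s_i s_{i+1}) over all 2^n spin configurations.
    total = 0.0
    for spins in itertools.product((-1, 1), repeat=n):
        energy_sum = sum(spins[i] * spins[(i + 1) % n] for i in range(n))
        total += math.exp(beta_j * energy_sum)
    return total

def z_transfer_matrix(n, beta_j):
    # Z = Tr(T^n) with transfer matrix T[s][s'] = exp(beta*J * s * s').
    t = [[math.exp(beta_j), math.exp(-beta_j)],
         [math.exp(-beta_j), math.exp(beta_j)]]
    tn = mat_pow(t, n)
    return tn[0][0] + tn[1][1]  # trace
```

The brute-force sum costs $O(2^N)$; the transfer-matrix trace costs $O(\log N)$ matrix multiplications, which is the whole point of the trick.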

---

Matrices are heavily used in mathematical finance in various ways. One specific example is a correlation matrix where an entry (i,j) specifies the degree to which price movements in instrument i and instrument j are correlated over a specified time period. A huge number of computer cycles is spent daily on computing these sorts of matrices and applying further analysis to them in order to, in part, attempt to quantify the amount of risk associated with a portfolio of instruments.
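As a minimal sketch of what entry $(i,j)$ of such a matrix is, here is a correlation matrix computed from a tiny table of returns; the numbers are made up purely for illustration:

```python
# Sketch: building a correlation matrix from per-instrument return
# series. All return values below are fabricated for illustration.
import math

def correlation_matrix(series):
    # series: a list of equal-length return series, one per instrument.
    n = len(series)
    means = [sum(s) / len(s) for s in series]
    centered = [[x - m for x in s] for s, m in zip(series, means)]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    stds = [math.sqrt(dot(c, c)) for c in centered]
    # Entry (i, j) is the Pearson correlation of instruments i and j.
    return [[dot(centered[i], centered[j]) / (stds[i] * stds[j])
             for j in range(n)] for i in range(n)]

returns = [
    [0.01, -0.02, 0.015, 0.005],    # instrument 0
    [0.012, -0.018, 0.014, 0.004],  # instrument 1: moves with 0
    [-0.01, 0.02, -0.012, -0.006],  # instrument 2: moves against 0
]
corr = correlation_matrix(returns)
```

The diagonal is 1 by construction, and the off-diagonal entries quantify how tightly each pair of instruments moves together, which is the input to the risk analysis described above.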

---

Hey Alex, a central theme of machine learning is finding structure (preferably linear structure) in the data space; the intrinsic dimensionality of your observations, if you will (see Eigenfaces).

I understand this may not be about matrix multiplication per se; instead, it is about what often happens right before it. It begins with the spectral theorem, $A = S \Lambda S^{T}$ (with $S^{-1}$ in place of $S^{T}$ when $A$ is not symmetric); it is literally the basis of so many things (see what I did there?).
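A minimal sketch of the computational kernel behind such methods: power iteration, which extracts the dominant eigenvalue and eigenvector of a symmetric matrix using nothing but repeated matrix-vector multiplication (the example matrix and iteration count are illustrative):

```python
# Sketch: power iteration for the dominant eigenpair of a symmetric
# matrix, the kind of computation underlying PCA-style methods such
# as Eigenfaces.
import math

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def power_iteration(m, iterations=100):
    # Repeatedly multiply by m and renormalize; the iterate converges
    # to the eigenvector of the largest-magnitude eigenvalue.
    v = [1.0] * len(m)
    for _ in range(iterations):
        w = mat_vec(m, v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # The Rayleigh quotient v^T A v gives the eigenvalue estimate.
    eigenvalue = sum(x * y for x, y in zip(v, mat_vec(m, v)))
    return eigenvalue, v

a = [[2.0, 1.0],
     [1.0, 2.0]]  # eigenvalues 3 and 1, eigenvectors (1,1) and (1,-1)
```

Running this on the full decomposition, one eigenpair at a time, is one elementary way to build the $S$ and $\Lambda$ of the factorization above.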