Linear Algebra without Matrices


How far could one get in linear algebra without matrices? It seems like the more I learn, the less I actually use them, but most of the basic theorems and invariants that I learned first -- and still use -- were defined via matrices. Could linear algebra be done without matrices at all? Is there a book that takes this approach? I'm just curious how one would go about it.

There are 2 solutions below

On BEST ANSWER

You absolutely do not need matrices to talk about linear operators.

At its barest, a matrix represents a linear function of a vector. Given an input vector $a$, a linear function $\underline T$ might be written like this:

$$\underline T(a) = (a \cdot u) \ell + (a \cdot v) m + (a \cdot w) n + \ldots$$

This is a very "dumb" way of translating a matrix into a more basis-independent notion. If the vectors $u, v, w, \ldots$ are chosen basis vectors $e_1, e_2, e_3, \ldots$, then the vectors $\ell, m, n, \ldots$ are just the columns of the matrix. Nothing forces either set to be tied to a particular basis, however.

Now, in itself, this kind of translation from matrices to basis-independent language isn't terribly useful. All you've done is take the columns of the matrix and put that information into the vectors $\ell, m, n, \ldots$. A completely general matrix may admit no simpler way of writing its linear function, but many interesting matrices, corresponding to interesting transformations, often do.
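As a quick numerical sketch of this decomposition (using NumPy, with an arbitrary illustrative matrix and vector): writing $\underline T(a)$ as a sum of $(a \cdot e_i)$ times the $i$-th column reproduces the ordinary matrix-vector product.

```python
import numpy as np

# An arbitrary illustrative 3x3 matrix M and input vector a.
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
a = np.array([1.0, 2.0, 3.0])

# The basis vectors e_i play the role of u, v, w; the columns of M
# play the role of ell, m, n.
e = np.eye(3)
columns = [M[:, i] for i in range(3)]

# T(a) = (a . e_1) ell + (a . e_2) m + (a . e_3) n
T_a = sum((a @ e[i]) * columns[i] for i in range(3))

assert np.allclose(T_a, M @ a)  # same as the matrix-vector product
```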

Using clifford algebra, you can simplify the explicit expression for a rotation matrix in terms of a rotor--the clifford algebra analogue of a quaternion. A rotation map $\underline R$ looks like

$$\underline R(a) = q a q^{-1}$$

And so, instead of explicitly working with the rotation matrix, much information is invested in the rotor $q$ instead. The explicit form of this map lends itself to easy generalization to objects beyond vectors, also. Orthogonal projections can also be condensed considerably from the most general form.
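Here is a minimal numerical sketch of the rotor idea in the 3D case, where rotors reduce to quaternions, implemented by hand in NumPy. The rotation matrix `Rz` appears only to verify the result; the map itself is just $q a q^{-1}$.

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z).
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def rotate(q, v):
    # R(v) = q v q^{-1}, embedding v as the pure quaternion (0, v).
    q_inv = np.array([q[0], -q[1], -q[2], -q[3]]) / (q @ q)
    return qmul(qmul(q, np.array([0.0, *v])), q_inv)[1:]

# Rotor for a rotation by angle theta about the z-axis (illustrative).
theta = np.pi / 3
q = np.array([np.cos(theta/2), 0.0, 0.0, np.sin(theta/2)])

v = np.array([1.0, 2.0, 3.0])

# Equivalent rotation matrix, for comparison only.
c, s = np.cos(theta), np.sin(theta)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

assert np.allclose(rotate(q, v), Rz @ v)
```

Note that the rotation itself never touches a matrix: all the information lives in the four components of $q$.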

So, one reason you might not want to work with a matrix is when you can write the linear function with much more compact and powerful expressions instead.


Many of the properties of matrices are properties of their associated linear operators and do not require matrices to compute them.

For example, trace: you're often taught to compute the trace by summing the diagonal elements of a matrix. But consider instead the usual derivative operator $\nabla$. What happens when you do this?

$$\nabla \cdot \underline T(a) = \text{scalar}$$

The result is a scalar. It's the trace. So, instead of a computation that requires you to write the matrix of a transformation, you can compute the divergence of a linear function and get its trace that way. If you can compute this divergence without choosing a basis at all, it's a net win.
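A small NumPy sketch of this claim (with an arbitrary illustrative matrix): treating $\underline T$ purely as a black-box function, its divergence, here computed by central finite differences, recovers the trace without ever reading a diagonal.

```python
import numpy as np

M = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

T = lambda a: M @ a  # the linear map, used only as a black box below

# Divergence of T, computed by central differences at an arbitrary
# base point (for a linear map the result is the same everywhere).
h = 1e-6
a0 = np.array([0.5, -1.0, 2.0])
div = sum(
    (T(a0 + h*np.eye(3)[i])[i] - T(a0 - h*np.eye(3)[i])[i]) / (2*h)
    for i in range(3)
)

assert np.isclose(div, np.trace(M))  # the divergence is the trace
```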

Similarly, there is a natural extension of any linear operator to blades: simple products of several vectors under the wedge product. Blades are natural representations of subspaces--that don't require matrices!--and give considerable geometric insight to linear algebra. Indeed, any exterior algebra built on a base vector space $\mathbb K^n$ has a single linearly independent $n$-blade. Call this $n$-blade $i$, and the natural extension of the linear operator acting on this $n$-blade is

$$\underline T(i) = (\det \underline T) i$$

This is a basis-independent way of defining the determinant; no matrix required! And yes, the $n$-blade $i$ carries the same meaning as a volume object in the space: this is a direct translation of the common geometric idea of measuring how a volume dilates or shrinks under a transformation.
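A sketch of this in NumPy for $n = 3$, where the coefficient of a 3-blade $u \wedge v \wedge w$ on $e_1 \wedge e_2 \wedge e_3$ is the scalar triple product $u \cdot (v \times w)$. The matrix `M` and `np.linalg.det` appear only to verify the blade computation.

```python
import numpy as np

M = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
T = lambda a: M @ a

# In R^3, the coefficient of u ^ v ^ w on the unit 3-blade
# e1 ^ e2 ^ e3 is the scalar triple product u . (v x w).
def triple(u, v, w):
    return u @ np.cross(v, w)

e1, e2, e3 = np.eye(3)

# T(e1) ^ T(e2) ^ T(e3) = (det T) e1 ^ e2 ^ e3
det_via_blade = triple(T(e1), T(e2), T(e3)) / triple(e1, e2, e3)

assert np.isclose(det_via_blade, np.linalg.det(M))
```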


I mentioned blades from exterior algebra. Linear algebra with matrices often relies heavily on characterizing subspaces through, for instance, the kernel of a linear transformation. Using blades to represent subspaces instead frees you from having to represent subspaces through the kernels of projection maps. For instance, one often represents a 2-blade through an antisymmetric matrix; you no longer need a matrix for this if you use the exterior algebra instead.
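The correspondence between a 2-blade $a \wedge b$ and an antisymmetric matrix can be sketched as follows (illustrative vectors): the three independent entries of the matrix are exactly the blade's components on $e_1 \wedge e_2$, $e_1 \wedge e_3$, and $e_2 \wedge e_3$, so the matrix carries no extra information.

```python
import numpy as np

a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 1.0, 1.0])

# The antisymmetric matrix commonly used to encode the 2-blade a ^ b.
B = np.outer(a, b) - np.outer(b, a)
assert np.allclose(B, -B.T)  # antisymmetric, as expected

# The blade itself needs only three independent components: its
# coefficients on e1^e2, e1^e3, e2^e3. No matrix required.
blade = np.array([B[0, 1], B[0, 2], B[1, 2]])
```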


Once you no longer use matrices, certain aspects of linear algebra have to be thought of differently:

Similarity transformations take on an obvious meaning: they are just changes of basis. Linear functions written in basis-independent language have little need of them. The difference between a diagonal operator and one that is merely diagonalizable, for instance, is just a choice of basis.

Things like QR factorization are also basis dependent, and so they focus more on the practical matter of doing computations rather than expressing a linear function in terms of something intrinsic to its properties.


When you no longer use matrices to encode, for instance, the inner product, the notion of symmetry of a linear map becomes a bit clearer. That we need the conjugate transpose for complex maps can initially seem mysterious, but it follows directly from the definition of the complex inner product and the relation $\underline T(a) \cdot b = \overline T(b) \cdot a$, which defines the adjoint $\overline T$ of a linear map. You begin to realize that matrices are inextricably designed for Euclidean space, and that over the years, various hacks and simple tricks have been devised to keep them useful for other applications. Things like the conjugate transpose can feel like dirty tricks needed just to keep the math consistent; in a basis-independent formulation, these cracks and seams in matrix algebra are of no consequence.
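A small numerical sketch of the adjoint relation, in the convention $\langle x, y \rangle = \sum_i x_i \bar{y}_i$ (random illustrative matrix and vectors): the map satisfying $\langle \underline T(a), b \rangle = \langle a, \overline T(b) \rangle$ is exactly the conjugate transpose.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
T = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Complex inner product <x, y> = sum_i x_i conj(y_i).
inner = lambda x, y: x @ np.conj(y)

a = rng.normal(size=n) + 1j * rng.normal(size=n)
b = rng.normal(size=n) + 1j * rng.normal(size=n)

# The adjoint is defined by <T a, b> = <a, T* b>; in a matrix
# representation, T* turns out to be the conjugate transpose.
T_adj = np.conj(T.T)

assert np.isclose(inner(T @ a, b), inner(a, T_adj @ b))
```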


Make no mistake: going away from matrices may mean you need other math to make computations make sense. I've made references to clifford algebra and exterior algebra here, for instance. Did you know you can invert a linear operator without using matrix inverse formulas? This is very possible to do with clifford algebra, but you do need the formalism to get all the requisite operations in place to actually do the computation. And of course, the result will ultimately be the same.

For numerical applications, you'll be tied to using some basis all the time anyway. I can't foresee a future in which matrices are not used in computing for sheer speed and convenience, but one should always understand that the principal results of linear algebra do not require matrices to be proven, and it can be instructive to prove them without matrices. And often, basis-independent formulations of linear maps can be convenient to use for certain applications.


I've based most of this answer on my knowledge of clifford algebra--specifically the newer "geometric algebra" that is used in some physics circles and that tries to cut down on some complex notations to make things look more like vector algebra and vector calculus. Some authors in the subject include Hestenes and Sobczyk; Doran and Lasenby; Dorst, Fontijne, and Mann; and Alan Macdonald, who in particular has written texts appropriate even for undergraduates that relate traditional methods of linear algebra to a more geometric algebra perspective.

On

When you work with linear maps, matrices are a great help in simplifying your task. Instead of computing the image of a vector under your map directly, you multiply the matrix associated to the map by the vector (written as a column matrix). So why should this tool be excluded?