For example, if we want to prove that an operator is unitary, is it enough to prove that its associated matrix is unitary? From my point of view, this is correct, since there's an isomorphism between linear maps and matrices, and isomorphisms preserve algebraic properties. But I cannot find a single comment on this idea in my book (Lang's Linear Algebra), so I just want to be sure when I can apply this "trick".
Can we prove properties of linear maps just by proving the same properties of their associated matrices?
Asked by Bumbble Comm
Yes, you can often prove a property of an operator by choosing a basis, expressing the operator as a matrix with respect to that basis, and then proving the property for that matrix. It is not a general rule though (which is why no theorem to this effect is given), because just how one should proceed depends on the property in question. For instance, for the unitary property you mention, it does not work as I just formulated it, because the matrix will not in general be unitary exactly when the operator is unitary; that equivalence holds only if one takes the basis to be orthonormal.
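A quick numerical sketch of this caveat (using NumPy; this example is mine, not part of the original answer): a plane rotation is a unitary operator, and its matrix in the standard orthonormal basis is unitary, but the matrix of the *same* operator in a non-orthonormal basis generally is not.

```python
import numpy as np

theta = 0.7
# rotation by theta: a unitary operator on R^2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# in the standard (orthonormal) basis the matrix is unitary: R^T R = I
print(np.allclose(R.T @ R, np.eye(2)))   # True

# now express the same operator in a non-orthonormal basis,
# given by the columns of the (shear) matrix P
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])
M = np.linalg.inv(P) @ R @ P             # matrix of the same operator, new basis

# the new matrix is not unitary, even though the operator still is
print(np.allclose(M.T @ M, np.eye(2)))   # False
```

So "check the matrix" proves unitarity of the operator only when the basis was chosen orthonormal.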
Working with a matrix is in fact the most obvious thing to do when working with one specific linear map. But in many cases giving a matrix is by far the most convenient way to specify a particular linear map in the first place, so the kind of problem where you have one specific linear map but no specific matrix for it is rare; an example would be the linear operator defined by differentiation on the space of polynomials in $X$ of degree less than $n$.
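To make that last example concrete (my own sketch, not part of the original answer): in the monomial basis $1, X, \dots, X^{n-1}$, the differentiation operator has a simple matrix, and applying that matrix to a coefficient vector computes the derivative.

```python
import numpy as np

n = 4  # polynomials of degree < 4, with basis 1, X, X^2, X^3

# matrix of d/dX in the monomial basis: d/dX (X^k) = k X^(k-1)
D = np.zeros((n, n))
for k in range(1, n):
    D[k - 1, k] = k

# p(X) = 2 + 3X + 5X^2 + 7X^3, stored as its coefficient vector
p = np.array([2.0, 3.0, 5.0, 7.0])
dp = D @ p   # coefficients of p'(X)
print(dp)    # [ 3. 10. 21.  0.], i.e. p'(X) = 3 + 10X + 21X^2
```

Here the abstract operator ("differentiate") came first, and the matrix is just its expression in one convenient basis.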
But in many problems about linear operators that are not specifically given, the question about matrices that results from this translation is not really easier than the original question was, since the two are strictly equivalent. Having a matrix may seem more concrete though, and if it helps you think about a problem, by all means think about matrices. On the other hand, there are plenty of things one can do with a matrix that cannot easily be translated back into the language of linear maps, because they are intimately tied to the particular basis chosen. For instance, while taking the sum of the main diagonal entries gives the trace, which is a meaningful quantity for a linear operator (independently of which basis you chose), the sum of the entries on the anti-diagonal, which is just as easy to imagine for a matrix, is quite meaningless for the associated linear operator. So if you are thinking about matrices, you must learn to discern the kind of operations on them that are pertinent to the linear operator, and that might serve to get to grips with the problem at hand about linear operators.
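The trace-versus-anti-diagonal contrast can be checked numerically (again my own sketch): under a generic change of basis $B = P^{-1} A P$, the trace is preserved but the anti-diagonal sum is not.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # matrix of some operator in one basis
P = rng.standard_normal((3, 3))   # a generic (invertible) change of basis
B = np.linalg.inv(P) @ A @ P      # matrix of the same operator in the new basis

def antidiag_sum(M):
    """Sum of the entries on the anti-diagonal of a square matrix."""
    k = M.shape[0]
    return sum(M[i, k - 1 - i] for i in range(k))

print(np.isclose(np.trace(A), np.trace(B)))          # True: basis-independent
print(np.isclose(antidiag_sum(A), antidiag_sum(B)))  # False for a generic P
```

The trace is a well-defined function of the operator; the anti-diagonal sum is only a function of the matrix.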
In fact you will find that, because of this correspondence, most of the linear algebra questions on this site are already stated in terms of matrices rather than in terms of linear operators (or of other things that can be expressed by matrices, such as symmetric bilinear forms), even when they easily could have been stated abstractly. For instance, one would ask to show that "any two diagonalisable matrices $A,B$ with $AB=BA$ can be simultaneously diagonalised" rather than "for any two commuting diagonalisable linear operators (on a finite-dimensional vector space) there exists a basis of common eigenvectors". It really doesn't matter much, but it is in fact useful to restate a problem about matrices in more abstract terms when that is possible, since this helps focus on the essence of the problem.
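A small numerical illustration of that simultaneous-diagonalisation statement (my own sketch, assuming for simplicity that $A$ has distinct eigenvalues): two matrices built to be diagonal in the same hidden basis commute, and an eigenbasis of one diagonalises the other.

```python
import numpy as np

# build two operators that are diagonal in the same (hidden) basis,
# given by the columns of P
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])
Pinv = np.linalg.inv(P)
A = P @ np.diag([1.0, 2.0]) @ Pinv
B = P @ np.diag([5.0, 7.0]) @ Pinv

print(np.allclose(A @ B, B @ A))   # True: they commute

# since A has distinct eigenvalues, its eigenvectors are (up to scale)
# forced, and they diagonalise B as well
_, V = np.linalg.eig(A)
C = np.linalg.inv(V) @ B @ V
print(np.allclose(C, np.diag(np.diag(C))))   # True: C is diagonal
```

Of course this only checks one instance; the abstract statement about commuting diagonalisable operators is what a proof has to establish.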