For some reason I cannot wrap my head around this one. My textbook begins the chapter on eigendecomposition of real symmetric matrices by stating that eigenvectors from distinct eigenspaces are orthogonal. The ensuing proof uses transposes, row vectors, and matrix multiplication. That's all fine, but in the end it seems to assume we are working in Cartesian coordinates (where the dot product of two vectors is the sum of the products of their corresponding components). Is there a fault in my reasoning, or does the eigendecomposition of real symmetric matrices depend on our initial basis?
In the above example, if the u's behave like i and j, then the alpha's are certainly orthogonal with respect to the dot product. But say we didn't know this. As in most exercises in the book, I'm simply given a square matrix of numbers, and the only instruction is to 'show the spectral decomposition'. Must I assume that I'm working in Cartesian coordinates?
Orthogonality is defined by the specific inner product being used. The expression of that inner product in terms of coordinates naturally depends on the basis. However, one can always find an orthonormal basis (that’s what the Gram–Schmidt process produces), and relative to that basis, the inner product of two vectors equals the dot product of their coordinate tuples.
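For concreteness, here is a sketch of the standard transpose argument you describe, written with the ordinary dot product; the symbols $A$, $u$, $v$, $\lambda$, $\mu$ are my own, not necessarily your book's. Suppose $A = A^{\mathsf T}$, $Au = \lambda u$, $Av = \mu v$, and $\lambda \neq \mu$:

$$
\lambda\, u^{\mathsf T} v
  = (Au)^{\mathsf T} v
  = u^{\mathsf T} A^{\mathsf T} v
  = u^{\mathsf T} A v
  = u^{\mathsf T} (\mu v)
  = \mu\, u^{\mathsf T} v,
$$

so $(\lambda - \mu)\, u^{\mathsf T} v = 0$, and since $\lambda \neq \mu$ we get $u^{\mathsf T} v = 0$. Note where the basis assumption enters: the step $\langle Au, v\rangle = (Au)^{\mathsf T} v$ identifies the inner product with the dot product of coordinate tuples, which is exactly what an orthonormal basis buys you.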
I think you’ll find that all of those computations and results are still valid after inserting change-of-basis matrices in all of the appropriate places, so we can assume without loss of generality that a suitable basis has been chosen for the inner product at hand. Unless a problem states otherwise, there’s an understanding that the inner product is just the dot product.
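To see this convention in action numerically, here is a minimal sketch in Python using NumPy (the matrix entries are made up for illustration). `numpy.linalg.eigh` is the routine specialized for symmetric matrices, and it returns eigenvectors that are orthonormal with respect to the standard dot product, i.e. it tacitly adopts the "inner product = dot product" understanding discussed above:

```python
import numpy as np

# A small real symmetric matrix (example values chosen arbitrarily).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh returns eigenvalues in ascending order and an orthogonal
# matrix Q whose columns are the corresponding eigenvectors.
eigenvalues, Q = np.linalg.eigh(A)

# Eigenvectors from distinct eigenvalues are orthogonal under the
# dot product: Q^T Q = I.
assert np.allclose(Q.T @ Q, np.eye(3))

# Spectral decomposition: A = Q diag(lambda) Q^T.
assert np.allclose(Q @ np.diag(eigenvalues) @ Q.T, A)
```

If you were working with a different inner product, you would first change to a basis orthonormal for that product; in the new coordinates the same computation applies unchanged.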