For the sake of brevity: I'm given a 3x3 symmetric matrix with real entries and no further information as to what the rows and columns encode. (E.g., the columns may very well stand for the i, j and k vectors and the rows for the transformed i, j and k, but this need not be the case.)
Suppose we don't know this, and the matrix acts on a vector space spanned by an arbitrary basis. What is required of this starting basis for the eigendecomposition to work?
(Image: eigenspace orthogonality proof from a textbook)
For example, the above proof uses the fact that a dot product may be written as a (in this case) 1x3 row vector multiplying a 3x1 column vector. But this need not be the case, right? Unless I gravely misunderstand, this is only true in Cartesian coordinates. If the starting basis were not orthogonal, then dotting a pair of vectors expressed in that basis would produce cross terms, and that is not captured by a row vector multiplying a column vector.
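To make the cross-term problem concrete, here is a minimal numeric sketch (the basis `B` is just something non-orthogonal I made up): the bare row-times-column product of the coordinates disagrees with the true geometric dot product, and the missing cross terms are supplied by the Gram matrix of the basis.

```python
import numpy as np

# An arbitrary non-orthogonal basis for R^2; columns are the basis
# vectors written in Cartesian coordinates.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Gram matrix G_ij = e_i . e_j; it equals the identity only for an
# orthonormal (Cartesian) basis.
G = B.T @ B

# Coordinates of two vectors in the basis B (chosen arbitrarily).
a = np.array([2.0, 1.0])
b = np.array([1.0, -1.0])

true_dot = (B @ a) @ (B @ b)  # geometric dot product, via Cartesian components
naive_dot = a @ b             # bare row-times-column product of coordinates
fixed_dot = a @ G @ b         # Gram matrix restores the cross terms

print(true_dot, naive_dot, fixed_dot)  # -1.0, 1.0, -1.0
```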
So much for brevity. Generally speaking, what assumptions need to be made when performing eigendecomposition on a real symmetric matrix?
Edit: It seems I've been vague, so I've added a little example to better illustrate my point.
The eigendecomposition of a real symmetric matrix is based on the idea that eigenvectors from different eigenspaces are orthogonal with respect to the dot product. I have a problem with this statement, as it seems to assume that we're working in Cartesian coordinates. Is this true, or is there a fault in my reasoning?
In the above example, the $\alpha$'s are certainly orthogonal if the $\mathbf{u}$'s are like $\mathbf{i}$ and $\mathbf{j}$. However, if that's not specified, do we have to make this assumption?
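For concreteness, this is the claim I'm asking about, checked numerically in standard coordinates (a sketch; the matrix `A` is an arbitrary symmetric example and `np.linalg.eigh` is NumPy's solver for symmetric matrices):

```python
import numpy as np

# An arbitrary real symmetric 3x3 matrix; its eigenvalues (1, 2, 4)
# are distinct, so the eigenspaces are all different.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigenvalues, V = np.linalg.eigh(A)  # eigenvectors are the columns of V

# Eigenvectors from different eigenspaces come out orthogonal under the
# row-times-column product: V^T V = I.
print(np.allclose(V.T @ V, np.eye(3)))                 # True

# The eigendecomposition itself: A = V diag(lambda) V^T.
print(np.allclose(V @ np.diag(eigenvalues) @ V.T, A))  # True
```

My question is whether this picture still makes sense if the coordinates refer to a non-orthogonal starting basis.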
A vector dot product is a binary operation satisfying $\mathbf{a}\cdot\mathbf{b} = |\mathbf{a}|\,|\mathbf{b}| \cos\theta$, where $\theta$ is the angle between the vectors. This is the definition of the dot product.
Now, vectors are objects in space with a length and a direction. They can be expressed mathematically using a basis, which is a set of vectors with specific lengths and directions. It so happens that if the operands are expressed in a Cartesian basis, the dot product $\mathbf{a}\cdot\mathbf{b}$ takes the familiar form $\sum_i a_ib_i$. Note that for a Cartesian basis, $a_i$ and $b_i$ are the coefficients of the basis vectors $\mathbf{e}_i$: $$\mathbf{a}=\sum_i a_i\mathbf{e}_i.$$ See Wolfram's page on the dot product.
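To spell out the role of the basis: for a general set of basis vectors $\mathbf{e}_i$ (not necessarily orthonormal), expanding both operands gives $$\mathbf{a}\cdot\mathbf{b}=\Big(\sum_i a_i\mathbf{e}_i\Big)\cdot\Big(\sum_j b_j\mathbf{e}_j\Big)=\sum_{i,j}a_ib_j\,(\mathbf{e}_i\cdot\mathbf{e}_j),$$ which collapses to the familiar $\sum_i a_ib_i$ exactly when $\mathbf{e}_i\cdot\mathbf{e}_j=\delta_{ij}$, i.e. when the basis is Cartesian.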
I hope this answers your concern:
Since the elements of a vector are the coefficients of the basis vectors, multiplying a row vector by a column vector gives the dot product. Unless you change the basis itself, your statement holds true. In fact, the Cartesian basis is an implicit assumption for the matrix itself: for any different basis, the elements of the matrix would also change (this is what happens when a second-order tensor is transformed).
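As a sketch of that last point (the change-of-basis matrix `S` below is an arbitrary non-orthogonal example I picked): rewriting everything in a different basis changes the matrix elements, generally destroys the symmetry of the matrix, and the eigenvector coordinates are then orthogonal with respect to the Gram matrix $G = S^\mathsf{T}S$ rather than under the bare row-times-column product.

```python
import numpy as np

# The same symmetric matrix, in Cartesian coordinates.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Arbitrary non-orthogonal change of basis; columns of S are the new
# basis vectors in Cartesian coordinates.
S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# A linear operator transforms by similarity: A' = S^{-1} A S.
A_new = np.linalg.inv(S) @ A @ S
print(np.allclose(A_new, A_new.T))          # False: the symmetry is gone

# Coordinates of A's eigenvectors in the new basis.
eigenvalues, V = np.linalg.eigh(A)
W = np.linalg.inv(S) @ V

# Bare row-times-column products no longer show the orthogonality...
print(np.allclose(W.T @ W, np.eye(3)))      # False

# ...but it survives relative to the Gram matrix G = S^T S.
G = S.T @ S
print(np.allclose(W.T @ G @ W, np.eye(3)))  # True
```

In other words, the orthogonality is a property of the vectors themselves; only the coordinate formula expressing it changes with the basis.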