I am seeking assistance in understanding and computing the change of basis matrix (M) for a linear transformation between the eigenbasis (B) and the standard basis (A). Specifically, I have a linear transformation T that maps R^2 to R^2, projecting onto the vector
v = (1/sqrt(2),1/sqrt(2)).
The matrix representation of T in the standard basis (A) is given as [(1/2, 1/2), (1/2, 1/2)].
Next, I want the matrix representation of T in the eigenbasis (B) of T, whose vectors are (1/sqrt(2), 1/sqrt(2)) and (-1/sqrt(2), 1/sqrt(2)). The desired matrix in basis B is [(1, 0), (0, 0)].
I understand that the relationship between the matrices in the two bases is given by the formula: matrix of T in B = M^-1 * matrix of T in A * M.
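To make the computation concrete, here is a quick numerical sanity check of that formula with NumPy. I am assuming M has the (normalized) eigenvectors as its columns, ordered so that the eigenvector with eigenvalue 1 comes first:

```python
import numpy as np

# Matrix of T in the standard basis A (projection onto v = (1/sqrt(2), 1/sqrt(2)))
T_A = np.array([[0.5, 0.5],
                [0.5, 0.5]])

# Change of basis matrix M: columns are the eigenvectors of T,
# expressed in standard-basis coordinates
s = 1 / np.sqrt(2)
M = np.array([[s, -s],
              [s,  s]])

# Matrix of T in the eigenbasis B: M^-1 * T_A * M
T_B = np.linalg.inv(M) @ T_A @ M
print(np.round(T_B, 10))  # expect [[1, 0], [0, 0]]
```

This reproduces the desired matrix [(1, 0), (0, 0)] in basis B.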
Therefore, my question is twofold:
- Why does the change of basis matrix (M) that takes the representation of a linear transformation T from basis A to basis B have the vectors of basis B as its columns? I have managed to obtain the correct result in this particular case with
M = [(1/sqrt(2), 1/sqrt(2)), (-1/sqrt(2), 1/sqrt(2))],
but I am looking for a more general understanding of this phenomenon.
- Additionally, I have encountered an issue when attempting to retrieve the matrix representation of T in the standard basis (A) when given the matrix of T in the eigenbasis (B). In this scenario, using the formula matrix of T in A = M^-1 * matrix of T in B * M, where
M is the change of basis matrix [(1, 1), (1, 1)],
I find that the result is the matrix of T in the eigenbasis (B) again. Could someone provide insight into why this is occurring and guide me through the correct steps to obtain the matrix representation of T in the standard basis?
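For reference, here is the round trip I would expect numerically. This is a sketch assuming M is again built from the normalized eigenvectors as columns (rather than from (1, 1) entries), and that recovering the standard-basis matrix inverts the similarity, i.e. matrix of T in A = M * matrix of T in B * M^-1:

```python
import numpy as np

# Matrix of T in the eigenbasis B
T_B = np.array([[1.0, 0.0],
                [0.0, 0.0]])

# M: columns are the normalized eigenvectors in standard-basis coordinates
s = 1 / np.sqrt(2)
M = np.array([[s, -s],
              [s,  s]])

# Going back from B to A inverts the similarity: T_A = M * T_B * M^-1
T_A = M @ T_B @ np.linalg.inv(M)
print(np.round(T_A, 10))  # expect [[0.5, 0.5], [0.5, 0.5]]
```

Is this ordering of M and M^-1 the piece I have wrong, or is my choice of M itself the problem?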