Generalizing Eigenvectors of N-Dimensional Diagonalizable Matrices


I have a question that actually comes from my physics research on coupled oscillators and normal coordinates, but which I want to understand from a purely rigorous linear algebra point of view.

Suppose I have the following matrix (I will work in 4-D here, but I'm ultimately interested in the arbitrary N-dimensional case):

\begin{equation} \mathbb{K}\equiv\begin{bmatrix} (k_0+k_1) & -k_1 & 0 &0\\ -k_1 & (k_1+k_2) & -k_2 & 0\\ 0 & -k_2 & (k_2+k_3) & -k_3\\ 0 & 0 & -k_3 & (k_3+k_0)\end{bmatrix} \end{equation}

Let this matrix act on a vector

\begin{equation} \mathbf{X}\equiv\begin{bmatrix} x_1\\ x_2\\ x_3\\ x_4\\ \end{bmatrix} \end{equation}

In the context of the physics I'm doing, these matrices are introduced so as to solve the differential equation $\ddot{\mathbf{X}}=-\mathbb{K}\mathbf{X}$ which we do by finding the normal modes. This is nothing more than finding the eigenvalues and eigenvectors of $\mathbb{K}$. Assume I've done so and have diagonalized $\mathbb{K}$ so that it becomes a new diagonal matrix I will denote $\mathbf{\Omega}$, which is given by:

\begin{equation} \mathbf{\Omega}\equiv\begin{bmatrix} \Omega_1 & 0 & 0 &0\\ 0 & \Omega_2 & 0 & 0\\ 0 & 0 & \Omega_3 & 0\\ 0 & 0 & 0 & \Omega_4\end{bmatrix} \end{equation}

With associated eigenvectors $\vec{V}_1\equiv\begin{bmatrix} A_1\\A_2\\A_3\\A_4\end{bmatrix}$, $\vec{V}_2\equiv\begin{bmatrix} B_1\\B_2\\B_3\\B_4\end{bmatrix}$, $\vec{V}_3\equiv\begin{bmatrix} C_1\\C_2\\C_3\\C_4\end{bmatrix}$, and $\vec{V}_4\equiv\begin{bmatrix} D_1\\D_2\\D_3\\D_4\end{bmatrix}$.
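As a concrete sanity check of this setup, here is a minimal NumPy sketch that builds the 4-D $\mathbb{K}$ for some hypothetical spring constants (the values $k_0,\dots,k_3$ below are arbitrary, chosen only for illustration) and diagonalizes it; since $\mathbb{K}$ is symmetric, `np.linalg.eigh` returns the eigenvalues and an orthonormal set of eigenvectors:

```python
import numpy as np

# Hypothetical spring constants, chosen only to make a concrete example.
k0, k1, k2, k3 = 1.0, 2.0, 3.0, 4.0

K = np.array([
    [k0 + k1, -k1,      0.0,      0.0],
    [-k1,      k1 + k2, -k2,      0.0],
    [0.0,     -k2,       k2 + k3, -k3],
    [0.0,      0.0,     -k3,       k3 + k0],
])

# K is symmetric, so eigh returns real eigenvalues and a matrix V whose
# columns are orthonormal eigenvectors (V is orthogonal: V^T V = I).
omega, V = np.linalg.eigh(K)
Omega = np.diag(omega)

# Diagonalization: V^T K V = Omega, equivalently K = V Omega V^T.
assert np.allclose(V.T @ K @ V, Omega)
assert np.allclose(V @ V.T, np.eye(4))
```

Here the columns of `V` play the role of the normalized $\vec{V}_1,\dots,\vec{V}_4$ above, and `Omega` is the diagonal matrix $\mathbf{\Omega}$.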

What I'm interested in, in general, is defining a new vector $\vec{Q}\equiv\begin{bmatrix}Q_1\\Q_2\\Q_3\\Q_4\end{bmatrix}$, such that $\mathbf{\Omega}\vec{Q}=\mathbb{K}\mathbf{X}$. In the physics literature, $Q_n$ is known as the $n^{th}$ "normal coordinate" of the system.

For the 2-D case, it is easily shown that $Q_1=(\frac{1}{|\vec{V}_1|}\vec{V}_1\cdot\mathbf{X})=\frac{x_1+x_2}{\sqrt{2}}$ and $Q_2=(\frac{1}{|\vec{V}_2|}\vec{V}_2\cdot\mathbf{X})=\frac{x_1-x_2}{\sqrt{2}}$ because in the 2-D case, $\vec{V}_1=\begin{bmatrix} 1\\1\end{bmatrix}$, $\vec{V}_2=\begin{bmatrix} 1\\-1\end{bmatrix}$.
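The 2-D claim can be checked directly. The snippet below takes arbitrary sample displacements $x_1, x_2$ (the numeric values are placeholders) and verifies that the projections $Q_n = \vec{V}_n\cdot\mathbf{X}/|\vec{V}_n|$ reproduce $(x_1\pm x_2)/\sqrt{2}$:

```python
import numpy as np

# Arbitrary sample displacements for the 2-D check.
x1, x2 = 0.7, -1.3
X = np.array([x1, x2])

# The 2-D eigenvectors from the question (unnormalized).
V1 = np.array([1.0,  1.0])
V2 = np.array([1.0, -1.0])

# Q_n = (V_n . X) / |V_n|
Q1 = V1 @ X / np.linalg.norm(V1)
Q2 = V2 @ X / np.linalg.norm(V2)

assert np.isclose(Q1, (x1 + x2) / np.sqrt(2))
assert np.isclose(Q2, (x1 - x2) / np.sqrt(2))
```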

My question is, is the relationship $Q_n=\frac{1}{|\vec{V}_n|}(\vec{V}_n\cdot\mathbf{X})$ true in general? And if so, can someone explain why this relationship holds based on linear algebra?

Thank you!


Best Answer

$ \newcommand\b\mathbf \newcommand\T{\mathrm T} $

It looks like you're just trying to express the vector $\b X$ in the (normalized) eigenvector basis. $\b\Omega$ is $\mathbb K$ expressed in the eigenbasis, and if $\b V = (\hat V_1, \hat V_2, \hat V_3, \hat V_4)$ is the matrix of normalized eigenvectors $\hat V_k = \vec V_k/|\vec V_k|$ then $\b V$ is the change-of-basis matrix from the eigenbasis to the $\b X$ basis, and $\b V^{-1} = \b V^\T$ is the change-of-basis from the $\b X$ basis to the eigenbasis (since $\mathbb K$ is symmetric, its eigenvectors can be chosen orthonormal, so $\b V$ is orthogonal). Your 2-D case is exactly $\vec Q = \b V^\T\b X$, which is saying that $\vec Q$ is $\b X$ expressed in the eigenbasis.

If this is the case, then your equation $\b\Omega\vec Q = \mathbb K\b X$ is wrong: the LHS is a vector expressed in the eigenbasis, but the RHS is a vector expressed in the $\b X$ basis. It should be $\b V\b\Omega\vec Q = \mathbb K\b X$, or equivalently $\b\Omega\vec Q = \b V^\T\mathbb K\b X$. Then since $\mathbb K = \b V\b\Omega\b V^\T$ we see that $$ \b\Omega\vec Q = \b V^\T(\b V\b\Omega\b V^\T)\b X = \b\Omega\b V^\T\b X. $$ Hence it is reasonable to define $\vec Q = \b V^\T\b X$, and it is necessary to do so when all eigenvalues are non-zero, since then $\b\Omega$ is invertible and can be cancelled from both sides.
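A quick numerical check of this corrected relation, reusing hypothetical spring constants for $\mathbb K$ and a random displacement vector $\b X$ (both chosen only for illustration): with $\vec Q = \b V^\T\b X$, both $\b V\b\Omega\vec Q = \mathbb K\b X$ and $\b\Omega\vec Q = \b V^\T\mathbb K\b X$ hold.

```python
import numpy as np

# Hypothetical spring constants, for illustration only.
k0, k1, k2, k3 = 1.0, 2.0, 3.0, 4.0
K = np.array([
    [k0 + k1, -k1,      0.0,      0.0],
    [-k1,      k1 + k2, -k2,      0.0],
    [0.0,     -k2,       k2 + k3, -k3],
    [0.0,      0.0,     -k3,       k3 + k0],
])

# Orthonormal eigenvectors (columns of V) and eigenvalues of symmetric K.
omega, V = np.linalg.eigh(K)
Omega = np.diag(omega)

# A random displacement vector X, and Q = V^T X (X in the eigenbasis).
rng = np.random.default_rng(0)
X = rng.standard_normal(4)
Q = V.T @ X

# The corrected relation from the answer, in both equivalent forms.
assert np.allclose(V @ Omega @ Q, K @ X)
assert np.allclose(Omega @ Q, V.T @ K @ X)
```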