In my mathematical physics course, we were introduced to the separation of variables technique for solving second-order partial differential equations. After separating variables, we solve each ODE to get its eigenvalues and eigenfunctions.
I understand that eigenfunctions are the “eigenvectors” of a vector space whose “vectors” are functions. But I don’t understand how this “eigen” idea in function space is connected to the one we learned in linear algebra. How are the procedures connected? How does this relate to the familiar linear algebra computations $|A-\lambda I|=0$ and $(A-\lambda_1I)\eta_1 = 0$?
For example, the one-dimensional spring vibration problem:
$$ \frac{\partial ^2u}{\partial t^2}-a^2 \frac{\partial ^2u}{\partial x^2}=0 $$
(suppose we have homogeneous boundary conditions in $x$, so that $X(0) = X(l) = 0$)
Applying the separation of variables technique, $u(x,t) = T(t)X(x)$, we eventually get
$$ X''+\lambda X = 0 $$
Combining the general solution of this ODE with the boundary conditions, we eventually reach
$$ X_n(x) = \sin(\frac{n\pi}{l}x) $$
which we call the “eigenfunction”.
How is this connected with all that linear algebra machinery?
I have not taken any course in differential equations, so please be concrete in your answers. Thanks in advance.
Consider an arbitrary homogeneous second order ODE with (for now) constant coefficients
$$y''+ay'+by=0$$
and consider the change of variables $x_1 = y$ and $x_2 = y'$. This changes this second order equation into a system of first order equations
$$\begin{cases}x_1' = x_2\\ x_2' = -bx_1-ax_2\end{cases}$$
which we could rewrite in matrix form
$$\mathbf{x}' = \begin{pmatrix}0 & 1 \\ -b & -a\end{pmatrix}\mathbf{x}$$
Now a system of ODEs is in general hard to solve because of how the variables are coupled to each other. But if we could diagonalize the matrix, i.e. find the eigenvectors and move to the eigenbasis, then we would end up with a system of decoupled differential equations
$$\mathbf{w}' = \begin{pmatrix}\lambda_1 & 0 \\ 0 & \lambda_2\end{pmatrix}\mathbf{w} \implies \begin{cases}w_1' = \lambda_1w_1 \\ w_2' = \lambda_2w_2\end{cases}$$
which are all easy to solve ($w_i = C_ie^{\lambda_i t}$; if you are not convinced of this, try separation of variables on your own to prove that this is the only solution). This motivates finding eigenvectors in a DE problem.
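As a concrete (editorial) illustration, here is a sketch of this decoupling with the made-up coefficients $a=3$, $b=2$; nothing here is specific to the original problem:

```python
import numpy as np

# Companion matrix of y'' + 3y' + 2y = 0, i.e. a = 3, b = 2 (arbitrary choice)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

evals, P = np.linalg.eig(A)          # columns of P are the eigenvectors
D = np.linalg.inv(P) @ A @ P         # in the eigenbasis, A becomes diagonal
assert np.allclose(D, np.diag(evals))

# Each decoupled equation w_i' = lambda_i * w_i has solution w_i(0) e^(lambda_i t).
# Map the initial condition y(0) = 1, y'(0) = 0 into the eigenbasis, evolve,
# and map back.
t = 1.0
w0 = np.linalg.inv(P) @ np.array([1.0, 0.0])
x_t = P @ (w0 * np.exp(evals * t))

# The known closed-form solution for these initial data is y(t) = 2e^{-t} - e^{-2t}
assert np.isclose(x_t[0], 2 * np.exp(-t) - np.exp(-2 * t))
print("decoupled solution matches:", x_t[0])
```

The point of the sketch is the middle step: once we move to the eigenbasis, each component evolves independently, and the hard coupled system reduces to one-line scalar solutions.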
Let's use your equation as an example. Using our new change of variables as a guide, we get the matrix
$$y'' + \lambda y = 0 \implies \mathbf{x}' = \begin{pmatrix}0 & 1 \\ -\lambda & 0\end{pmatrix}\mathbf{x}$$
You can solve $\det(A-\nu I) = 0$ (using a different letter for the eigenvalue) on your own and verify that the eigenvalues and eigenvectors associated with this matrix are
$$\nu_\pm = \pm i\sqrt{\lambda} \hspace{12 pt} \mathbf{e}_\pm = \begin{pmatrix}1 \\ \pm i\sqrt{\lambda}\end{pmatrix}$$
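A quick numerical sanity check confirms this; the choice $\lambda = 4$ below is arbitrary, so the eigenvalues should come out as $\pm 2i$ with eigenvectors proportional to $(1, \pm 2i)$:

```python
import numpy as np

lam = 4.0                            # arbitrary positive choice of lambda
A = np.array([[0.0, 1.0],
              [-lam, 0.0]])

nu, vecs = np.linalg.eig(A)
# Eigenvalues should be +/- i*sqrt(lambda) = +/- 2i
assert np.allclose(sorted(nu, key=lambda z: z.imag), [-2j, 2j])
# Each eigenvector should satisfy A v = nu v and be proportional to (1, nu)
for k in range(2):
    assert np.allclose(A @ vecs[:, k], nu[k] * vecs[:, k])
    assert np.isclose(vecs[1, k] / vecs[0, k], nu[k])
print("eigenpairs verified")
```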
Knowing the eigenvalues, we can solve the decoupled system of equations. Since a change of basis writes one set of basis vectors as a linear combination of another set, we know that
$$x_1 = C_1 w_1 + C_2 w_2 = C_1 e^{i\sqrt{\lambda} t} + C_2 e^{-i\sqrt{\lambda}t}$$
This is why we call the solutions of such an ODE “eigenfunctions”: separation of variables on the PDE introduced a free parameter $\lambda$, and the boundary conditions then single out a whole family of distinct eigenvalues and eigenfunctions.
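To close the loop with the original boundary-value problem: if you discretize $-\mathrm{d}^2/\mathrm{d}x^2$ on a grid with $X(0)=X(l)=0$, the operator literally becomes a matrix, the eigenfunctions $\sin(n\pi x/l)$ become its eigenvectors, and the matrix eigenvalues approximate $\lambda_n = (n\pi/l)^2$. A sketch (the grid size is my own choice):

```python
import numpy as np

# Discretize -d^2/dx^2 on (0, l) with X(0) = X(l) = 0, using N interior points.
l, N = 1.0, 200
h = l / (N + 1)

# Standard tridiagonal finite-difference matrix approximating -X''
A = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

evals, evecs = np.linalg.eigh(A)     # A is symmetric; eigenvalues come sorted

# The smallest eigenvalues approximate lambda_n = (n*pi/l)^2 ...
for n in (1, 2, 3):
    print(n, evals[n - 1], (n * np.pi / l) ** 2)
# ... and the n-th eigenvector traces out sin(n*pi*x/l) on the grid.
```

So the function-space “eigen” problem really is the $n \to \infty$ limit of the matrix eigenvalue problem $(A - \lambda I)\eta = 0$ from linear algebra.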