Understanding Eigenvalues, Eigenfunctions and Eigenstates


Please could somebody explain the meaning and uses of Eigenvalues, eigenfunctions and eigenstates for me. I have taken 3 years of physics and math classes at university and never fully grasped the concept/ never had a satisfactory answer. I used eigenstates a lot in Quantum mechanics yet I did not understand their significance and it still bothers me to this day.

If possible please include some basic examples or analogies.

One talks of eigenvalues and the like for linear transformations, that is, functions $T$ from a vector space to itself satisfying $T(u+v)= T(u)+T(v)$ and $T(av)= aT(v)$.

Having studied physics, you must have a good understanding of two- and three-dimensional (Euclidean) spaces, so I'll stick to them. Any three-dimensional rotation about the origin is a linear transformation (check this using the parallelogram law for vector addition). The same goes for a reflection about a plane passing through the origin.

A linear transformation on this vector space is, in coordinates, simply a function given by three linear homogeneous polynomials (or two homogeneous polynomials in two variables in the two-dimensional case):

$$(x,y,z)\mapsto (a_1x+b_1y+c_1z,\ a_2x+b_2y+c_2z,\ a_3x+b_3y+c_3z)$$

This is often written in matrix form, with row $i$ holding the coefficients of the $i$-th polynomial: $$\left(\begin{array}{lll} a_1 &b_1&c_1\\ a_2&b_2&c_2 \\ a_3&b_3&c_3 \\ \end{array}\right)$$
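As a concrete check of this correspondence, here is a small numerical sketch (the coefficient values are my own arbitrary choices, and it assumes the convention above that row $i$ of the matrix holds $(a_i, b_i, c_i)$):

```python
import numpy as np

# Arbitrary illustrative coefficients
a1, b1, c1 = 1.0, 2.0, 3.0
a2, b2, c2 = 0.0, 1.0, 4.0
a3, b3, c3 = 5.0, 6.0, 0.0

def T(x, y, z):
    """The linear map written out as three homogeneous linear polynomials."""
    return np.array([a1*x + b1*y + c1*z,
                     a2*x + b2*y + c2*z,
                     a3*x + b3*y + c3*z])

# The same map as a matrix: row i holds the coefficients (a_i, b_i, c_i)
A = np.array([[a1, b1, c1],
              [a2, b2, c2],
              [a3, b3, c3]])

v = np.array([1.0, -2.0, 0.5])

# Matrix-vector multiplication reproduces the polynomial formulas
print(np.allclose(A @ v, T(*v)))  # True
```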

A nonzero vector is called an eigenvector of such a transformation if the transformation keeps it on its own line through the origin, i.e. maps it to a scalar multiple of itself. That scalar — the signed factor by which the vector is stretched, shrunk, or flipped — is the eigenvalue belonging to that eigenvector.

It is clear that in 2D a rotation has no eigenvectors, except for the trivial rotation by $0°$ (every vector is fixed, eigenvalue $1$) and the rotation by $180°$ (every vector is sent to its negative, eigenvalue $-1$). In 3D, a rotation has an axis: vectors along the axis are fixed, so they are eigenvectors with eigenvalue $1$.

A reflection always has eigenvectors with eigenvalue $-1$: the vectors perpendicular to the reflecting plane (or line, in the 2D case). The vectors lying in the plane itself are unmoved, so they are eigenvectors with eigenvalue $1$.

I have explained things in geometric terms, but there are many linear transformations other than rotations and reflections. For those, eigenvalues and eigenvectors are computed by matrix calculations: solve the characteristic equation $\det(A - \lambda I) = 0$ for the eigenvalues $\lambda$, then solve the linear system $(A - \lambda I)v = 0$ for the corresponding eigenvectors.
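As a sketch of such a matrix calculation, here is what NumPy's eigensolver reports for the geometric examples above (the particular rotation and reflection are my own illustrative choices):

```python
import numpy as np

# Rotation by 90 degrees about the z-axis
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# Reflection in the xy-plane (flips the z-coordinate)
S = np.diag([1.0, 1.0, -1.0])

# The rotation's only real eigenvalue is 1, with eigenvector along the axis
w, V = np.linalg.eig(R)
real = w[np.isclose(w.imag, 0.0)].real
print(real)  # [1.] -- the axis direction; the other eigenvalues are complex

# The reflection has eigenvalue -1 (normal to the plane) and 1 (in the plane)
w2 = np.linalg.eigvals(S)
print(sorted(w2))  # [-1.0, 1.0, 1.0]
```

Note that a generic 2D rotation shows up here as having no *real* eigenvalues at all, matching the geometric picture: no direction is preserved.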

2
On

Just to supplement @PVanchinathan's excellent answer and because the comment became too long, I'm writing this answer.

The movement of the dots represents the linear transformation as a whole. Some vectors are also shown. For instance, the red ones are all vertical or horizontal in the original picture, but after the transformation they suddenly point in another direction. The same goes for the purple ones. The blue ones, however, don't change direction under the transformation; they only change their length. If we represent the linear transformation in question by a matrix $\mathbb{A}$, we see that applying it to a blue vector $v_b$ (an eigenvector) is the same as multiplying it by some number $\lambda$ (the eigenvalue), which can be written succinctly as an eigen-equation: $$\mathbb{A}v_b=\lambda v_b$$
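A tiny numerical version of that picture (the matrix and vectors here are my own toy example, not the ones from the animation):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

v_red  = np.array([0.0, 1.0])    # changes direction under A
v_blue = np.array([1.0, -1.0])   # an eigenvector: only its length scales

print(A @ v_red)   # [1. 2.] -- no longer parallel to (0, 1)
print(A @ v_blue)  # [ 2. -2.] -- exactly 2 * v_blue, so the eigenvalue is 2
print(np.allclose(A @ v_blue, 2 * v_blue))  # True
```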

The reason this kind of thing is so useful, for instance in QM, is first and foremost that eigenvectors are easy to work with (there are many nice theorems that let you do nice things when you work in a basis of eigenvectors), but also that the (time-independent) Schrödinger equation is itself an eigen-equation:

$$H \psi = E \psi$$
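To make that concrete, here is a hedged numerical sketch: discretize the Hamiltonian for a particle in a box on a grid (units $\hbar = m = 1$, box length $1$; the grid size is an arbitrary choice of mine) and compare its lowest eigenvalue with the textbook ground-state energy $E_1 = \pi^2/2$:

```python
import numpy as np

N = 500                       # number of interior grid points (arbitrary)
h = 1.0 / (N + 1)             # grid spacing on the unit interval

# H = -(1/2) d^2/dx^2 with zero potential inside the box,
# approximated by a central finite difference -> tridiagonal matrix
main = np.full(N, 1.0 / h**2)
off  = np.full(N - 1, -0.5 / h**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Eigenvalues are the allowed energies E; eigenvector columns of psi
# are the (discretized) eigenfunctions
E, psi = np.linalg.eigh(H)

print(E[0])            # numerical ground-state energy
print(np.pi**2 / 2)    # exact infinite-square-well value, ~4.9348
```

The lowest few eigenvalues land close to $E_n = n^2\pi^2/2$, which is exactly the "solving $H\psi = E\psi$" story in matrix form.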

Oh, and "eigenfunction" is just another name for an eigenvector, used when the vectors in question are functions. Similarly, an "eigenstate" in QM is a state that is an eigenvector of some observable's operator; the corresponding eigenvalue is the value you measure.
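For a quick feel for "eigenfunction": $e^{kx}$ is an eigenfunction of the derivative operator $d/dx$ with eigenvalue $k$, which you can sanity-check numerically with a central finite difference (the values of $k$, the sample point, and the step size below are arbitrary choices):

```python
import numpy as np

k = 2.0
f = lambda x: np.exp(k * x)   # candidate eigenfunction of d/dx

x0, eps = 0.7, 1e-6
deriv = (f(x0 + eps) - f(x0 - eps)) / (2 * eps)  # central difference

# d/dx e^{kx} = k e^{kx}: differentiating just multiplies f by k
print(np.isclose(deriv, k * f(x0)))  # True
```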

Hope that helps!

[Animated GIF illustrating the transformation described above; not reproduced here.]