Proof that $e^x$ is the eigenvector of the derivative operator


I remember hearing my professor talk about how $e^x$ shows up in all our differential equations because it is the eigenvector for the derivative operator. Can someone explain and prove this to me?

I have taken Linear algebra and a course on ODEs and a little bit of PDEs.

EDIT: Specifically, I am wondering: you know how you take a matrix that represents a linear operator, subtract $\lambda$ off the diagonal, and then solve for the eigenvalues and eigenvectors? Is there a similar proof that results in $e^x$?

There are 4 answers below.

BEST ANSWER

The problem with what you want is that when we use matrices, we usually work in a finite-dimensional vector space. Yet the natural interpretation of the vector $e^x$ (i.e., the function $x \mapsto e^x$) is as a member of some infinite-dimensional vector space (like, say, the vector space of all suitably convergent power series, or of differentiable functions, or something like that). In such vector spaces, matrix algebra becomes a rather unwieldy tool, because the matrices themselves become infinitely large objects, and we have to either

  • deal with convergence issues, or

  • restrict matrices to a finite number of non-zero entries in every column.

I'm therefore not going to give you a formal proof, but rather a very sketchy idea of how one could proceed, staying as close to finite-dimensional linear algebra as possible, to indeed show that $e^x$ is an eigenvector of $D = \frac{d}{dx}$.

We're going to work in a vector space $V$ of suitably convergent power series, but I'm going to mostly ignore convergence issues. We're going to treat $$ B = \left\{ b_k = \frac{x^k}{k!} \,:\, k \in \mathbb{N} \right\}, \quad\text{ with the understanding that $b_0 = 1$,} $$ as a basis of some sort, i.e. assume that we can represent each vector $v$ as $$ v = \sum_{k=0}^\infty c_k b_k = \sum_{k=0}^\infty c_k \frac{x^k}{k!} \text{.} $$ Note that $B$ isn't a basis in the usual vector-space sense, since we resort to infinite series here. It would be a basis in the Hilbert-space sense if we cared to turn $V$ into a proper Hilbert space, which I won't do here. As I said, this is very sketchy.
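As a minimal numerical sketch of this coefficient representation (names like `eval_in_basis` are my own, not from the answer): a function is stored as its list of coefficients $c_k$ in the basis $b_k = x^k/k!$, and for $e^x$ every coefficient is $1$, since $f^{(k)}(0) = 1$ for all $k$.

```python
import math

def eval_in_basis(coeffs, x):
    """Evaluate the truncated series sum_k coeffs[k] * x**k / k!."""
    return sum(c * x**k / math.factorial(k) for k, c in enumerate(coeffs))

# Coefficient vector of e^x in the basis b_k = x^k / k!: c_k = 1 for all k,
# truncated here at 20 terms.
coeffs_exp = [1.0] * 20

x = 1.5
print(eval_in_basis(coeffs_exp, x))  # ≈ 4.4817
print(math.exp(x))                   # ≈ 4.4817
```

With 20 terms the truncation error at $x = 1.5$ is around $1.5^{20}/20!$, far below double precision noise, so the two printed values agree to all displayed digits.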

We now observe how our differentiation operator $D$ behaves on the elements of that basis $B$. We obviously have $$ Db_k = \frac{d}{dx} \frac{x^k}{k!} = \frac{kx^{k-1}}{k!} = \frac{x^{k-1}}{(k-1)!} = b_{k-1} \quad\text{for $k \geq 1$, and } Db_0 = \frac{d}{dx}1 = 0 \text{.} $$ Thus, represented as an (infinitely large!) matrix, $D$ looks something like this: $$ M_D = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & \ldots \\ 0 & 0 & 1 & 0 & 0 & \ldots \\ 0 & 0 & 0 & 1 & 0 &\ldots \\ 0 & 0 & 0 & 0 & \ddots&\ddots \\ \vdots&\vdots&\vdots&\vdots& \ddots&\ddots \end{pmatrix}\text{.} $$

At this point we have to leave the path set out by finite-dimensional linear algebra, though, because trying to make sense of the determinant of such matrices gets us into trouble. For $M_D$, we might get away with saying $\det M_D = 0$; after all, it's a triangular matrix with only zeros on the diagonal. But how would we interpret $$ \det(\lambda I - M_D) = \left|\begin{matrix} \lambda & -1 & 0 & 0 & 0 & \ldots \\ 0 & \lambda & -1 & 0 & 0 & \ldots \\ 0 & 0 & \lambda & -1 & 0 &\ldots \\ 0 & 0 & 0 & \lambda & \ddots&\ddots \\ \vdots&\vdots&\vdots&\vdots& \ddots&\ddots \end{matrix}\right|\text{?} $$ Using the rules of finite-dimensional linear algebra, we'd have to conclude that the result is $\lambda^\infty$, which makes no sense.

So instead, we directly look for eigenvectors, i.e. some $v = (c_0,c_1,\ldots)$ for which $$ M_D v = \lambda v \text{.} $$ Looking at the matrix, we can easily see that this indeed holds for $v = (1,1,\ldots)$ and $\lambda = 1$, or in other words that $$ v = (1,1,\ldots) \text{ is an eigenvector with eigenvalue } \lambda = 1 \text{.} $$

So which function does $v$ represent? Per the definition of our basis above, it's the function defined by the power series $$ \sum_{k=0}^\infty 1 \cdot b_k = \sum_{k=0}^\infty \frac{x^k}{k!} \text{,} $$ which of course is $e^x$.
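The infinite matrix above can be sketched numerically by truncating it to an $N \times N$ block (ones on the superdiagonal) and applying it to the all-ones coefficient vector of $e^x$; the variable names below are my own. Truncation forces the last coordinate of the product to $0$, but the remaining coordinates reproduce the vector, reflecting $M_D v = 1 \cdot v$ for the full infinite vector.

```python
# Truncated derivative matrix in the basis b_k = x^k / k!:
# (M_D)_{i,j} = 1 if j = i + 1, else 0.
N = 12
M_D = [[1.0 if j == i + 1 else 0.0 for j in range(N)] for i in range(N)]

def matvec(M, v):
    """Plain matrix-vector product on nested lists."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

v = [1.0] * N          # coefficients of e^x: c_k = 1 for all k
Dv = matvec(M_D, v)

print(Dv[:N - 1])      # [1.0, 1.0, ..., 1.0] — agrees with v
print(Dv[-1])          # 0.0 — the truncation artifact
```

The sole discrepancy in the last entry shrinks away as $N \to \infty$, which is exactly the sense in which $(1,1,\ldots)$ is an eigenvector of the full operator.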

ANSWER

$\dfrac{d}{dx}e^{kx} = ke^{kx}$ implies that $k$ is an eigenvalue and $e^{kx}$ is an eigenvector of the operator $\dfrac{d}{dx}$.
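This one-line identity can be checked numerically with a central difference (a sketch; the helper name `central_diff` and the sample values of `k` and `x0` are my own choices):

```python
import math

def central_diff(f, x, h=1e-6):
    """Second-order central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

k = 2.5
f = lambda x: math.exp(k * x)

x0 = 0.7
approx = central_diff(f, x0)   # numerical derivative of e^{kx} at x0
exact = k * f(x0)              # the claimed eigenvalue relation: k * e^{k x0}

print(abs(approx - exact) / abs(exact))  # tiny relative error
```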

ANSWER

Eigenvector: a nonzero vector $v$ is said to be an eigenvector of an operator $T$ if $Tv=\lambda v$ for some scalar $\lambda$. Now observe that in this particular case $\frac{d}{dx}e^x = 1 \cdot e^x$. So $1$ is the eigenvalue and $e^x$ is the eigenvector (and $e^x$ is indeed nonzero).

ANSWER

You can solve the eigenvalue problem in operator form without expressing your vectors in any basis. We all know that the exponential function reproduces itself under differentiation, up to a constant factor: $\frac{d}{dx}e^{kx}=ke^{kx}$.

I think you are understanding vector spaces in a very narrow sense. The space of all real-valued functions is an infinite-dimensional vector space. Very commonly, one considers the subspace of $C^\infty$ (smooth, infinitely differentiable) functions, or of analytic functions. Another common choice is $L^2$, the Hilbert space of square-integrable functions.

You can think of a function as an infinitely dense vector, with one value $y(x)$ for each point $x$. However, if you work in one of the more restricted subspaces, the vector space is countably infinite-dimensional, and you can express your functions in some basis, given as an infinite sequence of functions.
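The "infinitely dense vector" picture can be made concrete by sampling a function on a grid, so it becomes an ordinary finite vector, and differentiating it with central differences (a sketch; the grid size and test function are my own choices):

```python
import math

# Sample e^x on a uniform grid: the function becomes a dense finite vector.
n, a, b = 200, 0.0, 1.0
h = (b - a) / (n - 1)
xs = [a + i * h for i in range(n)]
y = [math.exp(x) for x in xs]

# Central differences at interior points: (y[i+1] - y[i-1]) / (2h).
dy = [(y[i + 1] - y[i - 1]) / (2 * h) for i in range(1, n - 1)]

# For e^x the derivative equals the function, so dy should match y
# at the interior grid points, up to O(h^2) truncation error.
err = max(abs(d - yi) for d, yi in zip(dy, y[1:n - 1]))
print(err)
```

Refining the grid (larger `n`) shrinks the error quadratically, which is the discrete shadow of $e^x$ being a fixed point of $\frac{d}{dx}$.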

Common bases:

  • Taylor basis: polynomials $y_n=x^n$. A complete basis for entire functions (or, on a restricted domain, for all functions with a convergent power series representation).
  • Fourier basis: functions $y_n=e^{inx}$. A basis for periodic functions well behaved enough to have a Fourier series representation.
  • Fourier transform: functions $y_k(x)=e^{ikx}$ with $k$ real: an uncountably infinite basis for $L^2$ functions, dual to the standard pointwise representation $y(x)$, which is also uncountable.
  • Orthogonal polynomials with different weights: on different domains you can get a countable basis, orthogonal with respect to a chosen inner product (Laguerre, Legendre, Chebyshev, ...). You can also absorb the weight into the functions themselves: see the Hermite functions.
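In the Fourier basis listed above, the derivative operator's matrix is diagonal: $\frac{d}{dx}\sum_n c_n e^{inx} = \sum_n (in)\,c_n e^{inx}$, i.e. each basis function $e^{inx}$ is an eigenvector with eigenvalue $in$. A small sketch (the example coefficients are my own, chosen arbitrarily):

```python
import cmath

# A trig polynomial given by Fourier coefficients c_n (arbitrary example).
coeffs = {-2: 0.5, 0: 1.0, 1: 2.0, 3: -1.0j}

def f(x):
    return sum(c * cmath.exp(1j * n * x) for n, c in coeffs.items())

def df(x):
    # Differentiate "diagonally": scale each coefficient c_n by i*n.
    return sum((1j * n) * c * cmath.exp(1j * n * x) for n, c in coeffs.items())

# Compare the diagonal rule against a direct numerical derivative.
x0, h = 0.3, 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)
print(abs(numeric - df(x0)))  # tiny: the diagonal rule matches
```

This is the same phenomenon as $e^{kx}$ being an eigenvector of $\frac{d}{dx}$, just with imaginary exponents.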

At least in the countable bases, you can technically talk about a matrix for the derivative operator, although it is an infinite matrix. Of course, you can also take a basis containing only one function - your function. In that case, you have a $1\times 1$ matrix :)