Get $e^x$ from the polynomial derivative matrix


I'm studying linear algebra and tried to apply it in a different setting: the space of polynomial functions.

Can we get $e^x$ just by knowing that it is an eigenfunction of the derivative operator, without knowing anything about the definition of the derivative, only its matrix $D$? $$D = \begin{pmatrix} 0 & 1 & 0 & 0 & \cdots\\ 0 & 0 & 2 & 0 & \cdots\\ 0 & 0 & 0 & 3 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

I mean: if we are given the transformation matrix $D$ (with respect to the canonical basis $1, x, x^2, \dots$) that outputs the derivative of a function, what can we do to find its eigen-everything? I tried to follow the usual approach, but when I compute the characteristic polynomial $\det(D-\lambda I)$, I get $$\det\begin{pmatrix} -\lambda & 1 & 0 & 0 & \cdots\\ 0 & -\lambda & 2 & 0 & \cdots\\ 0 & 0 & -\lambda & 3 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix},$$ which is a triangular matrix whose determinant is the product $(-\lambda)^{\infty}$. That says $0$ is the only eigenvalue, even though we know that $1$ is indeed another eigenvalue, since $D e^x = 1\cdot e^x$.
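To make the issue concrete, here is a quick numerical sketch on a finite $N \times N$ truncation of $D$ (the truncation size $N$ is an arbitrary choice; the actual operator is infinite). The truncated matrix differentiates polynomial coefficient vectors correctly, and it is strictly upper triangular, hence nilpotent, so $0$ is its only eigenvalue:

```python
import numpy as np

N = 6  # truncation size (arbitrary choice; the real D is infinite)
D = np.zeros((N, N))
for k in range(1, N):
    D[k - 1, k] = k  # column k sends x^k to k * x^(k-1)

# p(x) = 1 + 2x + 3x^2  has  p'(x) = 2 + 6x
p = np.array([1.0, 2.0, 3.0, 0.0, 0.0, 0.0])
print(D @ p)  # [2. 6. 0. 0. 0. 0.]

# D is strictly upper triangular, hence nilpotent: D^N = 0,
# so its characteristic polynomial is (-lambda)^N and 0 is its only eigenvalue.
print(np.linalg.matrix_power(D, N))  # the zero matrix
```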

What am I missing?


There are 2 answers below.

BEST ANSWER

It is dangerous to take concepts from finite-dimensional cases and apply them to infinite-dimensional objects. A priori, the determinant only makes sense in finite dimensions (the infinite product ruins everything).

The main problem with your reasoning here is that the exponential function is not in the polynomial space, and therefore it is not an eigenfunction of your operator. If you want the exponential function to lie in your function space, then you need to consider the ring of formal power series. However, if you do that, then $1, x, x^2, \dots$ is no longer a basis.

In fact, we can check directly that your operator has no eigenvalues except $0$. Let $(a_0, a_1, \dots)$ be an element of your polynomial function space (the $a_i$ being the coefficients with respect to the canonical basis); then the eigenvalue equation reads

$$ (a_1, 2a_2, \dots ) = D(a_0, a_1, \dots) = \lambda (a_0, a_1, \dots) = (\lambda a_0, \lambda a_1, \dots). $$

Thus, we have for all $n\geq 0$

$$ \lambda a_n = (n+1) a_{n+1}.$$

By induction, one proves $a_n= \frac{\lambda^n}{n!} a_0$ for all $n\geq 0$. For $\lambda \neq 0$ this forces either $a_0 = 0$ (so the vector is zero) or infinitely many nonzero coefficients (so it is not a polynomial, which has only finitely many nonzero coefficients). Thus, the only possible eigenvalue is zero.
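A short sanity check of the recursion (the values of $\lambda$ and $a_0$ below are arbitrary nonzero choices): the sequence $a_n = \frac{\lambda^n}{n!} a_0$ does satisfy $\lambda a_n = (n+1) a_{n+1}$, but every entry is nonzero, so it cannot be the coefficient sequence of a polynomial:

```python
from math import factorial

lam, a0 = 2.0, 1.0  # arbitrary nonzero eigenvalue candidate and a_0
a = [lam**n / factorial(n) * a0 for n in range(12)]

# the eigenvalue recursion lambda * a_n = (n+1) * a_{n+1} holds...
assert all(abs(lam * a[n] - (n + 1) * a[n + 1]) < 1e-9 for n in range(11))

# ...but every coefficient is nonzero, so this is not a polynomial
assert all(x != 0 for x in a)
```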

For $\lambda = 0$ we have the eigenvector

$$ (1, 0, 0, \dots). $$
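For completeness, a one-line check on a finite truncation (the size is again arbitrary) that the constant polynomial $1 = (1, 0, 0, \dots)$ is indeed a $0$-eigenvector:

```python
import numpy as np

N = 5  # arbitrary truncation size
D = np.diag(np.arange(1.0, N), k=1)  # truncated derivative matrix
e0 = np.zeros(N)
e0[0] = 1.0  # the constant polynomial 1 = (1, 0, 0, ...)
assert np.allclose(D @ e0, 0)  # D * e0 = 0 * e0
```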


As another answer points out, you have to be quite careful when you pass from finite-dimensional concepts to infinite-dimensional ones, and certain concepts like the determinant just don't make sense anymore.

However, eigenvalues and eigenvectors still make sense; it's just that you can't use $\det(\lambda I - D)$ to find them anymore, because determinants don't make sense. The other answer shows how to derive the eigenvector for eigenvalue $1$, and finds that it has infinitely many nonzero terms, so it cannot be a polynomial. But you still know what the exponential should be from this: it should be the "polynomial-like" thing $$e^x = 1 + x + \frac{1}{2!} x^2 + \frac{1}{3!}x^3 + \cdots$$ The only problem is that it doesn't lie in the vector space you were considering, which consists of vectors of the form $(a_0, a_1, a_2, \ldots)$ with only finitely many $a_i$ nonzero.

A simple way to fix this is to enlarge your vector space: allow the vector $(a_0, a_1, a_2, \ldots)$ to have any coefficients, so that, for example, $(1, 1, 1, \ldots)$ is now a vector that lives in your space. Certain things go wrong when you do this (for example, there is no obvious basis any more), but you can still write down vectors and operators, and define the "differentiation operator" $D$ in the same way: $$ D(a_0, a_1, a_2, \ldots) = (a_1, 2a_2, 3a_3, \ldots),$$ and, applying the same logic, find that the unique vector (up to scaling) with eigenvalue $1$ is $$ (1, 1, \frac{1}{2!}, \frac{1}{3!}, \ldots),$$ and this time it is a vector in the space!
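The enlarged space can be modelled directly: a minimal sketch (my own representation, using a function $n \mapsto a_n$ for an arbitrary coefficient sequence, and exact rational arithmetic) shows that $D$ fixes the exponential's coefficient sequence, i.e. it is an eigenvector with eigenvalue $1$:

```python
from fractions import Fraction
from math import factorial

def D(a):
    """Differentiation on coefficient sequences: (D a)_n = (n+1) * a_{n+1}."""
    return lambda n: (n + 1) * a(n + 1)

def exp_coeffs(n):
    """Coefficients of e^x: 1, 1, 1/2!, 1/3!, ..."""
    return Fraction(1, factorial(n))

# (n+1) * 1/(n+1)! = 1/n!, so D fixes the sequence exactly
Da = D(exp_coeffs)
assert all(Da(n) == exp_coeffs(n) for n in range(10))
```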

To answer your original question: yes, I think you can derive the exponential knowing only its matrix, but you can't use the usual finite-dimensional approaches, and you need to be fairly careful along the way.