I'm studying linear algebra and tried to apply it in a different setting: the space of polynomial functions.
Can we obtain $e^x$ just by knowing that it is an eigenfunction of the derivative operator, without knowing anything about the definition of the derivative, only its matrix $D$?
$$D = \begin{pmatrix} 0 & 1 & 0 & 0 & \cdots\\ 0 & 0 & 2 & 0 & \cdots\\ 0 & 0 & 0 & 3 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
I mean: if we are given the transformation matrix $D$ (with respect to the canonical basis $1, x, x^2, \dots$) that outputs the derivative of a function, what can we do to find its eigen-everything? I tried to follow the usual approach, but when I compute the characteristic polynomial $\det(D-\lambda I)$, I get
$$\det\begin{pmatrix} -\lambda & 1 & 0 & 0 & \cdots\\ 0 & -\lambda & 2 & 0 & \cdots\\ 0 & 0 & -\lambda & 3 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix},$$
which is the determinant of a triangular matrix: the product of its diagonal entries, $(-\lambda)^{\infty}$. That says $0$ is the only eigenvalue, even though we know that $1$ is another eigenvalue, since $D e^x = 1 \cdot e^x$.
What am I missing?
It is dangerous to take concepts from the finite-dimensional case and apply them to infinite-dimensional objects. The determinant a priori only makes sense in finite dimensions (the infinite products very much ruin everything).
The main problem with your reasoning is that the exponential function is not in the polynomial space, and therefore it is not an eigenfunction of your operator. If you want the exponential function to lie in your function space, you would need to work in the ring of formal power series. However, if you do that, then $1, x, x^2, \dots$ are no longer a basis.
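To make this concrete, here is a small numerical sketch (assuming NumPy; the truncation size $n = 8$ is an arbitrary choice): applying a finite truncation of $D$ to the truncated Taylor coefficients of $e^x$ reproduces every coefficient except the last, which gets lost. So no polynomial truncation of $e^x$ is an eigenvector.

```python
import numpy as np
from math import factorial

n = 8  # work in the space of polynomials of degree < n (arbitrary choice)

# Truncated derivative matrix from the question: D[i, i+1] = i + 1.
D = np.diag(np.arange(1, n), k=1).astype(float)

# Taylor coefficients of e^x truncated to degree < n: c_k = 1/k!.
c = np.array([1.0 / factorial(k) for k in range(n)])

Dc = D @ c

# All entries agree with c except the last one: differentiation loses the
# top-degree information, so the truncation of e^x is NOT an eigenvector.
print(np.allclose(Dc[:-1], c[:-1]))  # True
print(Dc[-1] == 0.0, c[-1] != 0.0)   # True True
```

Whatever degree bound you pick, the last coefficient is always sent to $0$, which mirrors the fact that $e^x$ only becomes an eigenvector once infinitely many coefficients are allowed.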
In fact, we can check directly that your operator has no eigenvalues except $0$. Let $(a_0, a_1, \dots)$ be an element of your polynomial function space (the $a_i$ being the coefficients with respect to the canonical basis). Then the eigenvalue equation reads
$$ (a_1, 2a_2, \dots ) = D(a_0, a_1, \dots) = \lambda (a_0, a_1, \dots) = (\lambda a_0, \lambda a_1, \dots). $$
Thus, we have for all $n\geq 0$
$$ \lambda a_n = (n+1) a_{n+1}.$$
By induction, one proves $a_n = \frac{\lambda^n}{n!} a_0$ for all $n\geq 0$. For $\lambda \neq 0$ and $a_0 \neq 0$, this sequence has infinitely many nonzero entries, so it is not a polynomial (polynomials have only finitely many nonzero coefficients); and $a_0 = 0$ forces $a_n = 0$ for all $n$. Hence the only possible eigenvalue is $\lambda = 0$.
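The same conclusion can be checked numerically on any finite truncation: the truncated matrix is strictly upper triangular, hence nilpotent, so $0$ is its only eigenvalue. A minimal sketch, assuming NumPy and an arbitrary truncation size $n = 8$:

```python
import numpy as np

n = 8
D = np.diag(np.arange(1, n), k=1).astype(float)  # truncated derivative matrix

# Strictly upper triangular => nilpotent: D^n is exactly the zero matrix.
print(np.array_equal(np.linalg.matrix_power(D, n), np.zeros((n, n))))  # True

# Consequently every eigenvalue of the truncation is 0.
print(np.allclose(np.linalg.eigvals(D), 0))  # True
```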
For $\lambda = 0$ we have the eigenvector
$$ (1, 0, 0, \dots). $$
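If one does enlarge the space to formal power series, the induction above produces genuine eigenvectors: the coefficients $a_n = \frac{\lambda^n}{n!} a_0$ are exactly those of $a_0 e^{\lambda x}$, and they satisfy the recursion $\lambda a_n = (n+1) a_{n+1}$ termwise. A quick check of that recursion for arbitrary sample values ($\lambda = 1.5$, $a_0 = 2$, first $N = 20$ terms are all assumptions for illustration):

```python
from math import factorial

lam, a0, N = 1.5, 2.0, 20  # arbitrary sample eigenvalue, scaling, and cutoff

# Candidate eigenvector from the induction: a_n = lam^n / n! * a0,
# i.e. the Taylor coefficients of a0 * e^(lam * x).
a = [lam**n / factorial(n) * a0 for n in range(N)]

# The eigenvalue recursion lam * a_n = (n+1) * a_{n+1} holds termwise
# (up to floating-point rounding), so e^(lam * x) is an eigenvector of
# d/dx once infinitely many coefficients are allowed.
print(all(abs(lam * a[n] - (n + 1) * a[n + 1]) < 1e-12 for n in range(N - 1)))  # True
```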