I wonder if anyone knows a reference for this question:
For a complex matrix $A$ with only one eigenvalue $\lambda$ such that $||A^k || \leq C|\lambda|^k$ (for some constant $C$), can we say $A$ is diagonalizable?
$\newcommand{nrm}[1]{\left\lVert {#1}\right\rVert}\newcommand{abs}[1]{\left\lvert {#1}\right\rvert}$ Let $A$ be a triangulable matrix with exactly one eigenvalue $\lambda$, and let there be some $C>0$ such that $\nrm {A^k}\le C\abs\lambda^k$ for all $k\ge 1$.
If $\lambda=0$, then $A=0$ by considering $k=1$.
If $\lambda\ne0$, then $A-\lambda I$ is nilpotent, since $0$ is its only eigenvalue; let $s$ be the least natural number such that $(A-\lambda I)^{s+1}=0$. By the binomial theorem and the reverse triangle inequality, \begin{align}\nrm{A^k}&=\nrm{(\lambda I+(A-\lambda I))^k}=\nrm{\sum_{h=0}^s \lambda^{k-h}\binom kh (A-\lambda I)^h}\\&\ge\nrm{\lambda^{k-s}\binom ks(A-\lambda I)^s}-\nrm{\sum_{h=0}^{s-1}\lambda^{k-h}\binom kh(A-\lambda I)^h}\\&\ge\abs\lambda^k\left(\binom ks\nrm{\lambda^{-s}(A-\lambda I)^s}-\sum_{h=0}^{s-1}\binom kh\nrm{\lambda^{-h}(A-\lambda I)^h}\right).\end{align}
Now, the quantities $\nrm{\lambda^{-h}(A-\lambda I)^h}$ for $h=0,1,\ldots,s$ are non-zero constants that depend only on $A$ and $\lambda$. On the other hand, each $\binom kh$ is a polynomial of degree exactly $h$ in $k$. Therefore $\frac1{\abs\lambda^k}\nrm{A^k}$ grows asymptotically at least as fast as some polynomial of degree $s$ in $k$. The hypothesis $\nrm{A^k}\le C\abs\lambda^k$ therefore forces $s=0$, which implies $(A-\lambda I)^{0+1}=0$, i.e. $A=\lambda I$.
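As a quick numerical sanity check of the growth argument (a sketch, assuming NumPy; the matrix and values are illustrative), one can watch $\nrm{A^k}/\abs\lambda^k$ grow like a polynomial of degree $s$ for a single Jordan block, where $s+1$ is the nilpotency index of $A-\lambda I$:

```python
import numpy as np

# Illustrative example: a 3x3 Jordan block J with eigenvalue lam, so
# (J - lam*I)^3 = 0 and s = 2 in the notation of the answer above.
lam = 0.5
J = lam * np.eye(3) + np.diag(np.ones(2), k=1)

# The ratio ||J^k|| / |lam|^k should grow like a degree-2 polynomial in k.
ratios = [np.linalg.norm(np.linalg.matrix_power(J, k), 2) / abs(lam) ** k
          for k in (10, 20, 40)]

# Doubling k should multiply the ratio by roughly 2^s = 4.
print(ratios[1] / ratios[0], ratios[2] / ratios[1])
```

The printed quotients hover near $4=2^s$, consistent with degree-$2$ growth, so no constant $C$ can bound $\nrm{J^k}/\abs\lambda^k$.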
The answer is yes. While the idea of the proof can usually be found in the development of Gelfand's formula, I don't think the result is stated as a theorem in many books.
In general, suppose $\lambda$ is an eigenvalue of maximum modulus of a complex square matrix $A$ (i.e. $|\lambda|=\rho(A)$, the spectral radius). If there is a positive constant $C$ such that $\|A^k\|\le C|\lambda|^k$ for every positive integer $k$, then $\lambda$ must be semi-simple. (In your case, since $\lambda$ is the only eigenvalue, semi-simplicity of $\lambda$ implies that $A$ is diagonalisable.)
Suppose, on the contrary, that $\lambda$ is not semi-simple. Then $Av=\lambda v$ and $Au=\lambda u+v$ for some eigenvector $v$ and generalised eigenvector $u$. It follows by induction on $k$ that $A^ku=\lambda^ku+k\lambda^{k-1}v$. Now pick any vector norm $\|\cdot\|$ such that $\|au+bv\|=\max(\|au\|,\|bv\|)$ for all scalars $a,b$ (e.g. take $\|x\|=\|P^{-1}x\|_\infty$, where $P$ is any invertible matrix whose first two columns are $u$ and $v$), and let $\|M\|=\sup_{x\ne0}\frac{\|Mx\|}{\|x\|}$ be the induced matrix norm. Since all norms on a finite-dimensional vector space are equivalent, we may assume (at the cost of enlarging $C$) that the bound $\|A^k\|\le C|\lambda|^k$ holds in this induced norm. Thus $$ k|\lambda|^{k-1}\|v\|\le\|\lambda^ku+k\lambda^{k-1}v\|=\|A^ku\|\le\|A^k\|\|u\|\le C|\lambda|^k\|u\|\tag{1} $$ for every $k\ge1$. But this is impossible: if $\lambda=0$, $(1)$ is violated at $k=1$; if $\lambda\ne0$, dividing by $|\lambda|^k$ gives $\frac k{|\lambda|}\|v\|\le C\|u\|$, which fails for large $k$. Thus $\lambda$ must be semi-simple.
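The identity $A^ku=\lambda^ku+k\lambda^{k-1}v$ and the resulting unbounded growth of $\|A^ku\|/|\lambda|^k$ can be checked numerically (a sketch, assuming NumPy; the $2\times2$ Jordan block and values below are illustrative):

```python
import numpy as np

# Illustrative non-diagonalisable matrix A = [[lam, 1], [0, lam]],
# with eigenvector v = e1 and generalised eigenvector u = e2.
lam = 2.0
A = np.array([[lam, 1.0], [0.0, lam]])
u = np.array([0.0, 1.0])  # generalised eigenvector: A u = lam*u + v
v = np.array([1.0, 0.0])  # eigenvector: A v = lam*v

for k in (5, 10):
    Ak_u = np.linalg.matrix_power(A, k) @ u
    # Verify the closed form A^k u = lam^k u + k lam^(k-1) v.
    assert np.allclose(Ak_u, lam**k * u + k * lam**(k-1) * v)
    # In the infinity norm this ratio equals k/lam here, so it is unbounded
    # in k and no constant C with ||A^k|| <= C |lam|^k can exist.
    print(k, np.linalg.norm(Ak_u, np.inf) / abs(lam)**k)
```

The ratio doubles when $k$ doubles, exactly the linear growth that inequality $(1)$ rules out under the hypothesis.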