I'm reading the Wikipedia page on generalized eigenvectors, and I'm stuck on one part:
They show in the canonical basis section that (assuming a fixed eigenvalue $\lambda$ for the whole post), $\rho_k$, the number of linearly independent generalized eigenvectors of rank $k$, can be found by looking at the difference in ranks for adjacent powers of $(A - \lambda I)$:
$\rho_k = \text{rank}[(A - \lambda I)^{k-1}] - \text{rank}[(A - \lambda I)^k]$
I understand this via rank-nullity: the drop in rank from $(A - \lambda I)^{k-1}$ to $(A - \lambda I)^k$ equals the growth in the dimension of the nullspace, so that difference tells you how many generalized eigenvectors need exactly $k$ (and not fewer) applications of $(A - \lambda I)$ to be mapped all the way to zero, which is the definition of a generalized eigenvector of rank $k$.
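To make sure I'm reading the formula right, I sanity-checked it numerically on a made-up $3 \times 3$ example with $\lambda = 2$ and Jordan blocks of sizes 2 and 1, so I'd expect $\rho_1 = 2$ and $\rho_2 = 1$:

```python
import numpy as np

# Made-up example: lambda = 2, Jordan blocks of sizes 2 and 1,
# so there should be rho_1 = 2 rank-1 generalized eigenvectors
# (ordinary eigenvectors) and rho_2 = 1 of rank 2.
lam = 2.0
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
M = A - lam * np.eye(3)

def rho(k):
    """rank[(A - lam*I)^(k-1)] - rank[(A - lam*I)^k]"""
    return (np.linalg.matrix_rank(np.linalg.matrix_power(M, k - 1))
            - np.linalg.matrix_rank(np.linalg.matrix_power(M, k)))

print(rho(1), rho(2), rho(3))  # 2 1 0
```

which matches what I'd expect from the block structure.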
A generalized eigenvector of rank $k$ generates a Jordan chain: each application of $(A - \lambda I)$ produces a generalized eigenvector of one lower rank, as they mention in the intro. And the vectors in that chain should all be linearly independent.
My question is: is it possible to have two generalized eigenvectors of rank $k$ both get mapped to the same generalized eigenvector of rank $k - 1$ by applying $(A - \lambda I)$? I.e., is it possible for the chains to "merge"?
E.g., if $x$ is a generalized eigenvector of rank $k-1$ and $y_1 \neq y_2$ are generalized eigenvectors of rank $k$, is it possible to have:
$(A - \lambda I) y_1 = x$ and $(A - \lambda I) y_2 = x$ ?
My guess is no, because that would imply $(A - \lambda I)(y_1 - y_2) = 0$, which would make the (nonzero) difference $y_1 - y_2$ a generalized eigenvector of rank $1$, i.e., an ordinary eigenvector, but I can't figure out how to see intuitively why that would be a contradiction.
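As a sanity check on that implication (a minimal sketch, assuming a single $2 \times 2$ Jordan block with $\lambda = 0$ for simplicity):

```python
import numpy as np

# A - lam*I for an assumed 2x2 Jordan block with lambda = 0.
M = np.array([[0.0, 1.0],
              [0.0, 0.0]])

y1 = np.array([0.0, 1.0])   # rank 2: M @ y1 != 0, M @ M @ y1 == 0
y2 = np.array([1.0, 1.0])   # also rank 2, and y2 != y1

print(M @ y1, M @ y2)       # both give the same image x
print(M @ (y1 - y2))        # zero: the difference is killed in one step
```

So at least the algebra checks out: whenever the two images agree, the difference $y_1 - y_2$ lands in the kernel of $(A - \lambda I)$.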