Let's assume we have an upper triangular matrix $A \equiv (A_{i j}) \in \mathbb{R}^{n\times n}$ for which we know a diagonalisation $A = \Omega D \Omega^{-1}$.
If we now choose some index subsequence $(i_k)_{k=1, \ldots, n'}$, and delete all rows and columns whose indices are not contained in the subsequence, we obtain a sub-matrix $A' \in \mathbb{R}^{n' \times n'}$.
How can we get a diagonalisation $A' = \Omega' D' \Omega'^{-1}$, with $\Omega' \in \mathbb{R}^{n'\times n'}$, from the existing diagonalisation of $A$?
If need be, we can additionally make the following assumptions (though I doubt that they help much): $A$ has
- non-negative off-diagonal entries,
- non-positive diagonal entries,
- and a last row that consists only of zeros.
I have wondered about this for a bit because initially it seemed like the problem should be trivial, but I haven't actually been able to come to a conclusion. Simply sub-indexing $\Omega$, $D$, and $\Omega^{-1}$ was the obvious idea, but I don't see how to actually do it.
Edit: For clarification, this is about a genuine diagonalisation with some square change of basis $\Omega'$, not just a product of a rectangular, a diagonal, and a rectangular matrix.
For an illustration as requested by @Will Jagy, we can consider a $3\times 3$ matrix \begin{equation} A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{pmatrix} \end{equation} which we can diagonalise as \begin{equation} \Omega^{-1} A \Omega = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix} \end{equation} for some $\Omega \in \mathbb{R}^{3 \times 3}$. Now, we would like to use this information to obtain $\Omega' \in \mathbb{R}^{2 \times 2}$ such that it provides a diagonalisation of the upper left $2 \times 2$ sub-matrix of $A$:
\begin{equation} \Omega'^{-1} \begin{pmatrix} a_{11} & a_{12} \\ 0 & a_{22} \end{pmatrix} \Omega' = \begin{pmatrix} \lambda'_1 & 0 \\ 0 & \lambda'_2 \end{pmatrix} \end{equation}
with eigenvalues $\lambda'_1 = a_{11}$ and $\lambda'_2 = a_{22}$; the sub-matrix is again upper triangular, so its eigenvalues are a subset of those of $A$.
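For concreteness, here is a quick numerical check that naively sub-indexing $\Omega$ fails for a non-contiguous index subsequence; all matrix entries below are made up, and the subsequence $(1, 3)$ deletes the middle index. A minimal NumPy sketch:

```python
import numpy as np

# Hypothetical upper triangular matrix with distinct eigenvalues 1, 2, 3.
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])

# Diagonalisation A = Omega @ np.diag(lam) @ inv(Omega).
lam, Omega = np.linalg.eig(A)

idx = [0, 2]                         # keep indices 1 and 3 (0-based: 0 and 2)
A_sub = A[np.ix_(idx, idx)]          # the sub-matrix A'
Omega_sub = Omega[np.ix_(idx, idx)]  # naive sub-indexing of Omega

# If sub-indexing worked, this product would be diagonal;
# here it has a nonzero off-diagonal entry.
print(np.linalg.inv(Omega_sub) @ A_sub @ Omega_sub)
```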
The upper left idea helped. Take the 5 by 5 $$ M= \left( \begin{array}{rrr|rr} 1 & A & B & * &* \\ 0&1 & C & * &* \\ 0&0&1 & * & * \\ \hline 0&0&0&2 & * \\ 0&0&0&0&2 \\ \end{array} \right) $$ For the eigenvalue $1$, which has algebraic multiplicity $3$, we demand that $M-I$ have a three-dimensional null space:
$$ M-I= \left( \begin{array}{rrr|rr} 0 & A & B & * &* \\ 0&0 & C & * &* \\ 0&0&0 & * & * \\ \hline 0&0&0&1 & * \\ 0&0&0&0&1 \\ \end{array} \right) $$
Next we row reduce this; in particular, since the lower right block is invertible, row operations (which do not change the null space) can erase the upper right rectangle:
$$ (M-I)_1= \left( \begin{array}{rrr|rr} 0 & A & B & 0 &0 \\ 0&0 & C & 0 &0 \\ 0&0&0 & 0 & 0 \\ \hline 0&0&0&1 & * \\ 0&0&0&0&1 \\ \end{array} \right) $$ In fact, a further row operation squashes the final $*$ as well:
$$ (M-I)_2= \left( \begin{array}{rrr|rr} 0 & A & B & 0 &0 \\ 0&0 & C & 0 &0 \\ 0&0&0 & 0 & 0 \\ \hline 0&0&0&1 & 0 \\ 0&0&0&0&1 \\ \end{array} \right) $$
When do we have three independent null vectors for this? Only when $A=B=C=0$: the two pivots in the lower right block already contribute rank $2$, so a three-dimensional null space forces the strictly upper triangular block to vanish. So we actually had this from the beginning:
$$ M= \left( \begin{array}{rrr|rr} 1 & 0 & 0 & * &* \\ 0&1 & 0 & * &* \\ 0&0&1 & * & * \\ \hline 0&0&0&2 & * \\ 0&0&0&0&2 \\ \end{array} \right) $$
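As a numerical sanity check of this rank count, we can compute the geometric multiplicity of the eigenvalue $1$ directly; the sketch below fills the $*$ entries with an arbitrary made-up value $0.5$:

```python
import numpy as np

def geom_mult_of_1(A_, B_, C_, star=0.5):
    """Geometric multiplicity of the eigenvalue 1 of M,
    i.e. the number of independent null vectors of M - I."""
    M = np.array([[1.0, A_,  B_,  star, star],
                  [0.0, 1.0, C_,  star, star],
                  [0.0, 0.0, 1.0, star, star],
                  [0.0, 0.0, 0.0, 2.0,  star],
                  [0.0, 0.0, 0.0, 0.0,  2.0]])
    return 5 - np.linalg.matrix_rank(M - np.eye(5))

print(geom_mult_of_1(1.0, 1.0, 1.0))  # 1: eigenvalue 1 is defective
print(geom_mult_of_1(0.0, 0.0, 0.0))  # 3: as required for diagonalisability
```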