Let $$J= \begin{pmatrix}0 & \vec 0_m^T & \vec 0_n^T\\ \vec0_m & A & B\\ \vec0_n & C & D \end{pmatrix}.$$
Here $A$ is an $m\times m$ matrix, $D$ is $n\times n$, and $B$ and $C$ are $m\times n$ and $n\times m$, respectively, with $m, n \ge 1$. Also, $A$, $B$, $C$ and $D$ are all nonzero. This means that $J$ is at least a $3\times 3$ matrix, but always square.
Doing some symbolic computation with Maple, I kept the overall size of $J$ fixed at $N \times N$ and varied the sizes of $A$, $B$, $C$ and $D$.
For example, let $$J = \begin{pmatrix} 0 & \vec0_m^T & \vec0_n^T & \vec0_n^T\\ \vec0_m & A & B & B\\ \vec0_n & C & D & D\\ \vec0_n & C & D & D \end{pmatrix}.$$
In all cases, up to $J$ being a $12 \times 12$ matrix, computing the eigenvalues always gives a list of $0$'s followed by exactly $2$ nonzero eigenvalues (this assumes $A$ and $D$ each have only $1$ eigenvalue). I suppose, in general, the total number of nonzero eigenvalues will be a multiple of the number of eigenvalues of each block?
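The observation above can be reproduced numerically; here is a minimal sketch in NumPy (rather than Maple), using hypothetical $1\times 1$ blocks $A=1$, $B=2$, $C=3$, $D=4$ in the repeated-block pattern shown:

```python
import numpy as np

# Build J with the repeated-block pattern above, using 1x1 blocks
# A = 1, B = 2, C = 3, D = 4 (arbitrary illustrative values).
J = np.array([
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 2.0, 2.0],
    [0.0, 3.0, 4.0, 4.0],
    [0.0, 3.0, 4.0, 4.0],
])

eigs = np.linalg.eigvals(J)
nonzero = eigs[np.abs(eigs) > 1e-8]

# The first row of J is zero and rows 3 and 4 are identical, so
# rank(J) = 2: two eigenvalues are 0 and only two are nonzero.
print(len(nonzero))  # 2
```

The count of nonzero eigenvalues here is just the rank of $J$, which the repeated rows and the zero border keep small; that is what produces the long list of zeros.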
I have searched Google to see why this is the case and have not found a reason. I was hoping it would be a theorem or something.
Can someone help me with this?
Thanks to Cameron Buie for the correction suggestions.
Clearly false with $$ J=\left(\begin{array}{c|c|cc} 0 & 0 & 0 & 0\\\hline 0 &1&0&0\\\hline 0 &0&2&0\\ 0 &0&0&3 \end{array}\right), $$ for example.
Addendum: even with the condition $A,B,C,D$ nonzero, the result is still false, as demonstrated by $$ J=\left(\begin{array}{c|c|cc} 0 & 0 & 0 & 0\\\hline 0 &1&10^{-10}&10^{-10}\\\hline 0 &10^{-10}&2&10^{-10}\\ 0 &10^{-10}&10^{-10}&3 \end{array}\right), $$ where $10^{-10}$ is just an extremely small number, so it won't perturb the eigenvalues by much.
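The addendum's counterexample is easy to verify numerically; a quick NumPy sketch (the $10^{-10}$ entries play the role of the tiny nonzero blocks $B$ and $C$):

```python
import numpy as np

eps = 1e-10  # tiny nonzero entries so B and C are not the zero block
J = np.array([
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, eps, eps],
    [0.0, eps, 2.0, eps],
    [0.0, eps, eps, 3.0],
])

eigs = np.sort(np.linalg.eigvals(J).real)

# The eigenvalues are approximately 0, 1, 2, 3: three nonzero
# eigenvalues rather than two, so the conjectured pattern fails.
print(np.allclose(eigs, [0, 1, 2, 3], atol=1e-6))  # True
```

The perturbation of the diagonal eigenvalues is of order $10^{-20}$, far below the tolerance used here.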