I have a block matrix $M = L-I$, where $L$ takes the form $$ L= \begin{bmatrix} 0 & L_2& \ldots & L_M \\ L_1 &0 &\ldots & L_M \\ \vdots & \vdots & \ddots & \vdots \\ L_1 & L_2& \ldots& 0 \end{bmatrix} $$ and $I$ is an appropriately sized identity matrix. The $L_m$ are all negative semi-definite, with one eigenvalue $\lambda_1 = -\frac{1}{2}$ and another eigenvalue $ -\frac{1}{2}< \lambda_2 <0$ (Edit: the exact value depends on the matrix, so $\lambda_2$ may differ between blocks). Some of the $L_m$ may not be full rank and thus have an additional eigenvalue $\lambda_3 =0$, which reduces the multiplicity of $\lambda_2$.
I can see numerically that the largest positive eigenvalue of $L$ is $\frac{1}{2}$, and thus the largest eigenvalue of $M$ (not in absolute value) is $-\frac{1}{2}$.
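To illustrate, here is a minimal numerical check of the kind I ran. The blocks are hypothetical test data (random symmetric negative definite matrices with $\lambda_\min = -\frac{1}{2}$ and the remaining eigenvalues drawn from $(-\frac{1}{2},0)$), and the sizes `M`, `N` are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 3  # number of blocks and block size (arbitrary illustrative choices)

def random_block(rng, n):
    """Symmetric negative definite block with lambda_min = -1/2 and the
    remaining eigenvalues drawn from (-1/2, 0) (hypothetical test data)."""
    evals = np.concatenate(([-0.5], rng.uniform(-0.5, 0.0, n - 1)))
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal basis
    return Q @ np.diag(evals) @ Q.T

blocks = [random_block(rng, N) for _ in range(M)]

# Assemble L: block (i, j) equals L_j for i != j, and 0 on the diagonal.
L = np.zeros((M * N, M * N))
for i in range(M):
    for j in range(M):
        if i != j:
            L[i*N:(i+1)*N, j*N:(j+1)*N] = blocks[j]

# L is not symmetric, but its spectrum turns out to be real; take real
# parts to discard numerical noise.
lam_max = np.linalg.eigvals(L).real.max()
print(lam_max)
```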
Is there a way to prove this numerical result based on the known eigenvalues of the $L_m$ and the structure of $L$?
Thank you!
Edit: $\lambda_i$ for $i =1,2,3$ are the only eigenvalues of the $L_i$. Not every $L_i$ has $\lambda_3=0$ as an eigenvalue, but those that do are not invertible, as some of their rows and the corresponding columns are all zero.
It isn't clear whether the $L_i$s all have the same size, or whether their eigenvalues are listed in ascending order. I will suppose that they have identical sizes, that $-\frac{1}{2}I\preceq L_i\preceq0$, and that $\lambda_\min(L_i)=-\frac12$ for each $i$.
Suppose each $L_i$ is $N\times N$. Let $P=\operatorname{diag}(\sqrt{-2L_1},\sqrt{-2L_2},\ldots,\sqrt{-2L_M})$ and let $e\in\mathbb R^M$ be the vector of ones. Then $L=\left[(ee^T-I_M)\otimes I_N\right]\left(-\frac{P^2}{2}\right)$. Since $AB$ and $BA$ have the same spectrum (counting multiplicities) for square $A,B$, $$ \lambda_j(L) =\frac12\lambda_j\!\left(\left[(I_M-ee^T)\otimes I_N\right]P^2\right) =\frac12\lambda_j\!\left(P\left[(I_M-ee^T)\otimes I_N\right]P\right). $$ Now, suppose that each $L_i$ is negative definite. Then $P$ is invertible and $\|P\|_2=1$, because $0\prec-2L_i\preceq I$ and $\lambda_\max(-2L_i)=-2\lambda_\min(L_i)=1$. Since $P\left[(I_M-ee^T)\otimes I_N\right]P$ is congruent to $(I_M-ee^T)\otimes I_N$, whose eigenvalues are $1$ (with multiplicity $(M-1)N$) and $1-M$ (with multiplicity $N$), Sylvester's law of inertia shows that precisely $(M-1)N$ of its eigenvalues are positive and the remaining $N$ are negative. That is, if we arrange the eigenvalues of $L$ in ascending order, we have $$ \lambda_1(L)\le\cdots\le\lambda_N(L)<0<\lambda_{N+1}(L)\le\cdots\le\lambda_{MN}(L) $$ and $$ \lambda_{MN}(L)=\lambda_\max(L)=\frac12\lambda_\max\!\left(P\left[(I_M-ee^T)\otimes I_N\right]P\right). $$ Let $x$ be a unit eigenvector corresponding to the maximum eigenvalue of $P\left[(I_M-ee^T)\otimes I_N\right]P$. Then $$ \begin{aligned} 0&<\frac12\lambda_\max\!\left(P\left[(I_M-ee^T)\otimes I_N\right]P\right)\\ &=\frac12x^TP\left[(I_M-ee^T)\otimes I_N\right]Px\\ &=\frac12\|Px\|_2^2\left(\frac{Px}{\|Px\|_2}\right)^T\left[(I_M-ee^T)\otimes I_N\right]\left(\frac{Px}{\|Px\|_2}\right)\\ &\le\frac12\left(\frac{Px}{\|Px\|_2}\right)^T\left[(I_M-ee^T)\otimes I_N\right]\left(\frac{Px}{\|Px\|_2}\right)\\ &\le\frac12\max_{\|y\|_2=1}y^T\left[(I_M-ee^T)\otimes I_N\right]y\\ &=\frac12\lambda_\max\left[(I_M-ee^T)\otimes I_N\right]\\ &=\frac12, \end{aligned} $$ where the first inequality uses $\|Px\|_2\le\|P\|_2\|x\|_2=1$ together with the positivity of the quadratic form on that line. It follows that $0<\lambda_\max(L)\le\frac12$ when the $L_i$s are negative definite.
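The identities above are easy to sanity-check numerically. A sketch with hypothetical random blocks (symmetric negative definite, $\lambda_\min=-\frac12$; the sizes `M`, `N` and the helper functions are my own illustrative choices), verifying the factorization of $L$, the matching spectra, and the inertia count:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 3  # illustrative sizes

def psd_sqrt(A):
    """Symmetric square root of a symmetric PSD matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def random_block(rng, n):
    """Symmetric negative definite block with lambda_min = -1/2."""
    evals = np.concatenate(([-0.5], rng.uniform(-0.5, 0.0, n - 1)))
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return Q @ np.diag(evals) @ Q.T

blocks = [random_block(rng, N) for _ in range(M)]
e = np.ones((M, 1))

# D = diag(L_1, ..., L_M) = -P^2 / 2
D = np.zeros((M * N, M * N))
for j in range(M):
    D[j*N:(j+1)*N, j*N:(j+1)*N] = blocks[j]

# L = [(ee^T - I_M) ⊗ I_N] (-P^2/2): reproduces the block structure of L.
L = np.kron(e @ e.T - np.eye(M), np.eye(N)) @ D

# P = diag(sqrt(-2 L_1), ..., sqrt(-2 L_M))
P = np.zeros((M * N, M * N))
for j in range(M):
    P[j*N:(j+1)*N, j*N:(j+1)*N] = psd_sqrt(-2.0 * blocks[j])

# S = P [(I_M - ee^T) ⊗ I_N] P is symmetric; lambda_j(L) = lambda_j(S) / 2.
S = P @ np.kron(np.eye(M) - e @ e.T, np.eye(N)) @ P
eig_L = np.sort(np.linalg.eigvals(L).real)
eig_S = np.sort(np.linalg.eigvalsh(S))

spectra_match = np.allclose(eig_L, eig_S / 2.0, atol=1e-8)
n_pos = int(np.sum(eig_S > 0))   # should be (M-1)N by Sylvester's law
n_neg = int(np.sum(eig_S < 0))   # should be N
print(spectra_match, n_pos, n_neg)
```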
Since the eigenvalues of a matrix are continuous functions of its entries, and every negative semidefinite matrix is the limit of a sequence of negative definite matrices, a continuity argument shows that, when the $L_i$s are merely negative semidefinite, $0\le\lambda_\max(L)\le\frac12$ and $$ \lambda_1(L)\le\cdots\le\lambda_N(L)\le0\le\lambda_{N+1}(L)\le\cdots\le\lambda_{MN}(L)\le\frac12. $$
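By way of illustration, the semidefinite conclusion can also be checked numerically with a rank-deficient block, as in the question's edit. This is again hypothetical test data: one block is given an exact zero eigenvalue, the sizes `M`, `N` are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 4, 3  # illustrative sizes

def random_block(rng, n, singular=False):
    """Symmetric NSD block: lambda_min = -1/2, the rest in (-1/2, 0);
    when singular=True, one eigenvalue is set to 0 (rank deficient)."""
    evals = np.concatenate(([-0.5], rng.uniform(-0.5, 0.0, n - 1)))
    if singular:
        evals[-1] = 0.0
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return Q @ np.diag(evals) @ Q.T

# Make the first block singular, the rest negative definite.
blocks = [random_block(rng, N, singular=(i == 0)) for i in range(M)]

# Assemble L (zero diagonal blocks, block column j equal to L_j otherwise).
L = np.zeros((M * N, M * N))
for i in range(M):
    for j in range(M):
        if i != j:
            L[i*N:(i+1)*N, j*N:(j+1)*N] = blocks[j]

eigs = np.sort(np.linalg.eigvals(L).real)
lam_max = eigs[-1]
# At least (M-1)N eigenvalues should be nonnegative (up to round-off).
n_nonneg = int(np.sum(eigs >= -1e-8))
print(lam_max, n_nonneg)
```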