Suppose that I have a Markov chain that has absorbing states. Since there are absorbing states, let's group the Markov matrix into four blocks: the submatrix of transitions among the states in the absorbing region(s) $A$, the submatrix of transitions among the states that are not in an absorbing region $N$, the transition values from non-absorbing to absorbing states $T$, and a $0$ block, since you cannot move from absorbing to non-absorbing. In essence, this means that our Markov matrix is $$ M = \begin{bmatrix} N & 0 \\ T & A \end{bmatrix} $$
Note that I have set it up so that the columns of $M$ add to $1$, so that I multiply vectors on the left by $M$ rather than on the right.
For the equilibrium solution we are looking for a vector $\vec{X}$ such that $$ M\vec{X} = \vec{X} $$ and $$ \sum_i X_i = 1 $$
Now this is a fairly standard procedure, as you just find the eigenvector corresponding to an eigenvalue of $1$. However, I want to show that the equilibrium probabilities for the states in $N$ are always $0$; that is, if $\vec{x}$ is the restriction of the equilibrium vector to the non-absorbing states, then $$ N\vec{x} = \vec{x} \implies \vec{x} = 0 $$
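For concreteness, here is a small numerical sanity check on a made-up $3$-state chain (states $0,1$ non-absorbing, state $2$ absorbing; the specific transition values are just for illustration):

```python
import numpy as np

# A hypothetical 3-state chain; columns sum to 1, matching the convention above.
N = np.array([[0.5, 0.3],
              [0.2, 0.4]])
T = np.array([[0.3, 0.3]])
A = np.array([[1.0]])
M = np.block([[N, np.zeros((2, 1))],
              [T, A]])

# Find the eigenvector of M for eigenvalue 1, i.e. solve M @ X = X.
vals, vecs = np.linalg.eig(M)
X = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
X = X / X.sum()  # normalize so the probabilities sum to 1

print(X)  # the entries for the two non-absorbing states come out as 0
```

As expected, all of the equilibrium probability ends up on the absorbing state.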
Here we know that $N$ must contain no trapping regions (I'm not sure how to describe this formally), and at least one column of $N$ must sum to less than $1$.
If I could show that $(N-I)$ is invertible (equivalently, that its null space is trivial), or that the magnitude of the largest eigenvalue of $N$ is $<1$, then that would be sufficient to prove the claim.
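Both conditions are easy to check numerically on a made-up non-absorbing block (the numbers below are just an illustration, not part of the claim):

```python
import numpy as np

# A hypothetical non-absorbing block: no trapping region, and both
# columns sum to less than 1 (probability mass leaks to absorbing states).
N = np.array([[0.5, 0.3],
              [0.2, 0.4]])

# Spectral radius of N: strictly less than 1 for this example.
rho = max(abs(np.linalg.eigvals(N)))
print(rho)  # 0.7 here

# Equivalently, N - I is invertible, so N @ x = x forces x = 0.
det = np.linalg.det(N - np.eye(2))
print(det)  # nonzero
```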
I'm trying to do this myself so any hints or pointers to potentially helpful theorems would be very appreciated!
Say we break the vector $\vec x$ into two parts: $\vec y$ for the non-absorbing states, and $\vec z$ for the absorbing states. Then $M\vec x = \vec x$ tells us that $$ \begin{cases} N \vec y = \vec y \\ T \vec y + A \vec z = \vec z \end{cases} $$ as well as $\sum_i y_i + \sum_j z_j = 1$.
Usually, for the absorbing states, we take $A = I$: once you're in an absorbing state, you stay put. Then $A \vec z = \vec z$, leading us to $T \vec y = \vec 0$.
But even if you don't make this assumption, we can deduce $T \vec y = \vec 0$. Let $\vec 1$ be the all-$1$ vector (of the same dimension as $\vec z$); from $T \vec y + A \vec z = \vec z$, we get $\vec 1^{\mathsf T} T \vec y + \vec1^{\mathsf T}\!A \vec z = \vec1^{\mathsf T}\vec z$. Because the columns of $A$ add up to $1$, we must have $\vec1^{\mathsf T}\!A = \vec1^{\mathsf T}$, so we get $$ \vec1^{\mathsf T} T \vec y + \vec1^{\mathsf T} \vec z = \vec1^{\mathsf T} \vec z \implies \vec1^{\mathsf T} T \vec y = 0. $$ In other words, the components of $T \vec y$ sum to $0$; however, since they're nonnegative, this can only happen if $T \vec y = \vec 0$.
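To see this argument in action, here is a sketch on a hypothetical $3$-state chain whose absorbing block $A$ is *not* the identity (states $1$ and $2$ form an absorbing class that mixes internally; the numbers are made up):

```python
import numpy as np

# A hypothetical chain: one non-absorbing state, two absorbing states
# that mix with each other. Columns sum to 1.
N = np.array([[0.6]])
T = np.array([[0.2],
              [0.2]])
A = np.array([[0.5, 0.5],
              [0.5, 0.5]])
M = np.block([[N, np.zeros((1, 2))],
              [T, A]])

# Columns of A sum to 1, so 1^T A = 1^T, the key identity in the argument.
ones = np.ones(2)
print(ones @ A)  # [1. 1.]

# Equilibrium vector for eigenvalue 1, split into y (non-absorbing) and z.
vals, vecs = np.linalg.eig(M)
X = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
X = X / X.sum()
y, z = X[:1], X[1:]

print(ones @ (T @ y))  # the components of T @ y sum to 0
```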
How do we use $T\vec y= \vec 0$?
Look at the $i^{\text{th}}$ component of this product: it says $t_{i1} y_1 + t_{i2} y_2 + \dots + t_{ik} y_k = 0$. Here, every $t_{ij}$ and every $y_j$ is nonnegative. So the only way for the sum to be $0$ is that whenever $t_{ij} > 0$, $y_j$ must be $0$. So all states with a transition to an absorbing state have a limiting probability of $0$.
Next, whenever we deduce $y_j=0$, knowing that $N\vec y = \vec y$ tells us that $(N\vec y)_j = 0$, or $n_{j1} y_1 + n_{j2} y_2 + \dots + n_{jk} y_k = 0$. Here, also, every term is nonnegative; whenever $n_{j\ell} > 0$, $y_\ell$ must be $0$ by the same logic. So all non-absorbing states with a transition to such a state $j$ (a state $j$ which has a transition to an absorbing state) must also have a limiting probability of $0$. To rephrase: all non-absorbing states with a $2$-step path to an absorbing state must have a limiting probability of $0$.
From here, we can prove that all non-absorbing states with a path to an absorbing state must have a limiting probability of $0$, by induction on the length of the path. If we assume that from every non-absorbing state, there's a path to an absorbing state, then we can conclude that $\vec y = \vec 0$.
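The induction can also be seen dynamically: if every non-absorbing state has a path to an absorbing state, then the probability of remaining in the non-absorbing region for $k$ steps, $N^k \vec y_0$, decays to zero. A minimal sketch with a made-up block $N$ and an arbitrary starting distribution:

```python
import numpy as np

# A hypothetical non-absorbing block in which every state has a path
# to an absorbing state (both columns sum to less than 1).
N = np.array([[0.5, 0.3],
              [0.2, 0.4]])
y0 = np.array([0.6, 0.4])  # an arbitrary starting distribution on N

yk = y0.copy()
for _ in range(100):
    yk = N @ yk  # mass remaining in the non-absorbing states decays

print(yk)  # effectively the zero vector
```

Any fixed point $\vec y = N\vec y = N^k \vec y$ must therefore be $\vec 0$, matching the conclusion above.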