Suppose I have an absorbing Markov chain represented as a transition matrix $P$ (same notation as the Wikipedia article):
$ P = \begin{pmatrix} Q & R \\ 0 & I_r \end{pmatrix} $
How would I compute the expected number of unique transient states visited before arriving at the absorbing state?
I am assuming it would involve some sum of products over the row probabilities in $Q$, but I am not sure.
-- For my specific case, I only have one terminating state.
It will of course depend on the initial state. The probability of ever visiting a transient state $j$ is the probability of being absorbed at $j$ in a modified Markov chain in which $j$ is made absorbing. By linearity of expectation, summing this probability over all transient states $j$ gives the expected number of distinct transient states visited.
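A minimal sketch of this computation, using the standard shortcut instead of building a modified chain per state: with the fundamental matrix $N = (I - Q)^{-1}$, where $N_{ij}$ is the expected number of visits to transient state $j$ starting from $i$, the probability of ever visiting $j$ from $i$ is $N_{ij}/N_{jj}$. The $Q$ below is a hypothetical two-transient-state example, not from the question.

```python
import numpy as np

# Transient block Q of an example absorbing chain (hypothetical numbers):
# from each transient state, move to the other with probability 0.5,
# otherwise be absorbed.
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])

# Fundamental matrix N = (I - Q)^{-1}; N[i, j] is the expected number of
# visits to transient state j starting from i.
N = np.linalg.inv(np.eye(len(Q)) - Q)

# P(ever visit j | start at i) = N[i, j] / N[j, j]; dividing each column j
# by N[j, j] (NumPy broadcasting) and summing the row gives, by linearity
# of expectation, the expected number of distinct transient states visited.
expected_unique = (N / np.diag(N)).sum(axis=1)

print(expected_unique)  # the starting state counts itself as visited
```

For this example, starting from state 0 you always "visit" state 0 and reach state 1 with probability 0.5 before absorption, so the expected count is 1.5, which the row sums confirm.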