Suppose we have a Markov transition matrix $P$ (rows sum to 1), where $P_{i,j}$ gives the probability of making a transition from state $i$ to state $j$. We can get the steady-state probabilities $\pi$ of being in each of the states by solving the system
$$\pi P = \pi, \qquad \sum_i \pi_i = 1,$$
where $\pi$ is a row vector (note that $\pi$ multiplies $P$ on the left, since $P$ is row-stochastic).
Now, let's say we also have a matrix of transition times $T$, where $T_{i,j}$ gives the time it takes to make the transition from state $i$ to state $j$. When all entries of $T$ are equal, it is clear that the long-run fraction of time spent in each state is still $\pi$. If the entries differ, however, $T$ will also have an impact on those time fractions. Any ideas on how to find the new $\pi$?
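For concreteness, here is how the ordinary steady state can be computed numerically: $\pi$ is the left eigenvector of $P$ for eigenvalue $1$. The $2\times 2$ matrix below is just a made-up example, not part of the question.

```python
import numpy as np

# Hypothetical row-stochastic transition matrix.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# pi P = pi  <=>  P^T pi^T = pi^T, so take the eigenvector of P^T
# belonging to the eigenvalue closest to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()  # normalize so the probabilities sum to 1

print(pi)  # stationary distribution of P
```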
I think I figured it out. I don't have a concrete proof, but I do have strong numerical verification (through simulations) for a variety of matrices, which is good enough for me on most days. @Math1000 had the right idea of taking the vector $\pi$ and modifying it with the transition-times matrix.
$\pi$ represents the proportion of visits the chain makes to each state. However, once the system enters state $i$, it no longer spends one unit of time there on average; it spends $t_i = \sum_j P_{i,j} T_{i,j}$, the expected duration of the transition out of state $i$. This gives us the vector $t$. Then, simply multiply $t$ element-wise with $\pi$ and renormalize the result to make it sum to one. And that is it.
The Python code I used to verify this is below. It returns two arrays: the simulation result and the closed form calculated through the method outlined above. The two are always very close regardless of the $P$ and $T$ matrices passed to the routine. The code is not very clean, but then again, not too much is going on here.
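The original verification routine is not reproduced here; a minimal sketch of an equivalent simulation check, assuming the same made-up $2\times 2$ matrices as above, might look like this:

```python
import numpy as np

def simulate_time_fractions(P, T, n_steps=200_000, seed=0):
    """Simulate the chain, crediting each state with the time it holds
    before jumping, and return the normalized time fractions."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    time_in_state = np.zeros(n)
    state = 0
    for _ in range(n_steps):
        nxt = rng.choice(n, p=P[state])
        time_in_state[state] += T[state, nxt]  # time spent before leaving
        state = nxt
    return time_in_state / time_in_state.sum()

# Hypothetical example matrices.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
T = np.array([[1.0, 2.0],
              [3.0, 1.0]])
sim = simulate_time_fractions(P, T)
print(sim)  # should be close to the closed-form answer (~0.733, ~0.267)
```

For these matrices the closed form gives $(0.7333, 0.2667)$, and the simulated fractions converge to the same values as `n_steps` grows.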