In my Markov chain lecture, we introduced the following quantities:
$$Qf(i) = \sum_{j\in E} Q(i,j)f(j)$$ Here, $(X_n)_{n\in \mathbb{N}}$ is a Markov chain and $Q$ is a transition matrix. $f\colon E\to\mathbb{R}_+$ is a non-negative measurable function.
Additionally, we define a new measure $$\mu Q(A) = \sum_{i\in E}\sum_{j\in A}\mu(\{i\})Q(i,j)$$ where $\mu$ is a measure on the state space $E$ and $A$ is a measurable set.
I am having difficulty understanding why we define such quantities, what the intuition behind them is, and how they should be interpreted.
(This is a community answer! Please feel free to improve it to your liking.)
As you may already know,
$$ Q(i, j) = “Q(i \to j)” = \mathbf{P}(X_{t+1} = j \mid X_t = i) $$
represents the probability of a “particle” at state $i$ moving to state $j$ in one unit of time.
Also, we can consider $\mu$ as a "bar chart" representing the states in a population, where each individual can assume any state in $E$. For instance, let's say $E = \{\text{low}, \text{middle}, \text{high}\}$ with $\mu(\{\text{low}\}) = 0.1$, $\mu(\{\text{middle}\}) = 0.6$, $\mu(\{\text{high}\}) = 0.3$. This would depict a population with $10\%$ in the low state, $60\%$ in the middle state, and so on.
Now, imagine everyone in this population makes a random move according to the transition matrix $Q$. What would the bar chart look like after one unit of time? Assuming a sufficiently large population (or ideally, in the limit as the population becomes infinite), the measure $\mu Q$ precisely describes the resulting frequency of the states.
In summary, $\mu Q$ shows how a large population of “Markovian people”, initially distributed according to $\mu$, would redistribute after a one-step Markov transition by $Q$.
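To make this concrete, here is a small numerical sketch of the population picture, using the $\mu$ from the bar-chart example above together with a *hypothetical* transition matrix $Q$ (any stochastic matrix would do; this one is chosen only for illustration):

```python
import numpy as np

# States: low, middle, high (indices 0, 1, 2)
# Initial distribution mu from the bar-chart example
mu = np.array([0.1, 0.6, 0.3])

# A hypothetical transition matrix Q (each row sums to 1):
# row i gives the probabilities of moving from state i to each state
Q = np.array([
    [0.7, 0.3, 0.0],   # low    -> low / middle / high
    [0.2, 0.6, 0.2],   # middle -> low / middle / high
    [0.0, 0.4, 0.6],   # high   -> low / middle / high
])

# (mu Q)(j) = sum_i mu({i}) Q(i, j): the bar chart after one step
mu_Q = mu @ Q
print(mu_Q)  # approximately [0.19, 0.51, 0.30]
```

Note that $\mu Q$ is just a row vector times a matrix, which is why the measure is written on the *left* of $Q$; it is again a probability distribution (its entries sum to $1$).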
Regarding the meaning of $Qf(i)$, it becomes easier to interpret when we note that
$$ (Qf)(i) = (\delta_i Q) f = \sum_{j \in E} (\delta_i Q)(\{j\}) f(j). $$
That is, $Qf(i)$ is simply the expected value of $f$ with respect to the measure $\delta_i Q$, which describes the population initially concentrated at $i$ and then redistributed after a one-step Markov transition by $Q$.
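In matrix terms, $Qf$ is just the matrix $Q$ applied to the column vector $f$, which is why the function is written on the *right* of $Q$. A short sketch, reusing the hypothetical $Q$ from above and an arbitrary non-negative $f$:

```python
import numpy as np

# The same hypothetical transition matrix as in the population example
Q = np.array([
    [0.7, 0.3, 0.0],
    [0.2, 0.6, 0.2],
    [0.0, 0.4, 0.6],
])

# An arbitrary non-negative function f on E = {low, middle, high},
# e.g. f(state) = some payoff observed in that state
f = np.array([1.0, 2.0, 5.0])

# (Qf)(i) = sum_j Q(i, j) f(j): the expected value of f one step
# after starting from state i; row i of Q is the distribution delta_i Q
Qf = Q @ f
print(Qf)  # approximately [1.3, 2.4, 3.8]
```

So $Qf(\text{middle}) = 0.2\cdot 1 + 0.6\cdot 2 + 0.2\cdot 5 = 2.4$: the expected payoff one step after starting from the middle state.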