I have the following definition (from lecture notes)
Let $(\Omega, \mathcal{A}, \mu)$ be a measure space and let $f:\Omega \to [0,\infty)$ be an $\mathcal{A}/\mathcal{B}\left([0,\infty)\right)$-measurable mapping. Then we denote by $f \odot \mu: \mathcal{A} \to [0,\infty]$ the mapping with the property that for all $A \in \mathcal{A}$ it holds that: $$(f \odot \mu)(A) = \int_\Omega f(\omega) \cdot \mathbb{1}_{A}^{\Omega}(\omega)~\mu (d\omega) = \int_\Omega f \cdot \mathbb{1}_A^\Omega~d \mu.$$
While the rhs Lebesgue integral makes sense to me, I don't fully understand the middle term, particularly the $\mu(d\omega)$, which I have not encountered yet. Can someone explain to me what it means and how the rhs equality is established?
$\mathbf 1_A^\Omega(\omega)$ is the indicator function of $A$: it records whether the outcome $\omega\in\Omega$ lies in $A$. That is: $$\mathbf 1_A^\Omega(\omega) = \begin{cases} 1 & : \omega\in A\subseteq \Omega\\0 & : \text{otherwise}\end{cases}$$
The $\int\ldots\mu(\operatorname d\omega)$ notation is not a derivative in disguise; it is simply an alternative way of writing the Lebesgue integral $\int\ldots\operatorname d\mu$ that makes the integration variable explicit (useful when the integrand involves several variables). Informally, $\mu(\operatorname d\omega)$ stands for the mass that $\mu$ assigns to an infinitesimal neighborhood $\operatorname d\omega$ of the point $\omega$. Only when $\mu$ has a density $g$ with respect to Lebesgue measure (a Radon–Nikodym derivative) may you additionally write $\mu(\operatorname d\omega) = g(\omega)\operatorname d\omega$; for a general $\mu$ no such density need exist. So the middle term and the right-hand side denote the very same integral, $$\int_\Omega f(\omega)\cdot\mathbf 1_A^\Omega(\omega)\;\mu(\operatorname d \omega) = \int_\Omega f\cdot\mathbf 1_A^\Omega\operatorname d \mu,$$ and since the indicator vanishes outside $A$, both are equal to $\int_A f\operatorname d\mu$.
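To make this concrete, here is a toy sketch on a *finite* measure space, where every Lebesgue integral reduces to a finite sum $\sum_{\omega} f(\omega)\,\mu(\{\omega\})$. The sets `Omega`, `A` and the values of `mu` and `f` below are made-up assumptions for illustration, not anything from the lecture notes:

```python
# Finite measure space: Omega with sigma-algebra = power set of Omega.
Omega = {"a", "b", "c", "d"}

# A measure given by its point masses mu({omega}).
mu = {"a": 0.5, "b": 2.0, "c": 1.0, "d": 0.25}

# A nonnegative (trivially measurable) function f : Omega -> [0, infinity).
f = {"a": 3.0, "b": 1.0, "c": 4.0, "d": 8.0}

def indicator(A, omega):
    """1_A(omega): 1 if omega lies in A, else 0."""
    return 1.0 if omega in A else 0.0

def f_odot_mu(A):
    """(f . mu)(A) = integral over all of Omega of f * 1_A with respect to mu.
    On a finite space this is sum_{omega in Omega} f(omega) * 1_A(omega) * mu({omega})."""
    return sum(f[w] * indicator(A, w) * mu[w] for w in Omega)

def integral_over_A(A):
    """integral over A of f dmu: restrict the domain instead of using the indicator."""
    return sum(f[w] * mu[w] for w in A)

A = {"b", "c"}
print(f_odot_mu(A))        # 1.0*2.0 + 4.0*1.0 = 6.0
print(integral_over_A(A))  # also 6.0: the indicator just kills the terms outside A
```

Both functions return the same number for every $A$, which is exactly the point: multiplying by $\mathbf 1_A^\Omega$ inside an integral over $\Omega$ is the same as integrating over $A$ alone.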