The Leibniz integral rule states
$$ {\displaystyle {\frac {d}{dx}}\left(\int _{a}^{b}f(x,t)\,dt\right)=\int _{a}^{b}{\frac {\partial }{\partial x}}f(x,t)\,dt} $$
when the integration limits $a$ and $b$ do not depend on $x$, the variable with respect to which we differentiate.
The expectation of (a function of) a continuous random variable, i.e. with respect to a continuous distribution, is defined as an integral. More precisely, let $X$ be a continuous r.v. with density $p_\theta(x)$, parametrized by $\theta$; then we have
$$ \mathbb{E}_{p_{\theta}(x)}\left[ f(x) \right] = \int p_{\theta}(x) f(x) dx $$
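As a quick numerical sanity check of this definition (my own illustration, using a Gaussian density $\mathcal{N}(\theta, 1)$ and $f(x) = x$ as an example, approximating the integral by a Riemann sum):

```python
import numpy as np

theta = 1.5  # example parameter: the mean of a unit-variance Gaussian

def p_theta(x, th):
    # density of N(th, 1)
    return np.exp(-0.5 * (x - th) ** 2) / np.sqrt(2 * np.pi)

x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]

# E[f(X)] = ∫ p_theta(x) f(x) dx, here with f(x) = x
expectation = np.sum(p_theta(x, theta) * x) * dx
print(expectation)  # ≈ theta = 1.5, the known mean
```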
In certain cases, one needs to take the derivative of this expectation with respect to the parameter $\theta$ (this is common in certain machine learning problems):
$$\frac{d}{d \theta}\mathbb{E}_{p_{\theta}(x)}\left[ f(x) \right]$$
Some papers seem to apply the Leibniz integral rule to get
\begin{align} \frac{d}{d \theta}\mathbb{E}_{p_{\theta}(x)}\left[ f(x) \right] & \stackrel{?}{=} \mathbb{E}_{p_{\theta}(x)}\left[ \frac{d}{d \theta} f(x) \right] \\ & \stackrel{?}{=} \int \frac{d}{d \theta} \left[ f(x) p_{\theta}(x) \right] dx \\ \end{align}
Some papers say that we can bring the derivative inside the expectation because of the dominated convergence theorem, which I am not familiar with. I would therefore like someone to clarify the relationship between the Leibniz integral rule above and dominated convergence (specifically, in the context of taking derivatives of expectations, i.e. probability theory and statistics). Is the DCT just a way to prove the Leibniz integral rule? If so, can you show that?
Moreover, as you can see above, I wrote $\mathbb{E}_{p_{\theta}(x)}\left[ \frac{d}{d \theta} f(x) \right] \stackrel{?}{=} \int \frac{d}{d \theta} \left[ f(x) p_{\theta}(x) \right] dx$; however, $p_{\theta}(x) \frac{d}{d \theta} f(x) \neq \frac{d}{d \theta} \left[ f(x) p_{\theta}(x) \right]$ in general, so I suspect I have done something wrong, or that the DCT and the Leibniz integral rule are not applicable in the same contexts. Maybe the Leibniz integral rule is not directly applicable to expectations because they involve random variables and densities?
What you want to do is to bring the limit operation inside the integral sign. This cannot be done in general; one classic counterexample is that if
$$f_n(x)=\begin{cases} n & x \in [0,1/n] \\ 0 & \text{otherwise} \end{cases}$$
then $$\lim\limits_{n \to \infty} \int_0^1 f_n(x) dx = 1$$ but $$\int_0^1 \lim\limits_{n \to \infty} f_n(x) dx = 0.$$

The Leibniz rule for fixed limits can be justified by the dominated convergence theorem, which is the most widely used tool for proving that limit and integral can be interchanged. It works fine in the setting of probability theory, since expectation there is defined via the Lebesgue integral.
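You can check this counterexample numerically (a small sketch of my own; the grid-based Riemann sum is only an approximation of the exact integral, which equals $n \cdot \frac{1}{n} = 1$ for every $n$):

```python
import numpy as np

def f_n(x, n):
    # f_n(x) = n on [0, 1/n], 0 elsewhere
    return np.where(x <= 1.0 / n, float(n), 0.0)

x = np.linspace(0.0, 1.0, 2_000_001)
dx = x[1] - x[0]

for n in (10, 100, 1000):
    integral = np.sum(f_n(x, n)) * dx  # Riemann sum ≈ ∫₀¹ f_n = 1 for every n
    print(n, integral)

# Yet for any fixed x > 0, f_n(x) = 0 once n > 1/x, so the pointwise limit
# is 0 almost everywhere and integrates to 0: limit and integral cannot
# be swapped here.
```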
As for the rest of your question, passing $\frac{d}{d\theta}$ through $\mathbb{E}_{p_\theta}$ (i.e. differentiating only $f$, which does not even depend on $\theta$) is definitely not valid, because the density $p_\theta$ itself depends on $\theta$. However, $\frac{d}{d\theta} \mathbb{E}_{p_\theta}[f(X)]=\int \frac{d}{d\theta} \left [ f(x) p_\theta(x) \right ] dx$ is correct (assuming $p_\theta$ is the density function of the distribution of $X$, and that the interchange is justified, e.g. by dominated convergence).
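To make this concrete: since $f(x)$ does not depend on $\theta$, the product rule gives $\frac{d}{d\theta}\left[ f(x) p_\theta(x) \right] = f(x) \frac{\partial p_\theta(x)}{\partial \theta}$, which is generally nonzero even though $\frac{d}{d\theta} f(x) = 0$. Here is a numerical sketch of my own with $p_\theta = \mathcal{N}(\theta, 1)$ and $f(x) = x^2$, so that $\mathbb{E}[f(X)] = \theta^2 + 1$ and the true derivative is $2\theta$ (the $\theta$-derivative of the integrand is approximated by a finite difference):

```python
import numpy as np

theta = 0.7
eps = 1e-5  # step for the finite-difference derivative in theta

def p(x, th):
    # density of N(th, 1)
    return np.exp(-0.5 * (x - th) ** 2) / np.sqrt(2 * np.pi)

x = np.linspace(-12.0, 12.0, 400_001)
dx = x[1] - x[0]
f = x ** 2  # f(x) = x², so E[f(X)] = theta² + 1 under N(theta, 1)

# d/dθ [f(x) p_θ(x)], approximated by a central finite difference in θ
d_integrand = f * (p(x, theta + eps) - p(x, theta - eps)) / (2 * eps)

deriv_inside = np.sum(d_integrand) * dx  # ∫ d/dθ [f(x) p_θ(x)] dx
deriv_true = 2 * theta                   # analytic d/dθ (θ² + 1)

print(deriv_inside, deriv_true)  # both ≈ 1.4
# The naive swap E[d f/dθ] would give 0 here, since f(x) = x² contains no θ.
```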