Let $x\in\mathbb{R}^d$ and let $(W_t)_{t\geq0}$ be a $d$-dimensional Brownian motion. I have the following processes
$$
dZ_{t, x} = b(Z_{t, x})\,dt + \sigma(Z_{t, x})\,dW_t, \qquad Z_{0, x} = x \\
dV_{t, v} = \nabla b(Z_{t,x})V_{t,v}\,dt + \nabla \sigma(Z_{t,x})V_{t,v}\,dW_t, \qquad V_{0, v} = v\\
dU_{t, v, v'} = (\nabla b(Z_{t,x})U_{t, v, v'} + \nabla^2b(Z_{t, x})[V_{t, v'}]V_{t,v})\,dt \\ + (\nabla \sigma(Z_{t,x})U_{t, v, v'} + \nabla^2\sigma(Z_{t, x})[V_{t, v'}]V_{t,v})\,dW_t, \qquad U_{0, v, v'}=0,
$$
i.e. a stochastic process together with its first and second variation processes, obtained formally by differentiating $Z_{t, x}$ in the direction $v\in\mathbb{R}^d$, and then in the direction $v'\in\mathbb{R}^d$.
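(For concreteness, here is a minimal Euler-Maruyama sketch of how $Z_{t,x}$ and $V_{t,v}$ can be simulated jointly, using a hypothetical scalar OU example $b(z)=-z$, $\sigma\equiv 1$; since $\nabla\sigma=0$ in this example, $V_t = e^{-t}v$ exactly, which the discretisation should reproduce.)

```python
import math
import random

def euler_maruyama_z_v(x, v, t=1.0, n=1000, seed=0):
    """Jointly simulate Z_{t,x} and its first variation V_{t,v} by
    Euler-Maruyama, for the hypothetical scalar OU example
    b(z) = -z, sigma(z) = 1 (so grad b = -1, grad sigma = 0)."""
    rng = random.Random(seed)
    dt = t / n
    z, var = x, v
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        z_new = z + (-z) * dt + 1.0 * dw          # dZ = b(Z) dt + sigma(Z) dW
        var = var + (-1.0) * var * dt + 0.0 * dw  # dV = grad b(Z) V dt + grad sigma(Z) V dW
        z = z_new
    return z, var

z1, v1 = euler_maruyama_z_v(x=2.0, v=1.0)
# Since sigma is constant here, V is deterministic: V_1 should be close to e^{-1} v.
```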
Let $(P_t)_{t\geq0}$ denote the transition semigroup, i.e. $(P_tf)(x)=\mathbb{E}[f(Z_{t,x})]$. Can someone explain, why Dynkin's formula would lead to
$$ \mathbb{E}\left[ (P_tf)(x) \int_0^t\langle \sigma^{-1}(Z_{s,x})U_{s, v, v'}, dW_s \rangle \right] = 0? $$ To my knowledge, letting $\mathcal{A}$ denote the infinitesimal generator, Dynkin's formula reads $$ (P_tf)(x) = \mathbb{E}[f(Z_{t,x})] = f(x) + \mathbb{E}\left[\int_0^t(\mathcal{A}f)(Z_{s,x})\,ds\right] \\ = f(x) + \mathbb{E}\left[\int_0^t \langle b(Z_{s,x}), \nabla f(Z_{s,x}) \rangle + \frac{1}{2}\langle \sigma(Z_{s,x})\sigma^T(Z_{s,x}), \nabla^2 f(Z_{s,x}) \rangle_{\mathrm{F}} \,ds\right], $$ where $\langle \cdot, \cdot \rangle_{\mathrm{F}}$ denotes the Frobenius inner product. I don't see how that helps here...
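(As a sanity check on my reading of Dynkin's formula, here is a small Monte-Carlo verification for the toy case of Brownian motion, i.e. $b=0$, $\sigma=1$, with $f(z)=z^2$, where $\mathcal{A}f = \tfrac12 f'' = 1$ and both sides should equal $x^2+t$. The specific numbers are hypothetical.)

```python
import math
import random

def dynkin_check(x=0.5, t=1.0, n_paths=10000, n_steps=100, seed=1):
    """Monte-Carlo check of Dynkin's formula for Z = Brownian motion
    (b = 0, sigma = 1) and f(z) = z^2, where (A f)(z) = (1/2) f''(z) = 1,
    so both sides should be close to x^2 + t."""
    rng = random.Random(seed)
    dt = t / n_steps
    lhs_acc = 0.0  # accumulates f(Z_t)
    rhs_acc = 0.0  # accumulates f(x) + int_0^t (A f)(Z_s) ds
    for _ in range(n_paths):
        z = x
        integral = 0.0
        for _ in range(n_steps):
            integral += 1.0 * dt  # (A f)(z) = 1 identically
            z += rng.gauss(0.0, math.sqrt(dt))
        lhs_acc += z * z
        rhs_acc += x * x + integral
    return lhs_acc / n_paths, rhs_acc / n_paths

side_lhs, side_rhs = dynkin_check()
# Both averages should be close to x^2 + t = 1.25.
```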
Introduction to the paper
This answer focuses on the paper attached here. The paper measures sample quality using diffusions. The main tool is Stein's method, a powerful idea in probability theory that allows one to upper-bound standard probabilistic distances by an appropriate Stein discrepancy. The Stein discrepancy is built from Ito diffusions, and in order to ensure that the Stein operator is well behaved, one needs good bounds on the behaviour of the underlying semigroup of the Ito diffusion, which we call $P_t$. The Ito diffusion will be called $Z_{t,x}$, with initial point $x$ and time $t$, so that $P_t f(x) = \mathbb E[f(Z_{t,x})]$.
On the use of Dynkin's formula
The main stochastic-calculus reference of the paper is Avner Friedman, Stochastic Differential Equations And Applications. As per the paper, equation (7.10) of that text has the following statement:
It is clear that the use of Dynkin's formula must involve differentiation of some kind, since the generator $L$ involves derivatives.
For a clear use of Dynkin's formula, see page $32$ of the paper, where in both applications of Dynkin's formula the derivatives clearly land on the coefficients $\sigma$ and $b$.
The story of $\mathbb E[J_3(x)]$
To compute $\mathbb E[J_3(x)]$, Dynkin's formula is not used anywhere, contrary to what the author says. What is actually used is equation $(29)$ at the end of page $30$:
$$ f(Z_{t,x}) = (P_tf)(x) + \int_0^t \langle \nabla (P_{t-s}f)(Z_{s,x}), \sigma(Z_{s,x})\, dW_s\rangle $$
which is derived by applying the Ito formula to the $C^{1,2}$ function $(s, y) \mapsto (P_{t-s}f)(y)$.
From there, we have: \begin{align} \mathbb E[J_3(x)] & = \frac{1}{t} \mathbb E\left[f(Z_{t,x}) \int_0^t \left\langle \sigma^{-1}(Z_{s,x})U_{s,v,v'},dW_s\right\rangle\right] \\ &= \frac{1}{t} \mathbb E\left[\left((P_tf)(x) + \int_0^t \langle \nabla (P_{t-s}f)(Z_{s,x}), \sigma(Z_{s,x})\, dW_s\rangle\right) \int_0^t \left\langle \sigma^{-1}(Z_{s,x})U_{s,v,v'},dW_s\right\rangle\right] \\ &= \frac 1t \mathbb E\left[(P_tf)(x)\int_0^t \left\langle \sigma^{-1}(Z_{s,x})U_{s,v,v'},dW_s\right\rangle\right] \\ &\quad+ \frac 1t \mathbb E\left[\int_0^t \langle \nabla (P_{t-s}f)(Z_{s,x}), \sigma(Z_{s,x})\, dW_s\rangle\int_0^t \left\langle \sigma^{-1}(Z_{s,x})U_{s,v,v'},dW_s\right\rangle\right] \end{align}
Of these, the first term vanishes: $(P_tf)(x)$ is deterministic, so it factors out of the expectation, and we are left with the expectation of a stochastic integral, which is a martingale starting at $0$ and hence has constant expectation equal to $0$. For the second term, we observe the form of Friedman's Ito theorem statement and how it was translated here, and obtain by coordinatewise expansion a result which I find bewildering but definitely true: $$ \left\langle\nabla (P_{t-s}f)(Z_{s,x}), \sigma(Z_{s,x})\, dW_s \right\rangle= \sum_{l=1}^d \sum_{i=1}^d [\nabla (P_{t-s}f)(Z_{s,x})]_i\, \sigma(Z_{s,x})_{il}\, (dW_s)_l \\ = \sum_{l=1}^d \left(\sigma(Z_{s,x})^T\nabla (P_{t-s}f)(Z_{s,x})\right)_l (dW_s)_l =\left\langle \sigma(Z_{s,x})^T\nabla (P_{t-s}f)(Z_{s,x}), dW_s \right\rangle $$
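As a quick sanity check on that expansion, here is a tiny numerical verification, in plain Python with purely hypothetical numbers, that $\langle g, \sigma\, h\rangle = \langle \sigma^T g, h\rangle$ coordinatewise:

```python
# Hypothetical stand-ins: g for grad(P_{t-s}f)(Z_{s,x}), sigma for sigma(Z_{s,x}),
# dw for the Brownian increment dW_s.
g = [1.0, 2.0]
sigma = [[3.0, 1.0], [0.0, 2.0]]
dw = [0.5, -1.0]

# Left side: sum_l sum_i g_i sigma_{il} dw_l
bilinear_lhs = sum(g[i] * sigma[i][l] * dw[l] for i in range(2) for l in range(2))

# Right side: <sigma^T g, dw>, i.e. sum_l (sum_i sigma_{il} g_i) dw_l
sigma_t_g = [sum(sigma[i][l] * g[i] for i in range(2)) for l in range(2)]
bilinear_rhs = sum(sigma_t_g[l] * dw[l] for l in range(2))
# The two sums agree (both equal -3.5 for these numbers).
```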
which is kind of saying that $\sigma$ can be moved from the right slot of the inner product to the left, picking up a transpose: it is just the adjoint identity $\langle g, \sigma h\rangle = \langle \sigma^T g, h\rangle$. I don't know why it was kept on the right in the first place; this could even have been a typo for all I know.
Anyway, now we come to: $$ \frac 1t\, \mathbb E\left[\int_0^t \langle \sigma(Z_{s,x})^T\nabla (P_{t-s}f)(Z_{s,x}), dW_s\rangle\int_0^t \left\langle \sigma^{-1}(Z_{s,x})U_{s,v,v'},dW_s\right\rangle\right] $$
and are in "covariance of Ito integrals" territory. Indeed, by equation (7.1) of Friedman applied coordinatewise (the Ito isometry), we get: $$ \frac 1t\, \mathbb E\left[\int_0^t \langle \sigma(Z_{s,x})^T\nabla (P_{t-s}f)(Z_{s,x}), dW_s\rangle\int_0^t \left\langle \sigma^{-1}(Z_{s,x})U_{s,v,v'},dW_s\right\rangle\right] \\ = \frac 1t\, \mathbb E\left[\int_0^t \left\langle \sigma(Z_{s,x})^T\nabla (P_{t-s}f)(Z_{s,x}), \sigma^{-1}(Z_{s,x})U_{s,v,v'}\right\rangle ds\right]$$
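A hedged Monte-Carlo sketch of this "covariance of Ito integrals" identity, in the simplest scalar case with the deterministic integrands $a(s)=1$ and $b(s)=s$ (toy stand-ins I chose, not the paper's integrands), where the identity predicts $\mathbb E[\int_0^1 a\,dW \int_0^1 b\,dW] = \int_0^1 s\,ds = \tfrac12$:

```python
import math
import random

def ito_covariance_mc(t=1.0, n_paths=10000, n_steps=100, seed=2):
    """Estimate E[int_0^t a dW * int_0^t b dW] for a(s) = 1, b(s) = s.
    The Ito isometry predicts int_0^t a(s) b(s) ds = t^2 / 2."""
    rng = random.Random(seed)
    dt = t / n_steps
    acc = 0.0
    for _ in range(n_paths):
        i1 = 0.0
        i2 = 0.0
        for k in range(n_steps):
            s = (k + 0.5) * dt               # midpoint of the k-th subinterval
            dw = rng.gauss(0.0, math.sqrt(dt))
            i1 += 1.0 * dw                   # increment of int a dW
            i2 += s * dw                     # increment of int b dW
        acc += i1 * i2
    return acc / n_paths

cov_estimate = ito_covariance_mc()
# Should be close to 1/2.
```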
whence we can again use the same coordinatewise logic as before to move $\sigma^T$ back to the right, where it cancels against $\sigma^{-1}$, since $\langle \sigma^T g, \sigma^{-1} h\rangle = \langle g, \sigma\sigma^{-1} h\rangle = \langle g, h\rangle$, to get: $$ \frac 1t\, \mathbb E\left[\int_0^t \left\langle \nabla (P_{t-s}f)(Z_{s,x}), U_{s,v,v'}\right\rangle ds \right] $$
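And a small numeric check, again with hypothetical $2\times2$ numbers, of the cancellation $\langle \sigma^T g, \sigma^{-1} h\rangle = \langle g, h\rangle$ used in this last step:

```python
# Hypothetical stand-ins: g for grad(P_{t-s}f)(Z_{s,x}), h for U_{s,v,v'},
# sigma for any invertible sigma(Z_{s,x}).
g = [1.0, -2.0]
h = [4.0, 3.0]
sigma = [[2.0, 1.0], [1.0, 3.0]]

# Explicit 2x2 inverse of sigma.
det = sigma[0][0] * sigma[1][1] - sigma[0][1] * sigma[1][0]
sigma_inv = [[ sigma[1][1] / det, -sigma[0][1] / det],
             [-sigma[1][0] / det,  sigma[0][0] / det]]

sigma_t_g = [sum(sigma[i][l] * g[i] for i in range(2)) for l in range(2)]
sigma_inv_h = [sum(sigma_inv[l][j] * h[j] for j in range(2)) for l in range(2)]

cancel_lhs = sum(sigma_t_g[l] * sigma_inv_h[l] for l in range(2))
cancel_rhs = g[0] * h[0] + g[1] * h[1]   # plain <g, h>
# The sigma and sigma^{-1} cancel: both sides equal -2.0 here.
```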
which matches the final expression obtained by the authors of the paper.
Final thoughts
I feel that the paper skips many steps and struggles to integrate its notation with that of its standard reference. However, the use of the Stein discrepancy in this particular instance is very nice. I would suggest a first pass through the paper without dwelling on the details, just to understand the overall flow of the argument, since it should not take an expert too much effort to fill in the details if and when required.