The reparametrization trick replaces $z\sim N(\mu, \sigma)$ with $z = \mu + \epsilon \sigma$, $\epsilon \sim N(0, I)$, so that gradients can be backpropagated through the sampling step. Intuitively, I can see that this is related to sampling from $N(\mu, \sigma)$.
Does $z = \mu + \epsilon \sigma$ have exactly the same distribution as $N(\mu, \sigma)$? If not, what is the difference between $z \sim N(\mu, \sigma)$ and $z = \mu + \epsilon \sigma$?
The following two variables: $$ z_1\sim\mathcal{N}(\mu_\theta(x),\Sigma_\theta(x)) $$ $$ z_2 = \mu_\theta(x) + \Sigma_\theta(x)^{1/2}\epsilon,\;\, \epsilon \sim \mathcal{N}(0,I) $$ are equivalent in the sense that $z_1$ and $z_2$ have the same distribution. (Note the order $\Sigma_\theta(x)^{1/2}\epsilon$: for vector-valued $\epsilon$ this is a matrix-vector product, and any matrix square root of $\Sigma_\theta(x)$, e.g. the Cholesky factor, works.)
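A quick empirical check of this equivalence, as a sketch in NumPy (the specific $\mu$ and $\Sigma$ values are arbitrary choices for illustration; the Cholesky factor serves as one valid $\Sigma^{1/2}$):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
L = np.linalg.cholesky(Sigma)   # one valid square root: L @ L.T == Sigma

n = 200_000
# z1: sample directly from N(mu, Sigma)
z1 = rng.multivariate_normal(mu, Sigma, size=n)
# z2: reparametrized sample, z2 = mu + Sigma^{1/2} eps with eps ~ N(0, I)
eps = rng.standard_normal((n, 2))
z2 = mu + eps @ L.T

# Sample means and covariances agree up to Monte Carlo error.
print(np.allclose(z1.mean(axis=0), z2.mean(axis=0), atol=0.02))
print(np.allclose(np.cov(z1.T), np.cov(z2.T), atol=0.05))
```

Both checks should print `True`: the two sampling schemes produce the same distribution, differing only by Monte Carlo noise.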
The difference is in computing derivatives: $\partial_\theta z_1$ is not well-defined, because sampling is not a differentiable operation in $\theta$. By contrast, once $\epsilon$ is drawn, $z_2$ is a deterministic, differentiable function of $\theta$, so $\partial_\theta z_2$ exists. This fact is very useful in practice, e.g. for Variational Autoencoders and other models that use the reparametrization trick. Note that many other distributions cannot be decomposed (i.e. reparametrized) like this; how to obtain derivatives for these other distributions is an active research area (e.g. [1], [2]).
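The gradient point can be illustrated in a minimal one-dimensional sketch (the objective $\mathbb{E}[z^2]$ and $\theta = 1.5$ are assumptions chosen for illustration). For $z \sim \mathcal{N}(\theta, 1)$, we have $\partial_\theta\,\mathbb{E}[z^2] = 2\theta$, and the reparametrization $z = \theta + \epsilon$ gives $\partial_\theta z = 1$ for fixed $\epsilon$, so averaging $2z$ over samples estimates the gradient:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 1.5
# eps is drawn independently of theta, so z = theta + eps is a
# deterministic, differentiable function of theta for fixed eps.
eps = rng.standard_normal(1_000_000)
z = theta + eps

# Pathwise gradient: d(z^2)/d_theta = 2*z * (dz/d_theta) = 2*z.
grad_estimate = np.mean(2 * z)

# Should be close to the true gradient 2*theta = 3.0.
print(abs(grad_estimate - 2 * theta) < 0.01)
```

Attempting the same for $z_1$ drawn directly from the distribution would fail: there is no function of $\theta$ to differentiate, which is exactly why VAEs rely on the reparametrized form.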