We are given $N\sim \mathcal{N}(0,1)$, $x\in \Bbb{R}$, and $\epsilon>0$. Furthermore, $f$ is continuous and bounded. We want to compute $E(f(x+\epsilon N))$.
We have just proven that the random variable $X=x+\epsilon N$ has density $$f_X(y)=\frac{1}{\sqrt{2\pi\epsilon^2}} \exp\left(-\frac{1}{2}\left(\frac{y-x}{\epsilon}\right)^2\right)$$
Thus I thought that $$E(f(x+\epsilon N))=\int_{-\infty}^{\infty}f(x+\epsilon y) \frac{1}{\sqrt{2\pi\epsilon^2}} \exp\left(-\frac{1}{2}\left(\frac{y-x}{\epsilon}\right)^2\right) dy$$ But our prof instead wrote $$E(f(x+\epsilon N))=\int_{-\infty}^{\infty}f(x+\epsilon y) \frac{1}{\sqrt{2\pi\epsilon^2}} \exp\left(-\frac{y^2}{2}\right)dy$$
He said something like "both work", but is this true? And if yes, why does the second one work? I don't see it.
Thanks for your help.
If $X$ is a random variable with distribution $\mu_X$ and $h$ is any measurable function, then $$ \mathbb E[h(X)] = \int_{\mathbb R} h(x)\,d\mu_X(x).$$ If $\mu_X$ has density $g$ with respect to Lebesgue measure, this is the same as $$ \mathbb E[h(X)] = \int_{\mathbb R}h(x)g(x)\,dx.$$
In your case, $f(x+\varepsilon N)$ can be viewed in two different ways. Either as $f(X)$ where $X = x + \varepsilon N \sim \mathcal N(x,\varepsilon^2)$, which gives you $$ \mathbb E[f(x+\varepsilon N)] = \mathbb E[f(X)] = \int_{\mathbb R} f(y)\,d\mu_{\mathcal N(x,\varepsilon^2)}(y) = \int_{\mathbb R} f(y) \frac{1}{\sqrt{2\pi}\varepsilon}\exp\left(- \frac{(y-x)^2}{2\varepsilon^2}\right)dy$$ (note that under $f$ there is only $y$, and $\varepsilon$ is not under the $\sqrt{\cdot}$ sign in the denominator); or $f(x+\varepsilon N)$ can be viewed as $h(N)$, where $h(y) = f(x+\varepsilon y)$, which gives you $$ \mathbb E[f(x+\varepsilon N)] = \mathbb E[h(N)] = \int_{\mathbb R}h(y)\,d\mu_N(y) = \int_{\mathbb R}f(x+\varepsilon y)\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{y^2}{2}\right)dy $$ (note that here $\varepsilon$ does not appear in the denominator at all).
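You can also convince yourself numerically that the two integrals agree. Below is a minimal sketch (my own choices, not from the question) using $f(t)=\cos t$, $x=1$, $\varepsilon=0.5$, for which a closed form is available: $\mathbb E[\cos(x+\varepsilon N)] = \cos(x)\,e^{-\varepsilon^2/2}$.

```python
import numpy as np

# Hypothetical test case: f(t) = cos(t), x = 1.0, eps = 0.5.
# Closed form: E[cos(x + eps*N)] = cos(x) * exp(-eps**2 / 2).
f = np.cos
x, eps = 1.0, 0.5

# Integration grid wide enough that both Gaussian tails are negligible.
y = np.linspace(-10.0, 10.0, 200_001)
dy = y[1] - y[0]

# Version 1: integrate f(y) against the N(x, eps^2) density.
dens_X = np.exp(-(y - x) ** 2 / (2 * eps**2)) / (np.sqrt(2 * np.pi) * eps)
v1 = np.sum(f(y) * dens_X) * dy

# Version 2: integrate f(x + eps*y) against the standard N(0, 1) density.
dens_N = np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)
v2 = np.sum(f(x + eps * y) * dens_N) * dy

exact = np.cos(x) * np.exp(-eps**2 / 2)
print(v1, v2, exact)  # the three values should agree to several decimals
```

Both versions are just the change-of-variables $y \mapsto x + \varepsilon y$ applied to the same expectation, so they must produce the same number.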