Why are 1D Gaussians defined as $$F(x;\sigma^2) = e^{\frac{-x^2}{2\sigma^2}}$$
which gives the probability density function (after computing the Gaussian integral):
$$p_F(x;\sigma^2) = \frac{1}{\sqrt{2\pi}\sigma}e^{\frac{-x^2}{2\sigma^2}}$$
rather than just
$$G(x;\sigma^2) = e^{\frac{-x^2}{\sigma^2}}$$
$$p_G(x;\sigma^2) = \frac{1}{\sqrt{\pi}\sigma}e^{\frac{-x^2}{\sigma^2}}$$
In other words, what is the convenience of having a '2' in $F(x;\sigma^2)$ vs. $G(x;\sigma^2)$? The Gaussian integral doesn't require a 2 for it to be computed:
$$ \int^{+\infty}_{-\infty}e^{-x^2}dx = \sqrt{\pi}$$
Is it just purely out of convention, or am I missing something?
Because with the factor of $2$ included, the corresponding Gaussian random variable has variance and standard deviation $1$, whereas without it, the random variable has variance $1/2$.
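You can check this numerically. The sketch below (pure Python, with a simple trapezoidal rule over a range wide enough that the tails are negligible) computes $\int x^2\,p(x)\,dx$ for both densities with $\sigma = 1$; the mean is $0$ by symmetry, so this integral is the variance:

```python
import math

def variance(pdf, lo=-10.0, hi=10.0, n=200001):
    # Trapezoidal rule for the second moment: integral of x^2 * pdf(x) dx.
    # The mean is 0 by symmetry, so this equals the variance.
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * x * x * pdf(x)
    return total * h

sigma = 1.0
# p_F: the conventional pdf, with the 2 in the exponent
p_F = lambda x: math.exp(-x * x / (2 * sigma**2)) / (math.sqrt(2 * math.pi) * sigma)
# p_G: the variant without the 2
p_G = lambda x: math.exp(-x * x / sigma**2) / (math.sqrt(math.pi) * sigma)

print(variance(p_F))  # ≈ 1.0  (variance sigma^2)
print(variance(p_G))  # ≈ 0.5  (variance sigma^2 / 2)
```

So with the $2$, the parameter $\sigma^2$ really is the variance; without it, the variance is $\sigma^2/2$ and the symbol no longer means what it says.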
This $2$ floats around in all sorts of connections between probability and other subjects. One example is in connections between Brownian motion and PDEs; for instance, if $B_t$ is the standard Brownian motion then $u(t,x)=E[f(x+B_t)]$ solves the heat equation $u_t=\frac{1}{2} u_{xx}$ with the initial condition $u(0,x)=f(x)$. The 2 here is "the same 2" as the one in the formula for the Gaussian pdf. As it turns out it is also "the same 2" as the one in a second order Taylor expansion.
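The heat-equation connection is also easy to verify numerically. Since $B_t \sim N(0,t)$, the density of $x + B_t$ is the heat kernel $u(t,x) = \frac{1}{\sqrt{2\pi t}} e^{-x^2/(2t)}$, which should satisfy $u_t = \frac{1}{2} u_{xx}$. A minimal check with central finite differences (the point $(t,x) = (1, 0.7)$ and step size are arbitrary choices):

```python
import math

def u(t, x):
    # Heat kernel: the density of B_t ~ N(0, t), i.e. a Gaussian with variance t.
    return math.exp(-x * x / (2 * t)) / math.sqrt(2 * math.pi * t)

t, x, h = 1.0, 0.7, 1e-4
u_t  = (u(t + h, x) - u(t - h, x)) / (2 * h)              # central difference for du/dt
u_xx = (u(t, x + h) - 2 * u(t, x) + u(t, x - h)) / h**2   # central difference for d2u/dx2

print(u_t / u_xx)  # ≈ 0.5, the factor 1/2 in the heat equation
```

The ratio coming out as $1/2$ is exactly the $2$ from the exponent of the Gaussian pdf reappearing as the coefficient in $u_t = \frac{1}{2}u_{xx}$.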