I often see people say that if you have 2 IID Gaussian RVs, say $X \sim \mathcal{N}(\mu_x, \sigma_x^2)$ and $Y \sim \mathcal{N}(\mu_y, \sigma_y^2)$, then the distribution of their sum is $\mathcal{N}(\mu_x + \mu_y, \sigma_x^2 + \sigma_y^2)$.
This is only true when $X$ and $Y$ have the same units, right? Otherwise you can't even sum them to begin with without standardizing.
E.g., if $X$ were some measure of distance in meters and $Y$ some measure of velocity in $\frac{\text{meters}}{\text{second}}$, then you can't simply add their means and variances together. That wouldn't make sense. You'd have to standardize them first so they're both unitless before you can apply the above.
If $X$ and $Y$ are independent, with $X \sim N(\mu_X,\sigma_X^{2})$ and $Y \sim N(\mu_Y,\sigma_Y^{2})$, then $X+Y \sim N(\mu_X+\mu_Y,\sigma_X^{2}+\sigma_Y^{2})$. [IID would force $\mu_X=\mu_Y$ and $\sigma_X=\sigma_Y$, which is not required here; independence is enough.] There is no need to standardize the random variables for this result to hold. Whether it is physically meaningful to add two quantities with incompatible units is a modeling question, separate from the probabilistic statement about the random variables themselves.
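As a quick empirical sanity check (a sketch using NumPy; the specific parameter values are arbitrary choices for illustration), you can simulate two independent, non-identically-distributed normals and compare the sample mean and variance of their sum against $\mu_X+\mu_Y$ and $\sigma_X^2+\sigma_Y^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Independent (but NOT identically distributed) normals
mu_x, sigma_x = 2.0, 3.0
mu_y, sigma_y = -1.0, 4.0

x = rng.normal(mu_x, sigma_x, n)
y = rng.normal(mu_y, sigma_y, n)
s = x + y

# Theory: X + Y ~ N(mu_x + mu_y, sigma_x^2 + sigma_y^2) = N(1, 25)
print(s.mean())  # should be close to 1.0
print(s.var())   # should be close to 25.0
```

The sample moments land near the theoretical values, with no standardization anywhere in sight.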
[There are many ways of proving this; one is via characteristic functions. Use the fact that, by independence, $Ee^{it(X+Y)}=Ee^{itX}\,Ee^{itY}$, and you will be able to supply a proof.]
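Sketching that proof: the characteristic function of $N(\mu,\sigma^2)$ is $\varphi(t)=e^{it\mu-\sigma^{2}t^{2}/2}$, so by independence

$$
\varphi_{X+Y}(t)=\varphi_X(t)\,\varphi_Y(t)
= e^{it\mu_X-\sigma_X^{2}t^{2}/2}\,e^{it\mu_Y-\sigma_Y^{2}t^{2}/2}
= e^{it(\mu_X+\mu_Y)-(\sigma_X^{2}+\sigma_Y^{2})t^{2}/2},
$$

which is exactly the characteristic function of $N(\mu_X+\mu_Y,\ \sigma_X^{2}+\sigma_Y^{2})$; since characteristic functions determine distributions, the result follows.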