Variance of a location-scale transformed density


NOTE: I originally wrote this question by transcribing the problem according to my understanding, and in the process I introduced some errors. I used asterisks for multiplication, since as a CS student that was the norm when programming, and I was unaware that in other circles this could be misleading. I also originally believed that $f$ by itself was a pdf, rather than $f\{\}$. I've since deleted that transcription and replaced it with a screenshot of the problem itself, which will hopefully communicate the problem better than my first attempt.

The question is as follows.

I think part of my confusion has been whether to read "$f$" by itself as a pdf, or whether "$f$" refers to the whole expression $f\{(x - \mu)/\sigma\}$. In the latter case, "$g$" is $f\{(x - \mu)/\sigma\}/\sigma$, and not $f$ times $\{(x - \mu)/\sigma\}/\sigma$, the latter being how I originally read the statement.

BEST ANSWER

$\newcommand{\real}{\mathbb{R}}$

First of all, the correct expression of $g$ should be $g(x) = \sigma^{-1}f((x - \mu)\sigma^{-1})$ (that is, the "$*$" in the definition of $g$ in your post should be removed). In terms of random variables, $g$ is the density of $X = \sigma Y + \mu$, if we know $Y \sim f$. This expression then makes the calculation of $\operatorname{Var}(X)$ (equivalently, the variance of $g$) almost immediate:
\begin{align} \operatorname{Var}(X) = \operatorname{Var}(\sigma Y + \mu) = \sigma^2\operatorname{Var}(Y) = \sigma^2, \end{align} as $\operatorname{Var}(Y) = 1$.
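As a numerical sanity check (not part of the proof), one can integrate the transformed density directly. Here the base density $f$ is taken to be the standard normal, and the values of $\mu$ and $\sigma$ are arbitrary illustrative choices:

```python
import numpy as np
from scipy.integrate import quad

# Base density f: standard normal, which has mean 0 and variance 1.
def f(y):
    return np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)

mu, sigma = 3.0, 2.0  # arbitrary location and scale

# Transformed density g(x) = f((x - mu)/sigma) / sigma.
def g(x):
    return f((x - mu) / sigma) / sigma

mean_g, _ = quad(lambda x: x * g(x), -np.inf, np.inf)
var_g, _ = quad(lambda x: (x - mean_g)**2 * g(x), -np.inf, np.inf)

print(mean_g, var_g)  # expect mu and sigma^2, i.e. 3 and 4
```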

It remains to show the density of $X$ is indeed $g$, which can be verified by first evaluating the distribution function of $X$: \begin{align} F_X(x) = P[X \leq x] = P[\sigma Y + \mu \leq x] = P[Y \leq \sigma^{-1}(x - \mu)] = F_Y(\sigma^{-1}(x - \mu)), \end{align} whence, by the chain rule, the density of $X$ is \begin{align} f_X(x) = F_X'(x) = \sigma^{-1}F_Y'(\sigma^{-1}(x - \mu)) = \sigma^{-1}f(\sigma^{-1}(x - \mu)) = g(x). \end{align} This completes the proof.
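Taking $f$ to be the standard normal density, this identity between the distribution functions (and the resulting density) can be checked numerically with scipy; the particular values of $\mu$, $\sigma$, and $x$ below are arbitrary:

```python
from scipy.stats import norm

mu, sigma = 3.0, 2.0  # arbitrary location and scale
x = 1.7               # arbitrary evaluation point

# F_X(x) = F_Y(sigma^{-1}(x - mu)): N(mu, sigma^2) CDF vs. standard normal CDF
lhs_cdf = norm.cdf(x, loc=mu, scale=sigma)
rhs_cdf = norm.cdf((x - mu) / sigma)

# g(x) = sigma^{-1} f(sigma^{-1}(x - mu)): N(mu, sigma^2) pdf vs. rescaled base pdf
lhs_pdf = norm.pdf(x, loc=mu, scale=sigma)
rhs_pdf = norm.pdf((x - mu) / sigma) / sigma

print(abs(lhs_cdf - rhs_cdf), abs(lhs_pdf - rhs_pdf))  # both ~ 0
```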

Alternatively, a direct calculation of $\operatorname{Var}(X)$ through $E[X^2] - (E[X])^2$ is also viable (the key is to get each change-of-variable step right; substituting $y = \sigma^{-1}(x - \mu)$ gives $dx = \sigma\,dy$):
\begin{align}
& E[X] \\
=& \int_\real xg(x)dx \\
=& \int_\real x\frac{f(\sigma^{-1}(x - \mu))}{\sigma}dx \\
=& \int_\real (\sigma y + \mu)f(y)dy \tag{$y = \sigma^{-1}(x - \mu)$} \\
=& \mu\int_\real f(y)dy + \sigma\int_\real yf(y) dy \\
=& \mu, \\[1em]
& E[X^2] \\
=& \int_\real x^2g(x)dx \\
=& \int_\real x^2\frac{f(\sigma^{-1}(x - \mu))}{\sigma}dx \\
=& \int_\real (\sigma y + \mu)^2f(y)dy \tag{$y = \sigma^{-1}(x - \mu)$} \\
=& \sigma^2\int_\real y^2f(y)dy + 2\mu\sigma\int_\real yf(y)dy + \mu^2\int_\real f(y) dy \\
=& \sigma^2 + \mu^2.
\end{align}
In the above derivation, we used two conditions: the expectation of $f$ is $0$ (i.e., $\int_\real yf(y)dy = 0$), and the variance of $f$ is $1$ (i.e., $\int_\real y^2f(y)dy = 1$). Therefore, $$\operatorname{Var}(X) = E[X^2] - (E[X])^2 = \sigma^2 + \mu^2 - \mu^2 = \sigma^2.$$ This completes the proof.
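To see that the direct calculation does not depend on normality, here is a quadrature sketch using a different base density with mean $0$ and variance $1$: the uniform distribution on $[-\sqrt{3}, \sqrt{3}]$. The density and parameter choices are mine, for illustration only:

```python
import numpy as np
from scipy.integrate import quad

# Base density f: uniform on [-sqrt(3), sqrt(3)], which has mean 0 and variance 1.
a = np.sqrt(3)
def f(y):
    return 1 / (2 * a) if -a <= y <= a else 0.0

mu, sigma = -1.5, 0.5  # arbitrary location and scale

# Transformed density g(x) = f((x - mu)/sigma) / sigma, supported on
# [mu - sigma*a, mu + sigma*a].
def g(x):
    return f((x - mu) / sigma) / sigma

EX, _ = quad(lambda x: x * g(x), mu - sigma * a, mu + sigma * a)
EX2, _ = quad(lambda x: x**2 * g(x), mu - sigma * a, mu + sigma * a)

print(EX, EX2 - EX**2)  # expect mu = -1.5 and sigma^2 = 0.25
```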


Background: For a given density function $f$ with mean $0$ and variance $1$, the family of distributions $$\{f_{\mu, \sigma}(x) = \sigma^{-1}f((x - \mu)/\sigma): \mu \in \real, \sigma > 0\}$$ is called a location-scale family. For example, the family of normal distributions $N(\mu, \sigma^2)$ is a location-scale family. It is well-known that the variance of the standard normal distribution is $1$ and the variance of a general normal distribution is $\sigma^2$. This exercise asserts that the same property holds for any location-scale family (where the expectation and variance of the base density $f_{0, 1} = f$ are $0$ and $1$ respectively).
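A quick simulation illustrates the assertion for the normal location-scale family via the random-variable view $X = \sigma Y + \mu$; the sample size and parameters below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 3.0  # arbitrary location and scale

y = rng.standard_normal(10**6)  # Y ~ f, the base density (standard normal)
x = sigma * y + mu              # X ~ f_{mu, sigma}

print(x.mean(), x.var())  # approximately mu = 5 and sigma^2 = 9
```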