Expected value and distributions


I'm reading a book at the moment and it has a theorem that states this:

if $X \sim N(\mu,\sigma^2)$ then
$E(X) = \mu$

My question is: when would it *not* equal $\mu$? I thought the expected value is the mean of any distribution, not just the normal. Is this rule only the case for the normal distribution, for all symmetric distributions, or for all distributions?

Thanks

2 Answers

BEST ANSWER

The expected value $ E(X)$ is often expressed in terms of the parameters of the distribution, for instance:

$$ X \sim Bin(n,p), \quad E(X) = np$$

$$ X \sim Geom(p), \quad E(X) = \frac{1}{p} $$

$$ X \sim NegBin(r,p), \quad E(X) = \frac{r}{p}$$

$$ X \sim Pois(\lambda), \quad E(X) = \lambda $$

$$ X \sim N(\mu, \sigma^{2}), \quad E(X) = \mu $$

$$ X \sim Gamma(\alpha,\beta), \quad E(X) = \frac{\alpha}{\beta}$$

$$ X \sim U(a,b), \quad E(X) = \frac{a+b}{2} $$

$$ X \sim Beta(\alpha,\beta), \quad E(X) = \frac{\alpha}{\alpha +\beta} $$
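These formulas are easy to spot-check numerically. The sketch below (not part of the original answer; the parameter values are arbitrary choices of mine) samples from a few of the distributions with numpy and compares the sample mean to the stated expression:

```python
# Numerical spot-check of a few expectation formulas above.
# Illustrative only; parameter values are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Bin(10, 0.3): E(X) = np = 3.0
bin_mean = rng.binomial(10, 0.3, size=n).mean()

# Geom(0.25): E(X) = 1/p = 4.0
# (numpy's geometric counts trials up to and including the first success)
geom_mean = rng.geometric(0.25, size=n).mean()

# Pois(4): E(X) = lambda = 4.0
pois_mean = rng.poisson(4.0, size=n).mean()

# N(2, 3^2): E(X) = mu = 2.0
norm_mean = rng.normal(2.0, 3.0, size=n).mean()

print(bin_mean, geom_mean, pois_mean, norm_mean)
```

With a million samples each, every sample mean lands within a few thousandths of the corresponding parameter formula.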

So $ E(X)$ ends up being a function of the parameters simply because of how the expected value is defined:

$$ E(X) = \int_{-\infty}^{\infty} xf(x) dx$$

where $f(x)$ is our density function, for continuous random variables.

For discrete random variables we have $$ E(X) = \sum_{i} x_{i} p(x_{i}) $$
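The discrete sum can be computed directly for any small pmf. As a minimal illustration (my own example, not from the answer), here it is for a fair six-sided die:

```python
# E(X) = sum_i x_i * p(x_i) for a fair six-sided die (illustrative example).
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6  # uniform pmf over the six faces

expectation = sum(x * p for x, p in zip(values, probs))
print(expectation)  # 3.5
```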

Given this, it isn't surprising that the parameters of the distribution show up in the expectation. For instance, suppose $ X \sim U(a,b) $. Its pdf is given by:

$$ f(x) =\begin{align}\begin{cases} \frac{1}{b-a} & \textrm{ for } a \leq x \leq b \\ \\ 0 & \textrm{ for } x <a\textrm{ or } x >b \end{cases} \end{align}$$

$$ E(X) = \int_{-\infty}^{\infty} x f(x)\, dx $$

$$ E(X) = \int_{a}^{b} x \frac{1}{b-a}\, dx $$ $$ E(X) = \frac{1}{b-a}\int_{a}^{b} x\, dx $$ $$ E(X) = \frac{1}{b-a} \frac{x^{2}}{2}\Big|_{a}^{b}$$ $$ E(X) = \frac{b^{2}-a^{2}}{2(b-a)}$$ $$ E(X) = \frac{(b-a)(b+a)}{2(b-a)}$$ $$ E(X) = \frac{b+a}{2} $$
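The same integral can be approximated numerically. The sketch below (my own addition; $a = 2$, $b = 5$ are arbitrary choices) uses a midpoint Riemann sum for $\int_a^b x \cdot \frac{1}{b-a}\, dx$ and compares it to $\frac{a+b}{2}$:

```python
# Midpoint Riemann sum for E(X) = integral of x * 1/(b-a) over [a, b],
# compared against the closed form (a + b) / 2. Values of a, b are arbitrary.
a, b = 2.0, 5.0
n = 100_000
width = (b - a) / n

# Sum f(midpoint) * width over each subinterval, with f(x) = x / (b - a).
total = sum((a + (i + 0.5) * width) / (b - a) * width for i in range(n))

print(total)         # numerically ~ 3.5
print((a + b) / 2)   # 3.5
```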


The normal distribution $N(\mu, \sigma^2)$ is a probability distribution based on the pdf:

$f(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x - \mu)^2}{2\sigma^2}}$

Nowhere in that definition is it guaranteed that $\mu$ is the mean of the distribution, or that $\sigma$ is its standard deviation. Both need to be proven.

With other probability distributions, there's no guarantee that they will be directly parameterised by their expected value. For example, the log-normal distribution $\text{Lognormal}(\mu, \sigma^2)$ has an expected value of $e^{\mu+\frac{\sigma^2}{2}}$.
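This can be verified by simulation. A quick sketch (not from the answer; the values of $\mu$ and $\sigma$ are my own choices) draws log-normal samples with numpy and compares the sample mean to $e^{\mu + \sigma^2/2}$:

```python
# Simulation check of the log-normal mean e^{mu + sigma^2/2}.
# Parameter values mu = 0.5, sigma = 0.8 are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.5, 0.8

samples = rng.lognormal(mean=mu, sigma=sigma, size=2_000_000)
theoretical = np.exp(mu + sigma**2 / 2)

print(samples.mean(), theoretical)  # sample mean should be close to theoretical
```

Note that the sample mean is clearly not $\mu = 0.5$: here the parameters name the mean and standard deviation of the *underlying* normal, not of the log-normal itself.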