I was looking at an example that uses the central limit theorem to calculate the probability of the sample mean being between two constants. The part of the example that is causing me some problems is the following:
Let $\overline{X}$ denote the mean of a random sample of size $25$ from the distribution whose p.d.f. is $f(x)=\frac{x^3}{4}$, $0<x<2$. It is easy to show that $\mu=\frac{8}{5} = 1.6$ and $\sigma^2 = \frac{8}{75}$...
I think this is a really easy problem, but I have a few doubts:
If the p.d.f. is only given for $0<x<2$, what about the rest of the values of $x$? For example, what is the p.d.f. when $2 \leq x \leq 25$?
The variables in the sample should be i.i.d., so they should all have the same mean and variance. How did they obtain $\mu=\frac{8}{5} = 1.6$ and $\sigma^2 = \frac{8}{75}$?
I am sorry if this is a stupid question, but I have some problems getting my head around this.
Based on what you have written in your question, you have some fundamental misunderstandings that you must rectify before you can proceed any further.
First of all, when we say that we have a random sample of size $n$ drawn from a distribution, what we mean is that there are $n$ independent and identically distributed random variables $$X_1, X_2, \ldots, X_n$$ whose values all share the same probability distribution. That is NOT to say that the support has anything to do with the size of that sample. The support is the set of possible outcomes for a random variable. When you are told that $$f_X(x) = x^3/4, \quad 0 < x < 2,$$ that means that any random variable with this density can only attain values between $0$ and $2$. To check that this density is a true probability density, its integral on the interval $(0,2)$ must equal $1$: indeed, $$\int_{x=0}^2 \frac{x^3}{4} \, dx = \left[\frac{x^4}{16}\right]_{x=0}^2 = \frac{16}{16} - \frac{0}{16} = 1.$$ This tells you that the support is correctly specified and that the density everywhere else is zero.
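If it helps to see this numerically, here is a quick sketch (my own, not part of the original problem) that checks the integral of $f_X(x) = x^3/4$ over $(0,2)$ really is $1$, using a simple midpoint Riemann sum:

```python
# Verify numerically that f(x) = x^3/4 is a valid density on (0, 2),
# i.e. that it integrates to 1 over its support.

def f(x):
    return x**3 / 4

def integrate(g, a, b, n=100_000):
    # Midpoint Riemann sum; accurate enough for this smooth integrand.
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

total = integrate(f, 0, 2)
print(round(total, 6))  # ≈ 1.0
```

Outside $(0,2)$ the density is simply $0$, so nothing further contributes to the integral.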
The fact that you ask "what happens for $2 \le x \le 25$" shows that you have confused the sample size with the support. These are distinct concepts. If I have a fair coin and represent the outcome of getting heads on a single flip as $Y = 1$ and the outcome of tails as $Y = 0$, then clearly $Y$ is Bernoulli distributed with $$\Pr[Y = 1] = \Pr[Y = 0] = 1/2,$$ and the support is the set $\{0, 1\}$. But I could choose to flip this coin as many times as I want: I could flip it $n = 10$ times, $n = 1000$ times, or $n = 10^{100}$ times. How many times the coin is flipped has nothing to do with the values the random variable can take on a single trial.
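A tiny simulation (again, my own sketch) makes the distinction concrete: no matter how many flips we take, every observed value still lies in the support $\{0, 1\}$:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Flip the fair coin n times for several different sample sizes.
for n in (10, 1000, 100_000):
    flips = [random.randint(0, 1) for _ in range(n)]
    # The sample size changes, but the support never does:
    assert set(flips) <= {0, 1}
    print(n, sum(flips) / n)  # sample proportion of heads, near 1/2
```

The sample size $n$ only controls how many draws we observe; the support $\{0,1\}$ is a property of the distribution itself.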
Back to the distribution of $X$: to calculate the mean of a single observation, we simply apply the formula for expected value: $$\operatorname{E}[X] = \int_{x=0}^2 x f_X(x) \, dx = \int_{x=0}^2 \frac{x^4}{4} \, dx = \left[\frac{x^5}{20}\right]_{x=0}^2 = \frac{2^5}{20} - \frac{0}{20} = \frac{32}{20} = \frac{8}{5} = \mu,$$ as claimed. The variance is calculated as $$\operatorname{Var}[X] = \operatorname{E}[(X - \mu)^2] = \int_{x=0}^2 \left(x - \frac{8}{5}\right)^2 \frac{x^3}{4} \, dx$$ or equivalently, $$\operatorname{Var}[X] = \operatorname{E}[X^2] - \operatorname{E}[X]^2 = \int_{x=0}^2 \frac{x^5}{4} \, dx - \left(\frac{8}{5}\right)^2.$$ These values for $\mu$ and $\sigma^2$ are, again, for a single observation. The mean and variance of the sample mean $$\bar X = \frac{1}{25}\sum_{i=1}^{25} X_i$$ may be different. The mean of the sample mean is of course $\mu$, because of the linearity of expectation: $$\operatorname{E}[X_1 + \cdots + X_n] = \operatorname{E}[X_1] + \cdots + \operatorname{E}[X_n].$$ But the variance of the sample mean will not be $\sigma^2$. I leave it to you to figure out what it should be.
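You can confirm $\mu = 8/5$ and $\sigma^2 = 8/75$ numerically as well; here is a short sketch (mine, not from the text) using the same midpoint-rule idea. It checks only the single-observation moments, so the exercise about $\operatorname{Var}[\bar X]$ is left intact:

```python
# Numerically compute E[X] and Var[X] for the density f(x) = x^3/4 on (0, 2).

def f(x):
    return x**3 / 4

def integrate(g, a, b, n=100_000):
    # Midpoint Riemann sum over [a, b] with n subintervals.
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

mu = integrate(lambda x: x * f(x), 0, 2)        # E[X]      -> 8/5  = 1.6
ex2 = integrate(lambda x: x**2 * f(x), 0, 2)    # E[X^2]    -> 8/3
var = ex2 - mu**2                               # Var[X]    -> 8/75 ≈ 0.1067
print(round(mu, 4), round(var, 4))
```

Both values agree with the closed-form answers, $8/5$ and $8/75$.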