What is an intuitive explanation for how the t-distribution, normal distribution, F-distribution, and Chi-square distribution relate to each other?
Could anyone explain this clearly with a sensible example?
I am a biologist and have been trying to understand this for nearly 10 years now. Every time I use these statistical tests, I do so without a proper understanding of the basics. Textbooks do not address this question either, and we are not math- or statistics-specialized at our university.
It is not totally clear to me precisely what you are looking for, but suppose $X_1,X_2,\ldots,X_n$ are i.i.d. normally distributed random variables with mean $\mu$ and variance $\sigma^2$.
Writing their average as $\bar{X}={\frac1n}\sum\limits_{i=1}^{n} X_i$, the standardized sample mean $\dfrac{\bar{X} -\mu}{\sigma/\sqrt{n}}$ has a standard normal distribution $N(0,1)$; this gives the distribution of the sample mean when $\sigma^2$ is known.
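To see this numerically, here is a small simulation sketch in Python with NumPy; the population parameters $\mu=5$, $\sigma=2$ and the sample size $n=30$ are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 5.0, 2.0, 30   # arbitrary population parameters and sample size
reps = 100_000                # number of simulated samples

# Draw many samples of size n and standardize each sample mean
samples = rng.normal(mu, sigma, size=(reps, n))
z = (samples.mean(axis=1) - mu) / (sigma / np.sqrt(n))

# z should behave like draws from N(0, 1): mean near 0, standard deviation near 1
print(z.mean(), z.std())
```

Whatever $\mu$, $\sigma$ and $n$ you pick, the printed mean and standard deviation should come out close to $0$ and $1$.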
Next, $\sum\limits_{i=1}^{n} \left(\frac{X_i-\mu}{\sigma}\right)^2$ has a $\chi_n^2$-distribution, i.e. a chi-squared distribution with $n$ degrees of freedom, being the sum of the squares of $n$ independent standard normal random variables.
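A quick check of this fact, again a sketch with NumPy and an arbitrary choice of $n=5$: a $\chi^2_n$ distribution has mean $n$ and variance $2n$, so summing $n$ squared standard normals many times should reproduce those moments.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 5, 100_000   # arbitrary degrees of freedom and replication count

# Sum of squares of n independent standard normals, repeated many times
q = (rng.standard_normal((reps, n)) ** 2).sum(axis=1)

# Chi-squared with n degrees of freedom has mean n and variance 2n
print(q.mean(), q.var())
```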
Estimating the unbiased sample variance as $S^2=\frac1{n-1}\sum\limits_{i=1}^{n} \left({X_i-\bar{X}}\right)^2$, you have $(n-1)\frac{S^2}{\sigma^2}$ following a $\chi_{n-1}^2$-distribution, i.e. a chi-squared distribution with $n-1$ degrees of freedom; one degree of freedom is lost because $\bar{X}$ is itself computed from the individual $X_i$.
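The loss of one degree of freedom shows up cleanly in simulation. A sketch (arbitrary parameters; `ddof=1` makes NumPy use the $n-1$ divisor for the unbiased $S^2$):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, reps = 5.0, 2.0, 10, 100_000   # arbitrary illustration values

samples = rng.normal(mu, sigma, size=(reps, n))
s2 = samples.var(axis=1, ddof=1)     # unbiased sample variance S^2
q = (n - 1) * s2 / sigma**2

# Should match chi-squared with n-1 = 9 degrees of freedom: mean 9, variance 18
print(q.mean(), q.var())
```

If you instead standardized with the true mean $\mu$ in place of $\bar{X}$, the mean and variance would come out near $10$ and $20$, i.e. $n$ rather than $n-1$ degrees of freedom.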
Looking again at the distribution of the sample mean, but with $S$ in place of the unknown $\sigma$, the quantity $\dfrac{\bar{X} -\mu}{S/\sqrt{n}}$ has a Student $t$-distribution with $n-1$ degrees of freedom. This is not quite the same as the standard normal distribution in the first point, but it is close for large $n$; you can use it to test the hypothesis that the population mean is actually $\mu$ without knowing $\sigma^2$.
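Here is a sketch of that test statistic in action, using SciPy for the theoretical $t$ quantile (parameter values again arbitrary): under the null hypothesis, the fraction of simulated statistics exceeding the upper 5% critical value of $t_{n-1}$ should be about 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu, sigma, n, reps = 5.0, 2.0, 8, 100_000   # arbitrary illustration values

samples = rng.normal(mu, sigma, size=(reps, n))
xbar = samples.mean(axis=1)
s = samples.std(axis=1, ddof=1)              # sample standard deviation S
t = (xbar - mu) / (s / np.sqrt(n))           # the t statistic, no sigma needed

# Upper 5% critical value of the t distribution with n-1 degrees of freedom
crit = stats.t.ppf(0.95, df=n - 1)
print((t > crit).mean())                     # should be close to 0.05
```

Note that for $n=8$ the normal critical value 1.645 would be too small here; the heavier tails of $t_7$ push the cutoff up to about 1.89.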
As a tool for comparing variances: if $Z_1 \sim \chi^2_{d_1}$ and, independently, $Z_2 \sim \chi^2_{d_2}$, i.e. they have chi-squared distributions with $d_1$ and $d_2$ degrees of freedom, then $\frac{Z_1 / d_1}{Z_2 / d_2} \sim \mathrm{F}(d_1, d_2)$, i.e. the ratio has an $F$-distribution with parameters $d_1$ and $d_2$.
In particular, if $Y_1,Y_2,\ldots,Y_m$ are also i.i.d. normally distributed random variables with a different mean $\mu_Y^{\,}$ but the same variance $\sigma^2$ as the earlier $X_i$, then combining the two previous points (each sum of squares is a scaled chi-squared variable, and each must be divided by its degrees of freedom), $\dfrac{\frac{1}{n-1}\sum\limits_{i=1}^{n} \left({X_i-\bar{X}}\right)^2}{\frac{1}{m-1}\sum\limits_{j=1}^{m} \left({Y_j-\bar{Y}}\right)^2}\sim \mathrm{F}(n-1, m-1)$, i.e. the ratio of the two sample variances has an $F$-distribution with parameters $n-1$ and $m-1$, and you can use this as a test of the hypothesis that the variances are equal without knowing their common value or the values of the means.
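A sketch of this variance-ratio test, with arbitrary sample sizes $n=12$, $m=8$ and deliberately different means: under the null hypothesis of equal variances, the fraction of simulated ratios exceeding the upper 5% quantile of $F(n-1, m-1)$ should be about 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
sigma, n, m, reps = 2.0, 12, 8, 100_000   # arbitrary illustration values

# Two independent normal samples: different means, equal variance
x = rng.normal(0.0, sigma, size=(reps, n))
y = rng.normal(3.0, sigma, size=(reps, m))

# Ratio of unbiased sample variances (each sum of squares / its own df)
f = x.var(axis=1, ddof=1) / y.var(axis=1, ddof=1)

# Upper 5% critical value of F(n-1, m-1)
crit = stats.f.ppf(0.95, dfn=n - 1, dfd=m - 1)
print((f > crit).mean())                  # should be close to 0.05
```

Notice that neither the common variance $\sigma^2$ nor either mean appears in the test statistic itself.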
You may not know that the $X_1,X_2,\ldots,X_n$ are in fact normally distributed, but the Central Limit Theorem says that for large $n$ (and finite $\mu$ and $\sigma^2$) $\bar{X}$ is approximately normally distributed as in the first point. This approximation may turn out to be good enough for the other results as well, though when $n$ is too small it may not be.
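The CLT effect can be sketched with deliberately non-normal data; here exponential variables (which have mean and variance both equal to the scale parameter squared's square root, i.e. 1 for scale 1) with an arbitrary $n=100$:

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 100, 100_000   # arbitrary: large-ish n, many replications

# Exponential data with scale 1: heavily skewed, mean 1, variance 1
samples = rng.exponential(scale=1.0, size=(reps, n))

# Standardize the sample mean exactly as in the first point
z = (samples.mean(axis=1) - 1.0) / (1.0 / np.sqrt(n))

# Despite the skewed raw data, z is close to N(0, 1) for large n
print(z.mean(), z.std())
```

Rerunning with $n=3$ or $n=5$ instead shows visibly skewed results, which is the "too small" caveat above.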