Pure Mathematics vs Mathematical Statistics


I see that these majors are usually offered separately at universities.

1) Does Pure Math not cover ALL math including statistics?

2) If you choose Pure Math - will there be things you will NOT LEARN in pure math that you are able to learn in Statistics?

3) Do you do PROOFS in Mathematical Statistics like with Pure math?


$(1)$ Emphatically no.

$(2)$ Emphatically yes.

$(3)$ Emphatically yes.

The list of counterexamples to the seeming suggestions in $(1)$ and $(2)$ is very long. Here are a few:

  • In pure mathematics you will not learn about the design of experiments, even though you may learn a lot about finite fields and the combinatorial designs that are used in designing experiments. You will not find out the difference between replication and repetition. You might not hear that you cannot tell the difference between a split-plot design and a factorial design without knowing how the randomization was done. You won't hear that although the same subject may participate in an experiment once in the test group and later in the control group, you can't have the same subject participate once as a male and once as a female, or what that implies about the different roles of test-versus-control and male-versus-female in the way the data get analyzed.
  • In pure mathematics you will not learn about the consequences in the empirical sciences of the difference between assigning probabilities to all uncertain propositions, and assigning them only when they can be interpreted as relative frequencies. You won't learn how that difference interacts with some issues in the epistemology of scientific induction.
  • In pure math you might well learn about singular-value decompositions, despite the fact that those are more frequently seen in "applied" settings, but you won't learn to look at a matrix in which each entry is a sample average and, based on a reasonable guess that the rank exceeds $2$ only because of noise, replace all but the two largest singular values with $0$, then plot "row markers" and "column markers" in $\mathbb R^2$ and make further guesses based on whether the row markers or the column markers might be collinear if not for noise, or whether the two lines meet at a right angle. And that's just one of hundreds, more likely thousands, of things similar in spirit that are done in statistics.
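The rank-$2$ truncation just described can be sketched numerically. Everything below (the synthetic matrix of "sample averages," the convention of splitting each singular value evenly between row and column markers) is an illustrative assumption, not part of the answer above:

```python
import numpy as np

# Hypothetical matrix of sample averages (rows: treatments, cols: conditions);
# built as a rank-1 signal plus small noise purely for illustration.
rng = np.random.default_rng(0)
signal = rng.normal(size=(5, 1)) @ rng.normal(size=(1, 4))
M = signal + 0.01 * rng.normal(size=(5, 4))

U, s, Vt = np.linalg.svd(M, full_matrices=False)

# Replace all but the two largest singular values with 0,
# on the guess that the rank exceeds 2 only because of noise.
s_trunc = np.where(np.arange(len(s)) < 2, s, 0.0)
M2 = U @ np.diag(s_trunc) @ Vt  # best rank-2 approximation of M

# One common biplot convention: split each singular value evenly
# between the row markers and the column markers.
row_markers = U[:, :2] * np.sqrt(s[:2])   # one point in R^2 per row of M
col_markers = Vt[:2, :].T * np.sqrt(s[:2])  # one point in R^2 per column of M
```

One would then plot `row_markers` and `col_markers` in the plane and eyeball whether either set is nearly collinear, or whether the two fitted lines meet at a right angle.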

There are also some things that belong squarely in the "pure math" camp that nevertheless you're far more likely to encounter in a statistics course than in any pure math course. For example:

  • For a random variable $X$ taking values in $\mathbb R^{n\times 1}$, define the variance of $X$ as $$ \operatorname{var}(X)=\operatorname{E}((X-\mu)(X-\mu)^T)\in\mathbb R^{n\times n} \text{ where }\mu=\operatorname{E}(X). $$ Then how do you show that every non-negative-definite symmetric matrix can be realized as such a variance?
  • Or if $A\in\mathbb R^{k\times n}$, prove that $$ \operatorname{var}(AX) = A\Big(\operatorname{var}(X)\Big)A^T\in\mathbb R^{k\times k}. $$ (That one's easy.)
  • Supposing $Y_i = \alpha +\beta X_i+\varepsilon_i$, where the $\varepsilon_i$ are uncorrelated (not necessarily independent), have expected value $0$, and have equal variances (not necessarily the same distribution), how do you show that among all linear functions of the vector of $Y$-values whose expectations are $(\alpha,\beta)$, the least-squares estimators have the smallest variance? (This is the Gauss–Markov theorem.)
  • Supposing $X_1,\ldots,X_n\sim\mathrm{i.i.d.}\,N(\mu,\sigma^2)$, how do you show that among all measurable functions of $(X_1,\ldots,X_n)$ (not necessarily linear; for example, consider the median of these $n$ values) that do not depend on $\mu$ or $\sigma$ and whose expected value is $\mu$, the one with the smallest variance is the sample mean $(X_1+\cdots+X_n)/n$? (This reduces to the one-to-one nature of the two-sided Laplace transform.)
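The first two bullets above can be sketched numerically; the particular $\Sigma$ and $A$ below are made-up examples. For the first: given symmetric non-negative-definite $\Sigma$, take $Z$ with i.i.d. mean-$0$, variance-$1$ components and set $X = BZ$ where $BB^T=\Sigma$; then $\operatorname{var}(X)=B\operatorname{var}(Z)B^T=\Sigma$. The second identity, $\operatorname{var}(AX)=A\operatorname{var}(X)A^T$, is checked here by Monte Carlo:

```python
import numpy as np

# An example symmetric non-negative-definite matrix (in fact positive-definite).
Sigma = np.array([[4.0, 2.0, 0.0],
                  [2.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

# Symmetric square root via the spectral decomposition; unlike the Cholesky
# factorization, this works even when Sigma is singular.
w, V = np.linalg.eigh(Sigma)
B = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
# B @ B.T == Sigma, so X = B Z has var(X) = Sigma: every non-negative-definite
# symmetric matrix is realized as a variance.

# Check var(AX) = A var(X) A^T empirically with an example A in R^{2x3}.
rng = np.random.default_rng(0)
A = np.array([[1.0, -1.0, 0.0],
              [0.0,  1.0, 2.0]])
X = rng.multivariate_normal(np.zeros(3), Sigma, size=200_000)  # rows are draws
emp = np.cov((X @ A.T).T)            # empirical variance of AX
err = np.max(np.abs(emp - A @ Sigma @ A.T))  # small sampling error
```

The exact algebraic proof of the second bullet is one line ($\operatorname{E}(A(X-\mu)(X-\mu)^TA^T)=A\operatorname{E}((X-\mu)(X-\mu)^T)A^T$); the code only illustrates it.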

That's a very short answer; a long answer, or at least one that tries not to miss anything big, might be somewhat interesting.