Is there a name for the statement that the frequentist definition of probability follows from the Kolmogorov definition?


From what I have learned about the debate over definitions/interpretations of probability, it seems that it doesn't mathematically make sense to define probability in the frequentist way, but that once probability is defined properly (axiomatically), the frequentist interpretation holds whenever frequency is a well-defined concept (correct me if I am wrong).

Is there a name for a theorem that states that frequency corresponds to probability in the Kolmogorov sense under certain conditions?


Best answer:

I think the closest you're going to come is the Law of Large Numbers, which states (in its strong form) that if an event has probability $p$, then the relative frequency of the event in infinitely many independent trials is almost surely $p$.

Alas this doesn't really say that the frequentist interpretation is correct, because of the words "almost surely". If you toss a fair coin infinitely many times the frequency of heads is almost surely $1/2$, but that doesn't allow us to use the frequency to define probability, because the frequency might not be $1/2$.

Trying to give a frequentist definition of that "almost surely" leads to an infinite regress: We could perform the experiment "toss a coin infinitely many times" infinitely many times, and then almost surely the frequency with which we get a frequency equal to $1/2$ is $1$. But there's that "almost surely" again...
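The point about "almost surely" can be made concrete with a small simulation. The sketch below (the function name and parameters are my own, invented for illustration) estimates the relative frequency of heads for a fair coin: it tends toward $1/2$ as the number of trials grows, yet at no finite $n$ is it guaranteed to equal $1/2$, which is exactly why the frequency cannot serve as a definition of the probability.

```python
import random

def relative_frequency(p, n_trials, seed=0):
    """Simulate n_trials independent Bernoulli(p) trials and
    return the relative frequency of successes."""
    rng = random.Random(seed)
    successes = sum(rng.random() < p for _ in range(n_trials))
    return successes / n_trials

# The relative frequency approaches p = 0.5 as n grows, but for
# any finite n it need not (and typically does not) equal 0.5 exactly.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(0.5, n))
```

Note that even the statement "the simulated frequency is close to $1/2$ for large $n$" is itself only an almost-sure claim, which is the regress described above.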

Second answer:

R.A. Fisher, in the preface to his 1928 Theory of Statistical Estimation, put forward the following proposition as a justification for his frequentist approach:

Imagine a population of $N$ individuals belonging to $s$ classes, the number in each class $k$ being $p_kN$. This population can be arranged in order in $N!$ ways. Let it be so arranged and let us call the first $n$ individuals in each arrangement a sample of $n$. Neglecting the order within the sample, these samples can be classified into the several possible types of sample according to the number of individuals of each class which appear. Let this be done, and denote the proportion of samples which belong to type $j$ by $q_j$, the number of types being $t$.

Consider the following proposition. Given any series of proper fractions $P_1,P_2,\ldots,P_s$, such that $S(P_k) = 1$, and any series of positive numbers $\eta_1,\eta_2,\ldots,\eta_t$, however small, it is possible to find a series of proper fractions $Q_1,Q_2,\ldots,Q_t$, and a series of positive numbers $\epsilon_1,\epsilon_2,\ldots,\epsilon_s$, and an integer $N_0$, such that, if $N > N_0$ and $|p_k-P_k| < \epsilon_k$ for all values of $k$, then will $|q_j-Q_j| < \eta_j$ for all values of $j$.

I imagine it possible to provide a rigorous proof of this proposition, but I do not propose to do so. If it be true, we may evidently speak without ambiguity or lack of precision of an infinite population characterised by the proper fractions, $P$, in relation to the random sampling distributions of samples of a finite size $n$.

It is indeed possible to prove this, and several years ago I attempted to do so in a note. Some of Fisher's terms are not immediately obvious, and I tried to explain them on page 2 of that note.
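Fisher's construction can be checked by brute force on a toy population. The sketch below (the population, class labels, and sample size are invented for the example; this is not Fisher's own computation) arranges a small finite population in every possible order, takes the first $n$ individuals of each arrangement as a sample, and classifies samples by type, i.e. by the number of individuals of each class, ignoring order within the sample:

```python
from itertools import permutations
from collections import Counter

def sample_type_proportions(population, n):
    """Arrange the population (a list of class labels) in every
    possible order; take the first n individuals of each arrangement
    as a sample; classify samples by type (the multiset of class
    counts, ignoring order within the sample).  Return the proportion
    q_j of arrangements that yield each type j."""
    counts = Counter()
    total = 0
    for arrangement in permutations(population):
        sample_type = tuple(sorted(Counter(arrangement[:n]).items()))
        counts[sample_type] += 1
        total += 1
    return {t: c / total for t, c in counts.items()}

# Population of N = 4 with two classes, p_a = p_b = 1/2; samples of n = 2.
# The proportions q_j over all 4! = 24 arrangements sum to 1.
props = sample_type_proportions(['a', 'a', 'b', 'b'], 2)
```

For this population the mixed type $\{a, b\}$ occurs in $2/3$ of arrangements and each pure type in $1/6$, and Fisher's proposition asserts that such proportions stabilise as $N$ grows with the class fractions held approximately fixed.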

But this does not necessarily mean that "the frequentist interpretation holds": many critics argue that frequentist statistical methods answer the wrong questions, as exemplified by the frequentist interpretation of confidence intervals, which makes no claim about a particular calculated interval but only about the method used to produce it.