I have come across various papers that consider a stronger form of the probability–relative-frequency convergence theorem, called the 'exact law of large numbers'.
I note that, in particular, such theorems are invoked or proven within the nonstandard probability theory of H. J. Keisler, Hoover, and Ed Nelson, among others.
What exactly is meant by such theorems? Are they one of the following:
(1) convergence theorems that apply to uncountably many identically distributed trials, rather than countably infinitely many,
or
(2) a stronger form of convergence theorem present in certain probabilistic semantics/models and/or logics, wherein convergence is certain (not merely almost certain), or holds 'almost always' in the stronger combinatorial sense, i.e. for 'almost all' possible infinitely long binary sequences — rather than 'almost surely', which is simply a Lebesgue measure-one result, as exhibited in the standard strong law of large numbers,
(3) or something else entirely?
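For comparison with option (2), the classical strong law of large numbers asserts only almost-sure (measure-one) convergence. A standard statement, in LaTeX:

```latex
% Classical strong law of large numbers:
% for i.i.d. random variables X_1, X_2, ... with E|X_1| < \infty,
\[
  \Pr\!\left( \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} X_k
              = \mathbb{E}[X_1] \right) = 1 .
\]
% Convergence holds on a set of measure one: the exceptional null set
% of sample sequences may be nonempty, which is exactly the gap that a
% "certain" or combinatorial "almost all" strengthening would address.
```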
Malitz, Jerome, "Infinitary analogs of theorems from first order model theory", J. Symb. Log. 36, 216–228 (1971).
Chateauneuf, Alain, "On the existence of a probability measure compatible with a total preorder on a Boolean algebra", J. Math. Econ. 14, 43–52 (1985).
Chuaqui also worked in this area: in his book (1994), "Probability and Possibility", he extends some of Ed Nelson's results from "Radically Elementary Probability Theory".
I suppose you do not know whether Hoover or Keisler aimed at stronger limit relative-frequency theorems (stronger than, or akin to, those in Banach spaces)? That is, certain convergence of the relative-frequency values to the probability value in the limit, either in almost all cases or with measure one (almost surely)?
Nelson uses the notion of internal sets, and in "Radically Elementary Probability Theory" he uses a phrase other than 'exact law of large numbers'; I can't remember it at the moment. But I presume these qualifiers are meant to indicate that the theorem is more rigorous by the lights of nonstandard analysis (and not that it necessarily has stronger consequences).
I would like to clarify the issue mentioned by the OP, which has to do with the kind of infinity that's involved when analyzing probability problems in Robinson's framework.
Leibniz, Euler, and Cauchy used infinite numbers frequently in their work. Thus Euler would analyze $e^x$ by means of an infinite integer $i$ by applying the binomial formula to $(1+\frac{1}{i})^i$. Since today the symbol $i$ is reserved for the imaginary unit it is preferable to use an alternative notation for an infinite number, say $H$, and work with $(1+\frac{1}{H})^H$ instead. Such infinite numbers obeyed the usual laws of arithmetic and behaved in the usual way when elementary functions were applied to them.
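Euler's manipulation can be sketched as follows (a schematic reconstruction in modern notation, not a quotation of Euler):

```latex
% Expanding (1 + 1/H)^H by the binomial formula, H an infinite integer:
\[
  \left(1+\frac{1}{H}\right)^{\!H}
  = \sum_{k=0}^{H} \binom{H}{k} \frac{1}{H^{k}}
  = \sum_{k=0}^{H} \frac{H(H-1)\cdots(H-k+1)}{k!\,H^{k}} .
\]
% For each finite k, the coefficient H(H-1)...(H-k+1)/H^k is
% infinitely close to 1, so the k-th term is infinitely close to 1/k!,
% recovering the familiar series
\[
  e \;=\; 1 + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \cdots .
\]
```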
Cantor changed the way we think about infinity, and today when one hears the term one tends to think in terms of cardinalities, whether countable or uncountable. These are less well suited for probabilistic analysis than infinite numbers as used by Leibniz, Euler, and Cauchy and formalized in a satisfactory way in Abraham Robinson's framework.
Thus, one would typically work with a hyperfinite number $H$ of trials. The precise cardinality of the set of numbers less than $H$ is mostly irrelevant for probabilistic applications.
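As an illustration of the hyperfinite point of view (a generic textbook-style sketch, not a specific theorem of Keisler, Hoover, or Nelson), a fair coin tossed a hyperfinite number $H$ of times can be modeled as follows:

```latex
% Hyperfinite sample space for H tosses, H an infinite hyperinteger:
\[
  \Omega = \{0,1\}^{H}, \qquad
  P(\omega) = 2^{-H} \quad \text{for each } \omega \in \Omega .
\]
% Writing S_H(omega) for the number of heads in omega, a law of large
% numbers in this setting says: for all omega outside a set of
% infinitesimal probability,
\[
  \frac{S_{H}(\omega)}{H} \;\approx\; \frac{1}{2},
\]
% where "\approx" means the two sides differ by an infinitesimal.
% Note that only the infinite number H enters; the cardinality of the
% underlying set plays no role, as remarked above.
```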