What is the intuition behind the definition of a simply strongly normal number?


In a thesis by A. Belshaw (*On the normality of numbers*), the author defines a new normality criterion called *simply strongly normal*. The motivation behind this definition is:

Let $\alpha$ be a number represented in base $r$ and let $m_k(n)$ be the number of occurrences of the digit $k$ (i.e. the 1-string $k$) in the first $n$ digits. Then $\alpha$ is simply normal to base $r$ if $\frac{r\, m_k(n)}{n} \to 1$ as $n \to \infty$, for each $k \in \{0,1, \dots, r-1\}$. But if a number is binomially random, then the discrepancy $m_k(n) - n/r$ should fluctuate with an expected order of magnitude $\sqrt{n}$.

Definition Let $\alpha$ and $m_k(n)$ be defined as above. Then $\alpha$ is simply strongly normal to base $r$ if

$\displaystyle \limsup_{n \to \infty} \displaystyle\frac{(m_k(n) - n/r)^2}{\frac{r-1}{r^2}n^{1+\varepsilon}} = 0 \hspace{2cm}$ and $\hspace{2cm} \displaystyle \limsup_{n \to \infty} \displaystyle\frac{(m_k(n) - n/r)^2}{\frac{r-1}{r^2}n^{1-\varepsilon}} = \infty$

for any $\varepsilon > 0$,

where the constant $\frac{r-1}{r^2}$ is derived from the variance of the binomial distribution.
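To spell out where that constant comes from (my own unpacking, not a step from the thesis): if each digit were an independent uniform draw from $\{0, 1, \dots, r-1\}$, then $m_k(n)$ would be binomial with parameters $n$ and $p = 1/r$, so

$$\operatorname{Var}\bigl(m_k(n)\bigr) = np(1-p) = n \cdot \frac{1}{r}\left(1 - \frac{1}{r}\right) = \frac{r-1}{r^2}\, n,$$

which is exactly the scale against which the squared discrepancy $(m_k(n) - n/r)^2$ is being compared.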

The definition as stated above makes me think of a (binomially) normalised random variable. Since the random variable $m_k(n)/n$ should converge to $1/r$ for $\alpha$ to be simply normal, the first criterion seems more intuitive to me than the second (as the numerator 'should converge to zero').

Also, intuitively, I'd say they normalise because they want to look at the rate of convergence towards the asymptotic frequency, in order to compare the asymptotic behaviour of different normal numbers. E.g. Champernowne's number in base 2 has a growing excess of ones in its expansion, yet the asymptotic frequency of the digit 1 is still $1/2$.
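To make that excess of ones concrete, here is a small sketch (my own illustration, not from the thesis) that builds a prefix of the base-2 Champernowne number by concatenating the binary expansions of $1, 2, 3, \dots$ and tracks the discrepancy $m_1(n) - n/2$:

```python
def champernowne2_digits(num_integers):
    """Concatenate the binary expansions of 1..num_integers."""
    return "".join(format(i, "b") for i in range(1, num_integers + 1))

for num_integers in (100, 1000, 10000):
    digits = champernowne2_digits(num_integers)
    n = len(digits)
    excess = digits.count("1") - n / 2  # discrepancy m_1(n) - n/2
    print(n, excess)
```

The excess keeps growing with $n$ (every concatenated integer contributes a guaranteed leading 1), even though the relative frequency $m_1(n)/n$ still tends to $1/2$.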

This is as far as my intuition brought me; the motivation given in the thesis does not give me more understanding of the choice of these criteria. So: what is the intuition behind the above definition? Why do these criteria give a "stronger" form of normality?


I think the intuition comes from looking at Champernowne's number and asking why it doesn't "feel" random even though all strings appear with the right frequencies.

No matter where you look in Champernowne's number, it goes off on long runs where it tilts towards one digit. For instance, in the stretch coming from the two-digit numbers (10 to 99): the digit 1 occurs 55% of the time in the digits of 10 to 19, then 2 occurs 55% of the time in 20 to 29, and so on. Across all of 10 to 99 it roughly evens out, but on the shorter runs it's not even.
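A quick sanity check of the 55% figure (my own snippet; the helper name `digit_share` is just made up for illustration):

```python
def digit_share(lo, hi, digit):
    """Fraction of occurrences of `digit` in the decimal concatenation of lo..hi."""
    s = "".join(str(i) for i in range(lo, hi + 1))
    return s.count(digit) / len(s)

print(digit_share(10, 19, "1"))  # 11 of the 20 digits are '1' -> 0.55
print(digit_share(20, 29, "2"))  # likewise -> 0.55
```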

So they're trying to establish a criterion for why something like $\pi$ "feels" random but Champernowne's number doesn't "feel" random.

This doesn't explain their particular choice of criteria, but rather what the problem is with some normal numbers.