Successful approaches to the modeling of ''randomness''


If you pick a number $x$ randomly from $[0,100]$, we would naturally say that the probability of $x>50$ is $1/2$, right?

This is because we assumed that randomly meant that the experiment was to pick a point from $[0,100]$ with all numbers equally likely. But, since $f(r)=r^2$ is a bijection $[0,10] \rightarrow [0,100]$, we could also pick a number $r$ from $[0,10]$, set $x=r^2 \in [0,100]$, and let that be our random experiment. This time $x>50$ only for $r> \sqrt{50} \approx 7.07$.
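A quick Monte Carlo sketch of the two experiments makes the discrepancy concrete (the sample size is my choice for illustration):

```python
import random

# Experiment A: pick x uniformly from [0, 100].
# Experiment B: pick r uniformly from [0, 10] and set x = r**2.
N = 100_000
a = sum(random.uniform(0, 100) > 50 for _ in range(N)) / N
b = sum(random.uniform(0, 10) ** 2 > 50 for _ in range(N)) / N

print(a)  # close to 1/2
print(b)  # close to 1 - sqrt(50)/10, about 0.293
```

The second experiment assigns probability $1 - \sqrt{50}/10 \approx 0.29$ to the event $x > 50$, not $1/2$.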

In this case we would agree that the first way of choosing $x$ looks a lot more natural. So we would equally agree that it is a successful way of modeling the experiment ''pick a random number from $[0,100]$''.

There are times when we can't even agree on that! For example, in Bertrand's paradox we are asked to pick a random chord of a circle and calculate the probability that it is longer than the side of the inscribed equilateral triangle. The point is that there are several (a priori) natural ways of choosing the chords (three of them are nicely described here), which, of course, produce different probabilities.

How and when can we consider something to be truly random? Does it even make sense to say something is truly random, or is it more a matter of agreement?

Is there any convention in the mathematical community about these issues?

Could we say the common notion of randomness relates to the notion of a uniform distribution?

Are there any successful approaches to modeling randomness? (Ones that would let us decide whether a certain distribution represents randomness in the sense of being a uniform distribution.)

For example, in the comments it is said: ''One can show [using Kolmogorov complexity] that a number in $[0,1]$ is random with probability 1 under the uniform distribution, so it coheres well with other notions.''


There are 5 answers below.

On BEST ANSWER

One way to interpret your motivating examples is not that the word random is ill-defined (all of probability theory would disagree with that), but that you want a mathematically natural characterization and generalization of the notion of a uniform distribution. In that case, the answer could be the Haar measure on Lie groups (among other things). This is a measure that is invariant under the action of the group, and if you restrict it to a compact set you can normalize it to form a probability distribution.

For example, the real numbers form a Lie group under addition, and the corresponding Haar measure is nothing but the usual uniform measure on $\mathbb R$, which restricted to $[0,100]$ leads to the uniform distribution on the same. We can tell that the distribution produced by uniformly picking a number in $[0,10]$ and squaring it is not uniform, because it is not invariant under addition (the probability of $[20,30]$ is not equal to the probability of $[20,30]+40 = [60,70]$).
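The invariance check described above can be run numerically. A minimal sketch (sample size and interval choices are mine): estimate the probability of $[20,30]$ and of its translate $[60,70]$ under both distributions.

```python
import random

N = 200_000

def prob(sample, lo, hi):
    # Monte Carlo estimate of P(lo < X < hi) for a given sampler.
    return sum(lo < sample() < hi for _ in range(N)) / N

def uniform():
    # The Haar (uniform) distribution restricted to [0, 100].
    return random.uniform(0, 100)

def squared():
    # Push-forward of the uniform distribution on [0, 10] under r -> r^2.
    return random.uniform(0, 10) ** 2

# Translation invariance: compare P([20, 30]) with P([20, 30] + 40) = P([60, 70]).
p1, p2 = prob(uniform, 20, 30), prob(uniform, 60, 70)  # both close to 0.10
q1, q2 = prob(squared, 20, 30), prob(squared, 60, 70)  # about 0.10 vs about 0.06
print(p1, p2, q1, q2)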

Similarly, when dealing with lines in the plane, the relevant Lie group is the Euclidean group of rigid motions of the plane, which comes equipped with a Haar measure. This induces a measure on the space of lines which is invariant to translation and rotation. When restricted to the lines that intersect a given circle, it gives you something you could objectively call "the" uniform distribution over chords of the circle. This corresponds to picking the angle and the distance from the center uniformly, and matches Jaynes' solution using the principle of maximum ignorance.
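The three classical chord-selection methods can be simulated to exhibit the paradox numerically. A sketch (unit circle and sample size are my choices):

```python
import math
import random

N = 100_000
R = 1.0
side = math.sqrt(3) * R  # side length of the inscribed equilateral triangle

def endpoints():
    # Method 1: chord between two independent uniform points on the circle.
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * R * abs(math.sin((a - b) / 2))

def radial():
    # Method 2: uniform distance from the center along a random radius
    # (the method matching Jaynes' invariance argument).
    d = random.uniform(0, R)
    return 2 * math.sqrt(R * R - d * d)

def midpoint():
    # Method 3: chord determined by a uniform midpoint in the disk.
    while True:
        x, y = random.uniform(-R, R), random.uniform(-R, R)
        if x * x + y * y <= R * R:
            return 2 * math.sqrt(R * R - (x * x + y * y))

probs = {f.__name__: sum(f() > side for _ in range(N)) / N
         for f in (endpoints, radial, midpoint)}
print(probs)  # roughly 1/3, 1/2, 1/4 respectively
```

All three look "uniform" at first glance, yet they give three different answers; only the radial method is invariant under the Euclidean group.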

The field of integral geometry deals with exactly this sort of thing: the properties of geometrical objects under measures that are invariant to the symmetry group of the geometrical space. It has many interesting results such as the Crofton formula, stating that the length of any curve is proportional to the expected number of times a "random" line intersects it. Of course, this could not be a theorem without precisely formalizing what it means for a line to be random.

On

A common abuse of language is to say "let $x$ be a random foo" when one really means "let $x$ be a random variable uniformly distributed over all foo".

It is also common to abuse language to use "random" to mean "something that appears too hard to predict".
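A small illustration of this second abuse: a pseudo-random generator produces output that looks unpredictable but is entirely determined by its seed (seed value chosen arbitrarily here).

```python
import random

# "Random" here just means "too hard to predict without the seed":
# the same seed reproduces the exact same sequence.
random.seed(12345)
first = [random.random() for _ in range(5)]

random.seed(12345)
second = [random.random() for _ in range(5)]

print(first == second)  # True: the sequence is fully deterministic
```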

The fundamental rationale behind applying probability distributions to real world observations is really a matter of metaphysics, not mathematics.

On

When someone says 'random' there should be a distribution that goes along with it. In your example, to pick a random $x$ from $[0,100]$, it is implied that you pick $x$ from a uniform distribution. Of course, as you pointed out, using a different distribution will give you a different result.

The point is, 'random' needs a distribution to define it.

On

If we look at dice or a pseudo-random number generating algorithm, we need to know laws (physics or algorithm) and initial conditions to predict the result.

Based on that, here goes my attempt to define randomness:

If the value of a function $f\left( x_1, x_2, \ldots \right)$ can't be predicted even when the values of all its variables $x_1, x_2, \ldots$ are known, then the value of the function is a random number.

Any comments on this are welcome.

Truly random numbers in real life

I disagree that a truly random number is impossible.

It would be hard to believe that a deterministic algorithm could produce random results. But an algorithm is not the only way to produce a number. You just need to assign numbers to the possible outcomes of some random process to get random numbers. In quantum physics, the result of an experiment is random.

Example 1

The state of a particle is described by a wave function $\psi \left( q \right)$. The probability of finding the particle in a region $\delta$ is $\int\limits_\delta \left| \psi \left( q \right) \right| ^2 dq$. If you perform many experiments, you'll get results distributed according to $\left| \psi \left( q \right) \right| ^2$. But the outcome of a single experiment is a random number drawn from that distribution.
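Such measurement outcomes can be imitated by sampling from $|\psi(q)|^2$. A minimal sketch using rejection sampling, assuming for concreteness the ground state of a particle in a box on $[0,1]$, $\psi(q) = \sqrt{2}\,\sin(\pi q)$ (the box example is my choice, not from the answer):

```python
import math
import random

def sample_position():
    # Rejection sampling from the density |psi(q)|^2 = 2 sin^2(pi q),
    # which is bounded above by 2 on [0, 1].
    while True:
        q = random.uniform(0, 1)
        if random.uniform(0, 2) < 2 * math.sin(math.pi * q) ** 2:
            return q  # one "measurement outcome": a single random number

draws = [sample_position() for _ in range(100_000)]
mean = sum(draws) / len(draws)
print(mean)  # close to 0.5, since |psi|^2 is symmetric about the box center
```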

Example 2

Let's look at another experimental situation: light travels along the $z$ axis and is polarized along the $x$ axis. It falls on a polarizer whose axis of polarization is not the same $x$ axis. For simplicity, let's place our polarizer so that the angle between its axis and the $x$ axis is $\pi/4$. In that case half of the light goes through and half is absorbed. For individual photons this means that a photon will randomly either be absorbed or let through, with equal probability.

The second experiment is almost the same as a coin toss, but when tossing a coin one could think:

if I could know the initial speed and position and everything else very precisely, I could predict its final position without any randomness

but in the case of photons there are no underlying variables; the result of a single experiment is fundamentally unpredictable (random), although the results of many (infinitely many) experiments approach the 50/50 distribution.

Furthermore, the processes in the atmosphere are very unstable, so some tiny quantum randomness might actually be enough to make a macroscopic result random.

On

I would recommend looking into:

Kolmogorov complexity $K(x)$ measures the amount of information contained in an individual object $x$, by the size of the smallest program that generates it.

Shannon entropy $H(X)$ of a random variable $X$ is a measure of its average uncertainty. It is the smallest number of bits required, on average, to describe $x$, the output of the random variable $X$.
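Both notions can be probed numerically. A sketch (the strings and sizes are my choices): compute the empirical Shannon entropy of two bit strings with identical symbol frequencies, and use compressed size as a crude, computable stand-in for Kolmogorov complexity, which is itself uncomputable.

```python
import math
import random
import zlib

def shannon_entropy(s):
    # Empirical Shannon entropy (bits per symbol) of a string.
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

random.seed(0)
rand_bits = "".join(random.choice("01") for _ in range(10_000))
regular = "01" * 5_000  # same symbol frequencies, but highly structured

h_rand, h_reg = shannon_entropy(rand_bits), shannon_entropy(regular)
c_rand = len(zlib.compress(rand_bits.encode()))
c_reg = len(zlib.compress(regular.encode()))

print(h_rand, h_reg)  # both close to 1.0 bit per symbol
print(c_rand, c_reg)  # the structured string compresses far better
```

Entropy alone cannot tell the two strings apart, but the compression proxy can: structure is a short description, and "randomness" resists one.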

Here are two references for your review: ''Kolmogorov complexity and Shannon entropy'' and ''Shannon entropy''.

Today, most people marry this into Information Theory.

Random numbers are very important in many fields, and particularly for cryptographic applications (since getting this wrong could make a secure system insecure). I would recommend looking into the papers and code for Dieharder and TestU01; there are interesting papers and results for pseudo-RNGs and crypto-strength RNGs.

Random number generation, as you are finding, is a very complex area, and it is a great idea to question it.

Here is a List of random number generators for your perusal. You might also have a look at the Handbook of Applied Cryptography - HAC for some crypto related ones.

Regards