Obtaining deterministic information from probability theory


A simple but very useful fact that constitutes the basis of the probabilistic method is the following: if an event $A$ has positive probability, then it can't be empty: there must exist at least one realization $\omega \in A$, simply by the definition of a probability measure. However, the converse is not true in general: non-empty events can have measure zero.

The question is: if $P(A) > k$, where $k \in [0,1)$, can we say anything else about the event $A$, depending on the value of $k$, besides the fact that $A \neq \emptyset$? In other words, can we reveal more deterministic information about $A$ by knowing not only that $P(A) > 0$, but also that $P(A) > k$? In particular, if we are told that $P(A) > 0$ and $P(B) = 1$, does $B$ have different properties than $A$? If so, how many?

For instance, higher probability means higher cardinality in certain probability spaces. Of course, the answer depends on the probability space. If we consider a countable probability space $\Omega = \{\omega_{i} : i \in \mathbb{N}\}$ with $p(\omega_{i}) > 0$ for every $i \in \mathbb{N}$, then $P(B) = 1$ implies that $B = \Omega$, since any event strictly contained in $\Omega$ has strictly smaller probability by construction.
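As a sanity check, here is a minimal Python sketch of this claim in the finite case (the specific weights are my own illustrative choice): when every point carries strictly positive mass, any event that omits a point has probability strictly below 1.

```python
from fractions import Fraction

# A finite sample space with strictly positive weight on every point
# (a finite stand-in for the countable space described above).
weights = {"w1": Fraction(1, 2), "w2": Fraction(1, 4),
           "w3": Fraction(1, 8), "w4": Fraction(1, 8)}

def prob(event):
    """P(event) = sum of the point masses it contains."""
    return sum(weights[w] for w in event)

omega = set(weights)

assert prob(omega) == 1
# Every proper subevent misses at least one point of positive mass,
# so its probability is strictly less than 1.
assert all(prob(omega - {w}) < 1 for w in omega)
```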

I understand that my question may be a bit unclear, so feel free to answer with whatever you think might be helpful. There is no single correct answer, as indicated by the soft-question tag. I'm just looking for conclusions that can be drawn, such as the examples in the previous paragraph. Please don't close the question.


BEST ANSWER

[Edited] If the probability measure is nonatomic and $P(A) > 0$, then $A$ is uncountable. Martin's axiom implies that $A$ has cardinality at least $\mathfrak{c}$, the cardinality of the continuum.
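A short sketch of why nonatomicity forces uncountability (using the fact that a nonatomic measure assigns zero mass to every singleton): if $A$ were countable, countable subadditivity would give

```latex
P(A) \;\le\; \sum_{\omega \in A} P(\{\omega\}) \;=\; \sum_{\omega \in A} 0 \;=\; 0,
```

contradicting $P(A) > 0$. The strengthening to cardinality $\mathfrak{c}$ under Martin's axiom is a genuinely set-theoretic statement and requires more work.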

ANSWER

Here is a nice example I recently learned in a course.

Let $G$ be a group acting on some topological space $X$, and take a probability distribution $\mu$ on $G$. Write $A = \operatorname{supp}(\mu) = \{g \in G : \mu(g) > 0\}$.

Assume $A$ generates $G$ and that $G$ is countable (for the sake of simplicity).

We can think about the notion of a "random walk" of a group action $G \curvearrowright X$: start with a point $x \in X$, pick some $g_1 \in G$ at random (according to $\mu$), and go from $x_1 = x$ to $x_2 = g_1 x_1$. Then pick another $g_2$ at random and go from $x_2$ to $x_3 = g_2 x_2$, and so on.

The classical example is the simple random walk on $\mathbb{Z}^{2}$: take $X = \mathbb{Z}^2$ and the group $\Gamma = \langle g_1, g_2 \rangle$ with $g_1((x,y)) = (x+1,y)$, $g_2((x,y)) = (x,y+1)$, and the probability distribution $\mu(g_1) = \mu(g_2) = \mu(g_1^{-1}) = \mu(g_2^{-1}) = \frac{1}{4}$.
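A minimal simulation of this walk (a sketch; the step count and seed are arbitrary choices of mine). One deterministic fact we can check, in the spirit of the question: each step changes $x + y$ by exactly $\pm 1$, so after $n$ steps the parity of $x + y$ equals the parity of $n$, with probability one.

```python
import random

def random_walk_z2(steps, seed=0):
    """Simple random walk on Z^2: each step applies one of the four
    generators g1, g1^{-1}, g2, g2^{-1}, each with probability 1/4."""
    rng = random.Random(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    x, y = 0, 0
    for _ in range(steps):
        dx, dy = rng.choice(moves)
        x, y = x + dx, y + dy
    return x, y

x, y = random_walk_z2(1000)
# After an even number of steps, x + y is necessarily even.
assert (x + y) % 2 == 0
```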

A probability distribution $\nu$ on $X$ is called $\mu$-stationary if the measure $\mu * \nu$, defined by $(\mu * \nu)(f) = \int_{G} \int_{X} f(g \cdot x)\, d\nu(x)\, d\mu(g)$, is equal to $\nu$ ($\mu * \nu$ is called the convolution measure). You can say that being $\mu$-stationary means being "invariant" under a random walk by this group.*

The simplest example: if $\nu$ is invariant under the action of $G$ (i.e. $\nu(g \cdot B) = \nu(B)$ for every measurable $B$ and every $g \in G$), then it is obviously $\mu$-stationary. However, a measure doesn't have to be invariant in order to be stationary.
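A finite toy check of this claim (my own example, not from the answer): take $G = X = \mathbb{Z}/3\mathbb{Z}$, with $G$ acting on $X$ by translation and $\mu$ supported on $\{1, 2\}$. The uniform measure on $X$ is $G$-invariant, so it should come out $\mu$-stationary under the (finite) convolution formula.

```python
from fractions import Fraction

n = 3
# G = X = Z/nZ, with G acting on X by translation: g . x = (g + x) mod n
mu = {1: Fraction(1, 2), 2: Fraction(1, 2)}  # step distribution on G
nu = {x: Fraction(1, n) for x in range(n)}   # uniform distribution on X

def convolve(mu, nu):
    """Finite convolution: (mu * nu)({y}) = sum of mu(g) * nu({x})
    over all pairs (g, x) with g . x = y."""
    out = {y: Fraction(0) for y in range(n)}
    for g, pg in mu.items():
        for x, px in nu.items():
            out[(g + x) % n] += pg * px
    return out

# The uniform (hence G-invariant) measure is mu-stationary.
assert convolve(mu, nu) == nu
```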

Assume $\nu$ is $\mu$-stationary.

Now assume $\nu$ has an atom: $\nu(\{x\}) = p > 0$ for some $x$. Take $\{x\}$ to be an atom of maximal measure (one exists, since there are at most $1/p$ atoms of measure $\ge p$).

Observe that $\nu(\{x\}) = (\mu * \nu)(\{x\}) = \sum_{g \in A} \nu(\{g \cdot x\})\, \mu(g)$. From this simple equation we can deduce two things:

  1. Since $\{x\}$ is the atom of maximal measure and the right-hand side is a convex combination, $\nu(\{g \cdot x\}) = \nu(\{x\})$ for all $g \in A$; and since $\langle A \rangle = G$, in fact $\nu(\{g \cdot x\}) = \nu(\{x\})$ for all $g \in G$.
  2. Since $\nu$ is a probability measure and every point of the orbit $G \cdot x$ is an atom of measure $p$, the orbit can contain at most $1/p$ points.

In other words, the orbit of $x$ has to be finite.

*This definition of a $\mu$-stationary measure is not precise, but I don't think it's the main issue, and this answer is long enough.