The following paragraphs are from my study notes on probability theory. They form a section within the independence discussion, but to me they seem to appear out of the blue. I do not understand exactly what question we are trying to answer here. In particular, the first paragraph starts with $\{0, 1\}$ and then essentially forgets this case in the later discussion. As for the theorems, it does not make sense to me why we want to prove them. In other words, what is the logic or intuition behind all of this discussion? Could anyone clarify the point, please? Thank you!

Update: I think the theorem referred to above can be thought of as a justification for why we can generate iid random numbers from a given distribution. What the theorem provides is the assurance that, on a suitable probability space, we can generate iid uniform random numbers. Once this is done, we can apply the quantile function (the generalized inverse of the CDF) to obtain iid random numbers from any given distribution (in principle, at least, since it is not always easy or even possible to invert the CDF). I think this is why we need this theorem in the first place. Right?
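To make the update concrete, here is a minimal sketch of inverse transform sampling, assuming the exponential distribution as the target (the function name and the choice of example are mine, not from the notes): the CDF $F(x) = 1 - e^{-\lambda x}$ can be inverted in closed form, so feeding iid uniforms through $F^{-1}$ gives iid exponential draws.

```python
import math
import random

def sample_exponential(rate, n, seed=0):
    """Draw n iid Exponential(rate) samples by inverse transform sampling.

    The CDF is F(x) = 1 - exp(-rate * x), so the quantile function is
    F^{-1}(u) = -ln(1 - u) / rate.  Pushing iid Uniform(0,1) draws through
    F^{-1} yields iid draws from the target distribution.
    """
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / rate for _ in range(n)]

samples = sample_exponential(rate=2.0, n=10_000)
mean = sum(samples) / len(samples)  # should be close to 1/rate = 0.5
```

The same recipe works for any distribution whose quantile function can be evaluated; when the CDF has no closed-form inverse, one falls back on numerical inversion or other sampling methods, which is exactly the caveat in the update.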
Adding comments as an answer:
The theorem states:
That for any choice of probabilities $P(E)=p$ and $P(\bar{E})=1-p$ one can indeed find a random variable $X$ on a measurable probability space which fulfils this distribution.
$p \in (0,1)$ is a necessary condition: $p$ is a probability, and if it equals either $0$ or $1$ the situation is trivial and not random. The set $\{0,1\}$ is the set in which the (constructed) random variable $X$ takes its values; it is not related to the probability $p$. In fact, one can use the random variable $Y=(b-a)X+a$, which takes values in $\{a,b\}$.
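As a small illustration of this last point (a sketch with illustrative names, not code from the notes): a $\{0,1\}$-valued variable $X$ with $P(X=1)=p$ can be built from a single uniform draw, and the affine map $Y=(b-a)X+a$ then moves its values to $\{a,b\}$ without changing the probabilities.

```python
import random

def bernoulli(p, rng):
    """X takes the value 1 with probability p, else 0 (values in {0, 1})."""
    return 1 if rng.random() < p else 0

def scaled_bernoulli(p, a, b, rng):
    """Y = (b - a) * X + a takes values in {a, b}: Y = b w.p. p, Y = a w.p. 1 - p."""
    return (b - a) * bernoulli(p, rng) + a

rng = random.Random(42)
draws = [scaled_bernoulli(0.3, a=-1, b=5, rng=rng) for _ in range(10_000)]
frac_b = draws.count(5) / len(draws)  # empirical P(Y = b), should be near p = 0.3
```

Note that $Y=b$ exactly when $X=1$, so the distribution $(p, 1-p)$ is preserved; only the support $\{0,1\}$ is relabeled to $\{a,b\}$.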