Assuming F is the CDF of R.V. X and is strictly increasing, the following is true: F(X) ~ Unif(0,1).
I was able to prove this as follows.
Let $Y = F(X)$; $Y$ is an R.V. taking values in $[0,1]$. Mathematically,
Y = F(X) = P(X $\leq$ X),
P(Y $\leq$ y) = P(F(X) $\leq$ y) = P(X $\leq$ $F^{-1}$(y)) = F($F^{-1}$(y)) = y
I am confused by this result. Intuitively, the probability of the event ($X \leq X$) would be 1 (once X is realized), right? Even before working through the proof, I was guessing the pdf of Y would be 1 at Y = 1 and 0 everywhere else. Can someone please help me understand the gap in my understanding?
Side question: while thinking about the universality of uniform distribution and working through the above proof, I was trying to come up with an intuition for the inverse of a CDF. But, the inverse is not making a lot of sense to me. If someone has interpretations for this, I would really appreciate it.
Let's recall that if we have a probability space $(\Omega,\mathcal{F},\mathcal{P})$, i.e. a (set, sigma algebra, probability measure), and $X$ is a (say real-valued) random variable with respect to it, then $X:\Omega \to \mathbb{R}$, and given how you have defined $Y$ we similarly have $Y:\Omega \to \mathbb{R}$.
Now your proof is fine (as long as you state it for $y\in[0,1]$), so I will address your concerns. First, you say the pdf of $Y$ should put all the mass at $1$, i.e. because $X\leq X$ is always true, $Y$ should always take the value $1$. The problem here is your reading of $Y=F(X)$: this is a random variable, so once we evaluate it at an $\tilde{\omega} \in \Omega$ we get
$$ Y(\tilde{\omega})=\mathbb{P}\Big(X\leq X (\tilde{\omega})\Big)=\mathbb{P}\Big(\Big\{ \omega\in \Omega ~:~X(\omega)\leq X (\tilde{\omega})\Big\}\Big), $$
so you can see where your confusion came from. The inner $X$ still varies over all of $\Omega$, while $X(\tilde{\omega})$ is a fixed number, so the event is not the tautology $X \leq X$; $Y(\tilde{\omega})$ is simply the CDF evaluated at the realized value $X(\tilde{\omega})$.
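You can also check the result empirically. Here is a quick sanity-check sketch (my own illustrative choice of distribution, not from the question): draw $X \sim \text{Exp}(1)$, whose CDF is $F(x) = 1 - e^{-x}$, apply $F$ to each realization, and verify that the resulting sample has the mean $1/2$ and variance $1/12$ of $\text{Unif}(0,1)$.

```python
import random
import math

random.seed(0)

# Draw X ~ Exponential(1); its CDF is F(x) = 1 - exp(-x).
xs = [random.expovariate(1.0) for _ in range(100_000)]

# Apply the CDF to each realization: Y = F(X).
ys = [1 - math.exp(-x) for x in xs]

# If Y ~ Unif(0, 1), its mean is 1/2 and its variance is 1/12.
mean = sum(ys) / len(ys)
var = sum((y - mean) ** 2 for y in ys) / len(ys)
print(mean, var)
```

Each $Y(\tilde{\omega})$ here is a different number in $[0,1]$, not the constant $1$, which is exactly the point above.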
As for intuition about the inverse CDF: $F^{-1}(y)$ is the quantile function, i.e. the value $x$ below which a fraction $y$ of the probability mass lies. The uniform random variable is fundamental, for instance, in programming and simulations: for a general random variable, if you have the inverse of its CDF, you can draw from its distribution numerically using only uniform samples.
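As a minimal sketch of that sampling recipe (again using the exponential as an illustrative example): for $X \sim \text{Exp}(\lambda)$, the CDF $F(x) = 1 - e^{-\lambda x}$ inverts to $F^{-1}(u) = -\ln(1-u)/\lambda$, so feeding uniform draws through $F^{-1}$ produces exponential samples.

```python
import random
import math

random.seed(1)
lam = 2.0  # rate parameter, an illustrative choice

# Inverse of the Exp(lam) CDF F(x) = 1 - exp(-lam * x):
# F^{-1}(u) = -ln(1 - u) / lam.
def exp_inv_cdf(u: float) -> float:
    return -math.log(1 - u) / lam

# Push uniform draws through the inverse CDF to sample the target law.
samples = [exp_inv_cdf(random.random()) for _ in range(100_000)]

# Exp(lam) has mean 1/lam = 0.5, so the sample mean should be close to that.
mean = sum(samples) / len(samples)
print(mean)
```

This is exactly your proof run in reverse: since $F(X) \sim \text{Unif}(0,1)$, applying $F^{-1}$ to a uniform draw recovers the distribution of $X$.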