I am taking an optional graduate stats course (with no prior stats experience) and I am having difficulty understanding the basics.
I understand the concept of an inverse function, e.g. if f(2) = 5, then plugging 5 into the inverse of f gives back 2. I am also familiar with the concepts of the p.d.f. and c.d.f., as well as the normal, exponential, uniform, etc. distributions.
I do not understand the concept of the "Inverse Transform Theorem", which states:
"Let X be a continuous random variable with c.d.f. F(x). Then F(X) is distributed as U(0,1)."
How is it possible that for every F(X) (exponential, normal, ...) we get the uniform distribution U(0,1)? Some visualisation would be really helpful to understand what is going on. I searched Google and YouTube and there are a lot of resources, but I still couldn't understand this (probably simple) concept, which is nowhere explained clearly.
I am also having trouble understanding the difference between F(x) and F(X). If capital X is a continuous random variable, then what is lowercase x?
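One way to see the claim empirically is to simulate it. Below is a minimal sketch (my own illustration, not from any course material): draw samples from an exponential distribution, push them through that distribution's own c.d.f., and check that the results look like U(0,1). The rate and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 1.0  # arbitrary exponential rate for illustration

# Draw X ~ Exponential(rate); numpy parameterizes by scale = 1/rate.
x = rng.exponential(scale=1.0 / rate, size=100_000)

# Apply the exponential c.d.f.: F(x) = 1 - exp(-rate * x)
u = 1.0 - np.exp(-rate * x)

# If u really is Uniform(0,1), its mean should be close to 0.5
# and its variance close to 1/12 ≈ 0.0833.
print(u.mean())  # close to 0.5
print(u.var())   # close to 0.0833
```

Swapping in any other continuous distribution and its c.d.f. (normal, uniform, ...) gives the same uniform-looking histogram, which is exactly what the theorem asserts.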
Edit:
As you can see on the graph below, it is easy to understand that in a uniform distribution, each value of X between a and b has an equal chance of being selected. What I do not understand is how this applies to the Inverse Transform Theorem: how does the graph below fit on the Y-axis, where the c.d.f. outputs values from normal or exponential inputs? Why is the output uniform for any input distribution?
An example of the Inverse Transform Theorem:

[figure: example of the Inverse Transform Theorem; image not shown]
I think I got it. Because F is the c.d.f., P(F(X) ≤ u) = P(X ≤ F⁻¹(u)) = F(F⁻¹(u)) = u for any u in (0,1). So F(X) lands in any subinterval of [0,1] with probability equal to that subinterval's length, which is exactly the uniform distribution.
Ahhhh... so simple. :) I hope I am right.
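The same identity read in the other direction is what makes inverse transform sampling work: feed U(0,1) draws through the inverse c.d.f. to generate samples from the target distribution. A minimal sketch under my own choice of target, an Exponential(rate) with F⁻¹(u) = -ln(1 - u)/rate:

```python
import numpy as np

rng = np.random.default_rng(1)
rate = 2.0  # arbitrary target rate for illustration

# Start from uniform draws on (0,1)...
u = rng.uniform(size=100_000)

# ...and apply the inverse c.d.f. of Exponential(rate):
# F(x) = 1 - exp(-rate*x)  =>  F^{-1}(u) = -ln(1 - u) / rate
x = -np.log(1.0 - u) / rate

# Exponential(rate) has mean 1/rate = 0.5, so the sample mean
# should be close to that.
print(x.mean())  # close to 0.5
```

This is why the theorem matters in practice: a single uniform generator is enough to simulate any continuous distribution whose inverse c.d.f. you can compute.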