This is a bit of an abstract math question that I'm sure someone can shed light on.
I am doing some dense probability calculations. One thing that keeps amazing me is that, somehow, Kolmogorov found a way to put it all on a formal footing, so that probability jumped from discrete combinatorics to continuous functions and values.
If we refer to the Probability Density Function as the pdf and the Cumulative Distribution Function (a surface, in the bivariate case) as the CDF, then both of the following are true:
- We can get the probability of an event by integrating the pdf up to the right limits; or
- We can look the probability up directly from the corresponding point on the CDF surface. So far, nothing fancy per se.
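The two routes above can be sketched in a few lines, using a univariate standard normal for simplicity (my choice of distribution, not from the question):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Probability that a standard normal RV is <= 1.0, two ways:

# 1) Integrate the pdf from -infinity up to the right limit
p_integral, _ = quad(norm.pdf, -np.inf, 1.0)

# 2) Read the value straight off the CDF at that point
p_cdf = norm.cdf(1.0)

print(p_integral, p_cdf)  # both ≈ 0.8413
```

The pdf route and the CDF route agree because the CDF is, by definition, the running integral of the pdf.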
So, put simply: we have an X-Y plane ranging from $-\infty$ to $\infty$ along both axes, and floating above it a surface whose value is zero at the left/lower extremes: i.e., if either limit is $-\infty$, the value is zero. And, as the point of interest in X-Y space moves in the positive direction, the function grows in a non-decreasing manner along the X or Y axis, ending up at one when both limits go to $\infty$.
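Those surface properties are easy to check numerically. As a sketch, take two *independent* standard normals (independence is my simplifying assumption here), so the joint CDF factors as $F(x,y) = \Phi(x)\,\Phi(y)$:

```python
from scipy.stats import norm

def F(x, y):
    # Joint CDF of two independent standard normals:
    # F(x, y) = P(X <= x, Y <= y) = Phi(x) * Phi(y)
    return norm.cdf(x) * norm.cdf(y)

# Value -> 0 as either argument heads to -infinity
print(F(-10, 0))                   # ≈ 0

# Non-decreasing as the point of interest moves in +x or +y
print(F(0, 0), F(1, 0), F(1, 1))   # 0.25 <= ... <= ...

# Value -> 1 as both arguments head to +infinity
print(F(10, 10))                   # ≈ 1
```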
It seems at times blindingly obvious, but it is hard for me to put on a mathematical footing. What is it about this definition that allows us to treat random variables algebraically, while keeping in mind what a random variable really is and applying some logic to it?
Put more crudely: why can I pretend an RV is just a variable, manipulate it to my heart's content, and then "apply the rules of probability" to the outcome? How does the "probabilitiness" make this all hold together?
It sounds like you're asking a more general question, namely "why do mathematical models work?", which is arguably more philosophical than mathematical.
For probability theory, you could describe what's going on as: a random variable is just a measurable function on a probability space, so algebraic manipulations of random variables are ordinary manipulations of functions, and the probability statements fall out of the measure attached to that space.
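One concrete way to see the "RV as a function" view (my toy example, not from the question): take a finite sample space with a uniform measure, define random variables as plain functions on it, and note that "adding RVs" is just pointwise addition of functions, while "applying the rules of probability" is just summing the measure over an event:

```python
from fractions import Fraction
from itertools import product

# Sample space: ordered pairs of two fair dice, with the uniform measure
omega = list(product(range(1, 7), repeat=2))
P = {w: Fraction(1, 36) for w in omega}

# "Random variables" are just functions on the sample space
X = lambda w: w[0]
Y = lambda w: w[1]
Z = lambda w: X(w) + Y(w)   # algebra on RVs = pointwise algebra on functions

# "Applying the rules of probability" = summing the measure of an event
p_seven = sum(P[w] for w in omega if Z(w) == 7)
print(p_seven)  # 1/6
```

Everything you do algebraically with $X$, $Y$, and $Z$ happens at the level of functions; the probability only enters at the end, when you measure the resulting event.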
If you find it surprising that this process works so well, you might be interested in this discussion.