I want to strip away the language of measure theory to get to the essence of what the expectation of a random variable is.
I am reading Probability and Random Processes by Grimmett and Stirzaker, and the authors go through an iterative construction whereby the expectation is defined at each step.
These steps boil down to:
- Let $X$ be a simple function, $X = \sum\limits_{i = 1}^k x_i I_{A_i}$, and call $\mathbb{E}(X) = \sum\limits_{i = 1}^k x_i P(A_i)$ its expectation
- Next, consider a non-negative random variable $X$, approximate it from below by an increasing sequence of simple functions $X_n \uparrow X$, and define the new expectation $\mathbb{E}(X)$ as the limit of the simple expectations $\mathbb{E}(X_n)$
- Do it again for a general random variable $X$ taking values in all of $\mathbb{R}$, by splitting it into positive and negative parts $X = X^+ - X^-$ and applying the previous step to each (a short numerical sketch of all three steps follows this list)
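To make the three steps concrete, here is a minimal numerical sketch on a finite sample space (the helper names and the toy random variable are my own, not from the book; the staircase $X_n = \min(\lfloor 2^n X \rfloor / 2^n,\, n)$ is the standard choice of approximating sequence):

```python
import math

# Toy setup: a finite sample space Omega = {0, ..., N-1} with the uniform
# measure, so every probability is a finite sum.
N = 10_000
Omega = range(N)
P = {w: 1.0 / N for w in Omega}

X = lambda w: (w / N) ** 2          # a non-negative random variable

def dyadic_approx(Y, n):
    """Step 2's standard choice: Y_n = min(floor(2^n * Y) / 2^n, n),
    a simple function that increases pointwise to Y as n grows."""
    return lambda w: min(math.floor((2 ** n) * Y(w)) / 2 ** n, n)

def simple_expectation(S):
    """Step 1: E(S) = sum_i x_i * P(A_i), where A_i = {S = x_i} partitions Omega."""
    masses = {}
    for w in Omega:
        masses[S(w)] = masses.get(S(w), 0.0) + P[w]   # accumulate P(A_i)
    return sum(x * p for x, p in masses.items())

# Step 2 in action: the simple expectations climb toward E(X).
for n in (1, 2, 4, 8, 12):
    print(n, simple_expectation(dyadic_approx(X, n)))

# Direct weighted sum for comparison (close to 1/3, the integral of x^2 on [0, 1]).
print("direct:", sum(X(w) * P[w] for w in Omega))
```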
Voilà, then you have this magic notation $\mathbb{E}(X) = \int_\Omega X(\omega)\, dP$. My goal is to reverse engineer this function and express it in the most tractable way.
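To fix ideas with the simplest possible case (my own toy example, not from the book): roll a fair die, so $\Omega = \{1, \dots, 6\}$, $P(\{\omega\}) = 1/6$, and $X(\omega) = \omega$. Then $X$ is already simple, the construction terminates at step one, and the notation unwinds to
$$ \int_\Omega X(\omega)\, dP = \sum_{i = 1}^{6} i \cdot P(\{i\}) = \sum_{i = 1}^{6} \frac{i}{6} = \frac{7}{2}. $$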
I want to do away with this notation (in my opinion confusing, as it abstracts away too much) and understand exactly what function we are dealing with. I have come up with the following: for any random variable $X: \Omega \to \mathbb{R}$,
$$ \textstyle \mathbb{E}(X) := f(\{x_{n,i}\}, \{A_{n,i}\}, P) = \max\!\left(\lim_{n \to \infty} \sum_{i = 1}^k x_{n,i} P(A_{n,i}),\, 0\right) + \min\!\left(\lim_{n \to \infty} \sum_{i = 1}^k x_{n,i} P(A_{n,i}),\, 0\right) $$ where $x_{n,i}$ is the $i$th value of the simple function $X_n: \Omega \to \mathbb{R}$, $X_n = \sum_{i = 1}^k x_{n,i} I_{A_{n,i}}$, and $\{A_{n,i}\}_{i = 1}^k$ is a partition of $\Omega$ associated with the same $X_n$.
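One way to make the positive/negative bookkeeping concrete is to split $X$ itself into $X^+ = \max(X, 0)$ and $X^- = \max(-X, 0)$ and run the step-2 limit on each part separately. Here is a small numerical sketch of that, reusing the toy setup from the snippet above (again, all names are my own illustrative choices):

```python
import math

N = 10_000
P = {w: 1.0 / N for w in range(N)}
X = lambda w: math.sin(6 * w / N)    # a signed random variable

def dyadic_approx(Y, n):
    # staircase simple function under a non-negative Y (step 2)
    return lambda w: min(math.floor((2 ** n) * Y(w)) / 2 ** n, n)

def simple_expectation(S):
    # step 1: sum_i x_i * P(A_i) over the level sets A_i = {S = x_i}
    masses = {}
    for w in P:
        masses[S(w)] = masses.get(S(w), 0.0) + P[w]
    return sum(x * p for x, p in masses.items())

# Split X = X^+ - X^- into non-negative parts and take the two limits separately.
Xpos = lambda w: max(X(w), 0.0)
Xneg = lambda w: max(-X(w), 0.0)

n = 14
two_part = (simple_expectation(dyadic_approx(Xpos, n))
            - simple_expectation(dyadic_approx(Xneg, n)))
direct = sum(X(w) * P[w] for w in P)
print(two_part, direct)              # agree to roughly 2**-n
```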
Can someone chime in on whether this is a valid definition of the expectation of a random variable? Why not deal directly with this object, instead of hiding it behind $\mathbb{E}(X) = \int_\Omega X(\omega)\, dP$?