Intuition behind variance in terms of $L^p$ norms?


I've just started working through Varadhan's Probability lecture notes, and I was wondering if there's any intuitive connection between the variance formula and Hölder's inequality / $L^p$ norms in general.

In this spirit: given a probability space $(\Omega, \mathcal{B}, P)$ with $P(\Omega) = 1$ and a random variable $X \in L^2(\Omega)$ with $X \geq 0$, we can write:

$$E[X^2] - (E[X])^2 = \int_\Omega X(\omega)^2 \, dP - \left(\int_\Omega X(\omega) \, dP\right)^2 = \|X\|_{L^2(\Omega)}^2 - \|X^{1/2}\|_{L^2(\Omega)}^4 \geq 0$$

Is there any way to think of variance in terms of the $L^p$ norms? It's probably not a very good question, but I was hoping to make a connection between this and the Lebesgue integration/measure theory course I just finished.
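As a quick numerical sanity check of the identity above, here is a sketch on a hypothetical three-point probability space (the weights and outcomes are made up purely for illustration):

```python
import numpy as np

# Hypothetical discrete probability space: P({w_i}) summing to 1.
w = np.array([0.2, 0.5, 0.3])   # probabilities
x = np.array([1.0, 4.0, 9.0])   # X(w_i) >= 0

EX  = np.sum(x * w)             # E[X]   = ||X^{1/2}||_2^2
EX2 = np.sum(x**2 * w)          # E[X^2] = ||X||_2^2

variance  = EX2 - EX**2
# ||X||_2^2 - ||X^{1/2}||_2^4, matching the display above
norm_form = np.sum(x**2 * w) - np.sum(np.sqrt(x)**2 * w) ** 2

assert np.isclose(variance, norm_form)
assert variance >= 0            # nonnegativity, as guaranteed by Jensen/Cauchy-Schwarz
```

The nonnegativity here is exactly Jensen's inequality applied to $t \mapsto t^2$, or equivalently Cauchy–Schwarz with the constant function $1$ (which has norm $1$ since $P(\Omega)=1$).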

Answer:

For even $p$, the $p$-norm corresponds directly to the $p$th moment of the random variable: $$E[X^p] = \|X\|_p^p.$$ The association for odd $p$ is not as clear, because the definition of the norm requires taking the absolute value of the outcomes; at best you can note $$E[|X|^p] = \|X\|_p^p.$$ For a symmetric distribution the odd moments vanish, so you can essentially say that its characteristic function is determined by the norms of a random variable with that distribution. For a non-negative random variable, as you mentioned, $X^p = |X|^p$, so all of the moments that build the characteristic function are given by norms.
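The even/odd distinction can be seen concretely on a symmetric two-point distribution (a made-up example, $X = \pm 2$ with equal probability):

```python
import numpy as np

w = np.array([0.5, 0.5])     # probabilities of a symmetric two-point distribution
x = np.array([-2.0, 2.0])    # outcomes: X = +/-2 with equal probability

moments = {p: np.sum(x**p * w) for p in (1, 2, 3, 4)}          # E[X^p]
norms_p = {p: np.sum(np.abs(x)**p * w) for p in (1, 2, 3, 4)}  # E[|X|^p] = ||X||_p^p

# Even p: E[X^p] equals ||X||_p^p.
# Odd p: E[X^p] vanishes by symmetry, while ||X||_p^p stays positive,
# so the moment cannot be read off the norm alone.
```

Here `moments[2] == norms_p[2]` and `moments[4] == norms_p[4]`, but `moments[3]` is $0$ while `norms_p[3]` is $8$.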