Expected value of indicator function on $\mathbb R$


Before you read - I have a background in computer science, but I never dealt with measure theory before. So if there is something completely wrong, please tell me.

Let $\omega \in \mathbb R$ be a random variable and let $\mathbb P$ be a probability measure for this variable. For a fixed $x \in \mathbb R$, let $1_x(\omega) = \begin{cases}1 & \text{if } x = \omega \\ 0 & \text{else} \end{cases}$ be the indicator function which is $1$ if $\omega = x$.

Given a function $g \colon \mathbb R \to \mathbb R$, I would like to compute the expected value $\mathbb E[1_{x}(\cdot)g(\cdot)] = \int_{\mathbb R} 1_{x}(\omega)g(\omega)\,\mathrm d \mathbb P(\omega)$.

For indicator functions we know that $\int_{\mathbb R} 1_{x}(\omega)\,\mathrm d\mathbb P(\omega) = \mathbb P(\{x\})$.

Therefore $\mathbb E[1_{x}(\cdot)g(\cdot)] = g(x)\mathbb P(\{x\})$, since the integrand $1_{x}(\omega)g(\omega)$ vanishes everywhere except at $\omega = x$, where it equals $g(x)$.

A colleague of mine now mentioned that one can do that, but since $\omega \in \mathbb R$, every valid probability measure $\mathbb P$ has some (??) relationship to the Lebesgue measure. In turn, this means that $\mathbb P(\{\omega\})$ would be $0$ for a single event. In other words, he said that because there are infinitely many different events $\omega \in \mathbb R$, every single event has probability $0$.

I can follow his informal argument, but not so much his formal one. So, is it true that $\mathbb P(\{\omega\})$ is always $0$, and if so, why?

BEST ANSWER

Your choice of expectation is a little odd. I'll come back to this at the end.

$1_x(\omega)$ is a piecewise defined function. This suggests you should split the integral along the pieces of this function.

$$ \int_{\mathbb{R}} \dots = \int_{(-\infty,x)} \dots + \int_{[x,x]} \dots + \int_{(x,\infty)} \dots \text{,} $$ where each of the four ellipses stands for $1_x(\omega) g(\omega) \,\mathrm{d}\mathbb{P}(\omega)$. Since $1_x(\omega)$ is $0$ on $(-\infty,x)$ and on $(x, \infty)$, the first and third integrals are $0$. The middle integral is $1_x(x)g(x)\mathbb{P}(\{x\}) = g(x)\mathbb{P}(\{x\})$, so

$$ \int_\mathbb{R} \dots = 0 + g(x)\mathbb{P}(\{x\}) + 0 \text{.} $$

Here, we get to a point where we must ask what you mean by "let $\mathbb{P}$ be a probability measure for this variable". What you are probably thinking of is a measure that is absolutely continuous with respect to the Lebesgue measure, i.e., one having a probability density function. Such a measure has the property that your colleague mentions: a single point has Lebesgue measure zero, and putting together the definitions, this forces the integral over a single point to be zero.

An absolutely continuous measure makes sense for your $\mathbb{P}$ if you are imagining that $\omega$ can take any value in some interval of the reals (or several such intervals). (Some care is needed here. Really, this is modeling that $\omega$ can take values from a set of positive Lebesgue measure.) Since the probability of a single event is always zero, it is more useful to ask for the probability of an event landing in some set of positive measure. (An old instructor: "ask if it's in an interval, not if it has a value".)
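To see this numerically: for any absolutely continuous law, the probability of a shrinking interval around $x$ tends to $0$, which is what "$\mathbb P(\{x\}) = 0$" expresses. A small Python sketch, using the standard normal as an arbitrary example of such a law:

```python
import math

def normal_cdf(x):
    """CDF of the standard normal distribution -- an arbitrary choice of
    absolutely continuous law, just for illustration."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

x = 0.7  # any point of the real line
for eps in [1e-1, 1e-3, 1e-6, 1e-9]:
    # probability mass of the shrinking interval [x - eps, x + eps]
    p = normal_cdf(x + eps) - normal_cdf(x - eps)
    print(f"eps = {eps:.0e}  P([x-eps, x+eps]) = {p:.3e}")
# The interval probabilities shrink toward 0 with eps, so P({x}) = 0.
```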

However, when you write "let $\mathbb{P}$ be a probability measure for this variable", you could be thinking a little more generally. Perhaps $\mathbb{P}$ has a probability mass function. A simple probability mass function assigns the probability $1/2$ to $\{0\}$ and $1/2$ to $\{1\}$ and models flipping a fair coin (heads = 0, tails = 1). Now your expectation integral need not be zero for every choice of $x$. For instance, if $\mathbb{P}$ is the probability mass function of the simple example above, then the expectation integral you write is $g(0)\frac{1}{2}$ for $x = 0$, is $g(1)\frac{1}{2}$ for $x = 1$, and is zero for every other choice of $x$.
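Under a discrete measure like the coin example, the expectation of $1_x g$ reduces to a single term, $g(x)\,\mathbb P(\{x\})$. A minimal Python sketch (the particular $g$ here is an arbitrary placeholder):

```python
# Probability mass function for the fair-coin example (heads = 0, tails = 1).
mass = {0: 0.5, 1: 0.5}

def g(w):
    # an arbitrary function g: R -> R, chosen just for illustration
    return w ** 2 + 1.0

def expect_indicator(x, g, mass):
    """E[1_x * g] = g(x) * P({x}); nonzero only if x carries an atom."""
    return g(x) * mass.get(x, 0.0)

print(expect_indicator(0, g, mass))    # g(0) * 1/2 = 0.5
print(expect_indicator(1, g, mass))    # g(1) * 1/2 = 1.0
print(expect_indicator(0.5, g, mass))  # 0.0 -- no atom at 0.5
```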

I say that your choice of expectation value is a little odd because what one would normally ask is "what is the expectation of $g(\omega)$?", which is $\int_\mathbb{R} g(\omega)\,\mathrm{d}\mathbb{P}(\omega)$. In the simple example, this is $g(0)\frac{1}{2} + g(1)\frac{1}{2}$, the expected value of $g(\omega)$. The expectation integral you have written instead restricts to the single event $\omega = x$, so you get a different result for each choice of $x$. Maybe that is the right thing for your context.
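For contrast, the ordinary expectation of $g(\omega)$ under the same coin measure sums over all atoms, weighted by their probabilities. A quick sketch (again with a placeholder $g$):

```python
# Fair-coin probability mass function, as in the example above.
mass = {0: 0.5, 1: 0.5}

def g(w):
    # arbitrary placeholder for g: R -> R
    return w ** 2 + 1.0

# E[g] sums g over all atoms, weighted by their probabilities.
expectation = sum(g(w) * p for w, p in mass.items())
print(expectation)  # g(0)/2 + g(1)/2 = 0.5 + 1.0 = 1.5
```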

ANOTHER ANSWER

Let's step back for a minute: suppose we have the function $$f(x) = \begin{cases}1, &x=0\\0, &x\neq 0.\end{cases}$$

Then for any function $g:\mathbb{R}\to\mathbb{R}$, $$\int_{\mathbb{R}} f(x)g(x)\,dx = 0.$$

To see this, notice that $$\left|\int_{\mathbb{R}}f(x)g(x)\,dx\right| = \left|\int_{-\epsilon}^{\epsilon} f(x)g(x)\,dx\right| \leq 2\epsilon |g(0)|$$ for any $\epsilon>0$.
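You can watch this bound at work numerically: in a Riemann sum of $f \cdot g$, the single point $x = 0$ contributes at most $|g(0)|\,\Delta x$, which vanishes as the grid is refined. A sketch in Python (with $g = \cos$ as an arbitrary bounded test function):

```python
import math

def f(x):
    # the function from above: 1 at x = 0, 0 elsewhere
    return 1.0 if x == 0.0 else 0.0

def g(x):
    return math.cos(x)  # any bounded g behaves the same way

def riemann_sum(n):
    """Left-endpoint Riemann sum of f*g over [-1, 1] with n subintervals.
    The grid points (2*i - n)/n are exact, so x = 0 is sampled when n is even."""
    dx = 2.0 / n
    return sum(f((2 * i - n) / n) * g((2 * i - n) / n) * dx for i in range(n))

for n in [10, 100, 1000]:
    # each sum equals g(0) * dx = dx here, shrinking to 0 as n grows
    print(n, riemann_sum(n))
```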

You are probably confusing the Kronecker delta $f$ with the Dirac delta distribution $\delta(x)$ which does have the property that $$\int_{\mathbb{R}}\delta(x)g(x)\,dx = g(0).$$ However $\delta(x)$ is not an indicator function, nor a proper function at all, but "is infinite" (in a precise sense) at $0$.
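By contrast, the Dirac behaviour can be mimicked by smooth approximations: integrating a narrow Gaussian bump against $g$ recovers approximately $g(0)$ as the width shrinks. A sketch in Python (the Gaussian widths and the test function are illustrative choices):

```python
import math

def gaussian(x, sigma):
    """Density of N(0, sigma^2): a standard smooth approximation to the
    Dirac delta as sigma -> 0."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def g(x):
    return math.cos(x)  # smooth test function with g(0) = 1

def integrate(sigma, n=100000, a=-1.0, b=1.0):
    # plain midpoint Riemann sum of gaussian(x, sigma) * g(x) over [a, b]
    dx = (b - a) / n
    return sum(gaussian(a + (i + 0.5) * dx, sigma) * g(a + (i + 0.5) * dx) * dx
               for i in range(n))

for sigma in [0.5, 0.1, 0.01]:
    print(sigma, integrate(sigma))
# The integrals approach g(0) = 1 as sigma shrinks.
```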

Now, your friend is right, for the same reason, that $\mathbb{E}(1_x g)=g(x)\,P(\omega=x) = 0$ whenever $\mathbb P$ has a density. In particular, this expectation cannot equal $g(x)$ times the probability density of $\omega$ at $x$, for exactly the reason above: you are integrating a function which is nonzero only at a single point, and so you will always get zero, regardless of $g$. Formally, this follows from the fact that the integral of any function which is nonzero only on a set of (Lebesgue) measure zero vanishes (you can prove this claim the same way as above, by drawing smaller and smaller intervals around the points where the integrand is nonzero and bounding the resulting integral). Informally, it should be clear that the "area under" a single point (or an isolated set of points) is always zero.