Why should an outer measure give 0 on an empty set?


A standard definition of outer measure on a set $X$ is a function $$\rho:2^X\to[0,\infty]$$ satisfying the following properties:

  1. $\rho(\varnothing)=0$
  2. $A\subseteq B\implies\rho(A)\leq\rho(B)$
  3. $\rho\left(\bigcup_{n=1}^\infty A_n\right)\leq\sum_{n=1}^\infty\rho(A_n)$

A subset $E\subseteq X$ is then said to be $\rho$-measurable if $$\rho(A)=\rho(A\cap E)+\rho(A\cap E^c)$$ for all $A\subseteq X$. My question is: why is the first condition necessary? As far as I can see, none of the important properties of outer measures and measurable sets seems to require it.

Best answer:

Unless I am mistaken, if the empty set is to be measurable, then for every $A\subseteq X$ we would need $$\rho(A) = \rho(A \cap \varnothing) + \rho(A \cap \varnothing^c) = \rho(\varnothing) + \rho(A),$$ since $\varnothing^c = X$. So as soon as some set $A$ with $\rho(A) < \infty$ exists, this forces $\rho(\varnothing) = 0$.
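
To see that condition 1 really is independent of the other two (a sketch of my own, not part of the original answer, assuming $X\neq\varnothing$), consider the constant function $\rho\equiv 1$:

```latex
% Take \rho(A) = 1 for every A \subseteq X. Conditions 2 and 3 hold:
A \subseteq B \implies \rho(A) = 1 \le 1 = \rho(B),
\qquad
\rho\left(\bigcup_{n=1}^\infty A_n\right) = 1 \le \sum_{n=1}^\infty 1 = \infty.
% But condition 1 fails, \rho(\varnothing) = 1 \ne 0, and no set
% E \subseteq X is measurable, since for every A \subseteq X
\rho(A \cap E) + \rho(A \cap E^c) = 1 + 1 = 2 \ne 1 = \rho(A).
```

So conditions 2 and 3 alone do not rule out such degenerate examples; condition 1 is exactly what guarantees a nonempty family of measurable sets.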