Counter-intuitive zero measure theorem


I just started learning about zero measure sets for multivariable integral calculus. The definition of a zero measure set is the following:

Let $A \subset \mathbb{R}^n$. We say $A$ has zero measure if for all $\epsilon > 0$ there exists a finite or countably infinite family of closed and bounded intervals $\{I_j\}$ such that $A \subset \cup_j I_j$ and $\sum_j V(I_j)<\epsilon.$

The interval $(0,2) \subset \mathbb{R}$ is given as an example of a set that does not have zero measure.

After this, the following theorem is stated and proven:

Let $I$ be a closed and bounded interval of $\mathbb{R}^n$ and $g:I \rightarrow \mathbb{R}$ integrable on $I$. Then the set

$A=\{(x_1,...,x_n,x_{n+1}) \in \mathbb{R}^{n+1} : (x_1,...,x_n)\in I, x_{n+1}=g(x_1,...,x_n)\}$ has zero measure.

So the set $\{(x,y) \in \mathbb{R}^2: 0 \le x \le 10, y=x^2\}$ has zero measure. Intuitively the interval $(0,2)$ is "smaller" than the set $A$, so how is it possible to cover $A$ with closed and bounded intervals of arbitrarily small total volume, but not possible to do the same with $(0,2)$? I know we can't really compare them directly, because closed and bounded intervals in $\mathbb{R}^2$ are not the same as the ones in $\mathbb{R}$, but I still find this counter-intuitive.
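As a quick numerical sketch of the theorem applied to this example (mine, not from the post): since $g(x)=x^2$ is increasing on $[0,10]$, the piece of the graph over each subinterval $[x_{i-1},x_i]$ fits inside the rectangle $[x_{i-1},x_i]\times[x_{i-1}^2,x_i^2]$, and the total area of these rectangles telescopes to $1000/n$, which can be made arbitrarily small.

```python
def total_cover_area(n):
    """Cover {(x, x^2) : 0 <= x <= 10} with n closed rectangles.

    On each subinterval [x_{i-1}, x_i] of width 10/n, the graph lies in
    [x_{i-1}, x_i] x [x_{i-1}^2, x_i^2] because x^2 is increasing there.
    Return the sum of the rectangle areas.
    """
    width = 10 / n
    total = 0.0
    for i in range(1, n + 1):
        x_prev, x_curr = (i - 1) * width, i * width
        total += width * (x_curr**2 - x_prev**2)
    return total

# The heights telescope: total = (10/n) * (10^2 - 0^2) = 1000/n -> 0.
for n in (10, 100, 1000):
    print(n, total_cover_area(n))
```

Taking $n$ large enough makes the total area smaller than any given $\epsilon$, which is exactly the definition of zero measure.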

On BEST ANSWER

One-line explanation: We're talking about different notions of "zero area". If $\delta>0$ is very small then $\delta^2$ is much smaller than $\delta$, so there's no problem having $\sum\delta_j=1$ while $\sum\delta_j^2<\epsilon$.

More formally: Let $A=[0,1]\subset\Bbb R$ and $B=[0,1]\times\{0\}\subset\Bbb R^2$. Then

(i) $A$ is not a zero-area subset of $\Bbb R$.

(ii) $B$ is a zero-area subset of $\Bbb R^2$.

These both seem intuitively clear to me: The "area" in $\Bbb R$ is just the length, and $A$ has positive length. On the other hand, the "area" in $\Bbb R^2$ is "two-dimensional area", and $B$ certainly looks like it has zero two-dimensional area.

The actual proof of (i) is not quite trivial; I'm going to ignore that, since I think it seems plausible and I gather there's a proof in the book.

Proof of (ii): Given $\epsilon>0$, choose an integer $n>1/\epsilon$ and let $I_j=[(j-1)/n,j/n]\times[0,1/n]$ for $j=1,\dots,n$. Then $B\subset\bigcup_{j=1}^n I_j$, and $$\sum_{j=1}^n V(I_j)=\sum_{j=1}^n\frac1{n^2}=\frac1n<\epsilon.$$
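A one-liner to check the arithmetic of this covering (a sketch, assuming the $I_j$ from the proof, each of area $(1/n)\cdot(1/n)$):

```python
def cover_volume(n):
    """Total area of the n rectangles I_j = [(j-1)/n, j/n] x [0, 1/n]
    covering B = [0,1] x {0}: n rectangles of area 1/n^2 each."""
    return sum((1 / n) * (1 / n) for _ in range(n))

# n * (1/n^2) = 1/n, which drops below any epsilon once n > 1/epsilon.
print(cover_volume(4))
```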

More to the point, why (ii) does not contradict (i): Saying $B\subset\bigcup I_j$ amounts to saying $A\subset\bigcup I_j'$, where $I_j'=[(j-1)/n,j/n]$. And $$\sum_{j=1}^n V(I_j')=\sum_{j=1}^n\frac1n=1.$$

Probably it would be better to write $V_n(I)$ instead of $V(I)$. The point is that $V(I_j')=V_1(I_j')=1/n$, while $V(I_j)=V_2(I_j)=1/n^2$, which is much smaller.
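The dimension-dependent volume $V_n$ can be made explicit as the product of the side lengths. A minimal sketch (the `volume` helper is my own illustration, not notation from the answer):

```python
import math

def volume(interval):
    """V_n of a closed interval in R^n, given as a list of n sides (a, b)."""
    return math.prod(b - a for a, b in interval)

n = 4
I_prime = [(0, 1 / n)]              # I_1' = [0, 1/n] in R^1: V_1 = 1/n
I = [(0, 1 / n), (0, 1 / n)]        # I_1 = [0, 1/n] x [0, 1/n] in R^2: V_2 = 1/n^2
print(volume(I_prime), volume(I))
```

The same list-of-sides representation works in any dimension, which is exactly why the two sums in the answer behave so differently: the projections $I_j'$ keep side length $1/n$, but the planar rectangles $I_j$ pick up a second factor of $1/n$.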