Why should someone use the decimal expansion of a real number when that decimal expansion is ITSELF a real number?


EDIT: Is the Cornell proof incomplete, since it doesn't account for the two possible decimal expansions of a real number? For example, it says 1/2 = .5 but doesn't account for .49999999...

I'm looking at these two proofs - one from Cornell and one from Berkeley (example 3).

https://courses.cs.cornell.edu/cs2800/wiki/index.php/Proof:The_set_of_reals_is_uncountable

https://math.berkeley.edu/~arash/55/2_5.pdf

I understand Cornell's proof, but I'm confused about the Berkeley proof. Why does the Berkeley proof consider the decimal representations of real numbers between 0 and 1 rather than just the real numbers themselves, since a number between 0 and 1 is already "in decimal form"? Doesn't this just add potential holes to the argument, since each number has either one or two possible decimal expansions?

I'm not comfortable with the real numbers or decimal expansions, so there's a good chance I'm missing something. My friend also wrote a proof that uses decimal expansions, but I'm not sure why. Basically, why should someone use decimal expansions at all? Is it because 5 is a real number but we want to represent it in decimal form, so the decimal expansions of 5 are 5.00000... and 4.99999...? IN SHORT: Why should someone use the decimal expansion of a real number when that decimal expansion is ITSELF a real number?

My friend's proof:

Proceed by contradiction. Suppose there exists a surjection $f: \mathbb N \to \mathbb R$. For each $n \in \mathbb N$, consider the possible decimal expansions of $f(n)$ (there may be more than one). Let $S_n$ be the set of digits that appear in the $n$th place after the decimal point among the possible decimal expansions of $f(n)$.

Lemma: For fixed $n$ (and fixed $f$), $S_n$ has at most two elements (give proof).

Now, for each $n$, pick an integer $0 \leq a_n \leq 9$ not in $S_n$ (possible since $|S_n| \leq 2$). Consider $a = 0.a_1a_2a_3\cdots$, i.e. the real number with decimal expansion given by $a_1$ in the tenths place, $a_2$ in the hundredths place, etc. Then $a \notin \mathrm{Im}(f)$, i.e. $a \neq f(n)$ for any $n \in \mathbb N$ (give proof), so $f$ is not a surjection.
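The digit-picking step above can be made concrete. Here is a minimal Python sketch (my own illustration, not part of the proof) that, given exact rationals in $[0,1)$ standing in for $f(1), f(2), \dots$, computes $S_n$ from *both* possible expansions of $f(n)$ and picks a digit outside it. The helper names `alt_digit` and `diagonal` are invented for this sketch.

```python
from fractions import Fraction

def alt_digit(x, n):
    """n-th decimal digit (n >= 1) of the trailing-9s expansion of x in [0, 1),
    or None if x has only one decimal expansion."""
    if x == 0:
        return None
    d = x.denominator
    # x terminates (hence has a second expansion) iff d's prime factors are 2 and 5
    while d % 2 == 0:
        d //= 2
    while d % 5 == 0:
        d //= 5
    if d != 1:
        return None
    # m = smallest exponent with 10**m * x an integer (position of the last nonzero digit)
    m, y = 0, x
    while y.denominator != 1:
        y *= 10
        m += 1
    std = int(x * 10 ** n) % 10          # digit of the standard expansion
    if n < m:
        return std                        # same digit before the terminating place
    if n == m:
        return std - 1                    # last digit drops by one...
    return 9                              # ...and all later digits become 9

def diagonal(f_values):
    """For each listed value f(n), return a digit a_n outside S_n."""
    picks = []
    for n, x in enumerate(f_values, start=1):
        S = {int(x * 10 ** n) % 10}       # digit from the standard expansion
        a = alt_digit(x, n)
        if a is not None:
            S.add(a)                      # digit from the trailing-9s expansion
        # |S| <= 2, so some digit in 1..8 avoids S
        picks.append(next(d for d in range(1, 9) if d not in S))
    return picks
```

For example, `diagonal([Fraction(1, 2), Fraction(1, 3), Fraction(1, 7)])` must avoid both the `5` of `0.5000...` and the `4` of `0.4999...` in the first place, so $a_1 \notin \{4, 5\}$.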


A decimal expansion is a sequence $u=(u_n)_{n>0} \in \{0,\dots,9\}^{\mathbb{N}_{>0}}$. You have a function $\sigma: \{0,\dots,9\}^{\mathbb{N}_{>0}} \rightarrow [0,1]$ which sends a decimal expansion $u$ to the sum $\sum \limits_{n>0}\frac{u_n}{10^n}$. Say that $u$ is a decimal expansion of a number $x$ if $\sigma(u)=x$. It is then known that each number $x\in [0,1]$ has exactly one or exactly two decimal expansions: it has exactly one if it is not a terminating decimal (i.e. not of the form $k/10^m$) or if $x \in \{0,1\}$, and it has exactly two otherwise.
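To see $\sigma$ in action on the question's own example, here is a small sketch (assuming exact rationals; the name `sigma_partial` is mine) that truncates the series for the two expansions of $1/2$ and shows both partial sums closing in on the same real number:

```python
from fractions import Fraction

def sigma_partial(u, N):
    """Partial sum sum_{n=1}^{N} u_n / 10**n of the series defining sigma(u)."""
    return sum(Fraction(u(n), 10 ** n) for n in range(1, N + 1))

# the two expansions of 1/2: u = (5, 0, 0, ...) and v = (4, 9, 9, ...)
u = lambda n: 5 if n == 1 else 0
v = lambda n: 4 if n == 1 else 9

# every truncation of u already equals 1/2 exactly, while the truncation
# of v falls short of 1/2 by exactly 10**-N, so both series sum to 1/2
print(sigma_partial(u, 6))                    # 1/2
print(Fraction(1, 2) - sigma_partial(v, 6))   # 1/1000000
```

Since the shortfall $10^{-N}$ tends to $0$, $\sigma(u) = \sigma(v) = 1/2$: two distinct sequences, one real number.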

Writing a number in decimal form is not a mathematical statement, but an informal way of saying "consider (or compute) a decimal expansion" of that number.