My background knowledge - I know single-variable differentiation and integration. I've never taken a course in Real Analysis. The little I know (and it's little) is from self-study. I do have some basic knowledge of limits and limit theorems, though.
Context/motivation - Wikipedia writes $1/3$ as... $$\frac{1}{2}-\frac{1}{4}+\frac{1}{8}-\frac{1}{16}+...$$ which is different from the $1/4+1/16+1/64+1/256+...$ that I was used to, but the algebra works out, so it's not surprising. However, this form made me curious. Is it possible to express any real number (whose absolute value is at most $1$) as an infinite sum of plus-or-minus successive reciprocal powers of two?
Another way to think of the problem - if a hiker of infinitesimal size starts at position $x=0$ and moves either left or right by exponentially decreasing steps $\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{16}, ...$ is it always possible for the hiker to reach any real number in the interval $[-1,1]$ just by choosing, at each step, whether to move left or right? I think it's an interesting and even practical question, because there are contexts where halves are all we can work with, e.g. folding unmarked pieces of paper.
I've attempted two proofs which make me confident that the hiker can get to whatever destination they please. The proofs aren't perfect but you people are experts so I'd like to know what you think.
First proof - Cardinality Proof:
Let's invent a sort of positional number system that describes the process. It uses an infinite string of bits (read left-to-right) to describe any real number in $[-1,1]$. An example of a number is $$LRLRLRLR...$$ which means "start from $x=0$, then move left by a half, move right by a fourth, move left by an eighth, and so on". A more "mathematical" version of the same string is $$10101010...$$ Here, each bit tells us the power to which $(-1)$ is raised in the infinite sum $\sum_{n\geq 1}\frac{(-1)^{f(n)}}{2^n}$: if $n$ is the position, then $f(n)$ is simply the bit at position $n$.
The number $101010...$ is essentially just $-\frac{1}{2}+\frac{1}{4}-\frac{1}{8}+\frac{1}{16}-...$, which converges to $-1/3$. If you're uncomfortable with infinity, just think of the infinite string as the limit of its partial strings, e.g. $$10, 1010, 101010, ...$$
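If it helps, the partial-string reading is easy to check numerically. Here's a small Python sketch (the helper name is just something I made up for this post):

```python
def signed_binary_value(bits):
    """Evaluate sum_{n>=1} (-1)^{f(n)} / 2^n, where f(n) is the n-th bit."""
    return sum((-1) ** int(b) / 2 ** n for n, b in enumerate(bits, start=1))

# The partial strings 10, 1010, 101010, ... home in on -1/3:
for k in (1, 2, 3, 10):
    print("10" * k, signed_binary_value("10" * k))
```

Each extra pair of bits roughly quarters the distance to $-1/3$, as you'd expect from a geometric tail.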
I haven't proved that these strings always converge, but I believe they do, based on the loose argument that any subseries of $\sum_{n\geq 1} 1/2^n$ converges (by comparison with the full geometric series). Our process partitions the terms into two subseries which get opposite signs; both converge, so their difference converges. The real issue is whether all these binary strings 'fill' the entire interval $[-1,1]$.
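One way to make the convergence part precise (this is my sketch, not something I can claim is the standard route): write $s_N=\sum_{n=1}^{N}\frac{(-1)^{f(n)}}{2^n}$ for the partial sums. Then for $M>N$, $$|s_M-s_N|\leq\sum_{n=N+1}^{M}\frac{1}{2^n}<\frac{1}{2^N},$$ so the partial sums form a Cauchy sequence and hence converge, whatever the bits are. Moreover $|s_N|\leq\sum_{n\geq 1}\frac{1}{2^n}=1$, so the limit lies in $[-1,1]$.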
Assuming that each infinite binary string represents a unique real number in that interval (there's one exception I know of), then let's consider the cardinality of the set of all infinite binary strings. From what I know, this cardinality is uncountable - it has the same size as the set of all real numbers. Hence, there's a correspondence between these strings/sums and every real number in the interval $[-1,1]$. For any real number, there exists a string - no matter how patternless it seems - that tells our hiker how to reach that number.
Intuitively, the rational numbers consist of the strings which become periodic after some position, and the irrational numbers consist of the aperiodic strings. Just for fun here are some approximate values for some fractions in this unusual number system...
$$1/e \approx 01010000$$ $$\ln(2) \approx 00100111$$ $$1/5 \approx 01100110$$ The approximations aren't really the best, but that's a different problem entirely. With infinite resources, we can always get where we want.
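These eight-bit strings can be checked with a short script (again, the helper name is mine); each one lands within $2^{-8}$ of its target:

```python
import math

def value_of(bits):
    # bit '1' contributes -1/2^n, bit '0' contributes +1/2^n
    return sum((-1) ** int(b) / 2 ** n for n, b in enumerate(bits, start=1))

for bits, target in [("01010000", 1 / math.e),
                     ("00100111", math.log(2)),
                     ("01100110", 1 / 5)]:
    print(bits, round(value_of(bits), 6), round(target, 6))
```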
Zero has two representations $$0 = 0\bar{1} = 1\bar{0}$$ Either move right once then move left forever, or move left once then move right forever.
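Numerically, both tails collapse to zero, since after the first step every later step reverses direction and the geometric tail exactly cancels the head. A quick check (helper name mine):

```python
def value_of(bits):
    # bit '1' contributes -1/2^n, bit '0' contributes +1/2^n
    return sum((-1) ** int(b) / 2 ** n for n, b in enumerate(bits, start=1))

# 0 followed by n 1s (right once, then left n times), and its mirror image:
for n in (4, 8, 16):
    print(n, value_of("0" + "1" * n), value_of("1" + "0" * n))
```

After $n$ reversed steps, what's left of either representation is exactly $\pm 1/2^{n+1}$.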
This post is getting too long so I'll quickly summarise my second proof attempt.
Algorithmic Proof
Let $r$ be the real number we're trying to converge to, and let $p_n$ be the partial sum after $n$ steps. Initialise $p_0=0$.
At each step, compare $p_n$ and $r$. If $p_n<r$, then $p_{n+1}=p_n+(1/2)^{n+1}$. If $p_n>r$, then $p_{n+1}=p_n-(1/2)^{n+1}$.
If $p_n=r$, then $p_{n+1}=p_n+0$, where $0$ means either $(1/2)^{n+1}-\sum_{i>n+1}{(1/2)^i}$ or $-(1/2)^{n+1}+\sum_{i>n+1}{(1/2)^i}$ (the remaining steps cancel exactly). In practice we'd just stop the process.
In any case, the distance $d_n = |{r-p_n}|$ seems to satisfy at least one of two relations at every step... $$d_{n+1} < d_n$$ $$d_{n+1} \leq (1/2)^{n+1}$$ The steps are tedious and convoluted, and I'm not even sure the first relation reveals much, but I'll elaborate if you want.
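For what it's worth, the procedure is easy to simulate. In this sketch (variable names mine, and I break ties by moving left), the recorded bits for $r=1/3$ come out as $0101\ldots$, matching the Wikipedia expansion $\frac{1}{2}-\frac{1}{4}+\frac{1}{8}-\cdots$ above, and the error after $n$ steps stays below $(1/2)^n$:

```python
def greedy_path(r, steps=30):
    """At step n, move by (1/2)^n toward r; record '0' for right, '1' for left."""
    p, bits = 0.0, []
    for n in range(1, steps + 1):
        if p < r:
            bits.append("0")  # move right
            p += 0.5 ** n
        else:
            bits.append("1")  # move left (also the tie-breaking choice)
            p -= 0.5 ** n
    return "".join(bits), p

bits, p = greedy_path(1 / 3)
print(bits[:12], abs(1 / 3 - p))
```

Running it with $r=-1/3$ instead gives $1010\ldots$, the mirror-image string from the first proof.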
What do you think?
Your first proof looks good to me.
To see that all such sums actually converge and cover everything – which you mention as having not been done – you can invoke the "nested interval theorem" or one of its equivalent axioms. You can find a discussion of this theorem (as an axiom; sometimes it's referred to as a property) on the wikipage here. This is how one formally proves that the sequences in your first proof do, in fact, converge. (There's no need to prove anything about cardinality; you have given a sufficient description of the real numbers in terms of their infinite expansions.)
You can find other, perhaps pertinent information by reading about the "Heine-Borel Theorem," which ends up being quite similar to the question you're asking here; for an MSE answer in this direction, see e.g. the post here.