The following is taken from the classic Probability and Measure by Patrick Billingsley, Theorem 2.2 (page 26 in the 3rd edition). I have a question about his proof, but I give the necessary definitions to make my question self-contained.
Denote by $\mathcal B_0$ the field of all finite unions of half-open subintervals $(a,b]$ of $(0,1]$, i.e. the system of sets containing all half-open subintervals of $(0,1]$ and closed under finite intersection and complementation (and therefore also under finite unions). It is easy to see that $\mathcal B_0$ is exactly the set $$ \mathcal B_0 = \left\{ \bigcup_{i=1}^n (a_i, b_i] : (a_i, b_i] \subseteq (0,1] ~ \mbox{disjoint} \right\} $$ of all subsets of $(0,1]$ that can be written as finite unions of disjoint subintervals of $(0,1]$. The following property will be needed below: if $I = \bigcup_k I_k$ with the $I_k$ disjoint intervals, then $|I| = \sum_k |I_k|$. (*)
Then define a mapping $\lambda : \mathcal B_0 \to [0,1]$ by $$ \lambda(A) = \sum_{i=1}^n (b_i - a_i) \quad \text{for } A = \bigcup_{i=1}^n (a_i, b_i]. $$ Then $\lambda$ is countably additive, i.e. for $A \in \mathcal B_0$ and disjoint $A_1, A_2, \ldots \in \mathcal B_0$ with $A = \bigcup_{k=1}^{\infty} A_k$ we have $$ \lambda(A) = \lambda\left(\bigcup_{k=1}^{\infty} A_k\right) = \sum_{k=1}^{\infty}\lambda(A_k). $$
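For concreteness, here is a small numerical sketch (my own, not from the book; the names `lam` and `A` are mine) of how $\lambda$ acts on a finite disjoint union of half-open intervals, represented as pairs of endpoints:

```python
# Numerical sketch (not from Billingsley): lambda on a finite disjoint
# union of half-open intervals (a, b] contained in (0, 1].

def lam(intervals):
    """Total length of a finite disjoint union of intervals (a, b]."""
    return sum(b - a for a, b in intervals)

# A = (0, 1/4] ∪ (1/2, 1]  →  λ(A) = 1/4 + 1/2 = 3/4
A = [(0.0, 0.25), (0.5, 1.0)]
print(lam(A))  # 0.75

# Finite additivity: splitting (0, 1/4] at 1/8 does not change λ(A)
A_split = [(0.0, 0.125), (0.125, 0.25), (0.5, 1.0)]
print(lam(A_split))  # 0.75
```

The point of the theorem is that this additivity persists for countably infinite disjoint decompositions, which is where compactness enters.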
Proof: Suppose that $A = \bigcup_{k=1}^{\infty} A_k$, where $A$ and the $A_k$ are $\mathcal B_0$-sets and the $A_k$ are disjoint. Then $A = \bigcup_{i=1}^n I_i$ and $A_k = \bigcup_{j=1}^{m_k} J_{kj}$ are representations as finite disjoint unions of subintervals of $(0,1]$, and the definition of $\lambda$ together with (*) gives
\begin{align*} \lambda(A) & = \sum_{i=1}^{n} |I_i| = \sum_{i=1}^n \sum_{k=1}^{\infty} \sum_{j=1}^{m_k} |I_i \cap J_{kj}| \\ & = \sum_{k=1}^{\infty} \sum_{j=1}^{m_k} |J_{kj}| = \sum_{k=1}^{\infty} \lambda(A_k). \qquad \square \end{align*}
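The regrouping step in this chain can be sanity-checked numerically in its finite form (a sketch of my own, not from the book; all names are mine): cutting each $I_i$ along a second disjoint decomposition $J_1, \ldots, J_m$ of the same set and regrouping leaves the total length unchanged.

```python
# Sketch (not from the book) of the double-counting identity used in
# the proof, in its finite form:
#   sum_i |I_i| = sum_i sum_j |I_i ∩ J_j| = sum_j |J_j|.

def length(iv):
    a, b = iv
    return max(0.0, b - a)  # empty intersections get length 0

def intersect(p, q):
    """Intersection of half-open intervals (a, b] ∩ (c, d]."""
    (a, b), (c, d) = p, q
    return (max(a, c), min(b, d))

I = [(0.0, 0.5), (0.5, 1.0)]               # A as a disjoint union of I_i
J = [(0.0, 0.25), (0.25, 0.625), (0.625, 1.0)]  # the same A, cut differently

lhs = sum(length(i) for i in I)
double = sum(length(intersect(i, j)) for i in I for j in J)
rhs = sum(length(j) for j in J)
print(lhs, double, rhs)  # 1.0 1.0 1.0
```

In the proof the inner decomposition is countable, and that is exactly where property (*) — and hence compactness — is needed.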
To quote the book:
[...] proving countable additivity on $\mathcal J$ [i.e. the set of all subintervals of $(0,1]$] requires the deeper property of compactness.
Where exactly is compactness used in the above proof? I do not see where it is applied.
Billingsley uses compactness to prove (*). If you cover $I=(a,b]$ with countably many intervals $I_k=(a_k,b_k]$ (disjoint or not), $(a,b]\subset\bigcup_{k=1}^{\infty} (a_k,b_k]$, how can you be sure that $b-a=|I|\leq \sum_k|I_k|$? (This is elementary to show in the finite case.) At this point we only know how to handle the length $|\!\cdot\!|$ of a finite union of intervals.
Here is a sketch of the argument from Billingsley (proof of (*), Th. 1.3). The idea is that for $0<\varepsilon<b-a$, the open intervals $(a_k,b_k+\varepsilon2^{-k})$ cover the interval $[a+\varepsilon,b]$. The latter interval is closed and bounded, hence compact, so there exists a finite $n$ such that $[a+\varepsilon,b]\subset \bigcup_{k=1}^n (a_k,b_k+\varepsilon2^{-k})$. In particular $(a+\varepsilon,b]\subset \bigcup_{k=1}^n (a_k,b_k+\varepsilon2^{-k}]$. By the finite case, $$b-(a+\varepsilon)\leq \sum_{k=1}^n \left(b_k+\varepsilon2^{-k}-a_k\right)\leq \varepsilon+\sum_{k=1}^{\infty} (b_k-a_k),$$ and the result follows because $\varepsilon$ was arbitrary.
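To see the $\varepsilon$-inflation step in action, here is a numerical sketch (my own, not from the book; all names and the particular cover are mine): cover $(0,1]$ by the disjoint intervals $I_k=(2^{-k},2^{-(k-1)}]$, inflate each to the open interval $(a_k, b_k+\varepsilon 2^{-k})$, and check that finitely many inflated intervals already cover $[a+\varepsilon, b]$, with the finite-case length bound holding.

```python
# Sketch (not from the book) of the epsilon-inflation step in the
# compactness argument, for the cover I_k = (2^-k, 2^-(k-1)] of (0, 1].

eps = 0.1
a, b = 0.0, 1.0

def inflated(n):
    """First n inflated OPEN intervals (a_k, b_k + eps*2^-k)."""
    return [(2.0**-k, 2.0**-(k - 1) + eps * 2.0**-k) for k in range(1, n + 1)]

def covers(open_ivals, lo, hi):
    """Greedy check that the open intervals cover the closed [lo, hi]."""
    x = lo
    while x <= hi:
        ends = [r for l, r in open_ivals if l < x < r]
        if not ends:
            return False  # the point x is uncovered
        x = max(ends)     # jump to the furthest reachable right endpoint
    return True

n = 4  # finitely many inflated intervals suffice, as compactness guarantees
print(covers(inflated(n), a + eps, b))  # True

# Finite-case bound: b - (a + eps) <= total length of the n inflated intervals
total = sum(r - l for l, r in inflated(n))
print(b - (a + eps) <= total)  # True
```

Note that the inflation to open intervals is what makes the Heine-Borel theorem applicable: half-open intervals do not form an open cover, so compactness could not be invoked directly.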