Let $X_0 = \frac12$ and $X_n = \text{Uniform}(0, 2X_{n-1})$ for $n \ge 1$ be a sequence of random variables. Since $E[X_{n} \mid X_0, \ldots, X_{n-1}] = X_{n-1}$, the sequence $(X_n)$ is a martingale. It is also bounded from below by $0$.
Using (a version of) the martingale convergence theorem, we can say that $X_n \to X$ with probability $1$, where $X$ is a random variable with finite expectation.
Now, in class we argued that $X_n$ cannot converge to any value other than $0$ with positive probability. This can be seen with an elementary argument: assume $X_n$ converges to some $a > 0$ with positive probability, and consider an arbitrarily small interval around $a$; the way $X_n$ is defined rules this out.
However, we then argued that the previous argument implies $\Pr(X_n \to 0) = 1$. This step is not clear to me, because a similar argument seems to apply to $0$ as well, by considering an interval $(0, \epsilon]$ for some small $\epsilon$.
Note that this was meant as an introduction to martingales, and we proved the martingale convergence theorem using "basic probability" and "elementary" $\epsilon$-$\delta$ arguments. We do not have any background in measure theory, and our discussion of martingales was not based on measure-theoretic foundations, which I understand is the standard way of presenting them.
My question is: can we show that, with probability $1$, $X_n$ converges to $0$, without using sophisticated arguments from measure theory?
First of all, note that
$$\text{Uniform}(0,\alpha) = \alpha \cdot \text{Uniform}(0,1)$$
for any $\alpha>0$; for instance picking a random number from the interval $(0,2)$ (with uniform distribution) is the same (in distribution) as picking a random number from the unit interval $(0,1)$ (with uniform distribution) and multiplying the number by $2$.
This means that
$$X_n = \text{Uniform}(0,2X_{n-1}) = 2 X_{n-1} \text{Uniform}(0,1) .$$
If we set $\xi_n := \frac{X_n}{2X_{n-1}}$, then $\xi_n = \text{Uniform}(0,1)$ and
$$X_n = 2 X_{n-1} \xi_n. $$
Iterating the procedure, we find
$$X_n = 2^n X_0 \prod_{j=1}^n \xi_j \tag{1}$$
where $\xi_j = \text{Uniform}(0,1)$ for all $j=1,\ldots,n$.
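As a quick numerical sanity check (not part of the proof), the following Python sketch compares the recursion $X_n = 2 X_{n-1} \xi_n$ with the closed form $X_n = 2^n X_0 \prod_{j=1}^n \xi_j$ obtained by iterating it; the seed and path length are arbitrary choices.

```python
import math
import random

random.seed(42)

X0 = 0.5
n = 30
xi = [random.random() for _ in range(n)]  # independent draws of Uniform(0,1)

# Build X_n via the recursion X_j = 2 * X_{j-1} * xi_j ...
X = X0
for x in xi:
    X = 2 * X * x

# ... and via the closed form X_n = 2^n * X0 * prod(xi_j).
X_closed = 2**n * X0 * math.prod(xi)

print(X, X_closed)  # the two agree up to floating-point rounding
```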
From now on we assume that the random variables $\xi_1,\xi_2,\ldots$ are independent. That's an assumption we have to make; otherwise $(X_n)_{n \in \mathbb{N}}$ may fail to be a martingale. It follows from $(1)$ that
$$\log(X_n) = \log \left(2^n X_0 \prod_{j=1}^n \xi_j \right) = \log(X_0) + n \log 2 + \sum_{j=1}^n \log(\xi_j). \tag{2}$$
The random variables $\eta_j := \log(\xi_j)$ are independent, identically distributed and integrable; therefore the strong law of large numbers shows that, almost surely,
$$\frac{1}{n} \sum_{j=1}^n \log(\xi_j) = \frac{1}{n} \sum_{j=1}^n \eta_j \xrightarrow[]{n \to \infty} \mathbb{E}(\eta_1).$$
Since
$$\mathbb{E}(\eta_1) = \int_0^1 \underbrace{\log(x)}_{<0} \, dx = -1,$$
we have $\log 2 + \mathbb{E}(\eta_1) = \log 2 - 1 < 0$, and so
$$n \log 2 + \sum_{j=1}^n \log(\xi_j) = n \underbrace{\left( \log 2 + \frac{1}{n} \sum_{j=1}^n \log(\xi_j) \right)}_{\to \, \log 2 + \mathbb{E}(\eta_1) \, < \, 0} \xrightarrow[]{n \to \infty} - \infty.$$
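The value $\mathbb{E}(\eta_1) = -1$ can also be estimated by simulation; here is a minimal Monte Carlo sketch in Python (the sample size is an arbitrary choice).

```python
import math
import random

random.seed(1)

# Monte Carlo estimate of E[log(U)] for U ~ Uniform(0,1).
# The exact value is the integral of log(x) over (0,1), namely -1.
N = 200_000
estimate = sum(math.log(random.random()) for _ in range(N)) / N
print(estimate)  # close to -1
```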
Letting $n \to \infty$ in $(2)$ we conclude
$$\log(X_n) \xrightarrow[]{n \to \infty} - \infty$$
which is, by the continuity of $\exp$, equivalent to saying
$$X_n = \exp(\log(X_n)) \xrightarrow[]{n \to \infty} 0.$$
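To illustrate the conclusion numerically, one can simulate a path in log-space (since $X_n$ itself underflows to $0$ in floating point very quickly) and watch $\frac{1}{n}\log(X_n)$ approach $\log 2 + \mathbb{E}(\eta_1) = \log 2 - 1 \approx -0.307$. A small sketch, with an arbitrary seed and path length:

```python
import math
import random

random.seed(7)

# Simulate log(X_n) for the chain X_n = 2 * X_{n-1} * xi_n, starting at X_0 = 1/2.
# In log-space the recursion reads: log(X_n) = log(X_{n-1}) + log(2 * xi_n).
n = 50_000
logX = math.log(0.5)
for _ in range(n):
    logX += math.log(2 * random.random())

rate = logX / n
print(logX, rate)  # logX is hugely negative; rate is near log(2) - 1
```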