In Reed & Simon's book Methods of Modern Mathematical Physics, it is proven in Chapter 1 (Theorem 1.12) that $L^1$ is complete (the Riesz–Fischer theorem). The proof starts off as follows:
Let $f_n$ be a Cauchy sequence of functions in $L^1$. It is enough to show that some subsequence converges (this has been shown earlier) so we pass to a subsequence (labeled in the same way) with $\left|\left|f_n-f_{n+1}\right|\right|_1\leq 2^{-n}$.
This arouses my suspicion (although surely I will turn out to be wrong): can we really pick a subsequence such that $\left|\left|f_n-f_{n+1}\right|\right|_1\leq 2^{-n}$? This seems strange to me because it appears to say something about the rate at which elements "get closer" under this norm (I should probably specify that $\left|\left| f\right|\right|_1=\int \left| f \right| dx$), rather than just that they get arbitrarily close eventually. In particular, it seems to say that each successive element is "twice as close".
The definition of a Cauchy sequence says that for each $\epsilon>0$ we can choose an $N$ such that $n,m>N$ implies $\left|\left| f_n-f_m\right|\right|_1=d(f_n,f_m)\leq\epsilon$, where $d(\cdot,\cdot)$ is the metric induced by the norm. This, to me, does not seem equivalent to saying that for any strictly positive $\epsilon(n)$ there is a subsequence such that $d(f_n,f_{n+1})\leq \epsilon(n)$. Am I mistaken?
The key to this observation in Reed and Simon is that we are choosing a subsequence by a particular process. And we can force this subsequence to obey a certain "rate of closeness." Here's how this works in general.
Let $(X,d)$ be an arbitrary metric space, and suppose that $\{x_n\}_{n=1}^\infty$ is a Cauchy sequence in $X$. First, choose an integer $n_1 \in \mathbb{N}$ so that $n, m \ge n_1$ implies $d(x_n, x_m) < \frac{1}{2}$; we can do this precisely because $\{x_n\}$ is Cauchy.
Next, choose $n_2 \in \mathbb{N}$ so that $n, m \ge n_2$ implies $d(x_n, x_m) < \frac{1}{4} = \frac{1}{2^2}$ (again, we can do this because $\{x_n\}$ is Cauchy). In fact, we can go ahead and assume $n_2 > n_1$, since any integer $\tilde{n} > n_2$ also satisfies the property:
$$n,m \ge \tilde{n} \implies d(x_n, x_m) < \frac{1}{2}.$$
Now, we finish the argument by induction. Suppose we have chosen $n_1 < n_2 < \cdots < n_k$ with the property that
$$n,m \ge n_k \implies d(x_n,x_m) < \frac{1}{2^k}.$$
Notice in particular that this means $d(x_{n_i}, x_{n_{i+1}}) < \frac{1}{2^i}$ for $i = 1, \dots, k-1.$
Then, just as we did for the $n_2$ case, we pick $n_{k+1} > n_k$ so that $$n,m \ge n_{k+1} \implies d(x_n, x_m) < \frac{1}{2^{k+1}}.$$
And this implies $d(x_{n_k}, x_{n_{k+1}}) < \frac{1}{2^k}$. So in this way we construct a subsequence $\{x_{n_k}\}_{k=1}^\infty$ that satisfies the "rate of closeness" we desire.
This construction works for the example you give if we set $X = L^1$ and $d(f, g) = \|f-g\|_1$. The potentially confusing part in Reed and Simon is the relabeling of the subsequence: they throw away the subscripts on the subsequence $\{x_{n_k}\}_{k=1}^\infty$ and just write it as $\{x_k\}_{k=1}^\infty$. This makes the subsequence look just like the original sequence, but from that point on they are working only with the subsequence. (Note also that nothing is special about $\frac{1}{2^k}$: the same construction goes through with any prescribed $\epsilon(k) > 0$, which answers your more general question in the affirmative.)
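To see the inductive construction in action, here is a short numerical sketch (purely illustrative, not part of Reed and Simon's proof) in the metric space $(\mathbb{R}, |\cdot|)$. The Cauchy sequence (partial sums of the alternating harmonic series) and the explicit threshold $n_k = 2^k$ are my own choices for the example; for this sequence the alternating-series estimate $|x_n - x_m| \le \frac{1}{n+1}$ for $m \ge n$ guarantees that $n_k = 2^k$ works as the Cauchy threshold for $\epsilon = \frac{1}{2^k}$.

```python
# A concrete Cauchy sequence in R: partial sums of the alternating
# harmonic series, x_n = sum_{j=1}^n (-1)^(j+1)/j.
def x(n):
    return sum((-1) ** (j + 1) / j for j in range(1, n + 1))

def d(a, b):
    return abs(a - b)

# Inductively pick n_1 < n_2 < ... so that all terms from index n_k
# onward are within 1/2^k of each other.  For this sequence the
# alternating-series estimate |x_n - x_m| <= 1/(n+1) (for m >= n)
# shows that n_k = 2^k is a valid threshold for epsilon = 1/2^k.
def rapid_subsequence(num_terms):
    indices = []
    n = 1
    for k in range(1, num_terms + 1):
        n = max(n + 1, 2 ** k)  # also ensures n_k > n_{k-1}
        indices.append(n)
    return indices

indices = rapid_subsequence(10)
for k in range(len(indices) - 1):
    # the desired "rate of closeness": d(x_{n_k}, x_{n_{k+1}}) < 1/2^k
    assert d(x(indices[k]), x(indices[k + 1])) < 1 / 2 ** (k + 1)
```

The point of the sketch is that the subsequence indices grow as fast as needed; how fast depends on the particular sequence, but the Cauchy property guarantees such indices always exist.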