Given a sequence $(a_n)$ such that $0 \leq a_i <\frac{1}{2}$ for all $i$, it is true that:
$$ \sum_i \log(1-a_i) > -\infty \iff \sum_i \log(1-2a_i) > -\infty$$
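As a quick numerical sanity check (the two sample sequences below are illustrative choices, not part of the claim), both sums stay bounded when $\sum_i a_i < \infty$ and both drift to $-\infty$ otherwise:

```python
import math

def log_sums(a, n):
    """Partial sums of sum_i log(1 - a_i) and sum_i log(1 - 2*a_i), i = 1..n."""
    s1 = sum(math.log(1 - a(i)) for i in range(1, n + 1))
    s2 = sum(math.log(1 - 2 * a(i)) for i in range(1, n + 1))
    return s1, s2

# Summable case: a_i = 1/(i + 2)^2 (so 0 <= a_i < 1/2); both sums stabilize.
s1a, s2a = log_sums(lambda i: 1.0 / (i + 2) ** 2, 10_000)
s1b, s2b = log_sums(lambda i: 1.0 / (i + 2) ** 2, 100_000)

# Non-summable case: a_i = 1/(i + 2); both partial sums keep decreasing.
d1a, d2a = log_sums(lambda i: 1.0 / (i + 2), 10_000)
d1b, d2b = log_sums(lambda i: 1.0 / (i + 2), 100_000)
```

In the summable case the partial sums barely move between $10^4$ and $10^5$ terms, while in the non-summable case both continue toward $-\infty$.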
However, I found my proof of this a bit crude and long-winded. Does anyone have a nicer approach?
For reference, here's how I did it:
Referring to the equation at the top, let $s_1$ be the sum on the LHS and $s_2$ the sum on the RHS (more rigorously, each is the limit of its partial sums).
Since $\log(1-a_i) \geq \log(1-2a_i)$ for all $i$, we have $s_1 \geq s_2$, so the implication $s_2 > -\infty \implies s_1 > -\infty$ is immediate. It therefore suffices to prove $s_1 > -\infty \implies s_2 > -\infty$, so assume $s_1$ converges to a finite value.
We consider the difference of the two sums,
$$ s_1 - s_2 = \sum_i \log(1-a_i)-\log(1-2a_i) = \sum_i \log\left(\frac{1-a_i}{1-2a_i}\right) = \sum_i \log\left(1+\frac{a_i}{1-2a_i}\right) $$
Since $s_1$ converges, its terms $\log(1-a_i)$ tend to $0$, so $a_i \to 0$ and hence $\frac{1}{1-2a_i} \to 1$. Thus for any $\epsilon > 0$ there is an $N(\epsilon)$ such that $\frac{a_i}{1-2a_i} \leq (1+\epsilon)a_i$ for all $i > N(\epsilon)$; writing $C(\epsilon)$ for the (finite) contribution of the first $N(\epsilon)$ terms, we have:
$$ s_1-s_2 =\sum_i \log\left(1+\frac{a_i}{1-2a_i}\right) \leq C(\epsilon) + \sum_{i >N(\epsilon)} \log(1+(1+\epsilon)a_i) $$
$$\leq C(\epsilon) + \sum_i \log(1+(1+\epsilon) a_i)$$
Every term of the bottom sum is non-negative, which justifies the last inequality; moreover, if the bottom sum is finite, then $s_1-s_2$, and thus $s_2$, must also be finite. Denote the bottom sum by $s_3$.
Expanding the product, $(1-a_i)(1+(1+\epsilon)a_i) = 1 + \epsilon a_i - (1+\epsilon)a_i^2 \leq 1+\epsilon a_i$, so
$$ s_1+s_3 = \sum_i \log\big((1-a_i)(1+(1+\epsilon) a_i)\big) \leq \sum_i \log(1+\epsilon a_i) $$
and, taking $\epsilon \leq 1$,
$$ \leq \sum_i \log(1+a_i)$$
Adding $s_1$ to $\sum_i \log(1+a_i)$ and using $(1-a_i)(1+a_i) = 1-a_i^2$, we get:
$$s_1 + \sum_i \log(1+a_i) = \sum_i \log\big((1-a_i)(1+a_i)\big) = \sum_i \log(1-a_i^2) $$
Since $\log(1-a_i) \leq \log(1-a_i^2) \leq 0$ for each $i$, we have:
$$ s_1 \leq \sum_i \log(1-a_i^2) \leq 0$$
Rearranging the identity above, $\sum_i \log(1+a_i) = \sum_i \log(1-a_i^2) - s_1 \leq -s_1 < \infty$, and therefore $s_3 \leq \sum_i \log(1+a_i) - s_1 < \infty$. Thus $s_3$ is finite, meaning $s_2$ is finite, and we are done.
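The chain of bounds in this proof can be spot-checked numerically; the sequence $a_i = 1/(i+2)^2$ and the choice $\epsilon = 1$ below are illustrative assumptions, not part of the argument:

```python
import math

# Spot-check the proof's inequalities for a_i = 1/(i + 2)^2 with eps = 1:
#   s1 + s3 <= sum log(1 + eps*a_i)  and  s1 + sum log(1 + a_i) = sum log(1 - a_i^2).
N = 100_000
a = [1.0 / (i + 2) ** 2 for i in range(1, N + 1)]
eps = 1.0
s1 = sum(math.log(1 - x) for x in a)              # partial sum for s_1
s3 = sum(math.log(1 + (1 + eps) * x) for x in a)  # partial sum for s_3
m = sum(math.log(1 + eps * x) for x in a)         # sum of log(1 + eps*a_i)
sq = sum(math.log(1 - x * x) for x in a)          # sum of log(1 - a_i^2)
```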
This follows from a more general lemma:

Suppose $f$ and $g$ are real-valued functions for which there exist constants $\alpha, c > 0$ such that $|g(x)| \leq c\,|f(x)|$ whenever $|f(x)| \leq \alpha$. Then, for any sequence $(a_i)$, if $\sum_i f(a_i)$ converges absolutely, so does $\sum_i g(a_i)$.
The proof of this is easy: since $\sum_i f(a_i)$ converges absolutely, $f(a_i) \to 0$, so there are only finitely many $i$ with $|f(a_i)|>\alpha$, and the sum of the corresponding $|g(a_i)|$ is finite. The sum of the remaining $|g(a_i)|$ is bounded by $c$ times the sum of the corresponding $|f(a_i)|$, which is finite because $\sum_i f(a_i)$ converges absolutely. Thus, $\sum_i g(a_i)$ converges absolutely.
All you need to do then is notice that for the functions $a(x) = \log(1-x)$ and $b(x)=\log(1-2x)$ this condition holds for either assignment of $f$ and $g$ to $a$ and $b$. This follows entirely from the fact that $a$ and $b$ are both differentiable at $0$ with non-zero derivatives ($a'(0)=-1$, $b'(0)=-2$); in particular, note that $$\lim_{x\rightarrow 0}\frac{b(x)}{a(x)} = 2,$$ which implies that, for small enough $x$, we get the two-sided bounds on $a(x)$ and $b(x)$ that we require. A bit of fiddling with continuity converts these facts into the hypotheses of the lemma. Note that, since the series being summed in your question have non-positive terms, convergence is the same as absolute convergence here.
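As a small numerical check of the limit driving this argument (the sample points are arbitrary):

```python
import math

# Check that log(1 - 2x) / log(1 - x) -> 2 as x -> 0+, the ratio that yields
# the two-sided comparison between a(x) = log(1-x) and b(x) = log(1-2x).
xs = [1e-1, 1e-3, 1e-6]
ratios = [math.log(1 - 2 * x) / math.log(1 - x) for x in xs]
```

The ratios decrease toward $2$ from above as $x \to 0^+$, so for small $x$ one can take, say, $|b(x)| \leq 3\,|a(x)|$ (and trivially $|a(x)| \leq |b(x)|$).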