I'm looking for a little help regarding integration of series where the domain of integration gets very close to the edge of the series' domain of convergence. My particular case is the logistic function and its expansion via the geometric series in $e^{-x}$, $$ f(x) = \frac{1}{1+e^{-x}} = \sum_{n=0}^{\infty} (-1)^n e^{-nx}. $$ It's easy to check (the series is geometric with ratio $-e^{-x}$, or via the Alternating Series Test) that this series converges precisely for $x > 0$. What I'm trying to wrap my head around is the following. This particular integral of $f$ is pretty straightforward: $$ \int _{0}^{1} f(x) \, dx = \ln (1 + e^x) \Big\vert _0 ^1 = \ln (2). $$ However, if we consider the integral of the series, $$ \int _{0}^{1} \sum_{n=0}^{\infty} (-1)^n e^{-nx} \, dx, $$ it's not so clear to me how we may justify integrating all the way down to $x=0$ when the series does not actually converge at $x=0$. I've read other questions on this site pointing out that the Dominated Convergence Theorem allows you (especially in cases like these with alternating series, where Tonelli/Fubini can't help much) to exchange the limit with the integral sign, but (unless I'm missing something) this requires the sequence of partial sums to converge pointwise to $f$, which to my understanding fails at $x=0$.
So, how does one go about justifying a procedure like this? A naive term-by-term integration yields (for the $n \geq 1$ terms; the $n = 0$ term integrates to $x$) $$ \sum _{n=1}^{\infty} \frac{(-1)^{n+1}}{n} e^{-nx}, $$ which actually does converge at $x=0$, and correctly evaluates to $\ln (2)$. However, I'm not thoroughly convinced that integrating the series from $0$ to $1$ is legitimate. Any and all help is appreciated :)
EDIT: A comment kindly pointed out that the integral does NOT in fact equal $\ln(2)$, but rather $\ln(1+e) - \ln(2)$. Still, my doubt about the validity of the integration remains. If it's valid to integrate all the way to $0$, why is it so? If it's not, why not?
Indeed, you made a very nice observation, one that is often neglected by practitioners of the arcane art of integrals and series.
This type of technical issue is often overcome by realizing the given expression as the limit of perturbed expressions carrying an additional parameter. (In this regard, we might borrow the physics jargon 'regularization' for this technique.) Abel's Theorem is an archetypal example of this approach.
1. Let us consider OP's example in detail. Since $\frac{1}{1+e^{-x}} = 1 - \frac{1}{e^x+1}$, it suffices to treat the $n \geq 1$ part of the series, that is, the function $\frac{1}{e^x+1} = \sum_{n=1}^{\infty} (-1)^{n-1} e^{-nx}$ (valid for $x > 0$). One obvious resolution is to cut off the domain of integration near the origin. So let $\epsilon \in (0, 1)$ and consider
$$ \int_{\epsilon}^{1} \frac{1}{e^x + 1} \, \mathrm{d}x. $$
Then the Fubini-Tonelli Theorem is applicable, since
$$ \sum_{n=1}^{\infty} \int_{\epsilon}^{1} \left| (-1)^{n-1}e^{-nx} \right| \, \mathrm{d}x < \infty, $$
and so,
\begin{align*} \int_{\epsilon}^{1} \frac{1}{e^x + 1} \, \mathrm{d}x &= \sum_{n=1}^{\infty}(-1)^{n-1} \int_{\epsilon}^{1} e^{-nx} \, \mathrm{d}x \\ &= \sum_{n=1}^{\infty}(-1)^{n-1} \frac{e^{-n\epsilon} - e^{-n}}{n} \\ &= \log(1+e^{-\epsilon}) - \log(1 + e^{-1}). \end{align*}
Now letting $\epsilon \to 0^+$ shows that the original integral is equal to $\log 2 - \log(1+e^{-1})$. (Equivalently, since $\frac{1}{1+e^{-x}} = 1 - \frac{1}{e^x+1}$, this gives $\int_0^1 \frac{\mathrm{d}x}{1+e^{-x}} = 1 - \log 2 + \log(1+e^{-1}) = \log(1+e) - \log 2$, matching the direct computation.) So the inapplicability of the Fubini-Tonelli Theorem to the original integral is overcome by this cut-off.
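As a purely numerical sanity check (a quick sketch, not part of the argument; the helper names `lhs`, `rhs_series`, and `closed_form` are invented for illustration), one can compare a direct quadrature of $\int_\epsilon^1 \frac{\mathrm{d}x}{e^x+1}$ with the partial sums of the termwise-integrated series and with the closed form above:

```python
import math

def lhs(eps, n_panels=10_000):
    """Midpoint-rule approximation of the integral of 1/(e^x + 1) over [eps, 1]."""
    h = (1.0 - eps) / n_panels
    return sum(h / (math.exp(eps + (k + 0.5) * h) + 1.0) for k in range(n_panels))

def rhs_series(eps, terms=200):
    """Partial sum of sum_{n>=1} (-1)^(n-1) * (e^(-n*eps) - e^(-n)) / n."""
    return sum(
        (-1) ** (n - 1) * (math.exp(-n * eps) - math.exp(-n)) / n
        for n in range(1, terms + 1)
    )

def closed_form(eps):
    """log(1 + e^(-eps)) - log(1 + e^(-1))."""
    return math.log(1.0 + math.exp(-eps)) - math.log(1.0 + math.exp(-1.0))

eps = 0.1
print(lhs(eps), rhs_series(eps), closed_form(eps))
# All three agree; as eps -> 0+ they approach log 2 - log(1 + 1/e) ≈ 0.3799.
```

The alternating series converges fast here, so a couple hundred terms already match the quadrature to many digits.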
2. Of course, this cut-off is not the only way of perturbing the integral. For instance, we may introduce a parameter $r$ taking values in $(0, \infty)$ and consider
$$ I(r) := \int_{0}^{1} \frac{1}{e^x + r} \, \mathrm{d}x. $$
Then it is routine to prove that $I(r) \to I(1)$ as $r \to 1$ (for instance by dominated convergence, since the integrands are uniformly bounded for $r$ bounded away from $0$). Moreover, if $r \in (0, 1)$, then we may utilize the Fubini-Tonelli Theorem to compute
$$ I(r) = \sum_{n=1}^{\infty}(-1)^{n-1} r^{n-1} \int_{0}^{1} e^{-nx} \, \mathrm{d}x = \sum_{n=1}^{\infty}(-1)^{n-1} r^{n-1} \frac{1 - e^{-n}}{n} = \frac{\log(1+r) - \log(1 + r e^{-1})}{r}. $$
Then letting $r \uparrow 1$ yields the same answer as before.
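The agreement can again be checked numerically; as before, this is just an illustrative sketch with invented helper names (`I_numeric`, `I_closed`):

```python
import math

def I_numeric(r, n_panels=10_000):
    """Midpoint-rule approximation of I(r): the integral of 1/(e^x + r) over [0, 1]."""
    h = 1.0 / n_panels
    return sum(h / (math.exp((k + 0.5) * h) + r) for k in range(n_panels))

def I_closed(r):
    """The closed form (log(1 + r) - log(1 + r/e)) / r derived via Fubini-Tonelli."""
    return (math.log(1.0 + r) - math.log(1.0 + r * math.exp(-1.0))) / r

for r in (0.5, 0.9, 0.999):
    print(r, I_numeric(r), I_closed(r))
# The two columns agree for each r, and I_closed(r) -> log 2 - log(1 + 1/e)
# as r -> 1, matching the cut-off computation of part 1.
```

Note that the closed form extends continuously to $r = 1$, which is exactly the content of the limit $r \uparrow 1$ above.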