Let $s_0, s_1, \ldots$ be a completely monotone sequence. This means that, defining \begin{align*} (\nabla s)_n &= s_{n}-s_{n+1}\quad\text{and}\\ (\nabla^{r+1}s)_n &= (\nabla^{r}s)_n - (\nabla^{r}s)_{n+1}, \end{align*} we have $(\nabla^r s)_n\ge 0$ for all $r,n\ge 0$.
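For concreteness, here is a quick numerical sanity check of the definition on the illustrative example $s_n = \frac{1}{n+1}$ (which turns out to be completely monotone), using exact rational arithmetic:

```python
from fractions import Fraction

def nabla(seq):
    # forward difference: (nabla s)_n = s_n - s_{n+1}
    return [a - b for a, b in zip(seq, seq[1:])]

# Example sequence s_n = 1/(n+1)
N = 20
s = [Fraction(1, n + 1) for n in range(N)]

# rows[r] holds (nabla^r s)_n for n = 0, 1, ...
rows = [s]
for _ in range(10):
    rows.append(nabla(rows[-1]))

all_nonneg = all(x >= 0 for row in rows for x in row)  # True for this example
```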
I am looking for a simple proof of the fact that complete monotonicity implies log-convexity, that is $s_i^2\le s_{i-1}s_{i+1}$, that does not use the characterization that (minimal) completely monotone sequences are interpolated by completely monotone functions.
Thank you
We can use the characterization of completely monotone sequences:
$(s_n)$ is completely monotone if and only if there exists a positive measure $\mu$ on $[0,1]$ such that $$s_n = \int_0^1 x^n d \mu(x)$$
The necessity direction is not hard: we have $$(\nabla^r s)_n=\sum_{k=0}^r (-1)^k \binom{r}{k} s_{n+k}= \int_0^1 (1-x)^r x^n \, d\mu(x)\ge 0$$
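For a concrete check of this identity, take $\mu$ to be Lebesgue measure on $[0,1]$, so $s_n = \frac{1}{n+1}$ and the integral is the Beta integral $\int_0^1 (1-x)^r x^n \, dx = \frac{n!\,r!}{(n+r+1)!}$. A quick exact verification:

```python
from fractions import Fraction
from math import comb, factorial

def diff_formula(n, r):
    # (nabla^r s)_n = sum_k (-1)^k C(r,k) s_{n+k}, with s_m = 1/(m+1)
    return sum(Fraction((-1) ** k * comb(r, k), n + k + 1) for k in range(r + 1))

def beta_integral(n, r):
    # exact value of the Beta integral: integral_0^1 (1-x)^r x^n dx
    return Fraction(factorial(n) * factorial(r), factorial(n + r + 1))

# the alternating sum and the integral agree exactly
match = all(diff_formula(n, r) == beta_integral(n, r)
            for n in range(6) for r in range(6))  # True
```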
The sufficiency is the hard part; see the Hausdorff moment problem.
Now, any sequence of moments of a measure concentrated on $[0, \infty)$ is logarithmically convex. This follows immediately from the Cauchy–Schwarz inequality: writing $s_n = \int x^{(n-1)/2}\cdot x^{(n+1)/2}\, d\mu(x)$ gives $s_n^2 \le s_{n-1}\, s_{n+1}$. So you are done.
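As a quick sanity check of log-convexity for such moment sequences, here is a small numerical experiment; the discrete measure (the weights and support points) is an arbitrary illustrative choice:

```python
# Discrete measure mu = sum_i w_i * delta_{x_i} on [0, infinity),
# with moments s_n = sum_i w_i * x_i^n
weights = [0.5, 1.0, 2.0, 0.25]   # arbitrary positive weights
points = [0.1, 0.7, 1.3, 4.0]     # arbitrary points in [0, infinity)

s = [sum(w * x ** n for w, x in zip(weights, points)) for n in range(12)]

# log-convexity: s_n^2 <= s_{n-1} * s_{n+1} (small tolerance for float error)
log_convex = all(s[n] ** 2 <= s[n - 1] * s[n + 1] + 1e-12 for n in range(1, 11))
```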
Note: a sequence of moments of a measure on $[0,\infty)$ satisfies even more conditions than log convexity; see the Stieltjes moment problem. We require not just that some $2\times 2$ determinants be $\ge 0$: in fact all the minors of the infinite Hankel matrix $(a_{mn})_{m,n\ge 0}$, $a_{mn} = s_{m+n}$, have to be $\ge 0$ (this can be reduced to fewer conditions).
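As an illustration, for $s_n = \frac{1}{n+1}$ the Hankel matrix $a_{mn} = \frac{1}{m+n+1}$ is the Hilbert matrix, and its leading principal minors are indeed positive. A small exact check (Laplace expansion, fine for these sizes):

```python
from fractions import Fraction

def det(M):
    # determinant by Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j, entry in enumerate(M[0]):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

def hankel(k):
    # k x k Hankel matrix of s_n = 1/(n+1): the Hilbert matrix
    return [[Fraction(1, m + n + 1) for n in range(k)] for m in range(k)]

minors_positive = all(det(hankel(k)) > 0 for k in range(1, 6))  # True
```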
Summing up:
c. m. $\Longleftrightarrow$ moment sequence on $[0,1] \implies $ moment sequence on $[0, \infty)\ \ \implies $ log convex
$\bf{Added:}$ The answer above still uses the theory of moments. Let's see whether we can do without. The answer is "yes and no".
Consider the sequence $s=(s_n)_n$. We are given a bunch of linear inequalities for $(s_n)_n$, and we want to prove another one. A priori the target inequality is quadratic, but treating $s_1$ and $s_2$ as fixed coefficients we can view it as the linear condition $$s \cdot (s_2^2, -2 s_2 s_1, s_1^2, 0, \ldots, 0, \ldots) \ge 0$$
We could try to show that $c=(s_2^2, -2s_2 s_1, s_1^2, 0, \ldots)$ is a positive linear combination of the other linear conditions. This is not quite the case. However, it is true that for every $\epsilon> 0$ the vector $$c_\epsilon = (s_2^2 + \epsilon, -2 s_2 s_1, s_1^2, 0,\ldots)$$ is a positive linear combination of the given conditions. In translation: the polynomial $$(s_1 x - s_2)^2 + \epsilon$$ is a positive linear combination of the polynomials $x^n (1-x)^r$. And indeed, for all reals $a$, $b$ and every $\epsilon>0$, the polynomial $$(a u + b v)^2 + \epsilon(u+v)^2,$$ multiplied by a sufficiently large power $(u+v)^N$, has all coefficients positive (think $x = \frac{u}{u+v}$). This is the result of Hausdorff, generalized by Pólya's positivity theorem.
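This coefficient-positivity claim can be checked numerically. A minimal sketch, with the illustrative choices $a=1$, $b=-1$, $\epsilon=\frac{1}{10}$, searching for the smallest power $N$ that works:

```python
from fractions import Fraction
from math import comb

# (a*u + b*v)^2 + eps*(u+v)^2, as homogeneous coefficients of (u^2, u*v, v^2)
a, b, eps = 1, -1, Fraction(1, 10)  # illustrative choices
p = [a * a + eps, 2 * a * b + 2 * eps, b * b + eps]  # [11/10, -9/5, 11/10]

def times_power(p, N):
    # coefficients of p(u, v) * (u + v)^N, ordered by the power of v
    return [sum(p[i] * comb(N, j - i) for i in range(3) if 0 <= j - i <= N)
            for j in range(N + 3)]

# smallest N for which every coefficient is strictly positive
found = next((N for N in range(1, 60)
              if all(c > 0 for c in times_power(p, N))), None)
```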
OK, assuming this real-analysis fact, we get $$s \cdot ( s_2^2+ \epsilon, - 2 s_2 s_1, s_1^2, 0, \ldots) \ge 0,$$ that is, $s_2( s_0 s_2 - s_1^2) + \epsilon s_0 \ge 0$, for all $\epsilon>0$; letting $\epsilon \to 0$ gives $$s_2( s_0 s_2 - s_1^2) \ge 0$$
Now, if a c.m. sequence has $s_2=0$, then $s_1=0$ as well (in that case $s_n = 0$ for all $n \ge 2$, so $(\nabla^r s)_0 = s_0 - r s_1 \ge 0$ for every $r$, which forces $s_1 = 0$), and the inequality $s_0 s_2 - s_1^2 \ge 0$ holds trivially. If $s_2 > 0$, dividing the above by $s_2$ gives $s_0 s_2 - s_1^2\ge 0$.
Note: we are using Hausdorff's positivity result: a polynomial $P(x)$ that is $>0$ on $[0,1]$ is a positive combination of the polynomials $x^n(1-x)^r$.