A model is given a constant probability $p$ and number of iterations $n\geq2$ and follows the pattern
$$ \begin{align} c_2&=1\\ c_3&=1+(1-p)\\ c_4&=1+p(1-p)+(1-p)\\ c_5&=1+p(1-p)+(1-p)+p(1-p)^2\\ c_6&=1+p(1-p)+(1-p)+p(1-p)^2+p^2(1-p)^2\\ c_7&=1+p(1-p)+(1-p)+p(1-p)^2+p^2(1-p)^2+p^2(1-p)^3\\ \end{align} $$
from which I deduced the general formula
$$ c_n= \begin{cases} \sum^{\frac{n}{2}-1}_{k=0}p^k(1-p)^k+(1-p)\sum^{\frac{n}{2}-2}_{k=0}p^k(1-p)^k & \text{if } n \text{ is even}\\ \sum^{\frac{n+1}{2}-2}_{k=0}p^k(1-p)^k+(1-p)\sum^{\frac{n+1}{2}-2}_{k=0}p^k(1-p)^k & \text{if } n \text{ is odd} \end{cases}. $$
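As a quick sanity check (a small script of my own, not part of the original derivation; the helper name `c_general` is hypothetical), the general formula reproduces the listed values $c_2,\dots,c_7$ at a sample $p$:

```python
# Sanity check (not from the original post): the deduced general formula
# should reproduce the listed values c_2 .. c_7 at a sample p.
p = 0.3
q = 1 - p

def c_general(n, p):
    """General formula from the post, split by the parity of n."""
    q = 1 - p
    if n % 2 == 0:
        m = n // 2          # upper bounds n/2 - 1 and n/2 - 2, inclusive
        return (sum((p * q) ** k for k in range(m))
                + q * sum((p * q) ** k for k in range(m - 1)))
    m = (n + 1) // 2 - 1    # upper bound (n + 1)/2 - 2, inclusive
    return (sum((p * q) ** k for k in range(m))
            + q * sum((p * q) ** k for k in range(m)))

# The values listed term by term in the question
listed = {
    2: 1,
    3: 1 + q,
    4: 1 + p * q + q,
    5: 1 + p * q + q + p * q**2,
    6: 1 + p * q + q + p * q**2 + p**2 * q**2,
    7: 1 + p * q + q + p * q**2 + p**2 * q**2 + p**2 * q**3,
}
for n, value in listed.items():
    assert abs(c_general(n, p) - value) < 1e-12
```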
We are interested in the behavior as $n\to\infty$. However, I am having difficulties with the expressions in the upper bounds of the sums. A naive approach would be to replace the upper bounds with $\infty$,
$$ c_{n\to\infty}=\sum^\infty_{k=0}p^k(1-p)^k+(1-p)\sum^\infty_{k=0}p^k(1-p)^k $$
and use the fact that $p(1-p)<1$, which gives geometric series (in both the even and odd cases).
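Were this valid, both series would be geometric with ratio $p(1-p)<1$, so the value would evaluate to
$$ c_{n\to\infty}=\frac{1}{1-p(1-p)}+\frac{1-p}{1-p(1-p)}=\frac{2-p}{1-p(1-p)}. $$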
However, I don't believe this is correct. I am aware of the index-shifting identity $$ \sum^n_{k=1}a_k=\sum^{n+z}_{k=1+z}a_{k-z}, $$ which, applied to the present problem, leads to
$$ c_n= \begin{cases} \sum^{\frac{n}{2}}_{k=0}(p(1-p))^{k-2}+(1-p)\sum^{\frac{n}{2}}_{k=0}(p(1-p))^{k-4} & \text{if } n \text{ is even}\\ \sum^{\frac{n}{2}}_{k=0}(p(1-p))^{k-3}+(1-p)\sum^{\frac{n}{2}}_{k=0}(p(1-p))^{k-3} & \text{if } n \text{ is odd} \end{cases}, $$
which seems to show a different behavior as $n\to\infty$, and even diverges in the different cases.
How do you properly treat a (linear) mapping in the upper bound of a sum?
The case $p \in \{0,1\}$ is trivial, so I'm going to assume that $0 < p < 1$. Now, using this (and the immediate fact that $1-p < 1$), we can see that, regardless of parity, we always have
$$ c_n \leq 2\sum^{[\frac{n}{2}] +1}_{k=0}(p(1-p))^{k-4} $$
Here I bound the factor $(1-p) < 1$ in the second sum by $1$, and use that, since $p(1-p) < 1$, we have $(p(1-p))^j \leq (p(1-p))^k$ whenever $k \leq j$. Now, since $[\frac{n}{2}] + 1\leq n$ for $n \geq 2$,
$$ c_n \leq 2\sum^{n}_{k=0}(p(1-p))^{k-4} = 2(p(1-p))^{-4}\sum^{n}_{k=0}(p(1-p))^{k} $$
and since the latter is a partial sum of a geometric series with ratio $p(1-p) < 1$, it is bounded uniformly in $n$. As $c_n$ is also increasing (each step adds a positive term), it converges. Now, since we have convergence, the limit is the same for any subsequence. In particular, you could take $(c_{2n})_{n \in \mathbb{N}}$, for which you have an explicit expression that is computable, since it is a linear combination of geometric series.
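Concretely, taking the even-case formula from the question and letting both upper bounds tend to $\infty$,
$$ \lim_{n\to\infty} c_{2n} = \sum^{\infty}_{k=0}(p(1-p))^k+(1-p)\sum^{\infty}_{k=0}(p(1-p))^k = \frac{2-p}{1-p(1-p)}. $$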
Edit: to answer your last question, more generally, suppose that your partial sums $\sum_{i=1}^nc_i$ satisfy, for a certain convergent series $\sum_{n}a_n$,
$$ \sum_{i=1}^nc_i \leq \sum_{i = 1}^{M_n}a_i $$
with $M_n \to\infty$, where both are series of positive terms. Then, $\sum_{i=1}^nc_i$ converges. Indeed, since $M_n \to \infty$, we can assume without loss of generality that the sequence $(M_n)_{n \in \mathbb{N}}$ is strictly increasing: if not, define $\bar{M}_1 = M_1$ and $\bar{M}_i > \max\{\bar{M}_{i-1}, M_i\}$, which we can always do; then $\bar{M}_i \geq M_i$, so the inequality still holds, because adding more (positive) terms only makes the sum greater. Now, let
$$ S_n = \sum_{i=1}^nc_i, \ T_n = \sum_{i=1}^na_i $$
be the partial sums of each series. We are interested in concluding that $(S_n)_{n\in\mathbb{N}}$ converges. Since the terms are positive, $S_n$ is increasing and it will suffice to prove boundedness. Since $(M_n)_{n \in \mathbb{N}}$ is increasing, and $T_n$ converges by hypothesis, $T_{M_n}$ is a subsequence and therefore converges (to the same limit). Finally, restating the original inequality, we have $S_n \leq T_{M_n}$ and therefore
$$ \lim_{n \to \infty} S_n \leq \lim_{n \to \infty}T_{M_n} = \lim_{n \to \infty}T_{n} < \infty $$
which concludes the proof.
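As a numerical illustration (my own addition, not part of the original answer; `c_direct` is a hypothetical helper that accumulates the term pattern $1,\ (1-p),\ p(1-p),\ p(1-p)^2,\ \dots$ listed in the question), the sequence indeed increases monotonically to the geometric-series value $\frac{2-p}{1-p(1-p)}$:

```python
# Numerical illustration (not part of the original answer): build c_n directly
# from the term pattern in the question and compare it against the limit
# (2 - p) / (1 - p(1-p)) obtained by summing both geometric series.
def c_direct(n, p):
    q = 1 - p
    # c_n is the sum of the first n - 1 terms p^floor(j/2) * q^ceil(j/2),
    # j = 0, 1, 2, ...: 1, q, p*q, p*q^2, p^2*q^2, p^2*q^3, ...
    return sum(p ** (j // 2) * q ** ((j + 1) // 2) for j in range(n - 1))

for p in (0.1, 0.5, 0.9):
    limit = (2 - p) / (1 - p * (1 - p))
    # convergence to the claimed limit
    assert abs(c_direct(200, p) - limit) < 1e-12
    # monotone increase, consistent with the bounded-increasing argument above
    assert all(c_direct(n, p) <= c_direct(n + 1, p) for n in range(2, 50))
```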