I'm searching for an upper bound for the following sum:
$$\lim\limits_{M \rightarrow \infty}{\sum_{k=l}^M B_{k,p}(l)} = \lim\limits_{M \rightarrow \infty}{\sum_{k=l}^M {{k}\choose{l}} p^l(1-p)^{k-l}}$$
The success probability $p$ lies in $(0,1]$, and $l$ is an integer with $0 \le l \le M$.
After some brute-force substitutions, I'm pretty convinced that the limit of the sum is at most $1/p$ for all $l$ and all $p$, but I'm failing to prove it.
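The brute-force substitutions can be organized into a small script. The following is an illustrative sketch (the function name `partial_sum` is mine, not from the question) that evaluates the partial sums for a few values of $l$ and $p$ and compares them with $1/p$:

```python
from math import comb

def partial_sum(l, p, M):
    """Partial sum: sum_{k=l}^{M} C(k, l) * p**l * (1-p)**(k-l)."""
    return sum(comb(k, l) * p**l * (1 - p) ** (k - l) for k in range(l, M + 1))

# For each fixed l, the partial sums appear to increase toward 1/p:
for p in (0.25, 0.5, 0.9):
    for l in (0, 1, 5):
        print(l, p, partial_sum(l, p, M=2000), 1 / p)
```

For moderate $M$ the printed values are already very close to $1/p$, which is consistent with the conjecture.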
I'm able to handle the boundary cases $l=0$ and $l=M$. First, let $l=0$:
$$\sum_{k=0}^M {{k}\choose{0}} p^0(1-p)^{k-0} = \sum_{k=0}^M (1-p)^{k} =\frac{1-(1-p)^{M+1}}{1-(1-p)} = \frac{1-(1-p)^{M+1}}{p}$$
And then
$$\lim\limits_{M \rightarrow \infty}{\frac{1-(1-p)^{M+1}}{p}} = 1/p$$
Now let $l$ be $M$.
$$\sum_{k=M}^M {{k}\choose{M}} p^M(1-p)^{k-M}=p^M$$
and
$$\lim\limits_{M \rightarrow \infty}{p^{M}} = 0$$
How can we show that the limit is always between $0$ and $1/p$, for all $l$?
Your conjectured upper bound of $1/p$ is correct, and it actually turns out to be an equality!
$$ \sum_{k=l}^{\infty}\binom{k}{l}p^l(1-p)^{k-l}=\frac{1}{p},\qquad (\star) $$ for all integers $l\geq 0$ and for all $p\in (0,1]$. (And of course, the infinite sum is the same thing as writing $\lim_{M\to\infty}\sum_{k=l}^M$ since you are assuming that $l$ is fixed and does not depend on $M$.)
There are two proofs of $(\star)$ provided in this answer. The first proof is a direct computational proof which employs a variant of the binomial theorem. The second proof is more conceptual and uses probability theory to explain where the identity comes from.
First proof of $(\star)$:
This is a consequence of the binomial series, which is what you get when you apply the binomial theorem with a negative exponent as follows: $$ p^{-l-1}=\bigl(1-(1-p)\bigr)^{-l-1}=\sum_{n=0}^{\infty}\binom{-l-1}{n}\bigl(-(1-p)\bigr)^n.\qquad (\star\star) $$ To see why this is equivalent to $(\star)$, it is necessary to manipulate the right side. Using the identity for binomial coefficients with a negative upper argument (explained in the Wikipedia article on the binomial series), observe that $$ \binom{-l-1}{n}(-1)^n=\binom{n+l}{n}=\binom{n+l}{l}. $$ Substitute this into $(\star\star)$ to obtain that $$ p^{-l-1}=\sum_{n=0}^{\infty}\binom{n+l}{l}(1-p)^n=\sum_{k=l}^{\infty}\binom{k}{l}(1-p)^{k-l}, $$ where we have shifted the index from $n$ to $k=n+l$.
Finally, multiply through on both sides by $p^l$ to obtain the claimed identity $(\star)$. $\square$
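The sign-flip step $\binom{-l-1}{n}(-1)^n=\binom{n+l}{l}$ can be sanity-checked numerically. Here `gen_binom` is a small helper of my own (not a standard-library function) implementing the generalized binomial coefficient in exact integer arithmetic:

```python
from math import comb, factorial

def gen_binom(a, n):
    """Generalized binomial coefficient C(a, n) = a (a-1) ... (a-n+1) / n!,
    defined for any integer a and any n >= 0."""
    num = 1
    for i in range(n):
        num *= a - i
    # The falling factorial of an integer is exactly divisible by n!,
    # so integer floor division gives the exact value (even when negative).
    return num // factorial(n)

# Check the sign-flip identity C(-l-1, n) * (-1)^n == C(n+l, l):
for l in range(6):
    for n in range(10):
        assert gen_binom(-l - 1, n) * (-1) ** n == comb(n + l, l)
```

For example, $\binom{-3}{3}=\frac{(-3)(-4)(-5)}{3!}=-10$, and $(-1)^3\cdot(-10)=10=\binom{5}{2}$.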
Second proof of $(\star)$:
Consider a sequence of independent coin tosses, each coming up heads with probability $p$. Let $X_n$ be the number of heads after $n$ tosses, for any integer $n\geq 0$. The sequence $(X_n)_{n=0}^{\infty}$ always increases by $0$ or $1$. By focusing on the "flat parts" where it remains constant, we will derive the identity $(\star)$ without needing to do any calculations.
The formal definition of the "flat part" is as follows. For each integer $l\geq 0$, consider the (random) interval $I_l$ which is the set of integers $n$ such that $X_n=l$. Let $Y_l$ denote the cardinality of $I_l$, so that $Y_l$ is the length of the "flat part" at height $l$.
Observe that $Y_l$ is a geometric random variable with mean $1/p$: indeed, it represents how much time we remain "stuck" at $l$ heads, before finally flipping head number $l+1$.
On the other hand, we can use the linearity of expectation to write the mean of $Y_l$ as a sum of binomial probabilities: $$ \mathbb EY_l=\sum_{n=0}^{\infty}\mathbb P(n\in I_l). $$ Why is $\mathbb P(n\in I_l)$ a binomial probability? It is because the event $\{n\in I_l\}$ is equivalent to saying that there were precisely $l$ heads after $n$ coin tosses, which is $B_{n,p}(l)$ using the notation from your question. Thus $$ \frac{1}{p}=\mathbb EY_l=\sum_{n=0}^{\infty}\binom{n}{l}p^l(1-p)^{n-l}, $$ completing the second proof of $(\star)$.
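The "flat part" argument can also be checked by Monte Carlo simulation. This sketch (with a hypothetical helper `flat_length`) samples $Y_l$ directly from the coin-toss sequence and compares the empirical mean with $1/p$:

```python
import random

def flat_length(l, p, rng):
    """Sample Y_l: the number of indices n >= 0 with X_n = l, where X_n
    counts heads among the first n tosses of a p-biased coin."""
    heads = 0
    count = 0
    while heads <= l:
        if heads == l:
            count += 1                 # the current index n lies in I_l
        heads += rng.random() < p      # toss the next coin
    return count

rng = random.Random(0)
trials = 100_000
l, p = 3, 0.4
mean = sum(flat_length(l, p, rng) for _ in range(trials)) / trials
print(mean, 1 / p)  # empirical mean should be close to 1/p = 2.5
```

Note that the distribution of $Y_l$ does not depend on $l$, in agreement with the geometric description above.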
The following is my original answer, applying to a different question than what was asked, due to a misreading of the question on my part.
Consider a random variable, $X_M$, equal to the number of heads obtained after flipping $M$ independent biased coins each with probability $p$ of being heads. You are also given a fixed integer $l$. The quantity you are trying to bound is equal to $$ \lim_{M\to\infty}\mathbb P(X_M\geq l). $$ In fact, we can use the law of large numbers to compute the answer. The law of large numbers says that the random variable $X_M/M$ converges to the deterministic quantity $p$. This means that if $A$ is any subset of the interval $[0,1]$, we have that $$ \lim_{M\to\infty}\mathbb P\left(\frac{X_M}{M}\in A\right)=\begin{cases}1,&\text{if }p\in A\\0,&\text{if }p\not\in A.\end{cases} $$
Let's focus on the case $p>0$, and let $A=[\tfrac{p}{2},1]$. So the law of large numbers tells us that $$ \lim_{M\to\infty}\mathbb P(X_M\geq Mp/2)=1. $$ On the other hand, for all $M>2l/p$ we have that $\mathbb P(X_M\geq Mp/2)\leq \mathbb P(X_M\geq l)$, therefore $$ \lim_{M\to\infty}\mathbb P(X_M\geq l)\geq \lim_{M\to\infty}\mathbb P(X_M\geq Mp/2)=1. $$ But the limit on the left can be no greater than $1$, so we have actually shown that $\lim_{M\to\infty}\mathbb P(X_M\geq l)=1$. And this answers your question, when $p>0$. For the $p=0$ case, things are much simpler, since all terms in the sum except for the first vanish.
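This limit, too, can be checked numerically with an exact binomial tail computation. The helper `prob_at_least` below is illustrative (my naming, not from the answer); it computes $\mathbb P(X_M\geq l)$ via the complement of the lower tail:

```python
from math import comb

def prob_at_least(M, l, p):
    """P(X_M >= l) for X_M ~ Binomial(M, p), via the complement of the lower tail."""
    return 1.0 - sum(comb(M, j) * p**j * (1 - p) ** (M - j) for j in range(l))

# With l fixed, the probability climbs toward 1 as M grows:
for M in (10, 50, 200, 1000):
    print(M, prob_at_least(M, 3, 0.3))
```

Already at $M=200$ the probability is indistinguishable from $1$ at double precision, matching the law-of-large-numbers argument.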