In $n$ trials, each with success-probability $p$, what is the probability of at least $k$ successes, $P[n,k]$ ?
The answer depends on the dependence between the trials:
A. If the trials are independent, then the number of successes is a Binomial variable, so: $$ P^{\text{ind}}[n,k] = \sum_{i=k}^n {n\choose i} p^i (1-p)^{n-i} $$
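For later comparison, $P^{\text{ind}}[n,k]$ is just the upper tail of a Binomial$(n,p)$ distribution; a minimal Python sketch (the function name `p_ind` is mine):

```python
from math import comb

def p_ind(n, k, p):
    """P^ind[n,k]: probability of at least k successes in n independent trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# e.g. p_ind(3, 2, 0.5) gives 0.5: the states 011, 101, 110, 111 each have mass 1/8
```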
B. If the trials can be arbitrarily dependent, then $P[n,k]$ might be 0. E.g., take $p=1/2$, $n=2$, $k=2$, and assume $X_2 = 1 - X_1$ (trial 2 fails iff trial 1 succeeds; from Henry's comment).
C. Now suppose that the variables are dependent "in the good direction": $$ \forall i_1,\ldots,i_j: \Pr[X_{i_1}=1\cap\cdots\cap X_{i_j}=1] \geq \Pr[X_{i_1}=1]\cdots\Pr[X_{i_j}=1] = p^j $$ that is, if we already had some successes, the probability of another success weakly increases.
Here, $P[n,k]$ cannot be zero. A trivial lower bound is $P[n,k] \geq p^k$ (as commented by muzzlator), since even if we consider only trials $1,\ldots,k$, the assumption implies that the probability that all of them succeed is at least $p^k$.
Is there a better lower bound for $P[n,k]$ in this case?
Initially I thought that $P[n,k]\geq P^{\text{ind}}[n,k]$, since intuitively, the dependence between the variables can only increase the number of successes. However, this is not true. As a simple example (from muzzlator's comment), take $k=1, n\to\infty$ and assume that all variables are the same: $X_i=X_1$ for all $i$. Then, $P^{\text{ind}}[n,1]\to 1$ while $P[n,1]=p$.
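The gap in this counterexample is easy to quantify: for independent trials $P^{\text{ind}}[n,1] = 1-(1-p)^n \to 1$, while the fully correlated construction stays at $p$ for every $n$. A quick numeric check (a hedged sketch; the function name is mine):

```python
p = 0.5

def p_ind_at_least_one(n):
    # For independent trials: P^ind[n,1] = 1 - (1-p)^n
    return 1 - (1 - p) ** n

for n in (1, 10, 100):
    print(n, p_ind_at_least_one(n))  # approaches 1 as n grows

# With X_i = X_1 for all i, P[n,1] = Pr[X_1 = 1] = p = 0.5 for every n.
```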
So, what is a correct lower bound on $P[n,k]$?
The answer to the question whether $p^k$ is the best lower bound is: it depends. For some $p$, $n$, $k$ it is possible to achieve $P[n,k]=p^k$; for others it is not. The precise bound is derived below, in the sections: Examples, General case, Necessity of all constraints.
Examples
I found those by writing the conditions as a linear programming problem and solving it with R. The simplest example was $n=3$ and $k=2$. The tables below give the probability for each of the $2^n$ "states", i.e. unique combinations of zeros and ones of the variables. For $p=0.5$ the minimum $p^2=0.25$ is achieved by the solution:
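The original solutions were found with R; below is a hedged Python/SciPy sketch of the same linear program for $n=3$, $k=2$ (the variable layout and names are my own). It minimises the mass on states with at least $k$ successes subject to the marginal and dependency constraints:

```python
from itertools import combinations, product
from scipy.optimize import linprog

n, k, p = 3, 2, 0.5
states = list(product([0, 1], repeat=n))      # the 2^n states

# Objective: minimise P[n,k], the total mass on states with >= k successes
c = [1.0 if sum(s) >= k else 0.0 for s in states]

# Equalities: masses sum to 1, and Pr[X_i = 1] = p for each i
A_eq = [[1.0] * len(states)] + [[float(s[i]) for s in states] for i in range(n)]
b_eq = [1.0] + [p] * n

# Dependency condition C.: Pr[X_{i_1}=...=X_{i_j}=1] >= p^j for every
# index set of size j >= 2, rewritten as -Pr[...] <= -p^j for linprog
A_ub, b_ub = [], []
for j in range(2, n + 1):
    for idx in combinations(range(n), j):
        A_ub.append([-1.0 if all(s[i] for i in idx) else 0.0 for s in states])
        b_ub.append(-(p ** j))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(res.fun)  # 0.25 = p^2 for p = 0.5; rerunning with p = 0.9 gives 0.85
```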
For $p=0.9$ the optimum solution has $P[3,2]=0.85>0.81=0.9^2$:
Why no solution achieves $P[3,2]=p^2$ for $p=0.9$
Note that the dependency constraint C. as well as the objective function $P[n,k]$ are invariant under permutations. Hence if there is any solution at all, there is also one invariant under permutations (exchangeable), and there is no loss of generality in discussing only exchangeable solutions.
To generalise the failed example above, let $x$ denote the mass put on the all-ones state (state 7). The remaining mass is then distributed over the states 1, 2, 4 with a single success and over state 0. Due to exchangeability, each of the states 1, 2, 4 receives the same mass, call it $y$; the rest, allocated to state 0, is $z$.
Using exchangeability again this leads to the equations $$ x+y=p \quad \text{ and }\quad x+3y+z=1 \quad\text{with } z\geq 0.$$ The former expresses the constraint $\Pr[X_1=1]=p$, the latter the fact that all probabilities add up to one. Now set $x=p^k=p^2$; then $y=p-p^2$ and the remaining equation reads $$ z = 1- 3p + 2p^2.$$ A quick check reveals that $z(0.9)=-0.08 < 0$ fails, while $z(0.5) = 0$. This corresponds to the probability of state 0 in the solution above and shows that the threshold for $p$ is indeed $0.5$ in this case (the other root is $1$).
General case
First one needs to convince oneself that the strategy minimising $P[n,k]$ puts mass only on the single all-ones state (mass $m$, ideally $m=p^k$), on the $\binom{n}{k-1}$ states with $k-1$ successes, and on the 0-state. The reason is efficiency: you get the best possible overlap when allocating to states with a maximal number of ones. Note that, unless you are exactly at the threshold, any mass remaining after condition C. is satisfied is allocated to the 0-state.
Case $k=1$: This case always achieves the minimum $p^k=p$. Just assign mass $p$ to the all-ones state and $1-p$ to the 0-state; all constraints in C. are then satisfied since $p\geq p^j$. So assume $k\geq 2$ from now on.
Due to exchangeability again, all states with $k-1$ successes get assigned the same mass $y$. As in the example, $y$ is determined by $$\Pr[X_1=1]=p = m + \binom{n-1}{k-2} y \tag{$*$}$$ since $\binom{n-1}{k-2}$ is the number of states with $k-1$ successes and $X_1=1$.
The 0-state constraint reads: $$ z = 1 - m - \binom{n}{k-1}y\geq 0.$$
The $j$-dependency constraint for $2\leq j < k$, i.e. $\Pr[X_1=1, X_2=1, \ldots,X_j=1]\geq p^j$, is $$ m + \binom{n-j}{k-j-1}y\geq p^j.$$
Since the strategy puts all mass beyond the $(k-1)$-success states in the all-ones state, the remaining dependency constraints for $k\leq j \leq n$ are straightforward. We need $m\geq p^j$, and since $p^j\geq p^{j+1}$ this reduces to the single constraint $$ m\geq p^k.$$
To abbreviate, write $b_j=\binom{n-j}{k-1-j}$ and use the equality constraint $(*)$ to eliminate $y$ from the inequalities. The 0-constraint is then $$ m\geq \frac{b_1 - b_0 p}{b_1-b_0}$$ (note the reversal of the inequality since $b_1<b_0$), and the $j$-dependency constraints for $2\leq j \leq k-1$ become: $$ m \geq \frac{b_1p^j - b_j p}{b_1 - b_j}.$$
The smallest $m$ satisfying all constraints is the maximum of the right-hand sides, i.e.: $$ P[n,k] = m = \max\Big\{\, p^k,\ \frac{b_1 - b_0 p}{b_1-b_0},\ \max_{2\leq j\leq k-1} \frac{b_1 p^j - b_j p}{b_1 - b_j} \,\Big\}. $$
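Collecting the constraint $m\geq p^k$, the 0-constraint and the $j$-dependency constraints from above, the bound is easy to evaluate; a small Python sketch for $k\geq 2$ (the function name `min_P` is mine):

```python
from math import comb

def min_P(n, k, p):
    """Minimal P[n,k] under dependency condition C., for k >= 2.

    Takes the maximum of m >= p^k, the 0-constraint and the
    j-dependency constraints, with b_j = C(n-j, k-1-j).
    """
    b = lambda j: comb(n - j, k - 1 - j)
    bounds = [p ** k, (b(1) - b(0) * p) / (b(1) - b(0))]
    bounds += [(b(1) * p ** j - b(j) * p) / (b(1) - b(j)) for j in range(2, k)]
    return max(bounds)

print(min_P(3, 2, 0.5))   # 0.25 = p^2, matching the example above
print(min_P(3, 2, 0.9))   # ~0.85, the optimum found by the LP for p = 0.9
```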
Necessity of all constraints
The constraints are not redundant, as simple examples show. For $n=5$, $k=3$ and $p=\frac{2}{5}$ the maximum over the constraints in the order above is $$ \max \Big\{ \frac{8}{125},\ 0,\ \frac{10}{125} \Big\} = \frac{10}{125},$$ which means that the 2-dependency bound is decisive. For $n=5$, $k=4$ and $p=\frac{3}{5}$ the 3-dependency bound bites: $$ \max \Big\{ \frac{81}{5^4},\ 0,\ \frac{75}{5^4},\ \frac{87}{5^4} \Big\} = \frac{87}{5^4}.$$ An example where a bound different from $j=k-1$ is active is provided by $n=8$, $k=4$ and $p=\frac{3}{8}$, with $$ \max \Big\{ 0.0198,\ 0,\ 0.0469,\ 0.0366\Big\} = 0.0469. $$