While learning about sufficient statistics, I came across the following example in my text that I don't understand:
Let $X_1,\ldots,X_n \sim \operatorname{Bernoulli}(p)$. Then $\mathcal{L}(p) = p^S(1-p)^{n-S}$ where $S = \sum_i X_i$, so $S$ is sufficient.
Intuitively, I understand a statistic being sufficient to mean that if I know it, I should be able to calculate the likelihood function without knowing the actual value of the parameter $p$. But I don't see how that's the case here: knowing $S$ and $n$ isn't enough, since I still need $p$ (and hence $1-p$) to evaluate the function, don't I?
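To make my confusion concrete, here is a small sketch (variable and function names are my own) that computes the likelihood at a given $p$ in two ways: once from the full sample $X_1,\ldots,X_n$, and once from $S$ and $n$ alone. The two always agree, which I take to be what the text means, but in both cases $p$ is still an input:

```python
import numpy as np

# Simulate n Bernoulli draws; the true p is used only to generate data.
rng = np.random.default_rng(0)
n = 10
x = rng.integers(0, 2, size=n)  # sample of 0s and 1s
S = x.sum()                     # the statistic S = sum of the X_i

def likelihood_full(p, x):
    """Likelihood evaluated from the full sample x."""
    return np.prod(p ** x * (1 - p) ** (1 - x))

def likelihood_from_S(p, S, n):
    """Likelihood evaluated knowing only S and n."""
    return p ** S * (1 - p) ** (n - S)

# For any candidate value of p, the two computations agree:
# the data enter the likelihood only through S.
for p in [0.2, 0.5, 0.9]:
    assert np.isclose(likelihood_full(p, x), likelihood_from_S(p, S, n))
```

So the function of $p$ is recoverable from $(S, n)$ without the individual observations, yet evaluating it at any point still requires plugging in a value of $p$.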