This is an easy question about taking expectations that I keep confusing myself with. In the simplest setting, suppose I have $N$ machines, each of which outputs the value $1$ when chosen. I choose $K$ machines simultaneously, as a batch. I'll denote the probability of machine $i$ being included in the batch by $p_i$, defined from a frequentist perspective (the long-run fraction of repeated $K$-batch experiments in which machine $i$ appears) if it's not otherwise well defined.
I want to study the expectation of the sum of the values the machines output. Denote the $K$-batch by $S_K \subset \{1,\dots,N\}$. Clearly we have $$\mathbb{E}\left[ \sum_{i \in S_K} 1\right] = K.$$ However, I want to think about this a different way. Since there are ${N\choose K}$ possible realizations of the $K$-batch, which I'll label $S_K^c$ for $c \in \{1,\dots,{N\choose K}\}$, I want to compute the expectation by summing directly over these realizations. But if I take the probability of realization $S_K^c$ to be $\prod_{i\in S_K^c} p_i$, I get $$\mathbb{E}\left[ \sum_{i \in S_K} 1\right] =\sum_{c=1}^{{N\choose K}} K \left(\prod_{i\in S_K^c} p_i \right) \neq K$$ in general.
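To make the discrepancy concrete, here is a small sanity check I ran (my own example, assuming the batch is chosen uniformly among all $K$-subsets, so that $p_i = K/N$ for every machine): summing $K$ against the true batch probabilities $1/\binom{N}{K}$ recovers $K$, while plugging in the product of marginals does not.

```python
from itertools import combinations
from math import comb, prod

N, K = 5, 2
p = [K / N] * N  # marginal inclusion probabilities under uniform selection

batches = list(combinations(range(N), K))

# True expectation: each batch occurs with probability 1 / C(N, K),
# and contributes K to the sum of outputs.
true_exp = sum(K / comb(N, K) for _ in batches)              # approx. 2.0 = K

# Product-of-marginals computation from the displayed equation:
# treats P(batch) as the product of the p_i, which overcounts here.
naive_exp = sum(K * prod(p[i] for i in b) for b in batches)  # approx. 3.2, not K
```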
So what is going on here? My intuition is that the probability of $S_K^c$ being selected as the batch cannot be computed as $\prod_{i\in S_K^c} p_i$: even though I obtained the $p_i$ by repeating the $K$-batch experiment many times from a frequentist perspective, the probability of machine $j$ being selected, without replacement, after machine $i \neq j$ has already been selected is not $p_j$. The multiplicative law fails because the inclusion events are not independent.
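A short enumeration illustrates the dependence (again my own example, with a uniformly chosen $K$-subset): the marginal inclusion probability of a machine is $K/N$, but conditional on another machine already being in the batch it drops to $(K-1)/(N-1)$.

```python
from itertools import combinations

N, K = 5, 2
batches = list(combinations(range(N), K))  # all C(N, K) batches equally likely

# Marginal inclusion probability of machine 1: K / N
p_j = sum(1 in b for b in batches) / len(batches)        # 0.4

# Conditional probability that machine 1 is included,
# given that machine 0 is included: (K - 1) / (N - 1)
with_i = [b for b in batches if 0 in b]
p_j_given_i = sum(1 in b for b in with_i) / len(with_i)  # 0.25
```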
If so, what is the correct way (or technique) to compute the probability of $S_K^c$ being selected? For example, would I have to define $p_i$ differently somehow? I believe there is a similar question here: Sampling a sequence without replacement with non-uniform probability of sampling each element, but it's a bit difficult for me to follow. In particular, I don't see why renormalizing the remaining weights after each draw gives the correct probability for the later machines being selected, though it seems intuitive.
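For reference, here is my understanding of the renormalization scheme from the linked question, as a sketch (the weights $w_i$ here are hypothetical sampling weights, not the marginal $p_i$ above): after each draw, the remaining weights are divided by their new sum, so each step is a valid conditional distribution, and by the chain rule the probabilities of all ordered $K$-sequences sum to $1$.

```python
from itertools import permutations

w = [0.1, 0.2, 0.3, 0.4]  # hypothetical per-machine sampling weights
N, K = 4, 2

def seq_prob(seq, w):
    """Probability of drawing `seq` in order, renormalizing the
    remaining weights after each without-replacement draw."""
    prob, remaining = 1.0, sum(w)
    for i in seq:
        prob *= w[i] / remaining  # conditional prob. of i among survivors
        remaining -= w[i]         # i is removed; weights renormalize next step
    return prob

# Chain rule: the probabilities of all ordered K-sequences sum to 1,
# so renormalization defines a proper distribution over batches.
total = sum(seq_prob(s, w) for s in permutations(range(N), K))

# The inclusion probability of machine i is the sum over all
# sequences containing i; these inclusion probabilities sum to K.
incl = [sum(seq_prob(s, w) for s in permutations(range(N), K) if i in s)
        for i in range(N)]
```

Note that the resulting inclusion probabilities `incl` are generally not proportional to the weights $w_i$, which is one way to see why the batch probability is not a simple product of marginals.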