I want to describe a thought experiment to explore independent events, and whether they may actually be linked.
You are given a set of independent coin generators.
Coin generators are true random number generators that return 'Heads' or 'Tails'.
The probability that any given coin generator returns heads is $0.5$. The set of all of these coin generators is $S$.
$\# S = n$.
You fire all the coin generators, and they all produce their results.
Before observing any of the results, you split the coin generators into two groups, $A$ and $B$.
The selection of each group is random. $\# A + \# B = n$.
You take two pieces of paper and, on each, write down your expectation for the number of heads in the corresponding group.
You then observe $A$.
Do you change your expectation for $B$?
Should you?
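To make the protocol concrete, here is a minimal sketch of one run (the parameter choices $n = 8$ and $\#A = 6$ are mine, not part of the problem):

```python
import random

def run_protocol(n=8, size_a=6, rng=random):
    """One run of the thought experiment: fire n fair coin generators,
    split them randomly into groups A and B, and record the expectations
    we would write down before observing anything."""
    # Fire all generators at once; the results exist but are unobserved.
    coins = [rng.choice(['H', 'T']) for _ in range(n)]
    # Randomly select which generators go into A; B is the complement.
    a_idx = set(rng.sample(range(n), size_a))
    A = [coins[i] for i in range(n) if i in a_idx]
    B = [coins[i] for i in range(n) if i not in a_idx]
    # Before observing, the expectation for heads in each group is half its size.
    expected_heads_A = len(A) / 2
    expected_heads_B = len(B) / 2
    return A, B, expected_heads_A, expected_heads_B

A, B, eA, eB = run_protocol()
print(eA, eB)  # 3.0 1.0
```

The question is whether, after uncovering $A$, the number written for $B$ should change.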
The random variables of interest are ${\#}h(A)$ and ${\#}h(B)$.
A few things I feel need to be made explicit. At the time you conducted the experiment (immediately after firing the coin generators and before splitting $S$), the number of heads in $S$ was fixed. Let's call the set of all heads in $S$ $H$.
For each $V \subset S$, the set of all heads in $V$ is $h(V)$ (so $H = h(S)$).
The selection process is random; this means the ratio $\frac{{\#}A}{{\#}S} : \frac{{\#}h(A)}{{\#}H}$ may be $\gt 1, \, \lt 1$ or $= 1$. However, for a given ${\#}A$ and ${\#}H$, only some of the $n\choose {\#}A$ possible selections produce $\frac{{\#}A}{{\#}S} : \frac{{\#}h(A)}{{\#}H} = 1$, namely those with ${\#}h(A) = {\#}A \, {\#}H / n$ (which must be an integer).
$A = B'$ and vice versa (each group is the complement of the other in $S$).
Let's call the sampling from $S$ such that $\frac{{\#}A}{{\#}S} = \frac{{\#}h(A)}{{\#}H}$ the equal sampling $(T_E)$.
Let's call the sampling from $S$ such that $\frac{{\#}A}{{\#}S} \lt \frac{{\#}h(A)}{{\#}H}$ (i.e. $A$ receives more than its proportional share of heads) the $A$-biased sampling $(T_A)$.
Let's call the sampling from $S$ such that $\frac{{\#}A}{{\#}S} \gt \frac{{\#}h(A)}{{\#}H}$ the $B$-biased sampling $(T_B)$.
For a uniformly random selection of a size-${\#}A$ group, ${\#}h(A)$ follows a hypergeometric distribution, so
$$Pr(T_E) = \frac{\binom{{\#}H}{k}\binom{n - {\#}H}{{\#}A - k}}{\binom{n}{{\#}A}}, \qquad k = \frac{{\#}A \, {\#}H}{n},$$
when $k$ is an integer (and $Pr(T_E) = 0$ otherwise), with
$$Pr(T_A) + Pr(T_B) = 1 - Pr(T_E).$$
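Rather than relying on a closed form, the three cases can be counted directly. This sketch (my own check, not part of the question) fixes a particular heads count ${\#}H$ and enumerates every equally likely size-${\#}A$ selection, counting how many give $A$ exactly its proportional share of the heads, more than it, or fewer:

```python
from itertools import combinations
from fractions import Fraction

def classify_selections(n, n_a, n_h):
    """For a fixed outcome with n_h heads among n coins, enumerate all
    size-n_a selections of group A and compare A's share of the heads
    (#h(A)/#H) with its share of the coins (#A/#S)."""
    heads = set(range(n_h))  # label coins 0..n_h-1 as the heads
    equal = more = fewer = 0
    for sel in combinations(range(n), n_a):
        h_a = sum(1 for i in sel if i in heads)
        # Compare #h(A)/#H with #A/n using exact rational arithmetic.
        if Fraction(h_a, n_h) == Fraction(n_a, n):
            equal += 1
        elif Fraction(h_a, n_h) > Fraction(n_a, n):
            more += 1
        else:
            fewer += 1
    return equal, more, fewer

print(classify_selections(8, 6, 4))  # (16, 6, 6)
```

For $n = 8$, ${\#}A = 6$, ${\#}H = 4$: of the $28$ possible selections, $16$ are proportional and $6$ lean each way, so proportional sampling is common but far from guaranteed.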
I feel the above should be kept in mind when we consider whether ${\#}h(A)$ affects $E\left({\#}h(B)\right)$. For one, observing a higher-than-expected number of heads in $A$ may raise the probability of the $A$-biased sampling.
I think that, depending on what I observe in $A$, I may revise my beliefs about $B$. Assuming $n = 8$, ${\#}A = 6$, and ${\#}h(A) = 6$, I will increase my posterior probability that the selection favoured $A$ over $B$ in the distribution of heads. This may in turn lead me to lower my estimate of the number of heads in $B$.
I think my stance of possibly shifting my beliefs about $B$ based on my observation of $A$ is legitimate, as the selection process is random and may lead to an unfair distribution of heads.
Should we shift our beliefs after observing $A$ (and more importantly, why so)?
NOTE:
I think it is important to draw a distinction between this experiment and the gambler's fallacy. We do not first generate $A$ and then subsequently generate $B$ (in which case $h(A)$ would be independent of $h(B)$), we generate $S$. We do not observe the number of heads or tails in $S$. The coin generators may be fired in any order (temporal or otherwise), but we do not observe $S$ until after all $x \in S$ have finished generating.
We then randomly sample $S$ into $A$ and $B$ ($= A'$). After sampling $S$, we estimate the number of heads in $A$ and $B$ $\left(E\left({\#}h(A)\right) \text{ and } E\left({\#}h(B)\right) \text{ respectively}\right)$. After our estimation, we observe $A$: should we update $E\left({\#}h(B)\right)$ in light of ${\#}h(A)$? I don't think it's as clear-cut as people seem to be making it out to be.
Assuming that the generators are independent and of known bias, no. Knowing some subset of the coin flips, even if that subset was chosen randomly, does not give you any information about the remaining coin flips. Saying "but $8$ heads is super unlikely, so since I've seen $6$ it's much more likely that it's $6$ heads and $2$ tails rather than $8$ heads and $0$ tails" is just the gambler's fallacy.
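This can be checked exactly for the $n = 8$, ${\#}A = 6$, ${\#}h(A) = 6$ case discussed above. A brute-force sketch (my own check): enumerate every equally likely (outcome, split) pair, condition on ${\#}h(A) = 6$, and average ${\#}h(B)$:

```python
from itertools import product, combinations

def expected_heads_B_given_A(n=8, size_a=6, observed_heads_a=6):
    """Exact E(#h(B) | #h(A) = observed_heads_a): enumerate all 2^n coin
    outcomes and all C(n, size_a) splits (all equally likely), condition
    on the observed heads count in A, and average the heads count in B."""
    total = count = 0
    for coins in product([0, 1], repeat=n):          # 1 = heads
        for sel in combinations(range(n), size_a):
            h_a = sum(coins[i] for i in sel)
            if h_a == observed_heads_a:
                total += sum(coins) - h_a            # heads in B
                count += 1
    return total / count

print(expected_heads_B_given_A())  # 1.0
```

The conditional mean is exactly the unconditional one, ${\#}B/2 = 1$: the randomness of the split does not couple the two groups.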
Assuming you agree that the traditional gambler's fallacy is a fallacy, the way I would assure you that the random element doesn't matter is this: assume that instead of flipping coins one by one and looking after each flip, you flip a large number of them without looking, rearrange them, and then pick a random number $k$ and uncover $k$ coins one by one. Symmetry makes gambling on the next uncovered coin the same as being asked to gamble on a coin flip after $k$ observations (where $k$ was chosen randomly by the casino). If we shout 'tails is due! I bet on tails', we are guilty of the gambler's fallacy. But this is just what we'd be doing in your situation.
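The symmetry claim can also be verified exactly. This sketch (my own, with small parameters) enumerates all equally likely sequences, conditions on the first $k$ uncovered coins showing some number of heads, and computes the probability that the next coin is heads (since the coins are i.i.d., a uniform rearrangement makes any fixed uncovering order equivalent):

```python
from itertools import product

def prob_next_heads(n_coins=8, k=6, heads_seen=6):
    """Exact P(coin k is heads | first k coins show heads_seen heads),
    for n_coins fair, independent coins uncovered in a fixed order."""
    match = match_and_heads = 0
    for coins in product([0, 1], repeat=n_coins):    # 1 = heads
        if sum(coins[:k]) == heads_seen:
            match += 1
            match_and_heads += coins[k]
    return match_and_heads / match

print(prob_next_heads())  # 0.5
```

Betting on 'due' tails after a run of heads gains nothing: the conditional probability stays at $0.5$.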
Addendum
I suppose, since you never explicitly said they were independent or of known bias in your question, I should add that if they are not independent (or you don't know whether they're independent) or are of unknown bias, then all bets are off. Still, the most plausible such scenario actually calls for you to predict more heads if you see a lot of heads in your first sample: seeing a lot of heads is evidence that these machines are biased toward flipping heads.
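That reversal can be made concrete with a standard Beta-Binomial sketch (my own illustration; the uniform prior is an assumption, not something from the question). If the generators share an unknown heads probability $p$ with a uniform prior, then observing $6$ heads in the $6$ coins of $A$ gives a Beta$(7, 1)$ posterior, and the expected number of heads among the $2$ coins of $B$ rises above $1$:

```python
def posterior_expected_heads(heads_a, flips_a, size_b, alpha=1.0, beta=1.0):
    """Beta(alpha, beta) prior on the unknown shared heads probability p.
    After heads_a heads in flips_a flips, the posterior is
    Beta(alpha + heads_a, beta + flips_a - heads_a); its mean is the
    posterior predictive P(heads), times size_b for E(#h(B))."""
    post_a = alpha + heads_a
    post_b = beta + flips_a - heads_a
    p_heads = post_a / (post_a + post_b)
    return size_b * p_heads

print(posterior_expected_heads(6, 6, 2))  # 1.75
```

Under unknown shared bias the update runs opposite to the gambler's-fallacy intuition: many heads in $A$ raises, rather than lowers, the expected heads in $B$.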