Page 71 of "Information Theory, A Tutorial Introduction" states the following:
More importantly, as the messages are allowed to get longer, the law of large numbers guarantees that almost all messages generated will contain $nP$ $1$s.
As I understand the law of large numbers (both the weak and the strong version), it makes a statement about averages. I'd expect the probability that a message contains exactly $nP$ $1$s to actually be vanishingly small as $n$ grows.
(Here $n$ is the message length and $P$ is the probability of a $1$.)
The statement is indeed wrong if taken literally. Worse, it doesn't even parse in general: $nP$ need not be an integer. What is meant is that, for large $n$, nearly all sampled messages have a fraction of $1$s close to $P$. Formally (the weak law of large numbers): for any $\epsilon > 0$, the probability that the fraction of $1$s differs from $P$ by more than $\epsilon$ tends to $0$ as $n \to \infty$. Meanwhile the probability of seeing *exactly* $nP$ $1$s (when that is an integer) shrinks like $1/\sqrt{n}$, so your intuition is correct.
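A quick simulation makes the distinction concrete (a sketch in Python, not from the book; the function name and parameters are my own). It estimates both the probability that a length-$n$ message contains exactly $\lfloor nP \rceil$ $1$s and the probability that its fraction of $1$s lands within $\epsilon$ of $P$:

```python
import random

def simulate(n, p=0.5, trials=5000, eps=0.05, seed=0):
    """Estimate two probabilities for random n-bit messages with P(1) = p:
    - P(exactly round(n*p) ones)        -> shrinks as n grows
    - P(|fraction of ones - p| <= eps)  -> tends to 1 as n grows
    """
    rng = random.Random(seed)
    target = round(n * p)
    exact = close = 0
    for _ in range(trials):
        ones = sum(rng.random() < p for _ in range(n))
        exact += (ones == target)
        close += (abs(ones / n - p) <= eps)
    return exact / trials, close / trials

for n in (10, 100, 1000):
    exact, close = simulate(n)
    print(f"n={n:5d}  P(exactly nP ones) ~ {exact:.3f}  "
          f"P(|frac - P| <= 0.05) ~ {close:.3f}")
```

As $n$ grows, the first estimate decays (toward the $1/\sqrt{n}$ rate of the central binomial term) while the second climbs toward $1$, which is the content of the book's claim once "contain $nP$ $1$s" is read as "contain close to $nP$ $1$s".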