I found the following exercise in the book "Information Theory, Inference, and Learning Algorithms" by David J.C. MacKay.
13.4: What is the probability of block error of the $(N, K)$ Hamming code to leading order, when the code is used for a binary symmetric channel with noise density $f$?
And the answer is supplied by the author to be $$\frac{3}{N}{N \choose 2}f^2$$
I can't really understand this answer. I suspect the underlying idea is to calculate the probability of 2 bits flipping, which would explain the binomial coefficient and the $f^2$ term. However, where did the $3/N$ come from?
(the probability of block error is the probability that one or more of the decoded bits in one block fail to match the corresponding source bits)
It definitely seems to be a typo in the book: the stated formula is the bit error rate, not the block error rate. Most practical error correction coding books cover this. As the comments pointed out, the block error rate is $$\mathbb{P}(\text{Block Error})=\binom{N}{2} f^2(1-f)^{N-2}\approx \binom{N}{2} f^2.$$ Another way of saying this is that the dominant block error event is the 2-error event: single errors are corrected, while triple and higher-weight errors are higher order in $f$ (a triple error can even transform one codeword into another, since the minimum distance is 3).
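A quick numeric sketch of how good the leading-order approximation is, taking $N = 7$ (the $(7,4)$ Hamming code) and $f = 0.01$ for concreteness, and counting every pattern of 2 or more channel errors as a block error since single errors are corrected:

```python
from math import comb

N, f = 7, 0.01  # assumed: (7,4) Hamming code, BSC flip probability 0.01

# Exact probability of 2 or more channel errors (every such pattern is
# treated as a block error -- correct to leading order in f).
p_exact = sum(comb(N, k) * f**k * (1 - f)**(N - k) for k in range(2, N + 1))

# Leading-order approximation: keep only the 2-error term, drop (1-f)^{N-2}.
p_approx = comb(N, 2) * f**2

print(p_exact, p_approx)
```

For these numbers the two values agree to within a few percent; the approximation overshoots slightly because it drops the $(1-f)^{N-2}$ factor.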
Now, if there is a block error, what's the most likely number of bit errors? Given a block error, the received word $c+e$ most likely falls into the decoding sphere of one of the nearest codewords, so the decoder outputs a codeword at distance 3 (the minimum distance of the Hamming code) from the transmitted one. Hence the most likely number of bit errors per errored block is 3.
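A concrete way to see this (assuming the standard syndrome decoder, where column $i$ of the parity-check matrix is the binary expansion of $i$): if exactly two channel errors occur, at positions $p_1 \neq p_2$, then the syndrome is
$$s = p_1 \oplus p_2 \neq 0, \qquad s \neq p_1, \qquad s \neq p_2,$$
since $s = p_1$ would force $p_2 = 0$, which is not a valid position. So the decoder flips a third, distinct bit, and the decoded block ends up with exactly 3 wrong bits.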
So, the bit error rate is $$\mathbb{P}(\text{Bit Error})\approx \frac{3}{N}\binom{N}{2} f^2.$$
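Both approximations can be checked by simulation. A Monte Carlo sketch for the $(7,4)$ Hamming code (assumed parameters: $f = 0.02$, 200,000 trials; by linearity the all-zero codeword is transmitted, and the syndrome of an error pattern is the XOR of the 1-based positions of the flipped bits):

```python
import random
from math import comb

N, f, trials = 7, 0.02, 200_000  # assumed simulation parameters
random.seed(0)

bit_errs = blk_errs = 0
for _ in range(trials):
    # BSC error pattern on the all-zero codeword (valid by linearity)
    e = [1 if random.random() < f else 0 for _ in range(N)]
    s = 0
    for pos in range(1, N + 1):
        if e[pos - 1]:
            s ^= pos            # syndrome = XOR of error positions
    if s:
        e[s - 1] ^= 1           # decoder flips the bit the syndrome points at
    w = sum(e)                  # residual bit errors after decoding
    bit_errs += w
    blk_errs += (w > 0)

bler = blk_errs / trials
ber = bit_errs / (trials * N)
print(f"block error rate: {bler:.5f}  (leading order: {comb(N, 2) * f**2:.5f})")
print(f"bit error rate:   {ber:.5f}  (leading order: {3 / N * comb(N, 2) * f**2:.5f})")
```

The simulated rates should land close to (slightly below) the leading-order predictions, confirming the $3/N$ factor between the two.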