In coding theory when we encode 101 with 111000111 we have certain error probability. how can one prove that increasing the number of repetitions decreases the error probability.
Let the probability of flipping a bit on reception be $p$, so each bit is received correctly with probability $1-p$.
If the encoding is done using three bits, then
$$P[A] = P[\text{exactly two bits are flipped}] = \binom{3}{2} p^2 (1-p)$$
$$P[B] = P[\text{all three bits are flipped}] = \binom{3}{3} p^3$$
Since these events are disjoint and together make up the decoding error,
$$P[E] = P[A] + P[B].$$
Similarly, one can calculate for $n=5$ (the next odd length), but I am not able to generalize it.
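For concreteness, here is a small numeric sketch (Python, standard library only) of the $n=3$ case and its generalization to an arbitrary odd length $n=2k+1$: with majority decoding, the decoder errs iff at least $k+1$ of the $2k+1$ bits are flipped. The function name `error_prob` is of course just an illustration, not notation from the question.

```python
from math import comb

def error_prob(p: float, k: int) -> float:
    """Error probability of a length-(2k+1) repetition code with
    majority decoding and per-bit flip probability p."""
    n = 2 * k + 1
    # The decoder errs iff at least k+1 of the n bits are flipped.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1, n + 1))

p = 0.1
print(error_prob(p, 1))           # n=3: 3*p^2*(1-p) + p^3 = 0.028
for k in range(1, 5):
    print(2 * k + 1, error_prob(p, k))
```

Running this for $p=0.1$ shows the error probability shrinking as the length grows, which is exactly the monotonicity one would like to prove in general.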
Well, it wasn't too tricky in the end. Let's write $S(p,k)$ for the probability of successfully transmitting a bit when a repetition code of length $2k+1$ is used. As zyx (and I) concluded, the question is interesting only when $0<p<1/2$.
The claim is that for all $p$ in this range and all positive integers $k$ we have the inequality $$ S(p,k+1)>S(p,k). $$
To prove this, assume that a repetition code of length $2k+3$ is used. W.l.o.g. we can assume that a 0 was transmitted, so in the received bitstring of length $2k+3$ each bit equals $0$ with probability $1-p$ and equals $1$ with the (cross-over) probability $p$. The transmission is successful iff the majority of the bits are zeros. Let us look at the substring $s$ consisting of the first $2k+1$ bits. A majority of those are 0s with probability $S(p,k)$.
In order to analyze the effect of the two additional bits we need to define two events. Call $s$ a nearly successful string (NS) if it has exactly $k$ 0s and consequently exactly $k+1$ 1s; in other words, the correct message lost by a single vote. Similarly, call $s$ a nearly failed string (NF) if the correct message won by a single vote, i.e. there are exactly $k+1$ 0s and $k$ 1s in $s$.
When does the receiver, seeing all $2k+3$ bits, make a correct decision about the transmitted bit? The decision will be the same as it would be with the initial $2k+1$ bits of $s$ alone, except in two cases: when the last two bits are both 1 and the string $s$ is nearly failed, and when the last two bits are both 0 and the string $s$ is nearly successful. So we get the formula $$ S(p,k+1)=S(p,k)+(1-p)^2 P(NS)-p^2P(NF). $$ Here the binomial distribution tells us that $$ P(NS)={2k+1\choose k}p^{k+1}(1-p)^k,\qquad P(NF)={2k+1\choose k+1}p^k(1-p)^{k+1}. $$ Since the two binomial coefficients are equal, $$ (1-p)^2P(NS)-p^2P(NF)={2k+1\choose k}p^{k+1}(1-p)^{k+1}\bigl((1-p)-p\bigr)={2k+1\choose k}p^{k+1}(1-p)^{k+1}(1-2p)>0 $$ for $0<p<1/2$. Hence $S(p,k+1)>S(p,k)$ as claimed.
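As a sanity check, one can compare $S(p,k+1)$ computed directly from the binomial distribution with the value produced by the recursion $S(p,k)+(1-p)^2P(NS)-p^2P(NF)$. A Python sketch (the function names `S` and `S_via_recursion` are mine; the formulas follow the definitions above):

```python
from math import comb

def S(p: float, k: int) -> float:
    """Success probability: at most k of the 2k+1 bits are flipped."""
    n = 2 * k + 1
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(0, k + 1))

def S_via_recursion(p: float, k: int) -> float:
    """S(p, k+1) obtained from S(p, k) via the two near-miss events."""
    n = 2 * k + 1
    p_ns = comb(n, k) * p**(k + 1) * (1 - p)**k        # nearly successful
    p_nf = comb(n, k + 1) * p**k * (1 - p)**(k + 1)    # nearly failed
    return S(p, k) + (1 - p)**2 * p_ns - p**2 * p_nf

p = 0.2
for k in range(1, 6):
    assert abs(S(p, k + 1) - S_via_recursion(p, k)) < 1e-12
    assert S(p, k + 1) > S(p, k)  # monotone improvement for 0 < p < 1/2
print("recursion verified")
```

The two assertions confirm, for a sample cross-over probability, both the recursion itself and the claimed strict inequality $S(p,k+1)>S(p,k)$.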