This question pertains to the proof of Theorem 1 (The Plotkin Bound) in "The Theory of Error-Correcting Codes" by MacWilliams and Sloane.
The theorem states: For any $(n,M,d)$ code $C$ with $n<2d$, we have $$M \leq 2\left\lfloor\frac{d}{2d-n}\right\rfloor.$$
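As a quick numeric illustration of the bound (my own sketch, not from the book), here is a small Python helper that computes the right-hand side; the parameter choices below are examples I am supplying myself:

```python
def plotkin_bound(n, d):
    """Plotkin bound 2 * floor(d / (2d - n)), valid only when n < 2d."""
    assert n < 2 * d, "the bound requires n < 2d"
    return 2 * (d // (2 * d - n))

# Binary repetition code {00000, 11111}: n = 5, d = 5, M = 2,
# and the bound gives 2 * floor(5/5) = 2, so it is attained.
print(plotkin_bound(5, 5))

# For n = 7, d = 4 the bound gives 2 * floor(4/1) = 8.
print(plotkin_bound(7, 4))
```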
The proof states: We shall calculate the sum $$\sum_{u\in C}\sum_{v \in C}dist(u,v)$$ in two ways.
First, since $dist(u,v) \geq d$ whenever $u \neq v$, the sum is $\geq M(M-1)d$.
On the other hand, let $A$ be the $M \times n$ matrix whose rows are the codewords. Suppose the $i^{th}$ column of $A$ contains $x_{i}$ $0$'s and $M-x_{i}$ $1$'s. Then this column contributes $2x_{i}(M-x_{i})$ to the sum, so that the sum is equal to $$\sum_{i=1} ^{n} 2x_{i} (M-x_{i}).$$
This is where I have a problem. I understand the first part: the sum must be $$\geq M(M-1)d$$ because there are $M(M-1)$ ordered pairs of distinct codewords, and the distance between the codewords in any such pair is $\geq d$.
However, I do not understand why the $i^{th}$ column of $A$ contributes $2x_{i}(M-x_{i})$ to the sum, and hence why the sum equals $$\sum_{i=1} ^{n} 2x_{i} (M-x_{i}).$$
In particular, where does the factor of $2$ come from?
In the double sum $$ \sum_{u\in C}\sum_{v \in C}dist(u,v) $$ each unordered pair $\{u,v\}$ appears twice, once as $(u,v)$ and once as $(v,u)$. When you consider the $M\times n$ matrix whose rows are the codewords and focus on a column (i.e., a bit position in the codewords), this is where the factor of $2$ enters. To see it, write out the column's contribution in full. Let $A_{u,i}$ be the $i^{th}$ bit of the codeword with index $u$. Since the Hamming distance is $$ dist(u,v)=\sum_{i=1}^{n} \mathbb{1}\{A_{u,i}\neq A_{v,i}\}, $$ swapping the order of summation shows that column $i$ contributes $$ \sum_{1\leq u,v\leq M} \mathbb{1}\{A_{u,i}\neq A_{v,i}\}. $$ There are $x_i(M-x_i)$ unordered pairs $\{u,v\}$ with $A_{u,i}\neq A_{v,i}$ (one codeword supplying a $0$ in that position and the other a $1$), and the double sum counts each such pair twice, once in each order. Hence the column contributes $2x_i(M-x_i)$.
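As a sanity check on the argument above, here is a short Python sketch (my own illustration, not from the book) that computes the double sum both ways on an arbitrary binary code — once over all ordered pairs of codewords, and once column by column via $2x_i(M-x_i)$ — and confirms the two totals agree. The example code below is one I chose for illustration:

```python
def double_sum_by_pairs(code):
    # Sum dist(u, v) over all ordered pairs (u, v); the u == v terms are 0.
    return sum(sum(a != b for a, b in zip(u, v)) for u in code for v in code)

def double_sum_by_columns(code):
    # Column i with x_i zeros and M - x_i ones contributes 2 * x_i * (M - x_i).
    M, n = len(code), len(code[0])
    total = 0
    for i in range(n):
        x = sum(1 for row in code if row[i] == 0)
        total += 2 * x * (M - x)
    return total

# An arbitrary binary code with M = 4 codewords of length n = 5.
code = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
print(double_sum_by_pairs(code))    # each unordered pair counted twice
print(double_sum_by_columns(code))
```

Both functions return the same value because they count exactly the same bit disagreements, just grouped differently: by codeword pair in the first, by coordinate position in the second.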