Let $X,Y$ be random variables and let $C$ denote the covariance function.
$$C(X,Y) := E[(X-m_x)(Y-m_y)] = E[XY]-m_xm_y.$$ $$C(Y,X) := E[(Y-m_y)(X-m_x)] = E[YX]-m_ym_x.$$
Setting these equal to each other, $C(X,Y) = C(Y,X)$, gives
$$E[XY] = E[YX].$$
This is only true if $X,Y$ are independent (since then $E[XY] = E[X]E[Y]$). So I conclude that $C(X,Y) = C(Y,X)$ is true if $X,Y$ are independent.
However, I read in my textbook that (with $X_1, X_2$ independent)
$$C[X_1,X_1] - 2C[X_1,X_2] - 2C[X_2,X_1] + 4C[X_2,X_2] = V[X_1] + 4V[X_2].$$ Doesn't this mean that $C[X_1, X_2] = -C[X_1, X_2]$?
What's going on here, what's the general rule?
To summarize the comments: Covariance is commutative: $C(X,Y)=C(Y,X)$ for any random variables $X,Y$, independent or not. This is because multiplication of real numbers commutes, so $(X-m_x)(Y-m_y) = (Y-m_y)(X-m_x)$ pointwise and hence $E[XY]=E[YX]$ always; independence is not needed.
The quoted equation is correct provided independence is assumed. The LHS is just the bilinear expansion
$$V[X_1 - 2X_2] = C(X_1-2X_2,\, X_1-2X_2) = C(X_1,X_1) - 2C(X_1,X_2) - 2C(X_2,X_1) + 4C(X_2,X_2).$$
Note that both middle terms carry the same sign, so the equation does not force $C(X_1,X_2) = -C(X_2,X_1)$; rather, each middle term is zero by independence, and the outer terms equal the terms on the RHS. It uses only the properties $C(X,X)=V(X)$ and $C(X,Y)=0$ whenever $X,Y$ are independent.
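As a quick sanity check (a minimal sketch, assuming two independent standard normal samples; all names here are illustrative), one can verify both facts numerically: covariance is symmetric for any pair, and the textbook identity holds up to sampling noise when the variables are independent.

```python
import numpy as np

# Draw two independent samples; any independent pair would do.
rng = np.random.default_rng(0)
x1 = rng.normal(size=1_000_000)
x2 = rng.normal(size=1_000_000)

def cov(a, b):
    """Sample covariance: E[ab] - E[a]E[b]."""
    return np.mean(a * b) - np.mean(a) * np.mean(b)

# Commutativity holds for ANY pair of random variables.
assert np.isclose(cov(x1, x2), cov(x2, x1))

# With X1, X2 independent, the cross terms are ~0, so
# C(X1,X1) - 2C(X1,X2) - 2C(X2,X1) + 4C(X2,X2) ~ V(X1) + 4V(X2).
lhs = cov(x1, x1) - 2 * cov(x1, x2) - 2 * cov(x2, x1) + 4 * cov(x2, x2)
rhs = np.var(x1) + 4 * np.var(x2)
assert np.isclose(lhs, rhs, atol=1e-2)
```

The cross terms come out near (not exactly) zero because this is a finite sample, which is why the final comparison uses a small tolerance.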