I have a question regarding the minimax theorem
https://en.wikipedia.org/wiki/Minimax_theorem
Does anyone know whether the theorem also holds for negative $x$ and $y$, i.e. when $x_i <0$ or $y_j <0$ is possible?
If I compute the optimal $x$ and $y$ with the "linprog" function in MATLAB under the conditions $$\sum_i x_i=1 , \quad x_i \in \mathbb{R}, \quad x_i>-10$$ (and the same for $y$), I get two different values: X gets at least $v_1$ and Y pays at most $v_2$, with $v_1>v_2$. But I expect them to be equal if the theorem holds.
As Peter indicates in one of the comments, if $\ x_i\ $ or $\ y_j\ $ can be negative, it makes no sense to regard $\ x\ $ and $\ y\ $ as mixed strategies, or $\ x^TCy\ $ as an "average" payoff to player X. However, you can perfectly well regard $\ x^TCy\ $ as the payoff to X in a zero-sum game in which $\ x\ $ and $\ y\ $ are treated as pure strategies. It is then true that the game you give as an example has optimal pure strategies $\ x^*=\left(1,-1,1\right)^T\ $ and $\ y^*=\left(\frac{1}{5},\frac{3}{5},\frac{1}{5}\right)^T\ $. Since $$ {x^*}^T\pmatrix{2&-1&1\\ 1&0&-1\\-1&1&-2}=\left(0, 0, 0\right)\ , $$ and $$ \pmatrix{2&-1&1\\ 1&0&-1\\-1&1&-2}y^*=\pmatrix{0\\0\\0}\ , $$ it follows that $$ {x^*}^T\pmatrix{2&-1&1\\ 1&0&-1\\-1&1&-2}y=0 $$ for all pure strategies $\ y\ $ of Y, and $$ x^T\pmatrix{2&-1&1\\ 1&0&-1\\-1&1&-2}y^*=0 $$ for all pure strategies $\ x\ $ of X. Thus $\ x^*\ $ guarantees X a payoff of exactly $\ 0\ $, and $\ y^*\ $ guarantees that Y suffers a loss of exactly $\ 0\ $.
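These identities are easy to check numerically; here is a quick sketch in Python with NumPy (the variable names are mine):

```python
import numpy as np

# Payoff matrix and the claimed optimal strategies from above.
C = np.array([[ 2, -1,  1],
              [ 1,  0, -1],
              [-1,  1, -2]])
x_star = np.array([1, -1, 1])
y_star = np.array([1, 3, 1]) / 5

print(x_star @ C)           # [0 0 0]: x* secures exactly 0 against every pure y
print(C @ y_star)           # ~ (0, 0, 0) up to floating-point rounding
print(x_star @ C @ y_star)  # ~ 0: the value of the game
```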
More generally, if $\ X\ $ and $\ Y\ $ are any non-empty compact convex subsets of $\ \mathbb{R}^m\ $ and $\ \mathbb{R}^n\ $ respectively, $\ C\ $ an $\ m\times n\ $ matrix with real entries, and $\ f(x,y)=x^TCy\ $ for $\ x\in X $ and $\ y\in Y\ $, it follows from a minimax theorem of Ky Fan (in particular, from Theorem $1$(ii) of this paper) that $\ f\ $ has a saddle point $\ (x^*, y^*)\ $ in $\ X\times Y\ $, that is, $$ f(x,y^*)\le f(x^*,y)\ \text{ for all }\ (x,y)\in X\times Y\ . $$ If $\ X\ $ and $\ Y\ $ are convex compact polyhedra, the optimal strategies can be found with a pair of dual linear programs, by linearly transforming $\ X\ $ and $\ Y\ $ into polyhedral subsets of probability simplices in $\ \mathbb{R}^{m+1}\ $ and $\ \mathbb{R}^{n+1}\ $ respectively.
In the example given by the OP, however, the polyhedra have the form $$ \sum_{i=1}^mx_i=s,\ \ a_i\le x_i \ \text{ for all }\ i\\ \sum_{j=1}^ny_j=t,\ \ b_j\le y_j \ \text{ for all }\ j\ , $$ where $\ s, t, a_i, b_j\ $ could be any constants, with $\ \sum_{i=1}^m a_i<s\ $ and $\ \sum_{j=1}^n b_j<t\ $. It's possible to transform polyhedra of this form into probability simplices by changing variables to $\ \xi_i=\frac{x_i-a_i}{s-\sum_{i=1}^ma_i}\ $ and $\ \eta_j= \frac{y_j-b_j}{t-\sum_{j=1}^nb_j}\ $, which then satisfy the constraints $$ \sum_{i=1}^m\xi_i= \sum_{j=1}^n\eta_j=1,\ \xi_i,\eta_j\ge0\ . $$ The payoff to player X in terms of these new variables is \begin{align} \left(\sigma\xi+a\right)^TC\left(\tau\eta+b\right)=\xi^T\overline{C}\eta \end{align} where $\ \sigma=s-\sum_{i=1}^ma_i\ $, $\ \tau=t-\sum_{j=1}^nb_j\ $, $\ \overline{C}=\sigma\tau C+\sigma Cb\mathbb{1}_n^T+\tau \mathbb{1}_ma^TC+ \mathbb{1}_m a^TCb \mathbb{1}_n^T\ $, and $\ \mathbb{1}_r\ $ is an $\ r\times1\ $ column vector whose entries are all $1$ (the identity uses $\ \xi^T\mathbb{1}_m=\mathbb{1}_n^T\eta=1\ $). Thus the original game is equivalent to a standard zero-sum matrix game with payoff matrix $\ \overline{C}\ $, with strategies $\ \xi\ $ and $\ \eta\ $ lying in the probability simplices of $\ \mathbb{R}^m\ $ and $\ \mathbb{R}^n\ $ respectively, and it can be solved accordingly.
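As a sketch of this change of variables in Python (the helper name `transform_payoff` is my own, not part of any library):

```python
import numpy as np

def transform_payoff(C, a, b, s, t):
    """Build C_bar (and sigma, tau) from C, a, b, s, t as described above."""
    m, n = C.shape
    a = np.asarray(a, dtype=float).reshape(m, 1)
    b = np.asarray(b, dtype=float).reshape(n, 1)
    sigma = s - a.sum()                        # sigma = s - sum_i a_i
    tau = t - b.sum()                          # tau   = t - sum_j b_j
    ones_m, ones_n = np.ones((m, 1)), np.ones((n, 1))
    C_bar = (sigma * tau * C
             + sigma * (C @ b) @ ones_n.T          # sigma * C b 1_n^T
             + tau * ones_m @ (a.T @ C)            # tau * 1_m a^T C
             + (a.T @ C @ b) * ones_m @ ones_n.T)  # 1_m (a^T C b) 1_n^T
    return C_bar, sigma, tau

# The OP's data: a = b = (-10, -10, -10)^T, s = t = 1.
C = np.array([[ 2, -1,  1],
              [ 1,  0, -1],
              [-1,  1, -2]])
C_bar, sigma, tau = transform_payoff(C, a=[-10] * 3, b=[-10] * 3, s=1, t=1)
print(sigma, tau)   # 31.0 31.0
print(C_bar)
```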
Since the original matrix $\ C\ $ given by the OP is singular, it is easy to find column vectors $\ x\ $ and $\ y $ such that $\ x^TC=0_{1\times3}\ $ and $\ Cy=0_{3\times1}\ $, and to scale them so that they satisfy the constraints. Transforming the problem in the manner described above is therefore not really necessary. Nevertheless, to illustrate the procedure, I use it below to derive the solution given above.
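Numerically, the null-space shortcut might look like this (a sketch in Python; the helper `null_vector` is my own name, and the rescaling assumes the null vector does not sum to zero, as is the case here):

```python
import numpy as np

C = np.array([[ 2., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1., -2.]])

def null_vector(A):
    """A vector spanning the (here one-dimensional) null space of A, via SVD."""
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]      # right-singular vector for the smallest singular value

x = null_vector(C.T)   # satisfies x^T C = 0
y = null_vector(C)     # satisfies C y = 0
x = x / x.sum()        # rescale so the entries sum to 1 ...
y = y / y.sum()        # ... which also keeps every entry well above -10
print(x)               # ~ ( 1, -1,  1)
print(y)               # ~ (1/5, 3/5, 1/5)
```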
For the example given by the OP, \begin{align} a=b&=\pmatrix{-10\\-10\\-10}\\ s=t&=1\\ \sigma=\tau&=31,\ \text{and}\\ \overline{C}=&\pmatrix{682&-1581&961\\341&0&-341\\-961&1581&-682}\ . \end{align} The optimal mixed strategies for the zero-sum matrix game with matrix $\ \overline{C}\ $ are \begin{align} \xi^*&=\pmatrix{\frac{11}{31}, \frac{9}{31},\frac{11}{31}}^T\ \text{and}\\ \eta^*&=\pmatrix{\frac{51}{155},\frac{53}{155}, \frac{51}{155}}^T\ , \end{align} which give \begin{align} x^*&=\sigma\xi^*+a\\ &= \left(1,-1,1\right)^T\ \text{and}\\ y^*&=\tau\eta^*+b\\ &= \left(\frac{1}{5},\frac{3}{5},\frac{1}{5}\right)^T \end{align} as the optimal pure strategies for the original problem.
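The last step, solving the matrix game with $\ \overline{C}\ $ and mapping back, can be sketched with `scipy.optimize.linprog` (the OP used MATLAB's `linprog`; the LP below is the standard formulation for player X in a matrix game, written by me for illustration):

```python
import numpy as np
from scipy.optimize import linprog

C_bar = np.array([[ 682., -1581.,  961.],
                  [ 341.,     0., -341.],
                  [-961.,  1581., -682.]])
m, n = C_bar.shape
sigma = tau = 31.0
a = np.full(m, -10.0)

# Player X maximizes v subject to  xi^T C_bar >= v * 1_n,  sum(xi) = 1,  xi >= 0.
# Decision variables are (xi_1, ..., xi_m, v); linprog minimizes, so minimize -v.
c_obj = np.r_[np.zeros(m), -1.0]
A_ub = np.c_[-C_bar.T, np.ones(n)]        # v - (C_bar^T xi)_j <= 0 for each column j
b_ub = np.zeros(n)
A_eq = np.r_[np.ones(m), 0.0][None, :]    # sum(xi) = 1 (v unconstrained here)
b_eq = [1.0]
bounds = [(0, None)] * m + [(None, None)]
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
xi_star, v = res.x[:m], res.x[m]
x_star = sigma * xi_star + a              # map back to the original variables
print(xi_star)   # ~ (11/31, 9/31, 11/31)
print(x_star)    # ~ (1, -1, 1), and v is ~ 0
```

The LP for player Y is the dual; solving it the same way and mapping back with $\ y^*=\tau\eta^*+b\ $ recovers $\ y^*=\left(\frac{1}{5},\frac{3}{5},\frac{1}{5}\right)^T\ $.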