I'm learning about linear and nonlinear programming and on the chapter about duality I have the following statement and proof I can't understand:
minimize $\textbf{c}^T\textbf{x}$
subject to $\textbf{Ax}=\textbf{b}\;\;\;\;\;(3)$
$\textbf{x}\geq \textbf{0}$
Suppose $(3)$ has a finite optimal solution with value $z_0$. In the space $E^{m+1}$ define the convex set
$C=\{ (r, \textbf{w}): r=tz_0-\textbf{c}^T\textbf{x}, \textbf{w}=t\textbf{b}-\textbf{Ax}, \textbf{x}\geq \textbf{0}, t \geq 0 \}$.
It is easily verified that $C$ is in fact a closed convex cone. We show that the point $(1,\textbf{0})$ is not in $C$. If $\textbf{w}=t_0\textbf{b}-\textbf{A}\textbf{x}_0 = \textbf{0}$ with $t_0 > 0, \textbf{x}_0\geq \textbf{0},$ then $\textbf{x}=\textbf{x}_0/t_0$ is
feasible for $(3)$ and hence $r/t_0=z_0-\textbf{c}^T\textbf{x}\leq 0$, which means $r\leq0.$ If $\textbf{w}=-\textbf{A}\textbf{x}_0=\textbf{0}$ with $\textbf{x}_0 \geq \textbf{0}$ and $\textbf{c}^T\textbf{x}_0 = -1$, and if $\textbf{x}$ is any feasible solution to $(3)$, then $\textbf{x}+\alpha\textbf{x}_0$ is feasible for any $\alpha\geq0$ and gives arbitrarily small objective values as $\alpha$ is increased. This contradicts our assumption on the existence of a finite optimum and thus we conclude that no such $\textbf{x}_0$ exists. Hence $(1,\textbf{0})\not\in C$.
Can someone explain this proof in other words to me? I've read it through a few times, but it still confuses me :/ So my question is: Why doesn't the point $(1,\textbf{0})\in E^{m+1}$ belong to $C$?
$\textbf{A}$ is a matrix, $\textbf{x}, \textbf{b}, \textbf{w}, \textbf{c}$ are vectors and $\alpha, t, t_0, z_0$ are real numbers.
Thanks for any help! =)
P.S. please let me know if you need more information =)
There are two possibilities for a point of $C$ with $\mathbf{w}=\mathbf{0}$: the multiplier $t$ in the definition of $C$ could be zero or positive. We treat the positive case first:
Suppose $t_0>0$ and $\mathbf{w}=\mathbf{0}$, so that $t_0\mathbf{b}-\mathbf{A}\mathbf{x}_0 = \mathbf{0}$, or what is the same thing, $t_0\mathbf{b} = \mathbf{A}\mathbf{x}_0$. Then the scaled vector $\mathbf{x}_0/t_0$ satisfies the constraints: $\mathbf{A}(\mathbf{x}_0/t_0) = \mathbf{b}$ and $\mathbf{x}_0/t_0 \geq \mathbf{0}$, so it is feasible. Call this scaled feasible vector $\mathbf{x} = \mathbf{x}_0/t_0$.
We wish to see how well the current $t_0$ and $\mathbf{x}_0$ perform compared to the optimal value of the objective function. We have $r = t_0 z_0 - \mathbf{c}^T\mathbf{x}_0$; dividing through by $t_0$ gives $r/t_0 = z_0 - \mathbf{c}^T\mathbf{x}$. Since $z_0$ is the minimum of $\mathbf{c}^T\mathbf{x}$ over all feasible $\mathbf{x}$, we have $z_0 \leq \mathbf{c}^T\mathbf{x}$, with equality exactly when $\mathbf{x}$ is optimal. Hence $r/t_0 \leq 0$, and since $t_0 > 0$ in the present case, $r \leq 0$. In particular $r$ cannot be as large as $1$.
We have shown that if $\mathbf{w} = \mathbf{0}$ and $t > 0$, then $r \leq 0$; in particular $r \neq 1$, so the point $(1, \mathbf{0})$ cannot arise from this case.
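To make the $t>0$ case concrete, here is a quick numerical sanity check on a tiny LP of my own (the instance `A`, `b`, `c` and its optimum `z0 = 1` are illustrative assumptions, not from the book): for any $\mathbf{x}_0 \geq \mathbf{0}$, choosing $t_0$ so that $\mathbf{w} = t_0\mathbf{b} - \mathbf{A}\mathbf{x}_0 = \mathbf{0}$ always produces $r \leq 0$.

```python
import numpy as np

# Tiny illustrative LP (my own instance, not from the book):
# minimize x1 + 2*x2  subject to  x1 + x2 = 1, x >= 0.
# The optimum is x = (1, 0) with value z0 = 1.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
z0 = 1.0

rng = np.random.default_rng(0)
for _ in range(1000):
    x0 = rng.uniform(0.0, 10.0, size=2)  # any x0 >= 0
    t0 = float(A @ x0 / b)               # chosen so w = t0*b - A@x0 = 0
    if t0 <= 0.0:
        continue                         # only the t0 > 0 case here
    r = t0 * z0 - c @ x0                 # the r-coordinate of the cone point
    assert r <= 1e-12                    # r <= 0 every time, as the proof says
print("every sampled cone point with w = 0, t > 0 has r <= 0")
```

(Here $m=1$, so $t_0$ is pinned down by the single equation $t_0 b = \mathbf{A}\mathbf{x}_0$; in higher dimensions most $\mathbf{x}_0$ simply admit no $t_0$ making $\mathbf{w}=\mathbf{0}$, and those points are irrelevant to the argument.)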
Now we approach the case $t = 0$.
Since $t=0$, we have $\mathbf{w} = t\mathbf{b} - \mathbf{A}\mathbf{x}_0 = -\mathbf{A}\mathbf{x}_0$ and $r = t z_0 - \mathbf{c}^T\mathbf{x}_0 = -\mathbf{c}^T\mathbf{x}_0$ for some $\mathbf{x}_0 \geq \mathbf{0}$. Suppose, hoping to derive a contradiction, that $r = 1$ and $\mathbf{w} = \mathbf{0}$: then $\mathbf{A}\mathbf{x}_0 = \mathbf{0}$ and $\mathbf{c}^T\mathbf{x}_0 = -1$. Take any feasible $\mathbf{x}$ for $(3)$ and any $\alpha \geq 0$. Then $\mathbf{A}(\mathbf{x}+\alpha\mathbf{x}_0) = \mathbf{b} + \alpha\mathbf{0} = \mathbf{b}$ and $\mathbf{x}+\alpha\mathbf{x}_0 \geq \mathbf{0}$, so $\mathbf{x}+\alpha\mathbf{x}_0$ is feasible for every $\alpha$. Moreover the objective improves the greater $\alpha$ is: $\mathbf{c}^T(\mathbf{x}+\alpha\mathbf{x}_0) = \mathbf{c}^T\mathbf{x} - \alpha$. (Recall we're searching for a minimum, so a negative number of greater magnitude is superior to one of lesser magnitude.)
We have shown that if $t = 0$ and such an $\mathbf{x}_0$ existed, the objective would have no lower bound, so the assumption of a finite optimum ($z_0$) would be violated. Hence no such $\mathbf{x}_0$ exists, and $(1, \mathbf{0})$ cannot arise from this case either.
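To see what goes wrong when such a ray *does* exist, here is a sketch on a deliberately unbounded LP of my own (again an illustrative instance, not from the book): the ray $\mathbf{x}_0=(1,1)$ satisfies $\mathbf{A}\mathbf{x}_0=\mathbf{0}$, $\mathbf{x}_0\geq\mathbf{0}$, $\mathbf{c}^T\mathbf{x}_0=-1$, and pushing along it drives the objective to $-\infty$.

```python
import numpy as np

# A deliberately unbounded LP (illustrative, not from the book):
# minimize -x1  subject to  x1 - x2 = 1, x >= 0.
A = np.array([[1.0, -1.0]])
b = np.array([1.0])
c = np.array([-1.0, 0.0])

# Here a "bad" ray exists: x0 = (1, 1) has A@x0 = 0, x0 >= 0, c@x0 = -1.
x0 = np.array([1.0, 1.0])
x = np.array([1.0, 0.0])              # a feasible point: A@x = b
for alpha in [0.0, 10.0, 100.0]:
    xa = x + alpha * x0               # still feasible for every alpha >= 0 ...
    assert np.allclose(A @ xa, b) and (xa >= 0).all()
    print(alpha, float(c @ xa))       # ... with objective -1 - alpha -> -inf
```

This prints objective values $-1, -11, -101$: no finite optimum exists, which is exactly why the proof's hypothesis rules this situation out.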
So whether $t>0$ or $t=0$: in the first case the point $(1, \mathbf{0})$ is directly ruled out, and in the second case assuming $(1, \mathbf{0}) \in C$ contradicts the finite optimum. Either way, $(1, \mathbf{0}) \not\in C$.