So we've been learning about the Kuhn–Tucker conditions in my nonlinear optimization course, and I've been having trouble with this problem:
Question: A strictly convex function $f$ achieves its global minimum over $\mathbb{R}^2$ at $(-5,2)$. If the domain is restricted to $x, y \geq 0$, prove that the minimum must occur at $x = 0$.
So far I've split it up into 4 cases:

1. Both constraints inactive ($x > 0$, $y > 0$): this can't be the minimum, since $\nabla f = 0$ only at $(-5,2)$, which is infeasible.
2. $x = 0$, $y > 0$: this is the case I have to prove works, which essentially amounts to showing $\frac{\partial f}{\partial x} = \mu_1 > 0$ there.
3. $x > 0$, $y = 0$: here the condition is $\frac{\partial f}{\partial y} = \mu_2$, and I must prove this is less than zero, which rules the case out.
4. $x = 0$, $y = 0$: I think ruling out case 3 also deals with this case.
Any tips would be greatly appreciated. Thanks a lot!
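To convince myself the claim is plausible, I also ran a quick numerical check with one concrete strictly convex function of my own choosing (it is not part of the problem, just an example): $f(x,y) = (x+5)^2 + (y-2)^2$, whose unconstrained minimum is at $(-5,2)$.

```python
# Sanity check with a concrete strictly convex function (my own choice,
# not given in the problem): f(x, y) = (x + 5)**2 + (y - 2)**2, whose
# unconstrained minimum is at (-5, 2). Grid-search the nonnegative
# quadrant and confirm the best point found has x = 0.

def f(x, y):
    return (x + 5) ** 2 + (y - 2) ** 2

# coarse grid over [0, 10] x [0, 10]
step = 0.25
points = [(i * step, j * step) for i in range(41) for j in range(41)]
best = min(points, key=lambda p: f(*p))

print(best)  # -> (0.0, 2.0): the constrained minimum sits on x = 0
```

Of course this is only a sanity check for one example, not a proof.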
Let $(x^*, y^*) = \arg\min_{x\geq 0,\, y\geq 0}f(x,y)$, and suppose toward a contradiction that $x^*>0$. Define $\alpha$ as the solution of the equation $$-5\alpha + (1-\alpha)x^*=0,$$ i.e. $\alpha = \frac{x^*}{x^*+5} \in (0,1)$. Then $f$ attains a smaller value at the feasible point $(0,\, 2\alpha+(1-\alpha)y^*)$ than at $(x^*, y^*)$. Indeed, $$f(x^*, y^*) > \alpha f(-5,2)+(1-\alpha)f(x^*, y^*)\geq f(0,\, 2\alpha+(1-\alpha)y^*),$$ where the first inequality holds because $f(-5,2) < f(x^*, y^*)$ (strict convexity makes $(-5,2)$ the unique global minimizer, and $x^*>0$ forces $(x^*, y^*) \neq (-5,2)$), and the second is just the definition of convexity applied to the convex combination $\alpha(-5,2) + (1-\alpha)(x^*, y^*)$. This contradiction almost finishes the proof. Almost, because at the very beginning you need to show that at least one minimizer $(x^*, y^*)$ exists; that is left as a simple exercise.
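Here is the inequality chain above checked numerically on one concrete strictly convex function (my choice for illustration, not part of the problem), with a hypothetical candidate minimizer that has $x^* > 0$:

```python
# Numeric illustration of the inequality chain, using one concrete
# strictly convex f (an assumed example): f(x, y) = (x + 5)**2 + (y - 2)**2,
# with global minimizer (-5, 2). Take a feasible candidate with x* > 0 and
# check that the convex combination with (-5, 2) lands on x = 0, stays
# feasible, and strictly improves on f(x*, y*).

def f(x, y):
    return (x + 5) ** 2 + (y - 2) ** 2

x_star, y_star = 3.0, 1.0          # hypothetical candidate with x* > 0
alpha = x_star / (x_star + 5)      # solves -5*alpha + (1 - alpha)*x* = 0

# convex combination alpha*(-5, 2) + (1 - alpha)*(x*, y*)
x_new = -5 * alpha + (1 - alpha) * x_star   # = 0 by the choice of alpha
y_new = 2 * alpha + (1 - alpha) * y_star    # >= 0, so still feasible

lhs = f(x_star, y_star)
mid = alpha * f(-5, 2) + (1 - alpha) * f(x_star, y_star)
rhs = f(x_new, y_new)

assert abs(x_new) < 1e-12          # the new point lies on x = 0
assert lhs > mid >= rhs            # f(x*, y*) > mixture >= f at new point
print(lhs, mid, rhs)
```

The strict drop from `lhs` to `rhs` is exactly the contradiction the proof exploits.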