Assume that we want to find an $x$ satisfying this system of inequalities:
$$ Ax \leq b$$
So we can either use Quadratic Programming:
$$J_{max}: x^TQx + c^Tx$$ $$Ax \leq b$$ $$x \geq 0$$
Or Linear Programming: $$J_{max}: c^Tx$$ $$Ax \leq b$$ $$x \geq 0$$
where $Q = A^TA$ and $c = A^Tb$.
So what's the difference when I try to find the best $x$ value? Both formulations solve the problem, but they return different values of $x$. Is QP more optimal?
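A side note on where $Q = A^TA$ and $c = A^Tb$ come from: these are exactly the matrices of the least-squares objective, since $\|Ax - b\|^2 = x^TQx - 2c^Tx + b^Tb$. A minimal NumPy sketch checking this identity (the matrix $A$ and vector $b$ below are made up for illustration):

```python
import numpy as np

# Hypothetical small instance of A x <= b (illustrative values only)
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 5.0])

Q = A.T @ A          # Q = A'A  (as in the question)
c = A.T @ b          # c = A'b

# ||Ax - b||^2 expands to x'Qx - 2 c'x + b'b, which is why
# Q and c show up in the quadratic formulation.
rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.standard_normal(2)
    lhs = np.linalg.norm(A @ x - b) ** 2
    rhs = x @ Q @ x - 2 * c @ x + b @ b
    assert np.isclose(lhs, rhs)
```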
Practical GNU Octave Example:
Linear programming VS Quadratic programming.
Download this lmpc.m file and run it with this code:
Now add this line of code
u = qp([], alp, -clp, [], [], [], [], [], alp, blp);
like this:
When we run the function again, we get this result.
I'm trying to decide between QP and LP when I want to find $U$ from this equation:
$$r = PHI*x + GAMMA*U$$
As you can see, for LP, the objective function is:
c = (GAMMA'*GAMMA)'*(r - PHI*x)
And the constraints are:
A = GAMMA'*GAMMA
b = GAMMA'*(r - PHI*x)
For QP, the objective function is:
H = GAMMA'*GAMMA
c = -((GAMMA'*GAMMA)'*(r - PHI*x)) % Must have negative sign, else unstable!
And the constraints are:
A = GAMMA'*GAMMA
b = GAMMA'*(r - PHI*x)
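To make the QP setup above concrete, here is a Python/SciPy sketch that builds H, c, A, and b exactly as in the formulas. The GAMMA, PHI, r, and x values are invented here purely for illustration; in the question they come from lmpc.m:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data (in the question these come from lmpc.m)
rng = np.random.default_rng(1)
GAMMA = rng.standard_normal((4, 4))
PHI = rng.standard_normal((4, 3))
x = rng.standard_normal(3)
r = np.ones(4)

H = GAMMA.T @ GAMMA                         # QP Hessian
c = -(GAMMA.T @ GAMMA).T @ (r - PHI @ x)    # note the negative sign
A = GAMMA.T @ GAMMA                         # constraint matrix
b = GAMMA.T @ (r - PHI @ x)                 # constraint bound

# Solve: min 0.5 U'HU + c'U  subject to  A U <= b
res = minimize(lambda U: 0.5 * U @ H @ U + c @ U,
               x0=np.zeros(4),
               constraints={"type": "ineq", "fun": lambda U: b - A @ U},
               method="SLSQP")
U = res.x
assert res.success
assert np.all(A @ U <= b + 1e-6)   # constraints hold at the solution
```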
If we now compare the outputs of LP and QP, what can we say? LP gives a smoother result, but its output looks like an impulse, while QP gives a more oscillating result?




I am interpreting this as: you want to find a feasible solution to $Ax \le b, x \ge 0$. (If the non-negativity constraint is not needed, you can remove it.)
In that case, you just have to solve
$$\min 0 $$ subject to $$Ax \le b$$ $$x \ge 0$$
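For example, with SciPy's linprog you can pass a zero objective vector to get just a feasible point (the A and b below are made-up example data):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for A x <= b, x >= 0
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 5.0])

# Zero objective: we only ask the solver for any feasible point.
res = linprog(c=np.zeros(2), A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
assert res.success
x = res.x
assert np.all(A @ x <= b + 1e-9)   # A x <= b holds
assert np.all(x >= -1e-9)          # x >= 0 holds
```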
Note that every linear programming problem is actually a quadratic programming problem with $Q=0$.
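To illustrate this, here is a sketch (again with invented data) that solves the same problem once as a plain LP and once as a QP with $Q = 0$; both should return the same minimizer:

```python
import numpy as np
from scipy.optimize import linprog, minimize

A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([-1.0, -1.0])   # minimize c'x, i.e. maximize x1 + x2

# Plain LP
lp = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Same problem posed as a QP with Q = 0: 0.5 x'Qx + c'x reduces to c'x
Q = np.zeros((2, 2))
qp = minimize(lambda x: 0.5 * x @ Q @ x + c @ x,
              x0=np.zeros(2),
              constraints={"type": "ineq", "fun": lambda x: b - A @ x},
              bounds=[(0, None)] * 2,
              method="SLSQP")

# Both solvers find the same vertex of the feasible region.
assert np.allclose(lp.x, qp.x, atol=1e-4)
```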
You can also define your own objective value like what you did in the question.
When you say a solution is "more optimal", we have to state in what sense one solution is better than another.
Notice that $\min f(x)$ is equivalent to $-\max (-f(x))$.
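This matters in practice because solvers such as SciPy's linprog (and Octave's glpk by default) minimize; to maximize you negate the objective and then flip the sign of the optimal value. A small sketch with invented data:

```python
import numpy as np
from scipy.optimize import linprog

# Made-up data for: max c'x  subject to  Ax <= b, x >= 0
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([1.0, 1.0])

# linprog minimizes, so minimize -c'x and negate the result.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
max_value = -res.fun        # max c'x = -min(-c'x)
assert np.isclose(max_value, 2.6)
```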