In the article "How to use Farkas' lemma to say something important about linear infeasible problems" by Prof. E. D. Andersen (linked below), he writes about how Farkas' lemma can be used as a certificate of infeasibility for a linear programming problem.
https://docs.mosek.com/whitepapers/infeas.pdf
That is, instead of proving the infeasibility directly, we can use Farkas' lemma and instead find a vector satisfying another set of constraints.
I understand the idea fairly well. However, a question arose as I was thinking about it: what is the advantage of this process over simply proving the infeasibility directly?
My point is that, for example, with problems like $$\begin{cases} x_1 + x_2 \to \min \\ x_1 \ge 2 \\ -x_1 \ge 2 \end{cases} $$ the infeasibility is obvious at a glance. But when using that idea, we must find a $p$ satisfying $$\begin{cases} p_1 + 2p_2 > 0 \\ p_1 \le 0 \\ p_2 \ge 0 \end{cases} $$ which is much harder.
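One thing worth noting is that *verifying* a certificate, once found, is cheap. As a sanity check, here is a small sketch (my notation, not the paper's) using the variant of Farkas' lemma for a system $Ax \ge b$ with free $x$: the system is infeasible if and only if some $p \ge 0$ satisfies $A^\top p = 0$ and $b^\top p > 0$.

```python
import numpy as np

# Toy problem from above: constraints x1 >= 2 and -x1 >= 2, i.e. A x >= b
# with free x. (The paper states the lemma for the standard form
# A x = b, x >= 0; this is the corresponding variant for A x >= b.)
A = np.array([[ 1.0, 0.0],
              [-1.0, 0.0]])
b = np.array([2.0, 2.0])

# Candidate certificate: add the two constraints with equal weight.
p = np.array([1.0, 1.0])

assert np.all(p >= 0)
assert np.allclose(A.T @ p, 0)  # the combination cancels all variables...
assert b @ p > 0                # ...yet demands 0 >= 4, a contradiction
print("p =", p, "certifies infeasibility")
```

Checking the three conditions takes one matrix-vector product, regardless of how the certificate was obtained.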
Indeed, when facing a real linear programming problem, the infeasibility is usually not that easy to see. Still, are there any signs that indicate when we should use this idea and when we shouldn't? Or, more specifically, what type of problems should be checked with this idea?
In attempting to answer the question myself, I found that for problems where the number of constraints is far smaller than the number of variables, the idea could have an advantage.
But are there cases, other than the one stated above, where the idea proves more useful than simply checking via the definition of infeasibility?
Any help is appreciated. Thank you for reading.
The main point of the paper is that the Farkas lemma certificate of infeasibility gives you useful information about which constraints might be responsible for the infeasibility, whereas the solution of a phase I LP provides much less information to help in debugging the problem formulation.
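To make that concrete, here is a hedged sketch (my own construction, not the paper's procedure) of how a certificate localizes the conflict. For a system $Ax \ge b$ with free $x$, it searches for a certificate by solving the auxiliary LP $\max\, b^\top p$ subject to $A^\top p = 0$, $0 \le p \le 1$; a positive optimum proves infeasibility, and the *nonzero entries of $p$* name exactly the constraints involved in the contradiction.

```python
import numpy as np
from scipy.optimize import linprog

def farkas_certificate(A, b):
    """Hypothetical helper: look for p >= 0 with A^T p = 0, b^T p > 0
    by solving  max b^T p  s.t.  A^T p = 0, 0 <= p <= 1.
    Returns the certificate, or None if the system {A x >= b} is feasible."""
    m = A.shape[0]
    res = linprog(c=-b,                          # linprog minimizes, so negate
                  A_eq=A.T, b_eq=np.zeros(A.shape[1]),
                  bounds=[(0.0, 1.0)] * m)
    p = res.x
    return p if p is not None and b @ p > 1e-9 else None

# A system where the conflict is less obvious than in the toy example:
#   x1 + x2 >= 4,   -x1 >= -1,   -x2 >= -1,   x1 - x2 >= -10
# The first three constraints clash (x1 <= 1, x2 <= 1 force x1 + x2 <= 2);
# the fourth is harmless.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0], [1.0, -1.0]])
b = np.array([4.0, -1.0, -1.0, -10.0])

p = farkas_certificate(A, b)
print("certificate:", p)  # the entry for the harmless fourth constraint is 0
```

The phase I objective, by contrast, only reports *that* no feasible point exists; the certificate's support points directly at the subset of rows to re-examine.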