When solutions to linear programs are discussed, the discussion immediately jumps to basic feasible solutions.
While I understand why they are important and useful in the context of LP, I need to satisfy myself that these basic feasible solutions span the entire solution space.
Is it something along the lines of: every solution to the LP is necessarily a convex combination of basic feasible solutions? How can one show that this is always true?
For the record, I'm talking about the set of solutions to $\mathbf{A}\mathbf{x}=\mathbf{b}$ such that $\mathbf{x} \ge 0$, and I'm assuming the set is bounded and nonempty.
Can anyone shed some light on this?
The concept follows from a famous result by Leonid Kantorovich in 1939, where he proved that if $c\cdot x$ attains a maximum on a polyhedron, it must also attain that maximum on the polyhedron's boundary.
In other words, if $Ax = b, x \ge 0$ describes some trapezoid $T \subset \mathbb{R}^2$, and $c\cdot x$ attains a maximum on $T$, it must also attain it on $\partial T$, which is the set of edges describing the boundary of $T$. So there must be some edge $e \in \partial T$ along which $c \cdot x$ is also maximized.
Now a second application of the same result shows that $c \cdot x$ must also have a maximum on $\partial e$, but the boundary of the edge $e$ consists of just the two endpoints between which $e$ is drawn.
From this example you can see that if $c \cdot x$ attains a maximum anywhere in $T$, it must also attain it at one of the vertices of $T$. This argument extends readily to $\mathbb{R}^n$ for arbitrary finite $n$.
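Here is a quick numerical sanity check of the vertex argument; the trapezoid and the objective direction $c$ are arbitrary choices for illustration, not part of the proof. The key point it exercises is linearity: since $c \cdot \sum_k \lambda_k v_k = \sum_k \lambda_k (c \cdot v_k)$, no convex combination of the vertices can beat the best vertex.

```python
import random

# An arbitrarily chosen trapezoid in R^2, given by its vertices, and an
# arbitrary objective direction c (both are assumptions for illustration).
vertices = [(0.0, 0.0), (4.0, 0.0), (3.0, 2.0), (1.0, 2.0)]
c = (1.0, 2.0)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

vertex_max = max(dot(c, v) for v in vertices)  # attained at (3, 2): 3 + 4 = 7

# Sample random points of the trapezoid as convex combinations of its
# vertices and check that none of them beats the best vertex.
random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in vertices]
    s = sum(w)
    lam = [wi / s for wi in w]                 # lam >= 0, sum(lam) == 1
    x = (sum(l * v[0] for l, v in zip(lam, vertices)),
         sum(l * v[1] for l, v in zip(lam, vertices)))
    assert dot(c, x) <= vertex_max + 1e-9
```

The same check works for any bounded polytope once you have its vertex list, since every point of the polytope is some convex combination of the vertices.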
The intuition behind the proof of Kantorovich's result is exactly what you indicated: from your optimal point you can find a direction inside the polyhedron along which $c \cdot x$ is guaranteed not to decrease, and you move along it until you hit the boundary of the polyhedron in question.
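The decomposition the question asks about can also be made constructive, which is essentially the standard proof: if a feasible $x$ is not basic, the columns of $A$ indexed by the support of $x$ are linearly dependent, so there is a kernel direction supported there; moving to the boundary in both senses writes $x$ as a convex combination of two points with strictly smaller support, and recursing terminates at basic feasible solutions. A sketch, with an arbitrarily chosen standard-form system (the specific $A$, $b$, and $x$ are assumptions for illustration):

```python
import numpy as np

# A small bounded standard-form system {x : Ax = b, x >= 0}, chosen for
# illustration.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 2.0, 1.0]])
b = np.array([2.0, 3.0])

def decompose(x, tol=1e-9):
    """Return [(weight, vertex), ...] with x = sum(w * v), w >= 0, sum(w) = 1."""
    supp = np.where(x > tol)[0]
    cols = A[:, supp]
    if np.linalg.matrix_rank(cols) == len(supp):
        return [(1.0, x)]                      # x is already a basic feasible solution
    # A direction d in ker(A) supported on supp(x): the last right singular
    # vector of the (rank-deficient) column submatrix spans its null space.
    _, _, vt = np.linalg.svd(cols)
    d = np.zeros_like(x)
    d[supp] = vt[-1]
    # Largest steps to the boundary in the +d and -d directions (boundedness
    # guarantees both exist: d must have components of both signs on supp).
    t_plus = min(x[i] / -d[i] for i in supp if d[i] < -tol)
    t_minus = min(x[i] / d[i] for i in supp if d[i] > tol)
    y, z = x + t_plus * d, x - t_minus * d     # boundary points, smaller support
    lam = t_minus / (t_plus + t_minus)         # so that x = lam*y + (1-lam)*z
    return ([(lam * w, v) for w, v in decompose(y)]
            + [((1 - lam) * w, v) for w, v in decompose(z)])

x = np.array([0.5, 1.0, 0.5, 1.0])             # a feasible interior-ish point
parts = decompose(x)
recon = sum(w * v for w, v in parts)           # should reproduce x exactly
```

Each returned vertex is feasible with linearly independent support columns, i.e. a basic feasible solution, and the weights are nonnegative and sum to one, which is exactly the convex-combination representation you were after.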