I am interested in the algorithm for delayed column generation. I understand the general idea: we only add a column to the problem if it has a negative reduced cost (in a minimization problem; the opposite for a maximization problem).
So, the reduced cost is given by $\bar{c}_j = c_j - c_B' B^{-1} A_j$, where:
- $B$ is the basis matrix
- $c_B'$ is the (transposed) vector of cost coefficients of the basic variables
- $A_j$ is column $j$ of the matrix $A$ in the linear program in standard form.
The above can be found in the Bertsimas text on page 232 in chapter 6.
We are looking for a reduced cost that is less than $0$, of course. So it seems we have to examine every column $j$ of $A$ to determine whether there is a column that can be added to our basis.
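To make the formula concrete, here is a small sketch of how the reduced costs of all columns could be computed at once (the matrix $A$, costs $c$, and basis indices below are made-up numbers for illustration only):

```python
import numpy as np

# Hypothetical small LP data (made-up numbers, for illustration only).
# A is the full constraint matrix in standard form, c the cost vector.
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [0.0, 1.0, 2.0, 1.0]])
c = np.array([3.0, 5.0, 4.0, 2.0])

basis = [0, 3]      # indices of the basic columns
B = A[:, basis]     # basis matrix
c_B = c[basis]      # cost coefficients of the basic variables

# Simplex multipliers y' = c_B' B^{-1}; then c_bar_j = c_j - y' A_j.
y = np.linalg.solve(B.T, c_B)
reduced_costs = c - y @ A

print(reduced_costs)  # entries for basic columns are 0 by construction
```

Any strictly negative entry of `reduced_costs` identifies a column that could improve the current basic solution; note that computing this vector explicitly touches every column of $A$, which is exactly the concern raised above.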
If we need to examine all columns anyway, then I'm having trouble seeing why/how this is any faster than simply including ALL the columns in the optimization problem from the beginning.
What obvious thing am I missing?
The source for this is the canonical Bertsimas text, available here.
In practice, the column generation problem is typically solved without explicitly enumerating all of the possible columns and checking the reduced cost of each one. Instead, an optimization subproblem (the pricing problem) is solved to find a column with the most negative reduced cost; if the subproblem's optimal value shows that no column has a negative reduced cost, this proves the current solution is optimal without ever enumerating the columns.
A good example of this can be found in Chapter 13 of Chvátal's Linear Programming textbook, where a column generation approach to the cutting-stock problem is developed.
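To make the pricing idea concrete for cutting stock: each column is a cutting pattern (how many pieces of each width to cut from one roll), its master-problem cost is 1, and the pricing subproblem is an integer knapsack — given the current dual prices $y$, maximize $y'a$ over feasible patterns $a$. A pattern improves the master LP iff its value exceeds 1, i.e., its reduced cost $1 - y'a$ is negative. A minimal sketch with made-up widths and dual prices, solved by dynamic programming over the roll width (no pattern enumeration):

```python
def price_pattern(widths, duals, roll_width):
    """Knapsack pricing: maximize sum(duals[i] * a[i]) subject to
    sum(widths[i] * a[i]) <= roll_width, a[i] nonnegative integers.
    Returns (best_value, best_pattern). The pattern improves the
    master LP iff best_value > 1 (reduced cost 1 - best_value < 0)."""
    n = len(widths)
    best = [0.0] * (roll_width + 1)     # best dual value achievable with capacity w
    choice = [None] * (roll_width + 1)  # last item added to reach best[w]
    for w in range(1, roll_width + 1):
        for i in range(n):
            if widths[i] <= w and best[w - widths[i]] + duals[i] > best[w]:
                best[w] = best[w - widths[i]] + duals[i]
                choice[w] = i
    # Recover the pattern by walking back through the recorded choices.
    pattern = [0] * n
    w = roll_width
    while choice[w] is not None:
        pattern[choice[w]] += 1
        w -= widths[choice[w]]
    return best[roll_width], pattern

# Hypothetical instance: piece widths, current dual prices, roll width 10.
value, pattern = price_pattern([3, 5, 7], [0.4, 0.6, 0.9], 10)
print(value, pattern)  # value > 1 means this pattern enters the master LP
```

This is the key point: the subproblem searches the (exponentially large) set of patterns implicitly, so the simplex method on the restricted master never needs the full column set.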