A textbook asks,
For each $b \in \mathbb{R}^m$, let $\xi^*(b)$ denote the optimal objective function value for the following linear program:
$$\begin{align}&\text{maximize} & c^T x \\&\text{subject to} & Ax & \leq b \\& & x & \geq 0\end{align}$$
Suppose that $\xi^*(b) \lt \infty$ for all $b$. Show that the function $\xi^*(b)$ is concave (a function $f$ on $\mathbb{R}^m$ is called concave if $f\left(tx+(1−t)y\right) \geq tf(x)+(1−t)f(y)$ for all $x$ and $y$ in $\mathbb{R}^m$ and all $0 \lt t \lt 1$). Hint: Consider the dual problem.
(Vanderbei, Linear Programming: Foundations and Extensions 4e, exercise 10.2)
We must show that for any pair $b_1, b_2 \in \mathbb{R}^m$ (each with $\xi^*(b_i) \lt \infty$, as assumed) and any $t \in (0,1)$,
$$\xi^*\left(t b_1 + (1-t) b_2 \right) \geq t \xi^*\left(b_1\right)+ (1-t) \xi^*\left(b_2\right)\tag{♥}$$
That is, the optimal objective value associated with a convex combination of $b_i$s is greater than or equal to the same convex combination of the optimal objective values of the $b_i$s taken separately.
Case 1: $b_1$ and $b_2$ yield the same optimal basis.
Let $\Delta b = b_2 - b_1$. Then the argument on the left side of $(♥)$ can be written as $b_2 - t\,\Delta b$.
Let $c_\mathcal{B}$ denote the costs associated with the basic variables and $B$ denote the basis matrix in this common optimal dictionary. The basis is primal feasible at both endpoints, i.e. $B^{-1} b_1 \geq 0$ and $B^{-1} b_2 \geq 0$, so $B^{-1}\left(t b_1 + (1-t) b_2\right) = t\, B^{-1} b_1 + (1-t)\, B^{-1} b_2 \geq 0$ and the basis stays primal feasible for all $t \in (0,1)$; dual feasibility does not depend on $b$, so the optimal dictionary remains optimal along the whole segment (this is the ranging argument from the chapter on sensitivity analysis). The cost associated with a combination of $b_1$ and $b_2$ is then
$$\begin{align} \xi^*\left( t b_1 + (1-t) b_2 \right) & = c_\mathcal{B}^T B^{-1} \left( t b_1 + (1-t) b_2 \right) \\ & = t\, c_\mathcal{B}^T B^{-1} b_1 + (1-t)\, c_\mathcal{B}^T B^{-1} b_2 \\ & = t\, \xi^*\left(b_1\right)+ (1-t)\, \xi^*\left(b_2\right) \end{align}$$
and equality holds in $(♥)$. Therefore, the objective value for $t$ in this range is just the corresponding linear interpolation of the objective values associated with $b_1$ and $b_2$.
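As a sanity check on Case 1, here is a sketch of my own (not from the book): `lp_max` is a hypothetical brute-force helper that solves a two-variable LP by enumerating the vertices of the feasible polygon instead of running the simplex method, and the example is an LP for which the same basis is optimal at every $b \geq 0$, so $\xi^*$ should interpolate linearly along the segment.

```python
from itertools import combinations

def lp_max(c, A, b, tol=1e-9):
    """Solve max c.x s.t. A x <= b, x >= 0 for x in R^2, assuming a finite
    optimum, by brute-force vertex enumeration (hypothetical helper,
    not the book's simplex code)."""
    # All constraints as (g, h) meaning g.x <= h, including x >= 0.
    rows = [(g, h) for g, h in zip(A, b)] + [((-1.0, 0.0), 0.0), ((0.0, -1.0), 0.0)]
    best = None
    for (g1, h1), (g2, h2) in combinations(rows, 2):
        det = g1[0] * g2[1] - g1[1] * g2[0]
        if abs(det) < tol:
            continue  # parallel constraint lines: no vertex here
        # Cramer's rule for the intersection of the two constraint lines.
        x = ((h1 * g2[1] - h2 * g1[1]) / det, (g1[0] * h2 - g2[0] * h1) / det)
        if all(g[0] * x[0] + g[1] * x[1] <= h + tol for g, h in rows):
            v = c[0] * x[0] + c[1] * x[1]
            best = v if best is None else max(best, v)
    return best

# max x1 + 2 x2  s.t.  x1 <= b[0], x2 <= b[1]: the same basis (both
# structural constraints tight) is optimal for every b >= 0, so Case 1 applies.
c, A = (1.0, 2.0), [(1.0, 0.0), (0.0, 1.0)]
b1, b2 = (1.0, 2.0), (4.0, 3.0)
for t in (0.25, 0.5, 0.75):
    bt = tuple(t * u + (1 - t) * v for u, v in zip(b1, b2))
    lhs = lp_max(c, A, bt)
    rhs = t * lp_max(c, A, b1) + (1 - t) * lp_max(c, A, b2)
    assert abs(lhs - rhs) < 1e-9  # equality: xi* is linear along this segment
```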
I visualize the graph of $\xi^*$ over the segment as a straight line between $\xi^*(b_1)$ and $\xi^*(b_2)$.
Case 2: $b_1$ and $b_2$ yield different optimal bases.
This is where my reasoning gets sketchy.
Referring again to sensitivity analysis, there will be one or more values of $t \in (0,1)$ at which a change of basis (a pivot) is required. Since the basis changes, so do the cost coefficients in $c_\mathcal{B}$, and the objective value changes at a different rate on either side of each such breakpoint.
The graph of $\xi^*$ along the segment is now piecewise linear, with a bend at each basis change (the orange line in my sketch), and we can compare it to the straight blue line given by a simple linear interpolation of the optimal objective values associated with $b_1$ and $b_2$.
The problem asks us to show that
$$\text{orange line} \geq \text{blue line}$$
for all pairs of $b_i$. This is true in the graph above, but it seems to me that the orange line could also have its "bend" on the other side of the blue line, violating the inequality.
How should I proceed? The problem suggests that I consider the dual, and I'm wondering if I am perhaps supposed to relate this to the weak duality theorem. I would appreciate a hint.


Weak duality actually cuts the other way from how it is tempting to use it here: for this maximization problem, any dual feasible $y$ gives an *upper* bound $c^T x \leq b^T y$ on the primal objective, not a lower bound. So instead of perturbing an endpoint's dual solution, take the optimal dual solution of the combined problem and use it to bound the endpoints.
Write $b_t = t b_1 + (1-t) b_2$, and recall the dual: minimize $b^T y$ subject to $A^T y \geq c$, $y \geq 0$. Its feasible region does not depend on $b$. If either endpoint problem is infeasible, then $\xi^*(b_i) = -\infty$ and $(♥)$ holds trivially, so assume both are feasible; a convex combination of feasible points is then feasible for $b_t$, and since $\xi^*(b_t) \lt \infty$, the simplex method guarantees an optimal primal solution $x_t$ and an optimal dual solution $y_t$ with $c^T x_t = b_t^T y_t$.
That is,
$$\begin{align} \xi^*\left(t b_1 + (1-t) b_2\right) & = (t b_1 + (1-t) b_2)^T y_t & \text{strong duality at $b_t$} \\ & = t\, b_1^T y_t + (1-t)\, b_2^T y_t \\ & \geq t\, \xi^*(b_1) + (1-t)\, \xi^*(b_2) & \text{because $b_i^T y_t \geq \xi^*(b_i)$} \end{align}$$
($b_i^T y_t \geq \xi^*(b_i)$ because $y_t$ is feasible for the dual problem at $b_i$, which is a minimization problem whose optimal value equals $\xi^*(b_i)$.)
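To see the inequality become strict at a basis change, here is a small numeric illustration (a self-contained sketch; `lp_max` is a hypothetical brute-force vertex enumerator for two-variable LPs, standing in for the simplex method):

```python
from itertools import combinations

def lp_max(c, A, b, tol=1e-9):
    """Solve max c.x s.t. A x <= b, x >= 0 for x in R^2, assuming a finite
    optimum, by brute-force vertex enumeration (hypothetical helper)."""
    rows = [(g, h) for g, h in zip(A, b)] + [((-1.0, 0.0), 0.0), ((0.0, -1.0), 0.0)]
    best = None
    for (g1, h1), (g2, h2) in combinations(rows, 2):
        det = g1[0] * g2[1] - g1[1] * g2[0]
        if abs(det) < tol:
            continue  # parallel constraint lines: no vertex here
        x = ((h1 * g2[1] - h2 * g1[1]) / det, (g1[0] * h2 - g2[0] * h1) / det)
        if all(g[0] * x[0] + g[1] * x[1] <= h + tol for g, h in rows):
            v = c[0] * x[0] + c[1] * x[1]
            best = v if best is None else max(best, v)
    return best

# max 2 x1 + x2  s.t.  x1 <= b[0], x2 <= b[1], x1 + x2 <= b[2].
c, A = (2.0, 1.0), [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
b1, b2 = (4.0, 0.0, 2.0), (0.0, 4.0, 2.0)  # different optimal bases
t = 0.5
bt = tuple(t * u + (1 - t) * v for u, v in zip(b1, b2))  # (2, 2, 2)
lhs = lp_max(c, A, bt)                                   # 4, at x = (2, 0)
rhs = t * lp_max(c, A, b1) + (1 - t) * lp_max(c, A, b2)  # 0.5*4 + 0.5*2 = 3
assert lhs >= rhs - 1e-9 and lhs - rhs > 0.5  # strictly above the chord
```

The "bend" at the basis change can only point upward, which is exactly what the dual argument above proves.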