Let $x^\star$ be a solution of the linear program \begin{align} \text{maximize} &\quad c \cdot x \\ \text{subject to} &\quad A \cdot x \leq b \end{align} How can one compute the derivatives of $x^\star$ with respect to the linear program parameters? \begin{align} \frac{\partial x^\star}{\partial A} \qquad \frac{\partial x^\star}{\partial b} \qquad \frac{\partial x^\star}{\partial c} \end{align}
where derivatives are interpreted in the matrix calculus sense, e.g. $$\left( \frac{\partial x}{\partial A} \right)_{ijk} = \frac{\partial x_i}{ \partial A_{jk}}$$
I suspect the KKT conditions might be useful here.
This is known as sensitivity analysis. If you have a non-degenerate optimal basic feasible solution, it is relatively simple to find derivatives of the optimal BFS or the optimal objective value with respect to changes in $b$ or $c$. Changes to $A$ can also be analyzed, but this is somewhat more complicated.
If the optimal BFS is degenerate, these derivatives may not exist.
See just about any textbook on linear programming.
Here's a brief explanation.
First, put the LP into standard form: negate $c$ to turn the maximization into a minimization, split the free variables into differences of nonnegative variables, and add slack variables to eliminate the inequality constraints:
$\min c^{T}x$
subject to
$Ax=b$
$x \geq 0$
Here $A$ is an $m \times n$ matrix with rank $m$ (from here on, $A$, $c$, and $x$ denote the standard-form data rather than the original parameters).
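For concreteness, here is a minimal sketch of that conversion (my own illustrative code; the function name is hypothetical, not part of any library API):

```python
import numpy as np

# Convert  max c.x  s.t.  A x <= b  (x free)  into standard form
#   min c_s.x_s  s.t.  A_s x_s = b,  x_s >= 0
# by writing x = x_plus - x_minus and adding one slack per row.
def to_standard_form(A, b, c):
    m = A.shape[0]
    A_s = np.hstack([A, -A, np.eye(m)])          # columns: x+, x-, slacks
    c_s = np.concatenate([-c, c, np.zeros(m)])   # negate c: max -> min
    return A_s, b, c_s
```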
If there is a unique and non-degenerate optimal basic feasible solution, then the variables in $x$ can be split into a vector $x_{B}$ of $m$ basic variables and a vector $x_{N}$ of $n-m$ nonbasic variables.
Let $B$ be the matrix obtained by taking the columns of $A$ that are in the basis, and let $A_{N}$ consist of the remaining columns of $A$. Similarly, let $c_{B}$ and $c_{N}$ be the coefficients in $c$ corresponding to the basic and nonbasic variables.
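In code, with a known optimal basis, this split is just column selection (a sketch; `basis`, the list of the $m$ basic column indices, is assumed to come from your solver):

```python
import numpy as np

# Split standard-form data into basic and nonbasic parts,
# given `basis`, the list of m basic column indices.
def split_basis(A, c, basis):
    nonbasis = [j for j in range(A.shape[1]) if j not in basis]
    B, A_N = A[:, basis], A[:, nonbasis]
    c_B, c_N = c[basis], c[nonbasis]
    return B, A_N, c_B, c_N
```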
You can now write the problem as
$\min c_{B}^{T}x_{B}+c_{N}^{T}x_{N}$
subject to
$Bx_{B}+A_{N}x_{N}=b$
$x \geq 0$.
Using the equality constraints to solve for $x_{B}$, we can rewrite the problem as
$\min c_{B}^{T}B^{-1}b + (c_{N}^{T}-c_{B}^{T}B^{-1}A_{N})x_{N} $
subject to
$x_{B}=B^{-1}b-B^{-1}A_{N}x_{N} $
$x \geq 0.$
An important optimality condition that the simplex method ensures is that
$r_{N}^{T}=c_{N}^{T}-c_{B}^{T}B^{-1}A_{N} \geq 0.$
If the solution is also dual non-degenerate, then $r_{N}>0$. We'll need to assume that as well.
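If you want to check these assumptions numerically, the reduced costs are cheap to compute (a sketch, reusing the names from `split_basis` above):

```python
import numpy as np

# Reduced costs: r_N^T = c_N^T - c_B^T B^{-1} A_N,
# computed via the simplex multipliers y = B^{-T} c_B.
def reduced_costs(B, A_N, c_B, c_N):
    y = np.linalg.solve(B.T, c_B)
    return c_N - A_N.T @ y

# optimality: (r_N >= 0).all();  dual non-degeneracy: (r_N > 0).all()
```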
In the optimal basic solution, we set all of the variables in $x_{N}$ to $x_{N}^{*}=0$ and get the values of the basic variables from $x_{B}^{*}=B^{-1}b$. By assumption, this optimal basic feasible solution is non-degenerate, meaning that $B^{-1}b$ is strictly positive in every component. Small changes to $b$ don't affect $r_{N}$ at all and won't violate $x_{B} \geq 0$, so the same basis remains optimal after small changes to $b$.
Now, it should be clear that
$\frac{\partial x_{B}^{*}}{\partial b}=B^{-1}$
and
$\frac{\partial x_{N}^{*}}{\partial b}=0.$
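To see this numerically, here is a small self-contained sketch using scipy.optimize.linprog, with invented problem data. It works directly in the original inequality form, where the optimal vertex solves the system of active constraints, so the Jacobian of $x^{*}$ with respect to the active components of $b$ is the inverse of the active-constraint matrix (the inequality-form counterpart of $\partial x_{B}^{*}/\partial b=B^{-1}$):

```python
import numpy as np
from scipy.optimize import linprog

# Invented example:  max x1 + 2 x2  s.t.  x1 <= 4, x2 <= 3, x1 + x2 <= 5.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([4.0, 3.0, 5.0])
c = np.array([1.0, 2.0])

def solve(b):
    # linprog minimizes, so negate c; x is free, so override default bounds
    return linprog(-c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2).x

x_star = solve(b)                       # [2, 3]
active = np.isclose(A @ x_star, b)      # constraints 2 and 3 are tight
B_act = A[active]                       # square and invertible here

# Finite-difference Jacobian of x* w.r.t. the active components of b
eps = 1e-6
J = np.empty((2, 2))
for k, i in enumerate(np.where(active)[0]):
    b_pert = b.copy()
    b_pert[i] += eps
    J[:, k] = (solve(b_pert) - x_star) / eps

print(np.allclose(J, np.linalg.inv(B_act), atol=1e-4))   # True
```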
Small changes in $c$ won't change $x_{B}^{*}=B^{-1}b$ at all, and won't violate $r_{N} \geq 0$, since $r_{N}>0$ by dual non-degeneracy. Thus the basis remains optimal and $x^{*}$ doesn't move, although the optimal objective value will change. Hence
$\frac{\partial x_{B}^{*}}{\partial c}=0$
and
$\frac{\partial x_{N}^{*}}{\partial c}=0.$
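Continuing the same sketch (reusing `A`, `b`, `c`, and `x_star` from above), a small perturbation of $c$ leaves $x^{*}$ where it is; only the objective value moves:

```python
# Perturb c slightly; the optimal vertex should not move.
c_pert = c + np.array([1e-4, -1e-4])
res = linprog(-c_pert, A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
print(np.allclose(res.x, x_star))   # True
```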
If the assumptions of non-degeneracy are violated then these derivatives may simply not exist!
In a similar way, you can analyze changes in $A_{N}$ or in $B$.
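For example, since $x_{B}^{*}=B^{-1}b$, applying the identity $\partial(B^{-1})=-B^{-1}(\partial B)B^{-1}$ gives
$\frac{\partial (x_{B}^{*})_{i}}{\partial B_{jk}}=-(B^{-1})_{ij}(x_{B}^{*})_{k},$
while small changes in $A_{N}$ move only $r_{N}$, which stays positive by dual non-degeneracy, so $\frac{\partial x^{*}}{\partial A_{N}}=0$.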
This is discussed in V. Chvátal's Linear Programming, among many other textbooks on the subject.