Let $f(\cdot)$ be a linear function.
$f:\mathbb{R}^n\rightarrow\mathbb{R}$
$\;\quad\;\mathbf{x}\;\mapsto f(\mathbf{x})$.
Let $\mathbf{A}$ be a matrix in $\mathbb{R}^{m\times n}$.
Let $\mathbf{b}=\left(b_1, b_2, \dotsc, b_m\right)^\mathrm{T}$ be a vector in $\mathbb{R}^{m}$.
The following problem $(P)$:
$\mathrm{maximize}\;\;\; f(\mathbf{x})$
subject to: $\;\;\mathbf{A}\mathbf{x}\leq \mathbf{b}$.
is a $0$-$1$ (binary) programming problem if $\mathbf{x}\in\{0, 1\}^n$, and a linear programming problem if $\mathbf{x}\in\mathbb{R}^n$.
Correspondingly, problem $(P)$ is hard to solve if $\mathbf{x}\in\{0, 1\}^n$ and easy to solve if $\mathbf{x}\in\mathbb{R}^n$.
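To see why the binary case is considered hard: with no extra structure to exploit, the only generic guarantee is to enumerate all $2^n$ points of $\{0, 1\}^n$, which grows exponentially in $n$. A minimal brute-force sketch on a hypothetical toy instance (the data `c`, `A`, `b` below are made up for illustration):

```python
# Brute-force 0-1 programming: check every point of {0,1}^n.
# Toy instance: maximize x1 + 2*x2 + 3*x3  subject to  x1 + x2 + x3 <= 2.
from itertools import product

c = [1, 2, 3]        # objective coefficients (hypothetical)
A = [[1, 1, 1]]      # constraint matrix (hypothetical)
b = [2]              # right-hand side (hypothetical)

best_val, best_x = float("-inf"), None
for x in product([0, 1], repeat=len(c)):          # 2^n candidate points
    feasible = all(sum(a_i * x_i for a_i, x_i in zip(row, x)) <= b_k
                   for row, b_k in zip(A, b))
    if feasible:
        val = sum(c_i * x_i for c_i, x_i in zip(c, x))
        if val > best_val:
            best_val, best_x = val, x

print(best_x, best_val)   # -> (0, 1, 1) 5
```

Here $2^3 = 8$ candidates are cheap to check, but doubling $n$ squares the work; the continuous relaxation, by contrast, is solvable in polynomial time.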
My question is the following. It is a very basic question, and may be a naive one, but I really need clarification.
- If $\mathbf{x}\in\mathbb{R}^n$, why do we not simply compute the derivative of $f(\cdot)$ at a point $\mathbf{x}$, use it to locate the minima and maxima, verify the constraints, and thereby solve $(P)$?
- If $\mathbf{x}\in\{0, 1\}^n$, why can we not do the same thing?
- Does the derivative make things complicated?
- Does the derivative exist if $\mathbf{x}\in\{0, 1\}^n$?
Any explanation would be appreciated. Thank you very much for your help.
EDIT: The function $f(\cdot)$ could also be nonlinear, with a non-constant derivative. AFAIK this makes the problem even more complex: it becomes a $0$-$1$ nonlinear (binary) programming problem or a nonlinear programming problem. Can the derivative still not solve $(P)$?
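One way to see why "set the derivative to zero" fails even for smooth nonlinear $f$: the unconstrained stationary point may be infeasible, and at the constrained optimum the derivative need not vanish. A hypothetical one-dimensional sketch:

```python
# Illustrative example: maximize f(x) = -(x - 3)^2  subject to  x <= 1.
def f(x):
    return -(x - 3) ** 2

def df(x):
    return -2 * (x - 3)    # derivative of f

# The stationary point of f (where the derivative vanishes) ...
x_stat = 3.0
assert df(x_stat) == 0.0
# ... violates the constraint x <= 1, so it is not feasible.
print(x_stat <= 1)         # False: infeasible

# The constrained maximizer sits on the boundary x = 1,
# where the derivative does NOT vanish.
x_opt = 1.0
print(df(x_opt))           # 4.0: nonzero at the constrained optimum
```

So the derivative alone cannot solve $(P)$; it must be combined with the constraints (this is exactly what the KKT conditions formalize).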
Suppose that $n=2$ and $f(\mathbf{x})=x_1+x_2$, so $f$ is increasing in both $x_1$ and $x_2$. The problem is then to find, among all points $(x_1, x_2)$ that satisfy the inequality constraints, the one that yields the highest value of $f$. In practice, you implicitly cycle through different combinations of constraints in an intelligent way, making each constraint in the combination binding, verifying the others, and comparing the resulting $f$-values.
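This binding-constraint search can be sketched by brute force for $n=2$: enumerate every pair of constraints, make both binding (solve the resulting $2\times 2$ linear system), discard infeasible intersections, and keep the best $f$-value. The constraints below are hypothetical, chosen only for illustration; the simplex method does this cycling far more intelligently:

```python
# Vertex enumeration for a 2-variable LP: maximize x1 + x2 subject to
# x1 <= 3, x2 <= 2, x1 + x2 <= 4, x1 >= 0, x2 >= 0 (hypothetical data).
from itertools import combinations

# rows of A x <= b; nonnegativity is encoded as -x_i <= 0
A = [(1, 0), (0, 1), (1, 1), (-1, 0), (0, -1)]
b = [3, 2, 4, 0, 0]

def solve2x2(r1, r2, b1, b2):
    """Intersection of two binding constraints (Cramer's rule); None if parallel."""
    det = r1[0] * r2[1] - r1[1] * r2[0]
    if det == 0:
        return None
    x1 = (b1 * r2[1] - r1[1] * b2) / det
    x2 = (r1[0] * b2 - b1 * r2[0]) / det
    return (x1, x2)

best = None
for i, j in combinations(range(len(A)), 2):   # pick two constraints to bind
    pt = solve2x2(A[i], A[j], b[i], b[j])
    if pt is None:
        continue
    # verify the remaining constraints at the intersection point
    if all(a1 * pt[0] + a2 * pt[1] <= bk + 1e-9 for (a1, a2), bk in zip(A, b)):
        val = pt[0] + pt[1]                   # f(x) = x1 + x2
        if best is None or val > best[0]:
            best = (val, pt)

print(best)   # -> (4.0, (3.0, 1.0))
```

Note that the maximum value $4$ is attained on a whole edge of the feasible region (both $(3,1)$ and $(2,2)$ achieve it); the enumeration simply reports the first vertex it finds. Since a linear $f$ has a constant, nonzero gradient, its maximum over the feasible region is always attained at such a vertex, which is why the derivative alone tells you nothing about *where* to stop.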