Sensitivity of *part* of an objective function to a parameter


Let $f:\mathbb{R}\to \mathbb{R}$ be a concave function, let $g:\mathbb{R}\to \mathbb{R}$, and consider the maximization problem $$ \max_x f(x)+\varepsilon g(x), $$ for some $\varepsilon\geq 0$. Denote by $x^*(\varepsilon)$ the maximizer and suppose that it is unique. I would like to show that $f(x^*(\varepsilon))$ is decreasing in $\varepsilon$. It seems intuitive, and it is clearly true when we move away from $\varepsilon=0$: at $\varepsilon=0$ there is no weight on $g$ in the optimization, so $x^*(0)$ maximizes $f$ itself, and increasing $\varepsilon$ from $0$ must (weakly) lower $f(x^*(\varepsilon))$. But is that always true as we move from an arbitrary $\varepsilon_1>0$ to an arbitrary $\varepsilon_2>\varepsilon_1$?
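
As a quick sanity check, a simple quadratic example is consistent with the claim: taking $$ f(x)=-(x-a)^2,\qquad g(x)=-(x-b)^2, $$ the first-order condition gives $$ x^*(\varepsilon)=\frac{a+\varepsilon b}{1+\varepsilon},\qquad f(x^*(\varepsilon))=-\left(\frac{\varepsilon(b-a)}{1+\varepsilon}\right)^2, $$ which is weakly decreasing in $\varepsilon$ because $\varepsilon/(1+\varepsilon)$ is increasing on $\varepsilon\ge 0$.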

Also, I'm actually interested in a more complicated version of this result in which $x$ lives in a compact subset of $\mathbb{R}^n$, so the more general the proof/explanation, the better. Thanks for any help!

Best answer

Suppose $f$ and $g$ are strictly concave and twice differentiable, to keep things simple. Then $f+\varepsilon g$ is also a strictly concave function, and the first-order condition characterizes the unique maximum if one exists.

Differentiating $f(x^*(\varepsilon))$ with respect to $\varepsilon$ gives $$ f'(x^*(\varepsilon))\dfrac{dx^*(\varepsilon)}{d\varepsilon}. $$ For the perturbed problem, the first-order condition is $$ f'(x^*(\varepsilon)) + \varepsilon g'(x^*(\varepsilon)) = 0, $$ and implicitly differentiating, $$ f''(x^*(\varepsilon)) \dfrac{dx^*(\varepsilon)}{d\varepsilon} + g'(x^*(\varepsilon)) + \varepsilon g''(x^*(\varepsilon)) \dfrac{dx^*(\varepsilon)}{d\varepsilon} = 0, $$ so that $$ \dfrac{dx^*(\varepsilon)}{d\varepsilon}=\dfrac{-g'(x^*(\varepsilon))}{f''(x^*(\varepsilon))+\varepsilon g''(x^*(\varepsilon))}. $$ With $f''<0$ and $g''\le 0$ at $x^*(\varepsilon)$ (the generic strictly concave case), the denominator is non-zero and negative, which cancels the negative in the numerator. That means the sign of $dx^*(\varepsilon)/d\varepsilon$ is the sign of $g'(x^*(\varepsilon))$. So $$ \operatorname{sign}\left(f'(x^*(\varepsilon))\dfrac{dx^*(\varepsilon)}{d\varepsilon}\right) = \operatorname{sign}\left(f'(x^*(\varepsilon))\,g'(x^*(\varepsilon))\right). $$
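
A minimal numerical sketch of this formula, assuming the hypothetical quadratics $f(x)=-(x-1)^2$ and $g(x)=-(x-3)^2$ (the names and values here are mine, chosen only to make a finite-difference comparison easy to run):

```python
# Check the implicit-function formula against a finite difference,
# using the (hypothetical) quadratics f(x) = -(x-a)^2, g(x) = -(x-b)^2,
# for which f'' = g'' = -2 everywhere.
from scipy.optimize import minimize_scalar

a, b = 1.0, 3.0
f, fp = lambda x: -(x - a) ** 2, lambda x: -2 * (x - a)   # f and f'
g, gp = lambda x: -(x - b) ** 2, lambda x: -2 * (x - b)   # g and g'

def x_star(eps):
    # Maximizer of f + eps*g (closed form: (a + eps*b)/(1 + eps)).
    return minimize_scalar(lambda x: -(f(x) + eps * g(x))).x

eps, h = 0.7, 1e-5
xs = x_star(eps)

formula = -gp(xs) / (-2.0 + eps * (-2.0))                 # -g'/(f'' + eps*g'')
fin_diff = (x_star(eps + h) - x_star(eps - h)) / (2 * h)
print(formula, fin_diff)                                  # the two should agree

# f' and g' have opposite signs at x*(eps), so f(x*(eps)) is decreasing here:
print(fp(xs) * gp(xs) <= 0)
```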

So the intuition is: the peaks of $f$ and $g$ alone bracket the peak of $f+\varepsilon g$, so $f'(x^*(\varepsilon))\,g'(x^*(\varepsilon))$ is always negative because the two derivatives have opposite signs there: you're on the "downslope" of one function and the "upslope" of the other. Otherwise, the signs would both be positive (to the left of both maxima) or both negative (to the right of both maxima), and you could improve the value of both $f$ and $g$ by moving toward the maxima, contradicting optimality. Hence $f(x^*(\varepsilon))$ is (weakly) decreasing in $\varepsilon$.

Can this kind of argument work in $\mathbb{R}^N$? Hmm. The problem is that it introduces the Hessians: implicit differentiation of the first-order condition now gives $$ H[f]\,\dfrac{dx^*(\varepsilon)}{d\varepsilon} + \nabla g + \varepsilon H[g]\,\dfrac{dx^*(\varepsilon)}{d\varepsilon} = 0, $$ so that $$ \dfrac{dx^*(\varepsilon)}{d\varepsilon} = - \big(H[f]+\varepsilon H[g]\big)^{-1}\nabla g(x^*(\varepsilon)) $$ and $$ \nabla f(x^*(\varepsilon))' \,\dfrac{dx^*(\varepsilon)}{d\varepsilon} = -\nabla f(x^*(\varepsilon))'\big(H[f]+\varepsilon H[g]\big)^{-1}\nabla g(x^*(\varepsilon)). $$ I can think of a lot of ways that might work to sign that expression, but none obviously works at the moment without imposing additional assumptions on the Hessians.
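
For what it's worth, the formula itself can be sanity-checked in $\mathbb{R}^2$ with quadratic forms (again a hypothetical example of mine; it verifies the expression numerically but does not sign it in general):

```python
# Same finite-difference check in R^2 with (hypothetical) quadratic forms
# f(x) = -0.5*(x-a)'A(x-a), g(x) = -0.5*(x-b)'B(x-b), A, B positive definite,
# so H[f] = -A and H[g] = -B.
import numpy as np

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
a = np.array([1.0, 0.0])
b = np.array([0.0, 2.0])

def x_star(eps):
    # FOC: -A(x - a) - eps*B(x - b) = 0  =>  (A + eps*B) x = A a + eps*B b
    return np.linalg.solve(A + eps * B, A @ a + eps * B @ b)

eps, h = 0.5, 1e-6
xs = x_star(eps)

grad_g = -B @ (xs - b)                              # grad g at x*(eps)
formula = -np.linalg.solve(-(A + eps * B), grad_g)  # -(H[f] + eps*H[g])^{-1} grad g
fin_diff = (x_star(eps + h) - x_star(eps - h)) / (2 * h)
print(formula, fin_diff)                            # should agree componentwise

grad_f = -A @ (xs - a)                              # grad f at x*(eps)
# d f(x*(eps))/d eps: negative for this particular A, B, but unsigned in general.
print(grad_f @ formula)
```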

Perhaps you can use an argument like this: maximize $f(x)$ alone, subject to the constraint $f(x) +\varepsilon g(x) \ge f(x^*(\varepsilon))+\varepsilon g(x^*(\varepsilon))$. Since the right-hand side of the inequality is the maximized value of the perturbed problem, the solution will be $x^*(\varepsilon)$ itself, and you can then use the envelope theorem to see how the perturbation in $\varepsilon$ changes the value of your objective.
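
A rough numerical sketch of that constrained reformulation, with the same hypothetical quadratics as above (note the feasible set collapses to a single point, so this only illustrates the setup, not the envelope step):

```python
# Solve max f(x) s.t. f(x) + eps*g(x) >= V(eps) and confirm the solution
# coincides with x*(eps) from the unconstrained perturbed problem.
from scipy.optimize import minimize, minimize_scalar

f = lambda x: -(x - 1.0) ** 2
g = lambda x: -(x - 3.0) ** 2
eps = 0.5

x_star = minimize_scalar(lambda x: -(f(x) + eps * g(x))).x
V = f(x_star) + eps * g(x_star)   # maximized value of the perturbed problem

# Maximize f alone subject to f(x) + eps*g(x) >= V.
res = minimize(lambda x: -f(x[0]), x0=[x_star],
               constraints=[{"type": "ineq",
                             "fun": lambda x: f(x[0]) + eps * g(x[0]) - V}])
print(x_star, res.x[0])           # both should be (1 + 3*eps)/(1 + eps)
```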