Why is it that when we solve a constrained optimization problem using the substitution method we take total derivatives, but when we solve the same problem using the Lagrange method we take partial derivatives?
- If we have a constrained optimization problem with multiple constraints and we substitute one constraint into the objective function but not the other, do we use total derivatives, partial derivatives, or a combination when we then solve it with the Lagrangian?
For the bullet point, here is an example. Suppose we want to solve $$\max_{x,y,z} f(x,y,z) \\ s.t. \quad g_1(x,y)=m\\g_2(x,y,z) = n$$ Then suppose we solve the first constraint for $y$ to get $y=h(x,m)$, which we plug in to obtain $$\max_{x,z} f(x,h(x,m),z) \\ s.t. \quad g_2(x,h(x,m),z)=n$$ If we want to use the Lagrange method here, the Lagrangian is $$\Lambda= f(x,h(x,m),z) + \lambda \bigl(n-g_2(x,h(x,m),z)\bigr)$$ Now, are the first-order conditions $$\frac{\partial \Lambda}{\partial x} =0,\qquad \frac{\partial \Lambda}{\partial z} =0,$$ or are they $$\frac{d \Lambda}{d x} =0, \qquad \frac{\partial \Lambda}{\partial z} =0,$$ or something else?
It's $\frac{\partial \Lambda}{\partial x}$. Technically, though, the distinction doesn't matter here, since we treat $x,z,\lambda$ as independent: $$\frac{d\Lambda}{dx} = \frac{\partial \Lambda}{\partial x} \frac{dx}{dx} + \frac{\partial \Lambda}{\partial z} \frac{dz}{dx} + \frac{\partial \Lambda}{\partial \lambda} \frac{d\lambda}{dx} = \frac{\partial \Lambda}{\partial x},$$ because the remaining variables do not depend on $x$, so their derivatives with respect to $x$ are $0$.
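To make this concrete, here is a small SymPy sketch of the reduced problem under an assumed instance (the choices $f = xyz$, $g_1 = x + y$, $g_2 = y + z$ are mine, not from the question). After substituting $y = h(x,m) = m - x$, the first-order conditions are just the partial derivatives of the reduced Lagrangian; SymPy's `diff` treats $z$ and $\lambda$ as independent of $x$, so the partial and total derivatives coincide, exactly as argued above.

```python
import sympy as sp

# Hypothetical instance (my own choice, for illustration):
#   f(x, y, z) = x*y*z,  g1(x, y) = x + y = m,  g2(x, y, z) = y + z = n.
x, z, lam, m, n = sp.symbols('x z lam m n')

h = m - x                              # y = h(x, m): first constraint solved for y
L = x * h * z + lam * (n - (h + z))    # reduced Lagrangian with y substituted out

# First-order conditions: differentiate with respect to x, z, lam.
# diff holds the other symbols fixed, so each is a partial derivative; since
# z and lam do not depend on x, dL/dx and ∂L/∂x are the same expression.
foc = [sp.diff(L, v) for v in (x, z, lam)]
sol = sp.solve(foc, (x, z, lam), dict=True)
```

Each entry of `sol` is a stationary point of the reduced problem; substituting it back into `foc` makes every condition vanish.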