I am numerically solving an optimization problem of the form: maximize $z$ subject to $f(\alpha,z)=c$. Using the method of Lagrange multipliers, I first write down the Lagrangian $$ \mathscr L(\alpha,z,\lambda)=z-\lambda(f(\alpha,z)-c), $$ and setting its gradient equal to zero yields the system of equations $$ \begin{aligned} \lambda\partial_\alpha f(\alpha,z)&=0\\ \lambda\partial_z f(\alpha,z)&=1\\ f(\alpha,z) &=c. \end{aligned} $$ Here is my confusion: I have already proven that $\partial_z f(\alpha,z)>0$ for all $\alpha$ and $z$; thus, by the second equation, $\lambda$ is always a positive constant (in particular, nonzero). If that is the case, why do I need the Lagrange multiplier at all? Wouldn't it suffice to simply solve the system $$ \begin{aligned} \partial_\alpha f(\alpha,z)&=0\\ f(\alpha,z) &=c? \end{aligned} $$ I proceeded to solve this system of two equations numerically and did indeed verify that the solution solves my maximization problem. So do I need the original system of three equations? What am I missing?
Confusion with Lagrange Multipliers
183 views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail)
1 Answer
Your observations are correct, though they apply quite specifically to your problem. It is not uncommon for the method of Lagrange multipliers to yield equations that you either already knew or that are useless (like $0 = 0$).
What is true in general is that you never have to use the method of Lagrange multipliers. It is always possible (perhaps not algebraically, but certainly numerically) to use the constraint to eliminate one of the variables, though this approach may be disadvantageous for a couple of reasons (it may complicate the calculations, for one). For your problem, in many cases we could use the constraint $f(\alpha, z) = c$ to solve for $z$ as a function of $c$ and $\alpha$, and then set the derivative of $z$ with respect to $\alpha$ to zero, as in an ordinary one-variable extremization problem. This leads to exactly the same equations you have already deduced.
The moral of the story? There is no single most efficient way to solve extremization problems; the best approach depends on the nature of the problem.
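To make the reduced two-equation approach concrete, here is a minimal numerical sketch. The constraint function $f(\alpha,z)=z+z^3+\alpha^2$ is hypothetical, chosen only because $\partial_z f = 1+3z^2>0$ everywhere, matching the asker's assumption; the system $\partial_\alpha f=0$, $f(\alpha,z)=c$ is then solved by a hand-rolled Newton iteration:

```python
# Newton iteration on the reduced two-equation system
#   df/dalpha = 0,   f(alpha, z) = c
# for the illustrative (hypothetical) choice
#   f(alpha, z) = z + z**3 + alpha**2,  so df/dz = 1 + 3 z**2 > 0 everywhere.

c = 2.0

def residuals(alpha, z):
    """Residuals of the two equations to be driven to zero."""
    return (2.0 * alpha,                     # df/dalpha
            z + z**3 + alpha**2 - c)         # f(alpha, z) - c

def newton_step(alpha, z):
    """One Newton step: solve the 2x2 linear system J d = -F by hand."""
    f1, f2 = residuals(alpha, z)
    # Jacobian of the residuals: [[2, 0], [2*alpha, 1 + 3*z**2]]
    j11, j12 = 2.0, 0.0
    j21, j22 = 2.0 * alpha, 1.0 + 3.0 * z**2
    det = j11 * j22 - j12 * j21
    d_alpha = (-f1 * j22 + f2 * j12) / det
    d_z     = ( f1 * j21 - f2 * j11) / det
    return alpha + d_alpha, z + d_z

alpha, z = 0.5, 0.5          # arbitrary starting guess
for _ in range(50):
    alpha, z = newton_step(alpha, z)

print(alpha, z)  # converges to alpha = 0, z = 1 (since 1 + 1**3 = 2)
```

With $c=2$, the stationarity equation forces $\alpha=0$ and the constraint reduces to $z+z^3=2$, whose unique real root is $z=1$; the maximizer is $(\alpha^*,z^*)=(0,1)$, exactly as the two-equation system predicts, and no multiplier $\lambda$ ever needs to be computed.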