Question on Euler-Lagrange equation and integral constraints


Let $L$ be a smooth function and define $J[f]:= \int_a^b L(x,f(x),f'(x)) dx$ for all smooth functions $f$.

If $f$ is an extremum of $J$ (among smooth functions with fixed values at $a$ and $b$), then it satisfies the Euler-Lagrange equation for $L$: $\frac{\partial L}{\partial f}(x,f(x),f'(x)) - \frac{d}{dx} \frac{\partial L}{\partial f'}(x,f(x),f'(x)) = 0$ on $[a,b]$.

I know this result and how to prove it.
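For concreteness, I have checked the unconstrained equation symbolically with SymPy's `euler_equations`, using the toy Lagrangian $L = (f'^2 - f^2)/2$ (my own choice, just for illustration), whose extremals satisfy $f'' = -f$:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
f = sp.Function('f')

# Toy Lagrangian L = (f'^2 - f^2)/2, chosen only as an example
L = (f(x).diff(x)**2 - f(x)**2) / 2

# Euler-Lagrange: dL/df - d/dx (dL/df') = 0, i.e. extremals satisfy f'' = -f
eq = euler_equations(L, f(x), x)[0]

# f(x) = sin(x) solves f'' = -f, so it should make the equation's
# left-hand side vanish identically:
residual = eq.lhs.subs(f(x), sp.sin(x)).doit()
print(sp.simplify(residual))  # 0
```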

My question is the following:

Let $M$ be another smooth function.

I know the following fact: if $f$ is an extremum of $J$ subject to the integral constraint $\int_a^b M(x,f(x),f'(x))dx = C$, then there exists a constant $\lambda$ such that $\frac{\partial (L+\lambda M)}{\partial f}(x,f(x),f'(x)) - \frac{d}{dx} \frac{\partial (L+\lambda M)}{\partial f'}(x,f(x),f'(x)) = 0$ on $[a,b]$.

How do I prove this? Or, at least, could someone briefly explain why it holds? Thank you in advance!

BEST ANSWER

The goal is to reduce this to the usual result about Lagrange multipliers in two real variables. I'll change your notation from $f$ to $y$. We essentially copy the proof of the usual Euler-Lagrange equation, but add a second "fake" parameter and consider the variation $y(x) +\epsilon_1\eta_1(x)+\epsilon_2\eta_2(x)$, with the boundary conditions $$\eta_1(a)=\eta_1(b)=\eta_2(a)=\eta_2(b) = 0.$$ The parameter $\epsilon_1$ is "free" and plays the role of the "old" $\epsilon$, while $\epsilon_2$ is used to force $y(x) +\epsilon_1\eta_1(x)+\epsilon_2\eta_2(x)$ to satisfy the integral constraint. So we look at the constrained problem in the two real variables $\epsilon_1, \epsilon_2$: $$\begin{cases} \min F(\epsilon_1,\epsilon_2) = \displaystyle{\int_a^b}L(x, y(x)+\epsilon_1\eta_1(x)+\epsilon_2\eta_2(x),y'(x)+\epsilon_1\eta_1'(x)+\epsilon_2\eta_2'(x))\,{\rm d}x \\ G(\epsilon_1,\epsilon_2) =\displaystyle{\int_a^b}M(x, y(x)+\epsilon_1\eta_1(x)+\epsilon_2\eta_2(x),y'(x)+\epsilon_1\eta_1'(x)+\epsilon_2\eta_2'(x))\,{\rm d}x = C\end{cases}$$

Since $\epsilon_1=\epsilon_2 = 0$ corresponds to the minimizer $y$ of $J$, and provided that $\nabla G(0,0) \neq 0$ (which holds as long as $y$ is not itself an extremal of the constraint functional $\int_a^b M\,{\rm d}x$), the Lagrange multiplier theorem in two variables gives a constant $\lambda$ such that $$\nabla F(0,0) = \lambda \nabla G(0,0).$$ This can be rewritten as $\nabla(F-\lambda G)(0,0) = 0$. Now, the partial derivative of $F - \lambda G$ with respect to $\epsilon_1$ at $(0,0)$ is exactly the first variation of the functional $$y \mapsto \int_a^b \big(L(x,y(x),y'(x)) - \lambda M(x,y(x),y'(x))\big)\,{\rm d}x$$ in the direction $\eta_1$; since $\eta_1$ is arbitrary, the standard argument yields the Euler-Lagrange equation $$\frac{\partial (L-\lambda M)}{\partial y} - \frac{{\rm d}}{{\rm d}x}\left(\frac{\partial (L-\lambda M)}{\partial y'}\right) = 0.$$ (Your statement with $L+\lambda M$ is the same result with $\lambda$ replaced by $-\lambda$.) See Section 5.3 of Mark Kot's *A First Course in the Calculus of Variations* for more details.
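To see the multiplier at work, here is a minimal SymPy sketch of a standard isoperimetric example (my own choice, not from the question): minimize $\int_0^1 y'(x)^2\,dx$ subject to $\int_0^1 y(x)\,dx = 1$ and $y(0)=y(1)=0$, by applying the Euler-Lagrange equation to $L - \lambda M$ as above.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x, lam = sp.symbols('x lambda')
y = sp.Function('y')

# Example problem: minimize J[y] = int_0^1 y'^2 dx
# subject to int_0^1 y dx = 1 and y(0) = y(1) = 0.
# Augmented integrand L - lambda*M = y'^2 - lambda*y:
integrand = y(x).diff(x)**2 - lam * y(x)

# Euler-Lagrange equation: -lambda - 2 y''(x) = 0
eq = euler_equations(integrand, y(x), x)[0]

# General solution: y(x) = C1 + C2*x - lambda*x**2/4
general = sp.dsolve(eq, y(x)).rhs

# Fix C1, C2 and lambda from the boundary conditions and the constraint:
C1, C2 = sp.symbols('C1 C2')
params = sp.solve(
    [general.subs(x, 0),                     # y(0) = 0
     general.subs(x, 1),                     # y(1) = 0
     sp.integrate(general, (x, 0, 1)) - 1],  # int_0^1 y dx = 1
    [C1, C2, lam],
)
y_star = sp.expand(general.subs(params))

# The extremal is y*(x) = 6x(1 - x), with multiplier lambda = 24
print(y_star, params[lam])
```

Note that $\lambda$ is not free: it is determined, together with the integration constants, by the constraint itself, exactly as in the finite-dimensional Lagrange multiplier picture.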