Suppose we wanted to extremise the function (of a finite number of variables) $f$ subject to the constraint $g = 0$. The Lagrange multiplier approach is to extremise without constraint the function
$$ f(x_1, x_2, \ldots) + \lambda g(x_1, x_2, \ldots)$$
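A concrete finite-dimensional example (my own, just to fix ideas): extremise $f(x,y) = x + y$ subject to $g(x,y) = x^2 + y^2 - 1 = 0$. Setting the gradient of $f + \lambda g$ to zero gives
$$ 1 + 2\lambda x = 0\,, \qquad 1 + 2\lambda y = 0\,,$$
so $x = y = -1/(2\lambda)$; the constraint then fixes $\lambda = \mp 1/\sqrt{2}$, giving the extrema $(x,y) = \pm(1/\sqrt{2},\, 1/\sqrt{2})$.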
I'm not sure I can completely justify this to myself: I've come across an intuitive geometrical argument, but that is all, and perhaps this is the problem. Essentially, my intuition would be to extend this to functionals as follows: suppose we wanted to extremise the functional $F$ subject to the constraint $G = 0$. Then we should extremise without constraint the functional $F + \lambda G$. For instance, suppose
$$ F = \int L(Q_i) \, dx\,dy $$
where the $Q_i$ are functions of $x$ and $y$, subject to the constraint
$$ \vec{\nabla} \cdot \vec{Q} = A(x,y) \,.$$
My approach has been to define the functional
$$G = \int \left( \vec{\nabla} \cdot \vec{Q} - A(x,y) \right) dx \,dy\,, $$
so that the constraint can be written as $G = 0$. Then, to solve the problem, we need to extremise without constraint the functional
$$ F + \lambda G$$
for some Lagrange multiplier $\lambda$. That is, we need to solve
$$ \frac{\delta F}{\delta Q_i} + \lambda \frac{\delta G}{\delta Q_i} = 0$$
for each $i$. However, because of the nature of $G$, its functional derivative $\delta G / \delta Q_i$ is identically zero. This means that my Lagrange multiplier drops out of the problem completely, which can't be right. I have found answers to this question, and the approach they take is to treat the multiplier as a function $\lambda = \lambda(x,y)$ of $x$ and $y$, bringing it inside the integral in the expression for $G$. That is,
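To spell out why the functional derivative vanishes (assuming, as usual, that the variations $\delta Q_i$ vanish on the boundary of the integration region): the integrand of $G$ depends on $\vec{Q}$ only through a total divergence, so varying gives
$$ \delta G = \int \vec{\nabla} \cdot \delta\vec{Q} \; dx\, dy = \oint \delta\vec{Q} \cdot \hat{n} \; ds = 0 $$
by the divergence theorem, for any $\vec{Q}$ whatsoever.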
$$ \mathrm{extremise} \qquad \int L(Q_i) + \lambda(x,y)\left( \vec{\nabla}\cdot \vec{Q} - A(x,y)\right) dx\, dy \,.$$
I've never seen anything like this before --- I have always treated the Lagrange multiplier as some constant that sits outside the integral. Can anybody explain what is going on?
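My best guess at what is happening is that with $\lambda = \lambda(x,y)$ there is effectively one multiplier for each point, since demanding stationarity with respect to $\lambda(x,y)$ itself gives
$$ \vec{\nabla}\cdot \vec{Q} - A(x,y) = 0 $$
at every point $(x,y)$, rather than only on average, but I would appreciate a proper justification.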
Thank you.