Consider the standard isoperimetric problem: minimize the functional
$$ A[y]=\int_a^bF(y,y',x)dx $$
subject to the integral constraint
$$ B[y]=\int_a^bG(y,y',x)dx = c=\text{constant},$$ where $ y(x)\in C^2[a,b] $ is the function we are looking for.
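For concreteness, the classical Dido problem is one instance of this template (this specific example is my own illustration, not part of the general statement): maximize the area under the curve,
$$ A[y]=\int_{-1}^{1} y\,dx \to \max,\qquad B[y]=\int_{-1}^{1}\sqrt{1+y'^2}\,dx=\ell,\qquad y(\pm 1)=0, $$
so $F=-y$ (maximizing $A$ is minimizing $-A$) and $G=\sqrt{1+y'^2}$; the extremals turn out to be circular arcs.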
Every calculus of variations text says that the problem can be reduced to minimizing
$$ C[y]=\int_a^b (F(y,y',x)+\lambda G(y,y',x))dx.$$ That is, defining $y_2=y+\epsilon\eta $ for an admissible variation $\eta$ (with $\eta(a)=\eta(b)=0$), one requires:
$$ \delta C[y]=\left( \frac{d}{d\epsilon}\int_a^b (F(y_2,y_2' ,x)+\lambda G(y_2 ,y_2' ,x))dx \right)_{\epsilon=0}=0$$
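This stationarity condition is just the Euler–Lagrange equation for the combined integrand $F+\lambda G$. Here is a quick symbolic check with SymPy (my own sketch; the concrete choice $F=y$, $G=\sqrt{1+y'^2}$, essentially Dido's problem, is only an illustration):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
lam = sp.symbols('lambda')          # the Lagrange multiplier
y = sp.Function('y')(x)

# Illustrative choice of integrands (an assumption, not part of the
# general problem): F = y (area under the curve) and
# G = sqrt(1 + y'^2) (arc length).
F = y
G = sp.sqrt(1 + y.diff(x)**2)

# Euler-Lagrange equation of the combined integrand F + lambda*G:
#   d(F + lam*G)/dy - d/dx [ d(F + lam*G)/dy' ] = 0
eqs = euler_equations(F + lam * G, y, x)
print(eqs[0])
# After simplification this reads 1 - lam*y''/(1 + y'^2)**(3/2) = 0,
# i.e. constant curvature 1/lam: the extremals are circular arcs.
```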
Here $\lambda $ is a Lagrange multiplier. This seems plausible; what confuses me is the following: the proof/justification of this formula is usually given through a long and complicated process, which starts by introducing variations as a linear combination $\hat y=y + \epsilon_1 \eta_1 +\epsilon_2 \eta_2 $, and so on; the rest can be looked up almost anywhere (I will find a link).
Here is the thing: what I would naively do with such a problem is apply the Lagrange multiplier directly: set $ B[y] -c =0$ and follow the standard procedure for ordinary functions:
$$ C[y]=A[y]+\lambda (B[y]-c) $$ $$ \delta C[y]=\left(\frac{d}{d\epsilon} \bigl(A[y_2]+ \lambda B[y_2]\bigr)-\frac{d}{d\epsilon} (\lambda c)\right)_{\epsilon=0}=0$$
Now, since $ \frac{d}{d\epsilon} \lambda c=0 $, we end up with the same result the textbooks give: $$ \delta C[y]=\left( \frac{d}{d\epsilon}\int_a^b (F(y_2,y_2' ,x)+\lambda G(y_2 ,y_2' ,x))dx \right)_{\epsilon=0}=0$$
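For what it's worth, this "direct" view is literally how the finite-dimensional case works: discretize the functional and the ordinary Lagrange-multiplier rule applies verbatim. A small numerical sketch (the concrete instance, maximizing the area under $y$ with fixed arc length $\pi$, whose optimum is the unit semicircle, is my own illustration):

```python
# Discretize the isoperimetric problem and hand it to an ordinary
# constrained optimizer, which uses exactly the finite-dimensional
# Lagrange-multiplier rule.
# Concrete instance (an illustration): maximize A[y] = int y dx on
# [-1, 1] with y(+-1) = 0, subject to arc length B[y] = pi.
# The optimum is the unit semicircle, so max(y) should come out near 1.
import numpy as np
from scipy.optimize import minimize

N = 41                                   # grid points, endpoints included
x = np.linspace(-1.0, 1.0, N)
dx = x[1] - x[0]

def with_endpoints(y_int):
    """Pad the interior unknowns with the fixed boundary values y(+-1) = 0."""
    return np.concatenate(([0.0], y_int, [0.0]))

def neg_area(y_int):                     # -A[y] via trapezoid rule; minimizing -A maximizes A
    y = with_endpoints(y_int)
    return -0.5 * dx * np.sum(y[1:] + y[:-1])

def length(y_int):                       # B[y] as the length of the polyline
    y = with_endpoints(y_int)
    return np.sum(np.hypot(np.diff(x), np.diff(y)))

cons = {'type': 'eq', 'fun': lambda y_int: length(y_int) - np.pi}
y0 = 1.0 - x[1:-1]**2                    # rough initial guess (a parabola)
res = minimize(neg_area, y0, method='SLSQP', constraints=[cons],
               options={'maxiter': 1000})
y_opt = with_endpoints(res.x)
print(res.success, y_opt.max())          # max height should be close to 1
```

SLSQP maintains the equality constraint with its own internal multiplier, which is exactly the $\lambda$ of the naive construction above.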
The question is simple: is this a correct use of the Lagrange multiplier? It gives the correct result, but is it always true, or was it just luck?
I feel I must have oversimplified something, since isoperimetric problems have their own name and their own chapters. :-)