I am looking for a direct-method proof for the infimum of a calculus of variations problem: $$\inf_{u\in\scr{C}[0,1],\,u(0)=0}\int_0^1u^2+(u'-b)^2\,dx \tag 1$$ The first term wants to set $u=0$ locally, while the second term wants to set $u'=b$, so there is a competition between the two. The Euler–Lagrange equation gives $u(x)=b\dfrac{\sinh (x)}{\cosh 1}$, and the corresponding energy is $\dfrac{b^2}{e\cosh 1}<b^2$. However, this assumes $u\in\scr{C}^2[0,1]$. I think there might be a minimizing sequence that lowers the energy even more. If a minimum exists, then we should be able to prove that. References to any papers would be helpful.
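As a quick numerical sanity check (not part of any proof; the helper names `v`, `vp`, `energy` are mine, and SciPy is assumed available), one can evaluate the functional at the Euler–Lagrange candidate by quadrature and compare with the closed form $\dfrac{b^2}{e\cosh 1}$:

```python
import math
from scipy.integrate import quad

b = 2.0  # arbitrary test value for the parameter b

def v(x):
    # Euler-Lagrange candidate v(x) = b*sinh(x)/cosh(1)
    return b * math.sinh(x) / math.cosh(1)

def vp(x):
    # its derivative v'(x) = b*cosh(x)/cosh(1)
    return b * math.cosh(x) / math.cosh(1)

def energy(u, up):
    # F(u) = \int_0^1 u^2 + (u'-b)^2 dx, evaluated by quadrature
    val, _ = quad(lambda x: u(x) ** 2 + (up(x) - b) ** 2, 0.0, 1.0)
    return val

print(energy(v, vp))                     # quadrature value of F(v)
print(b ** 2 / (math.e * math.cosh(1)))  # closed form b^2/(e*cosh(1))
```

The two printed numbers agree to quadrature precision, which supports the claimed value of the energy.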
Some more analysis that I did: $$\int_0^1u^2+(u'-b)^2\,dx=\int_0^1u^2+u'^2\,dx+b^2-2u(1)b\geq b^2-2u(1)b \tag 2$$ where the inequality uses the fact that the integrand $u^2+u'^2$ is nonnegative. However, the integral only achieves the value zero if $u$ vanishes identically, which would force $u(1)=0$. So minimizing the integral imposes restrictions on $u(1)$, and the two terms cannot be made small simultaneously.
In this part of the answer we again show that $$\inf_{u\in\scr{C^1}[0,1],\,u(0)=0}\int_0^1u^2+(u'-b)^2\,dx \tag{1*}$$ is attained, and we show that the minimizer is $$ v(x)=b\dfrac{\sinh (x)}{\cosh 1}. $$ That means that $F(v) \le F(u)$ for every admissible $u$. We say that $u$ is admissible when $u\in\scr{C^1}[0,1]$ and $u(0)=0$.
This time we use convexity -- more or less. And compared to the previous answer, I change the order of presentation from "how I got that" to "how one thing follows from another".
Let $F(u)=\int_0^1u^2+(u'-b)^2\,dx$ be the functional.
Obviously $v$ is $C^\infty$ smooth, hence $C^2$ smooth and also $C^1$ smooth; in particular, $v$ is admissible. By direct calculation one checks that $v''=v$ and $v'(1)=b$.
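These two identities can also be confirmed symbolically (a throwaway check, assuming SymPy is available):

```python
import sympy as sp

x, b = sp.symbols('x b', real=True)
v = b * sp.sinh(x) / sp.cosh(1)   # the candidate minimizer

# v'' = v (the Euler-Lagrange equation for this functional)
assert sp.simplify(sp.diff(v, x, 2) - v) == 0

# v'(1) = b (the natural boundary condition at x = 1)
assert sp.simplify(sp.diff(v, x).subs(x, 1) - b) == 0

print("v'' = v and v'(1) = b check out")
```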
If $u$ is $C^2$, $h$ is $C^1$, and $u(0)=h(0)=0$, I calculate $$ F(u+h)=F(u)+DF(u)(h)+\int_0^1 h^2+\int_0^1 h'^2 \tag 3 $$ where (in anticipation) I wrote $DF(u)(h)$ instead of $$ \int_0^1 2u h + 2(u'-b) h' = \int_0^1 2u h - 2u'' h + 2\bigl[ (u' - b) h\bigr]_0^1 , \tag 4 $$ the equality being integration by parts in the second term.
As $v$ is $C^2$, I can apply the above with $u=v$. This time we also use that $v''=v$ and $v'(1)=b$. If $h$ is $C^1$ and $h(0)=0$, we have $$ F(v+h)=F(v)+DF(v)(h)+\int_0^1 h^2+\int_0^1 h'^2 \tag {3v} $$ where as before I wrote $DF(v)(h)$ instead of $$ \int_0^1 2v h + 2(v'-b) h' = \int_0^1 2v h - 2v'' h + 2\bigl[ (v' - b) h\bigr]_0^1 = 0 , \tag {4v} $$ which vanishes because $v''=v$ kills the integral, while $v'(1)=b$ together with $h(0)=0$ kills the boundary term.
VARIANT 1:
We will now prove that $F$ attains its minimum at $v$, i.e. that $F(v) \le F(u)$ for every admissible $u$.
So let $h=u-v$. Looking at (3v) and (4v), we see that $F(u)=F(v+h)=F(v)+0+\text{(nonnegative)}\ge F(v)$, which is what we had to prove. QED.
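The conclusion $F(u)\ge F(v)$ can be spot-checked numerically against a few arbitrary admissible competitors (again a throwaway sketch with my own helper names, SciPy assumed available):

```python
import math
from scipy.integrate import quad

b = 1.5  # arbitrary test value

def F(u, up):
    # F(u) = \int_0^1 u^2 + (u'-b)^2 dx, by quadrature
    val, _ = quad(lambda x: u(x) ** 2 + (up(x) - b) ** 2, 0.0, 1.0)
    return val

# energy of the claimed minimizer v: F(v) = b^2/(e*cosh(1))
Fv = b ** 2 / (math.e * math.cosh(1))

# a few admissible competitors (each satisfies u(0) = 0), chosen arbitrarily
trials = [
    (lambda x: 0.0, lambda x: 0.0),  # u = 0: ignores the (u'-b)^2 term
    (lambda x: b * x, lambda x: b),  # u' = b: ignores the u^2 term
    (math.sin, math.cos),            # a generic C^1 competitor
]

for u, up in trials:
    assert F(u, up) >= Fv - 1e-10
print("F(u) >= F(v) held for all trial functions")
```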
VARIANT 2:
(4v) means (**) that the directional derivative (sometimes called the first variation) of $F$ at $v$ in the direction of $h$ is $0$. This can be written as $DF(v)(h)=0$. (Forgive me, I used the notation too early, in eager anticipation.) There is a (simple) theorem stating that if $F$ is convex and $DF(v)(h)=0$ for every admissible $h$, then $F$ attains its minimum at $v$. QED. Actually, at (**), the argument is broken: (4v) by itself is only a formula, not yet identified with the derivative. I wonder if anyone notices that, and I do not want to rewrite the text again. The correct argument is to use the definition of the derivative together with (3v)&(4v) with $h$ replaced by $t h_0$, and to compute the corresponding limit, where $t/t$ cancels at the proper place and $t^2/t=t$ makes the term we want to disappear vanish in the limit. (There is also the alternative of using a ready-made statement relating the vanishing derivative to the Euler–Lagrange equation, but this is only weakly related to what I found in Gelfand & Fomin, p. 28, where the end-point condition is missing.) This works, but VARIANT 1 is easier, after all.
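For the record, the limit computation just described can be written out. Using (3v) with $h=t h_0$, and the fact that the expression (4v) is linear in $h$ (so $DF(v)(t h_0)=t\,DF(v)(h_0)$):

$$\frac{F(v+t h_0)-F(v)}{t}
=\frac{t\,DF(v)(h_0)+t^2\left(\int_0^1 h_0^2+\int_0^1 h_0'^2\right)}{t}
=DF(v)(h_0)+t\left(\int_0^1 h_0^2+\int_0^1 h_0'^2\right)
\;\xrightarrow{\,t\to 0\,}\; DF(v)(h_0)=0.$$

Here the $t/t$ cancellation gives the first summand, and the $t^2/t=t$ factor sends the quadratic remainder to zero.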