I am attempting to prove that $x^*(t)$ (satisfying the boundary conditions of $x(t_0)=x_0$ and $x(t_1)=x_1$) is a minimiser for the functional $$J[x]=\int_{t_0}^{t_1}[c(t)\dot{x}^2(t)+a(t)x(t)^2]dt$$ where $c(t)\geq 1$ and $a(t)\geq0$ for all $t\in[t_0,t_1]$, and they are both twice differentiable, as is $x^*(t)$.
I am trying to show that $\Delta J=J[y]-J[x^*]\geq0$ by letting $y$ be a weak variation, $y(t)=x^*(t)+\gamma(t)$, where $\gamma(t)$ vanishes at both end points and is also twice differentiable. My attempt so far looks like this, but I keep getting stuck (and I'm not sure whether some of my steps are valid).
$$J[y]-J[x^*]=\int_{t_0}^{t_1}\Big[c(t)\big[\dot{x}^*(t)+ \dot{\gamma}(t)\big]^2+a(t)\big[x^*(t)+\gamma(t)\big]^2\Big]dt-\int_{t_0}^{t_1}[c(t)\dot{x}^{*2}(t)+a(t)x^{*2}(t)]dt$$ $$=\int_{t_0}^{t_1}\Big[c(t)\big[2\dot{x}^*(t)\dot{\gamma}(t)+ \dot{\gamma}^2(t)\big]+a(t)\big[2x^*(t)\gamma(t)+\gamma^2(t)\big]\Big]dt$$
From here I argue that, since $c(t)$, $a(t)$, $\gamma^2(t)$ and $\dot{\gamma}^2(t)$ are all nonnegative, the squared terms can be dropped, so it suffices to show that
$$\int_{t_0}^{t_1}\Big[c(t)\dot{x}^*(t)\dot{\gamma}(t)+a(t)x^*(t)\gamma(t)\Big]dt\geq0.$$
From here I would like to use integration by parts, but I just seem to go around in circles without being able to conclude anything. Could you please point out where I am going wrong? Any help is greatly appreciated. Thank you.
If $x^*$ satisfies $$\int_{t_0}^{t_1} \big[c(t) \dot x^*(t) \dot \gamma(t) + a(t)x^*(t)\gamma(t)\big]\, dt=0 $$ (this is the weak form of the Euler-Lagrange equation), then your calculation shows $$J[y]-J[x^*]=\int_{t_0}^{t_1} \big[c(t)\dot \gamma(t)^2+a(t)\gamma(t)^2\big] \, dt \ge 0, $$ and in fact $J[y]>J[x^*]$ unless $\gamma\equiv0$ (i.e. $y=x^*$): since $c(t)\geq 1$, equality forces $\dot\gamma\equiv0$, and the boundary conditions then give $\gamma\equiv0$. So any solution of the Euler-Lagrange equation is a minimiser.
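For completeness: integrating the first term of the weak form by parts (the boundary term vanishes because $\gamma(t_0)=\gamma(t_1)=0$) recovers the strong form of the Euler-Lagrange equation,
$$\int_{t_0}^{t_1} c\dot x^*\dot\gamma\, dt = \big[c\dot x^*\gamma\big]_{t_0}^{t_1}-\int_{t_0}^{t_1}\frac{d}{dt}\big(c\dot x^*\big)\gamma\, dt = -\int_{t_0}^{t_1}\frac{d}{dt}\big(c\dot x^*\big)\gamma\, dt,$$
so the weak condition becomes $\int_{t_0}^{t_1}\big[a x^* - \frac{d}{dt}(c\dot x^*)\big]\gamma\, dt=0$ for every admissible $\gamma$, and hence $$\frac{d}{dt}\big(c(t)\dot x^*(t)\big)=a(t)x^*(t).$$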
Note, however, that this argument requires the existence of a solution $x^*$ of the Euler-Lagrange equation; the same convexity argument can be adapted to other convex variational problems.
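As a quick numerical sanity check (not part of the proof), one can pick a concrete instance, say $c(t)=a(t)=1$ on $[0,1]$ with $x(0)=0$, $x(1)=1$. The Euler-Lagrange equation then reduces to $\ddot x = x$, whose solution with these boundary values is $x^*(t)=\sinh t/\sinh 1$, and every admissible perturbation should raise $J$:

```python
import numpy as np

# Concrete instance: c(t) = a(t) = 1 on [0, 1], x(0) = 0, x(1) = 1.
# Euler-Lagrange equation x'' = x gives x*(t) = sinh(t)/sinh(1).
t = np.linspace(0.0, 1.0, 100_001)
dt = t[1] - t[0]

def J(x, xdot):
    # J[x] = integral of (x'^2 + x^2) dt, via the trapezoidal rule
    integrand = xdot**2 + x**2
    return dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

# minimiser and its derivative
x_star = np.sinh(t) / np.sinh(1.0)
x_star_dot = np.cosh(t) / np.sinh(1.0)

# a perturbation gamma vanishing at both endpoints
gamma = np.sin(np.pi * t)
gamma_dot = np.pi * np.cos(np.pi * t)

J_star = J(x_star, x_star_dot)
diffs = []
for eps in (0.5, 0.1, -0.3):
    diffs.append(J(x_star + eps * gamma, x_star_dot + eps * gamma_dot) - J_star)
    print(f"eps = {eps:+.1f}:  J[y] - J[x*] = {diffs[-1]:.6f}")
```

Since the cross terms vanish (the weak form holds), the difference is exactly $\varepsilon^2\int_0^1(\dot\gamma^2+\gamma^2)\,dt = \varepsilon^2(\pi^2+1)/2 > 0$ for every $\varepsilon\neq0$, which the printed values confirm.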