In my course on nonlinear PDEs, we study the integral functional $\int_{t_0}^{t_1} L(\dot{\gamma}(t), \gamma(t))\,dt$, defined on $q$-absolutely continuous curves $\gamma: I \rightarrow \mathbb{R}^n$, where $I \subset \mathbb{R}$ is an interval. These curves form a Banach space, denoted $AC^q(I)$ and defined as in the Wikipedia article https://en.wikipedia.org/wiki/Absolute_continuity, but with the additional requirement that the weak derivative lies in $L^q(I)$ for some $q \geq 1$.
We require that the Lagrange function $L$ is:
$ \bullet L \in C^1(\mathbb{R}^n \times \mathbb{R}^n, \mathbb{R})$
$ \bullet v \mapsto L(v,x)$ is convex for each $x \in \mathbb{R}^n $
$\bullet$ $|L(v,x)| \leq A |v|^q + B$ for all $x, v \in \mathbb{R}^n$ (growth condition)
$\bullet$ $L(v,x) \geq \alpha |v|^q - \beta$ for all $x, v \in \mathbb{R}^n$ (coercivity)
where $\alpha > 0$, $\beta \geq 0$, and $A, B$ are constants.
Now the question. We have the claim:
If $(\gamma_{k})_{k \in \mathbb{N}} \subset AC^q(I)$ converges uniformly on $I$ to $\gamma^* \in AC^q(I)$, and $\dot{\gamma}^*$ is bounded on a subinterval $J \subset I$, then the continuity of $L$ yields that $L(\dot{\gamma}^*(t), \gamma_k(t))$ converges uniformly to $L(\dot{\gamma}^*(t), \gamma^*(t))$ for $t \in J$.
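To make the claim concrete, here is a minimal numerical sanity check. All specific choices are my own assumptions, not from the course: I take $L(v,x) = v^2 + x^2$ (which satisfies the growth and coercivity bounds with $q = 2$), the limit curve $\gamma^*(t) = \sin t$ on $J = [0,1]$ (so $\dot{\gamma}^*$ is bounded), and the perturbed curves $\gamma_k = \gamma^* + \frac{1}{k}\cos(5t)$, which converge uniformly to $\gamma^*$.

```python
import numpy as np

# Hypothetical concrete Lagrangian (my choice, not from the question):
# L(v, x) = v^2 + x^2 satisfies the growth and coercivity bounds with q = 2.
def L(v, x):
    return v**2 + x**2

# Limit curve gamma*(t) = sin(t) on J = [0, 1]; its derivative cos(t) is bounded.
t = np.linspace(0.0, 1.0, 1000)
gamma_star = np.sin(t)
dgamma_star = np.cos(t)

# gamma_k = gamma* + cos(5t)/k converges uniformly to gamma* as k -> infinity.
sup_errors = []
for k in [1, 10, 100, 1000]:
    gamma_k = gamma_star + np.cos(5 * t) / k
    # Sup-norm distance between L(dgamma*, gamma_k) and L(dgamma*, gamma*) on J.
    err = np.max(np.abs(L(dgamma_star, gamma_k) - L(dgamma_star, gamma_star)))
    sup_errors.append(err)

# The sup-norm error over J shrinks as k grows, consistent with the claimed
# uniform convergence of L(dgamma*(t), gamma_k(t)) to L(dgamma*(t), gamma*(t)).
print(sup_errors)
```

Of course this only illustrates the statement for one choice of $L$ and $\gamma_k$; it is not a substitute for the argument I am asking about.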
Could anyone please provide details on the reasoning? Apologies if it is something very obvious.