I was working on Problem 3 in Ch. 2 of Gelfand & Fomin's Calculus of Variations, which reads:
Find the extremals of a functional of the form $$\int^{x_1}_{x_0}F(y',z')dx$$ given that $F_{y'y'}F_{z'z'}-(F_{y'z'})^2 \neq 0$ for $x_0 \leq x \leq x_1$. I easily derived the answer the book gives: a family of straight lines in three dimensions.
When I solved the problem, I never used the assumption that the determinant of the Hessian is nonzero. My question is: why do we make this additional assumption? It makes me think we're doing a second-partials test on $F$, but that wouldn't make sense because we care about extrema of the functional, not of the integrand. I say that I "easily derived" the answer the book gives, but I'm worried that I missed some nuance in this problem that depends on the additional assumption.
By "family of lines" the authors mean the functions $$ y(x) = (1-x)y_0+xy_1,\quad z(x) = (1-x)z_0+xz_1 \tag{1} $$
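For completeness, here is a sketch of why the nondegeneracy hypothesis pins down this family. Since $F$ depends only on $y'$ and $z'$, the Euler–Lagrange equations reduce to $$\frac{d}{dx}F_{y'}(y',z')=0,\qquad \frac{d}{dx}F_{z'}(y',z')=0,$$ that is, $F_{y'}(y',z')=C_1$ and $F_{z'}(y',z')=C_2$ along an extremal. The Jacobian of the map $(y',z')\mapsto(F_{y'},F_{z'})$ is precisely the Hessian determinant $F_{y'y'}F_{z'z'}-(F_{y'z'})^2$, so when it is nonzero this map is locally injective: the solutions of the pair of equations are isolated points, and since $x\mapsto(y'(x),z'(x))$ varies continuously, $y'$ and $z'$ must be constant. Integrating gives the straight lines $(1)$.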
But if the Hessian degenerates, there may be other extremals. For example, let $F(y',z')=y'+z'$. Then $$\int^{x_1}_{x_0}F(y',z')\,dx = y_1-y_0+z_1-z_0,$$ which is the same quantity for every pair $(y,z)$ satisfying the boundary conditions. Thus every such pair $(y,z)$ is an extremal, and it need not be a straight line.
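This path-independence is easy to check symbolically. A small sketch using sympy (the two sample paths below are my own choice, not from the book), comparing the functional along two different curves with the same endpoints:

```python
import sympy as sp

x = sp.symbols('x')

# Two different paths from (y, z) = (0, 0) at x = 0 to (1, 1) at x = 1:
# a straight line and a deliberately curved alternative.
paths = [(x, x), (x**2, sp.sin(sp.pi * x / 2))]

# For F(y', z') = y' + z', the functional only sees the endpoint differences.
values = [sp.integrate(sp.diff(y, x) + sp.diff(z, x), (x, 0, 1))
          for y, z in paths]
print(values)  # both integrals equal 2 = (y1 - y0) + (z1 - z0)
```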
For a more realistic example, take $F(y',z')=\sqrt{(y')^2+(z')^2}$: nonlinear, but with a degenerate Hessian. Now the extremals are straight lines geometrically, but they need not be of the form $(1)$, because any monotone parametrization of the line segment from $(y_0,z_0)$ to $(y_1,z_1)$ minimizes the functional. That's a lot of extremals, with no nice formula to cover all of them.
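One can verify symbolically that the Hessian determinant of the arc-length integrand vanishes identically; a quick sympy sketch, with $u,v$ standing in for $y',z'$:

```python
import sympy as sp

u, v = sp.symbols('u v', positive=True)  # u, v stand in for y', z'
F = sp.sqrt(u**2 + v**2)                 # arc-length integrand

# Hessian determinant F_uu * F_vv - F_uv**2
H = sp.hessian(F, (u, v))
print(sp.simplify(H.det()))  # 0: the Hessian is degenerate everywhere
```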
In contrast, if $F(y',z')=(y')^2+(z')^2$ (nondegenerate Hessian) the family of extremals is precisely $(1)$.
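In this nondegenerate case the Euler–Lagrange equations really do force $y''=z''=0$, so every extremal is a straight line in the sense of $(1)$. A sketch of the check using sympy's `euler_equations` helper:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
y, z = sp.Function('y')(x), sp.Function('z')(x)

# Nondegenerate example: F(y', z') = (y')**2 + (z')**2
F = sp.diff(y, x)**2 + sp.diff(z, x)**2

# The Euler-Lagrange equations reduce to -2*y'' = 0 and -2*z'' = 0,
# whose solutions are exactly the linear functions of x.
eqs = euler_equations(F, [y, z], x)
print(eqs)
```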