I have a Lagrangian $L(x,\dot x)$ and want to solve
$$\arg\min_{\gamma(t)} \int_0^\infty L(\gamma, \dot \gamma)\,dt$$ subject to holding only one of the endpoints fixed: $\gamma(0) = \gamma_0$.
One approach is to introduce the free endpoint as a new variable $\gamma_T$ and reformulate the above as a nested optimization problem:
$$\lim_{T\to \infty} \min_{\gamma_T} \int_0^T L(\tilde\gamma, \dot{\tilde\gamma})\,dt, \quad \textrm{where}\quad \tilde\gamma = \arg\min_\gamma \int_0^T L(\gamma, \dot\gamma)\,dt\quad \textrm{s.t.}\quad \gamma(0) = \gamma_0,\ \gamma(T) = \gamma_T,$$
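To make the nested structure concrete, here is a sketch of how the calculation goes for the quadratic Lagrangian $L = x^2 + \dot x^2$ (my own worked example). The Euler–Lagrange equation of the inner problem is $\ddot x = x$, so the inner minimizer with both endpoints fixed is

$$\tilde\gamma(t) = \gamma_0 \cosh t + c \sinh t, \qquad c = \frac{\gamma_T - \gamma_0 \cosh T}{\sinh T}.$$

Minimizing the outer integral over $\gamma_T$ then imposes the natural boundary condition $\left.\partial L/\partial \dot x\right|_{t=T} = 2\dot{\tilde\gamma}(T) = 0$, which gives $c = -\gamma_0 \tanh T$ and hence

$$\tilde\gamma(t) = \gamma_0\left(\cosh t - \tanh T \sinh t\right) \;\xrightarrow{T\to\infty}\; \gamma_0\left(\cosh t - \sinh t\right) = \gamma_0 e^{-t}.$$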
which works for very simple problems: e.g. for $L = x^2 + \dot x^2$ you can calculate that $\gamma = \gamma_0 e^{-t}$. But this method does not extend to more complicated $L$ for which the inner variational problem has no analytic solution, forcing one to find the right $\gamma_T$ with complicated shooting methods and the like. Worse, the solution is unstable to errors in $\gamma_T$: consider what happens to $\gamma = \gamma_0 e^{-t} + \epsilon e^t$ for any $\epsilon \neq 0$, no matter how small.
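The instability is easy to see numerically. Below is a minimal sketch (my own, not part of the problem statement) that integrates the Euler–Lagrange ODE $\ddot x = x$ of the quadratic example forward with a plain RK4 stepper; the horizon $T=20$, step size, and perturbation $\epsilon = 10^{-6}$ are arbitrary choices for illustration:

```python
def rk4_shoot(x0, v0, T=20.0, h=1e-3):
    """Integrate the Euler-Lagrange ODE x'' = x forward from (x0, v0),
    returning x(T). Uses classical RK4 on the first-order system
    (x, v)' = (v, x)."""
    def f(x, v):
        return v, x  # (dx/dt, dv/dt) for x'' = x
    x, v = x0, v0
    for _ in range(int(T / h)):
        k1x, k1v = f(x, v)
        k2x, k2v = f(x + 0.5 * h * k1x, v + 0.5 * h * k1v)
        k3x, k3v = f(x + 0.5 * h * k2x, v + 0.5 * h * k2v)
        k4x, k4v = f(x + h * k3x, v + h * k3v)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return x

# The decaying solution gamma_0 * e^{-t} corresponds to v0 = -x0.
good = rk4_shoot(1.0, -1.0)         # ~ e^{-20}, essentially zero
bad = rk4_shoot(1.0, -1.0 + 1e-6)   # the epsilon * e^t mode dominates
```

With the exact initial slope the trajectory decays to $e^{-20} \approx 2\times 10^{-9}$, while a $10^{-6}$ perturbation of the slope produces $x(20) \approx 240$: the growing mode swamps the solution long before $T$, which is exactly why shooting on $\gamma_T$ is so badly conditioned here.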
Is there a better way of solving the original variational problem?