As one can see, for example, here, Lagrange's method can be used on Banach spaces.
That is, let $X$ be a Banach space and $f : X \to \mathbb{R}$ a function.
Suppose we have another function $T$ on $X$ that imposes constraints; that is, we want to optimize $f$ on the set $\{x \in U \mid T(x) = 0\}$, where $U$ is some open (convex?) subset of $X$. Now assume that both $f$ and $T$ are Fréchet differentiable. Then a necessary condition for a minimum $x^*$ is that there exists $\lambda$ such that
$$D_x\mathcal{L}(x^*,\lambda) = D_f(x^*) + \lambda D_T(x^*) = 0,$$
where $\mathcal{L}(x,\lambda) = f(x) + \lambda T(x)$ is the Lagrangian.
In the finite-dimensional case, if $f$ is convex and $T$ is affine, then, as can be seen for example here, this condition is also sufficient.
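To recall why this works in the finite-dimensional case, here is a sketch of the standard sufficiency argument (writing the affine constraint as $T(x) = Ax - b$ with a matrix $A$ and vector $b$, notation I am introducing here): for any feasible $x$,
$$\begin{aligned}
f(x) &\geq f(x^*) + \nabla f(x^*)^{\top}(x - x^*) && \text{(convexity of } f\text{)}\\
&= f(x^*) - \lambda^{\top} A (x - x^*) && \text{(stationarity: } \nabla f(x^*) = -A^{\top}\lambda\text{)}\\
&= f(x^*) && \text{(feasibility: } Ax = Ax^* = b\text{),}
\end{aligned}$$
so $x^*$ is a global minimum on the feasible set. Notably, this argument only uses the first-order characterization of convexity and the fact that $A$ annihilates feasible directions.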
Now, does this carry over to the infinite-dimensional situation? Specifically, when $T$ is a bounded linear operator (so that $D_T(x^*) = T$), is
$$D_f(x^*) + \lambda T = 0 $$ a sufficient condition?