Sometimes I encounter a book or a research paper where the author insists on working with Banach spaces instead of Hilbert spaces.
I am curious what a non-trivial difference between these two settings would be for optimization-related applications.
For example, you can still define a Fréchet derivative on Banach spaces. That's fine. And the majority of optimization concepts, such as strong convexity, involve only the norm, not the inner product. Convergence of a sequence of iterates $x(k)$ to the optimum $x^\star$ likewise involves only the norm.
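For instance, one common way to write strong convexity (with modulus $\mu > 0$, for Fréchet differentiable $f$) indeed uses only the norm:
$$ f(y) \;\ge\; f(x) + f'(x)(y - x) + \tfrac{\mu}{2}\,\|y - x\|_X^2 \qquad \text{for all } x, y \in X. $$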
Something trivial that can be done in a Hilbert space but not in a Banach space is taking an inner product (obviously). But even the inner product can simply be written as a sum of products of coordinates, so that difference can seemingly be sidestepped as well.
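For the record, the coordinate formula alluded to here is Parseval's identity: in a separable Hilbert space with orthonormal basis $(e_k)$, and coordinates $x_k = \langle x, e_k \rangle$,
$$ \langle x, y \rangle \;=\; \sum_{k} \langle x, e_k \rangle\, \langle y, e_k \rangle \;=\; \sum_k x_k\, y_k . $$
Note that writing this down already presupposes an inner product and an orthonormal basis.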
Is there really any significant difference between working with Hilbert versus Banach spaces?
Here is a nice result whose statement makes sense in an arbitrary Banach space $X$, but whose assumptions hide the fact that the space is already Hilbert:

> **Theorem.** Let $f \colon X \to \mathbb{R}$ be twice Fréchet differentiable at $\bar x$ with $f'(\bar x) = 0$, and suppose there is $\alpha > 0$ such that $f''(\bar x)[h,h] \ge \alpha \, \|h\|_X^2$ for all $h \in X$. Then there exist $\varepsilon, \delta > 0$ such that $f(x) \ge f(\bar x) + \delta \, \|x - \bar x\|_X^2$ for all $x$ with $\|x - \bar x\|_X \le \varepsilon$.
I hope this counts as a non-trivial result. In infinite-dimensional optimization, this theorem is quite useful, since it implies some stability of the minimizer $\bar x$ with respect to perturbations of the problem (e.g. a discretization can be seen as a perturbation).
The proof uses just a second-order Taylor expansion of $f$ at $\bar x$ and does not need an inner product or a Hilbert space structure.
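Concretely (a sketch, assuming the hypotheses $f'(\bar x) = 0$ and $f''(\bar x)[h,h] \ge \alpha \, \|h\|_X^2$ with $\alpha > 0$), the expansion reads
$$ f(x) \;=\; f(\bar x) + \tfrac{1}{2}\, f''(\bar x)[x - \bar x,\, x - \bar x] + o(\|x - \bar x\|_X^2) \;\ge\; f(\bar x) + \Big(\tfrac{\alpha}{2} + o(1)\Big)\, \|x - \bar x\|_X^2 , $$
so any $\delta < \alpha/2$ gives quadratic growth in a sufficiently small ball around $\bar x$.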
However, it can be easily checked that $$ (g,h) \mapsto f''(\bar x) [g,h] $$ defines an inner product on $X$ whose associated norm is equivalent to $\|\cdot\|_X$. Hence, $X$ has to be isomorphic to a Hilbert space, and the theorem is not applicable in genuinely non-Hilbert Banach spaces.
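The equivalence check is a one-liner, assuming the coercivity constant $\alpha > 0$ from the theorem's hypothesis and the boundedness of the bilinear form $f''(\bar x)$:
$$ \alpha \, \|h\|_X^2 \;\le\; f''(\bar x)[h,h] \;\le\; \|f''(\bar x)\| \, \|h\|_X^2 \qquad \text{for all } h \in X , $$
so the norm $h \mapsto \sqrt{f''(\bar x)[h,h]}$ is equivalent to $\|\cdot\|_X$. Since this norm is induced by an inner product, it satisfies the parallelogram law, and $X$ (complete under an equivalent norm) is a Hilbert space up to isomorphism.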