Consider an $L^2$ (also called least-squares) optimization problem over a vector space $V$, in which one is trying to minimize the $L^2$-norm of the difference between some "target" vector $y$ and vectors in some subspace $W$ of $V$. (This "difference vector" is often called the residual vector.)
The Hilbert projection theorem guarantees that there is a unique vector $x$ in $W$ that minimizes the norm of this residual vector, and consequently also guarantees that the residual vector $y - x$ is orthogonal to the subspace $W$.
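As a quick numerical sketch of that orthogonality property (the matrix and target here are arbitrary choices, not from any particular application): the columns of `X` span a subspace $W$, and the least-squares residual is orthogonal to every column.

```python
# Verify numerically that the L^2 (least-squares) residual is orthogonal
# to the subspace W spanned by the columns of X.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))   # basis vectors of a 3-dimensional subspace W of R^50
y = rng.normal(size=50)        # the "target" vector

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # L^2-optimal coefficients
residual = y - X @ beta

# The residual is orthogonal to each basis vector of W:
print(X.T @ residual)  # ~ [0, 0, 0] up to floating-point error
```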
If I consider the same problem, but with the key difference that I now want to minimize an $L^p$ norm of the residual with $p \neq 2$, uniqueness of the minimizing vector $x$ can provably be lost (this is certainly the case in $L^1$).
However, I don't know whether orthogonality of the residual vector with respect to the subspace $W$ is also lost, and for which $L^p$ spaces this occurs. Is there a way to show that the residual for such an optimization problem is or isn't orthogonal to the subspace $W$ in the cases where $p \neq 2$?
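To illustrate the $L^1$ non-uniqueness mentioned above, here is a tiny finite-dimensional sketch (the vectors are my own arbitrary choice): approximating $y = (1, 0)$ by multiples $t\,w$ of $w = (1, 1)$ gives $\|y - t w\|_1 = |1 - t| + |t|$, which equals $1$ for every $t \in [0, 1]$, so the minimizer is not unique.

```python
# Every t in [0, 1] attains the same minimal L^1 distance of 1,
# so the L^1 best approximation of y from span{w} is not unique.
import numpy as np

y = np.array([1.0, 0.0])
w = np.array([1.0, 1.0])

for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(t, np.abs(y - t * w).sum())  # prints 1.0 for each t
```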
After doing some research, I can answer this question myself. I'll leave it up in case it helps anyone.
The first thing to note is that orthogonality is not even defined in $L^{p}$ for $p \neq 2$: orthogonality requires an inner product, and $L^2$ is the only $L^p$ space that is a Hilbert space (for $p \neq 2$ the norm fails the parallelogram law, so it cannot come from an inner product). One could still ask whether the residual of an $L^{p\neq2}$ optimization problem is orthogonal to $W$ in the $L^2$ sense, but by the uniqueness part of the Hilbert projection theorem this can happen only when the $L^p$ minimizer coincides with the $L^2$ projection.
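A small numerical sketch of that last point (again with arbitrary vectors of my own choosing): approximating $y$ by $\operatorname{span}\{w\}$ in $\mathbb{R}^3$, the $L^2$ minimizer's residual is orthogonal to $w$, while the $L^4$ minimizer's residual has a clearly nonzero inner product with $w$.

```python
# Compare the p = 2 and p = 4 best approximations of y from span{w} in R^3,
# measuring orthogonality of each residual with the ordinary dot product.
import numpy as np

y = np.array([0.0, 0.0, 3.0])
w = np.array([1.0, 1.0, 1.0])

# p = 2: the minimizer is the orthogonal projection, so <residual, w> = 0.
t2 = (y @ w) / (w @ w)
print((y - t2 * w) @ w)          # ~ 0

# p = 4: minimize ||y - t*w||_4 by brute force over a fine grid of t values.
ts = np.linspace(0.0, 3.0, 300001)
costs = (np.abs(y[None, :] - ts[:, None] * w[None, :]) ** 4).sum(axis=1)
t4 = ts[np.argmin(costs)]
print((y - t4 * w) @ w)          # ~ -0.98, i.e. not L^2-orthogonal
```

(Here the $L^4$ minimizer can also be found in closed form, $t = 3/(1 + 2^{1/3}) \approx 1.327$, which differs from the $L^2$ projection $t = 1$; hence the residual cannot be orthogonal.)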
However, in normed vector spaces that are not Hilbert spaces one can get close to a notion of orthogonality by using Riesz's lemma; the way in which this lemma is useful for describing "almost orthogonal" vectors has been answered here.