I'm trying to explain orthogonality in inner product function spaces (e.g. Hilbert spaces) intuitively. As the main example, take the $L^2$ inner product given by $$\langle f,g\rangle_{L^2(I)}:=\int_I f(x)g(x)\,dx,$$ with $I=[i_1,i_2]\subseteq \mathbb R$ a real interval. Two functions $f,g$ are called orthogonal to each other whenever $\langle f,g\rangle=0$. Right now, I have two different pictures to visualize such an orthogonality property.
1.) Just take the finite-dimensional (in fact, 3D) picture: the terms "orthogonal", "orthogonal projection"... more or less push in this direction. However, there are serious problems with this kind of "intuition". For instance, take an inner product space $X$ with a subspace $A$ and its orthogonal complement $A^\perp$. In finite dimensions, we have $X=A\oplus A^\perp$. In Hilbert spaces, however, this fails in general: we need $A$ to be closed to regain this result, and closedness is not automatic, e.g. for $A=\mathcal R(T)$, the range of a bounded linear operator. So there are functions (in fact, some limits of Cauchy sequences) that are not covered by this picture.
2.) Take an example of two orthogonal functions, e.g. $\operatorname{sin}$ and $\operatorname{cos}$. These functions are orthogonal in $L^2(I)$ whenever $i_2-i_1=n\pi$ for $n\in\mathbb N$. This suggests viewing orthogonality as a kind of phase shift. Nevertheless, this picture works for me only for periodic functions. Edit: Thanks for the hints; of course this point can be generalized to orthogonal basis functions (e.g. orthogonal polynomials) that play the role of orthogonal vectors spanning the space in finite dimensions.
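The $\operatorname{sin}$/$\operatorname{cos}$ claim is easy to check numerically. Below is a small numpy sketch (the helper `l2_inner` is my own name, approximating the integral by the trapezoidal rule) that confirms $\langle \sin,\cos\rangle_{L^2(I)}=0$ on intervals of length $n\pi$ with an arbitrary starting point, but not on a generic interval:

```python
import numpy as np

def l2_inner(f, g, a, b, num=200001):
    """Approximate the L^2 inner product <f, g> = ∫_a^b f(x) g(x) dx
    by the trapezoidal rule on a fine grid."""
    x = np.linspace(a, b, num)
    y = f(x) * g(x)
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

# <sin, cos> vanishes on intervals of length n*pi, wherever they start...
for n in (1, 2, 3):
    val = l2_inner(np.sin, np.cos, 0.7, 0.7 + n * np.pi)
    print(f"n={n}: <sin, cos> ~ {val:.2e}")

# ...but not on a generic interval such as [0, 1]:
print(f"length 1: <sin, cos> ~ {l2_inner(np.sin, np.cos, 0.0, 1.0):.4f}")
```

On $[0,1]$ the exact value is $\tfrac12\sin^2(1)\approx 0.354$, so the inner product is clearly nonzero there.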
How can I provide a different, maybe more realistic, kind of intuition for orthogonality in function spaces?
If $A$ is a subspace of an inner product space $X$, then there are two kinds of projections onto $A$: the orthogonal projection $\mathcal{O}$ and the closest point projection $\mathcal{C}$. One type of projection exists iff the other does and, in that case, the two must be equal. Furthermore, the projection is always unique if it exists. That part is exactly as it was for Euclidean $n$-space. What is different is that such a projection may not exist unless $A$ is a complete subspace of $X$.
The only thing that hinders the existence of a closest point projection is a possible lack of completeness of $A$. Indeed, if $x \in X$ is not in $A$, then you can choose $\{a_n \}\subseteq A$ such that $$ \|x-a_n\| \le \mbox{dist}(x,A)+\frac{1}{n}, $$ and, quite remarkably, $\{ a_n \}$ turns out to be a Cauchy sequence. This sequence converges in $A$ iff there exists a closest point projection of $x$ onto $A$ and, in that case, the sequence converges to the closest point projection. By the previous discussion, the closest point projection of $x$ onto $A$ exists iff the orthogonal projection exists and, in that case, the two are equal. Even if the sequence $\{ a_n \}$ does not converge, you still have an approximate closest-point and orthogonal projection: $$ \lim_{n}\|x-a_n\|=\mbox{dist}(x,A),\\ \lim_{n}(x-a_n,a) = 0,\;\;\; \forall a \in A. $$ Completeness just gives the sequence something to converge to. In that sense, and in keeping with a standard definition of completion, you can think of $\{ a_n \}$ as being the projection of $x$ onto $A$ (either orthogonal or closest point.) And, $x=(x-a_n)+a_n$ is an approximate decomposition of $x$ along $A^{\perp}$ and $A$, which becomes exact in the completion of $X$, provided $A$ is closed.
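To see why a minimizing sequence is forced to settle down, here is a finite-dimensional numpy sketch (a toy stand-in: $X=\mathbb R^5$, $A$ a 2D subspace; the names `Px`, `dist` are mine). I deliberately build a "lazy" sequence $a_n$ that wanders inside $A$ as much as the bound $\|x-a_n\|\le\mbox{dist}(x,A)+\tfrac1n$ allows, and the bound alone squeezes it toward the projection:

```python
import numpy as np

rng = np.random.default_rng(0)

# Subspace A = span of two random directions in R^5, and a point x.
Q, _ = np.linalg.qr(rng.standard_normal((5, 2)))  # orthonormal basis of A
x = rng.standard_normal(5)

Px = Q @ (Q.T @ x)                 # orthogonal projection of x onto A
dist = np.linalg.norm(x - Px)      # dist(x, A)

# Minimizing sequence a_n in A with ||x - a_n|| = dist + 1/n exactly:
# since x - Px is orthogonal to A, ||x - (Px + d*v)||^2 = dist^2 + d^2,
# so the largest offset the bound allows is d_n below.
v = Q[:, 0]                        # a unit vector inside A
a = []
for n in range(1, 200):
    d_n = np.sqrt((dist + 1/n)**2 - dist**2)
    a.append(Px + ((-1)**n) * d_n * v)   # alternate directions: as "wild" as allowed

# Despite the alternation, the sequence is squeezed toward Px:
print(np.linalg.norm(a[0] - Px), "->", np.linalg.norm(a[-1] - Px))
```

The same Pythagorean computation is what powers the parallelogram-law argument in the general case; here $A$ is complete, so the limit $Px$ actually lies in $A$.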
Whether or not $X$ is complete, a finite-dimensional subspace $A$ is always complete, which means that projection onto $A$ from $X$ is always possible. This can be demonstrated algebraically, too, using an orthonormal basis $\{e_1,e_2,\cdots,e_N\}$ of $A$ obtained from Gram-Schmidt to write down the orthogonal projection: $$ \mathcal{O}x = \sum_{n=1}^{N}(x,e_n)e_n. $$ The orthogonal projection $\mathcal{O}x$ automatically becomes the closest-point projection $\mathcal{C}x$ of $x$ onto $A$. (This result is also known as the Least Squares theorem in the context of finite-dimensional spaces, because the squared distance of $x$ to $A$ is minimized by the orthogonal projection.)
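This formula can be carried out concretely in a function space. The sketch below (my own discretization: $L^2([0,1])$ sampled on a grid, with trapezoid weights standing in for the integral) runs Gram-Schmidt on $\{1, x, x^2\}$ and applies $\mathcal{O}f = \sum_n (f,e_n)e_n$ to $f = \sin$:

```python
import numpy as np

# Discretize L^2([0, 1]): functions become vectors of samples, and
# (f, g) = ∫ f g dx becomes a weighted dot product.
m = 10001
x = np.linspace(0.0, 1.0, m)
w = np.full(m, 1.0 / (m - 1))
w[0] *= 0.5
w[-1] *= 0.5                        # trapezoid quadrature weights

def inner(f, g):
    return float(np.sum(w * f * g))

# A = span{1, x, x^2}; Gram-Schmidt produces an orthonormal basis e_1..e_N.
basis = [np.ones_like(x), x, x**2]
e = []
for b in basis:
    v = b - sum(inner(b, ek) * ek for ek in e)   # remove components along e_k
    e.append(v / np.sqrt(inner(v, v)))

# Orthogonal projection O f = sum_n (f, e_n) e_n
f = np.sin(x)
Of = sum(inner(f, en) * en for en in e)

err = np.sqrt(inner(f - Of, f - Of))
print(f"||f - Of||_L2 ~ {err:.2e}")
```

By the Pythagorean theorem, any other element of $A$ is strictly farther from $f$ than $\mathcal{O}f$ is, which is exactly the least-squares property stated above.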
In a Hilbert space $X$, the only thing preventing the projection onto $A$ from existing is that $A$ may fail to be closed. The closest point approximation $\{ a_n \}$ to a given $x$ converges in $X$, but its limit may not lie in $A$ unless $A$ is closed.