Suppose that $\{Y_n(t)\}_{n=1}^{\infty}$ is a sequence of random functions in a Hilbert space, and let $D(s,t)=\operatorname{Cov}(Y_n(t), Y_n(s))$ be the covariance function. Let $\{(w_i(t), \tau_i) : i=1,2,\dots\}$ be the eigenfunction–eigenvalue pairs associated with $D(s,t)$. To reduce the dimensionality of $Y_n(t)$, we project $Y_n(t)$ onto the subspace spanned by the first $q$ eigenfunctions $\{w_i(t) : 1 \le i \le q\}$.
Here is what I cannot understand: according to the paper I am studying, by doing this we obtain the vector of scores
$$\begin{pmatrix} \langle Y_n, w_1\rangle \\ \langle Y_n, w_2\rangle \\ \vdots \\ \langle Y_n, w_q\rangle \end{pmatrix}.$$
In fact, I am trying to understand these Hilbert-space techniques through the finite-dimensional case, since I have not fully studied functional analysis. In a finite-dimensional vector space, the projection matrix is $P=W(W'W)^{-1} W'=WW'$, where $W=[w_1, w_2, \dotsc, w_q]$ (the second equality holds because the columns $w_i$ are orthonormal), so $PY_n=\langle Y_n, w_1\rangle w_1+\langle Y_n, w_2\rangle w_2+\dotsb+ \langle Y_n, w_q\rangle w_q$. This finite-dimensional result looks different from the projection described above: $PY_n$ is an element of the original space, while the paper keeps only the $q$-vector of coefficients $\langle Y_n, w_i\rangle$. I don't understand how these two are the same thing.
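To convince myself, I tried a small NumPy sketch of the finite-dimensional case (purely illustrative; the grid size, $q$, and the random orthonormal basis are my own stand-ins, not anything from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize a "function" Y_n on a grid of p points (finite-dimensional stand-in).
p, q = 50, 3
Y = rng.standard_normal(p)

# Orthonormal basis w_1, ..., w_q: here the first q columns of a random
# orthogonal matrix; in FPCA these would be discretized eigenfunctions of D(s, t).
W = np.linalg.qr(rng.standard_normal((p, q)))[0]  # W'W = I_q

# Projection matrix P = W(W'W)^{-1}W' = WW', since the columns are orthonormal.
P = W @ W.T

# The projected function PY = sum_i <Y, w_i> w_i lives in the original space...
PY = P @ Y

# ...while the q-vector of scores (<Y, w_1>, ..., <Y, w_q>) is what the paper keeps.
scores = W.T @ Y

# The two carry the same information: PY is recovered from the scores.
assert np.allclose(PY, W @ scores)
```

So in this toy example the score vector and the projected element determine each other, which is what I suspect is going on, but I would like to see the statement made precise in the Hilbert-space setting.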
Could you explain how to understand this dimension reduction, i.e. how the score vector relates to the projection? The paper I am reading is [Horváth, L., & Reeder, R. (2012). Detecting changes in functional linear models. Journal of Multivariate Analysis, 111, 310–334.]
Thank you for reading my question :)