Let $x,y_1,\dots,y_k$ be random variables in $\mathcal{L}_{\mathbb{R}}^2(\Omega,\mathcal{F},P)$. Let $[x],[y_1],\dots,[y_k]$ be their corresponding equivalence classes in $L_{\mathbb{R}}^2(\Omega,\mathcal{F},P)$.
$L^2$ is a real Hilbert space and $\mathcal{G}:=\text{span}\{[y_1],\dots,[y_k]\}$ is a finite-dimensional (hence closed) subspace of $L^2$. By the Hilbert space projection theorem there exists a vector $\hat{\beta}\in\mathbb{R}^k$ such that
$$E([x]-[y]'\hat{\beta})^2=\inf_{\beta\in\mathbb{R}^k} E([x]-[y]'\beta)^2$$
where $[y]:=([y_1],\dots,[y_k])'$.
Question: Is $\hat{\beta}$ also such that
$$E(x-y'\hat{\beta})^2=\inf_{\beta\in\mathbb{R}^k} E(x-y'\beta)^2$$
where $y:=(y_1,\dots,y_k)'$ and conversely? Also, is it true that $\hat{\beta}$ is unique if and only if $C:=E(yy')$ is positive definite?
Regarding the second question, if $E(yy')=E([y][y]')$ is positive definite, then by the orthogonality condition for the projection we must have $\hat{\beta}=E([y][y]')^{-1}E([x][y])=E(yy')^{-1}E(xy)$. But what about the "only if" part?
EDIT: The orthogonality condition for $\hat{\beta}$ is
$$E([x][y])=E([y][y]')\hat{\beta}$$
By the projection theorem there is at least one vector $\hat{\beta}$ satisfying this condition, and conversely any $\hat{\beta}$ satisfying this condition determines a solution. Since $E([y][y]')=E(yy')$ is positive semi-definite by construction, $E([y][y]')$ is positive definite if and only if it is invertible. If $E([y][y]')$ is not invertible, then the coefficient matrix of the orthogonality condition is singular, so the condition cannot have a unique solution. Since a system of linear equations has either zero, one, or infinitely many solutions, and the projection theorem guarantees at least one, we conclude that there are infinitely many solutions $\hat{\beta}$.
Is this correct?
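As a numerical sanity check on the argument above, here is a minimal sketch (assuming NumPy, on a discrete probability space with $n$ equally likely outcomes): when the components of $y$ are linearly dependent, the orthogonality condition has multiple solutions, all attaining the same mean squared error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                            # n equally likely outcomes
y1 = rng.normal(size=n)
y2 = 2.0 * y1                       # linear dependence => E(yy') is singular
x = y1 + rng.normal(size=n)

Y = np.column_stack([y1, y2])       # row i holds (y1, y2) at outcome i
C = Y.T @ Y / n                     # E(yy')
b = Y.T @ x / n                     # E(xy)

beta1 = np.linalg.pinv(C) @ b              # one solution of C beta = E(xy)
beta2 = beta1 + np.array([2.0, -1.0])      # (2, -1) lies in the null space of C

print(np.allclose(C @ beta1, b), np.allclose(C @ beta2, b))  # both solve it
mse = lambda beta: np.mean((x - Y @ beta) ** 2)
print(np.isclose(mse(beta1), mse(beta2)))                    # same minimum
```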
Suppose $X$, $Y_1,\ldots,Y_k$ are (real-valued; the complex case is handled similarly) functions in $L_2(\mu)$ and let $\mathcal{H}_k:=\operatorname{span}(Y_1,\ldots,Y_k)$.
The projection theorem guarantees that there exists a unique function $\widehat{X}\in\mathcal{H}_k$ such that $$\widehat{X}=\operatorname{arg\,min}_{Z\in\mathcal{H}_k}\|X-Z\|_{L_2};$$ moreover, $X-\widehat{X}\in\mathcal{H}^\perp_k$, that is, $\langle X-\widehat{X},Z\rangle=0$ for all $Z\in\mathcal{H}_k$. (Here $\langle U,V\rangle:=\int UV\,d\mu$.)
As $\widehat{X}\in\mathcal{H}_k$, there exists $\boldsymbol{\alpha}=[\alpha_1,\ldots,\alpha_k]^\intercal\in\mathbb{R}^k$ such that $\widehat{X}=\alpha_1Y_1+\cdots+\alpha_kY_k=:\boldsymbol{\alpha}\cdot\boldsymbol{Y}$, where $\boldsymbol{Y}=[Y_1,\ldots,Y_k]^\intercal$. The problem the OP is interested in is that of necessary and sufficient conditions for the uniqueness of $\boldsymbol{\alpha}$.
Since $X-\widehat{X}\in\mathcal{H}^\perp_k$, $\boldsymbol{\alpha}\in\mathbb{R}^k$ satisfies $\widehat{X}=\boldsymbol{\alpha}\cdot\boldsymbol{Y}$ iff $\boldsymbol{\alpha}$ satisfies the linear equations $$\begin{align} \begin{pmatrix} \langle X,Y_1\rangle\\ \vdots\\ \langle X,Y_k\rangle\end{pmatrix} &= \begin{pmatrix} \langle Y_1,Y_1\rangle &\ldots &\langle Y_k,Y_1\rangle\\ \vdots &\ddots &\vdots\\ \langle Y_1,Y_k\rangle &\ldots &\langle Y_k,Y_k\rangle \end{pmatrix}\begin{pmatrix} \alpha_1\\ \vdots\\ \alpha_k\end{pmatrix}\\ \langle X,\boldsymbol{Y}\rangle &= \langle \boldsymbol{Y}\boldsymbol{Y}^\intercal\rangle\,\boldsymbol{\alpha} \end{align}$$
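For illustration, here is a sketch (assuming NumPy, with $\mu$ taken to be the uniform measure on $n$ sample points) that assembles and solves this system in the well-posed case:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 500, 3
Y = rng.normal(size=(n, k))        # columns play the role of Y_1, ..., Y_k
X = Y @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

A = Y.T @ Y / n                    # the Gram matrix <Y Y^T>
rhs = Y.T @ X / n                  # the vector <X, Y>
alpha = np.linalg.solve(A, rhs)    # unique, since A is invertible here
Xhat = Y @ alpha

# Orthogonality: X - Xhat is perpendicular to every Y_j.
print(np.allclose(Y.T @ (X - Xhat) / n, 0.0))
```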
If the $\{Y_1,\ldots,Y_k\}$ are linearly independent, then the matrix $A:=\langle \boldsymbol{Y}\boldsymbol{Y}^\intercal\rangle$ is invertible, in which case $\boldsymbol{\alpha}$ is unique. One way to see this is by first constructing an orthonormal basis for $\mathcal{H}_k$ through the Gram-Schmidt algorithm applied to the ordered basis $(Y_1,\ldots,Y_k)$. The expression for $\widehat{X}$ in terms of this new basis is easily obtained (and unique) by calculating Fourier coefficients. Then, we invert the GS procedure to recover the expression for $\widehat{X}$ in terms of the ordered basis $(Y_1,\ldots,Y_k)$.
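A minimal sketch of this route (assuming NumPy; `gram_schmidt` is a hypothetical helper written just for this illustration): orthonormalize the $Y_j$, compute the Fourier coefficients of $X$, then undo the change of basis to recover $\boldsymbol{\alpha}$.

```python
import numpy as np

def gram_schmidt(Y, n):
    """Orthonormalize the columns of Y w.r.t. <U, V> = (U @ V) / n.
    Returns E with orthonormal columns and upper-triangular R with Y = E @ R."""
    k = Y.shape[1]
    E = np.zeros_like(Y, dtype=float)
    R = np.zeros((k, k))
    for j in range(k):
        v = Y[:, j].astype(float)
        for i in range(j):
            R[i, j] = (E[:, i] @ Y[:, j]) / n   # <Y_j, e_i>
            v -= R[i, j] * E[:, i]
        R[j, j] = np.sqrt((v @ v) / n)          # norm of the residual
        E[:, j] = v / R[j, j]
    return E, R

rng = np.random.default_rng(2)
n, k = 500, 3
Y = rng.normal(size=(n, k))
X = Y @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

E, R = gram_schmidt(Y, n)
c = E.T @ X / n                  # Fourier coefficients <X, e_i>
alpha = np.linalg.solve(R, c)    # invert the Gram-Schmidt change of basis
print(np.allclose(Y @ alpha, E @ c))   # same projection in both bases
```

Inverting the upper-triangular $R$ is exactly the "invert the GS procedure" step: $\widehat{X}=E c = Y R^{-1} c$, so $\boldsymbol{\alpha}=R^{-1}c$.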
In general (for example, when $\{Y_1,\ldots,Y_k\}$ are not linearly independent) the matrix $A$ is not invertible. Still, we can use a generalized inverse (there are infinitely many of them) to obtain an expression for a set of coefficients $\boldsymbol{\alpha}$. Amongst all solutions (and generalized inverses) there is one that is special and widely used: the Moore-Penrose inverse $A^+$. The special solution $\boldsymbol{\alpha}_*=A^+\langle X,\boldsymbol{Y}\rangle$ has the additional property that $$ \boldsymbol{\alpha}_*=\operatorname{arg\,min}\{|\boldsymbol{\alpha}|_2:\widehat{X}=\boldsymbol{\alpha}\cdot\boldsymbol{Y}\} $$
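A sketch of this degenerate case (assuming NumPy, whose `np.linalg.pinv` computes the Moore-Penrose inverse): the pseudoinverse solution solves the normal equations and has the smallest 2-norm among all solutions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
y = rng.normal(size=n)
Y = np.column_stack([y, -y, rng.normal(size=n)])   # rank-deficient: col2 = -col1
X = y + rng.normal(size=n)

A = Y.T @ Y / n                     # singular Gram matrix
b = Y.T @ X / n                     # <X, Y>
alpha_star = np.linalg.pinv(A) @ b  # Moore-Penrose solution A^+ <X, Y>

# Any other solution differs by a null-space vector and has a larger 2-norm.
alpha_other = alpha_star + np.array([1.0, 1.0, 0.0])   # (1, 1, 0) is in ker(A)
print(np.allclose(A @ alpha_other, b))                           # still solves it
print(np.linalg.norm(alpha_star) < np.linalg.norm(alpha_other))  # minimal norm
```

The norm comparison works because $\boldsymbol{\alpha}_*$ lies in the row space of $A$, orthogonal to $\ker(A)$, so adding any nonzero null-space vector strictly increases the 2-norm.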
Observation: If $\{Y_1,\ldots,Y_k\}$ are linearly independent, so that $A$ is invertible, then $A^+=A^{-1}$.