Suppose $H$ is a Hilbert space over $\mathbb{R}$, and $X$ and $Y$ are random vectors in $H$. Define the Hilbert expectation of a random vector $X$ in $H$ as a vector $v \in H$ such that $\forall u \in H \; (E\langle X, u \rangle = \langle v, u \rangle)$. If a Hilbert expectation exists, it is unique: if $v$ and $v'$ both satisfy the definition, then $\langle v - v', u \rangle = 0$ for every $u \in H$, and taking $u = v - v'$ gives $v = v'$. Denote the Hilbert expectation of $X$ by $E_HX$. Now define the scalar covariance of two random vectors $X$ and $Y$ as $Cov_H(X, Y) = E\langle X, Y\rangle - \langle E_HX, E_HY \rangle$.
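To make the definition concrete, here is a small sketch in $H = \mathbb{R}^2$ with a hypothetical three-point distribution (the values `xs` and probabilities `ps` are made up for illustration): the componentwise mean satisfies the defining property $E\langle X, u \rangle = \langle E_HX, u \rangle$.

```python
# Toy check in H = R^2 that the componentwise mean satisfies the
# defining property of the Hilbert expectation. The distribution is
# hypothetical: X takes the value xs[i] with probability ps[i].
xs = [(1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
ps = [0.5, 0.3, 0.2]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# Candidate for E_H X: the componentwise expectation.
EX = tuple(sum(p * x[k] for p, x in zip(ps, xs)) for k in range(2))

# Verify  E<X, u> = <E_H X, u>  on a few test vectors u.
for u in [(1.0, 0.0), (0.0, 1.0), (2.0, -3.0)]:
    E_inner = sum(p * dot(x, u) for p, x in zip(ps, xs))  # E<X, u>
    assert abs(E_inner - dot(EX, u)) < 1e-12
```

Of course this only works so directly in finite dimensions; in general $H$ the existence of $E_HX$ is exactly what the definition above postulates.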
Is it always true that if $X$ and $Y$ are independent, then $Cov_H(X, Y) = 0$?
Suppose $H$ is $l_2$ or any of its subspaces, and write $X = (X_n)_{n = 1}^\infty$ and $Y = (Y_n)_{n = 1}^\infty$. Then $Cov_H(X, Y) = \sum_{n = 1}^{\infty} EX_nY_n - \sum_{n = 1}^{\infty} EX_nEY_n = \sum_{n = 1}^{\infty} Cov(X_n, Y_n)$. Thus, if $X$ and $Y$ are independent, then $Cov_H(X, Y) = 0$. And since every separable Hilbert space is isometrically isomorphic to a subspace of $l_2$, the statement holds for any separable Hilbert space.
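As a finite-dimensional sanity check of this computation, one can take independent discrete random vectors in $H = \mathbb{R}^3$ (a subspace of $l_2$, padding with zeros) and compute $Cov_H(X,Y)$ exactly over the product distribution. Both distributions below are hypothetical:

```python
# Independent discrete X, Y in H = R^3; under independence the joint
# distribution is the product distribution, and Cov_H(X, Y) comes out 0.
xs = [(1.0, 2.0, 0.0), (0.0, 1.0, 3.0)]
px = [0.4, 0.6]
ys = [(2.0, 0.0, 1.0), (1.0, 1.0, 0.0), (0.0, 2.0, 2.0)]
py = [0.2, 0.5, 0.3]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def mean(vs, ps):
    return tuple(sum(p * v[k] for p, v in zip(ps, vs)) for k in range(len(vs[0])))

EX, EY = mean(xs, px), mean(ys, py)   # Hilbert expectations

# E<X, Y> under the joint (= product) distribution:
E_inner = sum(p * q * dot(x, y) for p, x in zip(px, xs) for q, y in zip(py, ys))

cov_H = E_inner - dot(EX, EY)
assert abs(cov_H) < 1e-12   # scalar covariance vanishes for independent X, Y
```

This is exact (up to floating point) because $E\langle X, Y\rangle = \langle E_HX, E_HY\rangle$ by bilinearity once the joint distribution factors.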
However, I do not know what to do when $H$ is not separable.
Not sure the following argument works (I'm just learning this stuff)... critiques & corrections most welcome!
First, we have the usual:
$$ \begin{aligned} E\langle X-E_H X, Y - E_H Y\rangle &= E\langle X,Y\rangle - E\langle E_H X, Y\rangle - E\langle X, E_H Y\rangle + \langle E_H X, E_H Y\rangle \\ &= E\langle X,Y\rangle - \langle E_H X, E_H Y\rangle = Cov_H(X,Y) \end{aligned} $$
where the first equality is due to linearity, and the second follows from the definition of the Hilbert expectation: $E\langle X, E_H Y\rangle = \langle E_H X, E_H Y \rangle$ by the defining property of $E_H X$, and likewise $E\langle E_H X, Y\rangle = \langle E_H X, E_H Y \rangle$ by the defining property of $E_H Y$, so the two cross terms each cancel half of the remaining terms.
Now if $X,Y$ are independent, $E[\cdot]$ can be written as $E_X [ E_Y[\cdot]]$, in the sense that integrating over the joint distribution is equal to integrating over (the marginal, i.e. unconditioned) $Y$ first and then over (the marginal, i.e. unconditioned) $X$. So:
$$Cov_H(X,Y) = E_X [ E_Y \langle X-E_H X, Y - E_H Y\rangle ]$$
Now, the inner expression $f(X) = E_Y \langle X-E_H X, Y - E_H Y\rangle$ is a scalar function of $X$. However, by definition of $E_H Y$ we have:
$$\forall u \in H: E_Y \langle u, Y - E_H Y\rangle = E_Y \langle u, Y\rangle - \langle u, E_H Y\rangle = 0$$
So by substituting $u = X-E_H X$ we have $f(X) \equiv 0$ and $Cov_H(X,Y) = E_X [0] = 0$.
Sanity check: if $X,Y$ were dependent (contrary to the OP's assumption), then $E[\cdot] = E_X[E_{Y|X}[\cdot]]$, and the inner expression, considered as a scalar function of $X$, becomes $g(X) = E_{Y|X}\langle X-E_H X, Y - E_H Y\rangle$, where the relevant distribution is the conditional distribution of $Y$ given the specific value of $X$. We can no longer claim $g(X) \equiv 0$, because nothing lets us conclude that $E_{Y|X}\langle u, Y\rangle = \langle u, E_H Y\rangle$ for a particular value of $X$.
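To see the sanity check fail concretely, take the extreme dependent case $Y = X$ with a hypothetical two-point distribution in $H = \mathbb{R}^2$; then $Cov_H(X, X) = E\|X\|^2 - \|E_H X\|^2$, which is strictly positive here:

```python
# Y = X (fully dependent), hypothetical uniform two-point distribution
# on the standard basis vectors of R^2.
xs = [(1.0, 0.0), (0.0, 1.0)]
ps = [0.5, 0.5]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

EX = tuple(sum(p * x[k] for p, x in zip(ps, xs)) for k in range(2))  # (0.5, 0.5)

E_norm2 = sum(p * dot(x, x) for p, x in zip(ps, xs))  # E<X, X> = E||X||^2 = 1.0
cov_H = E_norm2 - dot(EX, EX)                         # 1.0 - 0.5 = 0.5
assert cov_H > 0   # dependence shows up as a nonzero scalar covariance
```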