How to define the expectation and covariance of a random variable taking values in an inner product space?


I'm dealing with random variables that take values in an inner product space $V$, which is finite dimensional but not necessarily Euclidean. To be precise: let $\dim(V)=d$, and equip $V$ with the topology induced by the norm coming from the inner product $\langle\cdot,\cdot\rangle$. Once that is done, we define a random variable $X : \Omega \to V$ as a measurable function, i.e. one that pulls back the open (equivalently, Borel) subsets of $V$ into the sigma-algebra on $\Omega$.

My question is: given the above, how do we go about defining $\mathbb{E}X$ and $\mathbb{Cov}[X]$? To do this, take a basis $B$ of $V$, and let $\Phi_B: V \to \mathbb{R}^d$ be the vector space isomorphism that takes the basis $B$ of $V$ onto the canonical basis $\{e_1,\dots,e_d\}$ of $\mathbb{R}^d$.

Next, we define the expectation as:

Suggested definition 1: $\mathbb{E}X:= \Phi_B^{-1} \mathbb{E}[\Phi_B \circ X] \in V$. Note that we can also try to define it as follows:

Suggested definition 2: Let $B:=\{v_1,\dots,v_d\}$ be a basis, and write $X= \sum_{i=1}^{d}X_i v_i$, where the $X_i: \Omega \to \mathbb{R}$ are real random variables ("the components w.r.t. the basis"). Then we can define $\mathbb{E}X:= \sum_{i=1}^{d}(\mathbb{E}X_i) v_i$. I think this definition is more amenable to a proof of basis-independence, by the following argument. Suppose $X= \sum_{i=1}^{d}X_i v_i = \sum_{i=1}^{d}Y_i w_i$. Integrating w.r.t. the probability measure on $\Omega$, and taking the $v_i$ (resp. $w_i$) out of the integrals as they are constant vectors in $V$, we arrive at $\sum_{i=1}^{d}(\mathbb{E}X_i)v_i = \sum_{i=1}^{d}(\mathbb{E}Y_i) w_i$.
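This basis-independence argument can be sanity-checked numerically. A minimal sketch with NumPy, assuming $V=\mathbb{R}^2$ with two arbitrary (made-up, non-orthonormal) bases and made-up sample data: the componentwise mean of suggested definition 2 reconstructs the same vector of $V$ in either basis.

```python
import numpy as np

rng = np.random.default_rng(0)

# A sample of n draws of a V-valued random variable, V = R^2,
# stored in coordinates w.r.t. the canonical basis.
n = 100_000
sample = rng.normal(loc=[1.0, -2.0], scale=[1.0, 3.0], size=(n, 2))

# Two arbitrary (non-orthonormal) bases B = {v_1, v_2} and B' = {w_1, w_2},
# given as the columns of invertible matrices.
V_basis = np.array([[1.0, 1.0],
                    [0.0, 2.0]])
W_basis = np.array([[2.0, -1.0],
                    [1.0,  1.0]])

# Components X_i (resp. Y_i) of each draw w.r.t. each basis:
# x = sum_i X_i v_i  <=>  (X_1, ..., X_d) = V_basis^{-1} x.
X_comp = sample @ np.linalg.inv(V_basis).T
Y_comp = sample @ np.linalg.inv(W_basis).T

# Suggested definition 2: E X := sum_i (E X_i) v_i, in either basis.
mean_via_B  = V_basis @ X_comp.mean(axis=0)
mean_via_Bp = W_basis @ Y_comp.mean(axis=0)

print(mean_via_B, mean_via_Bp)  # agree up to floating-point error
```

The two results coincide (up to floating point) because taking components, averaging, and recombining with the basis vectors are all linear operations that commute with the integral.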

But then, of course, we need to prove that these two suggested definitions are equivalent. How do we do that?

Next, we define the covariance as follows:

Suggested definition 1: Consider $\mathbb{Cov}[\Phi_B \circ X]$, which is a matrix, but think of it as a linear operator on $\mathbb{R}^d$. Then define $\mathbb{Cov} [X]$ as the following linear operator on $V$:

$\mathbb{Cov} [X] := \Phi_B^{-1} \circ \mathbb{Cov}[\Phi_B \circ X] \circ \Phi_B$.

My problem is: I can't readily see why these two definitions above are basis-independent.

To prove basis independence for $\mathbb{E}X$, we have to prove that for any two bases $B, B'$ of $V$, we must have:

$\Phi_B^{-1} \mathbb{E}[\Phi_B \circ X] = \Phi_{B'}^{-1} \mathbb{E}[\Phi_{B'} \circ X]$. But I'm not sure why this is true.

Similarly, to prove that the definition of covariance is basis-independent, we need to show:

$\Phi_B^{-1} \circ \mathbb{Cov}[\Phi_B \circ X] \circ \Phi_B = \Phi_{B'}^{-1} \circ \mathbb{Cov}[\Phi_{B'} \circ X] \circ \Phi_{B'}$. But I'm not sure why this is true.

Suggested definition 2 (only attempted, not finished!): Much like suggested definition 2 for the expectation, we can try to define the covariance using its bilinearity: $\mathbb{Cov}(X):= \mathbb{Cov}(X,X) = \mathbb{Cov}\big(\sum_{i=1}^{d}X_i v_i,\sum_{j=1}^{d}X_j v_j\big) = \sum_{i=1}^{d} \sum_{j=1}^{d}\mathbb{Cov}(X_i, X_j)\,??? $ I put $???$ here because I don't know what should be there, but $???$ should certainly be a linear operator on $V$. If you take $V:= \mathbb{R}^d$ with its canonical inner product, then $???$ will be the matrix/linear operator $e_i e_j^{T}$. I'll think more about it!
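The Euclidean guess above can at least be verified numerically. A sketch with NumPy, using made-up correlated sample data on $V=\mathbb{R}^3$: summing $\mathbb{Cov}(X_i,X_j)\, e_i e_j^T$ over all $i,j$ reproduces the usual covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# Draws of X in R^3 (canonical basis e_1, e_2, e_3), with correlated components.
n, d = 200_000, 3
A = rng.normal(size=(d, d))
sample = rng.normal(size=(n, d)) @ A.T

# Build Sigma = sum_{i,j} Cov(X_i, X_j) e_i e_j^T term by term,
# computing each scalar covariance directly from the components.
mu = sample.mean(axis=0)
I = np.eye(d)
Sigma = np.zeros((d, d))
for i in range(d):
    for j in range(d):
        cov_ij = ((sample[:, i] - mu[i]) * (sample[:, j] - mu[j])).mean()
        Sigma += cov_ij * np.outer(I[i], I[j])

# Reference: the ordinary (biased, to match the .mean() above) covariance matrix.
C = np.cov(sample, rowvar=False, bias=True)
print(np.allclose(Sigma, C))
```

This is consistent with the operator $e_i e_j^T$ being the right candidate for $???$ in the Euclidean case, since the $(i,j)$ entry of the covariance matrix is exactly $\mathbb{Cov}(X_i, X_j)$.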

Any constructive help would be appreciated!!!

1 Answer

The expectation of random variables taking values in topological vector spaces (in particular, inner product spaces) is usually defined in terms of the Pettis integral:

Suppose $X:(\Omega,\mathscr{F},\mu)\rightarrow (V,\mathcal{B})$, where $V$ is a topological vector space equipped with the Baire $\sigma$-algebra $\mathcal{B}$. Suppose that $\int_\Omega|\lambda(X)|\,d\mu<\infty$ for all $\lambda\in V^*$, the dual of $V$ (the space of continuous linear functionals on $V$). A vector $\mu_X\in V$ such that $$\lambda(\mu_X)=\int_\Omega \lambda\circ X\,d\mu$$ for all $\lambda\in V^*$ is called the mean (or integral) of $X$ and is denoted by $\mu_X:=E[X]=\int_\Omega X\,d\mu$.
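For a finite-dimensional $V$ this defining identity is easy to test empirically. A minimal sketch, assuming $V=\mathbb{R}^2$ with made-up sample data: every continuous linear functional has the form $\lambda_a(x)=a\cdot x$, and the empirical mean satisfies $\lambda_a(\mu_X)=E[\lambda_a\circ X]$ for a batch of random functionals $a$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Empirical stand-in for the Pettis condition on V = R^2: every continuous
# linear functional is lambda_a(x) = a . x, so we test the defining identity
# lambda(mu_X) = E[lambda o X] against several random functionals a.
n = 100_000
sample = rng.normal(loc=[0.5, -1.5], size=(n, 2))
mu_X = sample.mean(axis=0)          # candidate Pettis mean

for _ in range(5):
    a = rng.normal(size=2)          # a random functional lambda_a
    lhs = a @ mu_X                  # lambda_a(mu_X)
    rhs = (sample @ a).mean()       # empirical E[lambda_a o X]
    assert np.isclose(lhs, rhs)

print("lambda(mu_X) = E[lambda o X] holds for all sampled functionals")
```

The identity holds here essentially exactly, because applying a linear functional commutes with averaging; in infinite dimensions the Pettis definition does real work, since a candidate vector $\mu_X$ need not exist a priori.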

For an inner product space $V$, the covariance is also defined through this type of integral. For simplicity, suppose $(V,\langle\cdot,\cdot\rangle)$ is a real inner product space and let $X:(\Omega,\mathscr{F},\mu)\rightarrow(V,\mathcal{B})$. Suppose that $E[|\langle x,X\rangle|^2]<\infty$ for all $x\in V$. Then $\mu_X$ is the unique vector such that $$\langle x,\mu_X\rangle=E\langle x,X\rangle,\qquad x\in V$$ Also, the function $f:V\times V\rightarrow \mathbb{R}$, $$f(x,y):=\operatorname{cov}\big(\langle x,X\rangle,\langle y, X\rangle\big)$$ is a bilinear form. Thus there exists a unique $\Sigma\in L(V,V)$ such that $$f(x,y)=\langle x, \Sigma y\rangle, \qquad x\in V,\ y\in V$$ It is easy to check that $\Sigma$ is self-adjoint. This $\Sigma\in L(V,V)$ is defined as the covariance of $X$ and denoted $\operatorname{cov}(X):=\Sigma$.
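A small numerical illustration of this construction, assuming $V=\mathbb{R}^2$ with the non-standard inner product $\langle x,y\rangle = x^T M y$ for a made-up positive-definite $M$ and made-up sample data: since $\operatorname{cov}(\langle x,X\rangle,\langle y,X\rangle)=x^T M C M y$ with $C$ the ordinary covariance matrix, the operator solving $\langle x,\Sigma y\rangle=f(x,y)$ is $\Sigma = CM$, which is self-adjoint for $\langle\cdot,\cdot\rangle$ even though it need not be a symmetric matrix.

```python
import numpy as np

rng = np.random.default_rng(3)

# Positive-definite M defines the inner product <x, y> = x^T M y on V = R^2.
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])
inner = lambda x, y: x @ M @ y

# A sample of X with correlated components (coordinates in the canonical basis).
n = 50_000
A = np.array([[1.0, 0.0],
              [0.7, 0.4]])
sample = rng.normal(size=(n, 2)) @ A.T

C = np.cov(sample, rowvar=False, bias=True)   # ordinary covariance matrix
Sigma = C @ M                                  # candidate covariance operator

# Check the defining identity <x, Sigma y> = cov(<x, X>, <y, X>)
# on a few random pairs (x, y).
for _ in range(5):
    x, y = rng.normal(size=2), rng.normal(size=2)
    lhs = inner(x, Sigma @ y)
    emp = np.cov(sample @ M @ x, sample @ M @ y, bias=True)[0, 1]
    assert np.isclose(lhs, emp)

# Sigma is self-adjoint for <.,.>, i.e. M Sigma = (M Sigma)^T, although
# Sigma itself is generally not a symmetric matrix.
assert np.allclose(M @ Sigma, (M @ Sigma).T)
print("defining identity and self-adjointness verified")
```

Note that in an orthonormal basis $M$ becomes the identity and $\Sigma$ reduces to the usual covariance matrix, which is how this abstract definition connects back to the basis-dependent suggestions in the question.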

It is not difficult to check that these definitions coincide with the componentwise definitions of mean and covariance for vector-valued random variables on Euclidean spaces.