Calculating individual means from sums of moments of random variables.


Suppose that I have $X_1, X_2, X_3$, a sequence of independent random variables with $E(|X_i|)<+\infty$ but not identically distributed, so that in general $E(X_1)\neq E(X_2)\neq E(X_3)$. Say that $P(n)=E(X_1^n)+E(X_2^n)+E(X_3^n)$ is available for every $n \in \mathbb{N}$, but the individual moments $E(X_i^n)$ are not. How can I calculate each $E(X_i)$ from $P(n)$?


Here is my restatement of the problem, incorporating material from the OP's comments.

We are told that there exist three probability measures $\alpha_1,\alpha_2,\alpha_3$ on the reals; we are given the values of $P(n)=\sum_{i=1}^3 \int_{\mathbb R} x^n\, \alpha_i(dx)$ for $n=1,2,\ldots$; and we are told that $P(1)=1$, that $P(n)<1$ for $n>1$, and that no two of the $\alpha_i$ are equal.

Note that $\alpha_1$ is not uniquely determined by this data: if $(\alpha_1,\alpha_2,\alpha_3)$ satisfy the conditions, so do $(\alpha_2,\alpha_3,\alpha_1)$, and $((\alpha_1+\alpha_2)/2,(\alpha_2+\alpha_3)/2, (\alpha_3+\alpha_1)/2)$, and so on.
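This non-uniqueness can be checked numerically. Here is a minimal sketch using three hypothetical discrete distributions (the particular atoms and weights are illustrative, chosen so that $P(1)=1$ and $P(n)<1$ for $n>1$); permuting the triple, or replacing it by pairwise mixtures, leaves every $P(n)$ unchanged:

```python
from fractions import Fraction as F

def moment(dist, n):
    """n-th moment of a discrete distribution given as {value: probability}."""
    return sum(p * x**n for x, p in dist.items())

def P(triple, n):
    """P(n) = sum of the n-th moments of the three distributions."""
    return sum(moment(d, n) for d in triple)

# Three distinct discrete probability distributions (hypothetical example).
a1 = {F(1, 2): F(1, 2), F(0): F(1, 2)}   # mean 1/4
a2 = {F(1, 4): F(1)}                      # mean 1/4
a3 = {F(3, 4): F(2, 3), F(0): F(1, 3)}   # mean 1/2
orig = (a1, a2, a3)

# A cyclic permutation of the triple...
perm = (a2, a3, a1)

# ...and the pairwise mixtures ((a1+a2)/2, (a2+a3)/2, (a3+a1)/2).
def mix(d1, d2):
    out = {}
    for d in (d1, d2):
        for x, p in d.items():
            out[x] = out.get(x, F(0)) + p / 2
    return out

mixed = (mix(a1, a2), mix(a2, a3), mix(a3, a1))

# All three triples produce identical moment sums P(n).
for n in range(1, 10):
    assert P(orig, n) == P(perm, n) == P(mixed, n)
```

Exact rational arithmetic (`fractions.Fraction`) avoids floating-point noise, so the equalities hold exactly rather than up to tolerance.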

The question is: do these data imply any bounds on $\int_{\mathbb R} x \alpha_1(dx)$, and if so, how can we calculate them?

The answer is yes: theoretically, there is a maximal closed interval $[R,S]$ such that for every $a\in [R,S]$ there is a choice of the $\alpha_i$, consistent with the given $P(n)$ sequence, for which $\int_{\mathbb R} x\, \alpha_1(dx)=a$. What follows is an outline of why: a sketch of a proof and of a calculation. (Unfortunately, the calculation is probably feasible only in contrived special cases.)

Let $\mu=\alpha_1+\alpha_2+\alpha_3$, so that $P(n)$ is the $n$-th moment of $\mu$. Since the sequence $P(n)$ is bounded, $\mu$ has bounded support (in fact, contained in $[-1,1]$), and $\mu$ is uniquely determined by the moment sequence $P(n)$ by an instance of the Hausdorff moment problem. In theory, then, we "know" $\mu$. Since $\alpha_i\le\mu$ we have $\alpha_i\ll \mu$, so by the Radon–Nikodym theorem we can write $\alpha_i(dx)=A_i(x)\mu(dx)$, where the functions $A_i$ are the Radon–Nikodym derivatives $$A_i(x)=\frac{d\alpha_i}{d\mu}.$$ So far we know neither the $\alpha_i$ nor the $A_i$, but we do know that $\sum_i A_i(x)=1$ and $0\le A_i(x)\le 1$ for $\mu$-almost all $x$, and that $\int_{\mathbb R}A_i(x)\,\mu(dx)=\alpha_i(\mathbb R)=1$, since each $\alpha_i$ is a probability measure.
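For discrete measures the Radon–Nikodym derivative is just the ratio of point masses, $A_i(x)=\alpha_i(\{x\})/\mu(\{x\})$, and the constraints above can be verified directly. A minimal sketch, with hypothetical distributions chosen only for illustration:

```python
from fractions import Fraction as F

# Three hypothetical discrete probability measures on a common finite support.
support = [F(0), F(1, 4), F(1, 2), F(3, 4)]
alpha = [
    {F(0): F(1, 2), F(1, 2): F(1, 2)},
    {F(1, 4): F(1)},
    {F(0): F(1, 3), F(3, 4): F(2, 3)},
]

# mu = alpha_1 + alpha_2 + alpha_3, a measure of total mass 3.
mu = {x: sum(a.get(x, F(0)) for a in alpha) for x in support}

# Radon-Nikodym derivatives: A_i(x) = alpha_i({x}) / mu({x}) on the support of mu.
A = [{x: a.get(x, F(0)) / mu[x] for x in support if mu[x] > 0} for a in alpha]

# Check the three constraints from the text:
for Ai in A:
    assert all(0 <= v <= 1 for v in Ai.values())          # 0 <= A_i <= 1
    assert sum(Ai[x] * mu[x] for x in Ai) == 1            # integral of A_i dmu = 1
for x in support:
    assert sum(Ai[x] for Ai in A) == 1                    # sum_i A_i(x) = 1
```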

Now consider the problem of minimizing $\phi(A)=\int_{\mathbb R}xA(x)\,\mu(dx)$ subject to the constraints $0\le A(x)\le 1$ and $\int_{\mathbb R}A(x)\,\mu(dx)=1$. The Neyman–Pearson lemma tells us that the minimum is attained by a function of the form $$A(x)=\begin{cases}1 & x<x_0\\c&x=x_0\\0&x>x_0\end{cases}$$ for some $x_0$ and some $c\in[0,1]$. The sought-for $R$ is the value of $\phi$ at this minimizer. Similarly, $S$ is given by maximizing $\phi$ subject to the same constraints; the maximizer has the form $$A(x)=\begin{cases}0& x<x_0'\\c'&x=x_0'\\1&x>x_0'\end{cases}$$ for some other threshold $x_0'$ and some $c'\in[0,1]$. One can exhibit a triple $(\alpha_1,\alpha_2,\alpha_3)$ attaining the extremes by taking $A_1$ to minimize $\phi$, taking $A_3$ to maximize $\phi$, setting $A_2=1-(A_1+A_3)$, and then building $\alpha_i=A_i\mu$. (This is legitimate: $\mu$ has total mass $3$ while each of the sets $\{A_1=1\}$ and $\{A_3=1\}$ carries $\mu$-mass at most $1$, so $x_0\le x_0'$ and $A_1+A_3\le 1$ pointwise, whence $A_2\ge 0$.)
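When $\mu$ is discrete the extremal calculation becomes elementary: sort the atoms, set $A=1$ on the smallest (for $R$) or largest (for $S$) atoms until total mass $1$ is used up, with a fractional value on the threshold atom. A sketch under that assumption (the particular $\mu$ below is a made-up example of total mass $3$):

```python
from fractions import Fraction as F

def extremal_mean(mu, maximize=False):
    """Neyman-Pearson-style extremum of phi(A) = sum_x x * A(x) * mu({x})
    subject to 0 <= A <= 1 and sum_x A(x) * mu({x}) = 1, for a discrete mu.
    Set A = 1 on the smallest (or largest) atoms until mass 1 is spent,
    with a fractional value c on the threshold atom."""
    atoms = sorted(mu.items(), reverse=maximize)
    remaining, total = F(1), F(0)
    for x, m in atoms:
        take = min(m, remaining)   # mass assigned here; A(x) = take/m in [0, 1]
        total += x * take
        remaining -= take
        if remaining == 0:
            break
    return total

# A discrete mu of total mass 3 (hypothetical example).
mu = {F(0): F(5, 6), F(1, 4): F(1), F(1, 2): F(1, 2), F(3, 4): F(2, 3)}

R = extremal_mean(mu)                   # smallest achievable E(X_1)
S = extremal_mean(mu, maximize=True)    # largest achievable E(X_1)
assert R <= S
```

For this $\mu$ the minimizer takes the full atom at $0$ (mass $5/6$) plus a fraction of the atom at $1/4$, giving $R=1/24$; the maximizer takes the atom at $3/4$ plus part of the atom at $1/2$, giving $S=2/3$.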