Help Proving The Following Theorem On Taylor Expansions


I am trying to prove this theorem on page 6 (https://www.jstor.org/stable/3001633):

Theorem: If $x_1, \dots, x_k$ are independently distributed with density functions: $$f_{n_i}(x_i) = \left(\frac{n_i}{2}\right)^{\frac{n_i}{2}} \frac{x_i^{\frac{n_i}{2}-1}e^{-\frac{n_ix_i}{2}}}{\Gamma\left(\frac{n_i}{2}\right)}$$ for $0 \leq x_i < \infty$ and $R(x_1,\dots,x_k)$ is a rational function with no singularities for $0 < x_1,\dots,x_k < \infty$, then $\text{Ave}\{R(x_1,\dots,x_k)\}$ can be expanded in an asymptotic series in $\frac{1}{n_i}$. In particular: $$\text{Ave}\{R(x_1,\dots,x_k)\} = R(1,\dots,1) + \sum_{i=1}^k \frac{1}{n_i} \left.\frac{\partial^2 R}{\partial x_i^2}\right|_{(1,\dots,1)} + O\left(\sum \frac{1}{n_i^2}\right)$$
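Before attempting a proof, the claimed expansion can be sanity-checked numerically. The sketch below is a quick Monte Carlo check; the rational function $R(x_1,x_2) = x_1/x_2$ and the degrees of freedom are arbitrary illustrative choices, and $f_{n}$ is just the Gamma density with shape $n/2$ and scale $2/n$:

```python
import numpy as np

rng = np.random.default_rng(0)

# f_n in the theorem is the Gamma density with shape n/2 and scale 2/n
# (mean 1, variance 2/n).
def sample(n, size):
    return rng.gamma(shape=n / 2, scale=2 / n, size=size)

# Illustrative rational function R(x1, x2) = x1/x2, which has no
# singularities on (0, inf)^2; n1, n2 are arbitrary example values.
n1, n2 = 200, 300
x1 = sample(n1, 10**6)
x2 = sample(n2, 10**6)

mc = np.mean(x1 / x2)

# Theorem's prediction: R(1,1) + (1/n1) d^2R/dx1^2 + (1/n2) d^2R/dx2^2
# evaluated at (1,1), where d^2R/dx1^2 = 0 and d^2R/dx2^2 = 2*x1/x2**3 = 2.
pred = 1 + 0 / n1 + 2 / n2

print(mc, pred)
```

For this particular $R$ the exact value is $\mathbb E[X_1]\,\mathbb E[1/X_2] = \frac{n_2/2}{n_2/2-1} \approx 1.00671$, while the expansion gives $1 + 2/n_2 \approx 1.00667$, so the two agree to $O(1/n_2^2)$ as the theorem predicts.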

I was able to begin the proof:

Proof

  • Let $X_1, X_2, \dots, X_k$ be independent random variables with density functions $f_{n_i}(x_i)$ as given in the theorem.
  • Assume some function $R(x_1, x_2, \dots, x_k)$.

By Taylor's theorem, we can expand $R(x_1, x_2, \dots, x_k)$ around the point $(1, 1, \dots, 1)$: $$R(x_1, x_2, \dots, x_k) = R(1, 1, \dots, 1) + \sum_{i=1}^k \frac{\partial R}{\partial x_i}\bigg|_{(1, 1, \dots, 1)} (x_i - 1) + \frac{1}{2}\sum_{i=1}^k \frac{\partial^2 R}{\partial x_i^2}\bigg|_{(1, 1, \dots, 1)} (x_i - 1)^2 + \dots$$

Now, we take the average of this function over $X_1, X_2, \dots, X_k$. In probability theory, this average is the expected value:

$$\text{Ave}\{R(X_1, X_2, \dots, X_k)\} = \int \dots \int R(x_1, x_2, \dots, x_k) f_{n_1}(x_1)f_{n_2}(x_2) \dots f_{n_k}(x_k) dx_1 dx_2 \dots dx_k$$

Substituting the density functions $f_{n_i}(x_i)$ into the average expression, we have: $$\text{Ave}\{R(X_1, X_2, \dots, X_k)\} = \int \dots \int R(x_1, x_2, \dots, x_k) \left(\prod_{i=1}^k \left(\frac{n_i}{2}\right)^{\frac{n_i}{2}} \frac{x_i^{\frac{n_i}{2}-1}e^{-\frac{n_ix_i}{2}}}{\Gamma\left(\frac{n_i}{2}\right)}\right) dx_1 dx_2 \dots dx_k$$

I tried to simplify this a bit further:

$$\text{Ave}\{R(X_1, X_2, \dots, X_k)\} = \frac{1}{2^{\sum_{i=1}^k \frac{n_i}{2}}} \left(\prod_{i=1}^k n_i^{\frac{n_i}{2}}\right) \int \dots \int R(x_1, x_2, \dots, x_k) \left(\prod_{i=1}^k \frac{x_i^{\frac{n_i}{2}-1}e^{-\frac{n_ix_i}{2}}}{\Gamma\left(\frac{n_i}{2}\right)}\right) dx_1 dx_2 \dots dx_k$$
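Pulling the constants out of the integral rests on the identity $\prod_{i=1}^k \left(\frac{n_i}{2}\right)^{\frac{n_i}{2}} = 2^{-\sum_{i=1}^k \frac{n_i}{2}}\prod_{i=1}^k n_i^{\frac{n_i}{2}}$, since each factor contributes $2^{-n_i/2}$. A quick numeric check, with arbitrary illustrative degrees of freedom:

```python
import math

# Degrees of freedom: arbitrary illustrative values.
ns = [4, 7, 12]

# Left side: the product of (n/2)^(n/2) factors as they appear in the density.
lhs = math.prod((n / 2) ** (n / 2) for n in ns)

# Right side: the powers of 2 pulled out as a single factor 2^(-sum n/2).
rhs = 2 ** (-sum(n / 2 for n in ns)) * math.prod(n ** (n / 2) for n in ns)

print(lhs, rhs)
```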

But I don't know how to continue this from here (or if I have even done the work correctly). Can someone please help me out here?

Thanks!

Best Answer

This result is very computational, so there may be a computational error here or there in my answer, hopefully no more than a constant.

First, the multidimensional Taylor expansion you provide is actually incorrect. See, for example, the Wikipedia page on Taylor's theorem. The second-order expansion of $R$ about $\mathbb{1} := (1,\dots,1)$ is $$R(x_1,\dots,x_k) = R(\mathbb 1) + \sum_{j=1}^k D_jR(\mathbb 1)(x_j - 1) + \frac{1}{2}\sum_{i,j=1}^{k}D_{ij}R(\mathbb 1)(x_i-1)(x_j-1) + \varepsilon,$$ where $$D_iR(\mathbb 1) := \frac{\partial R}{\partial x_i}(\mathbb 1), \quad D_{ij}R(\mathbb 1) := \frac{\partial^2 R}{\partial x_i\partial x_j}(\mathbb 1), \quad \varepsilon := O\left(\sum_{l,m,n=1}^k(x_{l}-1)(x_m-1)(x_n-1)\right).$$

This yields (using the customary notation $\mathbb E$ instead of Ave) $$\mathbb E[R(X_1,\dots,X_k)] = R(\mathbb 1) + \sum_{j=1}^k D_jR(\mathbb 1)(\mathbb E[X_j]-1) + \frac{1}{2}\sum_{i,j=1}^{k}D_{ij}R(\mathbb 1)\,\mathbb E[(X_i-1)(X_j-1)] + \mathbb E[\varepsilon].$$ Now, we can calculate $$\mathbb E[X_i] = 1, \quad \mathbb E[(X_i-1)^2] = \frac{2}{n_i}$$ for all $i$ straight from the definition of expectation, hence $\mathbb E[X_i] - 1 = 0$. By independence, $\mathbb E[(X_i-1)(X_j-1)] = \mathbb E[X_i-1]\,\mathbb E[X_j-1] = 0$ when $i\neq j$, hence $$\mathbb E[R(X_1,\dots,X_k)] = R(\mathbb 1) + \frac{1}{2}\sum_{i=1}^{k}\frac{\partial^2R}{\partial x_i^2}(\mathbb 1)\,\mathbb E[(X_i-1)^2] + \mathbb E[\varepsilon] = R(\mathbb 1) + \sum_{i=1}^{k}\frac{1}{n_i}\frac{\partial^2R}{\partial x_i^2}(\mathbb 1)+ \mathbb E[\varepsilon].$$
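The first two moments can be verified symbolically; the sketch below (using SymPy, with a couple of concrete even degrees of freedom as examples) integrates directly against the density from the theorem:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

def density(n):
    # f_n from the theorem: Gamma density with shape n/2 and rate n/2.
    n = sp.Integer(n)
    return (n / 2) ** (n / 2) * x ** (n / 2 - 1) * sp.exp(-n * x / 2) / sp.gamma(n / 2)

for n in (4, 10):
    f = density(n)
    mean = sp.simplify(sp.integrate(x * f, (x, 0, sp.oo)))
    second = sp.simplify(sp.integrate((x - 1) ** 2 * f, (x, 0, sp.oo)))
    print(n, mean, second)  # mean = 1, second central moment = 2/n
```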

We turn our attention to the computation of $\mathbb E[\varepsilon]$. By independence, every term $\mathbb E[(X_l-1)(X_m-1)(X_n-1)]$ containing a non-repeated index vanishes (it has a factor $\mathbb E[X_j-1] = 0$), so only the terms with $l=m=n$ survive: $$\mathbb E[\varepsilon] = O\left(\sum_{l,m,n=1}^{k} \mathbb E[(X_l-1)(X_m-1)(X_n-1)]\right) = O\left(\sum_{i=1}^{k} \mathbb E[(X_i-1)^3]\right) = O\left(\sum_{i=1}^{k} \frac{1}{n_i^2}\right),$$ where we used $$\mathbb E[(X_i-1)^3] = \mathbb E[X_i^3] - 3\,\mathbb E[X_i^2] + 3\,\mathbb E[X_i] - 1 = \left(1 + \frac{6}{n_i} + \frac{8}{n_i^2}\right) - 3\left(1 + \frac{2}{n_i}\right) + 3 - 1 = \frac{8}{n_i^2}.$$
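The third central moment used here can likewise be checked symbolically, again with a concrete even $n$ as an example:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
n = sp.Integer(10)  # arbitrary even example value

# f_n from the theorem: Gamma density with shape n/2 and rate n/2.
f = (n / 2) ** (n / 2) * x ** (n / 2 - 1) * sp.exp(-n * x / 2) / sp.gamma(n / 2)

third = sp.simplify(sp.integrate((x - 1) ** 3 * f, (x, 0, sp.oo)))
print(third)  # 2/25, which equals 8/n^2 for n = 10
```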

In total we then have $$\mathbb E[R(X_1,\dots,X_k)] = R(\mathbb 1) + \sum_{i=1}^{k}\frac{1}{n_i}\frac{\partial^2R}{\partial x_i^2}(\mathbb 1)+ O\left(\sum_{i=1}^{k} \frac{1}{n_i^2}\right).$$ This is precisely what the authors claimed.

EDIT: As @forgottenarrow points out, a little more justification is needed to explain why

$$\mathbb E [\varepsilon] = O\left(\sum_{l,m,n=1}^{k} \mathbb E[(X_l-1)(X_m-1)(X_n-1)]\right).$$ This can be seen by going up to the next order of the Taylor expansion. We have $$R(x_1,\dots,x_k) = R(\mathbb 1) + \sum_{j=1}^k D_jR(\mathbb 1)(x_j - 1) + \frac{1}{2}\sum_{i,j=1}^{k}D_{ij}R(\mathbb 1)(x_i-1)(x_j-1) + \frac{1}{6}\sum_{l,m,n}D_{lmn}R(\mathbb 1) (x_l-1)(x_m-1)(x_n-1) + \varepsilon,$$ where $$\varepsilon = O\left(\sum_{l,m,n,p}C_{lmnp}(x_l-1)(x_m-1)(x_n-1)(x_p-1)\right).$$ Taking expectations yields $$\mathbb E[R(X_1,\dots,X_k)] = R(\mathbb 1) + \sum_{i=1}^k\frac{1}{n_i}\frac{\partial^2 R}{\partial x_i^2}(\mathbb 1) + O\left(\sum_{i=1}^{k} \frac{1}{n_i^2}\right) + \mathbb E[\varepsilon].$$ By independence, the only surviving fourth-order terms are those with $l=m=n=p$, of size $\mathbb E[(X_l-1)^4] = \frac{12}{n_l^2} + \frac{48}{n_l^3}$, and those with two repeated pairs, of size $\mathbb E[(X_l-1)^2]\,\mathbb E[(X_m-1)^2] = \frac{4}{n_l n_m}$; both are $O\left(\sum_{l=1}^{k} \frac{1}{n_l^2}\right)$, hence we can pull $\mathbb E[\varepsilon]$ into the $O$ to get

$$\mathbb E[R(X_1,\dots,X_k)] = R(\mathbb 1) + \sum_{i=1}^{k}\frac{1}{n_i}\frac{\partial^2R}{\partial x_i^2}(\mathbb 1)+ O\left(\sum_{i=1}^{k} \frac{1}{n_i^2}\right).$$

EDIT 2: Small math error pointed out in comments corrected, thanks to @forgottenarrow.