Finding the variance of a function with two vectors of independent random variables.


$$f=\mathbf{k} \cdot e^{-\mathbf{\alpha}} \tag{1}$$

where $\mathbf{k}$ is a random vector and $\mathbf{\alpha}$ is a vector whose elements $\alpha_i$ are a function of a deterministic variable $w_i$ and another random parameter $u_i$.

$$\alpha_i(w_i; u_i)= \frac{u_i}{w_i}$$

(from which it follows that $\mathbf{\alpha}$ is the vector of elasticities of $f$ w.r.t. the $w_i$, but I don't think that matters for this question.)

Note that I assume $\mathbf{u}$ and $\mathbf{k}$ are independent (so there is no covariance between them).

My question is what is the variance of $f$? At first I was thinking:

$$Var(f) = \left(e^{-\mathbf{\alpha}}\right)^{\top} \Sigma_{\mathbf{k}} \, e^{-\mathbf{\alpha}}$$

where $\Sigma_{\mathbf{k}}$ is the covariance matrix of $\mathbf{k}$

But then I was wondering what about the variance in $\mathbf{u}$? How to account for that?

It's not so bad in logs (again assuming no covariance between $\mathbf{u}$ and $\mathbf{k}$):

$$Var(\ln(f))=Var(\ln(\mathbf{k})\cdot \mathbf{1}) + Var(\mathbf{u \cdot w^{-1}})$$

(where $\mathbf{1}$ is a vector of ones; the variances add, rather than subtract, because $\mathbf{u}$ and $\mathbf{k}$ are independent)

$$= \mathbf{1}^{\top} \Sigma_{\mathbf{\ln(k)}} \, \mathbf{1} + \left(\mathbf{w^{-1}}\right)^{\top} \Sigma_{\mathbf{u}} \, \mathbf{w^{-1}}$$

But then how to get $Var(f)$ from $Var(\ln(f))$?
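In general $Var(f)$ is not determined by $Var(\ln(f))$ alone; the mean of $\ln(f)$ matters too. If one is willing to assume $\ln(f) \sim N(\mu, \sigma^2)$, the lognormal identity $Var(f) = (e^{\sigma^2}-1)\,e^{2\mu+\sigma^2}$ bridges the two. A minimal numerical sketch of that identity (the parameter values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma = 0.2, 0.5            # assumed mean and s.d. of ln(f)

ln_f = rng.normal(mu, sigma, size=1_000_000)
f = np.exp(ln_f)                # f is then lognormal by construction

var_mc = f.var()                # direct Monte Carlo variance of f
var_closed = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
print(var_mc, var_closed)       # agree up to Monte Carlo noise
```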


BEST ANSWER

I hope I understood the notation you're using: $\alpha$ is the vector $(\alpha_1,\ldots,\alpha_n)$ and $e^{-\alpha}$ denotes the vector $(e^{-\alpha_1},\ldots,e^{-\alpha_n})$, is that right? So $$ f = \textbf{k}\cdot e^{-\alpha} = \sum_{i=1}^n k_ie^{-\alpha_i}$$ Therefore, using the independence of $\textbf{k}$ and $\alpha$, $$\mathbb{E}[f] = \sum_{i=1}^n\mathbb{E}[k_ie^{-\alpha_i}] = \sum_{i=1}^n\mathbb{E}[k_i]\mathbb{E}[e^{-\alpha_i}] = \mathbb{E}[\textbf{k}]\cdot \mathbb{E}[e^{-\alpha}]$$

We now want to compute $\mathbb{E}[f^2]$. More generally, let $X=(X_1,\ldots,X_n)$ and $Y=(Y_1,\ldots,Y_n)$ be two independent random vectors. Then $$\mathbb{E}[(X\cdot Y)^2] = \mathbb{E}\bigg[\left( \sum_{i=1}^n X_iY_i\right)^2\bigg] = \mathbb{E}\bigg[ \sum_{i,j=1}^n X_iY_iX_jY_j\bigg]\\ = \sum_{i,j=1}^n \mathbb{E}[X_iY_iX_jY_j] = \sum_{i,j=1}^n \mathbb{E}[X_iX_j]\mathbb{E}[Y_iY_j]$$

Therefore we have $$Var(X\cdot Y)=\mathbb{E}[(X\cdot Y)^2]-\mathbb{E}[X\cdot Y]^2\\ = \sum_{i,j=1}^n \mathbb{E}[X_iX_jY_iY_j] - \left(\sum_{i=1}^n\mathbb{E}[X_iY_i]\right)^2\\ = \sum_{i,j=1}^n \Big( \mathbb{E}[X_iY_iX_jY_j]-\mathbb{E}[X_iY_i]\mathbb{E}[X_jY_j] \Big) $$ In the final expression, the summands are $Cov(X_iY_i,X_jY_j)$, ranging over $i,j$. In particular, the expression is the sum of all the entries of $\Sigma_Z$, the covariance matrix of the vector $Z$ defined by $Z_i=X_iY_i$. In fact, this holds regardless of whether $X$ and $Y$ are independent. We can rewrite it as $$ Var(X\cdot Y) = \textbf{1}^T \Sigma_Z \textbf{1}$$ where $\textbf{1}$ is the vector whose entries are all $1$. Perhaps this already suffices to solve your problem, given the densities of $\textbf{k}$ and $\alpha$. Note that I couldn't exploit the independence hypothesis here, because when computing the variance of $X\cdot Y$ the dependence of $X_i$ on $X_j$ also comes into play. Do you perhaps have additional assumptions on that as well?
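The identity $Var(X\cdot Y) = \textbf{1}^T \Sigma_Z \textbf{1}$ can be sanity-checked numerically. Here is a minimal sketch: the distributions chosen for $X$ and $Y$ are made up for illustration, and since the identity needs no independence, any choice would do. Because the identity is an exact algebraic fact about sample moments, the two numbers match up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 200_000  # vector length and number of Monte Carlo draws

# Made-up example distributions for X and Y (the identity holds for any).
X = rng.normal(loc=[1.0, 2.0, 0.5], scale=0.3, size=(m, n))
Y = rng.lognormal(mean=0.0, sigma=0.2, size=(m, n))

Z = X * Y                             # Z_i = X_i * Y_i, componentwise
dot = Z.sum(axis=1)                   # draws of the scalar X . Y

var_direct = dot.var(ddof=1)          # sample variance of X . Y
Sigma_Z = np.cov(Z, rowvar=False)     # sample covariance matrix of Z
ones = np.ones(n)
var_identity = ones @ Sigma_Z @ ones  # 1^T Sigma_Z 1

print(var_direct, var_identity)       # identical up to floating-point error
```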

UPDATE

I've found a nicer expression for that variance, one that also takes the independence into account. In fact,

$$\mathbb{E}[X_iY_iX_jY_j]-\mathbb{E}[X_iY_i]\mathbb{E}[X_jY_j]\\ = \mathbb{E}[X_iX_j]\mathbb{E}[Y_iY_j]-\mathbb{E}[X_i]\mathbb{E}[X_j]\mathbb{E}[Y_i]\mathbb{E}[Y_j]\\ = \left( \mathbb{E}[X_iX_j] - \mathbb{E}[X_i]\mathbb{E}[X_j]\right) \mathbb{E}[Y_iY_j] + \left( \mathbb{E}[Y_iY_j] - \mathbb{E}[Y_i]\mathbb{E}[Y_j]\right) \mathbb{E}[X_i]\mathbb{E}[X_j]\\ = \left( \mathbb{E}[X_iX_j] - \mathbb{E}[X_i]\mathbb{E}[X_j]\right)\left( \mathbb{E}[Y_iY_j] - \mathbb{E}[Y_i]\mathbb{E}[Y_j]\right) + \left( \mathbb{E}[X_iX_j] - \mathbb{E}[X_i]\mathbb{E}[X_j]\right) \mathbb{E}[Y_i]\mathbb{E}[Y_j] + \left( \mathbb{E}[Y_iY_j] - \mathbb{E}[Y_i]\mathbb{E}[Y_j]\right) \mathbb{E}[X_i]\mathbb{E}[X_j]$$

Now this might seem a horrible expression, but summing over $i,j$ it turns out that $$ Var(X\cdot Y) = \Sigma_X :\Sigma_Y + \mathbb{E}[X]^T \Sigma_Y \mathbb{E}[X] + \mathbb{E}[Y]^T \Sigma_X \mathbb{E}[Y]$$ where $\Sigma_X, \Sigma_Y$ respectively denote the covariance matrices of $X$ and $Y$, and $:$ is the scalar product between matrices, i.e. $A:B=\sum_{i,j} A_{ij} B_{ij}$. In your case, $X=\textbf{k}$ and $Y=e^{-\alpha}$. Therefore, once you have the means and covariance matrices of these two vectors (so you will need the density of $\mathbf{u}$ to compute the mean and covariance of $e^{-\alpha}$), you can use this formula.
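A quick Monte Carlo check of this final formula, with made-up distributions: $X$ plays the role of $\textbf{k}$ and $Y = e^{-G}$ for a normal $G$ plays the role of $e^{-\alpha}$ (all parameter values below are illustrative assumptions, not taken from the question):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 400_000

# X plays the role of k, Y the role of e^{-alpha}; independent by construction.
X = rng.normal(loc=[1.0, 2.0, 0.5], scale=0.3, size=(m, n))
Y = np.exp(-rng.normal(loc=0.5, scale=0.2, size=(m, n)))

var_mc = (X * Y).sum(axis=1).var(ddof=1)  # direct sample variance of X . Y

mu_X, mu_Y = X.mean(axis=0), Y.mean(axis=0)
S_X = np.cov(X, rowvar=False)
S_Y = np.cov(Y, rowvar=False)

# Var(X . Y) = Sigma_X : Sigma_Y + E[X]^T Sigma_Y E[X] + E[Y]^T Sigma_X E[Y]
var_formula = (S_X * S_Y).sum() + mu_X @ S_Y @ mu_X + mu_Y @ S_X @ mu_Y

print(var_mc, var_formula)  # agree up to Monte Carlo noise
```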

ANSWER

When scalar $X$ and $Y$ are independent, $var(XY)=E(X^2)E(Y^2)-E(X)^2E(Y)^2$, so

$var(f)=E(k^2)E(e^{-2\frac{u}{w}})-[E(k)E(e^{-\frac{u}{w}})]^2$

Suppose the distributions of $k$ and $u$ are known. If $u$ follows a normal distribution, we can also calculate

$E(e^{-\frac{u}{w}}) = \overline{E(e^{-\frac{u_i}{w_i}}) }=\overline{e^{-\frac{\mu_u}{w_i}+\frac{\sigma_u^2}{2w_i^2}} }$

The bar denotes the average across $i$.
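The closed form above is just the normal moment-generating function $E(e^{tu}) = e^{t\mu_u + t^2\sigma_u^2/2}$ evaluated at $t=-1/w_i$. A small numerical check (the values of $\mu_u$, $\sigma_u$, and $w$ are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
mu_u, sigma_u = 0.4, 0.3        # assumed parameters of the normal u_i
w = np.array([0.8, 1.5, 2.0])   # made-up deterministic w_i
m = 500_000

u = rng.normal(mu_u, sigma_u, size=(m, w.size))
mc = np.exp(-u / w).mean(axis=0)                      # Monte Carlo E(e^{-u_i/w_i})
closed = np.exp(-mu_u / w + sigma_u**2 / (2 * w**2))  # normal MGF at t = -1/w_i

print(mc, closed)  # componentwise agreement up to sampling noise
```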