Derivation of skewness and kurtosis algebra of random variables


In the algebra of random variables, the symbolic rule for the variance of a random vector $X\in\mathbb{R}^{p}$ multiplied by a coefficient vector $a\in\mathbb{R}^p$ is

$$Var(X\cdot a) = a^\top Var(X)\, a = a^\top \Sigma a,$$ where $\Sigma$ is the covariance matrix. What explains the vector $a$ appearing twice, i.e. what is the full derivation?

Given the above, what is the skewness algebra of random variables, and kurtosis algebra of random variables, i.e. derivation of the two expressions below?

$$Skew(X\cdot a) =? \hspace{3cm} Kurt(X\cdot a)=?$$
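The variance rule itself is easy to confirm numerically. A minimal sketch with NumPy, where the mixing matrix `A` is an arbitrary choice used only to make the components of $X$ correlated:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3
a = np.array([0.5, -1.0, 2.0])

# Draw correlated samples of a random vector X in R^p.
A = rng.normal(size=(p, p))
X = rng.normal(size=(100_000, p)) @ A.T   # each row is one draw of X

Sigma = np.cov(X, rowvar=False)           # sample covariance matrix (ddof=1)
lhs = np.var(X @ a, ddof=1)               # Var(X·a) computed from the samples
rhs = a @ Sigma @ a                       # aᵀ Σ a
print(lhs, rhs)                           # agree up to floating point
```

Both sides are computed from the same sample with the same degrees-of-freedom convention, so they match to floating-point precision rather than merely approximately.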


On BEST ANSWER

It helps to use the summation convention (sum over repeated indices). For the first problem,
$$\operatorname{Var}(a_iX_i)=\operatorname{Cov}(a_iX_i,\,a_jX_j)=a_i\underbrace{\operatorname{Cov}(X_i,\,X_j)}_{\Sigma_{ij}}a_j=a\cdot\Sigma a=a^T\Sigma a,$$
where the last $=$ uses a slight abuse of notation.

Since $a_iX_i$ has mean $a_i\mu_i$, with $\mu_i:=\Bbb EX_i$, and standard deviation $\sigma:=\sqrt{a\cdot\Sigma a}$, its skewness is
$$\Bbb E(a_iX_i-a_i\mu_i)^3/\sigma^3=(a\cdot\Sigma a)^{-3/2}\,a_ia_ja_k\,\Bbb E\big((X_i-\mu_i)(X_j-\mu_j)(X_k-\mu_k)\big).$$
We can't calculate further than that without further assumptions. The case of kurtosis is similar.
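The skewness expression above, and the analogous fourth-order one for kurtosis, $\sigma^{-4}\,a_ia_ja_ka_l\,\Bbb E\big((X_i-\mu_i)(X_j-\mu_j)(X_k-\mu_k)(X_l-\mu_l)\big)$, can be sanity-checked by Monte Carlo. A sketch, where the centred-exponential source and the mixing matrix `A` are arbitrary choices made only so the skewness is nonzero and the components are correlated:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200_000, 3
a = np.array([0.5, -1.0, 2.0])

# Skewed, correlated components: centred exponentials mixed by a matrix A.
A = rng.normal(size=(p, p))
X = (rng.exponential(size=(n, p)) - 1.0) @ A.T

Xc = X - X.mean(axis=0)                       # centred samples
Sigma = Xc.T @ Xc / n                         # covariance (ddof=0 to match raw moments)
sigma = np.sqrt(a @ Sigma @ a)                # std dev of a·X
Y = Xc @ a                                    # centred samples of a·X

# Third central moment tensor E[(X_i-mu_i)(X_j-mu_j)(X_k-mu_k)]
T3 = np.einsum('ni,nj,nk->ijk', Xc, Xc, Xc) / n
skew_formula = np.einsum('i,j,k,ijk->', a, a, a, T3) / sigma**3
skew_direct = np.mean(Y**3) / sigma**3
print(skew_direct, skew_formula)              # identical up to floating point

# The fourth central moment tensor gives kurtosis the same way.
T4 = np.einsum('ni,nj,nk,nl->ijkl', Xc, Xc, Xc, Xc) / n
kurt_formula = np.einsum('i,j,k,l,ijkl->', a, a, a, a, T4) / sigma**4
kurt_direct = np.mean(Y**4) / sigma**4
print(kurt_direct, kurt_formula)
```

Because the tensor contraction $a_ia_ja_k\,\overline{X^c_iX^c_jX^c_k}$ is algebraically identical to $\overline{(a\cdot X^c)^3}$ over the same sample, the "formula" and "direct" values agree exactly, not just in the large-$n$ limit.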