Dimensional Analysis of a Product of Matrices


I am attempting to compute the following product of matrices

$$\Sigma^{-\frac{1}{2}}_{AA} \Sigma_{AB} \Sigma^{-1}_{BB} \Sigma_{BA} \Sigma^{-\frac{1}{2}}_{AA}$$

where $\Sigma_{AB}$ denotes the sample cross-covariance matrix of the two vector time series $A_t$ and $B_t$.
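(To fix notation: by sample covariance I mean the usual estimator below; the paper may normalize by $T$ instead of $T-1$, but that does not affect the dimension issue.)

$$\hat{\Sigma}_{AB} = \frac{1}{T-1}\sum_{t=1}^{T}\left(A_t - \bar{A}\right)\left(B_t - \bar{B}\right)^{\top}, \qquad \bar{A} = \frac{1}{T}\sum_{t=1}^{T} A_t.$$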

The problem I have is that, for bivariate series, the matrices $\Sigma_{AB}$ and $\Sigma_{BA}$ come out as $(4 \times 4)$ while the matrices $\Sigma_{AA}$ and $\Sigma_{BB}$ are $(2 \times 2)$, so the product above cannot be computed.

In Python's NumPy I am using the np.cov function. If I pass np.cov(A.T, A.T) instead of just np.cov(A.T), then the dimension of $\Sigma_{AA}$ is $(4 \times 4)$, but the resulting matrix is not invertible.
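For concreteness, here is a minimal snippet reproducing what I am doing (placeholder random data, used only to show the dimensions):

import numpy as np

# placeholder bivariate series with 100 observations
np.random.seed(0)
A = np.random.normal(size=(100, 2))

# passing the same array twice stacks the variables, giving a 4x4 result
C = np.cov(A.T, A.T)
print(C.shape)                    # (4, 4)

# the four rows are two copies of the same two variables,
# so C has rank 2 and is not invertible
print(np.linalg.matrix_rank(C))   # 2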

The paper I am looking at seems to assume that computing these matrices is straightforward, so I wonder if I am doing something wrong.


1 Answer


If I understand your question correctly, it might be a matter of NumPy's convention for the covariance calculation.

In particular, np.cov assumes by default that the variables are the rows of the data matrix and the observations are the columns (details here: https://numpy.org/doc/1.19/reference/generated/numpy.cov.html?highlight=cov#numpy.cov).

The code below illustrates that one obtains a $2 \times 2$ covariance matrix from a pair of time series when the data is arranged following this convention.

I hope this helps.

import numpy as np

# fix seed for reproducibility
np.random.seed(1234)

# create a pair of time series
a = np.random.uniform(size=32)
b = np.random.uniform(size=32)

# compute the covariance matrix.
#
# NumPy's default convention for np.cov is that
# rows are variables and columns are observations.

X = np.array([a,b])
cov_mat = np.cov(X)

# covariance result is 2x2
print(cov_mat)
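Going one step further (this part is my own sketch rather than anything from the paper in the question): once both series are stacked into the same data matrix, $\Sigma_{AA}$, $\Sigma_{AB}$, $\Sigma_{BA}$ and $\Sigma_{BB}$ are simply the $2 \times 2$ blocks of the $4 \times 4$ joint covariance matrix, and the product can then be assembled directly. The inverse square root below is computed with an eigendecomposition, which assumes $\Sigma_{AA}$ is symmetric positive definite.

import numpy as np

np.random.seed(1234)

# two bivariate time series stored as (n_obs, 2) arrays; B is correlated with A
n = 200
A = np.random.normal(size=(n, 2))
B = 0.5 * A + np.random.normal(size=(n, 2))

# joint covariance of the stacked variables (rows = variables) is 4x4
S = np.cov(np.vstack([A.T, B.T]))

# the 2x2 blocks are exactly the matrices appearing in the product
S_AA, S_AB = S[:2, :2], S[:2, 2:]
S_BA, S_BB = S[2:, :2], S[2:, 2:]

def inv_sqrt(M):
    # inverse symmetric square root of an SPD matrix via eigendecomposition
    w, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

# Sigma_AA^{-1/2} Sigma_AB Sigma_BB^{-1} Sigma_BA Sigma_AA^{-1/2}
K = inv_sqrt(S_AA) @ S_AB @ np.linalg.inv(S_BB) @ S_BA @ inv_sqrt(S_AA)
print(K)  # a 2x2 matrix

If the paper is doing something like canonical correlation analysis, the eigenvalues of this matrix should lie in $[0, 1]$ (they are the squared canonical correlations), which makes a handy sanity check on the computation.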