Numpy/PyTorch break problem symmetry due to numeric precision

In my problem, I apply a linear transformation to a Gaussian random variable $Y \in \mathbb{R}^n$ by multiplying it with a matrix $\mathbf{f} \in \mathbb{R}^{n \times d}$, with $d \ll n$.

With $\Sigma_1 \in \mathbb{R}^{n \times n}$ and $\Sigma_2 \in \mathbb{R}^{d \times d}$ being the covariance matrices of the original and the transformed variables respectively, I use the identity $\Sigma_2 = \mathbf{f}^T \Sigma_1 \mathbf{f}$ to compute $\Sigma_2$ in PyTorch: $\Sigma_1$ is known and fixed, and I want to update $\Sigma_2$ as I learn $\mathbf{f}$.

From the above, both $\Sigma_1$ and $\Sigma_2$ should be symmetric positive definite matrices (SPDMs). However, because of finite floating-point precision, $\Sigma_2$ comes out slightly non-symmetric: the difference between symmetric elements is of the order of 1e-7 in PyTorch (which defaults to float32) and 1e-15 in NumPy (float64). This causes inconveniences later on, since some operations complain that $\Sigma_2$ is not an SPDM.

I wonder if there is some workaround that makes $\Sigma_2$ an SPDM without having to fix the end result by hand, and without compromising much on computation speed.

I would also like to understand how the asymmetries originate in the first place, since elements $(i,j)$ and $(j,i)$ of the final matrix seem to be computed from the exact same set of numbers.
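For context, a likely source of the asymmetry is that floating-point addition is not associative: elements $(i,j)$ and $(j,i)$ are sums of the same products, but the underlying BLAS/einsum kernels may accumulate them in different orders, and each order rounds differently. A minimal illustration with three plain Python floats:

```python
# Floating-point addition is not associative: summing the same
# numbers in a different order can round to a different result.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c
right = a + (b + c)
print(left == right)   # False
print(left - right)    # a residue on the order of 1e-16
```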

Some code to reproduce this observation (note that here $\mathbf{f}$ is stored as $d \times n$, so the quadratic form reads $\mathbf{f} \Sigma_1 \mathbf{f}^T$):

import numpy as np
n = 400
d = 5
A = np.random.randn(n, n)
Sigma1 = np.matmul(A, A.transpose()) * 0.01  # Symmetric positive definite matrix
f = np.random.randn(d, n) * 0.1  # Random filters
Sigma2 = np.einsum('kd,db,mb->km', f, Sigma1, f)  # Result of quadratic form
print(f'Symmetric residue of Sigma1: \n {Sigma1 - Sigma1.transpose()}')
print(f'Symmetric residue of Sigma2: \n {Sigma2 - Sigma2.transpose()}')

Output:

Symmetric residue of Sigma1: 
 [[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]]
Symmetric residue of Sigma2: 
 [[ 0.00000000e+00 -2.55351296e-15 -4.21884749e-15  1.11022302e-16
  -2.33146835e-15]
 [ 2.55351296e-15  0.00000000e+00 -3.33066907e-15  1.47104551e-15
   9.33975119e-15]
 [ 4.21884749e-15  3.33066907e-15  0.00000000e+00  1.14769305e-14
   8.88178420e-16]
 [-1.11022302e-16 -1.47104551e-15 -1.14769305e-14  0.00000000e+00
  -5.49560397e-15]
 [ 2.33146835e-15 -9.33975119e-15 -8.88178420e-16  5.49560397e-15
   0.00000000e+00]]
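For completeness, the usual post-hoc fix (which the question hopes to avoid, but is the standard trick) is to symmetrize the result explicitly. Averaging a matrix with its transpose yields an exactly symmetric result, because IEEE addition is commutative. A minimal NumPy sketch, reusing the names from the snippet above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 5
A = rng.standard_normal((n, n))
Sigma1 = A @ A.T * 0.01                 # symmetric positive definite
f = rng.standard_normal((d, n)) * 0.1   # random filters
Sigma2 = f @ Sigma1 @ f.T               # carries a ~1e-15 asymmetry

# The residue is far below the entry magnitudes...
print(np.allclose(Sigma2, Sigma2.T))        # True

# ...and averaging with the transpose removes it exactly:
# S[i,j] + S[j,i] == S[j,i] + S[i,j] bit-for-bit (commutativity).
Sigma2_sym = 0.5 * (Sigma2 + Sigma2.T)
print((Sigma2_sym == Sigma2_sym.T).all())   # True
```

This costs one $d \times d$ addition per update, which is negligible next to the $O(n^2 d)$ quadratic form.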

1 Answer

If you want something more elegant than manually fixing the result, I believe increasing the precision of the computation and then cutting the last few digits of the result may be good enough. For example,

A = np.random.randn(n, n).astype(np.float32)
Sigma1 = (np.matmul(A, A.transpose()) * 0.01).astype(np.float64)  # symmetric positive definite matrix
f = (np.random.randn(d, n) * 0.1).astype(np.float64)  # random filters
Sigma2 = np.einsum('kd,db,mb->km', f, Sigma1, f)  # quadratic form, accumulated in float64
Sigma2 = Sigma2.astype(np.float32)  # rounding to float32 discards the ~1e-15 residue

This should give a symmetric result.
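A quick sanity check of this idea (a sketch, not a guarantee: in principle a float64 residue can still straddle a float32 rounding boundary, though with residues ~1e-15 against float32 spacing ~1e-7 this is vanishingly unlikely):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 400, 5
A = rng.standard_normal((n, n)).astype(np.float32)
Sigma1 = (A @ A.T * 0.01).astype(np.float64)       # SPD, float32 data widened to float64
f = (rng.standard_normal((d, n)) * 0.1).astype(np.float64)
Sigma2 = f @ Sigma1 @ f.T                          # quadratic form accumulated in float64

residue64 = np.abs(Sigma2 - Sigma2.T).max()        # tiny but typically nonzero
Sigma2_32 = Sigma2.astype(np.float32)              # cut the last digits
residue32 = np.abs(Sigma2_32 - Sigma2_32.T).max()  # rounds away below float32 resolution
print(residue64, residue32)
```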