I am trying to find the positive projection, in the Frobenius norm, of a real matrix. Consider the following matrix $\hat{Z}$: $$\begin{bmatrix}1&2&0\\0&5&0\\0&8&9\end{bmatrix}$$
I need to find $Z$ such that $\lVert Z-\hat{Z} \rVert_{F}^{2}$ is minimized over symmetric positive semidefinite $Z$. Here $F$ denotes the Frobenius norm.
According to the paper Computing a Nearest Symmetric Positive Semidefinite Matrix by Nicholas J. Higham, it is possible to find the Frobenius-norm positive approximant $Z$ via Theorem 2.1:
Take $B = (\hat{Z} + \hat{Z}^{T})/2$.
Let $B = UH$ be a polar decomposition
Then $Z = (B+H)/2$ is the positive approximant.
Then, in equations (2.1) and (2.2), the paper proposes a faster way to compute the positive approximant via an eigendecomposition. Following this method, let $Q$ be the (right) eigenvector matrix of $\hat{Z}$ and $v = [v_{1}, v_{2}, \dots, v_{m}]$ the vector of its eigenvalues.
Then the eigenvalue decomposition is $\hat{Z} = Q \,\mathrm{diag}\{v\}\, Q^{*}$,
and the positive approximant is $Z = Q \,\mathrm{diag}\{\max(v_{1},0), \dots, \max(v_{m},0)\}\, Q^{*}$.
Using the two methods, I get different results. The first method gives me the following matrix: $$\begin{bmatrix} 1&1&0 \\ 1&5&4 \\ 0&4&9 \end{bmatrix}$$ Code for method 1:
import numpy as np
import scipy.linalg as sl
Z = np.array([[1, 2, 0], [0, 5, 0], [0, 8, 9]], dtype=np.float32)
b = 0.5 * (Z + Z.T)   # symmetric part B = (Z + Z^T)/2
u, h = sl.polar(b)    # polar decomposition B = UH
pos1 = 0.5 * (b + h)  # positive approximant (B + H)/2
The second method, however, gives me $$\begin{bmatrix} 1.2380&0.4761&-0.9523 \\ 0.4761&0.9523&-1.9047 \\ -0.9523&-1.9047&12.8095 \end{bmatrix}$$ Code for method 2:
import numpy as np
Z = np.array([[1, 2, 0], [0, 5, 0], [0, 8, 9]], dtype=np.float32)
w, v = np.linalg.eig(Z)          # eigendecomposition of Z
wd = np.diag(np.maximum(w, 0))   # clip negative eigenvalues at zero
pos2 = v @ wd @ v.T
Why am I getting two different results?
Actually, I found your error. In your second method, you must use the spectral decomposition of $B = \dfrac{Z+Z^T}{2}$, not of $Z$ itself. Besides, your $Z$ is the input matrix while the paper's $Z$ is the approximant; reusing the symbol like that is not good practice. Anyway, here is the code:
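A minimal NumPy sketch of the corrected second method, using the same input as above (I rename the input to `Z_hat` to keep it distinct from the paper's $Z$):

```python
import numpy as np

# Input matrix (the paper's Z-hat, not its positive approximant Z)
Z_hat = np.array([[1, 2, 0],
                  [0, 5, 0],
                  [0, 8, 9]], dtype=np.float64)

B = 0.5 * (Z_hat + Z_hat.T)                 # symmetric part B = (Z_hat + Z_hat^T)/2
w, Q = np.linalg.eigh(B)                    # eigh: B is symmetric, so Q is orthogonal
pos2 = Q @ np.diag(np.maximum(w, 0)) @ Q.T  # clip negative eigenvalues at zero
print(pos2)
```

For this particular $\hat{Z}$ every eigenvalue of $B$ is already positive, so the clipping is a no-op and the result is just $B$ itself, i.e. the same matrix your first method returned. Note `np.linalg.eigh` rather than `np.linalg.eig`: it exploits symmetry and guarantees real eigenvalues and orthonormal eigenvectors, so $Q Q^{T} = I$ and the reconstruction formula is valid.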