I am looking for a way to get a "best estimate" of a symmetric positive definite matrix $A\in \mathbb R^{n\times n}$ from a set of evaluations: given known vectors $v_i\in \mathbb R^n$, $i=1,\ldots,m$, I have data $y_i = v_i^TAv_i$, and I would like to recover $A$. Of course this may be under- or overdetermined (depending on $m$), so I am also happy with a least-squares/minimum-norm/lowest-rank approximation of $A$ (I am not really picky about the selection criterion; I just want something that works).
I have found (among others) the following papers (and a stack exchange thread):
- Recht, Benjamin, Maryam Fazel, and Pablo A. Parrilo. "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization." SIAM review 52.3 (2010): 471-501.
- M. Mesbahi and G. P. Papavassilopoulos, "On the rank minimization problem over a positive semidefinite linear matrix inequality," in IEEE Transactions on Automatic Control, vol. 42, no. 2, pp. 239-243, Feb. 1997, doi: 10.1109/9.554402.
- P. A. Parrilo and S. Khatri, "On cone-invariant linear matrix inequalities," in IEEE Transactions on Automatic Control, vol. 45, no. 8, pp. 1558-1563, Aug. 2000, doi: 10.1109/9.871772.
- Low-rank matrix satisfying linear constraints linear mapping
But all of these operate at a much higher level of generality (my matrix is symmetric and positive definite, and the data has a very simple form), so there should (probably) be an easier way than solving a very complicated optimization problem here.
OK, so I have an idea. Could someone please check my solution? I restate the problem in the following form for clarity:
$\min \|A\|_F^2$ under $y_i = v_i^TAv_i$, where $\|\cdot\|_F$ is the Frobenius norm. This can be modeled with a Lagrangian:
$L(A,\lambda) = \|A\|_F^2 - \sum_i \lambda_i (y_i-v_i^TAv_i)$. Differentiating with respect to $A$ yields:
$\partial_A L(A,\lambda) = 2A + \sum_i \lambda_i v_iv_i^T$. Setting this to zero and absorbing the constant factor $-\tfrac12$ into the multipliers, a candidate minimizer must have the form $A = \sum_i \lambda_i v_iv_i^T$, with the $\lambda_i$ yet to be determined.
The conditions can now be rewritten for this form of $A$:
$y_i = v_i^TAv_i = \sum_k \lambda_k (v_i^Tv_k)^2$, i.e. $\lambda = (\lambda_i)_i$ solves the linear system
$\begin{pmatrix} (v_i^Tv_j)^2\end{pmatrix}_{i,j} \cdot \lambda = y$, where the coefficient matrix is the entrywise square of the Gram matrix of the $v_i$,
which can subsequently be solved (in the least-squares sense if it is singular) to obtain a closed-form expression for $A$. Note that the solution obtained this way is symmetric by construction, but it need not be positive definite.
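A minimal numerical sketch of this recipe, assuming NumPy (the SPD matrix `A_true` and the measurement vectors here are randomly generated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 10

# Hypothetical test data: a known SPD matrix and random measurement vectors v_i.
B = rng.standard_normal((n, n))
A_true = B @ B.T + n * np.eye(n)             # symmetric positive definite
V = rng.standard_normal((m, n))              # row i is v_i
y = np.einsum('ij,jk,ik->i', V, A_true, V)   # y_i = v_i^T A v_i

# Coefficient matrix G_{ij} = (v_i^T v_j)^2:
# the entrywise square of the Gram matrix of the v_i.
G = (V @ V.T) ** 2

# Solve G lambda = y; lstsq covers the under-/overdetermined cases as well.
lam, *_ = np.linalg.lstsq(G, y, rcond=None)

# Reconstruct A = sum_i lambda_i v_i v_i^T.
A_est = (V * lam[:, None]).T @ V

# The measurements should be reproduced up to numerical error.
residual = np.max(np.abs(np.einsum('ij,jk,ik->i', V, A_est, V) - y))
```

Here `A_est` is symmetric by construction; whether it is also positive definite has to be checked separately (e.g. via its eigenvalues).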