I am working on a machine learning problem with an observed random variable $X \in \mathbb{R}^d$ and a latent variable $Z$ taking values in the real interval $(a,b)$. Each value of the latent variable, indexed $Z_j$, has an associated covariance matrix $\Sigma_j$ for $X$ conditioned on $Z_j$. Thus, the possible values of $Z$ parametrize a curve in the manifold of Symmetric Positive Definite Matrices (SPDM).
The $\Sigma_j^{-1}$ are used in the model to estimate the value of $Z$ for a given test sample $X$. In my particular problem, however, we only have access to data for some values of $Z$, and so we only have estimates of some of the $\Sigma_j$, not all of them. We denote the covariance matrices that we actually measure with a hat, $\hat{\Sigma_{j_i}}$.
Now, to get estimates of $\Sigma_j$ for all $j \in (a,b)$, we can fit a curve in the SPDM manifold to our set $\\{ \hat{\Sigma_{j_1}}, \dots, \hat{\Sigma_{j_n}} \\}$, or simply interpolate between these matrices (or better still, between their inverses, which are what is actually used for inference). To make this more successful, we can favor, during the learning process, the straightness of the curve traced by the set of matrices $\\{ \hat{\Sigma_{j_1}}, \dots, \hat{\Sigma_{j_n}} \\}$.
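For concreteness, here is a minimal sketch of what interpolation between two measured covariances could look like, assuming the affine-invariant metric on the SPDM manifold (one common choice; the log-Euclidean metric would be another). The $2\times 2$ matrices are hypothetical placeholders:

```python
import numpy as np
from scipy.linalg import sqrtm, logm, expm, inv

def spd_geodesic(S1, S2, t):
    """Point at parameter t on the affine-invariant geodesic from S1 to S2.

    t = 0 returns S1, t = 1 returns S2; intermediate t interpolates
    along the SPD manifold rather than entrywise.
    """
    S1_half = np.real(sqrtm(S1))
    S1_half_inv = inv(S1_half)
    # Whiten by S1, move to the tangent space with the matrix log,
    # scale by t, and map back with the matrix exponential.
    M = S1_half_inv @ S2 @ S1_half_inv
    return S1_half @ np.real(expm(t * np.real(logm(M)))) @ S1_half

# Hypothetical 2x2 example: interpolate halfway between two covariances.
A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[1.0, -0.2], [-0.2, 3.0]])
mid = spd_geodesic(A, B, 0.5)  # symmetric positive definite by construction
```

Unlike entrywise (Euclidean) averaging, this construction stays on the manifold and never leaves the positive-definite cone, which is one reason "straightness" is naturally measured with respect to such a metric.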
Now, in my problem I am also interested in the coefficients of the polynomial $X^T \Sigma_j^{-1} X$, because these can be related to the weights of a neural-network implementation of the algorithm that I am interested in characterizing. Specifically, in the output layer of the network, each neuron $j$ implements a quadratic combination of the elements of $X$, with weights given by $\Sigma_j^{-1}$ (i.e. the polynomial associated with the matrix). What I am wondering is whether there is some way of thinking of the coefficients of the polynomials associated with the $\Sigma_j^{-1}$ that is relevant to the geometry of the SPDM manifold as described above. For example, is there some geometrical way of thinking of the polynomial coefficients (or neuron weights) in which straightness in the SPDM manifold also translates to straightness of the coefficients? Can interpolation in the SPDM manifold be described as interpolation in the polynomial coefficients directly? Is there some expected systematic effect on the structure of the output layer (i.e. on $X^T \Sigma_j^{-1} X$ for the different $j$'s) that is induced by increasing the straightness in the SPDM manifold?
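To make the object I mean by "coefficients" concrete: for a symmetric matrix $P$, the coefficient of $x_k^2$ in $X^T P X$ is $P_{kk}$ and the coefficient of $x_k x_l$ (for $k < l$) is $2P_{kl}$, so the coefficient vector is just a linear (vech-style) reparametrization of the matrix entries. A minimal sketch, using a hypothetical $2\times 2$ precision matrix:

```python
import numpy as np

def quad_coeffs(P):
    """Coefficients of the polynomial x^T P x for symmetric P.

    Returns the coefficients of x_k^2 (diagonal terms) followed by the
    coefficients of x_k x_l for k < l (cross terms), as one vector.
    """
    d = P.shape[0]
    diag = np.diag(P)            # coefficient of x_k^2 is P_kk
    iu = np.triu_indices(d, k=1)
    cross = 2.0 * P[iu]          # coefficient of x_k x_l is 2 * P_kl
    return np.concatenate([diag, cross])

# Hypothetical 2x2 precision matrix and test point.
P = np.array([[4.0, 1.0], [1.0, 3.0]])
x = np.array([0.5, -2.0])
c = quad_coeffs(P)
monomials = np.array([x[0]**2, x[1]**2, x[0] * x[1]])
# x @ P @ x and c @ monomials evaluate the same polynomial.
```

Since this map $P \mapsto c$ is linear, Euclidean interpolation of the coefficient vectors is exactly Euclidean interpolation of the matrices; the substance of my question is what happens to these coefficients under the *manifold* (non-linear) notions of straightness and interpolation instead.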