Mean of (more than) two rank-1 positive semi-definite matrices


I've already spent some time searching the literature, but wasn't able to find an answer to my question.

Maybe you can help?

Let's assume a set of $N \geq 2$ rank-1 positive semi-definite matrices $\boldsymbol{A}_i$, $i = 1, \dots, N$, with corresponding normalized weights $w_i$. Each $\boldsymbol{A}_i$ has only one non-zero eigenvalue, which is assumed to be 1 for every matrix in the set. We can then express $\boldsymbol{A}_i$ as a dyadic $\boldsymbol{A}_i = \boldsymbol{n}_i \otimes \boldsymbol{n}_i$, wherein the vectors $\boldsymbol{n}_i$ are unit vectors, $\boldsymbol{n}_i \cdot \boldsymbol{n}_i = 1$.
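For concreteness, here is a small NumPy sketch of this setup (the helper name `dyadic` is mine, purely for illustration):

```python
import numpy as np

def dyadic(n):
    """Build the rank-1 projector n ⊗ n from a vector (normalized first)."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)            # enforce the unit-vector condition
    return np.outer(n, n)

A = dyadic([1.0, 2.0, 2.0])
eigvals = np.linalg.eigvalsh(A)          # ascending; one non-zero eigenvalue
print(np.linalg.matrix_rank(A))          # → 1
```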

Now, I have two questions:

  1. Is there a formula/algorithm to determine a weighted mean of two dyadic tensors $\boldsymbol{A}_1 = \boldsymbol{n}_1 \otimes \boldsymbol{n}_1$ and $\boldsymbol{A}_2 = \boldsymbol{n}_2 \otimes \boldsymbol{n}_2$, such that the averaged value $\boldsymbol{\bar{A}}$ preserves the characteristics of its arguments and can also be expressed as a dyadic tensor $\boldsymbol{\bar{A}} = \boldsymbol{\bar{n}} \otimes \boldsymbol{\bar{n}}$?
  2. If yes, is there a generalization of this approach to an arbitrary set of dyadic tensors?

Help would be really appreciated.

What I've figured out so far:

  • It's obvious that a Euclidean approach $\boldsymbol{\bar{A}}_\text{EU} = \sum_i w_i \boldsymbol{A}_i$ doesn't work in the general case, since it alters the rank of the averaged matrix.
  • There exists an affine approach for positive definite matrices [1]. However, the formulas used involve matrix logarithms and exponentials that are only well defined for this specific class of matrices.
  • I've tried to visualize the 2D case. Assume the matrix $\boldsymbol{A} = \begin{pmatrix} \alpha & \beta \\ \beta & \gamma \end{pmatrix}$. From the positive semi-definite condition we know that $\alpha \geq 0$, $\gamma \geq 0$ and $\alpha \gamma - \beta^2 = 0$. This implicit equation is illustrated by the cone in the figure. For our set of matrices we know that $\alpha + \gamma = 1$, so all matrices in our set lie on the red line. I wish to interpolate between two points on this line in such a way that the interpolated value also lies on the line. (Figure: the cone of semi-definite matrices with the trace-1 set marked as a red line.)
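To make the rank problem from the first point concrete, a quick NumPy check (illustrative only) shows that the Euclidean mean of two rank-1 projectors is generically rank 2:

```python
import numpy as np

n1, n2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A1, A2 = np.outer(n1, n1), np.outer(n2, n2)

A_eu = 0.5 * A1 + 0.5 * A2             # component-wise (Euclidean) mean
print(np.linalg.matrix_rank(A_eu))     # → 2: the rank is not preserved
```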


BEST ANSWER

For those who are interested: it took me some time to understand and implement the method proposed by Bonnabel. Thanks for the hint, @Conifold.

The method does exactly what I want it to do: interpolate between two rank-1 matrices while preserving the relevant characteristics:

  • the rank of $\boldsymbol{A}_1, \boldsymbol{A}_2$
  • the trace of $\boldsymbol{A}_1, \boldsymbol{A}_2$
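A minimal sketch of how the rank-1 special case can be implemented (the function name `interp_rank1`, the spherical interpolation of the eigen-direction, and the linear eigenvalue interpolation are my simplifications, not necessarily Bonnabel's exact formulas, which treat general fixed-rank PSD matrices):

```python
import numpy as np

def interp_rank1(A1, A2, w):
    """Weighted interpolation between two rank-1 PSD matrices that stays rank-1.

    Structure-preserving sketch: interpolate the eigen-direction on the
    unit sphere and the single eigenvalue separately.
    """
    l1, v1 = np.linalg.eigh(A1)          # eigenvalues in ascending order
    l2, v2 = np.linalg.eigh(A2)
    n1, n2 = v1[:, -1], v2[:, -1]        # dominant eigenvectors
    if np.dot(n1, n2) < 0.0:             # resolve the ±n sign ambiguity
        n2 = -n2
    theta = np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))
    if theta < 1e-12:                    # (nearly) parallel directions
        n = n1
    else:                                # spherical linear interpolation
        n = (np.sin((1.0 - w) * theta) * n1 + np.sin(w * theta) * n2) / np.sin(theta)
    lam = (1.0 - w) * l1[-1] + w * l2[-1]  # interpolate the single eigenvalue
    return lam * np.outer(n, n)

A1 = np.outer([1.0, 0.0], [1.0, 0.0])
A2 = np.outer([0.0, 1.0], [0.0, 1.0])
A_mid = interp_rank1(A1, A2, 0.5)
print(np.linalg.matrix_rank(A_mid))      # → 1 (rank preserved)
```

At the endpoints $w = 0$ and $w = 1$ the sketch reproduces $\boldsymbol{A}_1$ and $\boldsymbol{A}_2$ exactly, and the trace is preserved for our unit-trace matrices.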

For the 2D case I have illustrated the interpolation results in component space:


One remark, since @Conifold proposed to average the vectors directly: note that $\boldsymbol{A}_i = \boldsymbol{n}_i \otimes \boldsymbol{n}_i = (-\boldsymbol{n}_i) \otimes (-\boldsymbol{n}_i)$. Due to this sign ambiguity, it is not possible to determine an averaged vector in a unique way and then construct a dyadic matrix from it.
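This ambiguity is easy to check numerically (illustrative sketch):

```python
import numpy as np

n = np.array([0.6, 0.8])                 # a unit vector
print(np.allclose(np.outer(n, n), np.outer(-n, -n)))  # → True: same dyadic

# Averaging the vectors themselves depends on the arbitrary sign choice:
mean_same = 0.5 * n + 0.5 * n            # picks +n twice   → n
mean_flip = 0.5 * n + 0.5 * (-n)         # picks +n and -n  → zero vector
```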

Some further remarks:

I want to do a mesh-to-mesh mapping that involves tensor fields. Euclidean approaches (a.k.a. component averaging) yield nonsensical results, since they induce artificial minima and maxima in the tensor invariants that are non-physical. My discrete set of tensors usually contains negative-definite and semi-definite entries, so I cannot use the Riemannian approach ("geometric mean"). Therefore, I try to use a decomposition-reassembly approach on tensor shape and tensor orientation.

There are a lot of methods described in the literature, usually operating on the spectral/eigen decomposition $\boldsymbol{A} = \boldsymbol{Q} \boldsymbol{\Sigma} \boldsymbol{Q}^\top$ with $\boldsymbol{Q} \in SO(3)$. The problem is that $\boldsymbol{Q}$ is not unique, and choosing a "reference rotation" violates the isotropy condition, meaning the results will depend on the specific coordinate system. Therefore I try to use an approach that operates on the projector decomposition $\boldsymbol{A} = \sum_i \lambda_i \boldsymbol{P}_i$ with $\lambda_i$ being the eigenvalues.

Now I'm looking for a method to average/interpolate on those projectors $\boldsymbol{P}_i$. One option is to use the Euclidean method again and minimize the Frobenius distance to find the closest rank-1 projector, but this yields weighted averages that are very close to the support points (cf. figure below: red spheres). So the method from Bonnabel is very promising for me.
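The "closest rank-1 projector" option can be sketched as follows: minimizing the Frobenius distance $\|\boldsymbol{A} - \boldsymbol{n} \otimes \boldsymbol{n}\|_F$ over unit vectors $\boldsymbol{n}$ amounts to picking the dominant eigenvector (the helper name is mine). The example also hints at why the averages stick to the support points: projecting a 70/30 Euclidean mean back to rank 1 snaps it all the way to the dominant argument.

```python
import numpy as np

def nearest_rank1_projector(A):
    """Frobenius-closest rank-1 projector n ⊗ n to a symmetric matrix A:
    minimizing ||A - nnᵀ||_F over unit n maximizes nᵀAn, i.e. picks
    the dominant eigenvector."""
    _, eigvecs = np.linalg.eigh(A)       # eigenvalues in ascending order
    n = eigvecs[:, -1]                   # dominant eigenvector
    return np.outer(n, n)

P1 = np.outer([1.0, 0.0], [1.0, 0.0])
P2 = np.outer([0.0, 1.0], [0.0, 1.0])
# Weighted Euclidean mean, projected back to a rank-1 projector:
P_bar = nearest_rank1_projector(0.7 * P1 + 0.3 * P2)
print(np.allclose(P_bar, P1))            # → True: snapped to the support point
```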

(Figure: interpolation results on the projectors, with the Euclidean/Frobenius weighted averages shown as red spheres.) Sorry for the missing decorations; I did this on the fly.