How to optimally distinguish between linearly independent vectors in higher dimensional complex/real space?


I need to distinguish between 4 linearly independent vectors in $\mathbb{C}^{16}$ by constructing a Positive Operator-Valued Measure (POVM) acting on these vectors. I have found that for the given set of vectors no projectors exist that partition them into mutually orthogonal subspaces of $\mathbb{C}^{16}$, so I cannot reduce the problem to pairs of vectors within such subspaces. The POVM elements $\{E_i\}$ must satisfy the completeness relation $\sum_{i=1}^n E_i=\mathbb{I}$ for some $n$, and should distinguish the vectors with the maximum probability attainable through optimization. I tried generalizing the procedure proposed in http://iopscience.iop.org/article/10.1088/0305-4470/31/34/013/pdf to 4 vectors in $\mathbb{R}^4$, but it does not work out because the cross product they use in 3 dimensions does not generalize directly to 4 or more dimensions (Hurwitz's theorem on composition algebras). On page 5 of the paper the authors propose the outer product as a generalization, but that does not satisfy the required matrix multiplication. If someone could carry out the same procedure in $\mathbb{R}^4$ for four vectors, that would be enough help for me to apply it in $\mathbb{C}^{16}$.
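For what it's worth, one standard way to sidestep the cross product in higher dimensions is the reciprocal (dual) basis: for linearly independent states $|\psi_i\rangle$, the rows of the pseudoinverse of the matrix whose columns are the $|\psi_i\rangle$ give dual vectors $|d_i\rangle$ with $\langle d_i|\psi_j\rangle=\delta_{ij}$, which is exactly the role the cross product plays in 3D. Below is a minimal numerical sketch of this idea, assuming NumPy; the four vectors are random placeholders (not my actual set), and the equal-weight scaling chosen here yields a valid unambiguous-discrimination POVM but not necessarily the optimal one — the optimal weights are what the paper's optimization determines.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 4 random linearly independent unit vectors in C^16,
# stored as the columns of V (16 x 4).
V = rng.normal(size=(16, 4)) + 1j * rng.normal(size=(16, 4))
V /= np.linalg.norm(V, axis=0)

# Reciprocal (dual) vectors: the rows of the pseudoinverse D = (V^dag V)^-1 V^dag
# satisfy <d_i|psi_j> = delta_ij -- the higher-dimensional analogue of the
# 3D cross-product construction.
D = np.linalg.pinv(V)                       # shape (4, 16); row i is <d_i|
assert np.allclose(D @ V, np.eye(4))        # biorthogonality check

# Scale the rank-1 operators |d_i><d_i| by 1/lambda_max of their sum so that
# the elements E_i = c |d_i><d_i| satisfy sum_i E_i <= I.
kets = D.conj()                             # row i holds the entries of |d_i>
S = sum(np.outer(kets[i], kets[i].conj()) for i in range(4))
c = 1.0 / np.linalg.eigvalsh(S)[-1]         # eigvalsh is ascending; take largest
E = [c * np.outer(kets[i], kets[i].conj()) for i in range(4)]

# Inconclusive element completes the POVM: E0 = I - sum_i E_i is PSD by construction.
E0 = np.eye(16) - sum(E)
assert np.all(np.linalg.eigvalsh(E0) > -1e-12)

# Outcome i never fires on psi_j for j != i; the conditional success
# probability for each state is <psi_j|E_j|psi_j> = c.
p_succ = [np.real(V[:, j].conj() @ E[j] @ V[:, j]) for j in range(4)]
print(p_succ)
```

With unequal weights $c_i$ on the $|d_i\rangle\langle d_i|$ (subject to $\mathbb{I}-\sum_i c_i|d_i\rangle\langle d_i| \succeq 0$) this becomes the optimization problem the paper solves; the sketch above only fixes a feasible equal-weight point.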