Equality-constrained QCQP with non-negativity constraints


Given symmetric matrix $N \in \Bbb R^{n \times n}$,

$$\begin{array}{ll} \underset{X \in \Bbb R^{n \times k}}{\text{minimize}} & \mbox{tr} \left( X^T N X \right)\\ \text{subject to} & \left\Vert e_i ^T X\right\Vert_2^2 = 1, \quad \forall i \in [n]\\ & X \ge 0\end{array}$$

where the equality constraints require all $n$ rows of $X$ to have unit Euclidean norm, and $X \ge 0$ denotes that matrix $X$ is entry-wise non-negative.
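A useful way to read the objective: $\mbox{tr}(X^T N X) = \sum_{i,j} N_{ij}\, x_i^T x_j$, where $x_i$ is the $i$-th row of $X$. A quick numerical sanity check of this identity (a small random example in NumPy; the dimensions are arbitrary):

```python
import numpy as np

# Check tr(X^T N X) = sum_{i,j} N_ij <x_i, x_j>, where x_i is row i of X.
rng = np.random.default_rng(0)
n, k = 5, 3
A = rng.standard_normal((n, n))
N = (A + A.T) / 2                              # symmetric N
X = rng.random((n, k))                         # non-negative entries
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm rows (feasible)

lhs = np.trace(X.T @ N @ X)
rhs = sum(N[i, j] * X[i] @ X[j] for i in range(n) for j in range(n))
assert np.isclose(lhs, rhs)
```

So the objective rewards rows $x_i, x_j$ being nearly orthogonal whenever $N_{ij}$ is large.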

How can I find the optimal $X$? Analytical or iterative solutions are welcome.


Background

The application is to group $n$ sensor nodes into $k$ clusters based on the correlations of their readings, and to see whether the clusters match the nodes' geographical positions. I'm not certain this is the right approach, but I'm using what I call the "noise matrix", obtained entry-wise from the correlation matrix,

$$N_{ij} = 1 - \operatorname{corr}_{ij}^{2}$$

as a proxy for distance. The idea is to cluster the nodes so that total noise within the clusters is minimized.

The matrix $X$ is introduced as a smooth $k$-partition of the nodes, so that I can initialize a random partition and apply gradient descent from there. I expect the optimum to be attained at a matrix $X$ whose rows are canonical unit vectors, fully separating the clusters. It may be helpful, therefore, to assume that the columns of $X$ are orthogonal.
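For reference, here is a minimal sketch of what I mean by this (projected gradient descent, with a projection that alternates clipping to the non-negative orthant and renormalizing the rows; the step size, iteration count, and the zero-row fallback are my own arbitrary choices, and the projection is heuristic rather than an exact projection onto the joint feasible set):

```python
import numpy as np

def project(X):
    """Heuristically map X to the feasible set: clip entries to be
    non-negative, then rescale each row to unit Euclidean norm."""
    X = np.maximum(X, 0.0)
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    # A row clipped to all zeros has no natural direction; fall back to uniform.
    zero = norms[:, 0] == 0.0
    X[zero] = 1.0
    norms[zero] = np.sqrt(X.shape[1])
    return X / norms

def projected_gradient_descent(N, k, steps=500, lr=0.01, seed=0):
    """Minimize tr(X^T N X) over n-by-k X with unit-norm, non-negative rows."""
    rng = np.random.default_rng(seed)
    n = N.shape[0]
    X = project(rng.random((n, k)))   # random feasible start
    for _ in range(steps):
        grad = 2.0 * N @ X            # gradient of tr(X^T N X)
        X = project(X - lr * grad)
    return X
```

This converges to some feasible point, but as noted below, not necessarily to the optimal partition.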

The clustering obtained this way is not guaranteed to be optimal, so I wonder whether there is a closed-form solution, or a particular way of initializing $X$ and applying gradient descent, that guarantees an optimal clustering.