I have an optimization program that searches for the $\mathbf{X}$ minimizing some objective function. My setup is as follows:
- The objective function is convex
- A linear constraint $\mathbf{Xy}=\mathbf{b}$
- A rank-1 constraint $\operatorname{rank}(\mathbf{X})=1$
Solving this program returns a solution that might only be a local minimum; call this solution $\tilde{\mathbf{X}}$.
Since the rank-1 constraint makes the problem non-convex, I drop it and keep only the linear constraint. Solving this new problem, with a convex objective and a linear constraint, gives a solution $\mathbf{X}^*$ satisfying $\sigma_i\ll\sigma_1$ for all $i\geq 2$, where the $\sigma_i$ are the singular values of $\mathbf{X}^*$. In other words, $\mathbf{X}^*$ is effectively rank-1.
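To make the "effectively rank-1" claim concrete, one can compute the ratios $\sigma_i/\sigma_1$ from the SVD of the relaxed solution and check that they are all tiny. A minimal NumPy sketch (the matrix `X_star` below is a hypothetical stand-in for the relaxed solution, built as a rank-1 matrix plus small noise):

```python
import numpy as np

# Hypothetical stand-in for the relaxed solution X*:
# an exact rank-1 matrix perturbed by small noise.
rng = np.random.default_rng(0)
u = rng.standard_normal((5, 1))
v = rng.standard_normal((1, 5))
X_star = u @ v + 1e-6 * rng.standard_normal((5, 5))

# Singular values in descending order.
s = np.linalg.svd(X_star, compute_uv=False)

# If X* is effectively rank-1, every ratio sigma_i / sigma_1 (i >= 2) is << 1.
ratios = s[1:] / s[0]
print(ratios)
```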
How should I argue that the two solutions $\tilde{\mathbf{X}}$ and $\mathbf{X}^*$ are nearby matrices, and that we don't lose much by this convex relaxation?
One option is to show that
\begin{equation} \|\mathbf{X^*}-\tilde{\mathbf{X}}\|_1\leq\epsilon \end{equation}
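A related quantity that is easy to certify numerically: by the Eckart–Young theorem, the Frobenius distance from $\mathbf{X}^*$ to the set of rank-1 matrices is exactly $\sqrt{\sum_{i\geq 2}\sigma_i^2}$, attained by the top-singular-pair truncation. This bounds how far $\mathbf{X}^*$ is from *any* rank-1 matrix, which complements a direct comparison with $\tilde{\mathbf{X}}$. A sketch, again using a hypothetical near-rank-1 `X_star`:

```python
import numpy as np

# Hypothetical near-rank-1 stand-in for the relaxed solution X*.
rng = np.random.default_rng(1)
u = rng.standard_normal((5, 1))
v = rng.standard_normal((1, 5))
X_star = u @ v + 1e-6 * rng.standard_normal((5, 5))

U, s, Vt = np.linalg.svd(X_star)

# Best rank-1 approximation: keep only the top singular triple.
X1 = s[0] * np.outer(U[:, 0], Vt[0])

# Eckart-Young: the Frobenius distance to the rank-1 set
# equals the norm of the tail singular values.
dist = np.linalg.norm(X_star - X1, "fro")
tail = np.sqrt(np.sum(s[1:] ** 2))
print(dist, tail)  # these two agree
```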
Any other ideas would be much appreciated.