I am trying to solve a problem like
$$\min_{u,v} \Vert X-uv^T \Vert_F^2 + \beta \Vert v \Vert_1 \mbox{ s.t. } \Vert u \Vert_2 \le 1$$
where $X \in \mathbb{R}^{N\times N}$ and $u, v \in \mathbb{R}^{N}$. This is basically a rank-1 (truncated) SVD with a sparsity penalty on $v$. I understand that the objective is not jointly convex in $u$ and $v$, but if the sparsity penalty is removed (and a similar norm constraint is placed on $v$), it can be solved exactly using the SVD.
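To be concrete, by "solved exactly using the SVD" I mean the Eckart–Young fact that the best rank-1 Frobenius-norm fit is given by the leading singular triple. A quick NumPy check of that baseline (illustrative only, with a random $X$):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))

# Best rank-1 fit without the l1 term: take the leading singular
# triple and put the scale sigma_1 into v, so that ||u||_2 = 1.
U, s, Vt = np.linalg.svd(X)
u = U[:, 0]          # unit-norm left singular vector
v = s[0] * Vt[0]     # scaled right singular vector

# The residual norm is sqrt(sigma_2^2 + ... + sigma_N^2).
residual = np.linalg.norm(X - np.outer(u, v))
```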
So my question is: are there variants of the SVD that can incorporate penalties/constraints like the $\ell_1$ norm? Would something like variable splitting (as in ADMM) work here and find the global minimum?
The problem you mention is closely related to sparse principal component analysis (sparse PCA). There is a vast literature on this problem, including greedy nonconvex algorithms and semidefinite relaxations. Just google "sparse PCA" and you will find the highly cited papers.
For your particular formulation, you might want to look at the recent literature on nonconvex optimization, in particular alternating minimization. Your problem is biconvex: with $u$ fixed (and $\Vert u \Vert_2 = 1$), the $v$-subproblem is a Lasso-type problem solved in closed form by soft-thresholding $X^T u$ at level $\beta/2$; with $v$ fixed, the optimal $u$ is simply $Xv / \Vert Xv \Vert_2$. Alternating these exact updates decreases the objective monotonically, but in general it only reaches a stationary point, not the global minimum.
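A minimal sketch of that alternating scheme in NumPy (the function names and the random initialization are my own choices, not a standard library routine):

```python
import numpy as np

def soft_threshold(x, t):
    # Elementwise soft-thresholding: the prox operator of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_rank1(X, beta, n_iter=100, seed=0):
    """Alternating minimization for
        min_{u,v} ||X - u v^T||_F^2 + beta ||v||_1   s.t. ||u||_2 <= 1.
    Each subproblem is solved exactly; the iteration converges to a
    stationary point, not necessarily the global minimum.
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(X.shape[0])
    u /= np.linalg.norm(u)
    v = np.zeros(X.shape[1])
    for _ in range(n_iter):
        # v-step: with ||u||_2 = 1, minimizing
        # ||v||_2^2 - 2 v^T (X^T u) + beta ||v||_1 gives soft-thresholding.
        v = soft_threshold(X.T @ u, beta / 2.0)
        # u-step: maximize u^T X v over the unit ball, i.e. normalize X v.
        Xv = X @ v
        norm = np.linalg.norm(Xv)
        if norm == 0:
            break  # v was thresholded to zero; nothing left to fit
        u = Xv / norm
    return u, v
```

Increasing `beta` drives more entries of `v` to exactly zero, which is the sparse-PCA behavior you are after; running from several random seeds and keeping the best objective is a common guard against bad stationary points.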