The PCA objective function can be formulated in several equivalent ways. Are any of them convex?
Let $\mathbf{X} \in \mathbb{R}^{M \times N}$ denote a matrix of mean-centered data. One way of viewing PCA (with $K$ components) is in terms of reconstruction error:
$$ \underset{\mathbf{U}, \mathbf{V}}{\text{minimize}} \quad \lVert \mathbf{X} - \mathbf{UV}^T \rVert_F^2 $$
with $\mathbf{U} \in \mathbb{R}^{M \times K}$ and $\mathbf{V} \in \mathbb{R}^{N \times K}$. This objective is clearly nonconvex due to the $\mathbf{UV}^T$ term.
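One way to see the nonconvexity concretely: the minimizers of this problem are not isolated, because for any invertible $\mathbf{R} \in \mathbb{R}^{K \times K}$, the pair $(\mathbf{UR}, \mathbf{V}\mathbf{R}^{-T})$ yields the same product $\mathbf{UV}^T$ and hence the same objective value. A small numpy sketch (illustrative, not from any particular source) checks this on a random mean-centered matrix, using the truncated SVD as one minimizer:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 50, 30, 5

# Mean-centered data matrix X
X = rng.standard_normal((M, N))
X -= X.mean(axis=0)

# One global minimizer via truncated SVD (Eckart-Young theorem)
U_svd, s, Vt = np.linalg.svd(X, full_matrices=False)
U = U_svd[:, :K] * s[:K]   # M x K
V = Vt[:K].T               # N x K
err_svd = np.linalg.norm(X - U @ V.T, "fro") ** 2

# Any invertible R gives another minimizer with the exact same product,
# since (U R)(V R^{-T})^T = U V^T: the minima form a continuum.
R = rng.standard_normal((K, K)) + 5 * np.eye(K)  # well-conditioned for stability
U2 = U @ R
V2 = V @ np.linalg.inv(R).T
err_rotated = np.linalg.norm(X - U2 @ V2.T, "fro") ** 2

print(err_svd, err_rotated)
```

The two objective values agree up to floating-point error, which is why the problem cannot be convex in $(\mathbf{U}, \mathbf{V})$ jointly: a convex function cannot attain its minimum on a nonconvex set of minimizers like this one.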
PCA can be equivalently viewed as identifying directions, $\mathbf{w}_i$ below, of maximal variance in the data: $$ \underset{\mathbf{w}_i}{\text{maximize}} \quad \sum_{i=1}^K \mathbf{w}_i^T \mathbf{X}^T \mathbf{X} \mathbf{w}_i $$ $$ \text{subject to} \quad \mathbf{w}_i^T \mathbf{w}_j = \delta_{ij} $$
where $\delta_{ij}$ is the Kronecker delta ($\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ otherwise). Here the objective is convex, since $\mathbf{X}^T \mathbf{X}$ is positive semidefinite, but the orthogonality constraint is nonconvex, and maximizing a convex function is itself a nonconvex problem.
(Note that I don't necessarily care whether $\mathbf{U}$ and $\mathbf{V}$ are orthogonal matrices in the first optimization problem, but the orthogonality constraint is necessary in the second problem to prevent all $\mathbf{w}_i$ from converging to the top eigenvector.)
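For reference, the solution to this second formulation is given by the top-$K$ eigenvectors of $\mathbf{X}^T \mathbf{X}$, and the optimal objective value is the sum of the top $K$ eigenvalues. A quick numpy sketch (my own illustration) verifies both facts, including the Kronecker-delta constraint $\mathbf{W}^T \mathbf{W} = \mathbf{I}$:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 200, 10, 3
X = rng.standard_normal((M, N)) @ rng.standard_normal((N, N))
X -= X.mean(axis=0)

# eigh returns eigenvalues in ascending order; reverse for the top K
evals, evecs = np.linalg.eigh(X.T @ X)
evals, evecs = evals[::-1], evecs[:, ::-1]
W = evecs[:, :K]  # columns w_1, ..., w_K

# Objective sum_i w_i^T X^T X w_i = trace(W^T X^T X W) = sum of top K eigenvalues
obj = np.trace(W.T @ (X.T @ X) @ W)
print(obj, evals[:K].sum())

# The constraint w_i^T w_j = delta_ij holds: W^T W = I
print(np.allclose(W.T @ W, np.eye(K)))
```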
Question: Are there any manipulations to the above equations that would formulate PCA as a convex optimization problem?
PCA can be solved efficiently via the singular value decomposition (SVD) of $\mathbf{X}$ or eigendecomposition of the covariance matrix. Furthermore, while the first formulation I gave above is nonconvex, it can be shown that all local minima correspond to a global solution, and all non-optimal critical points are saddle points (e.g. Baldi & Hornik, 1989).
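To make the "solved efficiently" claim concrete, here is a small sketch (my own, with a random matrix) showing that the two standard routes agree: the right singular vectors of $\mathbf{X}$ match the eigenvectors of the covariance matrix up to sign, and the squared singular values divided by $M-1$ match its eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 100, 6
X = rng.standard_normal((M, N))
X -= X.mean(axis=0)

# Route 1: SVD of X; rows of Vt are the principal directions
_, s, Vt = np.linalg.svd(X, full_matrices=False)

# Route 2: eigendecomposition of the sample covariance matrix
C = X.T @ X / (M - 1)
evals, evecs = np.linalg.eigh(C)
evals, evecs = evals[::-1], evecs[:, ::-1]  # descending order

# Squared singular values / (M - 1) are the covariance eigenvalues
print(np.allclose(s**2 / (M - 1), evals))

# Principal directions agree up to an arbitrary sign flip
agree = [np.allclose(Vt[i], evecs[:, i]) or np.allclose(Vt[i], -evecs[:, i])
         for i in range(N)]
print(all(agree))
```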
Does this special structure / simplicity suggest that there really is a convex formulation?