Can SVM be a special case of PCA?


Let $X$ and $Y$ be two linearly separable finite subsets of a $K$-dimensional real vector space $V$ with orthonormal basis $A = \{a_1,\ldots, a_K\}$. The covariance matrix $\Sigma_A$ of the set $X \cup Y$ is just the representation of a symmetric bilinear form $Cov$ relative to the basis $A$. Now let $\Sigma_B$ be the matrix representing the same bilinear form relative to the orthonormal basis $B = \{b_1,\ldots, b_K\}$ formed by the normalized eigenvectors of $\Sigma_A$. Is it true that every hyperplane that separates $X$ and $Y$ with the largest possible margin is necessarily normal to one of the vectors of the basis $B$?
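The change of basis described above is easy to check numerically. Here is a small sketch, assuming NumPy is available; the sample points are a hypothetical stand-in for $X \cup Y$, and `B` collects the normalized eigenvectors of $\Sigma_A$ as columns:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sample: 50 points in K = 3 dimensions, playing the role of X ∪ Y.
pts = rng.normal(size=(50, 3)) @ np.array([[2.0, 0.5, 0.0],
                                           [0.0, 1.0, 0.3],
                                           [0.0, 0.0, 0.5]])

sigma_a = np.cov(pts, rowvar=False)      # Σ_A in the original basis A
eigvals, B = np.linalg.eigh(sigma_a)     # columns of B: orthonormal eigenvectors
sigma_b = B.T @ sigma_a @ B              # same bilinear form, now in basis B

# Relative to B, the off-diagonal entries vanish: Cov is diagonal.
print(np.allclose(sigma_b, np.diag(eigvals)))  # True
```

This is exactly the statement that $\Sigma_B$ is diagonal: the bilinear form $Cov$ decouples along the principal directions.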

If this were true, then the technique of Support Vector Machines (SVM) would be a quick consequence of applying $K$ separate Principal Component Analyses (PCA), each one keeping $K-1$ dimensions.

1 Answer
I think I have a counterexample (see the figure): the red line is the line of best separation, while the black line is the affine subspace of maximum variance. The two subspaces are not orthogonal to each other.