In Support-Vector-Machine, why is the hyperplane given by $(p-1)$ dimensions?


In SVM, each feature vector is viewed as a data point in a $p$-dimensional space, and different labels are separated by a $(p-1)$-dimensional hyperplane.

Please see the wikipedia entry: https://en.wikipedia.org/wiki/Support-vector_machine#Motivation

My question is:

  • Why does the hyperplane have $(p-1)$ dimensions and not just $p$?

There are two answers below.

Best Answer

Think as follows:

In the simplest case your data points come from a $1$-dimensional set, which you can represent as points on a line (think of the number line). You can separate these points with a single point. For concreteness, imagine a dataset of mouse weights ranging from $85$ grams to $245$ grams, and suppose every mouse weighing more than $100$ grams is categorised as "overweight". Then you can put a point down at $100$ and say that everything above it is overweight and everything below it is not. Notice what we have just done: we separated (classified) a $1$-dimensional dataset with a $0$-dimensional object, a point. It wouldn't make sense to separate a line with a $1$-dimensional object.
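The $1$-dimensional case above can be sketched as follows; the weights and the $100$-gram threshold are the illustrative numbers from the answer, not real data:

```python
# A 0-dimensional separator (a single threshold point) classifying
# 1-dimensional data: mouse weights in grams (hypothetical values).
weights = [85, 92, 110, 157, 245]
threshold = 100  # the separating "point" on the number line

labels = ["overweight" if w > threshold else "not overweight" for w in weights]
print(labels)
```

Everything the classifier needs is one number, which is exactly what "a $0$-dimensional separator" means here.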

When you have $2$-dimensional data, e.g. points in a plane, you wouldn't be able to separate anything with a point, i.e. a $0$-dimensional object, but a line or curve would do perfectly, which is in turn a $1$-dimensional object. Once again, the whole plane would not work as a separator, since from the dataset's "point of view" nothing exists outside this plane.
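A minimal sketch of the $2$-dimensional case, using the (arbitrarily chosen) line $x + y = 1$ as the $1$-dimensional separator:

```python
# A 1-dimensional separator (a line) classifying 2-dimensional points.
# The line w·x + b = 0 with w = (1, 1), b = -1 (i.e. x + y = 1) splits
# the plane into two half-planes; the sign of w·x + b picks the side.
def side(point, w=(1.0, 1.0), b=-1.0):
    x, y = point
    score = w[0] * x + w[1] * y + b
    return "positive" if score > 0 else "negative"

print(side((2, 2)))  # lies above the line x + y = 1
print(side((0, 0)))  # lies below the line
```

The same sign test generalises directly to $p$ dimensions, which is the idea behind the second answer below.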

I think you can figure out the rest...

Answer

A linear classifier of dimension $p$ in a $p$-dimensional space would be the whole space and would contain all data points. If it were of dimension smaller than $p-1$, it wouldn't separate the space into two regions, so it couldn't classify. A $(p-1)$-dimensional hyperplane is exactly the object that separates the space of possible data points into two disjoint half-spaces.
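Concretely, the hyperplane is the solution set of one linear equation $w \cdot x + b = 0$: fixing one linear constraint in $p$ dimensions leaves $p-1$ free directions. A small sketch (the vectors for $p = 3$ are made-up illustrations):

```python
# In p dimensions, {x : w·x + b = 0} is a (p-1)-dimensional hyperplane;
# the sign of w·x + b assigns each point to one of the two half-spaces.
def classify(x, w, b):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else -1

w = [1.0, -2.0, 0.5]  # normal vector: one constraint, p - 1 free directions
b = 0.5
print(classify([1, 0, 0], w, b))  # score = 1.0 + 0.5 = 1.5 > 0  -> +1
print(classify([0, 1, 0], w, b))  # score = -2.0 + 0.5 = -1.5 < 0 -> -1
```

Points with score exactly $0$ lie on the hyperplane itself, which is why the separator must have dimension $p-1$ and not $p$: one normal direction is used up by the constraint.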