Linear algebra in higher-dimensional spaces


I've started exploring machine learning, and some concepts from linear algebra are intuitive in 2- and 3-dimensional spaces but not in 4 or more dimensions (for example, angles between vectors). I'm currently studying binary classification. Suppose we have a dataset with labels 1 and -1. To verify that a model (in the 2-D case, a line) correctly classifies an object, we multiply the dot product of that object (a vector) and the vector orthogonal to the line by the object's label; the result is called the margin. This is intuitive in 2-D: the dot product is negative for obtuse angles, so multiplying a negative dot product by the label -1 gives a positive result (a correct classification), and the dot product is positive for acute angles, so multiplying a positive dot product by the label 1 also gives a positive result; in every other case the point is misclassified. How can I generalize this idea to four and more dimensions? Can an affine space answer my questions? Or should I not look for a relation between dot products and angles in higher dimensions at all?
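To make the question concrete, here is a minimal sketch of the margin check I mean, run in 4 dimensions; the normal vector `w`, the sample points, and the labels are made up for illustration:

```python
import numpy as np

# Made-up normal vector to a separating hyperplane in 4-D.
w = np.array([1.0, -2.0, 0.5, 3.0])

# Two made-up data points, one on each side of the hyperplane.
X = np.array([
    [2.0, -1.0, 0.0, 1.0],    # intended to lie on the positive side
    [-1.0, 2.0, 0.0, -1.0],   # intended to lie on the negative side
])
y = np.array([1, -1])          # the corresponding labels

# Margin of each point: label times dot product with the normal.
margins = y * (X @ w)
correct = margins > 0          # positive margin = correctly classified
print(margins)                 # [7. 8.]
print(correct)                 # [ True  True]
```

The arithmetic is identical to the 2-D case; only the length of the vectors changes.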


1 Answer


I will hazard a guess at what you are asking.

The picture suggests that you are looking for a flat thing (line in the plane, plane in space) that separates points into two categories using some criterion.

Yes, this is exactly what linear algebra in high dimensions is used for. The dot product exists in any number of dimensions, and it still describes the angles between vectors. In your example you probably want the signs of the dot products between the vectors to be classified and the normal vector to the flat thing (which in higher dimensions is called a hyperplane).
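To see that angles are well defined in any dimension, note that $\cos\theta = \frac{u \cdot v}{\lVert u\rVert\,\lVert v\rVert}$ works for vectors of any length, and the sign of the dot product alone distinguishes acute from obtuse. A sketch with two made-up 5-dimensional vectors:

```python
import numpy as np

def angle_between(u, v):
    """Angle in radians between vectors u and v, in any dimension."""
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip to guard against tiny floating-point overshoot outside [-1, 1].
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

u = np.array([1.0, 0.0, 2.0, -1.0, 3.0])  # made-up 5-D vectors
v = np.array([2.0, 1.0, 0.0, 4.0, 1.0])

theta = angle_between(u, v)
# dot(u, v) = 1 > 0, so the angle is acute (theta < pi/2),
# even though we cannot picture these vectors.
print(np.dot(u, v), theta)
```

So the classification rule "positive dot product with the normal means the positive side" carries over to any dimension with no change at all.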

You can learn about these ideas in a course or book on linear algebra, which is very important in machine learning. You probably can't get a satisfactory answer here until you know more.