Nearest-neighbour classifier


Given a dataset $\mathcal D =\{(x_i, y_i)\}_{i=1}^{N}$ and a new sample $x$, we simply assign to $x$ the label of the sample in $\mathcal D$ nearest to $x$, i.e.

$f(x)=y_i$ such that $i=\arg\min_j \Vert x-x_j\Vert^2$

where $\Vert \cdot \Vert$ denotes the usual Euclidean norm.
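For concreteness, the definition above can be sketched as follows (a minimal NumPy sketch; the names `X`, `y`, and `nearest_neighbour` are my own, not from the question):

```python
import numpy as np

def nearest_neighbour(X, y, x):
    """Assign to x the label of the nearest training sample.

    X : (N, d) array of training samples x_j
    y : (N,) array of labels y_j
    x : (d,) query sample
    """
    dists = np.sum((X - x) ** 2, axis=1)  # ||x - x_j||^2 for every j
    return y[np.argmin(dists)]
```

This is a direct transcription of $f(x)=y_i$ with $i=\arg\min_j \Vert x-x_j\Vert^2$, before any rewriting in terms of inner products.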

How do I show that the nearest-neighbour classifier can be written purely in terms of inner/dot products?

This is what I have so far.

Since I want the index $i=\arg\min_j \Vert x-x_j\Vert^2$, I want to find the $x_j$ for which $\Vert x-x_j\Vert^2$ is as small as possible, where, writing vectors componentwise as $x=(x^{(1)},\dots,x^{(d)})$, $\Vert x-x_j\Vert^2=(x^{(1)}-x_j^{(1)})^2+(x^{(2)}-x_j^{(2)})^2+\dots+(x^{(d)}-x_j^{(d)})^2$.

But how do I show that it can be written purely as an inner product?


There is 1 answer below.


Expand the squared norm in terms of inner products:

$\Vert x-x_j\Vert^2=\langle x,x\rangle-2\langle x,x_j\rangle+\langle x_j,x_j\rangle.$

The first term $\langle x,x\rangle$ is the same for every $j$, so it does not affect the minimizer, and hence

$\arg\min_j \Vert x-x_j\Vert^2=\arg\min_j\bigl(\langle x_j,x_j\rangle-2\langle x,x_j\rangle\bigr),$

which involves only inner products. So the nearest-neighbour classifier can indeed be written purely in terms of dot products.
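The expansion above can be checked numerically. Below is a sketch (my own names `nn_dot` and `nn_norm`, not from the question) comparing the dot-product formulation against the explicit Euclidean-distance formulation:

```python
import numpy as np

def nn_dot(X, y, x):
    """Nearest-neighbour label computed purely from inner products.

    Uses argmin_j ( <x_j, x_j> - 2 <x, x_j> ), which has the same
    minimizer as argmin_j ||x - x_j||^2 since <x, x> is constant in j.
    """
    gram = X @ x                        # <x, x_j> for every j
    sq_norms = np.sum(X * X, axis=1)    # <x_j, x_j> for every j
    return y[np.argmin(sq_norms - 2.0 * gram)]

def nn_norm(X, y, x):
    """Reference: nearest-neighbour label via explicit squared distances."""
    return y[np.argmin(np.sum((X - x) ** 2, axis=1))]
```

For generic (tie-free) data the two functions return the same label for every query point.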