Let's say I have a dataset with $n$ rows. The $i$th row ($x^i$) has entries $x^i_j$. For each $x^i$ I have an integer label, $p^i$.
New data comes along, with rows $y^i$, and I want to predict the corresponding labels $q^i$. To predict the label for a particular $y^i$, I calculate the distance between $y^i$ and every $x^j$ with a metric $g$, i.e. $$D^j=g(y^i,x^j),$$ and the predicted label for $y^i$ is $p^{\alpha}$, where $D^{\alpha}$ is the smallest of all the $D^j$s.
This is the core of the k-nearest neighbors algorithm (with $k=1$ here).
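For concreteness, the procedure described above can be sketched in plain Python (the function name `predict_1nn` and the default $L_2$ metric are my choices, not anything fixed by the problem):

```python
import math

def predict_1nn(y, X, p, g=None):
    """Predict the label of a new row y as the label p[alpha] of its
    nearest neighbor x^alpha in X under the metric g."""
    if g is None:
        # Default metric: Euclidean (L2) distance.
        g = math.dist
    # D^j = g(y, x^j) for every training row x^j.
    distances = [g(y, x) for x in X]
    # alpha = index of the smallest D^j.
    alpha = min(range(len(X)), key=distances.__getitem__)
    return p[alpha]
```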
How could I algorithmically use my original dataset to improve the metric $g$, so the predictions of the labels of $y^i$ rows are more accurate?
Make sure you standardize the values before using the $L_2$ distance; that often helps.
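Standardization here means z-scoring each column using statistics computed on the training rows, and applying the same transform to new rows. A minimal pure-Python sketch (the helper name `fit_standardizer` is mine):

```python
def fit_standardizer(X):
    """Compute per-column mean and standard deviation from the training
    rows X, returning a function that z-scores any row the same way."""
    n = len(X)
    means = [sum(col) / n for col in zip(*X)]
    stds = []
    for j, m in enumerate(means):
        var = sum((row[j] - m) ** 2 for row in X) / n
        # Guard against zero-variance columns to avoid division by zero.
        stds.append(var ** 0.5 or 1.0)
    return lambda row: [(v - m) / s for v, m, s in zip(row, means, stds)]
```

The key point is that the new rows $y^i$ must be scaled with the training set's means and standard deviations, not their own.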
If that's not sufficient, you could try metric learning. But keep in mind that one of the appeals of k-nearest neighbors is its simplicity; if you find yourself resorting to metric learning, nearest neighbors may not be the most appropriate classifier, and a different classification strategy might serve you better.
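To make "metric learning" concrete: one of the simplest forms is learning a per-feature (diagonal) weighting of the squared differences, chosen to maximize leave-one-out 1-NN accuracy on the training set. The sketch below does this by brute-force grid search, which is only feasible for a handful of features; real metric-learning methods (e.g. NCA, LMNN) optimize such objectives with gradient-based techniques instead. The function names and the weight grid are illustrative choices of mine:

```python
import itertools
import math

def loo_accuracy(X, p, w):
    """Leave-one-out 1-NN accuracy under a diagonal weighting w,
    i.e. the metric d(a, b) = sqrt(sum_j w_j * (a_j - b_j)^2)."""
    d = lambda a, b: math.sqrt(
        sum(wj * (aj - bj) ** 2 for wj, aj, bj in zip(w, a, b)))
    hits = 0
    for i, xi in enumerate(X):
        # Classify x^i by its nearest neighbor among the other rows.
        dists = [(d(xi, xj), p[j]) for j, xj in enumerate(X) if j != i]
        hits += min(dists)[1] == p[i]
    return hits / len(X)

def learn_diagonal_metric(X, p, grid=(0.0, 0.5, 1.0, 2.0)):
    """Brute-force search over per-feature weights, keeping the
    weighting with the best leave-one-out accuracy."""
    k = len(X[0])
    return max(itertools.product(grid, repeat=k),
               key=lambda w: loo_accuracy(X, p, w))
```

If the learned weights drive some features toward zero, that is a hint those features are noise with respect to the labels, which is often useful information in itself.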