I have this problem:
I have a set of n-dimensional points $P$. I have one more n-dimensional point $q$.
The points in $P$ are linearly separable from $q$ (i.e. there always exists a hyperplane $n^T x + d = 0$ leaving $q$ on one side and all the points of $P$ on the other side).
I'm trying to find the hyperplane $(n, d)$ that best separates the points, where the optimality criterion is maximizing the distance between the hyperplane and $q$.
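In symbols (using the convention that $P$ lies on the non-negative side of the hyperplane), the problem I want to solve is:

$$\max_{n,\,d}\ \frac{\left|n^T q + d\right|}{\lVert n \rVert} \quad \text{subject to} \quad n^T x_i + d \ge 0 \ \ \forall x_i \in P, \qquad n^T q + d \le 0.$$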
I'm trying to solve this problem by local optimization (i.e. gradient descent), but I'm having trouble defining a cost function that reaches the best possible solution: I keep getting stuck at some "equilibrium" hyperplane that sits somewhere between $P$ and $q$ instead of sticking to some points of $P$.
Here's a figure in $\mathbb{R}^2$ to clarify the problem:

The magenta point is $q$ and the blue points are the points in $P$. The blue line shows a hyperplane (not the optimal one, which would pass through at least two points of $P$).
Any ideas?
I think what you want here is just a special case of the typical linear support vector machine. In most SVM applications you'll have multiple points on each side of the hyperplane, but there's nothing wrong with having just one here. The SVM will require solving a quadratic program or second-order cone program with $n+1$ variables, I believe, and $|P|+1$ inequalities.
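As a rough sketch of what I mean (with made-up 2-D data; in practice any SVM solver works, here I'm approximating the hard-margin case with scikit-learn's `SVC` and a very large `C`):

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical 2-D instance: the points P and the single point q.
P = np.array([[2.0, 1.0], [3.0, 2.0], [2.5, 3.0], [4.0, 1.5]])
q = np.array([[0.0, 0.0]])

X = np.vstack([P, q])
y = np.array([1] * len(P) + [-1])  # q is the lone negative example

# A very large C approximates the hard-margin SVM (the data are separable).
clf = SVC(kernel="linear", C=1e6).fit(X, y)
w = clf.coef_[0]          # the normal vector n
b = clf.intercept_[0]     # the offset d

# Distance from the separating hyperplane to the closest point on either side.
margin = 1.0 / np.linalg.norm(w)
print(w, b, margin)
```

The fitted hyperplane $w^T x + b = 0$ is positive on the $P$ side and negative on the $q$ side.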
Furthermore, the maximum-margin classifier will typically return the hyperplane in the center of the gap, along with the margin; all you would then need to do is shift the hyperplane away from $q$ and towards the points of $P$ by that margin.
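Concretely, the shift is just a change of offset. A sketch with made-up numbers, assuming the SVM returned $(n, d)$ normalized so that the margin boundaries sit at $n^T x + d = \pm 1$ (with $P$ on the $+1$ side), so moving to the $P$-side boundary means replacing $d$ with $d - 1$:

```python
import numpy as np

# Hypothetical normalized SVM output: w, b with margins at w @ x + b = ±1.
w = np.array([1.0, 1.0])
b = 0.0
q = np.array([-0.5, -0.5])            # w @ q + b = -1: q touches its margin
p_closest = np.array([0.5, 0.5])      # w @ p + b = +1: closest point of P

# Shift the hyperplane onto the P-side margin boundary w @ x + b = 1.
b_shifted = b - 1.0

# The shifted hyperplane passes through the closest point of P, and the
# distance from q to it is the full width of the gap.
dist = abs(w @ q + b_shifted) / np.linalg.norm(w)
```

With this normalization the distance from $q$ to the shifted hyperplane is $2/\lVert n\rVert$, i.e. twice the margin.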