Consider the optimization problem $\mathcal{P}_0$
\begin{equation*} \begin{aligned} & \underset{x\in \mathbb{R}^2}{\text{minimize}} & & \left\| x-p \right\|^2 \\ & \text{subject to} & & \ A x \leq b, & \ \ x_1^2 + x_2^2 = 1 \end{aligned} \end{equation*}
where $p \in \mathbb{R}^2$ is a parameter, and $x^*(p)$ is the optimal solution.
I am looking for a *convex* optimization problem $\mathcal{P}$ whose optimal solution "approximates" $x^*(p)$.
There is a whole field devoted to this problem. Look up material on semidefinite relaxations, sum-of-squares, and moment methods. Papers by Jean Bernard Lasserre, such as "Global optimization with polynomials and the problem of moments", SIAM J. Optim. 11(3), pp. 796–817, 2001, are a good start.
There is software for this problem too, such as the MATLAB toolboxes SOSTOOLS, GloptiPoly, and YALMIP. For reference, here is a quick test with YALMIP (developed by me). I solve the naive relaxation discussed in the comments (which typically yields a poor solution), the global optimization problem using YALMIP's built-in global solver (solving a nonconvex quadratic problem in $\mathbb{R}^2$ is trivial), and a semidefinite relaxation (which typically solves this problem exactly, i.e., the relaxation is tight and the solution $x$ can be recovered).