I designed an optimization problem to improve the performance of gradient-based training of a neural network. I spent hours on it, but unfortunately could not solve it.
Given vectors ${\bf a}_1, {\bf a}_2 \in \Bbb R^n$, we want to
$$\begin{array}{ll} \underset{{\bf x} \in \Bbb R^n}{\text{minimize}} & \| {\bf x} - {\bf a}_1 \|^2 + \| {\bf x} - {\bf a}_2 \|^2\\ \text{subject to} & {\bf a}_1 \cdot {\bf x} > 0 \\ & {\bf a}_2 \cdot {\bf x} > 0\end{array}$$
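One observation: without the constraints, the objective $\| {\bf x} - {\bf a}_1 \|^2 + \| {\bf x} - {\bf a}_2 \|^2$ is minimized at the midpoint ${\bf x}^\star = ({\bf a}_1 + {\bf a}_2)/2$, so whenever that midpoint happens to satisfy both strict inequalities it also solves the constrained problem. Here is a minimal numerical sketch of that check (the vectors `a1`, `a2` are made-up example data, not from the actual application):

```python
import numpy as np

def unconstrained_minimizer(a1, a2):
    # ||x - a1||^2 + ||x - a2||^2 is a strictly convex quadratic in x;
    # setting the gradient 2(x - a1) + 2(x - a2) to zero gives the midpoint.
    return (a1 + a2) / 2.0

def is_feasible(x, a1, a2):
    # Both strict inequality constraints: a1 . x > 0 and a2 . x > 0.
    return a1 @ x > 0 and a2 @ x > 0

# Hypothetical example data.
a1 = np.array([1.0, 2.0])
a2 = np.array([2.0, 1.0])

x_star = unconstrained_minimizer(a1, a2)
print(x_star, is_feasible(x_star, a1, a2))
```

When the midpoint is infeasible the situation is more delicate: the feasible set is open (strict inequalities), so a minimizer need not exist at all, and the infimum is approached on the boundary where ${\bf a}_1 \cdot {\bf x} = 0$ or ${\bf a}_2 \cdot {\bf x} = 0$.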
Any assistance would bring me closer to solving the problem.