Let $\pmb{a}, \pmb{b} \in \mathbb{R}^d$ be fixed, known vectors. I have formulated the following minimization problem:
$$\mathcal{X}^* = \arg\min_{\substack{\pmb{x}\in \mathbb{R}^d : \\ \forall k\,\exists j,\; x_k=a_j}}\ \min_{y,z\in\mathbb{R}}\|\pmb{x} - (y\pmb{b}+z\pmb{1})\|^2_2$$
That is, the outer optimization is over all vectors $\pmb{x} \in \mathbb{R}^d$ whose components are drawn from the entries of $\pmb{a}$, while the inner optimization is over the scalars $y,z \in \mathbb{R}$. Are there efficient (even if approximate) numerical algorithms to solve a problem of this kind?
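For concreteness, here is a minimal NumPy sketch of how I evaluate the objective for a candidate $\pmb{x}$: the inner minimization over $y,z$ is just an ordinary least-squares fit of $\pmb{x}$ onto the two columns $\pmb{b}$ and $\pmb{1}$ (all names below are my own):

```python
import numpy as np

def inner_value(x, b):
    """Value of min_{y,z} ||x - (y*b + z*1)||_2^2, computed by least squares."""
    A = np.column_stack([b, np.ones_like(b)])      # design matrix [b, 1]
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)   # optimal (y, z)
    r = x - A @ coef                               # residual at the optimum
    return float(r @ r)
```

So for any fixed feasible $\pmb{x}$ the inner problem is cheap; the difficulty is entirely in the combinatorial choice of $\pmb{x}$.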
My best idea was to relax the discrete optimization to a continuous one, so that $x_k \in [\min_{i}a_i,\max_{i}a_i]$. The issue is that even if I solve the relaxed problem, I am not sure how to formulate a good projection onto the discrete feasible set, and it seems the error incurred by such a projection may be quite large. Any details/publications on how to realize this method, or suggestions for other methods, are welcome.
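The projection step I have in mind is simply component-wise rounding of the relaxed solution to the nearest entry of $\pmb{a}$; a sketch (names are mine):

```python
import numpy as np

def project_to_entries(x_relaxed, a):
    """Round each component of x_relaxed to the nearest entry of a."""
    x_relaxed = np.asarray(x_relaxed, dtype=float)
    a = np.asarray(a, dtype=float)
    # |x_k - a_j| for every pair (k, j); pick the closest a_j for each k
    idx = np.abs(x_relaxed[:, None] - a[None, :]).argmin(axis=1)
    return a[idx]
```

This is the projection whose error I worry about: rounding each coordinate independently ignores how the inner fit over $y,z$ reacts to the rounded vector as a whole.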