There is a well-known method called projected gradient descent. Why is there no analogous "projected coordinate descent"?
Suppose $f$ is a smooth, convex function that we would like to minimize, subject to the constraint that the minimizer lies in a convex set $\mathcal{X}$.
The projected gradient descent update is simply
$$x^{(t+1)} = \mathcal{P}_{\mathcal{X}} \left( x^{(t)} -\alpha_t \nabla f \left( x^{(t)} \right) \right)$$
for the $(t+1)^{th}$ iteration.
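To make the update concrete, here is a minimal sketch in Python, assuming an illustrative objective $f(x) = \|x - b\|^2$ and a box constraint $\mathcal{X} = [0,1]^n$ (for which the projection $\mathcal{P}_{\mathcal{X}}$ is just coordinate-wise clipping); the function names and the fixed step size are my own choices, not part of the question:

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n (coordinate-wise clip)."""
    return np.clip(x, lo, hi)

def projected_gradient_descent(grad, project, x0, alpha=0.1, iters=200):
    """x^{t+1} = P_X(x^t - alpha * grad f(x^t)), with a constant step size."""
    x = x0.copy()
    for _ in range(iters):
        x = project(x - alpha * grad(x))
    return x

# Example: f(x) = ||x - b||^2 with b outside the box [0,1]^3,
# so the constrained minimizer is the projection of b onto the box.
b = np.array([1.5, -0.5, 0.3])
grad = lambda x: 2.0 * (x - b)  # gradient of ||x - b||^2
x_star = projected_gradient_descent(grad, project_box, np.zeros(3))
# x_star is (approximately) [1.0, 0.0, 0.3]
```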
Why is there no "projected coordinate descent"? I imagine it to be
$$x^{(t+1)} = \mathcal{P}_{\mathcal{X}} \left(\arg\min_y f \left( \dots,x_{i-1}^{(t+1)},y,x_{i+1}^{(t)},\dots \right)\right),$$
i.e., minimize over the $i^{th}$ coordinate while fixing all the other coordinates, and then project onto $\mathcal{X}$.
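A sketch of this proposed iteration, again under the assumed example $f(x) = \|x - b\|^2$ over the box $[0,1]^n$ (where the coordinate-wise argmin is simply $y = b_i$ in closed form, and the projection is a clip); this is only an illustration of the update I have in mind, not an established algorithm:

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def projected_coordinate_descent(b, x0, sweeps=50):
    """Sweep over coordinates: exactly minimize f(x) = ||x - b||^2 in
    coordinate i (argmin is b[i]), then project the full vector onto X."""
    x = x0.copy()
    for _ in range(sweeps):
        for i in range(x.size):
            x[i] = b[i]          # argmin over coordinate i, others fixed
            x = project_box(x)   # then project onto X
    return x

b = np.array([1.5, -0.5, 0.3])
x_star = projected_coordinate_descent(b, np.zeros(3))
# for this separable example the iteration reaches [1.0, 0.0, 0.3]
```

Note that for a box constraint the projection acts coordinate-wise, so projecting after each coordinate update changes nothing outside coordinate $i$; whether this behaves sensibly for a general (non-separable) $\mathcal{X}$ is exactly what the question is asking.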
Is there a fundamental difficulty with this approach, or has it simply not been studied?