Guaranteed convergence of coordinate descent algorithm


Suppose that there is an objective function $f$ defined with respect to two variables, $x$ and $y$. We can seek a minimum of $f$ using the coordinate descent algorithm, minimizing along one variable at a time until convergence. Now assume that minimizing $f$ with respect to $x$ has a closed-form solution. On the other hand, minimizing $f$ with respect to $y$ does not yield a closed-form solution, but $f$ is convex in $y$ and can be optimized using gradient-based methods. In this situation, can we say that convergence is guaranteed, or do we need further analysis?
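For concreteness, here is a minimal sketch of the scheme I have in mind, using a convex quadratic $f(x, y) = x^2 + y^2 + xy$ chosen purely for illustration (my own example, not part of the question): the $x$-step applies the closed-form minimizer, while the $y$-step runs a fixed number of gradient-descent iterations, so the inner minimization is only approximate.

```python
def coordinate_descent(x0=1.0, y0=1.0, n_outer=50, n_inner=20, lr=0.1):
    """Alternating minimization of f(x, y) = x^2 + y^2 + x*y (illustrative example)."""
    x, y = x0, y0
    for _ in range(n_outer):
        # x-step: closed-form minimizer of f(., y): setting df/dx = 2x + y = 0
        x = -y / 2.0
        # y-step: no closed form assumed, so run inner gradient descent on f(x, .)
        for _ in range(n_inner):
            grad_y = 2.0 * y + x  # df/dy
            y -= lr * grad_y
    return x, y

x_star, y_star = coordinate_descent()
# The unique minimizer of this f is (0, 0); the iterates approach it.
```

My question is whether this kind of convergence holds in general (e.g., for smooth convex $f$), given that the $y$-step is only solved inexactly by gradient steps rather than exactly.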