I am trying to solve the problem
minimize $f(x)$ subject to $x_1 \in C_1, x_2\in C_2, ... x_m\in C_m$
where $x_1, ..., x_m$ are block subvectors of $x$, and $C_i$ are each closed convex sets (not necessarily polyhedral).
I want to use the block coordinate descent algorithm, which cyclically computes
$x_i^+ = \text{argmin}_{u\in C_i} f(\hat x)$
where $\hat x = (x_1,x_2,...,x_{i-1},u,x_{i+1},...,x_m)$.
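To make the question concrete, here is a minimal sketch (my own illustration, not from any of the references) of the cyclic update above, using single-coordinate blocks, $C_i = \mathbb{R}$ (unconstrained), and the strongly convex quadratic $f(x) = \tfrac12 x^TAx - b^Tx$, for which each block subproblem has a closed-form solution:

```python
import numpy as np

def block_coordinate_descent(A, b, x0, sweeps=100):
    """Cyclic coordinate descent on f(x) = 1/2 x^T A x - b^T x.

    With single-coordinate blocks and no constraints, the exact
    minimizer of f over coordinate i (others fixed) is
        x_i = (b_i - sum_{j != i} A_ij x_j) / A_ii.
    (Illustrative sketch; A is assumed symmetric positive definite,
    so f is strongly convex and the minimizer x* = A^{-1} b is unique.)
    """
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):  # one cyclic sweep over the blocks
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

# Example: a symmetric positive definite A, so the iterates converge.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
x = block_coordinate_descent(A, b, np.zeros(3))  # approaches A^{-1} b
```

In this special case the method is exactly Gauss-Seidel iteration for $Ax=b$, which is one way to see why strong convexity helps: each sweep is a contraction toward the unique minimizer.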
From Powell's "On Search Directions for Minimization Algorithms", we know that the block coordinate descent method is not guaranteed to converge in general. But from Auslender's "Optimisation : méthodes numériques" (which, no, I have not read, but everyone cites it) we know that if $f(x)$ is strongly convex then the method converges.
I'd like to be able to understand this concept more intuitively.
1) Does anyone have any intuitive geometric interpretations of Powell's and Auslender's results?
2) What would be an example of a function $f(x)$ which is convex but not strongly convex, for which this method converges?
3) What would be an example of a function $f(x)$ for which the method converges, but also stalls for a finite number of iterations (a phenomenon Powell describes, though for a different problem)?
I'd ultimately like to be able to say that if $f(x)$ is strongly convex then the method is guaranteed to converge without stalling, and that if $f(x)$ is convex but not strongly convex, it might either stall or fail to converge (and provide a concrete example).
Thanks for any insight!