Consider the following optimization problem:
$$\max_{\{a=(a_1,a_2,\ldots,a_m)\in A\}}\min_{\{b:=(b_1,\ldots,b_m)\in B\}} \sum_{j=1}^m |b_j-a_j|.$$ Is computing the optimal value of this problem tractable? Note that $A$ and $B$ are convex and bounded sets.
I don't believe it is. I have a deleted answer sitting here as a scratchpad, where I tried various reformulations to a tractable form, without success. The problem resembles a saddle-point problem, but the joint convexity of the objective (rather than convexity in $b$ and concavity in $a$) is an obstacle.
The inner minimization produces a convex function of $a$, since partial minimization of a jointly convex function preserves convexity. That is, the problem is $$\begin{array}{ll} \text{maximize} & f(a) \triangleq \inf_{b\in B} \sum_{j=1}^m |a_j-b_j| \\ \text{subject to} & a \in A \end{array}$$ So the outer problem is a convex *maximization*. Even if you could find an analytic formula for $f(a)$, you would still be faced with a convex maximization, which is known to be intractable except in special cases.
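For concreteness, here is a small sketch of evaluating $f(a)$ when $B$ is polyhedral, using the standard LP reformulation of the $\ell_1$ norm ($|a_j - b_j| \le t_j$ split into two linear inequalities). I take $B$ to be a box purely for illustration; a general polyhedron just swaps in its own constraints:

```python
import numpy as np
from scipy.optimize import linprog

def inner_min_l1(a, lo, hi):
    """Evaluate f(a) = min_{lo <= b <= hi} sum_j |a_j - b_j| as an LP.
    Variables are stacked as [b; t]; we minimize sum(t) subject to
    -t <= a - b <= t and the box constraints on b."""
    m = len(a)
    c = np.concatenate([np.zeros(m), np.ones(m)])   # objective: sum of t
    I = np.eye(m)
    # a - b <= t  ->  -b - t <= -a
    # b - a <= t  ->   b - t <=  a
    A_ub = np.block([[-I, -I], [I, -I]])
    b_ub = np.concatenate([-a, a])
    bounds = [(lo, hi)] * m + [(0, None)] * m       # b in the box, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.fun

a = np.array([1.5, -0.5, 0.3])
print(inner_min_l1(a, 0.0, 1.0))   # l1 distance to the unit box: 0.5 + 0.5 + 0 = 1.0
```

For a box the answer of course has a closed form (clip $a$ into the box and sum the coordinatewise gaps); the LP is there to show the pattern that carries over to arbitrary polyhedral $B$.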
What we do know, thanks to the maximum principle for convex functions, is that the optimum will be attained on the boundary of $A$ (indeed, at an extreme point of $A$, when $A$ has extreme points). Depending upon your knowledge of $A$, you may be able to design effective heuristics.
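To illustrate the extreme-point property, here is a toy brute force, assuming (my choice, not from the question) that $A$ and $B$ are both boxes: since $f$ is convex, its maximum over the box $A$ is attained at one of the $2^m$ vertices, so for tiny $m$ we can simply enumerate them.

```python
import itertools
import numpy as np

def f_box(a, B_lo, B_hi):
    """f(a) = l1 distance from a to the box B (closed form for a box:
    clip a into the box and sum the coordinatewise gaps)."""
    return np.abs(a - np.clip(a, B_lo, B_hi)).sum()

def exact_vertex_search(A_lo, A_hi, B_lo, B_hi, m):
    """Exhaustive search over the 2^m vertices of the box A.
    Only viable for small m, but exact, by the maximum principle."""
    return max(
        (f_box(np.array(v), B_lo, B_hi), v)
        for v in itertools.product([A_lo, A_hi], repeat=m)
    )

val, vertex = exact_vertex_search(-2.0, 2.0, 0.0, 1.0, m=2)
print(val, vertex)   # 4.0 at the vertex (-2.0, -2.0)
```

This scales exponentially in $m$, which is consistent with the problem being a convex maximization; it is only meant as a ground-truth check for heuristics on small instances.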
Let me share one of the things I came up with in my scratchpad work. Note that $\|a-b\|_1 = \max_{\|d\|_\infty\leq 1} d^T(a-b)$. If strong duality holds for the inner minimization---for instance, if $B$ is polyhedral, or has a non-empty interior---then we may swap the inner $\min$ and $\max$, and the outer maximization is equivalent to $$\begin{array}{ll} \text{maximize}_{a,d} & a^T d - \varphi_B(d) \\ \text{subject to} & a \in A \\ & \|d\|_\infty \leq 1 \end{array}$$ where $\varphi_B(d) \triangleq \sup_{b\in B} b^T d$ is the support function of $B$. To me, the bilinear term suggests that a sort of alternating maximization approach may be fruitful: fix $d$ and maximize over $a$, then fix $a$ and maximize over $d$, and so on. When $a$ is fixed, you're just solving the dual of the inner minimization, so a properly designed primal/dual algorithm applied to that inner minimization will work.
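A minimal sketch of that alternating scheme, again in the toy case where $A$ and $B$ are boxes (my assumption, for illustration): for a box $B$ the inner minimizer $b^\star$ is the coordinatewise projection of $a$ onto $B$, an optimal dual variable is $d = \operatorname{sign}(a - b^\star)$, and the $a$-step $\max_{a\in A} a^T d$ decouples coordinatewise.

```python
import numpy as np

def alt_max(A_lo, A_hi, B_lo, B_hi, a0, iters=50):
    """Alternating-maximization heuristic for max_{a in A} min_{b in B} ||a-b||_1,
    specialized to boxes A and B."""
    a = np.array(a0, dtype=float)
    for _ in range(iters):
        b = np.clip(a, B_lo, B_hi)     # d-step: inner minimizer = projection onto B
        d = np.sign(a - b)             # optimal dual, ||d||_inf <= 1
        # a-step: maximize a^T d over the box A, coordinate by coordinate
        a = np.where(d > 0, A_hi, np.where(d < 0, A_lo, a))
    return a, np.abs(a - np.clip(a, B_lo, B_hi)).sum()

a_star, val = alt_max(A_lo=-2.0, A_hi=2.0, B_lo=0.0, B_hi=1.0, a0=[1.2, -0.3])
print(a_star, val)   # converges to a = (2, -2) with value 3.0
```

Note the caveat this small instance already exposes: the global optimum here is $4$ at $a=(-2,-2)$, but from this starting point the alternating scheme stalls at the local maximum $(2,-2)$ with value $3$ (and from a point with $d=0$, e.g. $a_0$ interior to $B$, it does not move at all). That is exactly the behavior you'd expect from a heuristic for a convex maximization.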