I was wondering if there is an efficient way of computing the proximal mapping of the following function $f : \mathbb{R}^3 \rightarrow \mathbb{R}$, where $b_i \in \mathbb{R}^3$ and $c_i \in \mathbb{R}$: $$ f(y) = \max \left\{ \tfrac{1}{2} \| y \|^2 + b_1^T y + c_1, \; b_2^T y + c_2 \right\}, $$ i.e. $$\arg\min_{y \in \mathbb{R}^3} \, f(y) + \frac{1}{2 \sigma} \| y - y^0 \|^2.$$
Right now I'm considering using subgradient descent, but I was wondering whether there is a more efficient alternative.
Perhaps someone can find an analytical formula for this prox operator, but if we are going to evaluate it numerically, then one option is to reformulate the optimization problem in epigraph form as \begin{align} \operatorname{minimize}_{y,t} & \quad t + \frac{1}{2 \sigma} \|y - y^0 \|^2 \\ \text{subject to} & \quad t \geq b_2^T y + c_2 \\ & \quad t \geq \frac12 \|y \|^2 + b_1^T y + c_1. \end{align} This problem can then be solved using interior-point methods (for example, via CVX in MATLAB).
This epigraph trick of introducing $t$ is used a lot in convex optimization.
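That said, for this particular $f$ the prox can be computed essentially in closed form by case analysis on which piece of the max is active at the solution. Writing $g_1(y) = \frac12\|y\|^2 + b_1^T y + c_1$ and $g_2(y) = b_2^T y + c_2$: first try the prox of $g_1$ alone, then of $g_2$ alone; if neither candidate is consistent with the max, the solution lies on the boundary $g_1(y) = g_2(y)$, and the KKT conditions reduce to a one-dimensional root-find in the subgradient weight $\theta \in [0,1]$. A sketch in NumPy (function and variable names are mine):

```python
import numpy as np

def prox_max_quad_affine(y0, sigma, b1, c1, b2, c2, tol=1e-12):
    """Prox of f(y) = max{0.5||y||^2 + b1.y + c1, b2.y + c2} at y0 with step sigma."""
    g1 = lambda y: 0.5 * (y @ y) + b1 @ y + c1
    g2 = lambda y: b2 @ y + c2

    # Case 1: quadratic piece active alone; prox of g1 is linear in y0.
    y1 = (y0 - sigma * b1) / (1.0 + sigma)
    if g1(y1) >= g2(y1):
        return y1

    # Case 2: affine piece active alone; prox of g2 is a shift.
    y2 = y0 - sigma * b2
    if g2(y2) >= g1(y2):
        return y2

    # Case 3: both pieces active. Stationarity with subgradient weight theta:
    #   theta*(y + b1) + (1-theta)*b2 + (y - y0)/sigma = 0
    # gives y(theta) below; find theta with g1(y(theta)) = g2(y(theta)).
    def y_of(theta):
        return (y0 - sigma * (theta * b1 + (1 - theta) * b2)) / (1.0 + sigma * theta)

    # Since cases 1 and 2 failed, h(theta) = g1(y(theta)) - g2(y(theta))
    # satisfies h(0) > 0 and h(1) < 0, so bisection finds a root.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g1(y_of(mid)) - g2(y_of(mid)) > 0:
            lo = mid
        else:
            hi = mid
    return y_of(0.5 * (lo + hi))
```

The sign change that justifies the bisection comes directly from the failed case checks: $y(1) = y_1$ and $y(0) = y_2$, so $h(1) = g_1(y_1) - g_2(y_1) < 0$ and $h(0) = g_1(y_2) - g_2(y_2) > 0$. Since the problem is convex, any root of $h$ in $[0,1]$ yields a KKT point and hence the prox.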