Bounding the solution of a convex optimization problem


Let $\beta \in \mathbb{R}^K$, and assume that $\alpha \in [0,1]$. Let $\bar{\beta} = \frac{1}{K} \sum_{i=1}^K \beta_i$ denote the mean of $\beta$, and let $\tilde{\beta}$ denote a median of $\beta$. Assuming $\bar{\beta} \leq \tilde{\beta}$, I wish to show that the solution $\hat{t}$ of the convex optimization problem:

$$\underset{t \in \mathbb{R}}{\text{arg min}} \sum_{i=1}^K \left(\frac{1}{2}(1-\alpha)(\beta_i - t)^2 + \alpha|\beta_i - t| \right),$$

is bounded such that $\bar{\beta} \leq \hat{t} \leq \tilde{\beta}$ for all $\alpha \in [0,1]$.
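As a sanity check (not a proof), the bound can be verified numerically. The sketch below uses an illustrative $\beta$ of my own choosing with $\bar{\beta} \leq \tilde{\beta}$, and minimizes the objective by ternary search, which is valid since the objective is convex in $t$:

```python
# Numerical sanity check of the claimed bound mean <= t_hat <= median.
# beta and the alpha grid are illustrative choices, not from the question.

def objective(t, beta, alpha):
    """The convex objective from the question, as a function of t."""
    return sum(0.5 * (1 - alpha) * (b - t) ** 2 + alpha * abs(b - t)
               for b in beta)

def argmin_t(beta, alpha, iters=200):
    # The minimizer lies in [min(beta), max(beta)] and the objective is
    # convex in t, so ternary search converges to t_hat.
    lo, hi = min(beta), max(beta)
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if objective(m1, beta, alpha) < objective(m2, beta, alpha):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

beta = [0.0, 1.0, 4.0, 5.0, 6.0]       # mean 3.2 <= median 4.0, as assumed
mean = sum(beta) / len(beta)
median = sorted(beta)[len(beta) // 2]  # K odd, so the median is unique
for alpha in [0.0, 0.25, 0.5, 0.75, 1.0]:
    t_hat = argmin_t(beta, alpha)
    assert mean - 1e-6 <= t_hat <= median + 1e-6
```

Note that at $\alpha = 0$ the minimizer is the mean and at $\alpha = 1$ it is the median, so the claimed bound interpolates between the two classical cases.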

I have, without any luck, tried to find an explicit formula for $\hat{t}$ by differentiating the objective function with respect to $t$ (away from the kinks at $t = \beta_i$, where the absolute value is not differentiable) and equating to zero, which for $\alpha < 1$ yields

$$t = \bar{\beta} + \frac{\alpha}{K(1-\alpha)} \sum_{i=1}^K \text{sign}(\beta_i-t),$$

from which point I am stuck. It is clear that $\sum_{i=1}^K \text{sign}(\beta_i-t) \in [-K, K]$, which gives $\bar{\beta} - \frac{\alpha}{1-\alpha} \leq t \leq \bar{\beta} + \frac{\alpha}{1-\alpha}$, but I am not sure if this helps me at all.
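One route suggested by the stationarity condition is to avoid solving for $\hat{t}$ and instead examine the sign of the derivative $g(t)$ of the objective at the endpoints: if $g(\bar{\beta}) \leq 0$ and $g(t) \geq 0$ for $t$ just above $\tilde{\beta}$, convexity forces $\bar{\beta} \leq \hat{t} \leq \tilde{\beta}$. The sketch below checks this numerically on an illustrative $\beta$ of my own (an illustration, not a proof):

```python
# Derivative of the objective wrt t, valid away from the kinks t = beta_i:
#   g(t) = sum_i [ (1 - alpha) * (t - beta_i) - alpha * sign(beta_i - t) ].
# beta below is an illustrative example satisfying mean <= median.

def sign(x):
    return (x > 0) - (x < 0)

def g(t, beta, alpha):
    return sum((1 - alpha) * (t - b) - alpha * sign(b - t) for b in beta)

beta = [0.0, 1.0, 4.0, 5.0, 6.0]       # mean 3.2 <= median 4.0
mean = sum(beta) / len(beta)
median = sorted(beta)[len(beta) // 2]
eps = 1e-6
for alpha in [0.0, 0.3, 0.6, 0.9, 1.0]:
    # Non-positive slope at the mean and non-negative slope just past the
    # median: by convexity the minimizer then lies in [mean, median].
    assert g(mean, beta, alpha) <= 1e-9
    assert g(median + eps, beta, alpha) >= -1e-9
```

Note that at $t = \bar{\beta}$ the quadratic part of $g$ vanishes, so $g(\bar{\beta}) = -\alpha \sum_i \text{sign}(\beta_i - \bar{\beta})$, which only depends on how many $\beta_i$ lie above the mean.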

In particular, I am unsure whether this approach is even worth pursuing, or whether there is an easier way of showing the inequalities that does not rely on finding an explicit solution, e.g. a subgradient argument?