I am trying to solve an optimization problem. The objective function is as follows:
$\arg\min_{\mathbf{x}} \lVert A\mathbf{x} - \mathbf{b}\rVert^2 + \text{other linear least squares terms} + \mathcal{I}(x_0<a)\,(x_0 - a)^2 + \mathcal{I}(x_n>b)\,(x_n-b)^2$
$\mathcal{I}$ is the indicator function: it returns $1$ when its condition is true and $0$ otherwise.
$x_0, x_1, \ldots, x_n$ should all lie between $a$ and $b$.
If $x_0$ or $x_n$ falls outside that range, a penalty cost is added to the objective function.
Without the indicator functions, this would simply be a linear least squares problem, which is easy to solve. The indicator function, however, is not continuous, which makes the problem difficult.
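For context, the unpenalized part alone is an ordinary least squares solve, e.g. in NumPy (a sketch with illustrative data; any full-column-rank $A$ works):

```python
import numpy as np

# Without the indicator terms, min ||Ax - b||^2 is an ordinary linear
# least squares problem with the normal-equations solution
# x = (A^T A)^{-1} A^T b.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 4))
b = rng.standard_normal(10)

x = np.linalg.lstsq(A, b, rcond=None)[0]       # numerically stable solver
x_normal = np.linalg.solve(A.T @ A, A.T @ b)   # explicit normal equations
```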
I am not an expert in numerical optimization, so any hints, links, and materials are appreciated.
Let $\mathbf{e}_i$ denote the $i$-th standard basis vector and $I_i = \mathbf{e}_i\mathbf{e}_i^T$ the square matrix with a $1$ in the $i$-th diagonal entry and zeros elsewhere. Define: $$ f(\mathbf{x}) = \begin{cases} \|A\mathbf{x}-\mathbf{b}\|^2, & x_0\geq a,\ x_n\leq b \\ \|A\mathbf{x}-\mathbf{b}\|^2 + (x_0-a)^2, & x_0<a,\ x_n\leq b \\ \|A\mathbf{x}-\mathbf{b}\|^2 + (x_n-b)^2, & x_0\geq a,\ x_n>b \\ \|A\mathbf{x}-\mathbf{b}\|^2 + (x_0-a)^2 + (x_n-b)^2, & x_0<a,\ x_n>b \end{cases} $$ Then the gradient is: $$ \nabla f(\mathbf{x}) = \begin{cases} 2A^T(A\mathbf{x}-\mathbf{b}), & x_0\geq a,\ x_n\leq b \\ 2A^T(A\mathbf{x}-\mathbf{b}) + 2(x_0-a)\mathbf{e}_0, & x_0<a,\ x_n\leq b \\ 2A^T(A\mathbf{x}-\mathbf{b}) + 2(x_n-b)\mathbf{e}_n, & x_0\geq a,\ x_n>b \\ 2A^T(A\mathbf{x}-\mathbf{b}) + 2(x_0-a)\mathbf{e}_0 + 2(x_n-b)\mathbf{e}_n, & x_0<a,\ x_n>b \end{cases} $$
Setting the gradient to zero in each case gives a candidate solution; the optimal solution $\mathbf{x}^*$ is the candidate that satisfies its own case's condition: $$ \mathbf{x}^* = \begin{cases} (A^TA)^{-1}A^T\mathbf{b}, & x^*_0\geq a,\ x^*_n\leq b \\ (A^TA+I_0)^{-1}(A^T\mathbf{b}+a\mathbf{e}_0), & x^*_0<a,\ x^*_n\leq b \\ (A^TA+I_n)^{-1}(A^T\mathbf{b}+b\mathbf{e}_n), & x^*_0\geq a,\ x^*_n>b \\ (A^TA+I_0+I_n)^{-1}(A^T\mathbf{b}+a\mathbf{e}_0+b\mathbf{e}_n), & x^*_0<a,\ x^*_n>b \end{cases} $$ Note that $f$ is in fact convex and continuously differentiable, since each penalty term is a squared hinge, e.g. $\mathcal{I}(x_0<a)(x_0-a)^2 = \max(a-x_0,0)^2$. Therefore the self-consistent candidate is the global minimizer (assuming $A$ has full column rank, so that $A^TA$ is invertible).
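This case-checking procedure can be sketched in NumPy as follows (the function name and `b_vec` are my own; `a` and `b` are the scalar bounds, while `b_vec` is the right-hand-side vector):

```python
import numpy as np

def solve_piecewise_ls(A, b_vec, a, b):
    """Minimize ||A x - b_vec||^2 + I(x_0<a)(x_0-a)^2 + I(x_n>b)(x_n-b)^2
    by solving each of the four smooth cases and returning the candidate
    that satisfies its own case condition."""
    n = A.shape[1]
    e0 = np.zeros(n); e0[0] = 1.0        # basis vector selecting x_0
    en = np.zeros(n); en[-1] = 1.0       # basis vector selecting x_n
    I0, In = np.outer(e0, e0), np.outer(en, en)
    AtA, Atb = A.T @ A, A.T @ b_vec

    cases = [
        (AtA,           Atb,                   lambda x: x[0] >= a and x[-1] <= b),
        (AtA + I0,      Atb + a * e0,          lambda x: x[0] < a and x[-1] <= b),
        (AtA + In,      Atb + b * en,          lambda x: x[0] >= a and x[-1] > b),
        (AtA + I0 + In, Atb + a * e0 + b * en, lambda x: x[0] < a and x[-1] > b),
    ]
    for M, rhs, consistent in cases:
        x = np.linalg.solve(M, rhs)
        if consistent(x):
            return x
    raise RuntimeError("no self-consistent case; is A full column rank?")
```

For example, with $A = I_3$, $\mathbf{b} = (-1, 0.5, 2)$ and bounds $a=0$, $b=1$, the routine lands in the fourth case and returns $(-0.5,\ 0.5,\ 1.5)$.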