I am trying to construct a solver for the following problem:
Given $x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}$, the objective can be written as: \begin{align} &\hat{x}^* = \operatorname*{argmin}_{\hat{x}} \| A \hat{x} \|_1 \\ &\textrm{s.t.} \quad \hat{x} \in x + \ker (A+I) \end{align} where $A$ is an $n \times n$ matrix.
As a simple example I have the following: \begin{equation} A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \end{equation} \begin{equation} x = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \end{equation}
In this case the constraint is: \begin{equation} \hat{x} = \begin{bmatrix} 1 \\ -1 \end{bmatrix} + c \begin{bmatrix} -1 \\ 1 \end{bmatrix} \end{equation} where $c$ is a free parameter.
Now, solving this small system by hand, the optimum is clearly at $c = 1$, which gives $\hat{x} = 0$ and an objective value of $0$.
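As a quick numerical check of the hand calculation (plain numpy, with the kernel direction of this $2 \times 2$ example written out explicitly):

```python
import numpy as np

A = np.array([[0., 1.], [1., 0.]])
x = np.array([1., -1.])
kernel_dir = np.array([-1., 1.])  # spans ker(A + I) for this A

c = 1.0
x_hat = x + c * kernel_dir        # = [0, 0]
print(np.linalg.norm(A @ x_hat, 1))  # 0.0
```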
My attempts to solve this generally in Python have so far been unsuccessful. I have tried scipy.optimize, but I am having trouble writing the constraint. Here is what I have so far:
import numpy as np
import scipy.linalg
import scipy.optimize

x0 = np.asarray([1., -1.])
A = np.asarray([[0., 1.], [1., 0.]])
I = np.identity(2)
kernel = scipy.linalg.null_space(A + I)  # basis for ker(A + I)

def objective(x):
    E = np.linalg.norm(A @ x, 1)  # L1 norm of A @ x
    return E

res = scipy.optimize.minimize(objective, x0)  # solving the optimization, without the constraint
# constraint = ?
# res = scipy.optimize.minimize(objective, x0, constraints=constraint)
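One way to stay within scipy (a sketch, assuming a derivative-free method is acceptable given the kinks of the L1 norm): since every feasible $\hat{x}$ is $x_0$ plus a combination of kernel basis vectors, one can optimize over the kernel coefficients $c$ instead of $\hat{x}$, which removes the constraint entirely. The function name `objective_c` is mine.

```python
import numpy as np
import scipy.linalg
import scipy.optimize

A = np.array([[0., 1.], [1., 0.]])
x0 = np.array([1., -1.])
kernel = scipy.linalg.null_space(A + np.identity(2))  # basis for ker(A + I)

def objective_c(c):
    # x_hat = x0 + kernel @ c is feasible by construction
    return np.linalg.norm(A @ (x0 + kernel @ c), 1)

# Nelder-Mead is derivative-free, so the non-smooth L1 objective is tolerable here
res = scipy.optimize.minimize(objective_c, np.zeros(kernel.shape[1]),
                              method="Nelder-Mead")
x_hat = x0 + kernel @ res.x
```

This works for small problems, but derivative-free search scales poorly with the kernel dimension, which is part of why a convex modeling tool is the more natural fit.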
I think I figured out how to do this. There might still be better ways, but at least this seems to work.
First, I realized that the objective is non-differentiable because of the L1 norm, so the gradient-based methods in scipy.optimize struggle with it. I therefore switched from scipy to cvxpy, which handles non-smooth convex problems directly.
In cvxpy I could solve the problem as follows: