I hope to solve the following problem:
$$\min \quad \left\| CX \right\|_{1} $$ $$ \text{s.t.}\quad AX=b,\ X>0 $$
where $C \in \mathbb{R}^{m \times m}$, $X \in \mathbb{R}^{m \times n}$, $A \in \mathbb{R}^{k \times m}$, and $b \in \mathbb{R}^{k \times n}$. $C$ is a known weight matrix and $X$ is the unknown matrix. My question is how to calculate the proximal operator of $\left\| CX \right\|_{1}$. I know that without $C$, the proximal operator is just elementwise shrinkage (soft thresholding).
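For reference, a minimal NumPy sketch of that elementwise shrinkage (soft-thresholding) step for the case without $C$; the sample matrix and threshold are made up for illustration:

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise shrinkage: proximal operator of tau * ||X||_1 (no C)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

# Illustrative data: shrink each entry toward zero by tau = 1.0.
X = np.array([[ 1.5, -0.2],
              [-3.0,  0.7]])
print(soft_threshold(X, 1.0))  # → [[ 0.5 -0. ] [-2.   0. ]]
```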
This problem would be easy if $X$ were a vector $x$, since we would just need to solve the LP $$ \min \quad c^Tx $$ $$ \text{s.t.}\quad Ax=b ,\ x>0, $$ but my $X$ is a matrix.
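For the vector case, the $\ell_1$ objective itself can be turned into an LP by adding an auxiliary vector $t \ge |Cx|$ and minimizing $\sum_i t_i$. A sketch with SciPy's `linprog`, using small made-up data ($C = I$, one equality constraint):

```python
import numpy as np
from scipy.optimize import linprog

# min ||C x||_1  s.t.  A x = b, x >= 0, as an LP over z = [x; t].
m, k = 2, 1
C = np.eye(m)                      # hypothetical data for illustration
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

c = np.concatenate([np.zeros(m), np.ones(m)])   # objective: sum(t)
A_ub = np.block([[ C, -np.eye(m)],              #  C x - t <= 0
                 [-C, -np.eye(m)]])             # -C x - t <= 0
b_ub = np.zeros(2 * m)
A_eq = np.hstack([A, np.zeros((k, m))])         # A x = b
bounds = [(0, None)] * (2 * m)                  # x >= 0, t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b, bounds=bounds)
print(res.fun)  # optimal value of ||C x||_1
```

With this data the optimum is $x_1 + x_2 = 1$ with $x \ge 0$, so the optimal value is $1$.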
The overall problem I hope to solve is: $$ \min \quad \left\| CX \right\|_{1} + \lambda \left\| Y \right\|_{*} $$ $$ \text{s.t.}\quad AX+Y=b ,\ X>0 $$ where $Y$ has the same dimensions as $b \in \mathbb{R}^{k \times n}$, and $X$ is known to be sparse.
The proximal operator for $\|CX\|_1$ does not admit an analytic solution. Therefore, to compute the proximal operator, you're going to have to solve a non-trivial convex optimization problem.
So why do that? Why not apply a more general convex optimization approach to the overall problem?
This problem is LP-representable, since $$\|CX\|_1 = \max_j \sum_i |(CX)_{ij}| = \max_j \sum_i \left| \sum_k C_{ik} X_{kj} \right|$$ So any linear programming system can solve this problem readily. Of course, having a modeling framework will help; for instance, in my package CVX, this is just:
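A minimal sketch of that CVX model in MATLAB, assuming the data `A`, `b`, `C` and the sizes `m`, `n` are already defined in the workspace:

```matlab
cvx_begin
    variable X(m,n)
    minimize( max( sum( abs( C * X ) ) ) )
    subject to
        A * X == b
        X >= 0
cvx_end
```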
This assumes that $X>0$ is to be interpreted elementwise. You could also use `norm(C*X,1)` instead of `max(sum(abs(C*X)))`, but in fact CVX will end up doing the same thing either way.

EDIT: From the comments, it looks like you want `sum(sum(abs(C*X)))` instead. Technically, $\|\cdot\|_1$ refers to the induced matrix norm, not the elementwise sum of the absolute values.
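If the elementwise sum is indeed what you want, the overall problem from the question can be sketched in CVX the same way, using `norm_nuc` for the nuclear norm (assuming `A`, `b`, `C`, `lambda` and the sizes `m`, `n`, `k` are defined):

```matlab
cvx_begin
    variables X(m,n) Y(k,n)
    minimize( sum( sum( abs( C * X ) ) ) + lambda * norm_nuc( Y ) )
    subject to
        A * X + Y == b
        X >= 0
cvx_end
```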