Dual vectors and subgradients


In Candès and Recht's (2008) paper on matrix completion, there is a statement that is suggested to follow from "standard optimization theory":

[Quote from Candès and Recht (2008) omitted.]

I am trying to understand the highlighted statement. I will attempt to restate the question in more general terms:

Suppose you are solving the following convex optimization problem:

$\min g(X)$ such that $h(X) = 0$

where $h$ is a linear operator and $g$ is a convex function.

We are considering $M$ as a candidate solution to this optimization problem. It is a solution if there exists a vector $\lambda$ such that $h(\lambda) \in \partial g(M)$, where $\partial g(M)$ is the subdifferential of $g$ at $M$; such a $\lambda$ is referred to as a "dual certificate."

Why is this true? Can you point me to the theorem in convex optimization being invoked? I can find literature on dual functions and also on subgradients, but I can't find this statement.


Best answer:

You’ve misstated the KKT condition. It’s not that $h(\lambda) \in \partial g(X)$ but rather that

$\sum_{i=1}^{m} \lambda_{i} \nabla h_{i}(X) \in \partial g(X)$
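To see where this comes from, here is a minimal sketch of the standard Lagrangian argument for a convex objective with affine equality constraints. Form the Lagrangian

$L(X, \lambda) = g(X) - \sum_{i=1}^{m} \lambda_{i} h_{i}(X).$

Because each $h_{i}$ is affine, a feasible $X$ is optimal if and only if there exists $\lambda$ with $0 \in \partial_{X} L(X, \lambda)$, i.e.

$0 \in \partial g(X) - \sum_{i=1}^{m} \lambda_{i} \nabla h_{i}(X),$

which rearranges to the condition above. (The sign of $\lambda$ is immaterial since the multipliers for equality constraints are unrestricted.)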

In this particular case, the constraint is simply that the elements of $X$ in specified locations equal the corresponding elements of $M$: $h_{i}(X) = X_{j,k} - M_{j,k}$, where constraint $i$ corresponds to the entry in row $j$, column $k$ of $X$ and $M$. The gradient of $h_{i}(X)$ is therefore $E_{j,k}$, the matrix with a one in the $(j,k)$ entry and zeros elsewhere. Thus the KKT condition simplifies to

$R_{\Omega}^{*}(\lambda) \in \partial g(X)$

where $R_{\Omega}^{*}$ is the adjoint of the sampling operator $R_{\Omega}$: it places the multipliers $\lambda_{i}$ in the observed positions $\Omega$ and zeros elsewhere.
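As a numerical illustration (not from the paper; the function names and toy matrices below are my own), the sketch below builds $R_{\Omega}^{*}(\lambda)$ explicitly and checks membership in the subdifferential of the nuclear norm $g = \|\cdot\|_{*}$, using the standard characterization: with compact SVD $M = U \Sigma V^{T}$, $Y \in \partial\|M\|_{*}$ iff $P_{T}(Y) = UV^{T}$ and $\|P_{T^{\perp}}(Y)\|_{2} \le 1$, where $T$ is the tangent space at $M$.

```python
import numpy as np

def R_Omega_adj(lam, Omega, shape):
    """Adjoint of the sampling operator: sum_i lam_i * E_{j_i, k_i}.

    Places each multiplier lam_i at its observed position (j_i, k_i)
    and zeros elsewhere.
    """
    Y = np.zeros(shape)
    for l, (j, k) in zip(lam, Omega):
        Y[j, k] = l
    return Y

def nuclear_norm_subgradient_check(M, Y, tol=1e-8):
    """Check whether Y lies in the subdifferential of the nuclear norm at M."""
    U, s, Vt = np.linalg.svd(M)
    r = int(np.sum(s > tol))                 # numerical rank of M
    U, V = U[:, :r], Vt[:r, :].T
    PU, PV = U @ U.T, V @ V.T                # projectors onto column/row spaces
    PT_Y = PU @ Y + Y @ PV - PU @ Y @ PV     # projection of Y onto T
    PTperp_Y = Y - PT_Y                      # component orthogonal to T
    on_T = np.allclose(PT_Y, U @ V.T, atol=10 * tol)
    small = np.linalg.norm(PTperp_Y, 2) <= 1 + tol   # spectral norm bound
    return bool(on_T and small)

# Toy rank-1 example: M = diag(2, 0), observed entries Omega = {(0,0), (1,1)}.
M = np.array([[2.0, 0.0], [0.0, 0.0]])
Omega = [(0, 0), (1, 1)]
Y_good = R_Omega_adj([1.0, 0.5], Omega, (2, 2))  # matches UV^T on T, small off T
Y_bad = R_Omega_adj([1.0, 2.0], Omega, (2, 2))   # off-T part has norm 2 > 1

print(nuclear_norm_subgradient_check(M, Y_good))  # True
print(nuclear_norm_subgradient_check(M, Y_bad))   # False
```

The check mirrors the dual-certificate condition: a valid certificate $\lambda$ is one whose lift $R_{\Omega}^{*}(\lambda)$ passes this subgradient test.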