Optimization with symmetric matrix constraint


Consider the following optimization problem:

''Minimize some objective $f(A)$ over all matrices $A$ s.t. $A \mathbf{1} = \mathbf{1}$ and $A = A^T$.''

I wonder in which ways one can handle the symmetry constraint.

A) One way would be to introduce a Lagrange multiplier $\mu_{ij}$ for each matrix entry, i.e., the Lagrangian would appear as

$\mathcal{L}(A, \lambda_i, \mu_{ij}) = (something) - \lambda^T (A\mathbf{1} - \mathbf{1}) - \sum_{i,j} \mu_{ij} (A(i,j) - A(j, i))$.

But this complicates the Lagrangian a lot, since one has to deal with all these $\mu_{ij}$'s.

B) Another way is shown by the following example, which I do not fully understand: Therein, the constraints are modified to ''s.t. $A \mathbf{1} = \mathbf{1}$ and $A^T \mathbf{1} = \mathbf{1}$ and $A = A^T$'', for which the Lagrangian is derived as follows:

$\mathcal{L}(A, \lambda_i, \mu_i) = (something) - \lambda^T (A\mathbf{1} - \mathbf{1}) - \mu^T (A^T \mathbf{1} - \mathbf{1})$

So the symmetry constraint is not yet involved. Now they state that from symmetry $A = A^T$ it follows that $\lambda = \mu$, so they get

$\mathcal{L}(A, \lambda_i) = (something) - \lambda^T (A\mathbf{1} - \mathbf{1}) - \lambda^T (A^T \mathbf{1} - \mathbf{1})$

Unfortunately, it is not clear to me why $\lambda = \mu$ follows.
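One thing I can at least verify numerically: for symmetric $A$, the residuals $A\mathbf{1} - \mathbf{1}$ and $A^T\mathbf{1} - \mathbf{1}$ are the same vector, so the two multiplier terms collapse and the Lagrangian only depends on the sum $\lambda + \mu$. Here is a quick sanity check with random data (the matrix is symmetric but otherwise arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

B = rng.standard_normal((n, n))
A = (B + B.T) / 2                 # symmetric, not necessarily feasible
one = np.ones(n)
lam = rng.standard_normal(n)
mu = rng.standard_normal(n)

r = A @ one - one                 # for symmetric A this equals A.T @ one - one
term_split = lam @ r + mu @ (A.T @ one - one)
term_merged = (lam + mu) @ r

assert np.allclose(term_split, term_merged)
```

So on the symmetric set only $\lambda + \mu$ seems to matter, which would make the choice $\lambda = \mu$ a harmless normalization, but I still do not see the formal argument for it.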

C) Are there other ways? Can one, for example, just ignore the symmetry constraint in the Lagrangian and bring the symmetry back in at some later point?
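One variant of C) that I have tried is to eliminate the symmetry constraint entirely by optimizing over the upper-triangular entries only and rebuilding $A$ symmetrically from them, so that only the constraint $A\mathbf{1} = \mathbf{1}$ remains. A minimal sketch (the objective $f(A) = \|A - C\|_F^2$ for some target matrix $C$ is just a made-up placeholder, and `scipy`'s SLSQP solver is one arbitrary choice):

```python
import numpy as np
from scipy.optimize import minimize

n = 3
iu = np.triu_indices(n)           # indices of the upper triangle (incl. diagonal)

def to_sym(x):
    """Rebuild a symmetric matrix from its upper-triangular entries."""
    A = np.zeros((n, n))
    A[iu] = x
    return A + A.T - np.diag(np.diag(A))

# Placeholder objective: squared Frobenius distance to a random target C.
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n))

def f(x):
    return np.sum((to_sym(x) - C) ** 2)

# Only the row-sum constraint remains; symmetry is built into the parameterization.
cons = {"type": "eq", "fun": lambda x: to_sym(x) @ np.ones(n) - np.ones(n)}

x0 = np.eye(n)[iu]                # the identity is feasible, so use it as start
res = minimize(f, x0, constraints=[cons], method="SLSQP")
A_opt = to_sym(res.x)
```

By construction `A_opt` is exactly symmetric, and the solver enforces the row sums; but this sidesteps the Lagrangian question rather than answering it, which is why I am asking about the reasoning in B).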


So, to clarify my question: Could somebody please enlighten me about the reasoning in B) and/or explain other useful strategies to respect the symmetric matrix constraint?