How to show this algorithm on positive semidefinite matrices converges to a global maximum determinant


I'm dealing with an algorithm which is supposed to converge to the maximum determinant of certain positive semidefinite matrices.

The problem is that we have such a matrix, and we vary certain entries within it. We have an algorithm that maximizes the determinant in each coordinate sequentially. Given a starting matrix $A_0$, this generates a sequence $\{A_i\}_{i=0}^\infty$ of positive semidefinite matrices with increasing determinants, which converges to something. I want to show that something is the maximum-determinant positive semidefinite completion.

The set of positive semidefinite matrices obtained by varying these coordinates is compact and convex, and I've already shown our maximum is unique. But suppose our sequence converges to a completion $A_\infty$ for which $\det(A_\infty) < \det(A_{max})$. How do I show this is a contradiction?

BEST ANSWER

I do not think it is possible to fully answer your question without more information about the algorithm and the particular set of "certain" positive semidefinite matrices you are considering. In other words, what are the constraints on $A$ other than semidefiniteness? Still, there are several things that can be said.

First: your solution $A_\infty$ is positive definite, not merely positive semidefinite. If it were singular, its determinant would be zero, and a sequence with strictly increasing nonnegative determinants cannot converge to a matrix of determinant zero.
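To spell that out (assuming the determinants increase strictly), continuity of the determinant gives

$$\det(A_\infty) \;=\; \lim_{i\to\infty}\det(A_i) \;\ge\; \det(A_1) \;>\; \det(A_0) \;\ge\; 0,$$

and a positive semidefinite matrix with positive determinant is positive definite.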

Second: both $f_1(A)=\det(A)^{1/n}$ and $f_2(A)=\log\det(A)$ are smooth, concave functions of the entries of $A$ on the open set of positive definite matrices. If the set you are considering is convex, maximizing the determinant is therefore a convex optimization problem, and we know convergent algorithms must exist. Indeed, determinant maximization is a very common convex optimization paradigm. (Generally people work with $f_2$ rather than $f_1$, but in theory you could maximize either.)
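Here is a quick numerical sanity check of the concavity of $\log\det$ along convex combinations of positive definite matrices. The matrices are randomly generated for illustration; `logdet` is computed via a Cholesky factorization, which only succeeds on positive definite input.

```python
import numpy as np

rng = np.random.default_rng(0)

def logdet(A):
    # log-determinant via Cholesky; raises LinAlgError unless A is positive definite
    return 2.0 * np.log(np.diag(np.linalg.cholesky(A))).sum()

def random_pd(n):
    # M @ M.T is positive semidefinite; adding I makes it positive definite
    M = rng.standard_normal((n, n))
    return M @ M.T + np.eye(n)

n = 5
A, B = random_pd(n), random_pd(n)
for lam in np.linspace(0.1, 0.9, 9):
    C = lam * A + (1 - lam) * B          # a convex combination of PD matrices is PD
    lhs = logdet(C)
    rhs = lam * logdet(A) + (1 - lam) * logdet(B)
    # concavity: f(lam*A + (1-lam)*B) >= lam*f(A) + (1-lam)*f(B)
    assert lhs >= rhs - 1e-9
print("log-det concavity holds at 9 sample points")
```

Of course a finite check proves nothing; the concavity of $\log\det$ on the positive definite cone is a classical theorem.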

It sounds like you're doing coordinate ascent. I do not believe that coordinate ascent has been shown to converge globally for arbitrary functions, but I'll bet that for the specific case of a smooth, strictly concave objective on a convex set, there are some useful results in the literature.
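For concreteness, here is a minimal sketch of such a coordinate-ascent completion on a hypothetical 4x4 banded pattern; the fixed values and the `free` positions are made up for illustration, and this is not your algorithm, just one way the scheme can look. A convenient fact makes each coordinate step exact: as a function of a single symmetric off-diagonal entry $x$, $\det(A)$ is a quadratic $ax^2+bx+c$ with $a<0$ while $A$ stays positive definite, so three evaluations determine the coordinate maximizer in closed form.

```python
import numpy as np

# Hypothetical completion problem: diagonal and banded entries are fixed,
# the entries at `free` positions (plus symmetric partners) are unknown.
A = np.array([[4., 1., 0., 0.],
              [1., 3., 1., 0.],
              [0., 1., 3., 1.],
              [0., 0., 1., 4.]])   # the all-zeros completion, PD for these values
free = [(0, 2), (0, 3), (1, 3)]

def det_at(A, pos, x):
    B = A.copy()
    i, j = pos
    B[i, j] = B[j, i] = x
    return np.linalg.det(B)

def coordinate_step(A, pos):
    # det(A) is quadratic a*x^2 + b*x + c in the entry at `pos`;
    # fit it from three evaluations and jump to the exact maximizer.
    d0, d1, dm = det_at(A, pos, 0.), det_at(A, pos, 1.), det_at(A, pos, -1.)
    a, b = (d1 + dm) / 2 - d0, (d1 - dm) / 2
    x_star = -b / (2 * a)            # a < 0 as long as A is positive definite
    i, j = pos
    A[i, j] = A[j, i] = x_star

for sweep in range(60):              # coordinate-ascent sweeps
    for pos in free:
        coordinate_step(A, pos)

# Sanity check against the classical characterization: the maximum-determinant
# completion has (A^{-1})_{ij} = 0 at every free position.
inv = np.linalg.inv(A)
print([round(abs(inv[i, j]), 8) for i, j in free])
```

The final check uses the known characterization of the maximum-determinant positive definite completion (its inverse vanishes at the unspecified positions), which gives an independent certificate that the sweeps converged to the right point.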

EDIT: Based on the comments below, you know a certain fraction of the entries of $A$. This is just a positive semidefinite matrix completion problem, a well-known problem in the convex optimization space. Search for "determinant maximization" and "positive definite matrix completion" online, and look in particular at work by Stephen Boyd, Lieven Vandenberghe, and their coauthors. Their MAXDET software is a bit out of date, though; I'd say SDPT3 is a good choice of code for solving maxdet problems of small to medium size. Coordinate ascent seems awfully inefficient for this kind of problem, but I assume you've found a way to compute the coordinate updates cheaply.