Does simplex assumption make Exponentiated Gradient Method applicable only to a specific optimization problem?


According to page 1 of this lecture here, and page 12 of here, I see that there is always an assumption of a simplex feasible region when using the Exponentiated Gradient (EG) method to solve a convex optimization problem. This makes me think that EG is only applicable to optimization problems whose constraints are of the simplex form $\textbf{1}^T\textbf{x} = 1$, $\textbf{x} \geq \textbf{0}$. Does anyone have an idea of how to use EG in a more general setting, i.e. a convex objective function with $\textbf{A}\textbf{x} \leq \textbf{b}$ constraints?

In fact, it seems to me that this assumption serves only to place the first iterate at the "center" of the feasible region, i.e. $\textbf{x}_1 = (1/n, 1/n, \ldots, 1/n)$, where $n$ is the dimension of the variable $\textbf{x}$. But that's just my guess (which is rarely correct).


These are actually two different questions.

To answer your first question: the algorithm itself only produces iterates in the simplex, because its multiplicative update followed by normalization can never leave it. Given that, the center of the simplex is a good initial guess, in my opinion.
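To make this concrete, here is a minimal sketch of the standard EG update $x_{t+1,i} \propto x_{t,i}\exp(-\eta\, \nabla f(x_t)_i)$ on a toy problem. The objective $f(x) = \tfrac{1}{2}\|x - p\|^2$, the target $p$, and the step size $\eta$ are all illustrative choices of mine, not from the lectures; the point is only that every iterate stays on the probability simplex.

```python
import numpy as np

def eg_step(x, grad, eta):
    """One Exponentiated Gradient step: multiplicative update, then
    renormalize so the iterate stays on the probability simplex."""
    w = x * np.exp(-eta * grad)
    return w / w.sum()

# Toy convex objective f(x) = 0.5 * ||x - p||^2 with gradient x - p.
# p lies in the simplex, so it is also the constrained minimizer.
p = np.array([0.7, 0.2, 0.1])

n = p.size
x = np.full(n, 1.0 / n)          # start at the "center" of the simplex
for _ in range(500):
    x = eg_step(x, grad=x - p, eta=0.5)

print(x.sum())                    # stays equal to 1 (up to float error)
print(x)                          # approaches the minimizer p
```

Note that no explicit projection is ever needed: positivity is preserved because `exp` is positive, and the normalization enforces $\textbf{1}^T\textbf{x} = 1$.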

The algorithm can probably be generalized to other decision spaces, but doing so would depend heavily on the properties of the matrix $A$. It is non-trivial, and unfortunately I do not have a reference for it.