Methods to translate global constraints to local constraints


Are there any general methods for (global) optimisation that can translate a global optimisation problem into a "local" one?

Or, in other words, translate global constraints into local constraints.

To elaborate further: translate global constraints into constraints which can be computed locally at each step of an optimisation process.

A complete such methodology would of course have implications for intractable optimisation problems, but in any case, is there any research, or are there any results, in this area?

A simple approach would be to express the global constraints in a functional form which has unique values (and extrema) when expressed in a "local coordinates" space (or something along those lines).

To give a (slightly ambitious) example just for clarification purposes, think of a TSP instance where the global optimum can be found using "greedy" (or, more correctly, "local") methods, provided the global constraints can be expressed in a form that can be computed using only local information.
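To make the local/global gap concrete, here is a small sketch on a hypothetical 4-city instance (the coordinates are mine, chosen only for illustration): nearest-neighbour greedy construction, which at each step sees only distances from the current city, compared against brute-force enumeration, which uses the full (global) tour cost.

```python
import itertools
import math

# Hypothetical 4-city instance, chosen only for illustration.
cities = [(0, 0), (1, 0), (2.1, 0), (1, 1)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    # Length of the closed tour visiting cities in the given order.
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def greedy_tour(start=0):
    # Nearest-neighbour: each step uses only local information
    # (distances from the current city), never the global tour cost.
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(cities[tour[-1]], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def optimal_tour():
    # Brute force over all permutations: uses global information.
    return min(itertools.permutations(range(len(cities))), key=tour_length)

print(tour_length(greedy_tour()), tour_length(optimal_tour()))
```

On this instance the greedy tour comes out strictly longer than the optimum; the point of the question is precisely whether the global constraints could be re-expressed so that a purely local rule like this one would nonetheless reach the global optimum.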

Some results and papers in this direction (from a rough search):

  1. Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds (Journal of Machine Learning Research 4 (2003))
  2. Global optimization using local information with applications to flow control, Bartal, Y.
  3. Why Natural Gradient?, Amari S., Douglas S.C.
  4. Local optimization of global objectives: competitive distributed deadlock resolution and resource allocation, Awerbuch, Baruch, Azar, Y.
Accepted answer:

I will provide an answer to my own question.

Apart from the references already mentioned in the question, a couple more references in the same direction:

  1. Learning with Local and Global Consistency
  2. Constraint Satisfaction Problems Solvable by Local Consistency Methods

There are a couple of references which tackle the problem of translating global evaluation functions (or constraints) into local ones (using local information) and their consistency (i.e. convergence to the same global optimum):

  1. Local Evaluation Functions and Global Evaluation Functions for Computational Evolution, Han Jing, 2003
  2. Emergence from Local Evaluation Function, Han Jing and Cai Qingsheng, 2002

Abstract (from 1. above)

This paper presents a new look on computational evolution from the aspect of the locality and globality of evaluation functions for solving classical combinatorial problems: the k-Coloring Problem (decision problem) and the Minimum Coloring Problem (optimization problem). We first review current algorithms and model the coloring problem as a multi-agent system. We then show that the essential difference between traditional algorithms (Local Search, such as Simulated Annealing) and distributed algorithms (such as the Alife&AER model) lies in the evaluation function: Simulated Annealing uses global information to evaluate the whole system state, which is called the Global Evaluation Function (GEF) method; the Alife&AER model uses local information to evaluate the state of a single agent, which is called the Local Evaluation Function (LEF) method. We compare the performances of LEF and GEF methods for solving the k-Coloring Problems and the Minimum Coloring Problems. The computer experimental results show that the LEF is comparable to GEF methods (Simulated Annealing and Greedy), in many problem instances the LEF beats GEF methods. At the same time, we analyze the relationship between GEF and LEF: consistency and inconsistency. The Consistency Theorem shows that Nash Equilibria of an LEF is identical to local optima of a GEF when the LEF is consistent with the GEF. This theorem partly explains why the LEF can lead the system to a global goal. Some rules for constructing a consistent LEF are proposed. In addition to consistency, we give a picture of the LEF and GEF and show their differences in exploration heuristics.

Specifically, the paper addresses methods to determine whether a local evaluation function (LEF) is consistent with a global evaluation function (GEF), and methods to construct consistent LEFs from given GEFs (the Consistency Theorem).
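A minimal sketch of the LEF/GEF distinction for graph colouring, under an assumed natural choice of evaluation functions (in the spirit of the paper's setup, not a reproduction of it): the GEF counts all conflicting edges, while the LEF of a vertex counts only the conflicts on its own incident edges. Since each conflicting edge is counted by both of its endpoints, the LEFs sum to exactly twice the GEF, so any move that lowers a vertex's LEF also lowers the GEF; this is a simple instance of the consistency idea.

```python
import random

# Toy 3-colouring instance (hypothetical graph), for illustration only.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n, k = 4, 3

def gef(coloring):
    # Global evaluation: total number of conflicting edges.
    return sum(coloring[u] == coloring[v] for u, v in edges)

def lef(coloring, v):
    # Local evaluation: conflicts on edges incident to vertex v alone.
    return sum(coloring[u] == coloring[w]
               for u, w in edges if v in (u, w))

random.seed(0)
coloring = [random.randrange(k) for _ in range(n)]

# Consistency identity: every conflicting edge is counted by both
# endpoints, so the LEFs always sum to 2 * GEF.
assert sum(lef(coloring, v) for v in range(n)) == 2 * gef(coloring)

# Local (per-agent) improvement that only ever consults the LEF.
# Changing v's colour only affects v's incident edges, so each
# accepted move lowers the GEF by the same amount as v's LEF.
improved = True
while improved:
    improved = False
    for v in range(n):
        best = min(range(k),
                   key=lambda c: lef(coloring[:v] + [c] + coloring[v + 1:], v))
        if lef(coloring[:v] + [best] + coloring[v + 1:], v) < lef(coloring, v):
            coloring[v] = best
            improved = True

print(coloring, gef(coloring))
```

At termination no vertex can improve its own LEF (a Nash equilibrium of the LEF), and by the identity above the colouring is then also a local optimum of the GEF, which is the content of the Consistency Theorem in this toy setting.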

Excerpt from Conclusion section (from 1. above)

This paper is just the beginning of LEF&GEF studies. In addition to the research report above, there is still a lot of future work: more experiments on LEF methods; analytical study on LEF; sufficiency of local information for LEF; and the existence of a consistent GEF for any LEF; Is the consistency concept sufficient? Since Genetic Algorithms also have an evaluation function (fitness function), can we apply LEF&GEF to Genetic Algorithms? … It is our intention to study and attempt to answer all of these questions

Another answer:

A greedy approach to the TSP would correspond to a standard local method in continuous optimization: just go wherever is best at the moment, and don't care about anything global. Global optimization is about two things: finding the global optimum (which can sometimes be done using gradient search, greedy methods, etc.) and certifying that you have found the global optimum (this is the hard part, and it is not possible using only local derivatives or a greedy rule). There is no quick, standard fix for the second problem.
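The two parts can be seen in a tiny sketch (the objective is mine, chosen only to have two basins): plain gradient descent on a double-well function. Each run converges to a stationary point, but nothing computed locally along a trajectory distinguishes the run that landed in the global basin from the one that did not.

```python
def f(x):
    # Double-well objective with two local minima, only one global.
    return (x**2 - 1)**2 + 0.3 * x

def grad(x):
    return 4 * x * (x**2 - 1) + 0.3

def gradient_descent(x, lr=0.01, steps=5000):
    # Purely local method: at each step it only sees f'(x).
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Both runs end at a point where the gradient vanishes, but only the
# second finds the global minimum; the local information available to
# the first run cannot certify that a better basin exists elsewhere.
a = gradient_descent(2.0)   # converges near x ~ +0.96 (local minimum)
b = gradient_descent(-2.0)  # converges near x ~ -1.04 (global minimum)
print(a, f(a))
print(b, f(b))
```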

However, as an exception to the rule, nonconvex optimization over posynomials can be rewritten as a convex program by performing a coordinate change using logarithms (this is called geometric programming), and can then easily be solved to global optimality.
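A minimal sketch of the log-coordinate trick on a toy posynomial (this example is mine, not from the answer): f(x) = x + 1/x is a posynomial on x > 0, and under the substitution x = exp(y) it becomes g(y) = exp(y) + exp(-y), which is convex everywhere, so even a purely local method is guaranteed to find the global optimum.

```python
import math

# Posynomial f(x) = x + 1/x on x > 0; after x = exp(y) it becomes
# g(y) = exp(y) + exp(-y), which is convex in y.

def g(y):
    return math.exp(y) + math.exp(-y)

def g_prime(y):
    return math.exp(y) - math.exp(-y)

def minimize(y, lr=0.1, steps=2000):
    # Plain gradient descent suffices: convexity of g guarantees the
    # stationary point it finds is the global optimum.  The starting
    # point is kept moderate so the fixed step size remains stable.
    for _ in range(steps):
        y -= lr * g_prime(y)
    return y

y_star = minimize(1.0)
x_star = math.exp(y_star)
print(x_star, x_star + 1 / x_star)  # global minimum at x = 1, value 2
```

In full geometric programming the same idea applies term by term: each monomial becomes an exponential of an affine function of the log-variables, and the log of a sum of such exponentials is convex.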