I am trying to understand how to minimize the given function.
In the paper [1] it says that ${{f}_{i}}\left( x \right)$ should be minimized subject to:
$\left\| x-{{x}_{j}} \right\|\ge \beta \text{ }{{\Delta }_{n}}$
$j=1,...,k+i-1$
$x\in D$
${{\Delta }_{n}}=\underset{x\in D}{\mathop{\max }}\,\underset{1\le j\le n}{\mathop{\min }}\,\left\| x-{{x}_{j}} \right\|$
I do understand, that:
- $x$ is the value that we are trying to find
- $j$ indexes the known points ($k$ initial points, plus the later calculated points)
Now my problems start:
- How should I understand the $\underset{x\in D}{\mathop{\max }}\,\underset{1\le j\le n}{\mathop{\min }}\,$ part? Doesn't $\left\| x-{{x}_{j}} \right\|$ return a scalar for each $j$, so after the $\min$ I only have one value?
Should $\underset{1\le j\le n}{\mathop{\min }}\,$ find the $x$ which minimizes the distance between the new point and the old points? But then why do I need to maximize again, when I thought it was already minimized?
I hope this is sufficient information for you to understand the problem.
[1] Regis, Rommel G.; Shoemaker, Christine A., Constrained global optimization of expensive black box functions using radial basis functions, J. Glob. Optim. 31, No. 1, 153-171 (2005). ZBL1274.90511.
$\left\| x-{{x}_{j}} \right\|$ is a distance, and thus a scalar for every $j$. Fix $x$, and find the distance to the closest point among the known points $x_j$; that is the $\min$ part. Now let $x$ vary and try to move it to a position such that it is as far away as possible from the closest $x_j$. That's the $\max$ part. In other words, you want $x$ to be as far away as possible from its closest neighbor.
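To make the order of $\min$ and $\max$ concrete, here is a small Python sketch that approximates $\Delta_n$ by brute force over a grid. The known points, the domain $D=[0,1]^2$, and the grid resolution are illustrative assumptions, not values from the paper:

```python
import math

# Hypothetical known points x_j in D = [0, 1]^2 (illustrative values only).
known = [(0.1, 0.1), (0.9, 0.2), (0.5, 0.8)]

def dist_to_closest(x, points):
    """Inner min: distance from x to its nearest neighbor among `points`."""
    return min(math.dist(x, p) for p in points)

# Outer max over a coarse grid approximating D: choose the candidate x
# that is farthest from its closest known point.
grid = [(i / 20, j / 20) for i in range(21) for j in range(21)]
best_x = max(grid, key=lambda x: dist_to_closest(x, known))
delta = dist_to_closest(best_x, known)  # grid approximation of Delta_n
```

Note that for each fixed candidate $x$ the inner $\min$ collapses the $n$ distances to a single number, and only then does the outer $\max$ compare candidates; a real implementation would use a continuous optimizer over $D$ rather than a fixed grid.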
You can alternatively formulate it as: maximize $\Delta$ subject to $\left\| x-{{x}_{j}} \right\| \geq \Delta ~\forall j$ (the factor $\beta$ is redundant here).