Normalization in least-p'th minimax algorithm


In the book "Practical Optimization: Algorithms and Engineering Applications", the least-$p$th minimax algorithm (Alg. 8.1) is presented for approximating the minimax optimum:

$$\mathrm{Loss}_k(X) = E(X)\left(\sum_{i=1}^{N}\left(\frac{|e_i(X)|}{E(X)}\right)^p\right)^{1/p}$$

where $E(X) = \max_i |e_i(X)|$ for $i \in \{1, 2, \ldots, N\}$. So $E(X)$ is the current worst-case error of the optimizer at iteration $k$.
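To see the numerical effect of the normalization, here is a minimal sketch (the error values and the exponent $p$ are made up for illustration): for large $p$, evaluating $(\sum_i |e_i|^p)^{1/p}$ directly overflows a double, while each normalized ratio $|e_i|/E$ is at most 1, so the normalized sum stays in $[1, N]$ and the result stays close to $E(X)$.

```python
import math

p = 50
errors = [3e6, 7e6, 1e7]  # hypothetical residuals e_i(X)

# Direct (unnormalized) evaluation: 1e7**50 = 1e350 exceeds the
# double-precision range (~1.8e308), so this overflows.
try:
    direct = sum(abs(e) ** p for e in errors) ** (1 / p)
except OverflowError:
    direct = None

# Normalized form: each ratio |e_i|/E is <= 1, so every term is
# representable and the bracketed sum lies between 1 and N.
E = max(abs(e) for e in errors)
loss = E * sum((abs(e) / E) ** p for e in errors) ** (1 / p)

print(direct)  # None: the unnormalized sum overflowed
print(loss)    # finite, and only slightly above E = 1e7
```

Note also that as $p \to \infty$ the bracketed sum tends to 1, so the normalized loss tends to $E(X)$ itself, which is exactly the minimax quantity being approximated.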

My question is: why is there a normalization by the current $E(X)$ at all? The book mentions that they set the initial $E(X)$ to $10^{99}$. I'm guessing this addresses a numerical issue, but I can't understand why it's needed, or even helpful.