I want to numerically solve the over-determined system of equations $$ f_l(\mathbf x) = \displaystyle \left \lvert \sum_{k=1}^Kx_k^2e^{-j\frac{2\pi}Np_kl} \right \rvert , \qquad P = \{p_1,p_2,\cdots,p_K\} \subset\{1,2,\cdots,N\} $$ $$ f_1(\mathbf x) = f_2(\mathbf x) = \cdots=f_{N-1}(\mathbf x). $$ My idea is to minimize the function $$ h(\mathbf x) = \sum_{l=1}^{N-2}\left \lvert f_l(\mathbf x)-f_{l+1}(\mathbf x)\right \rvert^2. $$ Which optimization method is most suitable for this function? I cannot picture how the extrema of $h(\mathbf x)$ are distributed, which makes it hard to choose a method. Do you have any idea?
I've plotted $h$ with respect to $x_1$ and $x_2$ for the case $\mathbf x \in \mathbb R^4$, holding $x_3$ and $x_4$ constant. Here are plots from different views:
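For concreteness, here is a small numpy sketch of $f_l$ and $h$; the sizes $N = 8$ and $P = \{1, 3, 4\}$ are arbitrary choices of mine, not from the problem:

```python
import numpy as np

N = 8
P = np.array([1, 3, 4])   # example index set p_1..p_K (my choice)
K = len(P)

def f(x):
    """Return the vector (f_1(x), ..., f_{N-1}(x))."""
    l = np.arange(1, N)                                 # l = 1..N-1
    # sum_k x_k^2 * exp(-j*2*pi/N * p_k * l), one column per l
    phases = np.exp(-2j * np.pi / N * np.outer(P, l))   # shape (K, N-1)
    return np.abs((x**2) @ phases)

def h(x):
    """h(x) = sum_l |f_l(x) - f_{l+1}(x)|^2."""
    fl = f(x)
    return np.sum(np.abs(np.diff(fl))**2)

x = np.ones(K)        # an example point
print(h(x))
```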

First, notice that since the $f$s are nonnegative, $$f_1(x) = f_2(x) = \ldots = f_{N-1}(x)$$ is equivalent to $$f_1(x)^2 = f_2(x)^2 = \ldots = f_{N-1}(x)^2.$$ This second system of equations is nicer to deal with since each term is a quartic polynomial (quadratic in $y_k = x_k^2$).
Your approach might work: try minimizing the function $\tilde{h}(y) = \sum_i (f_{i+1}(y)^2 - f_{i}(y)^2)^2$ using e.g. Newton's method (the Jacobian of the residuals and the Hessian of $\tilde{h}$ are straightforward to compute), and hope you find a global minimum (one with zero residual) rather than a merely local one.
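A minimal numpy sketch of this idea, using a damped Gauss-Newton (Levenberg–Marquardt-style) iteration on the residuals $r_l = f_{l+1}^2 - f_l^2$; the sizes $N = 8$, the set $P = \{1, 3, 4\}$, and the finite-difference Jacobian are my own illustrative choices:

```python
import numpy as np

N = 8
P = np.array([1, 3, 4])   # example index set (my choice)

def f_sq(x):
    """(f_1(x)^2, ..., f_{N-1}(x)^2) -- squaring removes the |.| kink."""
    l = np.arange(1, N)
    phases = np.exp(-2j * np.pi / N * np.outer(P, l))
    return np.abs((x**2) @ phases)**2

def residuals(x):
    # r_l = f_{l+1}^2 - f_l^2; a zero-residual point solves the system
    return np.diff(f_sq(x))

def jacobian(x, eps=1e-7):
    """Forward-difference Jacobian of the residual vector."""
    r0 = residuals(x)
    J = np.empty((len(r0), len(x)))
    for k in range(len(x)):
        xp = x.copy()
        xp[k] += eps
        J[:, k] = (residuals(xp) - r0) / eps
    return J

def gauss_newton(x, iters=100, mu=1e-3):
    """Damped Gauss-Newton; only cost-decreasing steps are accepted."""
    cost = lambda z: np.sum(residuals(z)**2)
    for _ in range(iters):
        r, J = residuals(x), jacobian(x)
        step = np.linalg.solve(J.T @ J + mu * np.eye(len(x)), J.T @ r)
        if cost(x - step) < cost(x):
            x, mu = x - step, mu / 3   # accept step, trust the model more
        else:
            mu *= 3                    # reject step, increase damping
    return x

x = gauss_newton(np.array([1.0, 0.5, -0.3]))
print(np.sum(residuals(x)**2))   # small only if a genuine root was found
```

A near-zero final residual indicates a solution of the original system; otherwise, restart from several random initial points, since only a global minimum of the surrogate corresponds to a solution.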
Alternatively, you can try a specialized software package for solving systems of polynomial equations, for instance PHCpack.