How do you maximize a polynomial over an integer domain?


I am working on maximizing the polynomial $f:\mathbb{R}^N \rightarrow \mathbb{R}$ $$f(v):=\prod_{i=1}^{N}( v_i+\alpha_i^2)$$ over integer $n$-partitions of $P$, $n\leq N$: $$\left\{v \left|\, v_i \in \{0,1,2,\dots, N\} \ ; \sum_i v_i = P \right. \right\}$$

This can be done by brute force in practice, but that quickly becomes intractable for even moderate $P$ and $N$.
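For reference, the brute-force baseline is easy to state. A minimal sketch in Python (my code, not from the question; `alpha2` holds the values $\alpha_i^2$), which enumerates all $(N+1)^N$ candidate vectors and is only usable for tiny instances:

```python
from itertools import product

def brute_force_max(alpha2, P):
    """Enumerate all v with v_i in {0,...,N} and sum(v) == P,
    returning a maximizer of prod(v_i + alpha2_i) and its value."""
    N = len(alpha2)
    best_v, best_val = None, float("-inf")
    for v in product(range(N + 1), repeat=N):
        if sum(v) != P:
            continue
        val = 1.0
        for vi, a2 in zip(v, alpha2):
            val *= vi + a2
        if val > best_val:
            best_val, best_v = val, v
    return best_v, best_val
```

For example, with $\alpha^2 = (1,1,1,4)$ and $P=2$ the optimum value is $16$, attained e.g. by $v=(0,1,1,0)$ (the maximizer is not unique: $(1,1,0,0)$ also gives $16$).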

An implicit solution for the continuous relaxation is given in Lemma II of this paper: http://ee.ucd.ie/~mark/papers/ISSC_Water_Pouring.pdf

If you find the continuous solution and round it, the error appears to be bounded by roughly $(0.5+A)^N$, where $A$ is the largest of the $\alpha_i^2$ corresponding to a nonzero weight. (I haven't checked this closely; it came out of some rough algebra.) This is a very loose bound. Is it possible to do better in general?
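The continuous relaxation itself is standard water-filling and is easy to compute exactly before rounding. A minimal sketch in Python (my code; it assumes $v \ge 0$ and ignores the upper bound $v_i \le N$): the level $c$ is chosen so that $v_i = \max(0,\, c - \alpha_i^2)$ sums to $P$. Note that naive rounding can even violate $\sum_i v_i = P$: for $\alpha^2=(1,1,1,4)$, $P=2$, the continuous optimum is $(2/3, 2/3, 2/3, 0)$, which rounds to a vector summing to $3$, so the rounding step itself needs care.

```python
def water_fill(alpha2, P):
    """Continuous maximizer of prod(v_i + alpha2_i) subject to
    sum(v) == P, v >= 0 (upper bound v_i <= N ignored).
    Water-filling: v_i = max(0, c - alpha2_i), level c set by the sum."""
    N = len(alpha2)
    idx = sorted(range(N), key=lambda i: alpha2[i])  # ascending alpha^2
    v = [0.0] * N
    # Shrink the active set until the level clears every active alpha^2
    for k in range(N, 0, -1):
        active = idx[:k]
        c = (P + sum(alpha2[i] for i in active)) / k
        if c >= alpha2[active[-1]]:
            for i in active:
                v[i] = c - alpha2[i]
            return v
    return v
```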

Can anyone suggest any reading on the topic?


BEST ANSWER

If $N$ and $P$ are not too large, and @Macavity is correct that the optimum minimizes the largest deviation among the values $v_i+\alpha_i^2$, the problem appears to be rather easily solved using mixed-integer linear programming. However, I do not always get the correct solution.

I've developed a test below in the MATLAB toolbox YALMIP (disclaimer: developed by me), using Gurobi to solve the MILPs.

To check whether the solution is correct (for very small $N$), I use a much more expensive approach and create a binary expansion of each integer, i.e., $v_1 = b_{01} + 2b_{11}+4b_{21}+\dots$ With that, we can exploit the fact that a product of expressions involving binary variables can be linearized by introducing a large number of new variables and constraints (which is why I limit this to $N=4$). Hence, a MILP is obtained again. This linearization is available in YALMIP via the command binmodel. The code runs for a while and then terminates with a counterexample, typically for $P=2$:

ok = 1
while ok
    % Random data for small instance
    N = 4;
    P = ceil(rand(1)*N^2);
    alpha = ceil(rand(1,N)*sqrt(N));

    % Create a binary expansion
    d = 1+floor(log2(N));
    b = binvar(d,N,'full');
    v = (2.^(0:1:d-1))*b;

    % Solve by binary linearization. Expensive!
    Model = [sum(v)==P, v <= N];
    [Objective,cut] = binmodel(prod((alpha.^2+v)));
    solvesdp([cut,Model],-Objective);
    trueobj = double(Objective);
    v1 = double(v);

    % Solve by integer model and simple objective
    v = intvar(1,N);
    sdpvar smallest largest
    Model = [sum(v)==P,0 <= v <= N, smallest <= alpha.^2+v <= largest]
    solvesdp(Model, largest-smallest)      
    objalternative = prod(alpha.^2+double(v));
    v2 = double(v)

    % Check for failure
    ok = abs(trueobj-objalternative)<=1e-8
    if ~ok                
        trueobj
        v1
        v1+alpha.^2
        objalternative
        v2
        v2+alpha.^2 
        break
    end     
end
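The binary-product linearization that binmodel performs above can be illustrated without a solver. A sketch in Python (`linearized_products` is my own name, not a YALMIP function): each pairwise product $z = xy$ of binaries is replaced by the linear constraints $z \le x$, $z \le y$, $z \ge x+y-1$, which pin $z$ to exactly $xy$ over integer $z \in \{0,1\}$.

```python
from itertools import product

def linearized_products(x, y):
    """Integer z in {0,1} satisfying the linear constraints that
    replace the binary product z = x*y in the linearized MILP."""
    return [z for z in (0, 1)
            if z <= x and z <= y and z >= x + y - 1]

# Enumeration confirms the constraints admit only z = x*y
for x, y in product((0, 1), repeat=2):
    assert linearized_products(x, y) == [x * y]
```

Chaining this pairwise trick over a product of $N$ factors is what blows up the variable count and limits the check to small $N$.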

Hopefully I've missed something simple, since the MILP model minimizing the deviation of $v_i+\alpha_i^2$ is really fast and terminates in under a second for $N=200$. The known-correct method with the binary expansion is intractable beyond roughly $N=5$. One obvious problem is that the distance measure leads to non-unique solutions here.

Here is one of the problematic cases:

P              = 2
alpha^2        = 1 1 1 4
trueobj        = 16
v1             = 0 1 1 0
v1+alpha^2     = 1 2 2 4
objalternative = 12
v2             = 2 0 0 0
v2+alpha^2     = 3 1 1 4
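Since maximizing the product is the same as maximizing the separable concave function $\sum_i \log(v_i+\alpha_i^2)$, the integer problem can in fact be solved exactly by greedy marginal allocation: repeatedly give one unit to the coordinate with the largest marginal gain $\log(v_i+1+\alpha_i^2)-\log(v_i+\alpha_i^2)$; greedy is optimal here because the increments of $\log$ are decreasing. A sketch in Python (my code, not from the answer; it assumes $P \le N^2$ so the allocation is feasible), which recovers the true objective $16$ on the case above:

```python
import heapq
from math import log

def greedy_max(alpha2, P):
    """Exact maximizer of prod(v_i + alpha2_i) over integers
    0 <= v_i <= N with sum(v) == P, by greedy marginal allocation.
    Valid because log(v + a) has decreasing increments in v."""
    N = len(alpha2)
    v = [0] * N
    # Max-heap (negated gains) of the marginal gain of one more unit
    heap = [(-(log(1 + a) - log(a)), i) for i, a in enumerate(alpha2)]
    heapq.heapify(heap)
    for _ in range(P):
        gain, i = heapq.heappop(heap)
        v[i] += 1
        if v[i] < N:  # respect the upper bound v_i <= N
            a = alpha2[i]
            heapq.heappush(heap, (-(log(v[i] + 1 + a) - log(v[i] + a)), i))
    val = 1.0
    for vi, a in zip(v, alpha2):
        val *= vi + a
    return v, val
```

This runs in $O(P \log N)$ time after the initial heapify, so large $N$ and $P$ are unproblematic; it also sidesteps the non-uniqueness issue of the min-deviation objective, since it optimizes the product directly.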
ANSWER

Using Lagrange multipliers on the continuous relaxation, I get:

$$\forall i \quad v_i=\frac{P+\sum_j \alpha_j^2}{N}- \alpha_i^2.$$

What do you think?
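For completeness, here is the derivation sketched (ignoring the constraints $v_i \ge 0$ and $v_i \le N$): maximize $\log f(v)=\sum_j \log(v_j+\alpha_j^2)$ subject to $\sum_j v_j = P$. Stationarity of the Lagrangian gives

$$\frac{\partial}{\partial v_i}\left[\sum_{j}\log(v_j+\alpha_j^2)-\lambda\Big(\sum_j v_j-P\Big)\right]=\frac{1}{v_i+\alpha_i^2}-\lambda=0 \quad\Longrightarrow\quad v_i+\alpha_i^2=\frac{1}{\lambda},$$

and summing over $i$ yields $N/\lambda = P+\sum_j\alpha_j^2$, which recovers the formula. Note, however, that this can produce negative $v_i$ when some $\alpha_i^2$ is large; those coordinates must then be clamped to zero and the level recomputed, which is exactly the water-filling solution of the linked paper.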