Say I have to solve a large QUBO (quadratic unconstrained binary optimization) problem $$ \min_{x}\; x^\top Qx, $$ where $x\in\{0,1\}^N$ is a vector of binary variables and $Q\in\mathbb{R}^{N\times N}$ encodes the problem. In general this is an intractable task. Now suppose that somebody gives me the globally minimal value of this problem (that is, the smallest possible value of $x^\top Qx$), but not a minimizing argument $x_{\min}$. Does this knowledge somehow alleviate the task of finding an $x_{\min}$?
One strategy, whose effectiveness I am not sure about, is to randomly sample candidate solutions $x$: I can efficiently compare each sample against the known minimal value, which tells me how close I am to the global minimum. But I may simply be exploring a suboptimal valley that lies very far from the global minimum in the landscape.
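To make the sampling strategy concrete, here is a minimal sketch in plain Python. The matrix $Q$ is a small hypothetical instance invented for illustration, and the "oracle" value is obtained by brute force (feasible only because $N=4$); the point is just that knowing the optimal value lets a sampler certify a hit the moment it lands on one, while giving no guidance toward it otherwise.

```python
import random

# Hypothetical 4-variable QUBO matrix (illustrative values only).
Q = [[-3, 2, 0, 1],
     [2, -2, 1, 0],
     [0, 1, -4, 2],
     [1, 0, 2, -1]]
N = len(Q)

def qubo_value(x, Q):
    """Evaluate x^T Q x for a 0/1 vector x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Stand-in for the oracle: the optimal objective value.
# (Computed by brute force here, which only works for tiny N.)
opt_val = min(qubo_value([(m >> i) & 1 for i in range(N)], Q)
              for m in range(2 ** N))

# Random sampling: the known optimum lets us stop with a certificate of
# global optimality, but does not steer us out of suboptimal valleys.
random.seed(0)
best = None
for _ in range(1000):
    x = [random.randint(0, 1) for _ in range(N)]
    v = qubo_value(x, Q)
    if best is None or v < best:
        best = v
    if v == opt_val:
        break  # certified global optimum -- only possible because opt_val is known
```

Without the oracle value, the sampler would have no stopping criterion other than a time budget, and could never certify that `best` is globally optimal.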
Note that since the problem is not convex, there may be many $x_{\min}$'s. So by solving I mean obtaining at least one $x_{\min}$.
If it cannot help find a minimizing solution, can this knowledge at least speed up some sort of approximate search for close-to-optimal solutions?
A standard approach is to linearize the 0-1 quadratic programming problem by introducing variables
$y_{ij}=x_{i}x_{j}$
and then solving the QUBO as a 0-1 integer linear programming problem.
If all the variables are treated as 0-1 integer variables, the product constraint $y_{ij}=x_{i}x_{j}$ can easily be enforced by linear inequality constraints:
$2y_{ij} \leq x_{i}+x_{j}$
$y_{ij} \geq x_{i}+x_{j}-1$
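That these two inequalities (together with $y_{ij}$ being binary) pin down $y_{ij}=x_{i}x_{j}$ can be checked by exhaustive enumeration of the four cases. A quick sketch:

```python
from itertools import product

# Check that, over binary values, the constraints
#   2*y <= xi + xj   and   y >= xi + xj - 1
# leave exactly one feasible y, namely y = xi * xj.
for xi, xj in product((0, 1), repeat=2):
    feasible = [y for y in (0, 1)
                if 2 * y <= xi + xj and y >= xi + xj - 1]
    assert feasible == [xi * xj], (xi, xj, feasible)
```

The first inequality forces $y_{ij}=0$ unless both $x_i$ and $x_j$ are 1; the second forces $y_{ij}=1$ when they both are.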
If you know the optimal objective value, you can impose it as a linear equality constraint in the 0-1 integer linear program and in its LP relaxations. This additional constraint might dramatically reduce the solution time.
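Even without an LP solver at hand, the effect can be illustrated with a toy enumeration: a search that knows the optimal value can return with a certificate the moment it attains it, instead of scanning the whole space to prove optimality. The instance and oracle value below are the same illustrative stand-ins as above, not data from the question; a real branch-and-bound solver would additionally prune subtrees whose relaxation bound cannot reach the known optimum.

```python
# Sketch: enumeration over {0,1}^N that halts as soon as the known
# optimal objective is attained. This only demonstrates the early stop;
# a real solver would also prune using LP-relaxation bounds.

# Hypothetical small instance (illustrative values only).
Q = [[-3, 2, 0, 1],
     [2, -2, 1, 0],
     [0, 1, -4, 2],
     [1, 0, 2, -1]]
N = len(Q)
OPT = -7  # the optimal value an oracle handed us for this toy Q

def qubo_value(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(N) for j in range(N))

def find_minimizer():
    # Scan assignments; return the first one meeting OPT.
    for m in range(2 ** N):
        x = [(m >> i) & 1 for i in range(N)]
        if qubo_value(x) == OPT:
            return x  # certified optimal without examining the rest
    return None

x_min = find_minimizer()
```

Without `OPT`, the loop would have to visit all $2^N$ assignments (or carry its own bounding machinery) before it could declare any candidate optimal.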