Complexity of SVP solvers

The complexity of SVP (Shortest Vector Problem) solvers for lattices is usually stated only as a function of the lattice dimension $n$; for example, one of the fastest sieve algorithms, based on nearest-neighbor searching, runs in time $2^{0.292n+o(n)}$.

However, the input to such an algorithm is essentially an arbitrary integer matrix. Shouldn't the complexity also depend on the bit length of these numbers?


Yes, the complexity will generally depend on the size of the inputs as well. However, the dependence on the size of the numbers is usually only logarithmic, e.g. for reading an entry or performing an addition or multiplication of two entries. So these entries have to be extremely large before they play a significant role in the overall time complexity, rather than being swallowed up by the $o(n)$ in the exponent.
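To see why the entry size is usually negligible, here is a crude back-of-envelope sketch in Python. The per-operation cost model (`n * entry_bits` bit operations per vector operation) is an assumption purely for illustration, not a precise statement about any particular sieve:

```python
import math

def sieve_time_estimate(n, entry_bits):
    """Return log2 of a rough bit-operation count for a 2^{0.292 n} sieve.

    Assumption for illustration: each of the 2^{0.292 n} vector operations
    costs about n * entry_bits bit operations (a crude polynomial factor).
    """
    exp_term = 0.292 * n                    # log2 of the number of vector ops
    arith_term = math.log2(n * entry_bits)  # log2 of the per-op arithmetic cost
    return exp_term + arith_term

# In dimension 400, the exponential term contributes 116.8 to the exponent.
# Going from 64-bit entries to 1024-bit entries (16x larger) adds only
# log2(16) = 4 to the exponent -- an additive term absorbed by the o(n).
print(sieve_time_estimate(400, 64))
print(sieve_time_estimate(400, 1024))
```

The point of the toy model: the entry size enters only inside a logarithm, so it would need to grow exponentially in $n$ before it competes with the $2^{0.292n}$ term.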

Note that when running algorithms like LLL or BKZ, where SVP sometimes needs to be solved in a projected sublattice, it can be nasty to work with exact representations of these projected lattices: formally, one would have to scale up all entries to make the projection an integer lattice again, but then the entries may blow up. Various papers have studied how much precision is needed for these algorithms to give essentially the same output when floating-point representations of these numbers are used instead of the exact values. See for instance this paper, and the fplll library, where the "fp" stands for floating point.
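To illustrate the blow-up from working with projections exactly, here is a minimal sketch using Python's exact rational arithmetic. The vectors are made up for the example; the point is that projecting an integer vector orthogonally to another introduces denominators, so rescaling back to an integer lattice multiplies the entries:

```python
from fractions import Fraction

def project_orthogonal(b, a):
    """Project b orthogonally to a, using exact rational arithmetic."""
    num = sum(x * y for x, y in zip(b, a))   # <b, a>
    den = sum(x * x for x in a)              # <a, a>
    c = Fraction(num, den)                   # exact Gram-Schmidt coefficient
    return [Fraction(x) - c * y for x, y in zip(b, a)]

# Two small integer basis vectors (arbitrary, for illustration).
b1 = [3, 1, 4]
b2 = [1, 5, 9]

# The projected vector is rational: here every entry has denominator 13.
p = project_orthogonal(b2, b1)
print(p)

# Clearing denominators gives an integer vector again, but every entry
# got multiplied by 13 -- and this compounds across successive projections.
print([x * 13 for x in p])
```

In an $n$-dimensional basis this rescaling happens at every level of the Gram-Schmidt process, which is why the exact integer representations of projected sublattices can grow so quickly, and why floating-point approximations are used in practice.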