What is the definition of the parameter of a computational complexity?


Complexity classes like P, NP, and #P are defined by a time complexity bounded by a polynomial. But a polynomial in what? Call it $n$; then $n$ is defined as:

  • For integers, the number of digits (as in, for example, addition)

  • For fixed-point numbers, the number of digits of precision (as in, for example, the $\exp$ function)

  • For containers, the number of entries (as in, for example, sorting algorithms)

  • For square matrices, the dimension (as in, for example, matrix multiplication)

Is there a general definition of $n$ from which these cases all derive? I assumed it was the number of bits encoding the input/output, and asked this question, but it seems that's wrong.

Moreover, I've encoded all computable real numbers in Haskell like this:

data Computable = Exact Rational | Inexact (Word -> Integer)

The Inexact constructor and the function it wraps are defined as (using lambda calculus):

$$ \text{Inexact}^{-1}(x) = \lambda k. \lfloor10^k \times x \rceil $$

So Inexact wraps every Cauchy sequence whose limit is a computable number, with the sequence's entries multiplied by $10^k$. This makes the number of bits encoded by a Computable... well... infinite.
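To make the encoding concrete, here is a runnable sketch with one hypothetical inhabitant. The names `sqrtTwo` and `isqrt` are illustrations added here, not part of the original definition; the type itself is repeated so the block stands alone.

```haskell
-- The questioner's type, repeated so this sketch is self-contained.
data Computable = Exact Rational | Inexact (Word -> Integer)

-- Integer square root (floor) via Newton's method, using only Integer
-- arithmetic so no precision is lost.
isqrt :: Integer -> Integer
isqrt 0 = 0
isqrt n = go n
  where
    go x =
      let x' = (x + n `div` x) `div` 2
      in if x' < x then go x' else x

-- A hypothetical inhabitant: sqrt 2 as an Inexact value. Its k-th entry
-- is round(10^k * sqrt 2), matching the Inexact^{-1} equation above.
sqrtTwo :: Computable
sqrtTwo = Inexact approx
  where
    approx k =
      let m = 2 * 10 ^ (2 * k)   -- (10^k * sqrt 2)^2, computed exactly
          f = isqrt m            -- floor (10^k * sqrt 2)
      in if 4 * m >= (2 * f + 1) ^ 2 then f + 1 else f  -- round to nearest
```

Each entry of the sequence is finite, but the function as a whole determines arbitrarily many digits of $\sqrt 2$: the entries for $k = 0, 1, 2, 3$ are 1, 14, 141, 1414.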

So how is $n$ really defined?

Edit: A more elaborate restatement of the question: in the addition algorithm, $n$ is defined as the number of bits of the inputs, rather than the inputs themselves. I expected this to be the general definition, but the Computable type above clearly contradicts that idea.
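For concreteness, the parameter for integer addition can be computed explicitly. This is a minimal sketch; `bitLength` is a name chosen here for illustration.

```haskell
-- The "n" for schoolbook addition: the bit length of the input, not its
-- value. Adding two n-bit integers takes O(n) bit operations, even though
-- the value itself can be as large as 2^n - 1.
bitLength :: Integer -> Int
bitLength m
  | m <= 0    = 0
  | otherwise = 1 + bitLength (m `div` 2)
```

For example, `bitLength 255` is 8 and `bitLength 256` is 9, so adding numbers around $2^{256}$ costs on the order of 256 bit operations, not $2^{256}$.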


BEST ANSWER

Usually $n$ is the number of input elements, where an input element is atomic in the sense of the computational model, i.e. for which constant-time operations are available.

For general complexity evaluations, $n$ is defined freely. For instance, you might discuss the complexity of a sorting algorithm in terms of the initial number of inversions.
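As an illustration of such an alternative parameter (this example is added here, not part of the original answer): insertion sort performs $O(n + I)$ comparisons, where $I$ is the number of inversions, so on nearly-sorted input it beats its $O(n^2)$ worst case.

```haskell
-- Insertion sort: each element steps left past exactly the elements it is
-- inverted with, so the total number of comparisons is O(n + I), where I
-- is the number of inversions in the input list.
insertionSort :: Ord a => [a] -> [a]
insertionSort = foldr insertOne []
  where
    insertOne x []     = [x]
    insertOne x (y:ys)
      | x <= y         = x : y : ys
      | otherwise      = y : insertOne x ys
```

On an already-sorted list every `insertOne` stops after one comparison, giving $I = 0$ and linear total work.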

Note that the parameter(s) need not even be counts of something. For instance, you could evaluate the cost of a hashing strategy in terms of the probability of a collision.

ANOTHER ANSWER

As far as I know, both time and space complexity are measured as a function of the bit length of the input, at least if you use Turing machines as your model of computation. I don't see anything in the question you linked that contradicts this.

Regarding the computable numbers, I must admit that I don't know how to measure complexity in functional languages. However, neither your datatype nor the elements of this datatype can possibly occupy infinitely many bits, as otherwise your computer would explode. In fact, the second part of your definition seems a bit odd to me, and I doubt that you have covered every Cauchy sequence with it. Regardless, it is a function that takes in numbers and produces numbers computed by simple arithmetic, so it is a computable function and as such is encoded by a finite bitstring (representing its code).