Is there such a thing as "DTIME($n^k$)-completeness"?


That is, is there a conception of a problem's being $\mathrm{DTIME}(n^k)$-complete for some fixed value of $k$? For example, it seems like it should be provable--likely via a Turing-machine construction--that searching an unsorted list would be $\mathrm{DTIME}(n)$-complete; or that finding a subsequence optimizing some catamorphic objective function would be $\mathrm{DTIME}(n^2)$-complete. Not sure about $n^3$ or higher. Do such proofs exist?


1 Jan 2020: In view of this answer, I realize I did a disservice by failing to point out that my question is conditioned on resource limitations being imposed on the reductions. It makes sense to want an $o(n^k)$-time reduction for an $O(n^k)$-time problem. This being impossible for $k=1$, that particular case might need a reduction technique I haven't even considered.


17 Jan 2020

Constant-time mapping reduction for $\mathrm{DTIME}(n)$

Here is a sketch of how a proof of what I'm speculatively calling '$\mathrm{DTIME}(n)$-completeness' might go down. The reduction is to the (right) fold from functional programming.

Suppose we start with a Turing machine $T_L$ that recognizes a language $L$ in $O(n)$ time. If it always completes in under $c \cdot n$ steps, for constant $c$, then $T_L$ is equivalent to a primitive recursive function making no more than $c$ passes over the input. By the universal and fusion properties [1], all $c$ passes may be combined into one and factored out. In a Haskell-ish notation,

$$ \exists f, z. \ T_L \cong fold\, f\, z $$

Let the $fold$ function be implemented by a machine $T_{fold}$ which takes three inputs: a description of another Turing machine $T_f$ implementing the functionality of $f$, which it then simulates; the input to $f$; and a seed value $z$ for the catamorphism. $T_f$ and $z$ only need to be constructed once, at a cost depending only on $T_L$ itself. Since the mean runtime of $T_f$ must be $O(1)$, its simulation by $T_{fold}$, even on a single-tape machine, remains $O(1)$, and the runtime of the compound machine $(T_{fold}\ T_f\ z)$ stays at $O(n)$. Consequently, by passing instances of $L$ to $(T_{fold}\ T_f\ z)$, I can postulate $$\forall L \in \mathrm{DTIME}(n).\ L \le_m fold$$ With the reduction's overhead depending on $L$ but not on the size of the input, it is $O(1)$.
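To make the "all $c$ passes may be combined into one" step concrete, here is a minimal Haskell sketch (my own illustration, with names I chose; not part of the original argument): two independent passes over the input are fused into a single $fold$ over pairs, which is the "banana split" consequence of the universal property from [1].

```haskell
-- Two separate passes over the input, each an O(n) right fold:
sumPass, lenPass :: [Int] -> Int
sumPass = foldr (+) 0
lenPass = foldr (\_ acc -> 1 + acc) 0

-- The same two passes fused into a single fold by pairing the
-- step functions and the seeds ("banana split" law):
fusedPass :: [Int] -> (Int, Int)
fusedPass = foldr step (0, 0)
  where step x (s, l) = (x + s, 1 + l)

main :: IO ()
main = print (fusedPass [1, 2, 3, 4])  -- (10,4)
```

Here the constant step function and the seed $(0, 0)$ play the roles of $f$ and $z$; the same tupling trick extends from two passes to any fixed number $c$ of passes.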

I can roughly envision an inductive argument, using this as the base case, extending to a $k$-fold composition of $fold$s and $\mathrm{DTIME}(n^k)$, but I don't have the details. Due to this lack of completeness as well as rigor (what if the $O(n)$ complexity of $T_L$ were amortized?), I am unwilling yet to posit this as an answer to my own question. I also can't shake the feeling that a ready resolution may be available from an FP expert, now that the question is turning in that direction.

[1]: Hutton, G. (1999). A tutorial on the universality and expressiveness of fold. Journal of Functional Programming, 9(4), 355–372. doi: 10.1017/s0956796899003500


The Cook–Levin theorem's result is, and I quote:

That is, any problem in NP can be reduced in polynomial time by a deterministic Turing machine to the problem of determining whether a Boolean formula is satisfiable.

It is redundant to define polynomial-time (or even log-space) reductions between $\mathrm{DTIME}(n^{k_1})$ and $\mathrm{DTIME}(n^{k_2})$.

For a polynomial-time reduction, the reduction itself can solve the problem.
This means every $L_1 \in \mathrm{DTIME}(n^{k_1})$ satisfies $L_1 \leq_p L_2$ for any nontrivial $L_2 \in \mathrm{DTIME}(n)$, via a reduction that solves $L_1$ itself and outputs a fixed yes- or no-instance of $L_2$ accordingly.
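A minimal sketch of such a trivializing reduction (my own illustration; the names are hypothetical): given any decider for $L_1$, map each input to a fixed yes-instance or no-instance of the target language, here the language $\{\texttt{"1"}\}$ over $\{0,1\}$.

```haskell
-- A "reduction" that does all the work itself: it runs L1's decider
-- on the input and emits a fixed yes-instance ("1") or no-instance
-- ("0") of a trivial linear-time target language.
reduceBySolving :: (String -> Bool) -> String -> String
reduceBySolving decideL1 x = if decideL1 x then "1" else "0"

main :: IO ()
main = putStrLn (reduceBySolving (elem 'a') "banana")  -- "1"
```

Since the reduction already decides $L_1$, the target problem contributes nothing, which is why such reductions are uninformative here.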

For a log-space reduction, the reduction can square the size of the input, for example by repeating each letter of the input $n$ times.
This means $$\forall L_1\in \mathrm{DTIME}(n^{2k})\ \ \exists L_2\in \mathrm{DTIME}(n^{k}) \ \ \text{s.t.}\ \ L_1\leq_{\log} L_2 $$
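The padding map itself is simple; here is a sketch (my own illustration, not from the answer) that repeats each letter $n$ times, so an input of length $n$ becomes one of length $n^2$, and an $n^{2k}$-time computation on the original input runs in time $m^k$ in the padded length $m = n^2$.

```haskell
-- Pad an input of length n to length n^2 by repeating each
-- letter n times; computable with only a counter, i.e. in log space.
pad :: String -> String
pad s = concatMap (replicate (length s)) s

main :: IO ()
main = print (pad "ab", length (pad "abcd"))  -- ("aabb",16)
```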

Important remark
This does not mean $\mathrm{DTIME}(n^{k+1}) = \mathrm{DTIME}(n^{k})$; on the contrary, the time hierarchy theorem proves these classes are distinct.

Maybe you will be interested in the class of $P$-complete problems (under log-space reductions).