Why is minimization (always) effectively computable?


Please excuse me if my question is stupid or obvious, but I don't understand why minimization (in recursion theory), defined as

$$\operatorname{Mn}[f](x_1,\ldots,x_n)=\begin{cases} y,&\text{if }f(x_1,\ldots,x_n,y)=0\\ &\text{and for all }t<y,\ f(x_1,\ldots,x_n,t)\\ &\text{is defined and}\ne 0,\\ \text{undefined},&\text{if there is no such }y\;, \end{cases}$$

is effectively computable. The definition my book gives for an effectively computable function is, if I am not mistaken, a function whose value, for any argument(s), we can determine in a finite number of steps.

What if the function I'm considering is defined for every $t$, but there is no $y$ that makes the value $0$? In that case, how can I be sure that no such $y$ exists? I would be stuck checking whether the value is $0$ for $t=0,t=1,t=2$, and so on, so I would never stop. Doesn't this make minimization not always effectively computable?


What you've given is an informal and incomplete definition of effective computability. Nowadays the Church-Turing thesis is generally taken as the definition: a function is effectively computable precisely when it can be computed by a Turing machine or, equivalently, when it is a general recursive function (and there are several other provably equivalent formalisms as well).

A more accurate informal version of the concept is that an effectively computable function is one computable by a finite computation whenever the argument is in its domain; when the argument is not in the domain, the computation must not output a value. In the case of minimization, if there is no $y$ such that $f(x_1,\ldots,x_n,y)=0$, then $\langle x_1,\ldots,x_n\rangle$ is not in the domain of $\operatorname{Mn}[f]$, and the computation should not (and does not) return an output.
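This behaviour can be sketched as a simple unbounded search (the function names below are illustrative, not from the text). When the least zero exists, the loop finds it in finitely many steps; when it does not, the loop runs forever, which is exactly the "no output" behaviour the definition requires:

```python
def minimize(f, *args):
    """Unbounded search (the mu-operator): return the least y with
    f(*args, y) == 0. If no such y exists, this loop never
    terminates -- mirroring that Mn[f] is undefined on those arguments."""
    y = 0
    while True:
        if f(*args, y) == 0:
            return y
        y += 1

# Example: f(x, y) = x - y, so Mn[f](x) is the least y with x - y = 0,
# i.e. Mn[f](x) = x. The search halts here because a zero exists.
def f(x, y):
    return x - y
```

A call such as `minimize(lambda y: 1, )` with a function that is never zero would simply never return, which is the point of the question: no algorithm is required to *detect* that case, only to withhold an output.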