The proof of Theorem 1 in this recent preprint, which shows that PTime is not semi-decidable, effectively shows that PTime is $\mathsf{R}\cup\mathsf{coR}$-hard. The proof itself is similar to undecidability proofs at cs.SE and cstheory.SE. What is different, however, is a footnote, which highlights that proving both $\mathsf{R}$-hardness and $\mathsf{coR}$-hardness seems to be new:

> ¹ The undecidability of PTime seems to be folklore, but non-semi-decidability seems to be new, even though semi-decidability is a natural property here.

But I want to ignore the question whether this observation is new. What interests me is what is known about the degree of undecidability of PTime. For example, is computation in the limit powerful enough to decide PTime? Are there better lower bounds than $\mathsf{R}\cup\mathsf{coR}$ for the difficulty of PTime?

Note: Here I use PTime to denote the problem of deciding whether a given Turing machine runs in polynomial time. I know this is an abuse of terminology, since PTime is a class of languages rather than a class of Turing machines, but I found it convenient for this question.


Solution

The answer is $\bf 0''$ (and so in particular computation in the limit - which corresponds to $\le_T\bf 0'$ - is not enough). And this stronger result is also folklore (I was assigned it as an exercise way back when).

As an upper bound we just check quantifier complexity: $\Phi_e$ runs in polynomial time iff there exists some polynomial $p$ such that for every input $n$ we have $\Phi_e(n)[p(\vert n\vert)]\downarrow$ (that final clause only uses bounded quantifiers). So running in polynomial time is a $\Sigma^0_2$ property.
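Concretely, one standard way to spell this out (a sketch only, replacing the quantifier over polynomials by a quantifier over a single exponent $k$, since every polynomial bound is dominated by $\vert n\vert^k+k$ for some $k$) is
$$\{e:\exists k\,\forall n\;\Phi_e(n)[\vert n\vert^k+k]\downarrow\},$$
where the matrix $\Phi_e(n)[\vert n\vert^k+k]\downarrow$ is decidable (just simulate for that many steps), so the set has the shape $\exists\,\forall\,(\text{decidable})$, i.e. $\Sigma^0_2$.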

Now we just need to show optimality. We'll actually show a bit more: not just that the Turing degree is $\bf 0''$, but that the set is $\Sigma^0_2$-complete. Supposing $$A=\{x:\exists y\forall z\theta(x,y,z)\}$$ (with $\theta\in\Delta^0_0$) is any other $\Sigma^0_2$ set, we want to reduce $A$ to the set of programs running in polynomial time.

So fix an instance $i$. We'll say $m$ is $s$-bad for $i$ if $m<s$ and there is some $n<s$ such that $\neg\theta(i,m,n)$. Let $t_i(s)$ be the least $m$ which is not $s$-bad for $i$. Now we'll whip up a stupid program $\Phi_{e_i}$ as follows: on input $s$, $\Phi_{e_i}(s)$ runs for ${\vert s\vert}^{t_i(s)}$-many steps and then halts and outputs $0$. The point is that if $i\in A$ then $t_i(s)$ stabilizes (it eventually equals the least $y$ with $\forall z\,\theta(i,y,z)$, since each smaller $m$ acquires a counterexample $n<s$ once $s$ is large enough), so $\Phi_{e_i}$ runs in polynomial time; otherwise every $m$ is eventually $s$-bad, so $t_i(s)\to\infty$ and $\Phi_{e_i}$ does not run in polynomial time.

Now the map $\mu: i\mapsto e_i$ is computable, so we get a many-one reduction of $A$ to our set.
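For concreteness, here is a rough Python sketch of the reduction, under the assumption that the $\Delta^0_0$ matrix $\theta$ is handed to us as some computable predicate `theta(i, m, n)`; the names, and the idle loop standing in for a clocked Turing-machine simulation, are purely illustrative and not part of the original argument.

```python
def t(i, s, theta):
    """Least m that is not s-bad for i: the least m such that it is NOT
    the case that (m < s and theta(i, m, n) fails for some n < s)."""
    for m in range(s):
        if all(theta(i, m, n) for n in range(s)):
            return m
    return s  # every m < s is s-bad, and m = s is trivially not s-bad


def make_program(i, theta):
    """The computable map i |-> e_i: builds the 'stupid program' Phi_{e_i}."""
    def phi(s):
        # Idle for |s|^{t_i(s)} steps (|s| = input length), then halt with output 0.
        bound = max(s.bit_length(), 1) ** t(i, s, theta)
        for _ in range(bound):
            pass
        return 0
    return phi
```

The only feature that matters here is that the program is produced uniformly (computably) from $i$; that uniformity is exactly the map $\mu$ above.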


As a quick comment, note that we only used one property of polynomial time above: that there is no biggest polynomial time bound. This let us keep pushing up the runtime bit by bit (each time $t_i(s)$ goes up) while preserving the possibility of still running in polynomial time. By contrast, "halts within seven steps on all inputs" (say) is merely $\Pi^0_1$-complete. Basically, whenever we have a concrete resource-bounded complexity class with no "maximal bound," being in that class will be $\Sigma^0_2$-complete. In particular, this applies to NP, EXPTIME, PSPACE, ...
