Question

In the book "Computational Complexity: A Modern Approach" by Arora and Barak, on page 112, they state:

Theorem 6.15: A language has logspace-uniform circuits of polynomial size iff it is in P.

The proof of this one is left as an exercise to the reader. I think both directions are trivial:

=> seems trivial: the logspace TM that generates the circuit also runs in polynomial time, so the family is in particular P-uniform; to decide the language, generate the polynomial-size circuit and evaluate it, all in polynomial time, which puts the language in P.
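To illustrate the "generate, then evaluate" step of that direction, here is a small sketch (not from the book; the gate encoding is a hypothetical one I chose): once the uniformity machine has printed the circuit as a list of gates in topological order, evaluating it on an input takes time linear in the circuit size.

```python
# Hypothetical gate encoding: (op, a, b) where op is one of
# "IN" (read input bit a), "NOT" (negate gate a),
# "AND"/"OR" (combine gates a and b). Gates are in topological order.

def eval_circuit(gates, x):
    """Evaluate a circuit on input bits x in time O(|gates|)."""
    val = []
    for op, a, b in gates:
        if op == "IN":
            val.append(x[a])
        elif op == "NOT":
            val.append(not val[a])
        elif op == "AND":
            val.append(val[a] and val[b])
        elif op == "OR":
            val.append(val[a] or val[b])
    return val[-1]  # the last gate is the output gate

# Example circuit computing x0 AND (NOT x1)
gates = [("IN", 0, None), ("IN", 1, None),
         ("NOT", 1, None), ("AND", 0, 2)]
print(eval_circuit(gates, [True, False]))  # True
```

Since the circuit has polynomial size, this evaluation is a polynomial-time step, completing the argument.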

<= seems trivial: a language decided by a polynomial-time TM can be transformed into a circuit family via the Cook-Levin construction, and that construction can be carried out in logspace.

However, what I don't get is why Theorem 6.15 explicitly states that the circuits must be of "polynomial size". How can there exist a logspace-uniform circuit family that isn't polynomial in size? The output of a logspace-computable function cannot exceed a polynomial bound, so how could it produce a circuit of superpolynomial size?
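The bound claimed here can be made precise with the standard configuration-counting argument (a sketch, not taken from the book):

```latex
\text{An } O(\log n)\text{-space transducer with state set } Q
\text{ and space bound } s(n) \text{ has at most}
\[
  |Q| \cdot (n+2) \cdot s(n) \cdot 2^{s(n)}
  \;=\; 2^{O(\log n)}
  \;=\; n^{O(1)}
\]
\text{distinct configurations (state, input-head position, work-head
position, work-tape contents).}
```

A halting computation never repeats a configuration, so the transducer makes at most polynomially many steps and therefore writes at most polynomially many output bits: any circuit it prints has size $n^{O(1)}$.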

Also, this theorem would imply that logspace-uniform circuits capture the same languages as P-uniform circuits, which seems very unintuitive to me. I can't find any information on the relation between logspace-uniform and P-uniform circuits on the web, so my assumption that they are equal is probably false, but I fail to see why.

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange