Recently, I have been thinking on-and-off about the following problem and variants thereof. Fix a function $f$. When is it easier to *verify* that a pair $(x, y)$ satisfies $y = f(x)$ than to compute $f(x)$ from scratch? (One can always verify by computing $f(x)$, so it can never be harder.) Without being given any additional information along with $(x, y)$, verification can never be much easier than computation (proof: given a verification procedure $V$, we can compute $f(x)$ as follows: if $V(x, 0) = 1$ then output 0, else output 1), so to make things interesting we allow verification to additionally take as input a “proof string” $\pi$.

Let’s define what we want more formally. There are various possibilities here, but to keep things clean let’s say we want an algorithm $V$ such that:

- For all $x$ there exists a $\pi$ such that $V(x, f(x), \pi) = 1$.
- For all $x$, all $y \neq f(x)$, and all $\pi$, we have $V(x, y, \pi) = 0$.
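As a toy instance of this definition (my own illustration, not from the post): take $f(x) = \mathrm{sorted}(x)$. Computing $f$ takes $O(n \log n)$ comparisons, while a proof string — the sorting permutation — lets a verifier check the pair in $O(n)$ time.

```python
def prove(x):
    """Honest prover: the permutation pi that sorts x, so y[i] = x[pi[i]]."""
    return sorted(range(len(x)), key=lambda i: x[i])

def verify(x, y, pi):
    """Check y = sorted(x) in O(n) time, given the permutation pi as proof."""
    n = len(x)
    if len(y) != n or len(pi) != n:
        return False
    # Check pi is a permutation of {0, ..., n-1} in linear time.
    seen = [False] * n
    for i in pi:
        if not (0 <= i < n) or seen[i]:
            return False
        seen[i] = True
    # y must be x rearranged by pi, and nondecreasing.
    if any(y[i] != x[pi[i]] for i in range(n)):
        return False
    return all(y[i] <= y[i + 1] for i in range(n - 1))
```

Completeness and soundness match the two bullets above: the honest permutation always verifies, and any $y$ that is a nondecreasing rearrangement of $x$ must equal $\mathrm{sorted}(x)$, so no proof can make a wrong $y$ pass.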

The reader can easily see that this is nothing more than a question about when non-deterministic computation is “better” than deterministic computation. Assuming $P \neq NP$, there are several interesting functions (e.g., the characteristic function of 3SAT) for which verification is faster than computation.

All very well and good, but here are some things I really want to know:

- For what functions $f$ can we *prove*, without assumptions, that verification is “better” than computation?
- Even allowing reasonable complexity-theoretic assumptions, is verification faster than computation for every poly-time computable function $f$? For a concrete question: can every $f$ computable in time $O(n^2)$ be verified in time $O(n)$?

Related to the first question, Lance has a recent post showing, in particular, that any function in $P$ can be verified by an $AC^0$ verifier. Rather than reduce to 3SAT, let’s prove this directly. Fix a function $f \in P$ and a machine $M$ that computes it. Given an input $x$, the proof will be the complete tableau of $M$’s computation on the input $x$. A tableau can be checked for correctness by some polynomial number of *local* computations, each looking at only a constant number of bits. An unbounded fan-in AND gate at the bottom can thus be used to express correctness of all the local checks.
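To make the tableau idea concrete, here is a toy sketch (mine, not from the post) for a trivial iterative “machine” computing Fibonacci numbers: the proof is the full trace of states, and the verifier performs only local checks, each comparing two adjacent rows.

```python
def fib_trace(n):
    """Prover: run the computation and record every intermediate state (a, b)."""
    trace = [(0, 1)]
    for _ in range(n):
        a, b = trace[-1]
        trace.append((b, a + b))
    return trace

def verify_trace(n, result, trace):
    """Verifier: check the claimed result via purely local checks.
    Each check reads a constant amount of data (two adjacent rows),
    mirroring how a Turing-machine tableau is checked."""
    checks = [len(trace) == n + 1,
              trace[0] == (0, 1),          # correct initial state
              trace[-1][0] == result]      # correct claimed output
    checks += [trace[i + 1] == (trace[i][1], trace[i][0] + trace[i][1])
               for i in range(n)]          # each step follows from the last
    return all(checks)                     # the unbounded fan-in AND at the bottom
```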

Since parity is in $P$ but not in $AC^0$, this gives a provable example where verification is easier than computation.
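For parity specifically, the proof string can simply be the running prefix parities (a sketch of my own): every check below reads at most three bits, so the whole verifier is one unbounded fan-in AND of constant-size checks — exactly the depth-2 shape described above.

```python
def parity_proof(x):
    """Honest prover: p[i] = parity of the first i bits of x."""
    p = [0]
    for bit in x:
        p.append(p[-1] ^ bit)
    return p

def verify_parity(x, y, p):
    """Each conjunct reads at most 3 bits; together they form an
    unbounded fan-in AND of local checks, i.e., an AC^0 verifier."""
    n = len(x)
    checks = [len(p) == n + 1, p[0] == 0, p[n] == y]
    checks += [p[i + 1] == (p[i] ^ x[i]) for i in range(n)]
    return all(checks)
```

Soundness is immediate: the checks $p_0 = 0$ and $p_{i+1} = p_i \oplus x_i$ force the proof to equal the honest prefix parities, so a wrong claimed parity always fails the final check.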

What’s the crypto connection? Well, say we have a low-power client that wants to compute $f(x)$. One thing it can do is to *outsource* computation of $f$ to a more-powerful server. But since the client may not trust the server, it will also request that the server provide a proof that the computation was done correctly. Of course, we would like the process of verification to be cheaper than simply having the client compute $f$ itself.
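A classic example of this kind of cheap verification of outsourced work (not from the post itself) is Freivalds’ check for matrix multiplication: the server computes $C = AB$ in, say, $O(n^3)$ time, and the client verifies $A(Br) = Cr$ against random 0/1 vectors $r$ in $O(n^2)$ time, with error probability at most $1/2$ per repetition. A minimal sketch:

```python
import random

def freivalds(A, B, C, reps=20):
    """Check C == A @ B (over the integers) in O(reps * n^2) time.
    A wrong C passes each round with probability <= 1/2 (using random
    0/1 vectors r), so reps=20 gives error probability <= 2^-20."""
    n = len(A)
    for _ in range(reps):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # server definitely cheated
    return True  # correct except with negligible probability
```

Note that here no proof string is even needed; the randomness of the verifier alone beats recomputing the product.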

See the work of Goldwasser et al. and of Hohenberger–Lysyanskaya for related results in this area.

Welcome to the blog world.

By the way, Michael Rabin has an old paper on computing vs. checking that is probably one of the oldest on this issue.

By:

rjlipton on June 19, 2009 at 7:54 pm

If you have a reference, I’d appreciate it.

PS: I am, of course, thrilled that you’re the first commenter on my blog. =)

By:

jonkatz on June 21, 2009 at 11:03 pm

Looking forward to reading the blog.

By:

Steve Weis on June 24, 2009 at 3:05 am

It’s great that you have decided to start a blog. I have been thinking about doing that for a long time. Anyway, I hope you will keep it alive and well, as opposed to another blog that was recently started in our community 😉

By:

vipul goyal on June 24, 2009 at 7:52 am

Here is a randomized “algorithm” that I’ve been bringing up in various places. For some reason nobody but me thinks it is interesting :(. It builds on an example given by Leonid Levin.

Let R = an n-bit random string, for some sufficiently large n (say n=2 million).

Let P be the proposition “R cannot be compressed to less than half its length” (Levin). This asserts that R’s Kolmogorov complexity is > |R|/2, which is obviously almost certainly true if R is really random. (For any epsilon there is a large enough n so that P is true with probability > 1-epsilon).

By a theorem of Chaitin, if P is true, it cannot be proved in any reasonable axiom system (Chaitin’s version of Gödel’s incompleteness theorem). To be more precise, for any recursive axiom system there is a constant c such that the axioms cannot prove any statement of the form K(x) > c.

So here you’ve got a function for which randomized “computation” is trivial, but verification is undecidable.

I guess that is sort of a trick answer, but I felt I had to bring it up.
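[One cannot test incompressibility exactly, but as an illustrative sketch of the commenter's point — my own, with zlib standing in as a crude heuristic: real-world compressors only give an *upper bound* on description length, so failing to compress below half the length is weak evidence for P, never a proof of it, while compressing below half refutes it.]

```python
import os
import zlib

def looks_incompressible(data):
    """Heuristic stand-in for "K(R) > |R|/2": zlib witnesses
    *compressibility* only; incompressibility can never be certified."""
    return len(zlib.compress(data, 9)) > len(data) // 2

R = os.urandom(250_000)                       # ~2 million random bits
print(looks_incompressible(R))                # almost certainly True
print(looks_incompressible(b"ab" * 125_000))  # False: highly compressible
```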

By:

none on July 2, 2009 at 3:19 am

I think the definition implies that the language is in co-NP as well.

By:

Anonymous on July 4, 2009 at 2:59 am