Posted by: jonkatz | September 14, 2011

## Is Ladner’s theorem important (to teach)?

I’m teaching complexity theory this fall. (Check out my syllabus and lecture notes!) I’ve fallen about half a lecture behind where I thought I would be at this point, so to catch up I’m debating whether to omit Ladner’s theorem. Should I? Here are the main reasons I was originally planning to cover it:

1. It is a famous result.
2. The proof is not very difficult (so it doesn’t take a lot of time to cover); though see below.
3. It is in Chapter 3 of the Arora-Barak book, which I am roughly following.

On the other hand, once I thought about omitting it, I came up with stronger reasons not to cover it:

1. To me it seems like a “so what?” result. So there are languages in NP\P that are not NP-complete…who cares? (It is a natural question, but the result isn’t very satisfying because you get a completely ridiculous language.) I am also unaware of any interesting results that are proved by relying on Ladner’s theorem.
2. I find the proof extremely technical, messy, and un-illuminating.
3. I am not aware of the specific techniques being useful in some other context, certainly not for anything I was planning to teach later on this semester.

It is worth comparing Ladner’s theorem to the two other results in Chapter 3 of Arora-Barak (that I will cover this semester): hierarchy theorems, and the [BGS] result that the P vs. NP question does not relativize. Point-by-point:

1. I think both these results are very important in their own right. Moreover, hierarchy theorems are invoked frequently in proofs of other results. The [BGS] result is interesting as an illustration of being able to rule out a particular approach to solving a problem.
2. The proofs of both these results are somewhat technical, but I find them much cleaner than the proof of Ladner’s theorem. I think they also both have a pretty clear high-level intuition (something I find lacking in the proof of Ladner’s theorem).
3. The techniques used are applicable elsewhere (though not anywhere later in my class). The hierarchy theorems offer a clear demonstration of diagonalization (and, of course, are useful starting points when trying to prove other hierarchy theorems). Oracle separations were in vogue a decade or two ago in complexity, and are still popular (though in a different sense) in cryptography. I could imagine that when starting to work on many complexity questions, even today, a first question would be to ask whether the statement you are trying to prove relativizes.

I guess I’ve managed to convince myself not to cover it this semester. Anyone want to try to convince me otherwise?

## Responses

1. The result is important: anyone who catches the P vs NP bug must eventually wonder about this question, and Ladner’s theorem puts it to rest.

The techniques, on the other hand, are of more specialized interest. Few papers today use related ideas. Also, while the proof is technical, its gist can be conveyed quickly.

So I think one should describe the question, the answer, and give a very brief proof sketch. Total cost: 5-10 minutes.

More senior researchers, invested in the contributions of theorists from Ladner’s generation, may protest. But the truth is that students are better served by studying other material—not exclusively new results, but ones that still show signs of life.

2. I did mention (in 1 minute) that there exist problems in NP\P that are not NP-complete. And they will see it (if they care) in the textbook.

For the gist of the proof, I have in my lecture notes something like “diagonalize against all poly-time algorithms, as well as all poly-time Karp reductions from 3SAT.” But somehow the proof is more complicated than that intuition…
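For what it’s worth, the Arora–Barak presentation makes that intuition concrete roughly as follows (a sketch from memory; their Chapter 3 has the precise definition of H, so check the book before teaching from this):

```latex
% Sketch of the Arora--Barak form of Ladner's construction.
\[
  \mathrm{SAT}_H \;=\;
  \bigl\{\, \psi\,01^{\,n^{H(n)}} \;:\; \psi \in \mathrm{SAT},\ |\psi| = n \,\bigr\},
\]
% where H(n) is defined by delayed diagonalization: roughly, H(n) is the
% least i such that the i-th Turing machine M_i decides SAT_H correctly,
% in time i \cdot m^i, on all inputs of length m \le \log n (with a
% default value if no such small i exists).  If SAT_H \in P, then H is
% eventually constant, the padding is polynomial, and SAT_H is
% NP-complete, forcing P = NP.  If SAT_H is NP-complete, then
% H(n) \to \infty, the padding is superpolynomial, and composing the
% reduction with itself shrinks SAT instances enough to put SAT in P.
% Assuming P \ne NP, SAT_H is therefore NP-intermediate.
```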

3. The intuition for one proof of Ladner’s theorem is that, for any L in NP\P, we can make a watered-down version L’ that is strictly easier than L but still hard enough not to be in P. Say that the best algorithm for L takes time n^{k(n)}, and let l be a function so that l(n^{O(1)}) ∈ o(k(n)); e.g., l(n) = sqrt(k(log n)) will do. Let L’ be L padded to size n^{l(n)}. If L’ were in P, then L would be decidable in time n^{O(l(n))}. On the other hand, L’ is decidable in time N^{k(N)}, where N^{l(N)} = n, so N = n^{o(1)}, and this time is n^{o(k(n^{o(1)}))}. So if L were reducible to L’, L would be solvable in time n^{o(k(n))}, a contradiction to the “best” time for L.

The problem is in establishing that “best times” for NP-complete problems exist and can, in some sense, be computed, which is where the diagonalization comes in.

4. Russell makes a good point. Much of the complexity of Ladner’s theorem comes from the fact that we don’t know the “best” time for NP-complete problems. However, if we assume the Exponential-Time Hypothesis, so that SAT requires time 2^{Ω(n)} in the worst case, Russell’s padding argument shows that there are NP-intermediate problems. In a sense this captures the pizazz of Ladner’s theorem, without much of the work.
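The arithmetic behind this ETH instantiation can be checked numerically. The sketch below (illustration only; the names are mine, not from any library) takes k(n) = n, so the “best” time for SAT is roughly 2^n, and pads length-n instances to length N = 2^{sqrt(n)}. Measured in the padded length, the same running time becomes N^{log2 N}, which is quasi-polynomial, so under ETH the padded language is plausibly neither in P nor NP-complete:

```python
# Numeric sanity check for the padding arithmetic: if SAT on inputs of
# length n needs ~2^n time and we pad instances to length N = 2^{sqrt(n)},
# then 2^n = (2^{sqrt(n)})^{sqrt(n)} = N^{log2 N} -- exponential in n,
# but only quasi-polynomial in the padded length N.

import math

def padded_length(n: int) -> int:
    """Length N = 2^{sqrt(n)} of the padded instance (n a perfect square,
    so the arithmetic below is exact)."""
    root = math.isqrt(n)
    assert root * root == n, "illustration assumes n is a perfect square"
    return 2 ** root

for n in (4, 16, 36, 64):
    N = padded_length(n)
    exponent = round(math.log2(N))  # log2 N = sqrt(n)
    # 2^n equals N^{log2 N} exactly for perfect-square n.
    assert 2 ** n == N ** exponent
    print(f"n={n:3d}  N={N:7d}  2^n = N^{exponent}")
```

Of course this only checks the exponent bookkeeping, not the complexity-theoretic claim itself.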

5. 1) Agree that BGS is important, as it rules out computability-theory techniques. But it would be good to SEE a computability-theory technique that is harder than diagonalization, so Ladner’s result is good for that.

2) SO- is there some OTHER, perhaps less technical, result that uses computability-theory techniques? Perhaps that would be a sub for Ladner.

3) Here is a SHOUT OUT to Ryan Williams, whose result (which we are covering in seminar) uses the NONDET TIME HIER THEOREM. Some classic results get used in unexpected ways.

4) Back to Ladner- it bothers me that the result has NOTHING to do with poly time. A very general form of it could be obtained with the same proof, which is good, I guess, but means that it’s not really about P and NP.

6. I once ran a P vs NP seminar, and I thought that Ladner’s theorem was a good example of how to do a nontrivial diagonalization. But then again, I’m not an expert.