Don’t be misled by the title of the post — it’s not another post about FOCS. (I will address some of the issues raised in the comments later, but wanted to talk about something else for now.)
Consider two players A and B, holding inputs x and y respectively, who wish to jointly compute the value f(x, y). If you’re a complexity theorist you probably think of communication complexity, where the parties are cooperative and just want to minimize the communication needed to compute the function. But cryptographers are not so trusting. We are instead interested in the question of when the parties can compute f securely. Without giving a formal definition, let’s just say that “secure” here means, in particular, that (1) neither party should learn anything about the other party’s input (beyond what is implied by f(x, y) itself) and (2) neither party should be able to make the other party output an “incorrect” answer.
One of the great achievements of cryptography has been to show that, under reasonable assumptions, secure computation as above is possible for any function f.
There is one caveat. The definition of “secure”, above, did not address fairness. Namely, it might be possible for one party to learn f(x, y) while preventing the other party from learning it. Indeed, Cleve proved (STOC ’86) that fairness is impossible even for a simple function like binary XOR. A little thought will convince you that this must be true for any (non-trivial) f: in any protocol computing f, a malicious party can simply abort the protocol as soon as it learns f(x, y). Since one party must learn the output first, this means no protocol can be fair.
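To make the abort attack concrete, here is a toy round-by-round view of the simplest possible (and of course completely insecure) XOR exchange. The protocol and names here are purely illustrative, not from Cleve’s proof — his argument applies to every protocol, not just this strawman:

```python
# Toy illustration of the abort attack: in any protocol, some party is
# the first to learn the output, and it can quit at that moment.  The
# simplest strawman "protocol" for XOR: A sends its bit, then B sends
# its bit.  (Illustrative only; Cleve's argument covers all protocols.)

def naive_xor_exchange(a_bit, b_bit, b_aborts_early=False):
    """Returns (A's output, B's output); None means 'never learned'."""
    # Round 1: A -> B.  B can now compute the output locally.
    b_output = a_bit ^ b_bit
    if b_aborts_early:
        return None, b_output          # B quits holding the answer
    # Round 2: B -> A.  Only now does A learn the output.
    a_output = a_bit ^ b_bit
    return a_output, b_output

# Honest run: both parties learn the XOR.
assert naive_xor_exchange(1, 0) == (1, 1)
# Malicious B aborts after round 1: B knows the output, A does not.
assert naive_xor_exchange(1, 0, b_aborts_early=True) == (None, 1)
```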
Surprisingly, this intuition turns out to be wrong. Gordon, Hazay, Lindell, and I recently showed that certain non-trivial functions can be computed with complete fairness. We give two different “types” of protocols for doing this (for specific functions):
1. In the first protocol, the round in which a party learns the output depends on its input. In particular, if we number the possible inputs of A as x_1, …, x_m (and similarly for B), then when A holds input x_i it learns the output in round i (and similarly for B). Moreover, if a party (say, A) aborts the protocol in round j, and the other party (B in this case) has not yet learned the output, then B simply takes A’s input to be x_j and evaluates the function itself (i.e., outputs f(x_j, y), where y is B’s actual input).
Why does this have a hope of giving fairness? If A aborts before round i, then A did not learn the output and so fairness is trivial. If A aborts in round i (i.e., as soon as it has learned the output), then B ends up using A’s input anyway when it evaluates the function, and so we get fairness here as well. The difficult case is when A aborts in some later round j > i, trying to “trick” B into using the “incorrect” input x_j. For specific functions, however, we can order the inputs of the parties to ensure that it does not matter whether B computes f(x_j, y) or f(x_i, y), since B gets the same answer in either case. (This description is slightly inaccurate; see the paper for details.) This works when f is the comparison function, and permutations thereof. Note that the round complexity is linear in the size of the input domain.
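As a sanity check of the ordering idea, here is a toy computation for the greater-than function. The schedule below (a party holding input i learns the output in round i, so B has not yet learned the output exactly when the abort round j satisfies j ≤ y) is my illustrative simplification of the protocol sketched above, not the paper’s actual construction, and all cryptographic machinery is elided:

```python
# Toy check of the input-ordering idea for the greater-than function
# GT(x, y) = 1 iff x > y, with inputs drawn from {1, ..., M}.
# Assumed (illustrative) schedule: a party holding input i learns the
# output in round i; the paper's real protocol and ordering differ.

M = 10

def gt(x, y):
    return int(x > y)

# Suppose A holds x (learning the output in round x) and aborts in a
# later round j > x.  If B, holding y, has not yet learned the output
# (j <= y under this schedule), B substitutes x_j = j for A's input.
# The ordering guarantees B's answer is unchanged:
for x in range(1, M + 1):
    for y in range(1, M + 1):
        for j in range(x + 1, M + 1):
            if j <= y:                       # B hasn't learned yet
                assert gt(j, y) == gt(x, y)  # substitution is harmless
print("GT: late aborts never change B's output")
```

The check passes because j ≤ y together with j > x forces x < y, so both evaluations output 0 — exactly the “same answer in either case” property described above.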
2. The second type of protocol we show has the property that parties do not know when they have received the correct output. Namely, the protocol is such that the round r* in which a party receives the correct output is chosen according to a geometric distribution with parameter α (for some constant α); in all prior rounds, it essentially receives a random output. (See the paper for details.) Now if a party aborts, it is not sure whether it has learned the correct output or not, and it is possible to prove that this approach gives complete fairness for certain functions (if α is set appropriately). Note that, in order for correctness to hold (with all but negligible probability) for honest parties running the protocol, we need the number of rounds to be ω(log n), where n is the security parameter. We also have a matching lower bound showing that this round complexity is necessary.
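The second idea can be sketched with a few lines of plain randomness. Everything here — the names, the value of α, and the use of coin flips in place of cryptography — is illustrative only:

```python
import random

# Sketch of the second protocol type: the round r* in which the true
# output appears is geometric with parameter ALPHA, and before r* a
# party sees independent random bits.  Illustrative values throughout.

ALPHA = 0.25          # the constant alpha from the protocol
ROUNDS = 200          # must be omega(log n), n the security parameter,
                      # so that r* <= ROUNDS except with negligible prob.

def one_party_view(true_output, rng):
    """Round-by-round outputs one party sees, plus the hidden r*."""
    r_star = 1
    while rng.random() >= ALPHA:      # r* ~ Geometric(ALPHA)
        r_star += 1
    view = [rng.randrange(2) if r < r_star else true_output
            for r in range(1, ROUNDS + 1)]
    return r_star, view

rng = random.Random(2024)
r_star, view = one_party_view(1, rng)
# From round r* onward the party holds the true output, but nothing in
# its view flags round r* -- so a party that aborts cannot tell whether
# the value it currently holds is the real one.
assert all(b == 1 for b in view[r_star - 1:])
# Honest parties fail only if r* > ROUNDS, which happens with
# probability (1 - ALPHA)**ROUNDS: negligible when ROUNDS = omega(log n).
```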
The technique of giving random outputs in several rounds, in order to hide the round when the “true” output is revealed, was used subsequently in work of Moran, Naor, and Segev to get optimal bias in coin tossing, as well as in recent work by Gordon and myself on partial fairness. (More on that in a later post.)
There are several open questions in this area, but let me focus on one. Can we characterize which functions can be computed with complete fairness in the two-party setting? As a start, can we rule out fairness for any specific functions? (The only examples of functions for which we know that fairness is impossible are those that imply coin tossing, and those ruled out by the impossibility result in the partial fairness paper.) What is so frustrating about this question is that fairness is so “obviously” impossible for many natural functions, yet so difficult to rule out.