Don’t be misled by the title of the post — it’s not another post about FOCS. (I will address some of the issues raised in the comments later, but wanted to talk about something else for now.)

Consider two players A and B holding inputs $x$ and $y$, respectively, who wish to jointly compute the value $f(x,y)$. If you’re a complexity theorist you probably think of *communication complexity*, where the parties are cooperative and just want to minimize the communication needed to compute the function. But cryptographers are not so trusting. We are instead interested in the question of when the parties can compute $f$ *securely*. Without giving a formal definition, let’s just say that “secure” here means, in particular, that (1) neither party should learn anything about the other party’s input (beyond what is implied by $f(x,y)$ itself) and (2) neither party should be able to make the other party output an “incorrect” answer.

One of the great achievements of cryptography has been to show that, under reasonable assumptions, secure computation as above is possible for *any* function $f$.

There is one caveat. The definition of “secure”, above, did not address *fairness*. Namely, it might be possible for one party to learn $f(x,y)$ while preventing the *other* party from learning it. Indeed, Cleve proved (STOC ’86) that fairness is impossible even for a simple function like binary XOR. A little thought will convince you that this must be true for any (non-trivial) $f$: in any protocol computing $f$, a malicious party can simply abort the protocol as soon as it learns $f(x,y)$. Since one party must learn the output first, this means no protocol can be fair.
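To see the abort attack in its simplest form, here is a deliberately naive (and totally insecure) XOR exchange of my own devising; it only illustrates the one-party-learns-first problem, not Cleve’s actual proof:

```python
import random

def naive_xor(a_bit, b_bit, a_malicious=False):
    """A toy XOR 'protocol': B sends its bit first, then A replies with its
    own bit.  After round 1, A already knows a_bit ^ b_bit, so a malicious A
    can abort and leave B with nothing -- the abort intuition in miniature.
    Returns (A's output, B's output); None means 'no output'."""
    # Round 1: B -> A.  A has now learned the result.
    a_out = a_bit ^ b_bit
    if a_malicious:
        return a_out, None          # A aborts; B learned nothing about a_bit.
    # Round 2: A -> B.
    b_out = a_bit ^ b_bit
    return a_out, b_out

a, b = random.randint(0, 1), random.randint(0, 1)
print(naive_xor(a, b, a_malicious=True))   # A always learns a ^ b; B gets None
```

Of course, real protocols release information far more gradually than this, but the same asymmetry — someone must learn the output first — is exactly what the proof exploits.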

Surprisingly, *this intuition turns out to be wrong*. Gordon, Hazay, Lindell, and I recently showed that certain non-trivial functions *can* be computed with complete fairness. We give two different “types” of protocols for doing this (for specific functions):

1. In the first protocol, the round in which a party learns the output *depends on* its input. In particular, if we number the possible inputs of A as $x_1, \ldots, x_m$ (and similarly for B), then when A holds input $x_i$ it learns the output in round $i$ (and similarly for B). Moreover, if a party (say, A) aborts the protocol in round $j$, and the other party (B in this case) has not yet learned the output, then B simply takes A’s input to be $x_j$ and evaluates the function itself (i.e., outputs $f(x_j, y)$, where $y$ is B’s actual input).

Why does this have a hope of giving fairness? If A aborts before round $i$, then A did not learn the output and so fairness is trivial. If A aborts in round $i$ (i.e., as soon as it has learned the output), then *B ends up using A’s input anyway* when it evaluates the function, and so we get fairness here as well. The difficult case is when A aborts in some round $j > i$, trying to “trick” B into using the “incorrect” input $x_j$. For *specific* functions, however, we can order the inputs of the parties to ensure that *it does not matter* whether B computes $f(x_j, y)$ or $f(x_i, y)$, since B *gets the same answer in either case*. (This description is slightly inaccurate; see the paper for details.) This works when $f$ is the comparison function, and permutations thereof. Note that the round complexity is linear in the size of the input domain.

2. The second type of protocol we show has the property that *parties do not know* when they have received the correct output. Namely, the protocol is such that the probability that a player receives the correct output before round $i$ is at most $\alpha \cdot i$ (for some constant $\alpha > 0$); in all prior rounds, it essentially receives a random output. (See the paper for details.) Now if a party aborts, it is not sure whether it has learned the correct output or not, and it is possible to prove that this approach gives complete fairness for certain functions (if $\alpha$ is set appropriately). Note that, in order for correctness to hold (with all but negligible probability) for honest parties running the protocol, we need the number of rounds to be $\omega(\log k)$, where $k$ is the security parameter. We also have a matching lower bound showing that this round complexity is necessary.
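The ordering property behind the first protocol can be checked directly for the comparison (“greater-than”) function. A small brute-force sanity check of my own, under the assumption that inputs are numbered $1, \ldots, m$ in increasing order and that a party holding the $k$-th input learns the output in round $k$ (the paper’s actual protocol is more involved):

```python
def f(x, y):
    """The 'greater-than' (millionaires') comparison function."""
    return int(x > y)

m = 10  # toy domain size; inputs are numbered 1..m

# If A holds x_i but aborts in a later round j > i, then any B who has not
# yet learned the output (i.e., whose input y_k has k >= j) evaluates the
# default f(x_j, y_k) -- and for this ordering that equals the true f(x_i, y_k):
for i in range(1, m + 1):
    for j in range(i + 1, m + 1):
        for k in range(j, m + 1):
            assert f(j, k) == f(i, k), (i, j, k)
print("late aborts cannot change B's output for greater-than")
```

The point of the check: whenever $i < j \le k$ we have both $j \le k$ and $i < k$, so both evaluations of greater-than give 0 — substituting $x_j$ for $x_i$ is harmless exactly in the abort cases that matter.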

The technique of giving random outputs in several rounds, in order to hide the round when the “true” output is revealed, was used subsequently in work of Moran, Naor, and Segev to get optimal bias in coin tossing, as well as in recent work by Gordon and myself on partial fairness. (More on that in a later post.)
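To make the random-output technique concrete, here is a minimal simulation of my own (the parameter choices are made up, and the real protocols differ in many details):

```python
import random

def noisy_reveal_view(true_output, alpha, rounds, rng=random):
    """Sketch of the 'random outputs until a hidden round' idea: in each
    round the true output is released with probability alpha; until then the
    party sees an independent random bit, so a party that aborts cannot tell
    whether the bit it currently holds is the real output."""
    revealed = False
    view = []
    for _ in range(rounds):
        if not revealed and rng.random() < alpha:
            revealed = True
        view.append(true_output if revealed else rng.randint(0, 1))
    return view, revealed

# With alpha = 0.1, a run of 200 rounds fails to reveal the output only with
# probability 0.9^200, i.e. essentially never.
view, ok = noisy_reveal_view(true_output=1, alpha=0.1, rounds=200,
                             rng=random.Random(0))
print(ok, view[-5:])
```

Note the trade-off visible even in this toy: smaller `alpha` hides the reveal round better but forces more rounds before an honest party holds the correct output with overwhelming probability.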

**Open Problems**

There are several open questions in this area, but let me focus on one. Can we characterize which functions can be computed with complete fairness in the two-party setting? As a start, can we rule out fairness for any *specific* functions? (The only examples of functions for which we know that fairness is impossible are those that imply coin tossing, and those ruled out by the impossibility result in the partial fairness paper.) What is so frustrating about this question is that fairness is so “obviously” impossible for many natural functions, yet so difficult to rule out.

Your protocol 2 possibility result and the coin tossing impossibility aren’t that far apart.

Intuitively, your protocol 2 requires one player to have “more control” over the output of the function. This makes it acceptable to give him an advantage in computing $f(x,y)$, because he already has that advantage in misleading his opponent.

More formally, suppose a binary $f$ has an $R \times C$ result matrix $M$ with columns $\{c_y\}$. Then protocol 2 works if the rows of $M$ have positive volume in $C$-dimensional space (at least for polynomial domains). Equivalently, it works if adjoining a column of ones gives a rank-$(C+1)$ matrix. Also equivalently, if there do not exist any non-trivial* scalars $\{a_y\}, b$ with $\sum_y a_y c_y = b\mathbf{1}$. It also works if any of these properties hold for $M^T$.
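This rank criterion is easy to test mechanically. A small numpy sketch — the matrices are my own toy examples, and the encoding is my reading of the criterion above:

```python
import numpy as np

def protocol2_condition(M):
    """The criterion as stated above: no non-trivial scalars {a_y}, b with
    sum_y a_y c_y = b*1, i.e. adjoining an all-ones column to the R x C
    matrix M yields a matrix of full column rank C + 1."""
    R, C = M.shape
    augmented = np.hstack([M, np.ones((R, 1))])
    return bool(np.linalg.matrix_rank(augmented) == C + 1)

M = np.array([[0, 0],
              [0, 1],
              [1, 0]])        # rows affinely span the plane: positive volume
M_dup = np.array([[0, 1],
                  [0, 1]])    # duplicated rows: no volume
ok1, ok2 = protocol2_condition(M), protocol2_condition(M_dup)
print(ok1, ok2)   # True False
```

Note that the criterion can never hold for a square or wide $M$ (the augmented matrix has more columns than rows), which is why the transpose clause matters.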

But one can show that, if there exist non-trivial scalars $\{a_y\}, b$ with $\sum_y a_y c_y = b\mathbf{1}$ and $b \neq 0$, and the same is true for $M^T$, then the coin tossing impossibility result applies. Player B plays $y$ with probability proportional to $|a_y|$, and outputs $f(x,y)$ if $a_y > 0$ and $1 - f(x,y)$ otherwise. He gets a bit whose distribution is independent of how A plays. Player A does the same. If $b \neq 0$, the bits are correlated and we can toss coins.
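Here is how I understand the reduction, spelled out numerically for XOR (where fairness is indeed impossible, by Cleve): with $a = (1, 1)$ the columns of the XOR matrix satisfy $c_1 + c_2 = \mathbf{1}$, so the column player’s bit has the same distribution no matter what the row player does. A sketch in my own notation:

```python
import numpy as np

def coin_distribution(M, a):
    """Column player samples y with probability |a_y| / sum|a_y|, then outputs
    f(x,y) when a_y > 0 and 1 - f(x,y) when a_y < 0.  Returns Pr[bit = 1] for
    every possible row input x; if sum_y a_y c_y = b*1, all entries coincide."""
    p = np.abs(a) / np.abs(a).sum()
    # Pr[bit=1 | x] = sum_{a_y>0} p_y M[x,y] + sum_{a_y<0} p_y (1 - M[x,y])
    return M @ (np.sign(a) * p) + p[a < 0].sum()

M_xor = np.array([[0., 1.],
                  [1., 0.]])
dist = coin_distribution(M_xor, np.array([1., 1.]))
print(dist)   # [0.5 0.5]: the same for both row inputs
```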

So the only gap (for binary polynomial-domain functions) is that pesky “and $b \neq 0$” condition in the impossibility result. Unfortunately, there are functions that fit in this gap. For example, a result for

doesn’t easily follow from your protocol, since it’s a square matrix without constant or duplicated rows or columns. But the column player can only get a distribution independent of $x$ by playing according to such $\{a_y\}$ with $b = 0$. The resulting bit is then also independent of $f(x,y)$, so my coin-tossing reduction doesn’t apply.

Something like the “$b \neq 0$” condition is necessary on the impossibility side, since it’s why the impossibility doesn’t apply whenever $f$ has duplicated rows or columns. Duplicated rows or columns are also a case where $M$ has no volume but protocol 2 works, so my guess is that all cases in this gap can be fairly computed.

* Non-trivial in the sense that $a_y \neq 0$ for some $y$ for which $c_y$ is not constant.

By:

ecprice on July 10, 2009 at 9:21 pm

ecprice, I don’t see how you go from a biased coin that the parties might not agree on, to a fair coin that they agree on.

By:

anonymous on July 12, 2009 at 11:18 am

Hmm, looks like I messed up. The actual condition should be something like $\sum_y a_y \neq 0$ rather than $b \neq 0$, in addition to the analogous condition for $M^T$. Not as good, sadly.

We want to show that each party can go from their biased coin to an unbiased coin that agrees with $f(x,y)$ with probability different from $1/2$. Since each party’s coin is independent, two such coins agree with a decent chance.

Scale the $a_y$ so $\sum_y |a_y| = 1$, and let $\mu = \sum_{y : a_y < 0} |a_y|$. Let $Z$ be the biased coin B computes, which is $f(x,y)$ if $a_y > 0$ and $1 - f(x,y)$ otherwise. Then $\Pr[Z = 1] = b + \mu$ regardless of $x$, and some algebra gives $\Pr[Z = f(x,y)] = 1 - \mu$. So this works if $b + \mu = 1/2$ and $\mu \neq 1/2$. Repeating for $M^T$ gives a coin for the other party, and gives two fair coin flips that agree with a decent chance.

By:

ecprice on July 13, 2009 at 6:15 am

[There was an HTML failure in my last comment]

Hmm, looks like I messed up. The actual condition should be something like $\sum_y a_y \neq 0$ rather than $b \neq 0$, in addition to the analogous condition for $M^T$. Not as good, sadly.

We want to show that each party can go from their biased coin to an unbiased coin that agrees with $f(x,y)$ with probability different from $1/2$. Since each party’s coin is independent, two such coins agree with a decent chance.

Scale the $a_y$ so $\sum_y |a_y| = 1$, and let $\mu = \sum_{y : a_y < 0} |a_y|$. Let $Z$ be the biased coin B computes, which is $f(x,y)$ if $a_y > 0$ and $1 - f(x,y)$ otherwise. Then $\Pr[Z = 1] = b + \mu$ regardless of $x$, and some algebra gives $\Pr[Z = f(x,y)] = 1 - \mu$. So if $\sum_y a_y \neq 0$, we have $\mu \neq 1/2$ and $\Pr[Z = f(x,y)] \neq 1/2$. This means that recentering around $1/2$ (for example, by outputting $Z$ with probability $p$ and a fixed bit otherwise) gives an unbiased coin $Z'$ with $\Pr[Z' = 1] = 1/2$ and $\Pr[Z' = f(x,y)] \neq 1/2$. Repeating for $M^T$ gives a coin for the other party and gives two fair coin flips that agree with a decent chance.
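A numeric sanity check of this kind of calculation, on a made-up $3 \times 3$ binary matrix of my own whose columns satisfy $c_1 + c_2 - c_3 = \mathbf{1}$, so that after scaling $a = (1/3, 1/3, -1/3)$ gives $b = 1/3$ and $\mu = 1/3$:

```python
import numpy as np

M = np.array([[1, 1, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
a = np.array([1, 1, -1]) / 3.0       # already scaled: sum |a_y| = 1
b = (M @ a)[0]                       # = 1/3, the same in every coordinate
mu = np.abs(a[a < 0]).sum()          # = 1/3

p = np.abs(a)                        # sampling distribution over columns y
# Z = f(x,y) when a_y > 0 and 1 - f(x,y) when a_y < 0:
pr_Z_is_1 = M @ (np.sign(a) * p) + mu    # per-row Pr[Z = 1]
pr_Z_agrees = p[a > 0].sum()             # Pr[Z = f(x,y)] = 1 - mu

print(pr_Z_is_1)      # b + mu = 2/3 for every row input x
print(pr_Z_agrees)    # 1 - mu = 2/3 != 1/2: Z is correlated with f(x,y)
```

So on this example the bit’s bias is the same for every $x$, while its agreement with the true output stays bounded away from $1/2$, which is all the recentering step needs.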

By:

ecprice on July 13, 2009 at 6:22 am

ecprice: You say: “Then protocol 2 works if the rows of $M$ have positive volume in $C$-dimensional space (at least for polynomial domains). Equivalently, it works if adjoining a column of ones gives a rank-$(C+1)$ matrix. Also equivalently, if there do not exist any non-trivial* scalars $\{a_y\}, b$ with $\sum a_y c_y = b\mathbf{1}$. It also works if any of these properties hold for $M^T$.”

The condition given in the paper is “$X \cdot M = C$”, where $X$ is a stochastic matrix, $M$ is the given matrix representing $f(x,y)$, and $C$ is some matrix (defined in the paper). I don’t see how the above comment follows from the condition given in the paper… Can you explain a bit more?

By:

ranjit on August 11, 2009 at 3:26 pm

ranjit: The condition in the paper is that there exist probability row vectors $p$ such that $pM = c$, for two specific row vectors $c$. Equivalently, that those two specific vectors lie in the convex hull of the rows of $M$.

But if you look at how the vectors $c$ are defined, each is close to the average row vector $\bar{r}$. In particular, $c = \bar{r} + O(\alpha)$, where $\alpha$ is the chance of revealing the output in each round. If the convex hull of the rows of $M$ contains volume, then $\bar{r}$ lies inside that polytope. Furthermore, for polynomial-sized domains, there is a ball around $\bar{r}$ with 1/poly radius that lies entirely within the polytope.

Now, the paper chooses $\alpha$ to be a specific value in its proof. But the value in the paper is only an upper bound on the feasible $\alpha$, and the proof works just fine for smaller $\alpha$ (with round complexity proportional to $1/\alpha$). So we can decrease $\alpha$ until $c$ lies inside the convex hull of the rows of $M$, and the round complexity remains polynomial.

This implication isn’t two-way, of course. If you duplicate a column, the convex hull of the rows has no volume, but the $c$ will lie on the hyperplane with those columns equal and so lie inside the convex hull. I’m not sure if there are more interesting cases where the reverse doesn’t hold, though.
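The hull-membership question itself is just an LP feasibility problem. A quick sketch, assuming `scipy` is available (the matrix is a toy example of mine, not one from the paper):

```python
import numpy as np
from scipy.optimize import linprog  # assumption: scipy is installed

def in_convex_hull_of_rows(M, v):
    """Is v a convex combination of the rows of M?  Solved as an LP
    feasibility problem: find p >= 0 with sum(p) = 1 and p @ M = v."""
    R, C = M.shape
    A_eq = np.vstack([M.T, np.ones((1, R))])     # p @ M = v  and  sum(p) = 1
    b_eq = np.concatenate([v, [1.0]])
    res = linprog(np.zeros(R), A_eq=A_eq, b_eq=b_eq)  # p >= 0 is the default
    return res.status == 0

M = np.array([[0., 0.], [0., 1.], [1., 0.]])
avg = M.mean(axis=0)                                  # the average row
inside = in_convex_hull_of_rows(M, avg)               # True
outside = in_convex_hull_of_rows(M, np.array([1., 1.]))  # False: beyond the hull
print(inside, outside)
```

This mirrors the argument above: for small enough $\alpha$ the target vector sits near the average row, and the LP certifies that it is still inside the polytope.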

By:

ecprice on October 20, 2009 at 5:43 pm