Posted by: jonkatz | July 10, 2009

(When) is life fair?

Don’t be misled by the title of the post — it’s not another post about FOCS. (I will address some of the issues raised in the comments later, but wanted to talk about something else for now.)

Consider two players A and B holding inputs x and y who wish to jointly compute the value f(x,y). If you’re a complexity theorist you probably think of communication complexity, where the parties are cooperative and just want to minimize the communication needed to compute the function. But cryptographers are not so trusting. We are instead interested in the question of when the parties can compute f securely. Without giving a formal definition, let’s just say that “secure” here means, in particular, that (1) neither party should learn anything about the other party’s input (beyond what is implied by f(x,y) itself) and (2) neither party should be able to make the other party output an “incorrect” answer.

One of the great achievements of cryptography has been to show that, under reasonable assumptions, secure computation as above is possible for any f.

There is one caveat. The definition of “secure”, above, did not address fairness. Namely, it might be possible for one party to learn f(x,y) while preventing the other party from learning it. Indeed, Cleve proved (STOC ’86) that fairness is impossible even for a simple function like binary XOR. A little thought will convince you that this must be true for any (non-trivial) f: in any protocol computing f, a malicious party can simply abort the protocol as soon as it learns f(x,y). Since one party must learn the output first, this means no protocol can be fair.

Surprisingly, this intuition turns out to be wrong. Gordon, Hazay, Lindell, and I recently showed that certain non-trivial functions can be computed with complete fairness. We give two different “types” of protocols for doing this (for specific functions):

1. In the first protocol, the round in which a party learns the output depends on its input. In particular, if we number the possible inputs of A as x_1, \ldots, x_n (and similarly for B), then when A holds input x_i it learns the output in round i (and similarly for B). Moreover, if a party (say, A) aborts the protocol in round i, and the other party (B in this case) has not yet learned the output, then B simply takes A’s input to be x_i and evaluates the function itself (i.e., outputs f(x_i, y), where y is B’s actual input).

Why does this have a hope of giving fairness? If A aborts before round i, then A has not learned the output, so fairness holds trivially. If A aborts in round i (i.e., as soon as it has learned the output), then B ends up using A’s actual input anyway when it evaluates the function, and so we get fairness here as well. The difficult case is when A aborts in some round j>i, trying to “trick” B into using the “incorrect” input x_j. For specific functions, however, we can order the inputs of the parties to ensure that it does not matter whether B computes f(x_i, y) or f(x_j, y), since B gets the same answer in either case. (This description is slightly inaccurate; see the paper for details.) This works when f is the comparison function, and permutations thereof; a toy sketch of the abort-handling rule appears after this list. Note that the round complexity is linear in the size of the input domain.

2. The second type of protocol we show has the property that parties do not know when they have received the correct output. Namely, the protocol is such that the probability that a player has received the output by round i is 1-(1-\alpha)^i (for some constant \alpha); in all prior rounds, it essentially receives a random output. (See the paper for details.) Now if a party aborts, it cannot be sure whether it has learned the correct output or not, and it is possible to prove that this approach gives complete fairness for certain functions (if \alpha is set appropriately). Note that, in order for correctness to hold (with all but negligible probability) for honest parties running the protocol, we need the number of rounds to be \omega(\log k), where k is the security parameter. We also have a matching lower bound showing that this round complexity is necessary. (A sketch of this idea also appears below.)
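To make the abort-handling rule of the first protocol concrete, here is a toy sketch in Python for the comparison function f(x,y) = 1 iff x > y. All of the cryptography is abstracted away by an idealized dealer, and the function and variable names are illustrative rather than taken from the paper; this models only the logic described above, not the actual protocol.

```python
# Toy model of the first protocol for f(x, y) = 1 iff x > y, with inputs
# in {1, ..., n}.  A "dealer" stands in for the cryptography and simply
# tells each party the output in the round determined by that party's
# own input.

def f(x, y):
    return 1 if x > y else 0

def run(x, y, n=5, abort_round=None):
    """A holds x and learns the output in round x; B holds y and learns
    it in round y.  If A aborts in round j before B has learned the
    output, B substitutes x := x_j and outputs f(x_j, y) itself."""
    out = f(x, y)
    a_out = b_out = None
    for j in range(1, n + 1):
        if j == x:
            a_out = out              # A learns the true output in round x
        if abort_round == j:
            if b_out is None:        # B has not learned the output yet:
                b_out = f(j, y)      # default A's input to x_j
            return a_out, b_out
        if j == y:
            b_out = out              # B learns the true output in round y
    return a_out, b_out

n = 5
for x in range(1, n + 1):
    for y in range(1, n + 1):
        # Aborting in round x (right after learning) gains A nothing:
        assert run(x, y, n, abort_round=x) == (f(x, y), f(x, y))
        # Aborting later does not help either: with this ordering of the
        # inputs, f(x_j, y) = f(x, y) whenever B is forced to default.
        for j in range(x + 1, n + 1):
            assert run(x, y, n, abort_round=j)[1] == f(x, y)
```

The two assertions check exactly the two non-trivial cases discussed above: aborting in round i gains A nothing, and aborting in a later round j > i forces B onto a default input that, for this ordering of the inputs, gives the same answer.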

The technique of giving random outputs in several rounds, in order to hide the round when the “true” output is revealed, was used subsequently in work of Moran, Naor, and Segev to get optimal bias in coin tossing, as well as in recent work by Gordon and myself on partial fairness. (More on that in a later post.)
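Here, similarly, is a minimal sketch of the second, threshold-round technique, again with an idealized dealer standing in for the cryptography; the parameter names and the placeholder function are illustrative only. The point to notice is that an aborting party cannot tell whether the bit it holds is already the true output.

```python
import random

def f(x, y):
    return x & y                        # any binary function, for the toy model

def run(x, y, alpha=0.25, rounds=60, abort_round=None):
    """A hidden threshold round i* is geometric with parameter alpha:
    before round i* each party receives a fresh random bit, from round
    i* on the true output -- and nothing marks which round i* was."""
    out = f(x, y)
    i_star = 1
    while random.random() > alpha:      # Pr[i* = i] = alpha * (1 - alpha)**(i - 1)
        i_star += 1
    b_bit = random.getrandbits(1)       # B's current best guess
    for r in range(1, rounds + 1):
        a_bit = out if r >= i_star else random.getrandbits(1)
        if abort_round == r:            # A quits right after round r's message
            return a_bit, b_bit         # B must output its last bit
        b_bit = out if r >= i_star else random.getrandbits(1)
    return a_bit, b_bit

# An aborting A cannot tell whether its bit is already the true output, and
# its edge over B is only the O(alpha) chance that i* is the very round it
# aborted in.  Honest runs err with probability (1 - alpha)**rounds, which
# is why omega(log k) rounds are needed for correctness.
trials = [run(1, 1, abort_round=10) for _ in range(100000)]
print(sum(a for a, _ in trials) / 1e5, sum(b for _, b in trials) / 1e5)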

Open Problems

There are several open questions in this area, but let me focus on one. Can we characterize which functions can be computed with complete fairness in the two-party setting? As a start, can we rule out fairness for any specific functions? (The only examples of functions for which we know that fairness is impossible are those that imply coin tossing, and those ruled out by the impossibility result in the partial fairness paper.) What is so frustrating about this question is that fairness is so “obviously” impossible for many natural functions, yet so difficult to rule out.

Responses

  1. Your protocol 2 possibility result and the coin tossing impossibility aren’t that far apart.

    Intuitively, your protocol 2 requires one player to have “more control” over the output of the function. This makes it acceptable to give him an \alpha advantage in computing f(x,y), because he already has that advantage in misleading his opponent.

    More formally, suppose a binary f(x,y) has an R\times C result matrix M with columns \{c_y\}. Then protocol 2 works if the convex hull of the rows of M has positive volume in C-dimensional space (at least for polynomial-size domains). Equivalently, it works if adjoining a column of ones gives a rank-(C+1) matrix. Also equivalently, if there do not exist non-trivial* scalars \{a_y\}, b with \sum_y a_y c_y = b\mathbf{1}. It also works if any of these properties holds for M^T.

    But one can show that, if there exist non-trivial scalars \{a_y\}, b with \sum_y a_y c_y = b\mathbf{1} and \sum_y a_y \neq 0, and the same is true for M^T, then the coin-tossing impossibility result applies. Player Y plays y with probability proportional to |a_y|, and outputs f(x,y) if a_y > 0 and 1-f(x,y) otherwise. He gets a bit whose distribution is independent of how X plays. Player X does the same. If \sum_y a_y \neq 0, the bits are correlated, and we can toss coins.

    So the only gap (for binary polynomial-domain functions) is that pesky “and \sum a_y \neq 0” condition in the impossibility result. Unfortunately, there are functions that fit in this gap. For example, a result for
    M = \displaystyle \left(\begin{array}{cccc} 1&1&1&1\\ 1&0&1&0\\ 0&1&0&1\\ 0&0&1&1 \end{array}\right)
    doesn’t easily follow from your protocol, since it’s a square matrix without constant or duplicated rows or columns. But the column player can only get a distribution independent of X by playing \frac{1}{4}(1, -1, -1, 1). This is also independent of (x, f(x,y)), so my coin-tossing reduction doesn’t apply.

    Something like the “\sum a_y \neq 0” condition is necessary on the impossibility side, since it’s why the impossibility doesn’t apply whenever M has duplicated rows or columns. Duplicated rows or columns are also a case where the rows of M span no volume but protocol 2 works, so my guess is that all cases in this gap can be fairly computed. (A numerical check of these conditions on the matrix above appears just below.)

    * Non-trivial in the sense that some a_y \neq 0 where c_y is not constant.
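These conditions are easy to check numerically. The following sketch (illustrative code; the rank and null-space formulations are the ones from the comment above) tests the example matrix and recovers the scalars \{a_y\}, b:

```python
import numpy as np
from scipy.linalg import null_space

# The 4x4 example matrix from the comment above.
M = np.array([[1, 1, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1]], dtype=float)

R, C = M.shape
N = np.hstack([M, np.ones((R, 1))])    # adjoin a column of ones

# Protocol 2's condition (per the comment): rank(N) == C + 1, i.e. no
# non-trivial (a, b) with sum_y a_y c_y = b * 1.
print("rank [M | 1] =", np.linalg.matrix_rank(N), "(need", C + 1, ")")

# The null space of [M | -1] holds all (a_1..a_C, b) with M @ a = b * 1.
ns = null_space(np.hstack([M, -np.ones((R, 1))]))
a, b = ns[:C, 0], ns[C, 0]
print("a =", np.round(a / np.abs(a).max(), 3), " b =", round(b, 3),
      " sum a_y =", round(a.sum(), 3))
# Output: rank 4 rather than the required 5, and the only null direction
# is a proportional to +/-(1, -1, -1, 1) with b = 0 and sum a_y = 0 -- so
# the matrix falls in the gap: neither the protocol's rank condition nor
# the coin-tossing reduction (which needs sum a_y != 0) applies.
```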

  2. ecprice, I don’t see how you go from a biased coin that the parties might not agree on, to a fair coin that they agree on.

  3. Hmm, looks like I messed up. The actual condition should be something like \sum a_y > b > 0 rather than \sum a_y \neq 0, in addition to \sum a_yc_y =b\mathbf{1}. Not as good, sadly.

    We want to show that each party can go from their biased coin to an unbiased coin that agrees with f(x, y) with probability \frac{1}{2} + \epsilon. Since each party’s coin is independent, two such coins agree with a decent chance.

    Scale the a_y so \sum |a_y| = 1, and let p_Y = \sum_{a_y > 0} a_y = (1 + \sum a_y)/2. Let g_Y be the biased coin Y computes, which is f(x, y) if a_y > 0 and 1 - f(x, y) otherwise. Then \Pr[g_Y = f(x, y) \mid x] = p_Y regardless of x, and some algebra gives \Pr[g_Y = 1 \mid x] = b + 1 - p_Y. So if 0 < b < \sum a_y, we have p_Y > 1/2 and 1 - p_Y < \Pr[g_Y = 1 \mid x] < p_Y. This means that recentering g_Y around 1/2 (for example, by choosing h_Y = g_Y with probability 1/(2\Pr[g_Y = 1]) and h_Y = 0 otherwise) gives an h_Y with \Pr[h_Y = 1] = 1/2 and \Pr[h_Y = f(x, y) \mid x] > \Pr[h_Y = g_Y]\Pr[g_Y = f(x, y) \mid x] > 1/2. Repeating for X gives an h_X, and then h_X and h_Y are two fair coin flips that agree with a decent chance.
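For concreteness, here is a small numerical check of the recentering argument above. The matrix and coefficients are a toy example (not taken from the thread), chosen so that \sum_y a_y c_y = b\mathbf{1} with \sum_y a_y > b > 0:

```python
import numpy as np

# Toy example: coefficients a = (0.25, 0.15, 0.5, -0.1) satisfy
# sum_y a_y c_y = b * 1 for b = 0.65, with sum |a_y| = 1 and
# sum a_y = 0.8 > b > 0.
M = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0]], dtype=float)
a = np.array([0.25, 0.15, 0.50, -0.10])
b = 0.65
assert np.allclose(M @ a, b)

p_Y = a[a > 0].sum()                     # = (1 + sum a_y)/2 = 0.9
for x in range(M.shape[0]):
    # Y samples y with prob |a_y|; g_Y = f(x,y) if a_y > 0 else 1 - f(x,y)
    g1 = sum(abs(ay) * (M[x, y] if ay > 0 else 1 - M[x, y])
             for y, ay in enumerate(a))
    assert np.isclose(g1, b + 1 - p_Y)   # Pr[g_Y = 1 | x] doesn't depend on x
    keep = 1 / (2 * g1)                  # recenter: keep g_Y w.p. 1/(2 Pr[g_Y = 1])
    print(f"x={x}: Pr[g_Y=1|x]={g1:.2f}, Pr[h_Y=1]={g1 * keep:.2f}, "
          f"Pr[h_Y=f(x,Y)|x] >= {keep * p_Y:.2f}")
# Both rows print Pr[h_Y=1] = 0.50 with correlation bound 0.60 > 1/2.
```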

  4. ecprice: You say – “Then protocol 2 works if the convex hull of the rows of M has positive volume in C-dimensional space (at least for polynomial-size domains). Equivalently, it works if adjoining a column of ones gives a rank-(C+1) matrix. Also equivalently, if there do not exist non-trivial* scalars \{a_y\}, b with \sum_y a_y c_y = b\mathbf{1}. It also works if any of these properties holds for M^T.”

    The condition given in the paper is “X * M = C”, where X is a stochastic matrix, M is the given matrix representing f(x,y), and C is some matrix (defined in the paper). I don’t see how the above comment follows from the condition given in the paper… Can you explain a bit more?

  5. ranjit: The condition in the paper is that there exist probability row vectors X_b such that X_b M = C_b for two specific row vectors C_b. Equivalently, that the two specific vectors C_b lie in the convex hull of the rows of M.

    But if you look at how the C_b are defined, each is close to the average row vector P = \frac{1}{n}\mathbf{1}^T M. In particular, |C_b - P| = O(\alpha), where \alpha is the chance of revealing the output in each round. If the convex hull of the rows of M has positive volume, then P lies inside that polytope. Furthermore, for polynomial-size domains, there is a ball of 1/\mathrm{poly} radius around P that lies entirely within the polytope.

    Now, the paper chooses \alpha to be a specific value in its proof. But the value in the paper is only an upper bound on the feasible \alpha, and the proof works just fine for smaller \alpha (with round complexity proportional to 1/\alpha). So we can decrease \alpha until the C_b lie inside the convex hull of the rows of M, and the round complexity remains polynomial.

    This implication isn’t two-way, of course. If you duplicate a column, the convex hull of the rows has no volume, but the C_b will lie on the hyperplane where those coordinates are equal, and so still lie inside the convex hull. I’m not sure if there are more interesting cases where the reverse fails, though.
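The geometric picture above is easy to experiment with. Here is a sketch (illustrative code; the LP is the standard convex-hull membership test, and the perturbation size is an arbitrary stand-in for the O(\alpha) distance of the C_b from P) that checks whether a point lies in the convex hull of the rows of M:

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull_of_rows(M, p):
    """Feasibility LP: is there a probability vector lam (lam >= 0,
    sum lam = 1) with lam @ M == p?"""
    R, _ = M.shape
    A_eq = np.vstack([M.T, np.ones((1, R))])
    b_eq = np.append(p, 1.0)
    res = linprog(c=np.zeros(R), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * R)
    return res.success

# The 4x4 example matrix from the first comment.
M = np.array([[1, 1, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1]], dtype=float)
P = M.mean(axis=0)                               # the average row
print(in_convex_hull_of_rows(M, P))              # True: P is in the hull
# But the hull of this M has no volume, so some small perturbations of P
# (stand-ins for the O(alpha) offsets of the C_b) already escape it:
print(in_convex_hull_of_rows(M, P + 0.05 * np.array([1, -1, 0, 0])))  # False
```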

