The Eurocrypt deadline is today, which gives me an excuse to talk about a recent paper of mine that I had been promising to talk about for a while.

In the setting of distributed computation, a protocol is *fair* if all parties receive their outputs even in the event of malicious behavior by some of the other parties (in particular, even if they abort the protocol early). Unfortunately, Cleve showed in 1986 that complete fairness is impossible — even for some very simple functions — whenever an honest majority is not present. In particular, fairness is impossible, in general, in the two-party setting. (Although complete fairness turns out to be possible for some non-trivial functions, as discussed here.)

As such, there has been a significant amount of work trying to achieve *partial* fairness. I won’t survey all this work here; instead, I will just note that (1) several papers on the topic don’t give *any* formal definition of what they are trying to achieve; (2) a few papers give definitions that are either ad-hoc, or are not very easy to understand (to put it mildly); (3) protocols suggested in many of the papers have significant drawbacks.

Expanding a little on this last point, one line of work has studied protocols guaranteeing something of the following form: at any point in the protocol, the computational efforts required by each party to recover their output are within a constant factor of each other. While at first appealing, one severe drawback of this approach is that it leaves ambiguous what the honest party is supposed to do in case of an abort! If the honest party always tries to recover its output, then it can be forced to run for exponential time. On the other hand, if there is some cut-off round before which point the honest party does not invest the time to recover its output, then the adversary can always abort the protocol just before that point.
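To make the dilemma concrete, here is a toy cost model (the numbers and function names are entirely hypothetical, not taken from any specific protocol in that line of work): suppose that after round r of an R-round protocol, recovering the output by brute force costs 2^(R−r) units of work. Either the honest party always recovers (and can be forced to do exponential work), or it adopts a cut-off (and the adversary aborts one round before recovery becomes affordable):

```python
# Toy model of the "gradual release" dilemma. Hypothetical cost model:
# after round r of an R-round protocol, recovering the output by brute
# force costs 2**(R - r) work units.

def recovery_cost(r, R):
    """Work needed to recover the output if the adversary aborts after round r."""
    return 2 ** (R - r)

def honest_work(abort_round, R, cutoff=None):
    """Work the honest party spends when the adversary aborts after abort_round.

    With no cutoff it always recovers (possibly exponential work);
    with a cutoff it gives up whenever recovery would exceed the cutoff,
    in which case it learns nothing (returns 0 work, no output)."""
    cost = recovery_cost(abort_round, R)
    if cutoff is None:
        return cost                          # always recover
    return cost if cost <= cutoff else 0     # give up: fairness lost

R = 40
# Horn 1: always recover -> an abort at round 0 forces 2**40 work.
assert honest_work(0, R) == 2 ** 40
# Horn 2: cut off at 2**10 work -> the adversary aborts at round R-11,
# one round before recovery becomes affordable, and the honest party
# never gets its output.
assert honest_work(R - 11, R, cutoff=2 ** 10) == 0
assert honest_work(R - 10, R, cutoff=2 ** 10) == 2 ** 10
```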

In a recent paper by Dov Gordon and myself, we suggest for the first time a *simulation-based* definition of partial fairness within the standard real/ideal world paradigm. (A slightly outdated version of our paper is available here.) Our definition is as follows: we take the usual ideal-world model (used when defining secure computation with complete fairness) and require that the real-world execution of the protocol and this ideal world be indistinguishable up to an additive distance of 1/p (for some specified polynomial p). We refer to this notion as "1/p-security". Note that 1/p-security is technically incomparable to (but intuitively more appealing than) the standard approach ("security with abort"), which *weakens* the ideal model to one where fairness is not guaranteed at all, but requires full-fledged computational indistinguishability (with respect to this weaker ideal model). Also, although the definition of 1/p-security allows privacy to be violated with probability 1/p, in fact all our protocols are completely private. (Some of them are also secure-with-abort.)
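In symbols (my notation here, not necessarily the paper's), the requirement of 1/p-indistinguishability between the real and ideal executions can be sketched as follows:

```latex
% 1/p-indistinguishability (sketch): for every PPT adversary A there is a
% PPT simulator S such that, for every PPT distinguisher D and all inputs
% x, y and security parameter n,
\[
  \Bigl|\, \Pr\bigl[ D\bigl(\mathrm{REAL}_{\pi,\mathcal{A}}(x,y,n)\bigr) = 1 \bigr]
       \;-\; \Pr\bigl[ D\bigl(\mathrm{IDEAL}_{f,\mathcal{S}}(x,y,n)\bigr) = 1 \bigr] \,\Bigr|
  \;\le\; \frac{1}{p(n)} + \mathrm{negl}(n),
\]
```

where the ideal world is the one with complete fairness; standard security-with-abort instead drops the 1/p slack but guarantees only the weaker, unfair ideal model.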

In addition to introducing this new definition, we also completely settle the question of feasibility of partial fairness (with respect to this definition) in the two-party setting. Namely, we show:

**Positive results:**

Let f = (f_1, f_2) be a (randomized) functionality, where player 1 provides input x and receives output f_1(x, y), and player 2 provides input y and receives output f_2(x, y).

- As long as one of the domains of f is polynomial size, then for any polynomial p there is a protocol computing f that is both 1/p-secure and secure-with-abort.
- As long as one of the ranges of f is polynomial size, then for any polynomial p there is a protocol computing f that is 1/p-secure.

**Negative results:**

Our negative results show that the above are optimal:

- There is a (deterministic) function f, where each of the domains and ranges has super-polynomial size, such that there is no protocol computing f that is both 1/p-secure and secure-with-abort.
- There is a (deterministic) function f, where each of the domains and ranges has super-polynomial size, such that there is no protocol computing f that is 1/p-secure.

## Open questions:

This line of work (continuing the earlier work on complete fairness) is among my favorites of the things I’ve worked on recently. Fairness is a natural and basic problem that we still don’t fully understand; questions in the area are easily accessible, yet resolving these questions is difficult and seems to require new techniques. Several compelling questions remain; here are some of my favorites:

- Partial fairness in the *multi-party setting* is wide open. The only positive results I am aware of (besides those functions for which complete fairness is possible) are for coin tossing, and the only negative results I know about are those that can be derived as extensions of the impossibility results from the two-party case.
- Determining the exact *round complexity* of protocols achieving fairness or partial fairness seems interesting, not just in its own right but because I suspect it will shed light on the issue of fairness itself. The round complexity of partially fair, two-party coin tossing was recently resolved by Moran, Naor, and Segev; other than that, the question is completely open.
