As part of some consulting work I was recently doing, I was asked to evaluate a proposed standard for some crypto protocol more complex than encryption, signatures, etc. (I am being deliberately vague here for obvious reasons.)
The proposal came with a [published] proof of security (which is good!). But it turned out that there was an easy attack on the proposed scheme (which is bad!). What was going on here? When I traced back through the literature, I found that the proof was fine; the problem was that the security definition the authors used ruled out precisely the attack we had found. Not exactly something I was happy to see, especially after I had just stressed to the people I was working with how important provable security is…
It did get me thinking about when weak security definitions are appropriate. I thought of two classic examples, which seem to me to be fundamentally different, and acceptable for somewhat different reasons:
1) The most common example is when we consider weak security definitions even though we know that stronger definitions exist. E.g., dealing with chosen-plaintext (CPA) security even though we know that the “right” definition [if one can really call it that] is chosen-ciphertext (CCA) security. Or focusing on semi-honest security rather than malicious security.
This can be justified in several ways. First, we often understand that we can use a scheme that satisfies a weak definition in order to construct a scheme that satisfies a strong definition (e.g., going from semi-honest security to malicious security using a general compilation technique, or going from CPA-security to CCA-security in the private-key setting); therefore, it is independently interesting to explore the weaker definition. Or, there may be application scenarios where — for whatever reason — a weaker definition is ok. (E.g., if the communication channel is such that passive eavesdropping is much easier than active tampering.)
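To make the second of those transformations concrete, here is a minimal encrypt-then-MAC sketch in Python. Everything in it is a toy illustration of my own: the SHA-256 counter-mode “keystream” merely stands in for an arbitrary CPA-secure scheme, and the point is only the shape of the generic private-key transformation (a MAC over the entire ciphertext, verified before anything is decrypted), not a scheme anyone should use.

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode. It stands in for an
    # arbitrary CPA-secure encryption scheme; not for real use.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(enc_key: bytes, mac_key: bytes, msg: bytes) -> bytes:
    # Encrypt-then-MAC: CPA-secure encryption first, then a MAC over the
    # whole ciphertext (nonce included).
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(msg, _keystream(enc_key, nonce, len(msg))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def decrypt(enc_key: bytes, mac_key: bytes, ctxt: bytes) -> bytes:
    nonce, ct, tag = ctxt[:16], ctxt[16:-32], ctxt[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    # Rejecting anything the sender did not produce is what upgrades CPA
    # security to CCA security: a tampered ciphertext never reaches the
    # decryption logic, so a decryption oracle is useless to the attacker.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("invalid ciphertext")
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, nonce, len(ct))))
```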
Ultimately, I think the crucial point is that when studying weaker definitions of this type we are aware that a stronger definition exists and so are able to apply a “sanity check” to our work (with the weaker definition) to see whether it makes sense. Note also that, in some sense, every definition we work with is “weak” in that it ignores various real-world aspects: for example, the standard definition of CCA-security ignores leakage attacks, but that does not mean that from now on we should only deal with a definition that incorporates leakage…
2) I claim that a fundamentally different example occurs when a “natural” definition that we would like to achieve is impossible. The best example here is our definition of chosen-ciphertext security, where we allow the adversary to request the decryption of any ciphertext of its choice except the very one the adversary is interested in! (I can think of one other example which isn’t quite as good. Feel free to share other examples in the comments.) If you think about it, this is a very strange definition: clearly, if the adversary had access to a decryption oracle in the real world, the first thing it would do is request decryption of the ciphertext it cared about! And, indeed, it is often hard to explain to people outside cryptography why this definition is a good one. (Let me be clear about the question I am raising. Of course we cannot consider a definition where the adversary can submit the challenge ciphertext to its decryption oracle, since that is impossible to achieve. The question then is: given that we can’t achieve this, maybe it is not worth studying CCA-security at all…)
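For those who prefer pseudocode to prose, the definition can be written as a short experiment. This is a bare-bones sketch of my own, with a hypothetical `adversary` object exposing `choose` and `guess` methods and with `enc`/`dec` standing in for an arbitrary private-key scheme; the only line that matters for the present discussion is the one enforcing the restriction.

```python
import os
from typing import Callable

def cca2_experiment(enc: Callable[[bytes, bytes], bytes],
                    dec: Callable[[bytes, bytes], bytes],
                    adversary) -> bool:
    key = os.urandom(32)
    challenge = None

    def dec_oracle(c: bytes) -> bytes:
        # The strange-looking restriction: the adversary may request
        # decryption of anything EXCEPT the challenge ciphertext itself.
        if c == challenge:
            raise ValueError("may not decrypt the challenge ciphertext")
        return dec(key, c)

    # Phase 1: the adversary picks two (equal-length) messages, with
    # access to its oracles.
    m0, m1 = adversary.choose(lambda m: enc(key, m), dec_oracle)
    b = os.urandom(1)[0] & 1
    challenge = enc(key, (m0, m1)[b])
    # Phase 2: the adversary keeps its oracles and tries to guess b.
    b_guess = adversary.guess(challenge, lambda m: enc(key, m), dec_oracle)
    # The scheme is CCA-secure if no efficient adversary does noticeably
    # better than probability 1/2 here.
    return b_guess == b
```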
I don’t like what might be the “standard” answer, which is that we just consider the strongest definition that we can hope to achieve. (In fact, I think this kind of thinking is what resulted in the flaw in the proposed standard.) While not a perfect answer, one way to motivate why the definition of CCA-security is useful is to show various attack scenarios where replaying the challenge ciphertext does not help. As a simple example, consider the well-known “auction attack,” where bidders submit encrypted bids and we do not want the adversary to be able to make his bid depend on the bid of someone else. It is true that submitting an equal bid would be bad, but it is also true in this scenario that the auctioneer could detect misbehavior if he receives identical ciphertexts from multiple bidders. (When we discuss CCA-security in our book, we give several other examples and spend a bit of time motivating why the definition makes sense.) A toy version of this attack is sketched below.
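Here is that toy version, run against a malleable scheme with no integrity check (the same made-up keystream as above, without the MAC). Everything below is a hypothetical illustration, not any real protocol; the point is that the adversary never needs the challenge ciphertext decrypted, so the restriction in the CCA definition costs it nothing.

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Same toy SHA-256 counter-mode keystream as above; illustration only.
    out, i = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + i.to_bytes(8, "big")).digest()
        i += 1
    return out[:n]

def cpa_only_encrypt(key: bytes, msg: bytes) -> bytes:
    # Plausibly CPA-secure, but malleable: no MAC, nothing is rejected.
    nonce = os.urandom(16)
    return nonce + bytes(a ^ b for a, b in zip(msg, keystream(key, nonce, len(msg))))

key = os.urandom(32)
victim_ct = cpa_only_encrypt(key, (1000).to_bytes(4, "big"))  # victim bids 1000

# Without decrypting anything, flip the top bit of the 32-bit bid. If bids
# fit in 31 bits, the mauled ciphertext encrypts a strictly larger bid.
nonce, body = victim_ct[:16], victim_ct[16:]
forged_ct = nonce + bytes(c ^ d for c, d in zip(body, b"\x80\x00\x00\x00"))

assert forged_ct != victim_ct
# forged_ct is a valid ciphertext different from the victim's, so even the
# restricted decryption oracle of the CCA experiment would decrypt it, and
# its plaintext reveals the victim's bid. CCA security rules out exactly
# this kind of mauling, despite the "no challenge queries" restriction.
```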
Here, however, I think one has to be very careful in proposing and studying a weak definition “just because” we can’t achieve what we really want. At a minimum, when introducing a new definition with a seemingly arbitrary weakening, there needs to be some discussion of why this weakening doesn’t make the definition vacuous. Such discussion was missing in the case I was studying. I would also add that, in the context of the proposed standard, it is not immediately obvious that the “ideal” definition is impossible to achieve; it may just be that the proposal itself does not achieve it.