Posted by: jonkatz | February 14, 2012

Two attacks and two announcements

I’m teaching Computer and Network Security this semester, and I usually start out by covering crypto at a high level. My goal is not really to have the students understand the crypto itself (that’s what my semester-long Cryptography course is for) but primarily to have them understand how to use crypto properly.

The past few times I have taught a security course, I followed up my lectures on crypto with 2-3 lectures about how crypto gets broken in the real world. One obvious way is when it gets implemented wrong (e.g., the WEP attacks). But there are other, less obvious and more clever ways as well. Several examples come to mind (and are covered in my class), but let’s name two: bad random-number generation, and what I’ll call a mismatch between the security provided by a cryptographic primitive and the requirements of the application.

With this in mind, it’s fortuitous to hear today about two such attacks:

  1. The flaw in the random-number generation apparently used in several real-world implementations of RSA-based cryptosystems (paper here). (Interesting side note: some news articles I’ve seen say that the paper will be presented at a conference in Santa Barbara in August. The Crypto deadline is 3 days away. Has the paper been accepted to Crypto before the review process has even started??)
  2. The attack on Google Maps over SSL. Summary: the researchers exploited the fact that even the best encryption leaks the length of the plaintext. This observation has been applied several times before in different contexts. While it may be “obvious” that encryption (practically speaking) can’t hide the plaintext length, I wish cryptographers, when teaching the definition of secure encryption, would point out that leaking the plaintext length is often a real problem. (Toy sketches of both attacks appear right after this list.)
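To make these concrete in class, here are two toy sketches in Python. They are illustrations only: the primes, moduli, and query strings below are made up, and real keys use primes of 512 bits or more, but the underlying ideas are the same. First, the shared-prime problem: if a bad random-number generator causes two RSA moduli to share a prime factor, a single gcd computation (cheap even for 1024-bit moduli) factors both keys, using nothing but the public keys.

```python
from math import gcd

# Toy illustration: a bad RNG reuses the same prime on two different devices.
# (Tiny primes stand in for the 512+ bit primes used in real RSA keys.)
p_shared, q1, q2 = 1000003, 1000033, 1000037

n1 = p_shared * q1  # public modulus of "device 1"
n2 = p_shared * q2  # public modulus of "device 2"

g = gcd(n1, n2)     # an eavesdropper needs only the two public moduli
print("shared prime:", g)                 # recovers p_shared
print("n1 factors as:", g, "*", n1 // g)
print("n2 factors as:", g, "*", n2 // g)
```

Second, length leakage: even a one-time pad, which hides the content of a message perfectly, produces ciphertexts whose lengths track the plaintexts. (The query strings here are hypothetical; the point is only that their lengths differ.)

```python
import secrets

def otp_encrypt(plaintext: bytes) -> bytes:
    # One-time pad: information-theoretically hides the *content* of the message...
    pad = secrets.token_bytes(len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, pad))

# ...but not its length, which an eavesdropper on the wire sees directly.
for query in (b"nyc", b"san francisco ca"):
    print(len(otp_encrypt(query)))  # 3 vs. 16 -- enough to tell the queries apart
```

Real traffic-analysis attacks (like the Google Maps one) combine many such length observations, but the root cause is exactly this.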

More examples for me to cover in class!


On a separate note, two announcements:

  1. Steven Galbraith referred me to a wiki listing invited speakers at cryptography conferences. (The scope is to be interpreted as broadly as possible — if a conference is not listed, it just means he didn’t get around to it yet.) One purpose is to assist conference program chairs in the selection of invited speakers. Help is welcome from anyone who can fill in the gaps.
  2. Videos of the talks from the (infamous?) “Is Cryptographic Theory Practically Relevant?” workshop are now online. I’m glad to see that my blog got mentioned at the beginning of Vaudenay’s talk — see here (and especially the comments) for the history.

Responses

  1. Another great example of a recent attack is the one by Shahram Khazaei and Douglas Wikström (see http://eprint.iacr.org/2012/063). In their paper they demonstrate weaknesses in mix-nets with randomized partial checking, introduced by Jakobsson, Juels, and Rivest in 2002. Such mix-nets were used in at least two implementations of voting systems (at least one of which was deployed in real life).

    As far as I know, the original protocols did not come with a full proof of security (neither the mix-net itself nor the implementation that uses it). This is yet another demonstration of the importance of security reductions in protocol design (which actually relates to the topic of the workshop you mentioned in the post).

  2. “This is yet another demonstration of the importance of security reductions in protocol design”

    Yet the *invited speaker* at Eurocrypt 2012 is A. Menezes talking about “Another Look at Provable Security”

  3. “This is yet another demonstration of the importance of security reductions in protocol design”

    Yet the *invited speaker* at Eurocrypt 2012 is A. Menezes talking about “Another Look at Provable Security”

    Actually, I think this misses part of the point I was trying to make. You can take the most provably secure encryption scheme you like, but if it leaks the length then your overall application may be insecure. And this is not an isolated example.

    For all the issues I take with Koblitz’s and Menezes’s delivery, I happen to think they have some valid points that the theoretical crypto community would do well to take to heart.

  4. FYI — It looks like there is another work related to the NY Times article:

    https://freedom-to-tinker.com/blog/nadiah/new-research-theres-no-need-panic-over-factorable-keys-just-mind-your-ps-and-qs

    I don’t get the title of the ePrint paper “Ron was wrong, Whit is right”. We should now use ElGamal because some people are incorrectly implementing RSA? That’s ridiculous…

  5. Actually, I think this misses part of the point I was trying to make. You can take the most provably secure encryption scheme you like, but if it leaks the length then your overall application may be insecure. And this is not an isolated example.

    I don’t think I ever heard anybody claim that a security reduction protects you against all conceivable attacks (except perhaps as part of a straw-man argument against them). One of the main advantages of having security reductions is that, when done properly, they force the designer to define the security guarantees explicitly, and as a result make it totally clear what the adversary’s power is and what constitutes a break of the system.

    There are of course many examples of attacks that do not fall into the scope of a definition of security. But this doesn’t mean that security reductions should be abolished altogether. It only means that they should be interpreted properly (which amounts to actually understanding what is guaranteed by the definition of security).

    It seems to me that this boils down to the following question: what is more likely? That a proof of security is misinterpreted, or that a system that lacks a proof of security is broken?

    Personally, I can’t see how a scheme is better off without a security reduction than with it. The only argument I can see against having such a reduction is that it may result in a “contrived” (and hence less efficient/effective) construction. This could be a legitimate argument by somebody who is predominantly driven by pragmatic considerations. But it doesn’t mean that we should refrain from striving to attain such reductions. In the long term, this is bound to result in simple and solid constructions and is well worth the effort.

  6. I never claimed that proofs are not a good thing to have. However, there are some people who don’t recognize the limitations of provable security (or realize them but choose to ignore them).

  7. “However, there are some people who don’t recognize the limitations of provable security”

    Is provable security more limited than security without a proof?

