Posted by: jonkatz | October 13, 2009

Algorithmic cryptography (guest post by Yehuda Lindell)

I asked Jon to allow me to write a guest post about an issue that I have been thinking a lot about lately. The question is whether there is a place for “algorithmic cryptography research” within the crypto community. Of course, I need to explain what I mean. Algorithmic research is all about finding ever more efficient solutions to problems. If the best known algorithm for a problem runs in time n^5 and you find an algorithm that takes less time, then this is progress, even if the new algorithm runs in time n^4 and is still quite expensive. Experience shows that this methodology eventually yields more and more efficient solutions, until they are “optimal” or at least much better.

In crypto, things seem more complicated. It is relatively easy for me (and I guess for others) to judge a pure theory paper. I understand the contribution, whether it is conceptual or technical, and so on (of course, some papers are easier to judge than others, but overall it’s not too problematic). However, it seems much more difficult to judge a paper that constructs a “more efficient protocol”. Once again, I don’t mean a protocol that is the first to achieve a constant number of rounds, because this is again a theoretical achievement. I mean a protocol that is more efficient than previously known protocols in terms of round complexity, number of exponentiations, and so on. Working in this area seems to be very dangerous. Sometimes such papers are very well received, and sometimes they are not. The question is: what distinguishes the well-received papers from the less well-received ones? Well, I have some ideas. First, a protocol with a clear asymptotic improvement often has a better chance. Second, introducing significantly new techniques seems to be almost crucial to getting such a paper accepted. The problem with judging new protocols in this way is twofold. First, there are some asymptotically efficient protocols out there that perform really badly on concrete parameters. Second, if I take a problem and come up with a more efficient solution than anything known (and of course provide a rigorous proof), then should I throw it out because the technique isn’t new?! I’ll even go so far as to venture that it’s better to have a less efficient protocol with new techniques than a more efficient protocol that uses old techniques in a better way. What about using the old techniques but coming up with a much better analysis that shows that a smaller security parameter can be used? All of these “achievements” are hard to judge, and generally hard to sell, unless we explicitly state that we want to have algorithmic cryptography.

In my opinion, if we don’t encourage algorithmic improvements (which don’t involve radically new techniques, but are still often very hard to achieve), then we won’t ever reach the goal of having really efficient secure protocols. At the moment, I’m hesitant to work on such problems, because I’m not sure that I’ll have an audience. (I stress that it’s being in the middle that’s the problem. If I come up with a protocol that’s really efficient and can be used in practice, then I can sell it to the ACM CCS community. However, I also think the crypto community should be interested in it, and this is also really my community. Furthermore, it will take many years to get to practical protocols, and we need to respect small improvements in that direction, or we’ll never get there.)

Your thoughts?

Responses

  1. Nice post. I have felt the same way as Yehuda at several points. Crypto is different from many other areas in theory in that eventually we do want our protocols to be efficient enough to be deployed in real systems. Designing such efficient protocols cannot be left to the systems security community; it has to be done by cryptographers, and currently it seems that such a research program is not given priority in our community (primarily because of the issues related to the difficulty of judging the contribution).

  2. If one considers algorithmic crypto as legitimate crypto research, then should experiments also be considered relevant?

  3. Obelix, I would guess that it depends on what type of experiments you mean. Implementations that increase our understanding of the costs of the different types of operations (bandwidth, memory, asymmetric, symmetric) are definitely valid research. I wouldn’t publish such a paper at TCC, but if the paper has an important crypto contribution (and not every such paper has one), then it could definitely go to CRYPTO or EUROCRYPT. In any case, this issue is a little bit different, because these types of experiments do have an audience at conferences like ACM CCS, whereas the types of problems I am talking about fall between the cracks and don’t have much of an audience at all.

  4. Yehuda: It seems inherently harder to judge algorithmic improvements in Crypto. Aside from the issues you mention,

    * How does one evaluate solutions based on different assumptions? Say, an O(n)-time encryption scheme based on the DDH assumption versus a bilinear ABCXYZ assumption versus (even worse) something totally different, like a lattice assumption? How about if I throw a random oracle into the party?

    Also, if I write a paper whose selling point is an efficient crypto protocol *without random oracles*, is it a worthwhile contribution, given that in practice people will anyway fall back to a more efficient protocol that uses random oracles left, right, and center? (For the record, I don’t have a clear opinion about this either way.)

    * Which efficiency measure is the right one to use? Rounds, number of exponentiations, or communication complexity? I have been told (by a systems person) that improving round complexity is a waste of time, since latency in communication is not so much of a bottleneck any more (although, admittedly, I have no clue why that is the case).

    It is also conceivable that a solution that improves one of these measures pays dearly in terms of another. This kind of problem seems largely absent when one comes up with an algorithm for, say, bipartite matching with a better running time.

  5. Ah, Vinod, I couldn’t agree more. This is exactly the problem. However, the current solution – which is to essentially discourage research in the area due to this difficulty – will end up hindering progress on this very important issue. So, maybe we need to brainstorm this as a community and come up with some EXPLICIT guidelines on when we consider something like this a contribution and when not. This could take the form of a “working group” with an online discussion forum and physical meetings at TCC, Eurocrypt, CRYPTO and so on (like a birds-of-a-feather meeting). I am also aware that a lot of “junk research” in this direction is submitted, so we do need a good way to distinguish the good from the bad.

  6. Yehuda: I think the problem is not as acute as you describe it.

    First of all, there seem to be enough venues whose focus is algorithmic improvements in performance. (FSE and CCS come to mind, and I don’t think that conferences like CRYPTO, EUROCRYPT, and ASIACRYPT are as averse to the issue as you think.)

    Second of all, there is nothing wrong with people from the Crypto community publishing in other venues. I would even argue that it is preferable to publishing within your own community. After all, researchers in those communities are much better equipped to judge how much of a practical improvement a certain algorithm offers.

    You should keep in mind that once you enter the realm of performance, there is less room for philosophy. You simply try to write as fast an implementation as you can and measure it in comparison to other implementations. The qualitative discussion on the nature of the underlying cryptographic assumption can be left to theoretical cryptographers, who are always happy to engage in idle philosophical debates (regardless of performance issues).

    When we (Vadim, Daniele, Chris, and I) designed a hash function that we thought was very fast, we implemented it and submitted a paper to FSE. As a result, we received very valuable feedback from the practical community, and learned a lot (both from the feedback and from actually having to address “real life” performance issues). I don’t think that we would have gotten equally relevant feedback from the theoretical community. As an added benefit, we exposed the practical community to theoretical ideas, which may end up having some impact on them in return. A classic win-win situation.

    I really don’t see where the problem is, or why we need to set up new forums to address it (and isn’t this blog a good enough forum for this kind of discussion anyway?).

  7. Alon, your experience relates to a practical construction. As such, it has a good audience elsewhere. However, algorithmic improvements that improve the current situation but still leave you very far away from what is practical have a harder time finding an audience (especially outside of the crypto community). My claim is that without these intermediate steps it will be hard to get to practical (or close to practical) constructions that can then be judged as you discussed.

    Yes, this is a good place to discuss and open the issue up. However, I think that to reach conclusions, more discussion than is possible here is probably warranted.

  8. Yehuda, I’m not convinced the situation is as bad as you describe either. First of all, I think papers “without new ideas” are not, generally speaking, appropriate for the top venues, or for theoretical venues in general. An exception might be made for experimental papers (I’m thinking specifically here of papers that carry out attacks or solve factoring challenges, though I guess it could apply to implementing known protocols as well.)

    So let’s restrict our attention to papers that do have some new idea, and whose main focus is an efficiency improvement. Then, besides evaluating the novelty of the new idea, the question is: how important is the efficiency improvement? Is it a significant improvement? Is it an improvement in a protocol no one cares about, a protocol not currently in use but that might potentially be used, or a widely-used protocol where efficiency improvements would have immediate impact? Ultimately, the question of acceptance boils down to “how interesting would this paper be to the Crypto/Eurocrypt community?”

    Yehuda, can you suggest any papers improving efficiency that should have been accepted to Crypto/Eurocrypt, but weren’t? I can think of plenty of papers that were accepted to Crypto/Eurocrypt, as well as papers accepted to places like CCS that fit very well there. So I’m not sure I see much of a problem.

  9. PS: I’m glad your post is demonstrating that people are still reading this blog. What — no one likes posts about teaching? 😉

  10. Here is a thought: are crypto protocols that sophisticated, algorithmically speaking? Do they even allow such granularity in performance (i.e., n^4 vs. n^3)? What other algorithmic ideas do we use beyond Gaussian elimination, modular exponentiation, FFT, and GCD (all of which are a priori well studied)?

    I would be happy to see an example of a cryptographic protocol where, say, the local running time of the parties exceeds n^3. Right now, I can’t think of an example (except perhaps the original protocols for NIZK, or Craig’s new cryptosystem).

    For protocols that run slower than that, I can imagine how an algorithmic idea that offers some asymptotic improvement adds value, and I feel it should be possible to find a suitable venue for publishing this kind of result. The criterion here should be whether the result is interesting and/or there is a new idea in it, just as for any other kind of result.

    However, for cases in which the running time is less than, say, O(n log n), there is not much point in comparing asymptotic behavior; you may as well compare concrete performance. The same applies to other measures, such as space and round complexity (and this is without even taking into consideration the difficulty of comparing across different measures). So we are left with the gap between O(n^2) and O(n log n). In this case it’s very simple: either you can compare on paper (in which case there is no big issue), or you ultimately compare via implementation.

    NIZK is actually a telling example: first there were protocols running in monstrous time, and then Kilian and Petrank showed how to do it in almost linear time (in the size of the NP-verification circuit). It seems that any improvement beyond that would require a comparison through implementation to justify it (unless the authors are able to give an appealing argument in favor of their result, which is currently a perfectly normal scenario).

    As for Craig’s cryptosystem, I am sure that any non-trivial improvement in its performance would receive the attention it deserves from the community (until we get to O(n log n), and then only implementation will be able to differentiate performance).

  11. Jon: maybe it’s because nobody dislikes it? (Not that I’ve read it — it seemed too long ;-))

  12. I don’t know how bad the situation is either. I just know that working in this area is risky. I can work very hard, come up with an improved protocol that is very non-trivial to prove secure, and get shot down because the techniques aren’t novel enough. By the way, when is a technique novel enough? Here is an example: in my paper with Benny from Eurocrypt 2007, we end up with an “error” of $2^{-s/17}$ where $s$ is a statistical security parameter. So, in order to get a distinguishing gap between the ideal and real worlds of $2^{-40}$ we need to take $s=680$. This means that we need to send 680 Yao-type garbled circuits. If I improve the analysis in a non-trivial way (with possibly slight changes to the protocol) and get the error down to $2^{-s/4}$ then it would suffice to take $s=160$ which is a huge improvement. And no, this protocol is not used by anyone, and neither will it be unless we manage to get much much more efficient. So, is it not interesting?
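
    (Just to spell out the arithmetic behind these numbers: with an error of $2^{-s/17}$, requiring $2^{-s/17} \leq 2^{-40}$ forces $s \geq 17 \cdot 40 = 680$, i.e., 680 garbled circuits, whereas with an error of $2^{-s/4}$ the same requirement $2^{-s/4} \leq 2^{-40}$ gives $s \geq 4 \cdot 40 = 160$, more than a four-fold reduction in the number of circuits sent.)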

    Alon, if you want to see expensive protocols, look at secure computation in the presence of malicious adversaries. They are very hard to get right, and often very inefficient.

    Coming back to Jon’s comment: yes, there are many such papers that are accepted and many that are not. Again, my point is that the criteria are very unclear. I want to be optimistic and say that we can make them clearer, so that research in this direction will be better encouraged.

  13. Improving from $2^{-s/17}$ to $2^{-s/4}$, huge as that improvement is, is interesting only if it comes through some new idea and/or technique (as opposed to, say, “bookkeeping”). If it does, I see no reason for it not to get published, even at a venue such as FSE/CCS (you can find there cryptanalytic improvements from $2^{128}$ to $2^{127}$ that are technically interesting, and it’s not like anybody is going to implement those improvements, right?).

    If, on the other hand, the techniques/ideas are not interesting, then what you are effectively suggesting is to switch from a “high-risk”/”low potential benefit” situation into a “low-risk”/”low potential benefit” situation. That sounds very undesirable to me.

    The example of protocols for secure computation in the presence of malicious adversaries actually seems to support my point. We have seen quite a few papers dealing with these issues in recent years, many of which propose clever optimizations and improvements. It seems like such results were well accepted by the community, or weren’t they?

  14. I suppose I come from a completely different corner of cryptography, but for me, efficiency/algorithmic improvements are probably the most exciting things in crypto. I really want to see my protocols implemented, and even if they are currently too inefficient (for example, any (n,1)-OT protocol out there is computationally too inefficient to be implemented by anybody who doesn’t have the lifespan of Methuselah), I hope my work helps to make such protocols more efficient. I also think Crypto/Eurocrypt should give much bigger exposure to algorithmic cryptography.

    The question of how to review/grade such papers has no obvious answer, and it really depends on the application. For example, when you are designing an e-auction protocol with many auctioneers, you should put most of your effort into minimizing the computation of the server that collects the bids, since otherwise the auction would stall. It does not really matter how fancy your cryptography is when the server can only handle 100 bids per hour :-).

    The second question is what cryptographic assumptions you are going to make. Here, too, you should think of the concrete application. If the application has to hide information for ages, you should go for a stronger/better-known assumption and longer keys. Otherwise, you can use whatever pairing assumption you fancy (provided, of course, that computing the pairing does not slow down the protocol).

    Obviously, for those applications you need theory. You need efficient building blocks. You need security definitions. But the meat of crypto *for me* is the chance that someone, some day, uses my stuff for something important.

    (And I am still annoyed by reviews that say that this or that protocol is not interesting since the previous protocol was already polynomial-time, even if the speed-up is something like n^8 -> n^2.)

  15. […] game theory or psychology? In a recent (guest) blog post, Yehuda Lindell makes fun of (or maybe just complains about) “applying game-theoretic […]

