Posted by: jonkatz | November 13, 2009

Do we have too few problems to work on?

Recently I submitted a paper and found that someone else was working on the same problem. (In this particular case there was more involved, as well as a possible breach of ethics, but none of that is relevant to the point I am trying to make in this post.) I don’t have any statistics here, but this seems to happen relatively often in our field. It has happened to me at least 3 times — once on a problem that I would consider relatively obscure — with the result being a merged paper each time. Sometimes the results obtained were incomparable, but it was determined that a merge was in everyone’s best interests as well as the right thing to do scientifically. Other times the results were essentially the same.

I know I am not alone in this. At just about every conference there is at least one merged paper (I am aware of one at the upcoming TCC). An example from the summer that caught my attention was a set of three overlapping results showing constructions of HIBE based on lattices: here, here, and here. I am aware of several other examples as well, though will refrain from mentioning them since sometimes the authors don’t want information about the way a paper was written to become public. (Feel free to share your own stories in the comments…)

What does it mean? Are there too few [good] problems to work on, so that we are all mining the same ground? If so, is this an indication that the community is stuck in a rut, or have we all just collectively identified what are the most important problems? While I think this explains part of the issue, many of the cases I am aware of involve, as I said, pretty obscure problems that are not the type I would expect everyone to jump on. Perhaps the social nature of our field, with people discussing their latest results at workshops, and open problems being “in the air”, encourages people to focus on similar sets of problems. Is this a good thing or a bad thing?

What do you think?



  1. Have we all just collectively identified what are the most important problems?

    The TCS community is good at collectively identifying the easy problems for which you can get good credit (also known as trends). Our overall record of working on the hard-but-important problems is abysmal.

  2. I just wanted to add a data point from topology. The HOMFLY polynomial is a knot invariant that generalizes the previously known Alexander and Jones polynomials. It is fair to say that it is a very important contribution to knot theory.

    It was originally published in the Bulletin of the American Mathematical Society in a paper that merged FOUR groups: Freyd & Yetter, Hoste, Lickorish & Millett, and Ocneanu. The editor notes that they received four research announcements “virtually within a period of a few days” (and the “four groups arrived at their results completely independently of each other”).

    So I just want to point out that it is possible that an important result can be arrived at by many groups simultaneously because the theory naturally builds in their direction. (In this case, the HOMFLY polynomial could have easily been developed a decade earlier, so it was waiting to be discovered by anyone.)

    Whether this is happening in the TCS community is another issue.

  3. Just a clarification: the first two HIBE results you cite were independent, and the third builds upon them with improved efficiency. (For those familiar with the pairing literature, very roughly speaking the first two are “Canetti-Halevi-Katz-style,” while the third is a Boneh-Boyen analogue.)

    Still, having had what amount to three semi-collisions in the past year alone, I think you have a point. Partly I think that this is a consequence of working in a fast-growing (aka “trendy”) area with lots of interesting and solvable problems.

    As for multiple solutions to obscure problems, I believe that science as a whole has more of these cases than one would expect. Not to pull a Malcolm Gladwell, but there are often subtle shifts in the collective perspective of a community, which can make certain problems suddenly tractable.

  4. I think this is because the accepted definition of a ‘good’ problem is a problem that’s hot and popular. So as long as you are working on a ‘good’ problem, there will be a high chance that someone else is working on the same problem. If you are not working on a ‘good’ problem, then you will have something new, but your paper won’t get accepted.

  5. One other explanation I can find for this is that researchers in the community are, to some extent, hesitant to talk about their research before it is published. If the community were a bit more open about sharing current research with colleagues (at conferences, rump sessions, workshops, mailing lists, blogs, …) we would have fewer surprises like this, most of the mergers would happen before submission, and consequently we would have better-quality papers.

  6. payman —

    I don’t think sharing results before they are published is the issue. I recently announced on my blog that a group of us had solved an open problem and were working on the paper (we were adding further extensions, and intended to submit to a later conference). I also announced the result at a conference at the same time. Several weeks after my announcement, two papers came out with the basic result (albeit none of the extensions…) before ours was ready.

    I’m not sure this is a big “problem”; for me personally it’s more of an annoyance. But for a student, or a not-yet-tenured faculty member, it’s much more problematic.

  7. MM, your phrasing is odd. I don’t think I would ever announce a result without putting a paper on the arXiv simultaneously. But that’s a digression…

    Something similar happened to me once. I felt it was not worth worrying about. If somebody else had the same idea, then the work wasn’t very important in the first place. I hope that my best work will still be important a hundred years from now. If somebody has the same idea at the same time, or if somebody could have had the same idea within, say, the next five years, then even if the problem is important, the solution can’t be. (Of course I understand the five-year timeframe should be adjusted for work with urgent applications.)

    “Have we all just collectively identified what are the most important problems?”

    No, my opinion is that this is evidence that the field is missing creativity.

  8. “No, my opinion is that this is evidence that the field is missing creativity.”

    This is what I would say if there were more paper merging going on. In fact, from only a single merged paper per conference, I don’t think any significant conclusions can be drawn.

  9. “If somebody has the same idea at the same time, or if somebody could have had the same idea within, say, the next five years, then even if the problem is important, the solution can’t be.”

    I’m not sure what you mean by the solution being un-important even when the problem is. Are you talking specifically about crypto, or science in general?

    For example, Darwin and Wallace basically independently came up with the idea of natural selection to explain the diversity of life and appearance of design. I don’t think anyone would argue with the importance of the solution (it’s sometimes said to be the best idea in history).

  10. Assume that there are 10,000 good problems out there, and each researcher chooses a problem uniformly at random. Then, if there are 100 submissions by 100 different researchers, the probability that at least two of them will be working on the same problem is about 40% :-). So, no, there is no shortage of problems.
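    The figure above is easy to sanity-check. Under the commenter's own toy model (each researcher picking one of a fixed pool of problems uniformly and independently), the exact birthday-style collision probability can be computed directly; with 100 researchers and 10,000 problems it comes out to roughly 0.39:

```python
def collision_probability(num_researchers: int, num_problems: int) -> float:
    """Probability that at least two researchers pick the same problem,
    assuming each picks uniformly and independently from the pool."""
    p_all_distinct = 1.0
    for i in range(num_researchers):
        # Probability the (i+1)-th researcher avoids the i problems taken so far
        p_all_distinct *= (num_problems - i) / num_problems
    return 1.0 - p_all_distinct

p = collision_probability(100, 10_000)
print(f"P(at least one collision) = {p:.3f}")  # roughly 0.39
```

    This is the standard birthday computation; the big assumption is the uniform, independent choice of problems, which (as later comments note) is unlikely to hold in practice.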

  11. I can’t agree that evolution by natural selection is the best idea in history. It was a very low-hanging fruit. With today’s scientific infrastructure, with thousands of full-time, secular scientists, this kind of idea would be trivial. Since the circumstances are so different, I don’t think good comparisons can be drawn.

  12. I think biologists and historians would disagree that it was low-hanging fruit. Actually, my understanding is that the conventional wisdom (shout-out to Lipton) of the time was the opposite.

    It would seem trivial to us now because so much of modern science is predicated on the idea. I tend to think it resonates even more with us computer scientists because it is essentially an algorithmic idea (one of Leslie Valiant’s current research thrusts is about just this…)

  13. I think Yehuda hits the nail on the head… what you’ve described is exactly the birthday paradox. Though there is a good chance that at any conference there will be at least one hit, the chances that your particular work will have a hit are still very low.

  14. “…what you’ve described is exactly the birthday paradox…”

    Maybe, but I’m not convinced. I think a birthday-style analysis would give many more collisions. (Are there really 10,000 interesting open questions?)
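    One way to probe this skepticism: in the same toy uniform-choice model, the expected number of colliding pairs among n submissions and a pool of N problems is C(n,2)/N, so one can read off how large the pool would have to be for collisions to be rare. The pool sizes below are illustrative, not data:

```python
from math import comb

def expected_collisions(n_researchers: int, n_problems: int) -> float:
    """Expected number of pairs of researchers choosing the same problem,
    assuming uniform independent choice: C(n, 2) / N by linearity of expectation."""
    return comb(n_researchers, 2) / n_problems

for n_problems in (1_000, 5_000, 10_000):
    print(n_problems, expected_collisions(100, n_problems))
```

    With 100 submissions, a pool of about 5,000 problems already gives roughly one expected collision per conference, so the observed rate of merges does not by itself distinguish a large pool from a small one.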

  15. Some more collisions, and an argument about why collisions happen:

    (skip to pg 4 and 7 if you don’t want to read the whole thing)

    “In the nineteen-sixties, the sociologist Robert K. Merton wrote a famous essay on scientific discovery in which he raised the question of what the existence of multiples tells us about genius. No one is a partner to more multiples, he pointed out, than a genius, and he came to the conclusion that our romantic notion of the genius must be wrong. A scientific genius is not a person who does what no one else can do; he or she is someone who does what it takes many others to do. The genius is not a unique source of insight; he is merely an efficient source of insight. “Consider the case of Kelvin, by way of illustration,” ….”

  16. I have often felt that there are usually some obvious open problems you can see as you read a paper. They are not necessarily great problems, BUT they’ll get you a paper at a reasonable crypto conference (crypto/eurocrypt/tcc/asiacrypt/pkc/ches/acm-ccs/… ok, I think I should stop listing them). There are not many such problems per paper, though, so if two people read a given paper, it is not unlikely that they’ll see the same open questions there (and, in quite a few cases, solve them too). Since good papers get read widely in the community, and since in my opinion our community is very publication-oriented (whether a problem is important or not does not stop the majority from solving it and writing it up), it is not surprising that we come across so many merges.

    In my short career as a researcher, I’ve already had 3 merges.

  17. An important (and somewhat unethical) issue here is that many researchers tend to jump on those “simple but guaranteed-to-be-published” ideas. Another issue is that such papers are getting accepted en masse. Ergo, a vicious circle.

  18. The birthday analysis, I think, would severely *under-estimate* the number of collisions. In many cases when a collision happens, there is a good chance that the two (or more) works did not come about independently.

  19. My off-the-top-of-my-head theory: there is a limited set of problems that everyone chases, mainly because there is a limited set of tools available in the mainstream of crypto and other mathematical fields of study. By tools I mean things like using reductions to provide proofs of security, lattices, bilinear forms, etc.; if you moved over to algorithms you might instead talk about things like Chernoff bounds (out of my depth).

    This is not a “problem” by itself, it just reflects that there is a high cost to learning a new tool and since it doesn’t usually pay off, a sub field will stick with those tools. That’s the way the “economics” of theory fields works as far as I can tell.

    Every now and then someone adds a new tool, like lattices or bilinear forms. Then a whole new set of questions open up as to whether we can come up with variations of previously known results that are stronger in some way, or which shed light on the difference between the new tool and the “old” tools. These are important and interesting questions, but they also tend to flow “naturally” from the framework established by theoretical cryptography. So pretty much everyone sees them at the same time and has a chance to start working on them.

    To complete the picture, once you have the right formulation of the problem and the right tools, many (NOT ALL) questions can be settled in a reasonable amount of time. So you have rapid iteration by multiple groups through the set of problems. That leads to collisions.

    This is separate from another issue, which is that we have a tendency in crypto to also create new problems by permuting old ones. Mihir Bellare’s crypto topic generator sums it up.

    In contrast, in a field with less agreement on the theoretical framework to guide questions, less of a barrier to entry for new “tools,” but a higher cost to addressing/publishing on a specific problem, we might expect fewer collisions. I’m thinking here of security, where I don’t hear about collisions and merges as much, or NSDI/OSDI/SOSP, where I hardly ever hear about them. (Curious what MichaelM’s take is here.)

    I am not saying any of this is bad. If anything it shows the strength of the theory underlying our half of CRYPTO/Eurocrypt/etc. Just my take on how the “economics of problem solving” affects this phenomenon.

  20. It’s interesting how, once Jon pointed out the question, so many of us quickly tried to establish our own theories to explain the phenomenon.

    I wonder how effective we would be when presented with the other direction: guessing how many collisions there will be in a given area in a given year/conference. Eurocrypt 09 is not far off; guesses, guesses?
