The Dept. of Computer Science at the University of Maryland has multiple open positions at the assistant-professor level. These include “open” positions in any area of computer science, as well as a targeted position in cybersecurity. See here for further information.
Allow me to reiterate my announcement that a postdoc position is still available. Applicants in any area of cybersecurity are welcome, and I would also consider applicants in certain areas of theoretical computer science (email me if you are interested). As a bonus, you now have the opportunity to work with Elaine Shi or Nick Feamster as well!
Although I hate to say this, past experience tells me that I should: I am unlikely to consider your application if you have no publications in recognized ACM/IEEE or LNCS conferences, unless you have extenuating circumstances to explain this.
I would also consider short-term visits by established researchers and/or current PhD students. If you are an undergraduate interested in applying to the graduate program at UMD, feel free to drop me a line also!
(Unfortunately, I will be traveling and thus not able to attend this event. But it looks like it will be great!)
Registration is open for the First Annual Maryland Cybersecurity Center Symposium!
When: Tuesday, May 15 and Wednesday, May 16, 2012
Where: Riggs Alumni Center, University of Maryland
Time: 9:00 a.m. – 5:00 p.m.
Drawing on regional experts of national and international acclaim, MC2’s first Annual Cybersecurity Symposium will showcase the latest research, trends, and topics in cybersecurity, including:
– Keynote addresses by Dr. Gary McGraw, CTO of Cigital, and Dr. Patrick McDaniel, Professor in Computer Science and Engineering at Penn State
– Tutorials by MC2 faculty on secure web programming; cybersecurity law, regulations, and policy; code analysis for eliminating software vulnerabilities; and code instrumentation for finding flaws and mitigating attacks
– Technical talks on cutting-edge research by MC2 faculty in both computer science and engineering; topics include digital fingerprinting, privacy-preserving computation, trusted hardware, physical-layer security, and many others.
– Panels on current challenges in cybersecurity education and the projected impact of cybersecurity legislation, featuring a distinguished group of experts including Bill Newhouse of NIST and Ellen Nakashima of the Washington Post.
– Welcome addresses by Maryland Congressman Dutch Ruppersberger, outspoken cybersecurity proponent, and University System of Maryland Chancellor William “Brit” Kirwan
The MC2 Symposium program will broaden your knowledge, skillset, and awareness of cybersecurity problems and directions, and the event is sure to present unique opportunities to connect with colleagues across academia, industry, and the state and federal government.
We hope to see you there!
-Michael Hicks, Director of the Maryland Cybersecurity Center
-Eric Chapman, Associate Director of the Maryland Cybersecurity Center
Please note that this two-day symposium would not be possible without the strong support of our corporate partners:
SAIC, Google, Lockheed Martin, Tenable Network Security, Lunarline, Inc., Future Skies, Inc., AdvanTech, SuperTEK, CyberPoint, MIT Lincoln Laboratory, MAR, Inc.
It appears I will have a postdoc position available in my group for next year. This will be a 1-year position, with the possibility of extending it to a second year pending available funding. Shorter-term visits can also be considered.
The position is fairly open. In particular, I encourage applicants who work on more applied aspects of computer/network security in addition to those who work on cryptography. Students working in areas relevant to cybersecurity will also be considered.
If you are interested, please send me an email with a copy of your CV, a short research statement, and the name of at least one reference.
Penn State will be hosting two co-located summer schools May 30 to June 1, 2012. These are aimed at graduate students doing research on cryptography and computer security.
The Cryptography school will explore recent developments in cryptography. Topics include fully homomorphic encryption, the theory of symmetric-key encryption, and relations between foundational notions such as one-way functions, computational entropy, and zero-knowledge. This school is being organized by Jonathan Katz (U. Maryland) and Adam Smith (Penn State).
The Software Security school will explore the latest in the theory and techniques for both attacking programs and building verifiable defenses into programs. Topics include return-oriented programming, methods for automated retrofitting of legacy code, control-flow integrity, and general-purpose security verification for programs. This school is being organized by Trent Jaeger (Penn State).
Cryptography School:
Omer Reingold, Microsoft Research Silicon Valley
Philip Rogaway, UC Davis
Vinod Vaikuntanathan, University of Toronto
Software Security School:
Andrew Myers, Cornell University
Vinod Ganapathy, Rutgers University
Hovav Shacham, University of California, San Diego
Nikhil Swamy, Microsoft Research, Redmond
Gang Tan, Lehigh University
Somesh Jha, University of Wisconsin (to be confirmed)
The schools are directed towards graduate students who are already engaged in research in cryptography and computer security. To apply, students should submit an abstract of a research project in which they are or were recently involved (a description of work in progress is also acceptable). All attendees are expected to present a poster on their research.
In addition to graduate students, the schools have space for a smaller number of more established researchers (postdocs, faculty, and industrial researchers).
The summer school will consist of mini-courses on topics of current research (including, in some cases, a laboratory component) and keynote talks. Meals, poster sessions, social events, as well as some lectures, will be joint between the two schools. Attendees are expected to participate in the entire program.
The schools will be co-located on Penn State’s University Park campus in State College, Pennsylvania. State College is accessible by air (State College Airport — SCE) and is within a few hours’ drive of New York, Washington, Philadelphia, and Pittsburgh.
Registration is free for accepted participants. Participants are responsible for their travel and accommodations. A number of travel scholarships are available to help defray travel costs.
Applications received by April 15 will receive full consideration for funding. Applications will continue to be accepted until at least May 15 based on available space and funds.
Applicants will be asked to provide the abstract of a research project (either published or in progress) in which they have been significantly involved. Applications can be submitted online here.
Please direct any questions to the organizers at firstname.lastname@example.org.
Disappointment in class today. (I am teaching an undergraduate class in computer and network security.)
I have covered the issue of malleable encryption in at least 4 lectures so far this semester: in the private-key setting (with examples of attacks against CBC mode and CTR mode), in the public-key setting (with examples of attacks against RSA and El Gamal encryption), in the half-lecture review at the end of the unit on cryptography, and when talking about the attacks on WEP. I have also mentioned that non-malleable encryption schemes are available and should be used, explaining authenticated encryption in the private-key setting and telling them about (but not giving them the details of) RSA-OAEP in the public-key setting.
Today I described the following protocol for password-based authentication in a setting where the client knows the server’s public key (in addition to a password they share):
- The server sends a nonce R
- The client responds with an encryption of (pw, R)
I then pointed out that if encryption is not done carefully, there is an attack. (An easy example: if the pair is encrypted component-wise, so that Enc((pw, R)) = (Enc(pw), Enc(R)), an attacker can reuse the Enc(pw) component from a prior session while encrypting the fresh nonce himself under the server's public key.) I noted that the reason this attack is possible is precisely because of malleability. I then asked what type of encryption scheme should be used instead.
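The malleability at the heart of this kind of attack is easy to demonstrate concretely. Below is a toy sketch in Python: a one-time XOR keystream stands in for CTR mode (the key property — ciphertext bit-flips translate directly into plaintext bit-flips — is the same), and the 8-byte password and nonce values are made up for illustration.

```python
# Toy stream cipher standing in for CTR mode: ciphertext = plaintext XOR keystream.
# Illustrative only -- the keystream is random bytes, not derived from a real block cipher.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)

def enc(pt: bytes) -> bytes:
    return xor(pt, keystream)

def dec(ct: bytes) -> bytes:
    return xor(ct, keystream)

# Honest client: encrypt the pair (pw, R), here 8 bytes each.
pw = b"hunter2\x00"
R = b"\x01\x02\x03\x04\x05\x06\x07\x08"
ct = enc(pw + R)

# Attacker: XOR a chosen delta into the nonce portion of the ciphertext.
# The decrypted nonce changes by exactly that delta -- the scheme is malleable.
delta = b"\x00" * 8 + b"\xff" * 8
pt_mauled = dec(xor(ct, delta))

assert pt_mauled[:8] == pw                   # password part untouched
assert pt_mauled[8:] == xor(R, b"\xff" * 8)  # nonce flipped bit-for-bit
```

With authenticated (and hence non-malleable) encryption, the mauled ciphertext would simply be rejected at decryption time.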
Not a single student was able to give a correct answer (“a non-malleable encryption scheme”).
Do I expect too much? I keep resisting the idea of “dumbing down” the class too much, but faced with things like this I am not sure what to do.
Note: Many researchers are justifiably concerned about the fact that Alfred Menezes will be giving an invited talk at Eurocrypt 2012 related to his line of papers criticizing provable security. I share this concern. I hope to blog about his (and Koblitz’s) papers over the next few weeks leading up to the conference.
What follows is something that Yehuda sent me unsolicited.
In an ideal world, people would be gracious and scientists would work primarily to promote science. (I am not naive, and am aware that scientists are people and need to promote themselves. However, self-promotion should go together with the promotion of science and not come at its expense.)
The specific issue that I wish to talk about in this context is how to deal with bugs, errors, flaws and so on that you discover in other people’s papers. The right thing to do is to write the authors a nice email, saying that you believe you have found a bug and would like to inform them about it, or be corrected in case it is your mistake. If you are correct, and the authors are also gracious (as they should be), then they will correct their paper and give you a nice acknowledgement thanking you for the correction. You can then continue to do productive research.
Of course, if such a correction requires novel research that you have already done, then the above strategy may not work. In such a case, one can try to be creative in order to be gracious. One such example happened to me when I proved an impossibility result, only to discover that a (contradictory) positive result had been published on this exact topic a few years beforehand. I couldn't work out who was wrong, so I spoke to the authors. After discussion we realized that they were wrong, and that their proof holds only in a much weaker model. In my paper, all I wrote was "Our impossibility result does not contradict [X] since their positive result holds for a different, weaker model". The other authors acknowledged me for finding the bug, and everyone walked away happy. Would I have gained anything by writing "we show that [X] were wrong"? Most certainly not!
Unfortunately, not everyone in our community takes this approach. Indeed, there are even people who actively search for errors in order to promote an agenda of attacking the entire crypto-theory community. Two examples are Koblitz and Menezes. You can see their newest paper on eprint. This time they really outdid themselves, since there is actually no error. Rather, the proof of security is in the non-uniform model, which they appear not to be familiar with. Even after being told about this, they still chose to leave their attack unchanged. You can see my discussion post.
The ideal view of writing a paper:
- The paper is written weeks before the deadline, so that all the authors can check correctness of the proofs and possibly improve the results.
- The day of the deadline is spent doing final polishing, to make sure the paper is clear and readable. After all, the goal is to disseminate your ideas throughout the research community.
The real way most papers are written:
- We had the results 3 months ago, but we'll only finish writing them up the day before the deadline. (We were too busy getting other results, which we will sit on until the day before the next deadline.) We don't really need to check correctness — conference reviewing is so good that the referees will surely verify whether the proofs are actually right.
- The day of the deadline is spent playing with margins and moving random portions of text to the appendix, just to satisfy a ridiculous page-limit requirement. I guess we’ll sacrifice readability; the main goal anyway is to get the paper published so we can add a line to our CVs.
(No, I’m not cynical at all…)
I’m teaching Computer and Network Security this semester, and I usually start out by covering crypto at a high level. My goal is not really to have the students understand the crypto itself (that’s what my semester-long Cryptography course is about) but primarily to have them understand how to properly use crypto.
The past few times I have taught a security course, I followed up my lectures on crypto with 2-3 lectures about how crypto gets broken in the real world. One obvious way is when it gets implemented wrong (e.g., the WEP attacks). But there are other, less obvious and more clever, ways as well. Several examples come to mind (and are covered in my class), but let’s name two: bad random-number generation, and what I’ll call a mismatch between the security provided by a cryptographic primitive and the requirements of the application.
With this in mind, it’s fortuitous to hear today about two such attacks:
- The flaw in the random-number generation that was apparently used for several real-world implementations of RSA-based cryptosystems (paper here). (Interesting side note: some news articles I’ve seen say that the paper will be presented at a conference in Santa Barbara in August. The Crypto deadline is 3 days away. Has the paper been accepted to Crypto before the review process has even started??)
- The attack on Google Maps over SSL. Summary: the researchers exploited the fact that even the best encryption leaks the length of the plaintext. This observation has been applied several times before in different contexts. While it may be “obvious” that encryption (practically speaking) can’t hide the plaintext length, I wish cryptographers, when teaching the definition of secure encryption, would point out that leaking the plaintext length is often a real problem.
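The idea behind the first flaw can be sketched in a few lines. If two RSA moduli were generated with a faulty random-number generator and happen to share a prime factor, a simple gcd computation recovers that prime and factors both keys. The sketch below uses tiny hardcoded toy primes (real RSA primes are on the order of 1024 bits) and a naive pairwise scan; the actual analysis scales this up to millions of keys using a product/remainder tree, but the principle is the same.

```python
# Minimal sketch of the shared-prime attack on RSA keys from bad RNGs.
# Toy primes for illustration only; real RSA primes are ~1024 bits.
from math import gcd

p, q1, q2 = 10007, 10009, 10037
n1 = p * q1            # two moduli that accidentally share the prime p
n2 = p * q2
n3 = 10039 * 10061     # an unrelated, properly generated modulus

moduli = [n1, n2, n3]

# Naive pairwise gcd scan over all key pairs.
for i in range(len(moduli)):
    for j in range(i + 1, len(moduli)):
        g = gcd(moduli[i], moduli[j])
        if g > 1:
            # Recovering the shared prime factors both moduli completely.
            print(f"moduli {i} and {j} share the factor {g}")
```

Note that each modulus on its own is perfectly hard to factor; the weakness only appears when keys are compared against each other.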
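The length-leakage observation behind the second attack is just as easy to demonstrate. In the toy sketch below, a one-time pad stands in for "ideal" encryption, and the candidate resources and their sizes are entirely hypothetical; even against perfect encryption, an eavesdropper who knows the possible plaintexts can tell which one was sent purely from the ciphertext length.

```python
# Toy demonstration of plaintext-length leakage: even information-theoretically
# secure encryption reveals |m|, which can identify the message.
import os

def ideal_enc(pt: bytes) -> bytes:
    # One-time pad: hides everything about the plaintext except its length.
    return bytes(b ^ k for b, k in zip(pt, os.urandom(len(pt))))

# Hypothetical resources with distinct sizes, known to the attacker in advance.
tiles = {
    "tile_A": b"x" * 3071,
    "tile_B": b"x" * 4096,
    "tile_C": b"x" * 2210,
}

ct = ideal_enc(tiles["tile_B"])  # the victim fetches tile_B

# Eavesdropper: match the observed ciphertext length against the known sizes.
guess = next(name for name, data in tiles.items() if len(data) == len(ct))
assert guess == "tile_B"
```

Padding all messages to a common length closes the leak, but at a bandwidth cost — which is exactly why deployed encryption doesn't do it by default.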
More examples for me to cover in class!
On a separate note, two announcements:
- Steven Galbraith referred me to a wiki listing invited speakers at cryptography conferences. (The scope is to be interpreted as broadly as possible — if a conference is not listed, it just means he didn’t get around to it yet.) One purpose is to assist conference program chairs in the selection of invited speakers. Help is welcome from anyone who can fill in the gaps.
- Videos of the talks from the (infamous?) “Is Cryptographic Theory Practically Relevant?” workshop are now online. I’m glad to see that my blog got mentioned at the beginning of Vaudenay’s talk — see here (and especially the comments) for the history.
This article led me to the homepage for Wombat Voting, which has implemented an end-to-end cryptographically verifiable voting system that will be used in the primary election for one of Israel’s political parties (Meretz). Alon Rosen is the faculty member in charge of the project, and the system is built on top of a mix-net (“Verificatum”) developed by Douglas Wikstrom.
It’s always nice to see cryptographic protocols developed by the research community actually being used in practice.