Posted by: jonkatz | May 27, 2015

Upcoming Google Hangout

I will be participating in a Google hangout next week focusing on MIT alumni teaching MOOCs. Please join and ask questions, or just listen!

Registration can be done here

Posted by: jonkatz | April 24, 2015

Upcoming CyberFETCH Spotlight Interview

Since I haven’t been writing much content on my blog, I may as well use it to advertise other things I’m doing.

I’m going to be doing a version of “ask me anything” on the CyberFETCH.org website. (See announcement below.) The first question you can ask me is “why don’t you blog more often…?”

———————————————

CyberFETCH.org is a DHS Science & Technology-sponsored website that serves as a repository for all NIST test results for cyber-forensic tools. It also hosts a community forum for discussing the best tools and practices in a vendor-neutral and secure environment.

CyberFETCH is now hosting a series of community-driven interviews with key people in the cyber/forensics community, which we are calling “Spotlights”. These will be similar to the “Ask Me Anything” sessions on Reddit (reddit.com/r/AMA). Rather than being completely open-forum and live, though, we work with the guest and the community by moderating the exchange, making this more like the interview format popular on Slashdot (http://slashdot.org/tag/interviews).

We have three excellent volunteers to start this off with, all set and ready for your questions…

Dr. Marcus Rogers, Director of the Cyber Forensics Program at Purdue University, popular guest lecturer, expert in digital forensics, founder of the Digital Forensics Certification Board
Dr. Jim Lyle, Project leader for the Computer Forensic Tool Testing project at the National Institute of Standards and Technology (NIST)
Dr. Jonathan Katz, Co-author of “Introduction to Modern Cryptography”, Director, Maryland Cybersecurity Center and Professor, Dept. of Computer Science, University of Maryland

Many more will follow (and we’re wide open to your suggestions on who else we should invite to the Spotlight).

So please, check out the link below. Read the bio, see who piques your curiosity, and send in some questions. Don’t be bashful. 🙂

http://cyberfetch.org/spotlight

Posted by: jonkatz | November 10, 2014

My course goes live!

With excitement mixed with trepidation, I watched my MOOC on cryptography go live today! It is a mixed blessing to have 35,000+ students taking the course and already posting comments (and pointing out errors).

It was a huge investment of time to get to this point, but so far students seem to be finding it worthwhile. Wait until we get to defining computational security. =)

It’s not too late to join the fun!

Posted by: jonkatz | May 19, 2014

Getting closer…

(A teaser.)

In preparing the second edition, we have made a concerted effort to integrate a more practical perspective while retaining a rigorous approach. This is reflected in a number of changes and additions we have made:

  • We have increased our coverage of stream ciphers, introducing them as a variant of pseudorandom generators, discussing stream-cipher modes of operation, and describing modern stream-cipher design principles and examples.
  • We have emphasized the importance of authenticated encryption, and have included a discussion of secure communication sessions.
  • We have moved our treatment of hash functions into its own chapter, added coverage of some standard applications of cryptographic hash functions, and now describe hash-function design principles and widely used constructions.
  • We have also improved our treatment of birthday attacks (covering small-space birthday attacks), and added a discussion of rainbow tables and time/space tradeoffs for function inversion. We have augmented our discussion of differential cryptanalysis, including a worked example.
  • After much consideration, we have decided to introduce the random-oracle model much earlier in the book. This allows us to give a proper treatment of standardized, widely used public-key encryption and signature schemes in later chapters instead of relegating those important schemes to a forgotten chapter at the end of the book.
  • We have strengthened our coverage of elliptic-curve cryptography, and have added a discussion of its impact on recommended key lengths.
  • In the chapter on public-key encryption, we introduce the KEM/DEM paradigm as a natural way to do hybrid encryption (see the sketch after this list). Among other schemes, we cover DHIES/ECIES as well as RSA PKCS #1 v1.5 and v2.0.
  • In the chapter on digital signatures, we now cover the construction of signatures from identification schemes using the Fiat-Shamir transform, with the Schnorr signature scheme as a prototypical example. We have also improved our coverage of DSA/ECDSA. We include brief discussions of SSL/TLS and signcryption, both of which serve as nice culminations of everything covered up to that point.
  • In the “advanced topics” chapter, we have amplified our treatment of homomorphic encryption, and included sections on secret sharing and threshold encryption.
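For concreteness, here is a minimal sketch of the KEM/DEM approach to hybrid encryption in the ECIES style mentioned in the list above, using X25519 as the KEM and AES-GCM as the DEM. The key-derivation label and parameter choices are my own, purely for illustration; this is not the book’s exact presentation.

```python
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt(receiver_pub: X25519PublicKey, message: bytes):
    # KEM: encapsulate a fresh symmetric key using an ephemeral DH share.
    eph = X25519PrivateKey.generate()
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"kem-dem-demo").derive(eph.exchange(receiver_pub))
    # DEM: encrypt the message under the encapsulated key.
    nonce = os.urandom(12)
    ct = AESGCM(key).encrypt(nonce, message, None)
    return eph.public_key(), nonce, ct

def decrypt(receiver_priv: X25519PrivateKey, eph_pub: X25519PublicKey,
            nonce: bytes, ct: bytes) -> bytes:
    # Decapsulate the same key from the ephemeral public value, then decrypt.
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"kem-dem-demo").derive(receiver_priv.exchange(eph_pub))
    return AESGCM(key).decrypt(nonce, ct, None)
```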

Beyond the above, we have also gone over the entire book carefully to make extensive corrections as well as smaller adjustments (including more worked examples) to improve the exposition. Several additional exercises have also been added.

(The new edition should be out next year.)

Posted by: jonkatz | January 24, 2014

MOOCs and me

In some very exciting news, several of us at the University of Maryland are going to be offering a series of courses as part of a specialization on cybersecurity on Coursera. As part of this, I will be teaching a course on cryptography. I’m both excited and apprehensive about doing this: excited about the opportunity to reach so many students, as well as to think through (yet again…) what should be covered in an introductory course on cryptography; apprehensive because I’ve heard that the time required to prepare good video lectures is an order of magnitude more than the time needed to prepare for a regular lecture. We’ll see how things go next fall…


Posted by: jonkatz | January 14, 2014

Real-World Crypto, Day 2, Session 3

(I now see the problem with live-blogging: once you start, you are not allowed to stop…)

Dave Anderson led off the 3rd session with a talk on encrypted hard drives. The idea is to have a highly secure drive in which encryption cannot be turned off; all data written to disk is encrypted by default. So if your hard drive is stolen, the data is inaccessible (unless your password can be guessed). The drive uses a combination of locking and encryption. That is, the drive is locked when power is turned off, and remains locked even when power is turned on, until the user enters a password that unlocks the drive. Unlocking the drive also means that the encryption key EK for the drive is available, and so from that point on all data read/written from/to the drive is decrypted/encrypted.

The cryptography underneath is standard: when off, the drive holds the encrypted data (encrypted using EK), a hash of the user’s password, and an encryption of EK under the user’s password. After being turned on and taking a password from the user, the drive verifies the hash and—if correct—uses the password to recover EK. From then on EK is used inside a dedicated AES circuit that can encrypt/decrypt at line speed. What is nice is all the engineering done to make this work, and to defend against various forms of attack.
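To make the key handling concrete, here is a minimal sketch of this password-based unwrapping, using PBKDF2 and AES-GCM as stand-ins; the talk did not specify the drive’s actual firmware logic or algorithms at this level of detail, so every parameter below is an assumption.

```python
import os, hashlib, hmac
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def provision(password: bytes):
    # The drive's data-encryption key EK, never stored in the clear.
    ek = AESGCM.generate_key(bit_length=256)
    salt = os.urandom(16)
    kek = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    pw_hash = hashlib.sha256(kek).digest()           # stored to verify the password
    nonce = os.urandom(12)
    wrapped_ek = nonce + AESGCM(kek).encrypt(nonce, ek, None)  # EK wrapped under the password
    return salt, pw_hash, wrapped_ek                 # values persisted on the drive

def unlock(password: bytes, salt: bytes, pw_hash: bytes, wrapped_ek: bytes) -> bytes:
    # Verify the password hash, then unwrap EK for use by the AES engine.
    kek = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    if not hmac.compare_digest(pw_hash, hashlib.sha256(kek).digest()):
        raise ValueError("wrong password")
    nonce, ct = wrapped_ek[:12], wrapped_ek[12:]
    return AESGCM(kek).decrypt(nonce, ct, None)
```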

Since the entire drive is encrypted, including the operating system (OS), one may wonder how the drive boots so as to request a password from the user. The answer is that a special pre-boot OS is installed on the drive for exactly this purpose; the “actual” OS is loaded only once the password is verified.

Additional points of interest mentioned in the talk: The drive also implements a true random-number generator (also used to generate EK) based on the mechanical characteristics of the drive itself. To protect against malicious code being run in firmware, all firmware is verified as being digitally signed. Something not mentioned: in this model, the timing/tampering attacks that cryptographers spend a lot of time worrying about seem to go away. Indeed, once the correct password is entered, security is lost anyway; until then, EK is unavailable and so is not being used.

Currently, the drive uses CBC/XTS mode for encryption, which means the encrypted data can potentially be modified in predictable ways. Addressing this seems like an interesting research direction related to the 2nd session.

In the next talk, Christian Rechberger talked about lightweight block-cipher design. He began, though, by trying to debunk the notion that “crypto is never the weakest point of a system,” pointing to several recent examples: the exploitation of MD5 collisions by Flame, RC4 cryptanalysis leading to attacks on WEP and TLS, attacks on the Mifare classic chipcard, and cryptanalysis of Keeloq. In the latter two examples, in particular, companies rolled their own cryptography, at least in part because they felt that standard crypto algorithms were not efficient enough.

Regarding the state of the art, he noted that AES uses about 2500 gate equivalents (GEs) when implemented in hardware. Some more recent ciphers use about 1000 GEs, but tend to be slower than AES. In his work, he focused on designing a lightweight cipher with very low latency. (It was unclear to me why he focused on low latency rather than high throughput, though perhaps hard-drive encryption is good motivation.)

The new cipher, called PRINCE, uses a DESX-style design, with the internal keyed permutation constructed so that encryption and decryption use the same circuit (with a different key).
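For reference, the DESX/FX idea is simply pre- and post-whitening around a core keyed permutation. The sketch below uses a single AES block call as a stand-in for the PRINCE core, purely for illustration.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def fx_encrypt_block(k_in: bytes, k_core: bytes, k_out: bytes, block: bytes) -> bytes:
    """Encrypt one 16-byte block as k_out XOR E_{k_core}(block XOR k_in)."""
    enc = Cipher(algorithms.AES(k_core), modes.ECB()).encryptor()
    return xor(k_out, enc.update(xor(block, k_in)) + enc.finalize())
```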

The final talk, by Francois Dupressoir, was extremely interesting, though unfortunately some technical details eluded me in the somewhat short talk he gave and I will have to check the paper. The goal of the work he presented is to generate provably secure machine code starting from a high-level implementation. To start, they use EasyCrypt to produce a formally verifiable cryptographic proof of security for the high-level pseudocode. Next, they have an intermediate, C-like language to which the pseudocode can be compiled while maintaining the formal guarantees. The most novel part of this work involved translating from this C-like language to actual machine code, again while maintaining formal guarantees, proven here in Coq. They claim an end-to-end verifiable compilation of the PKCS #1 v2.1 RSA encryption standard (essentially OAEP).

One of the more interesting parts of the work to me was that they have a way to reason about “side-channel” leakage, which I admit to not having fully understood. The basic idea is to explicitly give the attacker information about the value of the program counter at various steps during the execution of the high-level pseudocode; the proof of security at that level takes into account this extra information given to the attacker. Then at each step during the compilation it is proved that leaking the sequence of values taken by the program counter still does not harm security. This seems like a nice way to reduce/eliminate common implementation errors.

I am quite surprised at the work, actually. One of the issues in EasyCrypt that I had raised at an EasyCrypt workshop this past summer was exactly related to the “mismatch” between the abstract types that EasyCrypt supports and their low-level instantiation as byte arrays in C or the eventual machine code. The work Francois talked about seems exactly to address this problem (modulo the fact that the math libraries used to actually implement the modular arithmetic are assumed to be secure).

Posted by: jonkatz | January 14, 2014

Real-World Crypto, Day 2, Session 2

The second session was on encryption. Jonathan Trostle kicked things off with a talk about authenticated encryption. Besides the “standard” security goals, he mentioned the idea of misuse resistance, which requires, roughly, that nonce reuse by the sender only leaks whether the same message is encrypted twice (using the same nonce) but otherwise does not breach privacy or authenticity. Jonathan’s talk focused on “lightweight” authenticated-encryption schemes that aim to minimize network overhead (i.e., ciphertext length) as well as the amount of energy expended per encrypted byte.
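As a rough illustration of misuse resistance, here is a minimal deterministic, SIV-style sketch in which the IV is derived from the message itself, so that repeating (or omitting) a nonce reveals only whether the same message was encrypted twice. This is my own toy construction for illustration, not any of the schemes discussed in the talk.

```python
import hmac, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def siv_encrypt(mac_key: bytes, enc_key: bytes, message: bytes) -> bytes:
    # Synthetic IV: a PRF of the message (HMAC-SHA256, truncated to 16 bytes).
    iv = hmac.new(mac_key, message, hashlib.sha256).digest()[:16]
    # Encrypt with AES-CTR under the synthetic IV.
    encryptor = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).encryptor()
    return iv + encryptor.update(message) + encryptor.finalize()

def siv_decrypt(mac_key: bytes, enc_key: bytes, ciphertext: bytes) -> bytes:
    iv, body = ciphertext[:16], ciphertext[16:]
    decryptor = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).decryptor()
    message = decryptor.update(body) + decryptor.finalize()
    # Authenticity check: recompute the synthetic IV and compare.
    expected = hmac.new(mac_key, message, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(iv, expected):
        raise ValueError("authentication failed")
    return message
```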

One point he made, which is obvious but nevertheless bears repeating (especially in light of the CAESAR competition), is that there is unlikely to be a single “best” authenticated-encryption scheme. The best scheme to use depends on the application (e.g., how many messages will be encrypted using a given key) or the deployment environment (e.g., hardware/software, or the relative cost of computation vs. transmission). It would be useful, therefore, to have a set of algorithms standardized, suited for different uses.

Tom Shrimpton spoke next about format-transforming encryption. (I had heard a previous talk on this topic by his student, Kevin Dyer.) The basic goal here is to make encrypted data “look like” other data so as to prevent it from being filtered using deep-packet inspection (DPI), as several countries do. DPI looks at payload-level information, including the data itself as well as information about what applications are being used. “Regular” encryption does not help here, since the very fact that encryption is being used may cause the packet to be filtered out; more specifically, the DPI may be using a whitelist of allowed applications and/or words. Previous work in this space was slow, inflexible, and lacked empirical validation.

So, the goal is to develop a (private-key) cryptographic transformation that makes arbitrary traffic look like “benign” traffic from some other application. The transformation will take as input a [specification of a] set of strings that the DPI will classify in the desired way, and output ciphertexts that will match the desired classification. This is very flexible, since the desired format can be changed, as needed.

Almost all the DPIs evaluated use regular expressions (or something close to that) to perform filtering. So the specification of strings that will be allowed to pass the DPI corresponds to a regular language that, furthermore, can be learned from the traffic that is allowed to pass.

To map ciphertexts to words of the regular language, the key insight is to use ranking, that is, an efficiently computable/reversible map from integers to words in the regular language. (This was shown to be possible by Goldberg and Sipser in 1985; it turns out that the map gives exactly the rank of a word in the lexicographic ordering of the words in the language.) Given this, the solution is relatively straightforward: encrypt the plaintext as usual to get some ciphertext; view the ciphertext as a series of integers; then map these integers to words of the language. There are, as is to be expected, several practical difficulties that emerge in trying to get this to work in practice, but Tom and his co-authors show that this can be done.
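Here is a minimal sketch of the ranking/unranking idea for fixed-length words of a regular language, given its DFA (a toy DFA over {a, b} here). The real FTE system handles variable lengths, large alphabets, and many engineering details beyond this.

```python
from functools import lru_cache

ALPHABET = "ab"
START = 0
ACCEPT = {0}
# Toy DFA: accepts strings with an even number of b's.
DELTA = {(0, "a"): 0, (0, "b"): 1, (1, "a"): 1, (1, "b"): 0}

@lru_cache(maxsize=None)
def count(state: int, length: int) -> int:
    """Number of length-`length` words accepted starting from `state`."""
    if length == 0:
        return 1 if state in ACCEPT else 0
    return sum(count(DELTA[(state, c)], length - 1) for c in ALPHABET)

def unrank(i: int, length: int) -> str:
    """Map integer i to the i-th word (lexicographic) of the given length."""
    state, word = START, []
    for pos in range(length):
        for c in ALPHABET:
            n = count(DELTA[(state, c)], length - pos - 1)
            if i < n:
                word.append(c)
                state = DELTA[(state, c)]
                break
            i -= n
        else:
            raise ValueError("index out of range")
    return "".join(word)

def rank(word: str) -> int:
    """Inverse of unrank: the index of `word` among same-length words."""
    state, i = START, 0
    for pos, c in enumerate(word):
        for c2 in ALPHABET:
            if c2 == c:
                break
            i += count(DELTA[(state, c2)], len(word) - pos - 1)
        state = DELTA[(state, c)]
    return i
```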

One question I was left with was whether the approach can be applied to DPI based on blacklists. While it is true that the complement of a regular language is still regular, the complexity of ranking depends on the size of the regular language [sic — see below] and so may get too large. (In the Q&A, Tom claimed that everything still works, so I guess I’ll have to look in the paper for details. Update: after reading my original post, Tom clarified that the complexity depends on the size of the DFA for the language, not the size of the language. Since a regular language and its complement have DFAs of the same size, everything works out fine.)

In the final talk of the session, Mike Bond spoke about “Crypto as a service,” i.e., providing crypto for other organizations. (This is something his company, Cryptomathic, is working on.) This raises a number of practical questions, such as backing up keys, updating keys, and achieving key separation. There is also the question of making the crypto API usable (and understandable) to the client. He gave a walkthrough of a banking application that might use the service. Admittedly, this was a bit outside my expertise.

Posted by: jonkatz | January 14, 2014

Real-World Crypto, Day 2, Session 1

The workshop kicked off with two talks on OPACITY. OPACITY is a protocol suite with two protocols: ZKM and FS. Both of these are Diffie-Hellman-based authenticated key-agreement protocols, geared toward smartcards, that incorporate some privacy measures. It was developed by the company ActivIdentity, is apparently supported by the US Department of Defense, and has been standardized.

Marc Fischlin began with a talk about a cryptographic analysis of OPACITY. Marc pointed out that the fact that the protocol is standards-compliant does not imply anything about whether or not it is secure; following a standard is no substitute for a rigorous analysis/proof. (This is clearly true, since the standards don’t contain or imply any particular security definition.)

Both protocols use two rounds, and assume the smartcard holds a long-term, certified public key. The ZKM protocol is a static-ephemeral Diffie-Hellman protocol, where the card reader generates an ephemeral key and the smartcard uses its long-term (“static”) Diffie-Hellman public key to derive a shared key. Interestingly (or surprisingly), there is no authentication from reader to client, and anyone can easily impersonate a reader. The FS protocol is similar, though in this case both the smartcard and the reader have long-term, certified public keys. Here, two ephemeral-static Diffie-Hellman protocols are run to derive shared values, and these are then used in a “cascading” fashion to derive shared keys.
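To illustrate the static-ephemeral idea (this is not the actual OPACITY message flow or key-derivation function), here is a minimal sketch using X25519: the card’s long-term key and the reader’s ephemeral key yield a shared secret from which a session key is derived.

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# The card holds a long-term ("static") key pair; its public key is certified.
card_static = X25519PrivateKey.generate()
card_static_pub = card_static.public_key()

# The reader generates an ephemeral key pair per session and sends its public key.
reader_ephemeral = X25519PrivateKey.generate()
reader_ephemeral_pub = reader_ephemeral.public_key()

# Both sides compute the same shared secret and derive a session key from it.
shared_reader = reader_ephemeral.exchange(card_static_pub)
shared_card = card_static.exchange(reader_ephemeral_pub)
assert shared_reader == shared_card

session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"toy-session-key").derive(shared_reader)
```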

One complicated part of both protocols is that, for efficiency reasons, they allow what is called “persistent binding”: the card reader can cache the public key of a smartcard it has interacted with before. There is also a way for the smartcard to “blind” the certificate it sends, which is supposed to provide some measure of privacy for the smartcard.

In trying to analyze the protocol, Marc found that the security goals were unclear. (This is a general problem when security goals are stated informally, and demonstrates why formal models/definitions are useful even in the absence of a security proof.) Marc and his co-authors used a well-known security definition for authenticated key agreement by Bellare and Rogaway, adapted to capture one-way authentication (which is all we can hope for ZKM to provide). Marc was able to prove that ZKM achieves one-way authentication, and that the derived shared keys are hidden from a passive attacker. FS also satisfies some notion of identity hiding for the card. In both cases, the analysis breaks down if persistent binding is used (it was not clear to me whether there is a concrete attack in that case).

The second talk in the session was by Eric Le Saint, who introduced the OPACITY protocol. He noted that OPACITY was designed for protecting contactless transactions between chip-based mobile devices and terminals, where currently no security mechanisms are in place. For this reason, the protocol was designed to be fast: the reader/terminal can pre-process things so that it is ready to send the first message of the protocol as soon as a card comes into close proximity; the card can then quickly compute and send the second (and final) protocol message. This can all be done in the time it takes to physically swipe a card at a reader, without inconveniencing the user holding the card.

Eric addressed a statement from Marc’s paper to the effect that the security guarantees of OPACITY are rather weak. In response, he noted that one-way authentication may be sufficient for the intended applications, and resistance to eavesdropping may also be enough in practice. (Frankly, the justification for this did not make sense to me, but perhaps it was just poor presentation.) He also claimed (without any explanation) that the notion of identity hiding achieved by ZKM is considered strong enough for US government applications.

He concluded by stating that he is currently working on an improved version of OPACITY that would achieve stronger security goals. He described the protocol using a high-level block diagram, but I was unable to follow the details. I hope he (or someone else) will attempt to prove security this time around.

Posted by: jonkatz | January 14, 2014

Real-World Crypto Workshop

It’s become more and more difficult for me to travel to conferences. With family, teaching, and other work responsibilities (for those who don’t know, I recently took on a new role: http://cyber.umd.edu), and work-related travel besides conferences, I find it just too onerous and disruptive to get away. Sadly, for example, I missed Crypto last year for the first time since 1999.

I’m not teaching this semester, so perhaps things will change a bit. This week I am attending (part of) the third Real-World Cryptography workshop (http://realworldcrypto.wordpress.com/). The workshop seems to have become a big success, with over 400 registrants last I heard; that is, roughly the same number of attendees as (or even slightly more than) Crypto or Eurocrypt. What explains its popularity? There are a couple of obvious possibilities, including the location (last year at Stanford, this year in NYC) and the free registration. Somehow the workshop has generated lots of “buzz”; the fact that it is not an academic workshop probably helps attract a lot of people outside academia, or in areas related to (but not directly in) cryptography.

I’m here only through today, and I’ll try live-blogging the talks (at least until my battery runs out!).

Posted by: jonkatz | December 10, 2013

Postdoc in post-quantum crypto

The newly formed Joint Center for Quantum Information and Computer Science, of which I am a member, is seeking exceptional candidates for a Postdoctoral Fellowship in Quantum Information and Computer Science. People working on quantum cryptography as well as “post-quantum” cryptography are welcome to apply.

Anyone who is interested is welcome to email me directly with questions.
