Posted by: jonkatz | January 14, 2014

Real-World Crypto, Day 2, Session 3

(I now see the problem with live-blogging: once you start, you are not allowed to stop…)

Dave Anderson led off the 3rd session with a talk on encrypted hard drives. The idea is to have a highly secure drive in which encryption cannot be turned off; all data written to disk is encrypted by default. So if your hard drive is stolen, the data is inaccessible (unless your password can be guessed). The drive uses a combination of locking and encryption. That is, the drive is locked when power is turned off, and remains locked even when power is turned on, until the user enters a password that unlocks the drive. Unlocking the drive also means that the encryption key EK for the drive is available, and so from that point on all data read/written from/to the drive is decrypted/encrypted.

The cryptography underneath is standard: when off, the drive holds the encrypted data (encrypted using EK), a hash of the user’s password, and an encryption of EK under the user’s password. After being turned on and taking a password from the user, the drive verifies the hash and—if correct—uses the password to recover EK. From then on EK is used inside a dedicated AES circuit that can encrypt/decrypt at line speed. What is nice is all the engineering done to make this work, and to defend against various forms of attack.
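To make the flow concrete, here is a minimal Python sketch of the lock/unlock logic as described above. The KDF, the use of AES-GCM to wrap EK, and all parameter choices are my own assumptions for illustration; the actual drive does all of this in dedicated hardware.

```python
# Minimal sketch (not the vendor's actual scheme) of the lock/unlock flow:
# the drive stores only a password hash and a wrapped copy of EK; entering
# the correct password recovers EK for the dedicated AES engine.
# Assumes the pycryptodome library for AES-GCM key wrapping.
from hashlib import pbkdf2_hmac
from os import urandom
from Crypto.Cipher import AES

def provision(password: bytes):
    """Set up the drive: pick EK, store only a password hash and a wrapped EK."""
    ek = urandom(32)                                    # drive's data-encryption key
    salt = urandom(16)
    pw_hash = pbkdf2_hmac("sha256", password, salt, 100_000)            # stored for verification
    wrap_key = pbkdf2_hmac("sha256", password, salt + b"wrap", 100_000)
    nonce = urandom(12)
    wrapped_ek, tag = AES.new(wrap_key, AES.MODE_GCM, nonce=nonce).encrypt_and_digest(ek)
    stored = {"salt": salt, "pw_hash": pw_hash, "nonce": nonce,
              "wrapped_ek": wrapped_ek, "tag": tag}
    return stored, ek

def unlock(stored, password: bytes) -> bytes:
    """On power-up: verify the password hash, then recover EK from the wrapped copy."""
    salt = stored["salt"]
    if pbkdf2_hmac("sha256", password, salt, 100_000) != stored["pw_hash"]:
        raise ValueError("wrong password; drive stays locked")
    wrap_key = pbkdf2_hmac("sha256", password, salt + b"wrap", 100_000)
    cipher = AES.new(wrap_key, AES.MODE_GCM, nonce=stored["nonce"])
    return cipher.decrypt_and_verify(stored["wrapped_ek"], stored["tag"])  # EK, fed to the AES engine
```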

Since the entire drive is encrypted, including the operating system (OS), one may wonder how the drive boots so as to request a password from the user. The answer is that a special pre-boot OS is installed on the drive for exactly this purpose; the “actual” OS is loaded only once the password is verified.

Additional points of interest mentioned in the talk: The drive also implements a true random-number generator (also used to generate EK) based on the mechanical characteristics of the drive itself. To protect against malicious code being run in firmware, all firmware is verified as being digitally signed. Something not mentioned: in this model, the timing/tampering attacks that cryptographers spend a lot of time worrying about seem to go away. Indeed, once the correct password is entered security is lost anyway; until the correct password is entered, EK is unavailable and so is never used.

Currently, the drive uses CBC/XTS mode for encryption, which means the encrypted data can potentially be modified in predictable ways. Addressing this seems like an interesting research direction related to the 2nd session.
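As a toy illustration of the kind of malleability at issue, here is a sketch (using the pycryptodome library, with an arbitrary key and plaintext of my own choosing) showing how flipping bits in a CBC ciphertext flips the corresponding bits of the next plaintext block; XTS behaves differently, but neither mode provides integrity.

```python
# Demonstrates CBC malleability: XORing a byte of ciphertext block i XORs the
# same byte of plaintext block i+1 on decryption (block i itself is garbled).
from os import urandom
from Crypto.Cipher import AES

key, iv = urandom(16), urandom(16)
pt = b"block one 16B..." + b"pay   $100 to A."             # two 16-byte blocks
ct = bytearray(AES.new(key, AES.MODE_CBC, iv=iv).encrypt(pt))

ct[7] ^= ord("1") ^ ord("9")                                # target byte 7 of plaintext block 1
pt2 = AES.new(key, AES.MODE_CBC, iv=iv).decrypt(bytes(ct))
print(pt2[16:])                                             # b'pay   $900 to A.' -- block 0 is now garbage
```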

In the next talk, Christian Rechberger talked about lightweight block-cipher design. He began, though, by trying to debunk the notion that “crypto is never the weakest point of a system,” pointing to several recent examples: the exploitation of MD5 collisions by Flame, RC4 cryptanalysis leading to attacks on WEP and TLS, attacks on the Mifare classic chipcard, and cryptanalysis of Keeloq. In the latter two examples, in particular, companies rolled their own cryptography, at least in part because they felt that standard crypto algorithms were not efficient enough.

Regarding the state of the art, he noted that AES uses about 2500 gate equivalents (GEs) when implemented in hardware. Some more recent ciphers use about 1000 GEs, but tend to be slower than AES. In his work, he focused on designing a lightweight cipher with very low latency. (It was unclear to me why he focused on low latency rather than high throughput, though perhaps hard-drive encryption is good motivation.)

The new cipher, called PRINCE, uses a DESX-like design, with the internal keyed permutation constructed so that encryption and decryption use the same circuit (with a different key).
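As a rough sketch of that DESX-style key-whitening structure (with a toy core permutation standing in for the actual PRINCE core, and my own names throughout), the outer keys are simply XORed before and after the core:

```python
# Toy DESX/FX-style construction on 64-bit blocks: c = k1 XOR Core_{kc}(m XOR k0).
# The "core" below is a placeholder keyed permutation, NOT the real PRINCE core
# (which is additionally built so that its inverse equals the core under a
# related key, letting one circuit serve for both encryption and decryption).
MASK64 = (1 << 64) - 1

def toy_core(kc: int, x: int) -> int:
    """Placeholder invertible keyed permutation: XOR the key, then rotate left by 13."""
    x ^= kc
    return ((x << 13) | (x >> 51)) & MASK64

def toy_core_inv(kc: int, y: int) -> int:
    y = ((y >> 13) | (y << 51)) & MASK64
    return y ^ kc

def fx_encrypt(k0: int, k1: int, kc: int, m: int) -> int:
    return (k1 ^ toy_core(kc, m ^ k0)) & MASK64

def fx_decrypt(k0: int, k1: int, kc: int, c: int) -> int:
    return (k0 ^ toy_core_inv(kc, c ^ k1)) & MASK64
```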

The final talk, by Francois Dupressoir, was extremely interesting, though unfortunately some technical details eluded me in the somewhat short talk he gave and I will have to check the paper. The goal of the work he presented is to generate provably secure machine code starting from a high-level implementation. To start, they use EasyCrypt to produce a formally verifiable cryptographic proof of security for the high-level pseudocode. Next, they have an intermediate, C-like language to which the pseudocode can be compiled while maintaining the formal guarantees. The most novel part of this work involved translating from this C-like language to actual machine code, again while maintaining formal guarantees, proven here in Coq. They claim an end-to-end verifiable compilation of the PKCS #1 v2.1 RSA encryption standard (essentially OAEP).

One of the more interesting parts of the work to me was that they have a way to reason about “side-channel” leakage, which I admit to not having fully understood. The basic idea is to explicitly give the attacker information about the value of the program counter at various steps during the execution of the high-level pseudocode; the proof of security at that level takes into account this extra information given to the attacker. Then at each step during the compilation it is proved that leaking the sequence of values taken by the program counter still does not harm security. This seems like a nice way to reduce/eliminate common implementation errors.
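Very loosely, and with my own toy naming rather than their formalism, the idea is that the adversary is handed the control-flow trace, so an implementation whose trace depends on secrets cannot be proved secure:

```python
# Toy rendition of the "program-counter leakage" model: the attacker sees the
# sequence of control-flow points visited, so a secure implementation must have
# a trace that is independent of secret data.
def check_tag_leaky(tag: bytes, guess: bytes, trace: list) -> bool:
    """Early-exit comparison: the trace reveals the index of the first mismatch."""
    for i in range(len(tag)):
        trace.append(("loop", i))
        if tag[i] != guess[i]:
            trace.append(("early_return", i))   # trace depends on secret bytes
            return False
    trace.append(("match",))
    return True

def check_tag_constant_flow(tag: bytes, guess: bytes, trace: list) -> bool:
    """Accumulating comparison: the trace depends only on the (public) length."""
    diff = 0
    for i in range(len(tag)):
        trace.append(("loop", i))               # identical trace for all inputs of this length
        diff |= tag[i] ^ guess[i]
    trace.append(("done",))
    return diff == 0
```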

I am quite surprised at the work, actually. One of the issues in EasyCrypt that I had raised at an EasyCrypt workshop this past summer was exactly related to the “mismatch” between the abstract types that EasyCrypt supports and their low-level instantiation as byte arrays in C or the eventual machine code. The work Francois talked about seems exactly to address this problem (modulo the fact that the math libraries used to actually implement the modular arithmetic are assumed to be secure).


Responses

  1. Thanks for this post. Seems like there’s always something new I learn even after being in the field for 25 years…

