# Real World Cryptography Conference 2021: A Virtual Experience

Earlier this month, our Cryptography Services team got together and attended (virtually) the IACR’s annual Real World Cryptography (RWC) conference. RWC is a fantastic venue for the latest results in real world cryptography from industry and academia.

Holding this conference virtually inevitably introduced some changes: to accommodate as many time zones as possible, the daily program was compressed from the usual 8 hours to 4 hours. Most presenters had only 10 minutes to speak! Talks were streamed live on Zoom and YouTube, and participants could ask the speakers questions via the chat app Zulip.

In this post, we share summaries and insights from some of our favorite talks from RWC 2021.

The talks were recorded and we’ve included links below for those already available on YouTube. We hope to attend RWC 2022 in person in Amsterdam!

### SoK: Computer-Aided Cryptography

Kevin Liao delivered a brief and interesting introduction (video) to machine-checked formal methods, based on his team’s “SoK: Computer-Aided Cryptography” paper. Cryptography is hard to implement securely, as demonstrated by the profusion of past and recent design and implementation vulnerabilities. The presentation addressed how computer-aided tools can help us design and implement cryptography with higher levels of security assurance.

The speaker defined computer-aided cryptography as “formal, machine-checkable approaches… to help design, analyze, and implement cryptography”. Kevin articulated his presentation around three areas:

• design-level security,
• functional correctness, and
• implementation-level security.

He described past challenges in each area, and how computer-aided cryptography can solve them. Kevin used the TLS 1.3 protocol as an example of successful use of formal methods, including but not limited to the identification of vulnerabilities at the design stage, and better documentation of the protocol’s security guarantees.

He further explained that these methods are currently constrained by the simplifications of the real-world environment they attempt to model; for example, they may ignore some attacker capabilities (indeed, the NCC Group Cryptography Services team has discovered and privately reported vulnerabilities in formally checked cryptography implementations on several occasions). There are also gaps between the design-level security, functional correctness, and implementation-level security areas described above, and he proposed potential solutions and research ideas to address them.

Ultimately, computer-aided cryptography will probably not solve all design and implementation security issues, but it has already raised the bar substantially, and it will certainly continue to do so.

— Gérald Doussot

### Not as Private as We Had Hoped – Unintended Privacy Problems in Some Centralized and Decentralized COVID-19 Exposure Notification Systems

Vanessa Teague gave a well-received invited talk (video) about privacy issues in COVID-19 exposure notification systems. A joint statement on contact tracing signed by over 300 leading cryptographers in April 2020 warned against creating systems that would allow unprecedented surveillance of society at large. This talk presented a variety of lessons learned related to privacy for both centralized and decentralized apps. While many talks described theoretical advancements, this talk also described a variety of practical bugs and issues encountered in Australia’s COVIDSafe app. A specific focus of the talk was inference in social graphs. Note that Singapore has recently revealed that COVID tracking data is available to police, contrary to prior policy.

— Eric Schorn

### Post-Quantum Crypto: The Embedded Challenge

This talk by Joppe Bos (video) discussed challenges arising from the use of the new post-quantum cryptographic algorithms in one real-world context in particular: embedded systems. Such algorithms are currently being designed and standardized; they are meant to resist attacks from future quantum computers. As Joppe pointed out, the standards will be written, and the algorithms will be deployed, regardless of whether the quest for quantum computers turns out to be chimerical. While the NIST post-quantum cryptography project tries to be mindful of “embedded systems”, some challenges remain, and are often overlooked:

• The representative architecture for embedded systems in the NIST project is the ARM Cortex M4, which is in fact quite a “high-end” CPU, with powerful features (e.g. a very fast 32×32→64-bit multiplier and a DSP unit with parallel computations). Most embedded systems must accommodate smaller hardware, in particular due to severe constraints on energy usage.
• While lattice-based post-quantum cryptographic algorithms tend to be competitive with elliptic curve cryptography in terms of raw cycle counts, they use larger objects (public keys, private keys, ciphertexts…) thereby requiring a greater network bandwidth, and substantially more RAM space for intermediate results. These two metrics are usually among the most limiting constraints in practical embedded systems.

To at least begin to address the first point, Joppe presented some strategies to leverage big-integer accelerator hardware for lattice-based computations. Such hardware can be, for instance, a circuit dedicated to 128×128→256-bit multiplications; this is how small systems such as cheap smartcards manage to use elliptic curve cryptography, or even RSA. However, the mathematics behind lattices are quite different, and tend to use polynomials with many small coefficients, not big integers. Various mathematical tricks can be employed here; Joppe pointed to his new research (with Renes and van Vredendaal) which improves on previously known results in this area.

This is still a very novel and active research area. For instance, how to make such implementations more robust against physical attacks (differential power analysis, fault attacks…) has barely been studied yet. As Joppe said: “Interesting times ahead.”

— Thomas Pornin

### In-Band Key Negotiation: Trusting the Attacker

In this talk (video), Sophie Schmieg described an interesting attack exploiting a pernicious design flaw common to many existing protocols performing a key negotiation step. Specifically, the flaw is related to in-band key negotiation, namely when protocols infer the key or algorithm to use based on unauthenticated user input.

To start with a well-known example, the speaker first described how JSON Web Tokens (JWT) commonly suffer from the same flaw. JWTs are generally composed of three fields, the header, the payload, and the signature, which is computed over the concatenation of the header and payload.

The header field, shown below, includes an algorithm field ("alg") specifying which algorithm should be used to verify the token’s signature.

```json
{
  "alg": "HS256",
  "typ": "JWT"
}
```

Verification bypass attacks have been demonstrated against this construction by replacing the "alg" field with "none", indicating that the token is unauthenticated. A vulnerable verifier might then simply skip the verification step. An illustration of this process is presented in Figure 1, in which an attacker intercepts and tampers with a message, then modifies the metadata to indicate to the server that the message should not be verified.
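A minimal sketch of such a vulnerable verifier (hypothetical code, not any real JWT library) makes the problem concrete: the choice of verification algorithm comes from the very input being verified.

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def b64url_dec(data: bytes) -> bytes:
    return base64.urlsafe_b64decode(data + b"=" * (-len(data) % 4))

def verify_vulnerable(token: bytes, key: bytes) -> dict:
    header_b64, payload_b64, sig_b64 = token.split(b".")
    header = json.loads(b64url_dec(header_b64))
    # The bug: the verification algorithm is chosen by attacker-controlled
    # input -- the token's own header.
    if header["alg"] == "none":
        return json.loads(b64url_dec(payload_b64))   # no check at all!
    if header["alg"] == "HS256":
        tag = hmac.new(key, header_b64 + b"." + payload_b64,
                       hashlib.sha256).digest()
        if hmac.compare_digest(b64url(tag), sig_b64):
            return json.loads(b64url_dec(payload_b64))
    raise ValueError("invalid token")

# Forging a token without knowing the key: just claim "alg": "none"
# and leave the signature empty.
forged = (b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode()) + b"." +
          b64url(json.dumps({"admin": True}).encode()) + b".")
```

Here `verify_vulnerable(forged, key)` happily returns `{"admin": True}` for any key. The fix is for the verifier, not the token, to decide which algorithms are acceptable.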

The core topic of the talk followed a similar technique, albeit a more involved one, since it combined the above principle with a padding oracle attack. The speaker proceeded to describe a vulnerability (CVE-2020-8912) they discovered in the AWS S3 Crypto SDK, which performs client-side encryption of objects stored in S3 buckets.

Using a technique similar to the one described for JWTs, they replaced the algorithm field of ciphertexts (initially AES-GCM, an authenticated encryption mode) with AES-CBC, which is vulnerable to padding oracle attacks when the implementation reveals too much information about decryption failures. An attacker may then decrypt ciphertexts one 16-byte block at a time, repeatedly submitting modified ciphertexts and confirming a guess whenever the server indicated that the padding was valid. Additional details can be found in the vulnerability writeup.
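The mechanics of a CBC padding oracle can be sketched end to end. The “block cipher” below is a deliberately insecure per-byte permutation standing in for AES (the attack depends only on the CBC structure and on the padding error being observable); none of this reflects the AWS SDK’s actual code.

```python
BLOCK = 16

def _enc_block(key, blk):
    # Toy "block cipher" standing in for AES: per-byte modular addition.
    # Totally insecure, but the padding-oracle attack only needs CBC mode.
    return bytes((b + key[i]) % 256 for i, b in enumerate(blk))

def _dec_block(key, blk):
    return bytes((b - key[i]) % 256 for i, b in enumerate(blk))

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def pkcs7_pad(msg):
    n = BLOCK - len(msg) % BLOCK
    return msg + bytes([n]) * n

def pkcs7_unpad(msg):
    n = msg[-1]
    if not 1 <= n <= BLOCK or msg[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return msg[:-n]

def cbc_encrypt(key, iv, msg):
    out, prev = [], iv
    msg = pkcs7_pad(msg)
    for i in range(0, len(msg), BLOCK):
        prev = _enc_block(key, _xor(msg[i:i + BLOCK], prev))
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(key, iv, ct):
    out, prev = [], iv
    for i in range(0, len(ct), BLOCK):
        blk = ct[i:i + BLOCK]
        out.append(_xor(_dec_block(key, blk), prev))
        prev = blk
    return pkcs7_unpad(b"".join(out))

def make_oracle(key):
    # The server-side leak: "was the padding valid?" is all the attacker sees.
    def oracle(iv, ct):
        try:
            cbc_decrypt(key, iv, ct)
            return True
        except ValueError:
            return False
    return oracle

def attack_block(prev, target, oracle):
    # Recover one plaintext block without the key, one byte at a time,
    # by forging the "previous" block and watching the padding oracle.
    inter = bytearray(BLOCK)                   # will hold D(target)
    for padlen in range(1, BLOCK + 1):
        pos = BLOCK - padlen
        for guess in range(256):
            forged = bytearray(BLOCK)
            forged[pos] = guess
            for j in range(pos + 1, BLOCK):
                forged[j] = inter[j] ^ padlen  # force trailing bytes to padlen
            if not oracle(bytes(forged), target):
                continue
            if padlen == 1:
                # Rule out accidentally hitting a longer valid padding.
                forged[pos - 1] ^= 0xFF
                if not oracle(bytes(forged), target):
                    continue
            inter[pos] = guess ^ padlen
            break
    return _xor(inter, prev)
```

Running `attack_block` against each ciphertext block (using the preceding block, or the IV, as `prev`) recovers the full padded plaintext in at most 256 oracle queries per byte.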

Sophie continued her talk by detailing how Tink, a cryptographic library developed by Google, addresses these challenges, and concluded by providing some key takeaways, among which we’ll specifically remember the following: “Never trust the ciphertext”!

— Paul Bottinelli

### Mesh Messaging in Large-scale Protests: Breaking Bridgefy

Lenka Mareková delivered a talk (video) on Bridgefy, a messaging app that operates over ad-hoc mesh networks using Bluetooth. The app’s ability to work offline has led to its use at protests around the world, including in Hong Kong, Zimbabwe, India, Belarus, Thailand, and the US. While we do not have precise usage statistics, it is safe to assume that many people have depended, and continue to depend, on this app’s security for their personal safety.

The app is designed to use both classic Bluetooth and BLE for message transport. Messages are broadcast using a flood-based method with TTL counters. Users are identified by public keys, which are shared with any device in range as part of an automatic handshake.

Reverse engineering identified several severe cryptographic flaws, including the use of ECB encryption, the composition of RSA with PKCS#1 v1.5 and gzip (permitting a variant of Bleichenbacher’s attack), full leakage of social graph data, lack of any support for authenticated handshakes (permitting full attacker-in-the-middle between any two users), and more. The app was also found to be trivially vulnerable to denial of service via a zip bomb.

The researchers summarized that “Bridgefy permits its users to be tracked, offers no authenticity, no effective confidentiality protections and lacks resilience against adversarially crafted messages… Thus, if protesters rely on Bridgefy, an adversary can produce social graphs about them, read their messages, impersonate anyone to anyone and shut down the entire network with a single maliciously crafted message.” Nevertheless, the app has seen widespread adoption, due in part to a lack of clearly preferable alternatives.

This research highlights an urgent need for user-focused research and development aimed at identifying achievable security goals for ad-hoc mesh networks, accurately modeling protesters’ security needs, and providing viable alternative messaging solutions for use in highly adversarial contexts.

— Eli Sohl

### Asynchronous Remote Key Generation: An Analysis of Yubico’s Proposal for W3C

This talk by Emil Lundberg (video) described a new cryptographic primitive for delegated, asynchronous public key generation, based on his team’s paper. One practical application discussed was user-friendly backup keys for W3C WebAuthn. The scheme needs to ensure that no adversary can gain access to a user’s account without knowledge of the Backup Authenticator’s (BA) private key $s$, since multiple public keys $P$ for the BA may be registered by the Primary Authenticator (PA) on its behalf at different Relying Parties (RPs). The registered public keys $P$ and key handles $E$ must also remain unlinkable, such that no RP can determine whether they were registered for the same or different accounts.

This was achieved by using a relationship between private and public keys in elliptic curve cryptography, where an arithmetic trick is used to “mask” a private key by combining a randomly generated public key with a designated “seed” public key. The algorithms of the Asynchronous Remote Key Generation (ARKG) scheme for DL-based keys include setup, key generation, verification, public key derivation, and secret key derivation. The combination of ARKG and WebAuthn allows the PA to register keys for the BAs on behalf of the user without requiring the BAs to be present. The core benefit of the scheme is that users do not need to manage their backup authenticators manually by having them present and registering them by hand when creating accounts.
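The masking trick can be sketched at toy scale. The code below uses a small discrete-log group (a prime-order subgroup of $\mathbb{Z}_p^*$) standing in for an elliptic curve, and omits the MAC that the real ARKG scheme attaches to the key handle; it is an illustration of the arithmetic, not the scheme’s full definition.

```python
import hashlib, random

# Toy DL group standing in for an elliptic curve: the order-q subgroup
# of Z_p^* with p = 2q + 1 (both prime). g = 4 generates that subgroup.
p, q, g = 1019, 509, 4

def kdf(x: int) -> int:
    # Hash a group element down to a scalar in [0, q).
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % q

# Backup Authenticator (BA): long-term "seed" keypair, generated once.
s = random.randrange(1, q)
S = pow(g, s, p)

# Primary Authenticator (PA): later, with the BA offline, derives a fresh
# public key P and key handle E from the seed public key S alone.
e = random.randrange(1, q)
E = pow(g, e, p)                 # key handle, stored at the Relying Party
ck = kdf(pow(S, e, p))           # scalar derived from the DH shared secret
P = (S * pow(g, ck, p)) % p      # masked public key: g^(s + ck)

# Recovery: the BA rederives the matching private key from the handle E.
sk = (s + kdf(pow(E, s, p))) % q
```

Since $S^e = E^s = g^{es}$, both sides derive the same scalar `ck`, so `g**sk == P`: the PA never learns `sk`, and each fresh `e` yields a new, unlinkable $(P, E)$ pair for the same seed key.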

— Javed Samuel

### Raccoon Attack: Finding and Exploiting Most-Significant-Bit-Oracles in TLS-DH(E)

In the first session of RWC on Monday, Robert Merget (video) spoke about the Raccoon Attack, one of an ever-growing number of cryptographic attacks with a cute name, a logo, and a website. This attack targets TLS 1.2 implementations using finite-field (FF) Diffie-Hellman (DH) key exchange. Finite-field Diffie-Hellman is not really “modern” cryptography and TLS 1.2 isn’t the latest version, but this attack is notable for a few reasons: it recovers the shared Diffie-Hellman value $g^{ab}$ by solving an instance of the Hidden Number Problem. It exploits timing leakage, placing it among the illustrious ranks of Lucky Thirteen and timing variants of Bleichenbacher’s vulnerability. And the behavior enabling this attack went unnoticed in the TLS specifications for over 20 years, surviving all the way into TLS 1.2 (RFC 5246)!

As part of the TLS 1.2 handshake, the client and server compute a “master secret” using a PRF (usually HMAC-SHA256) keyed with a “pre-master secret” ($g^{ab}$, the value derived from the DH key exchange). The Raccoon attacker targets one specific TLS connection between a client and server whose DH public values are $g^a$ and $g^b$. The attacker’s goal is to learn the “pre-master secret” $g^{ab}$, which would allow it to decrypt all of their communications. A few stars have to align for the attack to work. First, the attacker needs to be able to detect when the most significant byte(s) of the “pre-master secret” are zero bytes. Second, the attacker must be able to perform a number of key exchanges with the server re-using the same DH value $g^b$.

The existence of a timing side channel arises from how HMAC-SHA256 behaves when it is keyed with the “pre-master secret” $g^{ab}$. Whenever an HMAC key is longer than the block size of the underlying hash function (which will be the case for any reasonably-sized FFDH modulus), it is hashed first — and the time to hash it depends on how long it is. One sentence in the TLS 1.2 specification means that the length of the “pre-master secret” is variable:

> Leading bytes of [$g^{ab}$] that contain all zero bits are stripped before it is used as the pre_master_secret.

When hashing a value with SHA256, the message is always padded with at least 1 bit and then processed in 512-bit chunks. So, if the DH key exchange is performed in a finite field of size 2048 bits, SHA256 will operate on either 5 blocks (if there were no leading zero bytes) or fewer. The time elapsed between when the client sends its DH value and when the server sends its next message can reveal this small difference.
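The arithmetic is easy to check: SHA-256 appends at least 9 bytes of padding (a 0x80 byte plus an 8-byte length) and processes 64-byte blocks, so an $n$-byte input takes $\lceil (n+9)/64 \rceil$ compression calls. A quick sketch (illustrative arithmetic, not the attack code):

```python
def sha256_compressions(n_bytes: int) -> int:
    # SHA-256 pads with 0x80, zero bytes, and an 8-byte length field,
    # then processes the padded message in 64-byte blocks.
    return (n_bytes + 9 + 63) // 64

# A 2048-bit pre-master secret is 256 bytes when its top byte is nonzero;
# TLS 1.2 strips leading zero bytes before using it as the HMAC key.
full = sha256_compressions(256)       # 5 compression calls
stripped = sha256_compressions(247)   # 4 compression calls
```

Note that a single stripped byte does not always change the count (255 bytes still takes 5 calls at this size); the measurable timing difference appears at these block boundaries.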

To crack a particular DH key exchange, the attacker must submit many variants of the client’s DH value, $g^a$. The attacker performs a key exchange with DH value $g^a \cdot g^r$ for many different random values of $r$ and notes the ones for which the server re-used $g^b$ and leaked that the upper byte(s) of the “pre-master secret” $(g^a \cdot g^r)\cdot g^b$ were zero byte(s). Suppose the FF has size 2048 bits. Then, for each $r$ value that resulted in $k \geq 1$ zero bytes, the attacker learns that $g^{ab}\cdot g^{rb} < 2^{2040 - 8k}$. A number of these inequalities (around 50-200 for the examples presented) can then be fed into a solver for the Hidden Number Problem to recover $g^{ab}$. For the remainder of the details, see the published paper.
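The query-collection phase can be simulated at toy scale. Below, a 127-bit prime stands in for a 2048-bit group, and a direct comparison stands in for the timing measurement; solving the resulting Hidden Number Problem instance requires lattice reduction and is omitted.

```python
import random

random.seed(1)
p = (1 << 127) - 1     # toy prime modulus; a real group would be ~2048 bits
g = 3
BYTES = 16             # field elements encode to 16 bytes at this size

b = random.randrange(2, p - 1); gb = pow(g, b, p)   # server's reused g^b
a = random.randrange(2, p - 1); ga = pow(g, a, p)   # victim client's g^a
shared = pow(gb, a, p)                              # g^{ab}, the target

def leaks_zero_msb(client_pub):
    # Stand-in for the timing oracle: is the top byte of the 16-byte
    # encoding of (client_pub)^b zero?
    return pow(client_pub, b, p) < 1 << (8 * (BYTES - 1))

# Attacker: re-randomize the victim's public value as g^a * g^r and keep
# the r's for which the oracle fires; each one yields the inequality
#   g^{ab} * g^{rb} mod p < 2^120.
hits = [r for r in (random.randrange(1, p - 1) for _ in range(3000))
        if leaks_zero_msb(ga * pow(g, r, p) % p)]
```

Each hit is one linear constraint on the hidden value $g^{ab}$; with enough of them, a Hidden Number Problem solver recovers it, just as in the paper.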

— Marie-Sarah Lacharité