Double-odd Elliptic Curves

This post is about some new (or sort of new) elliptic curves for use in cryptographic protocols. They were made public in mid-December 2020, on a dedicated Web site: https://doubleodd.group/

There is also a complete whitepaper, full of mathematical proofs, and several implementations.

Oh noes, more curves! Will this never end?

It is true that there is no shortage of elliptic curves in use in cryptography. In the 1990s, various proposals were floated around, and picked up by standardization bodies, in particular NIST. In FIPS 186, some 15 “standard curves” were described; the most famous of them is P-256. Another well-known curve is Curve25519 (which has another equivalent form called Edwards25519). Using these curves, computers routinely perform key exchange and signature protocols, e.g. as part of SSL/TLS. This works well enough in most cases; almost everything on the Web nowadays uses HTTPS, and while elliptic curve cryptography is the foundation of the security of most of these connections, its computational cost is negligible (when using modern computers and smartphones) and is not a common source of security issues.

However, there still are situations where the exact curves you use matter, and the current solutions are not perfect. In particular, small embedded systems with severe constraints on power, RAM and CPU abilities would benefit from curves allowing faster and cheaper implementations. Also, protocols more advanced than classic key exchange and signature may require curves that have some more specific properties. In these areas, there is still room for improvement.

What’s wrong with P-256 or Curve25519?

Classic curves (usually called short Weierstraß curves, after the shape of their equation, first described by Karl Weierstraß), such as P-256, have incomplete formulas: when computing over these curves, there are special cases to take into account. In particular, when adding two points together, the addition formulas don’t work if the two input points are, in fact, the same point. That situation must be detected, and appropriate alternate formulas used in that case. The natural way to handle this is to have some runtime test and conditional execution, which leads to timing attacks and can endanger security. Another method is to trust that such a situation does not happen, based on some prior mathematical analysis, which is easier said than done; any mishap in that analysis tends to translate into exploitable protocol flaws. Complete formulas that avoid these issues have been published, but they induce some extra cost (typically about +40%), which is a problem for small embedded systems.
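
To make the issue concrete, here is a minimal Python sketch of textbook affine point addition on y² = x³ + ax + b modulo a prime p (my illustration, not code from the whitepaper). Note the explicit branches for the point at infinity, for P + (−P), and for doubling: these are exactly the data-dependent conditions that complete formulas remove.

    def ec_add(P, Q, a, p):
        # None stands for the point at infinity (the neutral element).
        if P is None:
            return Q
        if Q is None:
            return P
        x1, y1 = P
        x2, y2 = Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                    # P + (-P) = infinity
        if P == Q:
            # Doubling: the slope comes from the tangent line.
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
        else:
            # Generic addition: the slope comes from the chord through P and Q.
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        y3 = (lam * (x1 - x3) - y1) % p
        return (x3, y3)

In a constant-time implementation, branching on data that may depend on secrets is precisely what must be avoided; hence the appeal of formulas with no special cases at all.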

Twisted Edwards curves (like Edwards25519) have efficient complete formulas, with no exceptional case, sidestepping this issue. However, they introduce another problem, which is the cofactor. Namely, cryptographic protocols normally need a group with a prime order, and the number of points on a twisted Edwards curve cannot be prime, since it is always a multiple of 4. At best, one can have a curve with 4n elements, with n being prime. The ratio between the total number of points on the curve, and the (prime) size of the interesting subgroup, is called the cofactor. Protocols strive to remain in the prime order subgroup. However, when a point is received from the outside, ascertaining whether that point is in the right subgroup is too expensive to be done systematically. The non-trivial cofactor is a known source of flaws in protocols that build upon such curves. Such issues can usually be mitigated in the protocol with some extra operations, but that complicates design and analysis, and it is hard to know whether all cofactor issues have been properly exorcised.
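
The cofactor is easy to see on a toy example. The sketch below uses tiny illustrative parameters (p = 13, a = −1, d = 2, with d a non-square modulo p; these are not Edwards25519’s parameters) and simply counts the solutions of the twisted Edwards equation; the total is a multiple of 4, so it can never be a prime:

    # Count the points of a*x^2 + y^2 = 1 + d*x^2*y^2 over GF(p).
    p, a, d = 13, -1, 2
    points = [(x, y) for x in range(p) for y in range(p)
              if (a * x * x + y * y - 1 - d * x * x * y * y) % p == 0]
    n = len(points)
    print(n, "points; multiple of 4:", n % 4 == 0)
    # prints: 16 points; multiple of 4: True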

What about Decaf / Ristretto?

Decaf and Ristretto are a very nice solution for the cofactor issues, when working with twisted Edwards curves with cofactor 4 (for Decaf) or 8 (for Ristretto). Internally, they work with the “normal” twisted Edwards curve, using the efficient and complete formulas; but when encoding to bytes, or decoding from bytes, special maps are applied, which ensure a sort of canonicalization, to the effect that, from the outside, one obtains a prime order group, and all cofactor issues disappear. In practice, curve Edwards25519 has cofactor 8, and when Ristretto is applied on it, the result is a prime order group called ristretto255. The group order is a prime integer slightly above 2²⁵², and elements are encoded over 255 bits (in low-bandwidth situations where every bit matters, it is possible to encode ristretto255 elements over 254 bits).
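
If it helps intuition, the core idea (quotienting out the torsion subgroup and encoding each class by a canonical representative) can be mimicked with plain integers. This is only an analogy with toy values, not the actual Ristretto maps:

    # Toy analogy: a cyclic group of order 4*r (cofactor 4), quotiented by
    # its subgroup of order 4; each coset is encoded canonically.
    r = 7                            # stand-in for the big prime subgroup order
    n = 4 * r                        # full group order, cofactor 4
    torsion = (0, r, 2 * r, 3 * r)   # the subgroup of order 4

    def encode(x):
        # Canonical representative of the coset x + torsion: the smallest one.
        return min((x + t) % n for t in torsion)

    # Elements differing by torsion encode identically: the cofactor vanishes.
    assert encode(5) == encode(5 + r) == encode(5 + 2 * r)
    # Addition is well-defined on the encodings: a group of prime order r.
    assert encode(encode(5) + encode(9)) == encode(5 + 9)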

Ristretto is technically neat and it is recommended to use it for new cryptographic protocol designs that need a prime order group. It is not perfect, since the encoding and decoding maps have a slight overhead in computational cost. My work on double-odd elliptic curves is the exploration of a hitherto neglected alternative method, which turns out to have some potential benefits, notably in terms of performance (especially on small embedded systems).

So what are these newfangled “double-odd” curves?

First, they are not really new. All elliptic curves can be represented with a short Weierstraß equation, so, in a sense, they were already all described when curves were first proposed for use in cryptography, in the 1980s. Maybe more importantly, all the security of such cryptography relies on the idea that for most curves, the elliptic curve discrete logarithm problem is hard (i.e. given points G and kG for some unknown k, it should be computationally infeasible to find k). We can piggyback on that standard assumption as long as whatever curve we propose is from a class of curves large enough that it is included in that notion of “most curves”. In other words, there might be some special curves which are weak, but they are rare, and any proposal for “new” curves should fare fine as long as that proposal is for enough curves that they are, indeed, not so new.

This is the case for double-odd elliptic curves. They are the curves whose order (number of points on the curve) is the double of an odd integer; hence the name. About one quarter of all curves are double-odd curves: this is a large class of curves. This is easily seen as follows: for a given base field, the orders of curves defined over that field are mostly uniformly distributed over a given range; half of them will be even, and half of these will be double-odd (i.e. even but not a multiple of 4).
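
This heuristic is easy to check by brute force on a small field. The following sketch (illustration only; the prime 61 is an arbitrary small choice) enumerates all non-singular curves y² = x³ + ax + b over GF(61) and tallies those with double-odd order:

    p = 61   # small prime, just for the experiment

    def legendre(u):
        u %= p
        if u == 0:
            return 0
        return 1 if pow(u, (p - 1) // 2, p) == 1 else -1

    double_odd = total = 0
    for a in range(p):
        for b in range(p):
            if (4 * a**3 + 27 * b**2) % p == 0:
                continue   # singular cubic, not an elliptic curve
            # #E(GF(p)) = p + 1 + sum over x of legendre(x^3 + a*x + b)
            n = p + 1 + sum(legendre(x**3 + a * x + b) for x in range(p))
            total += 1
            if n % 2 == 0 and (n // 2) % 2 == 1:
                double_odd += 1

    print(double_odd / total)   # approximately 0.25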

Such curves have nominally been supported by generic standards such as ANSI X9.62 (ECDSA signatures over any curve) for the last two decades. They have been somewhat neglected, because when you use a classic short Weierstraß equation you can simply choose a curve with a prime order (no cofactor, by definition), and if you accept a non-trivial cofactor then you can go straight to Montgomery / twisted Edwards curves (cofactor at least 4).

But then, double-odd curves have a cofactor (of 2); so, what is the point?

They indeed have a cofactor of 2; but one can make a prime order group out of them with a proper encoding/decoding process. The main trick is that if a curve has order 2r for some odd integer r, then there is a reasonably efficient way to verify that a given point is in the subgroup of order r. This allows maintaining the “prime order group” abstraction, i.e. no cofactor issue.
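
As a naive illustration, here is that membership criterion, brute-forced on a toy double-odd curve (y² = x³ + 2x over GF(13), which has 10 = 2 × 5 points; these are illustrative parameters, not a real curve choice). The whitepaper’s actual test is much cheaper than a full scalar multiplication; the code below only shows what is being decided:

    p, a, r = 13, 2, 5   # toy curve y^2 = x^3 + a*x, of order 2*r

    def add(P, Q):
        # None stands for the point at infinity.
        if P is None:
            return Q
        if Q is None:
            return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def mul(k, P):
        # Double-and-add scalar multiplication.
        R = None
        while k:
            if k & 1:
                R = add(R, P)
            P = add(P, P)
            k >>= 1
        return R

    points = [None] + [(x, y) for x in range(p) for y in range(p)
                       if (y * y - (x**3 + a * x)) % p == 0]
    members = [P for P in points if mul(r, P) is None]
    print(len(points), "points,", len(members), "in the odd-order subgroup")
    # prints: 10 points, 5 in the odd-order subgroup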

But we also get some other nice features:

  • Encoding is “economical”: for a prime order group of order about 2ⁿ, encoding uses only n+1 bits.
  • We can use the unique point of order 2 on the curve as a pivot to perform point additions with a sort of skew that removes the exceptional case of adding a point to itself. With a bit more work and an appropriate system of coordinates, we get comfortably complete formulas, with no exceptional case at all.
  • These complete formulas are fast. Notably, when computing a sequence of successive doublings (a common operation when multiplying a point by a scalar), the per-doubling cost can be as low as 6 multiplications, while twisted Edwards curves need at least 7.
  • Last but not least, the class of double-odd curves includes some curves which were previously described in 2001 by Gallant, Lambert and Vanstone; namely, curves with equation y² = x³ + bx. These “GLV curves” have some special extra structure (an easily computed endomorphism on the curve) that can be leveraged to speed some operations up (e.g. point multiplication by a scalar); a toy illustration follows this list. Two decades after that publication, no way to leverage that structure for attacks was found, so this seems to be safe. The GLV method was long rumoured to be patented, but the commonly quoted patent references supporting that rumour are now listed as “expired”.
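
To make the GLV endomorphism tangible, here is a toy sketch (illustrative parameters p = 13, b = 2, not those of any real curve). On y² = x³ + bx, if i is a square root of −1 modulo p, then (x, y) → (−x, i·y) maps curve points to curve points, and on a point of odd order r it acts as multiplication by a fixed scalar λ with λ² = −1 mod r; that is what lets a scalar multiplication be split into two half-size ones:

    p, b = 13, 2   # toy curve y^2 = x^3 + b*x, of order 10 = 2*5
    i = next(v for v in range(2, p) if (v * v) % p == p - 1)   # i^2 = -1 mod p

    def on_curve(P):
        return (P[1]**2 - (P[0]**3 + b * P[0])) % p == 0

    def add(P, Q):
        # None stands for the point at infinity.
        if P is None:
            return Q
        if Q is None:
            return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None
        lam = ((3 * x1 * x1 + b) * pow(2 * y1, -1, p) if P == Q
               else (y2 - y1) * pow(x2 - x1, -1, p)) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def mul(k, P):
        # Double-and-add scalar multiplication.
        R = None
        while k:
            if k & 1:
                R = add(R, P)
            P = add(P, P)
            k >>= 1
        return R

    psi = lambda P: ((-P[0]) % p, (i * P[1]) % p)   # the GLV endomorphism

    P = mul(2, (1, 4))   # doubling (1,4) lands in the odd-order subgroup
    assert on_curve(P) and on_curve(psi(P))
    r = 5                # odd part of the curve order
    lam = next(l for l in range(1, r) if mul(l, P) == psi(P))
    print("psi(P) =", lam, "* P; lam^2 mod r =", (lam * lam) % r)
    # prints: psi(P) = 2 * P; lam^2 mod r = 4   (and 4 = -1 mod 5)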

This is confusing. Is there a more visual way to describe double-odd elliptic curves?

Yes! The double-odd curves site includes a geometrical introduction with pictures.

Are double-odd elliptic curves faster than twisted Edwards curves?

Sort of. It depends. Yes on small embedded systems.

Specifically, we define two specific sets of parameters, for curves do255e and do255s. Curve do255e is a GLV curve, and is the one most recommended. Curve do255s is a “normal curve” with no GLV endomorphism; its role is to show that even without the endomorphism, double-odd elliptic curves are still a quite acceptable tool. On an ARM Cortex M0+ embedded CPU, our implementation (which is fully constant-time) can do a key exchange with do255e in 2.6 million cycles. By comparison, with Curve25519, the best reported performance is 3.2 million cycles; moreover, with do255e, you also know whether the input point was correct (in the right subgroup of the right curve), while Curve25519 does not provide that information. For signatures, our code can produce a signature in 1.5 million cycles, and verify it in 3.3 million cycles, which is again faster than Edwards25519 performance.

On large architectures (recent x86 CPUs in 64-bit mode), we also get very decent performance. For instance, with do255e, we can sign in 54000 cycles and verify in 112000 cycles. The verification figure includes the cost of decoding the public key used for verification; without that decoding, the verification cost is about 94000 cycles. By comparison, the well-known curve25519-dalek implementation for Edwards25519 needs 110000 cycles for a signature verification (again not counting public key decoding cost). I am a bit cherry-picking figures here, because curve25519-dalek also includes a batch verification procedure (applicable when verifying many signatures at the same time) which has a lower per-verification cost (about 61000 cycles), but the same procedure should conceptually be applicable to double-odd curves. Batch signature verification and optimized implementation of double-odd elliptic curves with SIMD instructions (e.g. AVX2) are still unexplored subjects.

I must stress that the point of double-odd elliptic curves is to provide a safe structure, amenable to building cryptographic protocols on top of the “prime order group” abstraction. However, the obtained performance is still neat. This avoids creating a trade-off between security and speed; we can get both.

Isn’t all of this pointless since elliptic curves will be utterly annihilated by quantum computers?

In truth we do not know. I do not know whether quantum computers will ever work. I can argue that nobody else knows either. A characteristic of research on quantum computers is that it is expensive science. People trying to build better prototypes need to convince fund providers to pay for the experiments, which are already in the billions of dollars; for that, they must remain optimistic at all times, because nobody is going to drop a billion dollars on an experiment whose predicted outcome is that it won’t work. Thus, nobody can express a really independent and objective assessment of the ultimate workability of the endeavour, let alone predict an achievement date. This is a situation quite similar to that of controlled nuclear fusion: a perfectly sound and solid scientific concept, large-scale experiments that show that things work as expected, but a never-ending succession of technological issues that always keep success at least ten years away, while development budgets keep rising.

Even if a truly functioning quantum computer were to be built one day, with enough quantum bits and gates to break discrete logarithm on practical curves, the sheer initial price of that first machine would be such that a delay of at least a few years would pass before potential attackers could be assumed to have enough resources to build a similar one.

Thus, it is reasonable to assert that elliptic curve cryptography still has at least one or two decades of useful life, at least for short-term usage (e.g. authentication, where there is no long-term encrypted secret data), which is enough to make this research relevant. It is, of course, a very good idea to develop new “post-quantum” algorithms that will ensure security even against quantum computers; but that does not make elliptic curves immediately obsolete.
