Building Intuition for Lattice-Based Signatures – Part 2: Fiat-Shamir with Aborts


This two-part blog series aims to build some intuition for the main techniques that are used to construct lattice-based signatures, focusing in particular on the techniques underlying Falcon and Dilithium, the two lattice-based signature schemes selected for standardization by the National Institute of Standards and Technology (NIST). In part 1 of this two-part blog post (Building Intuition for Lattice-Based Signatures – Part 1: Trapdoor Signatures), we covered how to build lattice-based trapdoor signatures based on the hardness of the Closest Vector Problem (CVP) using the hash-and-sign paradigm, which lies at the core of Falcon.

In this second part, we will describe an alternative construction of lattice-based signatures relying on the hardness of the Shortest Vector Problem (SVP) and the Fiat-Shamir paradigm, which is used as a basis for the signature scheme Dilithium. For a quick refresher on lattice theory and notation that will be used throughout this post, see the Lattice Background section in part 1 of this blog post.

Constructing Signatures Using Fiat-Shamir and the SVP

Recall that the SVP_\gamma problem asks to find a short lattice vector of length at most a multiplicative approximation factor \gamma \geq 1 times the length of the shortest (non-zero) vector in the lattice. In order to construct a lattice-based signature scheme based on SVP_\gamma, we focus on a special case of the problem instantiated on q-ary lattices, known as the Short Integer Solution (SIS) problem. Formally, SIS_{n,m,q,\beta} can be defined as follows. Given A \in \mathbb{Z}_q^{n \times m}, find a short non-zero \vec{z} \in \mathbb{Z}^m in the q-ary lattice \Lambda^\perp(A), i.e. satisfying A\vec{z} \equiv \vec{0} \mod q and \|\vec{z}\| \leq \beta. (Note that any norm can be used here; common choices include the \ell_2 and \ell_\infty norms.)

The SIS problem lends itself well to constructing cryptographic primitives. If we choose our domain to be a set of short vectors, then we can show the function \vec{x} \to A\vec{x} is in fact a hash function1 which is collision-resistant and one-way.

Indeed, let D_b = \{x \in \mathbb{Z} : |x| \leq b\} and define the hash function f_A: D_b^m \to \mathbb{Z}_q^n by f_A(\vec{x}) = A\vec{x} \bmod q. If we could find a collision \vec{x}_1, \vec{x}_2 \in D_b^m such that A\vec{x}_1 \equiv A\vec{x}_2 \mod{q}, then \vec{x}_1 - \vec{x}_2 would be a solution to the SIS_{n,m,q,\beta = 2b} problem. Indeed, we have that A\vec{x}_1 - A\vec{x}_2 = A(\vec{x}_1 - \vec{x}_2) \equiv \vec{0} \mod{q} and, since both \vec{x}_1 and \vec{x}_2 are short, the difference \vec{x}_1 - \vec{x}_2 is also bounded, with \|\vec{x}_1 - \vec{x}_2\|_\infty \leq \|\vec{x}_1\|_\infty + \|\vec{x}_2\|_\infty \leq 2b, and hence is a solution to SIS_{n,m,q,\beta = 2b}.
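
The collision argument above can be checked numerically. The following sketch (with toy, wholly insecure parameters chosen so that a brute-force search finds a collision by the pigeonhole principle) builds the SIS hash and verifies that a collision yields a short vector in \Lambda^\perp(A):

```python
import itertools
import numpy as np

# Toy demonstration (insecure parameters) that a collision in the SIS hash
# f_A(x) = A x mod q yields a short vector in Lambda^perp(A).
rng = np.random.default_rng(1)
n, m, q, b = 2, 10, 17, 1      # domain size 2^10 > range size 17^2: collision exists

A = rng.integers(0, q, size=(n, m))

def f_A(x):
    return tuple((A @ np.array(x)) % q)

# Brute-force a collision over the short domain {0,1}^m (pigeonhole).
seen = {}
for x in itertools.product(range(b + 1), repeat=m):
    h = f_A(x)
    if h in seen and seen[h] != x:
        x1, x2 = np.array(seen[h]), np.array(x)
        break
    seen[h] = x

# z = x1 - x2 is a SIS solution: A z = 0 mod q and 0 < |z|_inf <= 2b.
z = x1 - x2
assert np.all((A @ z) % q == 0)
assert 0 < np.max(np.abs(z)) <= 2 * b
```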

We will now show how to use this one-way function to construct lattice-based identification schemes and signatures.

Signature Schemes from Identification Schemes

In order to use SIS as the hard problem at the heart of a lattice-based signature scheme, we use a construction based on identification schemes. In general, one can use a one-way function to construct an identification scheme, which can then in turn be used to construct a signature scheme using the Fiat-Shamir transform as follows.

A commit-challenge-response identification protocol can be used for Alice to prove knowledge of some secret information (such as a private key \mathrm{sk}, corresponding to a public key \mathrm{pk}). It consists of three main steps: First, Alice chooses a random (secret) value y, and computes a commitment to y using a one-way function f(y), which she sends to Bob. Then, Bob responds to this with a random challenge c. Finally, Alice provides a response combining y, c and the secret key \mathrm{sk} in such a way that it can be verified using the commitment f(y) and the public key \mathrm{pk}, but no information about \mathrm{sk} is leaked.

As an example, suppose Alice wants to prove to Bob that she knows s, the discrete logarithm of S = g^s, where g generates a cyclic group of prime order q. One method of doing so is the Chaum Identification Scheme:

Note that someone impersonating Alice (i.e. someone who does not know \mathrm{sk} = s) would be able to return an answer that Bob accepts at most 1/2 of the time, as they can prepare a commitment that verifies correctly for c=0 or for c=1, but not for both. Repeating this interaction k times (successfully) would thus convince Bob that the person he is interacting with is Alice with probability 1 - 1/2^k. This process can be parallelized, as is done in Schnorr’s Identification Scheme:
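
The 1/2 soundness bound is easy to see in code. The sketch below is a hypothetical toy round of such a 1-bit-challenge protocol in a tiny discrete-log group (all parameters are illustrative only): an honest prover always passes, while an impersonator who prepares a commitment for one guessed challenge value is caught about half the time.

```python
import random

# Toy round of a 1-bit-challenge (Chaum-style) identification protocol in a
# small discrete-log group; parameters are illustrative, not secure.
p, q, g = 23, 11, 2                    # g generates a subgroup of order 11 mod 23
s = 7                                  # Alice's secret key
S = pow(g, s, p)                       # public key S = g^s

def honest_round(c):
    y = random.randrange(q)
    t = pow(g, y, p)                   # commitment g^y
    z = (y + c * s) % q                # response z = y + c*s
    return pow(g, z, p) == (t * pow(S, c, p)) % p

def cheating_round(c):
    guess = random.randrange(2)        # guess the challenge in advance
    y = random.randrange(q)
    # Prepare a commitment that verifies only for the guessed challenge,
    # then always respond with z = y.
    t = pow(g, y, p) if guess == 0 else (pow(g, y, p) * pow(S, -1, p)) % p
    return pow(g, y, p) == (t * pow(S, c, p)) % p

assert all(honest_round(random.randrange(2)) for _ in range(100))
rate = sum(cheating_round(random.randrange(2)) for _ in range(1000)) / 1000
assert 0.4 < rate < 0.6                # succeeds only when the guess was right
```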

The main downside of this process is the requirement for it to be interactive, as it incurs communication costs, and requires Alice to be available when proving her identity.

To avoid those drawbacks, one can use the Fiat-Shamir heuristic – a method to transform an interactive identification scheme such as the ones above to a non-interactive signature scheme. The basic idea is to replace the challenge with the output of a random oracle (in practice, a cryptographic hash function is used instead) that depends on the message and public values.

For the example above, the identification scheme can be adapted to generate Schnorr (non-interactive) signatures as follows:


1. \mathrm{sk}: s \leftarrow \mathbb{Z}_q \setminus \{0\}
2. \mathrm{pk}: g^s, where g is a public generator of a cyclic group of prime order q

Sign(s, m):

1. y\leftarrow\mathbb{Z}_q\setminus\{0\}
2. c=H(g^y, m)
3. z = cs + y
4. Return signature \sigma = (z, g^y)

Verify(g^s, m, \sigma = (z, g^y)):

1. c' = H(g^y, m)
2. Output g^z == (g^s)^{c'} \cdot g^y

Equivalently, the signer could instead send the signature \sigma = (z, c), in which case the verifier computes g^{y'} = g^z(g^s)^{-c} and checks that c == H(g^{y'}, m).
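
Here is a toy implementation of this Schnorr signature in a tiny discrete-log group (illustrative parameters only; a real deployment uses a group with a ~256-bit order and a fresh random y per signature):

```python
import hashlib

# Toy Schnorr signatures over a small discrete-log group (insecure parameters).
p, q, g = 23, 11, 2                     # g generates a subgroup of order 11 mod 23

def H(gy, msg):
    """Random-oracle challenge c = H(g^y, m), reduced mod the group order."""
    data = f"{gy}|{msg}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(s, msg, y):
    gy = pow(g, y, p)                   # commitment g^y
    c = H(gy, msg)                      # non-interactive challenge
    z = (c * s + y) % q                 # response z = cs + y
    return z, gy

def verify(pk, msg, sig):
    z, gy = sig
    c = H(gy, msg)
    return pow(g, z, p) == (pow(pk, c, p) * gy) % p   # g^z == (g^s)^c * g^y

s = 7                                   # secret key
pk = pow(g, s, p)                       # public key g^s
sig = sign(s, "hello", y=5)
assert verify(pk, "hello", sig)
```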

Lattice-Based Identification Schemes

Taking a similar approach, we can use the one-way function defined using SIS to define lattice-based identification and signature schemes, using an approach first introduced by Lyubashevsky in [Lyu08].

As a first try, we might define an identification scheme as follows. First, fix a matrix A \in \mathbb{Z}_q^{n \times m} as part of the protocol parameters. Then, for each iteration of the identification scheme, choose a random bounded \vec{y}, commit to it by computing the one-way function \vec{w} = A\vec{y}, and respond to challenges as follows:

Unfortunately, this protocol can leak information about Alice’s secret key. Indeed, since the multiplication and addition step \vec{z} = \vec{s}c + \vec{y} is performed over an infinite group (in practice, over a bounded subset of it, due to practical limitations), the result cannot be uniformly distributed over the resulting space. In particular, edge values of coordinates of \vec{z}, such as particularly large or small values of z_i, leak data about the corresponding coordinates of the secret.

If c=1, signatures risk leaking data about the secret \vec{s}. Indeed, if any coordinate z_i of \vec{z} is equal to 0 for a given signature, then it must be that y_i + cs_i = 0, and hence (since all values are non-negative) that y_i = s_i = 0. On the other hand, if any coordinate z_i of \vec{z} is equal to 5m, we must have that y_i = 5m-1 and s_i = 1. Similarly, the signature scheme Dilithium chooses the coefficients of its challenge c from the set \{-1,0,1\}. In this case, computing \vec{z} = \vec{y} + c\vec{s}_1 leaks information whenever \|\vec{z}\|_\infty is above a particular bound (when \|\vec{z}\|_\infty \geq 5m if the parameters are chosen as above).
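
This leakage can be observed empirically. The sketch below simulates many transcripts with the toy parameters above (s \in \{0,1\}^m, y_i uniform in \{0,...,5m-1\}, challenge c = 1) and shows that every secret bit inferred from an extreme coordinate of \vec{z} is correct:

```python
import numpy as np

# Empirical illustration of the leakage: extreme values of z_i = y_i + s_i
# pin down the corresponding secret coordinate exactly.
rng = np.random.default_rng(2)
m = 8
s = rng.integers(0, 2, size=m)           # the secret, s in {0,1}^m

recovered = {}
for _ in range(10_000):                  # observe many transcripts with c = 1
    y = rng.integers(0, 5 * m, size=m)   # y_i uniform in {0,...,5m-1}
    z = y + s
    for i in np.flatnonzero(z == 0):     # z_i = 0  =>  y_i = s_i = 0
        recovered[i] = 0
    for i in np.flatnonzero(z == 5 * m): # z_i = 5m =>  y_i = 5m-1, s_i = 1
        recovered[i] = 1

# Every bit inferred from an extreme z_i matches the true secret.
assert recovered and all(s[i] == bit for i, bit in recovered.items())
```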

Additionally, we cannot simply modify the sampling of \vec{y} so that it never leaks information. Indeed, since \vec{y} is output when c = 0, the distribution of \vec{y} values that don’t leak information would itself leak information about the secret. For instance, if we modified the sampling process of \vec{y} so that y_i is never 0 for any i such that s_i = 0 (to avoid the leakage of the first case described above), then the fact that some coordinates of \vec{y} are never 0 would be discovered over many responses to the challenge c = 0, and would reveal that those coordinates of the secret \vec{s} are also 0 with high probability.

One possible modification to the identification scheme that can be done to avoid this leakage is simply to make the bounds so large that getting a value of z_i that would leak information happens with negligible probability. Unfortunately, this would lead to much larger key and communication sizes, and would correspond to an easier version of the SIS problem.

The solution to this dilemma is straightforward – simply check whether a given \vec{z} = \vec{s}c + \vec{y} would leak information before responding to a challenge. If it would, we simply abort by sending \vec{z} = \perp (which is a special value rejected by the verifier), and try again. If no information would be leaked, we proceed as usual.2
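
A minimal sketch of this abort check, using the toy parameters above (s_i \in \{0,1\}, y_i uniform in \{0,...,5m-1\}, with None playing the role of \perp), also shows why it works: after aborts, the response distribution no longer depends on the secret.

```python
# Abort rule from the text: refuse to answer whenever some z_i would leak.
def respond(s_vec, y_vec, c, m):
    z = [yi + c * si for yi, si in zip(y_vec, s_vec)]
    if c == 1 and any(zi == 0 or zi == 5 * m for zi in z):
        return None                        # abort: this z would leak a bit
    return z

# With m = 1 and c = 1, the set of non-aborted responses is {1, 2, 3, 4}
# whether the secret bit is 0 or 1 -- the abort removes the leak.
m = 1
def accepted(s):
    return {z[0] for y in range(5 * m)
            if (z := respond([s], [y], 1, m)) is not None}

assert accepted(0) == accepted(1) == {1, 2, 3, 4}
```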

Determining the success probability of a single round then allows us to compute the expected number of rounds necessary for the verifier to accept with a given probability. It can be shown that this protocol is witness indistinguishable, so these rounds can be performed in parallel, at the cost of some computation and communication overhead.

However, if this new “identification scheme with aborts” is turned into a signature scheme using the Fiat-Shamir heuristic, then we can do better. Indeed, since the “challenge” is generated from the commitment and message using a cryptographic hash function, the signer simply needs to generate new (random) commitments to retry an aborted round of the identification scheme, repeating until no data is leaked. This process is called “Fiat-Shamir with Aborts”, and allows a signature to have the same size as the communication cost of a single (successful) round of the identification scheme. Putting it all together, we get the following signature scheme:


1. \mathrm{sk}: \vec{s} \sim \{0,1\}^m
2. \mathrm{pk}:  \vec{p} = A\vec{s}


Sign(\vec{s}, m):

1. Let \sigma = \perp
2. While \sigma = \perp:
   1. For i = 1..k, sample \vec{y}_i \sim \{0,...,5m-1\}^m, and let \vec{w}_i = A\vec{y}_i. Let Y = [\vec{y}_1, ..., \vec{y}_k] and W = [\vec{w}_1, ..., \vec{w}_k]
   2. c=H(W, m) \in \{0,1\}^n
   3. For i = 1..k, let \vec{z}_i = \vec{y}_i + c_i \vec{s}. Let Z = [\vec{z}_1, ..., \vec{z}_k]
   4. If any coordinate Z_{i,j} is 0 or 5m, set \sigma = \perp. Otherwise, set \sigma = ( Z, W)
3. Return signature \sigma = (Z, W)

Verify(\sigma = (Z, W), m):

1. c' = H(W, m)
2. For i = 1..k, check that A\vec{z}_i == c'_i \vec{p} + \vec{w}_i and that all coordinates satisfy 0 < Z_{i,j} < 5m
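
Putting the pieces together, here is a toy end-to-end version of the scheme above (wholly insecure parameters, chosen only so the code runs quickly; the retry loop is exactly the Fiat-Shamir-with-aborts step):

```python
import hashlib
import numpy as np

# Toy Fiat-Shamir-with-aborts signature scheme (insecure parameters).
rng = np.random.default_rng(3)
n, m, k, q = 4, 12, 8, 257

A = rng.integers(0, q, size=(n, m))            # public matrix

def H(W, msg):
    """Challenge bits c in {0,1}^k from the commitments and the message."""
    digest = hashlib.sha256(W.tobytes() + msg.encode()).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(k)]

def keygen():
    s = rng.integers(0, 2, size=m)             # short secret s in {0,1}^m
    return s, (A @ s) % q                      # pk = A s mod q

def sign(s, msg):
    while True:                                # Fiat-Shamir with aborts
        Y = rng.integers(0, 5 * m, size=(k, m))   # y_i uniform and bounded
        W = (Y @ A.T) % q                      # commitments w_i = A y_i mod q
        c = H(W, msg)
        Z = Y + np.outer(c, s)                 # z_i = y_i + c_i * s
        if np.all((Z > 0) & (Z < 5 * m)):      # abort on extreme coordinates
            return Z, W

def verify(pk, msg, sig):
    Z, W = sig
    c = H(W, msg)
    ok_eq = all(np.array_equal((A @ Z[i]) % q, (c[i] * pk + W[i]) % q)
                for i in range(k))
    return ok_eq and bool(np.all((Z > 0) & (Z < 5 * m)))

sk, pk = keygen()
sig = sign(sk, "hello lattice")
assert verify(pk, "hello lattice", sig)
```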

The signature scheme Dilithium that has been chosen for standardization by NIST is designed using the above template. As the time required to generate a signature depends on the expected number of iterations, parameters are generally selected so that the expected number of iterations required until a valid signature is generated is relatively small. For example, Dilithium is expected to take between 4 and 7 rounds to output a signature.

In general, lattice signature schemes based on the one-way function \vec{x} \to A\vec{x} are very fast, as matrix-vector multiplications are very efficient operations. However, these schemes generally have very large keys and signatures. This is because each key consists of a vector of m elements, and each signature consists of two matrices of k columns each, the commitment matrix W and the response matrix Z. This results in parameters of a few thousand bytes for public keys and signatures. (In Dilithium, for example, the recommended (optimized) parameters for NIST level III result in a public key size of 1472 bytes, and a signature size of 2701 bytes.) However, there are a number of different optimizations that can be used to reduce the key sizes, possibly at the cost of complexity or of increased computations, which we’ll discuss in the following section.

Options and Optimizations

One straightforward optimization is to generate the matrix A, which is part of the public key, from a seed, using an extendable-output function such as SHAKE. The verifier can then re-compute A at a relatively low cost, which avoids including the entire matrix A in the public key. The matrix A could also be defined as a public parameter of the scheme, with each individual key pair simply consisting of the short value \vec{s} as a secret key, and of A\vec{s} as the public key. However, this requires generating the global parameter A in a trusted manner and, combined with the above optimization, only reduces the size of the public key by the size of the seed, so this second optimization is usually not included.
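
A sketch of this seed expansion, using Python's `hashlib.shake_128` (simplified: raw bytes are reduced mod q, which introduces a small bias that real schemes avoid with rejection sampling):

```python
import hashlib
import numpy as np

# Expand the public matrix A deterministically from a short seed with an
# extendable-output function (simplified sketch; reduction mod q is biased).
def expand_A(seed: bytes, n: int, m: int, q: int) -> np.ndarray:
    stream = hashlib.shake_128(seed).digest(2 * n * m)  # 2 bytes per entry
    words = np.frombuffer(stream, dtype=np.uint16)
    return (words.astype(np.int64) % q).reshape(n, m)

# Signer and verifier derive the same A from the same 32-byte seed.
seed = b"\x01" * 32
assert np.array_equal(expand_A(seed, 4, 12, 257), expand_A(seed, 4, 12, 257))
```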

A second common optimization method is to define the lattices over rings or modules. This results in additional structure within the lattices, which can be used to obtain smaller key sizes as well as faster computation speeds. As an example, Dilithium is defined over elements from the polynomial ring \mathbb{Z}_q[X]/(X^{256} + 1), which allows the use of the Number Theoretic Transform (NTT) to speed up matrix-vector multiplications. As a consequence of this change, the signature scheme depends on slightly different security assumptions (the ring/module versions of the existing ones), but these are not currently known to be less secure than their non ring/module counterparts.
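
The ring structure amounts to a negacyclic wrap-around in polynomial multiplication. The sketch below shows schoolbook multiplication in \mathbb{Z}_q[X]/(X^n + 1) with toy n and q (Dilithium uses n = 256 and q = 8380417); the NTT computes the same product in O(n log n) time instead of the O(n^2) loop shown here:

```python
import numpy as np

# Schoolbook multiplication in Z_q[X]/(X^n + 1): terms of degree >= n wrap
# around with a sign flip, since X^n = -1 in this ring.
def ring_mul(a, b, n, q):
    c = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            if i + j < n:
                c[i + j] += a[i] * b[j]
            else:
                c[i + j - n] -= a[i] * b[j]   # X^{i+j} = -X^{i+j-n}
    return c % q

n, q = 4, 17
a = np.array([1, 2, 3, 4])                    # 1 + 2X + 3X^2 + 4X^3
b = np.array([5, 6, 7, 8])
prod = ring_mul(a, b, n, q)
```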

Another possible optimization comes from modifying the rejection sampling procedure: one can think of the aborts described earlier as a form of rejection sampling with a uniform target distribution. Other target distributions can be used instead, such as a Gaussian distribution (or a related variant), which makes it possible to minimize the signature size for an equivalent security level; see for example [Lyu11] and [DDLL13]. However, this comes at the cost of additional code complexity, as it requires implementing high-precision Gaussian sampling. Hence, some signature schemes, including Dilithium, chose to stick with the simpler uniform distribution for rejection sampling.

Using a different approach, some additional efficiency can be obtained for an SIS-based signature scheme by introducing a second security assumption, as done by [BG13]. The Learning With Errors (LWE) problem is usually used in lattice-based encryption schemes, and asks to recover the secret \vec{s} given the pair (A, \vec{b} = A\vec{s} + \vec{e}), where \vec{e} is a short error vector. The decisional version of the LWE problem asks to distinguish pairs (A, A\vec{s} + \vec{e}) from pairs generated from a uniform distribution. However, note that we could instead phrase this as the decisional problem of distinguishing the pairs (\bar{A}, \bar{A}\vec{s}) from uniformly random pairs, where \bar{A} = [A|I] and \vec{s} = (\vec{s}_1, \vec{s}_2), since \bar{A}\vec{s} = \bar{A}(\vec{s}_1, \vec{s}_2) = A\vec{s}_1 + I\vec{s}_2 = A\vec{s}_1 + \vec{s}_2 (and is thus exactly an LWE sample, provided \vec{s}_1 and \vec{s}_2 are sampled from the appropriate distributions). This allows one to use the same Fiat-Shamir structure to obtain signatures whose hardness is based on the LWE assumption by modifying the keygen process (in addition to the original SIS hardness assumption, for technical reasons), and results in shorter signatures. This optimization is included in Dilithium, and can be seen in its keygen process.
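
The algebraic identity behind this reformulation is easy to verify numerically (toy parameters; the {0,1} secrets below merely stand in for "appropriately short" distributions):

```python
import numpy as np

# Check that an LWE sample A s1 + s2 equals A_bar s for A_bar = [A | I]
# and s = (s1, s2), as in the reformulation above.
rng = np.random.default_rng(4)
n, m, q = 4, 8, 97

A = rng.integers(0, q, size=(n, m))
s1 = rng.integers(0, 2, size=m)              # short LWE secret
s2 = rng.integers(0, 2, size=n)              # short LWE error

A_bar = np.hstack([A, np.eye(n, dtype=np.int64)])   # [A | I]
s = np.concatenate([s1, s2])

assert np.array_equal((A_bar @ s) % q, (A @ s1 + s2) % q)
```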

Finally, once the signature framework is fully defined, it is also possible to make additional scheme-specific optimizations or apply compression techniques to minimize the key size. In Dilithium, for example, it was observed that it is possible to compress the signature by omitting some of the low-order bits of some elements in \mathbb{Z}_q, and instead including a vector of one-bit hints that are used to ensure the result of computations is correct.
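
As a simplified sketch of how dropping low-order bits works, the function below splits a coefficient r into high bits r1 (kept) and centered low bits r0 (dropped), in the style of Dilithium's Power2Round; the hint bits of the real scheme, which repair the carries this dropping can cause, are omitted here.

```python
# Simplified Power2Round-style decomposition: r = r1 * 2^d + r0 with the
# low part centered, -2^(d-1) < r0 <= 2^(d-1). (Hint logic omitted.)
def power2round(r, d):
    r0 = r % (1 << d)
    if r0 > 1 << (d - 1):
        r0 -= 1 << d                        # center the low bits
    return (r - r0) >> d, r0

q, d = 8380417, 13                          # Dilithium's modulus and d
for r in [0, 1, 4096, 4097, 8380416]:
    r1, r0 = power2round(r, d)
    assert r == r1 * (1 << d) + r0          # decomposition is exact
    assert -(1 << (d - 1)) < r0 <= 1 << (d - 1)
```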


The description of lattice-based signature schemes can often seem intimidating at first glance, but at the heart of these schemes are the same constructions used for well-known classical signature schemes, with a few small modifications to adapt these constructions to the case of the infinite groups known as lattices.

Since these first lattice-based schemes, many additional techniques have been introduced to mitigate the various drawbacks of lattice primitives, whether that is the large size of keys and signatures in Fiat-Shamir schemes, or the complexity of implementing secure preimage sampling in hash-and-sign schemes. However, these simple constructions can still be found at the core of modern lattice signature schemes today, and will hopefully help gain an intuition for the working of the more complex constructions that are being standardized.

Many thanks to Paul Bottinelli and Giacomo Pope for their careful review. Any remaining errors are mine alone.


1: This hash function is not suitable for all cryptographic applications; while it is collision resistant and one-way, it is not a fully random function (for instance, A(x_1 + x_2) = Ax_1 + Ax_2, which allows one to infer an underlying structure). While this may be a problem for some applications that require the output of a hash function to be indistinguishable from random, it is also often useful in other settings such as in homomorphic encryption.

2: A similar approach also works in the classical setting: Girault’s ID scheme [Gir90] is a variant of Schnorr’s identification protocol, but computes the discrete log modulo a composite number N and performs the multiplication z = sc + y over the integers. Security comes from choosing y in a range much larger than that of sc, so that with very high probability no leak occurs. In [Lyu08], Lyubashevsky showed that the parameters can be significantly reduced by switching to a variant with aborts.


[Falcon]: P. Fouque et al., Falcon: Fast-Fourier Lattice-based Compact Signatures over NTRU, 2020.

[Dilithium]: S. Bai et al., CRYSTALS-Dilithium Algorithm Specifications and Supporting Documentation (Version 3.1), 2021.

[Gir90]: M. Girault, An identity-based identification scheme based on discrete logarithms modulo a composite number, 1990.

[Lyu08]: V. Lyubashevsky, Lattice-Based Identification Schemes Secure Under Active Attacks, 2008.

[Pei10]: C. Peikert, An Efficient and Parallel Gaussian Sampler for Lattices, 2010.

[Lyu11]: V. Lyubashevsky, Lattice Signatures Without Trapdoors.

[DDLL13]: L. Ducas et al., Lattice Signatures and Bimodal Gaussians.

[BG13]: S. Bai and S. Galbraith, An improved compression technique for signatures based on learning with errors.
