1 Introduction
Cryptography usually adopts a worst-case angle on complexity. For example, in the context of concrete security, a typical theorem shows that an adversary running for at most t steps succeeds with advantage at most $\varepsilon$. In this paper, we instead study the concrete security of cryptographic schemes and assumptions as a function of the expected running time of the adversary.
Expected-time complexity is a natural measure in its own right – e.g., it is very common in cryptanalysis, as it is often much easier to analyze. But it is also a useful technical tool – indeed, simulators and extractors are often expected time, sometimes inherently so [1]. To use these technical tools, we need assumptions to hold with respect to expected time.
The problem has been studied closely by Katz and Lindell [14], who also suggest expected-time adversaries as a natural model, which however also comes with several technical challenges. Either way, the resulting common wisdom is that assumptions which are true with respect to (non-uniform) worst-case polynomial time are true for expected polynomial time, and often more fine-grained statements are possible via Markov's inequality (see below). However, for concrete security, such generic arguments fail to give tight bounds.
Summary of contributions. This paper makes progress on two fronts.
First, as our main technical contribution, we introduce general tools to give tight concrete security bounds in information-theoretic settings (e.g., in the random-oracle or generic-group models) for expected-time adversaries. Our tools can easily translate many existing proofs from the worst-case to the expected-time regime. We derive for example tight bounds for finding collisions in a random oracle, for the PRF security of random oracles, and for computing discrete logarithms in the generic-group model. We also obtain bounds for the security of key-alternating ciphers against expected-time adversaries.
Second, we study a “Forking Lemma” to prove soundness of multi-round public-coin proofs and arguments (of knowledge) satisfying a generalized notion of special soundness, enabling witness extraction from a suitable tree of accepting interactions. In particular, we follow a blueprint by Bootle et al. [6], which has also been adopted by follow-up works [7, 8, 24]. In contrast to prior works, we provide a concrete analysis of the resulting expected-time witness extraction strategy, and also give a modular treatment of the techniques which may be of independent interest.
We showcase these tools by deriving concrete bounds for the soundness of Bulletproofs [7] in terms of the expected-time hardness of solving the discrete logarithm problem. Instantiating the bound with our generic-group model analysis will in particular illustrate the dependence of soundness on group parameters and on the complexity of the statement to be proved. We are unaware of any such result having been proved, despite the practical appeal of these protocols.
The remainder of this introduction provides a detailed overview of our results.
1.1 Information-Theoretic Bounds for Expected-Time Adversaries
Our first contribution is a framework to prove tight bounds with respect to expected-time adversaries. We focus on information-theoretic analyses, such as those in the random oracle [3] and the generic group [18, 22] models.



$\mu _T = \mathsf {E}[T]$

$$\begin{aligned} \varepsilon (\mu _T, N) \leqslant \mathsf {Pr}\left[ T > T^* \right] + \frac{(T^*)^2}{2N} \leqslant \frac{\mu _T}{T^*} + \frac{(T^*)^2}{2N} \leqslant 2 \cdot \root 3 \of {\frac{\mu _T^2}{2N}}\;, \end{aligned}$$







Both (1) and (2) show that $\mu _T = \varTheta (\sqrt{N})$ must hold to find a collision with probability one. However, exact probability bounds are important in the regime $\mu _T \ll \sqrt{N}$. For example, say we are asked to find a collision in at least one out of u independent random oracles, and the expected number of queries to each is $\mu _T$. Then, a hybrid argument bounds the probability by $u \cdot \varepsilon (\mu _T, N)$, making the difference between a square-root and a cube-root bound on $\varepsilon (\mu _T, N)$ important.
A Generic Approach for bad-flag analyses. We aim for a general approach to transform information-theoretic bounds for worst-case query complexity into bounds with respect to expected query complexity. If an existing analysis (with respect to worst-case complexity) follows a certain pattern, then we easily obtain an expected query complexity bound.












$$\begin{aligned} \mathsf {Pr}\left[ \mathsf {BAD}^{\mathcal {A}} \;|\; Q_1 = q \right] \leqslant \delta (q) \end{aligned}$$






$$\begin{aligned} \mathsf {Pr}\left[ \mathsf {BAD}^{\mathcal {A}} \right] \leqslant 5 \cdot \left( \frac{\varDelta \mathsf {E}[Q_0]^d}{N} \right) ^{1/d} \;, \end{aligned}$$










$$\begin{aligned} \mathsf {Pr}\left[ \mathsf {BAD}^{\mathcal {A}} \right] \leqslant \mathsf {Pr}\left[ \mathsf {BAD}^{\mathcal {B}} \right] + \mathsf {Pr}\left[ Q_0 > U \right] \;, \end{aligned}$$
since $\mathsf {Pr}\left[ \mathsf {BAD}^{\mathcal {A}} \wedge Q_0 \leqslant U \right] \leqslant \mathsf {Pr}\left[ \mathsf {BAD}^{\mathcal {B}} \right] $, and by Markov's inequality
$$\begin{aligned} \mathsf {Pr}\left[ Q_0 > U \right] \leqslant \frac{\mathsf {E}\left[ Q_0\right] }{U} = \varTheta \left( \root d \of {\varDelta \mathsf {E}\left[ Q_0\right] ^d/N}\right) \;, \end{aligned}$$
Therefore, the core of the proof is to show $\mathsf {Pr}[\mathsf {BAD}^\mathcal {B}]\leqslant 2\mathsf {E}[Q_0^\mathcal {A}]\root d \of {\varDelta /N}$. This will require using $\delta$-boundedness first, but a careful reader may observe that this will only upper bound the probability with respect to $Q_1$, and not $Q_0$. The bulk of the proof is then to switch between the two.
Examples. We apply the above framework to a few examples, to show its applicability. We show bounds on the hardness of discrete logarithms in the generic-group model [18, 22], and on the collision-resistance and PRF security of random oracles. In particular, our framework also works for notions which are not indistinguishability based, such as collision-resistance of a random oracle, by introducing a suitable world where it is hard to win the game.
The H-Coefficient method. Equivalent-until-bad analyses are not always the simplest way to prove security (despite the fact that in principle every analysis can be cast in this format, as shown in [19]). We also give a variant of the above approach tailored to proving security via a simpler version of the H-coefficient method [9, 20] which considers what is referred to as pointwise-proximity in [12]. This amounts to using the standard H-coefficient method without bad transcripts. (To the best of our knowledge, this simpler version of the method is due to Bernstein [5].) This allows us to obtain expected-time versions of security bounds for the PRF/PRP switching lemma and for key-alternating ciphers, the latter building on top of work by Hoang and Tessaro [12]. We defer details on this to the full version of this paper [13].
1.2 Forking Lemmas and Concrete Soundness
One motivation for studying expected-time adversaries is as a tool to prove bounds for worst-case complexity, rather than as a goal in itself. We expose here one such application in the context of proving soundness bounds for public-coin proofs/arguments (of knowledge). In particular, soundness/proof-of-knowledge proofs for several protocols (like [6–8, 24]) rely on generalizations of the Forking Lemma (originally proposed by Pointcheval and Stern [21] for three-round protocols) which adopt expected-time witness extraction strategies. These have only been analyzed in an asymptotic sense, and our goal is to give a concrete-security treatment. We propose here a modular treatment of these techniques, and instantiate our framework to provide concrete bounds on the soundness of Bulletproofs [7], a succinct proof system which has enjoyed wide popularity.
Forking Lemmas. Pointcheval and Stern's original "Forking Lemma" [21] deals with $\varSigma$-protocols that satisfy special soundness: these are three-round protocols, where a transcript takes the form (a, c, d), with c being the verifier's single random challenge. Here, given common input x, the prover proves knowledge to the verifier of a witness w for a relation R. The proof of knowledge property is proved by giving an extractor which produces a witness for x given (black-box) access to a prover: if the prover succeeds with probability $\varepsilon$, then the extractor succeeds with probability (roughly) $\varepsilon ^2$. Concretely, the extractor simulates an execution of the prover with a random challenge c, which results in a transcript (a, c, d), and then rewinds the prover to just before obtaining c, and feeds it a different challenge $c'$ to obtain a transcript $(a, c', d')$. If both transcripts are accepting, and $c \ne c'$, a witness can be extracted via special soundness. Bellare and Neven [2] give alternative Forking Lemmas where the extractor's success probability approaches $\varepsilon$, at the cost of a larger running time.
Expected-time extraction. It is natural to expect that the success probability of the extractor above degrades exponentially in the number of required accepting transcripts. Crucially, however, one can make the Forking Lemma tight with respect to probability if we relax the extractor to have bounded expected running time. Now, the extractor runs the prover once with a random challenge c and, if it generates a valid transcript (a, c, d), we rewind the prover to before receiving the challenge c, and keep re-running it from there with fresh challenges until we obtain a second valid transcript $(a, c', d')$ for $c' \ne c$. The expected running time is only twice that of the prover.
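As a sanity check, this rewinding strategy can be simulated. The sketch below is our own toy rendering (the prover model, names, and parameters are assumptions, not the paper's formal definitions); it confirms empirically that the expected number of prover executions is close to 2, independently of the prover's success probability:

```python
import random

def extract(prover, challenges, rng):
    """Expected-time extractor: one run with a fresh challenge; on success,
    rewind and re-run with fresh challenges until a second valid transcript."""
    c1 = rng.choice(challenges)
    runs = 1
    t1 = prover(c1)
    if t1 is None:           # first run rejected: give up immediately
        return None, runs
    while True:              # entered w.p. eps; expected 1/eps further runs
        c2 = rng.choice(challenges)
        runs += 1
        t2 = prover(c2)
        if t2 is not None:
            # special soundness needs two accepting transcripts with
            # distinct challenges; extraction fails here if c2 == c1
            return ((t1, t2) if c2 != c1 else None), runs

# Toy prover: accepts exactly the challenges in S (success probability 1/8).
S = set(range(8))
prover = lambda c: ("a", c, "d") if c in S else None

rng = random.Random(1)
total_runs = 0
for _ in range(2000):
    out, runs = extract(prover, list(range(64)), rng)
    total_runs += runs
print(total_runs / 2000)  # empirically close to 2, i.e., twice the prover's time
```

The expected run count is $1 + \varepsilon \cdot (1/\varepsilon) = 2$: the rewind loop is entered with probability $\varepsilon$ and then takes $1/\varepsilon$ attempts in expectation.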
A general Forking Lemma. An extension of this idea underlies the analysis of recent succinct public-coin multi-round interactive arguments of knowledge [6–8, 24], following a workflow introduced first by Bootle et al. (BCCGP) [6] which extracts a witness from a tree of multi-round executions obtained by clever rewinding of the prover. In particular, since the number of generated accepted interactions is large (i.e., exponential in the number of rounds), the usage of an expected-time strategy is essential to extract with good enough probability.
These works in fact prove the stronger property of witness-extended emulation [11, 16]. This means that, with black-box access to a prover, an expected-time emulator (1) generates a transcript with the same distribution as in an interaction between that prover and the verifier, and (2) if this transcript is accepting, then a valid witness is produced along with it. In the case of arguments, it is possible that (2) fails, but this would imply breaking an underlying assumption.
The BCCGP framework was refined in follow-up works [7, 8, 24], but these remain largely asymptotic. We give here a clean and modular treatment of the BCCGP blueprint, which makes it amenable to a concrete security treatment. This will in particular require using our tools from the first part of the paper to analyze the probability that we generate a well-formed tree of transcripts from which a witness can be generated. We believe this to be of independent interest.
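The tree-based rewinding can be sketched as follows (our own toy rendering of the BCCGP-style blueprint, not the paper's actual extractor): at each node, run the prover once with a fresh challenge; if it fails, abort, and otherwise keep rewinding with fresh challenges until the required number of accepting subtrees has been collected.

```python
import random

def collect_tree(accepts, path, arity, n_chal, rng):
    """Collect an (n_1, ..., n_r)-tree of accepting interactions by rewinding.
    `accepts(path)` says whether a full challenge path yields an accepting
    transcript; `arity` lists how many child subtrees each level needs."""
    if not arity:
        return [] if accepts(path) else None      # leaf: one full transcript
    children, first = [], True
    while len(children) < arity[0]:
        c = rng.randrange(n_chal)
        sub = collect_tree(accepts, path + [c], arity[1:], n_chal, rng)
        if sub is not None:
            children.append((c, sub))
        elif first:
            return None  # very first attempt failed: abort, keeping expected time bounded
        first = False
    return children

def count_leaves(tree):
    return 1 if tree == [] else sum(count_leaves(s) for _, s in tree)

# With a prover that always accepts, a (2, 3)-tree has 2 * 3 = 6 leaves.
tree = collect_tree(lambda path: True, [], (2, 3), n_chal=2**16, rng=random.Random(0))
print(count_leaves(tree))  # 6
```

The abort-on-first-failure rule is what keeps the expected running time under control: expensive subtree collection only begins at a node whose first challenge already produced an accepting run.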
In the full version of this paper [13], we compare this expected-time forking lemma to one with strict running-time guarantees and confirm that the expected-time approach achieves a clear benefit in terms of tightness of the reduction.
Application to Bulletproofs. Finally, we apply the above framework to obtain a bound on the concrete soundness of public-coin interactive argument systems, and focus on Bulletproofs [7]. We obtain a bound in terms of the expected-time hardness of the discrete logarithm problem, and we combine this with our generic-group analysis to get a bound on the soundness in the generic-group model. The result relies on a tight reduction of finding non-trivial discrete log relations to the plain discrete log problem, which we give in Lemma 3 and which may be of independent interest.













This bound is interesting because it highlights the dependence of the soundness probability on the group size and on M. It in fact shows that for typical instantiations, where
, the guaranteed security level is fairly low for modest-sized circuits (say with
). It is a good question whether this bound can be made tighter, in particular with respect to its dependence on M.
We also note that for specific instance generators G our tools may be helpful to estimate .
2 Preliminaries
Let and
. For
, let
. For
we adopt the conventions that
and
. Equivalence mod p is denoted
.
We let denote the set of all permutations on set S and
denote the set of all functions from S to
. Sampling x uniformly from the set S is denoted
. The notation
means that
and
, i.e.,
and
partition S. We let
denote the set of finite-length bitstrings and
denote the set of infinite-length bitstrings.
We let denote the execution of
on input
and coins
with access to oracle(s)
, producing output y. When c is chosen uniformly we write
. For a stateful algorithm
with state s we use
as shorthand for the expression
. When some of an algorithm’s output is not going to be used we will write
in place of giving it a variable name.
We use pseudocode games, inspired by the code-based game framework of Bellare and Rogaway [4]. See Fig. 1 for some example games. If $\mathsf {G}$ is a game, then $\mathsf {Pr}[\mathsf {G}]$ denotes the probability that it outputs $\mathsf {true}$. We use $\wedge $, $\vee $, $\Leftrightarrow $, and $\lnot $ for the logical operators "and", "or", "iff", and "not".
Running-time conventions. The most commonly used notion for the running time of an algorithm is worst-case. For this, one first fixes a computational model with an associated notion of computational steps. Then an algorithm has worst-case running time t if, for all choices of its inputs and coins c, it performs at most t computation steps in the execution, no matter how its oracles respond to any oracle queries it makes.
In this paper we are interested in proving bounds that instead depend on the expected number of computation steps that the algorithm performs. There may be randomness in how its inputs and the responses to its oracle queries are chosen (in addition to the random selection of c).
There is more variation in how expected running time may be defined. We will provide our bounds in terms of the expected running time of adversaries interacting with the “real” world that they expect to interact with. Such a notion of expected runtime is brittle because the expected runtime of the adversary may vary greatly when executing in some other world; however, this notion is the strongest for the purposes of our results because it will guarantee the same bounds for notions of expected running time which restrict the allowed adversaries more. See [10, 15] for interesting discussion of various ways to define expected polynomial time.
For many of the results of this paper, rather than directly measuring the runtime of the adversary we will look at the (worst-case or expected) number of oracle queries that it makes. The number of oracle queries can, of course, be upper bounded by the number of computational steps.
Useful lemmas. We will make use of Markov’s inequality and the Schwartz-Zippel Lemma, which we reproduce here.

$$\begin{aligned} \mathsf {Pr}[X > c] \leqslant \mathsf {Pr}[X \geqslant c] \leqslant \mathsf {E}[X]/c. \end{aligned}$$

$p\in \mathbb {F}[x_1,x_2,\dots x_n]$

$$\begin{aligned} \mathsf {Pr}[p(r_1,\dots ,r_n)=0]\leqslant d/|\mathbb {F}| \end{aligned}$$
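Both lemmas are easy to sanity-check numerically. The toy distribution and polynomial below are our own choices, used only for illustration:

```python
from fractions import Fraction
from itertools import product

# Markov: for a nonnegative X, Pr[X >= c] <= E[X]/c (checked exactly).
dist = {0: Fraction(1, 2), 1: Fraction(1, 4), 4: Fraction(1, 4)}
mean = sum(x * p for x, p in dist.items())       # E[X] = 5/4
for c in (1, 2, 3, 4):
    tail = sum(p for x, p in dist.items() if x >= c)
    assert tail <= mean / c

# Schwartz-Zippel: p(x, y) = x*y - 1 over F_13 has total degree d = 2, so
# the probability it vanishes at a uniform point is at most 2/13.
F = 13
zeros = sum(1 for x, y in product(range(F), repeat=2) if (x * y - 1) % F == 0)
assert Fraction(zeros, F * F) <= Fraction(2, F)  # 12/169 <= 26/169
```

Exhaustive enumeration over the small field makes both checks exact rather than statistical.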



Left: Game defining discrete log security of group $\mathbb {G}$. Middle: Game defining discrete log relation security of group $\mathbb {G}$. Right: Reduction adversary for Lemma 3.
Discrete Logarithm Assumptions. Let $\mathbb {G}$ be a cyclic group of prime order p with identity $1_{\mathbb {G}}$, and let $\mathbb {G}^*$ be its set of generators. Let $g_1, \dots , g_n \in \mathbb {G}^*$ and $a_1, \dots , a_n \in \mathbb {Z}_p$. If $\prod _{i=1}^{n} g_i^{a_i} = 1_{\mathbb {G}}$ and at least one of the $a_i$ are non-zero, this is said to be a non-trivial discrete log relation. It is believed to be hard to find non-trivial discrete log relations in cryptographic groups (when the $g_i$ are chosen at random). We refer to computing $\prod _{i=1}^{n} g_i^{a_i}$ as a multi-exponentiation of size n.
Discrete log relation security is defined by the game in the middle of Fig. 1. In it, the adversary is given a vector of random generators $(g_1, \dots , g_n)$ and attempts to find a non-trivial discrete log relation; we define the corresponding advantage of an adversary as the probability that it wins this game. Normal discrete log security is defined by the game in the left panel of Fig. 1. In it, the adversary attempts to find the discrete log of a random group element with respect to a generator g; the advantage is again the probability of winning the game.
It is well known that discrete log relation security is asymptotically equivalent to discrete log security. The following lemma makes careful use of self-reducibility techniques to give a concrete bound showing that discrete log relation security is tightly implied by discrete log security.









The proof of this lemma is deferred to the full version of the paper [13].
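The self-reducibility idea behind such a reduction can be illustrated with a toy sketch (our own code and parameters; the actual reduction and its bookkeeping are in the full version): embed a discrete log instance $X = g^x$ into every element of the relation challenge as $g_i = g^{r_i} X^{s_i}$, so a non-trivial relation $\prod _i g_i^{a_i} = 1$ gives $\sum _i a_i (r_i + s_i x) \equiv 0 \pmod p$, from which x can be solved whenever $\sum _i a_i s_i \not \equiv 0$:

```python
import random

p, q, g = 23, 11, 2   # toy group: the order-11 subgroup of Z_23^* generated by 2

def dlog_brute(h):
    """Brute-force discrete log; fine for the toy group. It stands in for
    the inner workings of an arbitrary relation-finding adversary."""
    y = 1
    for i in range(q):
        if y == h % p:
            return i
        y = y * g % p
    raise ValueError("not in the subgroup")

def relation_adversary(gs):
    # Outputs (d_2, -d_1, 0, ..., 0), a discrete log relation since
    # g_1^{d_2} * g_2^{-d_1} = g^{d_1 d_2 - d_2 d_1} = 1.
    d = [dlog_brute(h) for h in gs]
    a = [0] * len(gs)
    a[0], a[1] = d[1] % q, (-d[0]) % q
    return a

def solve_dl(X, n, rng):
    """Reduction sketch: randomize the instance into g_i = g^{r_i} X^{s_i}."""
    r = [rng.randrange(q) for _ in range(n)]
    s = [rng.randrange(q) for _ in range(n)]
    gs = [pow(g, ri, p) * pow(X, si, p) % p for ri, si in zip(r, s)]
    a = relation_adversary(gs)
    if not any(ai % q for ai in a):
        return None                    # adversary produced a trivial relation
    den = sum(ai * si for ai, si in zip(a, s)) % q
    if den == 0:
        return None                    # happens with probability ~1/q
    num = -sum(ai * ri for ai, ri in zip(a, r)) % q
    return num * pow(den, -1, q) % q   # x = -(sum a_i r_i) / (sum a_i s_i)

x = 7
X = pow(g, x, p)
answers = [solve_dl(X, 4, random.Random(seed)) for seed in range(20)]
assert all(ans in (x, None) for ans in answers) and x in answers
```

Because the $s_i$ are information-theoretically hidden from the adversary, the failure event $\sum _i a_i s_i \equiv 0$ occurs with probability about 1/q, which is what makes the reduction tight.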
3 Bad Flag Analysis for Expected-Time Adversaries
In this section we show how to (somewhat) generically extend the standard techniques for analysis of “bad” flags from worst-case adversaries to expected-time adversaries. Such analysis is a fundamental tool for cryptographic proofs and has been formalized in various works [4, 17, 23]. Our results are tailored for the setting where the analysis of the bad flag is information theoretic (e.g., applications in ideal models), rather than reliant on computational assumptions.

Identical-until-bad games defined from game specification .
3.1 Notation and Experiments for Identical-Until-Bad Games
Identical-until-bad games. Consider Fig. 2 which defines a pair of games and
from a game specification
. Here G and
are stateful randomized algorithms. At the beginning of the game, coins
,
, and
are sampled uniformly at random3. The first two of these are used by G and
while the last is used by
4. The counter t is initialized to 0, the flag
is set to
, and states s and
are initialized for use by G and
.
During the execution of the game, the adversary repeatedly makes queries to the oracle
. The variable t counts how many queries
makes. As long as
is still
(so
is
), for each query made by
the algorithm
will be given this query to determine if
should be set to
. When
, the behavior of
does not depend on whether
is set because the output of the oracle is always determined by running
. When
, the output of the oracle is defined in the same way up until the point that
is set to
. Once that occurs, the output is instead determined by running
. Because these two games are identical except in the behavior of the code
which is only executed once
, they are “identical-until-bad”.
In this section, the goal of the adversary is to cause $\mathsf {bad}$ to be set to $\mathsf {true}$. Bounding the probability that the adversary succeeds in this can be used to analyze security notions in two different ways. For indistinguishability-based security notions (e.g., PRG or PRF security), the two games would correspond to the two worlds the adversary is attempting to distinguish between. For other security notions (e.g., collision resistance or discrete log security), we think of one of the games as corresponding to the game the adversary is trying to win and the other as corresponding to a related "ideal" world in which the adversary's success probability can easily be bounded. In either case, the fundamental lemma of game playing [4] can be used to bound the advantage of the adversary using a bound on the probability that $\mathsf {bad}$ is set.
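The identical-until-bad setup can be illustrated with a small executable toy (our own construction, not the paper's formal game specification): the b = 0 oracle returns fresh uniform values, the b = 1 oracle resamples to avoid collisions, and the two runs share coins and diverge only once bad is set. The fundamental lemma then says the distinguishing advantage is at most Pr[bad]:

```python
import random

N = 2**8

def run_game(b, n_queries, coins):
    """Identical-until-bad pair: both games consume the same coin sequence
    and behave identically until a repeated output occurs (bad is set)."""
    rng = random.Random(coins)
    seen, bad = [], False
    for _ in range(n_queries):
        y = rng.randrange(N)            # shared coins
        if y in seen:
            bad = True                  # the games diverge only from here on
            if b == 1:
                while y in seen:        # b = 1: resample so no collision ever
                    y = rng.randrange(N)
        seen.append(y)
    win = len(set(seen)) < len(seen)    # "win" = a collision was observed
    return win, bad

runs0 = [run_game(0, 20, coins) for coins in range(1000)]
runs1 = [run_game(1, 20, coins) for coins in range(1000)]
adv = abs(sum(w for w, _ in runs0) - sum(w for w, _ in runs1)) / 1000
pr_bad = sum(b for _, b in runs0) / 1000
assert adv <= pr_bad                    # fundamental lemma, empirically
```

Here the b = 1 game can never be won, so the advantage equals the fraction of coin choices that set bad in the b = 0 game.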
A combined experiment. For our coming analysis it will be useful to relate executions of the two games to each other. For this we can think of a single combined experiment in which we sample all three sets of coins once and then run both games separately using these coins.
For $b \in \{0,1\}$, we let $Q_b$ be a random variable denoting how many oracle queries $\mathcal {A}$ makes in the execution of game b during this experiment. We let $\mathsf {BAD}_b^{\mathcal {A}}$ denote the event that $\mathsf {bad}$ is set to $\mathsf {true}$ in the execution of game b. Note that $\mathsf {BAD}_0^{\mathcal {A}}$ will occur if and only if $\mathsf {BAD}_1^{\mathcal {A}}$ occurs, because the behavior of both games is identical up until the first time that $\mathsf {bad}$ is set, and the code setting $\mathsf {bad}$ is never again executed once $\mathsf {bad}$ is $\mathsf {true}$. Hence we can simplify notation by defining $\mathsf {BAD}^{\mathcal {A}}$ to be identical to the event $\mathsf {BAD}_0^{\mathcal {A}}$, while keeping in mind that we can equivalently think of this event as occurring in the execution of either game. We additionally consider the event $\mathsf {BAD}^{\mathcal {A}}_{\leqslant j}$ that $\mathsf {bad}$ is set by one of the first j queries the adversary makes, and the event $\mathsf {BAD}^{\mathcal {A}}_{>j}$ that $\mathsf {bad}$ is set after the j-th query the adversary makes. Clearly, $\mathsf {Pr}[\mathsf {BAD}^{\mathcal {A}}] = \mathsf {Pr}[\mathsf {BAD}^{\mathcal {A}}_{\leqslant j}] + \mathsf {Pr}[\mathsf {BAD}^{\mathcal {A}}_{>j}]$. Again we can equivalently think of these events as occurring in either game. When the adversary is clear from context we may choose to omit it from the superscript in our notation.
The fact that both games behave identically until $\mathsf {bad}$ is set allows us to make several nice observations. If $\mathsf {BAD}$ does not hold, then $Q_0 = Q_1$ must hold. If $\mathsf {BAD}_{>t}$ holds for some t, then both $Q_0$ and $Q_1$ must be at least t. One implication of this is that if $Q_1 = q$ holds for some q, then $\mathsf {BAD}^{\mathcal {A}}$ is equivalent to $\mathsf {BAD}^{\mathcal {A}}_{\leqslant q}$.
Defining our events and random variables in this single experiment will later allow us to consider conditional expectations of the form $\mathsf {E}[(Q_0)^d \mid Q_1 = q]$ for some $d \geqslant 1$. In words, that is the expected value of $Q_0$ raised to the d-th power, conditioned on the coins having been chosen so that $Q_1 = q$ held. Since $Q_0$ and $Q_1$ can only differ if $\mathsf {BAD}$ occurs, we will be able to use $\delta $-boundedness to bound how far $\mathsf {E}[(Q_1)^d]$ can be from $\mathsf {E}[(Q_0)^d]$.



$\mathsf {Pr}[\mathsf {BAD}^{\mathcal {A}}] \leqslant \delta (q_{\mathcal {A}})$





$$\begin{aligned} \mathsf {Pr}[\mathsf {BAD}^{\mathcal {A}} | Q_1 = q] \leqslant \delta (q). \end{aligned}$$



$\mathsf {Pr}[\mathsf {BAD}^{\mathcal {A}} | Q_1=q] = \mathsf {Pr}[\mathsf {BAD}^{\mathcal {A}}_{\leqslant q} | Q_1=q]$
We will, in particular, be interested in the case that $\delta (q) = \varDelta \cdot q^d / N$ for some $\varDelta , d \geqslant 1$. We think of $\varDelta $ and d as "small" and of N as "large". The main result of this section bounds the probability that an adversary sets $\mathsf {bad}$ by $O(\root d \of {\delta (\mathsf {E}[Q_b])})$ for either b if the game is $\delta $-bounded for such a $\delta $.
While $\delta $-boundedness may seem to be a strange condition, we show in Sect. 3.3 that the existing techniques for proving results of the form $\mathsf {Pr}[\mathsf {BAD}^{\mathcal {A}}] \leqslant \delta (q_{\mathcal {A}})$ for $\mathcal {A}$ making at most $q_{\mathcal {A}}$ queries can often be easily extended to show the $\delta $-boundedness of a game. The examples we consider are the collision-resistance and PRF security of a random oracle and the security of discrete log in the generic group model. In particular, these examples all possess a common form. First, we note that the output of the oracle in the b = 1 game is independent of one portion of the coins. Consequently, the view of $\mathcal {A}$ when b = 1 is independent of those coins, and hence $Q_1$ is independent of them as well. To analyze $\mathsf {Pr}[\mathsf {BAD} \mid Q_1 = q]$ we can then think of the remaining coins and the adversary's coins as being fixed (fixing the transcript of interaction between $\mathcal {A}$ and its oracle in the b = 1 game) and argue that for any such length q interaction the probability of $\mathsf {BAD}$ is bounded by $\delta (q)$ over a random choice of the coins on which the view does not depend.
We note that this general form seems to typically be implicit in the existing analysis of flags for the statistical problems one comes across in ideal model analysis, but would not extend readily to examples where the probability of the bad flag being set is reduced to the probability of an adversary breaking some computational assumption.
3.2 Expected-Time Bound from $\delta $-boundedness
We can now state our result lifting $\delta $-boundedness to a bound on the probability that an adversary sets $\mathsf {bad}$, given only its expected number of oracle queries.






$$\begin{aligned} \mathsf {Pr}[\mathsf {BAD}^\mathcal {A}] \leqslant 5\root d \of {\frac{\varDelta \cdot \mathsf {E}[Q^\mathcal {A}_0]^d}{N}} = 5\root d \of {\delta \left( \mathsf {E}[Q^\mathcal {A}_0] \right) }. \end{aligned}$$


$$\begin{aligned} \mathsf {Pr}[\mathsf {BAD}^\mathcal {A}] \leqslant 3\root d \of {\frac{\varDelta \cdot \mathsf {E}[Q^\mathcal {A}_1]^d}{N}} = 3\root d \of {\delta \left( \mathsf {E}[Q^\mathcal {A}_1] \right) }. \end{aligned}$$
We provide bounds based on the expected runtime in either of the two games since they are not necessarily the same. Typically, one of the two games will correspond to a "real" world and it will be natural to desire a bound in terms of the expected runtime in that game. The proof of the bound in terms of $\mathsf {E}[Q^\mathcal {A}_0]$ is slightly more complex and is given in this section. The proof of the bound in terms of $\mathsf {E}[Q^\mathcal {A}_1]$ is simpler and deferred to the full version of this paper [13]. In the full version we also show via a simple attack that the d-th root in these bounds is necessary.










$O\left( \root d \of {\delta \left( \mathsf {E}[Q^\mathcal {A}_0] \right) }\right) $
$$\begin{aligned} \mathsf {Pr}[\mathsf {BAD}^\mathcal {A}]&= \mathsf {Pr}[\mathsf {BAD}^{\mathcal {A}}_{\leqslant U}]+\mathsf {Pr}[\mathsf {BAD}^{\mathcal {A}}_{>U}] \\ &= \mathsf {Pr}[\mathsf {BAD}^{\mathcal {B}}_{\leqslant U}]+\mathsf {Pr}[\mathsf {BAD}^{\mathcal {A}}_{>U}] \\ &\leqslant \mathsf {Pr}[\mathsf {BAD}^\mathcal {B}] + \mathsf {Pr}\left[ Q_0^\mathcal {A}> U \right] \\ &\leqslant \mathsf {Pr}[\mathsf {BAD}^\mathcal {B}] + \mathsf {E}[Q_0^\mathcal {A}]/U \\ &\leqslant \mathsf {Pr}[\mathsf {BAD}^\mathcal {B}] + 3\mathsf {E}[Q_0^\mathcal {A}]\root d \of {\varDelta /N}. \end{aligned}$$











The remainder of the proof establishes $\mathsf {Pr}[\mathsf {BAD}^\mathcal {B}]\leqslant 2\mathsf {E}[Q_0^\mathcal {A}]\root d \of {\varDelta /N}$. Since $\mathcal {B}$ simply runs $\mathcal {A}$ and halts it early, $\mathsf {E}[Q_0^\mathcal {B}]$ is at most $\mathsf {E}[Q_0^\mathcal {A}]$. We first bound $\mathsf {Pr}[\mathsf {BAD}^\mathcal {B}]$ in terms of $\mathsf {E}[(Q_1^\mathcal {B})^d]$, then relate $\mathsf {E}[(Q_1^\mathcal {B})^d]$ to $\mathsf {E}[(Q_0^\mathcal {B})^d]$, and finally use $\mathsf {E}[Q_0^\mathcal {B}]$ to obtain the claimed bound on $\mathsf {Pr}[\mathsf {BAD}^{\mathcal {B}}]$.



$$\begin{aligned} \mathsf {Pr}[\mathsf {BAD}^{\mathcal {B}}]&= \sum _{q= 1}^U \mathsf {Pr}[\mathsf {BAD}^{\mathcal {B}} | Q^{\mathcal {B}}_1=q] \mathsf {Pr}[Q^{\mathcal {B}}_1=q] \leqslant \sum _{q= 1}^{U} (\varDelta \cdot q^d/N)\mathsf {Pr}[Q^{\mathcal {B}}_1=q]\\&= \varDelta /N \sum _{q= 1}^{U}q^d\mathsf {Pr}[Q^{\mathcal {B}}_1=q] = \varDelta \mathsf {E}[(Q^{\mathcal {B}}_1)^d]/N. \end{aligned}$$
To relate $\mathsf {E}[(Q^{\mathcal {B}}_1)^d]$ to $\mathsf {E}[(Q^{\mathcal {B}}_0)^d]$, we first lower bound the conditional expectation $\mathsf {E}[(Q^{\mathcal {B}}_0)^d | Q_1^{\mathcal {B}}=q]$.










$$\begin{aligned} \mathsf {E}[(Q_0^{\mathcal {B}})^d | Q_1^{\mathcal {B}}=q]&\geqslant \mathsf {E}[R_0^d | Q^{\mathcal {B}}_1=q]\\&= q^d \mathsf {Pr}[\lnot {\mathsf {BAD}}^{\mathcal {B}} | Q^{\mathcal {B}}_1=q] + 0^d \mathsf {Pr}[\mathsf {BAD}^{\mathcal {B}} | Q^{\mathcal {B}}_1=q]\\&= q^d (1-\mathsf {Pr}[\mathsf {BAD}^{\mathcal {B}} | Q^{\mathcal {B}}_1=q])\\&\geqslant q^d (1-\delta (q)) \geqslant q^d(1-u). \end{aligned}$$


We can then bound $\mathsf {E}[(Q_1^{\mathcal {B}})^d]$ using this lower bound on $\mathsf {E}[(Q_0^{\mathcal {B}})^d | Q^{\mathcal {B}}_1=q]$:

$$\begin{aligned} \mathsf {E}[(Q_1^{\mathcal {B}})^d]&=\sum _{q= 1}^U q^d\cdot \mathsf {Pr}[Q_1^{\mathcal {B}}=q]\\&=\sum _{q=1}^U\mathsf {E}[(Q_0^{\mathcal {B}})^d | Q_1^{\mathcal {B}}=q]\cdot \frac{q^d}{\mathsf {E}[(Q_0^{\mathcal {B}})^d | Q_1^{\mathcal {B}}=q]}\cdot \mathsf {Pr}[Q_1^{\mathcal {B}}=q]\\&\leqslant \sum _{q= 1}^U\mathsf {E}[(Q_0^{\mathcal {B}})^d | Q_1^{\mathcal {B}}=q]\cdot \frac{q^d}{ q^d(1-u)}\cdot \mathsf {Pr}[Q_1^{\mathcal {B}}=q]\\&=(1-u)^{-1}\mathsf {E}[(Q_0^{\mathcal {B}})^d] \end{aligned}$$
This gives $\mathsf {Pr}[\mathsf {BAD}^\mathcal {B}]\leqslant (1-u)^{-1}\mathsf {E}[(Q^\mathcal {B}_0)^d]\cdot \varDelta /N$, so it remains to show that this quantity is at most $2\mathsf {E}[Q_0^\mathcal {B}]\root d \of {\varDelta /N}$.


$$\begin{aligned} (1-u)^{-1}\mathsf {E}[(Q^\mathcal {B}_0)^d]\cdot \varDelta /N \leqslant (1-u)^{-1}\mathsf {E}[Q^\mathcal {B}_0]\cdot U^{d-1}\cdot \varDelta /N. \end{aligned}$$


$$\begin{aligned} (1-u)^{-1}\mathsf {E}[Q^\mathcal {B}_0]\cdot U^{d-1}\cdot \varDelta /N \leqslant (1-u)^{-1} (u^{(d-1)/d})\mathsf {E}[Q^\mathcal {B}_0]\root d \of {\varDelta /N} . \end{aligned}$$


Finally, since $\mathsf {E}[Q^\mathcal {B}_0]\leqslant \mathsf {E}[Q^\mathcal {A}_0]$, combining the above gives the claimed bound on $\mathsf {Pr}[\mathsf {BAD}^{\mathcal {A}}]$.

3.3 Example Applications of Bad Flag Analysis
In this section we walk through some basic examples to show how a bound of the form $\mathsf {Pr}[\mathsf {BAD} \mid Q_1 = q] \leqslant \delta (q)$ can be proven using essentially the same techniques as typical bad flag analysis for worst-case runtime, allowing Theorem 1 to be applied. All of our examples follow the basic structure discussed earlier in this section. We write the analysis in terms of two games which are identical-until-bad and parameterized by a bit b. In the b = 1 game, the output of its oracles will depend only on one portion of the games' coins, while in the b = 0 case the output will depend on both that portion and an independent portion of the coins. Then we think of fixing the first portion of the coins and the coins used by the adversary, which together fix $Q_1$ (the number of queries $\mathcal {A}$ would make in the b = 1 case), and argue a bound on the probability that $\mathsf {bad}$ is set over a random choice of the remaining coins.
We write the necessary games in convenient pseudocode and leave the mapping to a game specification to apply Theorem 1 implicit. We will abuse notation and use the name of our pseudocode game to refer to the corresponding game specification.
$h:\{0,1\}^*\rightarrow [N]$






Game capturing collision-resistance of a random oracle (when ).

Games capturing PRF security of a random oracle.
In it, we have written $\mathsf {H}^{\mathsf {cr}}_0$ in a somewhat atypical way to allow comparison to $\mathsf {H}^{\mathsf {cr}}_1$, with which it is identical-until-bad. The coins used by these games determine a permutation $\pi $ sampled at the beginning of the game and a value X chosen at random from [N] during each oracle query. We think of the former as the coins shared by both games and the latter as the independent coins used only when b = 0. Ignoring repeat queries, when b = 1 the output of the oracle is simply $\pi (1), \pi (2), \dots $ in order. Thus clearly, $\mathsf {Pr}[{ \textsf {H}}^{\mathsf {cr}}_1(\mathcal {A})] = 0$, since there are no collisions among the outputs of a permutation. In $\mathsf {H}^{\mathsf {cr}}_0$ the variable X modifies the output of the oracle to provide colliding outputs with the correct distribution.
$$\mathsf{Pr}[\mathsf{H}^{\mathsf{cr}}_0(\mathcal{A})] \leqslant \mathsf{Pr}[\mathsf{H}^{\mathsf{cr}}_0(\mathcal{A})\text{ sets }\mathsf{bad}] + \mathsf{Pr}[\mathsf{H}^{\mathsf{cr}}_1(\mathcal{A})] = \mathsf{Pr}[\mathsf{H}^{\mathsf{cr}}_0(\mathcal{A})\text{ sets }\mathsf{bad}].$$












$$\mathsf{Pr}[\mathsf{BAD}\mid Q_1 = q] \leqslant q(2q-1)/N.$$



Since $q(2q-1)\leqslant 2q^2$, applying Theorem 1 with $d=2$ gives
$$\mathsf{Pr}[\mathsf{H}^{\mathsf{cr}}_0(\mathcal{A})] \leqslant 5\sqrt{\frac{2\,\mathsf{E}[Q^\mathcal{A}_0]^2}{N}}.$$
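As an illustrative sanity check (our own sketch, not part of the original analysis), the following code simulates the collision game with an adversary whose query count is geometrically distributed, so that only its *expected* number of queries is controlled, and compares the empirical collision probability against the bound above. All function names are hypothetical.

```python
import math
import random

def collision_game(num_queries, N, rng):
    """One run of the 'real' game: the adversary queries a lazily sampled
    random oracle with range [N] and wins if any two outputs collide."""
    seen = set()
    for _ in range(num_queries):
        y = rng.randrange(N)
        if y in seen:
            return True
        seen.add(y)
    return False

def estimate_collision_probability(N, mean_q, trials=1500, seed=1):
    """Estimate Pr[collision] for an expected-time adversary whose number of
    queries is geometric with mean mean_q, plus the 5*sqrt(2 E[Q]^2 / N) bound."""
    rng = random.Random(seed)
    wins, total_queries = 0, 0
    for _ in range(trials):
        q = 1
        while rng.random() < 1 - 1 / mean_q:  # geometric with mean mean_q
            q += 1
        total_queries += q
        wins += collision_game(q, N, rng)
    expected_q = total_queries / trials
    bound = 5 * math.sqrt(2 * expected_q ** 2 / N)
    return wins / trials, bound
```

For large N and modest expected query counts the empirical probability sits far below the bound, reflecting that the bound pays a square root for handling arbitrary expected-time query distributions.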
[Figure: games capturing PRF security of a random oracle with domain $[N]\times\mathcal{D}$.]
The coins of the game are random tables T and F as well as a random key K. We think of the key as $\sigma_0$ and the tables as $\sigma_1$. Because we have written the games so that the consistency check occurs in $\mathsf{H}^{\mathsf{prf}}_0$, we can clearly see that the outputs of the oracles in $\mathsf{H}^{\mathsf{prf}}_1$ are independent of K.
$$\mathsf{Pr}[\mathsf{H}^{\mathsf{prf}}_0(\mathcal{A})]-\mathsf{Pr}[\mathsf{H}^{\mathsf{prf}}_1(\mathcal{A})] \leqslant \mathsf{Pr}[\mathsf{H}^{\mathsf{prf}}_0(\mathcal{A})\text{ sets }\mathsf{bad}].$$




Fixing $\sigma_1$ and the adversary's coins, each query sets $\mathsf{bad}$ with probability at most 1/N over the random choice of K, so
$$\mathsf{Pr}[\mathsf{BAD}\mid Q_1 = q] \leqslant q/N.$$



With $\delta(q)=q/N$ linear in q, Theorem 1 (with $d=1$) gives
$$\mathsf{Pr}[\mathsf{H}^{\mathsf{prf}}_0(\mathcal{A})]-\mathsf{Pr}[\mathsf{H}^{\mathsf{prf}}_1(\mathcal{A})] \leqslant 5\cdot \mathsf{E}[Q^\mathcal{A}_0]/N.$$
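The q/N key-guessing analysis can be checked the same way. In this sketch (again our own illustration, with a hypothetical geometric query distribution), bad is set when some query guesses the key K:

```python
import random

def estimate_prf_bad(N, mean_q, trials=4000, seed=2):
    """Estimate Pr[bad] for the PRF example: bad is set when one of the
    adversary's primitive queries guesses the random key K in [N].
    Returns (empirical probability, 5*E[Q]/N bound)."""
    rng = random.Random(seed)
    bad, total_queries = 0, 0
    for _ in range(trials):
        K = rng.randrange(N)
        q = 1
        while rng.random() < 1 - 1 / mean_q:  # geometric with mean mean_q
            q += 1
        total_queries += q
        # each query's key guess hits K with probability 1/N
        if any(rng.randrange(N) == K for _ in range(q)):
            bad += 1
    expected_q = total_queries / trials
    return bad / trials, 5 * expected_q / N
```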






At the beginning of the game, polynomials $p_{-1}=0$, $p_0=1$, and $p_1=\mathrm{X}$ are defined. These are polynomials in the symbolic variable X, defined over $\mathbb{Z}_{|\mathbb{G}|}$. Then a random x is sampled, and the goal of the adversary is to find this x. Throughout the game, a polynomial $p_i$ represents the group element $g^{p_i(x)}$. Hence $p_{-1}$ represents the identity element of the group, $p_0$ represents the generator g, and $p_1$ represents $g^x$. We think of the subscript of a polynomial as the adversary's label for the corresponding group element. The variable t tracks the highest label the adversary has used so far.







[Figure: game capturing discrete logarithm security of a generic group. For vectors $\mathbf{j}$ and $\mathbf{\alpha}$, the notation $\prod_i g_{\mathbf{j}[i]}^{\mathbf{\alpha}[i]}$ denotes a multi-exponentiation, where $g_{\mathbf{j}[i]}$ is the group element labeled $\mathbf{j}[i]$, i.e., $g^{p_{\mathbf{j}[i]}(x)}$.]
The only coins of this game are the choice of x, which we think of as $\sigma_0$ (here $\sigma_1$ is empty). In $\mathsf{H}^{\mathsf{dl}}_1$, the adversary is never told when two labels it handles non-trivially represent the same group element, so the view of $\mathcal{A}$ is independent of $\sigma_0$, as desired. Because the view of $\mathcal{A}$ is independent of x when $b=1$, we have that $\mathsf{Pr}[\mathsf{H}^{\mathsf{dl}}_1(\mathcal{A})]=1/|\mathbb{G}|$.
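To make the bookkeeping concrete, the following sketch (our own illustration; the class and method names are hypothetical, and the game is reduced to its multi-exponentiation oracle) tracks each label's polynomial and sets bad when two formally distinct polynomials agree at the secret x. Since the adversary can only combine existing elements, every represented polynomial is linear, $a+b\mathrm{X}$ over $\mathbb{Z}_{|\mathbb{G}|}$.

```python
import random

class GenericGroupDL:
    """Bookkeeping for the generic-group discrete log game; each label
    maps to a linear polynomial p(X) = a + b*X over Z_order."""

    def __init__(self, order, rng):
        self.order = order
        self.x = rng.randrange(order)  # the discrete log to be found
        # labels -1, 0, 1 represent the identity, the generator g, and g^x
        self.polys = {-1: (0, 0), 0: (1, 0), 1: (0, 1)}
        self.t = 1       # highest label used so far
        self.bad = False

    def multi_exp(self, labels, exponents):
        """prod_i g_{j[i]}^{alpha[i]}: sum the labeled polynomials."""
        a = sum(self.polys[j][0] * e for j, e in zip(labels, exponents)) % self.order
        b = sum(self.polys[j][1] * e for j, e in zip(labels, exponents)) % self.order
        # bad: a formally distinct polynomial collides at the point x
        for (a2, b2) in self.polys.values():
            same_value = (a + b * self.x) % self.order == (a2 + b2 * self.x) % self.order
            if (a, b) != (a2, b2) and same_value:
                self.bad = True
        self.t += 1
        self.polys[self.t] = (a, b)
        return self.t
```

For instance, computing $g^x$ "numerically" from the generator's label produces the constant polynomial $x$, which is formally distinct from X but agrees with it at the secret point, so bad is set.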
$$\mathsf{Pr}[\mathsf{H}^{\mathsf{dl}}_0(\mathcal{A})] \leqslant \mathsf{Pr}[\mathsf{H}^{\mathsf{dl}}_0(\mathcal{A})\text{ sets }\mathsf{bad}] + \mathsf{Pr}[\mathsf{H}^{\mathsf{dl}}_1(\mathcal{A})] = \mathsf{Pr}[\mathsf{H}^{\mathsf{dl}}_0(\mathcal{A})\text{ sets }\mathsf{bad}] + 1/|\mathbb{G}|.$$







$$\mathsf{Pr}[\mathsf{BAD}\mid Q_1 = q] \leqslant \binom{q+3}{2}\cdot\frac{1}{|\mathbb{G}|} \leqslant \frac{6q^2}{|\mathbb{G}|}.$$

Note that $\mathsf{Pr}[\mathsf{bad}\mid Q_1 = q]=0$ when $q=0$, so the bound $6q^2/|\mathbb{G}|$ in fact holds for all q.




Theorem 1 (with $d=2$) then gives
$$\mathsf{Pr}[\mathsf{H}^{\mathsf{dl}}_0(\mathcal{A})] \leqslant 5\sqrt{\frac{6\,\mathsf{E}[Q^\mathcal{A}_0]^2}{|\mathbb{G}|}} + \frac{1}{|\mathbb{G}|}.$$
4 Concrete Security for a Forking Lemma
In this section we apply our techniques to obtain concrete bounds on the soundness of proof systems. Of particular interest to us are proof systems that can be proven to achieve a security notion known as witness-extended emulation via a very general "Forking Lemma" introduced by Bootle, Cerulli, Chaidos, Groth, and Petit (BCCGP) [6]. Some examples include Bulletproofs [7], Hyrax [24], and Supersonic [8]. Our expected-time techniques arise naturally for these proof systems because witness-extended emulation requires the existence of an expected-time emulator which, given oracle access to a cheating prover, produces transcripts with the same distribution as the cheating prover, but additionally provides a witness w for the statement being proven whenever it outputs an accepting transcript.
In this section we use a new way of expressing witness-extended emulation as a special case of a more general notion we call predicate-extended emulation. The more general notion will serve as a clean, modular way to provide a concrete security version of the BCCGP forking lemma. This modularity allows us to hone in on the steps where our expected time analysis can be applied to give concrete bounds and avoid some technical issues with the original BCCGP formulation of the lemma.
In the BCCGP blueprint, the task of witness-extended emulation is divided into a generic tree-extended emulator, which for any public-coin proof system produces transcripts with the same distribution as a cheating prover together with a set of accepting transcripts satisfying a certain tree structure, and an extractor for the particular proof system under consideration, which can extract a witness from such a tree of transcripts. The original forking lemma of BCCGP technically only applied to extractors that always output a witness given a valid tree with no collisions. However, typical applications of the lemma require that the extractor be allowed to fail when the cheating prover has (implicitly) broken some presumed hard computational problem. Several works subsequent to BCCGP noticed this gap in the formalism [7, 8, 24] and stated slight variants of the BCCGP forking lemma. However, these variants are still unsatisfactory. The variant lemmas in [7, 24] technically only allow extractors which fail in extracting a witness with at most negligible probability for every tree (rather than negligible probability with respect to some efficiently samplable distribution over trees, as is needed). The more recent variant lemma in [8] is stated in such a way that the rewinding analysis at the core of the BCCGP lemma is omitted from the variant lemma and (technically) must be shown separately any time it is applied to a proof system. None of these issues affects the security of the protocols analyzed in these works; the intended meaning of each proof is clear from context and sound, and the issues are merely technical bugs in the formalism. However, to accurately capture concrete security it will be important that we have a precise formalism. Our notion of predicate-extended emulation helps to enable this.

[Table: predicates we use. Other predicates are only discussed informally.]
4.1 Syntax and Security of Proof Systems











[Figure: interaction between (honest) prover $\mathsf{P}$ and verifier $\mathsf{V}$ with public parameters $\pi$; tr is the transcript and d is the decision.]
Here tr is the transcript of the interaction and d is the decision of $\mathsf{V}$ (with 1 representing acceptance and 0 representing rejection). Perfect completeness requires that for all public parameters and all valid statement-witness pairs, the verifier's decision is always 1. If $\mathsf{PS}$ is public-coin, then the message output by $\mathsf{V}$ each round is set equal to its random coins. In this case, we let $\mathsf{V}_\pi(u,tr)$ denote $\mathsf{V}$'s decision after an interaction that produced transcript tr. Throughout this section we will implicitly assume that any proof system under discussion is public-coin. We sometimes refer to the verifier's outputs as challenges.
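The public-coin convention above (verifier messages equal to its random coins) can be sketched as a simple interaction loop; the function names below are our own, hypothetical ones:

```python
import random

def interact(prover_next, rounds, challenge_space, rng):
    """Run a public-coin interaction: each round the prover sends a message
    and the verifier replies with its random coins as the challenge.
    prover_next(transcript_so_far) -> next prover message."""
    transcript = []
    for _ in range(rounds):
        m = prover_next(transcript)
        c = rng.randrange(challenge_space)  # public coins = challenge
        transcript.append((m, c))
    transcript.append(prover_next(transcript))  # final prover message
    return transcript
```

A (public-coin) verifier's decision can then be computed as a function of the transcript alone, which is exactly what the notation $\mathsf{V}_\pi(u,tr)$ expresses.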
Predicate-extended emulation. The proof systems we consider were all analyzed with the notion of witness-extended emulation [11, 16]. This requires that for any efficient cheating prover $\mathsf{P}^*$ there exists an efficient emulator $\mathsf{E}$ which (given oracle access to $\mathsf{P}^*$ interacting with $\mathsf{V}$ and the ability to rewind them) produces transcripts with the same distribution as $\mathsf{P}^*$ and almost always provides a witness for the statement when the transcript it produces is accepting. We will capture witness-extended emulation as a special case of what we refer to as predicate-extended emulation. We cast the definition as two separate security properties. The first (emulation security) requires that $\mathsf{E}$ produces transcripts with the same distribution as $\mathsf{P}^*$. The second (predicate extension) is parameterized by a predicate $\mathrm{\varPi}$ and requires that whenever $\mathsf{E}$ produces an accepting transcript, its auxiliary output must satisfy $\mathrm{\varPi}$. As we will see, this will allow a clean, modular treatment of how BCCGP and follow-up works [6-8, 24] analyze witness-extended emulation.
We start by considering the game defined in Fig. 8. It is parameterized by a public-coin proof system $\mathsf{PS}$, an emulator $\mathsf{E}$, and a bit b. The adversary consists of a cheating prover $\mathsf{P}^*$ and an attacker $\mathcal{A}$. This game measures $\mathcal{A}$'s ability to distinguish between a transcript generated by $\mathsf{P}^*$ and one generated by $\mathsf{E}$. The emulator $\mathsf{E}$ is given access to two oracles: the first has $\mathsf{P}^*$ and $\mathsf{V}$ perform a round of interaction and returns the messages exchanged, while the second rewinds the interaction to the prior round. We define the advantage function $\mathsf{Adv}^{\mathsf{emu}}_{\mathsf{PS},\mathsf{E}}(\mathsf{P}^*,\mathcal{A})$ as the distinguishing advantage in this game. For the examples we consider there will be an $\mathsf{E}$ which (in expectation) performs a small number of oracle queries and does a small amount of local computation such that this advantage is zero for any $\mathsf{P}^*$ and $\mathcal{A}$.
Note that creating a perfect emulator is trivial in isolation; $\mathsf{E}$ can just make the appropriate calls to its first oracle to obtain a transcript tr with exactly the correct distribution. Where it gets interesting is that we will consider a second, auxiliary output of $\mathsf{E}$ and insist that it satisfies some predicate $\mathrm{\varPi}$ whenever tr is an accepting transcript. The adversary wins whenever tr is accepting but the predicate is not satisfied. This is captured by the game $\mathsf{H}^{\mathsf{predext}}$ shown in Fig. 8. We define $\mathsf{Adv}^{\mathsf{predext}}_{\mathsf{PS},\mathsf{E},\mathrm{\varPi}}(\mathsf{P}^*,\mathcal{A})=\mathsf{Pr}[\mathsf{H}^{\mathsf{predext}}_{\mathsf{PS},\mathsf{E},\mathrm{\varPi}}(\mathsf{P}^*,\mathcal{A})]$. Again this notion is trivial in isolation; $\mathsf{E}$ can just output rejecting transcripts. Hence, both security notions need to be considered together with respect to the same $\mathsf{E}$.




[Figure: games defining predicate-extended emulation security of proof system $\mathsf{PS}$.]
Hard predicates. One class of predicates to consider are those which embed some computational problem about the public parameter that is assumed to be hard to solve. We will say that $\mathrm{\varPi}$ is witness-independent if its output does not depend on its second input u. For example, if the setup algorithm $\mathsf{S}$ outputs a length-n vector of elements from a group $\mathbb{G}$, we can consider the predicate which checks if its input specifies a non-trivial discrete log relation among these elements. This predicate is useful for the analysis of a variety of proof systems [6, 7, 24]. Other useful examples include: (i) if $\mathsf{S}$ outputs parameters for a commitment scheme, a predicate that checks if its input specifies a commitment and two different openings for it [6, 8, 24], and (ii) if $\mathsf{S}$ outputs a group of unknown order together with an element of that group, a predicate that checks if its input specifies a non-trivial root of that element [8].
Whether a witness-independent predicate is hard to satisfy given the output of $\mathsf{S}$ is captured by the game $\mathsf{H}^{\mathsf{pred}}$ shown on the left side of Fig. 9. We define $\mathsf{Adv}^{\mathsf{pred}}_{\mathsf{S},\mathrm{\varPi}}(\mathcal{B})=\mathsf{Pr}[\mathsf{H}^{\mathsf{pred}}_{\mathsf{S},\mathrm{\varPi}}(\mathcal{B})]$. Note, for example, that if $\mathsf{S}$ outputs a vector of group elements and the discrete log relation predicate is used, then this game is identical to discrete log relation security, so $\mathsf{Adv}^{\mathsf{pred}}_{\mathsf{S},\mathrm{\varPi}}(\mathcal{B})$ equals the discrete log relation advantage for any adversary $\mathcal{B}$.






We define $\mathsf{Adv}^{\mathsf{sound}}_{\mathsf{PS},G}(\mathsf{P}^*)=\mathsf{Pr}[\mathsf{H}^{\mathsf{sound}}_{\mathsf{PS},G}(\mathsf{P}^*)]$ and $\mathsf{Adv}^{\mathsf{wit}}_{\mathsf{PS},G}(\mathcal{B})=\mathsf{Pr}[\mathsf{H}^{\mathsf{wit}}_{\mathsf{PS},G}(\mathcal{B})]$.
[Figure: Left: game defining hardness of satisfying predicate $\mathrm{\varPi}$. Right: games defining soundness of proof system $\mathsf{PS}$ with respect to instance generator G and the difficulty of finding a witness for statements produced by G.]
Note that the standard notion of soundness (that proving false statements is difficult) is captured by considering a G which always outputs false statements. In this case, $\mathsf{Adv}^{\mathsf{wit}}_{\mathsf{PS},G}(\mathcal{B})=0$ for all $\mathcal{B}$. In other contexts, it may be assumed that it is computationally difficult to find a witness for G's statement.
4.2 Concrete Security Forking Lemma
Now we will work towards proving our concrete security version of the BCCGP forking lemma. This lemma provides a general framework for constructing a good witness-extended emulator for a proof system. First, BCCGP showed how to construct a tree-extended emulator $\mathsf{T}$ which has perfect emulation security and (with high probability) outputs a set of transcripts satisfying a tree-like structure (defined later) whenever it outputs an accepting transcript. Then one constructs, for the particular proof system under consideration, an "extractor" $\mathsf{X}$ which given such a tree of transcripts can always produce a witness for the statement or break some other computational problem assumed to be difficult. Combining $\mathsf{T}$ and $\mathsf{X}$ appropriately gives a good witness-extended emulator.
Before proceeding to our forking lemma we will provide the necessary definitions of a tree-extended emulator and extractor, then state some simple lemmas that help build toward our forking lemma.
Transcript Tree. Fix a proof system $\mathsf{PS}$ and let a vector $(n_1,\dots,n_h)$ of positive integers be given. Let $\pi$ be an output of the setup algorithm and u be a statement. We inductively define an $(n_1,\dots,n_h)$-tree of transcripts for $(\pi,u)$; we will often leave some of these parameters implicit when they are clear from context.
First, in the base case $h=0$, a ()-tree is specified by a tuple $(m,\ell)$ where m is a prover message and $\ell$ is an empty list. Then an $(n_1,\dots,n_h)$-tree is specified by a tuple $(m,\ell)$ where m is a prover message and $\ell$ is a length-$n_1$ list of pairs $(c,T)$, each consisting of a challenge c and an $(n_2,\dots,n_h)$-tree T for $(\pi,u)$.
When discussing such trees we say their height is h; when a tree is not yet complete we will sometimes refer to it as a partial tree. We use the traditional terminology of nodes, children, parent, root, and leaf. We say the root node is at height h, its children are at height $h-1$, and so on. The leaf nodes are thus each at height 0. If a node is at height $h'$, then we say it is at depth $h-h'$.
Every path from the root to a leaf in a height-h tree gives a sequence of (message, challenge) pairs, where the pair at position i comes from the node at height i. Now if we fix a transcript prefix, then we can think of this prefix and the tree as inducing $n_1\cdots n_h$ different transcripts, one for each path. We will say that the tree is valid for the prefix if $\mathsf{V}_\pi(u,tr)=1$ for each transcript tr induced by the tree. Note that the prefix is an empty list when the tree covers the full interaction, so in that case we can omit reference to it and simply refer to the tree as valid.
Suppose $\mathsf{V}$'s coins are drawn from $S\times[p]$ for some set S and positive integer p. We will refer to the second component of its coins as the integer component. Let v be a parent node at height at least 1. If any two of its children were reached via challenges with identical integer components, then we say that v has a challenge collision. A tree has a challenge collision if any of its nodes has a challenge collision.
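The tree and challenge-collision definitions above translate directly into code; the following sketch (our own illustration, with hypothetical helper names) represents a node as a message plus a list of (challenge, subtree) pairs:

```python
def make_node(message, children=()):
    """A tree node: prover message plus a list of (challenge, subtree) pairs."""
    return {"m": message, "children": list(children)}

def has_challenge_collision(node, integer_component=lambda c: c):
    """True iff some node has two children whose challenges share an
    integer component (the second coordinate of the verifier's coins)."""
    ints = [integer_component(c) for c, _ in node["children"]]
    if len(ints) != len(set(ints)):
        return True
    return any(has_challenge_collision(t, integer_component)
               for _, t in node["children"])

def num_leaves(node):
    """A height-h tree induces one transcript per root-to-leaf path."""
    if not node["children"]:
        return 1
    return sum(num_leaves(t) for _, t in node["children"])
```

For a full $(n_1,\dots,n_h)$-tree, `num_leaves` returns $n_1\cdots n_h$, matching the count of induced transcripts.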


We use two predicates for trees: the first returns 1 iff its input is a valid $(n_1,\dots,n_h)$-tree, and the second returns 1 iff its input is an $(n_1,\dots,n_h)$-tree that does not have a challenge collision.























- 1.
- 2.
- 3.
- 4.
The expected number of times $\mathsf{T}$ executes $\mathsf{P}^*$ is N.
- 5.
The expected number of queries that $\mathsf{T}$ makes to its oracles is less than … . Exactly one of these queries is made while … .


The BCCGP tree-extended emulator.
(of Lemma 4). All of the claims except the third follow from BCCGP's analysis of $\mathsf{T}$. For the third claim, the relevant advantage can be upper-bounded by the probability that the integer component of some challenge is repeated across any of $\mathsf{T}$'s oracle queries. BCCGP bounded this probability by applying Markov's inequality to obtain an upper bound on the running time and then applying the birthday bound. We can instead apply our switching lemma analysis from the full version of this paper [13] (or the techniques from our analysis of the collision resistance of a random oracle in Sect. 3.3) to obtain the stated bound, because $\mathsf{T}$ samples only a bounded number of challenges in expectation.
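The rewinding strategy underlying the tree-extended emulator can be sketched as follows. This is a simplified illustration of the BCCGP approach (our own code, not their exact algorithm): at each node, run the prover once with a fresh challenge; if that first attempt is rejecting, give up, and otherwise keep resampling challenges until enough accepting subtrees are collected. The give-up rule after a failed first attempt is what keeps the expected number of prover executions at $N=n_1\cdots n_h$.

```python
import random

def build_tree(accepts, ns, challenge_space, rng, prefix=()):
    """Try to build an (n_1,...,n_h)-tree of accepting challenge paths.
    accepts(path) models whether the prover's run on that full challenge
    path is accepting. Returns nested (challenge, subtree) lists, or None."""
    if not ns:
        return [] if accepts(prefix) else None
    n, rest = ns[0], ns[1:]
    children = []
    first_attempt = True
    while len(children) < n:
        c = rng.randrange(challenge_space)
        subtree = build_tree(accepts, rest, challenge_space, rng, prefix + (c,))
        if subtree is not None:
            children.append((c, subtree))
        elif first_attempt:
            return None  # first try at this node failed: give up here
        first_attempt = False
    return children
```

Note that resampled challenges may repeat, producing exactly the challenge collisions whose probability the lemma bounds.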
Extractors. Let $\mathsf{X}$ be an algorithm and $\mathrm{\varPi}_1$, $\mathrm{\varPi}_2$ be predicates. We say that $\mathsf{X}$ is a $(\mathrm{\varPi}_1,\mathrm{\varPi}_2)$-extractor if its output satisfies $\mathrm{\varPi}_2$ whenever its input satisfies $\mathrm{\varPi}_1$. Let $\mathsf{T}$ be an emulator. Then we define $\mathsf{E}^{\dag}[\mathsf{T},\mathsf{X}]$ to be the emulator that, on the same inputs and with the same oracle access as $\mathsf{T}$, first runs $\mathsf{T}$ to obtain its output $(tr,\mathrm{aux})$ and then returns $(tr,\mathsf{X}(\mathrm{aux}))$. The following straightforward lemma relates the security of $\mathsf{E}^{\dag}[\mathsf{T},\mathsf{X}]$ and $\mathsf{T}$.









Reduction adversary for Theorem 2.
Forking lemma. Finally, we can state and prove our concrete security version of the BCCGP forking lemma. It captures the fact that any protocol with a suitable extractor has a good witness-extended emulator (assuming the associated predicate is computationally difficult to satisfy).












$\mathsf{E}=\mathsf{E}^{\dag}[\mathsf{T},\mathsf{X}]$

- 1.
- 2.
- 3.
The expected number of times $\mathsf{E}$ executes $\mathsf{P}^*$ (inside of $\mathsf{T}$) is N.
- 4.
The expected number of queries that $\mathsf{E}$ makes to its oracles is less than … . Exactly one of these queries is made while … .
- 5.
The expected runtime of $\mathsf{E}$ is approximately … , where … is the worst-case runtime of $\mathsf{X}$ and … is the expected number of queries that $\mathsf{E}$ makes to its oracles.
It will be useful to have the following simple lemma for comparing $\mathsf{Adv}^{\mathsf{predext}}$ with different choices of predicate that are related by logical operators. It can be derived from basic probability calculations.















$$\begin{aligned} \mathsf{Adv}^{\mathsf{predext}}_{\mathsf{PS},\mathsf{E},\lnot\mathrm{\varPi}^*}(\mathsf{P}^*,\mathcal{A}) &= \mathsf{Pr}[\mathsf{V}_\pi(u,tr)\wedge\lnot(\lnot\mathrm{\varPi}^*(\pi,u,\mathrm{aux}))\text{ in }\mathsf{H}^{\mathsf{predext}}]\\ &\leqslant \mathsf{Pr}[\mathrm{\varPi}^*(\pi,u,\mathrm{aux})\text{ in }\mathsf{H}^{\mathsf{predext}}]\\ &= \mathsf{Pr}[\mathrm{\varPi}^*(\pi,\varepsilon,\mathrm{aux})\text{ in }\mathsf{H}^{\mathsf{pred}}] = \mathsf{Adv}^{\mathsf{pred}}_{\mathsf{S},\mathrm{\varPi}}(\mathcal{B}_{\mathsf{E}}). \end{aligned}$$




4.3 Concrete Bounds on Soundness
Now we discuss how the forking lemma we just derived can be used to provide concrete bounds on soundness. First we make the generic observation that witness-extended emulation implies soundness. Then we discuss how we can use these results together with our expected-time generic group model bound on discrete log security to give concrete bounds on the soundness of various proof systems based on discrete log security, in particular giving the first concrete bound on the soundness of the Bulletproofs proof system for arithmetic circuits.
Witness-extended emulation implies soundness. The following theorem observes that finding a witness for u cannot be much more difficult than convincing a verifier to accept u if an efficient witness-extended emulator exists.













Adversaries used in Theorem 3.
(Sketch). The use of $\mathsf{E}$ in the game ensures that the probability that $\mathsf{E}$ outputs an accepting transcript must be roughly the same as the probability that $\mathsf{P}^*$ convinces $\mathsf{V}$ to accept. The difference between these probabilities is bounded by the emulation advantage. Then the predicate-extension security of $\mathsf{E}$ ensures that the probability it outputs a valid witness cannot be much less than the probability it outputs an accepting transcript. The difference between these probabilities is bounded by $\mathsf{Adv}^{\mathsf{predext}}$. The constructed adversary just runs $\mathsf{E}$ to obtain a witness, so $\mathsf{Adv}^{\mathsf{wit}}$ is the probability that $\mathsf{E}$ would output a valid witness.
Discrete log proof systems. A number of the proof systems in [6, 7, 24] were shown to have an extractor $\mathsf{X}$ which, from a valid tree of transcripts without challenge collisions, produces either a witness or a non-trivial discrete log relation. For any such proof system $\mathsf{PS}$, Theorem 3 and Theorem 2 bound the soundness of $\mathsf{PS}$ by the discrete log relation security of the group against an expected-time adversary. Moreover, we can then apply Lemma 3 to tightly bound this adversary's advantage by the advantage of an expected-time adversary against normal discrete log security. We know how to bound the advantage of such an adversary in the generic group model from Sect. 3.3. The relevant parameters are:
- p: the size of the set from which $\mathsf{V}$ draws the integer component of its challenges
- $|\mathbb{G}|$: the size of the group used
- N: the size of the tree that $\mathsf{X}$ requires
- n: the number of group elements in the discrete log relation instance
- … : the number of multi-exponentiations … performs
- … : the number of multi-exponentiations that … performs
We say such a proof system and extractor $\mathsf{X}$ have parameters $(p, |\mathbb{G}|, N, n, \dots)$. We obtain the following theorem for such a system, bounding its soundness in the generic group model.










$\mathsf{E}^{\dag}[\mathsf{T},\mathsf{X}]$


The result follows by applying Theorem 3, Theorem 2, Lemma 3, and the generic group model bound from Sect. 3.3 as discussed above.

($|\mathbb{G}|$ is the size of the group $\mathbb{G}$ in which discrete logs are assumed to be hard.)
Having proven our discrete log bound in a generic group model allowing multi-exponentiations is helpful here; it makes our bound not depend on the sizes of the multi-exponentiations being performed.












$\mathsf{E}^{\dag}[\mathsf{T},\mathsf{X}_B]$



We expect … to be the largest of the parameters, so the bound is dominated by the corresponding term.
We thank Ashrujit Ghoshal for extensive discussions and his involvement in the earlier stages of this work. We also thank Benedikt Bünz for some clarifications regarding [7].
This work was partially supported by NSF grants CNS-1930117 (CAREER), CNS-1926324, CNS-2026774, a Sloan Research Fellowship, and a JP Morgan Faculty Award.