1 Introduction
Background. Robust secret sharing is a version of secret sharing that enables the reconstruction of the shared secret s in the presence of incorrect shares: given all n shares but with t of them possibly incorrect, and of course without knowing which ones are incorrect, it should still be possible to recover s. If $t < n/3$ then this can be achieved by standard error-correction techniques, while for $t \ge n/2$ the task is impossible. When $n/3 \le t < n/2$, robust secret sharing is possible, but only if one accepts a small failure probability and an overhead in the share size, i.e., shares of bit size larger than the bit size of s. The goal then is to minimize the overhead in the share size for a given (negligible) failure probability $2^{-k}$. Following up on earlier work on the topic [2, 4–7, 10], Bishop et al. proposed a scheme with optimal overhead O(k) in the share size, neglecting polylogarithmic terms (in n, k and the bit size of s) [3]. In particular, their scheme was the first robust secret sharing scheme with an overhead that is independent of n (neglecting polylogarithmic terms). However, as pointed out by Fehr and Yuan [8], the Bishop et al. scheme does not (appear to) offer security in the presence of a rushing adversary that may choose the incorrect shares depending on the shares of the honest parties. This is in contrast to most of the earlier schemes, which do offer security against such rushing attacks (but are less efficient in terms of share size).1 Towards recovering security against a rushing adversary, Fehr and Yuan [8] proposed a new robust secret sharing scheme that features security against a rushing adversary and an overhead that is “almost independent” of n, i.e., that grows with n only like $n^{\varepsilon}$ for an arbitrary constant $\varepsilon > 0$. Furthermore, a variation of their scheme offers security against a rushing adversary and an overhead that is truly independent of n (neglecting polylogarithmic terms), but this version of the scheme has a superpolynomial running time.
Our Result. In this work, we close the final gap left open in [8]: we propose and analyze a new robust secret sharing scheme that is secure against a rushing adversary, has an overhead independent of n as in [3] (i.e., independent of n up to the same poly-logarithmic terms in n, k and the bit size m of the secret as in [3]), and has a polynomial running time.
Our new scheme recycles several of the ideas and techniques of [8]. The basic idea, which goes back to [3], is to have each share be authenticated by a small randomly chosen subset of the other parties. Following [8], our approach here differs from [3] in that the keys for the authentication are not authenticated. Indeed, this “circularity” of having the authentication keys authenticated causes the solution in [3] to not allow a rushing adversary; on the other hand, by not authenticating the authentication keys, we give the dishonest parties more flexibility in lying, making the reconstruction harder.
The reconstruction is in terms of a careful (and rather involved) inspection of the resulting consistency graph, exploiting that every honest party can verify the correctness of the shares of a small but random subset of parties, and that these choices of random “neighborhoods” become known to the adversary only after having decided about which shares to lie about. As a matter of fact, in our scheme, every honest party can verify the correctness of the shares of several randomly chosen small neighborhoods, giving rise to several global verification graphs. Furthermore, to ensure “freshness” of each such neighborhood conditioned on the adversary’s behavior so far, these neighborhoods are revealed sequentially in subsequent rounds of communication during the reconstruction phase.
As in [8], in our scheme the reconstructor first learns from the consistency graph whether the number p of “passive” parties, i.e., dishonest parties that did not lie about the actual share (but possibly about other pieces of information), is “large” or “small”. For p small, we can recycle the solution from [8], which happens to also work for the tighter parameter setting we consider here. When p is large though, the solution in [8] is to exploit the given redundancy in the shares
by means of applying list decoding, and then to find the right candidate from the list by again resorting to the consistency graph. However, this list decoding technique only works in a parameter regime that then gives rise to the $n^{\varepsilon}$-type overhead obtained in [8]. To overcome this, our solution invokes a new technique for dealing with the case of a large p.
We quickly explain this new part on a very high level. The idea is to design a procedure that works assuming the exact value of p is known. This procedure is then repeated for every possible choice of p, leading to a list of possible candidates; similarly to how the scheme in [8] finds the right candidate from the list produced by the list decoding, we can then find the right one from the list. As for the procedure assuming p is known, exploiting the fact that p is large and known, we can find a subset V of parties so that either we can recover the shared secret from the shares of the parties in V by standard error correction (since this collection of shares happens to contain more redundancy than errors), or we can argue that the complement of V is a set to which the small-p case applies, and thus we can again resort to the corresponding technique in [8].
One technical novelty in our approach is that we also invoke one layer of random neighborhoods that are publicly known. In this case, the adversary can corrupt parties depending on who can verify whose share, but the topology of the global verification graph is fixed and cannot be modified by dishonest parties that lie about their neighborhoods.
Following [3, 8], we point out that it is good enough to have a robust secret sharing scheme with a constant failure probability and a (quasi-)constant overhead; a scheme with failure probability $2^{-k}$ and a (quasi-)O(k) overhead can then be obtained by means of parallel repetition. This is what we do here as well: at the core is a scheme where each party is given a quasi-constant number of bits on top of the actual share
(i.e., the size of the authentication keys and the size of the random neighborhoods are chosen to be quasi-constant), and we show that this scheme has a constant failure probability.
Concurrent Work. In concurrent and independent work [9], a result very similar to ours was obtained (using rather different techniques though). They also show an optimal (up to poly-logarithmic terms) robust secret sharing scheme with security against a rushing adversary. Compared to our scheme, their scheme has a slightly better poly-logarithmic dependency on n. On the other hand, in a setting where the reconstruction is towards an external reconstructor R, our scheme works simply by revealing the shares to R (over multiple rounds) and R doing some local computation, whereas their scheme requires interaction among the shareholders and, as far as we can see, the shareholders will then learn the shared secret as well. For instance in the context of robust storage, the latter is undesirable.
2 Preliminaries
2.1 Graph Notation
We follow the graph notation in [8], which we briefly recall. Let $G=([n],E)$ be a graph with vertex set $[n] = \{1,\dots,n\}$ and edge set E. By convention, $(v,w) \in E$ represents an edge directed from v to w. We let $G|_S$ be the restriction of G to S for any $S \subseteq [n]$, i.e., $G|_S = (S, E|_S)$ with $E|_S = \{(v,w) \in E : v, w \in S\}$.

For $v\in [n]$, we consider the out- and in-neighborhoods
$$N^{\mathsf{out}}(v)=\{w \in [n] : (v,w)\in E\} \quad\text{and}\quad N^{\mathsf{in}}(v)=\{w \in [n] : (w,v) \in E\},$$
and for $S\subseteq [n]$ and $v\in [n]$ we write
$$n^{\mathsf{out}}_S(v) = |N^{\mathsf{out}}(v) \cap S| \quad\text{and}\quad n^{\mathsf{in}}_S(v) = |N^{\mathsf{in}}(v) \cap S|$$
for the number of out- respectively in-neighbors of v that lie in S; the subscript is omitted when $S = [n]$. In a labeled graph, where every edge carries a label in $\{\mathtt{good}, \mathtt{bad}\}$, we write $n^{\mathsf{out}}_S(v, \mathtt{good})$ for the number of $\mathtt{good}$-labeled edges from v into S, and correspondingly for $n^{\mathsf{out}}_S(v, \mathtt{bad})$, $n^{\mathsf{in}}_S(v, \mathtt{good})$ and $n^{\mathsf{in}}_S(v, \mathtt{bad})$.
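To fix this notation concretely, the following minimal Python sketch (purely illustrative; all function names are ours, not the paper's) computes $N^{\mathsf{out}}(v)$, $N^{\mathsf{in}}(v)$, the counts $n^{\mathsf{out}}_S(v)$ and $n^{\mathsf{in}}_S(v)$, their labeled variants, and the restriction $G|_S$, for a directed graph given as a set of edges.

```python
# Illustrative helpers for the graph notation above (names are ours, not the paper's).
# A graph is a set of directed edges (v, w); labels maps an edge to 'good' or 'bad'.

def N_out(E, v):
    return {w for (x, w) in E if x == v}          # N^out(v)

def N_in(E, v):
    return {x for (x, w) in E if w == v}          # N^in(v)

def n_out(E, S, v, labels=None, label=None):
    """n^out_S(v); if labels/label are given, count only edges with that label."""
    return sum(1 for (x, w) in E
               if x == v and w in S and (label is None or labels[(x, w)] == label))

def n_in(E, S, v, labels=None, label=None):
    """n^in_S(v); if labels/label are given, count only edges with that label."""
    return sum(1 for (x, w) in E
               if w == v and x in S and (label is None or labels[(x, w)] == label))

def restrict(E, S):
    """Edge set of the restriction G|_S."""
    return {(v, w) for (v, w) in E if v in S and w in S}
```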
2.2 Random Graphs
We call a graph $G=([n],E)$ a randomized graph if each edge in E is actually a random variable. We are particularly interested in randomized graphs where (some or all of) the out-neighborhoods $N^{\mathsf{out}}(v)$ are uniformly random and independent subsets of $[n]\setminus\{v\}$ of a given size d. For easier terminology, we refer to such neighborhoods as being random and independent. G is called a random degree-d graph if $N^{\mathsf{out}}(v)$ is a random subset of size d in the above sense for all $v \in [n]$. The following properties are direct corollaries of the Chernoff-Hoeffding bound: the first follows from Chernoff-Hoeffding with independent random variables, and the latter from Chernoff-Hoeffding with negatively correlated random variables (see Appendix A).2
Corollary 1. Let $G=([n],E)$ be a randomized graph and let $v \in [n]$ be such that $N^{\mathsf{out}}(v)$ is a uniformly random subset of $[n] \setminus \{v\}$ of size d. Then, for any fixed $T \subset [n]$, any $\varDelta > 0$, and with $\mu = \mathbb{E}\bigl[n^{\mathsf{out}}_T(v)\bigr]$,
$$\Pr\bigl[n^{\mathsf{out}}_T(v) \ge \mu +\varDelta\bigr] \le 2^{-\frac{\varDelta^2}{3\mu}} \quad\text{and}\quad \Pr\bigl[n^{\mathsf{out}}_T(v) \le \mu -\varDelta\bigr] \le 2^{-\frac{\varDelta^2}{2\mu}}.$$
Corollary 2. Let $G=([n],E)$ be a random degree-d graph. Then, for any fixed $T \subset [n]$, any $v \in [n]$, any $\varDelta > 0$, and with $\mu = \mathbb{E}\bigl[n^{\mathsf{in}}_T(v)\bigr]$,
$$\Pr\bigl[n^{\mathsf{in}}_T(v) \ge \mu +\varDelta\bigr] \le 2^{-\frac{\varDelta^2}{3\mu}} \quad\text{and}\quad \Pr\bigl[n^{\mathsf{in}}_T(v) \le \mu -\varDelta\bigr] \le 2^{-\frac{\varDelta^2}{2\mu}}.$$
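The following small Python experiment is a numerical illustration of these concentration statements only (it is not part of the scheme; all parameter choices and names are arbitrary): it samples a random degree-d graph and compares $n^{\mathsf{out}}_T(v)$ and $n^{\mathsf{in}}_T(v)$ against their common expectation of roughly d/2 for a set T of size n/2.

```python
# Numerical illustration of the concentration of n^out_T(v) and n^in_T(v)
# for a random degree-d graph; parameters here are arbitrary.
import random

def random_degree_d_graph(n, d, rng):
    """Out-neighborhood of every vertex = uniform size-d subset of [n] \\ {v}."""
    return [rng.sample([w for w in range(n) if w != v], d) for v in range(n)]

def degrees_wrt(T, out_nbhd):
    n = len(out_nbhd)
    T = set(T)
    n_out = [sum(w in T for w in out_nbhd[v]) for v in range(n)]   # n^out_T(v)
    n_in = [0] * n                                                  # n^in_T(v)
    for v in T:
        for w in out_nbhd[v]:
            n_in[w] += 1
    return n_out, n_in

if __name__ == "__main__":
    rng = random.Random(0)
    n, d = 1000, 60
    T = range(n // 2)                     # |T| = n/2, so both expectations ~ d/2
    out_nbhd = random_degree_d_graph(n, d, rng)
    n_out, n_in = degrees_wrt(T, out_nbhd)
    print("expected ~", d / 2)
    print("n^out_T:  min", min(n_out), "max", max(n_out))
    print("n^in_T:   min", min(n_in), "max", max(n_in))
```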
We will also encounter a situation where the set T may depend on the graph G; this will be in the context of a random but publicly known verification graph, where the adversary can then influence T dependent on G. The technical issue then is that conditioned on the set T, the neighborhood may not be random anymore, so that we cannot apply the above two corollaries. Instead, we will then use the following properties, which require some more work to prove.
Lemma 1. Let $G=([n],E)$ be a random degree-d graph, let $\gamma \in \frac{1}{n}\mathbb{Z} \cap \bigl[0, \frac{1}{2}\bigr]$ and $\alpha > 0$, and let $T \subset [n]$ be of size $\gamma n$, where T may depend on the choice of the edges of G. Then, the probability (over the choice of the random edges) that
$$|\{v\in [n]: n^{\mathsf{in}}_{T}(v) < d(\gamma -2\alpha)\}| \ge \frac{\gamma n}{2}$$
is small; the precise bound is derived in the appendix.

Proof. See appendix.
Lemma 2. Let $G=([n],E)$ be a random degree-d graph, let $\gamma \in \frac{1}{n}\mathbb{Z} \cap \bigl[\frac{1}{\log n}, \frac{1}{2}\bigr]$ and $\alpha > 0$, and let $T \subset [n]$ be of size $\gamma n$, where T may again depend on the choice of the edges of G. Then, except with small probability,
$$|\{v\in [n]: n^{\mathsf{in}}_{T}(v) \ge d(\gamma -2\alpha)\}| \ge \frac{\gamma n}{2}.$$
The proof goes along the very same lines as for Lemma 1.
2.3 Robust Secret Sharing
A robust secret sharing scheme consists of two interactive protocols: the sharing protocol $\mathsf{Share}$ and the reconstruction protocol $\mathsf{Rec}$. There are three different roles in this scheme: a dealer D, a receiver R, and n parties labeled $1,\dots,n$. The sharing protocol is executed by D and the n parties: D takes as input a message s, and each party i obtains a share as output. Typically, D generates these shares locally and then sends to each party the corresponding share. The reconstruction protocol is executed by R and the n parties: each party is supposed to use its share as input, and the goal is that R obtains s as output. Ideally, the n parties simply send their shares to R—possibly using multiple communication rounds—and R then performs some local computation to reconstruct the message.4
We want a robust secret sharing scheme to be secure in the presence of an active adversary who can corrupt up to t of n parties. Once a party is corrupted, the adversary can see the share of this party. In addition, in the reconstruction protocol, the corrupt parties can arbitrarily deviate from the protocol. The following captures the formal security requirements of a robust secret sharing scheme.
(Robust Secret Sharing). A pair of protocols $(\mathsf{Share}, \mathsf{Rec})$ is called a $(t,\delta)$-robust secret sharing scheme if the following properties hold for any distribution of the message s (from a given domain).

Privacy: Before $\mathsf{Rec}$ is started, the adversary has no more information on the shared secret s than he had before the execution of $\mathsf{Share}$.

Robust reconstructability: At the end of $\mathsf{Rec}$, the reconstructor R outputs s except with probability at most $\delta$.
As for the precise corruption model, we consider an adversary that can corrupt up to t of the n parties (but not the dealer and receiver). We consider the adversary to be rushing, meaning that the messages sent by the corrupt parties during any communication round in the reconstruction phase may depend on the messages of the honest parties sent in that round. Also, we consider the adversary to be adaptive, meaning that the adversary can corrupt parties one by one (each one depending on the adversary’s current view) and between any two rounds of communication, as long as the total number of corrupt parties is at most t. We point out that we do not allow the adversary to be “corruption-rushing”, i.e., to corrupt parties during a communication round, depending on the messages of (some of) the honest parties in this round, and to then “rush” and modify this round’s messages of the freshly corrupt parties.5
2.4 Additional Building Blocks
We briefly recall a couple of techniques that we use in our construction. For more details, see Appendix B.
Message Authentication Codes. We use a message authentication code (MAC) with, in addition to the standard unforgeability guarantee, an extra privacy property, referred to as property (2) below. The precise construction and its parameters are given as Theorem 6 in Appendix B.
Robust Distributed Storage. Following [3, 8], the tags in the construction of our robust secret sharing scheme will be stored robustly yet non-privately; the latter is the reason why the extra privacy property (2) for the MAC is necessary. This design ensures that cheaters cannot lie about the tags that authenticate their shares to, say, provoke disagreement among honest parties about the correctness of the share of a dishonest party.
Formally, a robust distributed storage scheme is a robust secret sharing scheme but without the privacy requirement, and it can be achieved using a list-decodable code (see Appendix B or [8] for more details). Important for us will be that the share of each party i consists of two parts, and that robustness against a rushing adversary is achieved by revealing the first part in one communication round and only then, in a later communication round, the second part. Furthermore, we can do with both parts being (asymptotically) smaller than the stored message by a fraction 1/n, and with correct reconstruction except with small probability (as quantified in Theorem 7).
3 The Robust Secret Sharing Scheme
3.1 The Sharing Protocol
Let t be an arbitrary positive integer and $n = 2t+1$.6 We consider the message s to be shared to be m bits long. The neighborhood size d and the field and MAC parameters below are chosen as functions of n and m, such that the per-party overhead is quasi-constant (cf. the introduction). Our robust secret sharing scheme uses the following three building blocks: a linear secret sharing scheme that corresponds to a Reed-Solomon code of length n and dimension $t+1$ over a sufficiently large extension field,7 together with its corresponding error-correcting decoding algorithm; the MAC construction from Theorem 6; and the robust distributed storage scheme from Theorem 7. On input s, our sharing protocol $\mathsf{Share}$ works as follows.
- 1.

Let $(s_1,\dots,s_n)$ be the non-robust (Shamir/Reed-Solomon) secret sharing of s.

- 2.

Sample the MAC randomness for every party, and repeat the following 5 times: for every party i, choose a fresh random neighborhood of size d together with fresh MAC keys, and compute the authentication tags for the Shamir shares of the parties in that neighborhood. We index the resulting neighborhoods, keys and tags by the repetition in which they were chosen.

- 3.

Use the robust distributed storage scheme to store all authentication tags together with the topology of one designated verification graph. Party i obtains the two parts of its robust-storage share.

- 4.

For every party i, define the share of party i to consist of its Shamir share $s_i$, its MAC randomness, its random neighborhoods and MAC keys, and its robust-storage share. Output these n shares.
We emphasize that the topology of the designated verification graph, determined by the corresponding random neighborhoods, is stored robustly (yet non-privately). This means that the adversary will know this graph, but dishonest parties cannot lie about it. For the other neighborhoods it is the other way round: they remain private until revealed (see below), but a dishonest party i can then lie about them.
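The following minimal Python sketch of the sharing phase is purely illustrative: it uses a toy prime field and a toy one-time MAC in place of the construction from Theorem 6, it omits the robust distributed storage of Theorem 7 (the tags are simply returned), and all names (`share`, `toy_mac`, `NUM_GRAPHS`, ...) are ours rather than the paper's.

```python
# Illustrative sketch of the sharing phase of Section 3.1 (NOT the exact scheme):
# Shamir/Reed-Solomon sharing over a toy prime field, plus 5 layers of random
# verification neighborhoods with tags under a toy one-time MAC.
import secrets

P = 2**61 - 1          # toy prime field, stand-in for the field(s) of Sect. 3.1
NUM_GRAPHS = 5         # number of repetitions / verification graphs

def shamir_share(s, n, t):
    """Evaluations at 1..n of a random degree-t polynomial with constant term s."""
    coeffs = [s % P] + [secrets.randbelow(P) for _ in range(t)]
    return [sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]

def toy_mac(key, msg):
    """Toy one-time MAC: tag = a*msg + b mod P (NOT the MAC of Theorem 6)."""
    a, b = key
    return (a * msg + b) % P

def share(s, n, t, d):
    shares = shamir_share(s, n, t)
    rng = secrets.SystemRandom()
    parties = [dict(shamir=shares[i], neighborhoods=[], keys=[]) for i in range(n)]
    tags = {}                                   # in the real scheme: robustly stored
    for g in range(NUM_GRAPHS):
        for i in range(n):
            nbhd = rng.sample([j for j in range(n) if j != i], d)   # size-d set
            keys = [(secrets.randbelow(P), secrets.randbelow(P)) for _ in nbhd]
            for j, key in zip(nbhd, keys):
                tags[(g, i, j)] = toy_mac(key, shares[j])   # i authenticates s_j
            parties[i]["neighborhoods"].append(nbhd)
            parties[i]["keys"].append(keys)
    return parties, tags
```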
3.2 The Reconstruction Protocol
The reconstruction protocol $\mathsf{Rec}$ proceeds in 5 communication rounds.

Round 1: Every party i sends its Shamir share $s_i$ and its MAC randomness to the reconstructor R.

Rounds 2 to 5: In each of these rounds, every party i sends to R a so-far-unrevealed portion of its verification information, i.e., a further random neighborhood together with the corresponding MAC keys.
We emphasize that since the keys for the authentication tags are announced only after the Shamir/Reed-Solomon shares, the MAC does its job also in the case of a rushing adversary. Furthermore, it will be crucial that the (private) neighborhoods are revealed in the second round at the earliest, so as to ensure that once the (correct and incorrect) Shamir shares are “on the table”, the neighborhoods of the honest parties are still random and independent; similarly for the neighborhoods revealed in the later rounds. The designated verification graph is the exception: it is stored robustly, hence the adversary knows all of it but cannot lie about it.
In a first step, the reconstruction algorithm run by R considers the announced graphs and all the authentication information, and turns these graphs into labeled graphs by marking every edge as $\mathtt{good}$ or $\mathtt{bad}$, depending on whether the corresponding authentication verification works out. Then, it makes calls to various subroutines; we describe and analyze them one at a time. As indicated in the description of the reconstruction algorithm, the overall approach is to first find out whether the number p of passive parties10 is small or large, i.e., whether there is lots of redundancy or there are many errors in the Shamir shares, and then to use a procedure that is tailored to that case. Basically speaking, there are three subroutines to handle p in three different ranges. The unique decoding algorithm handles the case where there is sufficient redundancy in the shares to uniquely decode (this is the trivial case, which we do not discuss any further below; for the remainder we assume we are not in this case). The graph algorithm GraphB handles the case of a small p, and the algorithm BigP deals with a large p; there is some overlap between those two ranges, as Check will not be able to pinpoint the range precisely.
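As an illustration of the labeling step only (mirroring the toy sharing sketch of Section 3.1; the function name, data layout and toy MAC are ours, not the paper's), the following sketch marks a claimed edge (i, j) of graph g as good exactly when the publicly stored tag verifies against j's announced share under i's announced key.

```python
# Sketch of the labeling step: mark each claimed edge (i, j) of graph g as
# 'good' iff the stored tag verifies under the announced key and share.
# Toy MAC as in the sharing sketch; not the actual verification of Theorem 6.
P = 2**61 - 1

def toy_mac(key, msg):
    a, b = key
    return (a * msg + b) % P

def label_graphs(announced_shares, announced_nbhds, announced_keys, tags,
                 num_graphs=5):
    """announced_nbhds[g][i], announced_keys[g][i]: party i's claims for graph g."""
    labels = {}
    for g in range(num_graphs):
        for i, nbhd in enumerate(announced_nbhds[g]):
            for j, key in zip(nbhd, announced_keys[g][i]):
                ok = tags.get((g, i, j)) == toy_mac(key, announced_shares[j])
                labels[(g, i, j)] = 'good' if ok else 'bad'
    return labels
```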
In order to complete the description of the reconstruction procedure and to show that it does its job (except with at most constant probability), we will show the following in the upcoming sections.
- 1.
An algorithm Check that distinguishes “small” from “large” p.
- 2.
An algorithm BigP that, when run with the correct value of p and given that p is “large”, outputs a valid codeword that agrees with the shares of all honest parties i, and thus decodes to s. Given that p is not known, this algorithm is run with all possible choices for p, and all resulting candidate codewords are collected.
- 3.
An algorithm Cand that finds the right codeword in the above list of candidates.
- 4.
An algorithm GraphB that, when run with an honest party i and given that p is “small”, outputs the codeword corresponding to the correct secret s. This algorithm very much coincides with the algorithm used in [8] to deal with the case of a “small” p, except for an adjustment of the parameters. We defer the description of this algorithm to the appendix, as its security analysis is quite similar to that of the graph algorithm within BigP.
3.3 “Active” and “Passive” Dishonest Parties
As in previous work on the topic, for the analysis of our scheme it will be convenient to distinguish between corrupt parties that announce the correct Shamir share and the correct MAC randomness in the first round of the reconstruction phase (but may lie about other pieces of information), and corrupt parties that announce an incorrect share or incorrect randomness. Following the terminology of previous work on the topic, the former parties are called passive and the latter are called active parties; we write P and A for the respective sets of passive and active parties, and we write H for the set of honest parties.
A subtle issue is the following. While the set A of active parties is determined and fixed after the first round of communication, the set P of passive parties may increase over time, since the adversary may keep corrupting parties as long as the total number of corrupt parties is at most t, and make them lie in later rounds. Often, this change in P is no concern, since many of the statements are in terms of $P \cup H$, which is fixed like A. In the other cases, we have to be explicit about the communication round we consider, and P is then understood to be the set of passive parties during this communication round.
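For the analysis only (the reconstructor R does not know this partition), the classification can be phrased as in the following sketch; the function name and data layout are ours.

```python
# Sketch of the active/passive classification used in the analysis: a corrupt
# party is "active" if its round-1 announcement (Shamir share, MAC randomness)
# differs from what the dealer handed out, and "passive" otherwise.
def classify(dealt, announced_round1, corrupt):
    """dealt[i], announced_round1[i]: pairs (share_i, mac_randomness_i)."""
    A = {i for i in corrupt if announced_round1[i] != dealt[i]}
    P = set(corrupt) - A
    H = set(range(len(dealt))) - set(corrupt)
    return H, P, A
```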
3.4 The Consistency Graphs
As in [8], using a lazy-sampling argument, it is not hard to see that after every communication round (including the subsequent “corruption round”) in the reconstruction procedure, the following holds. Conditioned on anything that can be computed from the information announced up to that point, the neighborhoods of the currently honest parties that are announced in the next round are still random and independent. For example, conditioned on the set A of active parties and the set P of passive parties after the first round, the neighborhoods announced in the second round are random and independent for all then-honest parties. Whenever we make probabilistic arguments, the randomness is drawn from these random neighborhoods. The only exception is the designated verification graph, which is robustly but non-privately stored, and which thus has the property that its neighborhoods are random and independent for all parties, but not necessarily anymore when conditioned on, say, A and/or P.
Furthermore, by the security of the robust distributed storage (Theorem 7) and of the MAC (Theorem 6) with our choice of parameters, it is ensured that all of the labeled graphs satisfy the following property, except with small probability. For any edge (i, j) in any of these graphs, if i is honest at the time it announces the corresponding neighborhood and MAC keys, then (i, j) is labeled $\mathtt{good}$ whenever j is honest or passive.11 Also, (i, j) is labeled $\mathtt{bad}$ whenever j is active.
These observations give rise to the following definition, given a partition of $[n]$ into disjoint subsets H, P and A with $|P \cup A| \le t$.

Definition (Consistency graph). A labeled graph $G = ([n],E)$, with every edge labeled $\mathtt{good}$ or $\mathtt{bad}$, is called a consistency graph w.r.t. the partition $[n] = H \cup P \cup A$ if the following two properties hold.

- (Randomness)

The neighborhoods $N^{\mathsf{out}}(v)$ of the vertices $v \in H$ are uniformly random and independent subsets of $[n] \setminus \{v\}$ of size d.

- (Labelling)

For any edge $(v,w) \in E$ with $v \in H$: if $w \in H \cup P$ then $(v,w)$ is labeled $\mathtt{good}$, and if $w \in A$ then $(v,w)$ is labeled $\mathtt{bad}$.
In order to emphasize the randomness of the neighborhoods given the partition $[n] = H \cup P \cup A$ (and possibly some other information X considered at the time), we also speak of a fresh consistency graph (w.r.t. the partition and X). When we consider a variant of a consistency graph that is a random degree-d graph, i.e., the randomness property holds for all $v \in [n]$, while the partition (and possibly some other information X considered at the time) may depend on the choice of the random edges, we speak of a random but non-fresh consistency graph.
Using this terminology, we can now capture the above remarks as follows.

Proposition 1. The graphs determined by the neighborhoods announced in the respective communication rounds are fresh consistency graphs w.r.t. the partition given by the honest, the active and the (at that time) passive parties, and w.r.t. any information available to R or the adversary prior to the respective communication round, except that the labeling property may fail with small probability (independent of the randomness of the edges). On the other hand, the robustly stored graph is a random but non-fresh consistency graph (where, again, the labeling property may fail with small probability).
In the following analysis we will suppress the failure probability for the labeling property; we will incorporate it again in the end. Also, we take it as understood that the partition $[n] = H \cup P \cup A$ always refers to the honest, the passive and the active parties, respectively.
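The labeling property of a consistency graph can be checked mechanically on a single labeled graph, as in the following sketch (illustrative only; the randomness property is a distributional statement and cannot be verified on one sample). The function name is ours.

```python
# Sketch of the labeling property of a consistency graph w.r.t. (H, P, A):
# every edge leaving an honest vertex must be labeled 'good' if it points
# into H or P, and 'bad' if it points into A.
def labeling_property_holds(labels, H, P, A):
    """labels maps each edge (v, w) to 'good' or 'bad'."""
    for (v, w), lab in labels.items():
        if v in H:
            expected = 'bad' if w in A else 'good'
            if lab != expected:
                return False
    return True
```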
3.5 The Check Subroutine
Lemma ([8]). Except with small probability, Check outputs yes if p is sufficiently small and no if p is sufficiently large (and either of the two otherwise).
3.6 The Cand Subroutine
Lemma. If p is sufficiently large, the input to Cand is a set of codewords of bounded cardinality among which there is a codeword that coincides with the announced Shamir shares of all honest and passive parties, and G is a fresh consistency graph, then Cand outputs this codeword except with small probability.
Proof (sketch). For a codeword in the list, let S denote the set of coordinates i on which it coincides with the announced Shamir shares. If the codeword is not the correct one, then it disagrees with the announced shares of a significant fraction of the honest and passive parties, and thus, for an honest party v with a fresh random neighborhood,
$$\Pr[n^{\mathsf{out}}_{[n] \setminus S}(v,\mathtt{good})=0] \le \Pr[n^{\mathsf{out}}_{(H \cup P)\setminus S}(v,\mathtt{good})=0] \le 2^{-\varOmega(\frac{d}{\log n})},$$
i.e., except with small probability, v has a $\mathtt{good}$-labeled edge pointing outside of S. If, on the other hand, the codeword is the correct one, then $[n] \setminus S \subseteq A$, and hence, for every honest v,
$$n^{\mathsf{out}}_{[n]\setminus S}(v,\mathtt{good})\le n^{\mathsf{out}}_{A}(v,\mathtt{good}) = 0$$
by the labeling property. Except with small probability, the correct codeword is thus the only candidate for which no honest party has a $\mathtt{good}$-labeled edge into the complement of its agreement set, and Cand identifies it accordingly.
4 The Algorithm for Big p
Below, we describe the different subroutines of BigP and show that they do what they are supposed to do. Formally, we will prove the following.
If the number p of passive parties satisfies the “large p” condition required by BigP, then BigP will output a list that contains the correct codeword except with small probability. Moreover, it runs in time poly(n, m).
4.1 Filter Out Active Parties
We point out that the statement holds for the set of honest parties H as it is before Round 2 of the reconstruction procedure, but the lower bound will still hold after Round 2, since |H| remains larger than t.
For an honest party v, whose $\mathtt{bad}$-labeled out-edges all point into A, we have
$$\Pr\bigl[{n}^{\mathsf{out}}(v, \mathtt{bad})\ge \textstyle\frac{d(1-2\gamma +\alpha)}{2}\bigr] = \Pr\bigl[{n}^{\mathsf{out}}_{A}(v, \mathtt{bad})\ge \frac{d(1-2\gamma +\alpha)}{2}\bigr] \le 2^{-\alpha^2 d/6} = n^{-4}.$$
Similarly, for an active party v and any set $T \supseteq H$,
$$\Pr\bigl[{n}_T^{\mathsf{in}}(v, \mathtt{bad}) \le \textstyle\frac{d(1-\alpha)}{2}\bigr] \le \Pr\bigl[{n}^{\mathsf{in}}_{H}(v, \mathtt{bad})\le \frac{d(1-\alpha)}{2}\bigr] \le 2^{-\alpha^2 d/4} = n^{-6}.$$
4.2 Find the Correct Codeword—In Some Cases
We stress that in the following statement, the partition $[n] = H \cup P \cup A$ (and thus the sets H, P and A) and the set V may depend on the choice of the (random) edges in G.
The proof under the second assumption goes along the same lines, noting that the lower bound on |V| then offers a similar gap to the corresponding condition in the definition.
We proceed to our second claim.
The proof under the second assumption goes along the same lines, again with a similar gap to the corresponding condition in the definition.
The following theorem is a consequence of Lemma 3 and Lemma 4. The statement holds for P and H after Round 2 in the reconstruction procedure.
It follows from Lemma 3 and Lemma 4 that, except with the claimed probability, V contains sufficiently many honest and passive parties and sufficiently few active parties. Therefore, the punctured codeword, obtained by restricting to the coordinates in V, has more redundancy than errors; thus unique decoding works and produces the correct codeword.
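To make this counting explicit, here is a brief sketch under the assumptions that the Shamir sharing corresponds to a Reed-Solomon code of dimension t+1 (as in Section 3.1) and that all honest and passive coordinates in V carry correct shares:

```latex
% Unique decoding on the coordinates in V (sketch under the stated assumptions):
\begin{align*}
  &\text{minimum distance of the code punctured to } V:\quad |V| - t, \\
  &\text{number of erroneous coordinates in } V:\quad |V \cap A|, \\
  &\text{so unique decoding succeeds as soon as}\quad 2\,|V \cap A| < |V| - t .
\end{align*}
```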
In the remaining case, where unique decoding on V does not apply, we have $|V\cap H|\in [(\gamma -\alpha )n, (\gamma +5\alpha )n]$, and we consider the complement $W := [n] \setminus V$ instead; this set W satisfies condition (3) below, which is what the graph algorithm of the next subsection requires.
4.3 Graph Algorithm
Let $c\in [\frac{1}{4}, \frac{1}{2}]$ and suppose that the set W handed to the graph algorithm satisfies
$$|W|\in [(2c-2\alpha)n,(2c+4\alpha)n],\quad |W\cap H|\in [(c-5\alpha)n, (c+\alpha)n],\quad |W\cap P|\le 3\alpha n. \qquad (3)$$
Recall that the subset of active parties in W is still A, that the out-degrees of the relevant vertices are concentrated around their expectations, and that (due to the MACs) the edges from honest parties to active parties are labeled $\mathtt{bad}$, while the edges from honest parties to honest or passive parties are labeled $\mathtt{good}$.
We also recall that whether a corrupt party i is passive or active, i.e., in P or in A, depends only on its Shamir share and MAC randomness as announced in the first communication round of the reconstruction phase. Note that a passive party may well lie about, say, its neighborhoods. Our reasoning only relies on the neighborhoods of the honest parties, which are random and independent conditioned on the adversary's view, as explained in Proposition 1.
Theorem 2. Under the claim of Proposition 1, and assuming that v is honest and that W satisfies (3), the graph algorithm will output the correct codeword except with small probability. Moreover, it runs in time poly(m, n).
The proof follows almost literally the one in [8], adjusted to the parameter regime considered here; for completeness, we provide it. The proof of Theorem 2 consists of the analysis of Steps ii to v and of the graph expansion algorithm; the analysis of Steps ii to v is deferred to the appendix.
4.4 Graph Expansion
We start by analyzing the expansion properties of the subgraph of G restricted to the set of honest parties.
By Remark 2, we know that the set of honest parties under consideration is sufficiently large. By the assumption on the neighborhoods and by Corollary 1, the relevant out-degrees are concentrated around their expectation for any fixed vertex; taking the union bound, this holds for all considered vertices except with small probability. In the remainder of the proof, we may thus assume that every considered vertex contributes the expected number of random outgoing edges.
Consider the list of neighbors of all these vertices, counted with repetition. To prove the conclusion, it suffices to bound the probability that more than the allowed number of these listed vertices are repeated.
5 Parallel Repetition
The scheme consisting of the above sharing and reconstruction protocols is an n-party robust secret sharing scheme for t corrupt parties, with constant failure probability, running time poly(m, n), and a share size that exceeds the size of the Shamir share by a quasi-constant number of bits.
The error probability can be made arbitrarily small by means of several independent executions of the scheme, except that the same Shamir shares would be used in all the instances. This can be done in the same manner as in [8] or [3]; we skip the details and refer the interested reader to [8] or [3].
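As a back-of-the-envelope illustration of this amplification (not the formal argument, which is as in [3, 8]), suppose the $\lambda$ executions were fully independent, each letting R recover the correct secret except with probability $\delta \le 1/3$, and let R output the majority value among the $\lambda$ outcomes; Hoeffding's inequality then gives:

```latex
% Amplification sketch under the simplifying assumption of independent
% executions, each correct except with probability \delta \le 1/3,
% with R outputting the majority value among the \lambda candidates:
\Pr[\text{majority is wrong}]
  \;\le\; \Pr\Bigl[\#\{\text{correct executions}\} \le \tfrac{\lambda}{2}\Bigr]
  \;\le\; e^{-2\lambda\left(\frac{1}{2}-\delta\right)^2}
  \;=\; 2^{-\Omega(\lambda)},
\qquad\text{so } \lambda = O(k) \text{ suffices for failure probability } 2^{-k}.
```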
In conclusion, we obtain the following main result. For any positive integers n, t, k and m with $t < n/2$, there exists an n-party robust secret sharing scheme secure against a rushing adversary corrupting up to t parties, with failure probability $2^{-k}$, secret size m, share size $m + \widetilde{O}(k)$ (where the $\widetilde{O}$ hides poly-logarithmic factors in n, m and k), and running time poly(m, n).
Chen Yuan has been funded by the ERC-ADG-ALGSTRONGCRYPTO project (no. 740972).