1 Introduction
1.1 Random Systems
A random system is an object of general interest in computer science and in particular in cryptography. Informally, a random system is an abstract object which operates in rounds: in the i-th round, an input (or query) is answered with a random output, and each round may (probabilistically) depend on the previous rounds. In previous work [9, 12], a random system is defined by a sequence of conditional probability distributions. This captures exactly the input-output behavior of a probabilistic system, as it gives the probability distribution of each output conditioned on all previous inputs and outputs.
For example, a uniform random function (URF) is a random system corresponding to the following behavior: every new input is answered with an independent uniform random value, and every input that was given before is answered consistently. Similarly, a uniform random permutation (URP) is a random system, different from the URF.
Many statements appearing in the cryptographic literature are about random systems (even though they are usually expressed in a specific language, for example using pseudo-code). For example, the optimal distinguishing advantage of a distinguisher class between two systems only depends on the behavior of the two systems. In particular, it is independent of how a system is implemented (in program code), whether it is a Turing machine, or how efficient it is. For example, the well-known URP-URF switching lemma [3, 10] is a statement about the optimal information-theoretic distinguishing advantage between the two random systems introduced above. Clearly, the switching lemma holds irrespective of the concrete implementations of the systems, e.g., whether they employ eager or lazy sampling.
1.2 Random Systems as Equivalence Classes

$[({\mathsf {X}},{\mathsf {Y}})]$





$$\begin{aligned} \delta ({\mathsf {X}},{\mathsf {Y}}) = \inf _{\mathcal {E}\in [({\mathsf {X}},{\mathsf {Y}})]} {\mathrm {Pr}^{{\mathcal {E}}}({{ {X} } \ne { {Y} }})}. \end{aligned}$$











The gist of such reasoning is to lower the level of abstraction in order to define or interpret a property, or to prove a statement in a more elementary and intuitive manner.
In this paper, we apply the outlined way of thinking to random systems. We explore a lower level of abstraction which we call probabilistic discrete systems. A probabilistic discrete system (PDS) is defined as a (probability) distribution over deterministic discrete systems (DDS). Loosely speaking, this captures the fact that for any implementation of a random system we can fix the randomness (say, the “random tape”) to obtain a deterministic system. We then observe that there exist different PDS that are observationally equivalent, i.e., their input-output behavior is equal, implying that they correspond to the same random system. Thus, we propose to think of a random system as an equivalence class of PDS. For example, a uniform random function can be implemented by a PDS that initially samples the complete function table and by a PDS that employs lazy sampling. These are two different PDS, but they are behaviorally equivalent and thus correspond to the same random system (see also the later Example 5).
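The equivalence of eager and lazy sampling can be checked exactly for a toy case. The sketch below (an illustration, not part of the paper's formalism) enumerates all randomness of both implementations of a URF from one bit to one bit and compares the resulting transcript distributions for a fixed query sequence:

```python
from collections import Counter
from itertools import product
from fractions import Fraction

def eager_transcripts(queries):
    # Eager sampling: draw the whole function table {0,1} -> {0,1} up front.
    dist = Counter()
    for table in product([0, 1], repeat=2):
        transcript = tuple(table[x] for x in queries)
        dist[transcript] += Fraction(1, 4)
    return dist

def lazy_transcripts(queries):
    # Lazy sampling: consume one fresh coin per new query; repeats reuse
    # the partial table built so far.
    dist = Counter()
    fresh = len(set(queries))
    for coins in product([0, 1], repeat=fresh):
        coins = list(coins)
        table, transcript = {}, []
        for x in queries:
            if x not in table:
                table[x] = coins.pop(0)
            transcript.append(table[x])
        dist[tuple(transcript)] += Fraction(1, 2 ** fresh)
    return dist
```

Both functions return the same distribution over transcripts, which is exactly the sense in which the two PDS belong to the same equivalence class.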
This raises a natural question: Is it possible to express properties which classically involve environments equivalently as natural intrinsic properties of the systems themselves, i.e., without the explicit concept of an environment? We answer this question in the positive. The key idea is to exploit the equivalence classes: we prove that the optimal information-theoretic distinguishing advantage between two random systems equals the infimum of the statistical distance over all pairs of equivalent probabilistic discrete systems.











The coupling theorem for random systems is not only of conceptual interest. It also represents a novel technique to prove indistinguishability bounds in an elementary fashion: at the core of such a proof, one only needs to bound the statistical distance of probability distributions over deterministic systems (for example by using the Coupling Method mentioned above). Usually, the fact that the distribution is over systems will be irrelevant. In particular, the interaction with the systems and the complexity of (adaptive) environments is completely avoided.
1.3 Security and Indistinguishability Amplification
Security amplification is a central theme of cryptography. Turning weak objects into strong objects is useful as it allows one to weaken the required assumptions. Indistinguishability amplification is a special kind of security amplification, where the quantity of interest is the closeness (in terms of adaptive indistinguishability) to some idealized system. Most of the well-known constructions achieving indistinguishability amplification do this by combining many moderately close systems into a single system that is very close to its ideal form.
In this paper, we take a more general approach to indistinguishability amplification and present results that make it possible (for example) to combine many moderately close systems into multiple systems that are jointly very close to independent instances of their ideal form. This is useful, since many cryptographic protocols need several independent instantiations of a scheme, for example a (pseudo-)random permutation.
1.4 Motivating Examples for Indistinguishability Amplification






If, say, the second constructed permutation is (forward-)queried with x, the value x is input to the first of its component permutations, and the resulting output is forwarded to the next one; the final output is the response to the query x. Clearly, if any two of the three random permutations are (perfect) uniform random permutations, then the construction behaves exactly as if all three random permutations were perfect uniform random permutations (i.e., it behaves as two independent uniform random permutations). Thus, we call the construction a (2, 3)-combiner for the corresponding pairs of real and ideal systems.
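To make the cascade idea concrete, the following hypothetical Python sketch builds two permutations from three component permutations by cascading; the exact pairing of components is an assumption for illustration, chosen to match the description above (the second constructed permutation forwards the answer of its first component to its second one):

```python
import random

def cascade(p, q):
    """Compose two permutations given as lists: (q after p)(x) = q[p[x]]."""
    return [q[y] for y in p]

def two_from_three(p1, p2, p3):
    """Hypothetical (2, 3)-combiner sketch: each constructed permutation
    cascades two of the three components; the second one sends a query x
    to p2 first and forwards the answer to p3."""
    return cascade(p1, p2), cascade(p2, p3)
```

If any two of p1, p2, p3 are uniform random permutations, each constructed permutation contains at least one uniform component, which is the combiner property exploited in the text.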







How many independent random permutations that are $\epsilon $-close to a uniform random permutation need to be combined to obtain m random permutations that are (jointly) $\epsilon '$-close (for $\epsilon ' \ll \epsilon $) to m independent uniform random permutations?
This question has been studied for the special case $m = 1$ (see for example [12, 18, 19]), and it is known that the cascade of $k$ independent random permutations (each $\epsilon $-close to a uniform random permutation) is close to a uniform random permutation, with a bound that decreases exponentially in $k$. Of course, there is a straightforward way to use such a construction multiple times in order to obtain a basic indistinguishability result for $m > 1$: one simply partitions the n independent random permutations into sets of equal size and cascades the permutations in each set.
We can construct four random permutations from 20 random permutations as follows:

If the component permutations are independent and all $\epsilon $-close to a uniform random permutation, Theorem 1 of [12] implies that the construction above yields four random permutations that are jointly close to four independent uniform random permutations, with a bound that is a power of $\epsilon $ determined by the length of the cascades.
Naturally, one might ask whether it is possible to construct four random permutations so as to get stronger amplification (i.e., a larger exponent) without using more random permutations. This is indeed possible, as the following example illustrates.
Consider the following construction of four random permutations:

The main advantage of this construction is that it makes use of only 15 (instead of 20) random permutations. Our results imply that if the component permutations are independent and $\epsilon $-close to a uniform random permutation, then the constructed four random permutations are jointly close to four independent uniform random permutations, with a larger exponent than in Example 1.
Instead of random permutations one can just as well combine random functions: the same constructions and bounds as in Example 1 and Example 2 apply if we replace the cascade with the elementwise XOR. However, in this setting, we show that the additional structure of random functions can be exploited to achieve even stronger amplification than in the examples above.
Let the component systems be independent random functions over a finite field, and let A be a $4 \times 10$ MDS matrix over this field. Consider the following construction of four random functions, making use of only 10 random functions (as opposed to the above constructions with 20 and 15, respectively): on input x to the i-th constructed function (for $i \in [4]$), all component random functions are queried with x, and the answers are combined to the result, namely the dot product of the i-th row of A with the vector of answers.
Our results imply that if the component functions are independent and $\epsilon $-close to a uniform random function, the four constructed random functions are jointly close to four independent uniform random functions, with an even stronger amplification than in the constructions above.
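The matrix-based combination can be sketched as follows. The field GF(101) and the Vandermonde matrix are illustrative assumptions (a Vandermonde matrix with distinct nodes is MDS, since every square submatrix formed from its first rows is again a Vandermonde matrix); the paper's concrete field and matrix are not specified here:

```python
P = 101  # a small prime; stand-in for the finite field in the text

def vandermonde(k, n, p=P):
    """k x n Vandermonde matrix over GF(p) with distinct column nodes
    1, ..., n; used here as a hypothetical MDS matrix."""
    return [[pow(j + 1, i, p) for j in range(n)] for i in range(k)]

def combined_query(A, funcs, i, x, p=P):
    """i-th constructed function: query every component function with x
    and return the dot product of row i of A with the answer vector."""
    answers = [f(x) for f in funcs]
    return sum(a * y for a, y in zip(A[i], answers)) % p
```

With a 4 x 10 matrix this combines 10 component functions into 4 constructed ones, matching the counts in the example.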
1.5 Contributions and Outline
We briefly state our main contributions in a simplified manner. In Sect. 3, we define deterministic discrete systems and probabilistic discrete systems together with an equivalence relation capturing the input-output behavior. Moreover, we argue that we can characterize a random system by an equivalence class of PDS.


























Moreover, we demonstrate how these indistinguishability results can be instantiated by combiners transforming n independent random functions (random permutations) into m random functions (random permutations), obtaining indistinguishability amplification.
1.6 Related Work
There exists a vast amount of literature on information-theoretic indistinguishability of various constructions, in particular for the analysis of symmetric key cryptography. Prominent examples are constructions transforming uniform random functions into uniform random permutations or vice-versa: the Luby-Rackoff construction [6] (or Feistel construction), similar constructions by Naor and Reingold [14], the truncation of a random permutation [5], and the XOR of random permutations [2, 7].
Random Systems. The characterization of random systems by their input-output behavior in the form of a sequence of conditional distributions was first described in [9].
Indistinguishability Proof Techniques. There exist various techniques for proving information-theoretic indistinguishability bounds. A prominent approach is to define a failure condition such that two systems are equivalent before said condition is satisfied (see also [9]). Maurer, Pietrzak, and Renner proved in [12] that there always exists such a failure condition that is optimal, showing that this technique allows one to prove perfectly tight indistinguishability bounds. At first glance, the lemma of [12] seems to be similar to our coupling theorem. While both statements are tight characterizations of the distinguishing advantage, the crucial advantage of our result is that it allows us to remove the complexity of the adaptive interaction when reasoning about indistinguishability of random systems. This enables reasoning at the level of probability distributions: one can think of a failure event occurring or not before the interaction even begins. The interactive hard-core lemma shown by Tessaro [17] in the computational setting allows this kind of reasoning as well, though it only holds for so-called “cc-stateless systems”.
More involved proof techniques include directly bounding the statistical distance of the transcript distributions, such as Patarin’s H-coefficient method [15], and most recently, the Chi-squared method [4].
Indistinguishability Amplification. Examples of previous indistinguishability amplification results are the various computational XOR lemmas, Vaudenay’s product theorem for random permutations [18, 19], as well as the more abstract product theorem for (stateful) random systems [12] (and so-called neutralizing constructions). In [13], some of the results of [12] have been proved in the computational setting.
A different type of indistinguishability amplification is shown in [11, 12], where the amplification is with respect to the distinguisher class, lifting non-adaptive indistinguishability to adaptive indistinguishability.
2 Preliminaries
Notation. For $n \in \mathbb {N}$, we let [n] denote the set $\{1, \ldots , n\}$, with the convention $[0] = \emptyset $. The set of sequences (or strings) of length n over an alphabet $\Sigma $ is denoted by $\Sigma ^n$. An element of $\Sigma ^n$ is denoted by $x^n = (x_1, \ldots , x_n)$ for $x_i \in \Sigma $. The empty sequence is denoted by $\varepsilon $. The set of finite sequences over the alphabet $\Sigma $ is denoted by $\Sigma ^*$ and the set of non-empty finite sequences is denoted by $\Sigma ^+$. A set $S \subseteq \Sigma ^*$ is prefix-closed if $x^n \in S$ implies $x^i \in S$ for any $i \in [n]$, where $x^i$ denotes the length-i prefix of $x^n$. For two sequences $x^m$ and $y^n$, the concatenation $x^m \cdot y^n$ is the sequence $(x_1, \ldots , x_m, y_1, \ldots , y_n)$.
A (total) function from X to Y is a binary relation $f \subseteq X \times Y$ such that for every $x \in X$ there exists a unique $y \in Y$ with $(x, y) \in f$. A partial function from X to Y is a total function from $X'$ to Y for a subset $X' \subseteq X$. The domain of a function f is denoted by $\mathrm {dom}(f)$. The support of a function f with non-negative real values, for example a distribution, is defined by $\mathrm {supp}(f) := \{ x \in \mathrm {dom}(f) \mid f(x) > 0 \}$.
A multiset over a set U is a function $M :U \rightarrow \mathbb {N}$. We represent multisets in set notation, e.g., $\{a, a, b\}$ denotes the multiset M with domain $\{a, b\}$, $M(a) = 2$, and $M(b) = 1$. The cardinality |M| of a multiset is $|M| := \sum _{a} M(a)$. The union $\cup $, intersection $\cap $, sum $+$, and difference $-$ of two multisets are defined by the pointwise maximum, minimum, sum, and difference, respectively. Finally, the symmetric difference of two multisets is defined by the pointwise absolute difference (equivalently, the union minus the intersection).
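The multiset operations above can be sketched in Python, representing a multiset as a dict from elements to positive multiplicities (the function names are ours, not the paper's notation):

```python
def m_union(M, N):
    # pointwise maximum
    return {a: max(M.get(a, 0), N.get(a, 0)) for a in set(M) | set(N)}

def m_inter(M, N):
    # pointwise minimum, dropping zero entries
    return {a: v for a in set(M) & set(N) if (v := min(M[a], N[a])) > 0}

def m_sum(M, N):
    # pointwise sum
    return {a: M.get(a, 0) + N.get(a, 0) for a in set(M) | set(N)}

def m_diff(M, N):
    # pointwise difference; entries that would go negative are dropped
    return {a: v for a in M if (v := M[a] - N.get(a, 0)) > 0}

def m_symdiff(M, N):
    # pointwise absolute difference (union minus intersection)
    d = {a: abs(M.get(a, 0) - N.get(a, 0)) for a in set(M) | set(N)}
    return {a: v for a, v in d.items() if v > 0}

def cardinality(M):
    return sum(M.values())
```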
Throughout this paper, we use the following notion of a (finite) distribution.









In the following, we do not demand that a distribution has weight 1, i.e., we do not assume probability distributions (unless stated explicitly). This is important, as the proof of one of our main results (Theorem 1) relies on distributions of arbitrary weight.




Let ${\mathsf {X}}_1, \ldots , {\mathsf {X}}_n$ be distributions over $\mathcal {A}_1$, ..., $\mathcal {A}_n$, respectively, such that all ${\mathsf {X}}_i$ have the same weight p. Then, there exists a (joint) distribution over $\mathcal {A}_1 \times \cdots \times \mathcal {A}_n$ with weight p and marginals ${\mathsf {X}}_1, \ldots , {\mathsf {X}}_n$.
A possible choice is the normalized product $(a_1, \ldots , a_n) \mapsto p^{-(n-1)} \cdot {\mathsf {X}}_1(a_1) \cdots {\mathsf {X}}_n(a_n)$ (for $p > 0$).



Note that for distributions and
of different weight, i.e.,
, the statistical distance is not symmetric (
). Moreover, for distributions of the same weight, i.e.,
, we have
.
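A small Python sketch of this distance notion for distributions of possibly different weight, using the formula $\delta ({\mathsf {X}},{\mathsf {Y}}) = \sum _a \max ({\mathsf {X}}(a) - {\mathsf {Y}}(a), 0) = p_{\mathsf {X}} - \sum _a \min ({\mathsf {X}}(a),{\mathsf {Y}}(a))$ (assumed here to be the definition in use, consistent with the identities appearing later in the proof of Theorem 1):

```python
def weight(d):
    """Total weight of a (not necessarily probability) distribution."""
    return sum(d.values())

def stat_dist(X, Y):
    """delta(X, Y) = sum_a max(X(a) - Y(a), 0); symmetric exactly when
    X and Y have the same weight."""
    keys = set(X) | set(Y)
    return sum(max(X.get(a, 0) - Y.get(a, 0), 0) for a in keys)
```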
The following lemma, proved in Appendix A, is an immediate consequence of the definition of the statistical distance.
Let ${\langle {\mathcal {A}_i}\rangle }_{i \in [n]}$ be pairwise disjoint sets and, for each $i \in [n]$, let ${\mathsf {X}}_i$ and ${\mathsf {Y}}_i$ be distributions over $\mathcal {A}_i$. Then, for ${\mathsf {X}} := \sum _{i \in [n]} {\mathsf {X}}_i$ and ${\mathsf {Y}} := \sum _{i \in [n]} {\mathsf {Y}}_i$, we have

$$\begin{aligned} \delta ({\mathsf {X}},{\mathsf {Y}}) = \sum _{i \in [n]} \delta ({\mathsf {X}}_i,{\mathsf {Y}}_i). \end{aligned}$$






The following lemma states that the statistical distance of two distributions cannot increase if a function f is applied (to both distributions). This is well known for the case in which the two distributions are probability distributions. We prove the claim in Appendix A.





(Coupling Lemma, Lemma 3.6 of [1]). Let ${\mathsf {X}}$ and ${\mathsf {Y}}$ be probability distributions over the same set.
- 1. For any joint distribution of ${\mathsf {X}}$ and ${\mathsf {Y}}$ we have $\mathrm {Pr}({X} \ne {Y}) \ge \delta ({\mathsf {X}},{\mathsf {Y}})$.
- 2. There exists a joint distribution of ${\mathsf {X}}$ and ${\mathsf {Y}}$ such that $\mathrm {Pr}({X} \ne {Y}) = \delta ({\mathsf {X}},{\mathsf {Y}})$.
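The existence part of the Coupling Lemma can be illustrated by the standard greedy coupling: put the common mass min(X(a), Y(a)) on the diagonal and pair the leftover mass arbitrarily, e.g., proportionally. The sketch below is illustrative, not the construction used in the paper:

```python
def optimal_coupling(X, Y):
    """Greedy coupling of two probability distributions over the same set:
    diagonal mass min(X(a), Y(a)), leftovers paired proportionally.
    The resulting joint distribution satisfies Pr[X != Y] = delta(X, Y)."""
    keys = sorted(set(X) | set(Y))
    joint = {}
    for a in keys:
        m = min(X.get(a, 0), Y.get(a, 0))
        if m:
            joint[(a, a)] = m
    # leftover (uncoupled) mass on each side; both sides sum to delta(X, Y)
    rx = {a: X.get(a, 0) - min(X.get(a, 0), Y.get(a, 0)) for a in keys}
    ry = {a: Y.get(a, 0) - min(X.get(a, 0), Y.get(a, 0)) for a in keys}
    d = sum(rx.values())
    if d:
        for a in keys:
            for b in keys:
                m = rx[a] * ry[b] / d  # product coupling of the leftovers
                if m:
                    joint[(a, b)] = joint.get((a, b), 0) + m
    return joint
```

The leftover supports are disjoint, so all off-diagonal mass comes from the product step, and its total is exactly the statistical distance.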
3 Discrete Random Systems
3.1 Deterministic Discrete Systems
A deterministic discrete $(\mathcal {X},\mathcal {Y})$-system is a system with input alphabet $\mathcal {X}$ and output alphabet $\mathcal {Y}$. The system's first output (or response) $y_1$ is a function of the first input (or query) $x_1$. The second output $y_2$ is a priori a function of the first two inputs $x_1, x_2$ and the first output $y_1$. However, since $y_1$ is already a function of $x_1$, it suffices to define $y_2$ as a function of the first two inputs $x_1, x_2$. In general, the i-th output $y_i$ is a function of the first i inputs $x_1, \ldots , x_i$.










A DDS is an abstraction capturing exactly the input-output behavior of a deterministic system. Thus, it is independent of any implementation details that describe how the outputs are produced. One can therefore think of a DDS as an equivalence class of more explicit implementations. For example, different programs (or Turing machines) can correspond to the same DDS. Moreover, the fact that a system may keep state is captured canonically by letting each output depend on the previous sequence of inputs, as opposed to introducing an explicit state space.

The four single-query $(\{0,1\},\{0,1\})$-DDS correspond to the four functions from one input bit to one output bit: the two constant systems, the identity, and the complement.
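A DDS can be sketched directly from this definition as a lookup table from non-empty input prefixes to outputs (assuming, for illustration, one-bit input and output alphabets):

```python
class DDS:
    """Deterministic discrete system: the i-th output is a function of
    the first i inputs, encoded as a map from input prefixes to outputs."""
    def __init__(self, table):
        self.table = table     # tuple of inputs so far -> output
        self.history = ()
    def query(self, x):
        self.history += (x,)
        return self.table[self.history]
```

With this encoding, a single-query one-bit DDS is just a map on the two prefixes (0,) and (1,), and a stateful behavior (such as answering with the XOR of the inputs so far) needs no explicit state variable.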
An environment is an object (similar to a DDS) that interacts with a system by producing the inputs and receiving the corresponding outputs. Environments are adaptive and stateful, i.e., each produced input is a function of all the previous outputs. Moreover, we allow an environment to stop at any time.
















3.2 Probabilistic Discrete Systems
We define probabilistic systems (environments) as distributions over deterministic systems (environments). Note that even though we use the term probabilistic, we do not assume that the corresponding distributions are probability distributions (i.e., they do not need to sum up to 1, unless explicitly stated).
A probabilistic discrete $(\mathcal {X},\mathcal {Y})$-system (PDS) is a distribution over deterministic discrete $(\mathcal {X},\mathcal {Y})$-systems such that all DDS in the support of the distribution have the same domain. We always assume that this domain is finite, i.e., the input alphabet is finite and the number of answered queries is bounded. A probabilistic discrete environment (PDE) for such a system is a distribution over deterministic environments.
Observe that a PDS contains all information for a system that can be executed arbitrarily many times, i.e., a system that can be rewound and then queried again on the same randomness. We consider the standard setting in which a system can only be executed once (see Definition 7). In this setting, there exist different PDS that behave identically from the perspective of any environment, i.e., they exhibit the same behavior. The following example demonstrates this.














$\alpha \in [0,1/2]$







$$\begin{aligned} \left[ {\mathsf {{V}}}\right] = \left\{ {\mathsf {{V}}}_\alpha \; \vert \; \alpha \in \left[ 0, 1/2 \right] \right\} . \end{aligned}$$
More generally, we define two PDS to be equivalent if their transcript distributions are the same in all environments. It is easy to see that considering only deterministic environments results in the same equivalence notion that is obtained when considering probabilistic environments.






$[{\mathsf {{S}}}] := \{ {\mathsf {{S}}}' \;\vert \; {\mathsf {{S}}} \equiv {\mathsf {{S}}}' \}$
The following lemma, proved in Appendix A, states that for two PDS to be equivalent it suffices that their transcript distributions are equal for all non-adaptive deterministic environments.





Stated differently, an equivalence class of PDS can be characterized by the transcript distributions for all non-adaptive deterministic environments. Since a non-adaptive deterministic environment is uniquely described by a sequence of inputs, and the corresponding transcript distribution is essentially the distribution of observed outputs under this input sequence, it follows immediately that an equivalence class of PDS describes exactly a random system as introduced in [9] (where a characterization in the form of a sequence of conditional distributions was used).







4 Coupling Theorem for Discrete Systems
4.1 Distance of Equivalence Classes and the Coupling Theorem
The optimal distinguishing advantage is widely used in the (cryptographic) literature to quantify the distance between random systems. It can be defined as the supremum statistical distance of the transcripts over all compatible environments. In the information-theoretic setting, this is equivalent to the classical definition as the supremum, over all (probabilistic) distinguishers, of the difference of the probabilities that the distinguisher outputs 1 when interacting with either system.






Understanding a random system as an equivalence class of probabilistic discrete systems gives rise to the following distance notion:




Note that since there exist PDS that are equivalent even though their statistical distance is positive (for example two of the systems from Example 5), taking the infimum seems to be necessary to quantify the distance of random systems in a meaningful way. We can now state the first theorem.







The coupling theorem for random systems is an immediate consequence of Theorem 1 and the classical Coupling Lemma (Lemma 4).





4.2 Proof of Theorem 1



























$i \in [n]$












$$\begin{aligned} \delta ({\mathsf {X}},{\mathsf {Y}}) = \max _{i \in [n]} \delta ({\mathsf {X}}_i,{\mathsf {Y}}_i). \end{aligned}$$

$$\begin{aligned} \max _{i \in [n]} \delta ({\mathsf {X}}_i,{\mathsf {Y}}_i) = p_{\mathsf {X}} - \min _{i \in [n]} \sum _{a \in \mathcal {A}_i}\min ({\mathsf {X}}_i(a),{\mathsf {Y}}_i(a)). \end{aligned}$$
$\tau := \min _{i \in [n]} \sum _{a \in \mathcal {A}_i}\min ({\mathsf {X}}_i(a),{\mathsf {Y}}_i(a))$
$i \in [n]$




















$\delta ({\mathsf {X}},{\mathsf {Y}}) \le p_{\mathsf {X}} - \tau = \max _{i \in [n]} \delta ({\mathsf {X}}_i,{\mathsf {Y}}_i)$
Finally, we have $\delta ({\mathsf {X}},{\mathsf {Y}}) \ge \delta ({\mathsf {X}}_i,{\mathsf {Y}}_i)$ for all $i \in [n]$ due to Lemma 3, and thus $\delta ({\mathsf {X}},{\mathsf {Y}}) = \max _{i \in [n]} \delta ({\mathsf {X}}_i,{\mathsf {Y}}_i)$, concluding the proof.
The General Case. Before proving the general case of Theorem 1, we introduce the following notion of a successor system.






















We stress that if ${\mathsf {{S}}}$ is a probability distribution (i.e., it sums to 1), the successor system ${\mathsf {{S}}}^{{\uparrow {x}}{\downarrow {y}}}$ is in general not a probability distribution anymore: its weight is the probability that ${\mathsf {{S}}}$ responds with y to the query x.




$[{\mathsf {{S}}}]$
$[{\mathsf {{T}}}]$


${\mathsf {{S}}}' \in [{\mathsf {{S}}}]$
${\mathsf {{T}}}' \in [{\mathsf {{T}}}]$




${\mathsf {{S}}}' \in [{\mathsf {{S}}}]$
${\mathsf {{T}}}' \in [{\mathsf {{T}}}]$











${\mathsf {{S}}}_{xy}\in [{\mathsf {{S}}}^{{\uparrow {x}}{\downarrow {y}}}]$
${\mathsf {{T}}}_{xy} \in [{\mathsf {{T}}}^{{\uparrow {x}}{\downarrow {y}}}]$



















${\mathsf {{S}}}' \in [{\mathsf {{S}}}]$
${\mathsf {{T}}}' \in [{\mathsf {{T}}}]$

5 Indistinguishability Amplification from Combiners
The goal of indistinguishability amplification is to construct an object which is $\epsilon '$-close to its ideal from objects which are only $\epsilon $-close to their ideal, for $\epsilon '$ much smaller than $\epsilon $. The most basic type of this construction is to XOR two independent bits. It is easy to verify that if the two bits are $\epsilon _1$- and $\epsilon _2$-close (in statistical distance) to the uniform bit, respectively, then their XOR will be $2\epsilon _1\epsilon _2$-close to the uniform bit. The crucial property of the XOR construction is the following: if at least one of the two bits is perfectly uniform, then their XOR is perfectly uniform as well. This property is satisfied not only for single bits, but also for bitstrings (with bitwise XOR) and even for any quasigroup. Interestingly, it was shown in [12] that an indistinguishability amplification result analogous to the XOR of two bits holds for constructions based on (stateful) random systems, and it is sufficient to assume only such a combiner property of a construction.
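The XOR-of-bits computation can be verified numerically. In the sketch below, the statistical distance of a bit to the uniform bit is its bias |p − 1/2|, and the bias of the XOR of two independent bits is twice the product of the individual biases:

```python
def bit_dist(p):
    """Statistical distance of a {0,1}-bit with Pr[1] = p to a uniform bit."""
    return abs(p - 0.5)

def xor_prob_one(p1, p2):
    """Pr[B1 xor B2 = 1] for independent bits with Pr[Bi = 1] = pi."""
    return p1 * (1 - p2) + (1 - p1) * p2
```

In particular, setting either probability to exactly 1/2 makes the XOR exactly uniform, which is the combiner property discussed above.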
In this section, we prove that indistinguishability amplification is obtained from more general combiners. All of the above examples are special cases of such a combiner. In particular, Theorem 1 of [12] is a simple corollary of our Theorem 3.
5.1 Constructions and Combiners
Usually (see for example [12]), an n-ary construction is defined as a system communicating with n component systems and providing an outer communication interface. This means that the construction instantiated with any (compatible) component systems is again a system. In this paper, we use a more abstract notion of a construction, ignoring the details of the interfaces and messages. The amplification statements we make are independent of these details, and thereby simpler and stronger. Nevertheless, it may be easier for the reader to simply think of a construction as a random system.


$i \in [n+1]$










In many settings (especially in cryptography), we have a pair of random systems $({\mathsf {{F}}}, {\mathsf {{I}}})$, where ${\mathsf {{F}}}$ is the real system and ${\mathsf {{I}}}$ is the ideal system. A combiner is a construction that combines component systems such that only some of the component systems need to be ideal for the whole resulting system to behave as if all component systems were ideal. The following definition makes this rigorous.











A special case of a combiner is a threshold construction, where the whole system behaves as if all component systems were ideal whenever at least k (arbitrary) component systems are ideal. We call such a construction a (k, n)-combiner. In particular, an n-ary combiner is a (k, n)-combiner whenever its tolerated sets of ideal components include all sets of size at least k.










5.2 Proving Indistinguishability Amplification Results




















The following technical lemma describes a general proof technique and can be used as a tool to prove indistinguishability amplification results for any combiner. The key idea is to consider suitable distributions over the bitstrings that indicate which component systems are ideal, inducing corresponding distributions over tuples of component systems (recall Definition 14 for the notation). We then use Theorem 1 to exhibit a coupling in which the two tuples of component systems are equal with high probability, and argue that the two constructions are equal (in the coupling) unless the tuples differ in one of the relevant indices. The proof of Theorem 3 shows how to instantiate this lemma, choosing suitable distributions.





















$({\mathsf {{F}}}_i',{\mathsf {{I}}}_i') \in [{\mathsf {{F}}}_i] \times [{\mathsf {{I}}}_i]$
$i \in [n]$


We exhibit a random experiment with suitably distributed random variables whose marginals match the given systems, and define the two coupled constructions from them.













Observe that Lemma 7 by itself does not imply indistinguishability amplification for any combiner. In particular, one needs to prove the existence of suitable distributions such that the relevant distances are small for many bitstrings e (ideally zero, where $\bar{e}$ denotes the bitwise complement of e). We show the following indistinguishability amplification theorem for all (k, n)-combiners.













For $k$ and $n$ we represent distributions over $\{0,1\}^{n}$ using multisets over $\{0,1\}^{n}$, written as sets of (element, multiplicity) pairs, with the natural understanding that such a multiset corresponds to the probability distribution in which the probability of each bitstring is its multiplicity divided by the total multiplicity.
$$\begin{aligned}&A_{k,n}':= \bigcup _{j\in \{k,k+2,\ldots ,n\}} \Biggl \{ \left( b,\left( {\begin{array}{c}j-1\\ k-1\end{array}}\right) \right) \; \Bigg | \; b \in \{0,1\}^{n}, \sum _{i \in [n]}b_i=j \Biggr \} \quad \text {and}\\&A_{k,n}:= \{(0^{n},1) \} \cup \bigcup _{j\in \{k+1,k+3,\ldots ,n\}} \Biggl \{ \left( b,\left( {\begin{array}{c}j-1\\ k-1\end{array}}\right) \right) \; \Bigg | \; b \in \{0,1\}^{n}, \sum _{i \in [n]}b_i=j \Biggr \}. \end{aligned}$$











$$\begin{aligned} \mathrm {blind}_1(A_{k,n}') =&\bigcup _{j\in \{k,k+2,\ldots ,n-1\}} \Biggl \{ \left( b,\left( {\begin{array}{c}j-1\\ k-1\end{array}}\right) \right) \; \Bigg | \; b \in \{0,1\}^{n-1}, \sum _{i \in [n]}b_i=j \Biggr \} \\ \cup&\bigcup _{j\in \{k-1,k+1,\ldots ,n-1\}} \Biggl \{ \left( b,\left( {\begin{array}{c}j\\ k-1\end{array}}\right) \right) \; \Bigg | \; b \in \{0,1\}^{n-1}, \sum _{i \in [n]}b_i=j \Biggr \}. \end{aligned}$$
$$\begin{aligned} \mathrm {blind}_1(A_{k,n}) =\;&\{ (0^{n-1},1)\} \\ \cup&\bigcup _{j\in \{k+1,k+3,\ldots ,n-1\}} \Biggl \{ \left( b,\left( {\begin{array}{c}j-1\\ k-1\end{array}}\right) \right) \; \Bigg | \; b \in \{0,1\}^{n-1}, \sum _{i \in [n]}b_i=j \Biggr \} \\ \cup&\bigcup _{j\in \{k,k+2,\ldots ,n-1\}} \Biggl \{ \left( b,\left( {\begin{array}{c}j\\ k-1\end{array}}\right) \right) \; \Bigg | \; b \in \{0,1\}^{n-1}, \sum _{i \in [n]}b_i=j \Biggr \}. \end{aligned}$$








$$\begin{aligned}&\mathrm {blind}_1(A_{k,n}')-\mathrm {blind}_1(A_{k,n})\cap \mathrm {blind}_1(A_{k,n}') \\&\quad = \bigcup _{j\in \{k-1,k+1,\ldots ,n-1\}} \Biggl \{ \left( b,\left( {\begin{array}{c}j-1\\ k-2\end{array}}\right) \right) \; \Bigg | \; b \in \{0,1\}^{n-1}, \sum _{i \in [n]}b_i=j \Biggr \} \\&\quad = A_{k-1,n-1}'. \end{aligned}$$
$$\begin{aligned}&\mathrm {blind}_1(A_{k,n}) - \mathrm {blind}_1(A_{k,n}) \cap \mathrm {blind}_1(A_{k,n}') \\&\quad = \{ (0^{n-1},1)\} \cup \bigcup _{j\in \{k,k+2,\ldots ,n-1\}} \Biggl \{ \left( b,\left( {\begin{array}{c}j-1\\ k-2\end{array}}\right) \right) \; \Bigg | \; b \in \{0,1\}^{n-1}, \sum _{i \in [n]}b_i=j \Biggr \} \\&\quad =A_{k-1,n-1}. \end{aligned}$$










The following corollary to Theorem 3 provides simpler (but worse) bounds.



- (i) where the random variables are jointly independent Bernoulli random variables with the corresponding parameters.
- (ii) if
for all
we have
- (iii) if
for all
we have
Lemma 10 in Appendix A states the identity below. This immediately implies the bound (i) via Theorem 3.





$$\begin{aligned} \mathbb {E}\left[ \left( {\begin{array}{c}X\\ m\end{array}}\right) \right] = \mathbb {E}\left[ \sum _{ \begin{array}{c} I \subseteq [n]\\ |I|=m \end{array} } \left[ \bigwedge _{i \in I}(X_{i}=1)\right] \right] = \sum _{ \begin{array}{c} I \subseteq [n]\\ |I|=m \end{array} }{\mathrm {Pr}\Biggl ( {\bigwedge _{i \in I}(X_{i}=1)} \Biggr )} = \left( {\begin{array}{c}n\\ m\end{array}}\right) \epsilon ^m. \end{aligned}$$
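The identity of Lemma 10, namely that the expected number of size-m subsets of successful Bernoulli trials equals the number of subsets times the per-subset probability, can be verified by brute force for small parameters (an illustrative check, not the proof given in Appendix A):

```python
from itertools import product
from math import comb
from fractions import Fraction

def expected_binom(n, m, eps):
    """E[ C(X, m) ] for X = sum of n independent Bernoulli(eps) bits,
    computed by exhaustive enumeration over all 2^n outcomes."""
    total = Fraction(0)
    for bits in product([0, 1], repeat=n):
        x = sum(bits)
        pr = Fraction(1)
        for b in bits:
            pr *= eps if b else 1 - eps
        total += pr * comb(x, m)  # comb(x, m) = 0 whenever x < m
    return total
```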



On the Number of Queries. Many indistinguishability bounds are presented with a dependency on the number of queries q the adversary is allowed to ask. For reasons of simplicity, we understand the number of queries as a property of a discrete system, i.e., the number of queries that a system answers. This is only a conceptual difference, and all of our results can still be used with the former perspective. For example, this means that if the indistinguishability of the component systems holds for distinguishers asking up to q queries, our results can be applied to the corresponding systems that answer only q queries. Usually, if the component systems answer only q queries, then the overall constructed system will answer only up to q′ queries, for some q′ depending on q. As a consequence, the resulting indistinguishability bound holds for any distinguisher asking up to q′ queries.
5.3 A Simple Combiner for Random Functions


























The above construction can be generalized to a combiner which combines n independent random functions into k random functions, as depicted in Fig. 2. By the argument above, this is a threshold combiner for pairs in which the real systems are arbitrary random functions and the ideal system is a uniform random function (assuming A is an MDS matrix).




The construction transforms the n random functions to k random functions, where k is the number of rows of the matrix A. For an input x to the i-th constructed function, the output is the dot product of the i-th row of A with the vector of answers of the n component functions on input x.
5.4 Combining Systems Forming a Quasi-Group




























6 Conclusions and Open Problems
We presented a simple systems theory of random systems. The key insight was to interpret a random system as a probability distribution over deterministic systems, and to consider equivalence classes of probabilistic systems induced by the behavioral equivalence relation. We demonstrated how this perspective on random systems provides an elementary characterization of the classical distinguishing advantage and is also a useful tool to prove indistinguishability bounds.



- (i)
Any combiner is also a (k, n)-combiner for sufficiently large k. In this sense, the bound of Theorem 3 applies also to any combiner. A natural question is whether significantly better indistinguishability amplification is possible for general (non-threshold) combiners. In particular, can the presented technique (Lemma 7) be used to prove such a bound? It seems that a new idea is necessary, considering that the current proof strongly relies on the symmetry in the threshold case.
- (ii)
It is easy to see that the proved indistinguishability bound for (k, n)-combiners is perfectly tight in special cases. Is it also tight for general parameters? For some special cases it is not too difficult to show that the presented bound is very close to tight.
- (iii)
We have shown how MDS matrices allow one to combine n independent random functions over a field into m random functions. However, the same technique does not immediately apply to random permutations. The bounds shown in Lemma 8 are the first non-trivial ones in the more general setting of combining systems forming a quasigroup. It may be possible to improve substantially over said bounds, possibly also by making stronger assumptions (e.g., explicitly assuming permutations). In particular, one might hope to improve the exponent.
- (iv)
Our treatment is in the information-theoretic setting. A natural question is whether our results can be extended to the computational setting. Under certain assumptions on the component systems, the special case of a (1, n)-combiner was shown to provide computational indistinguishability amplification in [13].
- (v)
Can the coupling theorem be used to prove amplification results that strengthen the distinguisher class? For example, can we get more general lifting of non-adaptive indistinguishability to adaptive indistinguishability than what is shown in [11, 12]?