1 Introduction
The complexity class TFNP of total search problems [41], i.e., search problems with a syntactically guaranteed existence of a solution for all instances, holds a perplexing place in the hierarchy of computational complexity classes. The standard method for arguing computational hardness in TFNP is via clustering these problems into subclasses characterised by the existential argument guaranteeing their totality [42]. This approach was particularly successful in illuminating the connections between search problems in seemingly distant domains such as combinatorics, computational topology, and algorithmic game theory (see, for example, [14, 17, 18, 23, 37] and the references therein). However, all results of this type ultimately leave open the possibility that polynomial time algorithms exist for all of TFNP.
An orthogonal line of work, which can be traced to Papadimitriou [42], shows the existence of hard problems in subclasses of TFNP under cryptographic assumptions. Such conditional lower bounds for structured subclasses of TFNP were recently given under increasingly more plausible cryptographic assumptions [6, 7, 11, 12, 16, 20, 28, 33, 38]. The end of the line in this sequence of results would correspond to a “dream theorem” establishing average-case hardness in one of the lower classes in the TFNP hierarchy (e.g. CLS [15]) under some weak general cryptographic assumption (e.g. the existence of one-way functions).
An informative parallel for the limits of this methodology can be drawn by considering average-case hardness of decision problems (i.e., languages) in NP ∩ coNP. The existence of a hard-on-average decision problem in NP ∩ coNP follows from the existence of hard-core predicates for any one-way permutation [24]. However, the existence of injective one-way functions is insufficient for black-box constructions of hard-on-average distributions for languages in NP ∩ coNP, even assuming indistinguishability obfuscation [5] (in fact, [5] ruled out even black-box constructions of worst-case hardness in NP ∩ coNP using these cryptographic primitives).
For total search problems, the existence of hard-on-average distributions is straightforward either from one-way permutations or from collision-resistant hash functions. Moreover, there exist constructions of hard-on-average TFNP distributions either assuming indistinguishability obfuscation and one-way functions [7, 20] or under derandomization-style assumptions and one-way functions [27]. On the other hand, no analogue of the impossibility result for basing average-case hardness in NP ∩ coNP on (injective) one-way functions [5] is currently known for TFNP. Rosen, Segev, and Shahaf [47] showed that most of the known structured subclasses of TFNP do not imply (in a black-box way) any form of cryptographic hardness; thus, it is plausible that hard-on-average distributions in TFNP can be based solely on the existence of one-way functions.
Rosen et al. [47] also provided some insight into the structure of hard-on-average distributions in TFNP. They showed that any hard-on-average distribution of a TFNP problem constructed from any primitive which exists relative to a random injective trapdoor function oracle (e.g. one-way functions, injective trapdoor functions, or collision-resistant hash functions) must result in instances with a nearly exponential number of solutions. Even though the [47] result restricts the structure of hard-on-average distributions in TFNP constructed from these cryptographic primitives, it certainly does not rule out their existence. Indeed, a collision-resistant hash function constitutes a hard-on-average TFNP distribution, albeit one with an exponential number of solutions.



1.1 Our Results
We recall the details of the construction of a hard-on-average distribution in TFNP from one-way permutations to highlight the restrictions on the type of reductions considered in our results.
Consider the total search problem PIGEON, in which you are given a length-preserving n-bit function represented by a Boolean circuit C: {0,1}^n → {0,1}^n and are asked to find either a preimage of the all-zero string (i.e., an x such that C(x) = 0^n) or a non-trivial collision (i.e., a pair x ≠ x' such that C(x) = C(x')). PIGEON is complete for a subclass of TFNP known as PPP, and Papadimitriou [42] gave the following construction of a hard PPP problem from one-way permutations. Given a (one-way) permutation f: {0,1}^n → {0,1}^n and a challenge y ∈ {0,1}^n for inversion under f, the reduction algorithm defines an instance of PIGEON by the oracle-aided circuit C_y computing the function C_y(x) = f(x) ⊕ y. It is not hard to see that the instance C_y of PIGEON has a unique solution corresponding to the preimage of y under f and, therefore, any algorithm solving it breaks the one-wayness of f.
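To make the construction concrete, the following minimal sketch instantiates the PIGEON instance C_y(x) = f(x) ⊕ y with a toy (and certainly not one-way) permutation; the names `toy_permutation`, `pigeon_instance`, and `is_solution` are our own illustrative placeholders, not notation from the paper.

```python
# Illustrative sketch: the PIGEON instance C_y(x) = f(x) XOR y built from a
# toy permutation f (NOT one-way; for illustration only).
n = 8  # bit length of the toy example

def toy_permutation(x: int) -> int:
    # A fixed permutation of {0,1}^n: 85 is odd, so x -> 85*x + 7 is a bijection mod 2^n.
    return (x * 85 + 7) % (1 << n)

def pigeon_instance(y: int):
    """Return the circuit C_y(x) = f(x) XOR y as a Python callable."""
    return lambda x: toy_permutation(x) ^ y

def is_solution(instance, x: int, x_prime=None) -> bool:
    # A PIGEON solution is either a preimage of 0^n or a non-trivial collision.
    if x_prime is None:
        return instance(x) == 0
    return x != x_prime and instance(x) == instance(x_prime)

# Since f is a permutation, C_y is a permutation as well and has no collisions;
# the unique solution is x = f^{-1}(y), i.e., solving the instance inverts f.
y = toy_permutation(42)
instance = pigeon_instance(y)
solutions = [x for x in range(1 << n) if is_solution(instance, x)]
assert solutions == [42]
```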

The construction has the following additional properties:
- The construction is fully black-box, i.e., the PIGEON instance can be implemented via an oracle-aided circuit treating the one-way permutation as a black box, and the reduction inverts when given oracle access to an arbitrary solver for PIGEON.
- The reduction is many-one, i.e., a single call to a PIGEON-solving oracle suffices for finding the preimage of y.
- The reduction is f-oblivious, i.e., the oracle-aided circuit C_y defining the PIGEON instance depends only on the challenge y and does not depend on the one-way permutation f, in the sense that C_y itself can be fully specified without querying f. In other words, given the challenge y, the instance submitted to the PIGEON oracle by the reduction is, as an oracle-aided circuit, identical for all permutations f.
- The reduction is deterministic, i.e., it simply passes y to specify the PIGEON instance.
Such a fully black-box construction of PIGEON with a deterministic f-oblivious many-one reduction also exists assuming collision-resistant hash functions exist (folklore). Specifically, for any hash function h: {0,1}^n → {0,1}^{n-1} from the collision-resistant family, the PIGEON instance is defined as C_h(x) = h(x) ∘ 1, where ∘ represents the operation of string concatenation. Since C_h concatenates the value h(x) with 1 for any input x, it never maps to the all-zero string and, therefore, has the same collisions as h. Note that, unlike in the above construction from one-way permutations, the instances resulting from collision-resistant hash functions do not have a unique solution. In fact, there are always at least 2^{n-1} non-trivial collisions (even in two-to-one functions, where each image has exactly two preimages), and this structural property is inherent, as shown by Rosen et al. [47]. Importantly, the property of having nearly exponentially many solutions does not contradict the resulting distribution being hard-on-average. Currently, there is no evidence suggesting that average-case hardness in TFNP cannot be based on the existence of injective one-way functions.
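The folklore construction can be illustrated with an equally small sketch; here `toy_hash` is a deliberately non-cryptographic two-to-one compressing function standing in for the collision-resistant hash, and all names are our own placeholders.

```python
# Illustrative sketch: the PIGEON instance C_h(x) = h(x) || 1 built from a toy
# compressing function h: {0,1}^n -> {0,1}^(n-1) (NOT collision resistant).
n = 8

def toy_hash(x: int) -> int:
    # Drop the top bit: maps n bits to n-1 bits, two-to-one; only a toy.
    return x % (1 << (n - 1))

def pigeon_from_hash(x: int) -> int:
    # Append the bit 1 as the least significant bit: C_h(x) = h(x) || 1.
    return (toy_hash(x) << 1) | 1

# C_h never outputs 0^n (its last bit is always 1), so every PIGEON solution
# must be a collision, and any collision of C_h is also a collision of h.
assert all(pigeon_from_hash(x) != 0 for x in range(1 << n))
x, x_prime = 3, 3 + (1 << (n - 1))          # a collision of the toy hash
assert x != x_prime and pigeon_from_hash(x) == pigeon_from_hash(x_prime)
```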
The above constructions motivate us to study whether there exist such “simple” constructions of an average-case hard TFNP problem under weaker cryptographic assumptions, such as the existence of injective one-way functions, and we answer this question in the negative (see Sect. 3.2 for the formal statement of Theorem 1).
(Main Theorem - Informal). There is no efficient fully black-box construction of a worst-case hard TFNP problem from injective one-way functions with a randomized f-oblivious non-adaptive reduction.
Thus, we actually rule out a larger class of fully black-box constructions with reductions to injective one-way functions than the deterministic f-oblivious many-one reductions from the motivating examples of average-case hardness of PIGEON from one-way permutations, respectively collision-resistant hash functions. We rule out even constructions of worst-case hard TFNP problems using randomized f-oblivious non-adaptive1 reductions. The formal definitions of fully black-box constructions with f-oblivious non-adaptive reductions are given in Sect. 3.1 (see Definition 2 and Definition 3).
Even though restricted, our results are the first step towards a full-fledged black-box separation of TFNP and (injective) one-way functions. We note that the full-fledged separation would necessarily subsume the known separation of collision-resistant hash functions and injective one-way functions [49], for which, despite the recent progress, only non-trivial proofs are known [4, 25, 40].
1.2 Our Techniques
Our results employ the framework of black-box separations [19, 30, 45]. The approach suggested in [30] for demonstrating that there is no fully black-box construction of a primitive P from another primitive Q is to come up with an oracle O relative to which Q exists, but every black-box implementation of P is broken. However, as explained in [40, 45], this approach actually rules out a larger class of constructions (so-called relativizing constructions), and to rule out just fully black-box constructions it suffices to use the so-called two-oracle technique [26]. Here, the oracle O usually consists of two parts: an idealised implementation of the primitive Q and a “breaker” oracle for the primitive P. In our context, P corresponds to a TFNP problem and the oracle O comprises a random injective function (which yields an injective one-way function) and a procedure Solve which provides a solution for any instance of the TFNP problem. To rule out the existence of fully black-box constructions of hard-on-average TFNP problems from injective one-way functions, one then has to argue that access to such a “breaker” oracle Solve for TFNP does not help any reduction R in inverting injective one-way functions. Designing such a Solve oracle and then arguing that it does not help in inverting injective one-way functions, which we carry out using the reconstruction paradigm of Gennaro and Trevisan [21], constitute the main technical challenges. Before giving an overview of these two steps, we explain the structural insight that is key to our separation and that guided us in the design of the two steps.
The Existence of a “Useless” Solution. At the core of our negative result is a new structural insight about TFNP instances constructed from (injective) one-way functions. Observe that any one-way function gives rise to a search problem with a hard-on-average distribution which is total over its support (but all instances outside its support have no solution). Specifically, for any one-way function f, an instance is any image y and a solution for y is any x such that f(x) = y. The hard-on-average distribution then corresponds to sampling x uniformly from {0,1}^n and outputting the instance y = f(x) (as in the standard security experiment for one-way functions). When attempting to construct a hard search problem which is truly total and has a solution for all instances (not only for the support of the hard distribution), one has to face a frustrating obstacle in the form of “useless” solutions which do not help the reduction in inverting its challenge y. Note that, as the resulting TFNP problem must be total for all oracles f, there must exist a solution even for oracles with no preimage of the challenge y. By a simple probabilistic argument, it follows that for a random oracle f and a random challenge y, with overwhelming probability, there exists a solution to any TFNP instance which does not query a preimage of y, i.e., a “useless” solution from the perspective of the reduction.2
Thus, demonstrating a black-box separation would be straightforward if the solver knew which challenge y the reduction is attempting to invert. Our solver would simply output such a “useless” solution and we could argue via the reconstruction paradigm that no reduction can succeed in inverting y given access to our solver. In this work, we show that it is possible to construct a TFNP solver which returns such a “useless” solution with overwhelming probability even though the solver does not know the input challenge of the reduction.
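To give the flavour of the counting involved (our own simplified illustration, not the argument from the paper's footnote): fix an instance i and a candidate solution s, and let q bound the number of f-queries made while verifying s. Since the challenge y = f(x) is generated from a uniformly random x that is independent of the fixed pair (i, s),

$$\Pr_{x \leftarrow \{0,1\}^n}\bigl[\,\text{verifying } s \text{ on } i \text{ queries } x\,\bigr] \;\le\; \frac{q}{2^{n}}.$$

In the actual proof the instance submitted by the reduction depends on its challenge, which is precisely what makes the argument more delicate.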
Reduction-Specific Solve. Note that a reduction in a fully black-box construction must succeed in breaking the primitive P when given access to any oracle breaking the constructed primitive (see Definition 2). In other words, to rule out the existence of constructions with a fully black-box reduction, it is sufficient to show that for every reduction there exists a Solve oracle which is not helpful in inverting; in particular, Solve may depend on the reduction. To enable Solve to answer the reduction’s query with a “useless” solution with overwhelming probability, we take exactly this approach and construct a reduction-specific Solve for any construction of a TFNP problem from injective one-way functions. We significantly differ in this respect from the previous works which relied on the reconstruction paradigm of Gennaro and Trevisan [21], e.g., the works which employed the collision-finding oracle of Simon [8, 25, 43, 46, 49]. We note that the possibility of designing a breaker oracle which depends on the fully black-box construction was exploited already by Gertner, Malkin, and Reingold [22], who considered a breaker oracle which depends on the implementation rather than on the reduction algorithm (as in our case). That is, to rule out the construction of a primitive P from a primitive Q, they considered an oracle that depends on the implementation of the primitive P, whereas in our case Solve depends on the reduction algorithm R that is supposed to break Q given access to an algorithm that breaks P. The possibility of proving black-box separations via reduction-specific oracles was also observed in the work of Hsiao and Reyzin [26] who, nevertheless, did not have to leverage this observation in their proofs.
On a high level, given that Solve can use the code of the reduction R, Solve can simulate R on all possible challenges y to identify the set of challenges on which R outputs the present instance that Solve needs to solve. As we show, the solution can then be chosen adversarially so that it avoids such solutions of interest to the reduction. To turn this intuition into a formal proof, one needs to show that our Solve indeed does not help in inverting injective one-way functions, and we do so along the lines of the reconstruction paradigm of [21].
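The following toy, self-contained sketch conveys the mechanics of a reduction-specific solver on a deliberately trivial total problem: an instance is a target y, and besides the preimages of y, the artificial answer “no-preimage” is always accepted, mimicking the “useless” solutions discussed above. All names (`f`, `R_instance`, `solutions`, `verifier_f_queries`) are our own placeholders, and the actual oracle (Algorithm 1) is far more careful about which challenges it can afford to protect.

```python
# Toy illustration of a reduction-specific Solve oracle (not the paper's Algorithm 1).
n = 4
DOM = list(range(1 << n))

def f(x):                              # toy injective (non-surjective) function
    return 2 * x + 1                   # image = odd numbers below 2^(n+1)

def R_instance(y):                     # the toy reduction forwards its challenge
    return y

def solutions(y_instance):             # all accepted solutions of the toy instance
    sols = [("preimage", x) for x in DOM if f(x) == y_instance]
    sols.append(("no-preimage", None)) # always accepted, reveals nothing about f
    return sols

def verifier_f_queries(y_instance, sol):
    kind, x = sol
    return {x} if kind == "preimage" else set()

def solve(y_instance, challenges):
    # Protected challenges: those y on which the reduction would submit this instance.
    protected = {y for y in challenges if R_instance(y) == y_instance}
    forbidden = {x for x in DOM if f(x) in protected}
    benign = [s for s in solutions(y_instance)
              if not (verifier_f_queries(y_instance, s) & forbidden)]
    # Prefer a solution revealing no preimage of a protected challenge; the real
    # oracle gradually relaxes this requirement when no benign solution exists.
    return min(benign) if benign else min(solutions(y_instance))

# On any challenge, the solver can answer with the "useless" solution, so the
# reduction learns nothing about f^{-1}(y).
assert solve(f(5), challenges=[f(x) for x in DOM]) == ("no-preimage", None)
```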
Applying the Compression Argument. Two important subtleties arise in the proof when we try to turn the reduction into a pair of compression and decompression algorithms, which we explain next. First, the reconstruction paradigm is conventionally applied to random permutations [21, 25], whereas the reduction R and the algorithm Solve are designed for random injective functions. The natural approach is to simply proceed with the same style of proof even in our setting. Specifically, one would presume that a similar incompressibility argument can be leveraged if we manage to somehow encode the image of the random injective function. While this intuition is correct in the sense that it allows correct compression and reconstruction, it turns out that the space required to encode the image is too prohibitive for reaching the desired contradiction with the known information-theoretic lower bounds on the expected length of an encoding of a random injective function. To resolve this issue, we construct compressor and decompressor algorithms for a random permutation, but we equip the algorithms with shared randomness in the form of a random injective function h independent of the random permutation π to be compressed. Whenever the compressor and decompressor need to provide the reduction or Solve with access to the injective function, they compose the permutation π with the shared injective function h and then pass the composed injective function h ∘ π to the reduction. With this modification, we are able to show that any reduction which succeeds in inverting injective one-way functions given access to our Solve can be used to compress a random permutation on {0,1}^n below a standard information-theoretic lower bound on the size of a prefix-free encoding of such a random variable. We note that this is reminiscent of the approach used in [40] for separating injective one-way functions from one-way permutations.
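A minimal sketch of this shared-randomness trick, under the assumption that h is drawn independently of π and is known to both the compressor and the decompressor; all names here are toy placeholders of ours.

```python
import random

n = 4
random.seed(1)

# Shared randomness: a random injective function h on n-bit inputs (here into
# (n+1)-bit strings), known to both the compressor and the decompressor.
h_images = random.sample(range(1 << (n + 1)), 1 << n)
def h(x):
    return h_images[x]

# The random permutation pi on {0,1}^n that the compressor wants to compress.
pi_images = list(range(1 << n))
random.shuffle(pi_images)
def pi(x):
    return pi_images[x]

# The injective oracle f = h o pi handed to the reduction on n-bit inputs.
def f(x):
    return h(pi(x))

# Since h is injective and independent of pi, knowing h and recovering f on
# {0,1}^n is the same as recovering pi: pi(x) = h^{-1}(f(x)).
h_inverse = {h(x): x for x in range(1 << n)}
assert all(pi(x) == h_inverse[f(x)] for x in range(1 << n))
```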
Second, we cannot employ the actual Solve oracle in our compression and decompression algorithms: even though we can use the reduction when compressing and decompressing the random permutation, we must be able to consistently simulate Solve without accessing the whole permutation. In general, the choice of a “breaker” oracle that can be simulated efficiently without too many queries to the permutation (e.g., the collision-finding oracle of Simon [2, 49]) is a crucial part of the whole proof, and, a priori, it is unclear how to design a TFNP solver which also has such a property. Nevertheless, we show that there exists a Solve oracle which can be efficiently simulated given only (sufficiently small) partial information about the permutation.
f-Oblivious Reductions. As our Solve simulates the reduction on possible challenges y, we need, for technical reasons, the reduction to be f-oblivious (namely, for correctness of our encoding and decoding algorithms). However, we believe that f-obliviousness is not overly restrictive, as it is a natural property of security reductions. Besides the two fully black-box constructions of PIGEON with f-oblivious reductions described in Sect. 1.1, f-oblivious security reductions appear also elsewhere in the cryptographic literature – see, for example, the standard security reduction in the Goldreich-Levin theorem establishing the existence of a hard-core predicate for any one-way function (note that this particular security reduction is also non-adaptive). An orthogonal notion of an oblivious construction appears in the work of Wee [50]. However, it is the implementation of the constructed primitive which is “oblivious” to the one-way permutation in his work.
1.3 Related Work







The best-studied subclasses of TFNP include, among others:
PPA [42], which captures computational problems related to the parity argument, such as the Borsuk-Ulam theorem or fair division [18];
PLS [34], defined to capture the computational complexity of problems amenable to local search, and its “continuous” counterpart CLS [15] (in fact contained in both PPAD and PLS), which captures the computational complexity of finding (approximate) local optima of continuous functions and contains interesting problems from game theory (e.g., solving the simple stochastic games of Condon or Shapley); and
PPP [42] and PWPP [33], motivated by the pigeonhole principle, which contain important problems related to finding collisions in functions.
The relative complexity of some of these classes was studied in [3], where it was shown using (worst-case) oracle separations that many of these classes are distinct.
Cryptographic hardness in TFNP. Hardness from standard cryptographic primitives was long known for the “higher” classes in TFNP such as PPP and PPA. We have already mentioned that one-way permutations imply average-case hardness in PPP [42] and that the existence of collision-resistant hashing (e.g. hardness of integer factoring or of the discrete-logarithm problem in prime-order groups) implies average-case hardness in PWPP (as well as in PPP). In addition, Jeřábek [33], building on the work of Buresh-Oppenheim [9], showed that PPA is no easier than integer factoring.
However, it is only recently that we have started to better understand the cryptographic hardness of the lower classes in TFNP. This was catalysed by the result of Bitansky et al. [7], who reduced hardness in PPAD to indistinguishability obfuscation (and injective OWFs), expanding on Abbot, Kane, and Valiant [1]; Hubáček and Yogev [28] extended this result to CLS. The underlying assumption was relaxed further to cryptographic assumptions that are more plausible than indistinguishability obfuscation in [11, 12, 16]. Using similar ideas, Bitansky and Gerichter [6] presented a construction of hard-on-average distributions in the complexity class PLS in the random oracle model. Building on these results, a flurry of recent works [31, 32, 35, 36, 39] further weakened the assumptions required for proving average-case hardness in PPAD to sub-exponential hardness of learning with errors, bringing us closer to proving average-case hardness in PPAD under a standard concrete cryptographic assumption.
One-Way Functions and TFNP. Hubáček et al. [27] showed that average-case hardness in NP (which is implied by OWFs) implies average-case hardness in TFNP under complexity-theoretic assumptions related to derandomization. Pass and Venkitasubramaniam [44] recently complemented the [27] result by showing that when OWFs do not exist, average-case hardness in NP implies average-case hardness in TFNP. However, a definitive relationship between OWFs and TFNP has remained elusive. This prompted Rosen et al. [47] to explore the impossibility of reducing TFNP hardness to OWFs. They gave a partial answer by showing that there do not exist hard-on-average distributions of TFNP instances over {0,1}^n with significantly fewer than exponentially many solutions from any primitive which exists relative to a random injective trapdoor function oracle (e.g. one-way functions). Their main observation was that the argument in [48], which separates one-way functions from one-way permutations, can be strengthened to separate structured primitives (such as problems in TFNP with few solutions) from unstructured primitives. However, it seems that the [48] argument has been exploited to its limits in [47], and, therefore, it is not clear whether their approach can be extended to fully separate one-way functions and TFNP. Thus, the situation contrasts with NP ∩ coNP, the decision counterpart of TFNP, whose relationship with (injective) OWFs is much better studied. In particular, we know that hardness in NP ∩ coNP is implied by one-way permutations, but injective OWFs, even with indistinguishability obfuscation (and, therefore, public-key encryption), cannot imply hardness in NP ∩ coNP in a black-box way [5].
2 Preliminaries
Unless stated otherwise, all logarithms are base two. For n ∈ ℕ, we use [n] to denote the set {1, 2, ..., n}. For strings x, y ∈ {0,1}^*, we write x < y to denote that x is lexicographically strictly smaller than y.
(Functions) Let f: X → Y be a function and S ⊆ X be a set.
- 1. f|_S denotes the restriction of f to S, i.e., the function f|_S: S → Y such that f|_S(x) = f(x) for every x ∈ S.
- 2. Dom(f) denotes the domain of f, i.e., the set X.
- 3. Im(f) denotes the image of f, i.e., the set {f(x) | x ∈ X}.
- 4. Im(f|_S) denotes the image of the restriction of f to S, i.e., the set {f(x) | x ∈ S}.
(Injective functions) We denote by Inj_n^m the set of all injective functions from {0,1}^n to {0,1}^m. For the special case when m = n, we get the set Inj_n^n of all permutations on {0,1}^n.
The set Inj is the set of all functions f: {0,1}^* → {0,1}^* such that f can be interpreted as a sequence (f_n)_{n ∈ ℕ} of injective functions, where f_n ∈ Inj_n^{m(n)} is an injective function such that f(x) = f_n(x) for all n ∈ ℕ and x ∈ {0,1}^n.
We say that the function m is the type of f, and we define the corresponding type operator so that type(f) = m.






In our proofs, we often compose a function defined on all binary strings with a function defined only for binary strings of a certain length; namely, we often want to compose a function from Inj with a permutation of n-bit strings. The desired resulting function should always be a function on all binary strings. For ease of exposition, we extend the standard notation for function composition as follows: for h ∈ Inj and a permutation π of {0,1}^n, we write h ∘ π for the function that maps every x ∈ {0,1}^n to h(π(x)) and agrees with h on inputs of every other length; note that h ∘ π ∈ Inj.
Finally, we recall some basic information-theoretic results about prefix-free codes.
(Prefix-free code). A set of code-words is a prefix-free code if there are no two distinct code-words such that one is a prefix (initial segment) of the other, i.e., for any two distinct code-words c and c' there exists an index i ≤ min(|c|, |c'|) such that c_i ≠ c'_i.
(Theorem 5.3.1 in [13]). The expected length L of any prefix-free binary code for a random variable X is greater than or equal to the entropy H(X).
Encoding a uniformly random permutation of {0,1}^n using a prefix-free encoding takes at least log(2^n!) bits in expectation.
The entropy of a permutation chosen uniformly at random from Inj_n^n is log(2^n!), as we choose uniformly at random from 2^n! distinct permutations. By the theorem above, the expected size of the encoding is at least log(2^n!).
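For intuition about the magnitude of this bound (a standard estimate, not taken from the paper): since 2^n! ≥ (2^n/e)^{2^n},

$$\log\bigl(2^{n}!\bigr) \;\ge\; 2^{n}\log 2^{n} - 2^{n}\log e \;=\; n\,2^{n} - O\!\left(2^{n}\right),$$

so an encoding whose expected length falls short of log(2^n!) by any positive amount, such as the additive term of order n·2^{0.1n} in Lemma 4 below, contradicts the corollary.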
3 Separating TFNP and Injective One-Way Functions
3.1 Fully Black-Box Construction of a Hard TFNP Problem from iOWF
Below, we give a definition of a fully black-box construction of a (worst-case) hard TFNP problem from an injective one-way function.




A fully black-box construction of a hard TFNP problem from injective one-way functions consists of a pair of oracle-aided algorithms (R, C) with the following properties (Definition 2):
- 1. R and C halt on all inputs: For all f ∈ Inj, all n ∈ ℕ, and all inputs, the reduction R^{f,·}(1^n, ·) halts in time t_R(n), and the algorithm C^f(i, s) halts in time t_C(|i|).
- 2. Correctness: For all f ∈ Inj and for all i ∈ {0,1}^*, there exists s ∈ {0,1}^* such that |s| is polynomial in |i| and C^f(i, s) = 1, i.e., for any instance of the TFNP problem there exists a solution of polynomial length.
- 3. Fully black-box proof of security: There exists a polynomial p' such that for all f ∈ Inj and any oracle-aided algorithm Solve, if Solve returns a valid solution on every instance of the TFNP problem defined by C^f, then for infinitely many n,
$$\Pr_{x \leftarrow \{0,1\}^n}\left[ R^{f, \textsc{Solve}}(1^n, f(x)) = x \right] \ge \frac{1}{p'(n)}.$$
Definition 2 has the following semantics. The deterministic algorithm C specifies the TFNP problem and the algorithm R is the (security) reduction which, given access to any solver for this problem, breaks the security of any injective one-way function. For example, in the case of the hard PPP problem from one-way permutations discussed in Sect. 1.1, C would be an algorithm which on input (y, x), respectively (y, (x, x')), outputs 1 if and only if f(x) ⊕ y = 0^n, respectively x ≠ x' and f(x) ⊕ y = f(x') ⊕ y. The reduction algorithm R simply queries the solver Solve with the instance y, i.e., a circuit computing the function x ↦ f(x) ⊕ y, and outputs the solution s returned by Solve, for which, by construction, f(s) = y.
Below, we provide some additional remarks on important points in the above definition.





Constructions of Worst-Case vs. Average-Case Hardness in TFNP. Our Definition 2 considers constructions of a worst-case hard TFNP problem – the reduction has access to a Solve oracle which is promised to return a solution to any instance of the TFNP problem. To capture constructions of average-case hardness in TFNP, we would need to extend the construction with an efficiently sampleable distribution D of instances of the TFNP problem and require the reduction to invert when given access to any Solve that returns solutions for instances coming from the specific distribution D (see Definition 5.1 in [47]). However, given that we are proving a black-box separation, showing impossibility for worst-case hardness is a stronger result.
The Quality of Solve. Note that we consider security reductions which invert when given access to a Solve oracle that outputs a solution with probability 1, whereas some definitions in the literature allow the reduction to work only with some non-negligible probability. This also makes our negative result stronger – it is potentially easier to give a reduction when given access to a Solve which is guaranteed to always return a solution.
Direct and Indirect Queries to f. The security reduction R can learn something about f in various ways. It may query f directly, or the information might be deduced from the solution of some queried instance of the TFNP problem returned by Solve. We introduce the following notation in order to distinguish where queries originate, which allows us to reason about the view the security reduction has of the function f in our proof of Theorem 1.
(Query sets Q) We distinguish the following sets of queries to oracles depending on where these queries originated and which oracle is queried.
The set of all preimages x on which the oracle f has been queried by C running on an input (i, s).
The set of all instances i on which the oracle Solve has been queried by R running on a security parameter n and challenge y.
The set of all preimages x on which the oracle f has been queried directly by R running on an input y and security parameter n.
The set of all preimages x on which the oracle f has been queried indirectly, i.e., it has been queried by C running on an input (i, s), where i is an instance that R submitted to Solve and s is the solution returned by Solve for i.
The union of all of the above sets of queries to f.
Note that these sets may not be disjoint. When f is a partial function (i.e., when f is not defined on all inputs), a query set contains all queries made up to the point of the first undefined answer, and the query with the undefined answer is included as well.
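Purely as an illustration of this bookkeeping (not the paper's notation), one can imagine wrapping the oracle f so that every query is recorded together with its origin:

```python
# Toy oracle wrapper recording which party issued each f-query: the reduction R
# directly, or the verifier C run on a Solve-answer (hence "indirect").
class TracedOracle:
    def __init__(self, f):
        self.f = f
        self.queries = {"direct": set(), "indirect": set()}

    def query(self, x, origin):
        self.queries[origin].add(x)
        return self.f(x)

def f(x):                 # toy injective function, for illustration only
    return 2 * x + 1

oracle = TracedOracle(f)
oracle.query(3, origin="direct")     # an f-query made by R itself
oracle.query(5, origin="indirect")   # an f-query made by C while verifying a solution
assert oracle.queries == {"direct": {3}, "indirect": {5}}
```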
Restrictions on the power of the reduction. We consider various restricted classes of security reductions as defined below.
(Deterministic/randomized, many-one/non-adaptive, f-oblivious reductions) Let (R, C) be a fully black-box construction of a hard TFNP problem from injective one-way functions.
We distinguish deterministic / randomized reductions. For a randomized security reduction, we extend the input of R to a triple (1^n, y, r), where the meaning of n, resp. y, remains unchanged (i.e., n is the security parameter and y is the challenge), and r is the randomness of the security reduction.
The security reduction R is many-one if for all n ∈ ℕ, for any oracles f ∈ Inj and Solve, and for all y ∈ {0,1}^n, R^{f, Solve}(1^n, y) makes a single query to the oracle Solve.
The security reduction R is non-adaptive if for all n ∈ ℕ, for any oracles f ∈ Inj and Solve, and for all y ∈ {0,1}^n, all the queries of R^{f, Solve}(1^n, y) to the oracle Solve are submitted in parallel (i.e., the queries to Solve do not depend on the answers received from Solve).
The security reduction R is f-oblivious if for all n ∈ ℕ, for any oracle Solve, for all y ∈ {0,1}^n, and for any pair of functions f, f' ∈ Inj, the distributions of the queries to Solve made by R^{f, Solve}(1^n, y) and by R^{f', Solve}(1^n, y) are identical (i.e., the queries to Solve depend only on the input y and are independent of the oracle f).
3.2 Impossibility for a Deterministic f-Oblivious Many-One Reduction
In this section, we show that there is no fully black-box construction of a hard TFNP problem from injective one-way functions with a deterministic f-oblivious many-one reduction. The proof of this result is already non-trivial and highlights our main technical contributions. In Sect. 3.3, we explain how to extend this result to rule out fully black-box constructions even with a randomized f-oblivious non-adaptive reduction. For lack of space, we omit the proofs of the technical lemmata and instead, for interested readers, provide pointers to the appropriate parts of the full version [29].
(Theorem 1). There is no fully black-box construction of a worst-case hard TFNP problem from injective one-way functions with a deterministic f-oblivious many-one reduction with success probability at least 2^{-0.1n} such that both R and C run in quasi-polynomial time.
In the above theorem, the running time of both R and C is restricted to quasi-polynomial. Note that the standard notion of cryptographic constructions requires R, C to run in polynomial time in order to be considered efficient. We are ruling out a broader class of potentially less efficient reductions.


To prove Theorem 1, we follow the two-oracle technique and consider an oracle consisting of two parts:
- 1. f: a sequence of random injective functions, which implements injective one-way functions; and
- 2. Solve: a reduction-specific oracle that solves TFNP instances.







Oracle Solve: Let (R, C) be the construction of a hard TFNP problem from injective one-way functions with a deterministic f-oblivious many-one security reduction which is hardwired in the oracle Solve. Ideally, Solve should output a solution to the queried instance i which gives the reduction R no information about the inversion of its challenge y. Unfortunately, Solve is unaware of the particular challenge y on which R queried Solve with the instance i. Nevertheless, Solve can compute the set of all challenges y on which the reduction would query the instance i.3 These challenges become “protected”, and Solve will attempt to provide a solution which does not reveal a preimage of any protected challenge, i.e., a solution s such that verifying s does not make an f-query on a preimage of any protected challenge.
Note that we could run into a potential technical issue when defining Solve, as the set of all challenges y on which R queries the instance i might be infinite. However, when the instance i is queried by the security reduction R on some very long challenge y, then C contributes no indirect query to a preimage of y, as the running time of C depends only on the length of the instance i. More formally, the running time of C is bounded by t_C(|i|), and thus C cannot query f on longer inputs. Therefore, we can restrict attention to challenges y of length at most t_C(|i|).
On lines 2–6, Solve computes several auxiliary sets. The first contains all the input lengths of the preimages x such that the reduction, given the corresponding challenge, queries the instance i. Solve then splits the protected challenges into subsets according to these input lengths of interest. Finally, Solve computes the set of all possible solutions for the instance i.
The strategy of Solve is to return a solution from the set of “benign” solutions, i.e., solutions which do not induce any query to preimages of the protected challenges. If there is any such benign solution, then Solve simply halts and returns the lexicographically smallest one. Unfortunately, it might be the case that every solution queries a preimage of some protected challenge, e.g., if the instance i is queried for all challenges y of a given preimage length n and on each solution s at least one x of length n is queried (so that no solution is benign unless we stop protecting the challenges of this length). Since Solve in general cannot output a solution while protecting the whole set of challenges, it proceeds to gradually relax the condition on the set of protected challenges.
Note that we may allow Solve to return a solution even though it induces queries to preimages of protected challenges, as long as the reduction queries the instance i on the corresponding image length often enough, since any fixed solution induces only a bounded number of queries to f (bounded by the running time of C). Therefore, if the set of challenges on which R queries i is dense enough w.r.t. some image length, then, with overwhelming probability, an arbitrary solution will be benign for the random challenge y given to the reduction. Thus, we allow Solve to return a solution revealing preimages of protected challenges of the length for which their density is maximized. If this fraction of protected challenges is small, then Solve is able to find a benign solution which protects the preimages of length n (see [29, Claim 13]); whereas, if the fraction is large enough, then any fixed solution will be benign w.r.t. the actual challenge of R with overwhelming probability, as each solution can induce queries to only a small number of preimages of the protected challenges (see [29, Claim 12]).
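In symbols, the counting behind the second case is roughly the following (a schematic rendering with our own notation: Y denotes the protected challenges of the densest image length n, δ = |Y|/2^n their density, and q the number of f-queries induced by verifying the returned solution; the exact parameters in [29, Claim 12] differ). Since the same fixed solution is returned for every challenge in Y and its verification can reveal preimages of at most q challenges,

$$\Pr_{y \leftarrow Y}\bigl[\text{the returned solution reveals a preimage of } y\bigr] \;\le\; \frac{q}{|Y|} \;=\; \frac{q}{\delta\, 2^{n}},$$

which is negligible whenever q is quasi-polynomial and the density δ is not too small.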











Algorithm Encode: The algorithm Encode (Algorithm 2) uses the reduction R to compress a random permutation π on bit strings of length n. Note that even though R succeeds in inverting an injective function, for technical reasons, we leverage its power in order to compress a permutation. One particular issue we would run into when trying to compress an injective function f which is not surjective is that the encoding would also have to include an encoding of the image of f, which might render the encoding inefficient.
Nevertheless, in order to use the reduction for compressing, we must provide it with oracle access to an injective function which is not a bijection. Thus, we equip Encode (as well as Decode) with an injective function h. Encode then computes the function f as the composition f = h ∘ π and uses the reduction with respect to the composed oracle f. We emphasize that h is independent of π, and therefore it cannot be used to compress π on its own.
First, Encode computes the set of all challenges y on which the reduction successfully inverts (i.e., the reduction returns the preimage of y under f). Then Encode computes the set of “good” challenges y, on which the reduction successfully inverts even though Solve returns a solution which does not induce a query to any preimage of y. This set is used to reduce the size of the trivial encoding of f – the part of f corresponding to the good challenges will be algorithmically reconstructed by Decode using the security reduction R.
Specifically, Encode computes the subset of good challenges for which the preimages will be algorithmically reconstructed, as follows: Encode processes the good challenges y one by one in lexicographically increasing order and stores all f-queries needed for reconstruction by R (i.e., for any x such that there was an f-query on x, the element f(x) is removed from the “good” set, as we cannot reconstruct the preimage of y using R without knowing the image of x under f).
Encode outputs an encoding which describes the size k of the set of challenges to be reconstructed, the set of these challenges itself, the corresponding set of preimages, and the partial function representing f on the inputs of length n outside of this set of preimages. Thus, the encoding saves bits by not revealing the bijection between the reconstructed preimages and challenges, which is algorithmically reconstructed instead (Lemma 4). Specifically, the size k (common to the set of reconstructed challenges and the set of their preimages) can be encoded using n bits. The set of reconstructed challenges is a subset of the image of f on n-bit inputs, and it is encoded using log (2^n choose k) bits as the index of the corresponding subset of size k (the set of preimages is encoded in a similar manner). Finally, the bijection between the remaining 2^n − k preimages of length n and the remaining images is encoded as the index of the corresponding permutation of a set of size 2^n − k, using log ((2^n − k)!) bits.
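For a rough sense of the accounting (our own back-of-the-envelope estimate, writing k for the number of algorithmically reconstructed challenges; the precise bookkeeping is in [29, Section 4.4]):

$$1 + n + 2\log\binom{2^{n}}{k} + \log\bigl((2^{n}-k)!\bigr) \;\approx\; \log\bigl(2^{n}!\bigr) - \bigl(2\log k! - kn\bigr),$$

so the encoding beats the trivial log(2^n!)-bit encoding once k is large enough (roughly k > 2^{n/2}), which the assumed success probability of the reduction guarantees.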






Algorithm Decode: The encoding returned by Encode is uniquely decodable by the algorithm Decode given in Algorithm 3 (see [29, Section 4.2]). When the output of Encode starts with 0, the rest of the encoding is an explicit encoding of the permutation and we are immediately done with its reconstruction. If the output starts with the bit 1, the following n bits represent the size k of the set of challenges to be reconstructed. Decode then reads the following log (2^n choose k) bits of the encoding to reconstruct this set of challenges (as the index of the corresponding subset of the image of f on n-bit inputs). Similarly, Decode reconstructs the set of the corresponding preimages using the following log (2^n choose k) bits. The remaining bits represent the restriction of f to the n-bit inputs outside of the reconstructed set of preimages, given by the index of the corresponding bijection between the remaining preimages and the remaining images. Note that such an encoding of the restriction does preserve the structure of the restriction, but it loses the information about its domain and image. However, both are easy to reconstruct. The domain is simply the complement of the reconstructed set of preimages within {0,1}^n, and the image can be computed from the reconstructed set of challenges and the common input h as its complement within the image of h on n-bit inputs.
Decode then computes the remaining preimages one by one in lexicographic order using the security reduction R, adding each reconstructed mapping into a partial function. Note that during the computation of the preimage of some challenge y, the reduction might make an f-query on an x whose image is not yet defined. But as Decode processes the challenges in the same order as Encode added them to the set of reconstructed challenges, this happens if and only if the preimage of y itself is being queried. Thus, we answer any such query by y (it is crucial that this happens only for f-queries made directly by R), which is captured in the definition of an auxiliary function used as the oracle for the security reduction R.
Once Decode finds the preimages of all the reconstructed challenges, the function representing f on n-bit inputs is defined everywhere. To reconstruct the permutation π on {0,1}^n, Decode can simply compose the inverse of h with the reconstructed function.
Algorithm for Simulating Solve: For ease of presentation, we usually do not explicitly mention the oracle h, as it is given by context (we run Encode and Decode with respect to only one h at a time).
The computation of the simulated solver (Algorithm 4) is similar to the computation of Solve (Algorithm 1). First, it computes the corresponding auxiliary sets. There is one big difference between the simulation and Solve: as the simulation does not have access to the whole f, it uses h, or its partial knowledge of f (namely, the partial function reconstructed so far), everywhere f is used in the Solve algorithm.
We use h whenever we need to determine the image of f on inputs of a given length. Since f is the composition of h with a permutation, this image equals the image of h on inputs of that length, so using h instead of f makes no difference to the computation.
The second place where h is used instead of f is when the simulation computes the set of challenges on which the security reduction R queries the given instance i. When determining these challenges, the simulation computes the same set as if it used f, by the f-obliviousness of the security reduction.
In all other places, the simulation uses the partial knowledge of f (i.e., the partial function reconstructed so far). This causes a real difference in the computation. In particular, the set of solutions computed by the simulation may differ a lot from the set of solutions computed by Solve, as some solutions potentially query unknown parts of f. Thus, the set of solutions computed by the simulation is just a subset of the set considered by Solve: it contains only the solutions the simulation is “aware of”, i.e., those for which the partial function is defined on all induced queries, so that the simulation may verify the solution. The rest of the computation is practically the same, except that the choice is restricted to this smaller set of solutions. The main trick is that we make sure that the simulation is aware of the solution which should be returned, and it does not matter that it ignores other solutions of the instance.

Structure of the Proof of Theorem 1. For ease of presentation and understanding, we divide the proof into four lemmata, Lemma 1 through Lemma 4. Lemma 1 shows that given an instance i of the TFNP problem represented by the algorithm C, our Solve always returns a solution, i.e., an s such that C^f(i, s) = 1 (the formal proof is given in [29, Section 4.1]). Thus, any distribution of instances of the TFNP problem is easy in the presence of Solve.
(Lemma 1). For any instance i ∈ {0,1}^* and any f ∈ Inj, the algorithm Solve halts and returns a solution, i.e., it returns a string s such that C^f(i, s) = 1 and |s| is polynomially bounded in |i|.
To argue that Solve does not help in inverting injective functions, we analyze the joint properties of the algorithms Encode and Decode. First, we show that Decode always returns the correct permutation encoded by Encode (see [29, Section 4.2] for the formal proof).






We crucially rely on the f-obliviousness of the reduction in the proof of Lemma 2. It is the property which allows us to simulate the Solve algorithm during the decoding phase, as Decode needs to be able to compute the same set of protected challenges as Solve does. Moreover, Decode cannot query f on all preimages as Solve does when computing this set. Due to the f-obliviousness of the reduction, we may substitute f by h in this computation inside Decode, as the resulting set depends only on the image of the function given to R as an oracle (and on the instance i).
Second, we show that the encoding output by Encode is prefix-free (see [29, Section 4.3]).




Finally, we bound the expected size of the encoding given by Encode (see [29, Section 4.4]), which contradicts the information-theoretic bound implied by Corollary 1.






$$\Pr_{x \leftarrow \{0,1\}^n}\left[R^{f,\textsc{Solve}}(1^n, f(x)) = x\right] = \beta \ge 2^{-0.1n}.$$
$$\exists h \in \text{Inj} :\; \mathbb{E}_{\pi \leftarrow \text{Inj}_{n}^{n},\, h \leftarrow \text{Inj}}\left[\left|\textsc{Encode}_n^h(\pi)\right|\right] \le \log\left(2^n!\right) - \frac{8}{10}\, n\, 2^{0.1n}.$$
We claim (see [29, Claim 10]) that the upper bound used in the statement of the lemma is without loss of generality for large enough n and for all quasi-polynomial (and, hence, also for efficient) algorithms R, C. We use this fact again in the proof of the main theorem (Theorem 1), and refer the readers to [29, Section 4.4] for the precise statement and its proof.
Equipped with the above lemmata, we next prove Theorem 1.




$$\Pr_{x \leftarrow \{0,1\}^n}\left[R^{f, \textsc{Solve}}(1^n, f(x)) = x\right] \ge \frac{1}{p'(n)}$$






For any such n, we can use the algorithm Encode (Algorithm 2) to encode a given permutation π ∈ Inj_n^n. Decodability of the encoding follows from Lemma 2. Moreover, by Lemma 3, the encoding is a prefix-free code. By Lemma 4, there is a function h ∈ Inj such that the pair of algorithms Encode and Decode defines an encoding of π with expected length at most log(2^n!) − (8/10) n 2^{0.1n}. This contradicts the information-theoretic bound on the expected length of any prefix-free encoding of a random permutation on {0,1}^n given by Corollary 1.
3.3 Extensions
In this section, we state some extensions to the result in the previous section. We refrain from providing the details and refer the readers to [29, Section 5].
First, it is possible to extend our proof from Sect. 3.2 to rule out even non-adaptive security reductions which submit multiple queries to the Solve oracle in parallel, though still f-obliviously, as defined in Definition 3. The descriptions of the algorithms Solve, Encode, Decode, and the simulated Solve remain unchanged, but we require a slightly different analysis tailored to non-adaptive reductions. We refer the readers to [29, Section 5.2] for the details.
Second, we can extend our results to randomised reductions with additional changes to our algorithm Solve. One could imagine that the construction has some instance i created for a concrete challenge y, on which R queries i with high probability. But R might also query the instance i for many other challenges (on each of them with small probability) to hide the real challenge y. Thus we need to take the probability of querying the instance i into account. Roughly speaking, the Solve oracle for randomised reductions is an extension of the Solve given in Algorithm 1 taking this probability into account. The simulation of Solve is adapted accordingly, and thanks to f-obliviousness we are still able to show that the simulation is faithful. The rest of the changes involve modifying the existing argument to account for the changes to Solve and its simulation. We refer the readers to [29, Section 5.3] for the details.
4 Conclusions
In this work, we have shown that there are intrinsic barriers preventing simple fully black-box constructions of hard TFNP problems from injective one-way functions. The main technical contribution of our work is the technique of designing a “TFNP-breaker” oracle Solve which depends on the reduction.
The natural direction towards extending our results would be attempting to lift the restriction to f-oblivious and non-adaptive reductions. One reason why this might be challenging is that a black-box separation of TFNP and injective one-way functions would subsume the separation of collision-resistant hash functions and one-way functions [49], for which, despite the recent progress, only non-trivial proofs are known [4, 25, 40].
We wish to thank the anonymous reviewers for their comments, which helped us to clarify the presentation of our results.