1 Introduction
Consider the following scenario: Alice sends a ciphertext to Bob, but in addition, she wants to encode the data in a way such that Bob can prove to her that he deleted the information contained in the ciphertext. Such a deletion should prevent Bob from retrieving any information on the encoded plaintext once the key is revealed. We call this certified deletion.
Informally, this functionality stipulates that Bob should not be able to do the following two things simultaneously: (1) convince Alice that he has deleted the ciphertext; and (2) given the key, recover information about the encrypted message. To better understand this concept, consider an analogy to certified deletion in the physical world: “encryption” would correspond to locking information into a keyed safe, the “ciphertext” being the locked safe itself. In this case, “deletion” may simply involve returning the safe in its original, locked state. This “deletion” is intrinsically certified since, without the safe (and having never had access to the key and the safe at the same time), Bob relinquishes the possibility of gaining access to the information (even in the future, when the key may be revealed) by returning the safe. However, if the encryption is digital, Bob may retain a copy of the ciphertext; there is therefore no meaningful way for him to certify “deletion” of the underlying information, since a copy of the ciphertext is just as good as the original when it comes time to use the key to decrypt the data.
Quantum information, on the other hand, is known for its no-cloning principle [8, 19, 36], which states that quantum states cannot, in general, be copied. This quantum feature has been explored in many cryptographic applications, including unforgeable money [35], quantum key distribution (QKD) [2], and more (for a survey, see [4]).
1.1 Summary of Contributions
In this work, we add to the repertoire of functionalities that are classically impossible but are achievable with unconditional security using quantum information. We give a formal definition of certified deletion encryption and certified deletion security. Moreover, we construct an encryption scheme which, as we demonstrate, satisfies these notions (in addition, our proofs are applicable in the finite-key regime). Furthermore, our scheme is technologically simple since it can be implemented by honest parties who have access to rudimentary quantum devices (that is, they only need to prepare single-qubit quantum states, and perform single-qubit measurements); we also show that our scheme is robust against noise in these devices. We now elaborate on these contributions.
Definitions. In order to define our notion of encryption, we build on the quantum encryption of classical messages (QECM) framework [3]1 (for simplicity, our work is restricted to the single-use, private-key setting). To the QECM, we add a delete circuit which is used by Bob if he wishes to delete his ciphertext and generate a corresponding verification state, and a verify circuit which uses the key and is used by Alice to determine whether Bob really deleted the ciphertext.
Schematic representation of the security notion for certified deletion security. The game is parametrized by a bit $b$: the encryption circuit outputs an encryption of the message $m_b$, while the adversary, after producing its deletion certificate and later receiving the key, outputs a guess. Security holds if, for each adversary, the probability that verification passes and the adversary outputs 1 is essentially the same, regardless of the value of $b$.
Scheme. In Sect. 4, we present our scheme. Our encoding is based on the well-known Wiesner encoding [35]. Informally, the message is encoded by first generating $m$ random Wiesner states $|r^\theta\rangle$ (for notation, see Sect. 2.1). We let $r|_{\mathcal{I}}$ be the substring of $r$ where qubits are encoded in the computational basis, and we let $r|_{\bar{\mathcal{I}}}$ be the remaining substring of $r$ (where qubits are encoded in the Hadamard basis). Then, in order to create a classical proof of deletion, Bob measures the entire ciphertext in the Hadamard basis. The result is a classical string, and Alice accepts the deletion if all the bits corresponding to positions encoded in the Hadamard basis are correct according to $r|_{\bar{\mathcal{I}}}$. As for the message $\mathsf{msg}$, it is encoded into $\mathsf{msg} \oplus H(r|_{\mathcal{I}}) \oplus u$, where $H$ is a two-universal hash function and $u$ is a fresh, random string. Intuitively speaking, the hash function is required in order to prevent partial information retained by Bob from being useful in distinguishing the plaintext, while the random $u$ guarantees security in terms of an encryption scheme. Robustness of the protocol is achieved by using an error correcting code and including an encrypted version of the error syndrome. We note that, while our definitions do not require it, our scheme provides a further desirable property: the proof of deletion is a classical string only.
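To make the honest flow concrete, the following is a toy, purely classical simulation of the scheme's three operations (the names `encrypt`, `delete`, and `verify` are ours; SHA-256 stands in for the two-universal hash, and a single qubit is modeled as a (bit, basis) pair, so no quantum adversary is captured by this sketch):

```python
import hashlib
import secrets

def random_bits(n):
    return [secrets.randbits(1) for _ in range(n)]

# Toy stand-in for a Wiesner qubit: the pair (bit, basis), with basis
# 0 = computational and 1 = Hadamard. Measuring in the preparation basis
# returns the bit; measuring in the conjugate basis returns a uniform bit.
def measure(qubit, basis):
    bit, prep_basis = qubit
    return bit if basis == prep_basis else secrets.randbits(1)

def mask_bits(bits, length):
    # Stand-in for the two-universal hash H(r|_I); a faithful implementation
    # would sample H from a universal_2 family as part of the key.
    digest = hashlib.sha256(bytes(bits)).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(length)]

def encrypt(msg, m):
    r, theta = random_bits(m), random_bits(m)
    comp = [r[i] for i in range(m) if theta[i] == 0]  # r at computational positions
    u = random_bits(len(msg))
    mask = mask_bits(comp, len(msg))
    body = [msg[i] ^ mask[i] ^ u[i] for i in range(len(msg))]
    return (list(zip(r, theta)), body), (theta, u), r

def decrypt(cipher, key):
    qubits, body = cipher
    theta, u = key
    comp = [measure(qubits[i], 0) for i in range(len(qubits)) if theta[i] == 0]
    mask = mask_bits(comp, len(body))
    return [body[i] ^ mask[i] ^ u[i] for i in range(len(body))]

def delete(cipher):
    # Honest deletion: measure every qubit in the Hadamard basis.
    qubits, _ = cipher
    return [measure(q, 1) for q in qubits]

def verify(certificate, theta, r):
    # Alice accepts iff the certificate matches r at all Hadamard positions.
    return all(certificate[i] == r[i] for i in range(len(r)) if theta[i] == 1)

msg = [1, 0, 1, 1]
cipher, key, r = encrypt(msg, m=32)
theta, u = key
assert decrypt(cipher, key) == msg        # correctness
assert verify(delete(cipher), theta, r)   # honest deletion always accepted
```

Note that in this classical model an honest deletion certificate is always accepted, since measuring a Hadamard-encoded position in the Hadamard basis deterministically returns the encoded bit; what the model cannot show is the quantum guarantee that producing such a certificate destroys the computational-basis information.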
Proof. In Sect. 5, we present the security analysis of our scheme and give concrete security parameters (Theorem 3 and its proof). First, the fact that the scheme is an encryption scheme is relatively straightforward; it follows via a generalization of the quantum one-time pad (see Sect. 5.1). Next, correctness and robustness (Sect. 5.2) follow from the properties of the encoding and of the error correcting mechanism.
Our proof technique thus consists in formally analysing the entanglement-based game and applying the appropriate entropic uncertainty relation (Sect. 2.4). Finally, we combine the bound on Bob’s min-entropy with a universal hash function and the Leftover Hashing Lemma of [21] to prove indistinguishability between the cases $b = 0$ and $b = 1$ after Alice has been convinced of deletion.
1.2 Related Work
To the best of our knowledge, the first use of a quantum encoding to certify that a ciphertext is completely “returned” was developed by Unruh [33] in the context of revocable timed-release encryption: in this case, the revocation process is fully quantum. Our main security definition (Definition 13) is closely related to the security definitions from this work. On the technical side, our work differs significantly since [33] uses techniques related to CSS codes and quantum random oracles, whereas we use privacy amplification and uncertainty relations. Our work also considers the concept of “revocation” outside the context of timed-release encryption, and it is also a conceptual and technical improvement since it shows that a proof of deletion can be classical. Fu and Miller [11] gave the first evidence that quantum information could be used to prove deletion of information and that this could be verified using classical interaction only: they showed that, via a two-party nonlocality game (involving classical interaction), Alice can become convinced that Bob has deleted a single-bit ciphertext (in the sense that the deleted state is unreadable even if Bob were to learn the key). Their results are cast in the device-independent setting (meaning that security holds against arbitrarily malicious quantum devices). Further related work (independent of ours) by Coiteux-Roy and Wolf [7] touches on the question of provable deletion using quantum encodings. However, their work is not concerned with encryption schemes, and therefore does not consider leaking of the key. By contrast, we are explicitly concerned with what it would mean to delete a quantum ciphertext. We note, however, that there are similarities between our scheme and the proposed scheme in [7], namely the use of conjugate coding, with the message encoded in one basis and its conjugate basis, to prove deletion.
Relationship with Quantum Key Distribution. It can be instructive to compare our results to the ones obtained in the analysis of QKD [29]. Firstly, our adversarial model appears different since in certified deletion, we have one honest party (Alice, the sender) and one cheating party (Bob, the receiver), whereas QKD involves two honest parties (Alice and Bob) and one adversary (Eve). Next, the interaction model is different since certified deletion is almost non-interactive, whereas QKD involves various rounds of interaction between Alice and Bob. However, the procedures and proof techniques for certified deletion are close to the ones used in QKD: we use similar encodings into Wiesner states, similar privacy amplification and error correction, and the analysis via an entanglement-based game uses similar entropic uncertainty relations, leading to a security parameter that is very similar to the one in [29]. While we are not aware of any direct reduction from the security of a QKD scheme to certified deletion, we note that, as part of our proof technique, we manage to essentially map the adversarial model for certified deletion to one similar to the QKD model since we split the behaviour of our adversarial Bob into multiple phases: preparation of the joint state $\rho_{ABE}$, measurement of a register $B$ in a determined basis, and finally bounding the advantage that the adversary has in simultaneously making Alice accept the outcome of the measurement performed on $B$ and predicting some measurement outcome on register $A$ given quantum side-information $E$. This scenario is similar to QKD, although we note that the measurement bases are not chosen randomly but are instead consistently in the Hadamard basis (for Bob’s measurement) and that Eve’s challenge is to predict Alice’s measurement in the computational basis only (this situation is reminiscent of the single-basis parameter estimation technique [20, 29]).
1.3 Applications and Open Questions
While the main focus of this work is on the foundations of certified deletion, we can nevertheless envisage potential applications which we briefly discuss below (we leave the formal analyses for future work).
Protection Against Key Leakage. Almost all encryption schemes suffer from the drawback that, eventually (given enough computational time and power), keys are leaked. Here, certified deletion could be used to mitigate this risk. For instance, using certified deletion, a sender using a storage server for encrypted data could at any time (and in particular, as soon as the sender suspects that the keys are about to be leaked) request a proof of deletion of the data. This could give some reassurance on the secrecy of the data; in contrast, classical solutions are clearly inadequate, since classical ciphertexts can always be copied and stored.
Protection Against Data Retention. In 2016, the European Union adopted a regulation on the processing and free movement of personal data [26]. Included is a clause on the “right to be forgotten”: a person should be able to have their data erased whenever its retention is no longer necessary. See also [12]. Certified deletion encryption might help facilitate this scenario in the following way: if a party were to provide their data to an organization via a certified deletion encryption, the organization would be able to certify deletion of the data using the deletion circuit included in the scheme. Future work could develop a type of homomorphic encryption with certified deletion so that the ciphertexts could be useful to some extent while a level of security, in terms of deletion, is maintained. Also useful would be a type of “public verifiability” which would enable parties other than the originator to verify deletion certificates. Contact tracing [5] is another relevant scenario where individual data could be safeguarded against data retention by using certified deletion.
Encryption with Classical Revocation. The concept of ciphertext revocation allows a recipient to provably return a ciphertext (in the sense that the sender can confirm that the ciphertext is returned and that the recipient will not be able to decrypt, even if the key is leaked in the future); such a functionality is unachievable with classical information alone, but it is known to be achievable using quantum ciphertexts [33]. In a sense, our contribution is an extension of revocation since from the point of view of the recipient, whether quantum information is deleted or returned, the end result is similar: the recipient is unable to decrypt even given the key. Our scheme, however, has the advantage of using classical information only for the deletion.
As a use case for classical revocation, consider a situation where Bob loans Alice an amount of money. Alice agrees to pay back the full amount in time T plus 15% interest if Bob does not recall the loan within that time. To implement this scheme, Alice uses a certified deletion encryption scheme to send Bob an encrypted cheque and schedules her computer to send Bob the key at time T. If Bob wishes to recall the loan within time T, he sends Alice the deletion string. Another possible application is timed-release encryption [33]: the key is included in the ciphertext, but the ciphertext itself is encoded under a classical timed-release encryption scheme.
Composable and Everlasting Security. We leave as an open question the composability of our scheme (as well as security beyond the one-time case). We note that through a combination of composability with our quantum encoding, it may be possible to transform a long-term computational assumption into a temporary one. That is, a computational assumption would need to be broken during a protocol, or else the security would be information-theoretically secure as soon as the protocol ends. This is called everlasting security [32].
For example, consider the situation encountered in a zero-knowledge proof system for a $\Sigma$-protocol (for instance, for graph 3-colouring [14]): the prover commits to an encoding of an $\mathsf{NP}$-witness using a statistically binding and computationally concealing commitment scheme. The verifier then randomly chooses which commitments to open, and the prover provides the information required to open the commitment. If, in addition, we could encode the commitments with a scheme that provides composable certified deletion, then the verifier could also prove that the unopened commitments are effectively deleted. This has the potential of ensuring that the zero-knowledge property becomes statistical as long as the computational assumption is not broken during the execution of the proof system. This description assumes an extension of our certified deletion encoding to the computational setting and also somehow assumes that the verifier would collaborate in its deletion actions (we leave for future work the formal statement and analysis). Nevertheless, since zero-knowledge proofs are building blocks for a host of cryptographic protocols, certified deletion has the potential to unleash everlasting security; this is highly desirable given steady progress in both algorithms and quantum computers. Another potential application would be proving erasure (in the context where there is no encryption) [7].
Outline. The remainder of this paper is structured as follows. Section 2 is an introduction to concepts and notation used in the rest of this work. Section 3 lays out the novel security definitions which appear in this paper. Section 4 is an exposition of our main scheme, while Sect. 5 provides a security analysis.
2 Preliminaries
In this section, we outline certain concepts and notational conventions which are used throughout the article. We assume that the reader has a basic familiarity with quantum computation and quantum information. We refer to [18] for further background.
2.1 Notation
We let $\mathcal{Q}$ denote the state space of a single qubit, and we use the notation $|x\rangle$ for any $x \in \{0, 1\}$. For a string $r \in \{0,1\}^n$ and a subset $\mathcal{I} \subseteq [n]$, we write $r|_{\mathcal{I}}$ for the substring of $r$ indexed by $\mathcal{I}$. Let $\mathcal{H}$ be a Hilbert space. The group of unitary operators on $\mathcal{H}$ is denoted by $\mathcal{U}(\mathcal{H})$, and the set of density operators on $\mathcal{H}$ is denoted by $\mathcal{D}(\mathcal{H})$. Through density operators, a Hilbert space may correspond to a quantum system, which we represent by capital letters. The set of diagonal density operators on $\mathcal{H}$ is denoted by $\mathcal{D}_{\mathrm{diag}}(\mathcal{H})$; the elements of this set represent classical states. Discrete random variables are thus modeled as finite-dimensional quantum systems, called registers. A register $X$ takes values in a finite alphabet $\mathcal{X}$. A density operator on registers $X$ and $A$ will be denoted $\rho_{XA}$. We employ the operator norm, which we define for a linear operator $A : \mathcal{H}_1 \rightarrow \mathcal{H}_2$ between finite-dimensional Hilbert spaces as $\Vert A \Vert := \sup_{\Vert v \Vert = 1} \Vert A v \Vert$. Moreover, for two density operators $\rho, \sigma$, we use the notation $\rho \le \sigma$ to say that $\sigma - \rho$ is positive semi-definite.
For a state $\rho_{XA}$ with classical register $X$, the probability distribution of $X$ is given by
$$P_X(x) := \Pr[X = x]_\rho = \operatorname{Tr}[\Gamma(x)_X\, \rho_{XA}],$$
where $\Gamma(x)_X := |x\rangle\langle x|_X$.
Let $\rho_i$ be classical states for integers $i$ such that $1 \le i \le n$; we then write $\rho_1 \otimes \cdots \otimes \rho_n$ for the corresponding joint classical state.
Let $H$ denote the Hadamard operator, which is defined by $H|x\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + (-1)^x |1\rangle\right)$ for $x \in \{0,1\}$. For any strings $x, \theta \in \{0, 1\}^n$, we define $|x^\theta\rangle := H^{\theta_1}|x_1\rangle \otimes \cdots \otimes H^{\theta_n}|x_n\rangle$. States of the form $|x^\theta\rangle$ are here called Wiesner states in recognition of their first use in [35].
We make use of the Einstein-Podolsky-Rosen (EPR) state [10], defined as $|\mathrm{EPR}\rangle := \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$.
We use $x \leftarrow X$ to denote sampling an element $x$ uniformly at random from a set $X$. This uniform randomness is represented in terms of registers by the fully mixed state which, given a $d$-dimensional Hilbert space $\mathcal{H}$, is defined as $\frac{\mathbb{1}_d}{d}$, where $\mathbb{1}_d$ denotes the identity matrix with $d$ rows.
For two quantum states $\rho, \sigma$, we define the trace distance $\Delta(\rho, \sigma) := \frac{1}{2}\Vert \rho - \sigma \Vert_1$. Note also an alternative formula for the trace distance: $\Delta(\rho, \sigma) = \max_{0 \le P \le \mathbb{1}} \operatorname{Tr}[P(\rho - \sigma)]$, where $P$ is a positive operator. Hence, in terms of a physical interpretation, the trace distance is an upper bound on the difference in probabilities, with respect to the states $\rho$ and $\sigma$, that a measurement outcome $P$ may occur.
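The two formulas can be checked numerically; the following sketch (with NumPy, names ours) computes the trace distance via eigenvalues and confirms that the optimal measurement is the projector onto the positive eigenspace of $\rho - \sigma$:

```python
import numpy as np

def trace_distance(rho, sigma):
    # Delta(rho, sigma) = (1/2) * trace norm of (rho - sigma); for a
    # Hermitian matrix this is half the sum of absolute eigenvalues.
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * float(np.sum(np.abs(eigs)))

# |0><0| versus the fully mixed single-qubit state: distance 1/2.
rho = np.diag([1.0, 0.0])
sigma = np.eye(2) / 2
assert abs(trace_distance(rho, sigma) - 0.5) < 1e-12

# The measurement form: max over 0 <= P <= 1 of Tr[P (rho - sigma)] is
# attained by projecting onto the positive eigenspace of rho - sigma.
diff = rho - sigma
w, v = np.linalg.eigh(diff)
P = v[:, w > 0] @ v[:, w > 0].conj().T
assert abs(np.trace(P @ diff).real - trace_distance(rho, sigma)) < 1e-12
```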
We define the purified distance, which is a metric on quantum states, via the generalized fidelity
$$F(\rho_A, \sigma_A) := \left( \operatorname{Tr}\left[ \sqrt{\sqrt{\rho_A}\, \sigma_A \sqrt{\rho_A}} \right] + \sqrt{1 - \operatorname{Tr}[\rho_A]}\, \sqrt{1 - \operatorname{Tr}[\sigma_A]} \right)^2,$$
as $P(\rho_A, \sigma_A) := \sqrt{1 - F(\rho_A, \sigma_A)}$.
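As a numerical illustration (names ours; restricted to normalized states, for which the second term of the generalized fidelity vanishes and the familiar Uhlmann fidelity remains):

```python
import math

import numpy as np

def mat_sqrt(rho):
    # Square root of a positive semi-definite matrix via eigendecomposition.
    w, v = np.linalg.eigh(rho)
    return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def fidelity(rho, sigma):
    # Uhlmann fidelity (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2 for
    # normalized states.
    root = mat_sqrt(rho)
    return float(np.trace(mat_sqrt(root @ sigma @ root)).real ** 2)

def purified_distance(rho, sigma):
    return math.sqrt(max(0.0, 1.0 - fidelity(rho, sigma)))

ket0 = np.diag([1.0, 0.0])   # |0><0|
mixed = np.eye(2) / 2        # fully mixed state
assert abs(fidelity(ket0, ket0) - 1.0) < 1e-9
assert abs(fidelity(ket0, mixed) - 0.5) < 1e-9
assert abs(purified_distance(ket0, mixed) - math.sqrt(0.5)) < 1e-9
```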
2.2 Hash Functions and Error Correction
We make use of universal hash functions, first introduced by Carter and Wegman [6].
(Universal Hashing). Let $\mathcal{H} = \{h : \mathcal{X} \rightarrow \mathcal{Z}\}$ be a family of functions. We say that $\mathcal{H}$ is universal$_2$ if $\Pr[H(x) = H(x')] \le \frac{1}{|\mathcal{Z}|}$ for any two distinct elements $x, x' \in \mathcal{X}$, when $H$ is chosen uniformly at random from $\mathcal{H}$.

Such families exist if $|\mathcal{Z}|$ is a power of two (see [6]). Moreover, there exist universal$_2$ families of hash functions which take strings of length $n$ as input and which contain $2^{O(n)}$ hash functions; it therefore takes $O(n)$ bits to specify a hash function from such a family [34]. Thus, when we discuss communication of hash functions, we assume that both the sender and the recipient are aware of the family from which a hash function has been chosen, and that the transmitted data consists of the $O(n)$ bits used to specify the hash function from the known family.
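A standard example of a universal$_2$ family (not the specific family of [34], which achieves the $O(n)$-bit description size) is the set of GF(2)-linear maps given by random binary matrices; the sketch below (names ours) samples a member and empirically estimates the collision probability for a fixed pair of distinct inputs:

```python
import secrets

def sample_hash(n, ell):
    """Sample H from the universal_2 family of GF(2)-linear maps
    h_A(x) = A x, with A a uniformly random ell x n binary matrix.
    For distinct x, y: Pr[h_A(x) = h_A(y)] = Pr[A (x XOR y) = 0] = 2**-ell."""
    A = [[secrets.randbits(1) for _ in range(n)] for _ in range(ell)]
    def h(x):  # x is a list of n bits
        return tuple(sum(A[i][j] & x[j] for j in range(n)) % 2
                     for i in range(ell))
    return h

# Estimate the collision probability for one fixed pair x != y.
n, ell, trials = 16, 4, 20000
x = [1] + [0] * (n - 1)
y = [0] * (n - 1) + [1]
collisions = sum(1 for _ in range(trials)
                 if (h := sample_hash(n, ell))(x) == h(y))
rate = collisions / trials   # concentrates around 2**-4 = 0.0625
```

Describing a member of this particular family takes $\ell n$ bits (the matrix $A$); variants such as Toeplitz hashing reduce this toward the $O(n)$ bits mentioned above.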
In the context of error correction, we note that linear error correcting codes can generate syndromes, and that corrections to a message can be made when given the syndrome of the correct message. This is called syndrome decoding. Therefore, we implicitly refer to syndrome decoding of a linear code which handles codewords of length $n$ and generates syndromes of length $\mu$ when we use functions $\mathsf{synd} : \{0,1\}^n \rightarrow \{0,1\}^\mu$ and $\mathsf{corr} : \{0,1\}^n \times \{0,1\}^\mu \rightarrow \{0,1\}^n$, where $\mathsf{synd}$ is a syndrome-generating function and $\mathsf{corr}$ is a string-correcting function. We also make reference to the distance of an error correcting code, which is the minimum distance between distinct codewords.
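As a minimal worked example of `synd`/`corr` (our names, instantiated here with the Hamming [7,4] code of distance 3, which corrects any single-bit error):

```python
import numpy as np

# Hamming [7,4] parity-check matrix: column i is the binary expansion of
# i + 1, least significant bit first.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def synd(w):
    # Syndrome of a 7-bit string over GF(2).
    return tuple(H @ w % 2)

def corr(w, target_syndrome):
    """Flip at most one bit of w so that its syndrome matches
    target_syndrome (correct when w differs from the target string in at
    most one position)."""
    s = np.array(synd(w)) ^ np.array(target_syndrome)
    if not s.any():
        return w.copy()
    # With this H, the relative syndrome, read as a binary number, is the
    # 1-indexed position of the differing bit.
    pos = int(s[0] + 2 * s[1] + 4 * s[2]) - 1
    out = w.copy()
    out[pos] ^= 1
    return out

# Alice transmits synd(x) alongside her data; Bob corrects his noisy copy
# w = x XOR e using only that syndrome.
x = np.array([1, 0, 1, 1, 0, 0, 1])
w = x.copy()
w[4] ^= 1                                    # one transmission error
assert (corr(w, synd(x)) == x).all()
assert (corr(x.copy(), synd(x)) == x).all()  # no error: unchanged
```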
2.3 Quantum Channels and Measurements
Among the quantum channels (CPTP maps) we use is the measure-and-evaluate channel associated with a function $f$ defined on a classical register $X$: it measures $X$ and stores $f$ of the outcome in a register $Y$,
$$\mathcal{E}_f[\cdot] := \sum_{x \in X} |f(x)\rangle_Y\, \Gamma(x)_X \cdot \Gamma(x)_X\, \langle f(x)|_Y.$$
2.4 Entropic Uncertainty Relations
The purpose of entropy is to quantify the amount of uncertainty an observer has concerning the outcome of a random variable. Since the uncertainty of random variables can be understood in different ways, there exist different kinds of entropy. Key to our work are min- and max-entropy, first introduced by Renner and König [15, 21] as a generalization of conditional Rényi entropies [22] to the quantum setting. Min-entropy, for instance, quantifies the degree of uniformity of the distribution of a random variable.
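For intuition, in the purely classical, unconditional case min-entropy reduces to $H_{\min}(X) = -\log_2 \max_x P_X(x)$, the number of near-uniform bits one sample of $X$ can supply (a special case of the conditional quantum quantity used below; the function name is ours):

```python
import math

def min_entropy(p):
    """Classical min-entropy of a probability vector p:
    H_min(X) = -log2 max_x p(x)."""
    return -math.log2(max(p))

assert min_entropy([0.25] * 4) == 2.0                      # uniform on 4 outcomes
assert abs(min_entropy([0.5, 0.25, 0.25]) - 1.0) < 1e-12   # one dominant outcome
```

Note that the second distribution has Shannon entropy 1.5 but min-entropy only 1: min-entropy is governed entirely by the most likely outcome, which is what matters for an adversary's best guess.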
The min-entropy of a register $A$ conditioned on a register $B$ of a state $\rho_{AB}$ is denoted $H_{\min}(A|B)_\rho$.

Max-entropy quantifies the size of the support of a random variable, and is here defined by its dual relation to min-entropy: $H_{\max}(A|B)_\rho := -H_{\min}(A|C)_\rho$, where $\rho_{ABC}$ is any pure state with $\operatorname{Tr}_C[\rho_{ABC}] = \rho_{AB}$.
In order to deal with finite-size effects, it is necessary to generalize min- and max-entropy to their smooth variants. For $\epsilon \in \left[0, \sqrt{\operatorname{Tr}[\rho_{AB}]}\right)$, the $\epsilon$-smooth min- and max-entropies $H^\epsilon_{\min}(A|B)_\rho$ and $H^\epsilon_{\max}(A|B)_\rho$ are obtained by maximizing the min-entropy, respectively minimizing the max-entropy, over all states $\tilde{\rho}_{AB}$ within purified distance $\epsilon$ of $\rho_{AB}$.
It is of note that smooth entropies satisfy the following inequality, commonly referred to as the data-processing inequality [28]: for any CPTP map $\mathcal{E}$ acting on the side information $B$, with $\sigma_{AB'} = (\mathbb{1}_A \otimes \mathcal{E})(\rho_{AB})$,
$$H^\epsilon_{\min}(A|B')_\sigma \ge H^\epsilon_{\min}(A|B)_\rho.$$
We use one half of the generalized uncertainty relation theorem found in [27], the precursor of which was introduced by Tomamichel and Renner [31]. The original uncertainty relation was understood in terms of its application to QKD, and was used to prove the secrecy of the key in a finite-key analysis of QKD [30].
Let $\rho_{ACE}$ be a tripartite state, let $\{P^k_A\}_k$ be a projective measurement on $A$, and let $\{M^x_A\}_x$ and $\{N^y_A\}_y$ be POVMs on $A$. Define the post-measurement states
$$\rho_{XKC} = \sum_{x, k} |x\rangle\langle x| \otimes |k\rangle\langle k| \otimes \operatorname{Tr}_{AE}\left[ \sqrt{M^x_A}\, P^k_A\, \rho_{ACE}\, P^k_A\, \sqrt{M^x_A} \right],$$
$$\rho_{YKE} = \sum_{y, k} |y\rangle\langle y| \otimes |k\rangle\langle k| \otimes \operatorname{Tr}_{AC}\left[ \sqrt{N^y_A}\, P^k_A\, \rho_{ACE}\, P^k_A\, \sqrt{N^y_A} \right].$$
The relation then lower-bounds the sum of the corresponding smooth entropies,
$$H^\epsilon_{\min}(X|KE)_\rho + H^\epsilon_{\max}(Y|KC)_\rho \ge \log_2 \frac{1}{c},$$
where $c$ is the maximal overlap between the measurements $\{M^x_A\}$ and $\{N^y_A\}$.
We also use the Leftover Hashing Lemma, introduced by Renner [21]. It is typically understood in relation to the privacy amplification step of QKD. We state it in the form given in [29].
Informally: let $\sigma_{XA}$ be a classical-quantum state, let $\mathcal{H}$ be a universal$_2$ family of hash functions from $\mathcal{X}$ to $\{0,1\}^\ell$, and let $\rho_{S^H}$ be the state of a register containing a uniformly random choice of $H \in \mathcal{H}$. Applying the hash to $X$ yields
$$\zeta_{AYS^H} = \operatorname{Tr}_X[\mathcal{E}_f(\sigma_{AX} \otimes \rho_{S^H})],$$
where $f$ computes $Y = H(X)$. The lemma then guarantees that $Y$ is close to uniform and independent of $A$ and $S^H$, up to a distance of $\frac{1}{2} \cdot 2^{-\frac{1}{2}\left(H^\epsilon_{\min}(X|A)_\sigma - \ell\right)} + 2\epsilon$ in trace distance.
2.5 Statistical Lemmas
The following lemmas are required to bound a specific max-entropy quantity. They are both proven in [29] as part of a security proof of finite-key QKD, and this line of thinking originated in [30].
The following lemma is a consequence of Serfling’s bound [24]. Let $Z_1, \ldots, Z_m$ be binary values, let $\mathcal{I} \subseteq [m]$ be a uniformly random subset of size $k$, let $\bar{\mathcal{I}}$ be its complement of size $s = m - k$, and let $\delta, \nu \in [0, 1]$. Then
$$\Pr\left[ \sum_{i \in \mathcal{I}} Z_i \le k\delta \,\wedge\, \sum_{i \in \bar{\mathcal{I}}} Z_i \ge s(\delta + \nu) \right] \le \exp\left( \frac{-2\nu^2 s k^2}{m(k+1)} \right).$$
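The bound can be sanity-checked numerically: sampling the test set without replacement and counting how often a low error rate on the sample coexists with a high error rate on the remainder (parameters below are illustrative, names ours):

```python
import math
import random

def serfling_trial(z, k, delta, nu):
    """One trial: pick a random size-k test subset of z and report whether
    the sample looks good (error rate <= delta) while the complement is in
    fact bad (error rate >= delta + nu)."""
    m = len(z)
    s = m - k
    idx = set(random.sample(range(m), k))
    test = sum(z[i] for i in idx)
    rest = sum(z[i] for i in range(m) if i not in idx)
    return test <= k * delta and rest >= s * (delta + nu)

m, k, delta, nu = 200, 100, 0.1, 0.15
s = m - k
z = [1] * 30 + [0] * (m - 30)   # 15% errors overall: a borderline string
trials = 5000
rate = sum(serfling_trial(z, k, delta, nu) for _ in range(trials)) / trials
bound = math.exp(-2 * nu**2 * s * k**2 / (m * (k + 1)))
assert rate <= bound + 0.01     # empirical frequency respects the tail bound
```

This is the mechanism by which, in the scheme, passing the verification test on the checked positions certifies (except with exponentially small probability) that the unchecked positions also have a low error rate.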
It will also be useful to condition a quantum state on future events. The following lemma from [29] states that, given a classical-quantum state, there may exist a nearby state on which a certain event does not occur.
Let $\rho_{XB}$ be a classical-quantum state with $X$ a classical register, and let $\Omega$ be an event with $\Pr[\Omega]_\rho \le \epsilon$. Then there exists a classical-quantum state $\tilde{\rho}_{XB}$ with $\Pr[\Omega]_{\tilde{\rho}} = 0$ and $P(\rho_{XB}, \tilde{\rho}_{XB}) \le \sqrt{\epsilon}$.
2.6 Quantum Encryption and Security
Whenever an adversary is mentioned, it is assumed to be quantum and to have unbounded computational power, and we allow it to perform generalized measurements.
Considering that the scheme introduced in this paper is an encryption scheme with a quantum ciphertext, we rely on the “quantum encryption of classical messages” framework developed by Broadbent and Lord [3]. This framework describes an encryption scheme as a set of parameterized CPTP maps which satisfy certain conditions.
A QECM scheme consists of CPTP maps $\Phi^{\mathsf{key}}$, $\Phi^{\mathsf{enc}}_k$, and $\Phi^{\mathsf{dec}}_k$, for key generation, encryption, and decryption respectively, satisfying the correctness condition
$$\operatorname{Tr}[\Gamma(k)\, \Phi^{\mathsf{key}}(1)] > 0 \Rightarrow \operatorname{Tr}[\Gamma(m)\, \Phi^{\mathsf{dec}}_k \circ \Phi^{\mathsf{enc}}_k\, \Gamma(m)] = 1$$
for every key $k$ that may be produced by $\Phi^{\mathsf{key}}$ and every message $m$. Since the messages are classical, the encryption map is determined by its action on classical states:
$$\rho \mapsto \sum_{m \in \{0, 1\}^n} \operatorname{Tr}[\Gamma(m)\, \rho] \cdot \Phi^{\mathsf{enc}}_k(\Gamma(m)).$$
As part of the security of our scheme, we wish to ensure that, should an adversary obtain a copy of the ciphertext and know that the original message is one of two hypotheses, she would not be able to distinguish between them. We refer to this notion of security as ciphertext indistinguishability (called indistinguishable security in [3]). It is best understood in terms of a scheme’s resilience to an adversary performing what we refer to as a distinguishing attack.
A distinguishing attack consists of a pair of CPTP maps $(\mathcal{A}_{0,\lambda}, \mathcal{A}_{1,\lambda})$: the map $\mathcal{A}_{0,\lambda}$ outputs candidate messages together with side information $S$, one of the messages $m_b$ is encrypted, and the map $\mathcal{A}_{1,\lambda}$ attempts to guess the bit $b$. A QECM is $\eta$-indistinguishable if, for every such attack,
$$\mathop{\mathbb{E}}_b\, \mathop{\mathbb{E}}_{k \leftarrow \mathcal{K}} \operatorname{Tr}[\Gamma(b)\, \mathcal{A}_{1, \lambda} \circ (\Phi^{\mathsf{enc}}_{k,b} \otimes \mathbb{1}_S) \circ \mathcal{A}_{0, \lambda}(1)] \le \frac{1}{2} + \eta(\lambda),$$
where the random variable $\mathcal{K}_\lambda$ representing the key is distributed according to
$$\Pr[\mathcal{K}_\lambda = k] = \operatorname{Tr}[\Gamma(k)\, \Phi^{\mathsf{key}}_\lambda(1)].$$
3 Security Definitions
In this section, we introduce a new description of the certified deletion security notion. First, however, we must augment our QECM framework to allow it to detect errors on decryption.
An augmented QECM (AQECM) is a QECM in which the decryption map $\Phi^{\widehat{\mathsf{dec}}}_k$ outputs, in addition to the message register, an extra qubit, and the correctness condition becomes
$$\operatorname{Tr}[\Gamma(k)\, \Phi^{\mathsf{key}}(1)] > 0 \Rightarrow \operatorname{Tr}[\Gamma(m) \otimes \Gamma(1)\, \Phi^{\widehat{\mathsf{dec}}}_k \circ \Phi^{\mathsf{enc}}_k\, \Gamma(m)] = 1.$$
The extra qubit (which will be referred to as a flag), though by itself without any apparent use, serves as a way to indicate whether the decryption process proceeded as expected in any given run. In the case of decryption without error, the circuit should output the flag $1$, and in the case of decryption error, it should output the flag $0$. This allows us to define a criterion by which an AQECM might be robust against a certain amount of noise.
Since the original QECM framework will no longer be used for the rest of this paper, we henceforth note that all further references to the QECM framework are in fact references to the AQECM framework.
A QECM is $\epsilon$-robust if, for every CPTP attack map $\mathcal{A}$ acting on the ciphertext and every pair of distinct messages $m \ne m'$,
$$\mathop{\mathbb{E}}_{k \leftarrow \mathcal{K}} \operatorname{Tr}[\Gamma(m') \otimes \Gamma(1)\, \Phi^{\mathsf{dec}}_k \circ \mathcal{A} \circ \Phi^{\mathsf{enc}}_k\, \Gamma(m)] \le \epsilon.$$
In other words, a QECM is $\epsilon$-robust if, under interference by an adversary, the probability that decryption yields a different message than was encrypted while the decryption circuit nevertheless approves of the outcome is at most $\epsilon$. This is functionally equivalent to a one-time quantum authentication scheme where messages are classical (see e.g. [1, 9, 13]).
Our description takes the form of an augmentation of the QECM framework described in Definition 9. Given a QECM with key k and encrypting message m, the certified deletion property should guarantee that the recipient, Bob, cannot do the following two things simultaneously: (1) Make Alice, the sender, accept his certificate of deletion; and (2) Given k, recover information about m.
A certified deletion encryption scheme augments the QECM with a deletion map $\Phi^{\mathsf{del}}$, used by Bob to produce a verification state from the ciphertext, and a verification map $\Phi^{\mathsf{ver}}$, which uses the key to check that state. Correctness of deletion requires that an honestly generated verification state is always accepted:
$$\operatorname{Tr}[\Gamma(k)\, \Phi^{\mathsf{key}}(1)] > 0 \implies \operatorname{Tr}[\Gamma(1)\, \Phi^{\mathsf{ver}} \circ \left( \Gamma(k) \otimes \left( \Phi^{\mathsf{del}} \circ \Phi^{\mathsf{enc}}_k\, \Gamma(m) \right) \right)] = 1.$$
We call the tuple $(\Phi^{\mathsf{key}}, \Phi^{\mathsf{enc}}, \Phi^{\widehat{\mathsf{dec}}}, \Phi^{\mathsf{del}}, \Phi^{\mathsf{ver}})$ an $n$-certified deletion encryption ($n$-CDE).
We are now ready to define our notion of certified deletion security. We refer the reader to Sect. 1.1 for an informal explanation of the definition, and we recall that notation is defined in Eq. (22).
(Certified Deletion Security). A certified deletion attack consists of CPTP maps $\mathcal{A}_0$, $\mathcal{A}_1$, and $\mathcal{A}_2$, corresponding to the adversary’s behaviour before receiving the ciphertext, upon producing the deletion certificate, and upon receiving the key, respectively. For $b \in \{0, 1\}$, let
$$p_b = \mathop{\mathbb{E}}_{k \leftarrow \mathcal{K}} \operatorname{Tr}[\Gamma(1, 1)\, (\mathbb{1} \otimes \mathcal{A}_{2}) \circ (\Phi^{\mathsf{ver}}_k \otimes \mathbb{1}_{ST'}) \circ \mathcal{A}_{1} \circ (\Phi^{\mathsf{enc}}_{k,b} \otimes \mathbb{1}_S) \circ \mathcal{A}_{0}(1)],$$
where
$$\Pr[\mathcal{K}_\lambda = k] = \operatorname{Tr}[\Gamma(k)\, \Phi^{\mathsf{key}}_\lambda(1)].$$
The scheme is certified deletion secure if, for every such attack, $|p_0 - p_1|$ is negligible in $\lambda$.
4 Constructing an Encryption Scheme with Certified Deletion
| Symbol | Description |
|---|---|
|  | Measurement operator acting on system $A$ with a given setting |
|  | Measurement map applied on the qubits of system $A$ indexed by a given set |
|  | Security parameter |
| $n$ | Length, in bits, of the message |
| $m$ | Total number of qubits sent from encrypting party to decrypting party |
| $k$ | Length, in bits, of the string used for verification of deletion |
|  | Length, in bits, of the string used for extracting randomness |
| $\tau$ | Length, in bits, of the error correction hash |
| $\mu$ | Length, in bits, of the error syndrome |
| $\theta$ | Basis in which the encrypting party prepares her quantum state |
| $\delta$ | Threshold error rate for the verification test |
| $\Theta$ | Set of possible bases from which $\theta$ is chosen |
| $H_{\text{pa}}$ | Hash function, from a universal$_2$ family, used in the privacy amplification scheme |
| $H_{\text{ec}}$ | Hash function, from a universal$_2$ family, used in the error correction scheme |
|  | Seed for the choice of $\theta$ |
|  | Seed for the choice of the hash function used in the error correction scheme |
|  | Seed for the choice of the hash function used in the privacy amplification scheme |
| $\mathsf{synd}$ | Function that computes the error syndrome |
| $\mathsf{corr}$ | Function that computes the corrected string |
(Prepare-and-Measure Certified Deletion). Let the length parameters listed above be positive integers, and let $\Theta$ be a set of possible bases. Let both the privacy amplification and the error correction families of hash functions be universal$_2$. Let $\mathsf{synd}$ be an error syndrome function, let $\mathsf{corr}$ be the corresponding function used to calculate the corrected string, and let $\delta$ be a tolerated error rate for verification. We define a noise-tolerant prepare-and-measure $n$-CDE by Circuits 1–5. This scheme satisfies both Eq. (21) and Eq. (30); it is therefore an $n$-CDE.
5 Security Analysis
5.1 Ciphertext Indistinguishability
In considering whether Scheme 1 has ciphertext indistinguishability (Definition 8), one need only verify that an adversary, given a ciphertext, would not be able to discern whether a known message was encrypted.
Scheme 1 has ciphertext indistinguishability.
5.2 Correctness
Thanks to the syndrome and correction functions included in the scheme, the decryption circuit is robust against a certain amount of noise; that is, below such a level of noise, the decryption circuit outputs Alice’s original message with high probability. This noise threshold is determined by the distance of the linear code used. In particular, where $d$ is the distance of the code, decryption proceeds normally as long as fewer than $\frac{d}{2}$ errors occur to the quantum encoding of the message during transmission through the quantum channel.

To account for greater levels of noise (such as may occur in the presence of an adversary), we show that the error correction measures implemented in Scheme 1 ensure that errors in decryption are detected with high probability. In other words, we show that the scheme is $\epsilon$-robust with $\epsilon = 2^{-\tau}$, where $\tau$ is the length of the error correction hash.

















$$\begin{aligned} \Pr [H_\text{pa}(r|_\mathcal{I}) \ne H_\text{pa}(\hat{x}) \wedge F^\text{ec} = 1] \le \frac{1}{2^\tau}. \end{aligned}$$

Indeed, expanding the event $F^\text{ec} = 1$,

$$\begin{aligned} \Pr [H_\text{pa}(r|_\mathcal{I}) \ne H_\text{pa}(\hat{x}) \wedge F^\text{ec} = 1] &= \Pr [H_\text{pa}(r|_\mathcal{I}) \ne H_\text{pa}(\hat{x}) \wedge H_\text{ec}(p \oplus d) = H_\text{ec}(\hat{x})] \\ &= \Pr [H_\text{pa}(r|_\mathcal{I}) \ne H_\text{pa}(\hat{x}) \wedge H_\text{ec}(r|_\mathcal{I}) = H_\text{ec}(\hat{x})] \\ &\le \Pr [r|_\mathcal{I} \ne \hat{x} \wedge H_\text{ec}(r|_\mathcal{I}) = H_\text{ec}(\hat{x})] \\ &= \Pr [r|_\mathcal{I} \ne \hat{x}] \Pr [H_\text{ec}(r|_\mathcal{I}) = H_\text{ec}(\hat{x}) \mid r|_\mathcal{I} \ne \hat{x}] \\ &\le \Pr [H_\text{ec}(r|_\mathcal{I}) = H_\text{ec}(\hat{x}) \mid r|_\mathcal{I} \ne \hat{x}] \le \frac{1}{2^\tau}, \end{aligned}$$

where the final inequality uses the two-universality of $H_\text{ec}$.
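The collision probability of a two-universal family with $\tau$-bit outputs can be checked empirically. The sketch below uses uniformly random binary matrices as the (assumed) two-universal family; the parameters `N` and `TAU` are illustrative, not the scheme’s:

```python
import random

N, TAU = 16, 8   # input and output lengths (bits) -- illustrative only

def random_linear_hash(rng):
    # A uniformly random TAU x N binary matrix M defines a
    # two-universal family: for x != y, Pr_M[Mx = My] = 2**-TAU.
    return [[rng.randrange(2) for _ in range(N)] for _ in range(TAU)]

def apply_hash(M, x):
    # Matrix-vector product over GF(2)
    return tuple(sum(m * b for m, b in zip(row, x)) % 2 for row in M)

rng = random.Random(7)
x = [rng.randrange(2) for _ in range(N)]
y = list(x)
y[0] ^= 1                      # ensure x != y

trials = 20000
collisions = sum(
    apply_hash(M, x) == apply_hash(M, y)
    for M in (random_linear_hash(rng) for _ in range(trials))
)
# The empirical collision rate is close to 2**-TAU (about 0.0039).
assert collisions / trials < 2 * 2 ** -TAU
```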


5.3 Certified Deletion Security
We consider the probability of each of the following events:

- verification passes and Bob outputs 1, in the case that Alice encrypted the $b = 0$ message;
- verification passes and Bob outputs 1, in the case that Alice encrypted the $b = 1$ message.
(Prepare-and-Measure Game). Let an n-CDE be given, with parameters implicit and with circuits defined as in Scheme 1, and fix a certified deletion attack. The game is parametric in $b$ and is called Game 1(b).
1. Run the key generation and encryption circuits: generate the keys and random strings of the scheme (Eq. (44)) and compute the ciphertext (Eqs. (45) and (46)).
2. Run the adversary on the ciphertext and compute the verification outcome (Eqs. (47) and (48)).
3. If verification succeeds, run the adversary to the end; else, fix the output as in Eq. (49).
Let $p_b$ denote the probability that the output of Game 1(b) is 1. Comparing Game 1 with Definition 13, we note that the former runs the adversary to the end only in the case that verification succeeds, while the latter runs the adversary to the end in both cases. However, the obtained distribution of outputs is the same, since in Game 1 the output is 1 exactly when the adversary outputs 1 and verification succeeds. Hence we wish to bound $|p_0 - p_1|$ in Game 1. Instead of directly analyzing Game 1, we analyze a game wherein the parties use entanglement; this allows us to express the game in a format that is conducive to the analysis that follows.
(EPR Game). Alice is the sender, and Bob is the recipient and adversary. The game is parametric in $b$ and is called Game 2(b).
1. Bob selects a string and sends it to Alice. He prepares a tripartite state in which each system contains m qubits, sends the A system to Alice, and keeps the other two systems, one of which is B. He measures the B system in the Hadamard basis, obtains a string $y$, and sends y to Alice.
2. Alice samples her classical randomness, including a basis string and the other strings required by the scheme. She applies a CPTP map to system A which measures each qubit in the computational basis or the Hadamard basis, according to the corresponding bit of her basis string. Call the result r. Alice computes the derived quantities and selects a message: if $b = 0$, she forms the ciphertext as in Eq. (50) and sends it to Bob; else, she forms it as in Eq. (51).
3. If verification succeeds, Bob computes his output $b'$ by applying some CPTP map (Eq. (52)); else $b' = 0$.
Game 2 is intended to model a purified version of Game 1. Note that Bob’s measurement of B in the Hadamard basis is meant to mimic the deletion step of Scheme 1. Although it may seem strange that we impose a limitation of measurement basis on Bob here, it is in fact no limitation at all; indeed, since Bob prepares the initial state himself, he is in total control of the state that gets measured, and hence may assume an arbitrary degree of control over the measurement outcome. Therefore, the assumption that he measures in the Hadamard basis is made without loss of generality.
It may also appear that the adversary in Game 1 has more information when producing the deletion string than Bob in Game 2. This, however, is not true, as the adversary in Game 1 has only received information from Alice that appears to him to be uniformly random (as mentioned, the statement is formalized later, in Sect. 5.4). In order to further the analysis, we assign more precise notation for the maps described in Game 2.
Bob’s Measurements. Measurement of Bob’s system B of m qubits in Step 1 is represented using two CPTP maps: one acting on one subset of the qubits, with the outcome recorded in register Y, and one acting on the complementary subset, with the outcome recorded in register W. Note, however, that Bob has no access to the string that determines this partition, and therefore has no way of computing it. The formal separation of registers Y and W is simply for future ease of specifying the qubits to which we refer.
Recall the definition of the measurements from Sect. 2.3.
The import of the outcome of this comparison test is that if Bob is good at guessing Alice’s information in the Hadamard basis, it is unlikely that he is good at guessing Alice’s information in the computational basis. This trade-off is represented in the uncertainty relation of Proposition 2.
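A toy single-qubit simulation of this trade-off (our own illustration, not part of the proof): the state $|+\rangle$ yields a perfectly predictable Hadamard-basis outcome, but a uniformly random computational-basis outcome.

```python
import math, random

# The |+> state: amplitudes over the computational basis {|0>, |1>}.
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))

def measure_computational(state, rng):
    a0, _ = state
    return 0 if rng.random() < abs(a0) ** 2 else 1

def measure_hadamard(state, rng):
    a0, a1 = state
    b_plus = (a0 + a1) / math.sqrt(2)       # amplitude on |+>
    p_zero = round(abs(b_plus) ** 2, 12)    # round off float error
    return 0 if rng.random() < p_zero else 1

rng = random.Random(1)
had = [measure_hadamard(plus, rng) for _ in range(1000)]
comp = [measure_computational(plus, rng) for _ in range(1000)]

# Perfect predictability in the Hadamard basis ...
assert all(o == 0 for o in had)
# ... comes at the cost of a uniformly random computational outcome.
assert 350 < sum(comp) < 650
```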
Note that we can define the post-comparison-test state, since the index sets involved are pairwise disjoint. The resulting state is denoted $\hat{\sigma}$.
The following proposition shows that, in order to ensure that Bob’s knowledge of X is limited after a successful comparison test and after receiving the key, his knowledge about Alice’s hypothetical Hadamard-basis measurement outcome must be bounded from below.
In the spirit of [29], we provide an upper bound for the max-entropy quantity, thus establishing a lower bound for the min-entropy quantity.


The bound holds for any $\nu \in (0, \frac{1}{2} - \delta]$ such that $\epsilon(\nu)^2 < \Pr[F^\mathrm{comp} = 1]_\sigma = \Pr[F^\mathrm{comp} = 1]_{\hat{\sigma}}$.
At this point, we use Proposition 3, the Leftover Hashing Lemma, to turn the min-entropy bound into a statement about how close to uniformly random the hashed string is from Bob’s perspective. We name this final state after the privacy-amplification function applied, and we compare it to the ideal state in which the corresponding register is replaced by the fully mixed state. For the remaining case, we note that the trace distance between the real and ideal states is bounded above; combining these inequalities results in the proof of the following corollary.
As above, the corollary holds for any $\nu \in (0, \frac{1}{2} - \delta]$.
Finally, we would like to translate this into a statement about the quantity

$$\begin{aligned} \vert \Pr [b' = 1 \wedge ok = 1 \mid \mathrm{Game~2}(0)] - \Pr [b' = 1 \wedge ok = 1 \mid \mathrm{Game~2}(1)] \vert \end{aligned}$$

in Game 2. For each value of $b$, we may write

$$\begin{aligned} &\Pr [b' = 1 \wedge ok = 1 \mid \mathrm{Game~2}(b)] \\ &= \sum_{\zeta} \operatorname{Tr}[\zeta_{\tilde{X} S F^\mathrm{comp} E \wedge F^\mathrm{comp} = 1}] \Pr [b' = 1 \mid \mathrm{Game~2}(b)], \end{aligned}$$

so that

$$\begin{aligned} &\vert \Pr [b' = 1 \wedge ok = 1 \mid \mathrm{Game~2}(0)] - \Pr [b' = 1 \wedge ok = 1 \mid \mathrm{Game~2}(1)] \vert \\ &\le \sum_{\zeta} \operatorname{Tr}[\zeta_{\tilde{X} S F^\mathrm{comp} E \wedge F^\mathrm{comp} = 1}] \Vert \zeta^0_{\tilde{X} S F^\mathrm{comp} E \wedge F^\mathrm{comp} = 1} - \zeta^1_{\tilde{X} S F^\mathrm{comp} E \wedge F^\mathrm{comp} = 1} \Vert_{\operatorname{Tr}} \\ &\le \sum_{\zeta} 2 \operatorname{Tr}[\zeta_{\tilde{X} S F^\mathrm{comp} E \wedge F^\mathrm{comp} = 1}] \left( \frac{1}{2}\sqrt{2^{-s(1 - h(\delta + \nu)) + n}} + 2\epsilon(\nu) \right). \end{aligned}$$
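The parenthesized factor in the last line can be evaluated numerically. The sketch below uses hypothetical parameter values, and takes $\epsilon(\nu)$ as an input since its definition is not reproduced in this section; it shows that the square-root term becomes negligible once $s(1 - h(\delta + \nu))$ exceeds $n$:

```python
import math

def h(p):
    # Binary entropy function; h(0) = h(1) = 0.
    if p in (0, 1):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def advantage_term(s, n, delta, nu, eps_nu):
    # The parenthesized factor from the bound:
    #   (1/2) * sqrt(2**(-s*(1 - h(delta + nu)) + n)) + 2*eps(nu),
    # with eps(nu) supplied externally.
    return 0.5 * math.sqrt(2 ** (-s * (1 - h(delta + nu)) + n)) + 2 * eps_nu

# Hypothetical parameters: once s*(1 - h(delta + nu)) exceeds n, the
# square-root term is negligible and eps(nu) dominates the bound.
assert advantage_term(s=2000, n=100, delta=0.02, nu=0.01, eps_nu=1e-9) < 1e-8
```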





5.4 Security Reduction
We now show that the security of Game 1 can be reduced to that of Game 2. To do so, we construct a sequence of games starting at Game 1 and ending at Game 2, and show that each successive transformation either has no effect on, or can only increase, the advantage in distinguishing the case $b = 0$ from the case $b = 1$. For a game G, the advantage is as defined in Eq. (34).
Let the next game be like the previous one except that system A is measured after, rather than before, running the adversary’s map. The advantage is unchanged because the measurement and the adversary’s map act on distinct systems, and therefore commute.
We note that the final game of the sequence is like Game 2 except that, in the latter, Bob is the party that prepares the initial state. Since allowing Bob to select the initial state can only increase the advantage, the advantage in Game 2 is at least that of the final game. This concludes the proof.
Scheme 1 is certified deletion secure.
We would like to thank Carl Miller for related discussions. We are grateful to the anonymous reviewers for help in improving this work. This work was supported by the U.S. Air Force Office of Scientific Research under award number FA9550-17-1-0083, Canada’s NSERC, an Ontario ERA, and the University of Ottawa’s Research Chairs program.