## Final Report Summary - QUANTSTAT (Quantum Theory and Statistics)

Quantum information science studies the potential applications of microscopic physical systems that are governed by the laws of quantum mechanics for the storage, transmission, and manipulation of information. In the most fundamental communication scenario, the information to be transmitted (e.g., a natural number between 1 and r) is encoded in the quantum state of some physical system (e.g., the polarization of a photon), which is then sent through a channel (e.g., an optical fiber) to the receiver, who has to make a measurement on the system (s)he received in order to identify its state with one of finitely many possible messages (a number between 1 and r). This last step is called state discrimination, a fundamental task not only in this scenario but in any communication or computation problem whose end result is a physical system in one of finitely many possible quantum states, where completing the protocol requires determining the identity of this state. The project addressed various mathematical problems related to the analysis of the above information-theoretic tasks. Below we briefly explain the background of the problems that we studied, and the main achievements of the project.
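To make the binary case of state discrimination concrete: the best achievable success probability of any measurement distinguishing two states ρ₀, ρ₁ sent with prior probabilities p₀, p₁ is given by the well-known Helstrom bound, ½(1 + ‖p₀ρ₀ − p₁ρ₁‖₁). The following sketch (a standard textbook computation using NumPy, not code from the project) evaluates it for two example qubit states:

```python
import numpy as np

def dm(psi):
    """Density matrix of a (normalized) pure state vector."""
    psi = np.asarray(psi, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

# Two equiprobable pure qubit states: |0> and |+>.
rho0 = dm([1, 0])
rho1 = dm([1, 1])

# Helstrom bound: p_succ = 1/2 * (1 + ||p0*rho0 - p1*rho1||_1),
# where the trace norm of a Hermitian matrix is the sum of |eigenvalues|.
delta = 0.5 * rho0 - 0.5 * rho1
trace_norm = np.abs(np.linalg.eigvalsh(delta)).sum()
p_success = 0.5 * (1.0 + trace_norm)
# For two pure states with overlap |<psi0|psi1>|^2 = 1/2 this gives
# 1/2 * (1 + sqrt(1/2)) ~ 0.854.
```

No measurement can beat this success probability, which is why repeated transmissions (many copies) are needed to drive the error down, as discussed next.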

If in the above scenario there are only two possible messages to send, which can be labeled by 0 and 1, the identification at the receiver’s side fails either if (s)he erroneously identifies message 0 as 1 (type I error) or the other way around (type II error). The probability of these errors can be reduced if the sender sends the same message many times, which results in several identical copies of the same state at the receiver’s side. One of the cornerstones of quantum information theory is the quantum Stein’s lemma, which states that if the receiver selects a measurement strategy that keeps the type I error probability below a fixed threshold and minimizes the type II error probability under this constraint, then the optimal type II error probability goes to zero exponentially fast as the number of copies increases, and the exponent is given by the relative entropy of the two states. Here, the relative entropy is a function that assigns a number to any two quantum states, and Stein’s lemma provides an operational interpretation of this function as a measure of the distinguishability of the two states. While this result has great conceptual importance, it is not completely satisfying from a practical point of view, where one only has a finite number of copies at hand. In this project we refined the above asymptotic statement and provided bounds on the deviation of the error probability from its asymptotic value as a function of the sample size. These bounds decrease with the sample size, and they are optimal in the sense that their speed of convergence to zero cannot be improved.
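For concreteness, the relative entropy appearing in Stein’s lemma is Umegaki’s quantum relative entropy, D(ρ‖σ) = Tr ρ(log ρ − log σ). A minimal numerical sketch (illustrative only, not code from the project) follows; for commuting states it reduces to the classical Kullback–Leibler divergence:

```python
import numpy as np

def qrel_entropy(rho, sigma):
    """Umegaki relative entropy D(rho||sigma) = Tr[rho (log rho - log sigma)], in nats."""
    def logm_psd(a):
        # Matrix logarithm of a positive semidefinite matrix via eigendecomposition;
        # eigenvalues are floored to avoid log(0) for (numerically) singular states.
        w, v = np.linalg.eigh(a)
        w = np.clip(w, 1e-12, None)
        return v @ np.diag(np.log(w)) @ v.conj().T
    return float(np.real(np.trace(rho @ (logm_psd(rho) - logm_psd(sigma)))))

# Commuting (diagonal) qubit states: the result equals the classical
# KL divergence 0.9*ln(0.9/0.5) + 0.1*ln(0.1/0.5).
rho = np.diag([0.9, 0.1]).astype(complex)
sigma = np.diag([0.5, 0.5]).astype(complex)
d = qrel_entropy(rho, sigma)
```

Stein’s lemma says that, with the type I error capped at a fixed threshold, the optimal type II error over n copies behaves as e^{−nD(ρ‖σ)} to leading order; the project’s refinement bounds the finite-n deviation from this leading behavior.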

The protocol described above is only one possible way to optimize the error probabilities. Another natural option is to require the type II error to vanish with a given exponential speed, in which case two things can happen: if the exponent is below the relative entropy, then the type I errors also decay exponentially fast, with an exponent that can be expressed in terms of the Rényi α-divergences of the two states, where α ∈ (0, 1) is a parameter related to the trade-off between the two error probabilities. On the other hand, if the rate of the type II error is above the relative entropy, then the protocol fails with certainty, i.e., the type I error probability goes to 1 as the number of copies tends to infinity; this is the so-called strong converse property. The quantification of the trade-off between the relevant exponents in this case had been an open problem for a long time, and one of the main results of the project was the solution of this problem. This result is all the more relevant as it establishes a one-to-one correspondence between the operationally defined trade-off values and a new notion of quantum Rényi α-divergences with α > 1, which has been introduced recently by different independent research groups, partly based on purely mathematical considerations, and partly as a useful tool in proving strong converse theorems for classical information transmission over quantum channels.
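The new α > 1 divergences referred to here are commonly known in the literature as the sandwiched Rényi divergences, D̃_α(ρ‖σ) = (1/(α−1)) log Tr[(σ^{(1−α)/(2α)} ρ σ^{(1−α)/(2α)})^α]. A numerical sketch of this definition (illustrative only, not the project’s code; σ is assumed invertible):

```python
import numpy as np

def mpow(a, p):
    """Fractional power of a positive semidefinite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)
    return v @ np.diag(w ** p) @ v.conj().T

def sandwiched_renyi(rho, sigma, alpha):
    """Sandwiched Renyi divergence for alpha != 1 (natural log, nats).

    D_alpha(rho||sigma) = 1/(alpha-1) * log Tr[(s @ rho @ s)^alpha],
    with s = sigma^((1-alpha)/(2*alpha)); sigma must be invertible.
    """
    s = mpow(sigma, (1.0 - alpha) / (2.0 * alpha))
    x = s @ rho @ s
    return float(np.real(np.log(np.trace(mpow(x, alpha)))) / (alpha - 1.0))

# Commuting (diagonal) states: the result equals the classical Renyi
# divergence (1/(alpha-1)) * log( sum_i p_i^alpha * q_i^(1-alpha) ).
rho = np.diag([0.9, 0.1]).astype(complex)
sigma = np.diag([0.5, 0.5]).astype(complex)
d2 = sandwiched_renyi(rho, sigma, 2.0)
```

In the literature, results of the kind described above express the strong converse exponent at a type II rate r above the relative entropy through an optimization of the form sup_{α>1} ((α−1)/α)(r − D̃_α(ρ‖σ)); the exact statement should be taken from the project’s publications.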
