Periodic Reporting for period 1 - QSPEED (Scalable quantum algorithms in highly noisy environments)
Reporting period: 2023-09-01 to 2025-08-31
Unfortunately, to make quantum computers a reality, one needs to make them resilient to the unavoidable errors occurring during the execution of their algorithms. These errors are one of the main reasons why, to date, quantum computers are unable to outperform state-of-the-art classical computers.
The main approach for resisting errors is quantum error-correction. However, it is very costly in terms of resources: many qubits (quantum bits) are required to detect and correct these errors, and the errors must be sufficiently rare for the technique to work.
The high cost of this approach, together with the demanding conditions it places on hardware quality, makes it very challenging to implement. We therefore need to find alternative, more efficient techniques to resist errors.
The overall objective of this project is precisely to address this challenge: to find ways to design quantum algorithms that are inherently resilient to errors by construction.
More generally, the scope of this project is to find ways to better resist noise in quantum information processing tasks.
My first attempt at finding an alternative to quantum error-correction was to assume a specific, experimentally motivated noise model for the hardware, and to exploit this noise model to design noise-resilient algorithms.
For this, I considered the noise model satisfied by superconducting cat-qubits, a special kind of superconducting qubit developed by the French quantum startup Alice & Bob and by the technology giant Amazon.
This noise model has the specificity that only one kind of error, the "bit-flip", is produced after each quantum gate in the algorithm. With collaborators, we have shown that, by exploiting this property, it is possible to design quantum circuits that are intrinsically noise-resilient without performing quantum error-correction, which is an interesting result in itself. Unfortunately, an important shortcoming was that the resulting algorithms are efficiently simulable classically, ruling out the potential for a quantum speedup.
This "negative result" turned out to provide a useful benchmarking protocol for experimentalists. In simple terms, the specific noise model satisfied by superconducting cat-qubits is crucial for the scalability of this platform. While it is easy to check that the noise model holds for each individual gate in an algorithm, it is much more challenging to verify that the noise structure is preserved in large-scale circuits. By combining the circuits we designed with the classical algorithm we developed to simulate them efficiently, our benchmark can detect specific, yet important, violations of the cat-qubit noise model that would only appear in large-scale circuits. Such violations can be produced by non-local effects in the noise that are invisible at the level of individual gates. Our protocol is therefore useful for checking hardware reliability in large-scale circuits, and can be used to assess whether superconducting cat-qubits could be scalable. This work was published in npj Quantum Information in August 2025.
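To give an intuition of why bit-flip-only noise is classically tractable, the following toy sketch (my own illustrative example, not the published protocol; all function names and parameters are hypothetical) propagates random bit-flip errors through a circuit of CNOT gates with a plain Monte Carlo simulation, tracking classical bit-strings only:

```python
import random

def apply_cnot(bits, c, t):
    # CNOT flips the target bit when the control bit is 1
    bits[t] ^= bits[c]

def noisy_run(n_qubits, cnots, p_flip, rng):
    """One classical Monte Carlo run of a CNOT circuit whose only
    noise is an independent bit-flip with probability p_flip on each
    involved qubit after each gate (a caricature of the cat-qubit
    noise model)."""
    bits = [0] * n_qubits
    for c, t in cnots:
        apply_cnot(bits, c, t)
        for q in (c, t):
            if rng.random() < p_flip:
                bits[q] ^= 1  # bit-flip error
    return tuple(bits)

rng = random.Random(0)
circuit = [(0, 1), (1, 2), (0, 2)]  # arbitrary toy circuit
counts = {}
for _ in range(10000):
    out = noisy_run(3, circuit, p_flip=0.05, rng=rng)
    counts[out] = counts.get(out, 0) + 1
```

Because bit-flips propagate through such gates as classical bits, no quantum state ever needs to be stored, which is the flavour of argument behind the efficient classical simulation mentioned above.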
Beyond this work, I also studied how to better resist noise in distributed quantum systems. Such systems are relevant to quantum computing, given the growing interest in distributing a quantum computation over remote processors, as well as to quantum communication and distributed quantum metrology.
A first work in this direction, done with collaborators, showed that entanglement dilution can be a useful tool to resist noise in such distributed systems. Entanglement dilution is a protocol originally designed to spread a fixed amount of entanglement, contained in some number of two-qubit states, over a larger number of two-qubit states (each of them less entangled, the total amount of entanglement being preserved through dilution). Dilution is implemented in the formalism of local operations and classical communication (the natural formalism for distributed settings). This work is interesting as yet another example of an alternative to implementing quantum error-correcting codes to resist noise. In particular, I compared the performance of entanglement dilution to that of error-correction. While I have shown that error-correction achieves better performance for the noise models we considered, the result remains interesting because it provides a new, previously unknown application of entanglement dilution, and it motivates further studies of whether dilution could outperform error-correction for specific noise models. This article was published in Physical Review A in 2024.
Still in the context of distributed systems, I analyzed the question of the optimal way to distribute entanglement between distant parties in noisy systems. More precisely, a crucial requirement in distributed quantum systems is to distribute entanglement between far-away parties. This is typically done by generating two entangled qubits qA and qB and providing them to the two distant parties A and B, which must share entanglement. Since the transmission channel is typically noisy, a natural question is where the entanglement source should be located to minimize the amount of noise introduced during the transmission of the entangled pairs. In this work, we have shown that it is typically better to place the source midway between the distant parties A and B (in other words, it is typically worse to co-locate the entanglement source with either A or B). This work is therefore another attempt to minimize noise without relying on quantum error-correction. It has been accessible on arXiv since 2025 and is currently under review at Physical Review A.
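A toy model gives a feel for why the source position matters (my own minimal illustration, not the paper's analysis; the amplitude-damping noise, function names, and attenuation length are all assumptions). Each transmitted qubit undergoes amplitude damping whose survival probability decays exponentially with the distance travelled; the snippet compares the Bell-state fidelity for a mid-point source against a source co-located with party A.

```python
import math

def kron(A, B):
    # Kronecker product of two square matrices given as nested lists
    a, b = len(A), len(B)
    return [[A[i // b][j // b] * B[i % b][j % b]
             for j in range(a * b)] for i in range(a * b)]

def ad_kraus(gamma):
    # Kraus operators of the amplitude-damping channel with damping gamma
    return [[[1.0, 0.0], [0.0, math.sqrt(1.0 - gamma)]],
            [[0.0, math.sqrt(gamma)], [0.0, 0.0]]]

def bell_fidelity(gamma_a, gamma_b):
    """Fidelity of |Phi+> = (|00>+|11>)/sqrt(2) after amplitude
    damping gamma_a on qubit A and gamma_b on qubit B, summed over
    the (real) Kraus branches."""
    phi = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]
    fid = 0.0
    for Ka in ad_kraus(gamma_a):
        for Kb in ad_kraus(gamma_b):
            K = kron(Ka, Kb)
            psi = [sum(K[i][j] * phi[j] for j in range(4)) for i in range(4)]
            fid += sum(phi[i] * psi[i] for i in range(4)) ** 2
    return fid

def gamma(distance, att_length=10.0):
    # damping probability accumulated over a fibre of the given length
    return 1.0 - math.exp(-distance / att_length)

L = 8.0  # total distance between A and B (arbitrary units)
f_mid = bell_fidelity(gamma(L / 2), gamma(L / 2))  # source in the middle
f_end = bell_fidelity(gamma(0.0), gamma(L))        # source at party A
```

In this toy model the mid-point placement wins, with fidelities (1+s^2)/2 versus (1+s)^2/4 for survival amplitude s = exp(-L/(2*att_length)); how general this advantage is across noise models is precisely what the actual work investigates.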
I also produced a contribution, still on the topic of noise-resilience, but this time assuming that quantum error-correction is performed. It concerned the central topic of magic states. Magic states are specific quantum states that are necessary to perform universal quantum computing when one does quantum error-correction. A major question in the field is to find cheaper ways to produce magic states, as they are suspected to vastly dominate the resource cost (in terms of qubits or gates) of fault-tolerant quantum computers; this question has driven a significant amount of research over the last decade. I have shown that for concatenated codes, an important class of error-correcting codes, regarding magic states as what vastly dominates the resource cost of the computation is typically misleading. In typical examples, optimizing the cost of preparing these states only leads to a marginal reduction of the computation's cost, compared to other optimizations that could vastly reduce it. This result is highly surprising, as it goes against an important consensus in the field. Additionally, in this work I provided the very first analytical approach to estimate the resource cost of concatenated codes for implementing *arbitrary* algorithms: before this work, no approach was able to provide simple closed-form expressions for the resource cost (qubit count) of a general algorithm. This result is particularly timely given that concatenated codes have recently been shown to outperform leading error-correcting approaches, such as the surface code, in some key figures of merit. I consider this article to be one of my most important contributions during my fellowship. It has been on arXiv since 2024 and will soon be sent for review to the journal Quantum.
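For intuition about such resource counts, here is a standard textbook-style estimate (not the paper's actual expressions; the error rate, threshold, and target are illustrative assumptions): concatenating a distance-3 code k times suppresses a below-threshold physical error rate p roughly as p_th * (p/p_th)^(2^k), at the price of n_code^k physical qubits per logical qubit.

```python
def concatenation_levels(p, p_th, target):
    """Smallest concatenation level k such that the standard
    threshold-theorem scaling p_th * (p/p_th)**(2**k) for a
    distance-3 code reaches the target logical error rate."""
    k = 0
    while p_th * (p / p_th) ** (2 ** k) > target:
        k += 1
    return k

def physical_qubits(n_logical, n_code, k):
    # each concatenation level multiplies the qubit count by n_code
    return n_logical * n_code ** k

# toy numbers: Steane [[7,1,3]] code, physical error rate 1e-3,
# assumed threshold 1e-2, target logical error rate 1e-15
k = concatenation_levels(1e-3, 1e-2, 1e-15)
qubits_per_logical = physical_qubits(1, 7, k)
```

With these toy numbers, four levels of concatenation suffice, costing 7^4 = 2401 physical qubits per logical qubit; closed-form expressions of this flavour, extended to arbitrary algorithms, are what the work above provides.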
Finally, I participated in a study analyzing the resources (in particular, energy) required by some quantum machine learning algorithms. In particular, I analyzed this cost for noisy implementations of the algorithms. There, I have shown, with collaborators, that the point at which quantum kernel approaches could provide an energy advantage over classical supercomputers would typically occur at problem sizes so large that they would be impractical for applications. This work has been accessible as a pre-print since 2024.
(1) A benchmarking protocol able to check whether the noise model of superconducting cat-qubits is preserved when they are implemented in large-scale quantum circuits. This protocol is already experimentally applicable and can detect non-local effects in the noise that cannot be observed at the level of individual gates, only in large-scale circuits.
(2) An approach to make circuits based on superconducting cat-qubits noise-resilient without implementing quantum error-correction. However, while these circuits can be made noise-resilient, they cannot lead to a quantum computational advantage in the examples we analyzed. Further research is necessary either to derive a general no-go theorem showing that noise-resilient circuits (without error-correction) yielding a computational advantage with such qubits are fundamentally impossible, or to exhibit an example where such an advantage is possible.
(3) An approach using entanglement dilution to mitigate noise in distributed settings for specific noise models. Further research is however necessary to see (i) whether it can outperform quantum error-correction, and (ii) whether it works for a wider class of noise models than the ones we studied, as the approach does not seem universal.
(4) Evidence that, in order to distribute quantum entanglement between remote parties, it is better, for many important noise models, to place the entanglement source in between the parties that should share the entanglement.
(5) A general analytical approach providing simple closed-form expressions for the total number of qubits required by concatenated codes to perform universal quantum computing.
(6) Strong evidence that the dominant cost (in terms of qubit count) of concatenated error-correction schemes is not the generation of magic states, contrary to what is typically claimed by the community.