CORDIS - EU research results

Mach-Zehnder and Interference Get Enhanced by Reinforcement Learning

Periodic Reporting for period 1 - MAZINGER (Mach-Zehnder and Interference Get Enhanced by Reinforcement Learning)

Reporting period: 2021-01-01 to 2022-12-31

Light-based technologies are an essential component of countless applications in our everyday life. Over the last few decades, optical technologies have also provided a key platform for several areas of research, from fundamental tests of quantum mechanics to quantum simulation and communication. More recently, optical circuits of modest size have been applied to various tasks, for instance to provide evidence of a quantum computational advantage. The potential of these technologies is rooted in the properties of individual photons, the elementary particles of light, such as their mobility, speed, high bandwidth and ease of manipulation. At the same time, machine learning (ML) has established itself as a powerful approach to problem solving. Motivated by the outstanding success of both fields, first steps have been taken to bring ML and photonics together. The results fall into two main categories. On the one hand, (i) photonic devices serve as efficient platforms on which to implement ML: proof-of-principle demonstrations include optical neural networks and neuromorphic processors, which promise higher performance than conventional architectures. On the other hand, (ii) ML can be used to gain insights into single-photon quantum processes; it also offers a promising toolbox to optimize modern-day photonic devices and achieve even stronger demonstrations. Here, the major obstacles are imperfect single-photon sources and detectors, as well as imperfect control over the parameters (phases) that govern the dynamics of such circuits. While practical solutions can be engineered for the first two issues, the latter remains a real challenge when compact, dense integration of many optical components is desired.
Specifically, densely integrated circuits with many components require hardware and software solutions to mitigate the effects of crosstalk noise and biased errors in the phase settings. An effective solution to this problem is necessary to ensure that future protocols and infrastructures – believed to offer great benefits over today's classical technologies – can be implemented on hardware affected by some unavoidable level of noise.
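To make the role of these phases concrete: they are the tunable knobs of interferometric building blocks such as the Mach-Zehnder interferometer (MZI) in the project's acronym. The following is a minimal, generic illustration of standard textbook physics, not project code: the transfer matrix of an MZI, showing how a single internal phase sets the splitting ratio between the two output ports.

```python
import numpy as np

def mzi(theta, phi=0.0):
    """Transfer matrix of a Mach-Zehnder interferometer: two balanced
    50:50 beam splitters enclosing an internal phase shifter theta,
    preceded by an external phase shifter phi (textbook convention)."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 beam splitter
    ps = lambda x: np.diag([np.exp(1j * x), 1])      # single-arm phase shifter
    return bs @ ps(theta) @ bs @ ps(phi)

# The probability of light entering port 0 and leaving port 0 is sin^2(theta/2),
# so the internal phase alone tunes the device from fully crossing to fully
# transmitting:
T = abs(mzi(np.pi)[0, 0])**2   # theta = pi: full transmission, T = 1
```

A miscalibrated or crosstalk-shifted theta therefore directly distorts the splitting ratio, which is why precise phase control matters so much in dense circuits.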

The research project MAZINGER took up this challenge by bringing together analytical and numerical tools (from both standard optimization techniques and ML) in order to enhance state-of-the-art optical applications. To this end, MAZINGER explored well-established optimization algorithms to cope with changing, noisy environments and non-ideal reconfigurable components, as well as novel frameworks to enable scientific discovery with optical circuits. The project, carried out in one of the leading groups in theoretical quantum ML, also involved a collaboration with a leading experimental group in photonics, with the goal of testing the findings in an actual, high-precision quantum experiment. The techniques employed were developed within the general framework of single- and multi-photon interference, to ensure that the results are readily transferable to related lines of research.
The work carried out in the research project MAZINGER falls into two parallel lines of research: (1) the development of protocols to study, characterize, and mitigate the effect of biased imperfections in integrated optical circuits, and (2) the development of (so-called variational) protocols to implement machine learning algorithms in imperfect integrated optical circuits. In other words, the research project addresses both sides of near-term photonic quantum machine learning: part of the work was devoted to enhancing the operation of imperfect optical circuits (using well-established optimization techniques), while the other part was devoted to enhancing quantum machine learning algorithms using optical architectures. This interdisciplinary research project was made possible by the principal investigator's background in both theoretical and experimental aspects of photonics, and by a collaboration between their theoretical group and a partner experimental group with world-leading expertise in photonics.

The main project results have been reported in the four publications summarized below. One more research project is currently under development.

(Project I) We studied the impact of experimental imperfections on integrated photonic circuits. We numerically observed, and qualitatively characterized, the emergence of moderate biased errors in well-established optical architectures.
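To give a flavour of what a biased (systematic) phase error means, here is a generic numerical sketch with made-up error magnitudes; it is not the project's simulation. A cascade of interferometric layers is perturbed once with a constant phase offset (biased error) and once with zero-mean random offsets (unbiased noise), and the deviation from the ideal circuit is quantified by a standard infidelity measure.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def layer(theta):
    # One interferometric layer: 50:50 beam splitter followed by a phase shifter.
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
    return bs @ np.diag([np.exp(1j * theta), 1])

def circuit(thetas):
    U = np.eye(2, dtype=complex)
    for t in thetas:
        U = layer(t) @ U
    return U

def infidelity(U, V):
    # 1 - |Tr(U^dagger V)|^2 / d^2: a standard distance between unitaries.
    return 1 - abs(np.trace(U.conj().T @ V))**2 / 4

thetas = rng.uniform(0, 2 * np.pi, size=20)
ideal = circuit(thetas)

# Biased error: the same systematic offset on every phase (magnitude made up).
biased = infidelity(ideal, circuit(thetas + 0.05))
# Unbiased noise: independent zero-mean offsets of comparable strength.
unbiased = infidelity(ideal, circuit(thetas + rng.normal(0, 0.05, size=20)))
```

In sketches like this one, the biased case models a calibration offset that affects every component in the same direction, which is the kind of path-dependent, systematic imperfection the project characterized.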

(Project II) We focused on the optimization of optical circuits experiencing crosstalk noise due to thermal phase shifters. The developed framework is quite general and, while it works best for standard architectures with regular structures, it can be applied to circuits with arbitrary topology.
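One simple software mitigation can be sketched under a linear-crosstalk assumption; the assumption, the single-MZI setting, and all coefficients below are mine for illustration, not the project's model or algorithm. If the phases realised on-chip are a linear mix of the programmed settings, pre-distorting the settings by inverting the crosstalk matrix recovers the target behaviour.

```python
import numpy as np

def mzi(theta, phi):
    # Textbook Mach-Zehnder transfer matrix (two 50:50 beam splitters).
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
    return bs @ np.diag([np.exp(1j * theta), 1]) @ bs @ np.diag([np.exp(1j * phi), 1])

# Hypothetical linear thermal-crosstalk model (made-up coefficients):
# realised phases = C @ programmed settings.
C = np.array([[1.0, 0.2],
              [0.2, 1.0]])

target_T = 0.7                                  # desired top-port transmission
theta_star = 2 * np.arcsin(np.sqrt(target_T))   # since T = sin^2(theta / 2)
phi_star = 1.5                                  # phase needed elsewhere (made up)
wanted = np.array([theta_star, phi_star])

# Naive programming ignores crosstalk and misses the target:
naive_T = abs(mzi(*(C @ wanted))[0, 0])**2

# Pre-distortion: solve C @ settings = wanted, so the chip realises `wanted`.
settings = np.linalg.solve(C, wanted)
corrected_T = abs(mzi(*(C @ settings))[0, 0])**2   # matches target_T
```

Real thermal crosstalk is generally not exactly linear or fully known, which is why the project's framework, unlike this sketch, has to cope with circuits of arbitrary topology and imperfectly characterized couplings.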

(Project III) We developed a framework and algorithms to implement a quantum learning model aimed at interpretable artificial intelligence. In this framework, the decision-making process, based on a classical learning model called projective simulation, is modeled as a probabilistic mechanism that takes place in the agent's memory. To implement the quantized model, we considered the dynamics of single photons in well-established optical interferometers, which are then trained via variational algorithms.
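The variational idea can be sketched generically (this is my illustration of the training pattern, not the project's algorithm): a single photon traverses a parameterised interferometer, the detection probabilities at the output ports play the role of the agent's action probabilities, and a classical optimizer tunes the phases toward a target behaviour.

```python
import numpy as np

def mzi(theta):
    # One Mach-Zehnder block: 50:50 splitter, internal phase, 50:50 splitter.
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
    return bs @ np.diag([np.exp(1j * theta), 1]) @ bs

def action_probs(theta):
    # A single photon enters the top port; the detection probabilities at
    # the two output ports stand in for the agent's action probabilities.
    amplitudes = mzi(theta) @ np.array([1, 0])
    return np.abs(amplitudes)**2

def loss(theta, target):
    return np.sum((action_probs(theta) - target)**2)

def train(target, theta=1.0, lr=0.2, steps=200, eps=1e-6):
    # Classical training loop with a finite-difference gradient estimate.
    for _ in range(steps):
        grad = (loss(theta + eps, target) - loss(theta - eps, target)) / (2 * eps)
        theta -= lr * grad
    return theta

target = np.array([0.8, 0.2])   # desired action distribution (made up)
theta = train(target)
```

The appeal of this pattern is that the quantum device only needs to be sampled, while all parameter updates happen classically, which is what makes such schemes viable on imperfect near-term hardware.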

(Project IV) We developed a machine learning framework to facilitate scientific discovery, that is, to help extract scientific insights from the behaviour of trained artificial agents. Among other examples, the algorithm was successfully tested on numerically simulated quantum optical circuits.
With reference to the above-mentioned contributions:

(Project I) We have shown how biased errors in integrated optical circuits correlate with their waveguide structure, providing clearer insights into the known issue that errors depend on the optical paths followed by light.

(Project II) The algorithm is currently being tested in an actual laboratory. If successful, it will help improve the performance of high-density, reconfigurable optical circuits, a key component in classical and quantum technologies.

(Project III) We now have a framework to embed quantum learning agents in state-of-the-art photonic circuits, in a way that is both scalable and flexible. The results of this project have already been successfully tested by one independent research group.

(Project IV) We now have a framework that enables scientific discovery from classical artificial learning agents. In the future, this framework could lead to the discovery of novel tools (both theoretical and experimental) for efficiently achieving tasks in quantum technologies.
Figure captions (from the project images):
- A partial trace appears to emerge in the memory of the quantized learning agent.
- Gadgets discovered by the learning algorithm developed in the project.
- Decision-making process in the learning agent (a) and its quantization (b, c).
- Algorithm designed to train the quantized agent using causal diamonds.
- Noise affects individual transitions of the scattering process in a non-homogeneous way.
- Optical architecture used to quantize the learning agent.
- Learning curve for a quantized agent based on the Gram-Schmidt process.
- Characterization of the settings relevant for the optical circuit in the presence of noise.
- We use projective simulation (a) to quantize (b) a learning agent using photonic technologies.