Coding for terabit-per-second fiber-optical communications

Periodic Reporting for period 2 - TERA (Coding for terabit-per-second fiber-optical communications)

Reporting period: 2019-04-01 to 2020-03-31

Long-haul fiber-optic communication links carry virtually all intercontinental data traffic and are often referred to as the Internet backbone. To cope with the ever-increasing traffic demands due to Internet services such as video streaming and cloud computing, next-generation systems will soon need to support data rates on the order of terabits per second. The overall goal of this project was to enable the design of reliable and sustainable fiber-optic communication systems that operate at terabit-per-second data rates. To that end, we have studied both decoding and equalization algorithms for such systems. The purpose of these algorithms is (i) to ensure reliable data transmission in the presence of noise (in the case of decoding) and (ii) to compensate for propagation impairments such as chromatic dispersion and Kerr nonlinearities (in the case of equalization). The first of our three objectives in this project was to derive effective theoretical tools that allow for a rapid assessment of the code performance as a function of its design parameters. The other two objectives were aimed at addressing the growing problem of energy consumption in fiber-optic systems by designing low-complexity receiver algorithms, specifically decoding and equalization algorithms. Our work has highlighted that machine learning and data-driven approaches have great potential to reduce complexity and could therefore play an important part in the future design of such systems.
"We have focused on three specific research objectives. The first objective was concerned with deriving a so-called finite-length scaling law that predicts the dependence of the performance (measured in terms of the bit error rate) on the code length (measured in bits) for deterministic error-correcting codes, in particular generalized product codes. Such as scaling law was identified for a specific code class relevant for fiber-optic transmission called half-product codes assuming transmission over the binary erasure channel (BEC) where each bit is erased with a certain probability, independently of all other bits. We have then uncovered and analyzed the phenomenon of so-called miscorrections, which make it challenging to generalize the finite-length scaling law from the BEC to the binary symmetric channel (BSC) where bits are flipped instead of erased. To address the issue of miscorrections, we have devised a novel iterative decoding approach for generalized product codes, termed anchor decoding, that can efficiently reduce the effect of miscorrections on the performance. This was done as part of our second objective, which was concerned with the development of low-complexity decoding algorithms for deterministic codes that are particularly relevant for fiber-optic communications. The anchor-decoding algorithm is one of the novel decoding approaches that we derived. Anchor decoding relies on so-called anchor codewords in order to resolve inconsistencies across codewords and offers state-of-the-art performance based on computationally efficient hard-decision decoding. In addition to anchor decoding, we have also developed several other new decoding approaches, where our main focus has been on exploring new data-driven methods that exploit recent advances in the field of machine learning. One approach is based on reinforcement learning, where we have shown that the standard maximum-likelihood decoding problem for binary linear codes can be mapped to the reward function in a Markov decision problem, and optimized decoders can be found using Q-learning and deep Q-learning. We have also explored new data-driven paradigms for both code design and decoding algorithms based on an end-to-end machine-learning autoencoder approach. Finally, the third objective was to develop low-complexity nonlinear equalization algorithms for high-speed fiber-optic communication systems. Specifically, our work has identified a fundamental relationship between a popular existing nonlinear equalization strategy called digital backpropagation (DBP) and conventional feed-forward artificial neural networks. Based on this relationship, we have proposed and investigated learned DBP (LDBP), which is a novel approach to low-complexity nonlinear equalization for high-speed optical systems. We have shown that LDBP can significantly reduce the complexity compared to the previous state-of-the-art, without sacrificing performance. The new algorithm has been implemented and verified under realistic hardware assumptions and extended to polarization-multiplexed systems. We have also conducted an experimental verification of LDBP, demonstrating its effectiveness in a concrete practical setting. The results obtained in this project have led to the publication of 16 conference and 3 journal papers. 
In addition, the fellow has contributed to the dissemination of results by giving seminars at several international research groups in Munich, Eindhoven, and London, as well as invited talks at two major optical communication conferences, the 45th European Conference on Optical Communication and the 2020 Optical Fiber Communication Conference, and an invited talk at the 8th Van Der Meulen Seminar on "Neural Networks in Communication Systems". To ensure effective dissemination beyond the project end date, the fellow has also committed to an invited talk at the Fraunhofer HHI Summer School on "AI for Optical Networks & Neuromorphic Photonics for AI Acceleration".
The main driving force behind this research project was the design of next-generation fiber-optic systems. The design of such systems requires assessing nontrivial trade-offs between key system parameters such as performance and complexity. The theoretical tools derived in this project can be used, for example, to rapidly assess code performance without running time-consuming simulations and to guide the selection of suitable system parameters for optimizing overall system performance. The project also addresses the growing problem of energy consumption in fiber-optic systems by designing state-of-the-art receiver algorithms. The Internet, along with its associated fiber-optic infrastructure, consumes a significant fraction of the worldwide electrical energy production. With rapidly increasing data traffic, this fraction is bound to grow unless a significant effort is made to make optical data transport more energy efficient. The results and theoretical insights obtained in this project will help ensure that future data traffic demands are met in a sustainable way, and the project has investigated several novel approaches that are expected to shape the design of future fiber-optic communication systems.

The project also had a significant impact on the training and future career prospects of the fellow himself. The outgoing phase was spent at the Rhodes Information Initiative at Duke (iiD), an interdisciplinary center designed to advance big-data computational research. The fellow acquired new expertise and significantly expanded his skill set, in particular in the area of machine learning and artificial intelligence. During the incoming phase, the newly acquired knowledge and expertise was brought back and transferred to the host university.