"We have focused on three specific research objectives. The first objective was concerned with deriving a so-called finite-length scaling law that predicts the dependence of the performance (measured in terms of the bit error rate) on the code length (measured in bits) for deterministic error-correcting codes, in particular generalized product codes. Such as scaling law was identified for a specific code class relevant for fiber-optic transmission called half-product codes assuming transmission over the binary erasure channel (BEC) where each bit is erased with a certain probability, independently of all other bits. We have then uncovered and analyzed the phenomenon of so-called miscorrections, which make it challenging to generalize the finite-length scaling law from the BEC to the binary symmetric channel (BSC) where bits are flipped instead of erased. To address the issue of miscorrections, we have devised a novel iterative decoding approach for generalized product codes, termed anchor decoding, that can efficiently reduce the effect of miscorrections on the performance. This was done as part of our second objective, which was concerned with the development of low-complexity decoding algorithms for deterministic codes that are particularly relevant for fiber-optic communications. The anchor-decoding algorithm is one of the novel decoding approaches that we derived. Anchor decoding relies on so-called anchor codewords in order to resolve inconsistencies across codewords and offers state-of-the-art performance based on computationally efficient hard-decision decoding. In addition to anchor decoding, we have also developed several other new decoding approaches, where our main focus has been on exploring new data-driven methods that exploit recent advances in the field of machine learning. One approach is based on reinforcement learning, where we have shown that the standard maximum-likelihood decoding problem for binary linear codes can be mapped to the reward function in a Markov decision problem, and optimized decoders can be found using Q-learning and deep Q-learning. We have also explored new data-driven paradigms for both code design and decoding algorithms based on an end-to-end machine-learning autoencoder approach. Finally, the third objective was to develop low-complexity nonlinear equalization algorithms for high-speed fiber-optic communication systems. Specifically, our work has identified a fundamental relationship between a popular existing nonlinear equalization strategy called digital backpropagation (DBP) and conventional feed-forward artificial neural networks. Based on this relationship, we have proposed and investigated learned DBP (LDBP), which is a novel approach to low-complexity nonlinear equalization for high-speed optical systems. We have shown that LDBP can significantly reduce the complexity compared to the previous state-of-the-art, without sacrificing performance. The new algorithm has been implemented and verified under realistic hardware assumptions and extended to polarization-multiplexed systems. We have also conducted an experimental verification of LDBP, demonstrating its effectiveness in a concrete practical setting. The results obtained in this project have led to the publication of 16 conference and 3 journal papers. 
In addition, the fellow has contributed to the dissemination of results by giving seminars at several international research groups in Munich, Eindhoven, and London, as well as invited talks at two major optical communication conferences, the 45th European Conference on Optical Communication and the 2020 Optical Fiber Communication Conference, and an invited talk at the 8th Van Der Meulen Seminar on "Neural Networks in Communication Systems". To ensure that effective dissemination continues beyond the project end date, the fellow has also committed to an invited talk at the Fraunhofer HHI Summer School on "AI for Optical Networks & Neuromorphic Photonics for AI Acceleration".