## Final Report Summary - LACONIC (LAttice COdiNg for multiuser wIreless Communications)

The main objective of the LACONIC project was to propose new code designs and decoding algorithms for multiuser wireless communications. To improve wireless coverage in metropolitan areas, recent communication standards such as 4G LTE-Advanced support the deployment of dense heterogeneous networks, where traditional base stations are partly replaced by low-cost, low-power small cells (relays, femtocells, picocells) requiring a decentralised allocation of resources.

Such dense networks, accommodating a growing number of users who share the same spectral resources, present two fundamental challenges: interference management and data confidentiality. These requirements, which in traditional networks are ensured by centralised protocols at the network layer, must now be met at the physical layer level, taking into account the limited computational resources of small cells.

The implementation of these decentralised systems requires the design of new multiuser codes. Recent studies in information theory have shown the great potential of lattice codes for multiuser coding, since the lattice structure naturally allows for the superposition of several data flows.

Our work has thus focused on three main aspects:

1) Lattice coding for interference alignment

In dense networks, radio performance is essentially limited by the interference between users. We have thus considered the problem of designing new lattice schemes that mitigate interference at the codeword level (interference alignment).

One important scenario is the interference channel, where K transmitter-receiver pairs communicate at the same time. With lattice alignment techniques, every user in a K-user interference channel can theoretically achieve half of the rate it would obtain in the absence of interference. However, the error performance of state-of-the-art techniques is very poor in practice.
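In degrees-of-freedom terms (the high-SNR pre-log of the capacity), the "half the interference-free rate" statement above can be written as follows, following the interference-alignment result of Cadambe and Jafar:

```latex
% Sum capacity of the K-user interference channel with alignment:
C_{\mathrm{sum}}(\mathrm{SNR}) = \frac{K}{2}\,\log(\mathrm{SNR}) + o(\log \mathrm{SNR}),
% so each user attains roughly \tfrac{1}{2}\log(\mathrm{SNR}),
% i.e. half of its interference-free rate \log(1+\mathrm{SNR}).
```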

In collaboration with Maria Estela, a PhD student at the Electrical and Electronic Engineering Department of Imperial College, we have considered a simplified model, the many-to-one interference channel, in which only one pair of nodes is subject to interference from the other simultaneous transmissions in the network. In this setting, we have analysed the error performance of lattice alignment techniques employing joint optimal decoding of the desired signal and of the sum of interfering signals. More precisely, we have derived a new bound on the error probability in terms of an algebraic property of the chosen lattice point distribution (its theta series). This provides new design criteria to improve performance.
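The theta series of a lattice, Θ_Λ(q) = Σ_{λ∈Λ} q^{‖λ‖²}, simply records how many lattice points sit at each squared distance from the origin. A minimal brute-force sketch for toy dimensions (the function name and enumeration bounds are illustrative, not project code):

```python
from itertools import product

def theta_coeffs(basis, max_sq_norm, coeff_range=12):
    """Truncated theta-series coefficients of the lattice spanned by `basis`
    (rows): counts[m] = number of lattice points with squared norm m.
    Brute-force enumeration over small integer combinations (toy dimensions)."""
    n = len(basis)
    counts = {}
    for ints in product(range(-coeff_range, coeff_range + 1), repeat=n):
        v = [sum(c * basis[i][j] for i, c in enumerate(ints))
             for j in range(len(basis[0]))]
        sq = sum(x * x for x in v)
        if sq <= max_sq_norm:
            counts[sq] = counts.get(sq, 0) + 1
    return dict(sorted(counts.items()))

# Z^2: the coefficient of q^m is the number of ways to write m
# as a sum of two squares, e.g. 4 points of squared norm 1: (±1,0),(0,±1)
print(theta_coeffs([[1, 0], [0, 1]], 8))
```

Roughly speaking, a lattice with few points at small norms (a slowly growing theta series) is preferable under an error-probability bound of this kind.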

2) Lattice coding for physical layer security

More and more often, users rely on wireless devices to transmit sensitive data, such as banking information, administrative documents or medical records. The broadcast nature of the wireless medium makes it especially vulnerable to malicious attacks, since any user in the network is a potential eavesdropper. Physical layer security is a new paradigm aiming at exploiting the randomness inherent in wireless propagation, such as noise, fading, and interference, in order to provide an additional level of protection. Physical layer security replaces computational secrecy with information-theoretic secrecy, meaning that even an eavesdropper endowed with unlimited computational power cannot extract any information from the channel.

In the context of noisy channels, Wyner proved that both robustness to transmission errors and a prescribed degree of data confidentiality can be attained simultaneously by channel coding, without any secret key. Wyner introduced the so-called 'weak secrecy' condition: the asymptotic rate of leaked information between the message and the channel output should vanish as the block length tends to infinity. Unfortunately, a scheme satisfying weak secrecy may still exhibit security flaws, and it is now widely accepted that a physical-layer security scheme should be secure in the sense of Csiszár's 'strong secrecy'.
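The two secrecy notions can be stated explicitly. With M the confidential message and Z^n the eavesdropper's channel output after n channel uses:

```latex
% Weak secrecy (Wyner): the leakage *rate* vanishes
\lim_{n \to \infty} \frac{1}{n}\, I(M; Z^n) = 0
% Strong secrecy (Csisz\'ar): the *total* leaked information vanishes
\lim_{n \to \infty} I(M; Z^n) = 0
```

Strong secrecy is strictly more demanding: under weak secrecy the total leakage I(M; Z^n) may still grow unboundedly, as long as it grows sublinearly in n.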

In collaboration with Damien Stehlé (École Normale Supérieure de Lyon) and Jean-Claude Belfiore (Télécom ParisTech), we have shown that lattice codes can achieve strong secrecy over Gaussian channels, provided that the malicious user or eavesdropper experiences a higher level of noise than the legitimate receiver. Moreover, the attainable rate of confidential communication using lattice codes is close to optimal.

Following Csiszár's approach, we have shown that strong secrecy is guaranteed if the output distributions of the eavesdropper's channel corresponding to two different messages are indistinguishable in the sense of variational distance. More importantly, we have proposed the 'flatness factor' of a lattice as a fundamental criterion which implies that the conditional outputs are indistinguishable. This leads to a notion of lattices that are 'good for secrecy', similar to the notions of good lattices which have been proposed for channel coding. Finally, we have shown that our lattice schemes are also secure from a cryptographic viewpoint, thanks to the equivalence between strong secrecy and semantic security.
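Intuitively, the flatness factor measures how far a Gaussian density, folded onto the lattice, deviates from the uniform density over a fundamental region: when it is small, the folded distribution is nearly flat and outputs corresponding to different messages become statistically indistinguishable. A one-dimensional numerical sketch for the lattice cZ (illustrative only; in analysis the quantity admits closed-form expressions via the theta series):

```python
import math

def flatness_factor_1d(cell, sigma, grid=2000, tail=50):
    """Numerical flatness factor of the 1-D lattice cell*Z:
    max over the fundamental cell [0, cell) of
    | cell * sum_k N(x; k*cell, sigma^2) - 1 |,
    i.e. the worst deviation of the lattice-folded Gaussian from uniform."""
    worst = 0.0
    for i in range(grid):
        x = cell * i / grid
        density = sum(
            math.exp(-((x - k * cell) ** 2) / (2 * sigma ** 2))
            for k in range(-tail, tail + 1)
        ) / math.sqrt(2 * math.pi * sigma ** 2)
        worst = max(worst, abs(cell * density - 1.0))
    return worst

# Wide Gaussian relative to the cell: folded density is almost uniform
print(flatness_factor_1d(cell=1.0, sigma=1.0))   # tiny (near-perfect flatness)
# Narrow Gaussian: the lattice structure remains clearly visible
print(flatness_factor_1d(cell=1.0, sigma=0.2))   # close to 1
```

The secrecy criterion then amounts to choosing the lattice and noise parameters so that the eavesdropper's effective flatness factor is negligible.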

Our design criteria based on the 'flatness factor' can be used in future work to construct explicit lattice codes for secrecy. Such codes will remove the vulnerabilities of the current wireless architectures at the physical layer level, and thus enhance the overall security of mobile networks.

3) Reduced-complexity decoding techniques for lattice codes

Lattice decoding is a problem of high relevance in multi-terminal communication systems. Optimal (maximum-likelihood) decoding for finite constellations carved from lattices can be realised by sphere decoding, whose complexity can, however, grow prohibitively with the dimension of the lattice. The decoding complexity is especially high in the case of coded or distributed systems, where the lattice dimension is usually larger. Thus, practical implementations of decoders often have to resort to approximate solutions. One of these approaches is lattice reduction-aided decoding. Thanks to its average polynomial complexity, the Lenstra-Lenstra-Lovász (LLL) reduction is widely used in lattice decoding. However, the performance gap between lattice reduction-aided decoding and ML decoding widens as the lattice dimension grows, so there is a strong demand for computationally efficient suboptimal decoding algorithms that offer improved performance.
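The simplest lattice-reduction-aided decoder illustrates the idea: express the received point in a reduced basis and round the coordinates to integers. The toy sketch below uses Lagrange-Gauss reduction, the two-dimensional special case of LLL; the function names are illustrative and this is not the project's decoder:

```python
def gauss_reduce(b1, b2):
    """Lagrange-Gauss reduction: the 2-D analogue of LLL.
    Repeatedly keeps the shorter vector first and subtracts its
    nearest-integer projection from the other."""
    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1]
    b1, b2 = list(b1), list(b2)
    while True:
        if dot(b1, b1) > dot(b2, b2):
            b1, b2 = b2, b1
        mu = round(dot(b1, b2) / dot(b1, b1))
        if mu == 0:
            return b1, b2
        b2 = [b2[0] - mu * b1[0], b2[1] - mu * b1[1]]

def babai_round(basis, y):
    """Reduction-aided decoding by rounding: write y = u1*b1 + u2*b2,
    round (u1, u2) to integers, and map back to a lattice point."""
    (a, b), (c, d) = basis          # rows are the basis vectors
    det = a * d - b * c
    u1 = (y[0] * d - y[1] * c) / det
    u2 = (y[1] * a - y[0] * b) / det
    k1, k2 = round(u1), round(u2)
    return (k1 * a + k2 * c, k1 * b + k2 * d)

# A very skewed basis of the integer lattice Z^2:
skewed = ((1, 0), (100, 1))
y = (0.4, 0.6)                      # closest lattice point of Z^2 is (0, 1)
print(babai_round(skewed, y))                 # rounding in the skewed basis fails
print(babai_round(gauss_reduce(*skewed), y))  # reduced basis recovers (0, 1)
```

This is exactly why reduction quality matters: the same rounding rule succeeds or fails depending on how orthogonal the basis is, and in high dimensions LLL can only guarantee moderately good bases, hence the widening gap to ML decoding.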

For the LACONIC project, we have continued the analysis of embedding decoding, a technique proposed by Laura Luzzi (the Marie Curie fellow) in previous work. The fundamental idea is to reduce the decoding problem (that is, the problem of finding the 'closest lattice vector' to the received signal) to a 'shortest vector problem' in an extended lattice, which can be solved using lattice reduction.

In particular, we have found an improved bound for the 'decoding radius' of this technique, i.e. the largest noise amplitude that can be tolerated by the decoder. The new bound greatly improves the previous state-of-the-art bounds for lattice reduction-aided decoders. Moreover, we have shown that embedding is optimal from the point of view of the rate-reliability trade-off. We have also tested the technique through extensive numerical simulations.
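The embedding idea can be sketched as follows: append the received point y to the basis as an extra row with a small weight t, so that the shortest vector of the extended (n+1)-dimensional lattice has the form (y − x*, t), which reveals the closest lattice point x*. In the toy version below the shortest-vector search is done by brute-force enumeration; in practice it is performed by lattice reduction such as LLL. Function names and parameters are illustrative:

```python
from itertools import product

def embedding_decode(basis, y, t=0.5, search=4):
    """Kannan-style embedding: the closest-vector problem for `y` becomes a
    shortest-vector problem in the lattice spanned by the rows (b_i, 0)
    and (y, t).  Toy SVP by enumeration over small coefficients."""
    n, dim = len(basis), len(y)
    ext = [list(b) + [0.0] for b in basis] + [list(y) + [float(t)]]
    best_sq, best_pt = float("inf"), None
    # Any short vector using the appended row once has the form (y - x, t)
    # with x a lattice point; enumerate the first n coefficients.
    for coeffs in product(range(-search, search + 1), repeat=n):
        v = [ext[n][j] + sum(c * ext[i][j] for i, c in enumerate(coeffs))
             for j in range(dim + 1)]
        sq = sum(x * x for x in v)
        if sq < best_sq:
            best_sq = sq
            best_pt = tuple(y[j] - v[j] for j in range(dim))
    return best_pt

# Toy example: lattice spanned by (2,0) and (1,2);
# the closest lattice point to (2.3, 3.6) is (2, 4)
print(embedding_decode([[2, 0], [1, 2]], [2.3, 3.6]))
```

The weight t trades off the two roles of the extended lattice: it must be small enough that (y − x*, t) is indeed the shortest vector, yet large enough to keep the extended basis well-conditioned for reduction; the decoding radius mentioned above quantifies the noise amplitudes for which this succeeds.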
