Hybrid Digital-Analog Networking under Extreme Energy and Latency Constraints

Periodic Reporting for period 2 - BEACON (Hybrid Digital-Analog Networking under Extreme Energy and Latency Constraints)

Reporting period: 2018-04-01 to 2019-09-30

The main goal of the BEACON project is to explore novel methods for the transmission of signals over noisy wireless channels, particularly under extreme low-latency and low-energy constraints. When we transmit an image from our phone to a friend, the following two fundamental operations take place: the image is first compressed into as few bits as possible at a certain quality level. This is typically done by image compression algorithms, such as JPEG or JPEG2000, which have been highly optimized over the years to represent generic images with as few bits as possible. These bits are then passed to the channel transmission unit, which applies a particular modulation scheme and a channel code to protect against noise in the channel. At the most basic level, this can be seen as adding redundant bits to the bits output by the source compressor, so that even if some of the transmitted bits are lost over the channel, the receiver can still recover the compressed version of the image. Some of the well-known codes that provide robustness against noise in communication channels are convolutional codes, Turbo codes, and LDPC codes. It has likewise taken researchers decades to develop and optimize these codes.
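The two-step pipeline described above can be illustrated with a deliberately simple toy example (not one of the codes the project studies): a rate-1/3 repetition code adds redundancy to the compressor's output bits, and a majority vote at the receiver recovers them despite random bit flips in the channel. All function names here are illustrative.

```python
import random

def channel_encode(bits, r=3):
    """Toy channel code: repeat each compressed bit r times."""
    return [b for b in bits for _ in range(r)]

def bsc(bits, p, rng):
    """Binary symmetric channel: flip each transmitted bit with probability p."""
    return [b ^ 1 if rng.random() < p else b for b in bits]

def channel_decode(bits, r=3):
    """Majority vote over each block of r received bits."""
    return [1 if sum(bits[i:i + r]) > r // 2 else 0
            for i in range(0, len(bits), r)]

rng = random.Random(0)
source_bits = [rng.randint(0, 1) for _ in range(1000)]  # output of a source compressor
received = bsc(channel_encode(source_bits), p=0.05, rng=rng)
decoded = channel_decode(received)
errors = sum(a != b for a, b in zip(source_bits, decoded))
```

With a 5% flip probability, repetition reduces the per-bit error probability from 5% to roughly 0.7%; practical codes such as LDPC achieve far better protection at far less redundancy.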

This two-step approach provides modularity to the system: the source and channel coding components can be designed separately. For example, our phones can contain a single channel coder, and that same unit can transmit bits coming from an image, video, or audio compressor, as it does not care where the bits come from. Its only goal is to deliver the bits to the receiver as reliably as possible.

Some of the earlier communication systems did not use this separate digital approach, in which all the information is first converted into bits. For example, radio and TV broadcasting, and even the first generation of mobile technology (1G), were analog: the audio and video signals were directly modulated onto carrier waves without first being compressed. However, thanks to its modularity and its flexibility in adapting to the available bandwidth, digital communication has come to dominate over the years, and today virtually all the communication systems we use are digital. Digital communications has been extremely successful and has given us the latest generations of mobile networks, which are capable of delivering high-quality video to mobile devices with remarkable efficiency. However, there is an increasing drive towards Internet-of-things (IoT) applications and machine-type communications, which have significantly different requirements from conventional communication systems. Many IoT applications have extremely low latency requirements. Moreover, most IoT devices are limited in energy and complexity, so they need to be highly energy efficient, and may not be able to employ the high-complexity algorithms used in mobile devices today. The BEACON project argues that we need to reconsider our communication system architecture for such applications; in particular, we need to reintroduce analog communications into our network architecture in order to meet the requirements of emerging IoT and other low-latency, low-energy applications.

Designing ultra-low-latency and ultra-low-energy communication protocols for IoT applications has numerous benefits. It would allow wireless technology to be used in many applications with such requirements that are currently limited to wired connections, reducing both installation and operation costs. It would also enable many novel applications, from remote surgery to more efficient monitoring and control of medical devices, surveillance systems, and more.

The main objective of the BEACON project is to develop novel communication technologies that go beyond the conventional separate source and channel coding architecture in order to meet the extreme latency and energy efficiency requirements of emerging applications. BEACON will not only study the fundamental theoretical limits of such communication systems, but will also develop practical coding and communication techniques, and will design and implement these techniques through proof-of-concept demonstrators.
The work carried out within BEACON so far can be divided into three main thrusts. The first thrust focuses on developing theoretical performance bounds. Such bounds serve as benchmarks for practical implementations. For example, if we know the capacity of a communication channel, this sets an upper bound on the rate at which any communication system can operate reliably. No system can transmit information beyond this rate in a reliable manner. Hence, given a practical code for this channel, we know how far we are from capacity, which tells us whether we should try to improve the code rate or whether we are already very close to capacity. In BEACON, we have focused on developing such performance bounds in terms of the energy efficiency of the communication system, which is captured by the 'energy-distortion exponent', defined by the PI in an earlier paper. In BEACON we have extended the analysis of this performance measure to multi-user networks. We have also studied the fundamental performance limits under latency constraints. In particular, we have focused on the extreme case of 'zero-delay' source transmission, which does not allow any coding in the standard sense, as each source sample must be transmitted on its own as soon as it becomes available at the transmitter. We have also studied more standard information-theoretic bounds for multi-user communication systems with correlated source signals.
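As a concrete illustration of how such bounds serve as benchmarks (using standard textbook formulas, not the project's new bounds): the Shannon capacity of a real AWGN channel can be computed and compared against the rate of a hypothetical code, and the classical minimum energy per bit marks the ultimate energy-efficiency limit in the wideband regime. The rate 0.4 below is an invented example value.

```python
import math

def awgn_capacity(snr):
    """Shannon capacity of the real AWGN channel, in bits per channel use."""
    return 0.5 * math.log2(1 + snr)

snr = 1.0                        # channel SNR of 0 dB
cap = awgn_capacity(snr)         # capacity benchmark: 0.5 bits per channel use
code_rate = 0.4                  # hypothetical practical code's rate
gap = cap - code_rate            # how far the code is from the fundamental limit

# Classical wideband limit on energy efficiency: the minimum energy per bit
# over an AWGN channel is Eb/N0 = ln(2), i.e. about -1.59 dB.
eb_n0_min_db = 10 * math.log10(math.log(2))
```

If the gap is small, effort is better spent elsewhere than on improving the code rate; this is exactly the diagnostic role the theoretical bounds developed in the project play for energy efficiency.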

The second thrust of the project focuses on developing coding and communication schemes that can be used in practice. For this we have considered more standard coding techniques for different communication scenarios, e.g. polar codes for lossy source compression, but have mostly focused on novel applications, such as coded caching and coded computation. We have developed schemes that go beyond the state of the art in many different applications and scenarios. A more recent development in this direction is the design of communication schemes based on data-driven machine learning techniques. We have designed low-latency and low-power communication schemes in which both the transmitter and the receiver are modelled by deep neural networks. Our initial results show that this can provide significant performance improvements, particularly in the low-power and low-bandwidth regime. In addition to many publications, we have also filed several patents based on these ideas.

Finally, another objective of BEACON is to implement some of the developed communication schemes on a software radio platform as a proof of concept. We have implemented some of the initial schemes; in particular, we have implemented the hybrid digital-analog communication scheme we developed, and have shown that it behaves similarly to its numerically simulated counterpart, indicating that the promised performance gains are not limited to the idealised conditions typically assumed in numerical simulations.
We have made progress beyond the state of the art in all three thrusts mentioned above. Using tools from information theory and optimization, we have identified new performance bounds for the energy-distortion exponent, which quantifies the asymptotic energy efficiency of a communication system in the large-bandwidth regime. We have also considered the transmission of correlated signals from multiple transmitters to a single receiver over a shared medium. The fundamental information-theoretic limits of this problem remain open despite being studied by numerous researchers for over four decades. We have managed to develop novel performance bounds for this problem that are tighter than all other known bounds in the literature. Although we have not managed to settle the problem, these results bring us one step closer.

We have also studied the novel problem of distributed hypothesis testing over noisy channels. To the best of our knowledge this problem has not been studied before, although hypothesis testing is an important problem for IoT networks. In many applications, the goal of IoT devices is not necessarily to transmit their own signal to the receiver with the best quality, but instead to enable the receiver to make a data-dependent decision, or some inference about the data. We considered the special case in which the decision maker wants to decide from which of two possible joint distributions its own observation and the observation available at the remote observer are drawn. We have identified the optimal performance, in terms of the asymptotic error exponents, for this system. We will continue to study this distributed decision problem in the remainder of the project, together with its privacy and security aspects, i.e. when the transmitter does not want the receiver to gain information about some sensitive aspect of its data.
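The core decision problem can be illustrated in its simplest, centralized form (the project's setting additionally involves a remote observer and a noisy channel, which this sketch omits): given samples, decide between two candidate distributions by summing per-sample log-likelihood ratios. Here the two hypotheses are Gaussians with invented means 0 and 1; the probability of a wrong decision decays exponentially in the number of samples, which is what the error exponents characterize.

```python
import random

def llr_gaussian(x, mu0=0.0, mu1=1.0, sigma=1.0):
    """Log-likelihood ratio log p1(x)/p0(x) for two Gaussian hypotheses."""
    return ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)

def decide(samples):
    """Likelihood-ratio test: sum per-sample LLRs and compare to a threshold (0)."""
    return 1 if sum(llr_gaussian(x) for x in samples) > 0 else 0

rng = random.Random(1)
n = 100
h0_samples = [rng.gauss(0.0, 1.0) for _ in range(n)]  # data drawn under hypothesis 0
h1_samples = [rng.gauss(1.0, 1.0) for _ in range(n)]  # data drawn under hypothesis 1
```

With 100 samples the test separates the two hypotheses with very high probability; the distributed version studied in the project must make the same kind of decision when part of the evidence arrives over a noisy channel.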


Another significant achievement of the project is the design of a novel hybrid digital-analog (HDA) communication scheme for wireless image transmission. Instead of using data compression techniques followed by fixed constellations and error correction codes, the proposed technique maps the underlying uncompressed data (or a linearly transformed version of it) directly into channel inputs. Some information, e.g. the power allocation across uncoded symbol blocks, is provided to the receiver as digital metadata. In addition to greatly simplifying the encoding and decoding processes, this also provides graceful performance degradation: a digital communication scheme breaks down when the channel quality drops below the level it was designed for, and its performance does not improve even if the channel quality rises above this level. In contrast, the quality of the proposed HDA scheme changes gracefully with the channel quality. In the proposed hybrid digital-analog transmission scheme, called SparseCast, we have further exploited the frequency-domain sparsity of an image to improve the bandwidth efficiency, a technique that is also used in conventional image compression standards, such as JPEG. We have shown through computer simulations that SparseCast outperforms all other state-of-the-art schemes for all the channel signal-to-noise ratio (SNR) values we have tested. We then implemented this scheme on universal software radio peripherals (USRPs) and have shown gains similar to the theoretical ones.
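The basic analog mapping behind a SparseCast-like scheme can be sketched as follows; this is a heavily simplified, hypothetical illustration (function names and parameter values are invented), omitting the block-wise power allocation and the digital metadata of the actual scheme. The image block is transformed with the DCT, small coefficients are zeroed to exploit sparsity, and the surviving coefficients are scaled to meet the transmit power budget and sent directly as channel symbols.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (the transform also used in JPEG)."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * k * (2 * i + 1) / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

def sparsecast_like_encode(block, threshold, total_power):
    """Illustrative sketch: transform, drop small coefficients, scale to power budget."""
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T                    # 2-D DCT of the image block
    coeffs[np.abs(coeffs) < threshold] = 0.0    # sparsify: keep only large coefficients
    energy = np.sum(coeffs ** 2)
    scale = np.sqrt(total_power / energy) if energy > 0 else 0.0
    return coeffs * scale                       # analog channel symbols

block = np.arange(64, dtype=float).reshape(8, 8)          # toy 8x8 "image" block
symbols = sparsecast_like_encode(block, threshold=1.0, total_power=64.0)
```

Because the receiver observes noisy versions of the scaled coefficients themselves, rather than bits, the reconstruction quality degrades smoothly with channel noise instead of falling off a cliff.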

In SparseCast, the input image is first transformed to the frequency domain through a linear transformation. In particular, we use the discrete cosine transform (DCT), which provides a sparse representation of the image. We then apply thresholding and power allocation across blocks of DCT coefficients in order to provide better protection against channel noise for the more important components. While this scheme has the advantage of simplicity, its performance is limited. We have therefore developed a completely novel data-driven approach based on an autoencoder neural network architecture, in which the wireless transmitter is modelled as an encoder convolutional neural network (CNN) that maps input images directly to power-constrained channel input vectors, while the receiver is another CNN that maps the noisy channel output vector to a reconstruction of the input image. We jointly train the two networks over an image dataset and channels simulated with an additive white Gaussian noise (AWGN) channel model.
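The CNNs themselves cannot be reproduced here, but the two fixed, non-trainable components that sit between them in such a pipeline, the transmit power constraint applied to the encoder's output and the simulated AWGN channel, can be sketched in a few lines. This is an illustrative sketch, not the project's code; all names and parameter values are assumptions.

```python
import numpy as np

def power_normalize(z, avg_power=1.0):
    """Scale the encoder's output so it meets the average transmit power constraint."""
    k = z.size
    return np.sqrt(k * avg_power) * z / np.linalg.norm(z)

def awgn(x, snr_db, rng):
    """AWGN channel layer: add Gaussian noise at the given SNR (unit signal power)."""
    snr = 10 ** (snr_db / 10.0)
    sigma = np.sqrt(1.0 / snr)
    return x + rng.normal(0.0, sigma, size=x.shape)

rng = np.random.default_rng(0)
z = rng.normal(size=128)             # stand-in for the encoder CNN's output vector
x = power_normalize(z)               # channel input, average power exactly 1
y = awgn(x, snr_db=10.0, rng=rng)    # noisy channel output fed to the decoder CNN
```

Both operations are differentiable, so during training the gradient of the reconstruction loss can flow from the decoder, through the simulated channel, back into the encoder.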

When the trained encoder and decoder networks are tested on unseen images, the results are mostly better than those of a digital communication scheme based on JPEG image compression followed by near-capacity-achieving digital modulation and LDPC channel coding, particularly in the low-bandwidth and low-SNR regime, which is the more interesting operating regime for IoT systems.

The more striking result is obtained when this learned autoencoder structure is tested on channels of different quality than the one the network was trained on. This deep joint source-channel coding (JSCC) architecture achieves graceful degradation with the channel quality. In other words, the CNN-based deep JSCC scheme has learned to communicate like an analog system rather than a digital encoder. This was in fact expected, since all the layers in the CNN perform continuous (analog) operations, which is necessary for the gradient to be backpropagated during training.

Note that such graceful degradation is particularly attractive when transmitting the same signal to many receivers with different channel qualities, or when transmitting to a single receiver over a channel of varying quality. In both cases, while the digital approach would have to target the worst channel quality to guarantee a certain quality level to all receivers, the proposed deep JSCC scheme provides each receiver with the quality its own channel can sustain.

We are currently extending this ML-based communication system design approach to other settings: to channels with various imperfections, e.g. imperfect or no channel state information, fading, or inter-symbol interference; to more complicated channels, e.g. multiple-input multiple-output (MIMO) systems; and to multi-user scenarios, e.g. relay, broadcast, and multiple-access channels.