Information Theory with Uncertain Laws

Periodic Reporting for period 2 - ITUL (Information Theory with Uncertain Laws)

Reporting period: 2019-02-01 to 2020-07-31

Shannon’s Information Theory paved the way for the information era by providing the mathematical foundations of digital information systems. A key assumption underlying Shannon’s central results is that the probability law governing the system is known, allowing the codebook and decoder to be optimized accordingly. There are a number of important situations where perfectly estimating the system law is impossible; in these situations the codebook and decoder must be designed with incomplete (or no) knowledge of the system law. The vast majority of the Information Theory literature makes strong simplifying assumptions on the model. Theoretical studies that provide a general treatment of information processing with uncertain laws are hence urgently needed. For general systems, standard asymptotic techniques cannot be invoked and new techniques must be sought. A fundamental understanding of the impact of uncertainty in general systems is crucial to harvesting the potential gains in practice.

This project aims to contribute towards the ambitious goal of providing a unified framework for the study of Information Theory with uncertain laws. A general framework based on hypothesis testing will be developed, and code designs and constructions that naturally follow from the hypothesis-testing formulation will be derived. This unconventional and challenging treatment of Information Theory will advance the area and contribute to the Information Sciences and Systems disciplines where Information Theory is relevant.

A comprehensive study of the fundamental limits and optimal code design with law uncertainty for general models will represent a major step forward in the field, with the potential to provide new tools and techniques to solve open problems in closely related disciplines. Therefore, the outcomes of this project will not only benefit communications, but also areas such as probability theory, statistics, physics, computer science and economics.
The main achievements have been:

- introduction of a recursive random coding scheme that simultaneously attains the expurgated and random coding exponents

- analysis of the above coding technique and extension of results to arbitrary channels, possibly continuous

- proof that the error probability of perfect and quasi-perfect codes coincides with the lower bound provided by the metaconverse for binary-input output-symmetric channels, establishing the optimality of these codes for such channels

- new zero-error codes robust to insertion and deletion errors

- new multiletter decoding scheme for multiple-access channels with mismatch that yields higher rates

- new importance sampling simulator for random coding error probability

- saddlepoint approximation to hypothesis testing

- improved error exponents for reliable source transmission over multiple-access channels

- new single-letter upper bound to the mismatch capacity (the only one in the literature)

- refining random coding: large deviations of error exponents

- binary hypothesis testing with mismatch using likelihood ratio testing

- output quantization as a mismatched decoding problem

- low complexity algorithms for output quantization
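The idea behind an importance sampling simulator for error probabilities is that error events in good codes are rare, so naive Monte Carlo wastes almost all of its samples; sampling from a tilted distribution centred on the error event and reweighting recovers an unbiased estimate with far fewer samples. The sketch below is not the project's simulator; it illustrates the principle on a standard toy problem (a Gaussian tail probability) with an exponentially tilted proposal, all names and parameters being illustrative:

```python
import math
import random

def normal_tail_is(t, n=200_000, seed=0):
    """Importance-sampling estimate of P(Z > t) for standard normal Z.

    Samples come from the tilted proposal N(t, 1), which is centred on the
    rare event; each hit is reweighted by the likelihood ratio
    phi(z) / phi_tilted(z) = exp(-t*z + t*t/2) to keep the estimate unbiased.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(t, 1.0)            # draw from the tilted proposal
        if z > t:                        # indicator of the rare event
            total += math.exp(-t * z + 0.5 * t * t)  # importance weight
    return total / n
```

For t = 4 the true tail probability is about 3.2e-5; a naive estimator would need on the order of millions of samples to see even a handful of hits, whereas almost every tilted sample lands in the event region.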
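Binary hypothesis testing with mismatch concerns a likelihood ratio test built from models that differ from the true data-generating laws. The minimal sketch below is only an illustration of that setting, not the project's analysis: the data are Bernoulli under each hypothesis, the test uses (deliberately wrong) parameter estimates, and the resulting error probability is estimated by simulation. All parameter values are hypothetical:

```python
import math
import random

def mismatched_lrt(samples, q0, q1):
    """Decide between H0 and H1 with a likelihood ratio test built from the
    (possibly mismatched) Bernoulli models q0 and q1; returns 1 for H1."""
    llr = sum(math.log(q1 if x else 1 - q1) - math.log(q0 if x else 1 - q0)
              for x in samples)
    return 1 if llr > 0 else 0

def error_rate(p_true, hyp, q0, q1, n=50, trials=2000, seed=0):
    """Monte Carlo estimate of the probability that the mismatched test
    decides against `hyp` when the data are i.i.d. Bernoulli(p_true)."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        samples = [rng.random() < p_true for _ in range(n)]
        if mismatched_lrt(samples, q0, q1) != hyp:
            errors += 1
    return errors / trials

# True laws Bernoulli(0.2) under H0 and Bernoulli(0.6) under H1, but the
# test only has the mismatched estimates 0.25 and 0.55 (illustrative values).
err_h0 = error_rate(0.2, 0, q0=0.25, q1=0.55)
```

When the mismatched models still separate the hypotheses in the right direction, the error probability decays exponentially in the number of samples, but with a smaller exponent than the matched test achieves; characterizing that loss is the kind of question the project's mismatched hypothesis testing results address.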
Progress is documented in 4 journal articles in the leading international journals in the field, 2 journal article submissions, and 22 peer-reviewed articles in leading international conferences. I expect improvements on the new single-letter upper bound to the mismatch capacity; a refined saddlepoint approximation analysis of hypothesis testing; error exponents for mismatched hypothesis testing, including sequential hypothesis testing; large deviations of error exponents for recursive random coding; multiuser extensions of the recursive random coding scheme; and upper bounds on the capacity of the broadcast channel.