Periodic Reporting for period 4 - ITUL (Information Theory with Uncertain Laws)
Reporting period: 2022-02-01 to 2023-07-31
This project aims to contribute towards the ambitious goal of providing a unified framework for the study of Information Theory with uncertain laws. A general framework based on hypothesis testing will be developed, and code designs and constructions that follow naturally from the hypothesis-testing formulation will be derived. This unconventional and challenging treatment of Information Theory will advance the area and will contribute to the Information Sciences and Systems disciplines where Information Theory is relevant.
A comprehensive study of the fundamental limits and of optimal code design under law uncertainty for general models will represent a major step forward in the field, with the potential to provide new tools and techniques for solving open problems in closely related disciplines. The outcomes of this project will therefore benefit not only communications, but also areas such as probability theory, statistics, physics, computer science and economics.
Main results achieved during the reporting period include:
- introduction and analysis of a recursive random coding scheme that simultaneously attains the expurgated and random-coding error exponents
- proof that the error probability of perfect and quasi-perfect codes coincides with the metaconverse lower bound for binary-input output-symmetric channels, hence that these codes are optimal for such channels
- new zero-error codes robust to insertion and deletion errors
- new multiletter decoding scheme for multiple-access channels with mismatched decoding that yields higher achievable rates
- new importance sampling simulator for the random coding error probability (a minimal importance-sampling sketch follows this list)
- saddlepoint approximations to the error probabilities of hypothesis testing (the standard saddlepoint form is recalled after this list)
- improved error exponents for reliable source transmission over multiple-access channels
- first single-letter upper bound on the mismatch capacity, with subsequent improvements including a sphere-packing error exponent (the first upper bound on the reliability function of mismatched decoding)
- exact error exponent of mismatched decoding at rate zero
- refined analysis of random coding: large deviations of the error exponent and its asymmetric concentration around the typical error exponent
- binary hypothesis testing with mismatch, where the test thresholds a likelihood ratio computed with mismatched distributions (a Monte Carlo sketch of the mismatched test follows this list)
- formulation of output quantization as a mismatched decoding problem, together with low-complexity algorithms for quantizer design (a simplified quantizer-design sketch closes this summary)
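
Regarding the importance sampling simulator, this summary does not include its details; the following is only a minimal sketch of the general technique under assumed parameters, estimating a rare error-type event of a binary symmetric channel by sampling the noise from an exponentially tilted law and reweighting by the likelihood ratio. The channel, threshold, tilting parameter and function names are illustrative assumptions, not the project's actual construction.

import numpy as np

# Minimal illustrative sketch (assumed setup, not the project's simulator):
# estimate the probability that a BSC(p) flips more than t out of n bits --
# a proxy for a rare decoding-error event -- by drawing the flip count under
# a tilted crossover probability q > p and reweighting each sample by the
# likelihood ratio (p/q)^k * ((1-p)/(1-q))^(n-k).
rng = np.random.default_rng(0)

def is_estimate(n=1000, p=0.01, t=30, q=0.035, num_samples=10_000):
    k = rng.binomial(n, q, size=num_samples)                 # flips under the tilted law
    log_w = k * np.log(p / q) + (n - k) * np.log((1 - p) / (1 - q))
    return np.mean(np.where(k > t, np.exp(log_w), 0.0))      # unbiased estimate of P_p(k > t)

print(f"importance-sampling estimate: {is_estimate():.3e}")

Sampling directly under p would require on the order of 1/P_p(k > t) samples to observe the event at all; the tilted law makes the event typical while the likelihood-ratio weights keep the estimator unbiased.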
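The project's saddlepoint refinement for hypothesis testing is not reproduced in this summary. For orientation only, the standard saddlepoint (Bahadur-Rao type) approximation to the tail of an i.i.d. sum, of which the error probabilities of a likelihood ratio test are a special case, has the following form in the non-lattice case (the project's exact refinement may differ):

\[
  P\!\left(\sum_{i=1}^{n} X_i \ge n a\right)
  \approx \frac{e^{\,n\left(\kappa(\hat{s}) - \hat{s}\,a\right)}}{\hat{s}\,\sqrt{2\pi n\,\kappa''(\hat{s})}},
  \qquad \kappa(s) = \log \mathbb{E}\!\left[e^{s X}\right],\quad
  \kappa'(\hat{s}) = a,\quad a > \mathbb{E}[X],\ \hat{s} > 0.
\]

Applied to the log-likelihood-ratio increments X_i = log(P_1(Y_i)/P_0(Y_i)), such approximations refine the purely exponential (error-exponent) estimates of the type-I and type-II error probabilities with the correct sub-exponential prefactor.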
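As an illustration of the mismatched binary hypothesis testing setting, the sketch below generates data from a true pair of distributions but thresholds a likelihood ratio computed with a different, mismatched pair, and estimates the resulting error probabilities by Monte Carlo. The Gaussian models, their means and the threshold are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative (assumed) setup: unit-variance Gaussian hypotheses with
# different means; the detector uses a log-likelihood ratio built from a
# mismatched pair of means rather than the true ones.
n = 10                           # observations per test
p0_mean, p1_mean = 0.0, 1.0      # true means under H0 / H1
q0_mean, q1_mean = -0.2, 0.8     # mismatched means assumed by the detector

def llr(x, m0, m1):
    # log q1(x)/q0(x) for unit-variance Gaussians, summed over observations
    return ((x - m0) ** 2 - (x - m1) ** 2).sum(axis=1) / 2.0

def error_probs(threshold=0.0, trials=20_000):
    x0 = rng.normal(p0_mean, 1.0, size=(trials, n))            # data under H0
    x1 = rng.normal(p1_mean, 1.0, size=(trials, n))            # data under H1
    type_i = np.mean(llr(x0, q0_mean, q1_mean) > threshold)    # false alarm
    type_ii = np.mean(llr(x1, q0_mean, q1_mean) <= threshold)  # missed detection
    return type_i, type_ii

print("mismatched LRT error probabilities (type I, type II):", error_probs())

Replacing the mismatched means by the true ones recovers the matched likelihood ratio test at the same threshold, so the comparison isolates the loss caused by the model mismatch.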
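The low-complexity quantizer-design algorithms are likewise not detailed here. The sketch below only illustrates the basic design object for output quantization of a binary-input AWGN channel, namely the information rate of the quantized channel as a function of the quantizer thresholds, optimized by brute force; the noise level, quantizer shape and search grid are illustrative assumptions, and the mismatched-decoding formulation itself is not reproduced.

import numpy as np
from scipy.stats import norm

# Illustrative sketch (assumed setup, not the project's algorithm): pick the
# outer threshold d of a symmetric 4-level quantizer for a binary-input AWGN
# channel by maximizing the mutual information of the quantized channel
# under equiprobable inputs.
sigma = 0.8                                        # assumed noise standard deviation

def transition_matrix(d):
    # Quantization regions: (-inf,-d], (-d,0], (0,d], (d,inf)
    edges = np.array([-np.inf, -d, 0.0, d, np.inf])
    return np.array([np.diff(norm.cdf(edges, loc=x, scale=sigma)) for x in (-1.0, 1.0)])

def mutual_information(W):
    pz = W.mean(axis=0)                            # output law for uniform inputs
    return 0.5 * np.sum(W * np.log2(W / pz))       # I(X;Z) in bits

best_d = max(np.linspace(0.1, 3.0, 60), key=lambda d: mutual_information(transition_matrix(d)))
print(f"best outer threshold d = {best_d:.2f}, "
      f"I(X;Z) = {mutual_information(transition_matrix(best_d)):.3f} bits")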