Periodic Reporting for period 4 - InfoInt (An Information Theory of Simple Interaction)
Reporting period: 2019-09-01 to 2021-08-31
(1) A novel interactive communication protocol with noisy feedback over Gaussian channels, which attains close-to-optimal performance while maintaining very low complexity (two orders of magnitude below state-of-the-art systems).
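As background for the type of feedback signaling involved, the following is a minimal Python sketch of the classic Schalkwijk-Kailath iteration over a Gaussian channel with idealized noiseless feedback. It only illustrates how feedback can shrink the receiver's estimation error geometrically; it is not the project's noisy-feedback protocol, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
P, N = 1.0, 1.0                  # transmit power and noise variance (illustrative)
n_rounds, n_trials = 15, 5000

def sk_final_error(theta):
    """One Schalkwijk-Kailath run; returns the receiver's final estimation error."""
    a = np.sqrt(12 * P)          # theta ~ Unif[-1/2, 1/2] has variance 1/12
    y = a * theta + rng.normal(0.0, np.sqrt(N))
    est = y / a                  # receiver's initial estimate of theta
    err_var = N / a**2           # variance of (est - theta)
    for _ in range(n_rounds - 1):
        eps = est - theta                     # transmitter learns this via noiseless feedback
        x = np.sqrt(P / err_var) * eps        # retransmit the error, rescaled to power P
        y = x + rng.normal(0.0, np.sqrt(N))
        est -= np.sqrt(P * err_var) / (P + N) * y   # linear MMSE correction
        err_var *= N / (P + N)                # error variance shrinks by 1/(1 + SNR)
    return est - theta

errs = [sk_final_error(rng.uniform(-0.5, 0.5)) for _ in range(n_trials)]
print(f"empirical error std after {n_rounds} rounds: {np.std(errs):.2e}")
```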
(2) Extension of the above ideas to multidimensional setups, resulting in improved bounds on the error performance of such systems.
(3) A new notion of a two-way channel capacity for finite-state protocols. We described a novel simulation scheme based on universal source coding, channel coding, and concentration-of-measure techniques, which can simulate any finite-state protocol with accurate rate guarantees at any noise level, in some cases optimally so. This is the first such result in the interactive setting; determining the interactive channel capacity of general protocols is notoriously difficult and remains wide open at any fixed noise level.
(4) Noisy synchronization between two terminals connected by a channel: We have completely characterized the fundamental limits under various optimality criteria, and provided a low-complexity construction achieving these limits in the vanishing error case.
(5) Introduced the graph information ratio, a new information-theoretic notion that quantifies the most efficient way of communicating an information source over a channel while maintaining some structural integrity of the source. We studied the information ratio in depth and derived many of its intriguing properties.
(6) In multiuser zero-error communication setups, we derived a new outer bound on the rate region of the binary adder multiple-access channel using a novel variation of the VC dimension. We studied the problem of efficiently broadcasting messages to two users under a zero-error criterion, and gave new inner bounds on the achievable rates via a novel notion we call the rho-capacity of a graph. We also gave new inner and outer bounds on the zero-error capacity region of the two-way channel, using a variety of techniques.
(7) Relating feedback information theory to problems of adaptive search, we studied the problem of optimally searching for a target under a physically motivated model of measurement-dependent noise, where the noise level depends on the size of the probed area. We showed how to optimally solve this problem via feedback communication techniques, and also characterized the penalty incurred if the search is forced to be non-adaptive.
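To illustrate the search model (with a strategy of our own choosing, not necessarily the optimal one from this work), the Python sketch below runs a Bayesian bisection search in which the crossover probability of each noisy answer grows with the measure of the probed set; the noise law q = q_max * |S| and all parameters are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_queries, n_trials = 1024, 60, 500
q_max = 0.3    # crossover probability when the entire unit interval is probed (assumed)

def search_once():
    target = rng.integers(n_bins)
    post = np.full(n_bins, 1.0 / n_bins)       # posterior over the target location
    for _ in range(n_queries):
        # Probe the smallest prefix of bins holding at least half the posterior mass.
        k = int(np.searchsorted(np.cumsum(post), 0.5)) + 1
        q = q_max * (k / n_bins)               # noise level grows with the probed area
        ans = bool(target < k) ^ bool(rng.random() < q)  # noisy "is the target in the set?"
        in_set = np.arange(n_bins) < k
        post *= np.where(in_set == ans, 1.0 - q, q)      # Bayes update
        post /= post.sum()
    return int(np.argmax(post)) == target

acc = np.mean([search_once() for _ in range(n_trials)])
print(f"success rate after {n_queries} measurement-dependent noisy queries: {acc:.3f}")
```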
(8) We studied various information-theoretic problems in a "single-bit" setup, where the output of a channel needs to be compressed into one bit (or a few bits) that would be "most informative" if sent back to the transmitter. We provided a new upper bound on the performance of the best strategy for the mutual information measure, improving the best known bound in a wide regime. When measuring "informative" in terms of sequential predictability under quadratic loss, we showed that majority vote is essentially the optimal function at low noise but is not optimal at high noise; hence, there is no single Boolean function that is simultaneously optimal at all noise levels. This also led us to study Boolean self-predicting functions and their properties, using Fourier analysis over the Boolean cube.
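To make the single-bit setup concrete, here is a small Python sketch that computes I(X^n; f(Y^n)) exactly, where X^n is uniform, Y^n is its output over a binary symmetric channel, and f is a Boolean function, comparing a dictator function against majority. The blocklength and crossover probability are illustrative, and the sketch does not reproduce the new upper bound itself.

```python
import numpy as np
from itertools import product

n, p = 5, 0.1                                    # blocklength and BSC crossover (illustrative)
X = np.array(list(product([0, 1], repeat=n)))    # all 2^n binary strings
dist = (X[:, None, :] != X[None, :, :]).sum(-1)  # pairwise Hamming distances
Pyx = p**dist * (1 - p)**(n - dist)              # channel law P(y | x), rows indexed by x

def mi_bits(f_vals):
    """Exact I(X^n; f(Y^n)) in bits, for f given by its truth table over the y's."""
    Pbx = np.stack([Pyx[:, f_vals == b].sum(1) for b in (0, 1)], axis=1)  # P(b | x)
    Pb = Pbx.mean(0)                                                      # P(b), X uniform
    return np.nansum(Pbx * np.log2(Pbx / Pb)) / 2**n

dictator = X[:, 0]                               # f(y) = y_1
majority = (X.sum(1) > n // 2).astype(int)
print(f"I with dictator: {mi_bits(dictator):.4f} bits")
print(f"I with majority: {mi_bits(majority):.4f} bits")
```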
(9) Distributed interactive inference under communication constraints: We characterized the minimax quadratic risk that two remotely located agents can attain when estimating the correlation between their samples under a fixed communication budget. In particular, we used novel symmetric strong data processing inequality (SDPI) techniques to show that interaction does not help in this problem, and described a low-complexity protocol that asymptotically attains the best possible performance, via extreme value theory. We also analyzed a distributed hypothesis testing version of the problem and provided new bounds on the Stein exponent, improving the state of the art via novel diffusion-process-based techniques.
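As a simple point of comparison (a textbook baseline, not the extreme-value protocol from this work), the sketch below estimates the correlation when each agent reveals just the sign of each Gaussian sample, inverting the classical orthant identity P(sign agreement) = 1/2 + arcsin(rho)/pi; the sample size and correlation value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
rho, n = 0.6, 200_000           # true correlation and number of sample pairs (illustrative)

# Correlated standard Gaussian pairs; agent A holds x, agent B holds y.
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# Each agent communicates only the sign of each sample (one bit per sample).
agree = np.mean(np.sign(x) == np.sign(y))
rho_hat = np.sin(np.pi * (agree - 0.5))   # invert P(agree) = 1/2 + arcsin(rho)/pi
print(f"true rho = {rho}, one-bit-per-sample estimate = {rho_hat:.4f}")
```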
(10) Private simultaneous messages: In this problem, Alice and Bob, who share common randomness hidden from Charlie, each send Charlie a single message, from which Charlie should compute a function of their random inputs while learning nothing beyond the value of the function. We showed that the best-known lower bound for this problem (dating from the 1990s) had a flaw in its proof, and provided a different proof that yields a correct lower bound (currently the best bound known).
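For intuition about the model, here is the textbook one-time-pad style PSM protocol for the XOR of two bits: each party masks its input with the shared random bit, so each message individually looks uniform to Charlie. This toy example merely illustrates the setting and has no bearing on the lower bound itself.

```python
import secrets

def psm_xor(x: int, y: int) -> int:
    """Toy private simultaneous messages protocol for f(x, y) = x XOR y (single bits)."""
    r = secrets.randbits(1)   # shared randomness, known to Alice and Bob but not Charlie
    m_alice = x ^ r           # Alice's single message: marginally uniform
    m_bob = y ^ r             # Bob's single message: marginally uniform
    return m_alice ^ m_bob    # Charlie computes (x^r) ^ (y^r) = x ^ y, learning nothing else

# Sanity check over all input pairs.
assert all(psm_xor(x, y) == x ^ y for x in (0, 1) for y in (0, 1))
```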
(11) Hypothesis testing under computation constraints: We provided new upper and lower bounds (tight in some regimes) on the minimax error probability in testing between two Bernoulli distributions using a finite-state machine, and showed that deterministic machines are worse than randomized ones by as much as a factor of 2 in the exponent.
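As an illustration of the computational model (with a machine of our own choosing, not the optimal machines from this work), the Python sketch below tests Bernoulli(p0) against Bernoulli(p1) using a deterministic K-state saturating counter and estimates both error probabilities by simulation; all parameters are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
p0, p1, K = 0.4, 0.6, 16        # the two hypotheses and the number of states (illustrative)
n_samples, n_trials = 2000, 1000

def run_machine(p):
    """K-state saturating counter: +1 on a one, -1 on a zero; decide by final state."""
    s = K // 2
    for up in rng.random(n_samples) < p:
        s = min(s + 1, K - 1) if up else max(s - 1, 0)
    return s >= K // 2          # declare Bernoulli(p1) if the counter ends in the top half

err0 = np.mean([run_machine(p0) for _ in range(n_trials)])      # false alarm under p0
err1 = np.mean([not run_machine(p1) for _ in range(n_trials)])  # miss under p1
print(f"K={K} states: P(err|p0) ~ {err0:.3f}, P(err|p1) ~ {err1:.3f}")
```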
Finally, it should be noted that the project was kindly granted a one-year extension, as well as an additional six-month extension due to delays caused by the Covid-19 pandemic. This added time was very beneficial to the success of the project, as it enabled us to finalize multiple outstanding tasks and bring them to fruition.