## Periodic Reporting for period 2 - FRAPPANT (Formal Reasoning About Probabilistic Programs: Breaking New Ground for Automation)

Reporting period: 2020-05-01 to 2021-10-31

The FRAPPANT project focuses on the theoretical foundations of probabilistic programs. What are such programs? They are computer programs with the feature that every now and then they flip a coin. Thus, whereas a typical computer program is deterministic --- running it repeatedly on a given input always yields the same output --- a first run of a probabilistic program on a given input may yield a different result than a second run. Though this sounds a bit counterintuitive, this feature is extremely useful. Some computational problems can be solved more efficiently with probabilistic programs than with deterministic ones. Some computational problems even cannot be solved by any deterministic program at all, yet can be solved using randomization (coin flips).
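
As a minimal illustration (a toy sketch, not an example from the project itself), the following program shows the defining feature: running it twice on the same (empty) input may produce different outputs.

```python
import random

def flip_coin():
    # A tiny probabilistic "program": every run flips a fair coin,
    # so repeated runs on the same input can yield different outputs.
    return "heads" if random.random() < 0.5 else "tails"
```

Over many runs, both outcomes occur with probability one half each.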

But reasoning about such programs is difficult. Let us illustrate this by an example. The question whether a deterministic program halts on a given input --- that is, whether it stops computing and outputs a result after finitely many steps --- is semi-decidable. This means that an automated test of whether a deterministic program halts may complete with the feedback "the program halts", or may end up in an infinite, never-stopping execution. For probabilistic programs the situation is worse. The question whether a probabilistic program terminates on a given input with probability one --- that is, there may be never-stopping program runs, but they all occur with probability zero --- is not solvable by any automated test. It turns out that this problem is as hard as checking whether a deterministic program halts on all inputs.

In addition to coin flipping, a probabilistic program has the ability to learn from data. A probabilistic program can be seen as a transformer that takes as input a probability distribution over inputs and outputs a probability distribution over the possible output values. Data --- observations from the real world --- typically change these distributions. Assume the program computes a distribution over a program variable x representing the room temperature, based on several inputs such as humidity, air circulation, room occupancy, and so forth. If more information about these parameters becomes known, e.g. sensor measurements show that the humidity is between 30 and 36%, then this influences the distribution over the room temperature. This is known as Bayesian learning. The programming constructs in a probabilistic programming language provide powerful means to model operations on distributions that go beyond Bayesian learning. Probabilistic programs are thus more expressive than, for instance, Bayesian networks, a probabilistic graphical model that is successfully used in a wide range of applications.
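
The room-temperature scenario can be sketched in code. The model below is entirely hypothetical (the prior ranges and the temperature formula are invented for illustration); it implements conditioning on the observation "humidity is between 30 and 36%" by rejection sampling, the simplest way to realise Bayesian learning.

```python
import random

def model():
    # Hypothetical toy model: humidity (in percent) is drawn from a
    # prior, and the room temperature depends on humidity plus noise.
    humidity = random.uniform(20, 50)
    temperature = 18 + 0.1 * humidity + random.gauss(0, 0.5)
    return humidity, temperature

def posterior_temperature(num_samples=10_000):
    # Conditioning by rejection sampling: keep only those runs that
    # are consistent with the observation (the "observe" statement).
    samples = []
    while len(samples) < num_samples:
        humidity, temperature = model()
        if 30 <= humidity <= 36:  # observed: humidity in [30, 36]%
            samples.append(temperature)
    return samples
```

Restricting the humidity shifts and narrows the distribution over the temperature, exactly the effect of observing data.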

In this project, we investigate, in an exact mathematical manner, what probabilistic programs mean. We also develop techniques to check essential properties of probabilistic programs, such as whether they terminate with probability one (called almost-sure termination) on all possible inputs. This is a notoriously hard question --- it is highly undecidable --- and, in general, not solvable in an automated manner. In FRAPPANT, we have successfully developed automated techniques that are able to decide, for many programs, that they are almost-surely terminating, or the reverse: that they are not.
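
The textbook example of an almost-surely terminating program (a standard illustration, not one of the project's case studies) is a loop that flips a fair coin until it sees heads. The run that sees tails forever never terminates, but it has probability zero; all other runs terminate, and the expected number of flips is two.

```python
import random

def flips_until_heads():
    # Almost-surely terminating loop: an infinite run (tails forever)
    # exists, but occurs with probability zero.
    flips = 0
    while True:
        flips += 1
        if random.random() < 0.5:  # fair coin came up heads
            return flips
```

Proving this kind of property automatically, for programs far less obvious than this one, is exactly what the FRAPPANT techniques target.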

This is typical of the FRAPPANT project: we attempt to push the borders of automated reasoning about probabilistic programs. So far we have concentrated on sequential programs; the next challenge is to also treat concurrent probabilistic programs.

We worked on several aspects so far. Most prominently, we developed

a. first algorithms for the automated verification of infinite-state probabilistic programs (publications at CAV 2020 and CAV 2021);

b. a calculus to upper-bound the Kantorovich distance between executions of a probabilistic program (POPL 2021 distinguished paper award);

c. a probabilistic extension of O'Hearn's and Reynolds' seminal Separation Logic to reason about randomized algorithms that manipulate dynamic data structures (publication at POPL 2019);

d. a denotational transformer semantics for probabilistic while-programs in terms of generating functions (LOPSTR 2020 best paper award).
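
To give a flavour of the generating-function view in item d (a minimal numerical sketch, not the calculus from the paper itself): the number of iterations of the fair-coin loop above follows a geometric distribution, and its probability generating function G(z) packages both the termination probability, G(1), and the expected number of iterations, G'(1).

```python
def geometric_pgf(z, p=0.5):
    # Probability generating function of the number of flips until
    # heads for a coin with heads-probability p:
    #   G(z) = p*z / (1 - (1-p)*z)
    return p * z / (1 - (1 - p) * z)

# G(1) is the termination probability; 1 means almost-sure termination.
termination_prob = geometric_pgf(1.0)

# G'(1) is the expected number of iterations; approximate the
# derivative at z = 1 by a backward finite difference.
h = 1e-6
expected_iterations = (geometric_pgf(1.0) - geometric_pgf(1.0 - h)) / h
```

For a fair coin this yields termination probability 1 and expected runtime 2, matching the intuition above.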

In addition, we provided a weakest precondition calculus for a probabilistic programming language with continuous distributions, conditioning, and scoring, and proved its correspondence to an operational semantics based on entropies. In another line of research, we studied an important subclass of probabilistic programs, Bayesian networks, and were able to show that applying automated verification techniques from the field of probabilistic model checking is very competitive with analysis techniques that are tailored to analysing Bayesian networks.
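
The "scoring" construct mentioned above can be illustrated operationally (a hypothetical toy sketch, not the calculus itself): each run of the program carries a weight, a score statement multiplies that weight by a likelihood factor, and posterior quantities are obtained by self-normalised importance sampling over weighted runs.

```python
import math
import random

def weighted_run():
    # Hypothetical program with a continuous prior and one score
    # statement: observing a noisy measurement y = 0.5 of x, with
    # Gaussian noise of standard deviation 1.
    x = random.gauss(0.0, 1.0)          # continuous prior on x
    weight = 1.0
    y = 0.5
    weight *= math.exp(-0.5 * (y - x) ** 2)  # the score statement
    return x, weight

def posterior_mean(runs=20_000):
    # Self-normalised importance sampling: weight-average the runs.
    total_w = 0.0
    total_wx = 0.0
    for _ in range(runs):
        x, w = weighted_run()
        total_w += w
        total_wx += w * x
    return total_wx / total_w
```

For this model the exact posterior mean is y/2 = 0.25, which the weighted average approaches as the number of runs grows.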

The greatest progress, in my view, has been on automating the analysis of probabilistic programs. Both termination analysis and the synthesis of loop invariants have progressed extremely well, and the results exceed expectations. We will certainly evolve further in this direction and will also attempt to consider concurrent, i.e. parallel, probabilistic programs.