CORDIS - EU research results

Computable Analysis

Final Report Summary - COMPUTAL (Computable Analysis)

The programme of this exchange focuses on the following related topics: computable analysis, domain theory, topology, and exact real arithmetic, covering both theoretical and applied aspects.

A major motivation for undertaking this research is the fact that in safety-critical applications it is not sufficient to produce software that is merely tested for correctness: its correctness has to be formally proven. The same holds in scientific computation. The problem is that the current mainstream approach to numerical computing uses programming languages that do not possess a sound mathematical semantics, so there is no way to provide formal correctness proofs. The reason is that on the theoretical side one deals with well-developed analytical theories based on the non-constructive concept of real number, whereas implementations use floating-point realisations of real numbers, which do not have a well-studied mathematical structure. Ways out of these problems are currently promoted under the slogan "Exact Real Arithmetic". Well-developed practical and theoretical bases for exact real arithmetic and, more generally, computable analysis are provided by Scott's Domain Theory and Weihrauch's Type Two Theory of Effectivity (TTE). Other computational models are also used in certain applications. Moreover, strengthening the underlying logic has been suggested. The full relationship between these approaches is still under investigation.

Exact Real Arithmetic is an approach in which, in contrast to mainstream numerical computation, control of truncation and rounding errors, as well as of the working precision, is integrated into the software. Several software packages for exact real arithmetic are freely available. The iRRAM package has been extended to allow the computational treatment of problems in hybrid systems, which are well known for their computational hardness.
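To make the idea concrete, here is a minimal, hypothetical Haskell sketch of the general scheme (not of the iRRAM package itself, which is written in C++): a real number is modelled as a function that, for every requested precision n, returns a rational approximation with error at most 2^(-n), and each operation queries its arguments at a sufficiently higher precision so that its own error bound is met. All names below (CReal, approx, addR, sqrt2) are invented for this illustration.

import Data.Ratio ((%))

-- A real number, modelled as a function that returns, for every
-- precision n, a rational q with |q - x| <= 2^(-n).
newtype CReal = CReal { approx :: Int -> Rational }

-- Rational numbers are represented exactly.
fromRat :: Rational -> CReal
fromRat q = CReal (const q)

-- Addition asks both arguments for one extra bit of precision, so that
-- two errors of at most 2^(-(n+1)) add up to at most 2^(-n).
addR :: CReal -> CReal -> CReal
addR x y = CReal (\n -> approx x (n + 1) + approx y (n + 1))

-- The square root of 2, approximated by interval bisection: after n
-- halvings of [1,2] the midpoint is within 2^(-n) of the true value.
sqrt2 :: CReal
sqrt2 = CReal (\n -> bisect n 1 2)
  where
    bisect :: Int -> Rational -> Rational -> Rational
    bisect 0 lo hi = (lo + hi) / 2
    bisect k lo hi =
      let mid = (lo + hi) / 2
      in if mid * mid <= 2 then bisect (k - 1) mid hi else bisect (k - 1) lo mid

main :: IO ()
main = print (approx (addR sqrt2 (fromRat (1 % 2))) 20)

The bookkeeping in addR is exactly the integrated error control mentioned above: the software, not the user, decides how precisely the arguments have to be evaluated.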

Weihrauch's TTE is based on Turing machines that transform infinite strings in infinitely long computations. The study of such computations is also important for other branches of theoretical computer science, including specification, verification and synthesis of reactive systems as well as stream computability. In contrast to the theory of finite computations, the study of infinite computations crucially depends on topological considerations.
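As an illustration of machines transforming infinite strings (again a hypothetical Haskell sketch, not project code): a real x in [-1,1] can be written as an infinite stream of signed digits d_i in {-1,0,1} with x = sum_{i>=1} d_i * 2^(-i), and a function on such streams is Type-2 computable when every finite part of the output depends on only a finite part of the input. Negation is a particularly simple example: each output digit is determined by a single input digit. The names SDigit, negS and evalS are invented here.

-- Signed binary digits -1, 0, 1.
data SDigit = M | Z | P deriving Show

-- An infinite stream d1, d2, ... of signed digits represents the real
-- number sum_{i>=1} d_i * 2^(-i), an element of [-1,1].
type SReal = [SDigit]

-- Negation: each output digit depends on exactly one input digit, so a
-- Type-2 machine can produce the output stream digit by digit.
negS :: SReal -> SReal
negS = map neg
  where
    neg M = P
    neg Z = Z
    neg P = M

-- A rational approximation from the first n digits; the ignored tail
-- contributes at most 2^(-n).
evalS :: Int -> SReal -> Rational
evalS n ds = sum [toR d / 2 ^ i | (i, d) <- zip [1 .. n] ds]
  where
    toR M = -1
    toR Z = 0
    toR P = 1

-- The stream P, Z, Z, ... represents 1/2; its negation represents -1/2.
main :: IO ()
main = print (evalS 10 (negS (P : repeat Z)))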

An obvious way to measure the complexity of an infinite computation is to count how many steps are necessary to compute an approximation of a given precision. On this basis, a theory of computational complexity has been developed in analogy to the discrete case. Complexity statements in this theory are worst-case statements: the computational complexity of a problem is the complexity of the worst behaviour among the elements of the problem set. Consequently, such statements sometimes give unrealistic over-estimates of computational cost. As a way out of this problem, the study of average-case complexity was started in the present exchange. Standard examples of continuous functions with increasingly high worst-case complexity turn out to be easy on average, while a further example is constructed whose worst-case and average-case complexity are both exponential.
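In a standard style of definition (given here only for orientation; the project's publications may set this up differently), let T_f(x, n) be the number of steps needed to output a 2^(-n)-approximation of f(x) from x. Over a compact set K equipped with a probability measure \mu, the two notions of complexity contrasted above are then

T_f^{\mathrm{wc}}(n) = \sup_{x \in K} T_f(x, n), \qquad T_f^{\mathrm{avg}}(n) = \int_K T_f(x, n)\, d\mu(x),

and the results mentioned above compare the growth rates of these two quantities.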

A further observation is that more realistic complexity statements can be obtained if the complexity of a computation depends not only on the output precision but also on certain natural parameters. The complexity of natural operators on subclasses of smooth functions, each of which comes with a characteristic integer parameter, is considered in this way. It could be shown that Maurice Gevrey's classical 1918 hierarchy of functions, climbing from analytic to (just below) smooth, provides a quantitative gauge of the uniform (operator) complexity of maximisation and integration, which non-uniformly (as complexity of the input function alone) is known to jump from polynomial time (fast) to NP-hard (slow).
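For orientation, the standard definition of these classes (a textbook formulation, not a quotation from the project's papers): a smooth function f on a compact interval belongs to the Gevrey class of level s >= 1 if there are constants C, R > 0 such that

|f^{(k)}(x)| \le C\, R^{k}\, (k!)^{s} \quad \text{for all } k \in \mathbb{N} \text{ and all } x \text{ in the interval};

level s = 1 yields exactly the analytic functions, and increasing s climbs towards, without ever exhausting, the class of smooth functions.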

Computable functions are known to be continuous. In most cases a function is non-computable simply because it is already discontinuous. In order to measure the degree of discontinuity and algorithmic unsolvability of a function, hierarchies of sets of functions are introduced and functions are classified according to their exact level in the hierarchy. There is a well-developed theory of such hierarchies: Descriptive Set Theory. Traditionally, it considers only Polish spaces. The spaces typically used in mathematical studies of program semantics (and hence vital for the present research) are, however, not Polish. In systematic work, essential parts of descriptive set theory could be extended to a larger class of spaces embracing both Polish spaces and the domains used in computer science. The newly introduced class of quasi-Polish spaces turned out to be the right concept.
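For reference, the definition as introduced, to the best of my knowledge, by de Brecht: a quasi-Polish space is a countably based topological space whose topology is induced by a Smyth-complete quasi-metric, that is, a distance function d satisfying

d(x, x) = 0, \qquad d(x, z) \le d(x, y) + d(y, z), \qquad \big(d(x, y) = d(y, x) = 0 \;\Rightarrow\; x = y\big),

but not necessarily the symmetry d(x, y) = d(y, x). Both Polish spaces and ω-continuous domains with the Scott topology satisfy this definition, which is what makes the class suitable for the purpose described above.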

Research along these lines has been extremely fruitful. It initiated new research activities in Descriptive Set Theory and led to several collaborations with colleagues outside the project.

As it turned out, there are important spaces that are still not covered by this extension. Attempts have been made to extend Descriptive Set Theory to the even larger class of spaces treatable in the TTE approach. The author of this work was awarded the silver medal of the Gödel Research Prize.

As mentioned, a central aim of this undertaking is to lay the foundations for the generation of provably correct software operating on infinite data such as the real numbers. Data representations have been studied that allow the use of logical principles such as co-induction in verification proofs. For this purpose, data are represented as formal proofs of properties of abstract mathematical structures. The method provides not only data representations but, at the same time, programs computing on these data together with formal correctness proofs. New applications in computable analysis, satisfiability checking and imperative program extraction have been found: abstract set-theoretic parsing relations were studied and it was shown how to extract monadic parsers from finiteness properties of such relations; a completeness theorem for Resolution was formalised and a provably correct SAT solver extracted from its proof.
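As a toy illustration of the coinductive flavour of such representations (a hypothetical, hand-written Haskell sketch; the project's programs are extracted by a proof system together with their correctness proofs, not written by hand like this): a real in [-1,1] can be generated corecursively as an infinite stream of signed digits, where each step emits one digit and continues with a rescaled remainder. The productivity of this definition, i.e. that every digit is eventually emitted, is exactly the kind of property a co-inductive verification proof establishes. The names SDigit and digits are invented here.

-- Signed binary digits -1, 0, 1; an infinite stream d1, d2, ... represents
-- the real number sum_{i>=1} d_i * 2^(-i) in [-1,1].
data SDigit = M | Z | P deriving Show

-- Corecursive unfolding of the signed-digit representation of a rational
-- x in [-1,1]: emit a digit d with x = (d + y) / 2 for some y in [-1,1],
-- then continue with the remainder y = 2 * x - d.
digits :: Rational -> [SDigit]
digits x
  | x >= 1 / 2  = P : digits (2 * x - 1)
  | x <= -1 / 2 = M : digits (2 * x + 1)
  | otherwise   = Z : digits (2 * x)

-- Only finite prefixes of the infinite stream are ever demanded.
main :: IO ()
main = print (take 8 (digits (1 / 3)))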

A novel construction of the real numbers using a higher inductive-inductive type was developed. The construction is inspired by the usual Cauchy construction of the reals, but uses Homotopy Type Theory to identify coincident Cauchy sequences via paths while, at the same time, inductively completing under limits of Cauchy sequences. This intriguing new construction provides a homotopy-type-theoretic foundation for the development of new data representations of the reals. It produces the reals as an inductive type, which no previously known construction achieved, and they are therefore equipped with accompanying induction and recursion principles. How best to use these for exact real arithmetic remains to be investigated.
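To indicate the shape of such a definition (following the Cauchy-reals construction in the Homotopy Type Theory book, which appears to be the construction alluded to here; the following is only a rough sketch): the type \mathbb{R}_c is generated simultaneously with a closeness relation \sim_\varepsilon by the constructors

\mathsf{rat} : \mathbb{Q} \to \mathbb{R}_c, \qquad \mathsf{lim} : \big(x : \mathbb{Q}^{+} \to \mathbb{R}_c\big) \to \big(\forall \delta, \varepsilon.\; x_\delta \sim_{\delta+\varepsilon} x_\varepsilon\big) \to \mathbb{R}_c, \qquad \mathsf{eq} : \big(\forall \varepsilon.\; u \sim_\varepsilon v\big) \to u = v,

so that every Cauchy approximation already has a limit inside the type, and the induction and recursion principles mentioned above quantify over points and closeness proofs simultaneously.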

The project was a great success: it led to a great many important results and insights as well as to new developments. There are deep results that would never have been established without the project: the secondments allowed researchers to meet, discuss, and collaborate.