Community Research and Development Information Service - CORDIS


AI4REASON Report Summary

Project ID: 649043
Funded under: H2020-EU.1.1.

Periodic Reporting for period 2 - AI4REASON (Artificial Intelligence for Large-Scale Computer-Assisted Reasoning)

Reporting period: 2017-03-01 to 2018-08-31

Summary of the context and overall objectives of the project

The AI4REASON project targets a very hard problem in AI and the automation of reasoning: automatically proving theorems in large and complex theories.

Such complex formal theories arise in projects aimed at verification of today's advanced mathematics such as the Formal Proof of the Kepler Conjecture (Flyspeck), verification of software and hardware designs such as the seL4 operating system kernel, and verification of other advanced systems and technologies of today's information society.

Designing an explicitly programmed solution to this problem seems extremely complex and unlikely to succeed. However, we have recently shown that the performance of existing approaches can be multiplied by data-driven AI methods that learn reasoning guidance from large proof corpora. The AI4REASON project focuses on developing such novel AI methods.

Work performed from the beginning of the project to the end of the period covered by the report and main results achieved so far

The work proceeded as scheduled, and good progress was made in the research areas covered by the five work packages:

WP1: High-Level Premise Selection
WP2: Internal Proof Guidance
WP3: Lemmatization, Conjecturing, and Concept Introduction
WP4: Self-Improving AI Systems Combining Deduction and Learning
WP5: Deployment and Cross-Corpora Reuse

In WP1, we have worked on novel learning architectures for premise selection, applying deep neural networks to this task jointly with a new team at Google. This resulted in our paper "DeepMath", describing the first use of deep learning for theorem proving. We have also explored various boosting methods between the learning and proving systems, leading to improvements in premise selection and consequently to stronger theorem proving in large theories. The learning methods have been applied to several proof assistants, such as Isabelle, HOL4, and Coq, and two journal papers about these methods and systems were published.
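To illustrate the task, premise selection can be cast as learning to rank library facts by their likely usefulness for a new conjecture. The sketch below is a minimal k-nearest-neighbour ranker over symbol features with a toy, hypothetical proof corpus; it shows only the general shape of the task, not the DeepMath neural architecture or any system used in the project.

```python
from collections import Counter

# Toy proof corpus: each past theorem is represented by its symbol
# features, paired with the premises used in its proof. (Hypothetical data.)
corpus = [
    ({"plus", "zero"}, ["add_0", "add_comm"]),
    ({"plus", "succ"}, ["add_succ", "add_comm"]),
    ({"mult", "zero"}, ["mul_0"]),
]

def select_premises(conjecture_features, corpus, k=2, limit=3):
    """Rank premises for a new conjecture by k-NN over symbol features:
    find the k most similar past theorems (Jaccard similarity) and let
    them vote, with their similarity as weight, for the premises that
    appeared in their proofs."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    neighbours = sorted(
        corpus,
        key=lambda item: jaccard(conjecture_features, item[0]),
        reverse=True,
    )[:k]
    votes = Counter()
    for features, premises in neighbours:
        weight = jaccard(conjecture_features, features)
        for premise in premises:
            votes[premise] += weight
    return [premise for premise, _ in votes.most_common(limit)]

print(select_premises({"plus", "zero", "succ"}, corpus))
# → ['add_comm', 'add_0', 'add_succ']
```

In a real setting the ranked premises are handed to an automated prover, and the prover's success or failure on them feeds back into the training data.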

In WP2, we have started by implementing fast machine learning guidance in simpler theorem provers and also added Monte-Carlo methods to them. The improvement led us to add learning-based guidance to more complicated higher-order systems, and finally also to optimized state-of-the-art automated theorem provers. This is still work in progress; however, the first experimental results have been surprisingly good. We have also worked on adding machine learning guidance to tactical theorem provers such as HOL4 and Isabelle.
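To make the idea of internal proof guidance concrete, the sketch below (our own simplified illustration, not actual prover code) replaces the usual age/weight clause-selection heuristic of a saturation prover's given-clause loop with a learned scoring function. The score function and clause encoding are hypothetical stand-ins, and inference itself is stubbed out; only the selection order is recorded.

```python
import heapq

def learned_score(clause):
    """Stand-in for a trained model: prefer short clauses whose symbols
    occurred in past successful proofs. (Hypothetical learned weights.)"""
    useful_symbols = {"plus", "zero"}   # imagined output of training
    overlap = len(set(clause) & useful_symbols)
    return len(clause) - 2 * overlap    # lower score = selected earlier

def given_clause_loop(clauses, goal_symbol, max_steps=10):
    """Skeleton of a given-clause loop: repeatedly pick the best
    unprocessed clause according to the learned score. Generating new
    clauses by inference is omitted; reaching a clause that mentions
    the goal symbol stands in for finding a proof."""
    unprocessed = [(learned_score(c), i, c) for i, c in enumerate(clauses)]
    heapq.heapify(unprocessed)
    selection_order = []
    while unprocessed and len(selection_order) < max_steps:
        _, _, clause = heapq.heappop(unprocessed)
        selection_order.append(clause)
        if goal_symbol in clause:       # proxy for "proof found"
            break
    return selection_order
```

The point of the design is that only the scoring function changes: the prover's search loop stays the same, so a learned model (or a Monte-Carlo estimate) can be swapped in for the classical heuristic.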

In WP3, we have developed lemmatization methods that improve the performance of ATPs over large formal libraries. This was continued by employing statistical-symbolic concept analogies as a method for both targeted and untargeted conjecturing over a large mathematical corpus. A particular statistical-symbolic method has also been used to propose the most likely contradictory sets in large formal corpora.

In WP4, we have worked on several systems that implement a feedback loop between solving and defining classes of problems, and inventing better theorem proving strategies for these classes. Such loops gradually solve more and more problems and invent better and better specialized strategies. Some of the methods have been used in our Machine Learner for Automated Reasoning (MaLARea), which implements several feedback loops between theorem proving and machine learning. This system significantly improves over its base deductive component.
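The flavour of such a feedback loop can be shown with a toy sketch (problem names and the subset-check "prover" are hypothetical simplifications; the real systems use statistical models and actual provers): in each round, newly solved problems enrich the knowledge base, which in turn makes further problems solvable in the next round.

```python
def feedback_loop(problems, base_knowledge, rounds=5):
    """Simplified solve-and-learn loop: in each round, attempt every
    unsolved problem using everything established so far; problems
    solved in this round become usable knowledge for the next round.

    `problems` maps each problem name to the set of facts its (toy)
    proof requires; a problem is "proved" once all requirements are
    already known."""
    known = set(base_knowledge)
    solved = set()
    history = []                      # what was newly solved each round
    for _ in range(rounds):
        newly_solved = {name for name, needs in problems.items()
                        if name not in solved and needs <= known}
        if not newly_solved:
            break                     # fixed point: no further progress
        solved |= newly_solved
        known |= newly_solved         # solved problems feed the next round
        history.append(sorted(newly_solved))
    return history
```

Each pass through the loop corresponds to one proving-then-learning iteration: the growing `known` set plays the role of the retrained guidance model, which is why such systems solve strictly more problems round after round until they saturate.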

In WP5, we have worked on statistical-semantic parsing tools for informal mathematics and on producing mathematical datasets on which such systems can be trained. Our first methods have turned out to be surprisingly good at automated formalization, which opens the prospect of large-scale, deep computer understanding of, and assistance with, mathematical and scientific writing in the not-so-distant future.

Our systems won two divisions of the annual CADE ATP System Competition in 2018, and one division in 2017.

In total, the researchers have published over 30 papers, organized several conferences and workshops in the field, given a number of invited talks, and successfully competed in several theorem-proving competitions. Combining learning and reasoning seems to be a very viable approach to building stronger AI and reasoning systems.

Progress beyond the state of the art and expected potential impact (including the socio-economic impact and the wider societal implications of the project so far)

Our project and domain is unique in connecting two major AI fields: Automated Reasoning and Machine Learning. This produces new methods in Automated Reasoning, as well as new tasks and issues in Machine Learning. Particularly interesting and important are combinations of learning and reasoning methods into larger AI metasystems where the learning and reasoning components inform and improve each other's work in various feedback loops.

In more detail, the major novel aspects and progress beyond state of the art include:

- equipping a number of theorem provers with a guiding component based on learning from previous proofs,
- application of deep learning and other advanced learning methods to theorem proving,
- defining the autoformalization task and building the first corpora and systems for autoformalization, and
- building several AI metasystems that combine learning and reasoning in various feedback loops.

These methods have led to a significant improvement of the performance of automated reasoning and autoformalization tools on several standard benchmarks as well as to new results in automatically assisted research-level mathematics.

In the second half of the project we expect further development of all these areas. Many of the developed methods and systems are still evolving prototypes, and a large number of new ideas are appearing as we develop them. These need to be implemented and experimentally evaluated, together with the many ideas that have started to come from the growing wider community interested in these AI topics.