
ERC

FLEXILOG Report Summary

Project ID: 637277
Funded under: H2020-EU.1.1.

Periodic Reporting for period 1 - FLEXILOG (Formal lexically informed logics for searching the web)

Reporting period: 2015-05-01 to 2016-10-31

Summary of the context and overall objectives of the project

"The long-term aim of this research is to develop systems that can provide direct answers to questions from users, by reasoning about information that is found on the web. The specific technical challenges addressed in this project relate to how information is represented, and how inferences can be made in a way that is sufficiently robust to deal with the messy nature of data on the web.

Traditionally, in the field of artificial intelligence, logics have been used to represent and reason about information. An important advantage of using logic is that the underlying reasoning processes are completely transparent. Moreover, logical representations naturally allow us to combine information coming from a variety of sources, including structured information (e.g. ontologies and knowledge graphs), information provided by domain experts or obtained through crowdsourcing, or even information expressed in natural language. However, logical inference is also very brittle. Two limitations are particularly problematic in the context of web data: (i) most logics have no mechanism for handling inconsistency, and (ii) there is no mechanism for deriving plausible conclusions in cases where "hard evidence" is missing.

Vector space models form a popular alternative to logic-based representations. The main idea is to represent objects, categories, and the relations between them as geometric objects (e.g. points, vectors, regions) in a high-dimensional Euclidean space. Such models have proven surprisingly effective for many tasks in fields such as information retrieval, natural language processing, and machine learning. However, the underlying inference processes lack transparency, and the conclusions that are derived come without guarantees. This is problematic in many applications, as it is often important to provide an intuitive justification to the end user about why a given statement is believed. Such justifications are also invaluable for debugging or assessing the performance of a system. Finally, the black-box nature of vector space representations makes them difficult to integrate with other sources of information.

The aim of this project is to combine the best of both worlds. Specifically, the aim is to derive interpretable semantic structures from vector space models, and to use these semantic structures to develop robust forms of logic based inference. In particular, our main objectives are as follows:

* To develop new methods for learning interpretable vector space models from data
* To develop robust and efficient forms of inference that use the learned vector space models to derive plausible conclusions, or deal with inconsistencies
* To evaluate the effectiveness of these methods in a variety of tasks in fields such as natural language processing, information retrieval and machine learning

The importance of this project is twofold. First, the developed methodology will form the basis of a new generation of interpretable machine learning methods. The lack of interpretability of current systems is an important concern, and is likely to become even more critical as people's lives become increasingly affected by systems that rely on artificial intelligence. Second, the methods will directly contribute to the development of more intelligent search engines, making it easier for users to obtain information that answers complex questions.

Work performed from the beginning of the project to the end of the period covered by the report and main results achieved so far

In the first reporting period, the focus has been on learning suitable vector space models from data. In particular, while there is an abundance of existing methods for learning vector space models, the models these methods produce are typically not interpretable. To address this issue, we have developed a new method that yields a direct correspondence between the geometric structure of the vector space model and a logical representation of the same domain. For example, objects correspond to points, concepts to regions, features to directions (or vectors), and semantic types to subspaces.
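
To make this correspondence concrete, the following is a minimal sketch (not the project's actual implementation) of how such a geometric model can be read logically; the embeddings, concepts, and features are hypothetical toy data.

```python
# Objects as points, a concept as a region (here: a ball), a feature as
# a direction, and a semantic type as a linear subspace.
import numpy as np

# Hypothetical 3-dimensional embeddings of a few objects.
objects = {
    "sparrow":  np.array([0.9, 0.8, 0.1]),
    "penguin":  np.array([0.7, 0.2, 0.1]),
    "airplane": np.array([0.1, 0.9, 0.9]),
}

# The concept "bird" as a ball (centre + radius): point-in-region
# membership mirrors the logical assertion Bird(x).
bird_centre, bird_radius = np.array([0.8, 0.5, 0.1]), 0.4

def in_concept(point, centre, radius):
    return np.linalg.norm(point - centre) <= radius

# The feature "can fly" as a direction: objects are ranked by their
# projection onto it, mirroring a graded property.
can_fly_direction = np.array([0.0, 1.0, 0.2])

for name, vec in objects.items():
    label = "Bird" if in_concept(vec, bird_centre, bird_radius) else "not Bird"
    print(name, label, "fly-score = %.2f" % (vec @ can_fly_direction))

# A semantic type as a subspace: points of that type lie (approximately)
# in the span of a few basis directions.
type_basis = np.column_stack([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])

def distance_to_subspace(point, basis):
    coeffs, *_ = np.linalg.lstsq(basis, point, rcond=None)
    return np.linalg.norm(point - basis @ coeffs)

print("%.2f" % distance_to_subspace(objects["sparrow"], type_basis))
```

On such a model, a logical statement like "all birds can fly" becomes a checkable geometric claim (the bird region scores highly along the fly direction), which is what makes the learned representation interpretable.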

In a second contribution, we looked at explicitly modelling the uncertainty of vector space representations. This is important to ensure that inferences using the vector space model are made in a principled way. In particular, existing models represent each object as a vector. However, if only limited information is available about a given object, the coordinates of that vector are largely arbitrary, which can lead to spurious inference results. In contrast, our model represents each object as a probability distribution over vector space representations, explicitly capturing how much information we have about the object.
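
The report does not spell out the concrete distributional model; as an illustration, the sketch below assumes one common realisation, in which each object is represented by a Gaussian distribution whose variance grows with our uncertainty about the object, so that comparisons can take confidence into account. All names and numbers are hypothetical.

```python
import numpy as np

class GaussianEmbedding:
    """Object representation: mean vector + diagonal covariance.
    A large variance encodes that little is known about the object."""
    def __init__(self, mean, var):
        self.mean = np.asarray(mean, dtype=float)
        self.var = np.asarray(var, dtype=float)

def kl_divergence(p, q):
    """KL(p || q) for diagonal Gaussians: an asymmetric,
    uncertainty-aware comparison, unlike plain cosine similarity."""
    ratio = p.var / q.var
    sq = (p.mean - q.mean) ** 2 / q.var
    return 0.5 * np.sum(ratio + sq - 1.0 - np.log(ratio))

# A well-documented entity (small variance) vs. a rarely seen one
# (large variance): inferences about the latter should carry less weight.
well_known = GaussianEmbedding([0.9, 0.1], [0.01, 0.01])
rarely_seen = GaussianEmbedding([0.8, 0.2], [1.0, 1.0])

print(kl_divergence(rarely_seen, well_known))  # high: evidence is weak
print(kl_divergence(well_known, rarely_seen))  # low: compatible under uncertainty
```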

Another line of work has looked at qualitative vector space representations. The aim of this work is to deal with situations where vector space representations are not available (or the obtained representations are too uncertain), but where some useful qualitative information about these representations is available. For example, we may know that one person is older than another (which is something that a vector space model can capture), even if a full vector space representation is not available. We have identified ways in which we can reason effectively with such partial/qualitative vector space representations.
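
As an illustration of the kind of reasoning this enables, the sketch below assumes that the qualitative knowledge takes the form of pairwise comparisons along a feature direction (such as "older than"); new comparisons then follow by transitivity, without any coordinates being known. The facts are hypothetical.

```python
from itertools import product

# Known qualitative facts: (a, b) means a is older than b.
older_than = {("alice", "bob"), ("bob", "carol")}

def transitive_closure(pairs):
    """Derive all comparisons entailed by transitivity of the
    (unknown) positions along the feature direction."""
    closure = set(pairs)
    while True:
        new = {(a, d) for (a, b), (c, d) in product(closure, closure)
               if b == c} - closure
        if not new:
            return closure
        closure |= new

print(transitive_closure(older_than))
# includes ("alice", "carol"): inferred without any vector representation
```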

In a final line of work, we have started to use vector space models for improving logical inference methods. In particular, we have shown how our vector space models can be used to support a very powerful form of inductive inference, and how this allows us to automatically complete ontologies. This method was experimentally shown to outperform the existing methods for this problem.
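
The published method is not reproduced here; the sketch below only illustrates the general idea of interpolation-style inductive inference, under the assumption that a property shared by two concepts plausibly extends to concepts lying between them in the vector space. The concepts, vectors, and property are hypothetical.

```python
import numpy as np

# Hypothetical concept embeddings and a partially complete ontology.
concepts = {
    "espresso": np.array([0.9, 0.8]),
    "latte":    np.array([0.6, 0.5]),
    "cocoa":    np.array([0.3, 0.2]),
}
has_property = {"espresso": {"ServedHot"}, "cocoa": {"ServedHot"}}

def is_between(b, a, c, tol=0.1):
    """b lies (approximately) on the segment between a and c."""
    ac = c - a
    t = np.dot(b - a, ac) / np.dot(ac, ac)     # position along the segment
    closest = a + np.clip(t, 0.0, 1.0) * ac
    return np.linalg.norm(b - closest) <= tol

def plausible_properties(target):
    """Properties supported by a pair of concepts that flank the target."""
    inferred = set()
    names = [n for n in concepts if n != target]
    for i, a in enumerate(names):
        for c in names[i + 1:]:
            if is_between(concepts[target], concepts[a], concepts[c]):
                inferred |= has_property.get(a, set()) & has_property.get(c, set())
    return inferred

print(plausible_properties("latte"))  # {'ServedHot'}: plausibly inferred, not asserted
```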

Progress beyond the state of the art and expected potential impact (including the socio-economic impact and the wider societal implications of the project so far)

The most successful methods in artificial intelligence (e.g. deep neural networks) are currently not interpretable. The fact that these methods increasingly affect people's lives (e.g. when used for processing mortgage applications, setting insurance premiums, or determining which news stories are most important) is worrying policy makers as well as the general public, as exemplified by the reaction to Facebook's handling of fake news stories during the US presidential election campaign. Developing methods which are interpretable, as well as robust, is challenging: on the one hand, interpretability requires that the model rely on some kind of symbolic representation; on the other hand, symbolic inference is too brittle for most applications. Bridging such symbolic representations and the kind of vector space representations used in neural networks is precisely what the FLEXILOG project aims to achieve.

While we expect the project outcomes to lead to interpretable models for a wide variety of tasks, we will focus in particular on the development of intelligent methods for searching information on the web.