Formal lexically informed logics for searching the web

Periodic Reporting for period 4 - FLEXILOG (Formal lexically informed logics for searching the web)

Reporting period: 2019-11-01 to 2020-04-30

"The specific technical challenges addressed in this project relate to how knowledge about the world is encoded, and how we can reason with such (messy) knowledge in a robust way.

Traditionally, in the field of artificial intelligence, logics have been used to represent and reason about information. An important advantage of logic is that the underlying reasoning processes are transparent. Moreover, logical representations naturally allow us to combine information coming from a variety of sources, including structured information (e.g. ontologies and knowledge graphs), information provided by experts, or even information expressed in natural language. However, logical inference is also very brittle. Two limitations are particularly problematic in the context of web data: (i) most logics have no mechanisms for handling inconsistency, and (ii) there are no mechanisms for deriving plausible conclusions in cases where "hard evidence" is missing.

Vector space models form a popular alternative to logic-based representations. The main idea is to represent objects, categories, and the relations between them as geometric objects (e.g. points, vectors, regions) in a high-dimensional Euclidean space. Such models have proven surprisingly effective for many tasks in fields such as information retrieval, natural language processing, and machine learning. However, the underlying inference processes lack transparency, and derived conclusions come without guarantees. This is problematic in many applications, as it is often important to provide an intuitive justification to the end user about why a given statement is believed. Such justifications are also invaluable for debugging or assessing the performance of a system. Moreover, the black-box nature of vector space representations makes it difficult to integrate them with other sources of information.

The aim of this project was to combine the best of both worlds. Specifically, methods have been developed to learn expressive vector space models, to derive interpretable semantic structures from these models, and to use such structures to implement robust forms of logic-based inference. Among other things, our methods make it possible to make more accurate predictions in relational domains (e.g. predicting properties of molecules, links between users of social networks, or missing facts in a knowledge base), to implement flexible information retrieval systems (e.g. finding entities that satisfy some high-level properties, even if such properties are not mentioned in available text descriptions), and to achieve deeper levels of natural language understanding.
The first research line of the project focused on learning suitable vector space models (also known as embeddings) from data. While there is an abundance of existing methods for this purpose, the models they produce are typically not interpretable. One important consequence is that existing models are difficult to use in unsupervised settings (e.g. interpreting query terms in an information retrieval context), and it is not always obvious how external background knowledge can best be incorporated into existing methods. To address these issues, we have developed a number of new methods for which there is a more direct correspondence between the geometric structure of the vector space model and the logical representation of the same domain. We have also developed two models for learning vector space embeddings that can take advantage of prior probabilities, making the resulting representations more robust, especially for entities about which relatively little information is available. Another line of work has looked at qualitative vector space representations, where we were able to show that higher-quality representations of concepts can be learned by first obtaining symbolic knowledge about their relationships.
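To illustrate the kind of correspondence between geometric structure and logical representation that this research line aimed for, the sketch below derives an interpretable "property direction" from entity embeddings, so that asserting a property of an entity amounts to a simple geometric test. The embeddings, entity names, labels and the centroid-based fitting are all placeholders, a minimal stand-in for the learning strategies actually developed in the project.

    # Illustrative sketch (not the project's code): deriving an interpretable
    # "property direction" from entity embeddings, so that the logical assertion
    # HasProperty(e) corresponds to a simple geometric test in the vector space.
    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 50

    # Placeholder embeddings; in practice these would come from a pretrained model.
    entities = {name: rng.normal(size=DIM) for name in
                ["lion", "tiger", "shark", "rabbit", "sheep", "deer"]}

    # A few labelled examples of the (hypothetical) property "dangerous".
    positives = ["lion", "tiger", "shark"]
    negatives = ["rabbit", "sheep", "deer"]

    # Fit a direction as the difference between the class centroids, with the
    # decision threshold halfway between them.
    pos_centroid = np.mean([entities[e] for e in positives], axis=0)
    neg_centroid = np.mean([entities[e] for e in negatives], axis=0)
    direction = pos_centroid - neg_centroid
    threshold = direction @ (pos_centroid + neg_centroid) / 2

    def plausibly_has_property(entity_vec):
        # Geometric counterpart of the logical fact: project onto the direction.
        return float(direction @ entity_vec) > threshold

    for name, vec in entities.items():
        print(name, plausibly_has_property(vec))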

In the second research line, we have exploited the learned vector space models for implementing different forms of commonsense reasoning. We have first focused on predicting missing factual knowledge, i.e. predicting plausible instances of concepts and relations. An important focus has been on making reliable predictions in cases where few training examples are provided, for instance by exploiting prior knowledge of how different concepts are related. Subsequently, we have developed a number of methods for identifying plausible missing rules in a given knowledge base, where we were able to confirm that vector representations indeed allow us to make meaningful predictions beyond what is possible with standard deduction. We have also studied such approaches at a theoretical level, within the context of description logics and the framework of existential rules.
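The minimal sketch below illustrates the general idea of predicting plausible missing facts from learned vectors, using a generic translation-style scoring model trained with a margin objective. The triples, dimensionality and hyperparameters are invented for illustration, and the model shown is only a common baseline, not one of the methods developed in the project.

    # Generic embedding-based knowledge base completion sketch (illustrative only).
    import numpy as np

    triples = [  # toy knowledge base with hypothetical facts
        ("cat", "is_a", "mammal"), ("dog", "is_a", "mammal"),
        ("mammal", "is_a", "animal"), ("sparrow", "is_a", "bird"),
        ("bird", "is_a", "animal"),
    ]
    entities = sorted({x for h, _, t in triples for x in (h, t)})
    relations = sorted({r for _, r, _ in triples})

    rng = np.random.default_rng(0)
    DIM = 20
    E = {e: rng.normal(scale=0.1, size=DIM) for e in entities}
    R = {r: rng.normal(scale=0.1, size=DIM) for r in relations}

    def score(h, r, t):
        # Lower is more plausible: squared distance between translated head and tail.
        d = E[h] + R[r] - E[t]
        return float(d @ d)

    # Margin-based training: push observed triples below randomly corrupted ones.
    LR, MARGIN = 0.05, 1.0
    for epoch in range(200):
        for h, r, t in triples:
            t_neg = entities[rng.integers(len(entities))]  # corrupted tail
            if score(h, r, t) + MARGIN > score(h, r, t_neg):
                grad_pos = 2 * (E[h] + R[r] - E[t])
                grad_neg = 2 * (E[h] + R[r] - E[t_neg])
                E[h] -= LR * (grad_pos - grad_neg)
                R[r] -= LR * (grad_pos - grad_neg)
                E[t] += LR * grad_pos
                E[t_neg] -= LR * grad_neg

    # Rank candidate tails for a missing fact such as ("cat", "is_a", ?).
    print(sorted(entities, key=lambda t: score("cat", "is_a", t))[:3])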

The third research line was concerned with combining our vector space representations with methods for relational learning, and with evaluating their potential in applications such as natural language processing. One important focus in this research line has been on learning vector representations of relations in an unsupervised way. Moreover, we have developed highly interpretable strategies for knowledge base completion. This has been achieved by relying on possibilistic logic, which makes it possible to reason about uncertain knowledge in a way that stays close to classical logic. Finally, we have also developed a highly flexible approach to relational reasoning which combines symbolic rules with neural network learning.
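As a rough illustration of how possibilistic logic stays close to classical reasoning, the sketch below encodes a small stratified knowledge base as weighted propositional formulas and draws conclusions from the strata above the inconsistency level (the standard "drowning" behaviour). It is a toy brute-force implementation over a handful of variables, not the reasoning machinery developed in the project.

    # Toy possibilistic inference: formulas are Boolean functions over a fixed
    # set of propositional variables; weights are necessity degrees in (0, 1].
    from itertools import product

    VARS = ["bird", "penguin", "flies"]

    def models(formulas):
        # Enumerate all assignments satisfying every formula in the list.
        for values in product([False, True], repeat=len(VARS)):
            w = dict(zip(VARS, values))
            if all(f(w) for f in formulas):
                yield w

    def consistent(formulas):
        return any(True for _ in models(formulas))

    def entails(formulas, query):
        return all(query(w) for w in models(formulas))

    def inconsistency_degree(kb):
        # Largest weight a such that the formulas weighted >= a are jointly inconsistent.
        inc = 0.0
        for _, a in kb:
            if a > inc and not consistent([f for f, b in kb if b >= a]):
                inc = a
        return inc

    def possibilistic_entails(kb, query):
        # A query follows if it is classically entailed by the strata that lie
        # strictly above the inconsistency level (weaker strata are "drowned").
        inc = inconsistency_degree(kb)
        return entails([f for f, a in kb if a > inc], query)

    # Uncertain knowledge: penguins are birds (certain), birds fly (defeasible),
    # penguins do not fly (strong), and we observe a penguin.
    kb = [
        (lambda w: (not w["penguin"]) or w["bird"], 1.0),
        (lambda w: (not w["bird"]) or w["flies"], 0.4),
        (lambda w: (not w["penguin"]) or not w["flies"], 0.8),
        (lambda w: w["penguin"], 1.0),
    ]
    print(possibilistic_entails(kb, lambda w: not w["flies"]))  # True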
While vector representations are now widely used, this project has employed them in an unconventional way. Most existing work learns vector representations in order to encode inputs to neural network models, whereas our aim was to use vector representations as an interpretable source of knowledge. This was accomplished by learning spaces in which semantic notions (such as types, categories and contexts) have a direct geometric counterpart. Moreover, existing approaches almost exclusively use vectors to represent entities and concepts. In contrast, we use vectors for objects, regions for properties and categories, and subspaces for types and contexts. This leads to a more natural, and provably more general, representation, which is easier to interpret and to link to human models of categorisation.
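The toy example below makes this region-based view concrete with one particularly simple choice of region (axis-aligned boxes): instance membership and subsumption become containment tests, and the consistency of two categories becomes an overlap test. The categories and coordinates are invented for illustration; in the project such regions are learned from data.

    # Objects as points, categories as (here, axis-aligned box) regions, so that
    # logical judgements correspond to geometric tests. Values are placeholders.
    import numpy as np

    class Box:
        def __init__(self, low, high):
            self.low = np.asarray(low, dtype=float)
            self.high = np.asarray(high, dtype=float)

        def contains_point(self, p):
            # Instance membership: Category(p)
            return bool(np.all(self.low <= p) and np.all(p <= self.high))

        def contains_box(self, other):
            # Subsumption: Other is a subcategory of Self
            return bool(np.all(self.low <= other.low) and np.all(other.high <= self.high))

        def overlaps(self, other):
            # Consistency: the two categories can share instances
            return bool(np.all(np.maximum(self.low, other.low)
                               <= np.minimum(self.high, other.high)))

    mammal = Box([0.0, 0.0], [4.0, 4.0])
    dog = Box([1.0, 1.0], [2.0, 2.0])
    fish = Box([6.0, 0.0], [8.0, 2.0])
    rex = np.array([1.5, 1.2])

    print(dog.contains_point(rex))   # Dog(rex)?
    print(mammal.contains_box(dog))  # is every Dog a Mammal?
    print(mammal.overlaps(fish))     # can something be both a Mammal and a Fish?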

In the context of ontologies, the strategies we have developed allow us to make plausible predictions in ways that are not possible with existing models. While there has been considerable work on learning rules from examples, our method is able to predict plausible rules even if no such examples are given.

Another important contribution concerns learning from relational data. While existing approaches assume that the relation between two entities can be predicted from the vector representations of these entities, we have shown that substantially better results are possible by directly learning vectors that capture such relationships. Finally, we have focused on statistical relational learning with interpretable rule-based models. This is a radical departure from existing methods, as our models are simply stratified classical theories, which are particularly easy to reason with. Compared to earlier approaches, our models are more interpretable, more efficient, and often more accurate.
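A minimal sketch of the contrast mentioned above: rather than predicting a relation from the two entity vectors, a dedicated relation vector is computed directly from the contexts in which an entity pair co-occurs (here, simply by averaging the embeddings of the intervening words). The word vectors and sentences are placeholders, and this averaging scheme is only a crude stand-in for the relation embedding models studied in the project.

    # Illustrative relation vectors obtained directly from co-occurrence contexts.
    import numpy as np

    rng = np.random.default_rng(1)
    DIM = 30
    word_vec = {}  # hypothetical pretrained word embeddings (random placeholders)

    def vec(w):
        if w not in word_vec:
            word_vec[w] = rng.normal(size=DIM)
        return word_vec[w]

    def relation_vector(sentences, head, tail):
        # Average the embeddings of the words appearing between the two entities.
        ctx = []
        for s in sentences:
            toks = s.split()
            if head in toks and tail in toks:
                i, j = sorted((toks.index(head), toks.index(tail)))
                ctx.extend(toks[i + 1:j])
        return np.mean([vec(w) for w in ctx], axis=0) if ctx else np.zeros(DIM)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    corpus = [
        "paris is the capital of france",
        "berlin is the capital of germany",
        "rome lies on the tiber",
    ]
    r_paris = relation_vector(corpus, "paris", "france")
    r_berlin = relation_vector(corpus, "berlin", "germany")
    r_rome = relation_vector(corpus, "rome", "tiber")

    # Pairs linked by the same relation end up with similar relation vectors.
    print(cosine(r_paris, r_berlin), cosine(r_paris, r_rome))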