
Computing Answers to Complex Questions in Broad Domains

Periodic Reporting for period 3 - DELPHI (Computing Answers to Complex Questions in Broad Domains)

Reporting period: 2022-04-01 to 2023-09-30

The goal of the DELPHI project is to develop methods for answering complex questions, where the information sources for answering those questions can be diverse, such as paragraphs of text, semi-structured tables, knowledge bases, images, etc. Making progress on this problem can revolutionize the way we interact with computers. At present, people expect search engines to answer relatively simple questions, without reasoning over multiple sources of information. Our project aims to transform the user experience, so that users, researchers, and scientists can treat computers as "research assistants" that retrieve information far better than humans can, perform calculations, and integrate information in ways that aid the development of new insights. This can lead to ground-breaking applications in science and education, allowing researchers to more easily form and test hypotheses.

Moreover, the DELPHI project is centered on some of the most burning questions in natural language understanding. First, what is the right representation for performing reasoning and computation in language? How can we unify traditional symbolic representations with modern distributed representations to benefit from their respective advantages? Second, the DELPHI project advocates a compositional view of language, where the meaning of the whole is computed from its parts. Last, the project will further our understanding of topics related to generalization beyond the training distribution.
The DELPHI project has made substantial progress towards answering complex questions over multiple information sources:
* We have defined a symbolic meaning representation for complex questions, which decomposes them into simpler questions (see the illustrative sketch after this list). This representation has been shown to be useful for question answering and interpretability.
* We have defined representations for questions that require implicit reasoning, that is, questions where the required reasoning process is not stated explicitly in the question.
* We have developed parsers that can map natural language questions to these structured representations. These parsers are of high quality and have been used in subsequent papers as a component in more complex systems.
* We have developed methods for compositional generalization, that is, models that can generalize like humans to structures that were unseen at training time.
* We have developed models for reasoning over multiple modalities, including text, tables, and images.
* We have contributed numerous datasets and benchmarks for the community focusing on question decomposition, visual question answering, multi-modal question answering and more.
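To make the idea of question decomposition concrete, below is a minimal illustrative sketch (in Python) of how a complex question might be represented as an ordered chain of simpler steps, where later steps refer to the answers of earlier ones. The class names and the example decomposition are hypothetical and do not reproduce the project's actual representation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    """One simple sub-question; '#i' refers to the answer of step i."""
    text: str
    refers_to: List[int] = field(default_factory=list)

@dataclass
class Decomposition:
    """A complex question represented as an ordered sequence of simpler steps."""
    question: str
    steps: List[Step]

# Hypothetical decomposition of a multi-hop question into three steps.
decomp = Decomposition(
    question="Which European country has the largest population?",
    steps=[
        Step("return countries in Europe"),
        Step("return the population of each of #1", refers_to=[1]),
        Step("return #1 where #2 is the largest", refers_to=[1, 2]),
    ],
)

for i, step in enumerate(decomp.steps, start=1):
    print(f"step {i}: {step.text}")
```

Each step can be answered against a single information source (text, a table, an image), and the final answer is obtained by composing the intermediate results.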
Recent years have seen rapid progress in pre-trained language models, which has had a dramatic effect on the field. For the remainder of the project, we plan to continue developing methods for robust question answering, with an emphasis on compositional generalization. Moreover, given the rise of pre-trained language models, we will consider few-shot and zero-shot settings, where very little training data is available, as well as retrieval settings, where questions need to be answered given a very large corpus of text.
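As a rough illustration of the retrieval setting mentioned above, the following sketch shows a retrieve-then-read pipeline in which passages are first ranked against the question and a reader then produces the answer. The lexical-overlap scorer and the placeholder reader are assumptions made for the sake of the example, not the project's methods; in practice a learned retriever and a pre-trained language model would fill these roles.

```python
from typing import List

def relevance(question: str, passage: str) -> float:
    """Toy lexical-overlap score, standing in for a learned retriever."""
    q = set(question.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

def retrieve(question: str, corpus: List[str], k: int = 3) -> List[str]:
    """Return the k passages most relevant to the question."""
    return sorted(corpus, key=lambda p: relevance(question, p), reverse=True)[:k]

def answer(question: str, corpus: List[str]) -> str:
    """Retrieve-then-read: here the 'reader' simply returns the top passage;
    a real system would have a language model read the retrieved passages
    (possibly prompted with only a few examples) and generate the answer."""
    passages = retrieve(question, corpus)
    return passages[0] if passages else ""

corpus = [
    "Paris is the capital of France.",
    "Berlin is the capital of Germany.",
    "The Seine flows through Paris.",
]
print(answer("What is the capital of France?", corpus))
```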