Periodic Reporting for period 2 - AMORE (A distributional MOdel of Reference to Entities)
Reporting period: 2018-08-01 to 2020-01-31
The crucial tenet of AMORE is that conceptual and referential aspects of meaning interact, and that this matters both for understanding language and for modeling it computationally. By "conceptual", we mean generic aspects of meaning, such as the word "box" being associated with physical objects and physical objects having colors. By "referential", we mean situation-specific uses of linguistic expressions, such as "red box" being used for a particular brown box containing red objects. The project has two main goals: to advance 1) our understanding of how conceptual and referential aspects of meaning interact, and 2) the computational modeling of language, under the hypothesis that models with an explicit bias towards modeling referents will fare better. We use state-of-the-art Machine Learning techniques, as well as theoretical analyses of linguistic phenomena.
The main challenges we address are:
- Identifying which entities ("that big tree") are being talked about;
- Tracking the entities as they are mentioned again ("that one"), retrieving and adding new information about them as needed;
- Crucially, having the machine learn these two abilities directly from examples of how people use language.
We present the machine with different tasks that require using language to talk about the world, and it progressively learns to represent both the entities and the language we use to refer to them. Specifically, we test our computational model on referential tasks that require matching noun phrases (such as "the big tree") with entity representations extracted from text and images.
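As a toy illustration of this matching task, a noun-phrase representation can be scored against candidate entity representations, with the highest-scoring entity selected as the referent. The hand-made feature vectors and the `match_phrase` helper below are invented for this sketch; the project's actual models learn such representations with neural networks rather than using fixed features:

```python
# Toy sketch of the referential matching task: score a noun-phrase
# representation against each candidate entity representation and
# return the best match. All vectors here are illustrative stand-ins,
# not representations learned by the project's models.

def dot(u, v):
    """Dot-product similarity between two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def match_phrase(phrase_vec, candidates):
    """Return the candidate entity whose representation scores highest."""
    return max(candidates, key=lambda name: dot(phrase_vec, candidates[name]))

# Hypothetical 3-d features: [is_tree, is_big, is_red]
candidates = {
    "big tree":   [1.0, 1.0, 0.0],
    "small tree": [1.0, 0.0, 0.0],
    "red box":    [0.0, 0.0, 1.0],
}

# A vector standing in for the phrase "the big tree"
print(match_phrase([0.9, 0.8, 0.0], candidates))  # -> big tree
```

In the project's models the similarity function and the representations are learned end-to-end; only the overall matching scheme is as simple as shown here.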
This project is important for society because it furthers our understanding of language, our vehicle for thought, and because it advances human-computer interaction, helping technologies better support us in our everyday lives.
Newer-generation neural networks hold promise for accounting for some referential aspects, as they explicitly model the context in which utterances are spoken and interpreted. We are exploring their potential, but in our experiments so far they have fallen short of accounting for many aspects of context. The main technical innovation of AMORE is the incorporation of a memory module that stores information about entities. A version of this model won an international competition on matching mentions in dialogue (from the TV series Friends) with the corresponding characters. For instance, given the sentence "Ross, you love this woman", the system should identify "Ross" and "you" as referring to the character ROSS, and "this woman" as referring to the character RACHEL. Our system was simpler than its competitors, and we argued that it performed better because of its theoretically well-founded architecture. However, further analysis showed that neither our model nor another memory-based one was able to model character properties, such as gender. We conclude that while the bias towards modeling entities is useful, current models implementing this bias are still far from fully accounting for entities. Similarly, our analysis of current LSTM-based language models shows their limitations in accounting for referential aspects: it suggests that they still rely heavily on lexical regularities rather than situation-specific information, and that, while they profitably use morphosyntactic features, they do not capture a more global notion of entity.
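The idea of an entity memory module can be sketched in miniature: each entity gets a vector slot, a new mention is resolved to the most similar slot, and the slot is then updated with the mention's representation. Everything below (the 2-d vectors, the cosine lookup, the averaging update rule) is an invented simplification for illustration, not the neural architecture used in the project:

```python
# Minimal sketch of an entity memory: one vector slot per entity,
# cosine-similarity lookup for resolving mentions, and a simple
# interpolation update. Illustrative assumptions only.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

class EntityMemory:
    def __init__(self):
        self.slots = {}  # entity name -> vector slot

    def resolve(self, mention_vec):
        """Return the entity whose slot best matches the mention."""
        return max(self.slots, key=lambda e: cosine(self.slots[e], mention_vec))

    def update(self, entity, mention_vec, rate=0.5):
        """Blend the mention's vector into the entity's slot."""
        old = self.slots.get(entity, mention_vec)
        self.slots[entity] = [(1 - rate) * o + rate * m
                              for o, m in zip(old, mention_vec)]

# Toy run with hand-made 2-d "embeddings" for two characters
mem = EntityMemory()
mem.update("ROSS", [1.0, 0.0])
mem.update("RACHEL", [0.0, 1.0])
print(mem.resolve([0.9, 0.1]))  # a mention close to ROSS's slot -> ROSS
```

In the actual model both the mention representations and the read/write operations are learned; the sketch only conveys the tracking mechanism that the memory bias provides.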
To further explore the interaction between lexico-conceptual knowledge and contextual knowledge (in this case, object properties and visual context), we are creating a visual dataset annotated with referential information. We have started by collecting object names, with 36 annotations per image for a collection of 25,000 images extracted from a previously created dataset. In this line of work on grounding language in visual context, we have also studied how situation-centric multimodal object representations can be learnt by grounding semantic roles in the corresponding image regions, and how multi-task learning allows us to better learn quantifiers describing specific images.
Finally, we are modeling other aspects of utterance context that affect reference. In particular, interpreting referring expressions requires an understanding not just of entities, but also of which subset of entities is actually relevant to the discourse goals (often termed 'Questions Under Discussion', or 'QUDs'). Besides contributing theoretical work on this topic, we have worked on neural network models for predicting discourse goals and, to test and analyze such models, we are currently collecting a corpus of human annotations that make implicit discourse goals explicit.
Overall, the project is yielding:
- a better understanding of how, and to what extent, current neural network-based models account for contextual aspects of meaning (in particular, referential aspects);
- a validation of the hypothesis that memory-augmented neural networks can better account for language as referring to entities in the real world;
- an understanding of the factors that intervene in people's choice of names for objects in visual scenes, and how that impacts computational models of naming;
- a better understanding of how the context of use influences reference, and how reference in turn feeds back to the organization of the lexicon.