Periodic Reporting for period 4 - AMORE (A distributional MOdel of Reference to Entities)
Reporting period: 2021-08-01 to 2023-01-31
Our goal is to better understand how the interaction between conceptual and referential aspects of meaning shapes language, and how cognitive and communicative constraints intervene in this interaction. We also aim to improve how computational systems deal with reference.
AMORE is important for society because it helps us understand how humans connect language to the world, and because it advances human-computer interaction, helping technologies better support us in everyday life.
On the AI front, we have provided a deeper understanding of computational models of language and have linked them to theoretical results on meaning. Despite some promising initial results, we did not manage to design a new model that significantly improves on the referential front. Note that during the life of the project, a gulf opened between academia and industry, in that only a few large companies have the compute to create state-of-the-art models. In response, we shifted our focus to model analysis (testing models' linguistic capabilities). We have shown that current computational models excel at conceptual aspects of meaning (so this aspect is "solved" in AI), but that they are severely lacking in referential capabilities. This is still the case with newer-generation models such as ChatGPT, with its well-known tendency to fabricate content.
On the linguistics front, we have shown that the interaction between conceptual and referential aspects of meaning shapes the world's languages in the following ways:
- How languages are structured, in particular how they allocate meanings to words in their lexicons: the same word can be used for two meanings if those meanings are conceptually related but not too easily confused in actual referential use.
- How people use language to refer to the real world: for instance, when objects are typical for a given name, like a duck swimming in a lake followed by a couple of ducklings, people converge on their naming choices much more than when objects are atypical.
- How language structure and the real world interact in the extension of word meanings: the kinds of conceptual relationships that link a word to a new referent recur across a surprising variety of phenomena related to lexical creativity. In particular, the errors that children make when learning language (for instance, saying "apple" when they want a ball) closely resemble attested historical meaning extensions in the world's languages.
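The naming-convergence finding above can be made concrete with a simple agreement measure: the proportion of speakers who produce the most frequent name for an object. This is a minimal sketch with made-up naming data (the name lists and counts are illustrative, not from the project's experiments):

```python
from collections import Counter

def naming_agreement(names):
    """Proportion of speakers who chose the most frequent name
    for an object (1.0 = full convergence on a single name)."""
    counts = Counter(names)
    return counts.most_common(1)[0][1] / len(names)

# Hypothetical naming data for a typical vs. an atypical exemplar
typical = ["duck"] * 9 + ["bird"]                                # duck on a lake
atypical = ["duck"] * 4 + ["bird"] * 3 + ["animal"] * 2 + ["goose"]

print(naming_agreement(typical))   # 0.9
print(naming_agreement(atypical))  # 0.4
```

Higher agreement for the typical exemplar mirrors the pattern described above: typicality pushes speakers toward the same naming choice.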
Our research is highly interdisciplinary. We have published our results in top venues in Linguistics, Artificial Intelligence, and Cognitive Science: the main venues in computational linguistics (ACL, NAACL-HLT, EMNLP, COLING, EACL, LREC), top journals in linguistics (Annual Review of Linguistics, Glossa, Linguistics, Semantics and Pragmatics), and top journals in Cognitive Science (Cognitive Science, Cognition, Computational Brain & Behavior, Journal of Cognitive Neuroscience, Journal of Experimental Psychology: Learning, Memory and Cognition).
The project's main results are:
- a better understanding of how, and to what extent, current neural network-based models account for contextual aspects of meaning (in particular, referential aspects);
- a validation of the hypothesis that memory-augmented neural networks can better account for language as referring to entities in the real world;
- an understanding of the factors that intervene in people's choice of names for objects in visual scenes, and how that impacts computational models of naming;
- a better understanding of how the context of use influences reference, and how reference in turn feeds back to the organization of the lexicon.