Periodic Reporting for period 2 - DREAM (Distributed dynamic REpresentations for diAlogue Management)
Reporting period: 2021-03-01 to 2022-08-31
Artificial conversational agents, such as chatbots and personal assistants, are increasingly present in our everyday lives. Many websites, for instance, now host customer-service chatbots, and dialogue agents are also used in education and health coaching. Interacting with these systems already feels far more natural, and is less frustrating for the user, than it did ten or even five years ago; nevertheless, there is substantial room for improvement. In DREAM, we aim to (1) deepen our understanding of which features of human conversations are key and should be incorporated into artificial dialogue systems, and (2) develop machine models, implemented as artificial neural networks, that yield dialogue agents incorporating these features -- for example, the ability to adapt to the user and to integrate visual and linguistic information.
Among the systems we have developed so far are:
-- A computer system that automatically describes images by mimicking where human speakers look when they describe a scene;
-- A computer system that adapts to the user by taking into account how entities have been previously referred to within a dialogue (by keeping track of, for example, whether the user refers to a certain dog as “the white fluffy dog” or “the angry Samoyed”).
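To make the idea of adapting to the user's referring expressions concrete, here is a minimal, purely illustrative sketch (not the DREAM system itself): a tracker that records how each entity has been referred to during a dialogue, so that a generation component could reuse the user's own most recent wording. All names (`ReferenceTracker`, the entity identifiers) are hypothetical.

```python
# Illustrative sketch, not the project's actual model: keep a per-entity
# history of referring expressions observed in the dialogue.
from collections import defaultdict


class ReferenceTracker:
    def __init__(self):
        # entity id -> list of referring expressions, in order of mention
        self.mentions = defaultdict(list)

    def observe(self, entity_id, expression):
        """Record that `expression` was used to refer to `entity_id`."""
        self.mentions[entity_id].append(expression)

    def preferred_expression(self, entity_id):
        """Return the most recently used expression for the entity,
        or None if the entity has not been mentioned."""
        history = self.mentions.get(entity_id)
        return history[-1] if history else None


tracker = ReferenceTracker()
tracker.observe("dog_1", "the white fluffy dog")
tracker.observe("dog_1", "the angry Samoyed")
print(tracker.preferred_expression("dog_1"))  # -> the angry Samoyed
```

A real system would of course learn such preferences within a neural dialogue model rather than store them in a dictionary; the sketch only illustrates the kind of information being tracked.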
We have also carried out computational modelling experiments to better understand:
-- How well our computer systems capture changes in the meaning of words over time;
-- How well our computer systems integrate information from vision and language when representing concepts.
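As a rough illustration of how change in word meaning over time can be quantified (a common approach in the field, not necessarily the exact method used in our experiments), one can compare a word's embedding from two time periods with cosine similarity, assuming the two embedding spaces have been aligned. The vectors below are toy examples, not trained embeddings.

```python
# Illustrative sketch with toy vectors: a larger cosine distance between
# a word's embeddings from two periods suggests greater semantic change.
import numpy as np


def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


# Hypothetical embeddings for "mouse" in 1980s vs 2020s text.
vec_1980s = np.array([0.9, 0.1, 0.0])  # closer to an "animal" direction
vec_2020s = np.array([0.2, 0.1, 0.9])  # closer to a "device" direction

# Semantic-change score: 1 minus cosine similarity (0 = no change).
change_score = 1.0 - cosine_similarity(vec_1980s, vec_2020s)
print(round(change_score, 2))
```

In this toy example the score is high, reflecting the large shift between the two hypothetical vectors.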