
Multi-modal Context Modelling for Machine Translation

Project description

A new era in machine translation

In the field of Natural Language Processing (NLP), the goal of automatically translating human language has long been pursued. However, current approaches, such as Statistical Machine Translation (SMT), often overlook vital contextual cues present in human translations. This leads to translations that lack relevant information or convey incorrect meanings, hampering reading comprehension and rendering them useless in many cases. In this context, the ERC-funded MultiMT project is taking an innovative approach by harnessing global multi-modal information. It will develop methods to incorporate contextual cues such as images, related documents, and metadata into translation models. Twitter posts and product reviews will serve as test datasets. This interdisciplinary initiative combines expertise from NLP, Computer Vision, and Machine Learning.

Objective

Automatically translating human language has been a long sought-after goal in the field of Natural Language Processing (NLP). Machine Translation (MT) can significantly lower communication barriers, with enormous potential for positive social and economic impact. The dominant paradigm is Statistical Machine Translation (SMT), which learns to translate from human-translated examples.

Human translators have access to a number of contextual cues beyond the actual segment to translate when performing translation, for example images associated with the text and related documents. SMT systems, however, completely disregard any form of non-textual context and make little or no reference to wider surrounding textual content. This results in translations that miss relevant information or convey incorrect meaning. Such issues drastically affect reading comprehension and may make translations useless. This is especially critical for user-generated content such as social media posts, which are often short and contain non-standard language, but it applies to a wide range of text types.

The novel and ambitious idea in this proposal is to devise methods and algorithms to exploit global multi-modal information for context modelling in SMT. This will require a significantly disruptive approach, with new ways to acquire multilingual multi-modal representations and new machine learning and inference algorithms that can process rich context models. The focus will be on three context types: global textual content from the document and related texts; visual cues from images; and metadata, including topic, date, author, and source. As test beds, two challenging user-generated datasets will be used: Twitter posts and product reviews.
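To make the idea of multi-modal context modelling concrete, the sketch below shows one simple way such context could enter a translation pipeline: features from the three context types (document text, image, metadata) are fused into a single context vector that helps rerank candidate translations. This is a minimal illustration only, not the project's actual method; all names (`MultiModalContext`, `rerank`, the linear scoring) are hypothetical assumptions for this example.

```python
from dataclasses import dataclass

# Hypothetical sketch: these names and the linear model do not come from
# the MultiMT project itself; they only illustrate the general idea of
# fusing multi-modal context for translation reranking.
@dataclass
class MultiModalContext:
    text_features: list[float]   # e.g. topic distribution of the surrounding document
    image_features: list[float]  # e.g. pooled visual features of an attached image
    meta_features: list[float]   # e.g. encoded topic/date/author/source metadata

    def as_vector(self) -> list[float]:
        # Simplest possible fusion: concatenate all modalities.
        return self.text_features + self.image_features + self.meta_features

def rerank(candidates: list[tuple[str, list[float]]],
           context: MultiModalContext,
           weights: list[float]) -> str:
    """Return the candidate translation whose own features, combined with
    the shared multi-modal context vector, score highest under a linear model."""
    ctx = context.as_vector()

    def score(feats: list[float]) -> float:
        full = feats + ctx
        return sum(w * f for w, f in zip(weights, full))

    return max(candidates, key=lambda c: score(c[1]))[0]
```

For instance, an ambiguous source word could yield two candidate translations, and the context vector (say, derived from an accompanying image) would tip the score toward the contextually appropriate one. Real systems would of course learn the fusion and weights rather than hand-code them.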

This highly interdisciplinary research proposal draws on expertise from NLP, Computer Vision, and Machine Learning, arguing that appropriate modelling of multi-modal context is key to achieving a new breakthrough in SMT, regardless of language pair and text type.

Host institution

IMPERIAL COLLEGE OF SCIENCE TECHNOLOGY AND MEDICINE
Net EU contribution
€ 1 010 513,67
Address
SOUTH KENSINGTON CAMPUS EXHIBITION ROAD
SW7 2AZ LONDON
United Kingdom


Region
London Inner London — West Westminster
Activity type
Higher or Secondary Education Establishments
Total cost
€ 1 010 513,67

Beneficiaries (2)