The Community Research and Development Information Service - CORDIS
Information & Communication Technologies

Language Technologies



Please note that the project factsheets will no longer be updated. All information relevant to the project can be found on the CORDIS factsheet, which is updated on a regular basis with public deliverables, etc.

Dicta-Sign - Sign Language Recognition, Generation and Modelling with application in Deaf Communication

dictasign logo

At a glance

ICT-2007.2.2 - Cognitive Systems, Interaction, Robotics

231135 - STREP

The development of Web 2.0 technologies has made the internet a place where people constantly interact with one another, by posting information (e.g. blogs, discussion forums), modifying and enhancing other people's contributions (e.g. Wikipedia), and sharing information (e.g. Facebook, social news sites). Unfortunately, none of these technologies is friendly to sign language users, because they all require the use of written language. Can't sign language videos fulfil the same role as written text in these new technologies? In a word, no. Videos have two problems: First, they are not anonymous – anyone can recognize from the video who made a contribution, which holds back many people who would otherwise be eager to contribute. Second, people cannot easily edit and add to a video that someone else has produced, so a Wikipedia-like web site in sign language is not possible.

Dicta-Sign's goal was to develop the technologies needed to make Web 2.0 interactions in sign language possible: users sign to a webcam using a dictation style. The computer recognizes the signed phrases, converts them into an internal representation of sign language, and then has an animated avatar sign them back to the users. Content on the Web is then contributed and disseminated via the signing avatars. Moreover, the internal representation also enables sign language-to-sign language translation services, analogous to the Google translator. In this way, Dicta-Sign aimed to solve both problems that sign language videos have. The avatar is anonymous, and its uniform signing style guarantees that contributions can be easily altered and expanded upon by any sign language user.
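The round trip described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the project's actual design: the `Sign`, `recognize`, `translate` and `render_avatar` names are all hypothetical, and Dicta-Sign's real internal representation was far richer than the flat gloss sequence used here (it combined lexicon and grammar models feeding both recognition and synthesis).

```python
from dataclasses import dataclass

# Hypothetical internal representation: a sequence of language-tagged glosses.
@dataclass(frozen=True)
class Sign:
    gloss: str        # lexical identifier of the sign
    language: str     # e.g. "BSL", "DGS", "GSL", "LSF"

def recognize(frame_labels: list, language: str) -> list:
    """Stub recognizer: the real engine applies computer vision to
    webcam video; here each pre-labelled frame becomes one Sign."""
    return [Sign(gloss=label.upper(), language=language) for label in frame_labels]

def translate(signs: list, target_language: str, lexicon: dict) -> list:
    """Stub SL-to-SL translation via a cross-lingual gloss lexicon
    (the real system also exploited syntactic interrelations)."""
    return [Sign(gloss=lexicon.get(s.gloss, s.gloss), language=target_language)
            for s in signs]

def render_avatar(signs: list) -> str:
    """Stub synthesis: the real system drives an animated signing avatar;
    here we just emit the gloss string the avatar would perform."""
    return " ".join(s.gloss for s in signs)

# Dictated input -> internal representation -> anonymous avatar output.
utterance = recognize(["hello", "world"], language="BSL")
print(render_avatar(utterance))

# Hypothetical BSL->DGS lexicon entry, to show the translation path.
lexicon = {"HELLO": "HALLO"}
print(render_avatar(translate(utterance, "DGS", lexicon)))
```

The point of routing everything through the shared internal representation is exactly what the paragraph above describes: the avatar output is anonymous and uniform, so any user can edit or extend a contribution regardless of who originally signed it.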

The Challenge

The project addressed the need for communication among Deaf individuals, as well as natural-language communication between Deaf users and various human-computer interaction environments. It aimed to develop sign language recognition and synthesis engines at the level of detail necessary for recognising and generating authentic signing, with the ultimate goal of enabling Deaf users to fully exploit the possibilities of Web 2.0, e.g. to make, edit and review avatar-based sign language contributions online.

One of the main objectives of Dicta-Sign was to develop an integrated framework that allows contributions via webcams in four different European sign languages: Greek, British, German, and French. Other objectives of the project included the development of the world's first parallel multi-lingual corpus of annotated sign language data; advanced sign language annotation tools that integrate recognition, translation, and animation; as well as large cross-lingual sign language dictionaries.

The Goal

The project carried out research and development for recognition and synthesis engines for sign languages (SLs) at a level of detail necessary for recognising and generating authentic signing.

Scientific Innovation

Dicta-Sign was based on research novelties in sign recognition and generation, exploiting significant linguistic knowledge and coded SL resources. To that end, annotated parallel video corpora for four sign languages (British (BSL), German (DGS), Greek (GSL) and French (LSF)) were exploited, linked to common grammar and lexicon modules created to feed both the recognition and synthesis engines. These resources were also applied, by exploiting multilingual syntactic and lexical interrelations, to domain-specific SL-to-SL machine translation. Combining linguistic knowledge with computer vision for image/video analysis in continuous sign recognition, and with computer graphics for realistic avatar animation, required the interoperation of several scientific domains.

The result

Overall, the project advanced the state of the art in computer vision and sign language recognition, sign language generation, sign language linguistic modelling and sign language translation. It produced a convincing demonstrator in the form of a Sign-Wiki for sign language input/output, which showcases a number of technological advances in sign recognition, linguistic processing and sign synthesis. It also produced two other prototypes, a sign lookup tool and a sign translation tool, which demonstrate the integration of sign recognition, computer vision, linguistic modelling and sign synthesis. Large user studies were conducted to evaluate the Sign-Wiki.
In addition, the Dicta-Sign project produced the largest parallel corpus to date across four sign languages (GSL, DGS, LSF and BSL). It has been annotated for GSL, DGS and LSF; for BSL, a 40-minute segment carries detailed annotations and the full corpus data carries thematic tags. The corpus is publicly available via the Consortium's website for use by other researchers, Deaf users, educators and learners of sign language. Finally, the project also developed a linguistic annotation tool infrastructure that, for the first time, includes automatic detection modules to assist sign language annotation.

Research outcomes were integrated in three laboratory prototypes leading to a practical project demonstrator:

  • a Search-by-Example tool
  • an SL-to-SL translator
  • a Sign-Wiki

Latest news

"Der Spiegel" published an article about Dicta-Sign: "Prof. Haartolles Wortgestöber" , 34/2011.

The latest achievements of the project are available in the newsletters: May 2011, December 2011.

Co-ordinator

Contact Person:

Name: Eleni Efthimiou
Tel: +30 210 687 5356
Fax: +30 210 687 5485
E-mail: Eleni Efthimiou
Organisation: Athena RC, Institute for Language and Speech Processing

Participants


This page is maintained by: Susan Fraser (email removed)