Please note that the project factsheets will no longer be updated. All information relevant to the project can be found on the CORDIS factsheet, which is updated regularly with public deliverables, etc.
Dicta-Sign - Sign Language Recognition, Generation and Modelling with application in Deaf Communication
At a glance
ICT-2007.2.2 - Cognitive Systems, Interaction, Robotics
231135 - STREP
The development of Web 2.0 technologies has made the internet a place where people constantly interact with one another: posting information (e.g. blogs, discussion forums), modifying and enhancing other people's contributions (e.g. Wikipedia), and sharing information (e.g. Facebook, social news sites). Unfortunately, none of these technologies is friendly to sign language users, because they all require the use of written language. Can't sign language videos fulfil the same role as written text in these new technologies? In a word, no. Videos have two problems: first, they are not anonymous, since anyone can recognise from the video who made a contribution, which deters many people who would otherwise be eager to contribute. Second, people cannot easily edit and extend a video that someone else has produced, so a Wikipedia-like web site in sign language is not possible.
Dicta-Sign's goal was to develop the technologies needed to make Web 2.0 interactions in sign language possible: users sign to a webcam using a dictation style. The computer recognises the signed phrases, converts them into an internal representation of sign language, and then has an animated avatar sign them back to the user. Content on the Web is then contributed and disseminated via the signing avatars. Moreover, the internal representation also enables sign language-to-sign language translation services, analogous to Google Translate. In this way, Dicta-Sign aimed to solve both problems of sign language videos: the avatar is anonymous, and its uniform signing style guarantees that contributions can easily be altered and expanded upon by any sign language user.
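The dictation-style loop described above (recognise signed phrases, convert them to an internal representation, translate if needed, synthesise back via an avatar) can be sketched roughly as follows. This is a minimal illustrative sketch, not the project's actual software: all class and function names, and the gloss-based internal representation, are assumptions made for the example.

```python
# Hypothetical sketch of the Dicta-Sign interaction loop.
# All names are illustrative, not the project's actual API.

from dataclasses import dataclass


@dataclass
class SignPhrase:
    """Language-specific internal representation of one signed phrase."""
    language: str          # e.g. "BSL", "DGS", "GSL", "LSF"
    glosses: list[str]     # lexical-level transcription of the signs


def recognize(video_frames: list, language: str) -> SignPhrase:
    """Continuous sign recognition: webcam video -> internal representation."""
    # Placeholder: a real system combines computer vision with linguistic models.
    return SignPhrase(language=language, glosses=["HELLO", "WORLD"])


def translate(phrase: SignPhrase, target_language: str) -> SignPhrase:
    """SL-to-SL translation via the shared internal representation."""
    # Placeholder: a real system maps lexicon and grammar between languages.
    return SignPhrase(language=target_language, glosses=list(phrase.glosses))


def synthesize(phrase: SignPhrase) -> str:
    """Avatar synthesis: internal representation -> animation description."""
    return f"[{phrase.language} avatar signs: {' '.join(phrase.glosses)}]"


# Dictation-style loop: user signs, system recognises, avatar signs it back.
captured = recognize(video_frames=[], language="BSL")
print(synthesize(captured))                       # sign back to the user
print(synthesize(translate(captured, "DGS")))     # or translate first
```

Because both contribution and playback go through the avatar, the video anonymity and editability problems described above disappear: the internal representation, not the original footage, is what gets stored and shared.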
The project addressed the need for communication among Deaf individuals, and for natural-language interaction between Deaf users and various Human-Computer Interaction environments. It aimed to develop sign language recognition and synthesis engines at the level of detail necessary for recognising and generating authentic signing, with the ultimate goal of enabling Deaf users to fully exploit the possibilities of Web 2.0, e.g. to create, edit and review avatar-based sign language contributions online.
To this end, the project carried out research and development on recognition and synthesis engines for sign languages (SLs).
Dicta-Sign was based on research novelties in sign recognition and generation that exploit significant linguistic knowledge and coded SL resources. To that end, annotated parallel video corpora for four sign languages (British (BSL), German (DGS), Greek (GSL) and French (LSF)) were exploited, linked to common grammar and lexicon modules created to feed both the recognition and the synthesis engines. By exploiting multilingual syntactic and lexical interrelations, these resources were also applied to domain-specific SL-to-SL machine translation. Combining linguistic knowledge with computer vision for image/video analysis in continuous sign recognition, and with computer graphics for realistic avatar animation, required the interoperation of several scientific domains.
Research outcomes were integrated in three laboratory prototypes, leading to a practical project demonstrator; these included:
- a Search-by-Example tool
- an SL-to-SL translator
"Der Spiegel" published an article about Dicta-Sign: "Prof. Haartolles Wortgestöber" , 34/2011.
Contact: Eleni Efthimiou
This page is maintained by: Susan Fraser (email removed)