CORDIS - EU research results

Artificial Intelligence for the Deaf



AI solutions for the deaf and hard of hearing

Emerging technologies such as artificial intelligence, machine learning and augmented reality are rewriting the rules on how deaf and hard-of-hearing people communicate.

Health

Nearly 10 % of the EU population is deaf or hard of hearing. That means one out of every 10 citizens struggles with everyday tasks such as having a conversation, attending class or watching television. But this could soon change, thanks in part to the work being done by aiD. The EU-funded project is leveraging the power of emerging technologies such as artificial intelligence (AI), machine learning and augmented reality to significantly enhance communication for the deaf and hard-of-hearing community. “Using emerging technologies, we created a number of innovative solutions to help deaf and hearing people communicate with one another,” says Sotirios Chatzis, an associate professor at the Cyprus University of Technology.

One of those solutions is an on-demand sign language generation application. The tool uses generative AI to produce sign language videos from text prompts in an end-to-end fashion. “If a deaf student is in class, they can simply open the app on their mobile phone and their personal avatar will automatically sign the day’s lecture,” explains Chatzis. The app can also facilitate conversations between deaf and hearing people, providing a signed translation of the conversation for the deaf individual and a text translation of the sign language for the hearing individual. “This groundbreaking solution could prove very helpful when a deaf individual is travelling, for example, providing signed translations of relevant announcements being made at an airport or train station,” adds Chatzis. The generative AI application has been submitted for consideration to the prestigious European Conference on Computer Vision.

A sign language video transcription solution

Another key outcome of the project, which received support from the Marie Skłodowska-Curie Actions programme, was a sign language video transcription service. Much like closed captioning, the solution adds an avatar sign language translator to the corner of the video screen, simultaneously signing the show or movie. “What makes this development so important is its unprecedented level of accuracy, with a memory footprint of up to 70 % less than existing solutions,” remarks Chatzis. A memory footprint is the amount of main memory that a program uses while running. aiD also contributed a novel data set for Greek sign language that will play a crucial role in the development of more accurate and efficient sign language translation tools.

Using technology to foster a more inclusive society

These technologies, among others, were all tested during an extensive pilot programme, which included running aiD’s solutions with a news service, during video conferencing, with an automated relay service for emergencies, and as an interactive digital tutor. The pilots were key in demonstrating both the technologies used and the practical applications that the developed solutions can provide to the deaf and hard-of-hearing community. “Our pilots show that aiD’s impact extends well beyond its technologies and helps foster a more inclusive and equitable society where a hearing impairment is no longer a barrier to communication, education or employment opportunities,” concludes Chatzis. Although the project is now finished, researchers continue working to advance the aiD technologies towards commercialisation. They are also developing a business model that would make the solutions available via a subscription fee.

Keywords

aiD, artificial intelligence, AI, generative AI, machine learning, augmented reality, deaf, hard of hearing, hearing-impaired, emerging technologies, sign language
