
Immersive narrative experiences for all

Even though immersive content is still in its early stages, the technology powering it is moving fast. The ImAc project is an ambitious effort to ensure no one is left behind, regardless of disability or specific accessibility requirements.

ImAc (Immersive Accessibility) specifically focuses on 360° experiences. Despite its relatively young age, virtual reality is already leaving people behind. Accessibility is generally put on the back burner, to the point where potential users with visual or hearing impairments currently have no way to enjoy the highly immersive experiences head-mounted displays (HMDs) can provide. Kick-started in 2017, ImAc is the organised response to this growing issue. The project consortium has been exploring how accessibility services can be integrated with immersive media, and has developed a new-generation VR360 player that stakeholders can integrate into their own projects. Sergi Fernandez, coordinator of ImAc on behalf of the Spanish internet research centre i2CAT, discusses the project’s achievements so far.

Immersive technology is advancing at a dizzying pace. Do you feel like accessibility is being forgotten in the process? How so?

Accessibility is always forgotten in the process, but this is specifically why ImAc is here. An important part of our work is to put accessibility on the radar of standardisation groups, make our requirements visible to them, and back our requests with proofs of concept that we have already tested intensively with users. Over 300 users have already been involved, from viewers with accessibility needs to professional accessibility editors working for broadcasting entities.

So what does ImAc’s plan to catch up consist of exactly?

We are trying to figure out how to make VR360 content accessible. We have a basic VR360 player developed by i2CAT that we use to run fast prototype cycles. We are implementing specific strategies which we test with small groups of users from targeted communities, who tell us what they like and what they don’t like. These cover, for instance, the use of a radar or arrows to signal where a speaker is positioned, the use of different positional audio tracks to audio-describe the action that surrounds you, subtitles either attached to the speaker or always visible, etc. We communicate the results through scientific articles and, as I said earlier, through different standardisation groups.
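The arrow cue described above boils down to comparing the viewer's gaze direction with the speaker's position on the 360° horizontal circle. The following is a minimal sketch of that idea, not code from the ImAc player; the function name, the degree-based yaw convention, and the default field of view are all illustrative assumptions.

```python
def arrow_direction(viewer_yaw_deg: float, speaker_yaw_deg: float,
                    fov_deg: float = 90.0) -> str:
    """Hint where a speaker sits relative to the viewer's gaze.

    Both angles are yaw in degrees on the 360-degree horizontal circle.
    If the speaker falls inside the field of view, no arrow is needed;
    otherwise, point the shorter way around the circle.
    """
    # Signed angular difference, normalised to the range (-180, 180]
    diff = (speaker_yaw_deg - viewer_yaw_deg + 180.0) % 360.0 - 180.0
    if abs(diff) <= fov_deg / 2:
        return "in view"
    return "turn right" if diff > 0 else "turn left"
```

A radar widget would use the same signed difference, drawing the speaker as a dot at that angle around the viewer's own position.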

What would you say are the most innovative aspects of the project?

We have created the first-ever fully accessible VR360 player. Besides the features I already mentioned, we also have an accessible menu that is fully configurable and adaptable to users’ specific needs and preferences (size/position of accessibility data, type of access service or combination of these services, etc.). We even incorporated voice control for people facing issues with the mouse or the HMD pointer. We made sure to screen all existing VR360 players, and when it comes to accessibility, we definitely have the most advanced one. But our objective is not to compete with companies providing VR360 players: we simply want them to incorporate our work, which is why we have made all of our source code openly available.

Let’s take the example of a head-mounted, virtual reality display. How do you make it accessible without negatively impacting user experiences?

Well, that depends a lot on the type of user. If you take subtitles and implement them for any type of user with no specific auditory or vision requirements, the fact that these subtitles monopolise part of the field of view can be seen as a drawback. However, users eventually get used to it and realise that, in the end, it actually helps them understand what is being said. The audio domain is a bit different. In this case, content editing must be much more fine-grained. You cannot run two audio sources at the same time if they are not very well aligned. Sometimes – especially in those cases where there is no visual feedback and audio is the only channel a person has available to understand the content and benefit from an immersive experience – we face a situation where the audio content is not self-descriptive, with no existing gaps in which to introduce audio description tracks. This is more or less the same problem you would have with traditional video, but slightly worse, as here the audio is supposed to come from all around you.
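The gap-placement problem mentioned above is easy to state concretely: given the intervals where dialogue is present, find the silences long enough to carry an audio-description cue. The sketch below illustrates that scan; it is not ImAc tooling, and the function name, the interval representation, and the two-second minimum are illustrative assumptions.

```python
def find_ad_gaps(speech_segments, total_duration, min_gap=2.0):
    """Find silent intervals long enough to hold an audio-description cue.

    speech_segments: sorted list of (start, end) times in seconds.
    Returns a list of (start, end) gaps lasting at least min_gap seconds.
    """
    gaps, cursor = [], 0.0
    for start, end in speech_segments:
        # A gap opens between the end of the previous speech and this start
        if start - cursor >= min_gap:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    # Trailing silence after the last speech segment
    if total_duration - cursor >= min_gap:
        gaps.append((cursor, total_duration))
    return gaps
```

When no gap of sufficient length exists, the content is exactly the "not self-descriptive" case described above, and editorial intervention (re-timing or extended audio description) is needed instead.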

Can you tell us more about the services you developed and demonstrated?

In addition to the player itself, we have created the necessary editing tools (audio-description, subtitling, sign language) and background mechanisms such as transcoding, metadata packaging and 3D audio editing. Using these, broadcasters can easily incorporate a full accessibility service into their internal workflow.

What has been the feedback from stakeholders so far?

The feedback is good, especially that coming from users with special needs. They love what we are doing and the fact that we rely on them to make relevant decisions. On the other hand, VR360 is still a very immature storytelling technique. Traditional (framed) content has been around for more than a century, whereas 360° video has been there for less than a decade. There is still a lack of content. This, added to the fact that HMDs are still far from being usable, makes us think that our findings will be useful in the medium- more than the short-term. That being said, we are ready to support any broadcaster or content producer that may want to offer full accessibility services in their immersive content.

What do you still need to achieve before the end of the project?

We now want to go open, and we will do that before February 2020. This means we will move from controlled user tests to open tests. Thanks to two of our partners – RBB and TV3 – we will launch a public campaign. Our enhanced content will be accessible on their websites to a larger group of people. This is an important moment for us to gather a lot of quantitative data and test everything we’ve achieved so far in a real production environment.