
An Artificial Intelligence Enhancing Video Quality Locally to Limit Internet Traffic Tied to Video Streaming


New tech to stream high-quality video without sapping your data

With demand for video streaming services growing fast, internet bandwidth is struggling to keep up. ENHANCEplayer allows broadcasters to stream smaller files, with users’ own devices upscaling the video to a higher resolution.

Digital Economy

Video streaming over the internet, either as video on demand for existing content or live streaming for events, is a boom industry. Yet high-quality streaming is not universally available, with some users lacking the broadband connections needed. The cost of mobile internet is also prohibitive for many: one study found that cost stops almost 50 % of people worldwide with 4G network access from using their smartphones for the internet. The EU-supported ENHANCEplayer project set out to reduce the infrastructure load by sending lower-resolution, lower-bit-rate videos over the internet, with the receiving video player upscaling them back to higher quality. “Whereas traditional approaches increase the encoding efficiency of video, risking incompatibilities with older hardware, we started with the quality of the viewer experience, not technical metrics,” explains Ely Loew, the project coordinator.
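To give a feel for the savings this approach targets, the short Python sketch below compares the data volume of streaming a low-resolution file against a full-resolution one. The bit rates are illustrative assumptions chosen for the example, not figures from the project.

```python
# Back-of-the-envelope bandwidth comparison for streaming a lower-resolution
# file and upscaling on the device versus streaming full resolution directly.
# The bit rates below are illustrative assumptions, not project figures.
STREAM_KBPS = {
    "1080p": 5000,  # assumed bit rate of a 1080p stream, in kbit/s
    "720p": 2800,
    "540p": 1800,
    "360p": 1000,
}

def gigabytes_per_hour(kbps: float) -> float:
    """Convert a stream bit rate in kbit/s to data volume in GB per hour."""
    return kbps * 1000 * 3600 / 8 / 1e9

sent, target = "360p", "1080p"
saving = 1 - STREAM_KBPS[sent] / STREAM_KBPS[target]
print(f"{sent}: {gigabytes_per_hour(STREAM_KBPS[sent]):.2f} GB/h, "
      f"{target}: {gigabytes_per_hour(STREAM_KBPS[target]):.2f} GB/h, "
      f"saving ~{saving:.0%} of bandwidth")
```

Under these assumed bit rates, shipping the 360p file and upscaling locally would use roughly a fifth of the data of a 1080p stream; the real saving depends on the codec and bit-rate ladder a broadcaster actually uses.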

Neural network enhancement

Project consortium members Artomatix and THEO Technologies shared their technological infrastructures to develop a prototype solution. ENHANCEplayer’s starting point was that the minimum video resolution broadcasters need for mobile devices is 540p. Artomatix hypothesised that its super-resolution technology could be optimised for a variety of devices and prove fast enough to upscale video frame resolution in real time, at 25-30 frames per second. THEO Technologies redesigned its universal video player, ‘THEOplayer’, to include Artomatix’s upscaling modules. The resulting ENHANCEplayer prototype works by training a neural network with two versions of a series of images – a source resolution, 360p for example, and a target resolution, say 720p. The model then adds pixels to the 360p version so that it matches the 720p image in quality. To further test the system, the project created a custom proof-of-concept model for videos sent by broadcaster partners – VRT in Belgium, NPO in the Netherlands and RTP (website in Portuguese) in Portugal. The first success was streaming a 360p video with its resolution increased to 540p on an iPhone 11. “At that moment, all doubts about the technology disappeared. Our enthusiasm was bolstered with broadcast tests and surveyed viewers who confirmed the upscaled video quality,” says Loew. This breakthrough was made possible because the iPhone 11 has new neural network chips that can run machine learning, and these chips are becoming more prominent in new mobile devices. Although the newest Android phones also have neural network chips, their architecture slowed down the processing of individual video frames, preventing the model from running in real time. “So, currently, this technology is dependent on hardware,” observes Loew.
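As a rough illustration of the training approach described above – pairing low-resolution frames with their high-resolution counterparts so the network learns to add the missing pixels – the sketch below uses PyTorch and a small pixel-shuffle (ESPCN-style) network. It is a minimal, assumed example, not the project’s actual model or training code; the layer sizes, learning rate and random stand-in frames are all placeholders.

```python
# Minimal sketch (assuming PyTorch) of a super-resolution network trained on
# pairs of low-resolution and high-resolution frames, in the spirit of the
# approach described in the article. Not the project's actual model.
import torch
import torch.nn as nn

class UpscaleNet(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        # A few light convolutions followed by a pixel-shuffle upsampling
        # layer, kept small so a mobile neural chip could plausibly run it.
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a larger image
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

model = UpscaleNet(scale=2)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# One dummy training step on random tensors standing in for paired frames:
# `low_res` plays the role of a 360p input, `high_res` the 720p target.
low_res = torch.rand(4, 3, 360, 640)
high_res = torch.rand(4, 3, 720, 1280)

prediction = model(low_res)        # upscaled to 720 x 1280
loss = loss_fn(prediction, high_res)
loss.backward()
optimiser.step()
print(f"training loss: {loss.item():.4f}")
```

In a real deployment the trained weights would be exported to an on-device runtime and applied to each decoded frame fast enough to keep up with the 25-30 frames per second the article mentions, which is exactly where the hardware dependence Loew describes comes in.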

Expanded opportunities

According to one survey, the global over-the-top services market is projected to grow from USD 81.6 billion in 2019 to USD 156.9 billion by 2024, with a projected fourfold increase in bandwidth consumption between 2017 and 2022. The live video streaming portion is projected to grow by a factor of 15. By reducing the bandwidth needed, ENHANCEplayer minimises strain on communication infrastructure while reducing energy consumption. It also increases access for those denied it for technical or cost reasons, such as rural communities and developing countries. Additionally, it opens up opportunities for content generated by non-professionals using basic equipment. Along with reviewing options for Android hardware capabilities, the team are now working on web browser viewing. “We estimate that the hardware and browser infrastructure needed to handle the neural network models for resolution upscaling in real time is 1 to 2 years away,” adds Loew. The team are also looking at a range of future directions, including integrating upscaling into a video codec or into a video display device, such as a TV, or enabling partial upscaling for older devices.

Keywords

ENHANCEplayer, streaming, video, bit rate, internet, iPhone, mobile, bandwidth, developing countries
