
Eyes of Things

Periodic Reporting for period 2 - EoT (Eyes of Things)

Reporting period: 2016-07-01 to 2018-06-30

Embedded systems control and monitor a great deal of our reality. While some ‘classic’ features remain intrinsically necessary, such as low power consumption, rugged operating ranges, fast response and low cost, these systems have evolved in recent years to emphasize connectivity. In fact, embedded systems currently overlap substantially with the paradigm of the Internet of Things, whereby a myriad of sensing/computing devices are attached to everyday objects, each able to send and receive data and to act as a unique node on the Internet. Still, the major breakthrough will arguably come when such devices are endowed with some level of autonomous ‘intelligence’. Intelligent computing aims to solve problems for which no efficient exact algorithm can exist or for which we cannot conceive one. Central to such intelligence is computer vision (CV), i.e. extracting meaning from images and video. While not every application needs CV, visual information is the richest source of information about the real world: people, places and things. The amount of information we infer from images alone is impressive; it is estimated that the eyes are responsible for 80% of the information our brain receives. Furthermore, given the amount of data generated by our visual system, we rely on subconscious processing to apply selective attention and extract relevant meaning from the data, allowing us to quickly assess situations and act. Such decision making happens “instinctively”, with our conscious mind barely informed. To build intelligent systems in the future we must be able to replicate similar capabilities, combining sensors with advanced visual processing.

In this context, the challenge that motivates this proposal can be summarized as follows:

• Future embedded systems will have more intelligence and cognitive functionality. Vision is paramount to such intelligent capacity.
• Despite advances in connectivity, cloud processing of images captured ‘at the edge’ is not sustainable. The sheer amount of visual data generated cannot be transferred to the cloud: bandwidth is insufficient and cloud servers cannot cope with the volume. This means that processing has to be brought to the edge (i.e. to the device itself). Moreover, transmitting the images themselves is inefficient for an embedded device: it has been shown that, compared to local computation, transmitting data out of the device (for off-line processing) is several orders of magnitude more expensive in terms of energy consumption.
• Unlike other sensors, vision presents enormous challenges that have not yet been tackled, such as the trade-off between power consumption and processing power, size and cost. This currently inhibits further research and innovation.

Our aim in the “Eyes of Things” (EoT) project has been to develop an optimized open vision platform that can work both independently and embedded into all types of artefacts. The platform is optimized for high performance, low power consumption, size, cost and programmability. This not only means more hours of continuous operation; it also enables novel applications and services beyond what current vision systems can do, which are either personal/mobile or “always-on” but not both at the same time. The EoT platform targets OEMs and low-cost products. The design and development were followed by the use of the platform in four demonstrators spanning surveillance, wearable and embedded configurations.

The action concluded successfully by providing the optimized vision platform and demonstrating it in at least 4 demonstrators, some of which have a clear potential for productization. A startup company, staffed by key Consortium employees, has been set up to channel further uses of the platform.
The project officially started on January 1st, 2015. During the first half of the project, a major effort was devoted to building the platform, mainly in work packages 2 (hardware) and particularly 3 (software). Software for the platform included middleware and many libraries for computer vision, audio, robotics, messaging, scripting and image streaming. Deep learning inference was not in the original plan but was also added. Work continued during the second period to refine the hardware and the interfaces with different cameras. The four demonstrators were also developed during the second period.
The final result is an optimized open platform and accompanying technologies for developing vision-based products and applications. Such a platform was not available at the beginning of the project. The reference design and expertise created within the Consortium are currently of high value given the growing interest in AI and related technologies. An Early Adopter Programme, set up towards the end of the project, has shown that the number of potential applications for the platform is larger than anticipated. Through a startup company and further applications to competitive calls we expect to advance the exploitation of the platform and related technologies.