Eyes of Things

Periodic Reporting for period 1 - EoT (Eyes of Things)

Reporting period: 2015-01-01 to 2016-06-30

Embedded systems control and monitor a great deal of our reality. While some ‘classic’ features are intrinsically necessary, such as low power consumption, rugged operating ranges, fast response and low cost, these systems have evolved in recent years to emphasize connectivity functions. In fact, embedded systems currently present a large overlap with the paradigm of the Internet of Things, whereby a myriad of sensing/computing devices are attached to everyday objects, each able to send and receive data and to act as a unique node in the Internet. Still, the major breakthrough will arguably come when such devices are endowed with some level of autonomous ‘intelligence’. Intelligent computing aims to solve problems for which no efficient exact algorithm can exist or for which we cannot conceive an exact algorithm.

Central to such intelligence is computer vision (CV), i.e. extracting meaning from images and video. While not everything needs CV, visual information is the richest source of information about the real world: people, places, and things. The amount of information we infer from images alone is impressive. It is estimated that eyes are responsible for 80% of the information that our brain receives. Furthermore, given the amount of data generated by our visual processing system, we rely on subconscious information processing to apply selective attention, extract relevant meaning from the data, and quickly assess and act in situations. Such decision making is made “instinctively”, with our conscious mind barely informed. To build intelligent systems in the future, we must be able to replicate similar capabilities with sensors together with advanced visual processing capabilities.

In this context, the challenge that motivates this proposal can be summarized as follows:

• Future embedded systems will have more intelligence and cognitive functionality. Vision is paramount to such intelligent capacity.
• Despite advances in connectivity, cloud processing of images captured at the edge is not sustainable. The sheer amount of visual data generated cannot be transferred to the cloud: bandwidth is not sufficient, and cloud servers cannot cope with the volume. This means that processing has to be brought to the edge (i.e. to the device itself). Additionally, it is not efficient for the embedded device to transmit the images themselves. It has been shown that, compared to local computation, transmitting data out of the device (for off-line processing) is several orders of magnitude more expensive in terms of energy consumption.
• Unlike other sensors, vision presents enormous challenges that have not yet been tackled, such as the ratio of power consumption to processing power, as well as size and cost. This currently inhibits further research and innovation.
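The edge-processing argument above can be illustrated with a toy sketch: instead of streaming raw frames, the device runs detection locally and transmits only compact results. All names below (the `detect` stand-in, the metadata fields) are illustrative and are not the EoT API.

```python
import json

# Illustrative only: one uncompressed 8-bit greyscale VGA frame versus the
# compact metadata an on-device detector might extract from it.
FRAME_BYTES = 640 * 480

def detect(frame_bytes):
    """Stand-in for an on-device detector: returns compact results only."""
    return [{"label": "person", "box": [120, 80, 64, 128], "score": 0.91}]

detections = detect(b"\x00" * FRAME_BYTES)
payload = json.dumps(detections).encode()

print(f"raw frame: {FRAME_BYTES} bytes")
print(f"metadata:  {len(payload)} bytes")
print(f"reduction: ~{FRAME_BYTES // len(payload)}x")
```

Even before compression, sending inferred metadata rather than pixels cuts the transmitted volume by several orders of magnitude, which is the energy argument made above.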

Our aim in project “Eyes of Things” is to develop an optimized open vision platform that can work independently and also embedded into all types of artefacts. The platform is optimized to maximize inferred information per milliwatt and to adapt the quality of inferred results to each particular application. This will not only mean more hours of continuous operation, it will also allow the creation of novel applications and services that go beyond what current vision systems can do, which are either personal/mobile or “always-on”, but not both at the same time. The EoT platform targets OEMs, and the estimated unit cost of $15 makes it suitable for mass consumer products. The design and development will be followed by the use of the platform in 4 demonstrators spanning surveillance, wearable and embedded configurations.
The project officially started on January 1st, 2015. This report has been written halfway into the project (Month 18). During this time, a major effort has been devoted to building the platform, mainly in work packages 2 (hardware) and 3 (software). As a result of this work, a prototype of the EoT platform is currently available, with the final form-factor board design finalised and the first boards expected by late August or early September 2016. Software for the platform is nearly finished, including middleware and many libraries for computer vision, audio, robotics, messaging, scripting and image streaming. We continue to work on software, mainly to extend the scripting capabilities. The attached poster summarises the work performed so far and the current status of the project.
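The messaging model that such middleware typically exposes to applications can be sketched as a minimal topic-based publish/subscribe loop. This is purely illustrative; the `Broker` class, topic name, and message fields below are hypothetical and do not reflect the actual EoT middleware API.

```python
from collections import defaultdict

class Broker:
    """Minimal in-process topic broker (illustrative, not the EoT middleware)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a callback to be invoked for each message on the topic.
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every callback subscribed to the topic.
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("camera/detections", received.append)
broker.publish("camera/detections", {"label": "person", "score": 0.91})
print(received)
```

In a real deployment the broker would run over a lightweight network protocol so that other devices, or a cloud endpoint, can subscribe to the compact results the device publishes.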
Despite being at an early stage, the expected final result is an optimized open platform for developing vision-based products and applications. A dissemination effort is being made to gain traction, and a number of companies have already shown interest in the platform and the related technologies used. A plan for the dissemination and exploitation of results is being drafted.
First EoT prototype
Poster summarising the status of the project