CORDIS - EU research results

Multimodal Extreme Scale Data Analytics for Smart Cities Environments

Project description

A disruptive Edge-to-Fog-to-Cloud ubiquitous computing framework

Handling, processing and delivering data from millions of devices around the world is a complex feat that hinges on distributed computing. Edge computing brings computation and data storage closer to the data sources, while fog computing extends cloud-style analytic services to the edge of the network, complementing centralised cloud computing. The EU-funded MARVEL project will develop an Edge-to-Fog-to-Cloud ubiquitous computing framework to enable multimodal perception and intelligence for audio-visual scene recognition, event detection and situational awareness in a smart city environment. It will collect, analyse and data-mine multimodal audio-visual streaming data to improve the quality of life and services to citizens within the smart city paradigm, without violating ethical and privacy limits, in an AI-responsible manner.

Objective

The “Smart City” paradigm aims to support new forms of monitoring and managing resources and to provide situational awareness for decision-making, with the objective of serving the citizen while meeting the needs of present and future generations in economic, social and environmental terms. The city can be considered a complex and dynamic system involving interconnected spatial, social, economic and physical processes that are subject to temporal change and continually modified by human action. Big Data, fog and edge computing technologies therefore have significant potential in various scenarios, tailored to each city’s individual tactical strategy. One critical challenge, however, is to encapsulate the complexity of a city and support accurate, cross-scale and timely predictions based on ubiquitous spatio-temporal data of high volume, high velocity and high variety.
To address this challenge, MARVEL delivers a disruptive Edge-to-Fog-to-Cloud ubiquitous computing framework that enables multi-modal perception and intelligence for audio-visual scene recognition and event detection in a smart city environment. MARVEL aims to collect, analyse and data-mine multi-modal audio-visual data streams of a smart city and help decision-makers improve the quality of life and services to citizens, without violating ethical and privacy limits, in an AI-responsible manner. This is achieved via: (i) fusing large-scale distributed multi-modal audio-visual data in real time; (ii) achieving fast time-to-insights; (iii) supporting automated decision-making at all levels of the E2F2C stack; and (iv) delivering a personalised federated learning approach, where joint multi-modal representations and models are co-designed and improved continuously through privacy-aware sharing of personalised fog and edge models of all interested parties.
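The federated learning approach described above can be illustrated with a minimal sketch. MARVEL's actual method is not specified on this page, so the following shows generic federated averaging (FedAvg-style aggregation) of per-edge model parameters, in which devices share only model weights rather than raw audio-visual data; all function and variable names are hypothetical.

```python
def federated_average(edge_models, sample_counts):
    """Aggregate per-edge parameter vectors into a shared fog/cloud model.

    Each edge device contributes its locally trained parameters, weighted
    by the number of samples it trained on. Raw data never leaves the edge,
    which is the privacy-preserving property the text refers to.
    """
    total = sum(sample_counts)
    dim = len(edge_models[0])
    aggregated = [0.0] * dim
    for params, n in zip(edge_models, sample_counts):
        weight = n / total
        for i, p in enumerate(params):
            aggregated[i] += weight * p
    return aggregated

# Three hypothetical edge devices with 2-parameter models.
edge_models = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
sample_counts = [100, 300, 600]
global_model = federated_average(edge_models, sample_counts)
```

In a personalised variant, each edge would then fine-tune `global_model` on its local data rather than adopting it wholesale, which is one common reading of "personalised fog and edge models".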

Call for proposal

H2020-ICT-2018-20


Sub call

H2020-ICT-2020-1

Coordinator

IDRYMA TECHNOLOGIAS KAI EREVNAS
Net EU contribution
€ 506 250,00
Address
N PLASTIRA STR 100
70013 Irakleio
Greece


Region
Nisia Aigaiou, Kriti (Aegean Islands, Crete), Irakleio
Activity type
Research Organisations
Total cost
€ 506 250,00

Participants (17)