This site has been archived on
The Community Research and Development Information Service - CORDIS
Information & Communication Technologies

News Corner

ICT Proposers' Day 2014

9-10 October, Florence Italy

Are you interested in Big Data research? Please see the programme and topic information at: ICT Proposers' Day 2014 in Florence!

The activities supported within LEIT under the topic Big Data research (ICT16-2015) contribute to the Big Data challenge by addressing fundamental research problems related to the scalability and responsiveness of analytics capabilities.


Data Public-Private Partnership

The Data Public-Private Partnership aims at strengthening the data value chain in order to allow Europe to play a relevant role in Big Data on the global market.


European Data Forum 2014 (EDF), 19-20 March 2014 - Athens, Greece

All EDF2014 talks online: Watch the videos

See also : EDF presentations

 The programme:

EDF website:
LinkedIn Group:
Twitter hashtag: #EDF2014


Information & Networking Days

Unit G3 organised information and networking days around the Work Programme 2014-2015. Presentations from the plenary and networking sessions can be found on our Digital Agenda website pages


Commissioner Kroes' speech on Big Data for Europe at the ICT2013 event, 7/11/2013, in Vilnius.

#ICT2013eu, Europe's biggest digital technology event.


Calls FP6 Project Portfolio

  • Much of the ICT research and development in the 7th Framework Programme was built on, and extended, the work carried out in the previous programme (FP6, 2002-2006).
  • FP6 Project portfolio (61 projects clustered in seven themes along the FP7 research lines).

Technologies for Information Management



FP7 Projects

This page provides you with access to details on projects in the area of Information and Communication Technologies (ICT) of the Seventh Framework Programme (FP7).

Research topics cover a wide range of ICT fields (see the image); with one click on a project acronym in the cluster image you will find the respective Project Synopsis.

The Project Synopses can also be found in the list of FP7 Projects (in alphabetical order).





FP7 Projects (in alphabetical order): the complete set of 94 projects from Call 1, Call 3, Call 5, Call SME-DCL, Call 8, Call SME-DCA, and Call 11 is listed here.

FP7 Call SME-DCL projects are included in the list, but not in the scheme above; go directly to: BIOPOOL, CODE, DOPA, EUCLID, EUROFIT, GAPFILLER, FUSEPOOL, plan4business, SIMPLEFLEET, smeSpire, SOPCAWIND, ViSTA-TV.

FP7 Call 8 projects are included in the list, but not in the scheme above; go directly to: AXLE, BIG, BIOASQ, BIOBANKCLOUD, GEOKNOW, IMPART, INSIGHT, IQUMULUS, JUNIPER, LDBC, LINKEDUP, MEDIAMIXER, NEWSREADER, OPTIQUE, SEMAGROW, VISCERAL.

FP7 Call SME-DCA projects are included in the list, but not in the scheme above; go directly to: ALIADA, COMSODE, DaPaaS, EUCases, FUSEPOOL P3, LEO, LinDA, OpenCube, OpenDataMonitor, PublicaMundi.

FP7 Call 11 projects are included in the list, but not in the scheme above; go directly to: ASAP, AMIDST, BYTE, FERARI, LeanBigData, ONTIC, QualiMaster, RETHINK BIG, SPEEDD, VELaSSCo.

With one click on the project logo you reach the project website.

ACTIVE - Enabling the Knowledge Powered Enterprise
Effective means for supporting the productivity of knowledge workers
ACTIVE technology aims at increasing the productivity of knowledge workers through tools that leverage hidden factual and procedural knowledge. The project will advance research and integrate technologies to realise the vision of an integrated and contextualised knowledge workspace, which will result in the ACTIVE Knowledge Workspace. This platform is open and scalable and well integrated into existing desktop applications and intranet portal solutions of an enterprise. ACTIVE will generate sustainable impact by deploying the tools and applications in three industry sectors: consulting, telecommunication and engineering.
ADVANCE - Advanced predictive-analysis-based decision-support engine for logistics
Enabling strategic planning coupled with instant decision making to provide vision in a blizzard of data
ADVANCE will develop an innovative predictive-analysis-based decision support platform for novel competitive strategies in logistics operations. It will provide a dual perspective on transport requirements and decision making, dependent on the latest snapshot information and the best higher-level intelligence. Our software framework will be available as open source: the low initial investment will encourage smaller enterprises, too, to exploit the solution. Previously unidentified needs will also receive better coverage, as development can proceed in close collaboration with the users.
ALIADA - Automatic publication under LInked DAta Paradigm of library DAta
ALIADA will automate the publication of library data in the Linked Open Data cloud
ALIADA will automate the publication in the Linked Open Data cloud of open data hosted by different library or collection management software. ALIADA will support the whole life cycle of reuse of multilingual open data from public bodies, initially the ones in the consortium, providing a usable, open-source tool that automates the selection, publication and linking of datasets in the Linked Data Cloud by the ALIADA users.
AMIDST - Analysis of MassIve Data STreams
AMIDST provides a generic framework for analysis of extremely large volumes of streaming data

The AMIDST research project will provide a generic framework for analysis of extremely large volumes of streaming data, thereby creating and increasing the value of existing and new data resources, as well as providing a means for more timely and efficient decision making.

ANSWER - Artistic-Notation-based Software Engineering for Film, Animation and Computer Games
A new approach to the creative process of film and game production
ANSWER assists the creative artist in recording a distilled, clear, accurate description of the media she wishes to create. The project will produce a notation system (DirectorNotation) for describing the creation of multimedia content. This will offer a bridge between digital media production and animation for game design. Creative artists in several domains will be able to express creativity in an artistically significant language, acquiring the facility that musical notes have given to music and dance notation to choreography. The ANSWER tools will optimise the human, artistic, conceptualisation, understanding and creative mechanisms that lead to the production of media, and also offer a process for recording this conceptualisation in a machine-processable representation.
APIDIS - Autonomous Production of Images based on Distributed and Intelligent Sensing
A framework to automate the collection and distribution of digital content
APIDIS will investigate the automatic extraction of intelligent content from networks of multi-modal sensors. It will exploit this knowledge to automate the production of video content for controlled scenarios (sports events or surveillance). The project will also consider personalised and potentially interactive content summarization mechanisms to address heterogeneous user needs and access conditions. APIDIS will develop cost-effective, fully automated production of content dedicated to small audiences, as well as automated summarization for video surveillance.
ASAP develops an open-source execution framework for scalable data analytics
ASAP develops an open-source execution framework for scalable data analytics. ASAP assumes that no single execution model is suitable for all types of tasks and no single data model (and store) is suitable for all types of data. The solution is to combine existing and future stores, model the behavior of each, and schedule each task using the most appropriate one for each case.
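The per-task engine choice described above can be sketched as a small cost-based dispatcher. This is a hypothetical illustration under assumed engine names and a made-up cost model, not ASAP's actual scheduler:

```python
# Hypothetical sketch: pick the cheapest execution engine for each task
# type based on a modelled cost per (task-type, engine) pair. The engine
# names and costs are illustrative assumptions, not ASAP's components.

# Modelled cost of running each task type on each engine (lower is better).
COST_MODEL = {
    ("relational", "sql_engine"): 1.0,
    ("relational", "mapreduce"): 4.0,
    ("graph", "sql_engine"): 6.0,
    ("graph", "graph_engine"): 1.5,
    ("stream", "stream_engine"): 1.0,
    ("stream", "mapreduce"): 8.0,
}

def schedule(task_type: str) -> str:
    """Return the engine with the lowest modelled cost for this task type."""
    candidates = {
        engine: cost
        for (ttype, engine), cost in COST_MODEL.items()
        if ttype == task_type
    }
    if not candidates:
        raise ValueError(f"no engine modelled for task type {task_type!r}")
    return min(candidates, key=candidates.get)

print(schedule("graph"))   # graph_engine
print(schedule("stream"))  # stream_engine
```

A real system would learn the cost model from observed run times rather than hard-coding it, but the dispatch step stays the same: evaluate each candidate engine against the task profile and pick the minimum.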




AXLE - Advanced Analytics for EXtremely Large European Databases

AXLE focuses on automatic scaling of complex analytics, while addressing the full requirements of real data sets.


BIOASQ - A challenge on large-scale biomedical semantic indexing and question answering

BioASQ will push for a solution to the information access problem of biomedical experts by setting up a challenge on biomedical semantic indexing and question answering (QA).



BIOBANKCLOUD - Scalable, Secure Storage of Biobank Data

The storage infrastructure for human biological material is generally known as a biobank. One of the main tenets of biobanking is the digitization of our genomic information for its archival and analysis.


BIG - Big Data Public Private Forum

Big Data is an emerging field where innovative technology offers alternatives to resolve the inherent problems that appear when working with huge amounts of data, providing new ways to reuse and extract value from information.

BIOPOOL - Services associated to digitalised contents of tissues in Biobanks across Europe
BIOPOOL's objective is to build a system able to link pools of digital data managed by biobanks (comprising histological digital images of biological material and the data associated with these images) and to develop added-value services for the exploitation of this information (data sharing, image visualization, text- and image-based search queries, and advanced functions such as region-of-interest extraction or automated pathology information extraction for certain cancer types). BIOPOOL aims to establish an intelligent biobank network, demonstrating the potential of this data pool for medical research, diagnosis and educational activities.
BYTE - Big data roadmap and cross-disciplinary community for addressing societal Externalities
BYTE will assist European science and industry in capturing the positive externalities and diminishing the negative externalities associated with big data to gain a greater share of the big data market by 2020
BYTE will contribute to the formulation of a strategy that defines the research efforts and policy necessary for the realisation of the big data economy through a consideration of the positive and negative societal externalities associated with big data. Positive and negative externalities refer to the effects of a decision by stakeholders such as industry, scientists, policy-makers and other decision-makers that have an impact on a third party (especially members of the public). It will thus aid European stakeholders in making better technology adoption decisions and in supporting actions that amplify positive externalities (e.g., new products and services, efficiencies, economic competitiveness, etc.) associated with big data, while diminishing negative externalities (e.g., privacy infringements, legal barriers, etc.).
CALBC - Collaborative Annotation of a Large Biomedical Corpus
Novel approach of consensus annotations amongst systems that have been built under different assumptions
CALBC's goal is to demonstrate that it is possible to bootstrap automatic (i.e. inexpensive) annotations of satisfactory quality by having a large number of annotation programs all annotate the same corpus in several iterations and improving their accuracy over time by learning from each other's strengths and mistakes. In order to do this CALBC will prepare an appropriately representative corpus of biomedical literature and construct a web based system that would allow any developer of biomedical text-mining applications to submit their annotation of the corpus, determine how this submission diverges from all the others and exploit this information to improve its performance.
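The consensus idea (agreement among many independent annotation systems used as a quality signal) can be sketched as a simple majority vote over entity labels. This is a hypothetical illustration, not CALBC's actual harmonisation method, and the sample annotations are invented:

```python
from collections import Counter

# Hypothetical sketch: derive a consensus ("silver standard") annotation
# set by majority vote over the labels that several annotation systems
# assigned to the same text span. Not CALBC's actual algorithm.

def consensus(annotations, min_votes=2):
    """annotations: list of {span: label} dicts, one per system.
    Keep a (span, label) pair if at least min_votes systems agree on it."""
    votes = Counter()
    for system in annotations:
        for span, label in system.items():
            votes[(span, label)] += 1
    return {span: label for (span, label), n in votes.items() if n >= min_votes}

# Three systems annotate the same two spans; two of three agree on "mouse".
systems = [
    {"IL-2": "protein", "mouse": "species"},
    {"IL-2": "protein", "mouse": "cell_line"},
    {"IL-2": "protein", "mouse": "species"},
]
print(consensus(systems))  # {'IL-2': 'protein', 'mouse': 'species'}
```

Each system could then compare its own output against this consensus to find spans where it diverges from the majority, which is the "learning from each other's strengths and mistakes" loop the project describes.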
CASAM - Computer-Aided Semantic Annotation of Multimedia
Human - Machine synergy
CASAM will facilitate the synergy of human and machine intelligence to significantly speed up the task of human-produced semantic annotation of multimedia content. The project will deal with the task of aggregating human and machine knowledge with the ultimate target of minimizing human involvement in the annotation procedure. Intelligent human-computer interaction is of central importance, and the concept of effort-optimized knowledge aggregation will be introduced. CASAM will provide a significant boost to the long term goal of achieving really large-scale and precise annotation of multimedia documents with minimum human effort.
COMSODE - Components Supporting the Open Data Exploitation
Components Supporting the Open Data Exploitation
COMSODE is an SME-driven RTD project aimed at progressing capabilities in the Open Data re-use field. The concept is an answer to barriers still present in this young area: data published by various open data catalogues is poorly integrated, and quality assessment and cleansing are seldom addressed.
CODE - Commercially Empowered Linked Open Data Ecosystems in Research
CODE’s vision is to establish the foundation for a web-based, commercially oriented ecosystem for Linked Open Data
CODE focuses on research papers as a source for mining facts and their integration into LOD repositories and light-weight ontologies. Hence, it will leverage the wealth of knowledge contained in research publications on a semantic, machine-readable level by creating the Linked Science Data cloud.


Cubist - Combining and Uniting Business Intelligence and Semantic Technologies
The constantly growing amounts of data and an emerging trend of incorporating unstructured data into analytics are bringing new challenges to Business Intelligence (BI).
CUBIST is an EU-funded research project which aims to lift BI to a new level of precise and user-friendly data analytics. CUBIST follows a best-of-breed approach that combines essential features of Semantic Technologies, Business Intelligence and Visual Analytics. It aims to
• support federation of data from unstructured and structured sources,
• persist the data in an Information Warehouse; an approach based on a BI enabled triple store,
• provide novel kinds of Visual Analytics based on meaningful diagrammatic representations.
DaPaaS - Data Publishing through the Cloud: A Data- and Platform-as-a-Service Approach for Efficient Data Publication and Consumption
Combining Data-as-a-Service (DaaS) and Platform-as-a-Service (PaaS) for open data
While in recent years a large number of datasets has been published as open (and often linked) data, applications utilizing these open and distributed data have been rather few. The DaPaaS project directly addresses these challenges by developing a software infrastructure combining Data-as-a-Service (DaaS) and Platform-as-a-Service (PaaS) for open data, with the aim of optimizing publication of Open Data and development of data applications.
dicode - Mastering Data-Intensive Collaboration and Decision Making
Exploiting a cloud infrastructure to augment collaboration and decision making in data-intensive and cognitively-complex settings
The goal of the Dicode project is to facilitate and augment collaboration and decision making in data-intensive and cognitively-complex settings. To do so, it will exploit and build on the most prominent high-performance computing paradigms and large data processing technologies to meaningfully search, analyze and aggregate data existing in diverse, extremely large, and rapidly evolving sources. The foreseen solution can be viewed as an innovative workbench incorporating and orchestrating a set of interoperable services that reduce the data-intensiveness and complexity overload at critical decision points to a manageable level, thus permitting stakeholders to be more productive and concentrate on creative activities.
- Integrated Userware for the Intelligent, Intuitive and Trust-Enhancing Management of the User's Personal Information Sphere in Digital and Social Environments
Integrating data in the personal and business spheres through a single, user-controlled point of access
The use and disclosure of personal information in private and business life is a major trend in the information society. Advantages such as enhanced social contacts and personalised services and products come with notable privacy risks arising from the user's loss of control over their personal data and digital footprints. Large amounts of scattered personal data lead to information overload, disorientation and loss of efficiency. The project aims at integrating all personal data in a personal sphere through a single, user-controlled point of access: the userware. This tool will run on the user's devices, and rely on scalable peer-to-peer communication in order to avoid external storage of personal data as far as possible and to enhance data portability.
DOPA - Data Supply Chains for Pools, Services and Analytics in Economics and Finance
DOPA will enable European SMEs to become key players in the global data economy, as the impact of the DOPA RTD activities will likely materialize on both the supply and the demand sides of B2B vertical market segments of data-related services.
EUCases - EUropean and National CASE Law and Legislation Linked in Open Data Stack
EUCases will develop a unique pan-European law and case law Linking Platform transforming multilingual legal open data into linked open data after semantic and structural analysis
EUCases will develop a unique pan-European law and case law Linking Platform transforming multilingual legal open data into linked open data after semantic and structural analysis. It will reuse the millions of legal documents from EU and national legislative and case law portals, and open access doctrinal work.
EUCLID - Educational curriculum for the usage of Linked Data
EUCLID will facilitate professional training for data practitioners aiming to use Linked Data in their daily work, through a curriculum implemented as a combination of living learning materials and activities (eBook series, webinars, face‐to‐face training), validated by the user community through continuous feedback.
FERARI - Flexible Event pRocessing for big dAta aRchItectures
FERARI Vision: the project intends to exploit the structured nature of M2M data while retaining the flexibility required for handling unstructured data elements. Taking into account the structured nature of the data will enable business users to express complex tasks, such as efficiently identifying sequences of events over distributed sources with complex relations, or learning and monitoring sophisticated abstract models of the data.
EUROFIT - Integration, Homogenisation and Extension of the Scope of Anthropometric Data Stored in Large EU Pools
Since 1999, over 16 large-scale national body scanning surveys have been conducted around the world (six in Europe) gathering 3D shape data from over 120,000 subjects (~50,000 Europeans). The availability of these data pools has created the opportunity to exploit shape information beyond current 1D-measure use. However, these data pools are dispersed and heterogeneous and, above all, the exploitation of 3D data at industry level requires knowledge, skills and resources beyond the means of companies, especially SMEs. These barriers have until now confined the use of existing 3D shape data to scientific research. The overall aim of EUROFIT is thus to implement an online platform and an open framework.
e-LICO - e-Laboratory for Interdisciplinary Collaborative Research in Data Mining and Data-Intensive Sciences
Data mining support to end users
e-LICO is meant to link two communities: data miners in quest of data to feed their sophisticated tools and domain scientists who must confront massive data. The project aims at developing efficient technologies for intelligent knowledge extraction from globally growing loads of images, text and other structured data and also at fostering the use of these data mining methodologies in data-intensive sciences. e-LICO will be demonstrated on a systems biology approach to disease studies, with focus on diseases of the kidney and urinary pathways.
FIRST - Large scale information extraction and integration infrastructure for supporting financial decision making
How do you focus on what is relevant when making financial decisions in a world of information overload?
The FIRST project provides an information extraction, information integration and decision-making infrastructure for information management in the financial domain. This area faces extraordinary challenges from extremely large, dynamic, and heterogeneous sources of information. The daily work and the business success of all decision makers in this industry depend on the availability of highly trustworthy, easily acquirable information.
Information is among the most valuable assets in the financial industry, and reducing information asymmetries and increasing transparency by providing a fast, real-time, automatic and more comprehensive information base can help prevent false decisions.
Fish4knowledge - Supporting humans in knowledge gathering and question answering w.r.t. marine and environmental monitoring through analysis of multiple video streams
Studying marine biology by analysing automatically large amounts of underwater video feeds
The study of marine ecosystems is vital for understanding environmental effects, such as climate change and the effects of pollution, but is extremely difficult because of the inaccessibility of data. Undersea video data is usable but is tedious to analyse (for both raw video analysis and abstraction over massive sets of observations), and is mainly done by hand or with hand-crafted computational tools. Fish4Knowledge will allow a major increase in the ability to analyse this data: 1) Video analysis will automatically extract information about the observed marine animals which is recorded in an observation database. 2) Interfaces will be designed to allow researchers to formulate and answer higher level questions over that database.
GAPFILLER - GNSS DAta Pool for PerFormances PredIction and SimuLation of New AppLications for DevelopERs
The GAPFILLER project aims at filling the gap between big manufacturers and SMEs by providing the researcher and developer community with a unique, extensible data pool enabling performance prediction and simulation of new Global Navigation Satellite System (GNSS) based applications and algorithms.
FOCUS K3D - FOster the Comprehension, adoption and USe of Knowledge intensive technologies for coding and sharing 3D media content in consolidated and emerging application communities
With the focus on the awareness of new ways of working with 3D models and objects
FOCUS K3D will support 3D user communities, and help them in the adoption of best-practices for the integrated use of semantics in 3D content modelling and processing. The coordination action will promote a critical mass of interdisciplinary activities to encourage different communities to learn from each other while contributing valuable skills to the problems of 3D content and knowledge capture, interpretation and sharing. It will also coordinate actions devoted to the dissemination of available research solutions to a wide community of users. FOCUS K3D has identified a number of applications that are both consolidated in the massive use of 3D digital resources (like Medicine and Bioinformatics or Product Modelling) and emerging (like Gaming or Archaeology).
FUSEPOOL - Fusing and pooling information for product/ service development and research
Fusepool develops a user-adaptive «Living Knowledge Pool» for product development and research
Fusepool develops a user-adaptive «Living Knowledge Pool» for product development and research. Compared to existing search and knowledge management solutions, Fusepool provides two core benefits: the automated transformation of content from web harvesting and participating organizations into structured Linked Open Data format, and the automated group-specific optimization of knowledge finding and matching based on transfer learning from individual users.
Fusepool P3 - Fusepool Publish-Process-Perform Platform for Linked Data
Fusepool P3 makes publishing and reuse of linked data as easy as possible for end users and data developers

The goal is to make publishing and reuse of linked data as easy as possible for end users and data developers, based on a thriving market economy with data publishers, developers, and consumers along the value chain.
Making data reusable and interoperable within and outside the organization requires a fundamentally different approach to ‘storing’ knowledge. “The best name is probably a Logical Data Warehouse, because it focuses on the logic of information ...[for] giving integrated access to all forms of information assets” (Gartner's Mark Beyer). Only with integrated access to the data is it possible to build apps on top of that data that scale across single implementations and provide added value for a wide variety of end users and data developers.


GEOKNOW - Making the Web an Exploratory for Geospatial Knowledge

The advent of the Data Web demonstrates how Web technologies can be employed to integrate dispersed, heterogeneous information.

i3DPost - intelligent 3D content extraction and manipulation for film and games
High quality 3D content
i3DPost will develop new methods and intelligent technologies for the extraction of structured 3D content models from video, at a level of quality suitable for use in digital cinema and interactive games. The research will enable the increasingly automatic manipulation and re-use of characters, with changes of viewpoint and lighting. i3DPost will combine advances in 3D data capture, 3D motion estimation, post-production tools and media semantics. The result will be film quality 3D content in a structured form, with semantic tagging, which can be manipulated in a graphic production pipeline and reused across different media platforms.
IKS - Interactive Knowledge Stack for small to medium CMS/KMS providers
Moving hundreds of SMEs towards reaping benefits from the Semantic Web
IKS creates a technology platform for semantically enabled content and knowledge management, targeted at small to medium CMS technology providers. The objective is to provide an easy-to-use knowledge and content management framework which raises the semantic capability of European software houses that are active in developing intelligent content solutions for customers. The first and most prominent research issue for IKS is the question: 'What needs to be programmed so that end users can directly interact with knowledge?'
iMP - Intelligent Metadata-driven Processing and distribution of audiovisual media
Virtual Film Factory
iMP will create architecture, workflow, and applications for intelligent metadata-driven processing and distribution of digital movies and entertainment. The goal is to enable a 'Virtual Film Factory' in which creative professionals can work together to create and customise programmes. The project intends to radically extend the use of metadata, linking it to semantic technologies to support and enhance the creative processes, to unify the treatment of sound and image, and to remove the barriers between postproduction, customisation, formatting and distribution. iMP will reduce the cost of distribution and provide audiences with media tailored to their needs.

IMPART - Intelligent Management Platform for Advanced Real-Time media processes

IMPART will research, develop and evaluate intelligent information management solutions for 'big data' problems in the field of digital cinema production.

INSEMTIVES - Incentives for Semantics
Bridging the gap between human and computational intelligence
INSEMTIVES will develop process methodologies for the creation of semantic annotations for different types of Web resources, jointly exploiting human intelligence, community effects and automatic machine processing. The project will enhance semantic content authoring in several areas related to semantic technologies, ranging from ontology engineering, ontology learning and ontology population to the semantic annotation of media and Web services. Three case studies will apply and validate the developed technology in the sectors of telecommunications, online marketplaces, and computer animated virtual worlds. The INSEMTIVES Toolkit is envisioned to be deployable as Web and client-side applications and will target the broad Internet public.

INSIGHT - Intelligent Synthesis and Real-tIme Response using Massive Streaming of Heterogeneous Data

The instrumentation of the world with diverse sensors, smartphones, and social networks generates exascale data that offers the potential of enhanced science and services.

iPROD - Integrated management of product heterogeneous data
iProd aims to improve the efficiency and quality of the Product Development Process of innovative products by developing a flexible, service oriented, customer driven software framework
Data and knowledge management technologies are of strategic importance for industrial innovation, provided they are integrated into company processes and the organisational structure, and can be flexibly adapted as the company evolves. In particular, the Product Development Process (PDP) of manufacturing companies requires the efficient management of huge amounts of data from different sources and their integration into the subprocesses that compose the product chain. Efficient use of the information lifecycle, through broad adoption of virtual testing and inter-functional management of the related data in product management, would become a strategic advantage in the innovation race. iProd will improve the efficiency and quality of the Product Development Process by developing a flexible, service-oriented, customer-driven software framework that will be the backbone of computer systems associated with current and new development processes.
IRIS - Integrating Research in Interactive Storytelling
Virtual Centre of Excellence
IRIS aims at achieving breakthroughs in the understanding of Interactive Storytelling and the development of corresponding technologies. The expected progress is to bring basic Interactive Storytelling technologies to a level of maturity where they can be used by a broad range of stakeholders in new media, and to make Interactive Narratives less dependent on any specific production process. The development of Interactive Storytelling technologies will impact the implementation of new media (interactive TV, interactive films) as well as produce paradigm shifts in interactive entertainment.


IQUMULUS - A High-volume Fusion and Analysis Platform for Geospatial Point Clouds, Coverages and Volumetric Data Sets

For geospatial applications, huge amounts of heterogeneous data sets of different topology are collected nowadays with different data acquisition techniques.

JUMAS - Judicial Management by Digital Libraries Semantics
An advanced audio and video knowledge management system
JUMAS addresses the need to build an infrastructure able to optimise the information workflow in order to facilitate later analysis. New models and techniques for representing and automatically extracting the embedded semantics derived from multiple data sources will be developed. The most important goal of the JUMAS system is to collect, enrich and share multimedia documents annotated with embedded semantics while minimising manual transcription activity. JUMAS is tailored to managing situations in which multiple cameras and audio sources are used to record assemblies in which people's debates and event sequences need to be semantically reconstructed for future consultation.

JUNIPER - Java platform for high-performance and real-time large scale data management

The efficient and real-time exploitation of large streaming data sources and stored data poses many questions regarding the underlying platforms.

KHRESMOI - Knowledge Helper for Medical and Other Information Users
KHRESMOI aims to develop a multi-lingual multi-modal search and access system for biomedical information and documents.
This will be achieved by:
- Effective automated information extraction from biomedical documents, including improvements using crowdsourcing and active learning, and automated estimation of the level of trust and target user expertise
- Automated analysis and indexing of medical images in 2D (X-rays), 3D (MRI, CT) and 4D (fMRI)
- Linking information extracted from unstructured or semi-structured biomedical texts and images to structured information in knowledge bases
- Support for cross-language search, including multilingual queries and the return of machine-translated pertinent excerpts
- Adaptive user interfaces that assist in formulating queries and display search results via ergonomic and interactive visualizations.
KIWI - Knowledge in a Wiki
The wiki way
KIWI will develop an advanced knowledge management system based on a semantic wiki. The KIWI system will support collaborative knowledge creation and sharing, and use semantic descriptions and reasoning as a means to intelligently author, change and deliver content. The KIWI vision will describe how the 'convention over configuration' paradigm of wikis combined with semantic technologies can lead to flexible and problem-oriented knowledge management. The project will evaluate the system in two use cases in the area of software and project knowledge management, and the software will be published as OpenSource to ensure a broad uptake and sustainability of the project results.
KYOTO - Knowledge Yielding Ontologies for Transition-based Organization
A collaborative tool for environmental organisations
KYOTO is a generic system offering knowledge transition from any domain of knowledge and information, across different target groups in society and across linguistic, cultural and geographic borders. It represents a new concept of information mining through knowledge mining. The project developments will be enabled through an ontology linked to wordnets for a variety of languages. Concept extraction and data mining is applied through a chain of semantic processors. KYOTO addresses the need for global and uniform transition of knowledge across different types of organisations. This is particularly critical in the environmental domain, but collaborative knowledge sharing in the medical, security or legal area can be set up quickly in Europe using the KYOTO mechanism.
LarKC - The Large Knowledge Collider: a platform for large scale integrated reasoning and Web-search
Not just a single reasoning engine, but a generic platform and an open architecture
LarKC will develop the Large Knowledge Collider, an open-source pluggable distributed infrastructure for real-time incomplete reasoning and search, exploiting techniques and heuristics from areas as diverse as databases, machine learning, cognitive science and Semantic Web. The platform will fulfil needs in sectors that are dependent on massive heterogeneous information sources such as telecommunication services, bio-medical research, and drug-discovery. LarKC is designed to harness the efforts of various research communities so as to deliver the paradigm shift required for reasoning at Web scale.
LATC - The LOD Around-The-Clock
The LOD Around-The-Clock Support Action aims to help institutions and individuals publish and consume quality Linked Data on the Web.
Progress in the areas of large-scale data processing, data integration and information quality assessment increasingly depends on the availability of large amounts of real-world data.
The emerging Web of Linked Data is the largest source of multi-domain, real-world and real-time data that currently exists, containing billions of assertions and spanning diverse domains: media companies such as the BBC and Reuters, pharmaceutical companies like Eli Lilly and Johnson & Johnson, as well as the US and UK governments are publishing Linked Data on the Web.
This global data space allows the development of applications that benefit from the universal identifiers (URIs) and the uniform data model (RDF) over a scalable protocol for data access (HTTP).
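The triple-and-URI data model mentioned above can be sketched in a few lines of plain Python (the URIs and predicates below are invented for illustration; real Linked Data applications would use an RDF library and SPARQL rather than tuples):

```python
# Linked Data in miniature: every statement is a (subject, predicate,
# object) triple and every identifier is a URI. All URIs below are
# hypothetical examples, not real published data.
triples = [
    ("http://example.org/bbc", "http://example.org/type", "http://example.org/MediaCompany"),
    ("http://example.org/bbc", "http://example.org/publishes", "http://example.org/programmes"),
    ("http://example.org/lilly", "http://example.org/type", "http://example.org/PharmaCompany"),
]

def match(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard
    (the same idea SPARQL expresses with query variables)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Which resources are media companies?
media = match(p="http://example.org/type", o="http://example.org/MediaCompany")
```

Because the identifiers are HTTP URIs, a client can dereference any subject it encounters to fetch further triples, which is what makes this data space global rather than siloed.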
LeanBigData - Ultra-Scalable and Ultra-Efficient Integrated and Visual Big Data Analytics
LeanBigData aims at addressing three open challenges in big data analytics: 1) the cost, in terms of resources, of scaling big data analytics for streaming and static data sources; 2) the lack of integration of existing big data management technologies and their high response time; 3) the insufficient end-user support, leading to extremely lengthy big data analysis cycles.
LeanBigData will address these challenges by architecting and developing three resource-efficient Big Data management systems typically involved in Big Data processing: a novel transactional NoSQL key-value data store, a distributed complex event processing (CEP) system, and a distributed SQL query engine.
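As a rough illustration of what a complex event processing (CEP) system does (a generic sketch, not LeanBigData's actual engine), the rule below derives a higher-level "spike" event from a sliding-window aggregate over a stream of readings:

```python
from collections import deque

def detect_spikes(stream, window=3, threshold=100):
    """Emit a 'spike' event whenever the sum of the last `window`
    readings exceeds `threshold` (a toy CEP rule; window size and
    threshold are invented for illustration)."""
    buf = deque(maxlen=window)  # bounded memory, suitable for streams
    events = []
    for i, value in enumerate(stream):
        buf.append(value)
        if len(buf) == window and sum(buf) > threshold:
            events.append(("spike", i, sum(buf)))
    return events

# The windows ending at indices 3 and 4 exceed the threshold:
print(detect_spikes([10, 20, 50, 40, 30], window=3, threshold=100))
```

A production CEP engine evaluates many such rules concurrently over distributed streams; the point here is only that complex events are derived, not stored and queried after the fact.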


LDBC - Linked Data Benchmark Council

Non-relational data management is emerging as a critical need for the new data economy based on large, distributed, heterogeneous, and complexly structured data sets.

LEO - Develop software tools that support the whole life cycle of reuse of linked open EO data and related linked geospatial data
In LEO, the core academic partners of TELEIOS join forces with two SMEs and one industrial partner with relevant experience to develop software tools that support the whole life cycle of reuse of linked open EO data and related linked geospatial data. To demonstrate the benefits of linked open EO data and its combination with linked geospatial data to the European economy, a precision farming application is being developed that is heavily based on such data.
LinDA - Linked data Analytics
LinDA assists SMEs and data providers in renovating public sector information

LinDA aims at assisting SMEs and data providers in renovating public sector information and analysing and interlinking it with enterprise data, by developing a cross-platform repository for accessing and sharing Linked Data vocabularies and metadata amongst SMEs’ data marts that can be linked to the LOD (Linked Open Data) cloud.


LINKEDUP - Linking Web Data for Education Project - Open Challenge in Web-scale Data Integration

LinkedUp aims to push forward the exploitation of the vast amounts of public, open data available on the Web, in particular by educational institutions and organizations.

LOD2 - Creating Knowledge out of Interlinked Data
Research and development of novel, innovative Semantic Data Web technologies
Expansion and integration of openly accessible and interlinked data on the web
Adoption and implementation of Linked Data for media, enterprise and government
LOD2 will integrate and syndicate linked data with large-scale, existing applications and showcase the benefits in the three application scenarios. The resulting tools, methods and data sets have the potential to change the Web as we know it today.


MEDIAMIXER - Community set-up and networking for the reMIXing of online MEDIA fragments

While traditional markets are already established for complete videos, media libraries and TV archives, where entire videos may be found and purchased for re-use in new media production, these markets do not permit the easy purchase or sale of smaller fragments of AV material.


NEWSREADER - Building structured event indexes of large volumes of financial and economic data for decision making

The volume of news data is enormous and expanding, covering billions of archived documents and millions of documents as daily streams, while at the same time getting more and more interconnected with knowledge provided elsewhere.

NoTube - Networks and Ontologies for the Transformation and Unification of Broadcasting and the Internet
Putting the TV viewer back in the driver's seat
NoTube will focus on TV content as a medium for personalised interaction between people, based on a service architecture that caters for a variety of content metadata, delivery channels and rendering devices. The project will take a user-centric approach to investigate fundamental aspects of consumers' content-customisation needs, interaction requirements and entertainment wishes. The three use cases (personalised semantic news; personalised TV guide with adaptive advertising; Internet TV in the Social Web) address different dimensions of personalised TV-content interaction, including individual viewers and communities of viewers as well as multi-lingual and multi-modal interaction.
OKKAM - Enabling the Web of Entities. A scalable and sustainable solution for systematic and global identifier reuse in decentralised information environments
Entity identifiers should not be multiplied beyond necessity
OKKAM will deliver a secure and privacy aware open source infrastructure to manage entity references. Just as the WWW enables a global decentralised network of documents, connected by hyperlinks, OKKAM will provide a global digital space for publishing and managing information about entities, where every entity is uniquely identified, entities can be reused across digital resources and links between entities can be explicitly specified and exploited in a variety of scenarios.
The ONTIC project will integrate offline and online mechanisms and techniques into an autonomous network traffic characterization system, to be used as the cornerstone of a new generation of scalable and proactive network management and engineering applications.
The ONTIC project proposes to investigate, implement and test:
1. A novel architecture of scalable mechanisms and techniques able to a) characterize online network traffic data streams, identifying the evolution of traffic patterns, and b) proactively detect anomalies in real time when hundreds of thousands of packets per second are processed.
2. A completely new set of scalable offline data mining mechanisms and techniques to characterize network traffic, applying a big data analytics approach and using distributed computation paradigms in the cloud on extremely large network traffic summary datasets consisting of trillions of records.
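To illustrate the kind of online, constant-memory technique such real-time anomaly detection relies on (a generic sketch using Welford's running-variance algorithm, not ONTIC's actual methods), the detector below flags readings whose z-score against the stream seen so far is extreme:

```python
import math

class OnlineAnomalyDetector:
    """Toy online anomaly detector: maintains a running mean/variance
    (Welford's algorithm) and flags readings with a large z-score.
    The threshold of 3.0 is a conventional illustrative choice."""

    def __init__(self, z_threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0
        self.z_threshold = z_threshold

    def update(self, x):
        """Feed one reading; return True if it looks anomalous."""
        anomalous = False
        if self.n >= 2:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(x - self.mean) / std > self.z_threshold:
                anomalous = True
        # O(1) state update per reading, suiting high-rate packet streams
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

det = OnlineAnomalyDetector()
flags = [det.update(x) for x in [10, 11, 9, 10, 12, 10, 100]]
```

The design point matters at ONTIC's scale: because the detector keeps only three numbers of state, it can keep pace with a stream regardless of how many packets have already been seen.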
ONTORULE - Ontologies meet business rules
Enabling the right people to interact in their own way with the right part of their business application
ONTORULE aims to lift the knowledge relevant to business rules in an organisation from the IT level to the business level, allow management of this knowledge by the business professional, and make this knowledge available to the software applications in the organisation. The ONTORULE technology is validated and showcased using two industrial case studies (automotive and steel industry). The results of the project will not only improve the awareness and increase the use of semantic Web technologies in the automotive industry and break new ground in a traditional industry sector, but will be applied world-wide and in many different domains.
OpenCube - Publishing and Enriching Linked Open Statistical Data for the Development of Data Analytics and Enhanced Visualization Services
OpenCube is facilitating publishing of high-quality linked statistical data, and reusing distributed linked statistical datasets
The ultimate goal of OpenCube project is to facilitate (a) publishing of high-quality linked statistical data, and (b) reusing distributed linked statistical datasets to perform advanced data analytics and visualizations.
OpenDataMonitor - Monitoring, Analysis and Visualisation of Open Data Catalogues, Hubs and Repositories
OpenDataMonitor provides the possibility to gain an overview of available open data resources and undertake analysis and visualisation of existing data catalogues using innovative technologies.



OPTIQUE - Scalable End-user Access to Big Data

Scalable end-user access to Big Data is critical for effective data analysis and value creation. Optique will bring about a paradigm shift for data access, reducing the turnaround time for information requests to minutes rather than days.

plan4business - A service platform for aggregation, processing and analysis of urban and regional planning data
The plan4business project aims to develop a platform that can serve users a full catalogue of planning data, such as transport infrastructure, regional plans, urban plans and zoning plans. The platform offers clients not just the data itself in integrated, harmonised and thus ready-to-use form; it also offers rich analysis and visualisation services via an API and an interactive web frontend. The functions offered range from simple statistical analysis to complex trend detection and 2D/3D representations.
Planet Data - Large-scale Data Management
The Planet Data project aims to establish an interdisciplinary, sustainable European community of researchers, helping organizations to expose their data on the Web in a useful way.
PlanetData will push forward the state-of-the-art in large-scale data management and its application to the creation of useful, open data sets. This is motivated by the increasing reliance of business on large public data; the uptake of open data principles in many vertical sectors; and the need of research communities to make sense out of petabytes of scientific data, and to describe and expose this data in ways that encourage and enable collaboration.
plugIT - Business and IT Alignment using a Model-Based Plug-in Framework
Plug your Business into IT
PlugIT puts the business and IT-expert in the centre and helps them to more productively expose, use and interact with the business processes of their customers. The project will develop an 'IT socket' that will realise the vision of businesses 'plugging-in' to IT. This concept will play the role of conceptual middleware for bridging the gap between business-specific knowledge and cross-domain IT governance. The key challenge is to build a Next Generation Modelling Framework that considers existing standards, tools and frameworks and establishes a formal integration and transformation among them to realise the 'IT-socket'.
PROMISE - Participative Research labOratory for Multimedia and Multilingual Information System Evaluation
Advancing the Evaluation and Benchmarking of Multimedia and Multilingual Information systems
Measuring is a key to scientific progress. This is particularly true for research concerning complex systems, whether natural or human-built. Multilingual and multimedia information systems are increasingly complex: they need to satisfy diverse user needs and support challenging tasks. Their development calls for proper evaluation methodologies to ensure that they meet the expected user requirements and provide the desired effectiveness.
PROMISE will provide a virtual laboratory for conducting participative research and experimentation to carry out, advance and bring automation into the evaluation and benchmarking of such complex information systems, by facilitating management and offering access, curation, preservation, re-use, analysis, visualization, and mining of the collected experimental data.
PRONTO - Event Recognition for Intelligent Resource Management
Decision support that results in saving time in dynamic and noisy situations
PRONTO will offer real-time, knowledge-led support for decision-makers in sectors characterised by large volumes of multi-source, multi-format data. The project introduces a highly synergetic approach to intelligent resource management by combining the research areas of information extraction from sensor data, information extraction from audio and text, and event recognition. This approach is applicable to a wide range of domains, where resource management is needed, and the PRONTO technology will be tested in two such domains: emergency rescue operations and city transport management.
PublicaMundi - Scalable and Reusable Open Geospatial Data
PublicaMundi will deliver the required methodologies, technologies and software components to leverage geospatial data
PublicaMundi will deliver the required methodologies, technologies and software components to leverage geospatial data as first-class citizens in open data catalogues, and deliver reusable software components and tools enabling the development of scalable, responsive, and multimodal value added applications from open geospatial data.
PuppyIR - An Open Source Environment to construct Information Services for Children
An opportunity for children to fully and safely exploit the power of the Internet
PuppyIR aims to facilitate the creation of child-centric information access, based on the understanding of the behaviour and needs of children. An open source framework will be created in which advanced functionalities can be developed, and then deployed to create information services that are tailored towards the unique information needs of children and their intuitive style of interaction. The project will also contribute to the evaluation of children's search systems by the development of child-centered evaluation methodologies and datasets for evaluation.
QualiMaster - A configurable real-time data processing infrastructure mastering autonomous quality adaptation
The QualiMaster vision is to make high-volume real-time data processing a highly opportunistic process that flexibly exploits data sources, reconfigurable hardware and families of approximate algorithms in a configurable, demand-driven and adaptive way.
RENDER - Reflecting Knowledge Diversity
RENDER engages with the World Wide Web and its amazing diversity of information, opinions, viewpoints, mind sets and news. RENDER addresses the challenges of purposeful access, processing and management of these sheer amounts of data, whilst leveraging the diversity inherently unfolding through world wide-scale contribution and collaboration on the Web.
RENDER’s information management solution shall scale to large amounts of data and hundreds of thousands of users, while reflecting the plurality of points of view and opinions.
RETHINK big - Maximize European competitiveness in the processing and analysis of Big Data
The objective of the RETHINK big Project is to bring together the key European hardware, networking, and system architects with the key producers and consumers of Big Data to identify the industry coordination points that will maximize European competitiveness in the processing and analysis of Big Data over the next 10 years.
ROBUST - Risk and Opportunity management of huge-scale BUSiness communiTy cooperation
Online communities generate major economic value and form pivotal parts of corporate expertise management, marketing, product support, CRM, product innovation and advertising. Communities can exceed millions of users, and infrastructures must support hundreds of millions of discussion threads that link together billions of posts. ROBUST is targeted at developing methods to understand and manage the business, social and economic objectives of the users, providers and hosts, and to meet the challenges of scale and growth in large communities.


SEMAGROW - Data intensive techniques to boost the real-time performance of global agricultural data infrastructures

As the trend to open up data and provide them freely on the Internet intensifies, the opportunities to create added value by combining and cross-indexing heterogeneous data at a large scale increase.

Service-Finder - Web Service Discovery at Web Scale
Web Services Search Engine
Service-Finder will develop a platform for service discovery in which Web Services are embedded in a Web 2.0 environment. The project addresses the problem of utilising the Web Service technology for a wider audience by realising a comprehensive framework for Discovery. The result will be a Search Engine that enables users to find up-to-date information on available Web Services, similarly to current search engines for content pages.
SIMPLEFLEET - Democratizing Fleet Management
SimpleFleet will be a one-stop shop for SMEs for tracking solutions, with the ultimate goal of commoditising tracking and fleet management services
GPS positioning devices are becoming a commodity sensor platform with the emergence and popularity of smartphones and ubiquitous networking. While the positioning capability has been widely exploited in location-based services, its spatiotemporal cousin, tracking, has so far only been considered in costly and complex fleet management applications. SimpleFleet will make it easy for SMEs, from both a technological and a business perspective, to create (mobile) Web-based fleet management applications.
SMARTMUSEUM - Cultural Heritage Knowledge Exchange Platform
Full benefit of the multi-source digitalised cultural information
SMARTMUSEUM will establish a platform for enhanced experience resulting from interaction between visitors and cultural heritage objects. The platform incorporates secure adaptive profiling, on-site local distributed knowledge and global digital cultural information access. The future smart museum IT infrastructure and services will increase the interaction between multilingual European citizens and cultural heritage objects.
SmartProducts - Proactive Knowledge for Smart Products
A new paradigm for the interaction of people and products
SmartProducts focuses on one special type of content, embedded in or seamlessly linked with concrete physical objects or products: proactive knowledge. Proactive knowledge has to be self- and context-aware, since it will actively guide a user in the interaction with the product. The project will develop the foundation for smart products that are able to communicate and co-operate with humans, other products and the environment. The outcome of SmartProducts will have an impact on the manufacturing domain, primarily targeting the consumer products, automotive and aerospace industries.
SmartVortex - Scalable Semantic Product Data stream Management for Collaboration and decision Making in Engineering
Innovation through an intelligent analysis of massive data streams
The SMART VORTEX project aims at providing a technological infrastructure and interoperable methods, tools and services that will support large-scale industrial innovation and collaborative engineering projects, so that information management underpins an intelligent analysis of massive data streams and the growth of business value and capabilities.
smeSpire - A European Community of SMEs built on Environmental Digital Content and Languages
SmeSpire's purpose is to encourage and enable the participation of SMEs in the mechanisms of harmonising and making large scale environmental content available.
The INSPIRE Directive 2007/2/EC establishes an Infrastructure for Spatial Information in Europe, requiring large amounts of environmental digital content to be made accessible across Europe and resulting in a data pool that is expected to be of huge value for a myriad of value-added applications. The INSPIRE Implementing Rules and Legal Acts outline these data pools, but more work is needed.
SOPCAWIND - Software for the Optimal Place CAlculation for WIND-farms
SOPCAWIND optimizes wind farm locations by considering several criteria, such as wind power, local environmental characteristics, potential interference with communication systems, visual impact and the existence of archaeological sites.
The SOPCAWIND project aims at developing a new service through a software tool for optimal wind farm design, based on a large and heterogeneous set of digitalised data containing information from different fields (wind climate, geography, environment, archaeology and socio-economy) that will be treated, validated, standardised and converted for this purpose.
SPEEDD - Scalable Proactive Event-Driven Decision-making
SPEEDD will develop a prototype for proactive event-driven decision-making: decisions will be triggered by forecasting events, whether they correspond to problems or opportunities, instead of reacting to them once they happen. The decisions will be made in real time, in the sense that they will be taken under tight time constraints and require on-the-fly processing of Big Data, that is, extremely large amounts of noisy data flooding in from different geographical locations, as well as historical data.
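The proactive pattern can be sketched in a few lines (with a naive linear forecast and hypothetical numbers, not SPEEDD's actual models): instead of reacting once a limit is exceeded, the controller extrapolates the recent trend and acts when a crossing is predicted:

```python
def predict_crossing(readings, threshold, horizon):
    """Linearly extrapolate the last two readings and return True if the
    value is forecast to exceed `threshold` within `horizon` steps.
    A toy forecaster; real systems would use proper event forecasting."""
    if len(readings) < 2:
        return False
    slope = readings[-1] - readings[-2]
    forecast = readings[-1] + slope * horizon
    return forecast > threshold

# Traffic density is still below the threshold of 100 but rising fast;
# a proactive controller acts now rather than waiting for congestion.
print(predict_crossing([60, 70, 80], threshold=100, horizon=3))  # slope 10 -> forecast 110
```

The contrast with reactive processing is the point: the decision fires while the observed value is still within limits, buying the time needed to intervene before the problem materialises.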
SYNC3 - Synergetic Content Creation and Communication
Methods for connecting users to other users and news content
SYNC3 will provide an intelligent framework for making the vast quantity of user comments on news issues more accessible. The project will structure the part of the blogosphere that refers to running news stories, rendering it accessible, manageable and re-usable. The immediate target of SYNC3 is the news industry and social networks, but domains like commerce, tourism, e-science and business intelligence are likely to benefit from the resulting technology.
TELEIOS - Virtual Observatory Infrastructure for Earth Observation Data
Building a Virtual Observatory: a powerful information system for managing very large amounts of satellite earth observation data.
Earth observation data has increased considerably over the last decades as satellite sensors collect and transmit back to Earth many gigabytes of data per day. The aim of project TELEIOS is to increase the usability of the terabytes of satellite images lying dormant in archives by automating the relevant data management, integration and knowledge discovery tasks. The main innovation of project TELEIOS is the development of a Virtual Observatory infrastructure that goes beyond the current state of the art Earth Observation portals and Image Information Mining systems.
TRIDEC - Collaborative, Complex and Critical Decision-Support in Evolving Crises
TRIDEC focuses on new technologies for real-time intelligent information management in collaborative, complex and critical decision processes in earth management. The key challenge is the construction of a communication infrastructure of interoperable services through which the intelligent management of dynamically increasing volumes and dimensionality of information and data is efficiently supported, and where groups of decision makers collaborate and respond quickly in a decision-support environment.
VALUE-IT - Adding Value to RTD: Accelerating Take-up of Semantic Technologies for the Enterprise
Dynamic links between research and business environments
VALUE-IT will address the need to improve the European performance in producing socio-economically relevant RTD results, and to accelerate innovation. The support action will cooperate with and provide focused support to STE applied researchers and related industry stakeholders in order to add value to research endeavours. By implementing a Support Mechanism consisting of interlinked activities, such as business demand driven and ST innovation roadmapping, 'matchmaking' and awareness raising support, VALUE-IT will help to move semantic technologies to the mainstream market.
VELaSSCo - Visualization For Extremely Large-Scale Scientific Computing
VELaSSCo aims at developing a new concept of integrated end-user visual analysis methods with advanced management and post-processing algorithms for engineering modelling applications, scalable for real-time petabyte level simulations.


VISCERAL - VISual Concept Extraction challenge in RAdioLogy

VISCERAL is a support action that will organize two competitions on information extraction and retrieval involving medical image data and associated text. These competitions will benchmark the state of the art and define the next big challenges in large-scale data processing for medical image analysis.

ViSTA-TV - Video Stream Analytics for Viewers in the TV Industry
Live video content is increasingly consumed over IP networks in addition to traditional broadcasting. The move to IP provides a huge opportunity to discover what people are watching in much greater breadth and depth than currently possible through interviews or set-top box based data gathering by rating organizations, because it allows direct analysis of consumer behaviour via the logs they produce. The ViSTA-TV project will gather consumers' anonymized viewing behaviour and the actual video streams from broadcasters/IPTV-transmitters to combine them with enhanced electronic program guide information as the input for a holistic live-stream data mining analysis: the basis for an SME-driven market-place for TV viewing-behaviour information.
WeKnowIt - Emerging, Collective Intelligence for personal, organisational and social use
Novel techniques for generating different layers of intelligence
WeKnowIt aims to develop techniques for exploiting multiple layers of intelligence from user-contributed content, which together constitute Collective Intelligence. The project will provide technology able to support a paradigm shift, establishing a foundation for a new generation of services and tools supporting communities of users. The approach is built around different Intelligence Layers (Personal, Media, Mass, Social and Organisational) which address various aspects of user contributed and consumed content. The emphasis will be on integration and bridging (e.g. social and content dimensions) and the mobile and organisational business aspects.

This page is maintained by: CNECT G3 Webmaster (email removed)