
Search Computing

Final Report Summary - SECO (Search Computing)

The Search Computing project (SeCo) focused on answering complex search queries such as “Who are the strongest European competitors on software ideas? Who is the best doctor to cure insomnia in a nearby hospital? Where can I attend an interesting conference in my field close to a sunny beach?” This information is available on the Web, but no software system can accept such queries or compute the answers. The objective of the project was to provide the abstractions, foundations, methods and tools required to answer these and many similar questions, by interacting with cooperating search services, and by using the ranking and joining of results as the dominant factors for service composition. Leveraging the distinctive features of search services, the project devised query languages, query execution plans, plan optimization techniques, query configuration tools, and exploratory user interfaces, covering aspects ranging from theory to applications in the fields of data management, human interaction, and software engineering.
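As an illustration, the kind of multi-service query described above can be sketched as a join over ranked search services. The services, their result format, and the score combination below are hypothetical stand-ins, not the project's actual components:

```python
# Illustrative sketch (not SeCo's actual API) of a multi-service search
# query: "conferences in my field close to a sunny beach". Each "service"
# is a hypothetical ranked search function returning (city, score) pairs;
# the query is answered by joining their results and combining rankings.

def conference_service(topic):
    # stand-in for a real search service: (city, topical relevance)
    return [("Barcelona", 0.9), ("Lisbon", 0.8), ("Munich", 0.7)]

def beach_service():
    # stand-in: (city, sunniness) for cities with good beaches
    return [("Lisbon", 0.9), ("Barcelona", 0.7)]

def join_services(topic):
    beaches = dict(beach_service())
    joined = [
        (city, relevance + beaches[city])        # combined ranking
        for city, relevance in conference_service(topic)
        if city in beaches                        # join on the city
    ]
    return sorted(joined, key=lambda t: t[1], reverse=True)

print(join_services("databases"))
# Lisbon (0.8 + 0.9) outranks Barcelona (0.9 + 0.7): neither service
# alone determines the final order, only their composition does.
```

The point of the sketch is that the answer emerges from composing services, and the final ranking can differ from the ranking of either service in isolation.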

From a theory perspective, several new methods were developed to extend rank-join theory, dealing with proximity joins and with the stability of search results under ranking uncertainty. These methods were supported by Panta Rhei, an open infrastructure for the efficient execution of search computing queries, designed for extensibility, portability and adaptation. From a human interaction perspective, the project investigated several ways of presenting complex results to users within the liquid query framework, in which query results adapt to changing user needs, much as liquids adapt to the vessels that contain them. Supported interactions include exploratory search, a paradigm that lets users progressively build search queries within a session. From a software engineering perspective, the project focused on search service modeling by introducing the service mart pattern, which exhibits a three-layer architecture describing, respectively, service content, signatures, and implementations. Service descriptions are bridged to knowledge repositories (we used YAGO) so as to enable reasoning over service and query terms; this lays the groundwork for supporting natural language or keyword-based interfaces.
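A minimal sketch of the rank-join idea underlying this line of work, in the style of hash rank-join algorithms such as HRJN: two ranked inputs are scanned alternately, and joins are reported only once their combined score provably cannot be beaten by any join still to be formed. The function name, input format, and scoring below are illustrative assumptions, not the project's actual algorithms:

```python
# Hedged sketch of a rank-join (HRJN-style). Both inputs are lists of
# (key, score) sorted by score descending; the join condition is key
# equality and the combined score is the sum. The threshold is an upper
# bound on the score of any join not yet produced, enabling early stop.

def rank_join(left, right, k):
    """Return the top-k joined tuples (combined_score, key)."""
    seen_l, seen_r = {}, {}           # key -> score seen on each side
    results = []                      # candidate joins (score, key)
    top_l = left[0][1] if left else 0
    top_r = right[0][1] if right else 0
    last_l, last_r = top_l, top_r     # most recent score from each input
    i = j = 0
    while i < len(left) or j < len(right):
        # alternate between inputs, falling back when one is exhausted
        take_left = (i < len(left)) and (j >= len(right) or i <= j)
        if take_left:
            key, score = left[i]; i += 1
            last_l = score
            seen_l[key] = score
            if key in seen_r:
                results.append((score + seen_r[key], key))
        else:
            key, score = right[j]; j += 1
            last_r = score
            seen_r[key] = score
            if key in seen_l:
                results.append((seen_l[key] + score, key))
        # no future join can score above this bound
        threshold = max(top_l + last_r, last_l + top_r)
        results.sort(reverse=True)
        if len(results) >= k and results[k - 1][0] >= threshold:
            break                     # top-k is final: stop early
    return results[:k]
```

The early-stop test is what makes rank-join attractive for search services: the top-k answers can be certified without fetching either ranked input in full.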

Search computing had a number of interdisciplinary ramifications, concerning economic and socio-technical aspects that are essential to ensure the applicability of the methods. Specifically, the project delivered micro-economic models for studying how “publishers” and “brokers” should be rewarded in a broader search economy; business models for result exploitation; and methods for user-centered application design. The project led to an important follow-up project on crowd-based search applications (http://crowdsearcher.search-computing.com/), started in 2011, which aims at filling the gap between traditional search systems and social systems: it interacts with real people to capture their opinions, suggestions, and emotions, leveraging crowdsourcing practices and making them viable on a social network. A new project on genomic computing, started in 2012, is also using SeCo results for searching genomic data. Eight PhD students graduated during the course of the project, while nine PhD students are currently contributing to CrowdSearcher and to genomic computing.

Project results are described in about one hundred publications in top-quality international journals and conference proceedings; SeCo has also promoted eight international workshops, produced four books published by Springer-Verlag, and obtained a US patent (“Method for Extracting, Merging and Ranking Search Engine Results”, Patent No. US 8,180,768 B2, May 15, 2012). The results of the SeCo project are illustrated on the SeCo Web site (www.search-computing.org).