Objective

The key problem that must be solved in order to build cognitive vision systems is the robust, efficient and learnable categorisation and interpretation of large numbers of objects, scenes and events in real settings. LAVA will create technologies enabling such systems, and an understanding of the systems- and user-level aspects of their applications, via a novel alliance between experts in statistical learning theory, computer vision and cognitive science. For practical computational efficiency and robustness, we shall devise methods for goal-directed visual attention and for the integration of multiple asynchronous visual cues. These results will be embodied in two integrated systems: one will employ vision for information retrieval in a mobile setting; the other will derive symbolic representations from video sequences, enabling a wide range of "ambient intelligence" scenarios.

Objectives:

Our goal is to create fundamental enabling technologies for cognitive vision systems and to understand the systems- and user-level aspects of their applications. Technologically, the objectives are the robust and efficient categorisation and interpretation of large numbers of objects, scenes and events in real settings, and the automatic online acquisition of knowledge of categories, for convenient construction of applications. Categorisation is fundamentally a generalisation problem, which we shall solve using measures of distance between visual descriptors known as "kernels". We aim to dramatically improve generalisation performance by incorporating prior knowledge about the behaviour of descriptors within kernels, and by exploiting the large amounts of unlabelled data available to vision systems. Finally, we aim to exploit this technology in integrated systems that employ vision for information retrieval in a mobile setting, and in systems that derive symbolic representations from video.

Work description:

Five of the project's seven workpackages are devoted to LAVA's core technologies.
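The kernel-based view of categorisation described in the objectives above can be sketched informally: a kernel k, acting as a similarity between visual descriptors, induces a distance in feature space, d(x, y)² = k(x, x) − 2·k(x, y) + k(y, y), which can drive a classifier. The following toy example is a minimal sketch under invented assumptions (RBF kernel, 2-D descriptors, nearest-descriptor decision rule, made-up category names); it is not LAVA's actual kernel design.

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel -- one common, assumed choice of descriptor similarity."""
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

def kernel_distance(x, y, k=rbf_kernel):
    """Distance induced by the kernel in feature space:
    d(x, y)^2 = k(x, x) - 2 k(x, y) + k(y, y)."""
    return float(np.sqrt(max(k(x, x) - 2 * k(x, y) + k(y, y), 0.0)))

def categorise(query, descriptors, labels, k=rbf_kernel):
    """Assign the category of the training descriptor nearest in kernel distance
    (a deliberately simple stand-in for a full kernel classifier)."""
    dists = [kernel_distance(query, d, k) for d in descriptors]
    return labels[int(np.argmin(dists))]

# Toy 2-D "descriptors" for two invented categories.
train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
labels = ["mug", "mug", "phone", "phone"]
print(categorise(np.array([0.05, 0.1]), train, labels))  # mug
```

Incorporating prior knowledge about descriptor behaviour, as the project proposes, would amount to replacing the generic RBF with kernels tailored to how descriptors vary under lighting, viewpoint and occlusion.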
The first two concentrate on learning and visual descriptors, and will involve close collaboration around kernel design and the incorporation of models of descriptor behaviour. The learning work will emphasise improving the generalisation properties of classifiers, for example by exploiting the vast amounts of unlabelled data available to vision systems. Visual descriptors will be designed to enable efficient learnability and discrimination between categories in the face of much extraneous information, such as lighting, viewpoint, occlusion and natural within-class variation.

The next two workpackages focus on higher-level issues of generic categorisation, interpretation and cue integration. These will collaborate closely in the study of attention mechanisms, but each has its specific focus: one is principally concerned with the practice of goal-directed search using mainly static cues of specific types, while the other aims at a unifying theory of attention for integrating arbitrary asynchronous cues.

One workpackage is devoted to building and evaluating the two integrated demonstrators, which contain contributions from all partners. It will also conduct the important task of data gathering. Both of these operations will be conducted in two phases, which are reflected in the task divisions of the other workpackages. The first phase will integrate early versions of components from the other workpackages. User- and systems-level evaluation will identify areas for improvement in the application scenarios and the system architecture, while maximising the opportunity for feedback on issues regarding the components. The second phase will rectify such issues and incorporate more advanced components as necessary.

Milestones:

YR1: Initial data gathering. Baseline descriptors, learning methods and static cue integration methods, leading to initial evaluation of the association assistant.
YR2: Learning with unlabelled data, temporal cue integration and dynamic attention mechanisms, leading to initial evaluation of the event interpreter.

YR3: Final data gathering. Optimised sparse and online learning, high-level descriptors, and learning and attention for interpretation, leading to final evaluation of the integrated demonstrators.

Field of science: natural sciences > computer and information sciences > artificial intelligence > computer vision; social sciences > sociology > social issues > social inequalities; social sciences > psychology > cognitive psychology

Programme(s): FP5-IST - Programme for research, technological development and demonstration on a "User-friendly information society", 1998-2002

Topic(s): 2001-4.2.1 - Real time distributed systems

Call for proposal: Data not available

Funding scheme: CSC - Cost-sharing contracts

Coordinator: XEROX - THE DOCUMENT COMPANY, 4, RUE NICOLAS ROBERT, 93607 AULNAY-SOUS-BOIS, France

Participants (8):
- CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE, 3, RUE MICHEL-ANGE, PARIS CEDEX 16, France
- IDIAP (FONDATION DE L'INSTITUT DALLE MOLLE D'INTELLIGENCE ARTIFICIELLE PERCEPTIVE), RUE DU SIMPLON 4, 1920 MARTIGNY, Switzerland
- INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE, DOMAINE DE VOLUCEAU, 78153 LE CHESNAY, France
- INSTITUT NATIONAL POLYTECHNIQUE DE GRENOBLE, 46 AVENUE FELIX VIALLET, 38031 GRENOBLE CEDEX 1, France
- LUNDS UNIVERSITET, PARADISGATAN 5C, 221 00 LUND, Sweden
- MONTANUNIVERSITAET LEOBEN, FRANZ-JOSEF-STRASSE 18, LEOBEN, Austria
- THE AUSTRALIAN NATIONAL UNIVERSITY, 0200 CANBERRA ACT, Australia
- UNIVERSITY OF SOUTHAMPTON, Highfield, SOUTHAMPTON, United Kingdom

EU contribution and total cost: data not available for the coordinator and all participants.