A challenging goal in cognitive vision research is the integration of quantitative and qualitative modes of representation: quantitative computer vision techniques for object recognition, tracking, motion analysis and so on, and qualitative spatio-temporal representations that abstract away from unnecessary detail, noise, error and uncertainty.
This project aims to develop a semantic and cognitive description of real-world scenes captured in the intelligent building where the Spatial Cognition Research Centre at Universität Bremen is located. Using the cameras and other sensors integrated into the building, efficient computer vision techniques will be applied to recognise objects and human poses. A qualitative model for describing scenes and spatio-temporal changes will then be defined in order to manage uncertainty and to apply qualitative reasoning for inferring further information. The resulting qualitative descriptions will be given ontological meaning for symbol grounding, providing 'scenario understanding' to software or robotic agents. To enhance human-machine communication, the qualitative and semantic descriptions will be translated into natural language, and a narrative description will be delivered to the end user for reading, or for listening to via a speech synthesizer. Finally, since a truly cognitive system must be able to learn from models built from sensor inputs, a framework for high-level (symbolic) learning of human-object interaction in temporal events will be designed.
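The step from quantitative detections to qualitative scene descriptions can be illustrated with a minimal sketch. The code below is not the project's actual pipeline; the relation names, the `Box` type, and the event labels are illustrative assumptions. It abstracts bounding boxes (a typical object-recognition output) into coarse RCC-style spatial relations, then maps a change of relation between two frames to a qualitative event label, showing how pixel-level detail is discarded in favour of symbols that reasoning and language generation can operate on.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Box:
    """Axis-aligned bounding box from an (assumed) object detector."""
    x1: float
    y1: float
    x2: float
    y2: float


def relation(a: Box, b: Box) -> str:
    """Coarse RCC-style qualitative relation between two boxes."""
    # No overlap on either axis: the regions are disconnected.
    if a.x2 < b.x1 or b.x2 < a.x1 or a.y2 < b.y1 or b.y2 < a.y1:
        return "disconnected"
    # One box fully encloses the other.
    if a.x1 <= b.x1 and a.y1 <= b.y1 and a.x2 >= b.x2 and a.y2 >= b.y2:
        return "contains"
    if b.x1 <= a.x1 and b.y1 <= a.y1 and b.x2 >= a.x2 and b.y2 >= a.y2:
        return "inside"
    return "overlapping"


def describe_change(rel_before: str, rel_after: str) -> str:
    """Map a pair of successive relations to a qualitative event label
    (labels are illustrative, not a standard vocabulary)."""
    if rel_before == "disconnected" and rel_after == "overlapping":
        return "approach-and-contact"
    if rel_before == "overlapping" and rel_after == "disconnected":
        return "separation"
    return "no-change" if rel_before == rel_after else "transition"


# Example: a hand box moving toward a cup box across two frames.
hand_t0, hand_t1 = Box(0, 0, 10, 10), Box(18, 0, 30, 10)
cup = Box(25, 0, 40, 10)
print(describe_change(relation(hand_t0, cup), relation(hand_t1, cup)))
# → approach-and-contact
```

Because many nearby quantitative configurations collapse onto the same symbolic relation, small detection errors that do not cross a relation boundary leave the qualitative description unchanged, which is precisely the robustness the abstraction is meant to buy.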