Remotely sensed imagery has become an invaluable tool for scientists, governments, and the general public to understand the world and its environment. Automatic content extraction and efficient access to that content are highly desired goals in building intelligent systems for effective processing of these images. This project aims to research automatic techniques for semantic processing of remotely sensed images, extraction of the relationships between them, efficient storage of these relationships in databases, improved access to the stored information, and comparison of new information against past cases. The proposed work draws on methods from the computer vision, pattern recognition, machine learning, database systems, and data mining areas.

Semantic processing of images will be done using a hierarchical decomposition. First, clustering and density estimation will be performed on pixel data, and the pixels will be classified using a fusion of the estimated densities for different features. Next, segmentation techniques will be used to group the pixels and divide images into spatially contiguous and meaningful regions. Then, the relative arrangements of these regions will be modeled using attributed relational graph structures that encode perimeter-based, distance-based, and orientation-based spatial relationships between regions. Data mining methods will be developed to find interesting patterns in image databases and track their evolution over time. Content-based retrieval techniques will be developed to enable semantic searches by finding matches between conceptually similar scenes. This work will enable the analysis of large image databases by their high-level semantic content rather than the limited representation supported by low-level feature vectors, and will make significant contributions to the image understanding and remote sensing image analysis research areas.
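As an illustration only, the first stage of the pipeline (per-pixel density estimation followed by classification through a fusion of the per-feature densities) could be sketched roughly as below. The abstract does not specify the density model or the fusion rule; this sketch assumes independent per-feature Gaussian densities fused by a naive-Bayes product (sum of log-densities), and all function names are hypothetical:

```python
import numpy as np

def fit_class_densities(features, labels):
    """Estimate an independent Gaussian density per feature for each class.
    Returns {class: (per-feature means, per-feature stds)}.
    The independence assumption is an illustrative simplification."""
    params = {}
    for c in np.unique(labels):
        X = features[labels == c]
        params[c] = (X.mean(axis=0), X.std(axis=0) + 1e-9)  # avoid zero std
    return params

def classify_pixels(features, params):
    """Fuse the per-feature Gaussian log-densities (naive-Bayes product rule)
    and assign each pixel to the class with the highest fused score."""
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, sigma = params[c]
        log_p = -0.5 * ((features - mu) / sigma) ** 2 - np.log(sigma)
        scores.append(log_p.sum(axis=1))  # fusion across feature dimensions
    return np.array(classes)[np.argmax(scores, axis=0)]
```

In a full system the fused per-pixel labels would then feed the segmentation step, which groups labeled pixels into contiguous regions before the graph-based spatial modeling.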