Increasing interest in modeling and reconstruction of digital urban scenes may largely be attributed to recent advances in scanning technology and the proliferation of GIS services such as those offered by Microsoft Virtual Earth or Google Earth. We are thus witnessing a strong trend towards reconstruction and visualization of urban scenes based on satellite photography combined with street-level and aerial laser scanners. Nevertheless, scanned point clouds are typically of low quality and exhibit significant missing data (due to occlusion), as well as uneven point density, noise and outliers.
In this project I plan to research techniques for efficient denoising, reconstruction and visualization of 3D urban scenes. This is a pioneering effort towards the goal of generating 3D digital cities from LiDAR scanned data.
My first objective is to develop an interactive tool for reconstructing complex structures and fine details from scanned urban scenes. The interaction paradigm borrows from a sheet-layout metaphor: a thin elastic sheet laid over 3D terrain deforms to capture the terrain's structure. This will be achieved through a set of interaction operations, allowing computation of mesh surfaces that accurately reconstruct complex urban buildings.
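The elastic-sheet idea can be illustrated with a minimal numerical sketch: a regular grid of heights is repeatedly smoothed (the sheet's elasticity) while being pinned to the scanned samples (the data term). The function name, parameters, and the periodic boundary handling via `np.roll` are illustrative simplifications, not the project's actual method, which is interactive and mesh-based.

```python
import numpy as np

def fit_elastic_sheet(heights, mask, stiffness=0.5, iters=500):
    """Fit a grid 'sheet' to sparse height samples.

    heights : 2D array of target heights (valid where mask is True)
    mask    : boolean 2D array, True where a scan sample exists
    stiffness : weight of the Laplacian smoothing (elasticity) term
    """
    z = np.zeros_like(heights, dtype=float)
    z[mask] = heights[mask]
    for _ in range(iters):
        # Average of the 4-neighbours; np.roll wraps at the borders,
        # a simplification standing in for proper boundary conditions.
        nb = (np.roll(z, 1, 0) + np.roll(z, -1, 0)
              + np.roll(z, 1, 1) + np.roll(z, -1, 1)) / 4.0
        # Elasticity: relax each grid point toward its neighbours.
        z = (1 - stiffness) * z + stiffness * nb
        # Data term: pin the sheet to the observed scan samples.
        z[mask] = heights[mask]
    return z
```

The interplay of the two terms is the essence of the metaphor: the smoothing step makes the surface behave like a sheet, while re-imposing the samples each iteration drapes it over the scanned geometry; missing regions are filled by the sheet's elasticity.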
In my second objective, I plan to explore and fuse two modalities: 2D photographs and 3D scans of urban buildings. After accurately registering the two modalities using feature correspondence, I plan to use domain-specific knowledge so that each modality enhances the other.
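Registering a photograph to a 3D scan amounts to estimating the camera's projection from 2D-3D feature correspondences. A standard, self-contained building block for this is the Direct Linear Transform (DLT); the sketch below is a generic textbook formulation, not the project's registration pipeline, and assumes at least six noise-free correspondences are given.

```python
import numpy as np

def dlt_projection(pts3d, pts2d):
    """Estimate a 3x4 camera projection matrix P (up to scale)
    from >= 6 2D-3D correspondences via the Direct Linear Transform."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        # Each correspondence contributes two linear equations in P.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The null vector of A (last right-singular vector) is P up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, pts3d):
    """Project 3D points with P and dehomogenize to pixel coordinates."""
    Xh = np.c_[pts3d, np.ones(len(pts3d))] @ P.T
    return Xh[:, :2] / Xh[:, 2:3]
```

In practice the correspondences come from matched image and scan features, and the DLT estimate is typically refined by nonlinear reprojection-error minimization; once the photo is registered, its colors and edges can be transferred onto the scan and vice versa.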
My third objective is visualization of, and navigation through, 3D digital cities. State-of-the-art navigation systems such as Google Street View and Microsoft Streetside enable users to visit cities virtually by navigating static 2D photos. My goal is to explore ways of navigating through complex 3D environments, and their integration with 2D maps, by optimizing a set of terms such as camera speed and view direction.
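Optimizing a camera path over such terms can be sketched with a simple gradient scheme: starting from user- or map-derived waypoints, each iteration pulls the path toward the waypoints (fidelity) while penalizing sharp direction changes (smooth speed and view direction). The function, its parameters, and the specific energy terms are illustrative assumptions, not the project's navigation system.

```python
import numpy as np

def smooth_camera_path(waypoints, alpha=0.1, beta=0.3, iters=200):
    """Smooth a camera path by gradient descent on two terms:
    alpha * (fidelity to the input waypoints)
    beta  * (smoothness: the discrete Laplacian of the path).
    Endpoints are kept fixed; only interior samples move."""
    w = np.asarray(waypoints, dtype=float)
    p = w.copy()
    for _ in range(iters):
        p[1:-1] += (alpha * (w[1:-1] - p[1:-1])
                    + beta * (p[:-2] + p[2:] - 2 * p[1:-1]))
    return p
```

Raising `beta` relative to `alpha` yields gentler, more cinematic motion at the cost of deviating from the waypoints; additional terms (e.g., preferred view direction or obstacle clearance) would enter the update the same way, as further gradient contributions.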