Many massive data sources now exist for large-scale 3D measurements in the form of raw point clouds or polygon soups. Such raw data, however, have to be cleaned, consolidated, and semantically annotated before they can be consumed by downstream applications (e.g. augmented reality, procedural modeling, energy simulation, etc.). This manual step is a major processing bottleneck that imposes significant economic and efficiency costs. For example, state-of-the-art workflows in the location-based services and urban planning industries easily take 2-3 person-days to produce accurate semantic labels for small spaces (500-1500 sqm), or several person-weeks for a few blocks of street (2-5 sqkm). This makes large-scale semantic annotation infeasible, and hence the majority of raw scan data archives remain under-utilized.
The proposed "Proof-of-Concept" SemanticCity aims to pre-commercialize a novel software suite, built on algorithms developed in the course of the ERC-funded SmartGeometry project, for automating the generation of semantically structured models from raw 3D scans of cities, enabling next-generation augmented reality (AR), urban planning, and semantically tagged location-based services. SemanticCity, in partnership with a major European AR company, a large UK-based city developer, and London local councils, will address this challenge by automatically producing structured output directly from raw 3D measurements, reducing computation times from days/weeks to hours, and evaluating the structured output via the AR City app on multiple large-scale datasets. As case studies, we will work with governmental agencies and identified partner companies to meaningfully exploit their large, currently under-utilized archives of 3D measurements.