
Interpreting Drawings for 3D Design

Periodic Reporting for period 4 - D3 (Interpreting Drawings for 3D Design)

Reporting period: 2021-08-01 to 2022-07-31

Designers draw extensively to externalize their ideas and communicate with others. However, drawings are currently not directly interpretable by computers. To test their ideas against physical reality, designers have to create 3D models suitable for simulation and 3D printing. However, the visceral and approximate nature of drawing clashes with the tediousness and rigidity of 3D modeling. As a result, designers only model finalized concepts, and have no feedback on feasibility during creative exploration.

Our ambition is to bring the power of 3D engineering tools to the creative phase of design by automatically estimating 3D models from drawings. However, this problem is ill-posed: a point in the drawing can lie anywhere in depth. Existing solutions are limited to simple shapes, or require user input to "explain" to the computer how to interpret the drawing. The originality of our approach is to exploit professional drawing techniques that designers have developed to communicate shape most efficiently. Each technique provides geometric constraints that help viewers understand drawings, and that we leverage for 3D reconstruction.
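
As a minimal illustration of this ambiguity (the pinhole camera model and notation below are generic and used only for exposition, not taken from the project's publications), a drawn point constrains the corresponding 3D point only up to an unknown depth along the viewing ray:

```latex
% A point (u, v) in the drawing back-projects, under a pinhole camera with
% intrinsic matrix K, to a one-parameter family of 3D points: every depth
% d > 0 yields a point that projects back onto the same drawn point.
\[
  \mathbf{X}(d) \;=\; d \, K^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix},
  \qquad d > 0 .
\]
% Recovering d therefore requires additional constraints, such as the
% geometric constraints provided by professional drawing techniques.
```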

Our research allowed us to make several breakthroughs in 3D modeling from line drawings. Most notably, we introduced the first dataset of annotated professional design drawings, we proposed the first algorithms capable of automatically lifting these drawings to 3D, and we developed an interactive drawing system that automatically translates line drawings into editable Computer-Aided-Design models. We also contributed to emerging design technologies such as drawing in Virtual Reality and low-cost 3D printing of freeform surfaces. In addition to tackling the long-standing problem of single-image 3D reconstruction, our research significantly tightens the link between design and engineering for rapid prototyping.
These five years of research allowed us to better understand how designers draw, and to propose the first methods capable of automatically reconstructing 3D shapes from design drawings. Overall, we are happy to say that we achieved our initial goals to a large extent, as we proved that despite their inherent complexity and ambiguity, design drawings can be reconstructed because they follow specific drawing principles.

To better understand design drawing, we have collected a dataset of more than 400 professional sketches [Gryaditskaya et al. 2019]. We manually labeled the techniques used in each drawing, and we registered all drawings to reference 3D models. Analyzing this data revealed systematic strategies employed by designers to convey 3D shapes, which then inspired the development of novel algorithms for drawing interpretation. In addition, our annotated drawings and associated 3D models form a challenging benchmark to test these algorithms.

We proposed several methods to recover 3D information from drawings. A first family of methods employs deep learning to recognize what 3D shape is represented in a drawing. We applied this strategy in the context of architectural design, where we reconstruct 3D buildings by recognizing their constituent components (building mass, façade, window) [Nishida et al. 2018]. We also presented an interactive system that allows users to create 3D objects by drawing from multiple viewpoints [Delanoy et al. 2018, 2019]. Finally, we leveraged recent developments in natural language processing to propose a neural network architecture capable of parsing line drawings into sequences of Computer-Aided-Design commands [Li et al. 2020, 2022].
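
As a purely illustrative sketch of this drawing-to-CAD parsing idea (the architecture, token vocabulary, and hyper-parameters below are our own placeholders, not those of [Li et al. 2020, 2022]), a convolutional encoder can summarize the raster drawing into a set of features while an autoregressive decoder emits one CAD command token at a time:

```python
# Illustrative sketch only: a minimal encoder-decoder mapping a raster line
# drawing to a sequence of discrete CAD command tokens. All sizes, the token
# vocabulary, and the architecture are hypothetical placeholders.
import torch
import torch.nn as nn

NUM_TOKENS = 64     # hypothetical CAD command vocabulary (sketch, extrude, ...)
MAX_COMMANDS = 32   # hypothetical maximum program length

class DrawingToCAD(nn.Module):
    def __init__(self, d_model=256):
        super().__init__()
        # CNN encoder: raster drawing -> grid of feature vectors.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Autoregressive decoder: emits CAD command tokens one by one.
        self.token_embed = nn.Embedding(NUM_TOKENS, d_model)
        self.pos_embed = nn.Parameter(torch.zeros(MAX_COMMANDS, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, NUM_TOKENS)

    def forward(self, drawing, tokens):
        # drawing: (B, 1, H, W) raster drawing; tokens: (B, T) command ids.
        feats = self.encoder(drawing)              # (B, d_model, h, w)
        memory = feats.flatten(2).transpose(1, 2)  # (B, h*w, d_model)
        tgt = self.token_embed(tokens) + self.pos_embed[: tokens.size(1)]
        # Causal mask so each command only attends to previous commands.
        T = tokens.size(1)
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        out = self.decoder(tgt, memory, tgt_mask=mask)
        return self.head(out)                      # (B, T, NUM_TOKENS) logits

model = DrawingToCAD()
logits = model(torch.rand(2, 1, 128, 128), torch.randint(0, NUM_TOKENS, (2, 8)))
print(logits.shape)  # torch.Size([2, 8, 64])
```

Treating the CAD program as a token sequence is what lets tools from natural language processing, such as autoregressive decoding, carry over to drawing interpretation.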

The second family of methods leverages geometric properties of the lines to optimize 3D reconstruction. In particular, we exploit properties of developable surfaces to reconstruct sketches of fashion items [Fondevilla et al. 2017, 2021], and properties of construction lines to reconstruct human-made objects [Gryaditskaya et al. 2020]. More recently we leveraged symmetry to further improve the quality of these reconstructions [Hähnlein et al. 2022]. While our original focus was on 2D drawings, we extended our methodology to also consider 3D drawings in Virtual Reality [Yu et al. 2021], and to reconstruct a 3D surface from such drawings [Yu et al. 2022].
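
The following toy example conveys the spirit of this optimization-based family under strong simplifying assumptions: an orthographic projection of a single cuboid corner, with mutual orthogonality of its three edges as the only constraint. It is a hedged sketch with made-up input coordinates, not the algorithm of the cited papers:

```python
# Toy illustration only: recover per-vertex depths from a single 2D
# projection by enforcing a geometric constraint (the three edges meeting
# at a cuboid corner are mutually orthogonal in 3D). The input coordinates
# are a hypothetical isometric projection of such a corner.
import numpy as np
from scipy.optimize import least_squares

xy = np.array([[0.000,  0.000],   # corner vertex
               [0.000,  0.816],   # neighbour along edge 1
               [-0.707, -0.408],  # neighbour along edge 2
               [0.707, -0.408]])  # neighbour along edge 3
edges = [(0, 1), (0, 2), (0, 3)]

def residuals(z):
    # Lift each projected vertex to 3D using the unknown depths z.
    pts = np.column_stack([xy, z])
    vecs = [pts[j] - pts[i] for i, j in edges]
    res = [np.dot(vecs[a], vecs[b])           # orthogonality constraints
           for a in range(len(vecs))
           for b in range(a + 1, len(vecs))]
    res.append(z[0])                          # pin the corner's depth at 0
    return np.array(res)

sol = least_squares(residuals, x0=np.array([0.0, 0.1, 0.1, 0.1]))
print("recovered depths:", np.round(sol.x, 3))  # approx. [0, 0.577, 0.577, 0.577]
```

Even in this toy setting the depths are recovered only up to a global depth flip, one of the ambiguities that realistic reconstruction methods must resolve with additional constraints or conventions.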

A long-term goal of our research is to evaluate the physical validity of a concept directly from a drawing. We obtained promising results towards this goal for the particular case of mechanical objects. We proposed an interactive system where users design the shape and motion of an articulated object, and our method automatically synthesizes a mechanism that animates the object while avoiding collisions [Nishida et al. 2019]. The geometry synthesized by our method is ready to be fabricated for rapid prototyping. We also studied innovative fabrication techniques, in particular printing-on-fabric, which makes it possible to rapidly prototype freeform surfaces [Jourdan et al. 2020, 2022].
We were among the first to apply deep learning to the problem of line drawing reconstruction, which allowed our methods to reconstruct 3D shapes from as little as a single drawing. Our interactive systems illustrate the potential of this approach to offer a new and effective workflow where designers can seamlessly draw and navigate around a 3D shape. However, our solutions are currently only capable of reconstructing simple drawings.

The dataset we have collected contains drawings that are much more complex, for which we have developed tailored reconstruction algorithms based on discrete optimization of geometric constraints. An exciting direction of research is to combine our data-driven and optimization-based methodologies. A missing ingredient to do so is the ability to synthesize large quantities of complex design drawings for training machine learning algorithms, since such drawings would be too costly to collect from professional designers. We have proposed a first solution towards this goal, and showed that the resulting training data improves the performance of deep neural networks at predicting surface orientation from line drawings.
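
As a hedged sketch of how such synthetic training data could be consumed (the tiny network, cosine loss, and random placeholder tensors below are our own illustration, not the project's actual pipeline), a fully convolutional network can be trained to map a line drawing to a per-pixel surface-orientation (normal) map:

```python
# Illustrative sketch only: train a small fully convolutional network to
# predict per-pixel surface orientation (a 3-channel normal map) from a line
# drawing. The drawing/normal pairs here are random placeholders; in practice
# they would come from a drawing-synthesis pipeline.
import torch
import torch.nn as nn

class NormalPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),   # 3 channels = (nx, ny, nz)
        )

    def forward(self, drawing):
        # Normalize so each pixel's prediction is a unit normal vector.
        return nn.functional.normalize(self.net(drawing), dim=1)

model = NormalPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(10):  # placeholder training loop on random synthetic pairs
    drawings = torch.rand(4, 1, 64, 64)                                   # synthetic drawings
    targets = nn.functional.normalize(torch.randn(4, 3, 64, 64), dim=1)   # synthetic normals
    loss = 1.0 - (model(drawings) * targets).sum(dim=1).mean()            # cosine loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
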
(Figure: Automatic 3D reconstruction of a design drawing)