
An innovative low-cost and flexible 3D scanning system

Final Report Summary - NAVOSCAN (An innovative low-cost and flexible 3D scanning system)

Executive Summary:
NavOScan’s objective is to enhance the current state-of-the-art white-light 3D scanner technology to produce automated 3D scans which spare the end user from manually manipulating the scanned data. Requiring the user to manually manipulate the scanned data can take up to 75% of the whole 3D scanning process. The impact of an automated 3D matching process is significant time and cost savings for 3D scanning workflows that use mobile white-light 3D scanners. NavOScan is a 3D scanning system which enables this automated process. A navigation unit provides position and orientation information of the 3D scanner during the whole scanning process. To this end, optical and inertial sensors are used in combination. In addition, the 3D matching and scanning process is adapted to handle position and orientation information of a certain accuracy. The navigation unit provides a constant accuracy of 1 cm and 0.1°. The 3D scanning system is based on the Fraunhofer “kolibri cordless” system.
Project Context and Objectives:
The NavOScan project is particularly relevant to our SME partnership and the wider community of SMEs, since we need to innovate to survive increasing globalization and the intensification of competition.
NavOScan addresses a specific and important need affecting a wide range of industrial manufacturing companies. The solution it provides will, we believe, be attractive to all nations. Thus, through our consortium, we will together be able to increase the EU’s global competitiveness against the currently dominant nations.


Competition for the European market sector comes from three main fronts:
• US market dominance: around 60% of all companies that produce 3D scanners come from North America
• Software products for 3D scanners are mainly developed in North America by a few large companies
• A market analysis showed that competitors from the US have introduced products like Polhemus’ “fast scan” and Creaform’s “handyscan”. These 3D scanner devices target markets similar to those of the NavOScan project.
The NavOScan project will allow the SMEs to compete on a global basis and to move to higher-value-added products and services, which will compensate for the global competitive pressure. It will also improve the competitive position of European manufacturers against Asian competitors through cost reductions in the product design cycle.


White-light scanners used for scanning large volumes have existed for quite a long time. They yield accurate results with scans that are meant to record data points without losing continuity. The major part of the measurement process is handled by the software, which is customised for different end-user applications. The principle at work in a white-light scanner is triangulation and, after data capture, the software uses high-speed point cloud processing. Applications include polygon model generation, surface reconstruction in research (e.g. cultural heritage applications), and production inspection processes. The key end-user industries for this segment are the aerospace, automotive and other manufacturing-intensive industries, including locomotive, shipbuilding and farm vehicle manufacturing. Current 3D scanners are specialized devices which cover only a narrow band of applicable scanning procedures because of their scan range and static installation. The aim of the proposed 3D scanner solution is to provide a flexible scanning solution. Current analysis of market needs shows that the industry does not need something more accurate; it needs something that is easier for everybody to use while still achieving the same level of accuracy.


In many companies a Quality Assurance policy is established in order to guarantee the best-quality
products and technical support for their clients. In this context, in-process inspections must cover all levels of production to meet the requirements of ISO 9000, which can be supported by the NavOScan technology.
The key advantages for the usage in a manufacturing process are:
• Quickly capture all of the physical measurements of any physical object
• Save time in design work
• Ensure parts will fit together on the first try
• Capture engineering optimizations inherent in manufactured parts
• Utilize modern manufacturing on parts that were originally manufactured before CAD
• Compare "as-designed" model to "as-built" condition of manufactured parts


The overall objective of the NavOScan project is the technological revision of an existing scanning device to provide a user-friendly, flexible and cost-saving 3D scanning device which does not lack measurement accuracy compared with cost-intensive competitors. This will be achieved by the implementation of an additional low-cost navigation unit, an automated 3D surface matching process and an interactive scanning guidance.


The scientific objectives are those relating to issues of accuracy, flexibility and automation of the 3D scanning process.
We intend to work on the following questions:
• Guidance strategies for the scanning process
• Fusion of visual and inertial navigation information
• Long- and short-term navigation
• Optimal accuracy compensation during the scanning mode with optical motion information
• The best form of feedback to the user during 3D scanning
• Real-time processing of the 3D matching process
• Making the 3D matching process robust
• Situation detection for optimal automated scanning
This enhanced knowledge will be attained in the first part of the project using the protocol described in work packages 1 to 4.


To acquire the science to enable the development of the first flexible portable white-light scanning device that will provide:
• A fully automated surface matching process
• Constant accuracy of 0.1 mm for surface scanning
• Navigational parameters: orientation 0.1° and position 1 cm
• A guided scanning process so that the user of the device can verify the scan coverage in real time
• Real-time processing of the visual and orientation motion estimation
• Real-time visualization of the pre-matching result


• Gain significant market share to compete with the USA and other major 3D scanner manufacturers, displacing at least 2.5% of the estimated €407M p.a. global scanner market and allowing the group of SMEs to gain a footing in the global market place through the development of the new scanning device.
• Better implementation of Six Sigma and ISO 9000 standards during the product design and manufacturing procedure
• Obtain for the SME partnership an additional 5% of sales, worth an estimated €21 million per annum ten years post-project, including the anticipated compound growth rate of 16% per annum. We estimate that, based on an average manufacturing cost for the 3D scanner of €15,000 with an average profit of €10,000, we will increase the profit of the SMEs involved by €1.5 million per annum.
• The increase in turnover will have the effect of increasing employment within the SME companies, based on 1 person per €140,000, by an estimated 150 jobs (€21M / €140,000 = 150) after the 10-year period.


To achieve the societal and economic objectives that come from the dissemination and exploitation of the research results we have defined an enabling set of objectives.
To enable innovation through the project team and to benefit Europe the objectives are:
• To collate and prepare the results of the project into a suitable format and apply for patent protection of the results
• To transfer knowledge from the RTD performers to the SME participants through three technology transfer events and interactions. This will result in one secondment and placement of two staff providing a total of 30 hours of technology transfer.
• To disseminate the results and benefits of the knowledge and technology developed beyond the consortium to potential users such as healthcare/surgical, dental, veterinary, scientific and specifically:
• 100 SME companies from the automotive, robotics, moulding and scientific sectors will be contacted to promote the project results.
• Two trade or sector-specific shows will be attended; these will include VISION and CONTROL.
• Four SMEs engaged in detailed knowledge or technology transfer by the end of 2015.
• 10 licensees to adopt the results in the generation of new products or systems by the end of 2015.


• Fewer unemployed people because of lower time costs for production design
• A higher employment rate
• A higher production design rate
• A secondary market
• Lower product costs because of lower engineering costs
• Application of NavOScan in the medical sector to provide faster recording processes

Project Results:
In order to deliver the work within the NavOScan project, each objective was broken down into a separate work package. In the following section a summary of each work package is given.


The first work package of the NavOScan project was meant to derive the project requirements and specifications for the development of the overall system. The system specifications and requirements describe the market demand for the use cases in industrial manufacturing processes. We used a combination of literature, expertise-driven research and simulation to derive detailed specifications for the 3D scanner hardware and software modules.


The first task was the determination of the specifications of the NavOScan project for the demanded application cases of the 3D scanner. The 3D scanner systems will be used in archaeology, forensics, the medical area and industrial production. The application cases were examined in detail and requirements were derived. For this, the most important specifications and requirements of the 3D scanner system were collected to guide the development of the navigation system. The navigation data will be used to pre-align the 3D surface data in the world coordinate system for the global automated fine-alignment step. We defined the development toolchain for the embedded programming in work package 1. It is based on a Xilinx Spartan-6 FPGA module, and firmware was developed that satisfies all the embedded requirements and could be used for the HDL programming of the FPGA. In the next step, the requirements and specifications were fixed for the development of the navigation unit. For this, test measurements were carried out in the laboratory. With these we evaluated the requirements for the visual and the inertial sensors. It can be summarized that wide-angle optics (2-3 mm focal length) and an inertial measurement unit with a maximum of 150°/s and ±1 g are adequate. Furthermore, disturbing influences which can occur in the measuring surroundings were evaluated. For the development of the navigation unit’s algorithms, the requirements on the measured movements and surrounding structures were derived. For the execution of the navigation algorithms, an examination of the necessary hardware was carried out. The system will consist of a multi-core PC and an external FPGA. We conducted a study to evaluate the present state of the art of scientific approaches for the navigation task. The result is the use of a keyframe-based approach.
Its core functionality is to calculate the motion information from sequential image data with a parallelized map-making and optimization process.


The objective of work package 2 is the investigation of the use of navigation information for the post-processing of the 3D scanning data. For this, a concept for the 3D surface registration method will be evaluated and implemented. The derived method will be used to pre-align the 3D surface scans with the 3D position and orientation measured by the navigation unit. Furthermore, the derived concepts will be evaluated in a post-processing environment. The investigation will be done with navigation data simulated by an optical motion tracker. The requirements for the navigation unit’s accuracy and resolution will be updated according to the results of the evaluation.

(see Figure 1 in Attached final report document)


We evaluated different 3D surface registration methods on the basis of existing knowledge of Simpleware and Fraunhofer IOF in the field of 3D surface data processing. For this, the current state of the art of surface registration was investigated. A custom navigation data usage concept was developed (see Figure 1). The registration is done in two steps: coarse and fine registration. Using the pose information of the navigation unit, the 3D surface scans are roughly aligned together. A fine registration algorithm is then necessary which has the ability to align the 3D scans without manual interaction by the user. The prominent and widespread method for fine registration of parts of a surface into the entire surface is the Iterative Closest Point (ICP) algorithm. This concept is used as a basis and is adapted to handle outliers, constrained multi-patch registrations and high-speed real-time processing. The navigation data is used to solve the difficult initialization problem of the ICP algorithm by providing a coarse registration. This has been confirmed by experiments. Furthermore, the navigation data enables easy selection of overlapping patches. This improves robustness and speeds up the registration calculation by decreasing the number of necessary ICP iterations. Finally, by constraining the registration parameter space to the expected navigation uncertainty, misalignments can be avoided, further increasing robustness. Next to the registration algorithm we developed the concept for navigation data usage. Visual feedback supports the user in determining the next best view. When the user starts a scan, the current navigation information is used to calculate a coarsely registered 3D scan. In the first phase of fine registration, using pairwise ICP registration, the current scan is aligned sufficiently finely for coarse 3D model creation.
Optionally, a correction vector can be fed back into the navigation unit in order to compensate for potential accumulated navigation errors. Tests with simulated navigation data were conducted. The results proved the concept. From the results we could derive and commit the requirements and specification of the navigation unit needed to solve the fine registration problem. The ICP-based fine registration was developed and implemented as a C++ library. As a module it is possible to integrate the developed software into the tool chain of the SME partners. The algorithm was evaluated to work sufficiently well with a navigation data accuracy of 1 cm and 0.1°.
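The two-step procedure described above (coarse pre-alignment from the navigation pose, followed by ICP fine registration) can be sketched in a few lines. The following Python sketch is purely illustrative and is not the project’s C++ module; the roll/pitch/yaw pose convention and the basic point-to-point ICP variant are assumptions made for the example.

```python
import numpy as np

def pose_to_matrix(position, rpy):
    """Build a 4x4 rigid transform from a navigation pose
    (position in metres, roll/pitch/yaw in radians)."""
    cr, cp, cy = np.cos(rpy)
    sr, sp, sy = np.sin(rpy)
    R = np.array([[cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
                  [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
                  [-sp,     cp * sr,                cp * cr]])
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, position
    return T

def icp_refine(source, target, T_init, iters=20):
    """Refine the coarse navigation alignment T_init with point-to-point ICP."""
    src = (T_init[:3, :3] @ source.T).T + T_init[:3, 3]   # coarse pre-alignment
    T_total = T_init.copy()
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force for clarity)
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        tgt = target[d2.argmin(axis=1)]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        H = (src - mu_s).T @ (tgt - mu_t)                 # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = (R @ src.T).T + t
        T_step = np.eye(4)
        T_step[:3, :3], T_step[:3, 3] = R, t
        T_total = T_step @ T_total
    return T_total
```

Because the navigation pose places the scan within roughly 1 cm and 0.1° of the true pose, the nearest-neighbour correspondences are almost all correct from the first iteration, which is exactly why the coarse pre-alignment makes the fine registration robust and fast.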


The goal of this work-package is the conception of the navigation unit with software and hardware. The system consists of a visual sensor and an inertial sensor unit. It shall be able to track the 3D orientation and the 3D position in an unknown environment. Therefore the fusion strategies of these both sensor systems and their algorithms need to be investigated. The sensors will be evaluated for the navigation task and a sensor built-up for sensor data acquisition will be conducted. The visual navigation algorithm will be developed and investigated in respect to the treatment of errors and adaption strategies. The last step in this work package is the validation of the developed concepts with test measurements.


At first we developed the principal fusion architecture of the visual-inertial motion tracker. In this case the application is not real-time sensitive. We decided to use an optimization filter for the fusion of both sensor types. Therefore both sensors are processed with separate and independent motion estimation algorithms. The frequency of the IMU system is much higher than the frequency of the camera. The generated state of the camera and all IMU states generated after the last camera frame are fused using an optimization algorithm. The conditional probabilities for each state are modeled as Gaussian variables and are derived as adapted confidence parameters for the IMU and the camera system. We investigated different situations that can cause state estimation errors, and their treatment. These errors are used to develop adaptation strategies for their treatment, in particular the effects due to errors which occur in the visual feature detection and matching algorithm. The conclusion is an intensive use of RANSAC-based estimation methods for the visual motion estimation to test different hypotheses, such as epipolar constraints or the reprojection error. The next step in the project was the development of an experimental sensor platform and the evaluation of the chosen sensors. At this time in the project we used the IMU SD746 from SensorDynamics and the visual sensor MT9V032STM from Aptina. Both sensors meet the requirements for the measurement environment. The Aptina sensor’s strength is its 120 dB logarithmic response, which enables the measurement of higher contrasts. The sensor platform synchronizes the sensor systems and is mounted on a 3D scanner housing mock-up with an LED projector. The sensors were intensively studied for the upcoming integration into the FPGA board system in work package 6, including the sensor performance and the data acquisition. 3D scanning test scenarios were defined for the validation of the NavOScan system.
This included the build-up of a laboratory with a motion capturing system and a motion reference (robot manipulator) to conduct referenced test trials. The main part of the work package was the development of the inertial and visual motion estimation algorithms. The IMU motion estimation is based on the Fraunhofer IPA software library and was extended to position measurement. We developed strategies for the adaptation of the extended Kalman filter. It consists mainly of online calibration of the acceleration sensors and improved zero-velocity updates with position and orientation correction. The visual motion estimation algorithm is based on the well-known PTAM approach with parallel mapping and localization. We adapted this approach to use a-priori information from the IMU motion measurement, long-term stable visual features and continuous map refinement during the map creation process. The basic algorithm was investigated and then programmed in Matlab using existing SURF features. We investigated several approaches for the mathematical relation between camera motion and visual feature motion on the image plane and decided to use the epipolar constraint, which is well tested in the literature. Next to this we investigated the initialization procedure of the overall algorithm. The well-known BRISK feature detector was chosen as the best solution to provide long-term stable features with real-time performance. We compared the feature detector to SURF, SIFT, BRIEF, ORB and SU_BRIS on sample images, where the BRISK detector showed significant advantages under changes of viewpoint and illumination. The Hamming distance matcher is used as the matching algorithm. The evaluation of the developed algorithms in work package 3 showed that the approaches fit the requirements derived in work package 1.
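The principle behind fusing the camera and IMU state estimates, each modeled as a Gaussian with its own confidence, can be illustrated with a minimal precision-weighted fusion sketch. This is not the project’s optimization filter; it only shows the underlying idea that the more certain estimate dominates the fused state, and all variable names are chosen for the example.

```python
import numpy as np

def fuse_gaussian(x_imu, P_imu, x_cam, P_cam):
    """Fuse two independent Gaussian estimates of the same state
    (information-filter form): weight each by its inverse covariance."""
    I_imu = np.linalg.inv(P_imu)            # precision of the IMU estimate
    I_cam = np.linalg.inv(P_cam)            # precision of the camera estimate
    P = np.linalg.inv(I_imu + I_cam)        # fused covariance (never larger)
    x = P @ (I_imu @ x_imu + I_cam @ x_cam) # precision-weighted mean
    return x, P
```

In the described architecture the IMU runs at a much higher rate, so in practice the IMU states accumulated since the last camera frame would be propagated forward and then corrected by a fusion step of this kind whenever a camera pose arrives.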


Develop and document the communication interface between the preliminary prototype of the 3D scanner device and the navigation unit. This comprises the development of the 3D scanner software, including the production of the software interface to the navigation unit, a data transfer interface between the cameras and the navigation unit, the development of the camera control software and the testing of the developed software. In addition, produce a concept for the slip-on adapter unit, which will carry the navigation unit, including the technical drawings of the slip-on adapter unit.


In order to maximize flexibility in the software development for the 3D scanner software and the navigation unit, an interprocess communication (IPC) approach was chosen. It works via DDE or an internal TCP/IP interface. An existing IPC approach from the open-source LGPL-licensed wxWidgets library was used as the base framework. That solution has the added benefit of offering a high-level programming model, whereas the underlying data exchange can be transparently changed from DDE to TCP/IP. Three main C++ classes have been devised to incorporate the interface. All source code and documentation have been uploaded to the repository created for the project by the project coordinator. These classes provide functions such as server, client, data handling and event management to fulfil the functionality of an interface between the navigation software, the navigation sensor network and the 3D scanner software on the same processing unit. In addition, a calibration function class was developed. It implements the transformation of the navigation unit’s pose to the 3D scanner’s pose. This can be achieved through the so-called Hand-Eye-Transformation (HAT).
The software was tested in a separate testing framework. It enables standalone testing and verification of the modules. This has been performed using DDE and TCP/IP. Tests of the implementation without a functional navigation unit showed the principal performance of the programmed modules. In preparation for the connection of the navigation unit, three main tasks were conducted. First, the electronic connections necessary for the navigation unit were defined and provided within the existing 3D scanner prototype. Second, the 3D sensor was adapted to be able to carry the navigation unit without mechanical instabilities that would degrade measurement quality. This also includes the adaptation of a new projector in order to improve the thermal stability of the system. Third, the mechanical interface was developed in order to enable the consortium to quickly start the navigation unit’s development while ensuring mechanical compatibility with the existing 3D sensor.
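The calibration class’s task, mapping the navigation unit’s pose to the 3D scanner’s pose via the hand-eye transformation, amounts to composing rigid transforms. The following minimal sketch illustrates this; the name T_nav_scanner for the fixed calibration transform is a placeholder for the example, not the project’s C++ interface.

```python
import numpy as np

def invert_T(T):
    """Invert a rigid 4x4 transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3], Ti[:3, 3] = R.T, -R.T @ t
    return Ti

def scanner_pose(T_world_nav, T_nav_scanner):
    """Map the navigation unit's measured world pose into the 3D scanner's
    world pose by composing with the fixed hand-eye calibration transform."""
    return T_world_nav @ T_nav_scanner
```

The calibration transform itself is constant because the navigation unit is rigidly mounted on the scanner housing; it is determined once (the classical hand-eye calibration problem) and then applied at every frame.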


This work package is the primary stage for the development of the navigation unit hardware, which is based on an FPGA board, an optical sensor and an inertial measurement unit. A concept must be developed to interface the sensors physically and in software to the FPGA board. Additionally, the concept for the FPGA software will be developed. The main processing unit will communicate with the FPGA boards for the sensor data acquisition.
On the basis of the conception follows the implementation of the software interfaces on the FPGA side and the main processing unit side. A validation of the sensor network is the last step of the work package.


The sensor network consists of the IMU, the optical sensor and the FPGA board. Acquisition of sensor data, preprocessing and providing the data to the main processing unit are the main functions of the FPGA boards. We chose an FPGA because the image processing performance is much better than in single-core systems like DSPs or CPUs. A concept of the sensor network was drafted, including the definition of hardware interfaces, the data acquisition concept and the software interface concept. The FPGA board is the Mars MX2 module developed by partner Enclustra. It is connected via Ethernet to the main processing unit (see Figure 2a). For this we developed a special GigE-based Ethernet protocol which transports the IMU data, the image data and trigger signal events. This streaming protocol is completely documented. For acquiring the IMU data we use the SPI interface. A parallel interface is connected to the optical sensor. According to the complete draft of the sensor network we started to define the functional blocks of the HDL code for the FPGA based on the UDP software core from partner Enclustra. The implementation of the HDL code interface was conducted. The implementation of the software interface on the main processing unit in C++ (see “NSPInterface” in Figure 2b) was challenging, since a complete high-performance GigE-based camera interface had to be implemented. The interface had to be debugged in parallel with the evolving FPGA firmware, further complicating the development. Finally it was shown in a validation step that the interfaces deliver the data according to the specification of the interface. Data loss of packets was investigated and the cause was found in the TCP/IP stack. Adaptations were therefore made to lower the data loss to a sufficient minimum. The remaining data loss does not influence the performance of the further data processing.
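The streaming protocol itself is documented in the project deliverables. As an illustration of the general idea only, a record-oriented framing that multiplexes IMU data, image data and trigger events over one byte stream could look like the sketch below; the header layout (type byte, microsecond timestamp, payload length) is hypothetical and is not the NavOScan protocol.

```python
import struct

# Hypothetical framing: 1-byte record type, 8-byte timestamp in microseconds,
# 2-byte payload length, all big-endian, followed by the payload bytes.
HEADER = struct.Struct(">BQH")
TYPE_IMU, TYPE_IMAGE, TYPE_TRIGGER = 0x01, 0x02, 0x03

def pack_packet(ptype, timestamp_us, payload):
    """Frame one sensor record for the stream to the main processing unit."""
    return HEADER.pack(ptype, timestamp_us, len(payload)) + payload

def unpack_packet(data):
    """Parse one framed record back into (type, timestamp, payload)."""
    ptype, ts, n = HEADER.unpack_from(data)
    return ptype, ts, data[HEADER.size:HEADER.size + n]
```

Carrying an explicit timestamp per record is what allows the main processing unit to relate the high-rate IMU samples, the camera frames and the trigger events to a common time base despite arriving interleaved on one Ethernet link.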

(see Figure 2 in Attached final report document)
Figure 2: a) The sensor network architecture. b) The interfaces between the 3D scanner, the navigation unit software and the sensor network.


The objective of this work package was to implement a primary prototype that incorporates all the functions discussed in WP1. The software was implemented based on the developed embedded processing architecture. This includes the development of the image processing architecture for implementation on the FPGA. Furthermore, software was developed for the main processing unit to perform sensor data pre-processing, the visual-inertial navigation and the inter-process communication between the software modules on the main processing unit. A validation step was then performed to test the performance of the visual-inertial motion tracker and the overall NavOScan system. Updates to the existing 3D scanner were then made to aid its integration. Lastly, this work package also includes the design of the scanner’s housing as a cover for the sensor devices and the embedded processing units.


The NavOScan software development in this project has three main parts: the HDL programming on the FPGA, the C++ programming on the main processing unit and the adaptation of the existing software of the 3D scanner. Before the implementation of the HDL code we conducted a structural analysis of the necessary algorithm architecture for the parallelization of the chosen image processing algorithm. For this, the BRISK and ORB feature detectors evaluated in work package 3 were compared regarding efficiency, structure and suitability for parallelization. BRISK turned out to be the better algorithm because of its scale pyramid and sub-pixel localization performance. The nature and complexity of the algorithm led to the assignment of the BRISK detector to the FPGA unit, with the subsequent processing steps performed on the main processing unit. The determining factor was the existing knowledge about implementing the FAST detector on the FPGA, which is a preprocessing step of the BRISK detector. The parallel BRISK algorithm consists of the following architectures: a delay line, a pixel threshold comparator and a corner detector. An analysis has shown that the FAST detector and the other FPGA firmware components use just 16% of the FPGA resources. The theoretical minimum processing time of the FAST corner detection on the FPGA is about 77 µs per frame, roughly 100 times faster than the frame period of the optical sensor. We had a partner change at this stage of the project: SensorDynamics left the consortium because it was acquired by another company. Partner NIT took their place and brought a new optical sensor to the NavOScan project. This HDR CMOS sensor has the ability to produce consistent images under uncontrolled illumination conditions at a fixed exposure time. This is due to the dynamic range of the sensor based on NIT’s technology. With it, more reliable feature measurements are possible.
The parallel interface from the optical sensor to the FPGA was programmed in HDL. The acquisition of SensorDynamics also resulted in the discontinuation of the IMU which had been chosen for NavOScan. The EPSON IMU was therefore chosen as the best replacement regarding accuracy and measurement frequency, and the FPGA was adapted to use this sensor. The Ethernet protocol to the main processing unit was also adapted to the sensor changes. It was shown that the performance of the sensor system is sufficient for the NavOScan application. The next step in the project was the development and implementation of the software on the main processing unit. The NavOScan framework is defined as a modular software framework with independent components, including the 3D scanner software, the sensor, scanner and module interfaces and the visual-inertial motion tracker. The motion tracker itself is separated into class-based independent modules which represent core functions. All functions in the software are programmed on the basis of the concept from work package 3. The feature detection and matching module is programmed using the OpenCV BRISK and ORB implementations. The visual navigation is based on OpenCV and other license-free libraries which provide basic image processing functionality. The inertial navigation implementation is based on the Fraunhofer IPA software library. The motion tracker software framework represents all necessary states: initialization, re-initialization and motion tracking. The sensor network and 3D scanner interface handles the data transfer between the motion tracker, the visual-inertial sensors and the 3D scanner. Next to this we performed a validation and performance analysis of the written software. The feature detection and matching was validated on single images, and parameter settings were evaluated to adapt the OpenCV BRISK implementation to the image contrast.
The implementation of the IMU-based orientation and position estimation was validated to perform within the specification of 1 cm / 0.1° accuracy for at least 2 seconds. The last step in this work package was the hardware build-up of the sensor system and the modifications to the 3D scanner. For this, the housing for the sensor system was designed and printed with rapid prototyping to fit on the 3D scanner. A special connector board was designed and built to combine the NIT sensor with the FPGA board. All parts were assembled and proven to fit into the housing connected to the 3D scanner (see Figures 3 and 4). The connectors to the main processing unit are integrated into the 3D scanner housing. With this result we have a fully integrated hardware functional model for demonstration purposes.
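As a software reference for the behaviour implemented in the FPGA firmware, the FAST segment test at the heart of the BRISK front end can be expressed on the CPU in a few lines. This sketch is a simplified sequential model (no HDL, no delay line), with the threshold and arc length given as example parameters.

```python
import numpy as np

# The 16-pixel Bresenham circle of radius 3 used by FAST, as (dx, dy) offsets.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, y, x, t=20, n=9):
    """Simplified FAST-n segment test: (y, x) is a corner if n contiguous
    circle pixels are all brighter than center + t or all darker than
    center - t."""
    c = int(img[y, x])
    ring = np.array([int(img[y + dy, x + dx]) for dx, dy in CIRCLE])
    for sign in (1, -1):                       # brighter arc, then darker arc
        flags = sign * (ring - c) > t
        run, best = 0, 0
        for f in np.concatenate([flags, flags]):  # doubled for wrap-around
            run = run + 1 if f else 0
            best = max(best, run)
        if best >= n:
            return True
    return False
```

On the FPGA the same comparisons run for every pixel in parallel as the image streams through the delay line, which is why a full frame can be classified in the order of tens of microseconds.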

(see Figure 3 in Attached final report document)
Figure 3: Assembled Navigation unit system with opened top-cover

(see Figure 4 in Attached final report document)
Figure 4: Navigation unit mounted on top of the FhG IOF 3D scanner “kolibri CORDLESS”.


The task of work package 7 is the analysis of the system performance. In detail, this comprises the analysis of the processing performance of the whole scanner system with integrated navigation and a review of the software with respect to these results. In addition, an investigation of the navigation unit performance with variable measurement environment parameters is foreseen. The functionality of the 3D matching algorithm with variable surface shapes was also investigated. The mechanical housing of the navigation unit will be tested with respect to the requirements from WP1.


The visual motion tracker was validated using simulated data. This validation process was performed using the 3D scanner and the IMU sensor system from WP3. The visual motion tracker was evaluated in regards to varying feature quality, amount of features and camera motion. The results showed that the visual motion tracker needs a minimum of 30 successfully correlated features to estimate the position and the orientation with an accuracy of 1 cm and 0.1°. The process of matching features to a corresponding 3D map was also evaluated and it was concluded that this process’ re-projection error needs to be lower than 10^ (-2) pixels.
Three scanning tests were performed using objects such as a bust and a stone plate. These tests enabled the validation of the IMU based motion tracker during the scanning procedure. The result of testing the motion tracker without the optical sensor showed that the IMU can provide sufficient accuracy over the specified time but not beyond it. This concludes that the visual measurement system is necessary to fulfil the navigation task.
In addition, tests were performed to evaluate the 3D measurements from the 3D scanner. These tests were also performed using the 3D scanner and the IMU sensor system from WP3. The goal of these tests was to show that the navigation information supports the surface matching procedures. The developed 3D surface registration module has been demonstrated to work on all three examined object classes: cylindrical, complex and planar. Furthermore, these tests resulted in the following two conclusions. First, complex objects may require limited additional manual registration steps to combine several “chains” of successfully registered sub-scans. Second, planar objects with only minor 3D surface topography may only be registered coarsely. These conclusions reveal that only minimal post-processing effort in third-party software is needed. Furthermore, it is expected that the matching module runtime can be reduced below 1 second once exact sensor poses are available. Detailed quantitative analysis is provided in deliverable report 8.1 along with visual motion tracking evaluations.

A parameter set for the feature detection was evaluated with the described measurement set, which shows the number of detected features at different contrast levels. The three examined object classes (cylindrical, complex and planar) are shown in Figure 5. The evaluation of the visual motion tracking was postponed to work package 8.

(see Figure 5 in Attached final report document)
Figure 5: 3D scans of the examined test objects: a) The “Schiller” bust, b) the “Sand core” model, and c) the “Stone slab” (as examples of the cylindrical, complex and planar object classes).


The objective of this work package is to test and evaluate the developed 3D scanning and navigation system under laboratory conditions. For this purpose, specific laboratory test environments and worst-case scenario conditions were defined to represent the requirements from WP1. Furthermore, a specific test of the 3D scanning device prototype was conducted with regard to the requirements from WP1. This test included worst-case scenarios such as difficult shapes, environmental variations and variations in motion dynamics. This work package also includes further developments of work done within WP6 and WP7.


A laboratory environment was developed to enable optimal measurement conditions using the NavOScan system. Details such as lighting, texture and environment dynamics were addressed and a worst-case scenario analysis was conducted. Knowledge of the worst-case conditions will help to obtain optimal measurements from the NavOScan system during in-the-field operation. The worst-case scenarios can be grouped into four categories. First is exposing the HDR sensor to a high-contrast background. Second is having a featureless scene. Third is having a scene with a changing background (this scene can be handled if at least a subset of 30 stable features is still present). Fourth is having a changing foreground, which is an invalid operating condition because the scanned object is not supposed to change during the scanning procedure.
The next part of the work package consisted of an evaluation of the visual measurement and visual motion estimation. The first step in this evaluation was calibrating the optical sensor. The results of the calibration showed that the lens distortion can be compensated to a degree, and that about 30% of the image data around the image borders should be avoided for navigation purposes. This also led to the conclusion that changing the lens would improve the results; a lens with a lower focal length and/or a lower distortion level would be recommended. The next part of the evaluation dealt with the map making process. This process is critical since the visual pose tracking requires a 3D map database to calculate position and orientation. The combination of the map making process and the feature detection and matching was evaluated using the bust, the stone plate and the sand core scenarios. The results of this evaluation showed that the feature detection algorithm provides a sufficient number of features for the matching process. The feature matching was also evaluated during the triangulation step in the map making process. This evaluation showed that the map making process could be improved by further developing the algorithm to increase the number of features which are re-found. The last step in WP8 consisted of studying test cases for the archaeological, forensic, medical and industrial manufacturing business sectors. The focus of the investigation was the potential time savings resulting from the use of the automated 3D matching process in conjunction with the 3D scanning. The results showed that the 3D scanning process can be performed almost twice as fast using the NavOScan system and the automated 3D matching process.
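The border-avoidance rule can be pictured with a small sketch. This is a hypothetical helper, not project code, and it assumes one plausible reading of "avoid about 30% of the image data around the borders": a band of 15% of the image size discarded on each side.

```python
def filter_border_features(features, width, height, border_fraction=0.30):
    """Keep only features inside the central image region, dropping
    the distortion-heavy border band (border_fraction/2 of the image
    size on each side -- an assumed interpretation)."""
    mx = width * border_fraction / 2.0
    my = height * border_fraction / 2.0
    return [(x, y) for (x, y) in features
            if mx <= x <= width - mx and my <= y <= height - my]
```

For a 640×480 image this keeps features with x in [96, 544] and y in [72, 408]; a better-corrected lens would let the tracker use more of the frame.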


The goals for WP9 were to perform a functional test of the software and to review the overall design. The software modules on the main and embedded processing units were reviewed with respect to the results from WP7 and WP8. The following hardware components were also reviewed: the main processing unit, the embedded processing unit, the sensor network and the optical scanner units. Lastly, the scanner housing was redesigned in WP9 with regard to handling, weight, dimensions and mechanical stability.


Several improvements suggested by the results of WP7 were made to the visual navigation software modules in WP9. The combined effect of these improvements is that the number of features included in the map building process has increased significantly, while the accuracy of the feature location in terms of re-projection error has also improved. As a consequence, the initialization of the visual tracking now runs robustly, and 3D tracking and visual navigation run automatically throughout the tested sequences.
Operational tests in D7.1 revealed that the number of comparable features in the scanning environment can be increased by using a lower value for the minimum contrast threshold. The key-frame-based approach rests on the assumption that a sufficient number of image features are correctly matched to previous key frames over a reasonable range of motion, including rotation and translation around the key frames.
A criterion was developed to improve the visual motion tracking software with regard to position and orientation estimation. This criterion was labelled “confidence values” and consists of the re-projection error and the number of inliers during position estimation. Thresholds were also defined to evaluate the confidence values. The matching module was also reviewed, and the resulting recommendations could lead to further research activities. The embedded processing unit and the sensor network were improved with the addition of an inter-packet delay feature. This feature consists of a programmable delay time, which makes it possible to control the time between each packet sent from the sensor network to the main processing unit. It can be used to improve UDP packet reception during periods of high processing load in the main processing unit. Another improvement to the embedded processing unit and the sensor network was making the image capture's exposure time configurable via UDP commands.
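The effect of the inter-packet delay can be sketched with a minimal UDP sender. This is an illustration in Python rather than the FPGA firmware; the function name and default delay are hypothetical, but the pacing idea is the same: leaving a gap between datagrams gives the receiver time to drain its buffer.

```python
import socket
import time

def send_with_inter_packet_delay(payloads, addr, delay_s=0.001):
    """Send UDP packets with a programmable inter-packet delay.
    Pacing the packets reduces receive-buffer overflows (and hence
    UDP packet loss) when the receiving side is heavily loaded."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for payload in payloads:
            sock.sendto(payload, addr)
            time.sleep(delay_s)  # programmable gap between packets
    finally:
        sock.close()
```

In the NavOScan design the delay value would be configured over the same UDP command channel as the exposure time.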
A review of the NavOScan hardware showed that both the main processing unit and the embedded processing unit were adequate for their tasks.
For a production model it would be recommended to replace the Mars PM3 FPGA base board with a custom-made FPGA base board, for two reasons. First, a custom FPGA base board would allow improved positioning of the components inside the navigation unit's housing. Second, it would reduce manufacturing costs, since unnecessary components could be removed from the design; for example, the Mars PM3 board includes a mini-HDMI connector which is not necessary for the NavOScan functionality. Furthermore, a custom FPGA base board would allow connecting the imaging sensor and the IMU directly to this board instead of using the FMC connector and an FMC breakout board. Also, the traces for the VCLK and VSYNC pins on the custom-made NIT NSC1001 imaging sensor board were modified to accommodate the FMC breakout board's pin layout; a production version of the custom imaging sensor board should include these changes. In addition, enhanced image data could be achieved by switching to the NIT NSC1003 sensor chip, which was recently launched on the market. This new sensor features a solar cell pixel and a global shutter with almost double the resolution of the NSC1001 sensor. The switch could also improve the image signal-to-noise ratio if it is accompanied by the switch to a custom-made FPGA base board, because a higher bit-depth image analogue-to-digital converter (ADC) could then be used and the custom board could accommodate the additional image pins. For example, the current custom imaging sensor board with the NSC1001 sensor uses a 12-bit ADC which, as expected, gives 8 bits of usable signal range. The choice of a 12-bit ADC resulted from the number of general purpose input and output (GPIO) pins available on the FMC breakout board; a custom FPGA base board would allow routing additional FPGA GPIOs to accommodate more than 12 pixel pins and thus, for example, a 14-bit ADC.
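The case for a higher bit-depth ADC can be quantified with the standard quantization-noise rule of thumb. This is a back-of-the-envelope sketch, not project measurement data; real sensor noise reduces these ideal figures, as the 8 usable bits obtained from the 12-bit ADC illustrate.

```python
def ideal_adc_dynamic_range_db(bits):
    """Ideal quantization-limited dynamic range of an N-bit ADC,
    using the standard 6.02*N + 1.76 dB rule of thumb."""
    return 6.02 * bits + 1.76

# 8 usable bits today vs. the 12-bit ADC's ideal vs. a 14-bit part:
for bits in (8, 12, 14):
    print(f"{bits:2d} bits -> {ideal_adc_dynamic_range_db(bits):.2f} dB ideal")
```

The roughly 12 dB of ideal headroom between a 12-bit and a 14-bit converter is what the additional GPIO pins on a custom base board would make accessible.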
Lastly, the review of the optical scanner unit and the navigation unit's housing resulted in design recommendations such as the following: the mechanism for attaching the prototype navigation unit to the Kolibri Cordless 3D scanner unit could be improved to allow the attachment and detachment of the two units without dismounting the IMU and the FPGA board, which would also enhance the serviceability of the units. The use of foam rubber padding could further reduce vibrations perceived by the IMU and produced by the handling of the unit and by the scanner's exhaust fan. Since the vibrations from the exhaust fan are in the order of 100 Hz, it is possible to integrate an active dynamic damper into the housing; this damper could include a spring and a micro shock absorber. D9.1 details the redesign recommendations for the scanner housing regarding handling, weight, dimensions and mechanical stability. Some of these recommendations are made in conjunction with the suggestion of using a custom-made FPGA base board to enhance the placement of the electrical components within the housing. Other redesign details of the scanner housing include a revised positioning of the screws that attach the electrical components within the housing.


The objective of this work package is an end user test with the NavOScan system and a market demand validation for the developed concept.


The final scanner with navigation unit prototype was demonstrated at the final project meeting to all participating members of the NavOScan consortium. A hands-on session with live measurements on the “Schiller” bust was conducted.

(see Figure 6 in Attached final report document)

Furthermore, we used three datasets for the evaluation. The task was to deliver to an end user the derived 3D scans including the navigation information for further processing. The end user should be able to use the pre-alignment provided by the navigation information for an automated 3D matching process of the scanned 3D surfaces. The partner 3DScanners was foreseen to simulate the end user. We used three measurement sets related to different business sectors. The 3D scans were generated in the facilities of the RTDs using the navigation unit and the 3D matching algorithm for pre-alignment of the 3D scans. The end user used the software POLYWORKS to process the data generated by the NavOScan system. The combined 3D models created by the NavOScan project approach (with minimal manual post-processing) demonstrate the benefits of automatic registration of data acquired by a handheld 3D scanner. An intensive manual alignment of the data and the resulting post-processing step are no longer necessary. This was proven in all three use cases. From the results it can be seen that the system performs well as a handheld unit. It can be used with the navigation unit and commercial industry-standard software (PolyWorks) to produce fully workable meshes. Due to the nature of the system, several markets were identified as possible avenues where the aid of the navigation unit may give the scanner an advantage over other handheld scanners. These markets typically look for handheld systems due to their accessibility advantages, and such systems range in cost from €17000 to €40000. The time and effort saving benefit of using the NavOScan system was evaluated at up to 75%. This confirms the numbers given in delivery report DR8.1, section 3.5.


This work package is dedicated to the application and exploitation of the results and includes the development of an exploitation strategy and protection of the intellectual property rights arising from the technological developments in the project. It also includes the dissemination of information which will take place in the form of presentations of the project at conferences, exhibitions and publications in journals and magazines.


With a view to the planned exploitation, we conducted market research to evaluate the most suitable business sector. The partner Innowep is the leading exploitation partner and wants to produce the NavOScan system. This partner expects the best market acceptance in the cultural heritage sector because of the mobility and fast processing of the NavOScan system. Object analysis in museums, cultural institutions etc. is another key market for the new scanning system. In this market, too, it will be possible to measure objects of medium and large size. Innowep GmbH is already present in this market, especially in Europe. The new technology will substantially broaden this already established application field. The research on potential primary market applications was conducted with regard to the current product portfolio of partner Innowep. The NavOScan system shall be applied to the TRACEIT product family, where it provides substantial additional value: 3D scans from different scales will be set in relation to each other. Several secondary markets were found in which the navigation unit of the NavOScan system could be applied, such as autonomous underwater and street vehicles, indoor robots or asset management. We conducted a cost analysis of the NavOScan system components developed in the project. It resulted in an estimated component price of €2000 for a navigation unit and a production price of €750 per unit. This price estimation is without any optimization or price deduction rates. The technology transfer discussion was moved to work package 12. Public and restricted web services were established to share documents and files with the whole consortium.


This work package is designed to provide detailed coordination of the work at consortium level and to coordinate information and management between the EC and the project consortium. In addition, the final plan for the dissemination and exploitation activities will be drawn up.


The knowledge management and innovation-related activities were coordinated by the project management. All SME and RTD partners took part in several discussions about the further exploitation of the NavOScan technology in telephone conferences, meetings and personal talks. This information was summarized in the dissemination and exploitation plans. The main dissemination activities were business fairs, with very good feedback from potential buyers of the NavOScan technology. The coordination of the technical activities within the project was supported by weekly telephone conferences and quarterly meetings, alongside all other project coordination activities. About 1700 emails are documented by the project manager, which reflects the rich communication activity between the project partners. An important step was to define the exploitation strategies of all SME partners. Beyond this, there is high interest in further research to apply the developed technology to other systems such as vehicles. Furthermore, we defined the technology transfer process in detail: every RTD partner is assigned to a specific project result and acts as a contact person for the delivery of software and hardware to the consortium. The exchange of knowledge is accompanied by planned and committed training by the RTD partners.

Potential Impact:

Part of the industrial and economic validation for the NavOScan market analysis concerned the potential manufacturing cost compared to the cost of existing 3D scanner devices. In addition, we compared the processing time from the 3D scan to the finalized 3D matching result. The analysis showed that the NavOScan navigation unit can be manufactured for below €3000, which is negligible compared to a current motion tracker. The greatest impact is the time-saving factor of the NavOScan system: compared to state-of-the-art mobile 3D scanners, 75% of the processing time can be saved.


The component costs are structured into hardware and software parts. Each cost represents the current cost price without quantity discounts. The hardware and software parts are almost all produced by the SME consortium, so a significant cost reduction can be expected. The hardware components are listed in Table 1 and the software parts in Table 2. The software components are part of the SMEs' and RTDs' background which was used during the development of the project.

(see Table 1: NavOScan hardware components costs in the final report attached)

(see Table 2: NavOScan software component costs in the final report attached)


The production cost estimation is based on the experience of the SME partners. The calibration steps are manual procedures. These procedures can be omitted if the coordinate transformation between these systems can be calculated from the mechanical construction.

(see Table 3: Production step cost in the final report attached)


The component cost summary is listed in the table below. We summed all cost units and applied a factor of 3 to estimate the necessary market price:

Cost units              Expected costs
Hardware components     2000 €
Software components     100 €
Production costs        750 €
SUM                     2850 €
Estimated market price  8550 €

The 3D scanner unit is estimated to cost 15000 € on the market. This results in a complete estimated market price of 23550 €.
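The price chain above can be reproduced in a few lines. This is a trivial sketch of the report's own arithmetic; the default values come from the cost summary table and the scanner price estimate.

```python
def estimated_market_price(hardware=2000, software=100, production=750,
                           markup_factor=3, scanner_price=15000):
    """Sum the navigation unit cost items, apply the factor-3 market
    markup, then add the 3D scanner's estimated market price (EUR)."""
    unit_cost = hardware + software + production        # 2850 EUR
    unit_market_price = unit_cost * markup_factor       # 8550 EUR
    return unit_market_price + scanner_price            # 23550 EUR
```

Expressing the estimate this way also makes it easy to explore how quantity discounts on the hardware components would propagate to the final price.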


From the previous results it can be seen that the system performs well as a handheld unit. It can be used with the navigation unit and commercial industry-standard software (PolyWorks) to produce fully workable meshes. Due to the nature of the system, several markets were identified as possible avenues where the aid of the navigation unit may give the scanner an advantage over other handheld scanners. These can be seen below:

• Automotive benchmarking
• Vehicle maintenance
• Healthcare

These markets typically look for handheld systems due to their accessibility advantages, and such systems range in cost from €17000 to €40000. An estimation of the time and effort saving benefits of using the NavOScan system in comparison to manually aligning the data follows:

• Time for meshing, manual alignment (without NavOScan pre aligned datasets) and cleaning of the data: 1-2 hours.
• Time for meshing, manual alignment (with NavOScan pre aligned datasets) and cleaning of the data: 15-30 minutes.

This means a real-life post-processing time saving of up to 75%. This confirms the numbers given in delivery report DR8.1, section 3.5.
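Taking the midpoints of the two reported ranges (90 minutes manual vs. 22.5 minutes with NavOScan pre-alignment), the saving works out as follows. This is a sketch of the arithmetic only.

```python
def post_processing_saving(manual_minutes, navoscan_minutes):
    """Relative post-processing time saved by NavOScan pre-alignment."""
    return 1 - navoscan_minutes / manual_minutes

# Midpoints of the reported ranges: 1-2 h manual, 15-30 min with NavOScan
saving = post_processing_saving(90, 22.5)  # 0.75, i.e. a 75 % saving
```

The best case (2 hours manual vs. 15 minutes pre-aligned) gives a saving of about 87%, so "up to 75%" is a conservative summary of the midpoints.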




We conducted in-depth patent research in the NavOScan sector. It showed that there are no relevant patents covering the NavOScan technology. This is clearly due to the fact that most of the underlying technology and approaches have already been published scientifically. For example:

• „An innovative hand-held vision-based digitizing system for 3D modelling“, Coudrin, Benjamin; Devy, Michel; Orteu, Jean-José; Brèthes, Ludovic, Optics and Lasers in Engineering 2011
• „A Stereo-Based System with Inertial Navigation for Outdoor 3D Scanning“, Byczkowski, T.; Jochen Lang, Computer and Robot Vision, 2009. CRV '09

Therefore we published the NavOScan project itself. The software, technology and knowledge of the NavOScan project span many scientific disciplines. A pure copying of the approach is not possible due to the intensive knowledge required about the navigation system and its usage.


(see Table B2: Type of Exploitable Foreground)



NIT claims the programmed GigE interface for the FPGA unit as a basis for their exploitation. NIT wants to use this interface for the development of SDKs for the NIT sensors. The sensor network interface is related to project result 4 (electronics (PCB, concept for build-up) for the NavOScan sensor hardware, together with the interface and the software on the FPGA for the interface). In addition, NIT claims the hardware and the software of the sensor network.
Innovation Center Iceland delivers the result to NIT. Torfi Torhallsson is the contact person in charge.


Autonomous State (AS) claims the navigation unit software prototype and the FPGA embedded concept of the navigation unit (result 3 and result 6). AS wants to exploit the navigation unit in the field of ground vehicle navigation for intralogistics purposes. Fraunhofer IPA delivers the results to AS. Bernhard Kleiner is the contact person in charge.


Simpleware claims the 3D surface matching algorithm. This result is integrated in project result 6. The matching algorithm is an extension of the VTK library and can be seen as a patch to the VTK library. Simpleware wants to integrate the project result for internal use in their software tools, which are based on the VTK library. Application markets are medical as well as traceability and quality inspection services. Fraunhofer will deliver this result to Simpleware. Christoph Munkelt is the contact person in charge.


3D Scanners will make use of the knowledge for their 3D inspection service. Their interest is the early usage of the whole NavOScan system, which puts them in a competitive position. The related result is result No. 1. Fraunhofer IPA delivers the results to 3D Scanners. Bernhard Kleiner is the contact person in charge.


Partner Enclustra wants to use the developed concept for the image processing on the FPGA, including the data interface to the optical sensor and the inertial sensor. Enclustra wants to enlarge their service portfolio on the basis of these concepts with programming services or specially tailored FPGA boards for the image inspection sector. Mr. Torhallsson from ICI is the contact person in charge.


Partner Innowep wants to use the whole developed navigation system with the prototype components for their 3D scanner technology as an extension of the current portfolio. The navigation unit will allow the combination of micro and macro scanner data from the same scanned object. This is a great benefit which is not provided by competing 3D scanner types. Mr. Peter Kühmstedt is the contact person in charge.


We used three datasets for the evaluation. The task was to deliver to an end user the derived 3D scans including the navigation information for further processing. The end user should be able to use the pre-alignment provided by the navigation information for an automated 3D matching process of the scanned 3D surfaces. The partner 3DScanners was foreseen to simulate the end user. We used three measurement sets related to different business sectors. The 3D scans were generated in the facilities of the RTDs using the navigation unit and the 3D matching algorithm for pre-alignment of the 3D scans. The end user used the software Geomagic to process the data generated by the NavOScan system. The combined 3D models created by the NavOScan project approach (with minimal manual post-processing) demonstrate the benefits of automatic registration of data acquired by a handheld 3D scanner. An intensive manual alignment of the data and the resulting post-processing step are no longer necessary. This was proven in all three use cases. From the results it can be seen that the system performs well as a handheld unit. It can be used with the navigation unit and commercial industry-standard software (PolyWorks) to produce fully workable meshes. Due to the nature of the system, several markets were identified as possible avenues where the aid of the navigation unit may give the scanner an advantage over other handheld scanners. These markets typically look for handheld systems due to their accessibility advantages, and such systems range in cost from €17000 to €40000. The time and effort saving benefit of using the NavOScan system was evaluated at up to 75%. This confirms the numbers given in delivery report DR8.1, section 3.5.


Fraunhofer IPA set up a server for the distribution of project documents and the project deliverables. This server will also be used to upload the software modules.

In addition, the RTD partners are in contact with the SME partners. The RTD partners defined contact persons for all project results. The SME partners can contact these people for knowledge transfer, the handover of hardware or software, and instruction on the usage of the results. The contact persons are defined in deliverable 12.4.


The following section shows additional pictures and information about dissemination activities that were performed during the project.


“NavOScan: hassle free handheld 3D scanning with automatic multi-view registration”, C. Munkelt, B. Kleiner, T. Thorhallsson, C. Mendoza, C. Bräuer-Burchardt, P. Kühmstedt, and G. Notni, 05.2013 SPIE Optical Metrology Conference, 2013


Elektor.TV: Das NavOScan-Projekt, 27.12.2012.


The NavOScan system was presented at the measurement fair Sensor+Test in 2011, 2012 and 2013 at the booth of Fraunhofer IPA.

Mr Bernhard Kleiner