
Efficient 3D Completeness Inspection

Final Report Summary - 3DCOMPLETE (Efficient 3D completeness inspection)

Executive summary:

One important application of machine vision is quality control, in particular checking the completeness (presence / absence of parts, correct type, position, orientation) of assemblies. Existing systems usually apply two-dimensional (2D) cameras that provide a monochrome or colour image. These images lack depth information and consequently cause problems when dealing with non-rigid objects (hoses, cables) or low contrast between background and part, and they often do not provide an optimal view of each single part of the assembly. This project aims to develop efficient 3D completeness inspection methods that exploit two different technologies. The first one, called 'full 3D' orthoimage technology, is based on calculating arbitrary views of an object given a small number of images of this object. The second one, called extended 2.5D scanning technology, aims to combine 3D shape data with colour and texture information.

In order to develop the technologies, a set of test cases was defined that covered typical quality control applications in assembly processes. These test cases were taken from the automotive industry and from electronics manufacturing. Quality control criteria were defined and structured so as to provide a basis for the development of the technologies. For the 'full 3D' technology a specific image acquisition setup was developed that allowed the triggering of multiple cameras at high precision to enable the synchronised acquisition of multiple images. High image quality in terms of high resolution and reduced shadows was the main concern when setting up this system. Using a specific approach based on the proprietary orthoimage technology, the single images were combined to generate arbitrary views of the object. These views could be generated by defining a plane in 3D coordinates onto which the single view is projected. Virtual sensors (small image regions in which specific properties of the sub-assembly are analysed) could then be defined in these images to perform the actual inspection task.

For the '2.5D' technology two different approaches were developed. The first one was based on a dedicated profile scanner that was combined with a colour camera. The profile scanner provided a 3D point cloud that was matched to the images coming from the camera by means of a calibration. The second technology was based on a camera system that acquires both the 3D point cloud and colour images. The difference between these systems is mainly in terms of performance (speed) and cost. For both systems appropriate ways of combining the different sources of information (point cloud, colour / texture) were developed. Calibration targets were designed and algorithms implemented to generate accurate representations of the assembly. Using methods similar to those of the 'full 3D' technology, virtual sensors were developed that could be combined to perform the inspection task.

Both of the basic technologies were integrated into two demonstrators that were presented at the Control Fair in Stuttgart.

Project context and objectives:

The project 3DCOMPLETE aims to develop 3D completeness inspection systems that are similar to existing 2D inspection systems in terms of speed, ease of use and integration, but provide improved functionality. In Europe there are about 3 000 small and medium-sized enterprises (SMEs) working in the field of machine vision. These SMEs provide services and products to another 300 000 SMEs in the machine building and automation sector. About 80 % of the applications of machine vision systems are in quality control, such as 2D and 3D metrology, surface inspection and completeness inspection. Completeness inspection is one of the most basic applications of machine vision: it involves checking the presence of parts of an assembly, identifying their type, detecting defective parts and sometimes estimating the position of a part. There are numerous products that try to solve this task using 2D cameras. However, 2D completeness inspection systems suffer from substantial shortcomings that limit their capabilities and their robustness:

- They cannot robustly detect parts in front of a background of similar colour or reflectance and thus cannot determine whether the part is actually there. This applies, for example, to a large group of applications in the automotive industry.
- Provided that there is sufficient contrast to the background, 2D inspection systems can only do rough presence / absence detection. They cannot determine whether e.g. a plug is securely mounted.
- 2D completeness inspection has a general problem with non-rigid objects such as hoses and cables that may change their position and lead to occlusions so that the parts to be detected are not fully visible.

All of these problems are caused by the fact that these systems lack depth information. Clearly, there are a number of 'workarounds', such as using large numbers of cameras, but these are not at all efficient and are prohibitively expensive. Applications that are insufficiently covered by existing 2D inspection systems (total market EUR 700 million) are about 10 % of all completeness inspection tasks, so the total market volume will be around EUR 70 million. Within 5 years the SME partners of 3DCOMPLETE plan to acquire 5 - 10 % of this market, thus creating additional turnover of EUR 4 to 7 million per year and a growth of 100 - 200 % of their companies.

The SME partners thus want to develop a fully automatic 3D completeness inspection system (including cameras, illumination and software) that is efficient and competitive in terms of price, speed and ease of installation. This is achieved by focusing on two technologies that each aim at slightly different types of applications and will solve the above-mentioned shortcomings:

- Full 3D inspection: Multiple oblique views of an object are recorded, from which so-called orthoimages are generated, which are similar to a technical drawing, but extended with colour and texture information. From these, any arbitrary view of the object can be generated without the need for one camera per view.

- Extended 2.5D inspection: A laser profile scanner is used to acquire 3D shape information. This is then extended with colour and texture data from a 2D camera. This will provide the depth information that is so urgently needed in 2D inspection systems.

The ranges of applications for these two technologies (called 'full 3D' and 'extended 2.5D') are quite different. Full 3D is required for complex assemblies, such as a car engine, whereas extended 2.5D is useful for more or less flat objects such as electronic assemblies on a circuit board.

3DCOMPLETE aims at two business opportunities: on the one hand, we can replace existing inspection systems that use multiple cameras at high cost with a low-cost system using far fewer cameras; on the other hand, we can offer solutions for inspection tasks that currently cannot be solved, either for technical or for economic reasons.

For 'Full 3D inspection', the main objective is to develop a technology for completeness inspection using a small number of oblique, uncalibrated views of the object and converting these images into an arbitrary number of metric orthoimages that are ideally suited for inspection. This requires research on the following topics:

- efficient ways of acquiring a set of high-resolution 2D images either by using multiple cameras or by exploiting the motion of the object;
- fast and fully automatic methods of converting the oblique images into a set of orthoimages;
- methods to perform completeness inspection on these orthoimages that deal with the peculiarities of this type of image.

For 'Extended 2.5D inspection', the main objective is to develop a technology for completeness inspection using laser profile scanners combined with 2D cameras to provide shape and colour information. This requires research on the following topics:

- combined calibration methods for 2D cameras and 3D profile scanners;
- algorithms for mapping 2D colour and texture information on 3D image data;
- methods for performing completeness inspection on 3D point clouds extended with RGB information per point.

The technical specifications that both of the technologies should achieve are oriented along the typical requirements for end-of-line inspection systems. They are chosen so as to cover a wide range of applications in the area of discrete inspection systems:

- the total cycle time for inspection (including part handling if required) should be in the typical range of 2 - 20 seconds, in extreme cases down to 0.5 seconds or up to 1 minute;
- the space covered by the inspection system should be scalable up to 1 x 1 x 1 m3 (roughly the size of a car engine) and down to 3 x 3 x 2 cm3 (electronic assembly of a mobile phone);
- the acquisition setup should be highly flexible, so that it can be rearranged for different geometries of the objects to be inspected (e.g. to avoid occlusions);
- for full 3D the resolution of the system has to be > 1 500 pixels in all dimensions, which allows the integration of functionalities such as character recognition and bar code reading;
- for extended 2.5D the resolution has to be > 1 500 pixels laterally and at least 1/256 of the depth to be covered.

Project results:

The following significant results were generated during the project:

1. a 'full 3D' technology that uses a set of oblique images to generate arbitrary views of an assembly to enable accurate inspection;
2. methods for identifying correspondences in images to enable the reconstruction of depth information;
3. an 'extended 2.5D' technology that combines profile scanners with colour cameras to generate 'coloured point clouds';
4. calibration methods for the '2.5D' technology that merge the coordinate systems of the point cloud and the colour images;
5. virtual sensors for analysis of data coming from both technologies to perform the actual inspection in a robust manner, suitable for industrial application;
6. 3 integrated demonstrators that showed the 'full 3D' technology and two variations of the '2.5D' technology.

1. 'Full 3D' technology

The hardware prototype consists of the different modules into which the acquisition system is split: the acquisition devices (cameras, illumination components, marking systems), the housing (the container that holds these devices) and the control unit.

The housing contains the marking systems (if required), the illumination, the cameras and the part under test. The housing had to fulfil a few basic requirements that are needed to achieve high-quality images:

- It must be a steady and robust structure, since the cameras and the marking and lighting elements surrounding the object under test are placed inside it.
- Light levels have to be well controlled, and no foreign bodies may be inside the housing. The purpose is to isolate and shelter the whole of the acquisition system, protecting it from dust, dirt, vibrations and external movements.

The control unit is composed of a personal computer (PC) and a hardware sequencer (trigger unit), which is able to define the temporal sequence of triggers for the camera(s), e.g. to simultaneously acquire images or to synchronise image acquisition with the object's motion. Furthermore, it provides capabilities to load specific camera configurations and to transfer, store and edit the resulting photos. The control unit was integrated with a software tool which allows the users to create the control sequences to be transmitted to all components involved in the image acquisition process.

The user has the option of:

- creating new sequences, storing them (in Extensible Markup Language (XML) file) on disk and loading old sequences created previously (during another session);
- making a list of the cameras currently connected to the system and setting up their configuration (image acquisition parameters);
- starting, stopping and restarting the sequencer in real time (controlling which sequence is being executed);
- checking the digital input / output during debugging of the system, i.e. without loading trigger sequences into the sequencer unit.

This software was initially set up to support microcontroller-based and field programmable gate array (FPGA)-based systems, although later in the project the focus was on the more efficient FPGA version.

2. Methods for correspondence matching

Pre-processing starts with three views (images) of a master part taken from three different points of view, guaranteeing that all optical axes converge in the centre of the part. These images are taken with the same system and in the same way as the images of the potentially defective parts (test parts), to ensure the same environmental conditions in the inspection process. In order to prepare the system to perform an alignment with good tolerance to 2D and 3D transformations, the operator must define an area with enough detail in the central view of the pattern image. That area later serves to check the quality of the alignment between the test part's images and the pattern part's images. The operator also has to define the rectification plane onto which the images are projected once the alignment process is finished, in order to generate the orthoimage (only three points need to be marked to define the plane).

A key step in the generation of the orthoimage is the alignment of the current object to a previously trained version of that object. This is done in a two-step approach, where a quick rough alignment first reduces the difference to less than 10 degrees. For this purpose, 36 rotations of the test image, in steps of 10 degrees, are calculated.

For each 10-degree rotation, the previously selected area of the pattern image is compared with the rotated test image using an implementation of the normalised cross correlation algorithm. When all 36 rotations have been evaluated, the one with the highest correlation factor is selected. This rough rotation ensures that the alignment is sufficiently precise to be further refined by the fine alignment procedure. With an algorithm based on least squares matching, an alignment on a per-pixel basis with an error of 0.01 degrees is achieved. The algorithm has been further extended to include not only a simple rotation but also a projective correction that is applied to each pixel. This correction also fixes small 3D differences, so that each pixel is almost perfectly aligned (geometrically) between the trained pattern and the test part.
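
The report does not disclose the concrete implementation, but the coarse search can be sketched in a few lines of Python with OpenCV; the function name and parameters below are illustrative assumptions:

```python
# Sketch of the coarse alignment: rotate the test image in 10-degree steps
# and score each rotation against the operator-selected pattern area using
# normalised cross correlation; the best-scoring rotation wins.
import cv2
import numpy as np

def coarse_alignment(pattern_patch, test_image, step_deg=10):
    """Return the rotation (degrees) that best aligns test_image to the
    pattern patch, searched over a full turn in coarse steps."""
    h, w = test_image.shape[:2]
    centre = (w / 2.0, h / 2.0)
    best_angle, best_score = 0.0, -1.0
    for angle in range(0, 360, step_deg):
        rot = cv2.getRotationMatrix2D(centre, angle, 1.0)
        rotated = cv2.warpAffine(test_image, rot, (w, h))
        # Normalised cross correlation of the pattern patch over the rotated view
        result = cv2.matchTemplate(rotated, pattern_patch, cv2.TM_CCORR_NORMED)
        score = float(result.max())
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle, best_score
```

The resulting angle is then handed to the fine alignment, which refines it to the reported 0.01-degree accuracy.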

The next step is to find the rectification plane: starting from each rectification-plane point position in the central pattern image, the same point is found in the central image of the test part (which is now aligned) using normalised cross correlation. Finally, correspondences are established using epipolar lines, followed by a bundle adjustment algorithm to obtain the 3D positions of those points.
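
As an illustration of the last step, once a point has been matched in two views its 3D position can be recovered by triangulation; the sketch below uses OpenCV and assumes the projection matrices of the two views are known (the project's pipeline additionally refines the result by bundle adjustment):

```python
# Minimal two-view triangulation of a matched rectification-plane point
import cv2
import numpy as np

def triangulate_point(P1, P2, pt1, pt2):
    """P1, P2: 3x4 projection matrices of the two views.
    pt1, pt2: the corresponding 2D points (x, y). Returns the 3D point."""
    X = cv2.triangulatePoints(P1, P2,
                              np.float32(pt1).reshape(2, 1),
                              np.float32(pt2).reshape(2, 1))
    return (X[:3] / X[3]).ravel()  # convert from homogeneous coordinates
```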

3. An 'extended 2.5D' technology

Two variants of the technology were developed. The first one was based on conventional cameras, while the second one used a profile scanner for acquiring depth information. An extended 2.5D colour scanner has been developed for acquiring range and texture information of an object. To build more accurate models and to reduce the problem of laser occlusion in the direction of movement of the camera, two lasers were used, each of them projecting a laser line on the object. The chosen configuration for the scanner includes a camera with the two lasers at its sides.

Range information is easier to extract from images where the laser projectors are the only source of light, while a diffused illumination of the scene is required for colour / texture acquisition. In order to be able to extract both range and texture in a single scan, the possibility of also placing a collimated white light next to the camera has been investigated. This light allows acquisitions to be performed in dark lighting conditions and illuminates only a small part of the object, where texture can be correctly acquired.

The laser projectors were positioned in such a way that the two laser lines do not overrun the zone reserved for the texture. This can easily be done by checking where the laser lies at the lowest and at the tallest part of the object to be acquired.

Pre-processing algorithms were implemented to produce, from the images acquired during the object scan, the range and texture data that are then used in the inspection phase. At first the laser points are detected in the image, then they are triangulated in order to estimate their 3D position in the camera reference system. The camera pose is estimated for every frame, so that all the 3D profiles can be referred to the same reference system to compose the whole set of points. From the resulting point cloud a mesh is created and images are projected on it for texturing. Alternatively, a range image can be created from the point cloud and a texture image can be built by stitching together all the images acquired, according to the estimated motion of the camera. Laser detection and triangulation have to be performed 'online', during image acquisition, while the other algorithms can be performed after the scan.
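
The report does not specify the laser detector; a common approach, sketched here as an assumption, is to find the brightest row per image column and refine it to sub-pixel accuracy with a centre-of-gravity estimate:

```python
# Per-column laser peak detection with sub-pixel refinement
import numpy as np

def detect_laser_peaks(image, window=3, min_intensity=50):
    """image: 2D array (grey image or the laser-colour channel).
    Returns one sub-pixel row position per column (NaN where no laser)."""
    rows, cols = image.shape
    peaks = np.full(cols, np.nan)
    for c in range(cols):
        col = image[:, c].astype(float)
        r = int(np.argmax(col))
        if col[r] < min_intensity:
            continue  # no laser visible in this column
        lo, hi = max(0, r - window), min(rows, r + window + 1)
        weights = col[lo:hi]
        # Centre of gravity of the intensity profile around the maximum
        peaks[c] = np.sum(np.arange(lo, hi) * weights) / np.sum(weights)
    return peaks
```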

The second variant of the technology was designed for applications that require fast data acquisition. The system consists of a camera with on-board laser line detection algorithms; for this part a Sick Ranger is used. The benefit of detecting the laser line in the camera is that the laser line can be detected in multiple columns simultaneously and that only the 3D points have to be transferred to the PC for further evaluation.

Because of this, range data acquisition is much faster than with a system using a normal camera (up to 35 000 scans per second under optimal conditions). A triggerable red laser module with a specific line-generating lens is used as the laser.

For acquiring grey-scale data the Sick Ranger is used as a range scanner and a grey-scale line camera at the same time: one sensor row is used for grey-scale acquisition and the rest of the sensor is used for laser line detection. Since grey-scale and range data acquisition have different illumination needs, the laser line and the white light illumination are switched on and off in an alternating way to avoid mutual disturbance.

For the pre-processing the laser line detection is performed on the Sick Ranger which offers different parameters and algorithms that can be tuned to the particular application.

Triangulation is done with a look-up table (LUT) which results from the calibration algorithm: for every pixel position of the laser line there is a corresponding Y and Z value in the world coordinate frame. To generate a full 3D point cloud, motion estimation is needed, using the position value which comes from an encoder wheel on the conveyor belt. For each line captured by the camera the encoder value is read, and the line is positioned based on its encoder value in such a way that the distance in the X coordinate is equal between all lines. Using the look-up table, the texture image is mapped onto the point cloud, thus providing the input data for the inspection.
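
A hedged sketch of this step is given below; the array layout of the LUT and the millimetre-per-tick scale are assumptions made for illustration:

```python
# Assemble a 3D point cloud from per-line laser peaks, a calibration LUT
# and conveyor-belt encoder readings
import numpy as np

def lines_to_point_cloud(peak_rows, encoder_values, lut_y, lut_z, mm_per_tick):
    """peak_rows: one array per captured line, holding the laser pixel row
    for every sensor column (NaN where undetected).
    encoder_values: one encoder reading per captured line.
    lut_y, lut_z: tables mapping (column, pixel row) -> Y, Z in mm.
    Returns an (N, 3) point cloud; X comes from the encoder, Y/Z from the LUT."""
    points = []
    for peaks, ticks in zip(peak_rows, encoder_values):
        x = ticks * mm_per_tick  # the encoder value fixes the line's X position
        for col, row in enumerate(peaks):
            if np.isnan(row):
                continue
            r = int(round(row))
            points.append((x, lut_y[col, r], lut_z[col, r]))
    return np.asarray(points)
```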

4. Calibration methods for the '2.5D' technology

Calibration is a key step to generate accurate models of the object and to match the single sources of information, in particular the 3D point cloud and the colour information. The calibration of the acquisition system is composed of three steps:

i) Camera calibration: It consists of estimating intrinsic and extrinsic camera parameters to build a geometrical model of the camera-lens system.
ii) Laser-camera calibration: The calibration between one or more lasers and the camera provides information on the reciprocal positions of the scanner components and is needed to perform triangulation and obtain 2.5D models.
iii) Scanner-world calibration: It aims to estimate the relative motion between the scanner and the object to inspect.

The objective of camera calibration is to determine the set of intrinsic and extrinsic parameters, which describe the mapping between the reference frame of the 3D world and the reference frame of the 2D image. The overall performance of the machine vision system strongly depends on the accuracy of the camera calibration. To perform the calibration, the camera is modelled as a pinhole camera, where each point in the object space is projected into the image plane by a straight line through the projection centre. The intrinsic camera parameters usually include the focal length, the principal point, the skew angle (defining the angle between the x and y axes) and the image distortion coefficients (radial and tangential distortions). The extrinsic parameters are needed to transform the world coordinates into the camera-centred coordinate frame.
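
For reference, the pinhole mapping that these parameters describe can be written in the standard textbook form (this formula is not quoted from the report):

```latex
s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} =
  \underbrace{\begin{pmatrix} f_x & \gamma & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}}_{\text{intrinsics}}
  \underbrace{\begin{pmatrix} R \mid t \end{pmatrix}}_{\text{extrinsics}}
  \begin{pmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{pmatrix}
```

where (u, v) is the pixel position, (X_w, Y_w, Z_w) the world point, f_x and f_y the focal lengths, (c_x, c_y) the principal point, gamma the skew, and R and t the rotation and translation; the distortion coefficients are applied to the normalised coordinates before this projection.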

To perform the camera calibration, a large number of pictures (usually thirty or more) of an N x M checkerboard with known dimensions in different positions are used. An image processing algorithm automatically detects the internal corners of the checkerboard in each image. Using the detected corners, the calibration algorithm estimates the homography matrix relative to each checkerboard image and minimises the reprojection error of the corners to obtain a good estimate of the camera's intrinsic and extrinsic parameters.
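
The report does not name the software used; one standard way to implement this procedure is OpenCV's calibration module, as in the following sketch (board size and square size are assumptions):

```python
# Checkerboard camera calibration with OpenCV
import cv2
import numpy as np

def calibrate_from_images(images, board_size=(9, 6), square_mm=10.0):
    """images: list of grey-scale checkerboard photos (thirty or more).
    Returns the camera matrix, distortion coefficients and RMS reprojection error."""
    # 3D corner positions of the board in its own plane (Z = 0)
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_mm
    obj_points, img_points = [], []
    for img in images:
        found, corners = cv2.findChessboardCorners(img, board_size)
        if not found:
            continue  # skip images where the board is not fully visible
        corners = cv2.cornerSubPix(
            img, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, images[0].shape[::-1], None, None)
    return K, dist, rms
```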

Laser-camera calibration is needed to determine the relative pose between the laser plane and the camera reference system. To estimate the 3D equation of the laser plane, the laser plane is cut at different positions with a solid plane, resulting in a laser line on this plane. The solid plane is the same checkerboard used in the previous section. Twenty pictures of the checkerboard in different poses are taken, resulting in 20 images containing the checkerboard and the laser line lying on it. The checkerboard pose in 3D is calculated with the same algorithm introduced in the previous paragraph. The exact position of the laser line is determined by performing a laser-peak detection to calculate the position of the laser points in the image plane. In this phase, since the laser line lies on a plane (the checkerboard) and is therefore a straight line, it is possible to apply a RANSAC algorithm to remove the outliers.
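
Once the laser points from all checkerboard poses are available in 3D, the laser plane itself can be estimated robustly. The following RANSAC plane fit is an illustrative sketch (iteration count and inlier threshold are placeholder assumptions to be tuned):

```python
# RANSAC plane fit over triangulated laser points
import numpy as np

def ransac_plane(points, iterations=500, threshold=0.5, seed=0):
    """points: (N, 3) laser points gathered over all checkerboard poses.
    Returns ((normal, d), inlier_mask) for the plane n.x + d = 0."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal /= norm
        d = -normal.dot(sample[0])
        inliers = np.abs(points @ normal + d) < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```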

Finally, a calibration procedure is needed to estimate the direction of movement of the scanner or of the object with respect to the world frame. For this purpose, two images of the same planar object, e.g. a checkerboard, are acquired from two different positions of the scanner with respect to the acquisition plane. From these photos some local features are extracted and matched to estimate a planar transformation between the two images. A homography matrix is calculated and decomposed to find the corresponding 3D camera rotation and translation. In this setup the motion is a pure translation, so the rotation should be equal to zero, while the translation vector should give the direction of movement.

To exploit the fact that the motion of the camera (or of the object) is known to be a pure translation, it could be imposed that the rotation is equal to zero (e.g. by estimating a homology instead of a homography) in order to reduce possible estimation errors.
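
A sketch of the homography-based variant with OpenCV follows; the feature type (ORB) and the strategy of picking the decomposition whose rotation is closest to the identity are assumptions, since the report only states that the homography is decomposed:

```python
# Estimate the scan direction from two views of the same plane
import cv2
import numpy as np

def scan_direction(img1, img2, K):
    """K: 3x3 camera matrix. Returns the unit translation direction between
    the two views, assuming the motion is (close to) a pure translation."""
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(p1, p2, cv2.RANSAC, 3.0)
    n, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    # Keep the solution whose rotation is closest to the identity, since
    # the motion is known to be a pure translation
    best = min(range(n), key=lambda i: np.linalg.norm(Rs[i] - np.eye(3)))
    t = ts[best].ravel()
    return t / np.linalg.norm(t)
```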

5. Virtual sensors for analysis

A set of virtual sensors, each with specific properties, has been developed. In the following, these virtual sensors are described using concrete examples from the inspection of the electronic assemblies used as test cases in the project.

The check for the perpendicularity of a 'radio module' relative to the 'base board' is performed using the coloured 3D point cloud. The radio module as well as the base board are isolated from the point cloud by extracting the two clusters in their known approximate locations. The best-fit plane to each cluster is determined, and a comparison is made between the plane normals to determine whether the top of the radio module is parallel to the base board. A threshold of 5 degrees is applied to classify a radio module as defective.

The check for the planarity of a GSM module on the assembly is performed using the coloured 3D point cloud. The GSM module as well as the base board are isolated from the point cloud, the best-fit plane to each isolated cluster is determined, and a comparison is made between the plane normals to determine whether the GSM module is parallel to the base board. Again, a threshold of 5 degrees is used to classify a GSM module as defective.
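
Both of the above checks reduce to fitting a plane to each cluster and comparing the normals; a minimal NumPy sketch (the actual point-cloud library used is not stated in the report) could look like this:

```python
# Best-fit plane normals and the 5-degree parallelism check
import numpy as np

def fit_plane_normal(points):
    """Least-squares plane normal of an (N, 3) point cluster via SVD."""
    centred = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[-1]

def is_parallel(cluster_a, cluster_b, max_angle_deg=5.0):
    """True if the best-fit planes of the two clusters deviate by at most
    max_angle_deg (the 5-degree defect threshold described above)."""
    na, nb = fit_plane_normal(cluster_a), fit_plane_normal(cluster_b)
    cos = abs(float(np.dot(na, nb)))  # abs() ignores the sign of the normals
    angle = np.degrees(np.arccos(np.clip(cos, 0.0, 1.0)))
    return angle <= max_angle_deg
```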

For the check of the routing of the GSM cable, the GSM module is first isolated as explained previously. The top surfaces of two pegs at the top right corner of its bracket are then located based on location and height. Finally, the GSM cable is isolated from the points between the GSM module and the base board using colour segmentation. If points belonging to the cable are located between the two pegs, the inspection task is classified as a success.

The check for the presence of the yellow seal is performed using the coloured 3D point cloud. First, using a threshold on height and location from the centroid, the point cloud cluster comprising both the black side structure and the seal is extracted. Next, the yellow seal is isolated using colour segmentation. If a sufficient number of points are detected, the yellow seal is deemed to be present.
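
The colour segmentation for the seal check can be sketched directly on the per-point RGB values; the thresholds and minimum point count below are placeholder assumptions to be tuned on real data:

```python
# Yellow-seal presence check on a coloured point-cloud cluster
import numpy as np

def yellow_seal_present(cluster_rgb, min_points=200):
    """cluster_rgb: (N, 3) RGB values (0-255) of the extracted cluster.
    The seal is deemed present if enough points are classified as yellow."""
    r, g, b = cluster_rgb[:, 0], cluster_rgb[:, 1], cluster_rgb[:, 2]
    # Yellow: strong red and green, weak blue (thresholds are assumptions)
    yellow = (r > 120) & (g > 100) & (b < 80)
    return int(yellow.sum()) >= min_points
```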

The check for the eccentricity of an antenna connector is performed using the 3D point cloud. The points belonging to the inner and outer rings of the connector are extracted, each one is matched to a circle, and the eccentricity of the matched circles is determined.
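
The circle matching can be done with an algebraic least-squares fit; the sketch below (a standard Kasa fit, assumed here since the report does not name the method) fits each ring and reports the centre offset:

```python
# Least-squares circle fit and centre-offset (eccentricity) check
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) circle fit. xy: (N, 2) ring points projected onto
    the connector plane. Returns the centre (cx, cy) and the radius."""
    A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    return np.array([cx, cy]), radius

def eccentricity(inner_xy, outer_xy):
    """Distance between the fitted centres of the inner and outer rings."""
    (ci, _), (co, _) = fit_circle(inner_xy), fit_circle(outer_xy)
    return float(np.linalg.norm(ci - co))
```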

These virtual sensors were embedded in a framework that allows graphical programming of the inspection task and a simple parameterisation of the individual algorithms.

6. 3 integrated demonstrators

The technologies were integrated to obtain 3 demonstrators:

A 'full 3D' demonstrator was set up that consisted of a set of high-resolution digital cameras together with an illumination (flash). Both camera and flash were triggered using an FPGA-based sequencer unit that synchronised image acquisition with the motion of the part. Algorithms for pre-processing and defect detection were integrated.

A first 'extended 2.5D' demonstrator used a standard colour camera that simultaneously generated a 3D point cloud and colour / texture information. Two lasers of different colours were used as illumination. Based on a full intrinsic and extrinsic calibration, the point cloud and the colour images were matched. The resulting data were then used as input for the virtual sensors.

A second 'extended 2.5D' demonstrator consisting of a high-speed profile scanner, a laser module and a white light source was built. The laser line and the line light were quickly switched in alternating mode, so that depth and grey values could be acquired in a single scan. The resulting data were used as input for the virtual sensors that performed the actual inspection.

Potential impact

At the beginning of the project, the end users who provided sample parts and test cases confirmed the problems that had already been identified during the preparation of the proposal. Completeness inspection is an important part of quality control at the end of assembly lines. In many cases conventional 2D inspection systems have been implemented and tested in the production lines. These systems were either complex inspection systems with multiple cameras that perform a full check of the whole assembly, or single-camera systems, often using 'intelligent' cameras that provide simple programming tools. The results of using these systems were mixed. The key issue is the lack of robustness, which comes in combination with a lack of understanding of what exactly is going on in the inspection system or inside the intelligent camera. This lack of understanding makes it difficult for end users to re-configure the system and adapt it to new situations. A particular problem is that the behaviour of the system is counter-intuitive, in the sense that tasks that are perceived as simple by a human are difficult to solve with existing systems. Technically, the main problems were identified as (too) high sensitivity to changes in the visual appearance of the products, lack of depth information and problems with the system's overall complexity.

The 3DCOMPLETE project addressed all of these problems, by providing technologies that are robust to changes in appearance, by including depth information and by providing systems, where a single sensor produced all the information that is needed for performing a complex inspection task. Especially in situations where contrast is low and where non-rigid objects (cables, hoses) are involved, this is a substantial advantage. The low cost of the sensor system (compared to its capabilities) was identified as an additional advantage.

Based on the feedback of the end users received in later stages of the project, the technologies developed in 3DCOMPLETE will have a significant impact on end-of-line inspection systems in assembly lines. They have a good chance of transforming the conventional approach to automatic inspection systems in assembly lines. The current practice of having a single, large and complex inspection system at the end of the production line could transform into having multiple, smaller and simpler inspection systems distributed throughout the assembly line. This enables an earlier detection of defects and saves time and money for repairing defective assemblies. In many cases the motion of the production on a conveyor belt or a transfer system can be used for the scanning (in the case of 'extended 2.5D') or image acquisition (for 'full 3D'). Robotic systems were also proposed and discussed, where these are sufficiently quick to achieve the required cycle time. The two technologies ('extended 2.5D' and 'full 3D') that resulted from the project have developed into two quite distinct directions, clearly identifying potential fields of application. The 'extended 2.5D' technology, and especially the low-cost version with conventional industrial cameras, has a very high robustness to environmental conditions and can easily be integrated into existing production lines. The high robustness comes at the cost of lower quality data (only a single-view depth map; only a rough colour representation) and thus makes it suitable for presence detection, identifying the type of objects on the assembly and performing rough measurements.

The 'full 3D' technology has higher requirements in terms of controlled lighting conditions, precise transportation of the part and camera resolution. The typical implementation of a 'full 3D' system will thus be an end-of-line inspection system that acquires a representation of the assembly that is as complete as possible. These higher requirements in turn provide higher data quality, such as a more or less full 3D view of the object, a good representation of colours and comparably precise 3D data. These systems are thus suitable where completeness inspection has a stronger focus on performing measurements and distinguishing objects that are very similar.

In the proposal the market potential has been characterised as follows: 'The total market of machine vision systems in Europe is about EUR 3 billion per year with a moderate growth of about 10 % per year. The applications that can be covered by 3DCOMPLETE are discrete inspection (40 %) and to a certain degree also metrology in 2D (3 %) or 3D (12 %). 'Discrete inspection' can be further broken down into surface inspection and completeness inspection each of which has a 50 % share. Consequently, 3DCOMPLETE is addressing about 20 - 25 % of the total machine vision market, which amounts to EUR 700 million per year'.

Essentially, this assessment is still true and the SME partners have a good chance of accessing a part of this market, which was, at the time of the preparation of the reports on use and dissemination, still estimated to be 1 % of the total market and to amount to EUR 7 million additional turnover per year.

A specific technical risk with respect to achieving the potential impact became apparent during the lifetime of the project. Originally, the main competing technologies were assumed to be time-of-flight cameras, which provide a highly integrated solution for acquiring depth and colour information. Their main drawback was the low resolution. This is still the case. However, on the consumer market, technologies such as Microsoft Kinect emerged that had a substantial impact on methods for acquiring depth information in vision applications. The main advantages are its very low cost (approximately EUR 150) and ease of use. In the meantime a substantial community has also been established that develops software libraries and applications using Kinect as a sensor. For the application in industrial inspection there are still two drawbacks: the fact that Kinect is a consumer product and its comparably low resolution (because it was originally intended for capturing human motion). Overcoming both of these drawbacks is just a matter of time, and the SME partners of 3DCOMPLETE will closely monitor the development of this technology. Still, there is a good chance of realising the market potential, because the Kinect provides data that are quite similar to those produced by the 'extended 2.5D' sensor system, so that algorithms for completeness inspection will need only little adaptation to be suitable for Kinect-based sensor systems as well.

Societal impact

The project is focused on the development of industrial technologies and thus has little direct societal impact. However, it does have an impact on the role of workers in production environments such as assembly lines. Currently, workers often perform an end-of-line inspection, which often does not require highly qualified personnel. By using automatic inspection systems, these jobs are converted into highly qualified jobs, allowing workers to focus on the important issues and leaving room for the optimisation of the production process.

In the medium-term future, given the trend towards higher numbers of product variants and almost 'individual' products, end-of-line inspection will require automatic systems, because it will be infeasible for human workers to deal with such large numbers of product variants. The technologies developed in 3DCOMPLETE thus support the trend towards individual products and 'mass customisation'.

Main dissemination activities

The main demonstration of the 3DCOMPLETE technologies was at the CONTROL fair in Stuttgart. The CONTROL Fair is an international trade fair for quality assurance. It is the world’s only trade fair which focuses strictly on quality assurance and presents the entire spectrum of products, systems and complete solutions for efficient, effective quality assurance. Many innovative companies prefer to exhibit their new products at Control for the first time, because international expert visitors attend the event in order to gather information about worldwide offerings and how to exploit them in actual practice as quickly as possible.

This is why the project partners decided to exhibit at the Control Fair 2012, which took place in Stuttgart, Germany, from 8 to 12 May 2012. The SME partners and all the research and technological development (RTD) partners set up a booth under the lead of the coordinator. At the booth both demonstrators, the 'full 3D' and the 'extended 2.5D' technology, were presented. The particular focus on 3D completeness inspection was a quite distinct feature of this presentation and many of the visitors attended the booth specifically because they knew about the potential of 3D inspection as compared to more conventional 2D systems. The project partners considered the participation in this fair a success, with many contacts coming either directly from potential end users or from system integrators.

Aside from this industrial dissemination event, the RTD partners also prepared scientific publications, some of which have already been published, while others are still in the review process:

Edmond Wai Yan So, Matteo Munaro, Stefano Michieletto, Emanuele Menegatti, Stefano Tonello; '3DCOMPLETE: Efficient Completeness Inspection using a 2.5D Colour Scanner'; submitted to Computers in Industry, 2012

International Computer Vision Summer School (ICVSS), 'Reducing the Problem of Occlusions in Laser-Triangulation Reconstruction'; M. Munaro, S. Michieletto, E. Menegatti; Sicily, Italy; 11 - 16 July 2011

International Conference on Machine Vision, Image Processing, and Pattern Analysis (ICMVIPPA), 'Fast 2.5D model reconstruction of assembled parts with high occlusion for completeness inspection'; M. Munaro, S. Michieletto, E. W. Y. So, D. Alberton, E. Menegatti; Venice, Italy; November 2011

12th International Conference on Intelligent Autonomous Systems (IAS-12), 'Real-Time 3D Model Reconstruction with a Dual-Laser Triangulation System for Assembly Line Completeness Inspection'; E. W. Y. So, M. Munaro, S. Michieletto, M. Antonello, E. Menegatti; Korea; June 2012

10th International Symposium on Robotic and Sensors Environments (ROSE), 'Calibration of a Dual-Laser Triangulation System for Assembly Line Completeness Inspection'; E. W. Y. So, S. Michieletto, E. Menegatti; Magdeburg, Germany; 16 November 2012; recipient of the Best Paper Award

Other dissemination activities included the setup of a project web page (www.3DCOMPLETE.eu), the preparation of a flyer that was distributed on several occasions, and press releases that led to publications in regional and national newspapers.

Exploitation of results

The details of the exploitation plans of the SME partners will be described in the confidential part of this report and in the deliverable on use and dissemination of the project results. The general concept that will be implemented by 2 of the 3 SME partners is driven by the financial capabilities of the SMEs. An extended development of a finished product using their own funds is not feasible for either of these SMEs. Therefore the approach will be stepwise, starting with first implementations at 'friendly' customers, in particular considering the test cases that were investigated in the project. These first installations will have a project-based business model, still involving a significant proportion of engineering work. Within 2 - 3 years after these first implementations, the technology will mature and the scope of the product (including sensor system, illumination, electronics and software) will emerge, by identifying those features that need to be present in almost any application and by defining appropriate interfaces to the application-specific developments. After this period the resulting product can be sold either as part of a larger project-based implementation of completeness inspection systems, or to system integrators that use the technology as part of the production equipment that they build.

The third SME partner has a slightly different business model. This SME will take over the exploitation of the project results in northern and north-eastern European countries, to which the SME has very good access and also established sales channels.

List of websites: http://www.3DCOMPLETE.eu

Contact details: Ing. Petra Thanner, MSc
Profactor GmbH
Im Stadtgut A2
4407 Steyr-Gleink, Austria
e-mail: petra.thanner@profactor.at
phone: +43-725-2885950