
Planetary Robotics Vision Ground Processing

Final Report Summary - PROVISG (Planetary robotics vision ground processing)

Executive Summary:
PRoVisG stands for 'Planetary Robotics Vision Ground Processing'. It is a Collaborative Project within the frame of FP7-SPACE-2007-1. PRoVisG started in October 2008 and ran for 45 months, until June 2012. It brought together major EU and US research institutions and stakeholders involved in space robotic vision and navigation to develop a unified approach for robotic vision ground processing. One main result is a web-based Geographic Information System (GIS), facilitating the invocation of comprehensive visual data processing and the visualization of the context, history, vision meta-data and products of robotic planetary missions. Prototypes of rovers and airborne probes were used in terrestrial field test campaigns to demonstrate visual processing ability going beyond what is currently achievable with the Mars Exploration Rovers (MER), the Mars Science Laboratory and currently envisaged ESA missions.
The project built upon knowledge both from users of vision data from planetary surfaces and from computer vision and robotics experts. It was based upon a straightforward research and development chain reflected by its main work packages:
- Requirements for vision processing were identified by scientists and mission operators
- A consolidation and collection took place, covering existing tools, data structures, commitments, and interfaces used within the planetary science community on the one hand, and by computer vision and robotics on the other hand
- The requirements and interfaces led to an integrated robotic vision processing chain, 'PRoViP' (Planetary Robotic Vision Processing)
- For easy access to PRoViP, an overlay was provided by means of a web-based GIS
- To assess the impact and judge the usability of the elaborated knowledge and tools, the traceability of the results back to the requirements was checked
- A major outcome of such collaborative projects consists of publications, education, presentations and the support of students in various respects.

Within 45 months, the following achievements and results were successfully finalized:
- Following the Project logistics, relevant information about vision sensors' data and their processing was collected, and specifications on implementation, functional interfaces and use cases were formulated.
- The collection of necessary information was complemented by mechanisms for interfacing existing planetary data bases, exploiting various vision sensor geometries, identifying relevant 3D data structures, and identifying missing items that had to be newly developed.
- The Vision processing chain PRoViP has been completed. High-level vision tools from the relevant PRoVisG partners were collected and integrated into PRoViP.
- PRoGIS was implemented, placed on the internet and interfaced with PRoViP. PRoGIS is a web-GIS based access platform that provides immersive access to parts of the MER vision data and launches 3D vision processing tasks on images interactively selected in the system.
- The PRoVisG Consortium and most of its members have published several tens of abstracts and papers at relevant conferences such as the European Planetary Science Congress (EPSC), the European Geoscience Union (EGU) Conference, and computer vision as well as 3D vision conferences.
- The '3D Mars Challenge', a contest for planetary 3D Vision processing has been launched, making use of reference data sets from PRoVisG Consortium Members. After its evaluation, the Winners were taken to a PRoVisG meeting at JPL in December 2012.
- http://www.provisg.eu is the official PRoVisG web site.
- A Stereo Work Station has been successfully implemented in cooperation with JPL to work with planetary surface stereo imagery. It is publicly available on SourceForge ( http://sourceforge.net/ ).
- A web service for structure from motion has been established and improved for general-purpose 3D vision processing of planetary surface and aerial imagery.
- Various high-level tests were performed and data products were generated that show the ability of 3D vision to exploit planetary surface imagery.
- The PRoVisG Field Trials Tenerife 2011 were held in conjunction with a Summer School in Berlin.

Project Context and Objectives:
For a probe on another planet, time is of the essence, since its operational life is often short. The harsh environment, extreme temperatures and pressures, dust and radiation threaten to damage the hardware and compromise the mission at any moment. Given the difficulty and cost of getting to other planets, obtaining a high return on investment is crucial.
In order to maximize the use of a robotic probe during its limited lifetime, scientists have to be provided immediately with 3D data products of the best achievable visual quality, and mission controllers need to minimize the time spent on planning the next activities. PRoVisG facilitates this, having developed technology for the rapid processing and effective representation of visual data by improving planetary robotic vision ground processing facilities. Its ambition was to collect a tool set and integrate a versatile and flexible processing chain which can easily be adapted to the various tasks.
PRoVisG brought together major EU and US research institutions and stakeholders involved in space robotic vision and navigation to develop a unified approach for robotic vision ground processing. One main result is a web-based Geographic Information System (GIS), facilitating the invocation of comprehensive visual data processing and the visualization of the context, history, vision meta-data and products of robotic planetary missions. Prototypes of rovers and airborne probes were used in terrestrial field test campaigns to demonstrate visual processing ability going beyond what is currently achievable with the Mars Exploration Rovers and currently envisaged ESA missions.
The main PRoVisG objectives can be summarized as follows:

- Collection of requirements from planetary scientists and mission engineers to define important features to be provided by robotics vision components
- Definition of interfaces between various components regarding robotics vision, both on-board and on-ground, as well as the data structures required thereto
- Integration of a Planetary Robotic Vision ground Processing chain (PRoViP) with representative components available at the proposing institutions, with minor adaptation and integration efforts
- Integration of a web-based GIS (PRoGIS) that provides a comprehensive vision data processing chain as well as visualization of the context, history, vision meta-data and products of complete robotic planetary missions
- Providing an open-ended framework for vision processing in ongoing and future robotic missions by definition and implementation of a simple open architecture for PRoViP
- Enabling PRoViP for batch 3D processing of MER imagery to demonstrate the European state of the art
- Demonstrating the European state of the art, making use of existing planetary robotic test beds to verify the ability of PRoViP to cope with such versatile environments
- Improving the public outreach of robotic missions by efficient and simple visualization mechanisms
- Ensuring a flexible yet robust 3D information source for activity planning and operations monitoring
- Including funds to issue Announcements of Opportunity (AOs) to European vision groups to submit results of specific algorithms on consortium-provided test data.
- Organising workshops as well as sub-workshops in major computer vision and planetary science conferences to present the results, their evaluation and the winners of the PRoVisG 'prize', and a summer school in 2010 at Aberystwyth University covering planetary robotics vision.
- Maintaining an official web site for education, public outreach, and information sharing (such as updating project status, disseminating results, and advertising events)
- Supporting, contributing to, and organizing public events such as ECCV, the Farnborough Air Show, or the International Astronautical Congress

The project built upon knowledge both from users of vision data from planetary surfaces and from computer vision and robotics experts. It was based upon a straightforward research and development chain reflected by its main work packages:
- Requirements to planetary robotics vision ground processing were identified by scientists and mission operators (WP2: Requirements)
- A consolidation and collection took place, covering existing tools, data structures, commitments, and interfaces used within the planetary science community on the one hand, and by computer vision and robotics on the other hand (WP3: Interfaces)
- The requirements and interfaces led to an integrated robotic vision processing chain, 'PRoViP' (Planetary Robotic Vision Processing). This was the core part, integrated from contributions by the computer vision and robotics experts (WP4: PRoViP).
- For easy access to the PRoViP processing tools, an overlay was provided by means of a web-based GIS (WP5: PRoGIS).
- To assess the impact and judge the usability of the elaborated knowledge and tools, the traceability of the results back to the requirements was checked (WP6: Evaluation)
- A major outcome of such collaborative projects consists of publications, education, presentations and the support of students in various respects. A dedicated work package emphasized these actions and made sure that the knowledge collected within PRoVisG is properly re-used and exploited by the community (WP7: Dissemination).

The study logic is given in Figure 1.

Figure 1: PRoVisG Study logic

Project Results:
4.1.3.1 Summary of Foreground
The PRoVisG results can be separated into software, data, hardware, test results, comprehensive information via reports, as well as dissemination and cooperation aspects:
1) PRoViP (the Planetary Robotics Vision Processing Chain) contains a whole toolchain for 3D vision processing of imagery obtained on the surface of planetary bodies. It is available on two platforms (Linux & Windows) and can be used both for the support of future and ongoing missions and for academic purposes. It is extensible in terms of functionality and data schemes to be processed. It will be further exploited in the FP7-SPACE domain in the project 'PRoViDE' for mass 3D processing of planetary robotics vision data. In addition, it supports the forthcoming ESA / ROSCOSMOS ExoMars 2018 mission in terms of PanCam image processing.
2) PRoGIS (the Planetary Robotics Vision GIS) has been established to provide web-GIS based access to portions of the US MER mission data, in order to invoke PRoViP components such as panorama mosaicking, Digital Elevation Model (DEM) generation or ortho image generation. PRoGIS will be extended for further usage within PRoViDE as well. The underlying GIS & database engine is able to handle full mission data sets as planned on Mars within the next few years.
3) A catadioptric camera system was developed (the 'omniview stereo camera').
4) Knowledge about ongoing and past missions was collected and placed in a set of reports accessible to specific parts of the European planetary research community. Sensor calibration, data structures, global coordinate system issues on Mars missions in general, and various test cases in terrestrial applications, including submarine, were elaborated.
5) The relationship between European and US planetary robotics vision processing was substantially improved by a tight cooperation due to involvement of US players Ohio State University (OSU) and Jet Propulsion Laboratory (JPL). Joint workshops, data exchange, and a software harmonization / integration campaign close to the end of the Project at JPL showed a high level of mutual understanding and improvement of procedures on either side. One European PRoVisG Beneficiary (UCL) is involved in the current US MSL mission, which ensures further exploitation of this collaboration channel.
6) The web service for structure from motion is available for multi-view stereo applications, both for the academic community (for comparisons) and for consumer and education applications, to produce 3D reconstructions from a series of images of the same object or region. Due to the complexity of planetary robotics vision test data, PRoVisG contributed substantial improvements, making the service more robust also for terrestrial applications.
7) A large set of dissemination actions was successfully realized, such as workshops in the course of international conferences, several tens of presentations and conference papers, as well as more than ten peer-reviewed papers in computer vision and planetary research venues.
8) The PRoVisG Field Trials in Tenerife in September 2011, combined with a summer school in Berlin, showed that such events can raise the understanding of the necessary techniques and logistics of planetary robotics vision, and of the environment in which the robotic assets are used. Valuable experience was collected on these issues, which will lead to enhanced field trials, reference data acquisition and concept verification & rehearsals for upcoming space missions.

4.1.3.2 Software
i. PRoViP: Planetary Robotics Vision Processing Chain
Within the PRoVisG project, multiple heterogeneous image processing functionalities available at the contributing partner institutions that allow ground processing of planetary mission data were integrated into an extensive framework called PRoViP. PRoViP can essentially be regarded as the processing core in PRoVisG, performing image processing on selected sets of input data, at the request of a client. The client allows the user to select the data to be processed, choose and configure the desired workflow, start the actual processing, and collect the results.
Two PRoViP clients have been implemented which are available in Windows as well as Linux:
- a command line version, PRoViPInvoke, that can be used to perform batch processing and is embedded and used in PRoGIS (the PRoVisG web-based GIS, see http://www.progisweb.eu )
- a standalone Graphical User Interface (GUI) to demonstrate the capabilities of the processing chain and to provide a comfortable processing environment (Figure 3).

The Planetary Data System (PDS) format is used as the exclusive external data interface to PRoViP, both for input and output data. Internally, however, the more light-weight RSX/PAR formats from JR are used. A dedicated import/export module converts between the formats on input and output.
PRoViP is implemented in Python, a widely used scripting language that allows for rapid creation of workflows while offering a very flexible programming environment. PRoViP relies on the Qt framework for GUI creation, which is accessed via PySide, the Python language bindings for Qt.
The processing chain is assembled in Python from sequences of processing steps and/or workflows. Configuration parameters and settings as well as workflow definitions are heavily based on XML files. The Graphical User Interface (GUI) is in part automatically generated from these XML files.
PRoViP workflows and processing steps use a generic XML interface that can be implemented and embedded by different contributing parties. These implementations may either reside on the machine PRoViP is executed from, or be called via remote processing (see Figure 2 for an overview of the PRoViP framework including all remote processing options).
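To illustrate the kind of XML-driven step dispatch described above, the following is a minimal sketch in Python (the language PRoViP is implemented in). The XML schema, step names and registry are illustrative assumptions for this sketch, not the actual PRoViP interface.

    # Minimal sketch of an XML-driven processing chain in the spirit of PRoViP.
    # The XML layout, step names and registry below are illustrative assumptions.
    import xml.etree.ElementTree as ET

    def make_panorama(params, inputs):
        print("stitching panorama with", params)
        return inputs  # placeholder result

    def make_dem(params, inputs):
        print("generating DEM with", params)
        return inputs

    # Registry mapping step names (as they would appear in the workflow XML)
    # to local implementations; remote steps could be dispatched analogously.
    STEP_REGISTRY = {
        "panorama_mosaicking": make_panorama,
        "dem_generation": make_dem,
    }

    def run_workflow(workflow_xml, inputs):
        """Execute the processing steps listed in a workflow XML file in order."""
        root = ET.parse(workflow_xml).getroot()
        data = inputs
        for step in root.findall("step"):
            name = step.get("name")
            params = {p.get("key"): p.get("value") for p in step.findall("param")}
            data = STEP_REGISTRY[name](params, data)
        return data

    # Example workflow file (hypothetical):
    # <workflow>
    #   <step name="panorama_mosaicking"><param key="blend" value="feather"/></step>
    #   <step name="dem_generation"><param key="resolution" value="0.01"/></step>
    # </workflow>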

Figure 2: PRoViP framework overview

Figure 3 shows the layout of the PRoViP GUI version, which allows the access and definition of missions (and the underlying file data base) and the definition of datasets to be processed via one of the available workflows.
The command line version PRoViPInvoke is also implemented in Python (ProvipInvoke.py) and acts as a wrapper around the import, processing and output functionality of the PRoViP processing core. It provides all the functionality available in the standalone GUI over a command line interface. Input images and meta-data have to be available in PDS format. The data set to process (e.g. the directory in which the input files reside) is passed as a parameter to the script. The import module is invoked, establishing the database and importing the files.
The script processes an integrated workflow configuration XML file containing the workflow, including all sub-workflows and processing steps as well as all processing parameters. This is the main difference to the standalone GUI version, in which the XML configuration files defining the workflows and processing steps, as well as the ones defining the processing parameters, are held separately in order to keep the system modular. The integration of these different files into the integrated workflow configuration simplifies the interface towards PRoGIS.
Further, remote processing is available for both PRoViP clients, allowing processing to be outsourced to remote servers.
Detailed instructions and descriptions of the available workflows are given within the PRoViP user manual, as shown in Figure 4. This manual is included in the PRoViP installation and can be accessed by pressing the F1 key in the main GUI or via the 'Documentation' entry in the 'Help' menu bar (see Figure 3).

Figure 3: Left: PRoViP GUI startup window. Right: PRoViP GUI main window


Figure 4: Parts of PRoViP User Manual

ii. Structure from motion pipeline
CTU finalized the v1.0 implementation of their CMP Structure from Motion (SFM) web service interface (Figure 5). Two interfaces were implemented. The web interface to the CMP SFM web service allows users to log into the system, upload data, run different 3D reconstruction pipelines, and view progress and results. The Command Line Interface (CLI) provides equivalent functionality to third party clients, which can call the service without manual interaction. The CLI interface has been used by PRoViP to call the 3D reconstruction of scenes via the CMP SFM service. The service is available at http://ptak.felk.cvut.cz/sfmservice/ .
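For illustration, a third-party client in the spirit of the CLI interface could call such a service over HTTP as sketched below. The endpoint paths, parameter names and response fields are assumptions for this sketch and do not document the actual CMP SFM API.

    # Hedged sketch of driving a structure-from-motion web service from a script.
    # Base URL taken from the text; job/endpoint layout is assumed.
    import time
    import requests

    SERVICE = "http://ptak.felk.cvut.cz/sfmservice"

    def reconstruct(image_paths, pipeline="default"):
        # Upload the image set and start a (hypothetical) reconstruction job.
        files = [("images", open(p, "rb")) for p in image_paths]
        job = requests.post(f"{SERVICE}/jobs", files=files,
                            data={"pipeline": pipeline}).json()
        # Poll until the job is finished, then fetch the resulting 3D model.
        while True:
            status = requests.get(f"{SERVICE}/jobs/{job['id']}").json()
            if status["state"] in ("done", "failed"):
                break
            time.sleep(10)
        if status["state"] == "done":
            return requests.get(f"{SERVICE}/jobs/{job['id']}/model").content
        raise RuntimeError("reconstruction failed")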

Figure 5. (Left) Web based interface to the CMP SFM web service allows users to upload images and run a number of 3D reconstruction pipelines. (Right) Command Line Interface (CLI) can be used to call the SFM service from a remote application.
iii. PRoGIS
The PRoGIS web GIS interface is illustrated below in Figure 6 and Figure 7. The first screenshot shows the web map with HiRISE image background, overlaid rover track and rover stop points, and polygons representing the image fulcra supplied by Ohio State University (OSU), coloured according to MER camera system, in the background. Sub-windows show (in clockwise order from top left) basic details of the rover stop point that was selected on the map (to support further data queries & referencing); a grid of thumbnails associated with each image taken at the stop point, including derived products; and PRoViP processing results (in this case, an image panorama).


Figure 6 Example of the PRoGIS web interface

The second example shows the 3D viewing capabilities, using UCL's StView system to generate red/cyan anaglyph images.


Figure 7 PRoGIS interface showing anaglyph 3D view

Step-by-step details of the use of PRoGIS to display MER Opportunity rover data and to call PRoViP can be found in deliverable D5.4, the PRoGIS User Guide.
iv. Stereo Workstation
Due to the smooth nature of present-day landing sites (mainly a consequence of engineering constraints), MER images often contain regions of homogeneous texture with only sparse features. This not only complicates obtaining accurate stereo matching results but also prevents the retrieval of a dense disparity map without additional manual measurements being involved (Shin and Muller, 2009). Furthermore, a data consumer without expert knowledge of the stereo matching process can hardly be expected to choose an appropriate matching algorithm and set appropriate parameter values to produce a 3D model of acceptable quality from a stereo image. Therefore, it is necessary to provide a lightweight rendering method that can display a stereo pair with a 3D effect more effortlessly. To address this, JPL developed a Java based stereo rendering engine (called JADIS) and introduced some simple applications, which directly visualise a stereo image using a 3D stereo display without a 3D model (Deen and Lorre, 2005). JADIS was published as open source, and from this starting point the stereo workstation tool was developed.
JADIS inherits its platform-independent characteristics from its implementation language, Java (Pariser and Deen, 2009). Also, JADIS is designed to automatically select an appropriate stereo rendering mode according to the graphics card configuration, e.g. quad-buffer rendering mode or anaglyph mode. To bring this functionality to the European space community, MSSL has developed a platform-independent stereo application that combines some of the practical stereo tools (e.g. stereo matcher and triangulation) with JPL's stereo visualisation library.
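As an illustration of the anaglyph fallback mode mentioned above, a red/cyan anaglyph can be composed from a rectified stereo pair as in the following minimal sketch; this is a generic illustration, not the JADIS implementation.

    # Compose a red/cyan anaglyph from a rectified stereo pair of equal size.
    import numpy as np
    from PIL import Image

    def make_anaglyph(left_path, right_path, out_path):
        left = np.asarray(Image.open(left_path).convert("RGB"))
        right = np.asarray(Image.open(right_path).convert("RGB"))
        anaglyph = np.zeros_like(left)
        anaglyph[..., 0] = left[..., 0]      # red channel from the left image
        anaglyph[..., 1:] = right[..., 1:]   # green and blue from the right image
        Image.fromarray(anaglyph).save(out_path)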
In summary, we have achieved three main goals in this task, namely:
- Development of a stereo image processing and visualisation tool (called StereoWS) using JPL JADIS library
- Modification of an existing orbital processing chain (e.g. ALSC) for close-range stereo imagery and integration of the result into StereoWS
- Preparation of a platform-independent and web-environment friendly stereo processing software tool (i.e. StView) for future use within PRoGIS and PRoViP
A sample set of screenshots is shown below: the main control window (Figure 8) and the auxiliary windows which can be displayed (Figure 9).
The stereo workstation tool also includes the UCL Gotcha matcher, which, although slow, produces very high quality results. Examples of the parameter setting and progress monitoring windows are shown in Figure 10.
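For readers unfamiliar with dense stereo matching, the following sketch shows plain block matching (sum of absolute differences) on a rectified pair. ALSC and Gotcha, mentioned above, are considerably more sophisticated (adaptive least-squares refinement and region growing), so this is only a conceptual illustration.

    # Naive block-matching disparity estimation for a rectified grayscale pair.
    import numpy as np

    def block_match(left, right, max_disp=64, win=5):
        """Return a left-to-right disparity map using sum of absolute differences."""
        h, w = left.shape
        half = win // 2
        disp = np.zeros((h, w), dtype=np.float32)
        for y in range(half, h - half):
            for x in range(half + max_disp, w - half):
                patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
                best, best_d = np.inf, 0
                for d in range(max_disp):
                    cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                    cost = np.abs(patch - cand).sum()
                    if cost < best:
                        best, best_d = cost, d
                disp[y, x] = best_d
        return disp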

Figure 8: StereoWS: Main control window (left) and stereo view window (right)

Figure 9 Auxiliary viewers: a) navigation viewer
b) disparity map viewer c) tiepoint info viewer

Figure 10 Parameter setting and progress monitoring windows:
a) ALSC parameter setting and ALSC progress monitoring window
b) GOTCHA parameter setting and progress monitoring window
c) 3D reconstruction parameter setting window

The StereoViewer V2.0 application is publicly available from sourceforge (http://sourceforge.net/directory/os:mac/?q=stereo).
Some new features have also been added during the web integration. These include: 1) disparity data can be displayed on a grid; 2) a tab GUI component is used to compact multiple control panels; 3) JPL's stereo test code is inherited rather than UCL's stereo workstation.
UCL's data fusion algorithm has been designed to import XYZ data. This leads to the need for a tool that can visualize XYZ point clouds with camera information (e.g. visual fulcrum, coordinate systems, etc.) to select fusion candidates from MER data products.
The key features of the developed XYZ viewer include:
- Support for stereo visualisation of 3D points using JOGL
- Multiple XYZ data from the same site can be visualised with their rover and camera frames
- Selected XYZ Data can be highlighted and visual fulcra can also be visualised
- Data selection by sol, drive ID and instrument type
- Integration of the stereo viewer: the disparity value at a selected point is converted to a ray in 3D space, which varies as the offset of the stereo cursor changes (a minimal version of this conversion is sketched below)
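A minimal version of the disparity-to-3D conversion mentioned in the last item, assuming a rectified pinhole stereo pair with focal length f, baseline b and principal point (cx, cy), could look as follows; the function and parameter names are illustrative, not the StereoWS interface.

    # Convert a disparity measurement at pixel (u, v) into a 3D point in the
    # left camera frame (rectified pinhole stereo; f, b in consistent units).
    def disparity_to_point(u, v, disparity, f, b, cx, cy):
        if disparity <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        z = f * b / disparity          # depth along the optical axis
        x = (u - cx) * z / f           # lateral offset
        y = (v - cy) * z / f           # vertical offset
        return x, y, z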
v. Visualization & Rendering Pipeline
The PRoVisG Visualisation and Rendering Pipeline (VRP) interface can generate synthetic camera image data. Data inputs into the VRP include rover CAD models, terrain DEM data generated using PRoViP (with images selected using PRoGIS), surface reflectance model data, and information regarding the rover's planetocentric latitude and longitude, the local true solar time (LTST), and date (Figure 11). The rover's pan & tilt unit (PTU), which points the rover cameras in a given direction, and camera details such as field of view (FOV), detector width and pixel dimensions can be modified. Using the user-defined input parameters, the VRP generates a synthetic camera image. The pipeline also has the potential for generating virtual stereo camera pairs for VR display technology such as stereo workstations and GeoWall technology.
The current PRoVisG VRP web-based GUI interface has been implemented using the C# programming language, and it can be called using a web browser at the following URL:
http://pcsgp.dcs.aber.ac.uk/
This presents the remote (client) user with a web page (shown in Figure 11) that can be used to make a number of modifications to a given scene file. The current PRoVisG VRP web-based GUI interface supports the following selections:
- Scene selection (currently either Marquette Island or Victoria Crater).
- Rover latitude and longitude, date and time (for calculating Sun azimuth & elevation).
- Camera (view) type.
- Camera pixel dimensions (pixel numbers).
- Camera detector size dimension (Width = Height mm).
- Camera field of view - FOV (degrees).
- Pan & tilt unit angles (degrees).
- Rover drill rotation (degrees) and translation (m).

When the 'Submit' button is clicked, the PRoVisG VRP web-based GUI interface reads the requested parameters and creates a MAXScript file that reflects the status of the user web page. This MAXScript file, together with a Remote Procedure Call (RPC), is then sent to the AU-based 3ds Max server. Figure 12 and Figure 13 show examples of the synthetic images that the VRP server has generated and returned to the remote user web interface.
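As an illustration of how the camera parameters exposed by the VRP interface (field of view, detector width, pixel dimensions) relate to a synthetic pinhole projection, consider the following sketch; it is a simplified model, not the 3ds Max renderer set-up itself.

    # Relate FOV, detector width and pixel dimensions to a pinhole projection.
    import math

    def focal_length_px(fov_deg, detector_width_mm, image_width_px):
        focal_mm = (detector_width_mm / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
        return focal_mm * image_width_px / detector_width_mm

    def project(point_cam, fov_deg, detector_width_mm, width_px, height_px):
        """Project a 3D point given in camera coordinates (x right, y down, z forward)."""
        x, y, z = point_cam
        f = focal_length_px(fov_deg, detector_width_mm, width_px)
        u = width_px / 2.0 + f * x / z     # pixel column
        v = height_px / 2.0 + f * y / z    # pixel row
        return u, v

    # Example: a 65 degree FOV camera, 10 mm wide detector, 1024 x 1024 pixels
    # print(project((0.5, 0.0, 5.0), 65.0, 10.0, 1024, 1024))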


Figure 11. Version 1.1 web page interface for the PRoVisG Visualisation and Rendering Pipeline (VRP). This version supports the translation and rotation of the ExoMars 2018 rover drill, and a choice of two scenes is available.


Figure 12. PRoVisG VRP generated synthetic camera image using the ExoMars 2018 rover and PRoGIS/PRoViP generated multi-resolution DEM data of Marquette Island (Opportunity Sols 2095 to 2121). The selected camera view shown here is 'Overview 1', which is the default.


Figure 13. PRoVisG VRP generated synthetic camera images.
The selected camera views shown here are
'Left WAC', 'HRC', and 'Right WAC'.

Version 1.1 of the PRoVisG Visualisation Interface (see Figure 11) allows the ExoMars 2018 rover drill to be translated and rotated, and there is now a choice of two scene files (currently either Marquette Island or Victoria Crater).
4.1.3.3 Data from Tenerife field trial

AUPE was the main vision sensor for the Field Trials. More than 7000 images were taken; the following list summarizes the acquired data sets (see Figure 14, left for a simple example):
- At Llanos de Ucanca, 5 panoramas (two of them with 180° pan, three with 360°) were acquired, partly in RGB.
- At Minas de San José, more than 15 stereo WAC panoramas with different angular pan & tilt ranges were taken, most of them in RGB.
- Parts of panoramas were additionally covered with the full WAC wavelength range.
- High dynamic range sequences were taken for selected panoramas to prove the benefit of a > 12 bit radiometric range
- A trajectory of about 400 meters was covered with stereo visual odometry sequences, at intervals of 1-2 meters.
- For some parts of the panoramas, super resolution sequences (using about ten slightly different 'random' pan-tilt values of the same scene) were acquired
- For parts of some panoramas the HRC from AUPE was also used to ensure the existence of a realistic test data set for ExoMars PanCam processing.
For Minas Day 3, a comprehensive list considering photogrammetry-based localization was compiled (Table 1).

Table 1: Part of AUPE image data description table to be found on the PRoVisG ftp server

Hypercam was applied at different sites, all of them also covered with AUPE. See Figure 14, right, for a far-range example.

Figure 14: Left: Parts of AUPE WAC Panorama.
Right: Example for a collocated HC1 and AUPE WAC image set.

The last day of the Field Trials was dedicated to the Omniview Camera. At Minas de San José an area of about 20 x 20 meters was traversed twice in a circular path (Figure 15). The Omniview Camera was operated in video mode and captured at about 10 Hz.


Figure 15: Left: Omniview camera experimental site.
Right: Video 2: Raw and (Panorama) processed Omniview images.

4.1.3.4 Hardware
Highly autonomous terrain rovers are currently being designed for future Mars exploration missions. Due to the limited communication possibilities between Mars and the Earth, such vehicles have to navigate and travel autonomously.

Usually, omniview cameras are built with conventional objectives and external mirrors. Such set-ups are bulky, heavy, and sensitive to vibrations. CSEM has already developed a miniaturized omniview system consisting of a mirror lens and an imaging lens in one compact lens holder.

Figure 16: (a) Catadioptric lens system,
(b) mounted upside-down on PhotonFocus camera,
(c) vertical stereo vision assembly,
(d) schematic view of the compact catadioptric system

The optical system is assembled in a metallic tube, with each lens fixed and adjusted by spacer rings. The tube has a C-mount thread to be screwed into the lens holder, whereby the focal length can be adjusted. The prototype lenses were fabricated in plastic to reduce costs.

Another innovative camera technology developed at CSEM is based on the principle of Time-Of-Flight (TOF). A dedicated image sensor with smart 'lock-in' pixels performs a synchronous detection of the phase of a modulated infrared light field. These phase offsets correspond to different flight times and therefore to different distances, acquired for every pixel individually in parallel.

For PRoVisG, the commercially available SR4000 camera from MESA Imaging AG, a CSEM spin-off company, is used.

Figure 17: Left: Raw sensor data recorded with omniview camera system:
upper camera, and lower camera side-by-side.
Right: Unrolled panorama images created from raw data


4.1.3.5 Experimental Results
PRoVisG contained various testing activities with existing and newly generated software, hardware and data. In the following, some specific test results are summarized.
a. MER Multi-Resolution Reconstruction
MER data already provides quite mature means of spatial data description documented in the PDS labels attached to the experiment data records (EDR). This allows the fusion between NavCam stereo & PanCam stereo reconstruction, and a Microscope image overlaid on the NavCam structure & texture. Such combinations are possible without major effort due to the excellent image orientation quality provided by the PDS.
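As a simple illustration of reading such meta-data, the following sketch extracts keyword/value pairs from a PDS3 (ODL) label. Real labels contain nested OBJECT/GROUP blocks, units and multi-line values, so a production tool should rely on a dedicated PDS library; this is only an assumption-laden minimal reader.

    # Minimal reader for top-level keyword/value pairs of an attached PDS3 label.
    def read_pds_label(path):
        keywords = {}
        with open(path, "r", errors="replace") as f:
            for line in f:
                line = line.strip()
                if line == "END":          # end of the label section
                    break
                if "=" in line:
                    key, _, value = line.partition("=")
                    keywords[key.strip()] = value.strip().strip('"')
        return keywords

    # e.g. label = read_pds_label("image.IMG"); print(label.get("INSTRUMENT_ID"))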



Figure 18: Multi-scale fusion of NavCam, PanCam,
and Microscope from MER (JR).
From sol 2095 to sol 2121, the Opportunity rover stayed at the same position, 'Marquette Island', and captured a large number of different types of images, such as PanCam, Front Hazcam, NavCam, and Microscopic Imager images.
Four different data sets of the same site were combined. Image orientations were primarily taken from the PDS labels, with a subsequent manual correction of the image orientation. Each stereo pair was matched; the disparities, together with the camera orientations from the PDS, were directly used to generate VRML files (Figure 18).
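The VRML output step can be illustrated by a minimal point-cloud writer; real PRoViP products also carry meshes and texture, so this is only a sketch of the file format.

    # Write a 3D point cloud (list of (x, y, z) tuples) as a VRML 2.0 PointSet.
    def write_vrml_points(path, points):
        with open(path, "w") as f:
            f.write("#VRML V2.0 utf8\n")
            f.write("Shape {\n  geometry PointSet {\n    coord Coordinate {\n      point [\n")
            for x, y, z in points:
                f.write(f"        {x:.4f} {y:.4f} {z:.4f},\n")
            f.write("      ]\n    }\n  }\n}\n")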
b. Svalbard AMASE Campaign (2009-2011)
The Svalbard AMASE Campaigns were field trials in a Mars analogue area that delivered valuable input for PRoVisG.
The objectives of these expeditions included
- the performance of science similar to that of an 'In Search for Life' mission to Mars, i.e. describing geology, geophysical features, bio-signatures, and possible life forms in volcanic centers, warm springs, and perennial rivers, and
- the operational testing of instruments and robotic technologies in scientifically relevant Terrestrial Planetary Analog environments.
The latter objective in particular goes hand in hand with the PRoVisG goals, since the derived visual data needed processing before interpretation was possible to the full extent.
Field work was carried out using equipment consisting of flight instruments or their breadboards (Figure 19) which are intended for future robotic Mars missions (e.g. NASA and ESA ExoMars or the Mars Science Laboratory).


Figure 19: Comparison of the CAD drawing of the ExoMars optical bench configuration
and the AMASE field trial set up.

Using these instruments, the following overall objectives were defined for the AMASE campaigns:

- Demonstrate ExoMars PanCam's general functionality in a field environment;
- Demonstrate the ability of PanCam 3D post processing and image data analysis;
- Demonstrate the visualization chain, including:
- Panorama Mosaicking (incl. Draft Stitching, Spherical Co-Ordinate System WAC & HRC);
- Vrml from Stereo;
- DEM from Stereo (Incl. Draft Stitching, Vrml from DEM);
- Distance Map from Stereo (Incl. Draft Stitching, Vrml from Distance Map);
- Fusion of PanCam Wide Angle and HiRes imagery (WAC / HRC);
- Investigate data commonalities and the potential of fusing data with other instruments in the ExoMars suite to enhance the science output;
- Investigate optimal combination of PanCam's different cameras (and of PanCam and other ExoMars instruments) for best science return,
- Display the interpretability of PanCam data for the in-situ determination of a geological context;
- Investigate PanCam's ability to support the selection of astro-biological targets for ExoMars;
- Assess the capability of the ExoMars PanCams to support every-Sol mission planning and scientific target selection;
- Refine the preliminary operational scenario (based on ExoMars Rover Reference Surface Mission);
- Test color calibration with the PanCam calibration target and influence on PanCam spectra generation/analysis in representative environment;
- Test of 'APIC' (Rock Detection Algorithm: detects objects of interest in the WAC image and automatically points the HRC);
- Joint investigations with other ExoMars instruments.

While the objectives were only slightly adapted for the different campaigns, the experience furthered technical development to ease the data acquisition and to simulate real space mission scenarios more closely (e.g. through remote image acquisition). Automating the data acquisition, including the recording of meta data such as camera orientation information, meant that fewer errors occurred and less manpower was necessary to prepare data for subsequent processing. This development made it possible to apply processing software directly in the field in the first place.
During the AMASE 2011 campaign PRoViP was used for on-site generation of panoramas, 3D reconstructions and visualizations. One important finding was that the AUPE camera rig needs to be re-calibrated frequently. An automatic procedure was found to perform this, and it is planned to be implemented in the PRoViP software targeting ExoMars PanCam 3D vision. Various improvement hints for hardware, control software and the processing chain were identified, which led to a massive increase of stability during the PRoVisG Field Trials at Tenerife in September 2011.
c. PRoVisG Testbed Test 2, Aberystwyth, July 2010
In July 2010 a field test was held in Aberystwyth, hosted by AU & PRoVisG, making use of the EADS Astrium Rover Bridget. It was mainly used to find exploitation modes of the catadioptric stereo sensor by CSEM and to verify the PRoViP processing chains implemented so far. After calibration of the various vision sensors mounted on Bridget (Figure 20), the field test data exploitation resulted in the immersive 3D reconstruction of a part of the Clarach Bay area (Figure 21), documented in a YouTube video http://www.youtube.com/watch?v=6gRo8QSXX5c.


Figure 20: Bridget Rover during the Clarach Bay field trial with mounted optical sensors.


Figure 21: 3D reconstruction of the Clarach Bay field trial data.

d. Wheel Track Measurement at DLR Bremen
One PRoVisG objective was to assess the usability of PRoVisG processing tools for related applications in the space domain. DLR Bremen runs a wheel testing laboratory, including a sand bed, to assess various parameters during rover wheel development.
In May 2011, in a cooperation between DLR Bremen and JR, a temporary system was built up by DLR Bremen (under instructions by JR, see Figure 22) in their rover wheel laboratory sand bed, with the intention of testing the ability of stereo reconstruction to evaluate wheel track measurements.


Figure 22: Schematic test setup (from design) and dimensions for stereoscopic wheel track measurement system

Figure 23 shows the test setup and two images taken by the stereo-arranged cameras as examples.

Figure 23: Left: Experimental setup with stereo cameras visible in the background. Middle, right: Stereo images of track region (left image without track, right image with track)
3D reconstruction was straightforward since it was only necessary to generate differences within a local coordinate system. The camera orientation was therefore set arbitrarily, and a stereo orientation adjustment was performed purely based on stereo disparities. Figure 24 shows the disparities and a 3D reconstruction.


Figure 24: Left: Column Disparities. Right: 3D Reconstruction

Figure 25: Differences in distance maps
(green: zero; blue: up to 2.5 cm loss; red: up to 2.5 cm accumulation)
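The kind of difference-map classification shown in Figure 25 can be sketched as follows; the tolerance and colour coding are illustrative assumptions, not the parameters used in the experiment.

    # Classify the difference of two distance maps of the sand bed (before/after).
    import numpy as np

    def classify_track(dem_before, dem_after, tolerance=0.001):
        """Return an RGB image: green = unchanged, blue = loss, red = accumulation."""
        diff = dem_after - dem_before                 # metres, positive = accumulation
        rgb = np.zeros(diff.shape + (3,), dtype=np.uint8)
        rgb[np.abs(diff) <= tolerance] = (0, 255, 0)  # green: essentially zero change
        rgb[diff < -tolerance] = (0, 0, 255)          # blue: material removed (track)
        rgb[diff > tolerance] = (255, 0, 0)           # red: material pushed up
        return rgb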
The generated fully automatic batch-based workflow was ready to be delivered to DLR. Unfortunately, DLR had to dismantle the system shortly after the experiment, so it was not possible to further exploit the capabilities of the system.
e. Underwater Stereo Reconstruction
One of the objectives of the accession of the company Marum Bremen to the PRoVisG Consortium was to assess the feasibility of PRoVisG techniques for underwater applications. For this purpose, Marum prepared an underwater stereo vision system on an underwater rover ('C-Move') and conducted a field test in the Swedish area of the North Sea (Skagerrak).
The following steps were performed for the test:
1. Underwater stereo rig: Marum built a stereo camera system consisting of VGA video cameras, a mounting on the C-Move rover, cabling and a video recording unit (Figure 26);
2. Calibration target: A calibration target was designed by JR and constructed by Marum;
3. Calibration: Calibration target points were photogrammetrically determined using semi-automatic software, and the lens distortion parameters were calculated. This procedure failed due to problems with proper point detection & positioning on the calibration target;
4. Marum ran a 3-day data acquisition campaign in the Skagerrak region. The imaging strategy was iterated with JR remotely analyzing the images. The campaign delivered several video sequences of different underwater structures;
5. Stereo-synchronized video frames were grabbed by Marum. It turned out that the grabbing procedure applied too much averaging to the generated individual images, so the grabbing was re-done by the JR Audiovisual Research Group (AVM);
6. AVM tried to enhance video quality (noise caused by sediment) using their video restoration software;
7. Selected grabbed image sequences were sent to CTU for application of their Structure-from-motion pipeline;
8. CTU calculated 3D point clouds & surfaces out of the grabbed video sequences (incl. interior camera calibration & lens distortion), see Figure 27;
9. The camera poses in the vicinity of the reconstructed point cloud are displayed in Figure 28.


Figure 26: Left: Marum stereo cameras.
Right: C-Move Rover equipped with stereo camera system

Figure 27: CTU SfM result for unrestored image sequence.

Figure 28: Camera poses and reconstructed point cloud seen from aside.
Left: from original, right: from restored image sequence
f. Application of Shape From Shading to MER Data
AU has designed and implemented a new 'shape from shading' algorithm, known as the Large Deformation Optimisation Shape From Shading (LDOSFS) algorithm. LDOSFS allows DTM data to be generated from a single camera image provided that the scene illumination conditions are known. LDOSFS supports different surface reflectance models and currently includes the Lambertian and Oren-Nayar reflectance models.
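The Lambertian case mentioned above can be written as a one-line image irradiance model, which is what shape from shading inverts; the Oren-Nayar model adds roughness-dependent terms and is omitted in this sketch.

    # Predicted brightness of a surface facet under the Lambertian model.
    import numpy as np

    def lambertian_intensity(normal, light_dir, albedo=1.0):
        n = np.asarray(normal, dtype=float)
        l = np.asarray(light_dir, dtype=float)
        n /= np.linalg.norm(n)
        l /= np.linalg.norm(l)
        return albedo * max(0.0, float(np.dot(n, l)))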

Figure 29: Example of applying LDOSFS to Mars Global Surveyor - Mars Orbiter Camera (MOC) data. The image on the left is a single orthographic image captured by the MOC instrument.
The image on the right is the DTM generated from this MOC data.
LDOSFS has been applied to a variety of MER data obtained from the NASA PDS archive. The results shown in Figure 30, Figure 31, and Figure 32 demonstrate the capability of LDOSFS to deal with (perspective) panoramic viewpoints in addition to orthographic image data from orbit (Figure 29).

Figure 30: First example of applying LDOSFS to MER data.
The image on the left is a single orthographic image captured by the MER Pancam instrument.
The image on the right is the DTM generated from a detail of this Pancam image.

Figure 31: Second example of applying LDOSFS to MER data.

Figure 32: Third example of applying LDOSFS to MER data. (Left) MER image 2P188063471EFFAKE1P2422R4M1. (Top Right) Region of Interest (ROI) from MER image. (Bottom Right) Colour coded LDOSFS generated height map, red regions are close to the camera, and blue regions further away.

g. Rover Trajectory reconstruction from omniview stereo camera sequences.
For verification, two sets of panoramic omniview stereo images were acquired during the PRoVisG Tenerife campaign. Firstly, images of the calibration board were acquired and used to calibrate the stereo sensor. Secondly, a sequence of images from the camera carried by a rover was recorded. The camera calibration was used to rectify the omniview images to virtual cylindrical panoramic images, and new calibration descriptions of the virtual images were constructed. Then, the CMP SFM service was used to reconstruct the trajectory of the camera and the terrain around the vehicle.
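The unwrapping of an omniview image into a cylindrical panorama can be sketched with a simple polar remapping; the real processing uses the calibrated mirror/lens model, whereas the image centre, radius range and linear radius mapping assumed here are illustrative only.

    # Simplified polar unwrapping of a catadioptric (omniview) image.
    import numpy as np
    from PIL import Image

    def unwrap_omniview(img_path, cx, cy, r_min, r_max, out_width=2048):
        src = np.asarray(Image.open(img_path).convert("RGB"))
        out_height = int(r_max - r_min)
        pano = np.zeros((out_height, out_width, 3), dtype=np.uint8)
        for col in range(out_width):
            theta = 2.0 * np.pi * col / out_width
            for row in range(out_height):
                r = r_max - row              # outer mirror rim maps to the top row
                x = int(round(cx + r * np.cos(theta)))
                y = int(round(cy + r * np.sin(theta)))
                if 0 <= x < src.shape[1] and 0 <= y < src.shape[0]:
                    pano[row, col] = src[y, x]  # nearest-neighbour sampling
        return Image.fromarray(pano)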

Figure 33. (Left) A pair of CSEM omniview images remapped to cylindrical panoramic images. (Right) Camera trajectory computed by the CTU CMP SFM web service.

4.1.3.6 Reports and Other Results

The D2.1 Mission Context Document presents a spectrum of mission cases and possibilities regarding the vision processing chain, data handling and the archiving of mission data. Through a study of the relevant past, present and future missions, analysing their science and mission goals and data processing methods, the user requirements for the PRoVisG framework have been derived.
In deriving the user requirements from the investigated missions, three scenarios have been proposed: a rover, an aerial vehicle and a multiple vehicle system. Each has been analysed for its system configuration and processing requirements.
The D2.2 Requirements Definition Document presents the PRoVisG specifications and requirements regarding science, operations and the system, and serves as input for the subsequent work packages. This document also presents the initial testing criteria for the PRoVisG system that were used during the evaluation in later phases of the project.
The D3.1 On-Board Data and Operations Interface Document discusses how on-board interfaces relate to and affect the PRoVisG system. On-board interfaces are discussed in the light of ExoMars and other missions' preparation, with experience based projections and assertions about the way in which future technology is likely to develop and how it may impact the development of the PRoVisG system. The document served to make the PRoVisG team members aware of the context in which the data originate to support a broad understanding of the issues that can arise as a consequence of both nominal and anomalous mission operations.
The D3.2 Vision Sensors & 3D Data Structures Interface Document summarizes the description of current and possible future vision sensors used in planetary environments, seen from the processing point of view. It gives information on how to access the (processing-relevant) key parameters of the images and meta data involved, using a survey performed during Work Package 2. Based on currently available standards used inside the planetary community, an interface for (meta-) data exchange inside and outside PRoVisG is outlined, mainly relying on PDS. The second part of the document deals with 3D data structures and identifies suitable 3D data representation approaches inside PRoVisG. D3.2 does not intend to provide a full design of interfaces and data structures, but summarizes the status of knowledge at the finalization of Work Package 3 that is most relevant to the subsequent PRoViP and PRoGIS design. The main objective is to allow further detailed design and to collect all pointers to relevant information.
The D3.3 Robotics Interface Document summarizes the description of current and future robotics interface issues that relate to the operation and ground processing of planetary vision sensors. It gives information on current robotic interfaces. It covers novel autonomous calibration methods of various robotic components, data visualization including camera simulations, as well as aspects of tethered aerobot application.
The D3.5 Sensor Calibration Document provides an explanation of the geometry of the sensors and their models and is intended as reference material on how to use the various camera models within the PRoVisG consortium, but also for the computer vision and photogrammetric communities. It complements the D3.2 Vision Sensors & 3D Data Structures Interface Document describing the interfaces to sensors and models. D3.5 also describes related calibration issues where these were useful for the PRoVisG consortium.
The D4.4 Algorithm Interface Document reflects the functional requirements addressed to the PRoVisG project and its vision processing chain PRoViP, and gives a detailed view on the explicit functions provided by the consortium members to fulfil a particular requirement. Some of the functions have been integrated into PRoViP, some remained stand-alone for use in a more generic scope. The functions are described in terms of maturity, integration status into PRoViP, and functional interface. If available, data processing examples are given.
The D6.3 Internal Tests Summary summarizes the experience in terms of gathering test data by testbed tests and the testing of developed functions and functionalities of PRoViP and PRoGIS, respectively, based on testbed data as well as on representative Mars mission data sets. The document contains Test Objectives, Test-Bed Campaigns and Acquisition of Ground-Truth Data, DTM Production Tests, and Summary of the Test results.
The D7.3 Field Test Report brings together the initial plans, documentation and exploitation notes for the PRoVisG field trials that took place on the island of Tenerife from September 15th to 21st, 2011. This document will also be used as an initial guide for other future field trials. It covers the following aspects:
a) The objectives for the Campaign are pointed out, both from the Project point of view and, as formulated, from the individual contributors.
b) Initial considerations from prior field tests are summarized to serve as further rationale for the decisions taken.
c) The envisaged and visited Locations and their specific features are summarized, and sufficiently detailed site maps are pointed out.
d) A global schedule, driven by travel and gear availability, is given.
e) All contacts, including people supporting the tests from their home institutions, and their specific roles are given.
f) The general logistics are described, such as accommodation, formal site access, officially assigned persons to contribute, law & customs aspects, personal equipment, catering, allergies, local transport, protective housings and/or a van, power supply, etc.
g) Detailed test facilities aspects are described such as data transmission, synchronization and archiving.
h) Preparatory on-site actions are summarized such as the data capture of 3D ground truth, set-up of localization facilities, sensor calibration.
i) The involved components and instruments (Rover, cameras, navigation utilities) are described in terms of their resource requirements, data, mechanics and electrical interfaces.
j) Operation & processing aspects are addressed.
k) Communication with remote places such as the Berlin 2011 Summer School and its particular synchronization with the Field Test are described.
l) Components of the Campaign are broken down into Experiments, Tests and Operations.
m) The data captured is described.
n) Preliminary processing procedures & results are presented.
Potential Impact:
4.1.4.1 Education Kit & Mars 3D Challenge
The D7.8 Education Kit summarizes the efforts taken to provide the Education Kit for PRoVisG. It lists the material provided for the Education Kit prepared during the PRoVisG project and also links to already existing sources provided by project partners. The Education Kit is intended to provide information material for students of all ages that supports the dissemination of the project's results. The approach is to collect informative data that is interesting to students, in order to communicate the purpose of planetary exploration, how robotic remote sensing works, what is needed to ensure a successful space mission, and how PRoVisG fits into the picture.
The PRoVisG dissemination strategy included funds to issue Announcements of Opportunity (AOs) to European computer vision groups to submit results of specific algorithms on consortium-provided test data. As stated in the PRoVisG Description of Work, key groups within the consortium were to evaluate such submitted algorithms and to produce as well as compare their own results on this set of test data. One of the most effective impacts was achieved by the 'Mars 3D Challenge'.
A preliminary call for interest in a PRoVisG Mars 3D Reconstruction Challenge, released at ICCV 2009, had shown that several computer vision groups were interested in participating in the contest. It also served as a platform to attract more participants, e.g. project partners.
After an initial consolidation within the PRoVisG Consortium, asking for ideas and hints for such a dedicated call to compete in 3D vision processing of planetary vision data, a set of reference data from the MER mission as well as image data from PRoVisG partner CNES (see Figure 34) were used for the contest. Three stages with various tasks (disparity map generation, Figure 35; visual odometry; structure from motion, Figure 36) were conducted, each evaluated by 3-4 experts from PRoVisG.

Figure 34: Indoor (top left) and outdoor 'Mars yard' rover test bed, rover used (top middle) and sample of an acquired image (top right) for the PRoVisG 3D Mars Challenge


Figure 35. Stage 2, Task 1 submitted results (stereo disparity maps) from four different contestants (A-D)

Figure 36. Stage 2, Task 3 submissions from three different contestants (Structure from motion).
The Imagine group is a joint project of the École des Ponts ParisTech (ENPC) and the French Scientific and Technical Centre for Building (CSTB), now part of the Center for Visual Computing (CVC), in association with the École Centrale de Paris (ECP). The Imagine group is part of the Computer Science lab (LIGM) of University Paris Est (UPE). They have been working for several years on dense multi-view stereovision with a main focus on high precision 3D surface reconstruction from images, targeting large-scale data sets taken under uncontrolled conditions.
At the time of the PRoVisG challenge, Imagine held the best results worldwide on the Strecha et al. reference benchmark, with the most complete and the most precise reconstructions. One of the key and original components of the Imagine pipeline is a variational mesh refinement that re-projects mesh hypotheses into the original images to improve photo consistency. This expertise and software has recently been transferred to the start-up company Acute3D, powering Autodesk's 123D Catch (formerly project Photofly), a web service to create 3D models from photographs.
Imagine is further working on improving calibration using statistical methods, e.g. in the framework of the Callisto project, and also in association with CNES in the context of the MISS project. Imagine is also interested in other sensors, such as lasers and Kinect, as well as in semantization, e.g. for high-level building model reconstruction (see http://imagine.enpc.fr/ for more details).
During PRoVisG meetings it was decided to invite the winners to take part in the PRoVisG MSL synchronization meeting at JPL as well as in the photogrammetry workshop at OSU that took place in December. Pierre Moulon and Pascal Monasse attended those meetings and gave public presentations about their work.
The planning of the 3D Mars Challenge is reported in the D3.6 AO Interface Document, which describes the procedure for this contest ('Announcement of Opportunity', AO) in terms of dissemination venue, test and reference data generation and presentation, and the modes and rules of the contest. Its procedures and results are described in the D6.3 AO Evaluation Document in terms of evaluation strategy, evaluation of incoming results, and the grant procedure.
4.1.4.2 Internal cooperation
Besides external impact, internal impact was also gained among the PRoVisG beneficiaries. Table 2 shows the distribution of cooperation in terms of the number of mutual cooperation items. The intensity (or effort) of the mutual cooperation is not shown here, but should average out in this overall figure.
JR AU DLR CTU SSL ASU TUB UCL OSU UNIS CSEM CNES UNOTT UniHB JPL
JR 6 8 14 3 1 9 13 8 6 5 4 8
AU 4 4 3 1 2 3 5 2 3 4 1 1
DLR 2 1 1 1 3 2 3 2 1 4 1 1
CTU 8 1 1 1 2 5 5 6 3 2 2 1 1 4
SSL 1 1 2 1 2 1 2 1
ASU 1 1 1 1 1 4 2 2 4 1
TUB 2 2 2 3 1 6 2 5 4 2 1 2
UCL 7 1 3 5 2 7 2 6 2 1 4
OSU 5 3 1 3 4 3 2 3 1 2 1 4
UNIS 3 2 3 3 1 1 2 4 1 4 1 3
CSEM 1 1 3 1
CNES 1 1
UNOTT 4 1 3 1 1 1
UniHB 1 1
JPL 1 3 1 1 1 1 1
Table 2: Distribution of cooperation items between individual PRoVisG Beneficiaries
derived from individual work package cooperation.
1st Column: Higher activity / leader w.r.t. contributor in the 1st row.

In terms of spin-offs, a specific deliverable (D7.6, the Spin-Off Document) documents a comprehensive collection of possible methods to further exploit the PRoVisG results by identifying opportunities and evaluating use cases in space and terrestrial sciences plus commercial or public applications, on top of the 'regularly planned' results PRoVisG is providing (software & dissemination). For each opportunity and use case, several aspects (unless confidential) are given, such as:
- Project / Product environment
- Status (We distinguish between potential possibilities and explicit activities that lead to the initialization of such possibilities)
- Components & aspects developed within PRoVisG used
- Potential partners
- Added value
- Prospects & Impact
- Commercial
- Strategic
- Educational
- Societal & Dissemination
- Scientific
- Technical
- Planned future agenda for further exploitation
- Cooperation between PRoVisG partners stemming from PRoVisG collaboration
- A-posteriori spin-in into PRoVisG (i.e. what could have been improved during PRoVisG Lifetime)
Opportunities were identified in past, current and planned / upcoming missions and projects, the immediate exploitation of the software components PRoViP, PRoGIS and the Stereo Work Station, the omniview camera system, further space-related research (including FP7-SPACE and Horizon 2020), the exploitation of field trials experience and data, industrial exploitation modes for various components, and research-oriented exploitation.

4.1.4.3 Video Footage opportunities

As one specific mode of exploitation, a data sheet was generated listing video footage options for broadcast companies; see the following table and text:

Planetary Robotics Vision Ground Processing (PRoVisG)
Planetary Robotics Vision Scout (PRoViScout)
Video & Picture Material Sources

Introduction
PRoVisG (Planetary Robotics Vision Ground Processing) brings together the major groups currently working on planetary robotic vision, consisting of research institutions inside and outside of Europe, the European Space Agency (ESA), the National Aeronautics and Space Administration (NASA), and the industrial stakeholders involved in vision & navigation for robotic space missions as well as in their scientific exploitation. It is developing a unified and generic approach for robotic vision ground processing.
PRoViScout (Planetary Robotics Vision Scout) will demonstrate the combination of vision-based autonomous sample identification & sample selection with terrain hazard analysis for a long range scouting/exploration mission on a Terrestrial Planet.

This document summarizes information sources for TV broadcast & Press that can be obtained from various stakeholders involved in these FP7 Projects.

This overview has been prepared by the PRoVisG & PRoViScout Co-ordinator
Gerhard Paar
JOANNEUM RESEARCH, Institute for Information and Communication Technologies
Steyrergasse 17
A-8010 Graz, Austria
+43 316 1876 1716
gerhard.paar@joanneum.at
What (Nature): Content / link. Contact / Example
(1) Aerobot (P/V): flying and tethered to a rover; formation flying. Dr. Laurence Tyler, lgt@aber.ac.uk
(2) Small Rover 1 (P/V): on a beach. Prof. Dave Barnes, dpb@aber.ac.uk
(3) Small Rover 2 (P/V): general purpose platform. mark.woods@scisys.co.uk
(4) Virtual scenes (V): YouTube. Gerhard.paar@joanneum.at
(5) Visualization (V/P): AU shadow simulation, various renderings. Prof. Dave Barnes, dpb@aber.ac.uk
(6) Large Planetary Rover (V): Bridget moving on a beach (see pictures). Lester.waugh@astrium.eads.net
(7) Large Terrestrial Rover (V): Idris moving on a beach. Fred Labrosse, ffl@aber.ac.uk
(8) Recent Press Info (P/V/D): on request from the PRoVisG cms. Obtain an account from Gerhard.paar@joanneum.at
(9) Existing Broadcasts (V): Let's embrace space. Enterprise & Industry at EC
(10) Forthcoming Field Test (E): to take place in Tenerife (Las Canadas National Park) on September 15-22, 2011; a press briefing is planned for Saturday 17th September. Lester.waugh@astrium.eads.net
(11) AU PatLab (L): planetary laboratory at Aberystwyth University. Prof. Dave Barnes, dpb@aber.ac.uk
(12) CSEM Lab (L): optical laboratory at CSEM / Zurich. christiane.gimkiewicz@csem.ch
(13) JPL Contact (n/a): press contact entry point at JPL. Guy.Webster@jpl.nasa.gov; PRoVisG contact: Bob.Deen@jpl.nasa.gov
(14) Laser Imager WALI (P): biology sensor. Prof. Jan-Peter Muller, jpm@mssl.ucl.ac.uk
(15) Stereo Viewing Lab (L): high-end stereo viewing screen for scientific planetary application. Prof. Jan-Peter Muller, jpm@mssl.ucl.ac.uk
(16) Interaction with Mars (L): working on a dedicated Geographical Information System (GIS) on Mars data. Jeremy.Morley@nottingham.ac.uk
(17) Virtual Environment (P): simulated robotics environment for the operation of space robots. konstantinos.kapellos@trasys.be
(18) Human Spaceflight Ground Vehicle (V): Eurobot Ground Prototype. Alberto Medina, amedina@gmv.com
(19) Surrey Rover Hardware and Autonomy Software Testbed (SMART) (L/V): http://www.provisg.eu/Science--Technology/SMART/ . Dr. Yang Gao, Group Head, SSC, Yang.Gao@surrey.ac.uk
4.1.4.4 Conclusions
The PRoVisG Project, with a duration of 45 months, has established a better European understanding of 3D vision processing using images from planetary probes. It has produced software, hardware and knowledge, and has initiated new collaboration options within Europe and beyond. In the following, a list of achievements and added values is given, most of them described in the previous sections of this summary report:
a) PRoViP and PRoGIS are new tools that assemble various toolchains, concepts and existing software solutions into a unique set of tools for the processing and maintenance of 3D vision data from planetary surfaces.
b) The Stereo Workstation, a result of cooperation between UCL/UK and JPL/USA, is an open source tool for visualizing stereo content from planetary surface probes.
c) The CTU web service for structure from motion is open to the computer vision and space research communities as well as to the public to test whether their uncontrolled multiple-view images can be used to generate automatic 3D reconstructions.
d) The omniview stereo camera, an exceptional development within PRoVisG, has proven to be a promising concept for future planetary missions for the purposes of 3D reconstruction and navigation.
e) Knowledge about robotic interfaces, metadata schemes, GIS representations and major shortcomings of current coordinate systems representations of different missions on Mars was collected and documented.
f) FP7-SPACE PRoViDE (to be launched in 2013, among others by a subgroup from PRoVisG) is a logical successor of PRoVisG which will process major portions of surface imagery taken on planetary surfaces so far.
g) Relevant use cases included underwater tests and wheel track measurements to be used in the development of robotic probes.
h) Major PRoVisG mechanisms were tested during a comprehensive field trials campaign on the island of Tenerife. Besides detailed knowledge about the logistics and planning issues of such testing, a major set of representative image data is available for the community.
i) PRoVisG can be regarded as a new link between the computer vision community and the planetary research community.

Beyond the immediate value of close, and sustainable, cooperation among its players, PRoVisG has raised high interest in many individual domains: the planetary research community, the computer vision community, students, public viewers via the web page and several YouTube videos, decision makers and funding agencies.
The remaining open questions are still manifold. They range from further hardware improvements and new exploitation modes of the omniview camera concept to a better understanding of common coordinate systems on the planet Mars, in order to fuse data from different missions more straightforwardly. Issues that appear as trivial as the usage of multiple operating systems for systems integration turned out to be major obstacles for the final realization of common-use software stemming from multiple sources. Simple data formats and meta data schemes had to be harmonized on a small-scale level, be it disparity maps or image orientation data. Current obstacles are more on an operational level than at the level of algorithmic issues and research in general. It is therefore of utmost importance to build upon the PRoVisG experience also in these technical matters: the PRoViDE Project (Planetary Robotics Vision Data Exploitation), starting early 2013, will take into account all experience from PRoVisG (positive and negative) and use a successor of its results to process a major set of planetary vision data in a 3D context.
List of Websites:
http://www.provisg.eu
Project Coordinator:
Gerhard Paar gerhard.paar@joanneum.at