Periodic Reporting for period 1 - SENSE (A Sensor for Autonomous Navigation in Deep Space)
Reporting period: 2022-04-01 to 2023-09-30
The SENSE project elevates a state-of-the-art method known as celestial triangulation to a proof-of-concept hardware sensor. The project aims at validating a first prototype, nanoSENSE, ready to be integrated into nanosatellites, opening the way to a future scale-up to other spacecraft classes. The sensor architecture and the algorithms in nanoSENSE leverage five years of background research, including the activities performed in the main ERC project EXTREMA.
Action 1 – Product Deployment and Optimization: Hardware-in-the-Loop Testing
An additional high-resolution screen, a new set of collimating lenses, and an optical camera have been procured and added to the EXTREMA single-line optical bench to test the performance of the double-optic sensor. The attached figures show the model of the experimental setup devised in the proposal, as well as the final optical facility assembled in the lab. The second optical-line screen displays the scenario observed by the second optic. The proposal recommends positioning the two optics at a 90 deg angle as the optimal configuration, although this setup may not always be feasible due to operational constraints. Depending on the interplanetary trajectory covered by the probe, one or both of the selected planets may not be visible because of their vicinity to the Sun or their low luminosity. To explore potential improvements, we conducted an additional preliminary study, analyzing the navigation algorithm performance under variations in the configuration angle and field of view (FoV) of the two optics. No universally optimal strategy or sensor configuration emerged; the disposition angle of the two optics must therefore be tuned during the mission design phase to achieve the best navigation performance.
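To illustrate the geometry that celestial triangulation exploits, the sketch below (a simplified, hypothetical example of ours, not the project code) recovers an observer position from two line-of-sight unit vectors to planets with known ephemeris positions, using the constraint that each line of sight is parallel to the observer-to-planet direction. All function names and numerical values are illustrative assumptions.

```python
import numpy as np

def skew(v):
    """Skew-symmetric cross-product matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def triangulate(planets, los):
    """Least-squares observer position r from planet positions p_i and
    line-of-sight unit vectors u_i, enforcing skew(u_i) @ (p_i - r) = 0."""
    A = np.vstack([skew(u) for u in los])
    b = np.concatenate([skew(u) @ p for u, p in zip(los, planets)])
    r, *_ = np.linalg.lstsq(A, b, rcond=None)
    return r

# Synthetic check: two planet positions (arbitrary units) and a true observer state
p1, p2 = np.array([10.0, 0.0, 0.0]), np.array([0.0, 10.0, 0.0])
r_true = np.array([2.0, 3.0, 1.0])
u1 = (p1 - r_true) / np.linalg.norm(p1 - r_true)
u2 = (p2 - r_true) / np.linalg.norm(p2 - r_true)
r_est = triangulate([p1, p2], [u1, u2])
```

Note that near-parallel lines of sight make the stacked system ill-conditioned, which is one geometric reason the disposition angle of the two optics matters.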
Moreover, the navigation filter has been updated to consider measurements coming from both camera optics to correct the spacecraft state estimate. Preliminary software simulations showed that, over a 100-day leg of an Earth-Mars transfer, the accuracy of the probe state estimate increased by almost 50%. Subsequently, the image processing procedure and the navigation filter were deployed on a miniaturized processor representative of a CubeSat onboard computer.
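The benefit of fusing both optics in a filter update can be sketched with a minimal linear Kalman measurement update (our simplified illustration, not the project filter): stacking the two optics' measurement models into a single update shrinks the state covariance more than either optic alone. The measurement matrices and noise levels below are assumptions chosen only to make the effect visible.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update for a linear model z = H x + v."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

x = np.zeros(3)        # position estimate
P = np.eye(3)          # prior covariance
H1 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # optic 1 observes x, y
H2 = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])  # optic 2 observes y, z

# Single-optic update vs. dual-optic (stacked) update with the same noise level
_, P_single = kalman_update(x, P, np.zeros(2), H1, 0.1 * np.eye(2))
H12 = np.vstack([H1, H2])
_, P_dual = kalman_update(x, P, np.zeros(4), H12, 0.1 * np.eye(4))
```

Here the trace of `P_dual` is smaller than that of `P_single`, because the second optic constrains a state direction the first cannot observe.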
Action 2 – Validation: Autonomous Navigation Sensor Validation
The sensor validation has been performed step by step due to the additional complexity introduced by the presence of the hardware. As a first step, the image processing pipeline was tested by acquiring images from the optical bench and processing them on a Raspberry Pi. The test was successfully completed, validating the operability of the chosen procedure on a computationally limited computer. Subsequently, the navigation algorithm was assessed with only the optical facility in the loop. This analysis aimed to understand how the introduction of real optical errors in deep-space images would impact performance. Finally, the navigation algorithm was deployed on a Raspberry Pi, and processor-in-the-loop simulations were performed. In this case, the images were not taken from the optical facility but were generated with a high-fidelity rendering engine. The Monte Carlo runs achieved the same performance as the simulations run without the processor in the loop.
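As a sketch of the kind of operation such an image processing pipeline performs (our illustrative example, not the validated procedure), the snippet below extracts the centroid of a bright body from a synthetic frame by thresholding and intensity-weighted averaging, a common first step before converting a detection into a line-of-sight measurement:

```python
import numpy as np

def centroid(image, threshold):
    """Intensity-weighted centroid (row, col) of pixels above threshold."""
    rows, cols = np.nonzero(image > threshold)
    w = image[rows, cols].astype(float)
    return np.sum(rows * w) / w.sum(), np.sum(cols * w) / w.sum()

# Synthetic 64x64 frame with a Gaussian "planet" centered at (20, 40)
yy, xx = np.mgrid[0:64, 0:64]
img = 255.0 * np.exp(-((yy - 20.0)**2 + (xx - 40.0)**2) / (2 * 1.5**2))
row, col = centroid(img, threshold=10.0)
```

On this symmetric synthetic blob the recovered centroid lies within a small fraction of a pixel of the true center; real frames would additionally require denoising and star/planet discrimination.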
Action 3 – Exploitation and Knowledge Transfer: Market Assessment and Exploitation
This activity combined desk research, aimed at outlining the knowledge landscape, the structure of the supply chain, and the market outlook of the small satellite domain, with exercises and brainstorming sessions aimed at further reflecting on the value proposition of the sensor and its competitive advantages, providing inputs for re-examining the business proposition. Regarding the market analysis, the outcomes were an overview of the sensor sector and market (its profile, dimension, and trends) and a competitive intelligence analysis, including comparison tables outlining competitors and competing solutions, the nanoSENSE attractiveness map, and its lean canvas.
The main outcomes of Action 1 are:
- Software implementation of a dual-optic navigation sensor
- Development of a dual-line optical facility
The main outcomes of Action 2 are:
- Validation of the navigation filter on a processor with computational time comparable to an onboard miniaturized processor.
- Validation of the navigation filter with optical facilities in the loop
Regarding both Actions 1 and 2, further investigation is required to assess the performance of the dual-optic navigation algorithm using the complete experimental setup, with both the processor and the optical facility in the loop. Additionally, the simulations conducted thus far have been asynchronous: the facility images were first acquired and then exploited by the navigation filter running on the processor. A further advancement will be to conduct simulations synchronously, with images requested by the processor as the filter needs them to correct the state estimate.
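The synchronous mode described above could be structured along the following lines (a hypothetical skeleton under our own naming assumptions; `OpticalFacility` stands in for whatever interface the bench will expose): the filter loop running on the processor pulls an image only when it needs a measurement, instead of consuming a pre-acquired batch.

```python
class OpticalFacility:
    """Stand-in for the optical bench: serves one frame per request."""
    def __init__(self):
        self.requests = 0

    def acquire(self, t):
        self.requests += 1
        return f"frame@{t}"  # placeholder for an actual image array

def run_synchronous(facility, steps):
    """Filter loop that requests images on demand (synchronous mode)."""
    corrections = 0
    for t in range(steps):
        image = facility.acquire(t)  # image requested by the filter itself
        # ... image processing and state-estimate correction would go here ...
        corrections += 1
    return corrections

facility = OpticalFacility()
n = run_synchronous(facility, steps=5)
```

In this structure the number of facility requests matches the number of filter corrections by construction, which is the defining property of the synchronous setup.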
Action 3 yielded the following main outcomes:
- Exploration of the "Global Deep Space Autonomous Navigation Sensors" market to identify opportunities, trends, and competitors.
- Formulation of an exploitation plan, comprising the development of a value proposition, the identification of funding schemes for future advancement, and an examination of the pros and cons of the exploitation routes considered (licensing and direct use).
- Submission of the Italian patent N° 102023000024294
The key needs to ensure further uptake are:
- Securing additional funding to increase the sensor TRL
- Establishing a collaborative partnership with a company responsible for the development of the sensor hardware.
- Researching opportunities that can expand the sensor capabilities to address a larger serviceable obtainable market (SOM) in a shorter timeframe.