Photonic integrated LIDAR and snapshot spectral imagers for scaling up multimodal perception in precision applications

Periodic Reporting for period 1 - RETINA (Photonic integrated LIDAR and snapshot spectral imagers for scaling up multimodal perception in precision applications)

Reporting period: 2023-12-01 to 2025-05-31

RETINA aims to develop advanced photonic sensory systems, combining novel photonic integrated circuit (PIC)-based LIDAR and cost-efficient CMOS, InGaAs, and QD spectral imagers with a digital infrastructure for ML-based perception algorithms. These systems will be validated in three different scenarios in the healthcare, automotive, and agriculture sectors. By fostering collaboration among photonics developers, AI experts, and industry leaders, the project seeks to enable scalable, adaptable, and cost-effective solutions validated in real-world environments.
During the first 18 months of the project, efforts were focused on three key areas: defining the specifications for the novel photonic sensors, collecting an initial set of datasets to support platform and machine learning (ML) development, and initiating the fabrication of the sensors.
The requirements for the three use cases (healthcare, automotive, and agriculture) were gathered through direct engagement with relevant stakeholders. In the healthcare domain, workshops were held with medical professionals; in the automotive sector, letters of interest from major OEMs provided valuable input; and in the agriculture use case, insights were obtained through discussions with regional farmers and from the prior experience of partners specialized in this field. Based on the collected requirements, the specifications for each sensor and the overall system architecture were defined. Given the diversity of the three validation scenarios, a prioritization matrix was developed to assess the relevance of each sensor to each use case. The final specifications were derived from this analysis, taking into account the most restrictive requirements to ensure broad applicability; a simplified illustration of this selection logic is sketched below.
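The "most restrictive requirement" logic can be illustrated with a minimal sketch. The parameter names, values, prioritization weights, and threshold below are purely hypothetical and are not taken from the project's specification documents; the sketch only shows the general idea of keeping the tightest bound per parameter across the use cases where a sensor is relevant.

```python
# Hypothetical sketch: deriving one sensor specification from per-use-case
# requirements by keeping the most restrictive value for each parameter.
# All names, numbers, and weights are illustrative, not project data.

# Per-use-case requirements: "min" means "at least", "max" means "at most".
requirements = {
    "healthcare":  {"frame_rate_hz": ("min", 25), "weight_g": ("max", 500)},
    "automotive":  {"frame_rate_hz": ("min", 30), "weight_g": ("max", 800)},
    "agriculture": {"frame_rate_hz": ("min", 10), "weight_g": ("max", 1200)},
}

# Prioritization matrix entry: relevance of this sensor to each use case (0-1).
priority = {"healthcare": 1.0, "automotive": 0.8, "agriculture": 0.5}

def most_restrictive(requirements, priority, threshold=0.5):
    """Keep, per parameter, the tightest bound among sufficiently relevant use cases."""
    spec = {}
    for use_case, params in requirements.items():
        if priority[use_case] < threshold:
            continue  # skip use cases where this sensor has low relevance
        for name, (kind, value) in params.items():
            current = spec.get(name)
            if current is None:
                spec[name] = (kind, value)
            elif kind == "min":
                spec[name] = (kind, max(current[1], value))  # highest lower bound
            else:
                spec[name] = (kind, min(current[1], value))  # lowest upper bound
    return spec

print(most_restrictive(requirements, priority))
# {'frame_rate_hz': ('min', 30), 'weight_g': ('max', 500)}
```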
Initial datasets were compiled from three main sources: publicly available datasets relevant to the sensors and scenarios under development (although limited, given the novelty of the technology), datasets previously captured by project partners, and new datasets acquired in real-world conditions using commercial sensors as close as possible to the ones being developed. These datasets supported the early-stage development of the ML platform, which includes tools for data visualization, semi-automatic labelling, and model training. They also enabled the first round of ML algorithm testing across the three use cases. The ML platform now includes functional tools for fast, semi-automatic data labelling and efficient ML training, available both in the online version and as a Python SDK; the general semi-automatic labelling pattern is sketched below.
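The report describes semi-automatic labelling only at a high level, so the following sketch shows the general pattern (train on a small hand-labelled set, auto-label only high-confidence samples, retrain) using scikit-learn on synthetic data. It does not reflect the RETINA platform's actual Python SDK; all feature shapes, thresholds, and the choice of classifier are assumptions made for illustration.

```python
# Minimal sketch of a semi-automatic labelling loop; scikit-learn is used for
# illustration only and the data below are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical sensor features: a small hand-labelled set and a larger unlabelled pool.
X_labelled = rng.normal(size=(200, 16))
y_labelled = (X_labelled[:, 0] > 0).astype(int)  # stand-in manual annotations
X_unlabelled = rng.normal(size=(5000, 16))

# 1) Train an initial model on the manually labelled samples.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_labelled, y_labelled)

# 2) Propose labels for the unlabelled pool and keep only confident ones,
#    leaving uncertain samples for manual review.
proba = model.predict_proba(X_unlabelled)
confident = proba.max(axis=1) >= 0.9
X_auto, y_auto = X_unlabelled[confident], proba[confident].argmax(axis=1)

# 3) Retrain on the enlarged dataset (manual + confident automatic labels).
model.fit(np.vstack([X_labelled, X_auto]), np.concatenate([y_labelled, y_auto]))
print(f"auto-labelled {confident.sum()} of {len(X_unlabelled)} samples")
```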
In parallel, the fabrication of the first versions of the photonic sensors has begun, and these prototypes are expected to be ready for laboratory testing within the year. Specifically, for the PIC-based LIDAR, an initial fabrication run was carried out to test isolated novel photonic circuits, and the development of a complete PIC version is underway. For the InGaAs and quantum dot (QD) spectral imagers, the first prototypes have already been fabricated and are currently undergoing laboratory testing before being integrated into the prototype systems.
During the first project period, only partial progress toward the intended results has been achieved, and as a result, the roadmap to impact has not yet significantly advanced. Nevertheless, the project maintains its original ambition and continues to follow the roadmap established in the Grant Agreement (Annex II, Part B, Section 3.1 'Project’s pathways towards impact'), which remains closely aligned with the project’s objectives. In this context, the project continues to work actively towards the key expected outcomes of the call, namely:
• Outcome #1: “The development of next generation sensory systems based on photonic technologies”.
RETINA will address key challenges for the European photonics industry to remain a leader in this field, which currently represents 16% of the global photonics market and more than 390,000 jobs in the region. For this purpose, the project will address the development of the next generation of low-cost miniaturized sensory systems, the integration of multiple components on a single chip, and the standardisation of production processes. RETINA will focus on photonic technologies with the potential to make a major impact on a wide range of sectors, enabling their industrialisation and reducing the gap between development and market adoption.
• Outcome #2: “Technology leadership in autonomous vehicles, robots and sensory systems; Growth in a number of strategic industries such as medical devices, automotive, manufacturing, agriculture & food, security of large added value which are in Europe”.

RETINA’s innovative technologies (hardware developments, advanced data processing, fusion algorithms, and software techniques) will result in specific multimodal sensing applications that respond to the main sectoral challenges in strategic industries such as healthcare, automotive, and agriculture. In this manner, RETINA will contribute to strengthening the EU’s position in the global market, increasing its influence in global technological development, and granting a pathway towards EU international technology leadership.

• Outcome #3: “Contribution to the Digital Green deal policy and/or to the technological sovereignty of Europe”.

RETINA will offer a unique opportunity for the EU to lead globally in the design and manufacturing of new photonic sensory systems while supporting the Digital and Green Deal policies, thanks to its innovative sensory systems. RETINA’s low-cost, miniaturized PIC-based LIDAR design does not require phase modulators, covers both short and long ranges (reducing the number of LIDAR sensors per system), and eliminates mechanical parts for beam steering, which implies a significant size reduction, a longer sensor service life, greater robustness, and fewer potential failures. The QD-based low-power camera proposed in RETINA also has the potential to be more compact than silicon-based alternatives, using a single sensor capable of detecting multiple spectral ranges simultaneously with a higher pixel density, which reduces size and increases usability. In addition, QDs are more environmentally friendly, as they do not require critical raw materials (CRMs) for their manufacturing, unlike InGaAs sensors. The implementation of the multi-sensor camera in medical environments will improve surgical accuracy and reduce the need for additional treatments, thereby decreasing medical waste generation and resource consumption and contributing to the sustainability of national health systems. Therefore, RETINA sensory systems will reduce raw material consumption and waste generation, and thus the energy consumption derived from virgin raw material extraction, in line with the Green Deal, the Zero Pollution strategy, and the CRM Act.

The quantification of these impacts is expected to become more streamlined and effective in the subsequent phase of the project. With regard to scientific impact, the consortium has prioritized the generation of preliminary results, and the development of scientific outputs has been actively progressing throughout the initial project period.