Periodic Reporting for period 1 - INSPIRE (Three dimensional INtegrated PhotonIcS to RevolutionizE deep Learning)
Reporting period: 2022-12-01 to 2025-05-31
Despite their promise, existing neural network architectures suffer from fundamental inefficiencies, particularly when scaled to large and complex systems. Current hardware operates far below theoretical limits, posing a significant bottleneck to future AI advancements. Overcoming this challenge requires a paradigm shift in how neural networks are designed and implemented.
The EU-funded INSPIRE project will introduce a groundbreaking approach by harnessing advanced photonic integration. By leveraging three-dimensional photonic waveguides, **INSPIRE** will develop a biologically inspired, fully parallel, and highly scalable architecture. This pioneering technology will unlock unprecedented computational efficiency, paving the way for next-generation AI systems with orders of magnitude greater performance.
During the first reporting period, the ERC INSPIRE Consolidator Grant project delivered transformative research and technological advances. Pioneering three-dimensional photonic integration, it substantially outperformed traditional electronic counterparts across a range of metrics. We enabled real-time processing for complex machine learning at 10^10 inferences per second and made substantial breakthroughs in training unconventional neuromorphic computing substrates.
Using large-scale multimode vertical-cavity surface-emitting lasers (LA-VCSELs) with a chaotic cavity geometry, we have pushed the number of 'neurons' implemented in a single LA-VCSEL to roughly 10,000. The numbers are truly remarkable: in a single CMOS-compatible device we experimentally demonstrate 10,000 neurons consuming around 100 mW in total, with recurrent connections realized fully in parallel and neuron transient times of around 0.5 ns, creating a large neural network that provides 10^10 results per second.
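The scale of these figures can be illustrated with a back-of-envelope calculation. This is only a sketch: the neuron count, transient time, and power draw are taken from the text above, while the assumption of fully parallel all-to-all coupling between neurons (and hence the N^2 operation count) is illustrative rather than a statement about the device's actual connectivity.

```python
# Back-of-envelope throughput and energy estimates for a LA-VCSEL
# neural network, using the figures quoted in the text.

N_NEURONS = 10_000      # neurons in a single LA-VCSEL (from text)
TRANSIENT_S = 0.5e-9    # neuron transient time, ~0.5 ns (from text)
POWER_W = 0.1           # device power consumption, ~100 mW (from text)

# One network update completes every transient time, fully in parallel.
updates_per_s = 1 / TRANSIENT_S  # 2e9 updates/s

# If every neuron couples to every other neuron (illustrative
# assumption), each update performs N^2 multiply-accumulate operations.
macs_per_s = N_NEURONS**2 * updates_per_s  # 2e17 MAC/s

# Energy per operation at ~100 mW total power.
joules_per_mac = POWER_W / macs_per_s  # 5e-19 J per MAC

print(f"{updates_per_s:.1e} network updates/s")
print(f"{macs_per_s:.1e} MAC/s (assuming all-to-all coupling)")
print(f"{joules_per_mac:.1e} J per MAC")
```

Under these assumptions the device would perform on the order of 10^17 analog operations per second at sub-attojoule energies per operation, which is the sense in which a single 100 mW laser can be called revolutionary compared with electronic hardware.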