Once thought of as science fiction, autonomous road vehicles (AVs) are quickly becoming part of everyday life, and the market is expected to grow rapidly in the years to come. While great strides have been made in the development of AVs, they are still not safe enough for operation on public roads, due amongst other things to poor object detection and false alarms. To overcome these safety concerns, vehicle developers have turned to higher-resolution, higher-cost sensors. Despite this, they remain unable to adequately address the issue of poor perception. VayaVision, a LeddarTech company and a leader in sensor fusion software, set out to address this in the EU-funded STV project by developing an object detection solution based on innovative architecture and software design. The solution is scalable and supports all levels of driving automation defined by the Society of Automotive Engineers (SAE). “Additionally, in STV, we aimed to build a demonstration unit that tests and explores VayaVision’s raw sensor fusion architecture with leading European and global automotive partners: original equipment manufacturers (OEMs) and Tier 1 suppliers (T1s),” explains Youval Nehmadi, project coordinator.
Towards safer AVs
“In the STV project, VayaVision developed LeddarVision, a comprehensive, state-of-the-art raw sensor fusion perception technology that is modular, customisable, and sensor-agnostic,” highlights Nehmadi. Its architecture supports multiple sensor types, various sensor sets, and custom sensor configurations. This structure enables VayaVision to provide each customer with a customised perception solution that meets their application’s technical and budgetary requirements. Furthermore, the LeddarVision software generates a comprehensive 3D environmental model, providing vital information on the dynamically changing environment around the vehicle in real time, ensuring safer and more reliable autonomous driving. It achieves best-in-class performance, validated both on public databases and by major OEMs and T1s. The model also achieved the best results in the leading nuScenes challenge for a camera- and radar-based solution. “The system is extremely robust with built-in redundancies and will generate the 3D environmental model even when some of the input sensors are not functioning optimally due to malfunction and/or environmental conditions (dirt or fog) or if they become non-operational,” adds Nehmadi. These achievements have opened the door for VayaVision to collaborate on automation projects in Europe, Asia, and the United States.
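The article does not disclose LeddarVision’s internals, but the graceful-degradation behaviour Nehmadi describes can be illustrated with a toy sketch. Everything below is hypothetical (the `SensorInput` type, the `fuse` function, and the point-list “model” are illustrative stand-ins, not VayaVision’s API): the idea is simply that fusion over raw data from whichever sensors are healthy still yields a usable 3D model when one sensor drops out.

```python
from dataclasses import dataclass

@dataclass
class SensorInput:
    """Raw reading from one sensor. 'ok' is False when the sensor is
    degraded or non-operational (e.g. malfunction, dirt, fog)."""
    name: str
    ok: bool
    points: list  # illustrative raw 3D points, e.g. [(x, y, z), ...]

def fuse(inputs):
    """Sensor-agnostic fusion sketch: merge raw points from every
    healthy sensor into one 3D environmental model (here, just a
    point list). Degrades gracefully rather than failing outright
    when individual sensors drop out."""
    model, active = [], []
    for sensor in inputs:
        if sensor.ok:
            model.extend(sensor.points)
            active.append(sensor.name)
    return {"points": model, "active_sensors": active}

# Example: the camera has failed, but radar and lidar still contribute.
readings = [
    SensorInput("camera", ok=False, points=[]),
    SensorInput("radar", ok=True, points=[(12.0, 0.5, 0.0)]),
    SensorInput("lidar", ok=True, points=[(11.8, 0.4, 0.1), (30.2, -2.0, 0.0)]),
]
model = fuse(readings)
print(model["active_sensors"])  # ['radar', 'lidar']
print(len(model["points"]))    # 3
```

A production system would of course do far more (calibration, time synchronisation, occupancy estimation), but the sketch captures the redundancy principle: the environmental model is built from whatever raw data remains available.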
Continuing the work: Sensor fusion solutions
Discussing the future, Nehmadi outlines: “In the short term, we expect VayaVision’s architecture to provide affordable radar- and camera-based sensor fusion solutions to support SAE L2 and L2+ advanced driver-assistance system (ADAS) features.” VayaVision is currently working with leading European automotive T1s to develop camera and radar sensor fusion and perception use cases. “Together with these T1s, we are exploring a commercial offering of the L2 solution. In addition, we are investigating the option of establishing a European consortium of other leaders in the field of sensor fusion, perception, and driving decision-making for AVs,” concludes Nehmadi. In the long term, the technology is expected to provide light detection and ranging (LiDAR) – a technology used for determining ranges – and camera-based sensor fusion and perception solutions to support SAE Level 3-5 fully autonomous applications, such as highway autopilot, autonomous shuttles, and mobility-as-a-service applications. Moreover, it will also support off-road applications such as automated heavy machinery in the agriculture, construction, and mining sectors.
Keywords: STV, VayaVision, sensor fusion, object detection, 3D environmental model, autonomous road vehicles, autonomous driving, LeddarVision, LeddarTech