Helm.ai Launches Level 3 Urban Perception System with ISO 26262 Safety Components
Helm.ai, a leading provider of advanced AI software for high-end Advanced Driver Assistance Systems (ADAS), autonomous driving, and robotics, recently announced Helm.ai Vision, a production-grade urban perception system designed for Level 2+ and Level 3 autonomous driving in mass-market vehicles. The system aims to deliver accurate, reliable, and comprehensive perception in complex urban environments, giving automakers a scalable and cost-effective path to more capable self-driving features.

Helm.ai Vision has also cleared key process and safety assessments. UL Solutions assessed Helm.ai at ASPICE Capability Level 2, reflecting structured, well-managed software development processes. In addition, the system has been certified to ISO 26262 ASIL-B(D) requirements, meaning its software components, developed as Safety Elements out of Context (SEooC), are ready for integration into production-grade vehicle systems as detailed in the accompanying safety manual. Together, these assessments underscore Helm.ai's commitment to safety and reliability in its autonomous driving solutions.

At the heart of Helm.ai Vision is the company's proprietary Deep Teaching™ technology, which leverages large-scale unsupervised learning from real-world driving data so the system can learn and adapt without expensive, manually labeled datasets. The result is surround-view perception that can handle dense traffic, varied road geometries, and complex pedestrian and vehicle behavior. Key features include real-time 3D object detection, full-scene semantic segmentation, and multi-camera surround-view fusion, all contributing to a high-precision interpretation of the vehicle's surroundings.

One standout capability is the generation of a bird's-eye-view (BEV) representation. By fusing multi-camera input into a unified spatial map, the BEV output improves the performance of downstream intent-prediction and planning modules. This is particularly important in urban settings, where vehicles must navigate crowded, unpredictable environments: the BEV representation lets the system reason about spatial relationships between objects and predict their movements more accurately.

Helm.ai Vision is designed with modularity in mind, making it adaptable to a range of automotive hardware platforms. It is optimized for deployment on Nvidia, Qualcomm, Texas Instruments, and Ambarella platforms, giving automakers compatibility and flexibility. The modular design also simplifies integration and validation, which are typically time-consuming and resource-intensive in autonomous-driving development; by reducing that effort and increasing interpretability, Helm.ai Vision streamlines the path to production for full-stack AI software.

Vladislav Voroninski, CEO and founder of Helm.ai, emphasized the importance of robust urban perception in advancing autonomous driving, noting that BEV fusion acts as a "gatekeeper" for higher levels of autonomy. Helm.ai Vision addresses the full spectrum of perception tasks required for Level 2+ and Level 3 autonomous driving on production-grade embedded systems, offering a vision-first solution with high accuracy and low latency.
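Helm.ai has not published implementation details for its BEV fusion, but the general technique of projecting multi-camera observations into a shared top-down grid can be sketched in a few lines. The Python example below is a minimal, hypothetical illustration only: the grid dimensions, cell size, camera extrinsics, and all function names are invented for the demo and do not reflect Helm.ai Vision's actual design.

```python
# Minimal illustration of multi-camera bird's-eye-view (BEV) fusion.
# NOTE: this sketch only shows the general idea of projecting per-camera
# observations into a shared top-down grid. All names, shapes, and
# parameter values here are hypothetical, not Helm.ai's implementation.

import numpy as np

GRID_SIZE = 200            # BEV grid is GRID_SIZE x GRID_SIZE cells
CELL_METERS = 0.5          # each cell covers 0.5 m x 0.5 m of ground plane
EGO_CELL = GRID_SIZE // 2  # ego vehicle sits at the grid center


def camera_to_ego(points_cam: np.ndarray, extrinsic: np.ndarray) -> np.ndarray:
    """Transform 3D points from a camera frame into ego-vehicle coordinates.

    points_cam: (N, 3) points in the camera's coordinate frame.
    extrinsic:  (4, 4) camera-to-ego rigid transform.
    """
    homogeneous = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (homogeneous @ extrinsic.T)[:, :3]


def fuse_into_grid(detections_per_camera, extrinsics) -> np.ndarray:
    """Accumulate detections from all cameras into one occupancy grid.

    detections_per_camera: list of (N_i, 3) arrays, one per camera.
    extrinsics:            list of (4, 4) camera-to-ego transforms.
    """
    grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.float32)
    for points, extrinsic in zip(detections_per_camera, extrinsics):
        ego_points = camera_to_ego(points, extrinsic)
        # Quantize x/y ground-plane coordinates into grid cells.
        cols = (ego_points[:, 0] / CELL_METERS + EGO_CELL).astype(int)
        rows = (ego_points[:, 1] / CELL_METERS + EGO_CELL).astype(int)
        valid = (0 <= rows) & (rows < GRID_SIZE) & (0 <= cols) & (cols < GRID_SIZE)
        grid[rows[valid], cols[valid]] = 1.0  # mark cells as occupied
    return grid


# Example: two cameras, identity extrinsic for a front camera and a
# 180-degree yaw for a rear camera (purely illustrative values).
front = np.eye(4)
rear = np.eye(4)
rear[:2, :2] = [[-1, 0], [0, -1]]
bev = fuse_into_grid(
    [np.array([[5.0, 0.0, 0.0]]), np.array([[5.0, 1.0, 0.0]])],
    [front, rear],
)
print("occupied cells:", int(bev.sum()))  # -> 2
```

In production BEV systems the per-camera inputs are typically learned feature maps rather than point detections, and the projection is part of a differentiable network, but the transform-then-quantize-into-a-shared-grid structure sketched here is the common core of the technique.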
Voroninski also highlighted how the modular autonomy-stack approach significantly reduces validation effort and increases interpretability, making Helm.ai Vision well suited for near-term mass-market production deployment in software-defined vehicles.

Notably, Helm.ai Vision supports operation up to Level 3 autonomous driving without high-definition (HD) maps or lidar sensors. Automakers can therefore reach advanced autonomy with lower costs and fewer hardware constraints, a meaningful advantage in a competitive, fast-moving market. Relying on camera-based input reduces system complexity and can shorten the deployment timeline for autonomous technologies in consumer vehicles.

Founded in 2016 and headquartered in Redwood City, California, Helm.ai is dedicated to reimagining AI software development to make scalable autonomous driving a reality. The company collaborates with global automakers on production-bound projects and offers a range of full-stack, real-time AI solutions, including deep neural networks for highway and urban driving, end-to-end autonomous systems, and development and validation tools powered by Deep Teaching™ and generative AI. Visit helm.ai for more information about Helm.ai's products, SDK, and career opportunities.

Industry observers have praised Helm.ai Vision's use of deep learning and unsupervised training, noting that generating BEV representations and performing comprehensive perception tasks with high accuracy marks a significant step forward for autonomous driving. The ASPICE Capability Level 2 assessment and ISO 26262 ASIL-B(D) certification further solidify Helm.ai's position as a leader in safe, reliable AI software for the automotive industry. Companies like Nvidia and Qualcomm see potential in Helm.ai Vision's modular, cost-effective approach, which could accelerate the adoption of autonomous technologies in mass-market vehicles.
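To make the modularity argument concrete, the sketch below shows one way a perception stack can expose each stage (3D detection, segmentation, BEV fusion) behind a narrow, typed interface so any stage can be validated or swapped independently. This is not Helm.ai's API; every class, method, and shape here is hypothetical.

```python
# Hedged sketch of a modular perception pipeline, illustrating why a
# component-per-stage design can reduce validation effort: each stage has a
# narrow, typed contract that can be tested in isolation. This is NOT
# Helm.ai's API; every name and shape below is hypothetical.

from dataclasses import dataclass
from typing import Protocol
import numpy as np


@dataclass
class PerceptionOutput:
    boxes_3d: np.ndarray      # (N, 7) boxes: x, y, z, w, l, h, yaw
    segmentation: np.ndarray  # (H, W) per-pixel class IDs
    bev_map: np.ndarray       # (G, G) fused top-down occupancy grid


class PerceptionStage(Protocol):
    """Common contract each stage implements, enabling drop-in replacement."""

    def run(self, frames: list[np.ndarray]) -> np.ndarray: ...


class Detector3D:
    def run(self, frames: list[np.ndarray]) -> np.ndarray:
        return np.zeros((0, 7))  # stub: no detections


class Segmenter:
    def run(self, frames: list[np.ndarray]) -> np.ndarray:
        h, w, _ = frames[0].shape
        return np.zeros((h, w), dtype=np.int32)  # stub: all background


class BevFuser:
    def run(self, frames: list[np.ndarray]) -> np.ndarray:
        return np.zeros((200, 200), dtype=np.float32)  # stub: empty grid


def perceive(frames: list[np.ndarray],
             detector: PerceptionStage,
             segmenter: PerceptionStage,
             fuser: PerceptionStage) -> PerceptionOutput:
    """Compose independently validated stages into one perception pass."""
    return PerceptionOutput(
        boxes_3d=detector.run(frames),
        segmentation=segmenter.run(frames),
        bev_map=fuser.run(frames),
    )


frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(6)]  # 6 cameras
out = perceive(frames, Detector3D(), Segmenter(), BevFuser())
print(out.bev_map.shape)  # (200, 200)
```

Because each stage is a drop-in implementation of the same contract, one component can be re-validated or replaced without re-testing the entire stack, which is the interpretability and validation benefit the modular approach described above is meant to deliver.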