Xinzhou Wu, one of the leading names in Nvidia's automotive business, shared the strategies the company uses to differentiate itself from competitors in autonomous driving, arguing that massive volumes of data are not necessary. The company claims that combining the right sensors with truly intelligent artificial intelligence can deliver robust safety and smooth driving.

Wu and CEO Jensen Huang test-drove a Mercedes CLA on a route in California ending in San Francisco, with the MB.Drive Assist Pro system engaged. On the way toward the goal of driverless operation, the system safely handled everyday obstacles such as construction sites, double-parked vehicles, and narrow passages without requiring driver intervention. Wu stated that Nvidia is bringing the concept of physical artificial intelligence to life for its customers, and that it approaches autonomy with the goal of not being dependent on enormous amounts of data.

Beyond supplying chips to companies such as Tesla, Nvidia offers partners like Mercedes, Jaguar Land Rover, and Lucid its own AI-powered driving features under the Alpamayo brand. The Alpamayo portfolio, announced at CES 2025, targets Level 4 autonomy under certain conditions and includes AI models, simulation tools, and datasets. Huang described the launch as "the ChatGPT moment for physical AI."

Alpamayo combines learned reasoning with classical engineering-based systems in a hybrid structure: the learned component supports human-like driving behavior, while a rules-based framework grounded in regulations enhances road safety. Nvidia emphasizes that this end-to-end hybrid architecture is unique in the world. According to Wu, the system's most significant differentiator is multi-sensor integration for safe driving: in addition to cameras, radar, and ultrasonic sensors, some configurations use lidar, which improves safety even under challenging conditions. Although lidar increases costs, Nvidia's vertical-integration approach aims to deliver the required performance at an optimal cost.
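To make the hybrid idea concrete, here is a minimal toy sketch in Python of pairing a learned planner with a rule-based safety layer that clamps its proposals to regulation-derived limits. All names, numbers, and rules here are illustrative assumptions, not Nvidia's actual Alpamayo implementation.

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    speed_mps: float   # proposed speed in meters per second
    min_gap_m: float   # closest predicted distance to any obstacle

def learned_planner(scene: dict) -> Trajectory:
    """Stand-in for a learned, human-like driving policy (hypothetical)."""
    return Trajectory(speed_mps=scene["desired_speed"], min_gap_m=scene["lead_gap"])

def rule_based_check(traj: Trajectory, speed_limit: float = 13.9) -> Trajectory:
    """Rule-based safety layer: enforce hard limits regardless of the model."""
    safe_speed = min(traj.speed_mps, speed_limit)  # never exceed the limit
    if traj.min_gap_m < 2.0:                       # hard minimum clearance rule
        safe_speed = 0.0                           # stop rather than violate it
    return Trajectory(speed_mps=safe_speed, min_gap_m=traj.min_gap_m)

proposal = learned_planner({"desired_speed": 16.0, "lead_gap": 5.0})
final = rule_based_check(proposal)  # learned proposal, rule-checked output
```

The design point is that the learned component can drive naturally while the rules layer provides an auditable guarantee, which is the split the article attributes to Alpamayo.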
The Mercedes CLA used in the test carries 10 cameras, 5 radars, and 12 ultrasonic sensors, and the DRIVE Hyperion platform is designed to support a range of sensor configurations. The base version relies on a more affordable combination of cameras and radars, while advanced versions can add lidar. Wu predicts that as lidar costs fall, even vehicles in the $40,000-$50,000 range will be able to adopt advanced autonomy. Nvidia also takes a clear stance on the advantage of simulation over real-world data, pursuing two main strategies: reconstructing real-world scenarios with NuRec, and testing extreme cases by modifying elements of those scenes.
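The scene-modification strategy can be illustrated with a toy sketch: start from a reconstructed base scene and generate edge-case variants by changing conditions and injecting unexpected actors. This is a hypothetical stand-in for the kind of scenario editing the article describes with NuRec, not NuRec's actual API.

```python
import copy
import random

# A reconstructed real-world scene (toy representation).
base_scene = {
    "weather": "clear",
    "actors": [{"type": "car", "lane": 1, "speed": 12.0}],
}

def perturb(scene: dict, rng: random.Random) -> dict:
    """Produce an edge-case variant by modifying scene elements."""
    variant = copy.deepcopy(scene)          # never mutate the recorded scene
    variant["weather"] = rng.choice(["rain", "fog", "night"])
    # Inject an unexpected actor, e.g. a double-parked vehicle in our lane.
    variant["actors"].append({"type": "car", "lane": 1, "speed": 0.0})
    return variant

rng = random.Random(0)                      # seeded for repeatable test suites
variants = [perturb(base_scene, rng) for _ in range(3)]
```

Each variant can then be replayed against the driving stack, which is how simulation substitutes for rare real-world events.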
Nvidia simulates events such as Waymo robotaxis blocking intersections during urban power outages, verifying that the system responds safely. The ultimate goal is to enable safe driving without needing billions of kilometers of real-world driving data; this vision rests on a vision-language-action (VLA) model. The model, which combines visual perception, language understanding, and physical actions in a single structure, builds on large foundation models trained on internet-scale datasets. According to Wu, the next step is to add memory to the system and further strengthen the model through reinforcement learning.
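The VLA idea, a single model mapping camera input and language to a driving action, can be sketched with toy stand-in encoders. Everything below is hypothetical heuristic logic for illustration only; real VLA models use large neural networks, not keyword matching or brightness averages.

```python
def encode_image(pixels: list) -> float:
    """Stand-in vision encoder: average brightness as a 1-D 'feature'."""
    return sum(pixels) / len(pixels)

def encode_text(instruction: str) -> str:
    """Stand-in language encoder: crude keyword-based intent."""
    return "stop" if "stop" in instruction.lower() else "go"

def vla_policy(pixels: list, instruction: str) -> dict:
    """One structure combining perception, language, and action output."""
    feat = encode_image(pixels)
    intent = encode_text(instruction)
    if intent == "stop" or feat < 0.2:   # dark scene -> be cautious
        return {"throttle": 0.0, "brake": 1.0}
    return {"throttle": 0.3, "brake": 0.0}

action = vla_policy([0.6, 0.7, 0.8], "proceed to the intersection")
```

The point of the single structure is that a language-level instruction and a pixel-level observation influence the same action head, rather than passing through separate hand-wired modules.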
