— Ch. 1 · Defining Automation Levels —
Vehicular automation.
The Stanford Cart, a boxy machine resting on four bicycle wheels, navigated a 100-foot room in the late 1970s. Hans Moravec built the experimental vehicle as a graduate student at Stanford University. The cart carried a camera and battery but relied on a remote computer that processed its images over a wireless link. It could steer around large obstacles, yet crossing the short distance took five hours: the computer stopped frequently to analyze visual data before issuing the next navigation instruction.

Modern systems distinguish semi-autonomous vehicles from fully autonomous ones by how much human intervention they require. Semi-autonomous vehicles use advanced driver-assistance systems to help the operator with specific tasks such as braking or steering; fully autonomous vehicles travel without any human operator under defined conditions. The Society of Automotive Engineers categorizes these capabilities into six levels of driving automation:

Level 0: No automation; the human driver is in constant control.
Level 1: Basic assistance, such as cruise control or lane keeping.
Level 2: Partial automation combining steering and acceleration, with the driver supervising at all times.
Level 3: The vehicle manages all driving functions in certain environments, but a human must remain ready to take over.
Level 4: Operation without human input, restricted to defined geographic areas or weather conditions.
Level 5: Full automation; the car drives anywhere without human oversight.
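The six levels form a simple taxonomy, which can be sketched in code. The enum names below are illustrative paraphrases, not the exact terms of the SAE standard:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Driving-automation levels 0 through 5 (names are paraphrases)."""
    NO_AUTOMATION = 0           # human performs all driving tasks
    DRIVER_ASSISTANCE = 1       # single assist feature, e.g. cruise control
    PARTIAL_AUTOMATION = 2      # combined steering + acceleration, driver supervises
    CONDITIONAL_AUTOMATION = 3  # system drives in some environments, human on standby
    HIGH_AUTOMATION = 4         # no human input within a restricted operating domain
    FULL_AUTOMATION = 5         # drives anywhere, no human oversight

def driver_must_supervise(level: SAELevel) -> bool:
    """At Levels 0-2 the human must monitor the road at all times;
    from Level 3 up, the system drives itself within its domain."""
    return level <= SAELevel.PARTIAL_AUTOMATION
```

The key boundary is between Levels 2 and 3: below it the human supervises continuously, above it responsibility shifts to the system within its operating conditions.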
Core Software Architecture
A perception module ingests data from cameras, lidar, radar, and ultrasonic sensors to build a comprehensive model of the vehicle's immediate surroundings. A localization module combines 3D point-cloud data with GPS and IMU measurements to determine the vehicle's precise position; it estimates orientation, velocity, and angular rate to place the vehicle accurately on digital maps. A planning module takes input from both perception and localization and computes the actions to execute, output as velocity and steering-angle commands.

Machine learning algorithms, particularly deep neural networks, enable the vehicle to detect objects and interpret traffic patterns. Modern systems employ sensor fusion, combining data from multiple sensors to improve accuracy and maintain reliability across environmental conditions ranging from heavy rain to bright sunlight.

Navigation relies heavily on Global Positioning System technology, for air and water vehicles as well as land vehicles. Some approaches depend on detailed maps holding lane and intersection data, while others crowdsource map updates from the fleet itself. Real-time kinematic techniques sharpen positioning accuracy to sub-meter levels, which is crucial for autonomous navigation decisions.

Software integration remains challenging because of the large number of safety processes required. Robust systems must ensure that hardware and software can recover from component failures during operation. Prediction capabilities allow fully autonomous cars to anticipate the actions of other vehicles, much as human drivers do.
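The perception-localization-planning split can be illustrated with a toy interface. Everything here is a hypothetical sketch, not a real stack: the types and the trivial slow-for-the-nearest-obstacle rule exist only to show how the planner consumes both a perception output and a localization output:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Obstacle:          # perception output
    position: Tuple[float, float]  # metres, vehicle frame (x forward)
    velocity: Tuple[float, float]  # metres per second

@dataclass
class Pose:              # localization output
    x: float             # metres, map frame
    y: float
    heading: float       # radians

@dataclass
class Command:           # planner output
    target_speed: float  # metres per second
    steering_angle: float  # radians

def plan(obstacles: List[Obstacle], pose: Pose,
         cruise_speed: float = 15.0) -> Command:
    """Toy longitudinal planner: slow down as the nearest obstacle
    ahead gets closer. (pose is part of the interface; this simplistic
    rule does not use it.)"""
    ahead = [o for o in obstacles if o.position[0] > 0.0]
    if not ahead:
        return Command(cruise_speed, 0.0)
    nearest = min(o.position[0] for o in ahead)
    # linearly reduce speed inside a 30 m horizon, full stop by 5 m
    scale = max(0.0, min(1.0, (nearest - 5.0) / 25.0))
    return Command(cruise_speed * scale, 0.0)
```

A real planner would of course reason about lanes, traffic rules, and predicted trajectories, but the data flow (world model in, control targets out) is the same.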
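The idea behind sensor fusion can be shown in one dimension. This is a minimal sketch, assuming two independent position estimates (say, GPS and wheel odometry) each characterized by a variance; it applies the standard variance-weighted (Kalman-style) update, not any particular vehicle's implementation:

```python
def fuse(est_a: float, var_a: float,
         est_b: float, var_b: float) -> tuple:
    """Variance-weighted fusion of two independent estimates of the
    same quantity. The fused variance is never larger than either
    input variance, which is the whole point of combining sensors."""
    k = var_a / (var_a + var_b)          # gain: trust b more when a is noisy
    fused = est_a + k * (est_b - est_a)  # pull toward the less noisy estimate
    fused_var = (1.0 - k) * var_a
    return fused, fused_var

# Hypothetical numbers: GPS reads x = 10.0 m (variance 4.0),
# odometry reads x = 12.0 m (variance 1.0).
x, v = fuse(10.0, 4.0, 12.0, 1.0)
# gain k = 0.8, so x = 11.6 (closer to the odometry) and v = 0.8
```

The fused estimate sits closer to whichever sensor is less noisy, and its variance (0.8) is below both inputs (4.0 and 1.0): the combination is more reliable than either sensor alone.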