When a human drives a car, they know that what they see at any given moment may not completely represent potential obstacles in their path. For example, a truck could block a driver's view of a pedestrian crossing the street, but a reasonable driver would consider that someone might be in the crosswalk before proceeding when a traffic light turns green.
Autonomous vehicles struggle with that concept.
Technology developed at Carnegie Mellon University could help fill in these gaps with more data. By borrowing techniques from map-making, an autonomous vehicle can better interpret what its sensors are seeing, according to Deva Ramanan, associate professor of robotics at CMU. Ramanan is also the director of the CMU Argo AI Center for Autonomous Vehicle Research.
“We [humans] make use of this knowledge all the time. We reason about what we can see and what we can’t see in order to make safe decisions,” he said.
Data from the vehicle's LIDAR, a laser-based system for measuring distance, isn't fully three-dimensional: the laser only captures the surfaces facing the sensor, so the blocked parts of an object never appear in the data, and current algorithms don't reason about those occlusions. The CMU method matches point clouds (sets of 3-D data points) against objects in a 3-D library. This allows the vehicle to consider potential obstacles that it can't actually see.
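The article doesn't specify how the matching works, but the core idea can be sketched with a one-directional nearest-point score: because the occluded side of an object is missing from the scan, we only ask whether the points we did observe are explained by a library model, not the reverse. The function names and the toy library below are illustrative assumptions, not CMU's actual method.

```python
import numpy as np

def one_way_chamfer(observed, model):
    """Mean distance from each observed point to its nearest model point.

    One-directional on purpose: the occluded parts of the object are
    missing from the observed cloud, so we only score how well the
    points we *did* see are explained by the candidate model.
    """
    # Pairwise distances between observed (n, 3) and model (m, 3) points.
    d = np.linalg.norm(observed[:, None, :] - model[None, :, :], axis=2)
    return d.min(axis=1).mean()

def match_to_library(observed, library):
    """Return the name of the library model that best explains a partial scan."""
    scores = {name: one_way_chamfer(observed, model)
              for name, model in library.items()}
    return min(scores, key=scores.get)
```

Once a partial scan is matched to a full model, the vehicle can reason about the model's full extent, including the parts the sensor never saw.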
Another way CMU is using map-making is to help a vehicle predict where objects will be in the future.
“One of the crucial things when you’re driving … is being able to reason about where things will be in the future,” in order to make safe maneuvers, Ramanan said. Humans can assume a cyclist following the curve of a bend will continue to do so. Now, autonomous vehicles can make the same assumption using point clouds and autonomous driving data.
Using a technique called scene flow, autonomous vehicles can predict where a moving object is headed based on calculating a sequence of data points and the speed and trajectory of those points.
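In rough outline, scene flow assigns each point a displacement between two consecutive LIDAR frames and then extrapolates that motion forward. A minimal sketch, using nearest-neighbor correspondence and a constant-velocity assumption (both simplifications I'm assuming for illustration; real scene-flow estimators are learned and far more robust):

```python
import numpy as np

def estimate_scene_flow(cloud_t0, cloud_t1):
    """Crude scene flow: for each point at time t0, the displacement
    to its nearest neighbor in the frame at time t1."""
    d = np.linalg.norm(cloud_t0[:, None, :] - cloud_t1[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    return cloud_t1[nearest] - cloud_t0

def predict_future(cloud_t0, flow, steps=1):
    """Constant-velocity prediction: each point keeps moving along its
    estimated flow for `steps` more frames past t1."""
    return cloud_t0 + (1 + steps) * flow
```

Applied to the cyclist example above: if the points on the cyclist have been translating along the curve frame over frame, the extrapolated cloud places them further along that same curve.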
“Humans have to make these kind of predictive judgements in order to safely drive,” Ramanan said.
Like human drivers, the system gets more sophisticated and accurate as it's used. As it generates data in real time, it looks back at the predictions it made about objects and corrects any errors.