
Google opens up about when its self-driving cars have nearly crashed

January 12, 2016 at 5:00 p.m. EST
A new Google report explains how often its driverless cars have been in near-crashes on California roads. The data is from 424,331 of the 1.37 million miles Google has driven autonomously. (Noah Berger/AFP/Getty Images)

Google’s fleet of 53 driverless cars, currently being tested on roads in California and Texas, has never been at fault in an accident. But in 13 cases, the vehicles came pretty close, and the driver had to step in to prevent a crash, according to a new company report on the California tests.

The report also stated that on 272 occasions during the 14-month span, drivers took control of the autonomous vehicles because the software was failing. In 69 other incidents, the test drivers chose to take control to ensure that the vehicles operated safely.

The new data shows that autonomous cars are making progress, Google said. But other experts cautioned that the company’s report doesn’t provide enough information to definitively say whether the technology is safe.

Google’s test drives have been very closely watched because they have put driverless cars on real roads for the first time. Even minor incidents between human drivers and Google’s cars have garnered media scrutiny because of the huge interest in the technology.

The report was the most detailed to date on how the cars are performing and was required by California rules. Google is also testing the technology in Austin, but Texas did not require the company to release similar data. The report shows an overall decline since the fall of 2014 in incidents in which the technology failed.

“We’re really excited about these numbers. It seems to be a pretty good sign of progress,” Chris Urmson, who leads Google’s self-driving car project, said in an interview with The Post.

Experts caution that the findings should be taken with a grain of salt.

“It’s not going to be reflective on the quality of the system,” said Alain Kornhauser, chairman of Princeton University’s autonomous vehicle engineering program. “From an evaluation standpoint, I don’t think there’s anything you can read into it in the end.”

How good the cars look can be skewed by the situations they face, according to Kornhauser. Easy road conditions will make a car look much more impressive than tough ones will.

“It’s informative, but it shouldn’t be treated as a true measure of the vehicle’s safety,” said Aaron Steinfeld, a Carnegie Mellon professor who researches human-robot interaction.

The most significant improvement in the report is the rate at which the cars detect a system failure and request the test driver to take over — incidents that Google and regulators call “disengagements.” These situations happened once every 785 miles in late 2014, but only once every 5,318 miles in the fourth quarter of 2015.

The measure is an indicator of the stability of the overall system. Urmson said he was pleased with the improvement, given that his engineers have been focused on adding new capabilities to the software. He said a dedicated focus on stability will come before the technology is released to the public.

While the rate at which test drivers chose to take control of the cars decreased in early 2015, it took an upward turn late in 2015. Google says that’s because the cars have been pushed into more difficult circumstances.

“You’ll see that vary over time but generally trend downwards,” Urmson said. “If you only drove on Sunday afternoon you might get the software to the point where you don’t have any of the disengagements, but then you throw it into rush-hour traffic on Monday morning, the driving environment is just that much more challenging.”

He cited recent rain in the Bay Area and roads with dense exhaust fog as some of the tougher challenges the cars have faced recently.

According to the report, the most common reason test drivers had to take control of the autonomous vehicles was a perception discrepancy — essentially an error in how the car sees the world. Urmson said that perception is probably the hardest part of developing a self-driving car.

For example, the car might think another vehicle has turned 10 degrees when it is actually proceeding straight down its lane. Or the car might stop because it sees trash on the road that a human driver wouldn’t stop for.

The second most common reason the report cites for test drivers intervening is what Google calls software discrepancies. These can be very slight differences in how the software is operating the car, such as a measurement from a sensor arriving every 11 milliseconds instead of every 10.

The third most common reason for test drivers needing to take control of the cars is an unwanted maneuver of the vehicle, which includes unwanted braking or swerving to avoid an obstacle that wasn’t actually a hazard.

Those three categories make up the majority — 74 percent — of the cases when test drivers took control of the autonomous vehicles.