A self-driving Waymo minivan crashed in Chandler, Arizona, this afternoon, resurrecting tough questions about the safety of autonomous technology and ripping the barely-crusted scab off the technology’s reputation, which was badly wounded when an Uber self-driving car hit and killed a pedestrian in the same state just seven weeks ago.
Chandler police report only minor injuries. According to a police statement, a Honda sedan traveling eastbound through an intersection swerved into the Waymo Chrysler Pacifica’s westbound lane to avoid hitting another car traveling north. (It’s unclear at this time who had the light and who is at fault.) The Honda hit the Waymo vehicle on its side, injuring the female safety driver behind the wheel of the minivan. Police say the vehicle was in autonomous mode when the incident occurred and was not traveling above the 45 mph speed limit. Waymo did not immediately respond to a request for comment.
Photos from local news stations show the Waymo vehicle pushed up against the sidewalk, with extensive damage to its front left bumper and wheel. The Honda’s entire front has been smashed in, along with its front passenger door. Its airbags appear to have deployed.
Video footage from the Waymo car’s cameras will help determine what happened, but based on preliminary evidence, the police say Waymo was not at fault. “The vehicle was at the wrong place at the wrong time,” says Seth Tyler, a spokesperson for the Chandler Police Department. “Waymo and the driver of the vehicle won’t get cited for anything because she didn't do anything wrong.” (Still, it’s early. In the hours after a self-driving Uber SUV killed Elaine Herzberg in nearby Tempe, police told reporters the crash may have been unavoidable. But video footage released a week later showed the opposite: The SUV's safety driver was not looking at the road in the moments prior to the crash, and autonomous vehicle experts say Uber’s tech should have picked up the pedestrian with enough time to hit the brakes or swerve out of the way.)
After that crash, Waymo CEO John Krafcik said his team’s technology would have done better. “We're very confident that our car could have handled that situation,” he told Forbes. “We know that for a lot of different reasons. It's what we have designed this system to do in situations just like that.”
In February, Waymo announced its cars had driven 5 million miles on public roads since its beginnings as the Google self-driving car project in 2009. Crash reports (which companies developing autonomous tech must make public in California) show Waymo cars have been involved in upward of 30 minor crashes but have caused just one: In 2016, a Lexus SUV in autonomous mode changed lanes into the path of a public bus. The SUV sustained minor damage, and no one was hurt. The numbers for humans are hard to pin down, but researchers at the Virginia Tech Transportation Institute estimate people crash 4.2 times per million miles. That would be 21 crashes over 5 million miles, roughly matching Waymo’s record.
The company has been running tests without safety drivers in Chandler and plans to launch a driverless taxi service sometime this year. Waymo (and other self-driving car companies) love Arizona for its sunny, sensor-friendly weather and its regulation-free approach to the emergent autonomous tech. (Governor Doug Ducey did, however, suspend Uber’s testing following March’s death.) Waymo is also testing AVs in northern California and Atlanta.
Even if the Waymo minivan isn't at fault here, the company can’t be happy about the timing. After the Uber crash (and the death of a man using Tesla’s semiautonomous Autopilot feature a week later), unsettling questions have returned to the surface: How do we know these systems are ready for service? Should they really be testing on public streets? Is keeping a human overseer behind the wheel an adequate backup? And aren’t these cars supposed to make everybody safer?
“The images simplify the story and look like terrible accidents,” says Bart Selman, an artificial intelligence expert at Cornell University. “Lots of mistakes are made by human drivers. We have gotten used to that and don’t even report that anymore.”
Indeed, in 2016, human drivers in Arizona averaged nearly 350 crashes and two deaths a day. That’s why promoters of autonomous technology harp on two facts: Nearly 40,000 people die on US roads every year, and human error causes more than 90 percent of crashes. Letting robots—which don’t get drunk, distracted, sleepy, or ragey—take the wheel could put a serious dent in those figures.
It will take time. Time to improve the technology, to test and deploy and make it widespread. Road deaths will never reach zero, and it will take decades to get anywhere near that level. In the meantime, the people trying to get there will have to keep at their work—and get ready to answer some unpleasant questions yet again.
Jack Stewart contributed reporting.