Arizona is proud of its Wild West heritage, and in many ways the state still embodies that largely lawless lifestyle, but even its cavalier leadership had to rein in some self-driving activity there following the first pedestrian death caused by an autonomous vehicle.
Uber has been suspended from testing its autonomous cars in the state following a March 18 incident in which one of its self-driving fleet vehicles struck 49-year-old Elaine Herzberg as she crossed a street in Tempe, killing her.
Arizona Governor Doug Ducey (R) issued the ban Monday, saying public safety should be the top concern for any company testing the experimental vehicles in Arizona, and the fatal crash was “an unquestionable failure to comply with this expectation.”
The bold move marks a stark departure for the state, which has otherwise embraced and promoted its largely unregulated landscape as fertile ground for testing the new-age transportation, in an attempt to court top players in the burgeoning industry. (The strategy has so far proven successful, with Arizona boasting nearly 600 autonomous cars on its roads—more than even California, the longtime leader in self-driving tech, can claim.) But while the official reaction may have stopped everything for Uber, did it change anything about the way we deal with autonomous cars?
What Went Wrong?
“Tempe Police Vehicular Crimes Unit is actively investigating the details of this incident that occurred on March 18th. We will provide updated information regarding the investigation once it is available.” — Tempe Police (@TempePolice), March 21, 2018
Footage of the incident released by the Tempe Police Department earlier this month shows test driver Rafaela Vasquez repeatedly glancing away from the road in the seconds leading up to the accident, then looking back up in what appears to be panic just before impact.
The video—which stops short of showing the impact—also seems to confirm earlier reports by police that the car showed no signs of slowing before the crash, which mirrors statements made by Vasquez herself claiming that “the first alert of the collision was the sound of the collision.”
Yet the 22-second clip seems to raise more questions than it answers, including whether Uber will be held legally accountable for the death, how it will handle the incident internally, and whether Vasquez’s apparent lack of attention behind the wheel was in violation of any company policies.
It also fails to address perhaps the most important issue at hand: why didn’t the autonomous technology recognize—and react to—the pedestrian? Herzberg crossed the road about 100 yards from a crosswalk, and the incident took place after dark, though experts in autonomous technology say neither factor should have prevented the car from detecting her presence or attempting to avoid a collision.
Still, an investigation by The New York Times found such technical hiccups weren’t uncommon in Uber’s Arizona test fleet. According to the report, the autonomous cars there had been experiencing issues for months before the incident and were struggling to average even 13 miles between each necessary human driver intervention. (Waymo, Google’s self-driving arm and perhaps Uber’s biggest competitor on the autonomous scene, averages about 5,600 miles per each human takeover.)
Despite the buggy tech, the company continued pushing for more miles on the experimental cars, splitting test-driving teams up for solo driving missions in an attempt to reach a year-end goal of driverless service in Arizona, according to the report. Company executives were allegedly concerned with catching up to the competition after Uber suffered through a contentious lawsuit against Waymo that put its self-driving efforts behind schedule.
In the wake of the tragedy, the company voluntarily halted its autonomous testing across all current sites, including in California, Pennsylvania, and Toronto, but even if it starts those programs back up, it will no longer be allowed to experiment on Arizona’s roads. The weight of the incident was heavy enough to even prompt several other top players to temporarily give up their autonomous testing, including Toyota and Boston-based start-up NuTonomy, and it remains unclear when those experiments will resume.
But it’s likely just a matter of time before they do.
A number of states remain extremely permissive when it comes to autonomous testing, and Washington has failed to produce any national regulations on the issue, partly because several senators, concerned about exactly this type of fatal accident, argued that the self-driving bill that quickly made its way through Congress didn’t do enough to address safety.
Still, the governing body recently made the technology a top economic priority, earmarking more than $100 million in its latest spending bill for autonomous car testing and research.
And self-driving companies have continued to lobby for a light regulatory touch in the wake of the incident, saying the fact that they’re willing to experiment so extensively on the technology proves their commitment to safety, and little-to-no federal oversight is needed.
Outside of banning Uber from its roads in the future, Arizona itself has barely budged its stance on the issue, with state officials saying they saw no immediate need to implement any further rules or regulations on the industry or walk back recent legislation that would allow for the cars to roam the roads with no one behind the wheel. Many experts attest that a key to success is turning difficult situations into teachable moments, but faced with the most tragic challenge of all, has the autonomous industry—and those who support it—bothered to learn anything?