What Happens When an Autonomous Car Kills Someone?

By: Bridget Clerkin May 9, 2018
Today's legal system isn't totally equipped to deal with what may happen when a self-driving car is involved in a fatal accident.

Ed.’s note: This is the third in a series of five articles on the future of the legal issues surrounding autonomous vehicles.


At the dawn of driver-free transit, autonomous vehicle developers aren’t putting the cart before the horse; they’re putting the car before the law.

The race to get fully self-driving vehicles on the road has hit a fever pitch, with a cadre of increasingly competitive companies across the globe already offering some level of automation in their rides—and promising more on the way.

But the ambitions of the world's technical minds have already outpaced the legal thinking needed to govern their computer-driven machines, and the consequences of that mismatch have been highlighted by a handful of deadly incidents.

The first death involving the technology took place in 2016, when the driver of a Tesla Model S smashed into a semi-truck while the car was in Autopilot mode. This March, two more people were killed: a woman crossing a road in Tempe, Arizona, was run over by an autonomous test vehicle, and a man in California died when his self-driving car steered into a highway median.

Yet with no clear regulations or case law parceling out liability in such cases, or in any accident involving a driverless vehicle, little has been done in the wake of these incidents to hold anyone accountable, and still less has been done to change how the vehicles are built, tested, or sold. In many ways, the current setup has been convenient for the companies still hotly pursuing the perfection of autonomous tech, but not everyone is happy with the state of affairs.

Required Reading

Partly fueling the rush toward autonomous cars is the number of lives the autos are predicted to save once the programs have had time to develop their driving skills. Roadway deaths have been on the rise for several years and likely topped 40,000 in 2017, marking the second consecutive year fatalities have breached that benchmark.

With roadway deaths climbing in recent years, autonomous vehicle technology, widely expected to be safer than human drivers, is in high demand.

Offering a seemingly easy fix to such a dire situation, the technology has been a popular cause for politicians on both sides of the aisle to rally around, and legislation drafted last year by Congress to regulate and foster its growth was once thought a slam dunk for passage.

But this winter, the approval process was held up by several senators citing concerns over the bill’s lack of clarity around safety and security, and the March 18 death of Arizona pedestrian Elaine Herzberg, struck by a self-driving Uber vehicle, only swelled their ranks.

Called the AV START Act, the bill offers automakers a legal loophole by failing to prohibit them from imposing forced arbitration contracts on future riders.

Such clauses would require any victims of self-driving accidents to settle with manufacturers out of court and thwart their ability to sue a carmaker over an incident or take part in a class action lawsuit.

The arrangement is especially beneficial for the corporations, which often hire the arbitrator and represent repeat business for the arbitrator’s law firm. And keeping victims out of open court, and engrossed instead in backroom dealings, prevents information about any potential problems with a company’s technology from reaching the public.

Lyft and Uber, two leaders in the self-driving race that are already experimenting with transporting customers in autonomous vehicles, both include forced arbitration clauses in their terms of service. And while Herzberg was struck by a test vehicle operating under a different set of rules than those that would govern a commercial ride, her family settled out of court with Uber days after the fatal incident, both reinforcing the idea of arbitration and preventing the creation of any new case law that could move the liability issue forward. (Arizona Governor Doug Ducey has since banned Uber from testing the vehicles on his state’s roads.)

To voice their concern, a group of 10 senators sent a letter to 60 manufacturers of the technology asking them to consider the negative aspects of forced arbitration clauses. Yet it’s unlikely the group will be able to change the measure itself at this point in the process. And in the absence of federal law on the subject, states have taken it upon themselves to legislate the matter, leaving plenty of loopholes of their own.

Keeping It Vague

Thirty states plus the District of Columbia have some form of self-driving law on their books, making the assignment of legal responsibility for autonomous accidents a patchwork affair at best. But the language many states have chosen for those rules only complicates things further.

Most of those laws keep the language as vague as possible, which can be useful when regulating a technology still very much in development. But the vagueness also muddies the legal waters further, allowing the entire industry to move on from such incidents without having to change much, or anything, about its testing processes or business models.

Arizona Gov. Doug Ducey has left his directives regarding automated vehicles purposefully vague in order to allow the technology to grow in that state.

The 2015 executive order permitting autonomous car testing in Arizona, for example, directs state agencies to “undertake any necessary steps” to encourage the proliferation of the technology there and merely calls for an operator’s ability to “direct the vehicle’s movement if necessary,” without explaining what would be considered a necessary move.

A second executive order, signed by Ducey this March, does much the same, requiring vehicles to meet a “minimal risk condition” while offering only one broad example of such a state: “bringing the vehicle to a complete stop.”

And states looking to other laws, such as the common carrier doctrine, for guidance may find similar linguistic complications.

Typically applied to services that provide amenities to the masses, such as buses, taxis, hotels, insurance companies, and Internet providers, common carrier laws hold those industries to a higher standard of care than other businesses. But it remains unclear what, exactly, a “higher standard of care” means, or whether the autonomous rideshare services of the future, or even the human-driven rideshare services of today, legally qualify as common carriers at all.

But there may be another reason so many mushy definitions end up in state measures: holding states accountable for the actions of autonomous cars on their roads, or for their lack of legal oversight of the experimental vehicles, is a highly unlikely scenario, as legal expert Bryant Walker Smith recently explained to The Atlantic.

“In general, states are not liable for policy determinations,” he said. “A state might be liable for not properly maintaining a road, but not for deciding whether or not to build the road.”
