How Do You Sue a Self-Driving Car?

By: Bridget Clerkin May 11, 2018
An autonomous vehicle's manufacturer is the likely target of a lawsuit if someone is injured by that vehicle, according to legal minds studying the issue.

Ed.’s note: This is the last in a series of five articles on the future of the legal issues surrounding autonomous vehicles.


Autonomous cars may be quickly acquiring the skills needed to drive, but it’s much harder for the technology to take responsibility for any accidents it causes on the road.

Self-driving vehicles are resistant to distraction and fatigue, but they remain far from impervious to mistakes, as a pair of deadly incidents in March 2018 showed. And for all their ability to “think” like a human, they lack the capacity to carry an insurance policy like one, making it difficult to assign fault in a driver-free crash.

Once fully realized (called “Level 5” automation), the systems are anticipated to all but eliminate traffic incidents, but those days are still likely decades away—and in the meantime, the cars will not only have to continue adapting, but do so while sharing the road with imperfect human drivers.

The impending in-between years represent not just the biggest crucible the machines are likely to face in their implementation, but also the biggest challenge for those meting out legal responsibilities for any future accidents.

And while current laws have proven flexible enough to adapt to ever-progressing technology, the vehicles are unlike anything the justice system has ever encountered.

A Class All Their Own

To start laying the legal groundwork needed to navigate through such uncertainty, many in the world of law have turned to the idea of strict liability, which would hold a vehicle manufacturer accountable for any incident caused by a defect in a car.

After practicing liability law for more than seven years in both the U.S. and Canada—and counseling automakers on regulatory compliance matters concerning the U.S. Department of Transportation and the National Highway Traffic Safety Administration—Tina Georgieva agrees.

“From what the experts tell me, they don’t believe a fully-automated, level 5 vehicle will be prevalent until about 20 years from now,” the senior attorney at Detroit-based firm Miller Canfield said. “This big transition period, when there’s going to be more Levels 3 and 4 on the road, there’s going to be a lot of issues involving the shared control between the human operator and the machine, which are going to lead to, in my opinion, product liability issues.”


Whether the cars give a driver “sufficient warning” when he or she needs to retake the wheel is one issue Georgieva imagines will need to be sorted with semi-autonomous vehicles. (Indeed, the matter has already come up during investigations into several deadly accidents involving Tesla’s autopilot system.)

But within the broader context of product liability, there’s one particular kind of case Georgieva foresees becoming much more prevalent with the rise of self-driving cars.

“From what we’ve seen on our side of the world [as defense attorneys], a pattern of litigation has emerged, and a lot of class action has emerged,” she said. “Plaintiff attorneys aren’t as focused on specific cases as much as cases concerning design defects, because that would create thousands of potential consumers who could claim not just personal injury but economic loss.” It would also offer a more straightforward path toward arguing a product liability claim, as manufacturers would be on the hook for any engineering flaws.

At Your Service

Still, not everyone in the legal world agrees that product liability is the only avenue for litigating such cases.

California personal injury lawyer Christopher Dolan pointed out in a recent report on the concept that autonomous vehicles could instead be designated a “service,” which would make the rides subject to contract law—a body of rules that could skew very favorable for businesses.

Legal scholar Bryant Walker Smith has also pointed out this possibility, saying it could give automakers more control over how the cars are handled and updated after they’re sold, which could help shield the companies against accusations of negligent manufacturing.


For her part, Georgieva disagreed that contract law would be the best route to take, saying the trade-off would be too great for the consumer.

“For contract law, you’d have to accept the terms and conditions for the service in one of those long service agreements,” she said, adding that, in most cases, such contracts include a mandatory arbitration agreement, which would prohibit users from suing service providers in the case of an accident or joining a class action lawsuit.

“My personal opinion is that the plaintiffs’ bar would not allow that to happen,” she said. “The stakes are high when you’re talking about the potential for personal injury. It would be difficult to get that type of law passed, where somebody would be signing away their right to a jury trial by their peers.”

Getting the Picture

That’s not to say the idea of product liability is any easier to litigate.

“Product liability is almost a he said/she said scenario,” Georgieva said. “The plaintiff says, ‘This happened one way,’ and the manufacturer says, ‘No, it couldn’t have. Our product did not cause this accident.’ And essentially, you just have to look at the artifacts of the accident and you have to work backwards to reconstruct what happened.”

New-age technology makes this process more difficult in some ways, since a broader range of experts will be needed to analyze what could have gone wrong with such a complex machine. In other ways, though, it drastically simplifies things: the car itself can be called in as a star witness.

Most autos are now manufactured with an event data recorder, which essentially acts as the vehicle’s black box. The device can offer specific information from up to 15 seconds before a crash, including whether the brake pedal was used, the speed of the vehicle at the time of the accident, whether the car was accelerating or decelerating at the point of impact, and even the angle of the steering wheel at the time.

The right to retrieve that data was reserved exclusively for vehicle owners or lessees in the Data Security Act of 2015, a move that prevents manufacturers from obtaining the information, even in the case of a lawsuit, unless granted permission. The idea may have had good intentions, but it actually complicates the matter, Georgieva said.

“Look, if the tech is so developed that the levels 2, 3, 4, 5 will already be harvesting all the data from all the cameras and LiDAR and sensors, why not make that data available to stakeholders?” she said. “I’m not suggesting making it available for the public or through a [Freedom of Information Act] request, but if you’re a first responder trying to allocate fault or if you’re an insurer or a manufacturer who has been sued for a product liability claim, you should be able to get that data.”

Lady Justice Leading the Blind

It’s not just those practicing the law who need to wrap their heads around these issues; so do those writing the laws.

So far, the subject matter has proven a tricky one for legislators, especially on a national scale, with the adoption of federal regulations currently stalled in the Senate due to concerns over the vehicles’ safety and security. And though a majority of states have since ventured into their own rulemaking territory, the regional governments are still looking—and waiting—for guidance from federal agencies on certain matters—particularly those concerning how the vehicles themselves can be built.


“Frankly, I think lawmakers are a little overwhelmed – with good cause,” Georgieva said. “The process of creating law needs to take its course, and it’s time consuming and important to write it in a way that, in two years, that law doesn’t become obsolete.”

Still, it’s not the first time an industry has been far more fleet-footed than the bureaucracy meant to oversee it. Georgieva mentioned that the financial industry went through a similar growth spurt after it was largely deregulated in the late ‘90s, with lawmakers eventually codifying most of the best practices that corporate leaders had naturally developed over time.

But the Frankenstein qualities of a self-driving car may make reaching such industry-wide consensus more difficult.

“The space is changing, where a lot of nontraditional entities are entering into the auto industry,” Georgieva said. “The auto manufacturers who have been in this business for decades and decades, they’re the more cautious ones, because frankly, they know what it’s like to testify before Congress if their vehicle is found responsible for a death. Then you have the younger tech companies who are more eager, and not as seasoned in the regulatory space. If your computer fails or your iPhone fails, it’s not the same thing as your vehicle failing – there’s a little bit of a disconnect.”

Regardless of how cautiously or carefully they’re crafted, however, the technology of tomorrow will quickly test the laws of today, and it won’t be long before the consensus of industry titans or even the perfect video recall of an event data recorder won’t be enough to determine who’s at fault in a world where vehicles can think for themselves.

New Frontiers

The pursuit of a more perfect intelligence to guide the revolutionary vehicles raises the question of how much longer the cars can—or should—be considered a “product” at all.

Autonomous autos are expressly designed to use their own past experiences to dictate future moves on the road, making the vehicles, at least in theory, as prone to developing individual driving styles as their human counterparts.

And even if all artificial intelligence is created equal—and programmed to follow the letter of human laws—the nature of the systems will cause them to “grow” and learn different skills at different rates, making them behave more like man than machine.

Some have argued that the futuristic concept fosters enough ambiguity to warrant the creation of an entirely new category of “personhood,” which could both more expressly define the role of the technology in the eyes of the law and allow for the vehicles to absorb greater legal responsibility for any autonomous actions—though it remains unclear how, exactly, any judgment would be exacted from the robocars.

But even in those advanced cases, it may be possible to find manufacturers accountable by boiling things down to the source—literally.

“In these cars, the source code rewrites itself and it develops into something that’s supposed to be better in situational scenarios,” Georgieva said. “Plaintiffs’ attorneys can absolutely still use the product liability law system to bring claims in that case.”

She gave the classic example of the “trolley problem,” in which a collision is inevitable, and any possible move could lead to potential death or severe injury of at least one party. The vehicle, in that case, would have to “choose” which path to take—and essentially which party to put in harm’s way—but even an action seemingly resulting from free will could be legally traced back to the car’s programming, Georgieva said.

“The vehicle makes either Choice A or Choice B, and the injured parties bring the suit against the manufacturer, saying ‘It should’ve chosen the other choice,’” she explained. “But the plaintiff is allowed to argue that alternative designs exist—and that maybe there’s a better design that would have allowed a better outcome. It’s going to be an issue when you have a bunch of people all trying to develop their own source code, and someone says Company A’s car can’t handle a certain scenario, but Company B’s car could, and Company A could’ve designed it to react like Company B.”

Ultimately, Georgieva said, accountability for even the smartest machines would likely still fall to their makers.

“Until we reach a point where AI can truly think like a human, and reach a certain level of human thinking in terms of how to write its own code and what type of decisions it makes, the argument would essentially be that the engineer decided that the occupant’s life is more valuable than the pedestrian’s—or vice versa.”

The concept may rob the vehicles of their new distinction of “personhood,” but it certainly presents a new-age definition of tragedy.
