Are Today’s Laws Adequate for Self-Driving Cars?

By: Bridget Clerkin | May 8, 2018
Current laws around driving—everything from speeding to hit-and-runs—are based on having a human behind the wheel. As more autonomous vehicles hit the road, jurists and lawyers will have to determine how the new tech fits within existing law.

Ed.’s note: This is the second in a series of five articles on the future of the legal issues surrounding autonomous vehicles.


With two recent deaths on their hands—and no punishment handed down—it might be easy to assume self-driving car developers are above the law. But punitive measures for such fatal incidents may exist—if one can read between the legal lines.

The burgeoning technology—which killed an Arizona woman and a California man during a single week in March—is undeniably tricky for the current justice system to address, with the most obvious problem facing any jurist or jury being the lack of a driver to pin the misdeeds on.

Currently, many states require a human back-up driver behind the wheel of autonomous cars, though some—including both Arizona and California—let manufacturers deploy autos without one.

Yet even in cases where a pair of human eyes and hands are available, parsing out punishment can prove difficult, as it remains fuzzy whether man or machine should ultimately be accountable for the vehicle’s actions. State legislators have been loath to define the responsibilities of flesh-and-blood test drivers, preferring to leave those determinations to the companies experimenting with the technology. And in some cases, including the recent fatal incident in Arizona, crashes could be deemed “unavoidable” regardless of who—or what—is in charge of the car.

Compounding those complications is the likely decline of individual vehicle ownership. As autos become nimbler navigators and begin taking over the streets, many predict roadway transportation will shift to a communal model, with autonomous cars dispatched in fleets owned and operated by the auto and tech companies building them.

In that case, manufacturers could be held liable to some degree—and plans for pertinent insurance policies are currently being developed—but if their product is operating the way it’s intended, the car will be able to make decisions beyond the scope of predictability, making it difficult to assign fault in an accident.

The futuristic issues will surely spawn a spate of new rules and regulations for years to come, many of which may be established on a per-case basis. Still, some in the legal world believe at least some of those gray areas can be addressed today through a pre-existing body of law.

To a Fault

Tort law largely covers any case that could crop up in civil court. The extended body of rules offers lawful definitions of “injury” and “harm” and lays the groundwork for suing any party deemed responsible for inflicting such damage.

Several types of tort—essentially legal shorthand for “wrongful act”—exist, including strict liability, under which a party can be held responsible for a mishap even without intending the outcome or causing it through negligence, and strict product liability, which extends that responsibility to the manufacturer, distributor, or seller of a faulty product.

Strict liability is often applied when one party places another in the way of potential danger through the possession of a potentially hazardous product, animal, or weapon, while strict product liability puts manufacturers on the hook for any harm caused by their product, whether or not the manufacturer attempted to address the item’s defect.

Legal experts recently highlighted both legal categories as the most likely avenues to litigate cases of autonomous vehicle-based injury—though not without caveats.

Strict liability cases are typically reserved for activities with well-known, specific risks, so the doctrine may not aptly apply to new-age technology capable of making its own decisions—and therefore creating unpredictable levels of danger.

A similar issue arises when considering strict product liability law, as it would become increasingly difficult to determine what constitutes a “defect” in a product designed to adapt from its original—and conceivably problem-free—state. Complicating the issue is the individual way in which the machines are programmed to change, as they will learn different techniques at different rates depending on their particular experiences.

A third potential avenue for trying such cases exists in the set of vicarious liability laws, which hold that one party—typically an employer—can be found responsible for the acts of another, such as an employee.

But this scenario, too, raises uncomfortable questions about the nature of the human-robot relationship, including whether the car should be considered a machine, a service, or a legal “person” in its own right. Still, at least one governing body is already preparing for a future of such blurry lines.

Stranger than Science Fiction

[Photo: Isaac Asimov] Science fiction author Isaac Asimov created the Three Laws of Robotics, which may be instructive for how lawmakers think about the safety responsibilities of self-driving cars.

In 2017, the European Parliament realized that the future is now and set about drafting rules to govern legal interactions with mankind’s robotic peers.

The 22-page report was filled with novel takes on addressing the issue, including the idea of a “compulsory insurance scheme” not unlike those used in the auto insurance industry, through which designers, manufacturers, programmers, and users of the machines would all contribute to a universal fund that would be used to pay for any incident “subject to limited liability.”

But the document also recognizes that a time may come when no single person or company can be singled out as responsible for the decisions of an artificially intelligent entity—which the report proposes legally defining as an “electronic person.”

At that point, the draft says, other measures must be pursued. And while no suggestions were offered for such potential future incidents, the report references another set of novel rules for engineers of the technology to adhere to until then: those found in the work of author Isaac Asimov.

Responsible for a wealth of hard science-fiction stories, including the ominous I, Robot, the writer famously penned a set of rules that have since come to be known as Asimov’s Laws, or the Three Laws of Robotics:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

Together, the principles are meant to foster a world in which robots remain obedient to their human masters, rather than using their superintelligence to rise up against their organic lords.

At a time when so many legal questions surround the future of robotics and artificial intelligence, the European Parliament paper argues, these laws should at least be considered essential to follow.
