Live and Let Die: When Driverless Cars Decide Your Fate

By Bridget Clerkin | July 5, 2016
As we continue to put more trust in technology, big questions about morality arise.

It started out simply enough: crunching numbers with a few stones strung along wooden rods.

Now, in every purse, pocket, or perhaps lying forgotten on a bedside table, there’s a cell phone more powerful than all the computing might needed to send mankind into space.

On its path to preeminence, our technology has learned a great many skills, but it’s never been required to consider why what it does matters—until now.

Our computers now have enough savvy to take on the task of driving our vehicles—and much more safely than we do, an increasing number of experts say—but even in a highly controlled environment, accidents will happen. And when they do, and a collision is inevitable, should an autonomous vehicle swerve or stay the course?

How do you encompass the total number of possibilities borne of freedom of choice in a rigid algorithm? Can even the most finely tuned machine land on the right ethical judgment, when nothing but a split second separates it from the decision to put either a passenger or a pedestrian in harm’s way? And how do you define what makes an ethical judgment “right” to begin with?

Short of the development of true artificial intelligence, these questions must be answered by us, and the burden of this moral quandary rests largely with the vehicles’ engineers. But even if an industry consensus is—or can be—reached, the concept will have to be sold to the public, who already look at self-driving cars with sideways glances and a healthy amount of reserved judgment.

New Age Game Theory

Imagine this: your automated vehicle is cruising right along when, out of nowhere, a pedestrian jumps in your way. You have no direct control over the situation, so what do you hope your car ends up doing?

Now picture that your mother, father, or significant other is in the car with you. But now it’s not just one pedestrian, but five who would get mowed down. Or 10. Maybe there’s a baby stroller being pushed across the street. Maybe your dog is in the car with you. Does it make a difference? Does what you want to happen change?

It sounds like the basis for the world’s worst video game, but it’s actually the line of questioning a group of scientists recently posed in a series of surveys conducted to gain some insight into how to deal with this very real ethical conundrum.

And the answers they got were as murky and uncomfortable as the proposed hypotheticals.

Participants overwhelmingly answered that any self-driving car should automatically choose the path of least destruction: that is, whichever scenario would put the fewest lives in danger, even if those lives belonged to the vehicle’s own passengers.

But when asked if they would consider buying a vehicle programmed to sacrifice its riders in such cases, a majority of survey participants balked—choosing instead a model that would “protect its passengers at all costs.”  


This seemingly hypocritical stance is actually so deeply rooted in our psychology that the phenomenon has a name: “the tragedy of the commons.”

The theory proposes that, when a shared pot of resources is at stake, individuals will stop thinking about the greater good and instead try to get the biggest slice possible for themselves.

The behavior causes a ripple effect: others who may have been more open to sharing reevaluate their stance once they realize the pool is dwindling ever faster as each person dips greedily and frantically into the supplies. They worry that if they don’t act fast and take as much as they can, too, there might be nothing left for them in the end.

On the road, drivers and passengers will tend to lean towards a car that provides them with a bigger piece of the safety pie, leaving the idea of the “public good” in the rearview mirror.

Who Really Holds the Keys?

Such survival instincts are deeply entrenched in our biology, but we’ve long since stopped needing them to succeed in a society that has traded its baser impulses for steady reasoning and the rule of law.

Still, hammering out the legal issues surrounding the regulation of such an ethical quandary caused another stir for survey-takers, who couldn’t quite decide whether the programming choice should be left to the government’s, the automakers’, or their own discretion.

Self-driving technology engineers say the sheer logistics of tweaking autonomous settings could rule out car owners individually deciding which method they prefer, and many believe leaving it to automakers would create a bureaucratic nightmare in and of itself.

In the end, the federal government would likely make the final call—and is, in fact, due next month to release a new set of regulations for the burgeoning self-driving industry—though many survey respondents also seemed wary of that scenario.

Many government officials are likely unsure of the situation themselves, and navigating it will require some tightrope walking.

Too much of a utilitarian route—that is, programming the cars to seek out the fewest likely casualties, even if their own occupants are sacrificed—could turn the public off from self-driving technology altogether.

That could be a crucial turn in the wrong direction for the government, which has repeatedly and publicly touted the potential for self-driving cars to put a dent in the nation’s rising traffic fatalities at a time when distracted driving has become more deadly—and costly—than ever. (An estimated $1 trillion was spent on car accidents nationwide last year alone.)

Whichever agency is saddled with the thankless task of providing such guidelines will have to ask itself: is creating cars that don’t sacrifice themselves for the greater good a move that could ultimately benefit the greater good of the country as a whole?

A Rich Man’s Game?

But telling cars to keep their riders safe regardless could prove discriminatory in other ways.

When a product is marked for early success in the market, it’s typically sold to those who are already successful themselves. And while self-driving cars have yet to be released to the general public and no price point has been set, they would likely be out of reach of most consumers—at least at first.

Letting cars choose passenger over pedestrian, then, could be chalked up to creating another advantage for the rich—equating the right to one’s safety with purchasing power.

But in a future world where automated vehicles are the only type on the road, others could argue that pedestrians in such scenarios would more likely than not be at fault, provided that the technology was working correctly.

That, of course, doesn’t take into account the idea that the programming could—and would—be susceptible to hackers, and it elevates the machines themselves to the duties of judge and jury in this grisly scenario.

Future Mile Markers

Thankfully, such no-win accidents are rare, but as we prepare to transfer more of our responsibilities to our computers, we must also begin to transfer some of our humanity. The forethought involved in how to do so is a novel concept, but it helps us create a blueprint for converting our instincts from neurological pathways to electrical circuits.

Automated vehicles have proved deft replacements for human hands on the wheel and feet on the pedals, but they have yet to even approach replicating the human heart and reason needed to truly navigate our civilization. While we seem bound nonetheless by sheer inertia to continue driving down that path, all we can do is hope we’re not on a collision course.
