MIT Researches the Trolley Problem with Self-Driving Cars

By: Bridget Clerkin | January 3, 2019
The trolley problem simulation from MIT continues the age-old ethics debate: who should die in an inevitable accident? But the university has added a twist by replacing the trolley with an autonomous vehicle.

Say you’re walking down the streets of San Francisco when, all of a sudden, you see a runaway trolley speeding down the tracks, bound to run over 5 innocent bystanders unlucky enough to be trapped in its way.

Then you realize this: there’s a side track onto which the trolley can be redirected—though further down that track, one person is bound to the rails and would also perish should the trolley roll through.

You then find that you’re standing right next to the lever that would switch the trolley’s path. Do you pull it, actively participating in the death of the single bystander, or keep your distance from the tragedy altogether and watch 5 people die instead?
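
In code terms, a purely utilitarian reading of the dilemma reduces to a single comparison. The sketch below is hypothetical and deliberately minimal; the names and numbers are illustrative, not part of the original experiment.

```python
# A minimal, hypothetical sketch of the utilitarian reading of the
# trolley problem: pull the lever only if it reduces total deaths.

def should_pull_lever(deaths_if_idle: int, deaths_if_pulled: int) -> bool:
    """Return True if redirecting the trolley kills fewer people."""
    return deaths_if_pulled < deaths_if_idle

# The classic setup: 5 bystanders on the main track, 1 on the side track.
print(should_pull_lever(deaths_if_idle=5, deaths_if_pulled=1))  # True
```

Of course, the whole point of the thought experiment is that many people don’t reason this way: actively pulling the lever feels different from passively letting the trolley run, even when the arithmetic is identical.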

This moral dilemma is a classic philosophical thought experiment called the trolley problem, and it’s been used for years to tease out the ethical contours of the human psyche.

But the old-school issue is getting a new-age adaptation, thanks to the development of self-driving cars.

Live & Let Drive

As the vehicles increasingly take over for humans behind the wheel, they’ll have to deal with more roadway situations in general—including those messy traffic issues caused by our imperfect intelligence.

Testing has already shown that humans are exceptionally wary of the cars, and, if anything, the self-driving rides err on the side of too much caution when dealing with people.

But what happens when they’re cruising down the street and an impending collision is all but inevitable? Should the car swerve to protect the pedestrians? Or keep barreling through to save its passengers?

The issue is arguably the thorniest problem facing the new-age machines, and one that researchers at the Massachusetts Institute of Technology (MIT) decided to look into further in 2014.

The school put together what amounts to the world’s most depressing video game, the Moral Machine, challenging participants to call the shots in situations where someone will live and someone will likely die in an unavoidable crash.

Contributors were walked through several scenarios in which all manner of variables were varied, including whether they were driving alone or with children, an elderly parent, or a pet in the car, as well as how many pedestrians, and what type, would potentially be in the path of a speeding autonomous vehicle.
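
Those variables can be pictured as a small data structure. The sketch below uses hypothetical field names (the study’s actual schema isn’t reproduced here) to show how one such scenario might be represented.

```python
from dataclasses import dataclass, field

# Hypothetical representation of one Moral Machine-style scenario;
# the field names are illustrative assumptions, not the study's schema.

@dataclass
class Character:
    species: str                   # "human" or "animal"
    age_group: str                 # e.g. "child", "adult", "elderly"
    crossing_legally: bool = True  # only meaningful for pedestrians

@dataclass
class Scenario:
    occupants: list[Character] = field(default_factory=list)    # inside the car
    pedestrians: list[Character] = field(default_factory=list)  # in its path

# One variation of the kind described above: an adult riding with a
# child, versus two elderly pedestrians crossing against the signal.
scenario = Scenario(
    occupants=[Character("human", "adult"), Character("human", "child")],
    pedestrians=[
        Character("human", "elderly", crossing_legally=False),
        Character("human", "elderly", crossing_legally=False),
    ],
)
```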

The response reportedly surprised even MIT, with more than 40 million decisions logged from 233 countries and territories in the 3 years the test was live.

Now researchers there have compiled the results, and a few righteous patterns emerged.

Thinking Globally

Documented recently in an article in the journal Nature, the results reveal a few overarching preferences humans share in the no-win situation.

Universally, participants chose to save more lives over fewer—regardless of whether that total came from inside the car or out of it. Contributors across the globe also showed an inclination to save humans over animals, and young humans over old ones.

Still, that triad of truths may be where the agreement ceases.

Digging deeper into the statistics, the school began seeing patterns emerge on a much more regional level, with researchers using the results to divide the world into 3 “moral clusters” based on the prevailing preferences of each area.
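
Mechanically, grouping countries this way amounts to clustering each country’s vector of average preferences. The sketch below uses hierarchical clustering over invented numbers purely to illustrate the step; the study’s actual features and values are not reproduced here.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical per-country preference vectors (all numbers invented):
# columns could stand for "spare the young", "spare humans over pets",
# "spare pedestrians", and "spare the lawful", one row per country.
countries = ["US", "France", "Japan", "Indonesia", "Brazil", "Colombia"]
preferences = np.array([
    [0.9, 0.8, 0.4, 0.5],
    [0.8, 0.9, 0.5, 0.6],
    [0.3, 0.7, 0.9, 0.8],
    [0.4, 0.6, 0.8, 0.9],
    [0.7, 0.8, 0.5, 0.3],
    [0.6, 0.9, 0.4, 0.4],
])

# Build a dendrogram of countries with similar preference profiles,
# then cut it into 3 groups, mirroring the study's 3 "moral clusters".
tree = linkage(preferences, method="ward")
labels = fcluster(tree, t=3, criterion="maxclust")
print(dict(zip(countries, labels)))
```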

The first group, called the Western cluster, includes all of North America, along with much of Europe—especially the countries where Protestant, Catholic, or Orthodox Christian groups have a big impact on the culture.

The Eastern cluster comprises many Asian countries, including Japan and Taiwan, where “Confucianist cultural groups” are the norm, as well as a number of Middle Eastern and other majority-Islamic countries, such as Indonesia, Pakistan, and Saudi Arabia.

And the final group, the Southern cluster, consists mainly of Central and South America, as well as a number of countries with heavy French cultural influence, according to the research.

Within those groups, the researchers found that the West showed much more preference for sparing the young, while the East felt much more strongly about keeping the elderly and pedestrians alive, and the South tended toward saving women and “more fit” characters.

But even those broad preferences can be further broken down.

Acting Locally?

Even more differences were found on a country-by-country basis, with everything from a prevailing “individualist” or “collectivist” culture to the strength of a country’s economy and political institutions coming into play.

Within each of the greater regional clusters, a lower GDP and a weaker rule of law tended to correlate with a more favorable attitude toward “criminals” (or, in the experiment, characters crossing the street without permission).
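
That kind of claim is a simple country-level correlation. The toy calculation below, with invented numbers, shows the shape of the analysis; it is not the study’s data.

```python
import numpy as np

# Invented country-level figures: GDP per capita versus how strongly
# respondents preferred sparing lawful pedestrians over jaywalkers.
gdp_per_capita = np.array([9_000, 15_000, 28_000, 42_000, 55_000])
penalty_for_jaywalking = np.array([0.10, 0.14, 0.22, 0.31, 0.38])

# A Pearson correlation near +1 would mean richer countries penalize
# jaywalking characters more heavily, the pattern the study reported.
r = np.corrcoef(gdp_per_capita, penalty_for_jaywalking)[0, 1]
print(f"correlation: {r:.2f}")
```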

Real-life schisms between the social classes also translated into the Moral Machine, as countries with larger gaps between rich and poor were found to treat characters of different classes in the game with “much less equality.”

And the disparities even extend to the male-female divide, with countries where women have statistically better access to healthcare favoring female characters in the Moral Machine more often than countries with worse health outcomes for women.

Still, in attempting to answer one question, the Moral Machine findings raise many, many more.

Which cluster’s set of standards, for example, should be considered “correct”? And what would stop the cars from making a move that no one agrees with anyway?

With the lack of widespread consensus in mind, should developers begin programming their vehicles to think differently depending on where they’re located? Or will our best human attempts to adhere to local moral codes bleed into a more universal ethics regardless?

After all, the machines are being programmed to know better than us. Perhaps it truly will take artificial intelligence to find some semblance of global understanding.
