
Can a Magic Math Formula Stop Self-Driving Cars from Crashing?

By: Bridget Clerkin | October 31, 2017
A comprehensive study by computer chip manufacturer Intel may have found the formula for drastically reducing the chance of crashes in self-driving vehicles.

When it comes to theories on the meaning of life, there exists one that posits the world as a math problem. Reality, it says, is simply the sum of a nearly infinite number of variables, which recalculate constantly. And if one could possibly identify and solve for all factors involved, one would become omniscient—and even see the future.

The idea may not be taught in calculus class, but it seems such mathematical determinism has found a follower in Intel. The computing giant recently announced that it had enough of the universal equation mapped out to steer the fate of autonomous vehicles.

Specifically, company officials said Intel could stop all self-driving crashes with the power of math.

According to a study conducted by the company, Intel’s golden equation could not only help clear up car insurance questions on the new technology but also foster greater trust in the machines.

The idea of trust is paramount for Intel—the California-based computer chip maker has billions to gain from the proliferation of self-driving autos. (In fact, it’s already run a separate study to see how to get more humans on board with the robo-cars.)

But does its mathematical model really add up to a safer ride, or is it merely meant to help subtract steering wheels, brake pedals, and human control from the vehicles?

Checking the Receipts

Where some see mistakes, others see opportunity, and Intel falls into the latter camp.

In order to scrub the world of autonomous crashes, the company studied what led human drivers to err behind the wheel, reportedly reading up on nearly every fatal accident logged by the National Highway Traffic Safety Administration (NHTSA) to look for useful patterns.

Among the figures the company examined were the average speeds at which the accident-bound cars were traveling and their distances from other vehicles, along with road-condition analyses and the known reaction times and evasive-maneuver capabilities of automated autos.


Intel also calculated the probability of a fatal crash occurring in any given hour of driving with a human behind the wheel, which it determined to be 10⁻⁶. In layman’s terms, that works out to about one deadly crash for every million hours of driving.

But by utilizing the math formula created by its number-crunching pros, Intel found that self-driving cars would come much closer to one fatal accident every billion driving hours, a probability of roughly 10⁻⁹ per hour.
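For the numerically curious, here is a minimal arithmetic sketch of how those two rates compare; the code is purely illustrative:

```python
# Illustrative arithmetic only: converts a per-hour fatal-crash probability
# into the expected number of driving hours between fatal crashes.

def hours_between_fatal_crashes(per_hour_probability: float) -> float:
    """Expected driving hours per fatal crash, assuming a constant hourly rate."""
    return 1.0 / per_hour_probability

human_rate = 1e-6   # Intel's figure for human drivers (10^-6 per hour)
rss_target = 1e-9   # the RSS goal of 10^-9 per hour

print(hours_between_fatal_crashes(human_rate))  # 1,000,000 hours: one crash per million hours
print(hours_between_fatal_crashes(rss_target))  # 1,000,000,000 hours: one crash per billion hours
```

Put another way, the target amounts to roughly a thousandfold improvement over the human baseline.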

While the set of equations—which include a “safe distance formula”—won’t stop accidents completely, they’d essentially guarantee fault would lie with human drivers. The algebraic parameters would whittle down the odds of the robo-cars misbehaving to a statistically insignificant number, Intel said. The system is called Responsibility-Sensitive Safety (RSS), and it bans the cars from issuing commands that would lead to an accident.
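The general shape of such a safe-distance check can be sketched: the trailing car must leave enough room to react and then brake to a stop even if the car ahead slams on its own brakes. The snippet below is a rough, illustrative rendering of that idea with assumed parameter values, not Intel’s production code:

```python
# Illustrative sketch of a "safe following distance" check in the spirit of RSS.
# All parameter names and numeric values are assumptions for illustration only.

def safe_following_distance(v_rear, v_front, response_time,
                            max_accel, min_brake_rear, max_brake_front):
    """Minimum gap (meters) the rear car should keep so it can always stop in time,
    even if the front car brakes as hard as possible while the rear car is still
    reacting (and possibly accelerating) during its response time."""
    # Distance the rear car covers before it even starts braking.
    reaction_distance = v_rear * response_time + 0.5 * max_accel * response_time ** 2
    # Distance the rear car then needs to brake from its worst-case speed.
    worst_case_speed = v_rear + response_time * max_accel
    rear_braking_distance = worst_case_speed ** 2 / (2 * min_brake_rear)
    # Distance the front car travels while braking at its maximum rate.
    front_braking_distance = v_front ** 2 / (2 * max_brake_front)
    return max(0.0, reaction_distance + rear_braking_distance - front_braking_distance)

# Example: both cars at 25 m/s (about 56 mph), half-second response time,
# with illustrative acceleration and braking limits in m/s^2.
print(round(safe_following_distance(25.0, 25.0, 0.5, 3.0, 4.0, 8.0), 1))  # ~61.6 meters
```

A check like this gives the system a numerical definition of a “dangerous” gap, which is what allows responsibility to be pinned on whichever car violated it.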

Still, in order to safely share the road with their flesh-and-blood counterparts, the machines will have to take some human-like driving cues.

Solving the Problem?

Giving self-driving cars a blueprint for what to do when things are going right—and an emergency mode for when things go wrong, like anticipating other drivers' bad behavior—is a key to Intel's Responsibility-Sensitive Safety system.

Self-driving cars are still far from true autonomy—and the robots are expected to share the streets with humans for decades.

So in order to create a safer environment, the vehicles must keep it real, and work within their imperfect surroundings. (Anything less, like directing the cars to creep at sluggish speeds and stay away from other autos, amounts to nothing more than a “very expensive science experiment,” Intel officials said.)

Essentially, the company’s equations set the cars on one of two tracks: a “safe state” or a “default emergency policy.” Each is designed to help the autos fit in with—and prepare for—their less-than-perfect roadmates.

“Safe state” is the default mode of the vehicles: RSS remains in control while the system continuously monitors the road for brewing trouble, such as a fellow motorist angling to cut the car off. If such an event happens, the auto switches to “emergency mode,” where it’s directed to use “the most aggressive evasive action” available to return to “safe state” as quickly as possible.
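In code, that two-mode logic might look something like the rough sketch below; the names, the data structure, and the danger test are assumptions made for illustration, not Intel’s implementation:

```python
# Minimal sketch of the two-mode behavior described above: stay in a "safe state"
# while monitoring, and fall back to an emergency policy when another road user
# creates a dangerous situation. All names and thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class Observation:
    gap_to_lead_car: float     # actual gap to the car ahead, in meters
    required_safe_gap: float   # minimum safe gap, e.g. from a safe-distance formula

def choose_mode(obs: Observation) -> str:
    # Trouble is brewing: the actual gap has fallen below the computed safe gap,
    # for example because another driver cut the car off.
    if obs.gap_to_lead_car < obs.required_safe_gap:
        return "emergency mode"  # take the most aggressive allowed evasive action
    return "safe state"          # normal driving; keep monitoring the road

print(choose_mode(Observation(gap_to_lead_car=30.0, required_safe_gap=60.0)))  # emergency mode
print(choose_mode(Observation(gap_to_lead_car=80.0, required_safe_gap=60.0)))  # safe state
```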

And the formulas leave some room for free will to set in, with the machines able to exercise deep learning abilities by making their own choices on how to proceed in such situations—as long as the action is mathematically sound when checked against the formula.

With the lengthy transition toward truly driverless vehicles just beginning, the numerically guided ability to act “normal” is a crucial step for the cars, Intel’s report argues—but it may be an even more important milestone for humans.

The “X” Factor

In its mathematical manifesto, Intel says it hopes to avoid an “AI winter”: a period in which work on the technology slows because of society’s inability to understand—or accept—such projects.

And an important part of fostering understanding—and the eternal spring of artificial intelligence—is ensuring we all speak the same language.

Translating the abstract idea of safety to a computer transfers the tricky topic from elusive thoughts to concrete numbers. (According to Intel, the process “provides specific and measurable parameters for the human concepts of responsibility and caution.”)

But this concept does more than keep humans and robots on the same page—it offers a basis for new liability laws.

With the formulas understood by both man and machine, new legislation can be drawn up on the presumption that the vehicles will never be held accountable for accidents, Intel suggests. The idea is to normalize not just the fact that self-driving cars are on the road, but also the process of dealing with crashes they’re involved in.

While a number of self-driving accidents have already occurred, the technology has never been found at fault for a crash. But the current insurance infrastructure offers little guidance on how to handle such a scenario, often dragging out investigations and drawing outsized public interest to something that should be considered routine, Intel’s report argues.

Clear-cut liability laws will allow drivers, insurance companies, and law enforcement officers to quickly and easily move on from the situations, giving automakers breathing room to keep tinkering with the technology and beating back any negative public opinions that may form, the report said.

Still, for now, the magic formula has yet to be adopted, so self-driving vehicles will just have to remain innocent until calculated guilty.
