More “self-driving” options are installed in new cars every day, but no universal vocabulary designates what the systems can—and can’t—do. And the problem isn’t only inconvenient; it could also be deadly.
That’s the assessment of analysts with the British agency Thatcham Research, a nonprofit charged with evaluating car safety in the UK, similar to the U.S. Insurance Institute for Highway Safety.
The group recently railed against automakers for using vague terms such as “autopilot,” “super cruise,” and “driver assist” to describe their “autonomous” programs. The words may look good in glossy PR materials, but on the road, they could lead to a dangerous misunderstanding of—and overreliance on—the technology’s true capabilities, Thatcham representatives said.
Indeed, a 2017 poll by the Massachusetts Institute of Technology found that people have, at best, coin-toss odds of identifying what different self-driving systems can do. And such confusion and blind trust are already linked to several fatal crashes involving autonomous cars.
But Thatcham researchers hope they can help sort out the problem.
The group recently released a report on what, exactly, consumers should expect from an “autonomous” system—and one that merely “assists.”
The guidelines will be used when Thatcham begins evaluating the safety of different self-driving systems later this summer. And given the clout the influential agency wields, automakers—especially in Europe—seem poised to follow its lead.
At Your Assistance
Both “assisted driving” and “automated driving” features exist under the umbrella category of “vehicles that drive themselves,” the Thatcham report explains. But that’s just about where the similarities end.
“Assisted driving” systems involve much lower levels of technological engagement.
According to the report, the programs should provide “continued driving assistance for sustained periods of time,” in specific roadway environments, but the human behind the wheel should be responsible for actively piloting the vehicle.
“Assisted” programs can help with steering or speed support, for example, but don’t absolve the driver from watching the road or reacting to changes there.
To help insurers, carmakers, and consumers understand where to draw that line, Thatcham’s report lists the 10 considerations it will use when rating the safety of “assisted driving” systems:
- Naming—The system’s name should clearly describe its capabilities.
- Law Abiding—The system should be programmed to follow all legal rules of the road.
- Design Domain—The system should only provide assistance in environments where it can be safely engaged.
- Status—Vehicles should provide adequate alerts for when the system is in use, the level of assistance being offered, and when the system is no longer operating.
- Capabilities—Vehicles should provide assistance in all typical driving situations.
- Driver Monitoring—The system should not impede a driver from remaining engaged in the act of driving or being able to take back full control of the vehicle.
- Safe Stop—Vehicles should be able to stop safely in the event a driver fails to heed system warnings.
- Crash Intervention—Vehicles should be able to avoid or prevent an accident in an emergency.
- Back-Up Systems—There must be safeguards in place should an assisted driving system fail.
- Accident Data—The vehicle should record and report which systems were in use at the time of an accident.
The group is especially hopeful that the guidelines will aid insurance agencies struggling with new-age questions, such as who is at fault in an accident involving a “self-driving” car.
If the technology involved in a crash is deemed an “assisted driving” program, human drivers should still be on the hook for any damages, whether or not the program was in use at the time, Thatcham’s report recommends.
But “automated” systems will play by their own sets of rules.
An “automated car,” as defined by Thatcham researchers, comes much closer to what many imagine when hearing the term.
The vehicles will be rated on their capability to drive themselves safely in specific environments, without any input from a human occupant. That means the computer will be considered in control of the car and ultimately responsible for its movements.
But when autos are truly autonomous, humans become full-time passengers, which complicates things for insurance providers. Since the car will be deemed responsible for driving, insurers could be held liable for nearly every accident involving vehicles driving in “automated” mode. And car owners could potentially collect on those incidents, even when their own car is at fault.
This makes it especially important for automakers to speak the same language, Thatcham’s researchers argue. If one automaker’s “autopilot” is another’s “cruise control,” it will be increasingly difficult for insurance companies to determine fault—or to know what tasks a driver could reasonably have expected the car to handle.
It’s also imperative for the vehicles to clearly communicate with the driver when autonomous mode is in use—which will be a top consideration when Thatcham’s team begins rating “automated driving” programs later this year. (Other considerations were similar to those listed for “assisted driving.”)
And many automakers are heeding the call.
Writing on the Wall
The coming safety ratings could be a watershed moment for producers of autonomous vehicles.
Thatcham’s rankings carry weight in the UK and beyond. Its safety ratings often affect vehicle sales and the insurance premiums set for particular models. And most automakers in Europe strive to meet the agency’s stringent standards to get ahead of those problems.
The safety rating system could also filter across the Atlantic, where lawmakers are struggling to come up with their own rules of engagement with the technology.
And if the concept takes hold, Thatcham’s ten considerations could become something closer to ten commandments for the industry.