What does it mean to be human? It’s a philosophical quandary both for and of the ages. But one federal agency recently offered its own legal take on the timeless query—and it involves the ability to navigate a car.
The computer system controlling Google’s new self-driving vehicles could be considered the equivalent of a human driver in the eyes of the law, according to a statement from the National Highway Traffic Safety Administration (NHTSA) earlier this month.
The lofty designation could clear a number of tricky hurdles previously facing Google—and other companies on similar quests to take the responsibility of driving out of human hands—on their path toward bringing self-driving technology to roads across America, and, eventually, the world.
What could make the NHTSA’s announcement so consequential is its broad definition of the concept of a “driver.” Until now, testing “driverless” cars required that the vehicles carry at least one licensed human. But in a letter written to Google, the NHTSA declared, “. . . it is more reasonable to identify the driver as whatever (as opposed to whoever) is doing the driving.”
“In this instance”—where no human driver is available in the car—“an item of motor vehicle equipment, the Self-Driving System, is actually driving the vehicle,” the letter went on.
Specifically, this declaration will help the tiny, white, pod-type cars preferred by Google. The vehicles have been controversial in their testing, as they include neither a steering wheel nor pedals that a human could use to override the self-driving system.
The NHTSA’s announcement flies in the face of a recent set of rules recommended by the California Department of Motor Vehicles (DMV), which took a more traditional stance on the definition of “driver.”
The precedent-setting attempt at self-driving vehicle regulation, released late last year, stated that any self-driving vehicle tested in the state would be required to include a steering wheel, pedals, and at least one human with a driver’s license, among a spate of other requirements.
The rules have yet to be officially adopted in California, and it’s unclear what effect they would have in the wake of the national agency’s declaration, or which philosophy would take precedence in the state.
Also unclear is who would be held responsible should a car with no driver get into an accident or otherwise break the rules of the road, and what effect this development will have on the car insurance industry.
What is crystal clear, though, is the other doors the NHTSA’s definition of “driver” will open for Google and other self-driving car companies. Most notably, it would allow those businesses to tap into the new $4 billion program created by the Obama administration to advance driverless technology.
And a recent batch of numbers released by the California DMV shows that such technology still has much room for improvement. Out of 459,695 test miles driven by driverless cars in the state last year, the automated technology was disengaged a total of 2,704 times, for a number of different reasons, according to data the CA DMV now requires from self-driving car companies on an annual basis.
Those remaining bugs in the system, along with the federal government’s lack, until recently, of any embrace of automated technology, have led many car companies to focus instead on less ambitious technology that aids a driver rather than fully taking over driving duties, such as assistance with staying in one’s lane and help with parallel parking, among other advances.
But never afraid to buck a trend, Google set out on its decidedly different path in the industry last year, after concluding that humans could scarcely be trusted to navigate the “hand-off,” the moment when they must step in and take control from a car that has been driving itself.
The tech giant relied on data gathered from its own employees to inform its decision. After initially being approved in 2010 to test its automated vehicles near San Francisco with professional drivers behind the wheel, Google was granted permission in 2014 to use the vehicles to shuttle some of its employees to and from work. What it found was a number of employees exhibiting dangerous behavior, becoming too distracted to realize they needed to step in or, in some cases, even falling asleep, absent the need to pay constant attention to the road.
The experiments led Google to take the stance that the most dangerous thing on the road today is the human driver, not the automated one.
But just how much safer will this new batch of “drivers” be? And how will people react to the new technology? Those are just more questions the human race will now have to answer.