If you’ve ever cooed at Pikachu, you’ve felt its effect: “Kawaii,” the Japanese construct of amplified adorableness responsible for the army of precious, doe-eyed cartoon characters hailing from the island nation.
Altogether, the word can mean tiny, sweet, or cuddly, but at its root is ai, Japanese for “love,” literally putting the (bubble-shaped) heart in kawaii.
And the concept could prove to be crucial for another type of “AI”: the artificial intelligence attempting to drive our vehicles.
In order for us to accept fully autonomous rides, we must first learn to trust the machines replacing us at the wheel, and tugging at heartstrings may be a potent way to foster that loving feeling.
Some in the auto industry have already tested designs that bear more than a passing resemblance to an anime character. But with public faith in the technology flagging, will a cute face sell us on self-driving and teach us to love our robot chauffeurs?
Take It at Face Value
Clever carmakers in pursuit of cuteness need look no further than Mother Nature for style cues.
Small creatures from across the animal kingdom tend to have large eyes, chubby cheeks and small noses for a reason. It’s called “baby schema,” and it may play a big role in our survival.
Studies have shown that the “adorable” configuration—which also includes a round face, plump body, and small mouth—sets off a chain reaction in our brains that stimulates our protective instincts. The evolutionary trick helps ensure fussy infants are cared for and nurtured. (It also explains why we all squeal with delight at babies and puppies, despite ourselves.)
Many of those features were mirrored in the earliest self-driving prototypes produced by Google, which were about as round and chubby as a car can get. The 50-vehicle fleet, which has since been retired, also sported large, wide-set headlights, tiny sensor noses, and cute turn-signal cheeks. Even the auto in Google’s self-driving car logo wore a smiling fender.
And according to a report by The Oatmeal, the design was intentional. Engineers said they hoped it would encourage fellow motorists to treat the cars kindly, rather than fear them or ram into them.
But transferring human qualities to inanimate objects could help sway the public in other ways, too.
When we recognize face-like shapes on other objects, we tend to assign them human qualities, in a process called anthropomorphism. And the results can be powerful.
Inanimate objects can be deemed inherently “good” or “bad” when that mental filter is applied, making us feel the things deserve punishment or reward, according to a recent study. The researchers found that the same parts of the brain charged with judging the behavior of fellow humans start activating when we elevate inanimate objects to the level of our peers. And the closer something looks to a person, the stronger the psychological link.
If we start seeing ourselves in the cars, we may be more inclined to approve of them; and if we see the most adorable version of ourselves in the vehicles, we may be more inclined to love them.
But should we trust them?
What's Love Got to Do with It?
Currently, trust is one quality the technology isn’t widely afforded.
Seventy-eight percent of participants in a recent AAA survey said they feared self-driving cars, while 54% said they’d feel less safe sharing the road with the autos. And that reluctance is growing, according to a study by the Massachusetts Institute of Technology. Compared to a similar assessment last year, the MIT study found both a jump in the number of respondents saying they’d want to remain in control of their vehicles and a drop in those who’d be okay with handing off all driving responsibilities, especially in the critical 25-34 age group.
Most of that anxiety is likely rooted in the newness of the technology, but some of it may be wariness developed over the years.
Automakers have used their considerable clout to combat new safety features in the past, including seat belts and, later, airbags, arguing the additions were too costly. A history of slow responses to problems and botched recalls could also have left a bad impression, argued Jason Levine, executive director of the Center for Auto Safety, in a Washington Post opinion piece.
What’s more, new legislation on the subject could further reduce transparency and ramp up suspicions. A raft of bills recently passed by the House of Representatives deems driverless car crash data “confidential business information,” letting automakers off the hook from reporting incidents to the public. If approved by the Senate, the measures would also exempt 275,000 vehicles from certain safety rules and prevent states from drafting their own policies on the technology.
Cuteness in autonomous vehicles, then, has the potential to foster trust and serve as a powerful rebranding tool, letting the auto industry recast cars as family-oriented and safe rather than as the sexy, powerful machines that lend us their cool. (And if we really do see ourselves in the vehicles, such fast, eye-catching models would seem even more threatening.)
With a potential $7 trillion economy riding on the new transportation, the cars will assuredly be rolling our way soon. Is it time to embrace the inevitable?
The idea that technology should be adorable isn’t unique to Google. Others in the autonomous realm have spoken out on the subject, including Carol Reiley, president of Mountain View, California-based tech company Drive.ai.
“I think robots need to be adorable and loved,” she said recently to Wired. “And it’s hard to love something you can’t understand.”
Her firm and a number of others are currently working on ways to develop that understanding by improving communication between robocars and humans. But nurturing an emotional connection by stoking psychological instinct could help foster an even stronger relationship with the technology.
And if the vehicles are eventually able to think like humans, they too may develop differently in an environment of fear than in one of love.
Maybe by meeting the technology with open arms, we can help teach our AI about ai.