Google’s smart car looks a bit like a marshmallow. It’s round, puffy and aggressively unaerodynamic. The lights in the front and back look like slightly crossed eyes, giving the whole car an air of comic buffoonery, despite the deeply complex hardware and algorithms that dictate its motion and steering.
The car may be a bit embarrassing to climb into, but the poofy-cute look might be one of the best tools for keeping you safe on the road–a look specifically designed to charm and pacify other drivers as they navigate around the smart car’s slow speeds and sluggish reaction times.
As it turns out, the more adorable the vehicle, the fewer accidents it gets into with other drivers.
This leveraging of our penchant for the cute and lifelike within everyday technology is hardly new.
Humans anthropomorphize practically everything they can project a face onto–even images as simple as an electrical socket have been shown to elicit positive reactions from viewers.
But use of the power of squee in robotics is relatively new, and has serious implications for how the future of human-computer interaction can play out.
On the one hand, it makes the use of pervasive robotics much easier to pitch to the consumer public. Nobody wants a Terminator vacuuming their house after they’ve left for work, but a puppy-sized Roomba making happy chirping noises and clumsily bumping around is a much easier sell.
And in more advanced applications, using the power of cute when designing a human-facing system can overcome some significant psychological hurdles. Simple additions such as rounded physical shapes, pervasive use of faces, voice responses and spontaneous actions are all ways to make a machine seem lifelike, and in doing so, change our expectations and behaviors while using it.
Anthropomorphized robots are easier to trust, harder to get frustrated with and more likely to be regarded well in hindsight. When put through a simulation of crashing in a self-driving car, users were more likely to rate their experience positively when it was narrated by a robotic voice than when they were driven silently, even though both groups experienced the same virtual accident.
Relating to the user can help, too. IT researchers have discovered that if a piece of software self-deprecates on occasion, or takes the user’s side, it can help offset user anger at software bugs or technical failures.
When Clippy the Microsoft Office Assistant was programmed to politely remind you to submit an error report, users couldn’t stand him; but when researchers programmed the same icon to chew his own company out a bit (“Let’s tell Microsoft how bad their help system is!”), user reactions soared.
Spontaneity is surprisingly critical to these interactions–it’s only when we feel we can’t fully predict what a robot or program will do next that our brains start to respond as if it were a real person. But making robots seem human can come with risks as well.
Giving a machine too broad a range of social options can backfire: users begin to expect more humanity than it is capable of, and grow frustrated by its inevitable limitations. Over-empathizing with an unthinking system can cause real problems with replacements and repairs as well, since users often keep anthropomorphized technology, out of sentiment alone, long after it becomes broken or obsolete.
Like it or not, we can build real bonds with our lifelike little robots, and the idea of tossing one for a new model could be heartbreaking for some users–not exactly the attitude a company wants when trying to launch its new product line. It’s a philosophy that requires restraint: leveraging just enough affection to reap the benefits, while leaving enough emotional breathing room for the user to feel comfortable discarding an old system.
So, while the dawn of cute robots might not sound all that threatening or unpleasant, at least try to understand what you’re getting into. A system’s babylike features and chirping voice might be good for improving your mood, but keep your sympathy in check–no matter how cute and cuddly, it’s still just lights and clockwork underneath.
Copeland is a member of the class of 2015.