Politeness in Virtual Assistant Design
The wave of chatbots and virtual assistants like Cortana, Siri, and Alexa means that we’re engaging in conversations with non-humans more than ever before. Problem is, those non-human conversations can turn inhuman when it comes to social norms.
Interactions with virtual assistants aren’t totally devoid of human involvement. Indeed, they often disguise a truly human exchange. Many chatbots aren’t fully automated and rely on humans to pick up the slack from the code. More fully constructed virtual assistants, like the ones in Amazon’s Echo or your Apple iPhone, are carefully programmed by humans. The programming choices those humans make define your interactions with these personalities, and those interactions can in turn redefine how you treat people.
A clear indication that someone is truly polite and kind is how they treat service people: with respect, patience, and kindness. The rise of chatbots and virtual assistants, however, means that you’re never quite sure whether you’re speaking to a human. You might think that people can easily tell when they’re interacting with a human and when they’re interacting with a voice inside a smart box, but as the technology behind virtual assistants like Google Assistant, Amazon Alexa, and the systems used by call centers evolves, that distinction will get harder to make. (Even when you’re calling a call center, it can be hard to tell whether you’ve reached a well-programmed intake bot or a real person who’s fully in the groove of their phone voice.)
I find it fascinating (and saddening) that the programmers of Google Assistant’s Duplex chose to program in “umms” and “mmhmms” but did not program in any kindness indicators. Instead, the voices come across as impatient and slightly condescending. I listened to the sample clips linked by Ethan Marcotte in his post Kumiho, about Google Duplex. If virtual assistants don’t include programmed kindness, the emotional labor performed by service workers will continue to be too high.
Programming virtual assistants to express kindness is important, but so is programming them to expect kindness. We’re starting to be conditioned to treat chatbots as recipients of code-like commands that require a specific set of inputs, and those inputs make no allowance for politeness.
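To make that concrete, here’s a minimal sketch in Python of what it might look like for an assistant to notice thanks in a request instead of discarding it as noise. Everything here is hypothetical; no real assistant exposes this interface.

    # Purely illustrative sketch, not any real assistant's API: instead of
    # treating "thank you" as noise outside the command grammar, detect it
    # and acknowledge it in the reply.

    THANKS_MARKERS = ("thank you", "thanks")

    def reply_to(utterance: str, answer: str) -> str:
        """Return the assistant's reply, acknowledging thanks when present."""
        lowered = utterance.lower()
        if any(marker in lowered for marker in THANKS_MARKERS):
            # Reward the pleasantry rather than ignoring it.
            return f"{answer} You're very welcome."
        return answer

    if __name__ == "__main__":
        print(reply_to("Thanks! What's the weather today?", "It's sunny and 72."))
        print(reply_to("Weather.", "It's sunny and 72."))

The point isn’t the string matching, which is trivial; it’s that the response model treats the pleasantry as part of the conversation rather than as input to be stripped away.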
It may seem overly prescriptive, but in the same way that parents withhold items from their children until they “ask for it nicely”, it might be practical to include a “politeness mode” in virtual assistants (a rough sketch of what that could look like follows the quote below). Hunter Walk wrote about how Amazon Alexa interactions are affecting his child, and Ben Hammersley blogged about the fact that there is no reward for politeness when he interacts with Amazon Alexa:
But there’s the rub. Alexa doesn’t acknowledge my thanks. There’s no banter, no trill of mutual appreciation, no silly little, “it is you who must be thanked” line. She just sits there sullenly, silently, ignoring my pleasantries. And this is starting to feel weird, and makes me wonder if there’s an uncanny valley for politeness. Not one based on listening comprehension, or natural language parsing, but one based on the little rituals of social interaction. If I ask a person, say, what the weather is going to be, and they answer, I thank them, and they reply back to that thanks, and we part happy. If I ask Alexa what the weather is, and thank her, she ignores my thanks. I feel, insanely but even so, snubbed. Or worse, that I’ve snubbed her.

It’s the computing equivalent of being rude to waitresses. We shouldn’t allow it, and certainly not by lack of design. Worries about toddler screen time are nothing, compared to future worries about not inadvertently teaching your child to be rude to robots.
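Here is the “politeness mode” sketched in the same hypothetical style as above: an opt-in setting, perhaps controlled by a parent, that gently prompts for the magic word before answering. This is an illustration of the idea, not a real Alexa or Google Assistant feature.

    # Rough sketch of an opt-in "politeness mode" for a hypothetical assistant.
    # Nothing here reflects a real Alexa or Google Assistant setting.

    def handle_request(utterance: str, answer: str, politeness_mode: bool = False) -> str:
        lowered = utterance.lower()
        if politeness_mode and "please" not in lowered:
            # Prompt for the "magic word" instead of answering outright.
            return "Could you ask me that nicely?"
        if "thank" in lowered:
            # Acknowledge the pleasantry in the reply.
            return f"{answer} You're welcome!"
        return answer

    if __name__ == "__main__":
        print(handle_request("What's the weather?", "Sunny and 72.", politeness_mode=True))
        print(handle_request("What's the weather, please?", "Sunny and 72.", politeness_mode=True))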
As virtual assistants become more common in day-to-day interactions, if they don’t account for politeness, we might become a less kind society. Not only that, but impolite virtual assistants will add to the emotional labor performed by the service workers whose jobs aren’t replaced by technology.