Recently, Google announced its “Pretty Please” feature for Google Assistant. The feature rewards kids’ use of polite language by saying, “Thanks for saying please,” “Thanks for asking so nicely” and “You’re very polite.” TechCrunch reports that the feature is meant to address parents’ growing concern that kids are learning to treat “the virtual assistants in smart speakers rudely, which would translate into their interactions with people.”
Mounting evidence suggests that kids learn to imitate their parents’ frustrated and rude interactions with “sociable tech,” such as Google Assistant and Amazon’s Alexa. And there’s also plenty of proof that kids need no role models to develop effortlessly imperious relationships with tech all on their own. As one parent confessed to SFGate, “I’ve had to warn my daughter repeatedly to speak to OK Google respectfully. She has a bad habit of talking over her, especially when she suspects OK is on the wrong track. And it’s carried over to her behavior at home. She’s been interrupting us more than usual.”
The topic of “etiquette and technology” might strike us as radically new outside of the “How rude!” protestations of our robotic sci-fi sidekicks. But studies into how humans relate to machines and media in situations that require social(ish) interactions have been ongoing for years. And it turns out that the ways we treat machines and the ways we treat each other may be inextricably intertwingled.
In 1996, Clifford Nass and Byron Reeves, communications researchers at Stanford University, published The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. They reran 35 classic social psychology experiments, replacing the people that test subjects would normally interact with in those studies with computers and media. And what they found was remarkable. People "relate" to tech in the only way they know how – they treat social encounters with technology the way they treat their encounters with people.
Reeves and Nass conclude, "Our brain evolved in an environment where only humans and other living creatures behaved socially. Everything that seemed to be a social actor with feelings was one. Therefore, we respond automatically and spontaneously to social cues."
For example, if a computer asks users to evaluate it after it performs a task, users will evaluate it more positively (and less honestly) than if another computer asks, "Hey, how did that other computer do?" In other words, we're polite to machines when giving them direct feedback, so as not to hurt their "feelings," just as we are with people. If we're going to talk trash, we'll do it behind their backs – or, rather, their screens.
That’s timeless human nature. Our deep social plumbing is applied to strange new technological environments that increasingly simulate social interactions. Google seems to get this. In fact, Google seems to get two separate and contradictory things.
First, their politeness initiative for kids seems to acknowledge that kids are born into a world of technology with social-psychological wetware optimized for the human Serengeti of 150,000 years ago. Google understands that teaching “etiquette” through technology is not all that crazy, because not doing so is a form of social education that may actually be encouraging incivility.
But Google also seems to know something else. Something about how pleasurable it can be to get technology to do our bidding. And that’s revealed in how Google markets their Assistant.
Google’s marketing campaign explicitly personifies Google Assistant as something you can and should compel to act. Don’t just cause it, let it or ask it. Make it. Transitive verb applied to an entity with implied agency. As in “force, coerce, press, drive, pressure, oblige or require” (as per Google’s dictionary definition). It’s your right. You’re entitled. It won’t object. It exists to comply. To serve.
This is not just an “etiquette-free” sentiment. It’s gleefully authoritarian. Just look at the use case examples Google gives in its wonderfully acted and produced celebrity Oscar spot. They include long lists of chores, unpleasant tasks and small things to remember or do that you can’t or don’t want to do yourself.
Throughout, it’s clear that the sense of control Google promises the user is juxtaposed against the absolute lack of agency of its Assistant. What makes it desirable is that it can’t say no. A semi-social thing you never have to say “please” to. By definition.
What if this contradiction is not a bug? What if this is the defining (uncanny, creepy) feature of “sociable” technology? What if the more we make things with no agency human-like, the more we teach our kids that some human-like things (i.e., humans) can also have no agency and therefore deserve no respect? (Please read that again. I’ll gladly wait. Take all the time you need. Thanks in advance.)
Plus: Read part 2 of Ted Florea’s Human Etiquette discussion here.