Trash talk has long been an effective (but underhanded) tool when it comes to sports, gaming, and other competitive endeavours. But it’s been assumed that it’s a strategy that only works between humans who can deliver remarks with emotional weight. It turns out that’s not the case, as researchers from Carnegie Mellon University discovered after programming a docile robot to trash talk a human opponent.
The robot in question was one of Softbank’s Pepper automatons, one of the few widely deployed robots that interact directly with humans, answering questions in museums or directing travellers around airports. It’s about the least intimidating robot you can imagine, and its trash talk, which included phrases like “I have to say you are a terrible player” and “Over the course of the game your playing has become confused,” isn’t exactly the kind of thing that will fuel a barroom brawl.
But when Pepper was pitted against 40 study participants, technologically savvy people who knew that a pre-programmed, emotionless robot was doling out the insults, those subjected to the bot’s taunts (as opposed to statements of encouragement) still scored worse and improved less over 35 rounds of a game played against the robot.
Furthermore, the study, which was presented at the IEEE International Conference on Robot & Human Interactive Communication in New Delhi, India, last month, found that the trash-talking didn’t necessarily have to come from a robot as sophisticated as Pepper. Even a non-humanoid device, such as a computer, could affect a person’s behaviour through negative feedback, which is the most concerning finding here.
The study sheds important light on just how influential robots could one day be, but even the AI-powered voice assistants we all rely on now can affect human decision-making and mental health. On the plus side, machines could be genuinely useful for improving someone’s mental health through how they intelligently respond to comments or questions, and robots could eventually serve as beneficial companions for those who simply don’t want to be alone, delivering effective words of encouragement.
However, if a personal assistant or automated AI is working towards goals that run contrary to a human’s best interests, there’s the potential for the conclusions of this research to be abused. Imagine a store’s interactive shopping assistant programmed to steer shoppers towards pricier items by making them feel inferior, or by preying on their insecurities about choosing cheaper options. Eventually, how a robot or an AI responds to a human (the emotions it uses, the specific phrasing of a response) could matter even more than the accuracy of its comprehension and feedback.