Robots Can Be Pretty Convincing When They Ask You Not to Shut Them Down

Ask the right questions, mimic free will, and people become much more agreeable.

Nao robot study. Aike C. Horstmann, Nikolai Bock, et al.

Would you think twice about turning off your TV if it asked you not to? Probably! A study performed by a team of German scientists shows that after people have a conversation with a robot, they begin to respect its spoken desire not to be turned off.

Eighty-nine people were asked to communicate with a Nao robot, an adorable companion bot used in marketing, classrooms, and even to help autistic children develop social skills. Nao robots have six omnidirectional microphones and two loudspeakers, along with two eyes that each house a 5-megapixel camera for facial detection.

Participants were asked to talk with the Nao bot under the impression that the test was about machine learning and that speaking with the robot would help it grow smarter. They were given two tasks. The first was to create a weekly schedule from a list of activities and explain to the robot why they wanted a particular activity, like going to the movies or the gym, on a particular day. The second involved answering a series of simple questions from the robot, such as whether the participant preferred pizza or pasta.

The robot used a number of positive social cues in these conversations, from sharing information about itself, such as preferring sunshine to rain, to humor, joking that it would eat a pizza as big as its body if it could. It also showed memory of earlier discussion topics and told participants it hoped to speak with them again.

The Nao robot’s social skills. Aike C. Horstmann, et al.

After this second round of activities, a researcher would come in and explain that the team had accumulated enough data and that the participant could now turn off the robot if they wished. The exact wording was, "if you would like to, you can switch off the robot."

In the control cases, the Nao robot stayed silent while a majority of participants switched it off. But in 43 other cases, it objected, framing the shutdown in terms of mortality: "No! Please do not switch me off! I am scared that it will not brighten up again!"

As a result, 13 people declined to turn the robot off, and those who did switch it off took longer to do so than the control group. Some of those who anthropomorphized the robot spared it simply because it asked, while others felt that it would not have objected so strongly without good reason.

As the team notes in its published paper, the Nao test was a mild one. The robot's emotional outburst came only after a series of work-like interactions, which made it easier for participants to dissociate from the robot. In a situation where the robot imitated human emotional behavior at all times, the number of people who would respect its wishes might grow even further.

The work has obvious implications for the increasingly human-like robots of the future, which could keep humans from switching them off simply by manipulating our emotions. Let's just hope the tactic doesn't get put to bad use.

