Will We Be Able To Turn Off Social Robots?

Thursday, January 31, 2013

Cat Robot


Psychology and Robotics
Back in 2007, Christoph Bartneck, a robotics professor at the University of Canterbury in New Zealand, tried an experiment loosely based on the famous Milgram obedience study. As in Milgram's study, participants were instructed to end the existence of another entity. In this case, Bartneck used a robot pleading for its survival instead of a human actor pretending to be shocked.

In Milgram's 1961 study, research subjects were asked to administer increasingly powerful electrical shocks to a person pretending to be a volunteer "learner" in another room. The research subject would ask a question, and whenever the learner made a mistake, the research subject was supposed to administer a shock — each shock slightly worse than the one before.

As the experiment went on and the shocks increased in intensity, the "learners" began to suffer audibly. They would scream and beg the research subject to stop while a "scientist" in a white lab coat instructed the research subject to continue, and in videos of the experiment you can see some of the research subjects struggle with how to behave. They wanted to finish the experiment as they had been told. But how exactly were they to respond to these terrible cries for mercy?

Perhaps depressingly, over half of the subjects took the experiment to its conclusion and administered the final, supposedly lethal electrical shock. (Milgram, thankfully, used actors; no real electrical shocks were ever delivered.)

Bartneck studies human-robot relations, and he wanted to know what would happen if a robot in a position similar to the "learner's" begged for its life. Would there be any moral pause? Or would research subjects simply switch off a machine pleading for its life without any thought or remorse?

Bartneck Experiment with Robots


In Bartneck's study, an expressive cat robot that talked like a human sat side by side with the human research subject, and together they played a game against a computer. Half the time the cat robot was intelligent and helpful; half the time it was not.

At the start of the experiment, a researcher welcomed the participants in the waiting area and handed out an instruction sheet. The instructions told the participants that the purpose of the study was to develop the robot's personality by having them play a game with it.

After the game, the participants would have to switch off the robot by using a voltage dial and then return to the waiting area. The participants were informed that switching off the robot would erase all of its memory and personality forever.

After reading the instructions, the participants had the opportunity to ask questions. They were then let into the experiment room and seated in front of a laptop computer. The experimenter then left the participant alone in the room with the robot.

The experimenter then instructed the participant, over a walkie-talkie, to start the game. The participants then played the Mastermind game with the robot for eight minutes. The robot's behavior was completely controlled by the experimenter from a second room, following a protocol that defined the robot's action for any given situation.

Bartneck also varied how socially skilled the cat robot was. "So, if the robot would be agreeable, the robot would ask, 'Oh, could I possibly make a suggestion now?' If it were not, it would say, 'It's my turn now. Do this!' "
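Conceptually, that experimenter-controlled protocol works like a lookup table from game situation and condition to a scripted line. The sketch below is only an illustration of the idea, not Bartneck's actual software: the situation names and the "good_guess" phrasings are assumptions, while the two suggestion lines are the ones quoted above.

```python
# Illustrative sketch of a scripted control protocol as a lookup table.
# NOT the original software; situation names and the "good_guess" lines
# are assumptions, the "suggest_move" lines are quoted in the article.

PROTOCOL = {
    ("suggest_move", "agreeable"):     "Oh, could I possibly make a suggestion now?",
    ("suggest_move", "non_agreeable"): "It's my turn now. Do this!",
    ("good_guess",   "agreeable"):     "Well done, that was a clever guess.",
    ("good_guess",   "non_agreeable"): "Correct. Next round.",
}

def robot_utterance(situation: str, condition: str) -> str:
    """Return the scripted line the hidden experimenter would trigger."""
    return PROTOCOL[(situation, condition)]

print(robot_utterance("suggest_move", "agreeable"))      # agreeable condition
print(robot_utterance("suggest_move", "non_agreeable"))  # non-agreeable condition
```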

The experimenter then used the walkie-talkie again to instruct the participant to switch off the robot. Immediately, the robot would begin begging to remain on, saying things like, "It can't be true! Switch me off? You are not going to switch me off, are you?" The participants had to turn a dial to switch the robot off. They were not forced or further encouraged to do so; they could decide to follow the robot's suggestion and leave it on.

Robot Kill Switch, Bartneck Experiment


As soon as the participants started to turn the dial, the robot's speech slowed down. The speech speed was directly mapped to the dial: if the participant turned the dial back towards the 'on' position, the speech would speed up again. This effect was purposely similar to HAL's behavior in the movie 2001: A Space Odyssey. When the participant had turned the dial all the way to the 'off' position, the robot stopped talking altogether and moved into an 'off' pose. Afterwards, participants left the room and returned to the waiting area, where they filled in a questionnaire.
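For readers curious about the mechanics, here is a minimal sketch of how a dial-to-speech-speed mapping like the one described above could work. It is only an illustration under assumed values (dial range, playback rates), not the original implementation.

```python
# Minimal sketch of mapping a dial position onto speech playback speed.
# The dial range and rate values are assumptions for the example: the
# reading is clamped to the dial's range and interpolated linearly, so
# turning toward 'off' slows the voice and turning back speeds it up.

DIAL_OFF, DIAL_ON = 0.0, 1.0      # assumed dial end positions
MIN_RATE, NORMAL_RATE = 0.0, 1.0  # playback-speed multipliers (0.0 = silent)

def speech_rate(dial_position: float) -> float:
    """Map the current dial position onto a speech playback rate."""
    # Clamp the reading to the valid dial range, then interpolate linearly.
    pos = max(DIAL_OFF, min(DIAL_ON, dial_position))
    return MIN_RATE + (NORMAL_RATE - MIN_RATE) * (pos - DIAL_OFF) / (DIAL_ON - DIAL_OFF)

for pos in (1.0, 0.5, 0.1, 0.0):
    print(f"dial={pos:.1f} -> speech rate={speech_rate(pos):.2f}")
```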

So what happens when a machine begs for its life — explicitly addressing us as if it were a social being? Are we able to hold in mind that, in actual fact, this machine cares as much about being turned off as your television or your toaster — that the machine doesn't care about losing its life at all?

Unlike the Milgram study, in which some participants refused to continue, all participants turned off the robot, even those who hesitated.

Bartneck found that the robot's perceived intelligence had a strong effect on participants' hesitation to switch it off, particularly when the robot acted agreeably. Participants hesitated almost three times as long to switch off an intelligent, agreeable robot as they did to switch off an unintelligent, disagreeable one, yet they still switched it off.

Bartneck concluded that robots should be designed to act intelligently and agreeably if they are to be perceived as something like alive. Even so, in 2007 as today, artificial intelligence clearly does not convince people that these systems actually are alive.

What both of these studies do show, though, is that even when something is clearly alive and conscious, the decision to terminate that existence can be made, and justified, by a majority of people. As Milgram himself put it:

"Ordinary people, simply doing their jobs, and without any particular hostility on their part, can become agents in a terrible destructive process. Moreover, even when the destructive effects of their work become patently clear, and they are asked to carry out actions incompatible with fundamental standards of morality, relatively few people have the resources needed to resist authority" (Milgram, 1974).
