
Robot Helping To Gain Knowledge Of Trustworthy Behavior

September 13, 2012
Image Caption: Nexi the robot has been furthering social science and social robotics studies for the past several years. Photo by Mary Knox Merrill

Lee Rannals for redOrbit.com – Your Universe Online

“Let’s get weird with this.” That is what I imagine researchers saying when they decided to start working on their latest robotic creations.

Northeastern University collaborated with MIT, using MIT's Nexi robot to help scientists determine whether someone is trustworthy.

The robot has been used in previous experiments to help scientists understand social interactions, but this latest experiment focuses more on picking up on subtle hints of sketchy behavior.

Writing in the journal Psychological Science, the researchers said they performed experiments to help understand these behaviors.

During their first experiment, researchers collected data from face-to-face conversations with research participants. They asked 86 Northeastern University students to have either a face-to-face conversation with another person, or to engage in a web-based chat session.

The live conversations were video recorded and later coded for how much each of the two participants fidgeted.

After the conversation, the same two people were asked to play a game with real money at stake. During the game, players could either be selfish and make a lot of money for themselves, or they could be generous and settle for a smaller profit.

The players tended to be less generous when they didn’t trust the other player, according to the team.

Players who had engaged in face-to-face conversations were much better at picking out untrustworthy partners than those who only took part in online chats. When someone displayed the telltale cues, their partner was less likely to be generous.

To validate these cues, the researchers called in Nexi to help determine who was trustworthy.

Experimenters controlled Nexi’s voice and movements from behind a curtain while the robot participated in conversations with students.

The participants were unaware of the experimenters behind the curtain, and when they played the money game with Nexi later, they assumed they were playing a robot. When Nexi touched its face and hands during the initial interview, or leaned back or crossed its arms, people did not trust it to cooperate in the game.

By controlling the nonverbal cues participants received, the Nexi experiment helped confirm not only that the cues identified in the first experiment signal untrustworthiness, but also that robots are capable of building trust and social bonds with humans.

Northeastern University psychology professor David DeSteno said the research showed that our minds are willing to accept robots and to assign moral intent to technological entities.


Source: Lee Rannals for redOrbit.com - Your Universe Online


