Why Do Human-like Robots Look So Creepy?
Researchers have peered into the “uncanny valley” of the human brain for the first time.
The “uncanny valley” refers to an artificial agent’s drop in likeability when it becomes too humanlike.
The international team of researchers said they used fMRI scans to determine that the effect may stem from a perceptual mismatch between an agent’s appearance and its motion.
The team noted that many viewers find the characters in the animated film “Polar Express” give them the creeps. The android used in the study also fell into the uncanny valley category.
Ayse Pinar Saygin of the University of California, San Diego and colleagues set out to identify “the functional properties of brain systems that allow us to understand others’ body movements and actions.”
The researchers tested 20 subjects between the ages of 20 and 36 who had no experience working with robots.
The participants were shown 12 videos of Repliee Q2, a Japanese android, performing ordinary tasks such as waving, nodding, taking a drink of water and picking up a piece of paper from a table. They also watched videos of the same actions performed by the human on whom the android was modeled, as well as by a stripped-down version of the android with its mechanical parts exposed.
The researchers recorded fMRI data from the subjects as they watched the videos.
According to the researchers, the brain “lit up” when the human-like appearance of the android and its robotic motion “didn’t compute.”
“The brain doesn’t seem tuned to care about either biological appearance or biological motion per se,” Saygin, an assistant professor of cognitive science at UC San Diego and alumna of the same department, said in a statement. “What it seems to be doing is looking for its expectations to be met, for appearance and motion to be congruent.”
The researchers say that if an agent looked like a human and moved like a human, the subjects were fine with it; likewise if it looked like a robot and moved like a robot. Our brains, the team said, have no difficulty processing either case.
The trouble arose when appearance and motion did not match.
“As human-like artificial agents become more commonplace, perhaps our perceptual systems will be re-tuned to accommodate these new social partners,” the researchers write in the journal Social Cognitive and Affective Neuroscience. “Or perhaps, we will decide it is not a good idea to make them so closely in our image after all.”
Saygin said it is “not so crazy to suggest we brain-test-drive robots or animated characters before spending millions of dollars on their development.”
Image 2: Brain response to videos of a robot, android and human. The researchers say they see, in the android condition, evidence of a mismatch between the human-like appearance of the android and its robotic motion. Credit: Courtesy Ayse Saygin, UC San Diego