Ain’t No Mountain High Enough, Ain’t No Uncanny Valley Low
February 12, 2017
Looking at the images above, how do they make you feel? Disturbed? Uncomfortable? Maybe even scared? The four pictures shown are all humanoid, with a face, mouth, eyes, and every feature we as humans have. Yet each photograph makes the viewer feel that something is off. They look human, but something about them is not entirely organic. The second photo shows the most realistic-looking human of the four, right? Well, head on over to this site and see if you feel the same: http://www.cubo.cc/creepygirl/. Her head moves with the direction of your mouse, and she also changes facial expressions. Though she looks like any other human being in a still picture, her facial movements are far from perfectly human. This leads us down into the uncanny valley.
The term “uncanny valley” comes from roboticist Masahiro Mori. In 1970, Mori hypothesized that there comes a certain point at which a human-like object becomes strange due to its lack of familiarity. In his translated words, “[T]he appearance is quite human like, but the familiarity is negative” (Pg. 2). This leads humans to feel unease at the unfamiliarity produced by these creations. Thus, Mori had a message for designers: “I recommend designers take the first peak as the goal in building robots rather than the second. Although the second peak is higher, there is a far greater risk of falling into the uncanny valley” (Pg. 3). He implored designers to take this hypothesis into consideration and not be so ambitious as to design a robot that is as human as possible, for missing that mark by even a small margin will lead to the discomfort of the uncanny valley.
Scientists have been wondering why this phenomenon happens. One reason may have to do with perception of experiences. Can you trust your eyes? Take a look at this video and see just how many of these mind tricks you can out-see: https://youtu.be/0NPH_udOOek.
Now, this may seem unrelated to the uncanny valley. However, there is a crucial point in the video: “Our brains and eyes have evolved to see, but our vision makes assumptions based on learning, memory, and expectation…” (2:10-2:14). Through our everyday experiences, we have come to expect what a real human looks, acts, and feels like. We hold a set of criteria for what makes humans human, built from the information we receive day to day. So when something crosses our eyes that doesn’t meet our expectations of what it’s supposed to look like, it leads us into the uncanny valley.
Ayse Pinar Saygin, Thierry Chaminade, Hiroshi Ishiguro, Jon Driver, and Chris Frith conducted a study on people’s Action Perception System (APS), which includes the temporal, parietal, and frontal areas of the brain, and examined fMRI (functional magnetic resonance imaging) repetition suppression. Repetition suppression is a phenomenon in which there is a “reduced neural response to a repeated stimulus” (Pg. 415); thus, “[P]ositive suppression means there [is] less response to repeated stimuli” (Pg. 416). The researchers hypothesized that “the uncanny valley may, at least partially, be caused by the violation of the brain’s predictions…” (Pg. 414). We assume a certain object will move a certain way because that is what our life experiences have taught us. When our APS sees that the movement is not what we expected, a type of “error” occurs (Pg. 415).
The study had 20 participants react to 3 different agents (though one participant’s data was excluded from the final results): a robot with robot-like movements (A), an android with robot-like movements (B), and a human with human-like movements (C). A and B were the same agent dressed differently: A was made to look like a robot, with the mechanics of its face and body revealed, while B was dressed with human-like qualities such as skin, hair, and clothes. This robot/android was called Repliee Q2 and was modeled after the human in agent C (Pg. 415). The participants were monitored with fMRI to see how their brains would react to videos of these 3 agents performing transitive (picking up, grasping, wiping, etc.) and intransitive (waving, bowing, nodding, etc.) motions (Pg. 416).
The results showed that the APS reactions to A and C were quite similar. However, there was a clear difference between these two and B: the participants’ brain scans for agent B showed much more repetition suppression. Though this experiment was not designed to explain the uncanny valley, it suggests there may be a connection between the brain’s APS and the phenomenon of the uncanny valley. Because agent B, the android that looks like a human, did not move the way participants expected it to, a prediction error occurred in their APS (Pg. 420).
This shows there is a kind of discomfort when we are introduced to things that look familiar yet do not behave as expected, the very discomfort Mori hypothesized. The objects, motions, and scenery we see every day are the norm for us. But what does the uncanny valley imply for our daily lives? Can you really trust your eyes? Perhaps there will come a time when we can make androids that look so real we cannot see any difference, and we will finally cross the uncanny valley. But is that a good thing? What does it say about humans and humanity? Is humanity something that can be created and imitated, even if it is not organic?