How are faces recognized? Which features are important? Does culture play a role?

We have recorded a large 3D database of Korean faces and used it to construct a morphable model with which we can precisely manipulate faces (Shin et al., 2012). Since the database is compatible with the MPI Face Database (Troje et al., 1996; faces.kyb.tuebingen.mpg.de), we can easily create cross-cultural morphs. Studies on the cross-cultural perception of faces (such as the other-race effect) are currently underway (Lee et al., 2011).
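To give a flavor of how such a morphable model can be used, the sketch below shows cross-cultural morphing as a linear blend in the model's coefficient space. It is a minimal illustration with placeholder data, not the actual model code; all names and dimensions are assumptions.

```python
import numpy as np

# Minimal sketch of morphing in a morphable-model coefficient space.
# A face is assumed to be encoded as a coefficient vector over the
# model's principal components (shape and texture stacked together).

def reconstruct(mean_face, components, coeffs):
    """Map model coefficients back to a dense shape/texture vector."""
    return mean_face + components @ coeffs

def cross_cultural_morph(coeffs_a, coeffs_b, alpha):
    """Blend two faces; alpha=0 gives face A, alpha=1 gives face B."""
    return (1.0 - alpha) * coeffs_a + alpha * coeffs_b

# Example: a 50-50 morph between two exemplars
# (random placeholders stand in for real model data).
rng = np.random.default_rng(0)
n_vertices, n_components = 5000, 100
mean_face = rng.normal(size=3 * n_vertices)
components = rng.normal(size=(3 * n_vertices, n_components))
face_korean = rng.normal(size=n_components)
face_caucasian = rng.normal(size=n_components)

morph = reconstruct(mean_face, components,
                    cross_cultural_morph(face_korean, face_caucasian, 0.5))
```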

Do monkeys and humans process faces similarly?

In addition, we are interested in comparing face perception across species. Two of our recent studies have shown that monkeys and humans look at faces in a very similar way and that they are even “fooled” by the same perceptual illusion (Dahl et al., 2009; 2010)!

Shown on the left are a human face and a monkey face. If you look at the human face at the top right, you will immediately notice that it looks grotesque because its eyes and mouth have been inverted within the face. Turn this face upside-down, however, and the grotesqueness disappears. This is the well-known Thatcher illusion. If you look at the monkey faces, chances are that all of them will look equally “normal” to you. This is because humans are not experts for monkey faces. If you present these faces to monkeys, however, their gaze is immediately drawn to the grotesque monkey face, as something about it is not right. Similarly, they are no more interested in the manipulated human face than in its “normal” version. We showed for the first time that monkeys and humans indeed fall prey to the exact same illusion.
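For readers curious how such stimuli are made, the following sketch illustrates a generic “Thatcherization”: the eye and mouth regions are rotated by 180 degrees inside an otherwise upright face. The region coordinates and file names are placeholders, not those used in our studies.

```python
from PIL import Image

def thatcherize(face, regions):
    """Rotate the given rectangular regions (eyes, mouth) by 180 degrees,
    leaving the rest of the face upright."""
    out = face.copy()
    for left, upper, right, lower in regions:
        patch = out.crop((left, upper, right, lower)).rotate(180)
        out.paste(patch, (left, upper))
    return out

# Placeholder coordinates; real ones would come from landmark annotation.
eye_and_mouth_boxes = [(60, 90, 110, 120),   # left eye
                       (130, 90, 180, 120),  # right eye
                       (85, 160, 155, 200)]  # mouth

face = Image.open("face.jpg")
grotesque = thatcherize(face, eye_and_mouth_boxes)
# Upside-down, the manipulation is much harder to notice (the illusion).
grotesque.rotate(180).save("thatcherized_inverted.jpg")
```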

To what degree can a face space be represented by touch?

The concept of the “face space” is highly influential in psychology. It posits that faces are represented in a vector space in which, for example, the average face sits at the coordinate origin and distinctive, atypical faces lie in the periphery. This concept can explain many facets of face perception.
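As a toy illustration of this idea, the snippet below treats each face as a point in a feature space, puts the average face at the origin, and measures distinctiveness as the distance from that origin; the data are random placeholders.

```python
import numpy as np

# Toy face space: each row is a face described by some feature vector
# (e.g., landmark coordinates or morphable-model coefficients).
rng = np.random.default_rng(1)
faces = rng.normal(size=(200, 40))

# Centering puts the average face at the coordinate origin.
average_face = faces.mean(axis=0)
centered = faces - average_face

# Distinctiveness = distance from the origin; atypical faces lie far
# out in the periphery of the space.
distinctiveness = np.linalg.norm(centered, axis=1)
most_typical = int(np.argmin(distinctiveness))
most_distinctive = int(np.argmax(distinctiveness))
```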

Here, we went one step further: since our other studies have shown that shape knowledge is shared very well across modalities (vision and touch), we wanted to know whether touch could recreate a face space as well as vision does (Wallraven, 2014).

Using computer graphics and a morphable model (see above), we created a structured face space and then tested participants in a standard similarity rating task using either vision or touch (see picture on the left for a participant exploring a face mask in the touch condition). We found that although there were some similarities between the visual and touch-based “face spaces”, there were critical differences in the detailed topology of the space. This may indicate that faces are actually special to vision (see also our study on blind participants trying to recognize faces from touch; Wallraven and Dopjans, 2013).
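One common way to compare such spaces (sketched below with toy data, and not necessarily the exact analysis of Wallraven, 2014) is to embed the pairwise similarity ratings with multidimensional scaling and then align the resulting visual and haptic configurations with a Procrustes fit.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial import procrustes

# Pairwise dissimilarity matrices from a rating task (toy data here):
# higher values mean two faces were judged as less similar.
rng = np.random.default_rng(2)
n_faces = 12

def random_dissimilarity():
    d = rng.random((n_faces, n_faces))
    d = (d + d.T) / 2
    np.fill_diagonal(d, 0.0)
    return d

visual_d = random_dissimilarity()
haptic_d = random_dissimilarity()

# Recover a 2-D "face space" for each modality.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
visual_space = mds.fit_transform(visual_d)
haptic_space = mds.fit_transform(haptic_d)

# Procrustes alignment: a low disparity would mean the two spaces share
# the same topology up to rotation, translation, and scaling.
_, _, disparity = procrustes(visual_space, haptic_space)
print(f"visual vs. touch space disparity: {disparity:.3f}")
```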

How do we process emotional and conversational facial expressions? Are Koreans and Germans similarly expressive? Can we teach the computer to recognize expressions?

As the space of facial expressions is very large, we need to create good expression databases. Currently, we are using a German database for both perceptual and computational experiments (Kaulard et al., 2012); recording of a Korean database is underway. Results show that humans are extremely good at interpreting facial expressions – computers still have a long way to go.
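As a rough illustration of what “teaching the computer” involves, the sketch below trains a simple classifier on placeholder landmark features; it is a generic baseline, not our actual recognition pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Toy stand-in for an expression database: each sample is a vector of
# facial-landmark features, each label one of several expression classes.
rng = np.random.default_rng(3)
features = rng.normal(size=(600, 68 * 2))   # e.g., 68 (x, y) landmarks
labels = rng.integers(0, 6, size=600)       # e.g., 6 expression categories

x_train, x_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0)

# A simple support-vector baseline; humans still outperform such models
# by a wide margin on subtle conversational expressions.
clf = SVC(kernel="rbf", C=1.0).fit(x_train, y_train)
print("baseline accuracy:", accuracy_score(y_test, clf.predict(x_test)))
```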

In the Cognitive Systems lab, we have recorded a database of Korean conversational expressions. The database is modeled after the MPI database of Facial Expressions and will contain more than 50 expressions at two levels of intensity. With both databases, we will be conducting experiments in Korea and in Germany to investigate within-cultural and cross-cultural processing of facial expressions in unprecedented detail.

In collaboration with the University of Cardiff, funded by the Research Institute of Visual Computing, we are currently recording a database of natural conversations with the specific aim of investigating the facial movements that accompany them.

For this, we recorded more than 60 conversations between pairs of participants, each lasting about 5 minutes. The recordings were captured with standard video cameras as well as two state-of-the-art 4D scanners. We are currently post-processing the data and have released parts of the database to the general public (Aubrey et al., 2013).

How automatic is processing of emotional facial expressions? Is this different for empathic and non-empathic people?

In our experiment, we show that empathic people cannot help but process emotional faces, as these capture their attention.

To demonstrate this, we use a large sample of 100 participants who undergo an attention test. Importantly, we include a control condition in which the attention task shows not only another person's emotional face but also the participant's own emotional face. Since empathy is by definition directed towards other people, we hypothesized that the attentional effect should not occur for one's own face, which is exactly what we found. We were also able to link the amount of attentional capture to everyday prosocial behavior as measured by post-experiment online surveys, demonstrating a connection between empathy, attentional processes, and helping other people.
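A minimal sketch of this kind of analysis is shown below: an attentional-capture index per participant (the reaction-time cost induced by another person's emotional face relative to one's own) is correlated with a prosocial-behavior survey score. All numbers are simulated placeholders, not our data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n_participants = 100

# Mean reaction times (ms) per participant in the attention task.
rt_other_face = rng.normal(620, 40, n_participants)  # another person's emotional face
rt_own_face = rng.normal(600, 40, n_participants)    # participant's own emotional face

# Attentional-capture index: how much more another person's emotional
# face slows responses compared with one's own face.
capture_index = rt_other_face - rt_own_face

# Everyday prosocial behavior from the post-experiment online survey.
prosocial_score = rng.normal(50, 10, n_participants)

r, p = pearsonr(capture_index, prosocial_score)
print(f"capture vs. prosocial behavior: r = {r:.2f}, p = {p:.3f}")
```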

How do we categorize ethnicity in faces?

In collaboration with Isabelle Buelthoff from the Max Planck Institute for Biological Cybernetics, we analyzed which parts of the information in a face drive ethnicity categorization (in our case, Asian versus Caucasian) in both Asian and Caucasian participants.

We found that the eyes and the surface texture carried the most weight in determining ethnicity for participants from both cultural backgrounds.
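One way to quantify such feature weights (sketched below with simulated trials, not our actual analysis) is to fit a logistic regression that predicts the “Asian” versus “Caucasian” response from which source face each facial cue was taken from; the fitted coefficients then index each cue's weight.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy trial data: each composite face mixes cues from an Asian and a
# Caucasian source face. A 1 means the cue came from the Asian source.
rng = np.random.default_rng(5)
cue_names = ["eyes", "texture", "mouth", "nose", "outline"]
cues = rng.integers(0, 2, size=(1000, len(cue_names)))

# Simulated responses: 1 = "Asian" categorization. Here the eyes and the
# texture are given the largest influence, mirroring the reported result.
true_weights = np.array([2.0, 1.8, 0.4, 0.3, 0.2])
p_asian = 1 / (1 + np.exp(-(cues @ true_weights - true_weights.sum() / 2)))
responses = rng.binomial(1, p_asian)

# Logistic regression recovers the weight each cue carries in the decision.
model = LogisticRegression().fit(cues, responses)
for name, w in zip(cue_names, model.coef_[0]):
    print(f"{name:>8}: weight {w:.2f}")
```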

In another set of experiments, Cansu Malak is analyzing how well Korean participants can tell apart other Asian ethnicities - something that, according to anecdotal reports, Koreans should be able to do easily.
