This article is part of the supplement: Twentieth Annual Computational Neuroscience Meeting: CNS*2011

Poster presentation (Open Access)

Inscrutable games? Facial expressions predict economic behavior

Filippo Rossi1*, Ian Fasel2 and Alan G Sanfey1,3

Author Affiliations

1 Department of Psychology, University of Arizona, Tucson, AZ 85721, USA

2 Department of Computer Science, University of Arizona, Tucson, AZ 85721, USA

3 Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, NL-6500 HB, Netherlands

BMC Neuroscience 2011, 12(Suppl 1):P281  doi:10.1186/1471-2202-12-S1-P281

The electronic version of this article is the complete one and can be found online at: http://www.biomedcentral.com/1471-2202/12/S1/P281


Published: 18 July 2011

© 2011 Rossi et al; licensee BioMed Central Ltd.

This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Poster presentation

Neuroscientific and behavioral evidence shows that when subjects are engaged in simple economic games, they pay attention to the faces of their opponents. Is this a good idea? Does the face of a decision-maker carry information about his or her strategy? We tested this hypothesis by modeling the facial expressions of subjects playing the Ultimatum Game. We recorded videos of 60 participants and automatically extracted time series of facial actions (12 action units [1], shown in Fig. 1A, as well as pitch, yaw, and roll of the head) using the real-time facial coding system of [2,3]. We then trained non-linear support vector machines (SVMs) to predict the decision of the second player from a segment of video acquired after the offer was received and before the decision was entered (n = 376). To separate the dynamics of facial behavior into different temporal scales, the data were preprocessed with a bank of Gabor filters. With this method we achieved a between-subjects cross-validation accuracy of 0.66 (chance = 0.50) in predicting decisions. Because receiving an unfair offer in the Ultimatum Game is known to evoke a differential facial expression [4], we also trained a model that can capture non-linear relations between facial expressions, fairness, and decisions. To do so, we labeled each instance as fair (offer > $3) or unfair, and then trained separate classifiers to be 'experts' on either fair or unfair offers only. In this case, out-of-sample classification accuracy increased to 0.78. For both models, we used a forward selection procedure to identify the most predictive features (Fig. 1B).
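As a rough illustration of the kind of pipeline described above, the following Python sketch builds Gabor-filtered features from facial action time series and estimates between-subjects accuracy with grouped cross-validation. All specifics here (filter lengths, frame rate, SVM hyperparameters, and the synthetic stand-in data) are assumptions for illustration, not the authors' implementation.

    # Illustrative sketch only: filter parameters, feature layout, and SVM
    # settings are assumptions, not the pipeline used in the study.
    import numpy as np
    from scipy.signal import fftconvolve
    from sklearn.svm import SVC
    from sklearn.model_selection import GroupKFold, cross_val_score

    def gabor_bank(lengths=(8, 16, 32, 64)):
        """Temporal Gabor filters at several scales (lengths in frames)."""
        bank = []
        for n in lengths:
            t = np.arange(n) - n / 2.0
            sigma = n / 6.0  # Gaussian envelope width tied to filter length
            bank.append(np.exp(-t**2 / (2 * sigma**2)) *
                        np.exp(2j * np.pi * t / n))  # one cycle per filter
        return bank

    def features(clip):
        """clip: (frames, channels) array of AU intensities and head pose.
        Returns mean filter-response energy per channel and scale."""
        feats = []
        for g in gabor_bank():
            for ch in range(clip.shape[1]):
                resp = fftconvolve(clip[:, ch], g, mode='valid')
                feats.append(np.mean(np.abs(resp)))
        return np.array(feats)

    # Synthetic stand-ins shaped like the study's data: 376 segments,
    # 60 subjects, 15 channels (12 AUs + pitch, yaw, roll), ~3 s at 30 Hz.
    rng = np.random.default_rng(0)
    clips = [rng.standard_normal((90, 15)) for _ in range(376)]
    y = rng.integers(0, 2, size=376)          # accept (1) / reject (0)
    subjects = rng.integers(0, 60, size=376)  # subject ID per segment

    X = np.vstack([features(c) for c in clips])
    svm = SVC(kernel='rbf', C=1.0, gamma='scale')
    # GroupKFold keeps each subject's segments in a single fold, so the
    # accuracy estimate is between-subjects, as in the abstract.
    acc = cross_val_score(svm, X, y, groups=subjects,
                          cv=GroupKFold(n_splits=5)).mean()

The fair/unfair 'expert' variant would simply fit one such classifier on segments whose offers exceeded $3 and another on the rest, routing each test instance to the matching expert.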

Figure 1. A. Action units (AUs) used in the analysis (image of the face created with Artnatomy [5]). B. Frequency with which a feature is selected as a covariate in a logistic classifier, using increases in area under the ROC curve as the inclusion criterion.
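The feature ranking in Fig. 1B rests on greedy forward selection with ROC AUC as the inclusion criterion. A minimal sketch of that idea, assuming a feature matrix X and labels y as above (the stopping rule and fold count are assumptions), might look like:

    # Greedy forward selection for a logistic classifier; a feature is
    # added only if it improves cross-validated ROC AUC. Illustrative
    # sketch, not the authors' exact procedure.
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def forward_select(X, y, max_feats=10, tol=1e-3):
        selected, best_auc = [], 0.5  # start from chance-level AUC
        remaining = list(range(X.shape[1]))
        while remaining and len(selected) < max_feats:
            # Score every candidate feature added to the current set
            scores = {j: cross_val_score(
                          LogisticRegression(max_iter=1000),
                          X[:, selected + [j]], y,
                          scoring='roc_auc', cv=5).mean()
                      for j in remaining}
            j_best = max(scores, key=scores.get)
            if scores[j_best] - best_auc < tol:  # no meaningful AUC gain
                break
            selected.append(j_best)
            remaining.remove(j_best)
            best_auc = scores[j_best]
        return selected, best_auc

Repeating the selection across cross-validation folds and counting how often each feature enters the model yields selection frequencies of the kind plotted in Fig. 1B.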

Abstract approaches to the study of social decision-making usually disregard the fact that choices are made in informationally rich environments. An important goal, therefore, is to model these different sources of information and the ways in which they affect decisions. The current study suggests that one important source of information about strategic decision-making is the face: given sufficiently sensitive instruments, this information can be measured and quantified in real time by a computer. It also suggests that real-time analysis of facial action codes may serve as a powerful new tool for understanding strategic decision-making, one that can complement neuroimaging techniques such as EEG and fMRI.

References

  1. Ekman P, Friesen WV: Facial action coding system: A technique for the measurement of facial movement. Palo Alto: Consulting Psychologists Press; 1978.

  2. Littlewort G, Whitehill J, Wu T, Fasel I, Frank M, Movellan J, Bartlett M: The Computer Expression Recognition Toolbox (CERT). Face and Gesture Recognition, to appear.

  3. Littlewort G, Bartlett MS, Fasel I, Susskind J, Movellan J: Dynamics of facial expression extracted automatically from video. Image and Vision Computing 2006, 24:615-625.

  4. Chapman HA, Kim DA, Susskind JM, Anderson AK: In bad taste: evidence for the oral origins of moral disgust. Science 2009, 323:1222-1226.

  5. Flores VC: ARTNATOMY/ARTNATOMIA. [http://www.artnatomia.net]