Psychology Section
Under the supervision of Susanne Kaiser

STRUCTURAL ANALYSIS OF TEMPORAL PATTERNS
OF FACIAL ACTIONS:
MEASUREMENT AND IMPLICATIONS FOR THE STUDY OF EMOTION
PERCEPTION THROUGH FACIAL EXPRESSIONS

THESIS
Presented to the Faculty of Psychology and Educational Sciences
of the University of Geneva
to obtain the degree of Doctor in Psychology
by
Stéphane WITH
of Geneva

Thesis No 455
GENEVA
March 2010
INDEX

I. FOREWORD
II. THEORETICAL INTRODUCTION
   Prototypical Facial Expressions of Emotions
   Issues of Ecological Validity of Emotion Recognition Studies
   The Componential View on Facial Expressions of Emotions
   Summary
III. EXPERIMENTAL SECTION
   Research aims
   Collecting samples of dynamic emotional facial expressions
      The MeMo database
      Emotional narrative eliciting task
      Participants in the narration tasks (production study)
      Laboratory and interview settings
      Assessment of emotional induction
      Extracting video sample files from the original films
   Assessing the message value of spontaneously expressed dynamic displays of emotions
      Judgment task
      Participants in the judgment task and rating protocol
      Reliability analyses
      Principal Components Factor Analysis
      Clustering of video files on factor scores
   Methodology of behavior annotation
      The Anvil annotation tool
      Coding scheme
      Measurement of facial activity
      Additional Nonverbal Codes
      Speech and Voice Codes
   Scoring procedure and reliability assessment
      Results
   Descriptions of scores in database
   Interpretation of FACS Codes with EMFACS/FACSAID
   Methodological issues in measuring the co-occurrences of Action Units
   Measuring Co-occurrences of Facial Actions with GSEQ (Generalized Sequential Querier)
      Contingency table statistics with GSEQ
      From odds ratios to Yule's Q
      Testing for the occurrence of EMFACS-predicted facial patterns using Yule's Q
   Relating nonverbal signals to emotion perception
      Relative frequencies of single action units across the rating clusters
      Cluster characterization by patterns of action units
   Prototypical Patterns of Facial Expressions across the Clusters
      Results for prototypical expressions across the clusters
         Happiness
         Anger
         Fear
         Surprise
         Sadness
         Disgust
         Contempt
      Masking smiles (blends of smiles with displays of negatively valenced emotions)
      Summary of results
   Sequential Analysis of Communicative Behaviours – Methodological Issues
      Definition of T-patterns
      Statistical validation of T-patterns
      Setting up T-pattern detection parameters
      T-pattern search results and selection criteria
   T-pattern statistics by clusters
      Enjoyment cluster
      Hostility cluster
      Embarrassment cluster
      Surprise cluster
      Sadness cluster
      Summary of results
   T-pattern illustrations and comparison by clusters
      Enjoyment cluster
      Hostility cluster
      Embarrassment cluster
      Surprise cluster
      Sadness cluster
      Summary of findings
IV. GENERAL DISCUSSION AND CONCLUSIONS
   General Discussion
      MeMo – a new research database of dynamic facial expressions of emotions
      The perceived message value of dynamic facial expressions of emotions
      The role of sequential patterns of communicative actions in the perception of emotions
      Limitations
      Future perspectives
   Conclusions
V. REFERENCES
Appendix I. Facial Action Coding System Figures and Definitions of Major Action Units
Appendix II. Frequency Distribution Tables for Event Types in Rating Clusters
Appendix III. Transition Graphs for T-patterns
Appendix IV. Instructions and Questionnaires
Appendix V. Consent Form
Appendix VI. Normality Tests for Action Unit Distributions in Database
I. FOREWORD
The human face provides a rich source of information that we use to identify other members of our species and to gather information about gender, age, attractiveness (Rhodes, 2006), or personality traits. Besides static signals extracted from the face, dynamic facial expressions are an important means of communicating and understanding others' intentions and affective reactions (Keltner & Ekman, 2000). Facial displays have been associated with the signaling of emotions and pain (Ekman, 1993), the communication of empathic understanding (Bavelas et al., 1986), and the regulation of conversations (Cohn & Elmore, 1988). They may signal brain dysfunctions (Rinn, 1984), psychopathological conditions (Ellgring & Gaebel, 1994; Benecke & Krause, 2005), or suicidal intent (Heller et al., 2001). They have also been shown to signal developmental changes in children (Yale et al., 1999; Yale, Messinger, & Cobo-Lewis, 2003), to inform person recognition (Cohn et al., 2002), and to betray attempts at deceit (Frank & Ekman, 2003).
In recent years, we have witnessed the rapid emergence of interest in the automated analysis and interpretation of facial activity through computer vision, the science of extracting and representing meaningful information from digitized video and recognizing perceptually meaningful patterns. In 1992, the U.S. National Science Foundation convened a seminal interdisciplinary workshop on this topic (Ekman, Huang, Sejnowski, & Hager, 1992), which brought together psychologists with expertise in facial expression and computer vision scientists with an interest in facial image analysis. Since then, there has been considerable research activity, as represented by a series of six international meetings on face and gesture, beginning in 1995. Several automated facial image analysis systems have been developed (Cootes, Edwards, & Taylor, 2001; Essa & Pentland, 1997; Lyons, Akamasku, Kamachi, & Gyoba, 1998; Padgett, Cottrell, & Adolphs, 1996; Wen & Huang, 2003; Yacoob & Davis, 1996; Zhang, 1999; Zhu, De Silva, & Ko, 2002). Most of these systems attempt to classify facial movements into a small set of specific emotion categories, such as joy, anger, surprise, fear, or happiness. Of course, the potential economic stakes linked to the development of such technologies are high. Possible commercial applications notably include cameras that photograph your friends and family only when they oblige you with a smile, computer tutoring systems that adapt their learning gradients to your perceived level of frustration, or artificial agents that attune their reactions to your nonverbally expressed emotions.
In the post-9/11 United States, funds have been granted by Homeland Security for the training of behavioral profilers to screen airport passengers for potential terrorist intent. In the August 14, 2009 issue of the Ottawa Citizen newspaper, reporter Ian MacLeod writes:
«Beginning next year, some air travellers will be scrutinized by airport "behavior detection officers" for physiological signs of hostile intent — in other words: screening for dangerous people rather than just for dangerous objects. [...] Similar programs operate in the United States, the United Kingdom and Israel, which pioneered spying on people's expressions and body movements for involuntary and fleeting "micro-expressions" and movements suggesting abnormal stress, fear or deception. "This might indicate a passenger has malicious intentions," said Mathieu Larocque, spokesman for the security authority, which is responsible for pre-board screening of airport passengers. "It offers an additional security layer for the aviation system."»
In some airports, passengers' voices are already being screened by machines for signs of stress when they are asked to answer questions about terrorist intent. On the Digital Civil Rights in Europe website (www.edri.org), one reads (http://www.edri.org/edrigram/number4.7/liedetector):
“Lie detectors will be used in Russian airports as part of the security measures starting with July 2006. Meant to identify terrorists or other types of criminals, a lie-detecting device developed in Israel, known as "truth verifier," will be first introduced in Moscow's Domodedovo airport as early as July. The technology [...] is said to be able to detect answers coming from imagination or memory.”
In the United Kingdom, local social institutions have introduced voice stress analysis to detect fraudulent benefit claims. For example, journalist Les Reid reports (2009):
«A lie detector system designed to root out benefits cheats in Coventry has identified 1,200 dodgy claims in just over a year. The technology detects stress levels in people's voices over the phone and has been used by the city council to assess new Housing Benefit claims since November 2007. Council bosses say the Voice Risk Analysis (VRA) system has dramatically sped up the time taken to process at least 1,700 genuine claims, saving money on paperwork.» (Reid, 2009)
Amongst the numerous TV shows praising the feats of forensic science that are so common these days, one especially attracted my attention in 2009: "Lie to Me". Produced by FOX television, the series follows Dr Cal Lightman, a scientific expert in reading emotions from the face and detecting subtle signs of deceit from nonverbal behavior, who rents his services to U.S. federal agents and private parties to help resolve various investigations. Freely inspired by the life of psychologist and facial expression expert Paul Ekman, the show can boast renowned experts such as Dr Erica Rosenberg, a long-time collaborator of Ekman, as scientific advisors. All this seems to show that the current Zeitgeist is ripe for accepting the spread of human and automated technologies of behavior profiling, possibly as a necessary evil in return for a safer society. And for those of us who still think we can get away with cheating, the message is clear: you will be caught one way or another. All this is well and good, but how much of it is based on sound science and reliable technologies?
In 2008, Noldus commercialized the first facial emotion expression analysis software, "FaceReader" (http://www.noldus.com/human-behavior-research/products/facereader), based on a classification algorithm developed by Den Uyl and Van Kuilenburg (2005). Out of curiosity, Susanne Kaiser and I invited a commercial agent from the distributing company to our lab to test the system on video recordings collected for the present thesis. I recall feeling amused, relieved, and frustrated at the same time while watching the program detect "anger" whenever the participant in the video frowned, "surprise" whenever her eyebrows rose, and "happiness" whenever she happened to smile. I was amused to see how unreliable and arbitrary these interpretations were. I was relieved because it comforted me that I had not spent the last three and a half years cautiously, and I must say painfully, annotating facial actions manually when a computer program could have done it just as well in no time (although I still hope that this will soon become at least partially possible). Finally, I was frustrated and upset at seeing such promising technological advances put to such uninformed use.
Surprising as it may be to computer engineers, behavioral scientists still have many unanswered questions as to what an emotion even is, and what nonverbal features are essential for communicating affective mental states in social contexts. The present thesis is an attempt to investigate some unsettled issues about the perception of emotions from facial expressions. Even though facial expressions in real life are inherently dynamic, the vast majority of claims about how emotions are communicated through facial patterns come from recognition studies using static photographs of prototypical expressions posed by actors. Even though some questions about the way these expressions are perceived have already been largely addressed, not much is known about their frequency of occurrence in naturalistic contexts, and therefore about what their contribution to communicating emotions in social interactions really amounts to. In this study, we put a special emphasis on the relationship between how spontaneously produced facial actions unfold in time and how they are perceived in terms of the emotional messages they convey. Moreover, because faces are usually not perceived in isolation but are integrated with numerous additional nonverbal signals in a multi-channel communicative system, we also consider head and gaze movement and orientation, as well as some speech- and voice-related variables, as communicative signals that may play a role in moderating the message value derived from dynamically perceived facial expressions of emotions.
The study described in this thesis can be divided into four sections. First, we begin with a theoretical introduction questioning the validity of generalizing the results of traditional emotion recognition studies to emotional signal processing in real life situations. Second, we present our strategy for collecting spontaneous facial expressions produced during an emotion sharing task. We describe a new audio-video database called MeMo, created for the needs of this thesis. MeMo consists of 200 sample files extracted from 50 face-to-face semi-structured interviews conducted with 10 female participants, each narrating five autobiographical emotional episodes. All the extracted video samples were pre-selected on the basis of agreement between two independent judges asked to identify sequences in the interviews where participants appeared "emotional". All the facial actions occurring in these files were then annotated with the Facial Action Coding System, as well as with additional codes designed specifically for this study. We then report the results of a multi-scalar rating judgment study conducted to assess (a) whether independent judges could agree on the message value of the emotions (if any) communicated in the sample files, and (b) whether, on the basis of these judgments, we could create distinct clusters of video records with similar rating profiles in order to conduct quantitative comparisons of the facial and related nonverbal actions contained in these groups. In the third section, we report our attempt to compare groups of videos with distinctively different rating scores on five emotional factors, in terms of both traditional prototypical patterns of facial configurations and dynamic patterns of multimodal communicative actions detected by the T-pattern detection algorithm developed by Magnusson (2005). An emphasis is put on the different types of information derived from these two types of analysis. Finally, in the discussion section, the main results of the study are reviewed and potential implications for future research are discussed.
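To give a concrete sense of the clustering step mentioned above (grouping video records by similar rating profiles on latent emotion factors), the following minimal sketch is offered. Everything in it is an assumption made for illustration: the data are random, the numbers of scales, factors, and clusters are invented, and k-means merely stands in for whichever clustering procedure is actually reported in the experimental section.

```python
# Illustrative sketch only: cluster video clips by their emotion-rating
# profiles. Data, dimensions, and parameters are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ratings = rng.random((200, 12))  # 200 clips x 12 hypothetical emotion scales

# Reduce the rating scales to a few latent emotion factors,
# then group clips with similar profiles on those factors.
factor_scores = PCA(n_components=5).fit_transform(ratings)
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(factor_scores)

for c in range(5):
    print(f"cluster {c}: {np.sum(clusters == c)} clips")
```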
II. THEORETICAL INTRODUCTION
Prototypical Facial Expressions of Emotions
In psychology, the prevalent conception of emotions assumes that there exists a limited number of fundamental or basic emotions that are neurologically pre-wired and that are expressed through distinctively recognized facial expressions. Initial support for this view started to emerge in the early 1970s, with the first research reports suggesting that a limited set of emotion terms could be matched above chance level with static photographs of the faces of individuals posing emotions corresponding to six English affective terms: happiness, fear, anger, surprise, sadness, and disgust. Later work led to the addition of a seventh emotion, contempt, to the list (Ekman & Friesen, 1986; Ekman & Heider, 1988; see Figure 1 for an illustration). These early research projects, inspired by the neo-Darwinian theories of Tomkins (1962, 1963) and conducted by psychologists Paul Ekman (1972, 1994; Ekman et al., 1987) and Carroll Izard (1971) across various literate and illiterate cultures that had had little contact with each other, revived an interest in the search for universal invariants in the way facial displays communicate emotions, at a time when socio-constructivist models viewing emotional behavior as determined solely by cultural influences on expressive prescriptions were predominant (Lutz & White, 1986; Wierzbicka, 1994).
Since these seminal studies, replications by other research groups have generally produced results congruent with the original findings. Notably, a meta-analysis by Elfenbein and Ambady (2002) of 87 articles describing 97 separate studies on cross-cultural emotion recognition supports the empirical claims made in favor of the cross-cultural recognition of emotions, suggesting that certain core components of facial expressions of emotions are universal. The conclusion that prototypical facial displays can be reliably and cross-culturally associated with predicted emotion labels or appropriate emotion-eliciting scenarios is usually taken as evidence that at least some facial patterns function as innate and phylogenetically evolved signals for the communication of emotions. Based on his empirical work and on the theoretical intuitions of Tomkins, Ekman proposed a neuro-cultural account of emotions (1972) positing a dual influence of psycho-physiological and socio-cultural mechanisms as explanatory causes for both "universal" invariants and "culture-specific" aspects of facial displays of emotions. The neuropsychological component of the model posits the existence of facial affect programs (FAPs) that are automatically activated when an emotion is triggered. These FAPs are essentially hypothetical neuro-motor programs triggered during an emotion episode and considered responsible for organizing the full facial response patterns distinctively characteristic of a small set of fundamental or "basic" emotions. The very notion that there exists a limited number of emotions sharing a more basic or fundamental status than others is essentially derived from Ekman's interpretation of the implications to be drawn from the thesis of a universally recognized set of facial emotional expressions.

The neuro-cultural model views it as a sine qua non requirement for the inclusion of an affective state in the category of emotions that evidence be produced that this state possesses a distinctive expressive signal that can be accurately recognized cross-culturally. When such evidence can be provided, it is taken as strongly suggestive evidence that the label referring to the emotion emerged as a semantic transliteration or rendering of some naturally preexisting, phylogenetically evolved, and innate response. Additional indirect evidence cited in favor of phylogenetically evolved «facial affect programs» comes from observed similarities between human and nonhuman emotional displays (Chevalier-Skolnikoff, 1973; Darwin, 1872/1965; Redican, 1982; Parr, Waller, Vick, & Bard, 2007; van Hooff, 1972; Waller & Dunbar, 2005) and from the mutual recognition of emotional signals across species boundaries (Itakura, 1994; Linnankoski, Laasko, & Leinonen, 1994). Note, however, that strictly speaking these studies provide evidence for the existence of cross-species similarities in the forms and functions of communicative signals, not for their relation to the theoretical construct of emotion.
Figure 1. Prototypical expressions for seven «basic» emotions according to Paul Ekman's predictions (1994). From left to right: Surprise, Anger, Disgust, Fear, Sadness, Happiness, and Contempt. The expressions were generated with the FACSGen tool (Roesch et al., 2006).
A recent study also showed that congenitally blind, non-congenitally blind, and sighted athletes photographed while receiving medals at the Paralympic and Olympic Games produce similar prototypical expressions of emotions, seemingly excluding the possibility of social modeling of the expressions (Matsumoto & Willingham, 2009; but see Mistschenka, 1933, for contradictory observations). Recent dissection work on 18 human cadavers also shows that, even though facial musculature is far from consistent between individuals in terms of both presence and symmetry (McAlister, Harkness, & Nicoll, 1998; Pessa, Zadoo, Adrian, Yuan, & Garza, 1998; Pessa et al., 1998), the muscles essential for producing prototypical facial displays of emotions vary little among individuals. In this study, all examined cadavers were equipped with the facial muscles necessary to produce the required actions, almost always exhibited these muscles bilaterally, and exhibited minimal size asymmetry. In contrast, muscles not essential for the production of facial prototypes of emotions were inconsistent in presence and often asymmetric in presence and size (Waller, Cray, & Burrows, 2008). This explains how universally recognized facial expressions can be produced despite individual variation in facial musculature. In a 1999 text promoting the construct of basic emotions, Ekman unequivocally restates:
“I have gone back and forth on the question of whether or not a universal signal is the sine qua non for emotion. Once again, I will set out that claim, as a challenge for someone to identify states which have all the other characteristics (of emotions) ... but no signal. To date there is no such evidence and I doubt it will be found. I believe it was central to the evolution of emotions that they inform conspecifics ... about what is occurring inside the person, what most likely occurred before to bring about that expression, and what is most likely to occur next.” (Ekman, 1999)
Paul Ekman often presents himself as an upholder of the ideas Charles Darwin exposed in The Expression of the Emotions in Man and Animals (1872/1996). In the 1996 edition of the book published by Oxford University Press, Ekman even authored an extensive commentary on Darwin's text based on his own views of basic emotions. Interestingly enough, attentive readers of Darwin have highlighted the fact that the notions of basic emotions and prototypical expressions were probably foreign to Darwin's mind. For example, Michel Heller states:
“Darwin so loved to describe minutely what he observed that he could never have contented himself with reducing emotional expressions to a few traits, or to a few basic emotions. What he includes in his list of emotional expressions is both extremely varied and heterogeneous. He would rather have tended to believe that he had not been able to analyze everything, and that reality is even more differentiated than what he managed to describe.” (Heller, 2008)
Whatever opinion one may hold about Ekman's arguments, the principal strength of his work has probably been to reaffirm Darwin's proposition that emotions are an appropriate object of study for the natural sciences. Even though Ekman puts a strong emphasis on the biological determinants of facial expressions of emotions, he does not altogether reject the importance of culture, as reflected in the second part of his model's name. The neuro-cultural model acknowledges that both cultures and institutions promote implicit and explicit expectations that influence the ways in which emotional episodes are acted out in the interpersonal arena. The notion of cultural display rules was first introduced by Ekman and Friesen (1969) as a hypothetical construct to explain the observed differences in facial expressive styles in a study comparing Japanese and American students. Since that time, the notion of display rules has become a central concept in the study of culture and emotion. Cultural display rules can be defined as culturally prescribed rules, learnt early in life through socialization, that influence the emotional expression of people from any culture depending on what that particular culture has characterized as an acceptable or unacceptable expression of emotion (Matsumoto, Kasri, & Kooken, 1999).
These culturally shared norms dictate how, when, and to whom people should express their emotions. Note that, in keeping with the notion of a set of universal emotions communicated through prototypical facial patterns, the concept of display rules does not extend to the initial shaping of an emotion display per se. The "how" in the phrase "how ... people should express their emotions" is not meant to refer to the canonical form of the emotional expression, which is considered an innate and phylogenetically inherited pattern produced identically across individuals and cultures. Rather, display rules are inferred from the operation of modulation strategies applied to an expressive response already triggered by an emotion. Several modulation strategies meant to alter the supposedly natural course of an expression have been described, such as acting out an «unfelt» emotion, as in social or polite smiling; trying to suppress the expression by activating counteracting muscles; minimizing or maximizing the amplitude and/or duration of a response; and masking negative displays with social smiles. Interestingly, the same meta-analysis by Elfenbein and Ambady (2002) that seems to confirm a minimal universality in the recognition of core elements of facial expressions of emotions also provides unaccounted-for evidence that emotional expressions may lose some of their meaning across cultural boundaries. For example, these authors report that some facial displays are more accurately understood when they are judged by members of the same national, ethnic, or regional group that expressed the emotion.
According to Ekman's model, this in-group advantage ought to be explained by a shared knowledge of culture-specific display rules among individuals of the same culture. This interpretation of the in-group advantage in decoding facially expressed emotions was challenged by two subsequent studies that explored not only how emotions are perceived but also how they are produced cross-culturally (Elfenbein et al., 2007). Surprisingly little research has examined cross-cultural differences in actual (not self-reported) emotionally expressive behaviors (the most often cited of these studies, Ekman, 1972, was not published in a peer-reviewed journal). In the study by Elfenbein and colleagues, participants from Quebec and Gabon were asked to pose facial expressions of emotions. Group-specific displays, in the form of activation patterns of distinct muscles for the same expression, emerged most clearly for serenity, shame, and contempt, and also for anger, sadness, surprise, and happiness, but not for fear, disgust, or embarrassment. In a second study, Quebecois and Gabonese participants were asked to recognize these expressions as well as expressions standardized to erase the cultural specificities. Results showed that an in-group advantage emerged for non-standardized expressions only, and most strongly for expressions with greater regional expressive specificities. The authors interpreted these results as suggesting the existence of nonverbal dialects, showing cultural variations similar to linguistic dialects and thereby decreasing accurate recognition by out-group members.
From the early 1970s to this day, theoretical claims in favor of the existence of basic emotions have depended largely on the convergence of results from cross-cultural studies in which participants are asked to judge pre-selected displays of static faces. Typically, links between facial expressions and self-reported emotional experience are at best moderate (Rosenberg, 2005). In a recent review of 257 papers published over the 1997-2007 period, Eva Bänninger (2009) showed that research on facial expressions was dominated by either judgment or production studies (N = 158, 61%); only 38 (15%) combined the measurement of actual facial behavior with impression formation. This easily produces a problem of circularity, since production studies usually rely on coding systems and interpretation tables derived from the results of judgment studies to select which behaviors to observe and, subsequently, how to make sense of their data.
Ekman's arguments in favor of the universality of a small set of basic emotions, signaled by corresponding patterns of facial expressions, have not gone unchallenged by alternative psychological models of emotions. For example, according to Russell's dimensional model, facial expressions are not seen as expressing distinct emotions; nevertheless, observers do infer much about the expresser from the face (Carroll & Russell, 1996).
According to this view, one can extract two kinds of information from the face easily and automatically. First, quasi-physical information is perceived: the observer can see from the eyes whether the expresser is weeping, winking, looking down or up, staring at something, or looking away; the mouth shows whether the expresser is talking, shouting, yawning, laughing, smiling, or grimacing. Carroll and Russell (1996) refer to such information as quasi-physical to indicate its simplest literal meaning. Thus, as quasi-physical information, a smile is recognized simply as a smile, not as a smile of joy, of embarrassment, of nervousness, or of polite greeting. Second, based in part on perceived quasi-physical features, the observer infers the expresser's feelings on the general dimensions of pleasantness and arousal. According to Russell's dimensional model, any further differentiation of a displayed emotion is inferred from additional contextual cues.
According to Frijda's definition, emotions are essentially states of action readiness, and facial expressions are seen as reflecting an intention to act in a certain way. For example, using Ekman's facial prototypes of basic emotions, Frijda showed that participants could reliably associate the displays with particular states of action readiness: disgust and fear were associated with the tendencies to "avoid" and "protect oneself", happiness with the desires to "approach" and "be with" (Frijda & Tcherkassof, 1997).
Even though their theories differ in what they predict Ekman's prototypical expressions should signal, what these researchers share is the use of recognition studies to back up their particular claims as to what an emotion is. This may prove inappropriate, since the only thing these studies convincingly show is that not only emotions but also levels of arousal and hedonicity, cognitive appraisals, and action tendencies can be inferred from prototypical facial emotional expressions (see Scherer & Grandjean, 2008). This suggests that results from recognition studies may be compatible with several theoretical models and are thus inadequate for testing the competing theories. As for the investigation of correspondences between self-reported feelings and theoretically predicted facial displays, results are inconsistent: some researchers report a weak (Bonanno & Keltner, 2004; Frijda & Tcherkassof, 1997; Kappas, 2003) to moderate (Rosenberg, 2005) link. For instance, Fernandez-Dols, Sanchez, Carrera, and Ruiz-Belda (1997) found no coherence between the subjective reports of participants watching emotion-eliciting movies and their facial expressions; for example, two participants displayed a prototypical expression of surprise while reporting feeling disgust.
Issues of Ecological Validity of Emotion Recognition Studies
Emotional signals in social interactions are typically not conveyed by specific facial patterns alone but by a complex combination of rapidly changing and overlapping individual facial actions integrated with other nonverbal cues. The well-established empirical evidence demonstrating that a limited number of discrete emotion categories can be cross-culturally recognized from facial configurations alone rests on the results of emotion recognition studies using static photographs of posed expressions, presented in isolation and pre-selected for maximum discriminability (Barrett, Lindquist, & Gendron, 2007).
The generalization of these recognition data to emotional signal processing in more realistic social contexts is questionable on several grounds. First, most of the standardized facial stimuli used in laboratory experiments (most often from Matsumoto and Ekman's, 1988, JACFEE set or Ekman and Friesen's, 1976, Pictures of Facial Affect) were produced by actors who followed strict guidelines, detailed in the «Directed Facial Action Task» protocol (Ekman, 2007), on how to pose each facial expression corresponding to Ekman's prototypical set of basic emotions. By contrast, naturally occurring facial expressions are often of weaker intensity and less clear-cut, and their interpretation is more elusive and ambiguous than that of posed expressions (Nummenmaa, 1992; Hess & Kleck, 1994; Russell, 1997). Indirect evidence for this is that drastic drops in, or even the disappearance of, inter-rater agreement for specific emotion labels have been reported when spontaneously produced rather than posed facial expressions are used (Motley & Camden, 1988; Motley, 1993; Yik, Meng, & Russell, 1998).
Standardized posed expressions are more easily recognized than spontaneous ones, probably because they act as super-stimuli, exaggerating the features of the emotion type they depict. As Ekman (1972, 1989) stresses, they possess a "snapshot quality" that fosters instant recognition. Second, several studies using well-established facial coding systems like FACS to specify the configurations of spontaneously produced expressions report little evidence for the existence of the specific prototypical expressions predicted by proponents of basic emotion theories (Matias & Cohn, 1993; Camras et al., 2002; Scherer & Ellgring, 2007a).
Third, in real life situations, facial displays are only one component of an integrated multi-channel, multi-signal communicative system, in which additional components provide a possible context for modulating the perceived meaning of facial displays. Despite this fact, most research on emotion recognition has focused on isolated modalities, mostly facial and vocal, at the expense of other communicative channels. The few studies that have investigated the combination of facial displays with additional nonverbal signals suggest that head orientation (Hess, Adams, & Kleck, 2007), body postures (Aviezer et al., 2008), head positions (Krumhuber, Manstead, & Kappas, 2007), and gaze orientation (Adams & Kleck, 2005) all have a modulating impact on the meaning derived from facial displays. For example, the role of horizontal head tilt in the perception of facially expressed emotions was examined by Hess, Adams, and Kleck (2007). Head position was found to strongly influence reactions to anger and fear, but less so for other emotions. Direct anger expressions were more accurately identified, were perceived as less affiliative, and elicited higher levels of anxiousness and repulsion, as well as less desire to approach, than did averted anger expressions. Conversely, for fear expressions, averted faces elicited more negative affect in the perceiver. The authors conclude that their findings suggest that horizontal head position is an important cue for the assessment of threat. Additionally, Adams and Kleck (2005) demonstrated that the way in which gaze direction influences emotion perception depends on the specific emotion in question: direct gaze enhances the perception of anger, whereas averted gaze enhances the perception of fear expressions.
These patterns of findings are explained according to the perspective that emotional
expressions and gaze behavior communicate basic behavioral intentions to approach or avoid.
Thus, when congruent in signal value, gaze direction acts to enhance the perception of the
emotion communicated by the face. Gaze direction influences anger and fear perception
because it indicates the source of threat, as part of an early warning mechanism, whereas for
joy and sadness, gaze may simply be a social signal indicating a tendency for social
engagement. In this example, averted gaze may enhance the perception of fear because it
helps indicate the source of potential threat via joint attention (see Driver et al., 1999),
whereas averted gaze may enhance the perception of sadness because it indicates social
withdrawal.
Keltner (1995) was the first to provide evidence that two distinct emotions sharing the same facial signal of smiling could only be accurately differentiated on the basis of additional behavioral cues. He found that when people report embarrassment, they show a consistent pattern of behavior distinct from that of amusement: when embarrassed, people look down, then smile and simultaneously attempt to control the smile with facial actions that are antagonistic to the upward pull of the zygomatic muscle, turn their head away, and then touch their face. Follow-up research has shown that observers are able to discriminate displays of spontaneous embarrassment and amusement. This suggests that an important part of the embarrassment signal might be the sequential unfolding of its multimodal component actions.
The same emphasis on temporal unfolding that was useful in differentiating kinds of smiling could also help us understand whether and how observers attribute different emotional values to morphologically similar facial actions depending on their sequential organization and pairing with other nonverbal actions. In the same line as what Keltner (1995) showed with embarrassment, recent work suggests that positive affect states that were not previously considered basic emotions can possibly be identified by specific patterns of expressive actions (Shiota, Campos, & Keltner, 2003).
For example, Tracy and Robins (2004, 2008) found that an action pattern involving a small smile, the head tilted back, the arms raised or akimbo with hands on hips, and a visibly expanded posture could be reliably interpreted as an expression of pride. Because they were able to reproduce these results cross-culturally, these authors argue that «pride» should be added to the traditional list of basic emotions.
Another under-investigated hypothesis is that, in social conversations, the verbal communication of the circumstances and evaluation of a situation serves to reduce the uncertainty inherent in some facial expressions and to constrain their meaning, allowing for the quick categorization of emotion (Lindquist et al., 2006). Indirect evidence in favor of this hypothesis is that, when given the opportunity, judges invent plausible eliciting scenarios when presented with prototypical emotional expressions (Frijda & Tcherkassof, 1997). It therefore only takes a small leap to assume that when an actual eliciting event is known, it will be taken into account in interpreting someone's facial expressions.
Finally, when it comes to producing relevant empirical data about how emotions are perceived through the face, traditional judgment studies using static stimuli do not capitalize on the fact that the natural dynamic component of facial expressions provides unique information about a sender's mental state that is not available in static displays (Ekman, 1982). In natural settings, the face moves and shifts, sometimes quickly, from one expression to another. In other words, observers in natural environments perceive the social signals conveyed by the face not as static stimuli but as complex action patterns unfolding in time.
Thus, the sequential unfolding of facial actions provides observers with different information than that provided by static photographs, since still expressions do not present subtle changes. It may be that differences in the social information displayed by static and dynamic expressions lead to differential effects on emotion perception. Indeed, preliminary investigations suggest that the dynamic aspects of facial displays are likely to be of importance (Bassili, 1978, 1979; Buck, Miller, & Caul, 1974; Hess & Kleck, 1990; Kamachi et al., 2001). For example, Edwards (1998) has shown that observers are sensitive to subtle changes in a person's facial expression: when asked to assess the temporal progression of emotional facial displays, participants were able to detect extremely fine dynamic cues, leading the author to assert that facial expressions of emotion are temporally structured in a way that is both perceptible and meaningful to an observer.
The relevance of temporal aspects has also been stressed in research conducted by Wehrle et al. (2000) on emotion perception from schematic facial expressions. The results support the claim that dynamic displays improve the recognition and differentiation of the facial patterns of emotions compared to static displays (see also Ambadar, Schooler, & Cohn, 2005; Bould & Morris, 2008; Lemay, Kirouac, & Lacouture, 1995). Evidence is also starting to accumulate concerning the importance of dynamic parameters for observers' categorization of subtle facial expressions and for judgments of genuineness (Krumhuber & Kappas, 2005) and trustworthiness (Krumhuber et al., 2007).
The relevance of relative timing emerged from studies showing that humans are sensitive to the duration of a facial display when considering the sincerity or deceptiveness of an emotional display (Ekman, Friesen, & O'Sullivan, 1988). Ekman and Friesen (1982) suggested that social or polite smiles are sometimes obvious because of their short onset and irregular offset times, which convey a lack of authenticity. Cohn and Schmidt (2004) showed that spontaneous smiles have smaller amplitude and present a more linear relation between amplitude and duration than deliberate smiles. Hess and Kleck (1990) also pointed out the importance of the dynamics of facial movements, and particularly of irregularity, or phasic change, in an expression's unfolding.
Thus, pauses and stepwise intensity changes (for example, the number of onset, offset, and apex phases that an expression contains) came out as significant parameters (see also Frank, Ekman, & Friesen, 1993; Hess et al., 1989; Messinger, Fogel, & Dickson, 1999). All of these studies point to the possibility that the perception of emotions from static and dynamic facial stimuli might involve distinct cognitive processing strategies, and that until recently researchers in the behavioral sciences may have seriously underestimated the importance of context and motion dynamics for making sense of the subtle or otherwise ambiguous facial expressions that permeate real life situations. In the next section, we introduce an alternative theoretical account of the relationship between facial displays and emotions, one that takes into account the unfolding of facial expressions in time.
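Before turning to that account, the sketch below illustrates the kind of dynamic parameters just discussed by segmenting a single action unit's smoothed intensity trace into onset, apex, and offset phases and counting them. It is a toy example under our own assumptions (the threshold value, the invented trace, the function name are all hypothetical), not the measurement procedure of the studies cited above.

```python
# Illustrative sketch: segment an action-unit intensity trace into
# onset / apex / offset phases. Threshold and data are hypothetical.
import numpy as np

def segment_phases(intensity, eps=0.02):
    """Label each frame-to-frame step as onset, offset, or apex/hold,
    then collapse runs of identical labels into (label, n_frames) phases."""
    diff = np.diff(intensity)
    labels = np.where(diff > eps, "onset", np.where(diff < -eps, "offset", "apex"))
    phases, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            phases.append((str(labels[start]), i - start))
            start = i
    return phases

# A toy smile: quick onset, an apex hold, then a stepwise offset with a pause.
trace = np.concatenate([np.linspace(0, 1, 5), np.full(8, 1.0),
                        np.linspace(1, 0.5, 10), np.full(5, 0.5),
                        np.linspace(0.5, 0, 15)])
print(segment_phases(trace))  # counts of onset/apex/offset phases
```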
The Componential View on Facial Expressions of Emotions
Recognition studies of emotions have been largely limited to a set of 7 (±2) prototypical facial patterns and minor variants on them. At the same time, Ekman (1972) and Izard (1994) acknowledged that each emotion is associated with more than one facial pattern. Indeed, Ekman and Friesen (1978) originally listed 55 facial patterns for six emotions (30 for anger, 8 for sadness, 6 each for fear and disgust, 3 for surprise, and 2 for happiness); this count ignores possible variations in head and eye movements and in the degree of mouth opening. Several predictions for incomplete prototypes were proposed by Ekman and Friesen in their 2003 book Unmasking the Face: A Guide to Recognizing Emotions from Facial Expressions, which contains detailed descriptions of partial prototypes centering on the brow, eye, or mouth region. Multiple patterns for a single emotion raise a conceptual problem: which pattern occurs in a given instance of the emotion, and why? For example, of the 6 variants predicted for disgust, what determines which one actually occurs on a specific occasion? Furthermore, facial expressions outside the predicted set of 55 may also occur. If so, Ekman and Friesen's (1978) analysis may not specify the full set of patterns that an observer will attribute to a specific emotional category. Nonetheless, Ekman (1980) was clear that all the patterns for a given emotion should be quite similar.
One characteristic trait of the still facial images provided by Ekman and colleagues is that they show global patterns. The typical facial expression used in most recognition studies is the result of different muscles acting to move the brows, eyelids, cheeks, and mouth, converging simultaneously to their maximum point of contraction. Ekman and Friesen (1978) developed a system for analyzing a facial display into its constituent movements, called action units (AUs). To illustrate, Figure 3 shows how the prototypical facial pattern of anger (Figure 2) can be decomposed into four different facial actions or AUs: AU4 (lowering the brows), AU5 (raising the upper eyelids), AU17 (raising the chin), and AU23 (pursing the lips).
Figure 2. The prototypical anger pattern AU4+5+17+23.

Figure 3. The four individual action units (AU4, AU5, AU17, AU23) that constitute the pattern shown in Figure 2.
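To make this decomposition concrete, the sketch below represents a FACS-coded expression as a set of AU numbers and checks it against a small prototype table. The anger entry follows Figures 2 and 3; the happiness and surprise entries, and all names, are simplified illustrations rather than the EMFACS interpretation tables used later in this thesis.

```python
# Illustrative sketch: a FACS-coded expression as a set of action units (AUs),
# matched against a small, hypothetical prototype table.
PROTOTYPES = {
    "anger": frozenset({4, 5, 17, 23}),   # cf. Figures 2 and 3
    "happiness": frozenset({6, 12}),      # illustrative entry
    "surprise": frozenset({1, 2, 5, 26}), # illustrative entry
}

def match_prototypes(observed: frozenset) -> list[str]:
    """Return labels whose full prototype is contained in the observed AUs."""
    return [label for label, aus in PROTOTYPES.items() if aus <= observed]

print(match_prototypes(frozenset({4, 5, 17, 23})))  # ['anger']
print(match_prototypes(frozenset({4, 5})))          # [] : a partial pattern matches nothing
```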
There is an alternative account of facial behavior that does not predict one specific facial pattern for each emotion and thus can explain the existence of multiple patterns. Indeed, it raises the possibility of even more diversity, including the frequent occurrence of no facial action, single actions, and small combinations of muscular groups. This alternative is commonly referred to as "componential theories". The central claim of the componential view of facially communicated emotions is that single elements of facial expressions might convey meaningful information at more molecular levels than full-blown prototypes (Smith & Scott, 1997).
Componential accounts of facial expressions of emotions are derived from appraisal
theories of emotion (for a review see Scherer, 2001). Appraisal theories claim that emotions
are elicited and differentiated by conscious and/or nonconscious evaluations of events and
situations. Although different appraisal theories vary with regard to both the number of
appraisal dimensions and their exact definitions, there is substantial overlap (Frijda, 1986;
Ortony & Turner, 1990; Russell, 1997; Scherer, 1984; Smith, 1989; Smith & Scott, 1997).
Amongst appraisal theories, Scherer's multi-componential model of emotions, the Component Process Model (CPM), is of particular interest because it provides specific predictions linking individual facial actions with appraisal dimensions (see Table 1, reproduced from Wehrle et al., 2000). According to this model, the temporal order in which individual facial actions unfold is seen to reflect an individual's ongoing cognitive processing of emotionally relevant stimuli (Scherer, 1992, 2009). The CPM's appraisal categories can be broadly divided into four classes: appraisals related (1) to the intrinsic properties of the stimulus, such as novelty and pleasantness; (2) to the significance of the event for the individual's needs and goals; (3) to the individual's ability to cope with the consequences of the event; and (4) to the compatibility of the event with social and personal norms and values. According to appraisal theory, it is the subjective evaluation of an event as pleasant or unpleasant, as conducive or obstructive to one's goals, as changeable or not, and as compatible or incompatible with social and personal norms that determines the type of emotion that is experienced.
Thus, an event that is appraised as pleasant and goal-conducive elicits joy, whereas one that is appraised as goal-obstructive and as difficult or impossible to redress elicits sadness (Scherer, 1992). In the CPM, an emotion episode is the result of a momentary synchronization of functionally distinct components, including cognitive appraisals, subjective feelings, physiological changes, pre-motor activation preparing for action, and facial expressions. No single component is common to all instances of any one type of emotion, and each component can function independently of the others and in the absence of any emotional feeling. If facial movements are the direct outcomes of an appraisal process, an emotion is therefore expressed in the face only indirectly, through its correlation with the other defining components.
To illustrate, let us return to Figure 2. Componential theory posits that several AUs are concomitant with a cognitive appraisal. In the first panel of Figure 3, the brows are lowered and brought together in an action called AU4 in the FACS system. The CPM predicts that this could signal that the person is appraising an event as unexpected, unfamiliar, unpleasant or as obstructing his/her goals. The raising of the upper eyelid, referred to as AU5 in the second panel of Figure 3, could reflect an attentional response to a sudden change in the environment.
Note that for Smith and Scott (1997) the appraisal component can initiate a facial expression even in the absence of any underlying emotional feeling. Conversely, a feeling of anger dissociated from the activation of additional components would produce no facial behavior at all. Put more formally, components are necessary and sufficient for facial action; emotions are neither necessary nor sufficient for facial action. On this view, prototypical patterns of facial actions can arise only secondarily, through the coincidental combination of two or more dimensions of the appraisal process. Table 2 provides some predictions for facial action patterning corresponding to the basic categories identified by Ekman. Of course, these are just hypotheses to illustrate the componential view of facial expression. Even though some encouraging data have been reported (see Lanctôt and Hess, 2007; Aue and Scherer, 2008; Delplanque et al., 2009), most of the details remain to be established empirically.
Nevertheless, this view opens up the possibility that partial expressions that would seem meaningless when considered in isolation could in fact be quite informative when preceding and following actions are taken into consideration. Authors in this componential tradition have called for more research on the temporal dynamics of facial expressions (Kaiser and Wehrle, 2008) and on their possible combinations with other expressive modalities (Scherer and Ellgring, 2007b).
Summary
Besides speech, we notably use facial expressions, gaze direction, vocal modulation
and body segment orientation to interact with others. One characteristic of human
communicative abilities is to combine actions from different behavioral modalities into
specific patterns that involve either some temporal overlap or sequence. For example, a vocal
emphasis on a word might begin and end within a bilateral rising of the brows; or a gaze at a
person’s face might contain a smile that is followed by downwards movements of the head
and eyes. To date little attention has been given to the temporal sequence in which facial
actions unfold and how they are coordinated with head and eye motions. Such coordinated
patterns may be perceived as communicating specific emotional meanings, but relevant
research is scarce. In this thesis, we attempt to provide detailed examples of the ways dynamic facial expressions of emotion are produced and perceived. By extracting and representing the
sequential unfolding of facial and other nonverbal actions during spontaneous emotional
displays, nonverbal analysis can begin to discriminate among the message values of otherwise
undetected features of expressive actions. This is a critical step if we are to move from simple
prototypical expression recognition to the interpretation of dynamic and naturally occurring
expressions.
Table 1. Scherer's Component Process Model predictions linking cognitive appraisals with facial muscle group activations. Retrieved from Wehrle et al. (2000). (NA = no prediction; outcome A / outcome B given for each appraisal dimension.)

Novelty (High / Low)
- Suddenness: High: 1+2+5+26/27/38 / gazing at; Low: NA
- Familiarity: High: NA; Low: 4B+7
- Predictability: High: NA; Low: 4B+7

Intrinsic pleasantness (High / Low)
- Taste: Pleasant: 6+12+25/26; Unpleasant: (9+10)+16+19+26
- Sight: Pleasant: 5A+26A; Unpleasant: 4+7/(43)/44+51/52+(61/62)
- Smell: Pleasant: 26A+38; Unpleasant: 9+(10)+(15+17)+(24)+39
- Sounds: Pleasant: 12+(25)+43; Unpleasant: any combinations of the others

Goal significance
- Goal relevance: High: focusing responses, lower intensity of the cumulation of the 2 first SECs; Low: NA
- Outcome probability: Probable: higher intensity for future responses; Not probable: lower intensity for future responses
- Expectation: Consonant: NA; Dissonant: reactivation of the novelty response, 1+2+5 or 4B+7
- Conduciveness: Conducive: 6+12; Obstructive: 4C (long)+7+17+23/24
- Urgency: Urgent: intensification, high tension; Not urgent: deamplification, low tension

Causality
- Agent: Self or nonhuman: less intense than external personal attribution; Other person: intensify future responses, more intense than self or nonpersonal attribution
- Motive: Nonintentional: diminution of intensity of existing and future responses; Intentional: intensify existing and future responses, more intense than nonintentional

Coping potential
- Control: Low: 15+25/26+43B/43C+54+61/62+64 or 1+4; High: 4+5 or 7+23+25
- Power: Low: 20+26/5; High: NA
- Adjustment: Low: holding the existing pattern; High: deamplification

Standards compatibility (Achieve, comply or surpass standards / Fail to achieve or violate standards)
- Self: 17+24(+53) / 14/43A/43C+54+55/56+61+62+64
- Self: 17+24(+53) / 41/42/43+54+55/56+61/62+64
- Other: direct gaze at, 1+2+5+26 / 4+10U+(12)+(53+64)/12U/14U
Table 2. Component Process Model: Facial Action Unit predictions for five modal emotions. Derived from Scherer (2001).

Emotion    Predicted Appraisal(1)   Predicted Sequence(2)
Anger      Sudden                   1+2
           Unfam./Discrepant        4+7
           Obstructive              17+23, 17+24
           High Coping              23+25, 17+23, 17+24
Disgust    Unpleasant               9, 10, 15, 39
           High Coping              23+25, 17+23, 6+17+24
Fear       Sudden                   1+2
           Unfam./Obstructive       4+7
           Low Coping               20, 26, 27
Sadness    Discrepant               4+7
           Obstructive              17+23, 17+24
           Low Coping               20, 26, 27
Happiness  Sudden                   1+2
           Pleasant                 12+25
           Conducive                6+12

Notes: 1. Predicted Appraisal: antecedents postulated by the CPM. 2. In the original table, left, centre and right alignment in this column are used to suggest the relative temporal position of the indicated action units (+: simultaneous AUs; ,: alternative AUs).
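
To make the structure of these predictions concrete, the following minimal sketch represents Table 2 as data that can be queried for an emotion's predicted appraisal-to-AU sequence. It is illustrative only: the AU strings are transcribed from Table 2, while the dictionary and function names are ours.

    # A minimal sketch (illustrative only): Table 2's CPM predictions encoded
    # as ordered (appraisal check, predicted AUs) pairs per modal emotion.
    CPM_PREDICTIONS = {
        "anger":     [("sudden", "1+2"), ("unfamiliar/discrepant", "4+7"),
                      ("obstructive", "17+23, 17+24"),
                      ("high coping", "23+25, 17+23, 17+24")],
        "disgust":   [("unpleasant", "9, 10, 15, 39"),
                      ("high coping", "23+25, 17+23, 6+17+24")],
        "fear":      [("sudden", "1+2"), ("unfamiliar/obstructive", "4+7"),
                      ("low coping", "20, 26, 27")],
        "sadness":   [("discrepant", "4+7"), ("obstructive", "17+23, 17+24"),
                      ("low coping", "20, 26, 27")],
        "happiness": [("sudden", "1+2"), ("pleasant", "12+25"),
                      ("conducive", "6+12")],
    }

    def predicted_sequence(emotion):
        """Return the AU combinations in their predicted temporal order."""
        return [aus for _, aus in CPM_PREDICTIONS[emotion]]

    print(predicted_sequence("fear"))  # ['1+2', '4+7', '20, 26, 27']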
III. EXPERIMENTAL SECTION
Research aims
The general aims of this thesis are exploratory in nature. They can be divided into two
main questions that we will attempt to address. First, we want to investigate the possibility
that the natural unfolding of spontaneously produced facial expressions, combined with head, gaze and speech parameters, follows structured rules of organization that can be detected by hierarchical sequential pattern analysis. Second, in case such patterns can indeed be detected,
we want to test the possibility that they may play a role in the way emotions are perceived
from the face.
Collecting samples of dynamic emotional facial expressions
In order to accomplish our research tasks we first need to collect a database of
dynamic facial expressions that are both natural and emotional enough. Describing the steps
taken to create such a database will be the topic of the next sections.
The MeMo database
The MeMo (Multimodal Emotions) corpus was created for the purpose of this study. We wanted to collect a database of facial expressions that were both natural enough and emotional enough to be used for this thesis. We decided to use an emotion sharing task as the affect eliciting methodology. MeMo is an audiovisual database consisting of 200 short video segments extracted from 50 autobiographic narratives of emotional episodes produced in the context of face-to-face semi-structured interviews.
Emotional narratives eliciting task
The emotional memories were elicited using a set of fixed propositions corresponding
to situation parameters predicted to lead to specific emotions according to appraisal theories
(see Scherer, 2001). We used predictions of appraisal patterns to produce items to guide
participants in recollecting and selecting five negatively valenced emotions corresponding to:
anger, fear, guilt, sadness and contempt. Before being used in the actual experimental task, the appraisal items had been tested and refined in three pilot studies until respondents could identify the predicted emotion at least 75% of the time. During the experimental task, each participant was expected to produce five narratives of personal events during which they had experienced intense emotions. At no time were the participants told the type of emotion
expected; the only constraint was that the retrieved event had to correspond to the suggested features detailed in the appraisal items. The complete list of appraisal items used for this protocol can be found in appendix 5. The order of the five emotional narrative elicitation tasks was counterbalanced, in order to neutralize potential effects of fatigue and habituation with the task.
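
The text does not detail the counterbalancing scheme used. As an illustration only, the sketch below cycles participants through randomized permutations of the five narrative tasks; all names in it are ours.

    import itertools
    import random

    EMOTIONS = ["anger", "fear", "guilt", "sadness", "contempt"]

    def counterbalanced_orders(n_participants, seed=0):
        """Cycle participants through shuffled permutations of the five
        tasks, so that no task order (and hence no fatigue or habituation
        position) dominates across the sample."""
        rng = random.Random(seed)
        orders = list(itertools.permutations(EMOTIONS))  # 120 possible orders
        rng.shuffle(orders)
        return [list(orders[i % len(orders)]) for i in range(n_participants)]

    for order in counterbalanced_orders(3):
        print(order)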
Participants to the narration tasks (production study)
Participants were 16 females recruited through ads placed in various university facilities. Ages varied from 23 to 56 (M = 31). Because of the potential negative effects of remembering difficult life events, each participant was screened for signs of depressive symptoms and current use of antidepressant medication prior to inclusion in the study. Signs of depression were assessed with the 21-item Beck Depression Inventory (Cottraux, 1985). According to protocol, prospective participants with a BDI score over 13 (upper threshold for low symptoms) or currently on antidepressant medication would not be invited to take part in the experiment. In practice, none of the prospective participants met these exclusion criteria.
Out of the 16 participants, 13 completed the protocol to the end. Three participants were unable to produce all the required narratives: one could not produce a narrative targeting anger and two others could not produce narratives targeting contempt. We later had to forgo using the videos of three additional participants because of technical problems. The first was not included because the audio recording did not function, the second because the participant often moved outside of the camera's frame, and the third because the camera unexpectedly stopped during the session.
Laboratory and interview settings
The narratives were recorded in our lab at the University of Geneva. The participants were seated on a chair and their faces were videotaped by a hidden camera located in a cupboard in front of them. The author of this thesis was seated at a 45° angle to the right, at a distance of one and a half meters from the participant. The role of the interviewer was to present the instructions for the tasks and to introduce the items for the guided recollection and selection of personal memories. Once the participants were ready to begin their storytelling, we would listen without asking questions or engaging in conversation. Support in the form of backchannel signals and minimal empathic statements was given when appropriate. After each narrative ended, the participant was handed a self-report questionnaire to fill in, in order to assess the possible induction of emotions aroused during the task. Once the self-report form was completed, we would leave the room for five minutes to let the participant take a short break. We would then come back and present the instructions for the next narrative. This sequence was repeated four times, until the participant had produced the five narratives. Finally, at the end of the procedure, we debriefed the participants about the purpose of the study and obtained their permission to make use of their films for scientific purposes. Each participant received 50 Frs. in return for participation.
Assessment of emotional induction
The emotional induction effects of the task were assessed by self-reports once the narratives were completed. Emotional induction was measured with the "Geneva Emotion Wheel" (Bänziger, Tran and Scherer, 2005), a self-report tool composed of 20 frequently used French emotional terms that can each be rated for intensity on a 5-point scale. A response indicating no emotion could also be checked. Participants were asked to report the emotions, if any, that they had experienced while sharing their story. Self-report data show that the emotional induction was generally effective for each narrative (Table 3).
Table 3. Emotional Induction Assessment. Group means of felt-emotion ratings per induction task (N = 10 per task; SD in brackets).

Task induction  No emotion  Anger        Fear         Guilt        Contempt     Sadness      Other positive  Other negative
Anger           0.00 (-)    3.20 (1.80)  0.00 (-)     0.00 (-)     2.27 (1.90)  0.00 (-)     1.80 (0.50)     2.90 (0.56)
Fear            0.00 (-)    1.30 (0.52)  1.80 (0.67)  2.20 (0.80)  0.00 (-)     0.00 (-)     2.30 (0.95)     1.20 (1.05)
Guilt           0.00 (-)    1.20 (0.39)  0.00 (-)     2.30 (1.20)  0.00 (-)     1.05 (0.23)  1.30 (0.80)     2.40 (1.25)
Contempt        0.00 (-)    2.90 (1.70)  0.00 (-)     0.00 (-)     3.20 (1.40)  1.00 (0.59)  0.00 (-)        1.22 (0.90)
Sadness         0.00 (-)    2.30 (1.30)  0.60 (0.25)  0.86 (0.50)  0.00 (-)     3.90 (2.10)  2.30 (1.84)     1.80 (0.84)
For example, when telling sadness stories, participants reported high scores of sadness (3.90). In fact, the emotion targeted by the appraisal profiles is always the one with the highest score, with the exception of fear. This makes sense, because narratives of fear frequently involved situations in which a participant felt threatened by a situation that, in the end, did not turn out as badly as expected. To illustrate this point, one participant told the story of a time when she was caught on a boat at sea during a storm. She really felt she was about to die at that time; when recalling the event, she frequently blamed herself in front of us for having neglected to check the weather forecast before sailing off with the boat that morning. She also expressed her relief at having survived this dreadful experience. Consequently, her self-reported current level of fear is rather low, while her scores on guilt and "other positive" (relief in this case) are rather high. On the other hand, the sadness appraisal involves an irreversible loss; the potential for reactivating a sadness affect is therefore high, because the stories frequently trigger unfinished business, like the unexpected death of a close relative or friend. Interestingly, even though each narrative is usually characterized by a dominant emotion, participants did not hesitate to use additional adjectives to account for complex feelings.
Extracting video sample files from the original films
The second step of the study was to select the most appropriate material, that is, stimuli both natural and emotional enough, in order to end up with a good quality sample of spontaneous and dynamic facial displays. Two judges, undergraduate psychology students not otherwise involved in the study, were instructed to independently watch the 50 films and time-mark the start of sequences where participants "seemed" to be experiencing an emotion. Segments on which the judges agreed yielded an initial database of 350 clips. In order to
avoid giving more weight to the expressive style of some participants over others, we randomly selected 20 clips per participant (corresponding to the maximum number of clips extracted from the least expressive participant), ending up with a core set of 200 video sample files. It should be noted that, to date, there is no consensual criterion for defining dynamic sequences. Researchers using video clips do not always specify the length of their films and, when they do so, do not justify it (clip length typically varies from 2 seconds to at least 24 seconds). For the present research, the rule set for extracting a segment of video was to include the full propositional unit accompanying, preceding or following the emotional sequence. A propositional unit was defined as a unit composed, at a minimum, of an actor (generally the grammatical subject of the sentence) and an action verb. The mean duration of the sequences in our database is 5.88 seconds (minimum 1.8 s, maximum 15 s).
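
The balanced selection step can be illustrated with a minimal sketch (names and data layout are ours, not the scripts actually used for the study):

    import random
    from collections import defaultdict

    def balanced_sample(clips, n_per_participant=20, seed=0):
        """Randomly keep the same number of clips per participant, so that
        more expressive participants do not dominate the corpus.
        `clips` is a list of (participant_id, clip_id) pairs; every
        participant is assumed to have at least n_per_participant clips."""
        rng = random.Random(seed)
        by_participant = defaultdict(list)
        for participant, clip in clips:
            by_participant[participant].append(clip)
        sample = []
        for participant, ids in by_participant.items():
            sample += [(participant, c) for c in rng.sample(ids, n_per_participant)]
        return sample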
Assessing the message value of spontaneously expressed dynamic displays of emotions
Now that we have collected the database needed to address our research questions we
need to verify that our sample files of video records are indeed perceived as conveying
reliable emotional signals. Our next step has been to determine what methodology to use in
order to answer this question. In most studies of emotion perception from facial expressions,
subjects are shown photographs of prototypical expressions and asked to choose one adjective
from a short list to classify them. For example, presented with a predicted "anger" expression and given the list "happy, surprise, fear, anger and disgust", most subjects have been shown to choose the "anger" label to characterize the expression (Ekman and Friesen, 1975).
Many important studies presenting evidence in favor of basic emotions being recognized from prototypical expressions have used forced choice response formats (Boucher and Carlson, 1980; Ekman, 1972; Ekman and Friesen, 1971, 1975; Ekman et al., 1987; Ekman and Heider, 1988; Ekman, Sorensen and Friesen, 1969; Izard, 1971; MacAndrew, 1986; Niit and Valsiner, 1977). Nevertheless, in the context of emotion judgment studies, forcing a participant to choose one from a short list of emotion labels has been shown to inflate agreement and even to produce blatant artifacts (Russell, 1994). Providing the judge with more options lowers agreement (Banse & Scherer, 1996). Allowing the judge to specify any emotion (free labeling) lowers agreement still further (Russell, 1994). Some of the artifacts can be eliminated by providing "none of the above" as a response option (Frank & Stennett, 2001).
One alternative to the forced choice response format that has been used, but has not received as much attention, involves scalar ratings of multiple labels. Multi-scalar rating tasks are interesting because observers can describe not only the most salient emotions they perceive (by giving a label higher ratings than others); they can also rate the presence of other emotions, or report a neutral or no-emotion impression by giving all labels zeros (Yrizarry, Matsumoto, and Wilson-Cohn, 1998). The ability to detect the presence of multiple emotions and to provide a neutral response makes this task unique. A prototypical analysis of the French affective lexicon has revealed that the intensity component of the subjective emotional experience was the most important predictor of prototypicality for the French category "émotion" (Niedenthal et al., 2003).
Judgment task
Judgment task: multi-scalar ratings. To report their impressions of what the subjects on the video were expressing, participants rated the relevance for each video sample of 17 adjectives (embarrassed, disgusted, ironic, proud, surprised, nervous, entertained, sad, scornful, joyful, affectionate, angry, enthusiastic, anxious, perplexed, disappointed and relieved) by rating the intensity of each on a continuous bipolar scale anchored at "Not at all" and "A lot". The adjectives were selected with several criteria in mind.
Participants to the judgment task and rating protocol
A total of 45 individuals participated in this study. All were native French-speaking women recruited through ads posted in various university facilities. Female judges were chosen because their accuracy in judging the emotional meaning of nonverbal cues is well established to be greater than males' (Hall, 1978, 1984; Hall, Carter, & Horgan, 2000). The overall gender difference corresponds to a Cohen's d (Cohen, 1988) of about .40 and a point-biserial correlation (r) of about .20. The tasks used in this literature encompass wide variation in stimuli and response options, such as whether observers are asked to identify emotions, situations, or interpersonal relationships (Costanzo & Archer, 1989; Hall, 1978, 1984; Nowicki & Duke, 1994; Rosenthal, Hall, DiMatteo, Rogers, & Archer, 1979). More recent research suggests that women's higher accuracy in "recognition" of the emotional meaning of nonverbal cues extends to stimuli presented so fast as to be at the edge of conscious awareness (Hall and Matsumoto, 2004).
In this study, each participant worked alone and received a nominal fee of 25 Frs. for participation. Upon arrival at the laboratory, the participants were all given the same oral instructions on how to proceed with the task. They were told that they would be seeing brief videotape samples extracted from longer interviews in which individuals had been asked to recall and talk about an autobiographical event from their life. They were informed that they would be asked to decide, for each video sample, the degree to which 17 adjectives corresponded to their impression of the person on the video. They were then handed a written copy of the list of rating adjectives with corresponding definitions (see appendix 5). After reading the definitions they could ask for clarifications about the meaning of a specific term or definition. In order to limit the impact of cognitive overload and fatigue on the experimental results, several measures were taken. First, participants were not asked to rate the totality of the 200 video samples. Our pilot studies had previously shown that rating the
200 videos took 2h30 on average and was experienced as highly demanding by most participants. The 45 participants in this study were randomly assigned to one of three groups of 15 individuals each. Each group had to rate a unique subset of the original core set of 200 videos. At least six videos from each of the ten filmed participants were distributed across the three groups, so that no encoder would be over-represented in any group. Group one rated 72 sample files (judges' mean age = 26.4; Caucasians 87%, other ethnicities 13%); group two rated 72 video samples (mean age = 25.8; Caucasians 100%); group three rated 71 video samples (mean age = 26.3; Caucasians 93%, other ethnicities 7%).
Before pooling the results of the three rating blocks, we first had to verify that the judges' use of the scales was consistent for a common subset of video samples. This was done by computing interrater agreement indexes across the three groups for five videos that were not part of the 200 core set files. The actual rating task was implemented in a Matlab script written for this study, so that each participant could work alone during the task. Given the large number of adjectives to review for each video record, we decided to present each record twice. Right after the presentation of a video, a screen appeared with either 8 or 9 of the 17 adjective scales. By default, the cursor appeared at the center of the first scale at the top of the screen. The cursor would only move to the next scale item after the participant provided her response by moving it to the desired location on the line and entering the response with a mouse click. When all the items listed on the page had been rated, the routine would load the same video a second time. The remaining adjectives were then presented in the same way.
To neutralize order of presentation effects, both the position of the adjectives on the screen and the order of the video files were randomized by the program for each trial. Participants were seated in front of a Dell PC wired to a 17-inch screen and stereo headphones. Before the actual rating task started, the instructions given orally were presented again on the screen and a mock trial was launched to ensure that the participants had understood and could comply with the instructions. Once any questions were answered and the participant understood the task, the program was started. The experiment ended with the completion of the ratings for the last item.
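
The logic of the presentation routine can be sketched as follows. This is not the Matlab script actually used, but a hedged Python illustration of the two-screen, randomized design described above; all names are ours.

    import random

    ADJECTIVES = ["embarrassed", "disgusted", "ironic", "proud", "surprised",
                  "nervous", "entertained", "sad", "scornful", "joyful",
                  "affectionate", "angry", "enthusiastic", "anxious",
                  "perplexed", "disappointed", "relieved"]

    def build_trial(video, rng):
        """One trial: the video is shown twice, with the 17 randomly
        ordered adjectives split over two screens of 8 and 9 scales."""
        order = ADJECTIVES[:]
        rng.shuffle(order)
        return [(video, order[:8]), (video, order[8:])]

    def build_session(videos, seed):
        """Randomize the video order per participant, then expand each
        video into its two rating screens."""
        rng = random.Random(seed)
        videos = list(videos)
        rng.shuffle(videos)
        return [page for v in videos for page in build_trial(v, rng)]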
Reliability analyses
A first Cronbach's α was computed on the sum of the 17 scale scores for the five video samples common to the three rating blocks. Because the alpha was high (α = 0.94, standardized α = 0.95, mean inter-item correlation = 0.20), we felt justified in pooling the ratings of the
three groups together for further analysis. Cronbach's alphas were then computed for each adjective scale independently. Results are reported in table 4. Setting a lower threshold of α = 0.70, the analysis supports the internal reliability of 14 adjective scales. Three adjectives had to be rejected according to this criterion: proud, relieved and affectionate. In subsequent analyses we dropped these three scales, for which internal reliability was not supported.
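
For reference, Cronbach's alpha can be computed from a cases-by-items rating matrix with the standard formula, as in this minimal sketch (function name is ours):

    import numpy as np

    def cronbach_alpha(ratings):
        """Cronbach's alpha for an (n_cases x n_items) rating matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
        ratings = np.asarray(ratings, dtype=float)
        k = ratings.shape[1]
        item_vars = ratings.var(axis=0, ddof=1)
        total_var = ratings.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars.sum() / total_var)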
Principal Components Factor Analysis
To reduce the number of remaining terms, we performed a principal components factor analysis with varimax rotation on the ratings, with the terms as variables and the subjects and clips as cases. Using eigenvalues > 0.90, it revealed five factors (see table 5).
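
A sketch of this analysis is given below, using numpy rather than the statistical package originally employed (which the text does not name). The varimax step follows Kaiser's (1958) classic algorithm; note that the sign and order of rotated loadings may differ from Table 6.

    import numpy as np

    def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
        """Varimax rotation of a (variables x factors) loading matrix."""
        p, k = loadings.shape
        rotation = np.eye(k)
        var_sum = 0.0
        for _ in range(max_iter):
            lam = loadings @ rotation
            grad = loadings.T @ (lam ** 3 -
                                 (gamma / p) * lam @ np.diag((lam ** 2).sum(axis=0)))
            u, s, vt = np.linalg.svd(grad)
            rotation = u @ vt
            new_var = s.sum()
            if var_sum != 0 and new_var / var_sum < 1 + tol:
                break
            var_sum = new_var
        return loadings @ rotation

    def pca_loadings(ratings, min_eigenvalue=0.90):
        """Principal components of the correlation matrix; keep components
        whose eigenvalue exceeds min_eigenvalue (0.90, as in the text),
        then varimax-rotate the retained loadings."""
        corr = np.corrcoef(ratings, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(corr)
        order = np.argsort(eigvals)[::-1]
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        keep = eigvals > min_eigenvalue
        return varimax(eigvecs[:, keep] * np.sqrt(eigvals[keep]))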
Table 4. Intraclass correlations for the emotion scale items. For each rater block: Cronbach's alpha / standardized alpha / mean inter-item correlation.

Adjective      Raters block 1       Raters block 2       Raters block 3
Affectionate   0.55 / 0.64 / 0.10   0.52 / 0.60 / 0.09   0.60 / 0.68 / 0.15
Angry          0.88 / 0.89 / 0.36   0.83 / 0.85 / 0.28   0.87 / 0.87 / 0.33
Anxious        0.80 / 0.81 / 0.21   0.84 / 0.85 / 0.28   0.81 / 0.82 / 0.24
Disappointed   0.83 / 0.84 / 0.26   0.83 / 0.83 / 0.25   0.85 / 0.86 / 0.29
Disgusted      0.84 / 0.86 / 0.30   0.83 / 0.84 / 0.26   0.80 / 0.81 / 0.25
Embarrassed    0.77 / 0.79 / 0.20   0.80 / 0.81 / 0.23   0.77 / 0.78 / 0.19
Entertained    0.86 / 0.88 / 0.34   0.88 / 0.88 / 0.35   0.81 / 0.85 / 0.28
Enthusiastic   0.78 / 0.79 / 0.30   0.78 / 0.88 / 0.34   0.72 / 0.73 / 0.16
Ironic         0.81 / 0.83 / 0.23   0.80 / 0.82 / 0.24   0.84 / 0.85 / 0.29
Joyful         0.79 / 0.81 / 0.24   0.88 / 0.89 / 0.38   0.80 / 0.81 / 0.25
Nervous        0.77 / 0.78 / 0.18   0.78 / 0.81 / 0.14   0.76 / 0.80 / 0.30
Perplexed      0.79 / 0.81 / 0.23   0.85 / 0.85 / 0.29   0.71 / 0.69 / 0.13
Proud          0.62 / 0.68 / 0.18   0.68 / 0.64 / 0.25   0.60 / 0.61 / 0.53
Relieved       0.66 / 0.69 / 0.25   0.60 / 0.66 / 0.12   0.53 / 0.65 / 0.11
Sad            0.89 / 0.89 / 0.37   0.91 / 0.91 / 0.42   0.89 / 0.90 / 0.38
Scornful       0.71 / 0.74 / 0.16   0.78 / 0.76 / 0.19   0.82 / 0.83 / 0.25
Surprised      0.84 / 0.86 / 0.21   0.81 / 0.84 / 0.26   0.82 / 0.83 / 0.25
Table 5. Eigenvalues
Factor Eigenvalue % Total Cumulative %
Factor 1 4.19 29.93 29.93
Factor 2 2.83 20.21 50.15
Factor 3 1.58 11.27 61.42
Factor 4 1.49 10.62 72.03
Factor 5 0.90 6.44 78.47
These accounted for >78% of the variance in the contextual use of the adjectives. We
assigned each adjective to only one factor according to its largest partial correlation
coefficient. The loadings are listed in table 6.
Table 6. Factors loadings
Factor 1 Factor 2 Factor 3 Factor 4 Factor 5
Adjectives Enjoyment Hostility Embarrassment Surprise Sadness
Angry 0.12 0.87 -0.01 0.08 0.10
Anxious 0.28 0.12 0.41 -0.21 -0.65
Disappointed 0.26 -0.43 -0.08 0.10 0.71
Disgusted 0.16 0.84 -0.04 -0.03 -0.32
Embarrassed 0.06 0.29 0.82 0.03 -0.11
Entertained 0.83 0.13 0.13 0.12 0.31
Enthusiastic 0.87 0.10 -0.24 -0.05 0.12
Ironic -0.48 -0.14 0.22 0.25 0.41
Joyful 0.93 0.12 0.03 0.05 0.16
Nervous -0.04 -0.07 0.89 -0.13 -0.05
Perplexed 0.24 0.07 0.24 0.79 -0.15
Sad 0.30 0.11 0.15 0.16 0.82
Scornful 0.03 0.84 -0.15 0.09 0.15
Surprised -0.08 0.06 -0.05 0.85 0.22
The adjectives with the highest loadings on the first factor are “entertained”,
“enthusiastic” and “joyful”. Because these adjectives all refer to some pleasant affect we
named this factor “enjoyment”. On the second factor the highest loadings are on the “angry”,
“scornful” and “disgusted” scales. Because these terms all have a connotation of rejecting or
opposing something/someone, we decided to refer to this factor as "hostility". For the third factor, the loadings are highest on the "nervous" and "embarrassed" scales. We decided to
refer to it as “embarrassment”. The fourth factor was named “surprise” for its highest loadings
on the “surprised” and “perplexed” scales and the fifth was labeled “sadness” with high
loadings on the “sad” and “disappointed” scales. In this study it appears that the raters have
not made a differentiated use of the six adjectives possibly referring to enjoyable affects. The
use of the "relieved", "affectionate" and "proud" terms was inconsistent across judges, and these scales had to be dropped from the analysis. As for the three remaining terms, the notion of joy is highly correlated in its use with the concepts of entertainment/amusement and enthusiasm. Even though Paul Ekman acknowledges the probable existence of several different enjoyable emotions (sensory pleasures, excitement, relief, wonder, ecstasy, pride, schadenfreude (the enjoyable feeling experienced when one learns that an enemy has suffered), elevation, amusement and gratitude), he seems quite convinced that the face does not provide distinctive signals for each of these emotions (Ekman, 2003). Rather, he has suggested that the Duchenne smile (Ekman, Davidson and Friesen, 1990) is a part of all of them. He further proposes that the voice might provide the distinctive signal for each of them. Despite the fact that the three adjective scales angry, disgusted and scornful refer to emotions considered more fundamental or basic than others, our data do not support the view that participants made a differentiated use of these terms to report their impressions of the displays presented to
them. Recently, Widen and Russell (2008) produced evidence showing that 4-year-old children know the meaning of the word disgust as well as the meanings of anger and fear; when asked, they are equally able to generate a plausible cause for each of these emotions. Yet, in tasks involving finding a "disgust face" in an array of faces, a majority of 3- to 7-year-old children associate the prototypical "disgust face" with anger and deny its association with disgust. Moreover, 25% of adults on the same tasks were shown to do so as well. As for contempt expressions, it has previously been documented that native English speakers do not label the contempt expression as "contempt" in free-response tasks (Haidt & Keltner, 1999; Rosenberg & Ekman, 1995; Russell, 1991; Wagner, 2000), in which participants can generate any label of their own to describe the stimuli. These tasks are completely free of any effects of a process of elimination. The modal response of Americans freely labeling the contempt expression in Haidt and Keltner's (1999) study was "annoyance"; for Canadians in Russell's (1991b) study, it was "disgust". One suggested explanation for the low agreement rates for native English speakers judging contempt is that people understand which situations are associated with the contempt expression even though they do not have an agreed-on label for such situations or for the expressions that occur within them (see Matsumoto & Ekman, 2004). To our knowledge, no similar studies have been conducted with French labels, so we can only speculate that the results observed in English-speaking samples might also be relevant for French-speaking participants.
Clustering of video files on factor scores
To constitute groups of video samples that were rated most similarly on the five factors extracted from the previous PCA, we performed a K-means clustering on the video records' mean factor scores (across subjects). In order to maximize the probability of characterizing each group of video samples by one factor, we first imposed a five-group solution on the analysis. Figure 4 shows that this attempt proved adequate, since all the clusters do indeed relate strongly to one specific factor.3
Figure 4. K-means clustering: plot of mean factor scores (y-axis, from -1 to 3) on the five factors (x-axis, 1 to 5) for each of the five clusters (Cluster 1 to Cluster 5).
3 The coherence of the k-means clustering was also examined for two-, three- and six-group solutions. The details of these analyses are not reported here for reasons of readability. For information, two groups divided the video samples into what could be referred to as videos rated positively versus negatively. A three-group solution maintained the positive/negative partition, but the third cluster could not be interpreted easily. A four-cluster partition yielded a solution where three factors were best represented by three clusters (enjoyment, surprise and sadness); the second and third factors were mixed with all the others in clusters two and three. A six-cluster solution produced two variants of the embarrassment cluster, one where the enjoyment factor played a substantial role along with the embarrassment factor and one where embarrassment stood alone. This partition could have been interesting to explore in order to differentiate between two possible types of embarrassment displays. Unfortunately, this solution produced an enjoyment group constituted of only eight video files. We decided to privilege the five-group solution, which maximized our power of analysis for further quantitative explorations.
The results of the K-means clustering in five groups justify relabeling each cluster according to the factor that most characterizes it. Accordingly, cluster one will be renamed "enjoyment", cluster two "hostility", cluster three "embarrassment", cluster four "surprise" and cluster five "sadness". Figure 5 shows that the 200 video files constituting our core set are not distributed evenly across the five clusters. The enjoyment and surprise groups are the least populated clusters, with 14 and 26 sample files respectively. The three other clusters share approximately the same number of video files.
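
As a sketch of this step, the clustering could be reproduced with scikit-learn (not the statistical software originally used, which the text does not name):

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_videos(factor_scores, n_clusters=5, seed=0):
        """Cluster videos on their mean factor scores.
        factor_scores: array of shape (n_videos, n_factors), here (200, 5).
        Returns one cluster label per video and the cluster centroids
        (the means plotted in Figure 4)."""
        km = KMeans(n_clusters=n_clusters, n_init=50, random_state=seed)
        labels = km.fit_predict(np.asarray(factor_scores))
        return labels, km.cluster_centers_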
Figure 5. Frequency distribution of video samples in the five clusters: Enjoyment N=14 (7%); Surprise N=26 (13%); Embarrassment N=51 (26%); Hostility N=54 (27%); Sadness N=55 (27%).
Methodology of behavior annotation
In this section, we present the various methodological options chosen to annotate
facial displays and other communicative signals of interest in the MeMo database.
The Anvil annotation tool
Anvil (Kipp, 2004) is a free tool, written in Java, well suited for the manual annotation of audio-video corpora containing behaviors from different communicative modalities. Annotation takes place on freely definable multiple tracks by inserting time-anchored elements (variables) that can be further specified by several attribute values (modalities for a variable). Anvil allows for a multiple-track export in text format. Each annotated element is defined by its behavioural class, attributes, onset time, end time and duration. The beginning and end tags of each element on an annotation track are precisely aligned to match the corresponding beginning and end of the segment of video they refer to (see figure 6 for an illustration of the Anvil annotation environment).
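
Downstream analyses consume such exports as lists of time-stamped records. The sketch below illustrates one way of reading them; the tab-separated column layout (track, value, onset, offset) is a hypothetical example, since the actual column order depends on the Anvil export settings.

    from dataclasses import dataclass

    @dataclass
    class Annotation:
        track: str       # behavioural class, e.g. "AU12" or "Head Nod"
        value: str       # attribute value, e.g. intensity "low"/"medium"/"high"
        onset: float     # seconds
        offset: float    # seconds

        @property
        def duration(self):
            return self.offset - self.onset

    def read_anvil_export(path):
        """Read a text export into Annotation records (hypothetical
        tab-separated layout: track, value, onset, offset)."""
        annotations = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                track, value, onset, offset = line.rstrip("\n").split("\t")
                annotations.append(
                    Annotation(track, value, float(onset), float(offset)))
        return annotations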
Coding scheme
Our multimodal annotation script was written in XML and implemented in the beta version 4.9 of the Anvil software. The annotation window provides 4 tracks for the annotation of formal linguistic features: speech flow, verbal encoding difficulties, vocal emphasis and non-linguistic vocalizations. In addition, we provided 35 tracks for nonverbal behavior coding.
Figure 6. Anvil Annotation Environment
Measurement of facial activity
Facial expressions of emotions, or any facial displays of interest (e.g. the raised brows found in greeting displays), result from the contraction of facial muscles and its consequent effects on the skin and underlying subcutaneous fascia. Facial activity was annotated according to the Facial Action Coding System, or FACS (Ekman and Friesen, 1978; Ekman, Friesen and Hager, 2002). The use of FACS for this research imposed itself for two reasons. First, FACS is the current standard tool for researchers looking for a measurement system allowing the micro-analytic collection and analysis of facial movements. Second, contrary to facial EMG, which requires the placement of several electrodes on the surface of the facial skin, FACS is non-invasive, and only actions visible to the coders are taken into account. This is important since we are interested in measuring facial activity that is potentially accessible for inferences about emotional states.
FACS is a comprehensive and anatomically based coding system designed for the measurement of all visually distinguishable facial activity on the basis of 44 action units (AUs) and action descriptors (ADs), as well as several additional categories of head and eye positions and movements. Action descriptor codes differ from action units in that the authors of FACS have not specified the muscular basis for these actions and have not
distinguished specific behaviors as precisely as they have for the AUs (see tables 7 and 8 for a description of FACS codes). The development of FACS greatly benefited from the work of the Swedish anatomist Hjortsjö (1970), who was the first to describe the muscular basis of facial expressions (see figure 7) and to classify the visible changes on the surface of the skin caused by different muscular actions (see table 7).
Figure 7. Facial Muscular Structure
Table 7. Single Action Units in the Facial Action Coding System

AU codes  Description               Muscular basis
AU1       Inner Brow Raiser         Frontalis, Pars Medialis
AU2       Outer Brow Raiser         Frontalis, Pars Lateralis
AU4       Brow Lowerer              Procerus, Corrugator
AU5       Upper Lid Raiser          Levator Palpebrae Superioris
AU6       Cheek Raiser              Orbicularis Oculi, Pars Orbitalis
AU7       Lid Tightener             Orbicularis Oculi, Pars Palpebralis
AU9       Nose Wrinkler             Levator Labii Superioris, Alaeque Nasi
AU10      Upper Lip Raiser          Levator Labii Superioris, Caput Infraorbitalis
AU11      Nasolabial Fold Deepener  Zygomaticus Minor
AU12      Lip Corner Puller         Zygomaticus Major
AU13      Cheek Puffer              Caninus
AU14      Dimpler                   Buccinator
AU15      Lip Corner Depressor      Triangularis
AU16      Lower Lip Depressor       Depressor Labii
AU17      Chin Raiser               Mentalis
AU18      Lip Puckerer              Incisivii Labii Superioris and Inferioris
AU20      Lip Stretcher             Risorius
AU22      Lip Funneler              Orbicularis Oris
AU23      Lip Tightener             Orbicularis Oris
AU24      Lip Pressor               Orbicularis Oris
AU25      Lips Part                 Depressor Labii; Relaxation of Mentalis or Orbicularis Oris
AU26      Jaw Drop                  Masseter; Pterygoid relaxed
AU27      Mouth Stretch             Pterygoid; Digastric
AU28      Lip Suck                  Orbicularis Oris
While FACS is anatomically based, there is no one-to-one correspondence between muscle groups and AUs. This is because a single muscle may contract in different ways or in selective regions, resulting in visibly different actions. For example, contraction of the medial portion of the frontalis muscle raises the inner corners of the brows only (producing AU1), while contraction of the lateral portion of the same muscle raises the outer brow (producing AU2). Also, more than one muscle group can be involved in the production of a single action unit. For example, AU9 (Nose Wrinkler) results from the combined contraction of the levator labii superioris and alaeque nasi muscles. In the Anvil annotation tool, each facial code was assigned a separate track. During coding, individual AUs and ADs were each coded in separate runs. Each facial action was assigned a duration code delimited by its onset and offset times. Onset and offset times were annotated using Anvil's variable speed option set at a frame-by-frame resolution. Asymmetries (A) and laterality (U) of movements were also coded. The intensity of muscle contraction for each AU was coded near the apex on a three-level scale: low, medium or high.
Table 8. More grossly defined actions in the Facial Action Coding System

AU or AD codes  Description
AU8(25)         Lips Toward Each Other
AD19            Tongue Out
AU21            Neck Tightener
AD29            Jaw Thrust
AD30            Jaw Sideways
AU31            Jaw Clencher
AD32            Lip Bite
AD33            Blow
AD34            Puff
AD35            Cheek Suck
AD36            Tongue Bulge
AD37            Lip Wipe
AU38            Nostril Dilator
AU39            Nostril Compressor
AU43            Eye Closure
AU45            Blink
AU46            Wink
Additional Nonverbal Codes
In addition to the FACS action units, we created additional codes for gaze and head positions, movements and social orientation (towards or away from the interviewer). Even though FACS suggests certain codes for some of these event types, not all the actions we wanted to include were accounted for in FACS.
Gaze: gaze orientation was scored for two modalities: the participant could be scored either as looking at or away from the interviewer ("look at"/"look away"). We determined several position codes for the eyes: looking straight; looking away and to the side (on the left, since the experimenter was seated to the right of the participant); looking up or down; blinking; and the upper eyelids drooping (AU43 in the FACS system).
Head: as for gaze, the head could be scored as either oriented towards the interviewer ("Head On") or away from the interviewer ("Head Away"). Head movement codes include: the chin lowering or raising on a vertical axis (Head Lower; Head Raise); diagonal movements upwards or downwards (Head Raise and Turn; Head Lower and Turn); horizontal movements either to the right or to the left (Head Turns); as well as lateral head tilts (Head Tilting). Two emblems performed with the head were also coded: head nods (as in saying yes) and head shakes (as in saying no). A last category of head codes involves head positions (positions of the head that
are maintained for at least 2 seconds). These head position codes were: head down; head raised; head turned away (from the interviewer, to the left side of the participant); and head laterally tilted ("head tilted"), either to the left or to the right. Finally, two types of extra-communicative gestures were coded: manipulators (e.g. the participant manipulates a pen or her glasses) and auto-contacts (e.g. the participant puts her hand in her hair).
Speech and Voice Codes
The last categories of codes concern behaviors related to the flow of speech, verbal encoding difficulties, non-linguistic vocalizations (laughter and tears) and phatic devices like word or sentence stresses. In order to score each of these categories, we first had to align the written transcriptions of the sample video files with the sound waves of the participants' recorded voices. For this time-dependent transliteration we used the PRAAT voice acoustic analysis tool (Boersma and Weenink, 2005) and subsequently imported the text files as a track into Anvil. The transcription track of Anvil's annotation window contains the word-by-word orthographic transcription of the speaker's propositional statement in each video sample.
Speech flow: for each record file, we discriminated between segments where a participant was speaking and those where she remained silent (pauses). Only pauses lasting a quarter of a second or more were taken into account.
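
Given time-aligned speech segments, this pause criterion amounts to keeping only the silent gaps above the threshold, as in this minimal sketch (names are ours):

    def find_pauses(speech_segments, total_duration, min_pause=0.25):
        """Given (start, end) times of speech segments sorted by onset,
        return the silent gaps of at least `min_pause` seconds
        (0.25 s in our coding scheme)."""
        pauses, cursor = [], 0.0
        for start, end in speech_segments:
            if start - cursor >= min_pause:
                pauses.append((cursor, start))
            cursor = max(cursor, end)
        if total_duration - cursor >= min_pause:
            pauses.append((cursor, total_duration))
        return pauses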
Verbal encoding difficulties: we defined two different event types for verbal encoding difficulties, "hedges" and "disfluencies". Both were scored for occurrence and duration. Hedges are a special case of discourse markers, which Lakoff (1972) defines as "words whose job it is to make things more or less fuzzy". "Kind of", "sort of", "somehow", "like", etc., are considered to be fuzzy hedges. Disfluencies reflect production problems that come along with spontaneous speech. Following Shriberg (1999), we coded the following features as disfluencies: (1) filled pauses ("uh", "um"), (2) repetitions ("the the"), (3) repairs ("that's her fault - his fault"), and (4) false starts ("I was feeling really - I should have told her").
Word or sentence stresses were scored on an "emphasis" track. Emphasis was defined as a clearly perceived change in vocal intensity during speech segments (single words or sequences of words). Vocal intensity can increase (moderate or strong) or decrease (reduced).
Non-linguistic vocalization codes include laughter and sobbing. They were coded at three levels of intensity: low, moderate or strong.
Scoring procedure and reliability assessment
FACS coding was performed by the author and a student earning credits towards a Master's degree in psychology at the University of Geneva in 2008. The reliability of facial action coding was assessed in several ways. First, the author took and passed the FACS final test, achieving a mean agreement ratio of 0.88 with the authors of FACS (the minimal requirement for passing is set at 0.70). The second coder did not attempt the final test but was trained in FACS by certified FACS coder Prof. Susanne Kaiser. Her training took place over a period of one semester prior to the coding of the research videos. Besides using the FACS manual as a constant aid to scoring decisions, we established a scoring protocol to ensure adequate comparability of procedures. Scoring by both coders was performed in our lab on a computer equipped with a 17-inch screen (resolution 1680 x 1050, sampling rate 60 Hz). During the scoring phase, both the author and the assistant coder worked independently and were blind as to which judgment cluster each sequence belonged to. Each sequence was viewed mute to avoid being influenced by speech content. The first pass was viewed at normal speed to get a realistic impression of the sequence. On successive passes, we viewed the record in slow motion from beginning to end, looking at each individual AU independently, always starting with the upper face and finishing in the lower face region. When a scorable action was identified, "start" and "end" tags were placed at the onset and offset of the event. At this stage, precise location of the event on the time line was done at a frame-by-frame resolution. When it was difficult to determine whether an AU was involved, we reviewed the event a maximum of three times. If the uncertainty was not resolved after three reviews, we scored as though the suspected AU(s) did not occur. Thus, only the most obvious aspects of the activity were scored.
Exhaustive FACS coding can be problematic when subjects are speaking, because certain lower face AUs may be involved in speech articulation: 10, 14, 16, 17, 18, 20, 22, 23, 24, 25, 26, and 28. Initially, Ekman discouraged scoring AUs 17, 18, 22, 23, 24, 25, or 26 if they coincided with speech and recommended instead an action descriptor, 50, to indicate that the person was talking. In the 2002 version of the FACS Investigator's manual, he revised his opinion, claiming that: "…we have found since that all these actions can be scored and we now only omit 25 and 26 when 50 is scored. For almost all of these AUs the amount of action required by talking is below what has been set as the criteria for the B intensity in the FACS manual." (Ekman and Hager, 2002). In other words, the opinion expressed is that coders should
be sensitive to the intensity of an action when deciding whether or not to code it during speech. When actions are more intense than needed for mere articulation, they ought to be scored. Nevertheless, to remain cautious, we decided for this study not to score AU16, AU18, AU19, AU22, AU25, AU26, AU27, AD18 and AD19 while subjects were speaking. Any miscellaneous and other non-FACS codes were scored as described above. The Master's student double-coded 80 video sequences (40% of the corpus) extracted randomly from the core dataset. Because we were not able to recruit another person to work on the "non-FACS" codes, we assessed our own intra-individual reliability for these categories. This was done by rescoring 30% of the data set on these codes, with a one-year interval between the
two sessions. In all cases, scoring agreement was quantified with Cohen's Kappa, a standard measure of observer agreement. It is defined as Kappa = (p_observed - p_chance) / (1 - p_chance) and can vary from 0 to 1 (Cohen, 1960; Bakeman and Gottman, 1997). Coefficients ranging from 0.40 to 0.60 indicate fair reliability; coefficients from 0.61 to about 0.75 are considered good; and 0.75 or higher indicates excellent reliability (Fleiss, 1981). The reliability of FACS scoring was assessed at two levels of analysis: 1) agreement on the occurrence of individual AU scores; and 2) the temporal precision of individual AU scoring for onsets and offsets. In a seminal work on the scoring reliability of FACS codes for non-acted expressions, Sayette et al. (2001) have shown that a precise frame-by-frame unit of measurement usually provides adequate Kappas, but that the coefficients improve significantly when using a 1/6th of a second tolerance window. For most purposes they consider a half-second tolerance window acceptable. Since brief latencies are crucial to our hypotheses, we found it necessary to use smaller tolerance windows. In assessing the precision of scoring, we used tolerance windows of 0 and 5 frames, which correspond to a precision of 1/25th and 1/6th of a second, respectively. Coders were considered to agree on the occurrence of an AU if they both identified it within the same time window. Results are reported in tables 9, 10, 11 and 12.
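
Before turning to the tables, the two levels of agreement can be sketched as follows. This is an illustration under our own assumptions (per-frame binary coding vectors, 25 fps), not necessarily the exact matching procedure used for the tables.

    import numpy as np

    def cohen_kappa(a, b):
        """Cohen's kappa for two binary per-frame coding vectors:
        kappa = (p_observed - p_chance) / (1 - p_chance)."""
        a, b = np.asarray(a, bool), np.asarray(b, bool)
        p_obs = np.mean(a == b)
        p_chance = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
        return (p_obs - p_chance) / (1 - p_chance)

    def onset_agreement(onsets_a, onsets_b, tolerance_frames=5):
        """Greedily match the two coders' onset frames within a tolerance
        window (0 frames = exact frame; 5 frames ~ 1/6 s at 25 fps).
        Returns (matched, n_coder_a, n_coder_b)."""
        matched, remaining = 0, sorted(onsets_b)
        for t in sorted(onsets_a):
            hit = next((u for u in remaining
                        if abs(u - t) <= tolerance_frames), None)
            if hit is not None:
                matched += 1
                remaining.remove(hit)
        return matched, len(onsets_a), len(onsets_b)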
Table 9. Kappa's coefficients for single upper face action units. Tolerance windows of 1/25th and 1/6th of a second for onsets and offsets.

Action Units  Frames  Occurrence  Onset 1/25th  Onset 1/6th  Offset 1/25th  Offset 1/6th
AU1           1913    0.79        0.71          0.79         0.57           0.68
AU2           436     0.88        0.80          0.87         0.64           0.80
AU1+2         14787   0.82        0.74          0.80         0.65           0.78
AU4           2954    0.87        0.69          0.82         0.66           0.81
AU5           8314    0.92        0.67          0.91         0.60           0.90
AU6           5342    0.70        0.58          0.70         0.49           0.61
AU7           2882    0.68        0.49          0.67         0.51           0.65
Table 10. Kappa's coefficients for single lower face action units. Tolerance windows of 1/25th and 1/6th of a second for onsets and offsets.

Action Units  Frames  Occurrence  Onset 1/25th  Onset 1/6th  Offset 1/25th  Offset 1/6th
AU9           2552    0.85        0.69          0.80         0.66           0.78
AU10          7026    0.91        0.79          0.91         0.70           0.80
AU11          380     0.38        0.27          0.35         0.22           0.28
AU12          12881   0.93        0.87          0.90         0.59           0.68
AU12A         1359    0.74        0.65          0.72         0.50           0.61
AU12U         458     0.81        0.63          0.81         0.60           0.76
AU13          62      0.35        0.30          0.32         0.25           0.28
AU14          2964    0.78        0.69          0.76         0.58           0.70
AU14A         114     0.67        0.48          0.60         0.45           0.62
AU14U         1469    0.72        0.61          0.72         0.59           0.70
AU15          5463    0.68        0.59          0.65         0.54           0.59
AU16          1584    0.64        0.54          0.64         0.58           0.62
AU17          10027   0.82        0.75          0.81         0.72           0.75
AU18          272     0.30        0.21          0.29         0.20           0.25
AU20          2260    0.78        0.70          0.75         0.65           0.73
AU22          173     0.32        0.29          0.32         0.28           0.30
AU23          1811    0.65        0.58          0.63         0.50           0.59
AU24          1901    0.69        0.54          0.67         0.61           0.63
AU25          13149   0.92        0.82          0.90         0.70           0.88
AU26          9044    0.93        0.75          0.89         0.72           0.89
AU27          76      1.00        0.80          0.96         0.70           0.76
Table 11. Kappa's coefficients for miscellaneous FACS codes. Tolerance windows of 1/25th and 1/6th of a second for onsets and offsets.

AU or AD  Frames  Occurrence  Onset 1/25th  Onset 1/6th  Offset 1/25th  Offset 1/6th
AU8(25)   0       --          --            --           --             --
AD19      74      0.31        0.26          0.27         0.21           0.28
AU21      607     0.49        0.43          0.48         0.40           0.45
AD29      28      --          --            --           --             --
AD30      22      --          --            --           --             --
AU31      0       --          --            --           --             --
AD32      179     0.66        0.52          0.65         0.51           0.60
AD33      343     0.70        0.63          0.70         0.62           0.68
AD34      40      --          --            --           --             --
AD35      30      --          --            --           --             --
AD36      18      --          --            --           --             --
AD37      351     0.82        0.75          0.80         0.70           0.76
AU38      101     0.36        0.23          0.32         0.20           0.27
AU39      101     0.28        0.21          0.25         0.22           0.30
Table 12. Kappa's coefficients for non-FACS codes. Tolerance windows of 1/25th and 1/6th of a second for onsets and offsets.

Non FACS Codes       Frames  Occurrence  Onset 1/25th  Onset 1/6th  Offset 1/25th  Offset 1/6th
Blink                9691    0.90        0.82          0.90         0.76           0.80
Eyelids Droop        4857    0.85        0.78          0.83         0.69           0.72
Look At              15222   0.96        0.80          0.94         0.79           0.92
Look Away            18828   0.95        0.70          0.89         0.83           0.90
Look Down            7602    0.80        0.76          0.79         0.74           0.79
Look Up              1070    0.87        0.69          0.82         0.70           0.78
Lower Head           1380    0.94        0.83          0.91         0.78           0.89
Head Turns           2492    0.83        0.77          0.81         0.78           0.80
Head Down            1223    0.89        0.71          0.79         0.74           0.79
Head Raise           2065    0.76        0.69          0.74         0.68           0.70
Head Raise and Turn  2980    0.92        0.73          0.92         0.80           0.89
Head Lower and Turn  2240    0.74        0.71          0.72         0.69           0.73
Head Raised          1178    0.75        0.65          0.71         0.58           0.68
Head On              15869   0.90        0.80          0.88         0.75           0.82
Head Turned Away     11237   0.71        0.60          0.69         0.65           0.70
Head Tilted Side     3147    0.83        0.74          0.80         0.70           0.80
Head Tilting Side    3205    0.70        0.49          0.58         0.60           0.69
Head Shake           2861    0.87        0.82          0.87         0.78           0.83
Head Nod             1653    0.93        0.88          0.92         0.76           0.90
Pause                15917   0.97        0.80          0.95         0.74           0.79
Speak                27748   0.96        0.90          0.95         0.91           0.94
Hesitation           1559    0.88        0.85          0.88         0.76           0.83
Verbal Filler        1411    0.77        0.70          0.75         0.68           0.72
Word Stress          3248    0.80        0.72          0.80         0.76           0.79
False Start          3247    0.96        0.88          0.94         0.85           0.92
Manipulator          0       --          --            --           --             --
Autocontact          455     0.82        0.78          0.82         0.80           0.82
Laughing             1352    0.94        0.87          0.93         0.85           0.90
Crying               833     0.97        0.90          0.95         0.76           0.92
Results
Using a 1/6th-second tolerance window, all the upper and lower face action units but four (AU11, Nasolabial furrow deepener; AU13, Sharp lip puller; AU18, Lip pucker; and AU22, Lip funneler) had good to excellent reliabilities for the scoring of onsets (see tables 9 and 10). The results are similar for offset scoring, except for AU23 (Lip tightener), whose coefficient regresses to a still acceptable value of 0.59. Generally, as the tolerance window decreased to an exact frame criterion, the number of AUs with good to excellent reliability decreased. However, even at this smallest possible tolerance window, 16 of the 27 AUs continued to have good to excellent reliability for both onset and offset scoring. Moreover, AUs 6 (Cheek raiser), 7 (Lid tightener), 14A (Asymmetric dimpler), 15 (Lip corner depressor), 16 (Lower lip depressor), 23 (Lip tightener) and 24 (Lip presser) still achieved acceptable scores at 1/25th of a second, ranging from
0.40 to 0.59. We are not able to report kappas for 50% of the miscellaneous codes. This is due in part to the low frequency of seven codes of that category in the database: AU8(25) (Lips toward each other), AD29 (Jaw thrust), AD30 (Jaw sideways), AU31 (Jaw clencher), AD34 (Puff), AD35 (Cheek suck) and AD36 (Tongue bulge). Furthermore, agreement on occurrences for three more miscellaneous actions, AD19 (Tongue show), AU38 (Nostril dilate) and AU39 (Nostril compress), yields unsatisfactory coefficients. The large majority of the additional non FACS scores have good to excellent kappas for occurrences as well as for event start and end times at an exact-frame resolution. Two exceptions are borderline scores for the onsets of Head Tilting Side (at 1/25th and 1/6th second) and for the offset of Head Raise at 1/25th second. Generally, the reliability analyses indicate good to excellent scores at an exact-frame resolution for both lower and upper face action units that are elements of the emotion prototypes proposed by discrete emotion theorists. One exception is AU11 (Nasolabial furrow deepener), sometimes involved in sadness expressions. Note however that scored AU11 events represent less than one percent of the total upper and lower face action units in the database. The FACS miscellaneous codes are shown to be unfit for further analysis due to low frequencies of occurrence and unreliable kappas. On the other hand, our additional nonverbal categories are stable in time and can be included in further multimodal analysis.
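To make the reliability procedure concrete, here is a minimal sketch in Python of how per-frame agreement (Cohen's kappa) and tolerance-window onset agreement might be computed; the function names, toy inputs and tolerance handling are our own illustration, not part of the FACS coding protocol.

from itertools import product

def frame_kappa(coder_a, coder_b):
    # Cohen's kappa for two binary per-frame codings of one action unit.
    n = len(coder_a)
    counts = {(i, j): 0 for i, j in product((0, 1), repeat=2)}
    for x, y in zip(coder_a, coder_b):
        counts[(x, y)] += 1
    p_obs = (counts[(0, 0)] + counts[(1, 1)]) / n      # observed agreement
    pa = sum(coder_a) / n                              # 'present' marginal, coder A
    pb = sum(coder_b) / n                              # 'present' marginal, coder B
    p_exp = pa * pb + (1 - pa) * (1 - pb)              # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

def onset_agreement(onsets_a, onsets_b, tol_frames):
    # Fraction of coder A's onsets that coder B also marked within
    # +/- tol_frames: 0 means the exact frame, 4 is roughly 1/6th second at 25 fps.
    if not onsets_a:
        return float("nan")
    hits = sum(any(abs(a - b) <= tol_frames for b in onsets_b) for a in onsets_a)
    return hits / len(onsets_a)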
Descriptions of scores in database
Before we can address specific questions concerning the patterning of facial actions, it is necessary to ensure that the constitutive elements of these patterns are actually present in our database. In figures 8 to 12 we report the raw frequencies and relative percentages of recorded scores.
Note that these frequency distributions do not imply any rank order with regard to the importance of single actions. Events occurring rarely may be highly important for creating a specific impression. Nevertheless, in comparison to frequently occurring codes, these elements cannot be relied upon in quantitative analyses.
Figure 8. Frequency and Percentage Distribution of Code Categories in Database. Non FACS codes: 7294 (66%); lower face: 2252 (21%); upper face: 1328 (12%); miscellaneous: 104 (1%).
Figure 9. Frequency and Percentage Distribution of Upper Face Action Units in Database. AU1+2: 337 (37%); AU5: 254 (28%); AU7: 117 (13%); AU6: 87 (9%); AU4: 72 (8%); AU1: 39 (4%); AU2: 11 (1%).
Figure 10. Frequency and Percentage Distribution of Lower Face Action Units in Database. AU25: 451 (20%); AU26: 319 (14%); AU17: 310 (14%); AU12: 231 (10%); AU14: 167 (7%); AU10: 165 (7%); AU15: 155 (7%); AU20: 105 (5%); AU24: 83 (4%); AU23: 77 (3%); AU16: 72 (3%); AU9: 70 (3%); AU22: 14 (1%); AU11: 13 (1%); AU18: 12 (1%); AU13: 3 (0%); AU28: 3 (0%); AU27: 2 (0%).
Unsurprisingly, the most frequent event types scored in the database are gaze, head and speech codes; together they cover 66% of the codes scored in the database. Gaze and head codes predominate because they refer to behavioral categories that are always coded as active on at least one of their modalities; this is also true for the “speak” and “pause” codes. Lower face action units come next, with 21% of all event types, followed by upper face action units with 12%. Finally, the FACS category of miscellaneous actions lags far behind, representing 1% of scored event types. For upper face actions, the most frequent event type is AU1+2 (a bilateral eyebrow raise) and the least frequent is AU2, a unilateral raise of the outer part of the brow. For lower face action units, the most frequent action is the parting of the lips (AU25), followed by the jaw opening (AU26). Three actions do not reach the 1% threshold in the lower face category: AU13 (Chaplin smile), AU27 (Mouth stretch) and AU28
(Lips suck). All the action units predicted to enter into the composition of facial expressions of emotions were produced by the participants in the emotion-eliciting task. In the next section, we will review how FACS action units are usually collected and then interpreted as expressive displays of seven discrete emotions.
Figure 12. Frequency and Percentage Distribution of FACS Action Units in Database. By decreasing frequency of occurrence: AU25: 451; AU43: 411; AU1+2: 337; AU26: 319; AU17: 310; AU5: 254; AU12: 231; AU14: 167; AU10: 165; AU15: 155; AU7: 117; AU20: 105; AU6: 87; AU24: 83; AU23: 77; AU4: 72; AU16: 72; AU9: 70; AU1: 39; AU21: 31; AD37: 20.
Figure 11. Frequency and Percentage Distributions for Non FACS Codes in Database. Look Away: 815 (11%); Blink: 792 (11%); Head On: 631 (9%); Look At: 542 (7%); Pause: 537 (7%); Speak: 485 (7%); Eyelids Droop: 411 (6%); Head Turned Away: 385 (5%); Look Down: 286 (4%); Head Tilting Side: 264 (4%); Head Lowers and Turn: 224 (3%); Head Turns: 224 (3%); False Start: 210 (3%); Head Raise and Turn: 205 (3%); Head Raise: 183 (3%); Word Stress: 168 (2%); Lower Head: 135 (2%); Verbal Filler: 122 (2%); Head Tilted Side: 112 (2%); Hesitation: 112 (2%); Look Up: 107 (1%); Head Shake: 94 (1%); Head Down: 73 (1%); Head Raised: 60 (1%); Head Nod: 43 (1%); Laughing: 32 (0%); Autocontact: 9 (0%); Crying: 4 (0%); Manipulator: 0 (0%).
Interpretation of FACS Codes with EMFACS/FACSAID
With FACS coding, data collection is independent of data interpretation. When scoring, the focus is exclusively on the identification of specific movements that are translated into numerical codes. These codes refer to no more than the precise descriptions of facial actions detailed in the FACS manual. There is no a priori link between a code and its possible psychological meaning, either in terms of emotional or otherwise communicative signals. With the EMFACS/FACSAID dictionaries, FACS-coded facial events can be classified into emotion and non-emotion categories. EMFACS (EmotionFACS) refers to both a) a simplified FACS coding manual and b) an interpretation dictionary implemented in a computer program (Levenson, 2005). The EMFACS dictionary determines whether coded events include the core facial movements characteristic of prototypical facial displays of emotion. The references to emotions in table 13 cannot be used as definite guidelines. They correspond to possible interpretations originally proposed by Tomkins and mainly validated on the basis of recognition studies from posed photographs by Ekman and Friesen (1975, 1978).
AU Description Surprise Fear Happiness Sadness Disgust Anger Contempt
1 Inner Brow Raiser x
1+2 Inner and outer Brow Raiser x x
4 Brow Lowerer (x) (x) x
5 Upper Lid Raiser x x
6 Cheek Raiser x
7 Lid Tightener x
9 Nose Wrinkler x
10 Upper Lip Raiser x x x
12 Lip Corner Puller x
14 Dimpler x
15 Lip Corner Depressor x
17 Chin Raiser x x x
20 Lip Stretcher x
23 Lip Tightener x
24 Lip Pressor x
25 Lips part (x) x
26 Jaw drop x (x)
Table 13. EMFACS AUs and Hypothetical Relations to Discrete Emotions
Nevertheless, the EMFACS dictionary has been used for the classification of
spontaneous facial behavior in many published studies (Berenbaum & Oltmanns, 1992;
Ekman et al., 1990, 1997; Keltner et al., 1995; Matsumoto, Haan, Gary, Theodorou, & Cooke-Carney, 1986; Rosenberg & Ekman, 1994; Rosenberg, Ekman, & Blumenthal, 1998; Rosenberg et al., 2001; Steimer-Krause, Krause, & Wagner, 1990), as well as in studies that used FACS and virtually the same dictionary codes to produce emotion predictions but did not mention EMFACS (Chesney et al., 1990; Ekman et al., 1988; Frank et al., 1993; Gosselin et al., 1995; Heller & Haynal, 1994; Keltner, 1995; Levenson, Carstensen, Friesen, & Ekman, 1991; Messinger, Fogel, & Dickson, 2001; Ruch, 1993, 1995; Sayandte et al., 2003).
All AU combinations can be entered into the EmotionFACS (EMFACS) dictionary to obtain emotion predictions (Ekman & Friesen, 1982a; Matsumoto, Ekman, & Fridlund, 1991). The dictionary is accessed via a computer program made available to all researchers who have FACS data (Levenson, 2005). Figure 13 presents a flowchart of the various possible EMFACS outputs.
Figure 13. Structure of the EMFACS Dictionary (derived from Merten, 2001). Coded action units and descriptors receive either no interpretation or an interpretation as: conversational facial gestures (e.g. the eyebrow flash); prototypical displays of basic emotions, either negatively valenced (anger, contempt, disgust, fear, sadness), without valence (surprise) or positively valenced (enjoyment); or one of the additional EMFACS categories (controlled enjoyment, social “non enjoyment” smiles, masking smiles and blends of negative displays).
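To illustrate the dictionary logic only, a lookup of this kind can be sketched as follows; the few entries shown are a tiny excerpt in the spirit of tables 13 and 14, not the actual EMFACS dictionary, and the function names are ours.

# Illustrative excerpt of prediction entries in the spirit of EMFACS;
# the real dictionary contains many more combinations (see tables 14-16).
PREDICTIONS = {
    frozenset({"6", "12"}): "enjoyment (D-smile)",
    frozenset({"1", "2", "5"}): "surprise",
    frozenset({"9"}): "disgust",
    frozenset({"17", "23"}): "anger",
}

def classify(event_aus):
    # Return every prediction whose AU combination is contained in the event.
    aus = frozenset(event_aus)
    return [label for combo, label in PREDICTIONS.items() if combo <= aus]

print(classify({"6", "12", "25"}))   # -> ['enjoyment (D-smile)']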
EMFACS identifies AUs that are theoretically related to facial expressions of emotion as originally proposed by Tomkins (1962, 1963) and partly empirically verified by over 20 years of judgment studies by Ekman and colleagues (Ekman et al., 1990; Ekman & Friesen, 1971; Ekman et al., 1980; Ekman, Friesen, & Ellsworth, 1972; Ekman et al., 1988; Ekman, Sorenson, & Friesen, 1969). The facial configurations associated with
the emotion predictions were first listed in Ekman (1972) and in the original FACS manual
(Ekman & Friesen, 1978). Prototypic examples of the emotion facial configurations were
described in Ekman and Friesen’s (1975) Unmasking the Face and portrayed in their Pictures
of Facial Affect (Ekman & Friesen, 1976) and the Japanese and Caucasian Facial
Expressions of Emotion (Matsumoto & Ekman, 1988). Tables 14, 15 and 16 provide the most common predictions of emotion-related facial categories derived from the EMFACS dictionary.
Table 14. EMFACS predictions for "Basic" emotion prototypes
Anger: AU17+23; AU17+24; AU4+5+10; AU4+7+10; AU4+5+7; AU4+5+23; AU4+5+23+25; AU23; AU4+7
Sadness: AU1; AU6+15; AU6+17; AU6+15+17; AU6+11+15; AU6+11+17; AU6+11+15+17; AU11+15; AU11+17; AU11+15+17; AU1+14; AU1+10; AU1+14U
Fear: AU1+2+4; AU1+2+5; AU1+2+5+20; AU1+2+20; AU20
Contempt: AU1+2+14; AU10U; AU12U; AU14U
Positive emotions: AU6+12 (D-Smile); AU12 (Social Smile)
Surprise: AU1+2+5; AU1+2+26; AU5+26
Disgust: AU9; AU10; AU10+12
Table 15. EMFACS predictions for typical "Blended" expressions
Anger/Contempt: AU4+10U; AU4+14U; AU4+7+14U; AU4+7+10U
Anger/Disgust: AU4+5+9; AU4+7+9; AU9+23; AU9+15+23; AU9+17+23
Fear/Surprise: AU1+2+5; AU1+2+5+7; AU1+2+5+7+25; AU1+2+5+25; AU1+2+5+20+25
Sadness/Anger: AU1+23; AU1+24
Sadness/Disgust: AU1+9
Negative unspecified: AU1+10; AU1+4+5; AU20+23; AU1+2+4+9+20
Table 16. EMFACS predictions for typical "Masked" expressions
Anger: AU12+23; AU6+12+23; AU6+12+23+14; AU6+12+23+15; AU6+12+23+17; AU6+12+23+24; AU12+23+14; AU12+23+15; AU12+23+17; AU12+23+24
Sadness: AU1+12; AU1+12U; AU1+6+12; AU1+4+12; AU1+4+6+12; AU1+5+12; AU1+4+5+12; AU1+4+5+6+12; AU1+5+7+12; AU1+5+6+12; AU1+12+17
Fear: AU1+2+4+12; AU6+12+20; AU12+20
Contempt: AU6+12+14U; AU12+14U; AU7+12+14U
Surprise: AU1+2+12; AU1+2+5+12; AU1+2+5+6+12
Disgust: AU6+12+9; AU9+12; AU10+12; AU6+12+10; AU10U+12; AU6+10U+12
Methodological issues in measuring the co-occurrences of Action Units
The traditional scoring procedure for the EMFACS system is event-based. This means that the duration, the dynamics and the sequential unfolding of the facial actions are not accounted for. The EMFACS scoring manual instructs coders to view a video record in real time, concentrating on any movement of the following upper face AUs: 1, 2, 4 and 5. When the coder detects activity in any of those AUs, he or she should observe the upper and lower face and determine all the additional AUs that are in the event. If the event contains the AUs of at least one core combination (see table 17 for a list of core AUs and combinations), the coder is further instructed to:
"Locate one time point (or frame number) when all of the AUs scored in the
event first reached a mutual apex." (Ekman, Irwin and Rosenberg, 1994, p. 8).
When this point has been located, the event is scored. This scoring strategy has the advantage of retaining only the most prototypical displays, which have been shown to be relatively well recognized in judgment studies. The disadvantage is of course that the procedure precludes any discovery of potentially new and meaningful patterns: at best, one finds the predicted patterns if they exist in a dataset; at worst, nothing comes out.
Table 17. Core EMFACS AUs and Combinations
Upper face core AUs and combinations: AU1+2+4; AU1+2+5; AU5; AU7
Lower face core AUs and combinations: AU9; AU10; AU10A; AU10U; AU12A; AU12U; AU14*; AU14A; AU14U; AU14+Head Tilt**; AU14+Eyes gazing to the side**; AU14+Head moving upwards**; AU14+Eyes gazing upwards**; AU15; AU17; AU17+23; AU17+24; AU20; AU23; AU24
Lower and upper face AU combinations: AU1+2+26; AU1+2+27; AU1+2+14; AU5+26; AU5+27; AU6+12; AU6+15; AU6+17
*If a bilateral AU14 is scored, one must also score whether or not the person is swallowing simultaneously. The AU for swallowing is 80.
**To be scored with a symmetrical AU14, the additional movements must directly precede or overlap with the AU.
Note: AU1 is not to be considered a core AU for the combination 1+2. AU1+2 is scored only when AUs 4, 5, 14, 26 or 27 are present or when the requirements for another core combination in the upper or lower face have been met.
Source: EMFACS-V8 coder's instruction manual (Ekman, Irwin & Rosenberg, 1994).
Since our main interest lies in the unfolding of facial actions in time, we had to define an alternative coding procedure that takes into account the onset and offset points of each event type scored. With information on both the start and end time of an event type, we can easily compute its duration, as well as its position in a sequence and its time of overlap with other events (see figure 14). With this procedure we could then operationalize the co-occurrence of several event types (two or more facial actions, with or without other nonverbal behavior codes) as the number of overlapping time units shared between the codes. On the descriptive level, co-occurrence analysis defined in this way provides a fine-grained description of the momentary facial configurations that judges are exposed to when asked to report their impressions of a video segment. On a more theoretical level, the analysis of co-occurrences provides empirical data against which hypotheses concerning the relevance of prototypical facial patterns in attributing emotional meaning to a facial display can be tested. In this section we report the methodological strategies used to analyze co-occurring codes. At
this time, we are not yet interested in the sequential structure of the datasets. Rather, we are focusing on identifying in our dataset the EMFACS-predicted facial patterns of two or more AUs overlapping in time.
Figure 14. Boundaries of overlapping time units between « Given » and « Target » codes. In the schematic, AU1 is the given code and AU4, AU5 and AU12 are the targets; each code spans the interval between its onset and offset, and co-occurrence is counted over the time units where a target's duration overlaps the given's.
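As a minimal illustration of this operationalization (a sketch with invented times, not data from our corpus), the overlap between a given and a target event reduces to interval arithmetic on their onset and offset points:

def overlap_units(a, b):
    # Shared time units between two events, each an (onset, offset) pair
    # in frames or seconds, with the offset treated as exclusive.
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

given_au1 = (10, 60)                                   # 'given' code AU1
targets = {"AU4": (5, 20), "AU5": (30, 45), "AU12": (55, 80)}
for name, event in targets.items():
    print(name, overlap_units(given_au1, event))       # 10, 15 and 5 units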
Measuring Co-occurrences of Facial Actions with GSEQ (Generalized Sequential Querier)
In order to test the hypothesis that some EMFACS-predicted patterns of facial actions were more characteristic of certain rating clusters than of others, we used the GSEQ statistical package (Quera, Bakeman and Gnisci, 2007). The program has been specifically designed for the analysis of sequential observational data. In this case we used it to compute simple contingency table statistics.
Contingency table statistics with GSEQ
In GSEQ, most analyses of overlapping codes are based on 2x2 contingency tables, and 2x2 table statistics are summary statistics for such tables. When these statistics are requested for tables larger than 2x2, GSEQ forms as many separate 2x2 tables as there are cells in the larger table: each cell becomes cell X11 in one of these tables, and cells X12, X21 and X22 are formed by collapsing the appropriate cells of the larger table. Let a = X11, b = X12, c = X21 and d = X22; the association indices presented below (the odds ratio and Yule's Q) are computed from these four counts.
Table 18 shows a joint frequency table output of GSEQ where rows represent any
“given” event type and columns represent any “target” event type. Individual cells indicate
how many time units a subject is producing a “target” behavior B while also displaying a
“given” behavior A. Let’s take the example of three Action Unit streams – codes for AU1+2
and AU4 are assigned a “given” category and the code for AU5 is treated as the “target”
category. The resulting 3x2 contingency table looks like Table 18.
Table 18

               Targets
Givens         AU5       &
AU1+2
AU4
&
The first row of the table divides the time units (seconds) coded AU1+2 into those that
were also coded AU5 and those that were not (here the ampersand represents the residual
category: every other code not specified as either a “given” or a “target”). The second row divides the seconds coded AU4 (that were not also coded AU1+2) into those that were also coded AU5 and those that were not. Finally, the last row indicates the number of seconds coded AU5 but none of the other codes specified as “givens”. The joint frequency is the number of tallies observed in a cell: Xij (where i is the row subscript and j the column subscript).
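The tallying logic itself is straightforward; the following sketch (our own illustration of the logic, not GSEQ code) builds such a givens-by-target table from sets of coded time units:

def joint_frequency_table(given_streams, target_units, all_units):
    # Tally time units into a GSEQ-style table: each unit is credited to
    # the first 'given' that contains it; '&' is the residual row.
    rows, claimed = [], set()
    for code, units in given_streams:
        own = units - claimed              # units not already credited above
        claimed |= own
        rows.append((code, len(own & target_units), len(own - target_units)))
    residual = all_units - claimed         # seconds with none of the givens
    rows.append(("&", len(residual & target_units),
                 len(residual - target_units)))
    return rows

# Toy streams: the seconds during which each code is active.
au1_2 = set(range(0, 30))                  # AU1+2
au4 = set(range(25, 50))
au5 = set(range(20, 40))
record = set(range(0, 60))
for row in joint_frequency_table([("AU1+2", au1_2), ("AU4", au4)], au5, record):
    print(row)   # ('AU1+2', 10, 20), ('AU4', 10, 10), ('&', 0, 10)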
Based on these joint frequency tables it becomes possible to compute conditional and marginal probabilities. A conditional probability is the probability of some event A given the occurrence of some other event B; it is written P(A|B) and read "the probability of A, given B". A joint probability is the probability of two events in conjunction, that is, the probability of both events together; the joint probability of A and B is written P(A∩B). The marginal probability is the unconditional probability P(A) of the event A, that is, the probability of A regardless of whether event B did or did not occur. If B can be thought of as the event of a random variable X having a given outcome, the marginal probability of A can be obtained by summing (or, more generally, integrating) the joint probabilities over all outcomes for X. For example, if there are two possible outcomes for X with corresponding events B and B', then P(A) = P(A∩B) + P(A∩B'). This is called marginalization. From these joint probability tables it becomes possible to compute
indices of association. Perhaps the most straightforward descriptive index of association is the odds ratio. Imagine that we label the cells of a 2×2 table:

A B
C D

where A, B, C and D refer to the observed frequencies of time units for the cells of the table. The estimated odds ratio can then be formalized as (A/B)/(C/D) or, more commonly written, (A×D)/(B×C). Consider a fictional 2x2 table such as this one:

            AU5       &       Total
AU1+2        20      460        480
&            20     1300       1320
Total        40     1760       1800
Then the odds of Action Unit 5 beginning during a time unit also coded for Action Units 1+2 are 20/460, or about .043, whereas the odds of Action Unit 5 beginning in seconds not coded for Action Units 1+2 are 20/1300, or about .015. That is, the odds of Action Unit 5 beginning are about 2.83 times greater with Action Units 1+2 than without them; this is the ratio of the two odds (i.e., the odds ratio). The odds ratio can assume values from 0 (the odds for the first row are vanishingly small compared to the second row), through 1 (the odds for the two rows are the same), to infinity (the odds for the second row are vanishingly small compared to the first row). It has the merit, lacking in many indices, of a simple and concrete interpretation. Note that if B or C is zero the odds ratio is infinite; if A or D is also zero, it is undefined. One ('1') is the neutral value and means that there is no difference between the groups compared; values close to zero or to infinity indicate a large difference. An odds ratio larger than 1 means that group one has a larger proportion than group two; if the opposite is true, the odds ratio will be smaller than one. If you swap the two proportions, the odds ratio takes on its inverse (1/OR). If no cells are zero, 95% confidence intervals can be given. The odds themselves are also a ratio. To illustrate, take the example of traditional versus experimental surgery. If 10% of traditional operations result in complications, the odds of having complications with traditional surgery equal 0.11 (0.1/0.9: the chance of getting complications is 0.11 times the chance of not getting them). If 12.5% of the operations using the experimental method result in complications, the corresponding odds are 0.143 (0.125/0.875). The odds ratio then equals 0.778 (0.11/0.143): the odds of complications under traditional surgery are 0.778 times the odds under experimental surgery. The inverse of the odds ratio equals 1.286: the odds of complications under experimental surgery are 1.286 times the odds under traditional surgery. The problem with the odds ratio as an index is that it is difficult to compare indices that can vary to infinity. One possibility is to transform odds ratios into Yule's Q.
From Odds Ratios to Yule's Q
Yule's Q is an association index based on a transformation of the odds ratio, designed to vary not from zero to infinity with 1 indicating no effect, but from -1 to +1 with zero indicating no effect, just like the Pearson correlation. For that reason, we find it more descriptively useful than odds ratios. The product BC is subtracted from the numerator AD so that Yule's Q is zero when A/B = C/D: Q = (AD - BC)/(AD + BC). This makes Q an index of association based on the odds ratio and a
symmetric measure taking on values between -1 and +1: one (1) implies perfect negative (-) or positive (+) association, zero (0) no association. In two-by-two tables Yule's Q is equal to Goodman and Kruskal's gamma. The interpretation of Q as gamma is the easiest to understand. Each observation is compared with each other observation; these comparisons are called pairs, the relationship between two observations. If an observation is higher in value than another observation on both the horizontal and the vertical marginals, the pair of observations is called concordant; otherwise the pair is discordant. Gamma is based on the balance of concordant versus discordant pairs: the difference between their numbers divided by their total. A high gamma means that there is a high proportion of concordant pairs: high values on the vertical marginal tend to go with high values on the horizontal marginal. Note that if (A or D) and (B or C) are zero, Yule's Q is undefined.
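Both indices are simple to compute directly from the four cell counts; the sketch below (ours) reproduces the fictional AU1+2/AU5 table from above:

def odds_ratio(a, b, c, d):
    # (a*d)/(b*c) for a 2x2 table [[a, b], [c, d]]; infinite if b or c is 0,
    # and undefined if a or d is also 0.
    if b == 0 or c == 0:
        return float("nan") if (a == 0 or d == 0) else float("inf")
    return (a * d) / (b * c)

def yules_q(a, b, c, d):
    # (ad - bc) / (ad + bc); ranges from -1 to +1, 0 meaning no association.
    denom = a * d + b * c
    return float("nan") if denom == 0 else (a * d - b * c) / denom

print(round(odds_ratio(20, 460, 20, 1300), 2))   # 2.83
print(round(yules_q(20, 460, 20, 1300), 2))      # 0.48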
Testing for the occurrence of EMFACS predicted facial patterns using Yule's Qs
Because specific facial action patterns have been found repeatedly, and cross-culturally, to signal a limited number of basic emotions, we first wanted to examine how frequent these patterns were in our database of emotional displays. To address this question, our first step was to compute odds ratios and Yule's Qs for the main EMFACS predictions in the categories of prototypical, blended and masked expressions of emotions. Note that for the prototypical category we only tested combinations for which empirical data have demonstrated that they can reliably be recognized as expressing the target emotion. Even though we could not find any study systematically exploring the EMFACS predictions for masking smiles, partial empirical validation could arguably be said to exist in studies investigating the perception of distinct forms of smiles, including some that contain action units considered part of negative expressions (for example: Ekman, Friesen, and O'Sullivan, 1988). As for blended negative expressions, we could not find any published study that systematically investigated how they are perceived. Note that most studies looking at the production of facial expressions rely on the EMFACS dictionary to assign an emotional category to their patterns. Typically, the facial combinations leading to these categorizations are not specified; rather, the emotional labels (anger, happiness, sadness, surprise, fear, contempt) are used to test for possible differences between groups of interest. Because one single emotion category can encompass many distinct combinations, one could hope that these had previously been empirically tested for their ability to communicate that emotion. In fact this is not the case for most of them. So instead of testing all possible combinations and accepting the EMFACS interpretations at face value, we only included in the analysis the more common combinations (see tables 19 to 21).
Table 19. Odds Ratios and Yule's Qs for Prototypical Expressions of Basic Emotions according to EMFACS predictions (Ekman, Irwin and Rosenberg, 1994)

Emotion    Prototype    Given   Target   Joint Duration   Given Residual   Given Total   Target Residual   Target Total
Anger      AU17+23      17      23       2367             17687            20054         1255              3622
Anger      AU17+24      17      24       2890             17164            20054         911               3801
Anger      AU4+5        4       5        755              5152             5907          15872             16627
Anger      AU4+5+10     4+5     10       59               696              755           11789             11848
Anger      AU4+7        4       7        1972             3935             5907          3791              5763
Anger      AU4+7+10     4+7     10       192              1780             1972          11656             11848
Fear       AU1+2+4      1+2     4        387              29186            29573         5520              5907
Fear       AU1+2+5      1+2     5        7061             22512            29573         9566              16627
Fear       AU1+2+20     1+2     20       1205             28368            29573         2546              3751
Fear       AU1+2+5+20   1+2+5   20       569              6492             7061          3182              3751
Disgust    AU10+12      10      12       4347             7501             11848         21414             25761
Positive   AU6+12       6       12       8916             1768             10684         16845             25761
Contempt   AU1+2+14     1+2     14       1562             28011            29573         4366              5928
Surprise   AU1+2+5      1+2     5        7061             22512            29573         9566              16627
Surprise   AU5+26       5       26       1903             14724            16627         16185             18088
Surprise   AU1+2+26     1+2     26       3236             26337            29573         14852             18088
Sadness    AU1+10       1       10       239              3586             3825          11609             11848
Sadness    AU1+14       1       14       204              3621             3825          5724              5928
Sadness    AU1+14U      1       14U      116              3709             3825          2822              2938
Sadness    AU6+17       6       17       2125             8559             10684         17929             20054
Sadness    AU6+15       6       15       1073             9611             10684         9853              10926
Sadness    AU6+15+17    6+15    17       604              469              1073          19450             20054
Table 20. Odds Ratios and Yule's Qs for Blended Expressions of Basic Emotions according to EMFACS predictions (Ekman, Irwin and Rosenberg, 1994)

Blend                  FACS            Given   Target   Joint Duration   Given Residual   Given Total   Target Residual   Target Total
Anger/Contempt         AU4+10U         4       10U      116              5791             5907          2088              2204
Anger/Contempt         AU4+14U         4       14U      148              5759             5907          2790              2938
Anger/Contempt         AU4+7+14U       4+7     14U      48               1924             1972          2890              2938
Anger/Contempt         AU4+7+10U       4+7     10U      36               1936             1972          2168              2204
Anger/Disgust          AU4+5+9         4+5     9        64               691              755           5040              5104
Anger/Disgust          AU4+7+9         4+7     9        216              1756             1972          4888              5104
Anger/Disgust          AU9+23          9       23       88               5016             5104          3534              3622
Anger/Disgust          AU9+15+23       9+15    23       24               604              628           3598              3622
Anger/Disgust          AU9+17+23       9+17    23       88               1352             1440          3534              3622
Fear/Surprise          AU1+2+5         1+2     5        7061             22512            29573         9566              16627
Fear/Surprise          AU1+2+5+7       1+2     5+7      0                29573            29573         217               317
Fear/Surprise          AU1+2+5+7+25    1+2+5   7+25     0                7061             7061          5763              5763
Fear/Surprise          AU1+2+5+25      1+2+5   25       1327             5734             7061          24970             26297
Fear/Surprise          AU1+2+5+20+25   1+2+5   20+25    159              6902             7061          757               916
Sadness/Anger          AU1+23          1       23       140              3685             3825          3482              3622
Sadness/Anger          AU1+24          1       24       44               3781             3825          3757              3801
Sadness/Disgust        AU1+9           1       9        243              3582             3825          4861              5104
Negative unspecified   AU1+10          1       10       239              3586             3825          11609             11848
Negative unspecified   AU1+4+5         1+4     5        63               674              737           16564             16627
Negative unspecified   AU20+23         20      23       345              3406             3751          3277              3622
Negative unspecified   AU1+2+4+9+20    1+2+4   9+20     0                387              387           196               196
Table 21. Odds Ratios and Yule's Qs for Masked prototypes according to EMFACS predictions (Ekman, Irwin and Rosenberg, 1994)
Relating nonverbal signals to emotion perception
So far, we have been able to show that 45 independent judges could agree about their impressions of the affects conveyed in 200 video records of spontaneously produced dynamic facial expressions. Five factors were found to explain over 78% of the variance in the use of 17 affective adjective scales. These five factors could easily be interpreted as reflecting impressions of enjoyment, hostility, embarrassment, surprise and sadness. Furthermore, a cluster analysis (K-means) demonstrated that we could distribute the 200 video records into five groups, each of which was strongly related to one specific factor. We then described how the totality of the video samples was manually annotated for the onset and offset times of any visible movement of the face, using the categories of the FACS system. Additionally, gaze and head orientations, positions and movements, as well as several speech-related variables, were also annotated. We also reviewed the methodological propositions suggested by proponents of basic emotion theory for interpreting facial patterns. We showed that our database contains the event types predicted to be constitutive elements of prototypical facial expressions of emotions. Therefore we felt confident that we could address the question of the prevalence of these prototypes in our dataset. We are now ready to address more specific questions relating to the perception of emotions from non-acted and non-static facial expressions. In the next sections, we will compare different types of analysis that represent the same set of behavioral data in distinctive ways. Our purpose in doing so is to determine whether one type of data representation can be shown to better account for the reasons why, in our judgment study, video samples were classified into distinct and coherent emotional categories. From now on, we will work by comparing facial action units and related patterns across the five rating clusters. Our first analysis involves a simple univariate comparison of means for single action units.
Relative frequencies of single action units across the rating clusters
Univariate analyses based on comparisons of means, even if they seem simplistic, are often very valuable for the characterization of group differences. Even though our data would not usually be considered suited for classical analysis of variance, for lack of a normal distribution (see appendix 6), we still attempted to determine to what extent mean differences for individual action units across the clusters are statistically significant. We performed a
repeated-measures analysis of variance (ANOVA) on the relative frequencies of individual action units for each clip, with cluster as the repeated factor (five levels). Frequency is the number of onsets recorded for each code selected for the analysis. Relative frequency is a code's frequency divided by the sum of the frequencies of all the selected codes; relative frequencies sum to 1.
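As a small worked example with invented counts (not data from our corpus), the relative frequencies for one clip would be computed as follows:

def relative_frequencies(onset_counts):
    # Per-clip relative frequency: each code's onset count divided by
    # the summed onsets of all selected codes (the values sum to 1).
    total = sum(onset_counts.values())
    return {code: n / total for code, n in onset_counts.items()}

print(relative_frequencies({"AU1": 2, "AU6": 5, "AU12": 13}))
# {'AU1': 0.1, 'AU6': 0.25, 'AU12': 0.65}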
For readability, we only report differences reaching statistical significance. AU1 (Inner brow raise) is found significantly more often in the sadness than in the embarrassment cluster [F(4,195) = 2.3; p = 0.0500]. This action, which raises and possibly brings together the inside corners of the brows, creates appearance changes on the forehead that Ekman identifies as the « sadness brow » (Ekman and Friesen, 2003, p. 117). Differences in the frequency of this action between the sadness, enjoyment, hostility and surprise clusters do not reach statistical significance. Nevertheless, we do find a significantly lower frequency of AU1 in the embarrassment cluster compared to the other groups. The tests also show that AU6 (Cheek raise) is found significantly more often in the enjoyment cluster than in any other group of video samples [F(4,195) = 11.70; p = 0.0000]. The relative frequency of occurrence of AU12 (Lip corner puller) is highest in the enjoyment and embarrassment clusters; moreover, the frequency of AU12 is higher in the enjoyment than in the embarrassment cluster [F(4,195) = 7.176; p = 0.0002]. Taken individually, action unit 9 (Nose wrinkler) is interpreted in EMFACS as a prototypical disgust expression. The analysis shows that the occurrence of this action is significantly lower in the sadness than in the hostility cluster; no other difference concerning this action reaches statistical significance [F(4,195) = 7.176; p = 0.0002]. Finally, AU15 (Lip corner depressor), an important element of prototypical expressions of sadness, is found significantly more often in the sadness cluster than in the hostility cluster [F(4,195) = 2.663; p = 0.003]. Overall, the ANOVA tests reveal that only a limited number of the facial action units theoretically associated with the display of emotional expressions (5 out of 20) can be said to differ in relative frequency of occurrence across the clusters. Still, when the differences are significant they are coherent with the basic emotion view, in the sense that these AUs are essential elements in the constitution of the prototypical patterns considered characteristic of these clusters.
Cluster characterization by patterns of action units
Next, we performed a Greenhouse-Geisser-corrected MANOVA to check whether the patterns of frequencies of occurrence of the AUs could be said to differ across the five clusters. The analysis
shows a significant AUs-by-clusters interaction [F(160, 7800) = 2.12, p = 0.000]. We feel that this supports the view that patterns of facial actions, rather than individual AUs considered in isolation, will better explain how raters classified the video samples into the five clusters. Accordingly, our next step was to rank the facial action units possibly involved in the composition of facial emotion prototypes according to their importance within each cluster. To perform this, we used a "test value" on the AUs predicted to be part of emotion prototypes according to the EMFACS dictionary. The test value (VT) is mainly used for the characterization of a group of observations according to continuous or categorical variables. Here, the groups were defined by our five clusters (enjoyment, hostility, embarrassment, surprise and sadness) and the variables were the relative frequency distributions of the core EMFACS AUs in each cluster. The principle is elementary: we compare the means computed on the whole sample with those computed on the sub-sample related to each cluster. By ranking the VTs for the action units, we wanted to detect those that are most characteristic of each cluster. As an example, let us examine AU6 in the enjoyment group (see table 22, column 1). The group includes 7% (N = 14) of the video sample files. The mean relative frequency for AU6 in this group is 0.07, compared to 0.02 for the rest of the sampled population; the computed standard deviation is given in brackets. The test value for AU6 in the first column is 1.31, which makes it the most characteristic action unit for this cluster. Note that one should not overly focus on comparing the computed VT with a threshold, which is very difficult to define in practice (Lebart et al., 2000). It is more important to use the VT as a criterion for ranking the variables, in order to distinguish those that play an essential role in the interpretation of the groups. At first glance, what table 22 shows seems roughly compatible with Ekman's predictions of prototypical expressions. Looking at the most characteristic AUs in each cluster, we find that they are constitutive elements of the prototypical facial patterns predicted by basic emotion theory for the enjoyment (AU6+12), surprise (AU1+2+5) and sadness (AU1+15) categories. As for our composite hostility category, it is mainly characterized by facial actions constitutive of “disgust” patterns. Finally, because embarrassment is not considered a basic emotion, no EMFACS predictions are suggested for it. We find that, similarly to the enjoyment cluster, the embarrassment group is characterized by AU12 and AU6. At this point, however, we do not know whether these action units do indeed significantly overlap in time in the clusters, producing momentary facial patterns corresponding to the predicted prototypes.
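The formula for the test value is not spelled out above; for reference, a common formulation for a continuous variable (after Lebart et al.) compares the cluster mean with the overall mean, scaled by the standard error expected for a random subsample of the cluster's size:

% Test value of a continuous variable x in cluster k (reconstructed after Lebart et al.)
VT_k = \frac{\bar{x}_k - \bar{x}}{\sqrt{\frac{s^{2}}{n_k}\cdot\frac{n - n_k}{n - 1}}}

where \bar{x}_k and n_k are the mean and size of cluster k, and \bar{x}, s^{2} and n are the overall mean, variance and sample size. Large positive values flag variables whose cluster mean sits well above the overall mean.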
Prototypical Patterns of Facial Expressions across the Clusters
In this section, we test the hypothesis that some EMFACS patterns of facial configurations may be more characteristic of some clusters than of others. We used the relative joint frequency duration (joint duration of AUs divided by the duration of the video record) of the overlapping actions to test for group differences. The facial patterns tested in this analysis include the same combinations that were used at the database level. Because Kolmogorov-Smirnov and Lilliefors tests for normality (see appendix 6 for details) showed that the frequency distributions of the majority of action units did not follow a normal distribution, we decided to use Kruskal-Wallis and Mann-Whitney U tests for potential group differences concerning specific facial patterns. Kruskal-Wallis tests show significant overall group differences for both the prototypical and the masked EMFACS categories (respectively, prototypical: H(4, N = 3800) = 12.49, p = .014, and masked: H(4, N = 3800) = 72.24, p = .000). However, we found no statistically significant differences for blended expressions across the five clusters: H(4, N = 3800) = 9.99, p = .071. Next, we performed Kruskal-Wallis tests on each individual action unit combination listed in the "prototype" and "masked" categories to determine whether the frequency of overlapping time units of the predicted AUs differed significantly across the five clusters. Finally, we performed pairwise Mann-Whitney tests to estimate the direction of these differences (a sketch of such an analysis appears below). In the next sections, we report the results of both analyses. First we will look at the displays predicted for the seven basic emotions included in the EMFACS taxonomy: enjoyable emotions, anger, contempt, disgust, fear, surprise and sadness. Second, we will look at the masking smile patterns.
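For reference, here is a sketch (ours, using SciPy rather than the software we employed, and with invented numbers in place of our data) of how this pair of tests can be run:

import numpy as np
from scipy import stats

# Relative joint durations of one AU combination per clip, grouped by
# rating cluster (illustrative values only).
clusters = {
    "enjoyment":     np.array([0.19, 0.21, 0.05, 0.12, 0.08]),
    "hostility":     np.array([0.02, 0.00, 0.01, 0.03, 0.00]),
    "embarrassment": np.array([0.08, 0.05, 0.10, 0.02, 0.04]),
    "surprise":      np.array([0.00, 0.02, 0.01, 0.00, 0.01]),
    "sadness":       np.array([0.03, 0.01, 0.00, 0.02, 0.01]),
}

# Omnibus Kruskal-Wallis test across the five rating clusters.
h, p = stats.kruskal(*clusters.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.3f}")

# Pairwise follow-up for the direction of a difference,
# e.g. enjoyment versus hostility.
u, p_u = stats.mannwhitneyu(clusters["enjoyment"], clusters["hostility"],
                            alternative="two-sided")
print(f"Mann-Whitney: U = {u:.1f}, p = {p_u:.3f}")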
Results for prototypical expressions across the clusters
First, we will review and discuss the significant Kruskal-Wallis and Mann-Whitney U tests for the basic emotion prototypes; then we will do the same for the masked emotion category.
Happiness
The only EMFACS prediction for any enjoyable emotion involves the combined innervation of the orbicularis oculi, pars orbitalis (AU6), and the zygomaticus major (AU12); see figure 15. The Kruskal-Wallis test confirms that the relative duration of this combination differs significantly across the five rating groups [H(4, N = 200) = 42.81, p = 0.000]. More
specifically, the Mann-Whitney tests reveal that the relative duration of overlap between action units 6 and 12 is significantly longer in the video records rated as conveying a sense of enjoyment than in all the other clusters (see table 23).
Figure 15. AU6+12
Anger
For angry expressions, only 2 of the 8 proposed combinations are shown to differ across clusters. These are combinations AU4+5 [H(4, N = 200) = 8.86, p = 0.039] and AU4+7 [H(4, N = 200) = 8.31, p = 0.040]. Both configurations involve the brow and eye regions. Action unit 4 draws the brows down and together. Combined with AU4, action units 5 or 7 produce the two versions of “angry” eyes described by Ekman (Ekman and Friesen, 2003, p. 83). In the AU4+7 version (figure 16), the lower eyelid is tensed, narrowing the lower part of the eye, while the action of unit 4 creates the impression that the upper eyelid is lowered. In the AU4+5 version (figure 17), the brow is also lowered, reducing the eye aperture, but the action of unit 5 produces a wider gaze opening than in AU4+7. The AU4+5 configuration is found in both the “hostility” and “sadness” clusters; Mann-Whitney tests show it is significantly more frequent in the “hostility” than in the “sadness” cluster (see table 23). The AU4+7 combination is found only in the “embarrassment” and “enjoyment” clusters, significantly more in the “embarrassment” than in the “enjoyment” group (see table 23). The Kruskal-Wallis tests for the other predictions tested were all non-significant: AU17+23 [H(4, N = 200) = 2.57]; AU17+24 [H(4, N = 200) = 1.85]; AU4+5+10 [H(4, N = 200) = 3.47, p = 0.482]; AU4+7+10 [H(4, N = 200) = 5.06, p = 0.280]; AU4+7+23 [H(4, N = 200) = 2.70, p = 0.600] and AU4+5+7 [H(4, N = 200) = 3.24, p = 0.518]. Prototypical “anger” expressions also include lower face
movements, notably the lips pressing against each other or an open, squarish mouth (as in screaming). When these lower face elements are not present together with the involvement of the brows/forehead and eyes/lids, the meaning of these expressions is ambiguous (Ekman and Friesen, 2003, p. 87). Aside from a slight display of anger or signs of anger control, a serious, concentrated or determined attitude are also presented as possible interpretative alternatives by Ekman and Friesen (2003).
Figure 16. AU4+7
Figure 17. AU4+5
Fear
Out of the five "fear" expressions investigated, only a single configuration, involving three upper face actions, was found to differ across the clusters: AU1+2+4 [H(4, N = 200) = 13.29, p = 0.009]. The combination of AU1+2 with AU4 (see figure 18) is considered a typical “fear” brow in Ekman's terminology (Ekman and Friesen, 2003, p. 50). The brows are lifted as they are in surprise, but in addition to the lift they are drawn together, so that the inner corners of the brows are closer together in fear than in surprise. According to Ekman, a full-blown expression of fear would also include an upper eyelid raise (AU5) as well as a bilateral stretch of the mouth (AU20). When the brow is held in the fear position with the rest of the face uninvolved, Ekman suggests that worry or controlled fear might be conveyed (Ekman and Friesen, 2003, p. 52). The Mann-Whitney tests reported in table 23 show that this expression is found significantly more in the embarrassment and sadness clusters than in the hostility group. The surprise and enjoyment clusters contain no instances of this combination. The non-significant combinations tested were: AU1+2+5 [H(4, N = 200) = 3.60, p = 0.560]; AU1+2+20 [H(4, N = 200) = 3.95, p = 0.412]; AU1+2+4+20 [H(4, N = 200) = 0.00, p = 1.000]; AU1+2+5+20 [H(4, N = 200) = 3.60, p = 0.462].
Figure 18. AU1+2+4
Surprise
None of the three proposed configurations for "surprise" expressions differs in frequency of common overlapping time units across the five clusters. The configurations tested were: AU1+2+5 [H(4, N = 200) = 2.94, p = 0.560]; AU1+2+26 [H(4, N = 200) = 2.94, p = 0.908] and AU25+26 [H(4, N = 200) = 2.94, p = 0.36]. Note that Ekman (2003) has pointed out that the evidence for surprise being a basic emotion in his sense is the weakest of all candidates, because it is hedonically neutral. Moreover, in recognition studies prototypical displays of surprise are often not distinguishable from fear (Ekman, 2003). Apart from this possibly ambiguous status of surprise as a “basic” emotion possessing a distinctive facial display, an alternative explanation can be invoked for the apparent lack of specificity in the distribution of the AU1+2+5 configuration across the five rating groups. In our dataset, emotional as well as conversational facial signals are presented together to the judges. Because the combination of AU1 with AU2 has been documented to serve as a common conversational gesture used to emphasize (baton) or underline (underliner) parts of speech, it is possible that a large proportion of the action units 1+2 in the database serve such conversational functions. If this is the case, it becomes difficult to find quantitative differences in the association of AU1+2 with other AUs that are not due to chance alone.
Sadness
Of the six configurations tested for their predicted potential to convey a sad demeanor, only one came out as significantly differing across the five clusters [H(4, N = 200) = 11.82, p = 0.019]. The combination of AU6 (Cheek raise) with AU15 (Lip corner depressor) (figure 19) is found to be more prevalent in the enjoyment than in the
hostility and sadness clusters (see table 23). The non-significant combinations tested were AU1+14 [H(4, N = 200) = 2.51, p = 0.642]; AU1+10 [H(4, N = 200) = 5.10, p = 0.277]; AU1+14U [H(4, N = 200) = 5.29, p = 0.258]; AU6+17 [H(4, N = 200) = 8.98, p = 0.068] and finally AU6+15+17 [H(4, N = 200) = 5.61, p = 0.230].
Figure 19. AU6+15
Disgust
For disgust displays, the frequency of occurrence of only 1 of the 4 possible EMFACS prototypes tested is found to differ across the rating groups [H(4, N = 200) = 17.18, p = 0.0018]. Table 23 shows that in the sample files belonging to the enjoyment group, the duration of overlap between AU10 and AU12 (see figure 20) is significantly longer than in any other cluster. Additionally, this combination is also more characteristic of the embarrassment than of the sadness cluster. Although "disgust" expressions are mainly depicted with single actions, AU9 or AU10, in emotion recognition studies, the combination of AU10 with a smiling action (AU12) appears in the dictionary predictions as a prototypical expression of disgust.
Figure 20. AU10+12
Interestingly, the same configuration is also listed as a possible masked expression of disgust. The authors of the dictionary seem to leave open the interpretation of this display as communicating either a frank attitude of disgust or, alternatively, an attempt at concealing a disgust reaction. The fact that this display is more characteristic of video files rated as conveying enjoyment rather than hostility can be interpreted in several ways. First, it is possible that raters simply disregarded subtle signs of disgust (AU10) in their assessment of video files rated as positively valenced. Second, the AU10 may have been noted, but its association with AU12 may have dampened its negative message value.
Contempt
For contempt displays, the only action unit combination proposed in the EMFACS taxonomy involves the following actions: AU1+2+14. The other predictions are limited to the single action units AU10U, AU12U and AU14U, which were shown not to differ significantly in their frequency of occurrence across the five clusters. In our dataset, the cumulated time units during which AU1+2 and AU14 overlap are not found to be significantly larger in any of the rating clusters [Kruskal-Wallis: H(4, N = 200) = 3.47, p = 0.1].
Masking smiles (blends of smiles with displays of negatively valenced emotions)
After examining the predictions for prototypical emotions, we now turn to the EMFACS category of masking smiles. By definition, the patterns of AUs in this category all involve an action unit 12, supposed to signal either enjoyment or politeness (social smile), together with action units typical of negatively valenced affects (sadness, anger, contempt, disgust) or of the hedonically neutral category of surprise. The Kruskal-Wallis tests show that for the EMFACS category of smiles blending with negative signals, several combinations vary significantly in the overlap durations of their constitutive action units across the clusters. Action units 1+2 accompanied by a D-smile (figure 21) [H(4, N = 200) = 12.30, p = 0.008], which can be interpreted either as a concealment of surprise or, alternatively, as a pleasantly surprised reaction, is found more frequently in the enjoyment than in all the other clusters (see table 24 for U tests and p values). When AU1+2 is paired with a simple AU12 (social smile) [see figure 22; Kruskal-Wallis: H(4, N = 200) = 11.22, p = 0.04], the Mann-Whitney U tests reveal that the pattern is more frequent in the enjoyment group than in the hostility, surprise and sadness groups. Moreover, this pattern is also more frequent in both the hostility and embarrassment groups than in the surprise cluster. No significant differences are found between the enjoyment and embarrassment clusters. Another variant of blends between surprise and enjoyment, AU1+2+5+12 (figure 23), is present in all the clusters with the exception of the surprise group. Moreover, it is significantly more frequent in the enjoyment than in the sadness cluster.
Figure 21. AU1+2+6+12
Figure 22. AU1+2+12
Figure 23. AU1+2+5+12
Table 23. EMFACS Prototypes, Joint Duration: Mann-Whitney U Tests by variable cluster (marked tests are significant at p < .05)

AU Combo   Cluster comparison        Prototype   Rank Sum G1   Rank Sum G2   Mean G1   Std G1   Mean G2   Std G2   U        Z adj.   p-level
AU6+12     Enjoyment-Hostility       Happiness   467.0         1879.0        0.191     0.136    0.019     0.053    111.5    5.109    0.000
AU6+15     Enjoyment-Hostility       Sadness     577.0         1769.0        0.026     0.047    0.004     0.019    284.0    2.774    0.006
AU10+12    Enjoyment-Hostility       Disgust     631.5         1714.5        0.109     0.155    0.016     0.058    229.5    3.188    0.001
AU6+12     Enjoyment-Embarrassment   Happiness   633.5         1511.5        0.191     0.136    0.083     0.104    185.5    2.853    0.004
AU4+7      Enjoyment-Embarrassment   Anger       364.0         1781.0        0.000     0.000    0.024     0.052    259.0    -2.175   0.030
AU10+12    Enjoyment-Embarrassment   Disgust     579.5         1565.5        0.109     0.155    0.024     0.056    239.5    2.334    0.020
AU6+12     Enjoyment-Surprise        Happiness   416.5         403.5         0.191     0.136    0.020     0.053    52.5     4.147    0.000
AU10+12    Enjoyment-Surprise        Disgust     372.5         447.5         0.109     0.155    0.002     0.012    96.5     3.470    0.001
AU6+12     Enjoyment-Sadness         Happiness   749.5         1665.5        0.191     0.136    0.028     0.064    125.5    4.615    0.000
AU6+15     Enjoyment-Sadness         Sadness     589.0         1826.0        0.026     0.047    0.002     0.010    286.0    3.022    0.003
AU10+12    Enjoyment-Sadness         Disgust     630.5         1784.5        0.109     0.155    0.025     0.074    244.5    2.905    0.004
AU1+2+4    Hostility-Sadness         Fear        12386.0       11485.0       0.009     0.027    0.001     0.006    5380.0   2.830    0.005
AU4+5      Hostility-Sadness         Anger       3166.0        2829.0        0.015     0.047    0.001     0.005    1289.0   2.489    0.013
AU6+12     Hostility-Embarrassment   Happiness   2340.0        3225.0        0.019     0.053    0.083     0.104    855.0    -4.027   0.000
AU1+2+4    Hostility-Embarrassment   Fear        11810.0       10345.0       0.009     0.027    0.001     0.006    5092.0   2.055    0.040
AU6+12     Embarrassment-Surprise    Happiness   2226.5        776.5         0.083     0.104    0.020     0.053    425.5    2.884    0.004
AU10+12    Embarrassment-Surprise    Disgust     2120.5        882.5         0.024     0.056    0.002     0.012    531.5    2.171    0.030
AU6+12     Embarrassment-Sadness     Happiness   3169.5        2501.5        0.083     0.104    0.028     0.064    961.5    3.250    0.001
When paired with a D-smile, the upper lip raise action typical of disgust expressions (figure 24) is shown to differ across the clusters [Kruskal-Wallis: H (4, N = 200) = 4.37, p = 0.05]. The U tests show that this pattern is more frequent in the enjoyment cluster than in any other cluster. It is also more present in the embarrassment than in the hostility and surprise clusters. As a reminder, the analysis of variance previously showed that, considered individually, the relative frequency of AU10 was not significantly different across the five clusters. The frequency of overlapping action units AU9 and AU12, a second variant of a disgust display paired with a smile, also varies significantly across the clusters [H (4, N = 200) = 3.47, p = 0.04]. AU9+12 (figure 25) is more frequent in the enjoyment and hostility clusters than in the sadness cluster. It is absent from the embarrassment and surprise clusters.
Figure 24. AU6+10+12
Figure 25. AU9+12
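For readers who wish to reproduce this kind of analysis, the sketch below shows one way to run the omnibus Kruskal-Wallis test followed by pairwise Mann-Whitney U comparisons in Python with SciPy. The cluster names match ours, but the per-sample frequency values and the data layout are invented placeholders, not our measurements.

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical relative frequencies of one AU combination (e.g. AU10+12)
# per video sample, grouped by rating cluster. Values are placeholders.
clusters = {
    "enjoyment":     [0.12, 0.00, 0.31, 0.08, 0.15],
    "hostility":     [0.00, 0.02, 0.00, 0.01, 0.00],
    "embarrassment": [0.03, 0.00, 0.05, 0.00, 0.02],
    "surprise":      [0.00, 0.00, 0.01, 0.00, 0.00],
    "sadness":       [0.00, 0.01, 0.00, 0.00, 0.03],
}

# Omnibus test: does the distribution differ across the five clusters?
h, p = kruskal(*clusters.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.3f}")

# Pairwise follow-up: Mann-Whitney U for every pair of clusters.
if p < 0.05:
    for (name1, g1), (name2, g2) in combinations(clusters.items(), 2):
        u, p_pair = mannwhitneyu(g1, g2, alternative="two-sided")
        print(f"{name1} vs {name2}: U = {u:.1f}, p = {p_pair:.3f}")
```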
The combination AU6+12+17+23 is listed in the dictionary as a blended expression of anger and enjoyment. In addition to a D-smile, the action of the mentalis muscle raises the chin (AU17) and the lips are tightened (AU23). The Kruskal-Wallis test is significant [H (4, N = 200) = 10.36, p = 0.001], and the U tests show that this configuration is significantly more frequent in the enjoyment than in the hostility group (see table 24). The pattern does not appear in the embarrassment, surprise or sadness clusters.
Figure 26. AU6+12+17+23
The two remaining configurations for which the Kruskal-Wallis tests show significant differences between the clusters are AU12+20 [H (4, N = 200) = 18.3, p = 0.000] and AU6+12+20 [H (4, N = 200) = 12.36, p = 0.005]. AU20 is a lower face action that stretches the lips laterally. This action unit is an important element of prototypical fear displays. Here, it is associated with signs of both “genuine/felt” and “social” smiles. These two configurations are mostly associated with the enjoyment cluster.
Figure 27. AU6+12+20
Figure 28. AU12+20
Table 24. EMFACS Masked Expressions – Joint Duration. Mann-Whitney U test by cluster. Marked tests are significant at p < .05.
AU Combo | Cluster comparison | Masked E. | Rank Sum G1 | Rank Sum G2 | Mean G1 | Std G1 | Mean G2 | Std G2 | U | Z adjusted | p-level
AU1+2+6+12 | Enjoyment-Hostility | Surprise | 630.5 | 1715.5 | 0.031 | 0.052 | 0.011 | 0.041 | 231 | 3.367 | 0.001
AU1+2+12 | Enjoyment-Hostility | Surprise | 645.0 | 1701.0 | 0.066 | 0.059 | 0.032 | 0.061 | 216 | 2.683 | 0.007
AU6+10+12 | Enjoyment-Hostility | Disgust | 605.0 | 1741.0 | 0.057 | 0.112 | 0.005 | 0.031 | 256 | 3.508 | 0.000
AU6+12+17+23 | Enjoyment-Hostility | Anger | 510.0 | 1836.0 | 0.003 | 0.011 | 0.000 | 0.000 | 351 | 1.964 | 0.050
AU6+10+12 | Hostility-Embarrassment | Anger | 2671.0 | 2894.0 | 0.005 | 0.031 | 0.015 | 0.047 | 1186 | -2.304 | 0.021
AU1+2+5+12 | Hostility-Surprise | Surprise | 2291.0 | 949.0 | 0.011 | 0.034 | 0.000 | 0.000 | 598 | 2.052 | 0.040
AU1+2+12 | Hostility-Surprise | Surprise | 2368.0 | 872.0 | 0.032 | 0.061 | 0.009 | 0.027 | 521 | 2.327 | 0.020
AU9+12 | Hostility-Sadness | Disgust | 3080.0 | 2915.0 | 0.003 | 0.013 | 0.000 | 0.000 | 1375 | 2.047 | 0.041
AU1+2+5+12 | Embarrassment-Surprise | Surprise | 2119.0 | 884.0 | 0.011 | 0.032 | 0.000 | 0.000 | 533 | 2.397 | 0.017
AU1+2+12 | Embarrassment-Surprise | Surprise | 2225.0 | 778.0 | 0.055 | 0.078 | 0.009 | 0.027 | 427 | 3.018 | 0.003
AU6+10+12 | Embarrassment-Surprise | Disgust | 2106.0 | 897.0 | 0.015 | 0.047 | 0.000 | 0.000 | 546 | 2.259 | 0.024
AU1+2+6+12 | Enjoyment-Embarrassment | Surprise | 562.5 | 1582.5 | 0.031 | 0.052 | 0.015 | 0.036 | 257 | 2.075 | 0.038
AU12+20 | Enjoyment-Embarrassment | Fear | 579.0 | 1566.0 | 0.044 | 0.078 | 0.009 | 0.033 | 240 | 2.858 | 0.004
AU1+2+5+12 | Enjoyment-Surprise | Surprise | 339.0 | 481.0 | 0.017 | 0.035 | 0.000 | 0.000 | 130 | 2.832 | 0.005
AU1+2+6+12 | Enjoyment-Surprise | Surprise | 370.5 | 449.5 | 0.031 | 0.052 | 0.002 | 0.012 | 99 | 3.389 | 0.001
AU1+2+12 | Enjoyment-Surprise | Surprise | 410.5 | 409.5 | 0.066 | 0.059 | 0.009 | 0.027 | 59 | 4.111 | 0.000
AU6+10+12 | Enjoyment-Surprise | Disgust | 352.0 | 468.0 | 0.057 | 0.112 | 0.000 | 0.000 | 117 | 3.208 | 0.001
AU6+12+20 | Enjoyment-Surprise | Fear | 326.0 | 494.0 | 0.024 | 0.050 | 0.000 | 0.000 | 143 | 2.421 | 0.015
AU12+20 | Enjoyment-Surprise | Fear | 358.0 | 462.0 | 0.044 | 0.078 | 0.002 | 0.012 | 111 | 3.040 | 0.002
AU1+2+5+12 | Enjoyment-Sadness | Surprise | 577.0 | 1838.0 | 0.017 | 0.035 | 0.001 | 0.007 | 298 | 2.335 | 0.020
AU1+2+6+12 | Enjoyment-Sadness | Surprise | 637.5 | 1777.5 | 0.031 | 0.052 | 0.007 | 0.027 | 238 | 3.226 | 0.001
AU1+2+12 | Enjoyment-Sadness | Surprise | 681.5 | 1733.5 | 0.066 | 0.059 | 0.024 | 0.047 | 194 | 3.184 | 0.001
AU6+10+12 | Enjoyment-Sadness | Disgust | 593.0 | 1822.0 | 0.057 | 0.112 | 0.014 | 0.047 | 282 | 2.510 | 0.012
AU6+12+20 | Enjoyment-Sadness | Fear | 572.5 | 1842.5 | 0.024 | 0.050 | 0.000 | 0.000 | 303 | 3.484 | 0.000
AU9+12 | Enjoyment-Sadness | Disgust | 517.5 | 1897.5 | 0.003 | 0.012 | 0.000 | 0.000 | 358 | 1.982 | 0.047
AU12+20 | Enjoyment-Sadness | Fear | 638.0 | 1777.0 | 0.044 | 0.078 | 0.006 | 0.040 | 237 | 3.972 | 0.000
Summary of results
In this section, we have examined the main patterns of FACS action units
corresponding to the most common EMFACS predictions for the basic, blended and masked
expression categories. We explored the possibility that these patterns may be more prevalent
in some rating clusters than in others. Results show that out of the 39 tested predictions for prototypical expressions of basic emotions, 6 patterns (15%) were found to differ across the clusters in their relative frequency of co-occurrence. These correspond to one pattern for enjoyment (AU6+12), one for disgust (AU10+12) and one for sadness (AU6+15). Additionally, two partial prototypes for anger (AU4+5 / AU4+7) and one for fear (AU1+2+4) were detected as significant. These last three cannot be considered full-blown patterns, though, since only the upper face region is involved and the lower face defining elements are missing. No significant cluster differences were found for the surprise and contempt prototypes.
Surprisingly, for the category of blended expressions, defined as the co-occurrence of two or more action units from distinct negative basic prototypes, no significant cluster differences were found for any of the 21 combinations tested. We did find some significant cluster differences for masked expressions (a social or D-smile combined with AUs of negative displays). Out of the 39 EMFACS predictions tested for this category, 8 (21%) stood out: three blends of surprise/happiness (AU1+2+6+12, AU1+2+12, AU1+2+5+12); two blends of disgust/happiness (AU6+10+12, AU9+12); one blend of anger/happiness (AU6+12+17+23); and two blends of fear/happiness (AU6+12+20, AU12+20).
Overall, out of the 99 EMFACS predicted patterns of action units tested for the categories of basic, blended and masked expressions, only 14 (14%) are shown to differ in their frequency of occurrence across the rating clusters. In keeping with basic emotions theory, co-activation of AU6 with AU12 is strongly characteristic of the enjoyment group. One of the anger prototypes (AU4+5) appears in both the hostility and sadness groups but is found to be more characteristic of hostility than sadness. Contrary to what basic emotion theory would predict, the other partial configuration for anger (AU4+7) is found to be most characteristic of the embarrassment rather than the hostility cluster. Moreover, it is also found, to a lesser extent, in the enjoyment cluster. The “fear” brow (AU1+2+4) is found significantly more in the embarrassment and sadness clusters than in the hostility group. The sadness pattern (AU6+15) is found to be more prevalent in the enjoyment than in the hostility
and sadness clusters. AU10+12, a typical disgust pattern, is most prevalent in the enjoyment cluster. As for the masking smiles category, most of these patterns are shown to be more prevalent in the enjoyment cluster than in the others.
In summary, the predicted patterns of co-occurrence specified by the EMFACS dictionary for the categories of basic, blended and masked expressions are rarely found to occur above chance level in any one cluster. When they do, their predominance in some clusters over others does not lend itself to unequivocal interpretation in terms of their predicted emotional meaning. At this point, we feel that the case for momentary configurations of prototypical facial action units as an explanation for the distribution of our video records into five rating groups is hardly supported. In the next section, we will introduce our methodological choice of sequential analysis for detecting dynamic patterns of facial expressions in the clusters.
Sequential Analysis of Communicative Behaviors – Methodological Issues
“Behavior consists of patterns in time. Investigations of behavior deal with
sequences that, in contrast to bodily characteristics, are not always visible”
(Eibl-Eibesfeldt, 1970).
Generally speaking, sequential analyses pursue two aims (Bakeman and Gottman,
1986). The first one is to discover probabilistic patterns in the stream of code events; in other
words, the interest is centered on the order and the prevailing sequences that characterize a
data set. The second aim is to assess the effect of contextual or explanatory variables on the
sequential structure of experimental data sets. Because our aim is to discover regularities in
the temporal unfolding of facial actions, the appropriate methodology to investigate the
behavioral sequences is pattern recognition. Pattern research can be both hypothesis driven
(an earlier part of the pattern may be seen as a likely cause of the later part of the same
pattern) and empirically driven (the search for associations between parts of sequences
without specific hypotheses). Since the structure of human behavior is very complex, it is
convenient to distinguish between manifest behavioral patterns (visible and recurrent patterns
in behavioral streams) and what has been described by Magnusson (2000) as hidden
behavioral patterns (particular relations between groups or pairs of events in a time series). In
the latter case, the aim is to find nested relations among a complex sequence of occurrences. In the first approach, the statistical techniques are simple and based on conditional probability theory, but they require testing theory-driven hypotheses. Prior knowledge, whether theory-based or based on previous studies, plays an important role in determining
the choice of the sequence pattern under study. The aim is to test whether the expected pattern
occurs among the observed patterns more often than by chance in terms of its conditional
probability distribution. The conditional probability of a sequence is estimated by dividing the number of times it occurred by the number of times it could have occurred in the observational records (the number of occurrences plus the number of non-occurrences). This approach to describing behavioral sequences, however, is not exhaustive, because patterns easily become invisible to the naked eye when other behaviors occur in between. For example,
imagine that each letter on the first line of figure 29 represents a distinct event type. The underlying line stands for the time period over which the successive events unfold. Note how, with only six different event types, it is already difficult to visually detect the repetition of a simple ((ab)(cd)) sequence at first glance.
Figure 29. (Schematic: interleaved event types unfolding along a time line.)
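The conditional-probability estimate described above, and the way interleaved events hide a simple ((ab)(cd)) structure from the naked eye, are easy to demonstrate in code. The following minimal Python sketch uses an invented event stream and an arbitrary maximum lag; neither comes from our data.

```python
# Sketch: estimate the conditional probability that event B follows
# event A within `max_lag` positions of an event stream.
def conditional_probability(stream, a, b, max_lag=3):
    occurrences = [i for i, ev in enumerate(stream) if ev == a]
    if not occurrences:
        return 0.0
    # Each occurrence of A is one opportunity for the A -> B transition.
    hits = sum(
        1 for i in occurrences
        if b in stream[i + 1 : i + 1 + max_lag]
    )
    return hits / len(occurrences)

# Interleaved noise (x, y, z) hides the repeating ((ab)(cd)) structure,
# yet the conditional probabilities still reveal it.
stream = list("axbyczdxaybzcxdyazbxcydz")
print(conditional_probability(stream, "a", "b"))  # 1.0: b reliably follows a
print(conditional_probability(stream, "a", "d"))  # 0.0: d does not follow a
```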
Traditional sequential analysis is thus only useful in confirmatory research designs and ill-suited for discovering new behavioral patterns that could generate further hypothesis testing and model development. The “hidden pattern” approach helps to distinguish which pattern is relevant (signal) and which is irrelevant (noise) by reducing extraneous sources of variability (i.e., by increasing the signal-to-noise ratio). The tools for the study of hidden patterns belong to a specific branch of statistics (also called classification theory) that explores structure and patterns in large data sets without recourse to the classical assumptions of the confirmatory approach, often too rigid for practical use (Lange, 1998). Magnusson (2000) defined hidden patterns as “patterns of patterns of patterns” (t-patterns). He suggests that his definition of t-patterns (to be reviewed below) and its associated detection algorithm could be particularly useful for the study of human behavior, especially for discovering hierarchical patterns (patterns composed of simpler sequence patterns) that are impossible to detect by traditional sequential analysis.
“…the patterns in question are not only patterns of elements as their various
components are also patterns as, for example, any common phrase that is a
repeated pattern of words, which again are composed of letters, etc…. as we
go from the phrase to the letter the patterns in question become increasingly
frequent, that is, in a standard phrase, each of its words are more frequent
than the phrase, and any letter more frequent than a word. On the other hand, there are far more different words than letters. But behavior is not always as
plain to see as words and letters on a page." (Magnusson, 2005)
In fact, a few German research groups have been pioneers in using the t-pattern
approach for the investigation of facial actions coded with FACS. The main focus of these
research groups has been on the detection of episodes of mutually responsive facial patterns,
observed in the context of face-to-face conversations. For example, Merten and Schwab (Merten, 1997; Merten and Schwab, 2005) have studied different types of patterns involving
mutual smiling episodes. They described several t-patterns involving two interacting
individuals that are composed of either two Duchenne smiles, two social smiles, a Duchenne
smile being responded to with a social smile or a social smile being responded to with a
Duchenne smile. Their research supports the notion that these t-patterns might serve distinct
communicative and interpersonal regulation functions. For example, they proposed that a t-
pattern where person A shows a Duchenne smile while the other (B) is reacting with a social
smile, might be understood as an intimacy de-intensifying signal. When both individuals
show a Duchenne smile aligned with a t-pattern, the authors suggest that the episode may
signal positive intent and/or positive affect sharing. If person A starts with a social smile and
person B responds with a Duchenne smile, the episode is understood as an intimacy
intensifying pattern. Finally, when both show a social smile aligned by a t-pattern, the
exchange is seen as an appeasement signal with no connection to enjoyment. These studies show that dyads composed of women tend to produce more interactive intimacy-implementing patterns. In contrast, all-male dyads show more appeasement patterns. Interestingly, in mixed-gender dyads, males seem to modify their behavior by increasing their participation in intimacy-implementing patterns initiated by women. Intimacy de-intensifying patterns are mostly found when women interact with patients suffering from mental disorders. This is understood by the authors as a polite way of rejecting an invitation (initiated by a patient) to intimacy and mutual positive evaluation.
Definition of T-patterns.
In the present section, we will go deeper into the definition of what a t-pattern is and what its distinctive characteristics are. A t-pattern is essentially a combination of events in which the events occur in the same order, with the consecutive time distances between consecutive pattern components remaining relatively invariant with respect to an expectation assuming, as a null hypothesis, that each component is independently and randomly distributed over time. As stated by Magnusson:
“if A is an earlier and B a later component of the same recurring T-pattern
then after an occurrence of A at t, there is an interval [t+d1, t+d2]
(d2 ≥ d1 ≥ 0) that tends to contain at least one occurrence of B more often than
would be expected by chance” (Magnusson, 2000).
The temporal relationship between A and B is defined as a critical interval, and this concept lies at the centre of the pattern detection algorithms. Through the THEME 5.0 software package, the pattern detection algorithms can analyze both ordinal and temporal data; however, for the algorithms to generate the most meaningful analyses, the raw data must be time-coded. An event type here refers to some behavior that occurs or not at a particular point on a discrete time scale, but has no duration otherwise. For example, “subject begins to speak” (or short: x,b,speak) and “subject stops AU12” (or short: x,e,12) are event types. Each event type is scored in terms of the occurrence times of its beginning and ending points on a discrete time scale. Each beginning and/or ending thus occurs or not at a discrete time point. Note that any number of event types may occur at the same point. This means that the program can detect both synchronicity and sequentiality in patterns. The occurrences of all event types within an entire set of observation periods (sample audio-video files) forming a rating cluster constitute the basic type of multivariate time-point data set that has been submitted to t-pattern analysis.
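A minimal sketch of how such a multivariate time-point data set can be represented is given below. The x,b/x,e naming follows the convention quoted above; the time values and the helper function are invented for illustration and do not reflect THEME's internal format.

```python
# Sketch: a multivariate point series in the spirit of the data fed to
# THEME. Each event type maps to the discrete time points at which it
# occurs. All time values here are invented.
from collections import defaultdict

series = defaultdict(list)

def record(event_type, t):
    """Score one occurrence of an event type at discrete time t."""
    series[event_type].append(t)

# Beginnings ("b") and endings ("e") are separate event types, so both
# synchronicity (same t) and sequentiality can later be detected.
record("x,b,speak", 12)
record("x,b,12", 12)    # AU12 onset co-occurring with speech onset
record("x,e,12", 47)
record("x,b,blink", 30)

for event_type, points in series.items():
    print(event_type, sorted(points))
```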
This figure shows all the occurring time points for each of the 55 event types coded in the 55 joined sample files of the sadness cluster. The occurrence times for each of the 55 different event types, as well as their offset time points, can thus be read from left to right across the chart.
The method of time-motion computerized video analysis we used with the Anvil program lends itself well to data collection that can then be submitted to t-pattern detection with THEME. Using the THEME software allows us to detect potentially highly complex patterns that are specific to the rating clusters. In each t-pattern, the components occur in a particular order, and the temporal distance from one to the next is of an approximate length that is characteristic of the particular pattern. If these time distances between components become too short or too long, the pattern disappears. These constraints are essential here for the detection of patterns that are often impossible to detect on the basis of component order alone, because of the highly varied number of random behaviors that can occur between their components.
T-pattern structure
Example:
A T-pattern with m components X1 … Xm (each an event type or a T-pattern) can be noted as:
X1 [d1, d2]1 X2 [d1, d2]2 … Xi [d1, d2]i Xi+1 … Xm-1 [d1, d2]m-1 Xm
[d1, d2]i stands for the interval within which the characteristic distances vary. Xi [d1, d2]i Xi+1 thus means: if Xi ends at t, it is followed within [t+d1, t+d2]i by the beginning of component Xi+1. Furthermore, any T-pattern can be described as a binary tree by splitting it recursively (top-down) into left and right halves until the event-type level is reached. Thus, for detection purposes, the T-pattern definition is narrowed to a binary tree of critical intervals between left and right branches:
Xleft [d1, d2] Xright
where Xleft stands for the first part, ending at t, followed within [t+d1, t+d2] by the beginning of Xright, and where 0 ≤ d1 ≤ d2. In [d1, d2], t is omitted to simplify notation. Note that, when the two branches are concurrent, 0 = d1 = d2.
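The critical-interval idea can be sketched as follows, under the simplifying assumption that the later component's points fall independently and uniformly on the T discrete time units (a free null model). This is an illustrative stand-in, not a reimplementation of Magnusson's detection algorithm; all data and parameter values below are invented.

```python
# Sketch: test whether the window [d1, d2] after each occurrence of A
# contains at least one occurrence of B more often than a uniform null
# model would predict.
from math import comb

def critical_interval_p(a_times, b_times, d1, d2, T):
    w = d2 - d1 + 1                       # window width in time units
    n_b = len(b_times)
    # P(a given window of width w contains >= 1 of the n_b points)
    p_hit = 1.0 - (1.0 - w / T) ** n_b
    hits = sum(
        any(t + d1 <= tb <= t + d2 for tb in b_times)
        for t in a_times
    )
    n_a = len(a_times)
    # One-sided binomial tail: P(X >= hits) under the null.
    p_value = sum(
        comb(n_a, k) * p_hit**k * (1 - p_hit) ** (n_a - k)
        for k in range(hits, n_a + 1)
    )
    return hits, p_value

# Invented data: B tends to follow A by 2-4 time units.
a = [10, 40, 75, 120, 160]
b = [13, 43, 78, 123, 163, 90]
print(critical_interval_p(a, b, d1=2, d2=4, T=200))
```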
Statistical validation of T-patterns
The detection of critical intervals, and therefore of T-patterns, is based on a null hypothesis that may be tested millions of times when exploring for patterns in a single data set; many intervals would thus be significant even if the data were random. A crucial issue is whether the subjects' coordination of their own expressive actions into the patterns found in our video sequences is due to chance. More fundamentally, the question is whether our findings are statistically significant, that is, whether far fewer patterns are detected after randomization of the data. To tackle this issue, each search in the experimental data is followed by a search in a shuffled version of the same data, that is, after the time points in each series of the real data have been randomly redistributed over the observation period. In this way the size of the data remains the same: the number of series, and the number of points in each, remain unchanged. By repeatedly shuffling and then searching for patterns in
the same data set, an occurrence distribution with a mean and a standard deviation is obtained
for each pattern length. This allows comparison with the findings in the original (un-shuffled)
data, and differences can be expressed in terms of standard deviations and p values. Recently, a second simulation method has been added to THEME (Magnusson, 2006). The “randomization” method keeps the internal structure of each series practically unchanged while randomizing the relationships between the series. This can be visualized as the time-point chart being wrapped around a cylinder, whereby each series forms a circle around the cylinder that can be rotated by a random number of degrees independently of the others. Thus, instead of shuffling every series, all series are left unchanged, but each one is rotated by a new random number of degrees (between 1 and 359).
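Both randomization procedures are straightforward to sketch. In the Python outline below, a data set is a mapping from event types to their occurrence time points on a discrete scale of length T; the function names are ours, not THEME's.

```python
# Sketch of the two randomization procedures described above.
import random

def shuffle_series(series, T):
    """Redistribute each series' points uniformly over [0, T)."""
    return {
        ev: sorted(random.sample(range(T), len(points)))
        for ev, points in series.items()
    }

def rotate_series(series, T):
    """Keep each series' internal structure; rotate it circularly by an
    independent random offset, as on Magnusson's cylinder."""
    out = {}
    for ev, points in series.items():
        shift = random.randrange(1, T)
        out[ev] = sorted((t + shift) % T for t in points)
    return out
```

Repeating either procedure many times, each application followed by a fresh pattern search, yields the null occurrence distribution against which the counts found in the real data are compared.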
Setting up T-pattern detection parameters
Setting the search parameters for the detection of T-patterns has a critical influence on the kinds and number of patterns that THEME detects. Here we explain and justify the various parameters used in this study. The first decision concerned the number of times a pattern had to occur to be detected. In general, setting smaller values means that more patterns are detected. We decided to be rather conservative and to retain only patterns that occurred at least 15 times in any given cluster. A significance level of 0.001 was set as the accepted probability determining how far from random expectation critical-interval relationships could occur for patterns to be kept or dropped. The next decision related to the minimum percentage of video samples within a cluster in which a pattern must occur to be detected. By default, a pattern with a high frequency of occurrence that is present in only one or two samples of a cluster will still be detected. In order to report only the patterns that are present across a maximum number of records, we set a 60% sample threshold for pattern detection. One also needs to define the maximum number of hierarchical levels to be investigated for pattern detection. Given that one major advantage of THEME is the detection of non-obvious patterns, we chose not to set a limit on the search level. The number of simulations performed to statistically validate the T-patterns depends on the value set as the significance level. Because the p-value was set at 0.001, the simulation was repeated (1/0.001) × 10 = 10,000 times, twice: once for the shuffling and once for the randomization method.
Table 25. T-pattern search parameters
Minimum Occurrences | Minimum Samples | Significance Level | Max Levels Searched | Random Runs
15 | 60% | 0.001 | 999 | 10,000
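For concreteness, the same settings can be written down as a small configuration sketch; the key names below are illustrative and are not THEME's actual parameter names.

```python
# Sketch: the search parameters of table 25 as a configuration dict.
SEARCH_PARAMS = {
    "min_occurrences": 15,     # pattern must recur at least 15 times
    "min_samples": 0.60,       # ...in >= 60% of a cluster's sample files
    "significance": 0.001,     # critical-interval alpha
    "max_search_levels": 999,  # effectively unlimited hierarchy depth
}

# Number of validation runs per simulation method, as derived in the text.
runs = round(10 / SEARCH_PARAMS["significance"])  # = 10,000
```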
One issue we had to tackle in setting appropriate search parameters was the unbalanced distribution of records across the five clusters (Positive Emotions = 14; Hostility = 54; Embarrassment = 51; Surprise = 26; Sadness = 55). All of the settings we chose had the effect of reducing the number of patterns found in the data. Initial analyses pointed to that necessity in order to avoid program overload with the largest clusters. On the other hand, imposing a 15-repetition detection threshold on the “positive emotions” and “surprise” clusters, aggregating only 14 and 26 samples respectively, implied a substantial risk of failing to detect a number of “interesting” patterns in these groups. Nevertheless, in order to keep the search procedure comparable across all data sets, we applied the same search parameters to all clusters.
T-pattern search results and selection criteria
Pattern search was performed on all the sample files constituting each emotion cluster. All the codes specified in our annotation scheme were fed into the analysis. By joining all the related video samples into a single data file, it becomes possible to ask THEME to detect patterns that occur only once in any given sample. This is crucial because we do not expect to find as many as 15 repetitions of a behavioral pattern in sequences that only last a few seconds. Note that after joining data sets into a single file, the program searches for patterns within the original samples; that is, a pattern cannot begin in one sample and end in another. The number of patterns found in each cluster is reported in table 26, column 3. Unsurprisingly, the number of patterns found increases with the number of files in a cluster. This renders the statistical comparison of differences in number, complexity or length of patterns between groups of observations of little interest. We could of course work on ratios, but this approach would not yield any meaningful information on pattern composition. Rather, we want to concentrate on the analysis of structural differences in pattern composition across the clusters. Because we are interested in patterns that are sequential in nature and that include at least one facial action, we reduced the data sets by excluding all patterns containing no EMFACS code or containing transition lags between two EMFACS events equal to zero. Lags of zero indicate a simultaneous onset of codes on two or more EMFACS tracks. Because co-occurrences of EMFACS codes have already been addressed with more traditional statistical methods in previous sections, we will focus here
only on purely sequential data. The numbers and percentages of patterns that were kept for further analysis are reported in columns 4 and 5 of table 26. Note that as the number of sample files in the clusters increases, the proportion of patterns including EMFACS codes decreases. This can be explained by two factors. First, FACS codes are “event-based”: they may or may not occur in any video sequence. On the other hand, codes like gaze direction or head position are typical “state” codes. This implies that at any point in time of the observation period, a subject is continuously scoring positive on one modality of the variable. For example, the subject is either looking “at” or “away” from the interviewer, but never both or neither. This inflates the global frequency of “state” codes, present in all the video samples, relative to the more circumstantial EMFACS events. A second factor contributing to the absence of a linear relationship between the number of files in a cluster and the frequency of EMFACS patterns is probably that the state transitions of some non-FACS variables operate on a faster time scale. It is not unusual to observe many gaze transitions within the boundaries of a single FACS code. For example, a subject might start by raising the brows, then look at the interviewer, blink, and look down and away before the brow action finally recedes. In any case, we can conclude from these results that the mere fact of adding more sample files to a cluster diminishes rather than increases the proportion of EMFACS patterns discovered by THEME in our datasets.
Table 26. Pattern statistics by cluster
Cluster | Samples | Total Patterns | EMFACS Patterns | % of Tot. Pat.
Enjoyment | 14 | 134 | 32 | 24%
Surprise | 26 | 241 | 38 | 16%
Hostility | 54 | 3903 | 260 | 7%
Embarrassment | 51 | 2069 | 116 | 6%
Sadness | 55 | 7605 | 143 | 2%
Total | 200 | 13952 | 589 | 4%
The exclusion of patterns that include the simultaneous onset of two or more EMFACS codes does not drastically reduce the proportion of t-patterns considered for further analysis. In fact, depending on the cluster, 71% to 87% of the original EMFACS patterns are sequential in nature. This is consistent with the fact that previous analyses showed little overlap between EMFACS codes. Altogether, the proportion of EMFACS t-patterns corresponding to our selection criteria amounts to 4% (N=496) of all the t-patterns detected by the program.
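The two selection filters just described amount to a simple predicate over detected patterns. The sketch below uses an invented pattern representation (a list of code/onset-time pairs) and assumes, for illustration only, that EMFACS codes can be recognized by an “AU” prefix.

```python
# Sketch: keep a pattern only if (a) it contains at least one EMFACS
# code and (b) no two consecutive EMFACS events have a transition lag
# of zero (i.e. no simultaneous onsets).
def is_emfacs(code):
    return code.startswith("AU")      # assumption: EMFACS codes are AU*

def keep_pattern(events):
    """events: list of (code, onset_time) tuples in pattern order."""
    emfacs = [(c, t) for c, t in events if is_emfacs(c)]
    if not emfacs:
        return False                  # filter (a): no EMFACS code
    for (_, t1), (_, t2) in zip(emfacs, emfacs[1:]):
        if t2 - t1 == 0:
            return False              # filter (b): simultaneous onsets
    return True

patterns = [
    [("look_at", 0), ("AU12", 8), ("AU6", 8)],   # dropped: zero lag
    [("AU12", 0), ("AU17", 14), ("blink", 20)],  # kept
    [("look_away", 0), ("blink", 6)],            # dropped: no EMFACS
]
print([keep_pattern(p) for p in patterns])  # [False, True, False]
```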
Table 27. Number and proportion of sequential EMFACS patterns by cluster
Cluster | EMFACS Tot. | EMFACS Sequential | % of EMFACS Tot. | % of Tot. Patterns
Enjoyment | 32 | 25 | 75% | 18%
Surprise | 38 | 27 | 71% | 11%
Hostility | 260 | 224 | 86% | 6%
Embarrassment | 116 | 100 | 86% | 5%
Sadness | 143 | 117 | 81% | 2%
Total | 589 | 493 | 84% | 4%
Following the t-pattern selection procedure, we constructed transition diagrams showing all possible transitions from one behavior unit to the next. The entry point to each diagram is an event type that was identified by THEME as initiating a pattern. Note, however, that these are not traditional transition graphs but ones in which all transitions have previously been verified statistically by the program’s algorithm. An example of such a transition diagram is given in figure 30.
Figure 30. Transition graph: all paths for t-patterns starting with AU12 in the positive emotions cluster.
This diagram comes from the positive emotions group and displays all the possible paths and their respective weights, given that a pattern starts with action unit 12 (Lip Corner Puller). Here, one finds that 11% of all the patterns in this group start with AU12. Four distinct patterns starting with AU12 are identified, composed of two to six behavior units. Note that all the transition diagrams can be found in full in appendix 3. From one cluster to another, the types, number and proportion of event types that initiate t-patterns may vary. This basic information will be used as an entry point to the complex task of comparing the structural similarities and differences of event-type combinations and their sequential organization in time. Note that the patterns we report might be difficult to detect visually as such when looking at the original video sequences. This is because occurring events that do not enter into a pattern's composition are considered random noise by the program and have been dropped from the description. In the next section, we will first introduce basic search results concerning the number, statistical validity and length distribution of the t-patterns found in the five clusters. Then, we will use the FACSGen program to generate illustrations of some patterns that have been found to differentiate the clusters.
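Before turning to the cluster-by-cluster results, the sketch below shows how weighted transition diagrams of this kind can be derived from a set of selected patterns: transitions between consecutive behavior units are counted, and each edge is weighted by the percentage of transitions leaving its source node. The example patterns are invented.

```python
# Sketch: derive a weighted transition diagram from selected t-patterns
# (each a list of behavior units in sequential order).
from collections import Counter, defaultdict

patterns = [
    ["START", "AU12", "AU17", "Stop AU12", "END"],
    ["START", "AU12", "AU6", "Stop AU6", "Stop AU12", "END"],
    ["START", "AU12", "Stop AU12", "END"],
]

edges = defaultdict(Counter)
for pat in patterns:
    for src, dst in zip(pat, pat[1:]):
        edges[src][dst] += 1

# Print each edge with its weight as a percentage of the transitions
# leaving the source node.
for src, successors in edges.items():
    total = sum(successors.values())
    for dst, n in successors.items():
        print(f"{src} -> {dst}: {100 * n / total:.0f}%")
```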
T-pattern statistics by clusters
Enjoyment cluster
The positive emotions cluster is characterized by higher ratings on the joyful, entertained and enthusiastic scales. Out of the five clusters, the enjoyment group contains the smallest number of video samples: it is composed of 14 clips (7%) out of the original 200. This is no surprise, since the experimental protocol was designed to elicit mainly negative emotion narratives. In this cluster, we found 134 different t-patterns in total. The length of the t-patterns in the experimental data varied from 2 to 6 behavior units, with a mean of 2.7 and a mode of 2. In contrast, the mean number of t-patterns detected in randomized and shuffled data was 5, with a maximum length of 2 behavior units.
The figure above shows the number of t-patterns detected in the positive emotions cluster and the mean number detected after 10,000 shufflings (blue bars) and random rotations (red bars) of the same data. The significance of the difference between the numbers of independent patterns found in the experimental data and those found in the simulated data is computed on the basis of the distance, expressed in standard deviations, between the number of “experimental” patterns and the distribution of the simulated values around their mean. The large number of standard deviations (>23 SD) strongly suggests that the t-patterns in this cluster are not the result of chance effects, for a p-value set at 0.001. Following the selection procedure described above, 24 independent patterns (18%) were kept for further analysis.
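The distance computation used throughout this section is simple to express in code; all the counts below are invented for illustration.

```python
# Sketch: express the difference between the pattern count in the real
# data and the simulated null distribution in standard deviations.
from statistics import mean, stdev

n_real = 134                      # patterns detected in the real data
n_simulated = [4, 6, 5, 7, 3, 5]  # counts from repeated shuffles/rotations

z = (n_real - mean(n_simulated)) / stdev(n_simulated)
print(f"distance from null: {z:.1f} standard deviations")
```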
Hostility cluster
The hostility cluster combines video samples that were rated high on the “disgusted”, “angry” and “scornful” adjective scales. It is the second largest group in terms of the number of video samples included in the analysis (N=54), between the “sadness” (N=55) and “embarrassment” (N=51) clusters. These 54 files represent 26% of the entire corpus. The program detected 3903 t-patterns in the original dataset. The pattern length distribution varies between 2 and 7 behavior units, around a mean of 3.99 and a mode of 4.
In contrast, the simulations do not produce any random patterns of more than three behavior units. The differences between experimental and simulated data are much smaller for short patterns of two events than for those with three units (>300 SD). Even then, the difference is still large enough, above nine standard deviations, to consider these patterns for further comparison with the other clusters.
Out of the 3903 initial t-patterns detected by THEME, 227 (6%) met the EMFACS and sequentiality criteria and were kept for further analysis.
Embarrassment cluster
The embarrassment cluster combines the “embarrassed” and “nervous” scales and joins together 51 files, 26% of the core set. In total, 2069 t-patterns were detected. The pattern length distribution in the experimental data varies between 2 and 6 behavior units, around a mean of 3.4 and a mode of 4. Interestingly, in the rotated and shuffled data the program detects almost no t-patterns. The mean number of patterns after application of the shuffling procedure is 0.5 for sequences of two events; after rotation it falls below 0.5. Note also that the distances between experimental and simulated data, expressed in standard deviations, are very large: 687 for shuffling and 1,250 for rotation. Again, we can be confident
that the associations of events found in the t-patterns of the embarrassment cluster cannot be
explained by chance effects.
Out of the 2069 original t-patterns detected by THEME, only 101 (5%) met the criteria
for further inclusion in subsequent analysis.
Surprise cluster
The surprise cluster combines the “perplexed” and “surprised” scales. After the positive emotions cluster, it is the least populated group in our datasets, with 26 video files. This represents 13% of all the files in the core data set. Pattern length is distributed between two and five events, with a mean and a mode of 3.
The number of patterns found in the simulated data does not exceed three events in length. Regardless of the simulation procedure applied, the program finds a mean of 2 patterns of at most two events in length; with three events, that figure falls below a mean of 0.5. The computed distances between the experimental and simulated data are 58 and 45 standard deviations for the shuffling and rotation methods respectively. The conclusions about the validity of the t-patterns detected in the previous groups hold for the “surprise” cluster. Out of the 241 original t-patterns detected by THEME, 27 (11%) presented the necessary features for further inclusion in subsequent analysis.
Sadness cluster
The sadness cluster is composed of the “disappointed” and “sad” scales. With 55 video records, this cluster contains the largest number of files from the core data set (28%). The number of independent patterns detected is also the largest of all the clusters: the program found 7605 t-patterns in total. Pattern length varies between two and nine behavior units, with a mean of 4.5 and a mode of 4.
Pattern validity assessment points to a mean of 19 random patterns detected for sequences two events in length. With differences between “real” and simulated data above 30 standard deviations for both simulation methods, the t-patterns found in the experimental data remain valid even for two-event sequences. The t-pattern selection for this cluster led to a drastic reduction in the number of sequences to consider for further analysis: only 2% of the original 7605 patterns were kept (N=117).
Summary of results
At this stage, the examination of the general characteristics of the t-patterns found in the five clusters yields the following information. First, the number of t-patterns found in a cluster increases with the number of files composing the cluster. Nonetheless, the proportion of t-patterns involving at least one EMFACS action decreases as the number of sample files rises. We proposed two non-mutually exclusive explanations for this phenomenon. The first concerns the type of coding involved. EMFACS codes are “event” type codes, whereas the majority of the additional non-FACS codes are what we call “state” codes. By definition, “event” codes are scored on the basis of their frequency of occurrence and vary from one file to the other. “State” codes, on the other hand, score positive on all the sample files; only the frequency of transitions from one modality of a variable to another varies across clusters. Second, the frequency of state transitions is sensitive to the time scale most characteristic of a specific variable. We argued that some variables, like eye or head movements, can be so rapid and pervasive that their frequency of occurrence
increases dramatically with the number of files involved in a cluster, compared to less frequent and longer lasting facial actions. We think that these two factors combined are the main reason why the proportion of t-patterns with EMFACS codes decreases as cluster size increases. Considering that we decided to keep only the patterns that included at least one EMFACS code, we were able to retain, depending on the cluster, from 2% to 24% of all the t-patterns originally detected by THEME. The second selection filter required that the events in a pattern be sequenced in time rather than simultaneously occurring. This second filtering procedure did not have a dramatic impact on the proportion of t-patterns to be dropped. Indeed, 71% to 87% of the t-patterns containing a core EMFACS action showed a sequential structure. The statistical validity of the patterns found in the clusters was estimated by comparing the number of t-patterns found in the experimental datasets with the mean number of patterns found after applying the two randomization procedures implemented in THEME. Results show that for all the clusters, the difference between the number of patterns detected in the “real” data and the mean number of patterns found in the simulated data is typically large, somewhere between 9 and 1,250 standard deviations. Note that patterns longer than three units are not found in either kind of randomized data, while in the real data patterns up to length 9 are detected. Even with patterns of length 2 to 3, the difference between real and randomized data is always large enough to suggest that the patterns detected by the program cannot be explained by chance effects. In the next section, we will examine the composition of the sequential patterns containing EMFACS actions, with an emphasis on the t-patterns that stand out as most specific to each cluster.
T-pattern illustrations and comparison by clusters
In this section, we will explore to what extent the t-patterns detected by THEME are specifically related to the rating categories produced by the judges. Each t-pattern that is unique to one cluster may be considered either an element that was used by the judges to score the videos, or an element that co-occurs non-randomly with a latent variable that was not measured in this thesis and which affects the categorization process (e.g., speech content). We will also examine how additional nonverbal and paraverbal codes participate in the structure of t-patterns containing facial action units. When possible, we will discuss possible functional interpretations of the detected patterns. To do this, we will often refer to the predictions of Ekman as well as those of Scherer. Note, however, that this is not done in an attempt to test the respective predictive values of these models. Rather, in the discussion section of this thesis, we will suggest possible methodologies that could be used to empirically test the specific meaning of the t-patterns found in this thesis in a more rigorous manner.
To illustrate t-patterns that are specific to each rating cluster, we generated FACSGen simulations that depict the unfolding of expressive actions in time. For readability reasons, we will not discuss every instance of cluster-specific t-patterns. However, the exhaustive list of all t-patterns detected in the clusters, as well as their frequencies of occurrence expressed as percentages, is illustrated in the form of transition graphs available in appendix 3. Note that, by default, we added one “neutral baseline” slide before the first event in each pattern. On this baseline slide, the subject is represented with head and gaze oriented straight at the camera and no facial actions produced. Remember also that, in order to look at the interviewer, situated at a 45° angle to her right, the subject needs to move either her eyes or her head to the right. Unless an “offset” code appears in the pattern, the actions are maintained and codes accumulate on successive slides. This may or may not be an accurate description of what would visually occur, depending on whether an “offset” score presents the required characteristics to be included as an event in a t-pattern. Our discussion of the t-patterns is structured by presenting the specificities of each cluster one by one.
Enjoyment cluster
The enjoyment cluster includes 14 video samples rated high on the joyful, amused and entertained adjectives. 24 distinct t-patterns were considered in the following exploration. Out of the 70 behavior units defined in the original annotation scheme, we only analyzed the 47 actions for which adequate inter-rater agreement was reached. Out of these 47 event types, 13 (28%) appear in the composition of t-patterns in the files rated as positive. They include three upper face action units, AU1+2 (combined), AU5 and AU6, and five lower face action units: AU10, AU12, AU14, AU17 and AU20. Additional non-FACS codes include: the participant looking at the interviewer (look at), the participant looking away from the interviewer (look away), the head being turned to the side and away from the interviewer (head turned away), blinking (blink), and the participant starting to speak (speak). FACS action units that are never part of a t-pattern in sample files rated as positive include AU1, AU4, AU7, AU9, AU10U, AU12U, AU14U, AU15, AU16, AU23, AU24, AU25 and AU26. Figure 27 shows the proportion of codes initiating a t-pattern, in decreasing order. The most pervasive kind of pattern (34%) in the “enjoyment” cluster is initiated by the participant starting to look at the interviewer.
Figure 27. “Enjoyment” cluster: first codes in t-patterns (percentages: Look At 34, AU5 13, AU12 11, AU1+2 9, Look Away 9, AU6 8, Speak 6, AU14 5, AU20 5).
This kind of opening seems most characteristic of this group. Neither the “hostility” nor the “embarrassment” clusters contain patterns that start with the participant looking at the interviewer. We do find such patterns in the “surprise” and “sadness” clusters, but they are marginally represented in these groups, at 3% and 2% respectively. In the positive emotions group, the majority (76%) of actions that follow the “look at” event include either a social smile (AU12, 54%) or a Duchenne smile (AU6+12, 22%). No AU6 or AU12 is found in the other clusters where patterns start with a “look at” event.
Figure 28. Neutral baseline → Look At → AU12 → AU5.
Figure 29. T-pattern slides: neutral baseline, Look At, AU12, AU5, AU1+2, Blink.
Figure 30. Neutral baseline → AU6 → AU1+2 → AU12 → AU6 stops → Look At.
All these patterns share the characteristic of combining a social or a D-smile (a.k.a. Duchenne smile) with a bilateral eyebrow raise, an upper lid raise, or both. These two latter actions are part of the “surprise” prototype in Ekman’s terminology. Partial sequences of t-patterns including action unit 12 with either 1+2 or 5 are also found in the “hostility” (5%) and “embarrassment” (12%) clusters. However, the combination of action units 6+12 with 1+2 and/or 5 is unique to the positive emotions cluster. Moreover, in the other clusters the “look at” action is never part of these sequences. Ekman finds that happiness often blends with surprise (Ekman and Friesen, 2003, p. 107). The typical eliciting scenario in his case would be one where something unexpected occurs and its evaluation is favorable. The appraisal theory interpretation of these sequenced actions would be quite comparable: AU1+2+5 would be interpreted as signaling an evaluation of suddenness on the novelty dimension, while the AU6+12 actions could signal either an intrinsically pleasant or a goal-conducive situation. Note that in patterns 1 and 3, AU5 or AU1+2, but not both, occurs in combination with 6+12 or 12 alone. In this case the appraisal model provides no interpretation. Also, in these two patterns the predicted sequence of a) suddenness, b) intrinsic pleasantness or c) goal conduciveness would not be respected, as AU5 and 1+2 occur after, not before, AU6 or AU12 have been activated. An additional sequence, covering 6% of all the t-patterns in the PE group, includes a
"look at" action combined with an AU12. In this case, the subject starts to speak, looks at the
interviewer and finally produces a social smile (AU12). Not all the patterns including smiling
actions start with the subject looking at the interviewer. In fact, 11% of all the patterns in the
PE group are initiated with AU12. The most prevalent sequence in this case (51%) combines
action unit 12 with AU17 as illustrated in figure 31. Other less complex sequences including
AU12 the combination of 12 with 6, in a classic Duchenne smile (27%). We also find a
simple two events sequence starting and ending with 12 (22%).
Figure 31. T-pattern slides: neutral baseline, AU12, AU17, AU6, Stop AU17, Stop AU6, Stop AU12.
The enjoyment group is not devoid of action units traditionally associated with negatively valenced emotions. AU10, typically associated with “disgust” or “anger”, is found in 9% of the patterns. In the appraisal framework, AU10 is thought to signal an unpleasant evaluation of olfactory or gustatory stimuli. Recent work in the field of embodied cognition has shown that “disgust” feelings induced by unpleasant odors affect the severity of the moral judgments that research participants are asked to produce (Schnall et al., 2008). This could be an illustration of how a facial action that was selected phylogenetically to reduce the inflow of odors potentially harmful to an organism could, through the development of symbolic thinking and the acquisition of language skills, become a nonverbal emblem conveying a symbolic repulsion roughly interpretable as “this situation, action or person stinks”. Also, AU20, the main lower face element of the “fear” prototype, is observed in 5% of the patterns. In contrast to most of the “smiling” patterns, these actions are systematically preceded by the subject either gazing or turning her head away from the interviewer. In no other cluster is AU10 or AU20 directly preceded by a “look away” action. Note also that AU12 is never aligned with AU10 or AU20 in a t-pattern. AU20 is believed to reflect a “low power” appraisal on the CPM “coping” dimension. It is as if the subject did not want the interviewer to see, or think, that these displays were addressed to him.
Hostility Cluster
The hostility cluster includes 54 video samples, in which 227 t-patterns corresponding to our inclusion criteria were detected. In this group, 62% (N=29) of the annotation codes enter into the composition of at least one t-pattern. Because the raters did not make differentiated use of the “angry”, “contempt” and “disgusted” adjective scales, we could expect to find t-patterns exemplifying any of these three facial prototypes in this cluster. Figure 32 shows the event types initiating t-patterns in the “hostility” group, ranked by order of importance.
Figure 32. Hostility cluster: first codes in t-patterns (%: AU4 25, AU7 15, AU9 10, AU17 9, AU1+2 7, AU5 7, AU15 5, AU10U 4, AU14U 4, AU20 4, AU10 3, AU12 2, AU14 2, AU12A 1, AU23 1, AU24 1).
Most of the EMFACS action units are found to initiate t-patterns in this cluster. One notable exception is the Duchenne marker of “enjoyment” smiles, action unit 6 (Cheek Raiser). The most prevalent event initiating a t-pattern in this group is AU4 (25%), which lowers and draws the brows together. After AU4, the next most frequent event types starting a t-pattern are AU7 (15%) and AU9 (10%). These three actions together (4, 7 and 9) account for 50% of all the t-patterns in the videos rated as communicating some form of hostile demeanor.
According to the EMFACS taxonomy, AU4 and AU7 are constitutive elements of “anger” prototypes, while AU9 signals “disgust”. The preeminence of these event types in the composition of “hostile” patterns is therefore unsurprising. If one adds AU10 as an alternative encoding of “disgust”, AU10U and AU14U as signs of “contempt”, and AU23 and AU24 as possible elements of “anger” displays, the proportion of t-patterns starting with an event type constitutive of a facial prototype conveying some form of hostility rises to 63%.
AU4 is the first action in 25% of the patterns present in the sample files rated as conveying a hostile attitude. According to Ekman, for an AU4 to be interpreted as a clear “anger” signal, it must be associated with AU24 (the lips pressing against each other), and possibly also with AU23 (Lip Tightener), both in the lower face region (Ekman and Friesen, 2003, p. 83). If that is not the case, the expression is considered ambiguous and might take on other meanings: visual focusing efforts, concentration, determination or anger containment. In the cluster, we find no lower face action units aligned with AU4 in a t-pattern. One possible explanation is that AU23 and AU24 are, by definition, only coded when subjects are not speaking. Because we find no “pause” codes in these patterns, it is most likely that AU4 is produced while the subjects are speaking, which would preclude any association of AU4 with 23 or 24. Note also that, according to Ekman, the involvement of AU23 and 24 in anger expressions ought to be interpreted as attempts to control an impulse to say something hostile or to shout. This implies that the decoding of “angry” faces would have to rely on converging signals from additional communicative channels (e.g., speech content, modified vocal acoustics) to be disambiguated when subjects are speaking. For Scherer also, action unit 4 should not be interpreted in isolation. He provides predictions for AU4+5 and AU4+7 when they overlap (both AUs together forming an element of a sequence). The interpretation of the AU4+7 combination varies depending on its position in the appraisal sequence. When it occurs at the beginning of a pattern, it would signal low familiarity or the occurrence of an event difficult to predict (novelty appraisal check). If these actions are found later in the sequence, they could signal that the situation is being evaluated as intrinsically unpleasant or goal-obstructive. The association of AU4 with 5 is thought to convey an attitude of feeling powerful enough, and willing, to cope with a challenging situation. In our dataset, the action units associated with AU4 in a t-pattern are AU9 (63%), AU5 (50%), AU7 (41%) and AU1+2 (3%) (the proportions of independent t-patterns including these AUs are given in parentheses). For Ekman, AU9 is sufficient by itself to signal a “disgust” message. Actually, only the “disgust” and “contempt” prototypes in the EMFACS system can be defined by the
innervations of single action units. In this system, a simultaneous activation of AU4 and AU9
would be interpreted as blending some elements of anger with disgust. Our previous results
have shown that no such blended expressions could be said to be characteristic of any clusters
in particular. Nevertheless, we do find several instances of statistically valid sequences
involving both AU9 and AU4 that are unique to the “hostility” cluster. The CPM model
proposes two scenarios for such sequences. If AU4 and AU7 manifest before AU9, the
sequence would be interpreted as communicating something like: “ I see this event as a) new,
unfamiliar and or unpredictable and b) it is intrinsically unpleasant.” If AU9 is followed by
AU4+5, the message would then become: a) this is intrinsically unpleasant and b) I’m
powerful enough to deal with it, and actually, I will do something about it. In the hostility
cluster we do indeed find instances of both types of sequences. In fact 45% of all the t-
patterns starting with AU4 are compatible with one CPM predicted sequence. Next follows
some illustrations of t-patterns initiated by AU4, that are not found outside the group of
videos rated as conveying hostility. As before, the complete set of transition diagrams
illustrating all the sequences are in the appendix 3. Even though AU7 is most predominant in
the constitution of t-patterns belonging to the hostility group (23%), it is also found in the t-
patterns of videos rated as communicating embarrassment (19%) and to a much lesser extent
in the t-patterns of the surprise and sadness groups: 4% and 2%, respectively. Besides these
quantitative differences, we also find structural variations in the composition of t-patterns
with AU7 across these four clusters. In the "Hostility" group, AU7 appears in 84% of the t-
patterns where AU4 is present. By contrast, in the "Embarrassment" and "Surprise" clusters
no such association appears. We do find two short sequences in the "Sadness" cluster where
AU4 is aligned in a t-pattern with AU7, with no other additional codes. By contrast, when
AU4 and AU7 are associated in a t-pattern in the "Hostility" cluster they are always
accompanied with some additional FACS codes. For example, in the illustration below
( Hostility: T-pattern 1), AU9 is aligned in a t-pattern including both AU4 and AU7.
Figure 33. Neutral baseline → AU9 → AU7 → Stop AU7 → AU4.
This specific t-pattern of length 4 repeats itself 19 times in the hostile group. By itself,
it represents 5% of all the sequences starting with AU4. Another sequence, covering 6% of the
AU4 patterns, presents itself as follows:
Figure 34. Baseline (AU1+2) → AU4 → AU9 → Stop AU1+2 → Stop AU9.
This t-pattern represents 6% of the sequences that start with an action 4 in the hostility
cluster. Before the pattern starts, two action units are already activated: AU1+2. According to
Ekman, the combination of the frontalis and corrugator actions produces a typical fear brow.
AU1+2+4 is one of the few EMFACS combinations that is statistically more represented in
some clusters than in others. Even though this facial configuration is present in the hostility
group, it was found to be more characteristic of the sadness and embarrassment clusters.
Interestingly, the association of AU1+2 with AU4 is never aligned in a t-pattern in either the
sadness or the embarrassment cluster. This is a good illustration of how quantitative
differences detected across the clusters are not automatically reflected in the structural
composition of the t-patterns detected by THEME. The results of the structural analysis
suggest that AU1+2+4 is part of a repetitive sequence found only in video samples rated as
“hostile”, which additionally involves AU9 and where the activation of AU4 is maintained
after AU1+2 has receded. In this case, this suggests that AU1+2+4 could be seen not as a
discrete facial signal but as a momentary configuration in a sequence of intertwining events
where AU1+2 and AU4 converge at times, but nevertheless follow distinct non random
dynamic trajectories. We do find some t-patterns starting with AU4 in the "Sadness" (2%) and
"Embarrassment" (1%) clusters that are also found in the "Hostility" group. In the
“Embarrassment” group AU4 is aligned with AU5 in one t-pattern. In the sadness group AU4
is associated with AU7 in two short patterns. But in both those cases, no other EMFACS
codes are associated with these sequences. Additionally, neither in the "Positive emotions" nor
in the "Surprise" cluster does the program detect any t-patterns involving AU4. The inner and
outer brow raise action (AU1+2) in the hostility cluster initiates 7% of all the t-patterns. All of
them are directly followed by an action unit 5. According to Ekman, the association of
AU1+2 with AU5 is a combination constitutive of both the “fear” and “surprise” prototypical
expressions. Scherer associates AU1+2+5 with possibly two appraisal dimensions: “novelty”
and "goal significance". When these action units occur together, they are interpreted either as
a reaction to a sudden change in the person’s external or internal environment (novelty); or
alternatively as the expression of a perceived discrepancy between what is happening and
what was expected (goal significance). In 51% of the t-patterns starting with the sequence
AU1+2+5, the next event in the sequence is an AU12. T-pattern 3 is a good illustration of
such a case. By itself it represents 16% of the patterns starting with the following sequence:
a) AU1+2, b) AU5, c) AU12.
Figure 35. Neutral baseline → AU1+2 → AU5 → AU12 → AU20 → Stop AU20 → Stop AU5.
Interestingly, none of the facial actions that are most typically associated with
expressions of disgust, contempt or anger are included in the composition of this pattern
identified in sample files rated as conveying some form of hostile attitude. AU5 could be an
element of an anger prototype, if it were associated with AU4, AU7 or both, which is not the
case here. Ekman’s interpretative framework would lean towards an explanation involving
some element of surprise. To this initial sequence a smiling action (AU12) is added, directly
followed by a lip stretch (AU20). The AU20 is seen as an element of a “fear” expression in
Ekman’s dictionary and Scherer interprets its message value as conveying something like: “I
have little or no power”. Ekman would probably argue that the smile should not be seen as a
true signal of enjoyment, for lack of AU6 involvement, but rather as a deceiving attempt to
mask a “fear” expression. In this case both frameworks would seemingly fail to explain why
the sequence is perceived as communicating hostility. What strikes us in this sequence is the
fact that the frontalis action is maintained throughout the sequence. According to Ekman,
surprise is the briefest emotion, and longer displays of surprise are seen by him as reliable
cues to an “unfelt emotion” (Ekman, 2003, p.165). In this case the sequence would be
interpreted as a voluntary emblem communicating attitudes related to surprise, like disbelief
and amazement. The CPM model proposes an alternative explanation for the longer-than-expected
duration of the AU1+2+5 activation. Once the initial processing of a
novel stimulus ends, the goal relevance appraisal starts. Interestingly, the facial sign
predicted for an event evaluated as "relevant" is precisely the maintained innervation of
action units 1+2+5. In this case the sequence would communicate something like: "this is new
(AU1+2+5), it concerns me (AU1+2+5), but I don’t think I have any control on the situation
(AU20)”. But again, both frameworks tend to explain the lip stretching action more as a sign
of submission than dominance as would seem to be the case when someone is expressing
hostile intentions. A last perspective could be that the lip stretch action is not a sign of basic
emotion or low power appraisal, but rather a smile control action. Indeed, AU20 occurring
with AU12 has been listed by Keltner (1995) as a smile control action. This differs
radically from Ekman's notion of masking smiles in that what is being obscured or
counteracted here is not the expression of fear but the smile itself. The fact that AU12 starts
prior to AU20 in this sequence tends to lend credit to this view, because emotion-deamplifying
actions should follow, not precede, the display of affect. We think that this is an example of
how information about the sequential unfolding of facial displays may help decide between
several possible interpretations when simple information about overlapping actions may not. When
AU1+2+5 is not followed by AU12, it is systematically followed by the offset of
AU1+2 (49%). In the majority of these cases (71%) the sequence runs as follows:
Figure 36. Neutral baseline → AU1+2 → AU5 → Stop AU1+2 → AU4 → Stop AU4.
This sequence covers 35% of the t-patterns that start with action units 1 and 2 in the
hostility cluster. It is a good illustration of the temporal structuring of communicative signals
that proponents of componential models are interested in. If one were to isolate slide 3 from
the rest of the sequence, the facial configuration on the slide would correspond to a "surprise"
prototype (Ekman) or the reaction to a novel and sudden change in the environment (Scherer).
When taking into account the way that facial actions further unfold after this first display, we
get additional information about the seemingly unpleasant character of that change, signaled
with the AU4 brow action paired with the upper eyelid raise action (AU5). At no time in the
sequence are both signals overlapping. This implies that the sequence seen as a whole yields
more detailed information about the message value of the displays than the separate elements
that constitute it.
Figure 37. Neutral baseline → AU14 → AU23 → Stop AU14 → Stop AU23.
In the hostility cluster, 2% of the sequences are initiated by an action unit 14. More
significantly, this action appears in the composition of 12% of the patterns in this group.
AU14, or "dimpler" as it is commonly called because of the dimple-like wrinkle it produces
beyond the lip corner, involves a bilateral tightening and slight raising of the lip corner. In
contrast, AU14 is not found in the composition of t-patterns extracted in the embarrassment,
surprise and sadness clusters. However, we do find it in 8% of the t-patterns detected in video
files rated as positive. The most commonly accepted emotional meaning derived from this
action is thought to be an attitude of scorn or contempt. The empirical evidence in favor of the
universality of a "contempt" expression relies on 15 articles reporting the results of 26 rating
studies that investigated the impression produced by a unilateral version of AU14,
sometimes with AU12, head and eyes centered (Alvarado & Jameson, 1996; Biehl et al.,
1997; Ekman & Friesen, 1986; Ekman & Heider, 1988; Ekman, O'Sullivan, & Matsumoto,
1991; Frijda & Tcherkassof, 1997; Matsumoto, 1992; Ricci-Bitti et al., 1989; Rozin, Lowery,
Imada, & Haidt, 1999; Russell, 1991a; Wagner, 2000; Yrizarry, Matsumoto, & Wilson-Cohn,
1998; Matsumoto, 2005). A second version of this expression is similar but includes a head
tilt and/or eyes to the side (Haidt & Keltner, 1999; Rosenberg & Ekman, 1995). Nevertheless,
according to the EMFACS dictionary a symmetric AU14 can also be interpreted as a
"contempt" display under certain conditions: "The onset of the symmetrical AU14 ought to be
immediately preceded or accompanied by an upward rolling of the eyes. Without pausing, the
eyes will move up and to the side and come back down in one continuous motion. Another
variation is when the onset of AU14 is immediately preceded or accompanied by a movement
of the eyes or of the head and eyes to look at the other person in the conversation" (see
EMFACS-8 manual; Ekman, Irwin and Rosenberg, 1994). Finally, the combination of a
bilateral eyebrow raise (AU1+2) with a symmetric AU14 is also suggested in the same text as
a possible variant of a contempt expression. As is illustrated in t-pattern 5, which corresponds
to 20% of the sequences starting with AU14, none of the EMFACS requirements are met to
interpret this action as potentially conveying a scornful attitude. However, we do find a t-
pattern that closely matches the second version studied in previous work. In t-pattern 6, which
represents another 20% of the sequences starting with AU14, the participant does tilt her head
to the side after producing a unilateral 14. While the head is tilting, the dimple action stops and
the head finally moves away from the interviewer.
Figure 38. Neutral baseline → AU14U → Head Tilting Side → Stop AU14U → Head Turned Away.
By contrast, the innervations of AU14 found in the patterns of sample files rated as
positive are never unilateral and do not involve additional head or gaze actions. The meaning
of AU14 according to the CPM framework is linked to the appraisal of the implications of a
situation on the dimension of social normative standards or personal axiological values. If the
self is evaluated as having failed to comply with some personal or social standards, the model
predicts a symmetric 14 with either a look down action, a partial closing of the upper eyelid
and/or a head tilt to the side. If the object of the evaluation is not the self but someone else,
then the model predicts that the AU14 action ought to be unilateral and that the sequence would
not involve the head and gaze patterns described above. Our data seem to suggest that,
contrary to the CPM prediction, the pairing of AU14 with a head tilting to the side action is
According to this view, there should be no reason why these types of t-patterns should be
restricted to either negatively or positively valenced displays.
Embarrassment Cluster
The embarrassment cluster is composed of 51 video samples, 25% of all the records.
According to our defined criteria, 101 t-patterns were considered for exploration. In video
samples rated as conveying embarrassment and/or nervousness, 22 distinct event types (47%)
enter into the combination of t-patterns. Among these, 13 are facial actions (59%). The other
codes are distributed as follows: three "Gaze" (14%), four "Head" (18%) and two "Voice
and Speech" (9%) event types. Facial actions that are never found in this group are: AU1,
AU6, AU14, AU14U, AU16, AU24 and AU25. The most predominant action initiating t-
patterns in this cluster is AU12 (20%). The full list of initiating events is given in figure 39.
Figure 39. "Embarrassment" Cluster. First codes in T-Patterns (%). [Bar chart; codes in decreasing order: AU12, AU15, AU7, AU12U, Look Away, Lower Head, Look Down, AU10, AU17, Eyelids Droop, AU5, AU1+2, AU4, AU9, AU20, AU23.]
The proportion of independent t-patterns containing an AU12 (38%) is second only to that of the
positive emotions cluster (56%). One head movement that is most characteristic of the
embarrassment t-patterns is the lowering of the head on its vertical axis. It is present in the
composition of 21% of these patterns. Moreover, it is the initiating event of 7% of the t-
patterns in this group. By contrast, the "lower head" action never initiates patterns in other
groups. Although it is also present to a much lesser extent in the composition of patterns
found in the hostility cluster (4%), this action does not reach the 1% threshold for the
enjoyment, surprise or sadness clusters. Figure 40 illustrates a typical embarrassment
sequence aligning a head lowering action with AU12. In 11% of the t-patterns that start with
the participant lowering the head, this action is followed by a smiling action, a relaxing and partial
closing of the upper eyelid and finally an eyebrow raise and upper lid raise action. Codes
indicating the partial closing of the upper eyelids ("eyelids droop"), though present in four out of
five clusters, are most prevalent in both the sadness (40%) and embarrassment (23%)
clusters. Proportions for this action in the other groups are as follows: enjoyment (0%), hostility
(3%), surprise (19%).
Figure 40. Neutral baseline → Lower Head → AU12 → Eyelids Droop → AU1+2 → AU5.
We find that the embarrassment cluster is the only one where patterns containing a
unilateral version of AU12 are detected (see figure 12). This action, which is present in the
combination of 15% of the embarrassment patterns, is, like AU14U, associated with the
prototypical display of "contempt" in EMFACS and with the communication of a sense of moral
disapproval of someone else's actions or the appraisal of an event as unfair according to the
CPM model (see table 1, p. 24). Considering these theoretical propositions, we would have
expected to find a predominance of t-patterns with both AU12U and AU14U in video samples
rated as conveying hostility. Instead, AU12U is not found in t-patterns rated as hostile and,
conversely, AU14U never enters into the combination of t-patterns found in the embarrassment
cluster. The only other group outside the hostility cluster in which we do find instances of
AU14U being part of a t-pattern is sadness (2%). Incidentally, both the hostility and sadness
clusters are likely to reflect appraisals where another person is looked down upon (hostility)
or where a situation is judged as unfair (e.g., the loss of a loved one), whereas embarrassment is
more likely to involve some self-deprecating evaluations (remember that a majority of
records classified as conveying embarrassment were extracted from guilt narratives). If this
were the case, it could be that AU14U reflects an appraisal of low compliance with standards
where the agent is not the self, whereas when AU12U is activated the same appraisal would be
oriented towards the self as a causal agent. One possible counter-argument could be that it is
not AUs 14U and 12U that are the essential signals in these t-patterns causing participants
to classify these records the way they did. When comparing the composition of t-patterns
involving AU14U in the hostility and sadness clusters with that of t-patterns involving
AU12U in the embarrassment group, one discovers that a head lowering action is involved in
the embarrassment group whereas it is absent in the sadness and hostility patterns.
Figure 41. Baseline (Look At) → Lower Head → Look Away → AU12U → Head Turned Away.
Admittedly, our attempt to suggest a possible explanation for this selective
dissociation between AU12U and AU14U in our dataset remains highly speculative at this
time and would need further experimentation to be confirmed. A position of the head that is
relatively more frequent in the embarrassment t-patterns (12%) than in the other groups is the
participant's head being turned away from the interviewer: enjoyment (4%), hostility (4%),
surprise (0%) and sadness (6%).
The sequence illustrated in figure 42 starts with the participant pulling the corners of
the lips down; then she looks downwards and finally lowers the head on its vertical axis. This
pattern represents 16% of the sequences starting with AU15 found in the embarrassment
cluster. A participant gazing downwards is a relatively dominant response in both the
embarrassment (28%) and the sadness (39%) t-patterns. Note also that this action is never
found in patterns identified in the enjoyment and hostility clusters. It is present to a lesser
extent in the composition of surprise (7%) t-patterns. Interestingly, an almost identical
sequence to the one depicted in figure 42 is repeatedly found in the sadness cluster. The only
difference is the final lowering of the head in the embarrassment patterns, which is absent
in the sequences found in the sadness cluster.
Figure 42. Neutral baseline → AU15 → Look Down → Lower Head.
This suggests that non-facial actions, like head lowering, participating in the unfolding
of dynamic displays may actively contribute to the modulation of the perceived meaning
attributed to AU15. The pattern depicted in figure 43 is characteristic of 24% of the t-patterns
starting with AU15 in the embarrassment cluster. Here AU15 is directly followed by a lip corner pull
action (AU12) that counteracts the downwards pulling of AU15, producing the appearance of
a flattened smile often referred to as a "miserable smile" (Ekman & Friesen, 1982). The
sequence continues with a rising of the upper eyelid (AU5) that rapidly recedes at the end of
the pattern.
Figure 43. Neutral baseline → AU15 → AU12 → AU5 → Stop AU5.
Comparatively, in the hostility cluster AU12 is never aligned in a t-pattern when the
sequence is initiated by action unit 15. The enjoyment and surprise clusters do not show any
involvement of AU15 in their t-patterns. Even though the patterns found in the sadness cluster
involve both AU12 and AU15, they never occur together in a sequence. As the pattern is
initiated by AU15, predicted to be an expressive element of sadness (Ekman & Friesen, 2003)
or the reflection of evaluating oneself as powerless (Wehrle et al., 2000), the subsequent
involvement of AU12 might be seen as an attempt by the participant to regulate her feelings
("let's cheer up") or to neutralize her display of distress with a masking smile. In fact, both these
functional hypotheses might be simultaneously valid. The alternative consisting of
putting more emphasis on either the interpersonal or the intrapersonal interpretation of such a
regulation episode (assuming it has this function) depends more on the perspective of the
observer than on what is being expressed. Here it seems that the judges participating in our
study have favored an interpretation of this pattern as reflecting an attempt to conceal one's
feelings, possibly leading to an interpretation of the person's attitude as conveying unease.
This could explain the presence of this t-pattern in the embarrassment cluster. The upper
eyelid raise in this sequence (AU5) is a facial action traditionally associated with either the
expression of anger (when combined with AU4 or AU7) or surprise (when combined with
AU1+2+25/26); it has also been described as an emblematic facial gesture conveying a sense
of uncertain or questioning surprise (Ekman & Friesen, 2003, p. 43). The words that would go
with it might be something like: "oh really?" or "is that so?". The sequence illustrated in
figure 16 is another example of an alignment of AU12 and AU15 in a t-pattern. It represents
23% of all the sequences initiated by AU5 in the embarrassment group. All the event types
themselves from smiling but also when attempting to conceal manifestations of distress (e.g.,
AU15). Observers might infer from such sequences that the individual is ill at ease in letting
her distress be shown on the face for too long.
Surprise Cluster
The surprise cluster is composed of 26 sample files, 13% of the videos in the core set.
According to our defined criteria, 27 t-patterns could be included in this exploration. This cluster
is mainly defined by two adjectives that do not share the same status with regard to their
prototypicality as emotion terms. As we have mentioned before, "surprise" holds a particular
status amongst emotional labels because of its apparent lack of hedonic valence. Nevertheless,
as mentioned before, the Niedenthal study (2003) shows that the perceived intensity of a
subjective feeling, rather than its hedonic valence, constitutes the best predictor of
prototypicality for being included in the French category "émotions". On this intensity
dimension, "surprise" is a term that gets mean rating scores (µ = 6.11) close to those of other
"basic" emotion terms like "colère" (µ = 6.96) or "joie" (µ = 6.94), whereas "perplexe"
does not (µ = 4.04). This group gathers 26 video samples; it is the second least populated group,
right after enjoyment (N = 14). In the surprise cluster, 15 different
event types (32%) enter into the composition of t-patterns. There are seven action units out of the
21 possible codes (33%): three upper face actions (AU1+2, AU5, AU7) and four lower face actions
(AU17, AU24, AU25, AU26). Head codes include a vertical upwards movement of the head
and an orientation of the head towards the interviewer. All gaze orientation and eye
movement codes are present, with the exception of the "look up" action. The "pause" code,
referring to speech breaks in the participant's narrative flow, is also an important element,
present in 30% of these patterns, half of which are initiated by this action. Facial actions that
are never found in this group are: AU1, AU4, AU6, AU9, AU10, AU10U, AU12, AU12U,
AU14, AU14U, AU15, AU16, AU20 and AU23. The most predominant action initiating t-
patterns in this cluster is AU17 (20%). The full list of initiating events is given in figure 45.
Figure 45. "Surprise" Cluster. First codes in T-Patterns (%). [Bar chart; codes in decreasing order: AU17, Head On, AU5, Pause, AU1+2, Blink, Look At, AU7.]
The first t-pattern we will present from this cluster accounts for 11% of the sequences
starting with AU17 (see figure 46). The action of the mentalis muscle raises the chin boss,
pushing the lower lip upward and causing the mouth to take on an inverted U shape. This is the
only facial action unit in this pattern. It is followed by an orientation of the gaze first towards
and then away from the interviewer; the sequence ends with AU17 receding while the
participant is still avoiding eye contact.
Figure 46. Neutral baseline → AU17 → Look At → Look Away → Stop AU17.
AU17 is in no way specific to the surprise cluster; it is found in the composition
of t-patterns from all clusters: enjoyment (12%), hostility (25%), embarrassment (23%) and
sadness (22%). It is the lack of association of AU17 with other core EMFACS action units
that makes this pattern specific to the surprise group. In sadness, AU17 is associated with AU15; in
enjoyment it is aligned in t-patterns with either AU6, AU12 or AU14; in hostility, AU17 is always found in
t-patterns involving at least one of the following codes: AU1+2, AU7, AU14, AU15, AU23 or
AU24; and in embarrassment: AU1+2, AU5, AU7, AU10, AU12U or AU15. It seems that it
is only when AU17 is not qualified by other action units in a t-pattern that it takes on the
dominant perplexed and/or surprised message value. Figure 47 is a variant of the preceding
sequence. By itself, it accounts for 6% of the patterns starting with AU17 in the cluster. The
opening of the sequence is similar to the last one, but here, instead of an active gaze movement
away from the interviewer, the participant relaxes the upper eyelid, which droops down, partially
closing the eye aperture.
Figure 47. Neutral baseline → AU17 → Look At → Eyelids Droop → Stop AU17.
In both these patterns the chin action starts before the gaze orientation shifts towards
the interviewer. In figure 46 the "look at" action, rather than being maintained, is directly
followed by a "look away" action. By contrast, in figure 47 the eyes are not directed away
from the interviewer, but the upper eyelid droop still limits the possibility of direct eye contact
between the interacting partners. It is as if the participant's looking at her partner was not
intended to establish mutual eye contact but possibly to scan the reaction of the partner to her
own previously manifested attitude of disbelief. In this cluster we find two distinct groups of
patterns that can be differentiated on the basis of a) those that involve AU17, generally with
some form of contact avoidance action (gaze and/or head looking or turning away), and b)
those where AU1+2 and/or AU5, sometimes with AU25 and AU26, are constitutive elements
of the patterns and where no gaze or head avoidance actions are found. Ekman's predictions
for prototypical surprise expressions do involve a combination of AU1+2+5 with lips opening
(AU25) and jaw dropping actions (AU26), whereas AU17 associated with AU1+2, possibly
also with a shoulder shrug, is interpreted as an emblem: a facial shrug conveying a sense of
disbelief or bewilderment. Given the different semantic emphasis of the two dominant
adjectives in this cluster, cognitive/reflexive for perplexity and emotional for surprise, it is
or by an AU4 or AU20 for the remaining 49% (see transition graph p. 207). This is never the
case in the surprise cluster (see transition graph p. 234). Still, in the hostility group, when the
initiating event is AU5, we do find one single instance of a pattern with AU1+2+5, but it
contains none of the other actions present in the surprise cluster. The other t-patterns initiated
by AU5 and containing AU1+2 that are perceived as conveying hostility all contain at least
one of the actions AU4, AU9 or AU20, usually associated with negatively valenced affects.
This contrasts with the surprise group, where these three actions are never found in the
constitution of the t-patterns. Finally, in the sadness cluster, t-patterns initiated by AU1+2 and
containing AU5 are aligned in a sequence with a look down action in 60% of the cases. In the
remaining patterns the sequence is a simple AU1+2+5 combination with no other qualifying
actions. The sequence depicted in figure 49 corresponds to a classical prototype of a surprise
display as defined by the EMFACS dictionary. It starts with an orientation of the head
towards the interviewer, followed by a bilateral raise of the brows and an upper lid raise. At this
point, the participant's lips part and the jaw starts to relax, causing the mouth to open. This
specific sequence is found in 27% of the cases when a participant has her head centered on the
partner.
Figure 49. Neutral baseline → Head On → AU1+2 → AU5 → AU25 → AU26.
The last illustration for the surprise cluster is depicted in figure 50. It is initiated by the
participant ceasing to speak (pause), followed by an upper eyelid raise (AU5) and an
opening of the mouth (AU26). This specific sequence covers 39% of all t-patterns initiated
with a pause in the surprise group. The pausing action is most characteristic of both the
surprise (31%) and the sadness (30%) t-patterns. It is altogether absent in t-patterns from the other
clusters, with the exception of embarrassment (2%).
Figure 50. Neutral baseline → Pause → AU25 → AU5 → AU26.
We do find that sequences appearing in video samples rated as conveying surprise
often combine AU1+2 with AU5 in a sequence, often with AU25 and/or AU26. This is in
accordance with Ekman and Friesen's predictions for a facial prototype of surprise.
Nevertheless, this combination of action units is not specific to the surprise cluster; it is also
found to be predominant in the hostility and enjoyment clusters. In fact, quantitative
comparison of cluster means with Kruskal-Wallis tests failed to show significant quantitative
differences for these combinations across the clusters. Only t-pattern analysis was able to
demonstrate specific structural differences across the clusters in the way these actions
combine in time with other event types. Overall, the comparative analysis of the patterns
containing an AU1+2+5 sequence across the ratings provides evidence that these two actions
together constitute but one element in larger multichannel communicative structures that,
when considered in their complete forms, seem to affect how participants perceive the
meaning of the combination of AU1+2 with AU5.
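For readers wishing to reproduce this type of quantitative comparison, the sketch below illustrates how per-cluster frequencies of an action unit combination could be submitted to a Kruskal-Wallis test in Python with scipy. It is a minimal sketch only: the cluster labels follow our five groups, but the frequency values and variable names are hypothetical placeholders, not our data.

    # Minimal sketch: comparing per-video frequencies of an AU combination
    # (e.g., AU1+2+5) across the five rating clusters with a Kruskal-Wallis
    # test. The frequency values below are hypothetical placeholders.
    from scipy.stats import kruskal

    clusters = {
        "enjoyment":     [0, 1, 2, 0, 1],
        "hostility":     [1, 2, 1, 3, 0],
        "embarrassment": [0, 0, 1, 1, 2],
        "surprise":      [2, 1, 3, 2, 1],
        "sadness":       [0, 1, 0, 1, 1],
    }

    h_stat, p_value = kruskal(*clusters.values())
    print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
    # A non-significant p mirrors the finding that raw frequencies of the
    # combination do not differentiate the clusters, even when their
    # temporal structuring does.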
Sadness Cluster
The sadness cluster includes 27% (N = 55) of the 200 core set video records. It is
composed of sample files rated high on the sadness and disappointment adjectives. Out of
the 47 event types in the annotation scheme, 21 (45%) appear in the composition of t-patterns
that have been rated as conveying sadness/disappointment. These include four upper face
action units: AU1+2 found in 22% of the patterns, AU4 (3%), AU5 (13%) and AU7 (2%).
Seven lower face actions: AU12 (3%), AU15 (42%), AU17 (18%), AU23 (3%), AU24 (3%),
AU25 (7%) and AU26 (1%). Facial action units that are not found in the composition of
sadness t-patterns include: AU1, AU9, AU10, AU10U, AU12U, AU14, AU14U, AU16 and
AU20. Additional event types entering the composition of the repetitive sequences are all the
"gaze" codes, with the exception of the "look up" action, and with a predominance inside and across
the clusters of three specific codes: "look away" (52%), "eyelids droop" (40%) and "look
down" (39%). Head positions/actions involve the participant's head oriented either towards or
away from the interviewer in similar proportions (6%). A small percentage of patterns include
a movement of the head tilting to the side (1%). Speech codes are exclusively related to the
speech flow of the participants, indicating either a "pause" (31%) or a "start" (6%) of speech
utterances. The full list of event types initiating t-patterns in this group is given in figure 51.
Figure 51. "Sadness" Cluster. First codes in T-Patterns (%). [Bar chart; codes in decreasing order: Pause, Blink, AU15, AU1+2, AU17, AU5, Look Down, AU25, Head On, Look Away, AU12, Look At, AU4, Eyelids Droop, Speak, AU14U, Head Turned Away, AU23, AU24.]
The first thing we would like to highlight about this cluster is the unexpected lack of
involvement of AU1 in the patterns. AU1 is considered by Ekman and Friesen (2003, p. 117)
as a defining characteristic of a "sad" brow. This action raises the inner corners of the
eyebrows; this differs from the raising of the whole brow in AU1+2.
Arguably, the overall frequency of occurrence of AU1 is the lowest amongst the core EMFACS
action units in the database as a whole (N = 39), accounting for only 1% of all the
action units scored in the video samples. Nevertheless, the ANOVA test confirmed a
statistically significant higher involvement of AU1 in the surprise cluster compared to the
embarrassment cluster [F(4,195) = 2.3; p = 0.0500]. Also, the test values show that AU1 is
the most characteristic facial action in the sadness cluster (see table 22, p. 73). There are two
possible explanations for this lack of association of AU1 with other codes in the t-patterns
detected by THEME, not only in the sadness cluster but in all the rating groups. First,
the relatively low frequency of the action unit may have rendered it impossible for AU1 to be
included in a t-pattern because of the demanding 15-repetition criterion that we set for
retaining patterns in clusters. In this case, reducing the number of repetitions for selecting
patterns could reveal sequenced structures containing AU1. A second possibility is that AU1
may have been frequent enough to be theoretically included in a pattern but wasn't, because its
positioning and/or temporal distances to other event types in the continuous stream of
behaviors are too unpredictable for it to be considered part of a pattern. While we suggest these two
alternatives, we will not attempt to bring a definitive answer to this specific issue, for lack of
time. Note also that, contrary to the basic emotions predictions, the CPM model offers no
interpretation for an AU1 not occurring simultaneously with an AU2. Even though AU1
cannot be shown to be characteristic of t-patterns found in the sadness cluster, several sequences
are indeed unique to this group. We find that the t-patterns detected in the sadness group
differ from those of the enjoyment, hostility and embarrassment clusters in their comparatively
high involvement of the "pause" event (31% of all patterns). Figure 52 depicts a t-pattern
initiated by the participant ceasing to speak. It is followed by an AU15 (lip corner
depressor). Then the upper eyelids relax, covering part of the eye aperture. At this time the
gaze is oriented to the left side, away from the interviewer, and finally downwards. This
specific sequence by itself represents 7% of all the t-patterns detected in the cluster and 31%
of those initiated by a "pause" action. In this pattern both the mouth and partly the eye
regions show the predicted appearance of sadness according to Ekman and Friesen (2003, p.
121). For the mouth region it is the corners of the lips down (AU15). Note that, according to
EMFACS, if AU15 is not accompanied by additional specific actions in the brow/forehead
and/or the eye regions, its meaning cannot be reliably interpreted as emotional. As we just
discussed, the suggested brow/forehead action is unit 1, which is absent from our patterns. For
the eye region these authors suggest two possible defining characteristics of a sad demeanor:
AU7, which raises the lower lid, and AU43 ("eyelids droop" in our system), which slightly casts down
the eyes. These two actions, together or in alternation, are seen as increasing the sad composure
of the display.
Figure 52. Neutral baseline (Pause) → AU15 → Eyelids Droop → Look Away → Look Down.
As for the CPM model, AU15 is hypothesized to communicate a sense of low power
(Wehrle, 2003). In keeping with this interpretation, the model could possibly predict a higher
involvement of this action in groups of video samples rating high on submissive emotions like
embarrassment and sadness. In fact, as with other action units discussed in this section, AU15
is indeed involved most frequently in the sadness (42%) and, to a lesser extent, in the
embarrassment (17%) group, while it is totally absent in the enjoyment and surprise t-patterns.
Nevertheless, it is also found in a substantial proportion (15%) of the patterns detected in the
hostility record files. These are less likely to include appraisals of low power than
embarrassment and sadness. The structural compositions of t-patterns with AU15 differ across
the sadness, hostility and embarrassment clusters as follows. In the sadness group, AU15 is
associated with the participants pausing and looking away to the side, which is not the case in
the other groups. Even though the eyes looking down is an action aligned with AU15 in the t-
patterns detected in both the sadness and embarrassment clusters, this association is absent in
the hostility cluster. Two head actions combined with AU15 are unique to the embarrassment
group, namely a vertical lowering of the head and a head tilting to the side. As for the
combination of AU15 with other facial actions, a few notable distinctions emerge. In contrast
with the embarrassment and hostility clusters, AU5 (upper eyelid raise) is never aligned in t-
patterns rated as conveying sadness. The embarrassment and hostility clusters do show
combinations of AU15 with AU9 that are not found in sadness. Moreover, the hostility and
embarrassment clusters show specific patterns of associations between AU15 and other AUs
that are not detected in the other clusters. In the hostility group, AU15 is uniquely linked with
AU14, AU23 and AU24. As for the embarrassment group, unique associations are detected
between AU15, AU10 and AU12U. These observations clearly show that AU15 is distinctly
aligned with other facial and additional nonverbal actions in t-patterns depending on the
cluster considered. Figure 53 provides another illustration of a sequence uniquely found in the
sadness group. It represents 3% of the sequences starting with a blink. The pattern formally
starts with a blinking action, which is found to initiate t-patterns in 17% of the cases. After
blinking, an AU17 starts; then the eyes move away from the partner and to the left side. At this
time AU15 is initiated while AU17 is maintained. The following actions involve the
participant starting to look down, and finally AU17 recedes. This specific sequence, while
being unique to the sadness group, involves combinations of event types mostly common to t-
patterns with an action unit 15 across the clusters. The only exception is the "look away"
action, which, as we mentioned above, is characteristic of the sadness patterns that involve
AU15. But apart from the constitutive elements in the pattern, the order in which they unfold
is also unique to this cluster. For example, 2% of the hostility t-patterns do involve a blinking
action, but it never opens a sequence as is the case here. Moreover, in the hostility group,
whenever AU17 precedes AU15 the downwards movement of the lip corners always recedes
before the chin raise action stops. Here it is clearly the contrary that happens.
Figure 53. Neutral baseline → Blink → AU17 → Look Away → AU15 → Look Down → Stop AU17.
Figure 54 depicts a more frequent variant of a comparable sequence. By itself it covers
51% of the patterns starting with a blink. Additionally, an AU1+2 and an eyelids droop insert
themselves into the pattern. After the participant starts blinking, an AU1+2 is activated. The
upper eyelids do not go back up all the way, leaving the eyes partly closed.
Finally, the sequence ends with the participant first looking down and then to the side.
Figure 54. Baseline (Look At) → Blink → AU1+2 → Eyelids Droop → Look Down → Look Away.
The next illustration (figure 55) depicts a sequence that does not contain any action
units predicted to convey the impression of sadness by either the EMFACS dictionary or the
CPM prediction tables. It is a simple sequence that involves a smiling action aligned in a t-
pattern with a lateral head movement to the side ending with the head oriented towards the
social partner. The sequence ends with the offset of the smile. The whole sequence accounts
for 20% of the patterns initiated by AU12 in the sadness group.
Figure 55. Neutral baseline → AU12 → Head Tilting Side → Head On → Stop AU12.
The AU12 opens the sequence with face and head centered on the camera, and the sequence
continues with the head tilting to the side. As mentioned before, previous work suggests that a
head tilt could be perceived as an element of a submissive or seductive communicative
structure (Otta et al., 1994; Bänninger-Huber & Rauber-Kaiser, 1989). The fact that the head
is being oriented towards the interviewer accentuates the feeling that the display is socially
addressed. In the context of communicating sadness, this behavioral sequence might be seen as
an example of what Bänninger-Huber (1992) describes as a "fishing for resonance" episode,
meaning that the participant is attempting to induce an empathic
reaction in the listener about the possibly difficult situation she finds or found herself in. The last
t-pattern extracted from the sadness cluster that we will illustrate is depicted in figure 56. This
sequence is characteristic of 65% of the patterns initiated with an AU14U in this group.
Figure 56. Neutral baseline → AU14U → Look Down → AU15.
The sequence starts with the participant unilaterally tightening the corner of the mouth,
creating a bulge beyond the lip corner (AU14U). This action is followed by a downwards
movement of the eyes; then the sequence ends with action 15 pulling the corners of the mouth
downwards. As was mentioned in the discussion of the t-patterns extracted from the hostility
cluster, AU14U is only found in the composition of t-patterns from the sadness and hostility
groups. The message value associated with this facial action is one of contempt or scorn
according to the EMFACS taxonomy. For Wagner (2000), three defining characteristics
contribute to the definition of contempt: "it is interpersonal, involves a feeling of superiority,
and views the other person negatively". If the nonverbal display communicating this notion
of contempt were indeed AU14U, one would have to conclude that in our case the action was
not taken into account by the raters when deciding how to classify the video files
containing the pattern.
containing the pattern. An alternative explanation could be, as the CPM model suggests, that
the action is not communicating anything about an emotional category per se but rather that
an event or the conduct of an agent other than the self is being appraised as violating a sense
of fairness or justice. For example, a fictional subtitle to this sequence, according to the
appraisal model, could run something like this: "why did he die so young, it's so unfair
(AU14U); it leaves me feeling so powerless (AU15)".
Summary of findings
Using the t-pattern detection algorithm developed by Magnusson (2000), we were able
to detect several sequential patterns of nonverbal signals that were specific to video samples
distributed in five groups constituted on the basis of a prior judgment study. These groups are
characterized by distinct rating profiles reflecting the perception of: enjoyment, hostility,
embarrassment, surprise and sadness. The validity of the detected t-patterns has been tested
by comparing the number and the complexity (number of event types) of patterns found in the
experimental datasets with those found in the same datasets after applying two distinct
randomization methods. The results of these comparisons support the conclusion that the
patterns found in the five emotion groups were not due to chance effects. We have provided
illustrations of t-patterns that were specific to each emotion group and that could possibly
play a role in how participants decided to classify the video samples. For example, in the
enjoyment group the t-patterns detected differed from those found in the other groups by the
frequent alignment of AU6 with AU12 in a sequential pattern. Even though the association of
6 with 12 was already reported in the results of the frequency analysis of single and
overlapping action units, characteristics unique to this combination were also found. What the
t-pattern approach uniquely shows is that these action units are most often initiated by a
participant's gesture of looking at the interviewer, possibly indicating an intention to share a
positive affect. Interestingly, in other clusters this "looking at" action is rarely included in t-
patterns, and when it is the case it is never followed by either a "social" (AU12) or a "Duchenne"
(AU6+12) smile. Overall, our results support the notion that facial actions are part of time-sequenced
communicative structures that include additional nonverbal and para-verbal
signals. These communicative patterns present characteristic features (in terms of event type
combination and order) that are specific to the emotion clusters in which they appear. This
suggests that these multimodal dynamic patterns may play a role in the formation of
impressions concerning what emotion is being experienced by the individual who performs
them.
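To give a concrete sense of the logic behind this detection and validation procedure, the following sketch illustrates in Python the core question that t-pattern detection asks of a pair of event types: does B follow A within a critical interval more often than expected by chance? It is a deliberately simplified illustration, assuming a single fixed interval and a uniform redistribution of onsets as the randomization scheme; the actual THEME algorithm searches over candidate intervals and builds longer patterns hierarchically, and the onset times shown here are hypothetical.

    # Simplified sketch of the t-pattern logic (after Magnusson, 2000):
    # test whether event type B follows event type A within [d1, d2]
    # more often than chance, using onset shuffling for validation.
    import random

    def count_followers(a_times, b_times, d1, d2):
        # Count A onsets followed by at least one B within [d1, d2]
        return sum(any(d1 <= b - a <= d2 for b in b_times) for a in a_times)

    def shuffle_test(a_times, b_times, d1, d2, t_max, n_shuffles=1000, seed=0):
        # Compare the observed count against counts obtained after
        # redistributing the B onsets uniformly over [0, t_max]
        rng = random.Random(seed)
        observed = count_followers(a_times, b_times, d1, d2)
        exceed = 0
        for _ in range(n_shuffles):
            fake_b = sorted(rng.uniform(0, t_max) for _ in b_times)
            if count_followers(a_times, fake_b, d1, d2) >= observed:
                exceed += 1
        return observed, exceed / n_shuffles  # count and approximate p-value

    # Hypothetical onset times (in seconds) for two action units
    au4 = [1.2, 5.0, 9.7, 14.3]
    au9 = [1.6, 5.5, 10.1, 20.0]
    print(shuffle_test(au4, au9, d1=0.0, d2=1.0, t_max=30.0))

Detected pairs can then be treated as new event types and fed back into the same test, which is how hierarchical patterns of increasing length emerge.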
IV. GENERAL DISCUSSION AND CONCLUSIONS
General Discussion
In the present thesis, we explored and addressed methodological issues relevant to
nonverbal communication research. More specifically, we were guided by three main
objectives. The first one was to collect recordings of emotional expressions spontaneously
produced in face-to-face interactions; the second was to address the issue of subjective
judgments of dynamic facial expressions of emotion. The third one was to relate the
subjective judgments of dynamic displays perceived as communicating emotions to
sequential behavioral patterns composed of FACS facial action units combined with gaze,
head and para-verbal variables. The first objective was guided by the fact that, even if most
researchers widely endorse the assumption that the human face is effective at communicating
affective information, and that conversely an observer is able to infer emotional messages
from facial expressions, the existing evidence does not allow us to authenticate both sides of
this postulate.
To date, the majority of published results on emotion perception from the face
originate from studies that used static and acted portrayals as stimuli for target emotion
recognition. In the introduction we have reviewed serious objections concerning the
ecological validity, and therefore the generalization, of the results produced under such
conditions to emotion signal processing in interactive settings. One consequence of the
methodological choice of using static displays in emotion perception studies is that little is
known about the nature of spontaneous and dynamic facial expressions.
Yet accumulating data is starting to demonstrate the importance of the dynamic
features of facial expressions for deciphering the meaning of the subtle expressions frequently
encountered in naturalistic contexts (Ambadar, Schooler and Cohn, 2005; Bould and Morris,
2008). Others have shown that deliberate and spontaneous smiles can be distinguished by
the dynamic properties of the expression (Schmidt et al., 2006; Cohn and Schmidt, 2004). This
illustrates the importance of studying the perception of facial expressions as dynamic rather
than static signals. Nevertheless, relevant databases containing expressions that are natural and
emotional enough to be used in emotion perception studies are very rare.
MeMo - a new research database of dynamic facial expressions of
emotions
In this thesis we presented the steps taken to gather such a database. We were able to
extract 200 short segments (2s-16s, µ = 5.88s) of video containing emotional expression
sequences. The 200 sample files were extracted from 50 recorded interviews in which ten
female participants narrated autobiographical events during which they had experienced an
intense emotion. The rationale for eliciting and filming these narratives was that they could
potentially reactivate emotions during the sharing task. We have shown that this emotion
eliciting task seemed effective, as evidenced by the self-report data obtained after
the emotional narrative task. Most participants felt confident enough with the experimental
situation to disclose sensitive personal events including, amongst others, episodes of rape,
the recent death of close relatives and physical threats with firearms.
The perceived message value of dynamic facial expressions of emotions
With regard to our second objective, we wanted to address the issue of how dynamic
sequences of facial expressions would be perceived in terms of the types of emotions
identified. Because traditional judgment studies use static material, they do not provide
information on how fleeting and rapidly changing expressions are perceived. Another
characteristic of those studies is that they are meant to compute recognition accuracy scores.
Typically, in cross-cultural research, groups of participants from distinct cultures are
compared in terms of their level of accuracy at matching a pre-selected prototype with a
normative emotion label.

Because we are using video sequences that were pre-selected on the basis that the
participant simply "appeared to be experiencing" an emotion, we have no "correct response"
criterion against which ratings can be checked for accuracy. Our approach was not
confirmatory but essentially explorative in nature.
Using a multi-scalar rating task composed of 17 affective (or affect-related) terms, we
were able to show that the 45 participants taking part in a judgment study significantly agreed
in their ways of using the adjectives to describe the emotions they perceived in the video
samples. Based on a principal components factor analysis with varimax rotation on the
ratings, with the adjectives as variables and the subjects and clips as cases, we were able to
extract five rating factors that we identified as reflecting: enjoyment, hostility,
embarrassment, surprise and sadness.
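As an illustration of this type of analysis, the sketch below shows how a varimax-rotated principal components solution could be extracted from a ratings matrix in Python. The file name, the five-factor choice and the use of the factor_analyzer package are assumptions made for the example; they are not a record of the software actually used for this thesis.

    # Sketch: extracting varimax-rotated factors from a cases-by-adjectives
    # ratings matrix. "ratings.csv" is a hypothetical file in which each row
    # is one judge-by-clip case and each column one of the 17 adjectives.
    import pandas as pd
    from factor_analyzer import FactorAnalyzer

    ratings = pd.read_csv("ratings.csv")

    fa = FactorAnalyzer(n_factors=5, rotation="varimax", method="principal")
    fa.fit(ratings.values)

    loadings = pd.DataFrame(fa.loadings_, index=ratings.columns,
                            columns=[f"F{i + 1}" for i in range(5)])
    print(loadings.round(2))  # which adjectives load on which factor

    scores = fa.transform(ratings.values)  # factor scores, one row per case

Factor scores averaged per clip could then serve as input to a cluster analysis of the kind used to form the five groups discussed above.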
the video samples, or b) have been co-occurring with latent variables not measured in this
thesis that did influence the categorization process.
Limitations
The findings of this study are subject to several notable limitations. The first
major limitation is related to the fact that the judgment study was conducted with the raters
listening to what the subjects were talking about while rating the videos. Consequently, verbal
statements may have acted as important cues for the judges in their ratings of the video
samples. When the rating study was conducted, we had hopes that we could include ratings of
speech content in our scoring process. This would have allowed us to better estimate the
relative importance of semantic versus nonverbal cues in the emotion inference process. As
the work on this thesis evolved, it became increasingly clear that a whole team of researchers
would be needed to carry on with this task. By the time we gave up on coding the speech content
of the narratives, there was no time left to conduct a second judgment study with the sound of
the videos off. Nevertheless, even if speech content were shown to have played a role in how
the video samples were classified, we would still have to come up with an explanation for
why specific nonverbal t-patterns were distributed nonrandomly across the clusters. One of
the most obvious steps to take after this thesis would be to conduct a judgment study with
the sound off and to compare the results with those described in this thesis. This would produce
important data for comparing the relative weights of verbal and nonverbal cues in the judgments of
dynamic displays of facial expressions.
Secondly, even though we have used dynamic emotional displays, we have not
addressed the natural counterpart issue, which is the temporal process of emotion
recognition. Like most previous work on the recognition of dynamic expressions, we have
gathered a posteriori punctual judgments. The participants in the rating procedure had to
provide their judgments once the video record was over. Another method could have
consisted in having observers make emotional judgments as the expressions unfold.
Recently, Tcherkassof and colleagues (2007) reported the development of a software program
allowing the temporal evolution of a facial expression to be matched with the subjective
judgments of observers. Conducting moment-by-moment emotional ratings of videos containing
specified t-patterns might be very informative as to the evolution of the impression formation
process.
Additionally, future studies on the structural and dynamic impact of expressive
behavior on emotion perception should examine other potential sources for the effect. Ekman
(1979) delineated the potential effects of static, fast, slow, and cosmetic cues in contributing
to emotion judgments. Facial expressions, which involve rapid movements of the facial
musculature, are fast cues. Facial physiognomies—the physical features of the face—provide
static cues, and ethnic/cultural differences in physiognomy may contribute differentially to
emotion-related signals independent of the expressions themselves. Individuals with
protruding eyebrows, for instance, may be perceived as staring more, and individuals with
double eyelids may produce more sclera—the whites above the eyes—than individuals with
single eyelids for the same degree of innervation. In fact, one study did indeed find
differences in judgments of fear and anger between Americans and Japanese as a function of
the degree of white above the eyes shown in these emotions (Matsumoto, 1989). Rounder
faces with larger eyes give baby-face features, while longer faces with thinner eyes may
portray a harsher message. Slow cues, such as wrinkle patterns and pigmentation, as well as
cosmetic cues involving hair style and length, type and length of facial hair, may also
contribute to emotion signaling. Other cues in everyday life additionally compound the
picture, such as cosmetics, eyeglasses, jewelry, and the like. All of these physical features
associated with the face may contribute to emotion messages, and these may contribute to in-
group effects.
Finally, future studies on the structural and dynamic impact of expressive behavior on
emotion perception should examine effects of gender from both the production and the
perception sides. In this thesis, we limited participation in the production as well as the
interpretation studies to female participants. This was motivated by the fact that prior
research has shown that women are both more expressive (Hall, Carter, and Horgan, 2000)
and more sensitive to nonverbal cues than men (Hall and Matsumoto, 2004). In the future,
similar studies should involve participants of both genders for both the production of dynamic
displays and their interpretation. In addition to our current dataset, the paradigm used in this
study would benefit from being extended to all-male and mixed participants (males judging
males, females judging males and males judging females).
Future perspectives
To determine the precise psychological impact, in terms of message value, of the t-patterns
detected in our research, several steps can be taken. Generally, structural analysis
models describe structures in terms of two main components: elements and rules connecting
them. Each element is composed of one or more event types. When two or more event types
comprise one element, the element is considered to occur when any one or more of its
constituent actions occurs. That is, the actions are considered interchangeable within the
element (Duncan & Fiske, 1977; Duncan et al., 1985). In this work, we have shown that
temporally unfolding behavioral sequences are structurally composed of FACS AUs as well as
head and gaze movements/orientations and speech categories. Grouping different actions
within a single structural element is an empirical issue, based on evidence that these actions
are perceived as conveying reasonably the same meaning by different groups of observers.
The grouping of the behavior units found in our sequences was based on t-pattern
analysis, as opposed to intuition or theory. Rules define appropriate sequences of elements
within a structure. A distinction can be made between obligatory rules and optional rules. An
obligatory rule would state that at a specified point in the stream of actions, an element must
or must not occur for the sequence to be perceived as conveying a specific attitude. An
optional rule would state that at a specified point in the stream of actions, an individual may
legitimately choose from a set of two or more alternative actions without modifying the
dominant meaning attributed to a pattern. In some cases the actions will involve contrasts,
such as turning the head away versus looking away from the interviewer: even though these
two actions are different, both may serve the similar function of avoiding contact with a social
partner. In other cases, the alternatives will involve the participant performing or
not performing an action, such as smiling or not smiling. In either case, each available option
must have a different effect on the ensuing classification of the pattern; that is, the perception
of the attitude enacted must differ depending on how the option is exercised. The
analysis of the action sequences presented in our study could be pushed further by developing
empirically based hypotheses concerning the psychological impact, in terms of perceived
meaning, of highly specific communicative structures. For example, potential structures could
be implemented as dynamic sequences in facial expression simulation software such as
FACSGen. These implementations could then be evaluated in rating studies in terms of how
well they fit the hypotheses. This is an example of how communicative patterns newly
discovered through structural analysis can provide a basis for quantitative studies.
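To make the element/rule vocabulary concrete, the following minimal sketch (in Python, with invented event labels; it does not reproduce Duncan and Fiske's notation) shows how an element made of interchangeable event types, an obligatory rule, and an optional rule could be checked against a coded sequence:

    # Minimal sketch of the element/rule vocabulary, with invented labels.

    # An element occurs when any one of its constituent event types occurs:
    # here, two interchangeable ways of avoiding contact with a partner.
    AVOID_CONTACT = {"Head Turned Away", "Look Away"}
    SMILE = {"AU12"}

    def element_occurs(element, sequence):
        """True if any constituent event type of the element is present."""
        return any(event in element for event in sequence)

    def classify(sequence):
        # Obligatory rule: an avoidance element must occur somewhere in the
        # sequence for it to count as an avoidance display at all.
        if not element_occurs(AVOID_CONTACT, sequence):
            return "not an avoidance display"
        # Optional rule: smiling may or may not occur; exercising the option
        # changes the sub-reading without changing the dominant meaning.
        if element_occurs(SMILE, sequence):
            return "avoidance display with smiling"
        return "avoidance display without smiling"

    print(classify(["AU12", "Lower Head", "Look Away"]))  # with smiling
    print(classify(["AU1+2", "AU5", "Speak"]))            # not an avoidance display

Hypotheses of this kind, once stated explicitly, are exactly what could then be implemented and tested in FACSGen rating studies.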
Conclusions
Despite the fact that facial expressions of emotion are naturally highly dynamic social
signals, their communicative value has typically been studied using static photographs pre-
selected for maximum discriminability. To date, most of the emphasis has been placed on cross-
cultural similarities in the ability of groups of different national/ethnic origins to "recognize"
a limited number of emotion categories from prototypical facial expressions posed by actors.
Results from research supporting cross-cultural commonalities in the interpretation of
prototypical expressions have been taken as strong evidence in favor of the universality of 6
to 7 phylogenetically inherited "basic emotions". Because these "recognition" studies are
confirmatory in nature, they cannot generate information on the way emotions are
communicated nonverbally when full-blown prototypical facial patterns are not enacted, as is
often the case in natural settings (Russell, 1997). Moreover, the lack of ecological validity of
the facial stimuli generally used in these studies constitutes a serious objection against
generalizing the reported results to the way emotions are communicated and understood
in naturalistic settings. Even though the central role of the dynamics of facial expressions is
increasingly endorsed, little is known about the impact of the temporal unfolding of facial
expressions, in interaction with other nonverbal signals, on the process of inferring others'
emotional states during social interactions. In this thesis, we showed that judges can
reliably agree on the emotional message value of spontaneously produced dynamic facial
expressions of emotion. Five distinct groups of emotion categories were found to account for
the impressions of 45 judges rating 200 video sample files. These five categories were
labeled enjoyment, hostility, embarrassment, surprise, and sadness. This thesis also
demonstrates that repetitive sequential patterns composed of specific facial movements
combined with other nonverbal behaviors are distributed in distinctive and systematic ways
across the five emotion groups. This suggests that the composition and order of unfolding of
nonverbal communicative actions might play a significant role in deciphering the emotional
meaning of dynamic displays of emotion. Thus, this thesis points to a strong need for
systematic research on the perception of emotions from spontaneous and dynamic facial
displays. This is important because empirical data on this issue are likely to improve the
capacity of computer vision engineers to focus on relevant social signals in order to infer
affective states in the context of human/machine interactions.
Benecke C, Krause R (2005): Facial-affective relationship offers of patients with panic
disorder. Psychother Res; 15: 178–187.
Ellgring H, Gaebel W (1994): Facial expression in schizophrenic patients; in Beigel A, Lopez
Ibor JJ, Costa e Silva JA (eds): Past, Present and Future of Psychiatry. IX World
Congress of Psychiatry. Singapore, World Scientific, vol. 1, pp 435–439
Berenbaum, H., & Oltmanns, T. (1992). Emotional experience and expression in
schizophrenia and depression. Journal of Abnormal Psychology, 101, 37–44.
Biehl, M., Matsumoto, D., Ekman, P., Hearn, V., Heider, K., Kudoh, T., & Ton, V. (1997).
Matsumoto and Ekman's Japanese and Caucasian Facial Expressions of Emotion
(JACFEE): Reliability Data and Cross-National Differences. Journal of
Nonverbal Behavior, 21, 3-21.
Boucher, J. D., & Carlson, G. E. (1980). Recognition of facial expression in three cultures.
Journal of Cross-Cultural Psychology, 11, 263-280.
Bould, E. and Morris, N. (2008). Role of motion signals in recognizing subtle facial expressions of
emotion. British Journal of Psychology, (99: 2) 167-189.
Brooks, V. B. (1986). The Neural Basis of Motor control . New York: Oxford University
Press.
Camras, L. A., Meng, Z., Ujiie, T., Dharamsi, S., Miyake, K., Oster, H., Wang, L., Cruz, J.,
Murdoch, A. and Campos, J. (2002). Observing emotion in infants: facial
expression, body behaviour, and rater judgments of responses to an expectancy-
violating event. Emotion (2:2), 179-193.
Carroll, J.M., & Russell, J.A. (1996). Do facial expressions express specific emotions?
Judging emotion from the face in context. Journal of Personality and Social
Psychology, 70, 205-218.
Carroll, J. M., & Russell, J. A. (1997). Facial expressions in Hollywood's portrayal of
emotion. Journal of Personality and Social Psychology, 72, 164-176.
Chesney, M. A., Ekman, P., Friesen, W. V., Black, G. W., & Hecker, M. H. (1990). Type A
behavior pattern: Facial behavior and speech components. Psychosomatic
Medicine, 52, 307–319.
Cootes, T.F., Edwards, G.J., & Taylor, C.J. (2001). Active appearance models. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 681-685.
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and
Psychological Measurement, 20, 37-46.
Cohn, J. F., & Elmore, M. (1988). Effect of contingent changes in mothers' affective expression
on the organization of behavior in 3-month-old infants. Infant Behavior and
Development, 11, 493-505.
Cohn, J. F., Schmidt, K., Gross, R., & Ekman, P. (2002). Individual differences in facial
expression: Stability over time, relation to self-reported emotion, and ability to
inform person identification. Proceedings of the International Conference on
Multimodal User Interfaces (ICMI 2002), Pittsburgh, PA, 491-496.
Cohn, J. F. & Schmidt, K. L. (2004). The timing of facial motion in posed and spontaneous
smiles. International Journal of Wavelets, Multiresolution and Information
Processing, 2, 1-12.
Costanzo, M., & Archer, D. (1989). Interpreting the expressive behavior of others: The
Interpersonal Perception Task. Journal of Nonverbal Behavior, 13, 225–245.
Cottraux, J. (1985) Evaluation clinique et psychométrique des états dépressifs. Collection
Scientifique Survector, Paris.
Darwin, C. (1998). The expression of the emotions in man and animals (3rd ed.). New York:
Oxford University Press.
Delplanque, S., Grandjean, D., Chrea, C., Coppin, G., Aymard, L., Cayeux, I., Margot, C.,
Velazco, M. I., Sander, D. and Scherer, K. R. (2009). Sequential unfolding of
novelty and pleasantness appraisals of odours: evidence from facial
electromyography and autonomic reactions. Emotion (9:3), pp. 316-328.
Den Uyl, M.J. ; van Kuilenburg, H. (2005) The FaceReader: Online facial expression
recognition. Proceedings of Measuring Behavior 2005 (Wageningen, 30 August –
2 September 2005) Eds. L.P.J.J. Noldus, F. Grieco, L.W.S. Loijens and P.H.
Zimmerman.
Duncan, S. D., Jr., & Fiske, D. W. (1977). Face-to-face interaction: Research, methods, and
theory. Hillsdale, NJ: Lawrence Erlbaum Associates.
Duncan, S. D., Jr., Fiske, D. W., Denny, R. Kanki, B. G., & Mokros, H. B. (1985). Interaction
structure and strategy. New York: Cambridge University Press.
Eibl-Eibesfeldt, I. (1970). Ethology: The biology of behavior. New York: Holt, Rinehart and
Winston.
Eibl-Eibesfeldt, I. (1972). Love and hate: The natural history of behavior patterns. New York:
Holt, Rinehart and Winston.
Ekman, P., & Friesen, W. (1969). The repertoire of nonverbal behavior: Categories, origins,
usage, and coding. Semiotica, 1, 49–98.
Ekman, P., Friesen, W. V., & Ellsworth, P. (1972). Emotion in the human face: Guidelines for
research and an integration of findings. New York: Pergamon Press.
Ekman, P and Friesen, W. (1978). Facial Action Coding System: A Technique for the
Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto.
Ekman, P. (1982). Methods for measuring facial action. In K.R. Scherer and P. Ekman (Eds.),
Handbook of methods in Nonverbal Behavior Research (pp 45- 90). Cambridge:
Cambridge University Press.
Ekman, P. & Friesen, W.V. (1982). Felt- False- And Miserable Smiles. Journal of Nonverbal
Behavior., 6(4), 238-252.
Ekman, P., and Friesen, W. V. (1986). A new pan-cultural facial expression of emotion.
Motivation and Emotion, 10, 159–168.
Ekman, P., Friesen, W. V., O’Sullivan, M., Chan, A., Diacoyanni-Tarlatzis, I., Heider, K., et
al. (1987). Universals and cultural differences in the judgments of facial
expressions of emotion. Journal of Personality and Social Psychology, 53, 712–
717.
Ekman, P., Friesen, W. V., & O'Sullivan, M. (1988). Smiles when lying. Journal of Personality
and Social Psychology, 54, 414-420.
Ekman, P., and Heider, K. G. (1988). The universality of a contempt expression: A
replication. Motivation and Emotion, 12, 303–308.
Ekman, P., Davidson, R.J., & Friesen, W.V. (1990). Emotional expression and brain
physiology II: The Duchenne Smile. Journal of Personality and Social
Psychology, 58, 342-353.
Ekman, P. (1993). Facial Expression and Emotion. American Psychologist , vol. 48, pp. 384-
392.
Ekman, P, Irwin W, and Rosenberg E. (1994). EMFACS-8. Unpublished manual.
Ekman, P. (1999) Basic Emotions. In T. Dalgleish and T. Power (Eds.) The Handbook of
Cognition and Emotion Pp. 45-60. Sussex, U.K.: John Wiley & Sons, Ltd.
Ekman, P., Friesen, W. and Hager, J.C. (2002). New version of the Facial Action Coding System.
http://www.face-and-emotion.com/dataface/facs/new_version.jsp.
Ekman, P., Friesen, W. and Hager, J.C. (2002). FACS investigator Guide. http://www.face-and-
emotion.com/dataface/facs/new_version.jsp.
Ekman, P. (2003). Sixteen Enjoyable Emotions. Emotion Researcher, 18, 6-7.
Ekman, P (2007). The Directed Facial Action task; emotional responses without appraisal. In
J.A. Coan and J.B. Allen (Eds.), Handbook of Emotion Elicitation and Assessment
(pp. 47-53).
Elfenbein, H. A., & Ambady, N. (2002). On the universality and cultural specificity of
emotion recognition: A meta-analysis. Psychological Bulletin, 128, 203-235.
Elfenbein, H. A., Beaupré, M. G., Lévesque, M. & Hess, U. (2007). Toward a dialect theory:
Cultural differences in the expression and recognition of posed facial
expressions. Emotion, 7, 131-146.
Essa, I., & Pentland, A. (1997). Coding, analysis, interpretation and recognition of facial
expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 7 ,
757-763.
Fleiss, J.L. (1981). Statistical methods for rates and proportions. NY: Wiley
Frank, M. G., Ekman, P., and Friesen, W. V. (1993). Behavioral markers and recognizability
of the smile of enjoyment. Journal of Personality and Social Psychology, 64, 83-
93.
Frank M.G. & Ekman, P. (2003) Nonverbal Detection of Deception in Forensic Contexts.
Handbook of Forensic Psychology. Academic Press.
Frijda, N.H. and Tcherkassof, A. (1997). Facial expressions as modes of action readiness. In
J.A. Russell and J.M. Fernández-Dols (Eds.), The psychology of facial expression
(pp. 78-102).Cambridge: Cambridge University Press.
Gosselin, P., Kirouac, G., & Dore, F. (1995). Components and recognition of facial
expression in the communication of emotion by actors. Journal of Personality and
Social Psychology, 68, 83–96
Haidt, J., & Keltner, D. (1999). Culture and facial expression: Open-ended methods find more
expressions and a gradient of recognition. Cognition & Emotion, 13(3), 225-266.
Haberman, S.J. (1979). Analysis of qualitative data (Vol. 2). New York: Academic Press.
Hall, J. A. (1978). Gender effects in decoding nonverbal cues. Psychological Bulletin, 85,
845–857.
Hall, J. A. (1984). Nonverbal sex differences: Communication accuracy and expressive style.
Baltimore: Johns Hopkins University Press.
Hall, J. A., Carter, J. D., & Horgan, T. G. (2000). Gender differences in the nonverbal
communication of emotion. In A. H. Fischer (Ed.), Gender and emotion: Social
psychological perspectives (pp. 97–117). Paris: Cambridge University Press.
Hall, J. A., & Matsumoto, D. (2004). Sex differences in judgments of multiple emotions from facial
expressions. Emotion, 4(2), 201-206.
Heller, M., Haynal-Reymond, V., Haynal, A. Archinard, M. (2001): Can Faces Reveal
Suicide Attempt Risks? Heller, M. (ed.): The Flesh of the Soul: the body we work
with, pp. 213-256. Bern: Peter Lang.
Heller, M (2008). Psychothérapies corporelles: fondements et méthodes. De Boeck
Université. Collection: Carrefour des psychothérapies. Louvain
Hess, U., Adams, R. and Kleck, R. (2007). Looking at You or Looking Elsewhere: The
Influence of Head Orientation on the Signal Value of emotional facial
expressions. Motivation and Emotion, 31, 137-144.
Izard, C. E. (1971). The face of emotion. East Norwalk, CT: Appleton-Century-Crofts.
Keltner, D. (1995). The signs of appeasement: Evidence for the distinct displays of
embarrassment, amusement and shame. Journal of Personality and Social
Psychology, 68, 441-454.
Krumhuber, E., Manstead, A. S. R., Cosker, D., Marshall, D., Rosin, P. L. and Kappas, A.
(2007). Facial dynamics as indicators of trustworthiness and cooperative
behaviour. Emotion, (7:4), 730-735.
Krumhuber, E., Manstead, A. S. R. and Kappas, A. (2007) Temporal Aspects of Facial
Displays in Person and Expression Perception: The Effects of Smile Dynamics,
Head-tilt, and Gender. Journal of Nonverbal Behaviour (31:1), 39-56.
Krumhuber, E. and Kappas, A. (2005). Moving smiles: The role of Dynamic Components for
the Perception of the Genuineness of Smiles. Journal of Nonverbal Behaviour ,
(29:1), 13-24.
Lanctôt, N., & Hess, U. (2007). The timing of appraisals. Emotion, 7(1), 207-212.
Lebart, L., Morineau, A., & Piron, M. (2000). Statistique exploratoire multidimensionnelle.
Paris: Dunod, pp. 181-184.
Levenson, R. W. (2005). FACS/EMFACS emotion predictions [Computer software].
Berkeley: University of California, Department of Psychology.
Lindquist, K. A., Barrett, L. F., Bliss-Moreau, E., & Russell, J. A. (2006). Language and the
perception of emotion. Emotion, 6(1), 125-138.
Lutz, C., & White, G. M. (1986). The anthropology of emotions. Annual Review of
Anthropology, 15, 405–436.
Lyons, M., Akamatsu, S., Kamachi, M., & Gyoba, J. (1998). Coding facial expressions with
Gabor wavelets. Proceedings of the International Conference on Face and
Gesture Recognition, Nara, Japan.
Magnusson, M. (2000). Discovering Hidden Time Patterns in Behavior: T-Patterns and their
Detection. Behavior Research Methods, Instruments and Computers, 32(1): pp.
93-110.
Magnusson, M. (2006). Structure and communication in Interactions in : Riva, M.T. Anguera,
B.K. Wiederhold and F. Mantovani (Eds.) From Communication to Presence:
Cognition, Emotions and Culture towards the Ultimate Communicative
Experience. IOS Press, Amsterdam.
Matias, R. and Cohn, J. F. (1993). Are MAX-specified infant facial expressions during face-
to-face interaction consistent with Differential Emotions Theory? Developmental
Psychology, 29, 524-531.
Matsumoto, D., & Ekman, P. (1988). Japanese and Caucasian facial expressions of emotion
and neutral faces (JACFEE and JACNEUF). Retrieved from http://www.paulekman.com
Matsumoto, D. (1990). Cultural similarities and differences in display rules. Motivation &
Emotions, 14, 195-214.
Matsumoto, D., Kasri, F., & Kooken, K. (1999). American-Japanese cultural differences in
judgements of expression intensity and subjective experience. Cognition and
Emotion, 13, 201–218.
Matsumoto, D., Ekman, P., & Fridlund, A. (1991). Analyzing nonverbal behavior. In P. W.
Dowrick (Ed.), Practical guide to using video in the behavioral sciences (pp.
153–165). New York: Wiley.
Matsumoto, D., & Ekman, P. (2004). The relationship between expressions, labels, and
descriptions of contempt. Journal of Personality and Social Psychology, 87(4),
529-540.
Matsumoto, D. (2005). Scalar ratings of contempt expressions. Journal of Nonverbal Behavior,
29(2), 91-104.
Matsumoto, D., & Willingham, B. (2009). Spontaneous facial expressions of emotion of blind
individuals. Journal of Personality and Social Psychology, 96(1), 1-10.
Matsumoto, D., Willingham, B., and Olide, A. (2009). Sequential dynamics and culturally-
moderated facial expressions of emotion. Psychological Science, Vol.20, Number 10.
Merten, J. (2001). Beziehungsregulation in Psychotherapien. Maladaptive Beziehungsmuster
und der therapeutische Prozeß. Stuttgart: Kohlhammer
Mistschenka, M. N. (1933). Über die mimische Gesichtsmotorik der Blinden [Facial
mimicking motor behavior in blind individuals]. Folia Neuropathologica
Estoniana, 13, 24-43.
Motley, M., and Camden, C. (1988). Facial expression of emotion: A comparison of posed
expressions versus spontaneous expressions in an interpersonal communication
setting. Western Journal of Speech Communication, 52, 1–22.
Motley, M.T. (1993). Facial affect and verbal context in conversations. Facial expression as
interjection. Human Communication Research (20:1), 3-40.
Niedenthal, P.M. Auxiette, K., Nugier, A., Dalle, N., Bonin, P., & Fayol, M. (2003). A
prototype analysis of the French category "émotion". Cognition and Emotion, 18,
289-312.
Nowicki, S., Jr., & Duke, M. P. (1994). Individual differences in the nonverbal
communication of affect: The Diagnostic Analysis of Nonverbal Accuracy Scale.
Journal of Nonverbal Behavior, 18, 9–36.
Otta, E., Lira, B. B. P., Delevati, N. M., Cesar, O. P., & Pires, C. S. G. (1994). The effect of
smiling and of head tilting on person perception. Journal of Social Psychology,
128, 323–331.
Padgett, C. & Cottrell, G.W. (1996). Representing face images for emotion classification.
Proceedings Advances in Neural Information Processing Systems, 894-900.
Quera, V., Bakeman, R., & Gnisci, A. (2007). Observer agreement for event sequences:
Methods and software for sequence alignment and reliability estimates. Behavior
Research Methods, 39 (1) , 39-49.
Adams, R. B., Jr., & Kleck, R. E. (2005). Effects of direct and averted gaze on the
perception of facially communicated emotion. Emotion, 5(1), 3-11.
Rhodes G, (2006). The evolutionary psychology of facial beauty. Annual Review of
Psychology 57 , 199-226.
Rosenthal, R., Hall, J. A., DiMatteo, M. R., Rogers, P. L., & Archer, D. (1979). Sensitivity to
nonverbal communication: The PONS test . Baltimore: Johns Hopkins University
Press.
Russell, J.A. (2005). Emotion in Human Consciousness Is Built on Core Affect. Journal of
Consciousness Studies, 12, 26-42.
Parr, L. A., Waller, B. M., & Fugate, J. (2005). Emotional communication in primates:
Implications for neurobiology. Current Opinion in Neurobiology, 15, 716–720.
Parr, L. A., Waller, B. M., Vick, S. J., & Bard, K. A. (2007). Classifying chimpanzee facial
expressions by muscle action. Emotion, 7, 172–181.
Ragan J. M. (1982). "Gender display in portrait photographs". Sex Roles, 8, 33-43.
Rinn, W.E. (1984) .The Neuropsychology of Facial Expression: A Review of the
Neurological and Psychological Mechanisms for Producing Facial Expressions.
Psychological Bulletin, vol. 95, pp. 52-77.
Russell, J. A. (1994). Is there universal recognition of emotion from facial expressions? A
review of cross-cultural studies. Psychological Bulletin, 115, 102-141.
Russell, J. A. (1997). Reading emotions from and into faces: Resurrecting a dimensional-
contextual perspective. In J. A. Russell and J. M. Fernandez-Dols (Eds.), The
psychology of facial expression (pp. 295-320). Cambridge: Cambridge University
Press.
Russell, J.A., & Barrett, L. F. (1999). Core affect, prototypical emotional episodes, and other
things called emotion: Dissecting the elephant. Journal of Personality and Social
Psychology, 76, 805-819.
Russell, J. A. (2003). Core affect and the psychological construction of emotion. Psychological
Review, 110, 145-172.
Roesch, E. B., Reveret, L., Grandjean, D., and Sander, D. (2006). FACSGen: Generating
synthetic, static and dynamic, FACS-based facial expressions of emotion. Alpine
Brain Imaging Meeting.
Rosenberg, E. L., & Ekman, P. (1994). Coherence between expressive and experiential
systems in emotion. Cognition & Emotion, 8, 201–229.
Rosenberg, E. L., Ekman, P., & Blumenthal, J. A. (1998). Facial expressions and the affective
component of cynical hostility in male coronary heart disease patients. Health
Psychology, 17, 376–380.
Rosenberg, E. L., Ekman, P., Jiang, W., Babyak, M., Coleman, R. E., Hanson, M., et al.
(2001). Linkages between facial expressions of anger and transient myocardial
ischemia in men with coronary heart disease. Emotion, 1, 107–115.
Rozin, P., Lowery, L., Imada, S., & Haidt, J. (1999). The CAD triad hypothesis: A mapping
between three moral emotions (contempt, anger, disgust) and three moral codes
(community, autonomy, divinity). Journal of Personality and Social Psychology,
75(4), 574-585.
Sayette, M.A., Cohn, J. F. Wertz, J.M., Perrott, M.A., and Parrott, D.J. (2001). A
Psychometric Evaluation of the Facial Action Coding System for Assessing
Spontaneous Expression. Journal of Nonverbal Behaviour, 25. pp.167-186.
Scherer, K.R., (1992). What do facial expressions express? In K. Strongman (Eds.),
International review of Studies on Emotion. Vol. 2 (pp. 293-318). Hillsdale, NJ:
Lawrence Erlbaum.
Scherer, K. R. (1999a). Appraisal theory. In T. Dalgleish & M. Power (Eds.), Handbook of
cognition and emotion (pp. 637–663). New York: Wiley Ltd.
Scherer, K. R. (2001). Appraisal considered as a process of multilevel sequential checking. In
K. Scherer, A. Schorr, & T. Johnston (Eds.), Appraisal processes in emotions:
Theory, methods, research: Series in affective science (pp. 92–120). New York:
Oxford University Press.
Scherer, K. R. and Ellgring, H. (2007a). Are facial expressions of emotion produced by categorical
affect programs or dynamically driven by appraisal? Emotion, (7:1), 113-130.
Scherer, K. R. and Ellgring, H. (2007b). Multimodal expression of emotion. Affect programs
or componential appraisal patterns? Emotion, (7:1), 158–171.
Scherer, K. R., & Grandjean, D. (2008). Facial expressions allow inference of both emotions
and their components. Cognition and Emotion, 22(5), 789-801.
Scherer, K. R. (2009). The dynamic architecture of emotion: Evidence for the component
process model. Cognition and Emotion, 23(7), 1307-1351.
Schmidt, K.L., Ambadar, Z.,J.F. Cohn, and Reed, L.I. (2006). Movement differences between
deliberate and spontaneous facial expressions: Zygomaticus major action in
smiling. Journal of Nonverbal Behavior 30 (1): 37-52.
Smith, C. A., and Scott, H. S. (2007). A componential approach to the meaning of facial
expressions. In J. A. Russell and J. M. Fernandez-Dols (Eds.), The psychology of
facial expression (pp. 229-254). Cambridge: Cambridge University Press.
Steimer-Krause, E., Krause, R., & Wagner, G. (1990). Interaction regulations used by
schizophrenic and psychosomatic patients: Studies on facial behavior in dyadic
interactions. Psychiatry, 53, 209-228.
Suen, H. K., and Ary, D. (1989). Analyzing quantitative behavioral data. Hillsdale, NJ:
Erlbaum.
Ricci-Bitti, P. E., Brighetti, G., Garotti, P. L., & Boggi-Cavallo, P. (1989). Is contempt
expressed by pancultural facial movements? In J. P. Forgas & J. M. Innes (Eds.),
Recent advances in social psychology: An international perspective (pp. 329-
339). Amsterdam, NL: Elsevier.
Shiota, M. N., Campos, B., & Keltner, D. & Hertenstein, M. J. (2004). Positive emotion and
the regulation of interpersonal relationships. In P. Philippot & R. S. Feldman
(Eds.), The regulation of emotion (pp. 127-156). Mahwah, NJ: Lawrence Erlbaum
Associates.
Tcherkassof, A., Bollon, T., Dubois, M., Pansu, P., Adam, J.M. (2007). Facial expressions of
emotions: A methodological contribution to the study of spontaneous and
dynamic emotional faces. European Journal of Social Psychology, 37 ; 1325–
1345.
Tomkins, S. S. (1962). Affect, imagery, and consciousness: Vol. 1. The positive affects. New
York: Springer Publishing Company.
Tomkins, S. S. (1963). Affect, imagery, and consciousness: Vol. 2. The negative affects. New
York: Springer Publishing Company.
Tracy, J. L., & Robins, R. W. (2004). Show your pride: Evidence for a discrete emotion
expression. Psychological Science, 15, 194-197.
Tracy, J. L., & Robins, R. W. (2008). The nonverbal expression of pride: Evidence for cross-
cultural recognition. Journal of Personality and Social Psychology, 94, 516-530.
Van Hooff, J. A. R. A. M. (1972). A comparative approach to the phylogeny of laughter and
smiling. In R. A. Hinde (Ed.), Non-verbal communication (pp. 209–240).
Cambridge, United Kingdom: Cambridge University Press.
Wagner, H. L. (2000). The accessibility of the term "contempt" and the meaning of the
unilateral lip curl. Cognition & Emotion, 14(5), 689-710.
Waller, B. M., & Dunbar, R. I. M. (2005). Differential behavioural effects of silent bared
teeth display and relaxed open mouth display in chimpanzees ( Pan troglodytes).
Ethology, 111, 129–142.
Waller, B.M., Cray, J.J. & Burrows, A.M. (2008). Selection for universal facial emotion.
Emotion, 8(3), 435-439.
Wehrle, T., Kaiser, S., Schmidt, S. & Scherer, K. R (2000). Studying the dynamics of
emotional expression using synthesized facial muscle movements. Journal of
Personality and Social Psychology, 78 (1), 105-119.
Wen, Z. (January 2004). Face processing research at the University of Illinois Champaign-
Urbana. Face Processing Meeting, Center for Multimodal Learning and
Communication, Carnegie Mellon University, Pittsburgh, PA.
Wierzbicka, A. (1994). Emotion, language, and cultural scripts. In S. Kitayama & H. R.
Markus (Eds.), Emotion and culture: Empirical studies of mutual influence.
Washington, DC: American Psychological Association.
Widen, S. C. & Russell, J. A. (2008). Children’s and Adults’ Understanding of the “Disgust
Face” Cognition and Emotion, 22, 1513-1541.
With, S. and Kaiser, S. (2009). Multimodal annotation of emotion signals in social
interactions. In E. Bänninger-Huber and D. Peham (eds), Current and Future
Perspectives in Facial Expression Research: Topics and Methodological
Questions. Innsbruck: Innsbruck University Press.
Yacoob, Y. & Davis, L. (1996). Recognizing human facial expression from long image
sequences using optical flow. IEEE Transactions on Pattern Recognition and
Machine Intelligence, 18, 636-642.
Yale, M., Messinger, D., Cobo-Lewis, A., Oller, D. K., & Eilers, R., (1999). An event-based
analysis of the coordination of early infant vocalizations and facial
actions. Developmental Psychology, 35(2), 505–513.
Yale, M. E., Messinger, D. S., & Cobo-Lewis, A. B. (2003). The temporal coordination of early
infant communication. Developmental Psychology, 39(5), 815-824.
Yik, M., Meng, Z. and Russell, J.A. (1998) Adults' freely produced emotion labels for babies'
spontaneous facial expressions. Cognition and Emotion, 12, 723-730.
Yrizarry, N., Matsumoto, D., and Wilson-Cohn, C. (1998). American-Japanese differences in
multiscalar intensity ratings of universal facial expressions of emotion. Motivation
and Emotion, 22(4), 315-327.
Zhang, Z (1999). Feature-based facial expression recognition: Sensitivity analysis and
experiments with multi-layer perceptron. International Journal of Pattern
Recognition and Artificial Intelligence, 13, 893-911
Zhu, Y., De Silva, L.C., & Ko, C.C. (2002). Using moment invariants and HMM in facial
expression recognition. Pattern Recognition Letters, 23, 83-91.
Appendix I.
Facial Action Coding System
Figures and Definitions of Major Action Units
The Upper Face Action Units according to the FACS system
AU1
AU 1 (Inner Brow Raise)
The inner corners of the brow are raised upwards. For
many people, AU1 produces an oblique shape to the
eyebrows.
In some people the brows do not take on this shape but
more of a dip in the center with a small pull up at the
inner corners. This action causes the skin in the center
of the forehead to wrinkle horizontally. These wrinkles
usually do not run across the forehead but are limited to
the center.
AU2
AU 2 (Outer Brow Raise)
This action pulls the lateral (outer) portion of the eyebrows upwards, causing the lateral
portion of the eyebrows to appear arched. Short horizontal wrinkles may appear above the
lateral portions of the eyebrows.
AU 1+2
AU 1+2 (Inner and Outer Brow Raise)
The combination of AU 1 and 2 pulls the entire
eyebrow (medial to lateral parts) upwards. It produces
an arched, curved appearance to the shape of the
eyebrow. Horizontal wrinkles appear across the entire
forehead.
AU4
AU 4 (Brow Lowerer)
This action lowers the eyebrows and draws them together, which can produce vertical
wrinkles between the brows.
The Lower Face Action Units according to the FACS system
AU9
AU9 (Nose Wrinkler)
It lowers the medial portion of the eyebrows.
Wrinkles appear along the sides of the nose and
across the root of the nose. The infra-orbital triangle
is pulled upwards causing the infra-orbital furrow to
wrinkle and to deepen in strong actions. The eye
aperture is narrowed. This action pulls the center of
the upper lip upwards. The nostril wings may be
widened and raised.
AU10
AU10 (Upper Lip Raiser)
The upper lip is pulled upwards, which may put an angular bend in its shape. The nasolabial
furrow deepens and the nostril wings are raised.

AU12
AU12 (Lip Corner Puller)
This action pulls the corners of the lips back and obliquely up, deepening the adjacent part
of the nasolabial furrow.
AU11 (Nasolabial Furrow Deepener)
The upper lip is pulled upwards and laterally. The
skin below the upper portion of the nasolabial furrow
is pulled obliquely upwards. Both the upper and
middle portion of the nasolabial furrow deepens. The
middle portion of the furrow deepens typically more
than in AU10.
AU11
AU13 (Sharp Lip Puller)
The cheeks and the infra-orbital triangle become
very salient, puffing out, as the infra-orbital triangle
is lifted primarily up, more than obliquely.
This action pulls the corners of the lips up but at a
sharper angle than AU 12. While the corners of the
lips are pulled up, the red parts of the lips do not
move up with the lip corners. The lip corners appear
to be tightened, narrowed, and sharply raised.
AU15
AU15 (Lip Corner Depressor)
This action pulls the lip corners down. The shape of
the lips is angled down at the corner, and usually the
lower lip is somewhat stretched horizontally.
AU16 (25)
AU16 (Lower Lip Depressor)
The lower lip is pulled downwards. It usually occurs with AU25, so that parts of the lower
teeth, and sometimes the gums, are exposed as the lip is lowered.

AU14
AU14 (Dimpler)
This action tightens the corners of the mouth, pulling the corners slightly inwards and
producing a dimple-like wrinkle beyond the lip corners.
AU17
AU17 (Chin Raiser)
This action pushes the chin boss upward and the
lower lip upward. It can cause the mouth to take
on an inverted U shape.
AU20 (16+25+26)
AU20 (Lip Stretcher)
This action pulls the lip corners laterally, so that the mouth looks elongated, but the
corners are neither raised nor lowered. The lips and the skin over the chin boss are
stretched and flattened by the lateral pull. In addition, AU26 typically accompanies the
combination shown here.

AU22 (25+26)
AU22 (Lip Funneler)
The lips funnel outwards taking on the shape as
though the person were saying the word “flirt”. This
action pulls in medially on the lip corners. Exposes
the teeth and may expose gums.
AU23
AU23 (Lip Tightener)
The lips are tightened and made to appear narrower. The red parts of the lips are rolled
slightly inwards, and small wrinkles may appear in the skin around the lips.
AU24 (Lip Presser)
The lips are pressed together, without pushing up
the chin boss. This action lowers the upper lip and
raises the lower lip to a small extent. The lips are
tightened and narrowed.
AU24
AU25 (Lips Part)
The lips part, so that the teeth or the inside of the mouth may become visible.
AU26 (Jaw Drop)
The mandible is lowered by relaxation so
that separation of the teeth can at least be
inferred if AU25 is not present.
AU27 (Mouth Stretch)
The mouth is stretched widely open; the mandible is pulled down actively, much further
than in AU26.
Appendix II.
Frequency Distribution Tables for Event Types in Rating
Clusters
Cluster label | AU1 | AU1+2 | AU4 | AU5 | AU6 | AU7 | AU9 | AU10 | AU10U | AU12 | AU12U | AU14 | AU14U | AU15 | AU17 | AU20 | AU23 | AU24 | AU25 | AU26
Positive emotions | 0 | 9 | 0 | 13 | 8 | 0 | 0 | 0 | 0 | 11 | 0 | 5 | 0 | 0 | 0 | 5 | 0 | 0 | 0 | 0
Hostility | 0 | 7 | 25 | 7 | 0 | 15 | 10 | 2 | 1 | 3 | 0 | 2 | 4 | 5 | 9 | 4 | 1 | 1 | 1 | 0
Embarrassment | 0 | 2 | 1 | 3 | 0 | 10 | 1 | 6 | 0 | 20 | 10 | 0 | 0 | 12 | 6 | 1 | 1 | 0 | 0 | 0
Surprise | 0 | 15 | 0 | 15 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 29 | 0 | 0 | 0 | 0 | 0
Sadness | 0 | 11 | 2 | 5 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 1 | 14 | 9 | 0 | 1 | 1 | 2 | 0
Table 1. FACS codes initiating T-patterns in rating clusters. Values are expressed in percentages.

Cluster label | LH | HT | HD | HR | HRT | HLT | HRed | HON | HTA | HTS | HTilt | Shake | Nod
Positive emotions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Hostility | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Embarrassment | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Surprise | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 16 | 0 | 0 | 0 | 0 | 0
Sadness | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 1 | 0 | 0 | 0 | 0
Table 2. Head position and orientation codes initiating T-patterns in rating clusters. Values are expressed in percentages.
Legend: LH: Lower Head; HT: Head Turns; HD: Head Down; HR: Head Raises; HRT: Head Raise and Turn; HLT: Head Lower and Turn; HRed: Head Raised; HON: Head On; HTA: Head Turned Away; HTS: Head Tilted Side; HTilt: Head Tilting Side; Shake: Head Shake; Nod: Head Nod.

Cluster label | Blink | Eyelids Droop | Look At | Look Away | Look Down | Look Up
Positive emotions | 0 | 0 | 34 | 9 | 0 | 0
Hostility | 0 | 0 | 0 | 0 | 0 | 0
Embarrassment | 0 | 5 | 0 | 8 | 7 | 0
Surprise | 5 | 0 | 3 | 0 | 0 | 0
Sadness | 17 | 2 | 2 | 2 | 3 | 0
Table 3. Gaze position and orientation codes initiating T-patterns in rating clusters. Values are expressed in percentages.

Cluster label | Pause | Speak | Hesitation | Verbal Filler | Word Stress | False Start | Laughing | Crying
Positive emotions | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0
Hostility | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Embarrassment | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Surprise | 15 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Sadness | 22 | 1 | 0 | 0 | 0 | 0 | 0 | 0
Table 4. Voice and speech codes initiating T-patterns in rating clusters. Values are expressed in percentages.
First in pat.* | AU1 | AU1+2 | AU4 | AU5 | AU6 | AU7 | AU9 | AU10 | AU10U | AU12 | AU12U | AU14 | AU14U | AU15 | AU16 | AU17 | AU20 | AU23 | AU24 | AU25 | AU26 | Tot. Ind. T-pat.**
Look At | 0 | 3 | 0 | 4 | 1 | 0 | 0 | 0 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
AU5 | 0 | 2 | 0 | 3 | 1 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3
AU12 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 4
AU1+2 | 0 | 2 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2
Look Away | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
AU6 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3
Speak | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
AU14 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2
AU20 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 2
Total | 0 | 7 | 0 | 9 | 7 | 0 | 0 | 1 | 0 | 14 | 0 | 2 | 0 | 0 | 0 | 3 | 2 | 0 | 0 | 0 | 0 | 25
Proportions in T-patterns | 0% | 28% | 0% | 36% | 28% | 0% | 0% | 4% | 0% | 56% | 0% | 8% | 0% | 0% | 0% | 12% | 8% | 0% | 0% | 0% | 0% |
Table 5. Enjoyment cluster. Number and proportions of FACS codes in T-patterns.
* First in pat. = first codes in the T-pattern. ** Tot. Ind. T-pat. = total number of independent T-patterns in the cluster.
First in pat.* | Blink | Eyelids Droop | Look At | Look Away | Look Down | Look Up | N. Ind. T-pat.**
Look At | 2 | 0 | 7 | 0 | 0 | 0 | 7
AU5 | 1 | 0 | 0 | 0 | 0 | 0 | 3
AU12 | 0 | 0 | 0 | 0 | 0 | 0 | 4
AU1+2 | 0 | 0 | 0 | 0 | 0 | 0 | 2
Look Away | 0 | 0 | 0 | 1 | 0 | 0 | 1
AU6 | 0 | 0 | 0 | 0 | 0 | 0 | 3
Speak | 0 | 0 | 1 | 0 | 0 | 0 | 1
AU14 | 0 | 0 | 0 | 0 | 0 | 0 | 2
AU20 | 0 | 0 | 0 | 1 | 0 | 0 | 2
Total | 3 | 0 | 8 | 2 | 0 | 0 | 25
Proportions in T-patterns | 12% | 0% | 32% | 8% | 0% | 0% |
Table 6. Enjoyment cluster. Number and proportions of Gaze codes in T-patterns.
* First in pat. = first codes in the T-pattern. ** N. Ind. T-pat. = number of independent T-patterns in the cluster.
First in pat.* | Lower Head | Head Turns | Head Down | Head Raise | Head Raise and Turn | Head Lower and Turn | Head Raised | Head On | Head Turned Away | Head Tilted Side | Head Tilting Side | Head Shake | Head Nod | N. Ind. T-pat.**
Look At | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
AU5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3
AU12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4
AU1+2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2
Look Away | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
AU6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3
Speak | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
AU14 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2
AU20 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2
Total | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 25
Proportions in T-patterns | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 4% | 0% | 0% | 0% | 0% |
Table 7. Enjoyment cluster. Number and proportions of Head codes in T-patterns.
* First in pat. = first codes in the T-pattern. ** N. Ind. T-pat. = number of independent T-patterns in the cluster.
First in pat.* | Pause | Speak | Hesitation | Verbal Filler | Word Stress | False Start | Laughing | Crying | N. Ind. T-pat.**
Look At | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
AU5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3
AU12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4
AU1+2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2
Look Away | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
AU6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3
Speak | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1
AU14 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2
AU20 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2
Total | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 25
Proportions in T-patterns | 0% | 4% | 0% | 0% | 0% | 0% | 0% | 0% |
Table 8. Enjoyment cluster. Number and proportions of Voice and Speech codes in T-patterns.
* First in pat. = first codes in the T-pattern. ** N. Ind. T-pat. = number of independent T-patterns in the cluster.
First in pat.* | AU1 | AU1+2 | AU4 | AU5 | AU6 | AU7 | AU9 | AU10 | AU10U | AU12 | AU12U | AU14 | AU14U | AU15 | AU16 | AU17 | AU20 | AU23 | AU24 | AU25 | AU26 | Tot. Ind. T-pat.**
AU4 | 0 | 2 | 33 | 14 | 0 | 12 | 23 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 33
AU7 | 0 | 1 | 0 | 12 | 0 | 20 | 13 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 1 | 0 | 20
AU9 | 0 | 1 | 0 | 5 | 0 | 11 | 24 | 0 | 0 | 0 | 0 | 3 | 0 | 4 | 0 | 10 | 0 | 0 | 0 | 0 | 1 | 24
AU1+2 | 0 | 14 | 1 | 14 | 0 | 0 | 0 | 0 | 0 | 12 | 0 | 0 | 0 | 1 | 0 | 0 | 5 | 0 | 0 | 0 | 0 | 14
AU5 | 0 | 13 | 2 | 22 | 0 | 4 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 1 | 11 | 0 | 0 | 0 | 3 | 22
AU15 | 0 | 12 | 2 | 0 | 0 | 1 | 1 | 4 | 0 | 0 | 0 | 0 | 0 | 19 | 0 | 8 | 0 | 0 | 0 | 0 | 0 | 19
AU10 | 0 | 3 | 0 | 1 | 0 | 0 | 0 | 9 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 9
AU10U | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2
AU12 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 13 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 13
AU14 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 | 0 | 0 | 0 | 6 | 0 | 4 | 3 | 0 | 0 | 11
AU14U | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12 | 0 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 12
AU17 | 0 | 9 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 5 | 0 | 9 | 0 | 15 | 0 | 3 | 1 | 0 | 0 | 15
AU20 | 0 | 8 | 0 | 2 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | 4 | 2 | 17 | 0 | 0 | 0 | 0 | 17
AU23 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 2 | 0 | 8 | 3 | 0 | 0 | 8
AU24 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 0 | 0 | 5
Total | 0 | 68 | 38 | 70 | 0 | 51 | 63 | 16 | 2 | 27 | 0 | 26 | 12 | 33 | 9 | 56 | 33 | 15 | 12 | 1 | 4 | 224
Proportions in T-patterns | 0% | 30% | 17% | 31% | 0% | 23% | 28% | 7% | 1% | 12% | 0% | 12% | 5% | 15% | 4% | 25% | 15% | 7% | 5% | 0% | 2% |
Table 9. Hostility cluster. Number and proportions of FACS codes in T-patterns.
* First in pat. = first codes in the T-pattern. ** Tot. Ind. T-pat. = total number of independent T-patterns in the cluster.
First in pat.* | Blink | Eyelids Droop | Look At | Look Away | Look Down | Look Up | N. Ind. T-pat.**
AU4 | 0 | 0 | 0 | 18 | 0 | 0 | 33
AU7 | 0 | 1 | 7 | 0 | 0 | 0 | 20
AU9 | 5 | 0 | 0 | 0 | 0 | 0 | 24
AU1+2 | 0 | 0 | 1 | 0 | 0 | 0 | 14
AU5 | 0 | 0 | 0 | 2 | 0 | 0 | 22
AU15 | 0 | 1 | 0 | 0 | 0 | 0 | 19
AU10 | 0 | 0 | 0 | 0 | 0 | 0 | 9
AU10U | 0 | 0 | 0 | 0 | 0 | 0 | 2
AU12 | 0 | 4 | 0 | 0 | 0 | 0 | 13
AU14 | 0 | 0 | 0 | 0 | 0 | 0 | 11
AU14U | 0 | 1 | 0 | 0 | 0 | 0 | 12
AU17 | 0 | 0 | 0 | 0 | 0 | 0 | 15
AU20 | 0 | 0 | 0 | 0 | 0 | 0 | 17
AU23 | 0 | 0 | 0 | 0 | 0 | 0 | 8
AU24 | 0 | 0 | 0 | 0 | 0 | 0 | 5
Total | 5 | 7 | 8 | 20 | 0 | 0 | 224
Proportions in T-patterns | 2% | 3% | 4% | 9% | 0% | 0% |
Table 10. Hostility cluster. Number and proportions of Gaze codes in T-patterns.
* First in pat. = first codes in the T-pattern. ** N. Ind. T-pat. = number of independent T-patterns in the cluster.
First in pat.* | Lower Head | Head Turns | Head Down | Head Raise | Head Raise and Turn | Head Lower and Turn | Head Raised | Head On | Head Turned Away | Head Tilted Side | Head Tilting Side | Head Shake | Head Nod | N. Ind. T-pat.**
AU4 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 33
AU7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 20
AU9 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 24
AU1+2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 14
AU5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 22
AU15 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 19
AU10 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 1 | 0 | 0 | 1 | 0 | 9
AU10U | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2
AU12 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 13
AU14 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11
AU14U | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 | 0 | 5 | 0 | 0 | 12
AU17 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 15
AU20 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 17
AU23 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8
AU24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 5
Total | 8 | 0 | 0 | 2 | 1 | 0 | 3 | 0 | 10 | 0 | 5 | 3 | 0 | 224
Proportions in T-patterns | 4% | 0% | 0% | 1% | 0% | 0% | 1% | 0% | 4% | 0% | 2% | 1% | 0% |
Table 11. Hostility cluster. Number and proportions of Head codes in T-patterns.
* First in pat. = first codes in the T-pattern. ** N. Ind. T-pat. = number of independent T-patterns in the cluster.
First in pat.* | Pause | Speak | Hesitation | Verbal Filler | Word Stress | False Start | Laughing | Crying | N. Ind. T-pat.**
AU4 | 0 | 0 | 0 | 0 | 1 | 2 | 0 | 0 | 33
AU7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 20
AU9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 24
AU1+2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 14
AU5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 22
AU15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 19
AU10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9
AU10U | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2
AU12 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 13
AU14 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11
AU14U | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 12
AU17 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 15
AU20 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 17
AU23 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8
AU24 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5
Total | 0 | 0 | 0 | 1 | 5 | 2 | 0 | 0 | 224
Proportions in T-patterns | 0% | 0% | 0% | 0% | 2% | 1% | 0% | 0% |
Table 12. Hostility cluster. Number and proportions of Voice and Speech codes in T-patterns.
* First in pat. = first codes in the T-pattern. ** N. Ind. T-pat. = number of independent T-patterns in the cluster.
First in pat.* | AU1 | AU1+2 | AU4 | AU5 | AU6 | AU7 | AU9 | AU10 | AU10U | AU12 | AU12U | AU14 | AU14U | AU15 | AU16 | AU17 | AU20 | AU23 | AU24 | AU25 | AU26 | Tot. Ind. T-pat.**
AU12 | 0 | 4 | 0 | 3 | 0 | 0 | 0 | 1 | 0 | 9 | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 2 | 9
AU15 | 0 | 1 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 8 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 8
AU12U | 0 | 2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 1 | 7
AU7 | 0 | 2 | 0 | 3 | 0 | 12 | 0 | 3 | 0 | 3 | 1 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 12
Look Away | 0 | 4 | 0 | 2 | 0 | 0 | 0 | 1 | 0 | 4 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 5
Lower Head | 0 | 1 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 6 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7
Look Down | 0 | 5 | 0 | 6 | 0 | 2 | 0 | 0 | 0 | 3 | 4 | 0 | 0 | 4 | 0 | 2 | 0 | 0 | 0 | 0 | 1 | 12
AU17 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 4 | 0 | 2 | 1 | 0 | 0 | 1 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 6
AU10 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 7 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 6
Eyelids Drop | 0 | 3 | 0 | 1 | 0 | 2 | 0 | 0 | 0 | 2 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
AU5 | 0 | 0 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 4 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 5
AU1+2 | 0 | 5 | 0 | 2 | 0 | 2 | 0 | 2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 5
AU4 | 0 | 0 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3
AU9 | 0 | 0 | 0 | 1 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3
AU20 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 3
AU23 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 2
Total | 0 | 32 | 3 | 30 | 0 | 19 | 3 | 18 | 0 | 38 | 10 | 0 | 0 | 17 | 0 | 23 | 3 | 2 | 0 | 0 | 7 | 100
Proportions in T-patterns | 0% | 32% | 3% | 30% | 0% | 19% | 3% | 18% | 0% | 38% | 10% | 0% | 0% | 17% | 0% | 23% | 3% | 2% | 0% | 0% | 7% |
Table 13. Embarrassment cluster. Number and proportions of FACS codes in T-patterns.
* First in pat. = first codes in the T-pattern. ** Tot. Ind. T-pat. = total number of independent T-patterns in the cluster.
First in pat.* | Blink | Eyelids Droop | Look At | Look Away | Look Down | Look Up | N. Ind. T-pat.**
AU12 | 0 | 6 | 0 | 0 | 2 | 0 | 9
AU15 | 0 | 0 | 0 | 0 | 2 | 0 | 8
AU12U | 0 | 1 | 0 | 0 | 0 | 0 | 7
AU7 | 0 | 2 | 0 | 0 | 5 | 0 | 12
Look Away | 0 | 1 | 0 | 5 | 0 | 0 | 5
Lower Head | 0 | 1 | 0 | 2 | 0 | 0 | 7
Look Down | 0 | 1 | 0 | 0 | 12 | 0 | 12
AU17 | 0 | 0 | 0 | 0 | 0 | 0 | 6
AU10 | 0 | 0 | 0 | 0 | 1 | 0 | 6
Eyelids Drop | 0 | 7 | 0 | 0 | 2 | 0 | 7
AU5 | 0 | 2 | 0 | 0 | 1 | 0 | 5
AU1+2 | 0 | 1 | 0 | 1 | 0 | 0 | 5
AU4 | 0 | 0 | 0 | 0 | 0 | 0 | 3
AU9 | 0 | 1 | 0 | 0 | 1 | 0 | 3
AU20 | 0 | 0 | 0 | 1 | 0 | 0 | 3
AU23 | 0 | 0 | 0 | 0 | 0 | 0 | 2
Total | 0 | 23 | 0 | 9 | 26 | 0 | 100
Proportions in T-patterns | 0% | 23% | 0% | 9% | 26% | 0% |
Table 14. Embarrassment cluster. Number and proportions of Gaze codes in T-patterns.
* First in pat. = first codes in the T-pattern. ** N. Ind. T-pat. = number of independent T-patterns in the cluster.
First in pat.* | Lower Head | Head Turns | Head Down | Head Raise | Head Raise and Turn | Head Lower and Turn | Head Raised | Head On | Head Turned Away | Head Tilted Side | Head Tilting Side | Head Shake | Head Nod | N. Ind. T-pat.**
AU12 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9
AU15 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 8
AU12U | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7
AU7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 12
Look Away | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5
Lower Head | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 7
Look Down | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 12
AU17 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6
AU10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6
Eyelids Drop | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 7
AU5 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 5
AU1+2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 5
AU4 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3
AU9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3
AU20 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 3
AU23 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2
Total | 21 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 12 | 0 | 1 | 0 | 0 | 100
Proportions in T-patterns | 21% | 0% | 0% | 1% | 1% | 0% | 0% | 0% | 12% | 0% | 0% | 0% | 0% |
Table 15. Embarrassment cluster. Number and proportions of Head codes in T-patterns.
* First in pat. = first codes in the T-pattern. ** N. Ind. T-pat. = number of independent T-patterns in the cluster.
First in pat.* | Pause | Speak | Hesitation | Verbal Filler | Word Stress | False Start | Laughing | Crying | N. Ind. T-pat.**
AU12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9
AU15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8
AU12U | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
AU7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12
Look Away | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5
Lower Head | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
Look Down | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 12
AU17 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6
AU10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6
Eyelids Drop | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
AU5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5
AU1+2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5
AU4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3
AU9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3
AU20 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3
AU23 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2
Total | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 100
Proportions in T-patterns | 2% | 1% | 0% | 0% | 0% | 0% | 0% | 0% |
Table 16. Embarrassment cluster. Number and proportions of Voice and Speech codes in T-patterns.
* First in pat. = first codes in the T-pattern. ** N. Ind. T-pat. = number of independent T-patterns in the cluster.
First in pat.* | AU1 | AU1+2 | AU4 | AU5 | AU6 | AU7 | AU9 | AU10 | AU10U | AU12 | AU12U | AU14 | AU14U | AU15 | AU16 | AU17 | AU20 | AU23 | AU24 | AU25 | AU26 | Tot. Ind. T-pat.**
AU17 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 | 0 | 0 | 3 | 0 | 0 | 9
Head On | 0 | 1 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 3 | 3
AU5 | 0 | 1 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3
Pause | 0 | 1 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 3 | 4
AU1+2 | 0 | 4 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 4
Blink | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2
Look At | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
AU7 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
Total | 0 | 7 | 0 | 17 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 | 0 | 0 | 3 | 7 | 6 | 27
Proportions in T-patterns | 0% | 26% | 0% | 63% | 0% | 4% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 33% | 0% | 0% | 11% | 26% | 22% |
Table 17. Surprise cluster. Number and proportions of FACS codes in T-patterns.
* First in pat. = first codes in the T-pattern. ** Tot. Ind. T-pat. = total number of independent T-patterns in the cluster.
First in pat.* | Lower Head | Head Turns | Head Down | Head Raise | Head Raise and Turn | Head Lower and Turn | Head Raised | Head On | Head Turned Away | Head Tilted Side | Head Tilting Side | Head Shake | Head Nod | N. Ind. T-pat.**
AU17 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9
Head On | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 3
AU5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3
Pause | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4
AU1+2 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4
Blink | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2
Look At | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
AU7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
Total | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 27
Proportions in T-patterns | 0% | 0% | 0% | 4% | 0% | 0% | 0% | 11% | 0% | 0% | 0% | 0% | 0% |
Table 19. Surprise cluster. Number and proportions of Head codes in T-patterns.
* First in pat. = first codes in the T-pattern. ** N. Ind. T-pat. = number of independent T-patterns in the cluster.
First in pat.* / Voice & Speech codes: Pause Speak Hesitation Verbal Filler Word Stress False Start Laughing Crying N. Ind. T-pat.**
AU17 2 0 0 0 0 0 0 0 9
Head On 0 0 0 0 0 0 0 0 3
AU5 0 0 0 0 0 0 0 0 3
Pause 4 0 0 0 0 0 0 0 4
AU1+2 2 0 0 0 0 0 0 0 4
Blink 0 0 0 0 0 0 0 0 2
Look At 0 0 0 0 0 0 0 0 1
AU7 0 0 0 0 0 0 0 0 1
Total 8 0 0 0 0 0 0 0 27
Proportions in T-patterns 30% 0% 0% 0% 0% 0% 0% 0%
* First in pat. = First Codes in T-pattern
** N. Ind. T-pat. = Number of independent T-patterns in the cluster.
Table 20. Surprise cluster. Number and proportions of Voice and Speech codes in T-patterns
First in pat.* / FACS codes: AU1 AU1+2 AU4 AU5 AU6 AU7 AU9 AU10 AU10U AU12 AU12U AU14 AU14U AU15 AU16 AU17 AU20 AU23 AU24 AU25 AU26 Tot. Ind. T-pat.**
Pause 0 0 0 0 0 0 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 8
Blink 0 6 0 0 0 0 0 0 0 0 0 0 0 2 0 5 0 0 0 0 0 13
AU15 0 0 0 0 0 0 0 0 0 0 0 0 0 12 0 0 0 0 0 1 1 12
AU1+2 0 16 0 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 16
AU17 0 0 0 0 0 0 0 0 0 0 0 0 0 4 0 16 0 0 0 0 0 16
AU5 0 0 0 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 7
Look Down 0 0 0 0 0 0 0 0 0 0 0 0 0 5 0 0 0 0 0 0 0 5
AU25 0 0 0 0 0 0 0 0 0 0 0 0 0 5 0 0 0 0 0 5 0 5
Head On 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 1 0 0 0 0 0 4
Look Away 0 0 0 0 0 0 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0 3
AU12 0 0 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 3
Look At 0 0 0 2 0 0 0 0 0 0 0 0 0 2 0 1 0 0 0 0 0 4
AU4 0 0 3 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3
Eyelids Drop 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 2
Speak 0 0 0 0 0 0 0 0 0 0 0 0 0 4 0 1 0 0 0 2 0 4
AU14U 0 0 0 0 0 0 0 0 0 0 0 0 2 1 0 0 0 0 0 0 0 2
Head Turned Away 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3
AU23 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 3 0 0 0 3
AU24 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 3 0 0 3
Total 0 25 3 15 0 2 0 0 0 3 0 0 2 50 0 26 0 3 3 8 1 116
Proportions in T-patterns 0% 22% 3% 13% 0% 2% 0% 0% 0% 3% 0% 0% 2% 42% 0% 22% 0% 3% 3% 7% 1%
* First in pat. = First Codes in T-pattern
** Tot. Ind. T-pat. = Totality of independent T-patterns in Clusters
Table 21. Sadness cluster. Number and proportions of FACS codes in T-patterns
First in pat.* / Gaze codes: Blink, Eyelids Droop, Look At, Look Away, Look Down, Look Up, N. Ind. T-pat.**
Pause 0 7 0 6 7 0 8
Blink 13 5 0 9 5 0 13
AU15 0 6 0 6 6 0 12
AU1+2 1 9 0 7 10 0 16
AU17 0 7 0 12 4 0 16
AU5 5 1 0 4 3 0 7
Look Down 0 4 0 5 5 0 5
AU25 0 2 0 3 0 0 5
Head On 0 0 3 0 0 0 4
Look Away 0 2 0 3 1 0 3
AU12 0 0 2 0 0 0 3
Look At 2 0 4 1 1 0 4
AU4 0 0 0 0 0 0 3
Eyelids Drop 0 2 0 2 1 0 2
Speak 0 1 0 1 0 0 4
AU14U 0 0 0 0 1 0 2
Head Turned Away 0 0 0 1 1 0 3
AU23 0 0 0 0 0 0 3
AU24 0 0 0 0 0 0 3
Total 21 46 9 60 45 0 116
Proportions in T-patterns 18% 40% 8% 52% 39% 0%
* First in pat. = First Codes in T-pattern
** N. Ind. T-pat. = Number of independent T-patterns in the cluster.
Table 22. Sadness cluster. Number and proportions of Gaze codes in T-patterns
First in pat.* / Head codes: Lower Head, Head Turns, Head Down, Head Raise, Head Raise and Turn, Head Lower and Turn, Head Raised, Head On, Head Turned Away, Head Tilted Side, Head Tilting Side, Head Shake, Head Nod, N. Ind. T-pat.**
Pause 0 0 0 0 0 0 0 0 0 0 0 0 0 8
Blink 0 0 0 0 0 0 0 0 0 0 0 0 0 13
AU15 0 0 0 0 0 0 0 0 0 0 0 0 0 12
AU1+2 0 0 0 0 0 0 0 0 0 0 0 0 0 16
AU17 0 0 0 0 0 0 0 0 0 0 0 0 0 16
AU5 0 0 0 0 0 0 0 0 0 0 0 0 0 7
Look Down 0 0 0 0 0 0 0 0 0 0 0 0 0 5
AU25 0 0 0 0 0 0 0 0 0 0 0 0 0 5
Head On 0 0 0 0 0 0 0 4 2 0 0 0 0 4
Look Away 0 0 0 0 0 0 0 0 1 0 0 0 0 3
AU12 0 0 0 0 0 0 0 2 0 0 1 0 0 3
Look At 0 0 0 0 0 0 0 1 1 0 0 0 0 4
AU4 0 0 0 0 0 0 0 0 0 0 0 0 0 3
Eyelids Drop 0 0 0 0 0 0 0 0 0 0 0 0 0 2
Speak 0 0 0 0 0 0 0 0 0 0 0 0 0 4
AU14U 0 0 0 0 0 0 0 0 0 0 0 0 0 2
Head Turned Away 0 0 0 0 0 0 0 0 3 0 0 0 0 3
AU23 0 0 0 0 0 0 0 0 0 0 0 0 0 3
AU24 0 0 0 0 0 0 0 0 0 0 0 0 0 3
Total 0 0 0 0 0 0 0 7 7 0 1 0 0 116
Proportions in T-patterns 0% 0% 0% 0% 0% 0% 0% 6% 6% 0% 1% 0% 0%
* First in pat. = First Codes in T-pattern
** N. Ind. T-pat. = Number of independent T-patterns in the cluster.
Table 23. Sadness cluster. Number and proportions of Head codes in T-patterns
First in pat.* / Voice & Speech codes: Pause Speak Hesitation Verbal Filler Word Stress False Start Laughing Crying N. Ind. T-pat.**
Pause 8 0 0 0 0 0 0 0 8
Blink 0 0 0 0 0 0 0 0 13
AU15 3 1 0 0 0 0 0 0 12
AU1+2 0 0 0 0 0 0 0 0 16
AU17 8 2 0 0 0 0 0 0 16
AU5 0 0 0 0 0 0 0 0 7
Look Down 2 0 0 0 0 0 0 0 5
AU25 4 0 0 0 0 0 0 0 5
Head On 2 0 0 0 0 0 0 0 4
Look Away 1 0 0 0 0 0 0 0 3
AU12 1 0 0 0 0 0 0 0 3
Look At 2 0 0 0 0 0 0 0 4
AU4 0 0 0 0 0 0 0 0 3
Eyelids Drop 1 0 0 0 0 0 0 0 2
Speak 4 4 0 0 0 0 0 0 4
AU14U 0 0 0 0 0 0 0 0 2
Head Turned Away 0 0 0 0 0 0 0 0 3
AU23 0 0 0 0 0 0 0 0 3
AU24 0 0 0 0 0 0 0 0 3
Total 36 7 0 0 0 0 0 0 116
Proportions in T-patterns 31% 6% 0% 0% 0% 0% 0% 0%
* First in pat. = First Codes in T-pattern
** N. Ind. T-pat. = Number of independent T-patterns in the cluster.
Table 24. Sadness cluster. Number and proportions of Voice and Speech codes in T-patterns
Codes / Clusters Positive Emotions Hostility Embarrassment Surprise Sadness
AU1 0% 0% 0% 0% 0%
AU1+2 28% 30% 32% 26% 22%
AU4 0% 17% 3% 0% 3%
AU5 36% 31% 30% 63% 13%
AU6 28% 0% 0% 0% 0%
AU7 0% 23% 19% 4% 2%
AU9 0% 28% 3% 0% 0%
AU10 4% 7% 18% 0% 0%
AU10U 0% 1% 0% 0% 0%
AU12 56% 12% 38% 0% 3%
AU12U 0% 0% 10% 0% 0%
AU14 8% 12% 0% 0% 0%
AU14U 0% 5% 0% 0% 2%
AU15 0% 15% 17% 0% 42%
AU16 0% 4% 0% 0% 0%
AU17 12% 25% 23% 33% 22%
AU20 8% 15% 3% 0% 0%
AU23 0% 7% 2% 0% 3%
AU24 0% 5% 0% 11% 3%
AU25 0% 0% 0% 26% 7%
AU26 0% 2% 7% 22% 1%
Table 25. Proportions of FACS Action Units in Independent T-patterns
Gaze / Clusters Positive Emotions Hostility Embarrassment Surprise Sadness
Blink 12% 2% 0% 4% 18%
Eyelids Droop 0% 3% 23% 19% 40%
Look At 32% 4% 0% 22% 8%
Look Away 8% 9% 9% 4% 52%
Look Down 0% 0% 26% 7% 39%
Look Up 0% 0% 0% 0% 0%
Table 26. Proportions of Gaze codes in Independent T-patterns
Head Codes / Cluster Positive Emotions Hostility Embarrassment Surprise Sadness
Lower Head 0% 4% 21% 0% 0%
Head Turns 0% 0% 0% 0% 0%
Head Down 0% 0% 0% 0% 0%
Head Raise 0% 1% 1% 4% 0%
Head Raise and Turn 0% 0% 1% 0% 0%
Head Lower and Turn 0% 0% 0% 0% 0%
Head Raised 0% 1% 0% 0% 0%
Head On 0% 0% 0% 11% 6%
Head Turned Away 4% 4% 12% 0% 6%
Head Tilted Side 0% 0% 0% 0% 0%
Head Tilting Side 0% 2% 0% 0% 1%
Head Shake 0% 1% 0% 0% 0%
Head Nod 0% 0% 0% 0% 0%
Table 27. Proportions of Head codes in Independent T-patterns
Codes / Cluster Positive Emotions Hostility Embarrassment Surprise Sadness
Pause 0% 0% 2% 30% 31%
Speak 4% 0% 1% 0% 6%
Hesitation 0% 0% 0% 0% 0%
Verbal Filler 0% 0% 0% 0% 0%
Word Stress 0% 2% 0% 0% 0%
False Start 0% 2% 0% 0% 0%
Laughing 0% 0% 0% 0% 0%
Crying 0% 0% 0% 0% 0%
Table 28. Proportions of Voice and Speech codes in Independent T-patterns
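Tables 25-28 summarize the preceding appendix tables: for each cluster, the total occurrences of a code across the cluster's T-patterns (the Total rows) are divided by the cluster's number of independent T-patterns. A minimal sketch of that computation; the pattern lists and cluster name below are invented for illustration:

```python
from collections import Counter

# Hypothetical cluster: each independent T-pattern as a list of codes.
surprise_patterns = [
    ["AU1+2", "AU5"],
    ["AU17", "AU24", "AU17"],
    ["AU5", "AU25", "AU26"],
]

def proportions(patterns):
    """Occurrences of each code divided by the number of patterns."""
    n = len(patterns)
    occ = Counter(code for p in patterns for code in p)
    return {code: count / n for code, count in occ.items()}

for code, prop in sorted(proportions(surprise_patterns).items()):
    print(f"{code}: {prop:.0%}")
```

Because occurrences rather than patterns are counted, a code that repeats within patterns can exceed 100%, and frequent codes such as AU5 in the Surprise cluster (17 occurrences over 27 patterns, 63%) accumulate across patterns.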
Appendix III.
Transition Graphs for T-patterns.
T-patterns found in the « Enjoyment » Cluster
Initiating event is: « Look At ».
The percentage next to the arrow coming out of the green block indicates the proportion of T-patterns in the cluster starting with the event type inscribed in the orange box.
[Transition graph]
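Only the START percentages are documented in the note above. Assuming the remaining arrow labels are, analogously, the proportion of each successor event among the transitions leaving a node, they could be derived from the pattern sequences roughly as follows (hypothetical data and a plain reading of the graphs, not the author's own computation):

```python
from collections import Counter

# Hypothetical cluster: each T-pattern as an ordered list of event codes.
patterns = [["Look At", "AU12"], ["Look At", "AU5"], ["AU5", "Blink"]]

# Proportion of T-patterns starting with each event (the START arrows).
starts = Counter(p[0] for p in patterns)
start_pct = {e: c / len(patterns) for e, c in starts.items()}

def transitions(patterns, event):
    """Among transitions leaving `event`, how often each successor occurs."""
    nxt = Counter(p[i + 1] for p in patterns
                  for i in range(len(p) - 1) if p[i] == event)
    total = sum(nxt.values())
    return {e: c / total for e, c in nxt.items()}

print(start_pct)                       # proportions starting with each event
print(transitions(patterns, "Look At"))
```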
T-patterns found in the « Enjoyment » Cluster
Initiating event is: « AU5 ».
[Transition graph]
T-patterns found in the « Enjoyment » Cluster
Initiating event is: « AU12 ».
[Transition graph]
T-patterns found in the « Enjoyment » Cluster
Initiating event is: « AU1+2 ».
[Transition graph]
T-patterns found in the « Enjoyment » Cluster
Initiating event is: « Look Away ».
[Transition graph]
T-patterns found in the « Enjoyment » Cluster
Initiating event is: « AU6 ».
[Transition graph]
T-patterns found in the « Enjoyment » Cluster
Initiating event is: « Speak ».
[Transition graph]
T-patterns found in the « Enjoyment » Cluster
Initiating event is: « AU14 ».
[Transition graph]
T-patterns found in the « Enjoyment » Cluster
Initiating event is: « AU20 ».
[Transition graph]
T-patterns found in the « Hostility » Cluster
Initiating event is: « AU7 ».
[Transition graph]
T-patterns found in the « Hostility » Cluster
Initiating event is : «AU9».
T-patterns found in the « Hostility » Cluster
Initiating event is : «AU1+2».
T-patterns found in the « Hostility » Cluster
Initiating event is : «AU5».
T-patterns found in the « Hostility » Cluster
Initiating event is : «AU15».
T-patterns found in the « Hostility » Cluster
Initiating event is : «AU10 or AU10U».
T-patterns found in the « Hostility » Cluster
Initiating event is: « AU12 or AU12A ».
[Transition graph]
T-patterns found in the « Hostility » Cluster
Initiating event is : «AU14 or AU14U».
T-patterns found in the « Hostility » Cluster
Initiating event is : «AU17».
T-patterns found in the « Hostility » Cluster
Initiating event is: « AU20 ».
[Transition graph]
T-patterns found in the « Hostility » Cluster
Initiating event is: « AU23 ».
[Transition graph]
T-patterns found in the « Hostility » Cluster
Initiating event is: « AU24 ».
[Transition graph]
T-patterns found in the « Embarrassment » Cluster
Initiating event is: « AU12 ».
[Transition graph]
T-patterns found in the « Embarrassment » Cluster
Initiating event is: « AU15 ».
[Transition graph]
T-patterns found in the « Embarrassment » Cluster
Initiating event is: « AU12 Unilateral ».
[Transition graph]
T-patterns found in the « Embarrassment » Cluster
Initiating event is: « AU7 ».
[Transition graph]
T-patterns found in the « Embarrassment » Cluster
Initiating event is: « Look Away ».
[Transition graph]
T-patterns found in the « Embarrassment » Cluster
Initiating event is: « Look Down ».
[Transition graph]
T-patterns found in the « Embarrassment » Cluster
Initiating event is: « AU10 ».
[Transition graph]
T-patterns found in the « Embarrassment » Cluster
Initiating event is: « AU17 ».
[Transition graph]
T-patterns found in the « Embarrassment » Cluster
Initiating event is: « Eyelids Drop ».
[Transition graph]
T-patterns found in the « Embarrassment » Cluster
Initiating event is: « AU5 ».
[Transition graph]
T-patterns found in the « Embarrassment » Cluster
Initiating event is: « AU4 ».
[Transition graph]
T-patterns found in the « Embarrassment » Cluster
Initiating event is: « AU9 ».
[Transition graph]
T-patterns found in the « Embarrassment » Cluster
Initiating event is: « AU20 ».
[Transition graph]
T-patterns found in the « Embarrassment » Cluster
Initiating event is: « AU23 ».
[Transition graph]
T-patterns found in the « Surprise » Cluster
Initiating event is: « AU17 ».
[Transition graph]
T-patterns found in the « Surprise » Cluster
Initiating event is: « AU1+2 ».
[Transition graph]
T-patterns found in the « Surprise » Cluster
Initiating event is: « AU5 ».
[Transition graph]
T-patterns found in the « Surprise » Cluster
Initiating event is: « Head On ».
[Transition graph]
T-patterns found in the « Surprise » Cluster
Initiating event is: « Pause ».
[Transition graph]
T-patterns found in the « Surprise » Cluster
Initiating event is: « Blink ».
[Transition graph]
T-patterns found in the « Surprise » Cluster
Initiating event is: « AU7 ».
[Transition graph]
T-patterns found in the « Surprise » Cluster
Initiating event is: « Look At ».
[Transition graph]
T-patterns found in the « Sadness » Cluster
Initiating event is: « Pause ».
[Transition graph]
T-patterns found in the « Sadness » Cluster
Initiating event is: « Blink ».
[Transition graph]
T-patterns found in the « Sadness» Cluster
Initiating event is: «AU15».
T-patterns found in the « Sadness» Cluster
Initiating event is: «AU1+2».
T-patterns found in the « Sadness » Cluster
Initiating event is: « AU17 ».
[Transition graph]
T-patterns found in the « Sadness » Cluster
Initiating event is: « AU5 ».
[Transition graph]
T-patterns found in the « Sadness » Cluster
Initiating event is: « Look Down ».
[Transition graph]
T-patterns found in the « Sadness » Cluster
Initiating event is: « AU25 ».
[Transition graph]
T-patterns found in the « Sadness » Cluster
Initiating event is: « Head On ».
[Transition graph]
T-patterns found in the « Sadness » Cluster
Initiating event is: « Look Away ».
[Transition graph]
T-patterns found in the « Sadness » Cluster
Initiating event is: « AU12 ».
[Transition graph]
T-patterns found in the « Sadness » Cluster
Initiating event is: « Look At ».
[Transition graph]
T-patterns found in the « Sadness » Cluster
Initiating event is: « AU4 ».
[Transition graph]
T-patterns found in the « Sadness » Cluster
Initiating event is: « Eyelids Drop ».
[Transition graph]
T-patterns found in the « Sadness » Cluster
Initiating event is: « Speak ».
[Transition graph]
T-patterns found in the « Sadness » Cluster
Initiating event is: « AU14U ».
[Transition graph]
T-patterns found in the « Sadness » Cluster
Initiating event is: « Head Turned Away ».
[Transition graph]
T-patterns found in the « Sadness » Cluster
Initiating event is: « AU23 ».
[Transition graph]
T-patterns found in the « Sadness » Cluster
Initiating event is: « AU24 ».
[Transition graph]
Appendix IV.
Instructions and questionnaires
Adjectives / Definitions
Embarrassed (Gênée / embarrassée): Said of a person who gives the impression of being ill at ease.
Relieved (Soulagée): Said of a person who gives the impression of being suddenly freed from (moral or physical) suffering.
Disgusted / revolted (Dégoutée / révulsée): Said of a person who gives the impression of feeling aversion, disgust, or repulsion.
Ironic (Ironique): Said of a person who says the opposite of what she wants to convey.
Proud (Fière): Said of a person who gives the impression of being satisfied with herself or with someone close to her.
Surprised / astonished (Surprise / étonnée): Said of a person who gives the impression of being astonished, surprised, or stunned.
Tense / nervous (Tendue / nerveuse): Said of a person who gives the impression of being tense, strained, or stressed.
Amused (Amusée): Said of a person who gives the impression of being pleasantly entertained.
Sad (Triste): Said of a person who gives the impression of being grieved, distressed, or unhappy.
Disdainful / contemptuous (Dédaigneuse / méprisante): Said of a person who gives the impression of judging someone or something as unworthy of her esteem and attention.
Joyful (Joyeuse): Said of a person who gives the impression of being in a state of general well-being, contentment, or deep happiness.
Irritated / annoyed / angry (Irritée / agacée / en colère): Said of a person who appears displeased, annoyed, or angry.
Enthusiastic (Enthousiasmée): Said of a person who gives the impression of being fascinated, captivated, or passionate about something or someone.
Worried / anxious (Préoccupée / inquiète): Said of a person who gives the impression of being concerned, anxious, or fearful.
Affectionate (Affectueuse): Said of a person who gives the impression of being generous, kind, and full of attention toward someone.
Perplexed (Perplexe): Said of a person who appears questioning, undecided, or hesitant.
Disappointed (Déçue): Said of a person who gives the impression of being disenchanted, disillusioned, or let down.
General instructions for emotion narratives eliciting task
I am going to ask you to tell me in detail, for about 5 minutes, the memory of an event that you personally experienced and that triggered a strong emotional reaction in you. The event may be recent or old, but you must actually have experienced it.
To help you retrieve specific situations, I will give you cues to use for recalling events that triggered a strong emotion and whose characteristics correspond exactly to the full set of indications given.
We will repeat this procedure for 5 different memories, and I will give you new cues each time.
Once you have clearly retrieved the memory of an event matching the cues I have given, I will ask you to take a few moments to try to put yourself back in the situation as much as possible, thinking of the people who were involved in the event and of the way it unfolded. The goal is for you to retrieve as many details as possible about this event.
Once you are ready to do so, you will begin telling me about this event in detail, as you would if you were sharing it with a friend to whom you wanted to recount an event that triggered a strong emotion.
Instructions: Anger
A highly unfair event that occurred very suddenly and was difficult to foresee, and that constituted an obvious and important threat to the pursuit of your goals or the satisfaction of your important needs in the situation. It is an event that was intentionally caused by the behavior of another person or group of people. You had rarely or never been confronted with this type of situation before. It was entirely possible to act on the situation in order to change it, and you had the personal resources to do so. Moreover, you could easily adapt to those consequences of the situation that could not be modified afterwards. Nevertheless, to do so it was important to react quickly.
Retrieval cues: Hot anger
A highly unfair event
Occurred very suddenly and unpredictably
An unfamiliar event
Threatened the pursuit of your goals or the satisfaction of your needs in the situation
Intentionally caused by the behavior of another person or group of people
An event that could be influenced
You had sufficient resources to change the situation
Instructions: Guilt
An event during which you did not behave properly toward another person or group of people. You caused the event intentionally because you knew that you could thereby reach an important goal or satisfy a fundamental need. You were fully aware of the probable negative consequences of your behavior for others. In the situation, you did not respect your own values or the image you had of yourself.
Retrieval cues: Guilt
You did not behave properly
You caused the event intentionally
To satisfy a need or reach a personal goal
You were aware of the probable negative consequences of your behavior for others at the moment you acted
You did not respect your own values
You did not live up to the image you had of yourself
Instructions: Disdain / Contempt
A clearly unfair event intentionally caused by the behavior of a person or group of people. Even though the consequences of this event did not affect your own interests in the situation, the behavior of this person or group of people was strongly incompatible with your sense of values. The situation or event could easily have been changed by acting relatively quickly. But personally, there was not much you could have done in this situation to change things.
Retrieval cues: Disdain / Contempt
An unfair event
Caused intentionally
By another person or group of people
No consequences for you
An event very strongly incompatible with your sense of values
The situation could easily have been influenced by acting quickly
Personally you could not do much
Instructions: Fear
A very unpleasant event that occurred suddenly and unpredictably. The event involved an obvious and important threat to the pursuit of your goals or the satisfaction of a fundamental need. The event or situation was caused either by another person or group of people, or by natural causes. You had rarely or never been confronted with this type of event before, and you did not expect it to happen at that moment. It would have been urgent to act, but you had no possibility of turning the situation to your advantage. You knew that it would be difficult to adapt to those consequences of the situation that could not be modified afterwards.
Retrieval cues: Fear
A very unpleasant event
Caused by another person, a group of people, or natural causes
Occurred suddenly and unpredictably
An unfamiliar event
Which you did not expect at that moment
Threatened the pursuit of your goals or the satisfaction of your needs in the situation
It was urgent to react
You did not have the power to change the situation
You knew immediately that it would be difficult for you to adapt to the irreversible consequences of the event
Instructions: Sadness
An event with which you had rarely or never been confronted before. This event made it impossible to continue pursuing a personal goal or satisfying a very important fundamental need. The event was caused either by the chance circumstances of life, or by negligence, your own or that of another person. At the time the event took place, its consequences and implications for the future were very clear to you. You knew that there was nothing more to be done to change the situation, and you yourself were powerless in the face of the circumstances. Although possible, you knew it would be difficult to learn to live with the consequences of this event.
Retrieval cues: Sadness
Unfamiliar
Impossible to continue pursuing a very important goal or satisfying a fundamental need
Caused by: the hazards of life / your own negligence or that of another person
The consequences of the event for the future appeared clearly to you
The situation could not humanly be influenced
You were powerless in the face of the circumstances
You knew it would be difficult to adapt to the irreversible consequences of the event
Written instructions presented on the screen before the start of the rating study.
You are going to rate about a hundred video excerpts in which people recount personal experiences. Before starting, please read the following instructions carefully. For each sequence, responses are given on continuous, ungraded scales by moving a cursor to the appropriate place on each scale.
To move the cursor, move the mouse to the place on the scale that corresponds to the answer you wish to give. To validate your choice, press the left mouse button.
The judgments concern the impressions the person in the video makes on you.
There are no right or wrong answers; what matters to us are your impressions.
Your judgments will concern adjectives, e.g. sad, perplexed, enthusiastic, etc.
The more you place the cursor toward the left, the more you indicate that the adjective seems to correspond little to your impression of the person in the video sequence.
The more you place the cursor toward the right, the more you indicate that the adjective seems to correspond closely to your impression of the person in the video sequence.
The validity of the results of this study depends on your ability to keep your attention on the task. It is important that you watch the entire video sequence carefully before responding. If you feel tired, take a break between two sequences.
Do not hesitate to ask the researcher questions if you need any clarification.
GENEVA EMOTION WHEEL
Joy / Happiness
Savouring / Pleasure
Relief / Appeasement
Longing / Nostalgia
Irritation / Anger
Disdain / Contempt
Guilt / Remorse
Worry / Fear
Sadness / Despair
Wonder / Admiration
Affection / Being in love
Disappointment / Regret
Pity / Compassion
Amusement / Laughter
Disgust / Repulsion
Envy / Jealousy
Embarrassment / Shame
Pride / Elation
Astonishment / Surprise
Interest / Enthusiasm
No emotion felt
Other emotion felt
Subject: _________
Item: _________
No.: _________
Appendix V.
Consent Form
Participant Consent Form
Project supervisors:
Prof. Susanne Kaiser - FAPSE / 022 379.92.16 / Unimail: 5132
Susanne.kaiser@pse.unige.ch
Stéphane With, assistant - FAPSE / 022 379.92.06 / Unimail: 5135
Stephane.with@pse.unige.ch
Research aims: The aim of the project, which constitutes an essential part of Mr. Stéphane With's doctoral work, is to study the narration of autobiographical emotional events. The project is directed by Professor Susanne Kaiser.
Procedure:
If I agree to take part in this research, I am aware that I will participate in a psychology experiment in which I will be asked to remember and recount events from my personal life during which I felt negative emotions.
The experiment lasts about 1 hour 45 minutes, and my participation is paid CHF 25.-
All information about me collected during the experiment will remain strictly confidential and will be used only for research purposes and scientific presentations.
My participation is voluntary and I am free to withdraw from the study at any time.
_________________________________ ________________________
Participant's name in capital letters Signature and date
_________________________
Researcher's signature and date Stéphane With
Appendix VI.
Normality tests for Action Units Distribution in Database
Normality tests for Action Units Distribution in Database
(Kolmogorov-Smirnov & Lilliefors tests for normality)
[Histograms of AU frequency distributions with expected normal overlays; the Kolmogorov-Smirnov and Lilliefors statistics per Action Unit are summarized below.]
AU / K-S d / p (K-S) / p (Lilliefors)
AU1 .49192 <.01 <.01
AU1+2 .22097 <.01 <.01
AU4 .42777 <.01 <.01
AU5 .23288 <.01 <.01
AU6 .37462 <.01 <.01
AU7 .38899 <.01 <.01
AU9 .45158 <.01 <.01
AU10 .29073 <.01 <.01
AU11 .53523 <.01 <.01
AU12 .24838 <.01 <.01
AU13 .52787 <.01 <.01
AU14 .26595 <.01 <.01
AU15 .29175 <.01 <.01
AU16 .43928 <.01 <.01
AU17 .22096 <.01 <.01
AU18 .53683 <.01 <.01
AU20 .38512 <.01 <.01
AU22 .53556 <.01 <.01
AU23 .41669 <.01 <.01
AU24 .39460 <.01 <.01
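As a reproducibility note, a minimal sketch of how Kolmogorov-Smirnov and Lilliefors statistics like those above can be computed in Python with SciPy and statsmodels; the count vector is invented for illustration, whereas the real input would be the per-file AU frequency counts:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import lilliefors

# Hypothetical per-file counts of one Action Unit across the database.
au_counts = np.array([0, 0, 1, 0, 2, 0, 0, 3, 1, 0, 0, 1, 4, 0, 0, 2])

# Kolmogorov-Smirnov test against a normal distribution parameterized
# by the sample's own mean and standard deviation.
d, p_ks = stats.kstest(au_counts, "norm",
                       args=(au_counts.mean(), au_counts.std()))

# Lilliefors correction: appropriate when the normal parameters are
# estimated from the same sample, as they are here.
d_lf, p_lf = lilliefors(au_counts, dist="norm")

print(f"K-S d = {d:.5f}, p = {p_ks:.3f}; "
      f"Lilliefors d = {d_lf:.5f}, p = {p_lf:.3f}")
```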