Gender differences in the relationship between social communication and emotion recognition


Kothari R, Skuse D, Wakefield J, Micali N.

We investigated how face-emotion recognition in ASD differs from that in typically developing individuals, and the association between emotion recognition and the severity of social problems; emotional processing is closely linked to social interaction [20]. We recruited 49 typically developing (TD) adolescents, matched for age and gender, as a control group. A secondary aim was to investigate possible gender differences in the association between social communication and emotion recognition.

Other studies, however, reported either only a small overall advantage in favor of females in the recognition of non-verbal auditory, visual, and audio-visual displays of emotion (Kret and de Gelder; Thompson and Voyer), or even equal performance accuracy for male and female participants in identifying emotions from both speech-embedded and non-verbal stimuli. To address these diverging findings, it has been suggested that instead of examining gender effects across emotions, specific emotion categories should be considered separately (de Gelder). For instance, a behavioral study by Bonebright et al. examined this question directly.

They instructed trained actors to record paragraph-long stories, each time using their voice to portray a specified emotion. Subsequently, undergraduate students listened to each recorded paragraph and tried to determine which emotion the speaker was trying to portray.


Females were significantly more accurate than males in decoding voices that expressed fear, happiness, and sadness. These gender differences were small but consistent. No gender differences were found for emotional expressions uttered in an angry or neutral tone of voice.

Subsequent evidence showed that females outperform males for utterances spoken in a fearful tone of voice (Demenescu et al.), whereas both genders were found to perform equally well when identifying angry expressions (Fujisawa and Shinohara; Lambrecht et al.). The accuracy of performance thus varies across discrete emotion categories. Taken together, the above-mentioned studies do not show a consistent gender pattern, either regarding overall effects in the performance accuracy of decoding vocal emotions or regarding specific emotion categories (see Table 1: a1 for overall effects in decoding vocal emotions and a2 for decoding performance accuracy by emotion category).

There are several likely sources for these inconsistencies. One reason may be the large variety of vocal stimulus types used across studies. Other methodological differences that might be responsible for these conflicting results relate to the number of emotions studied, which varies considerably between experiments, from as few as two upward.


In a validation study concerning the identification of vocal emotions, Belin et al. asked participants to evaluate actors' vocalizations on three emotional dimensions. Results showed higher mean scores on the intensity and arousal dimensions across all emotion categories when the vocalizations were spoken by female actors.

These results, similar to other findings, indicate that females compared to males were not only better at decoding but also at identifying emotions in the female voice. Considering emotion-specific effects, it has been shown that vocal portrayals of anger and fear have higher mean identification rates when spoken by male actors (Bonebright et al.).

In contrast, other investigators observed that fear and disgust were better identified when spoken by a female, although a response bias was reported: toward disgust when an actor portrayed the emotion and toward fear when an actress expressed it (see Collignon et al.).

Further research that includes speakers' gender as an additional factor reports that, while gender differences might exist for identifying emotions from speakers' voices, these are not systematic and vary for specific emotions (Pell et al.). As with the performance accuracy of decoding emotions, the evidence regarding speakers' gender as a relevant factor for identifying emotions from the voice is inconsistent (see Table 1: b1 for overall identification rates by speakers' gender and b2 for identification rates by speakers' gender and emotion category).

The discrepancies in these findings are likely attributable to a number of methodological differences, such as differing recording conditions across studies.

A seemingly inevitable conclusion after reviewing past work on gender differences in the recognition of vocal expressions of emotion is that conflicting findings have left the exact nature of these differences unclear. Although accuracy scores from some prior studies suggest that females are overall better than males at decoding and encoding vocal emotions, independent of the stimulus type, other studies do not confirm these findings.

Likewise, the question whether women are consistently better than men at decoding and identifying emotions such as happiness, fear, sadness or neutral expressions when spoken by a female, while men have an advantage for anger and disgust, remains unresolved.

Thus, it has been suggested that a comprehensive understanding of gender differences in vocal emotion recognition can only be achieved by replicating these studies while accounting for influential factors such as stimulus type, gender-balanced samples, and the number of encoders, decoders, and emotional categories (Bonebright et al.). To address some of these limitations, the present study investigated these effects across a large set of speech-embedded stimuli (i.e., emotionally spoken words and sentences).

To date, no extensive research on differences between males and females in the recognition of emotional prosody has been conducted; we therefore based our approach for investigating these effects on the patterns observed in the majority of the aforementioned studies. We first examined whether there are differences in the performance accuracy of decoding vocal emotions based on listeners' gender (i.e., decoders). Specifically, we expected an overall female advantage in decoding vocal emotions, and that females would be more accurate than males when categorizing specific emotions such as happiness, fear, sadness, or neutral expressions.

No gender differences were expected for emotions uttered in an angry or disgusted tone of voice. Second, we tested whether there are any differences in identifying vocal emotions based on speakers' gender (i.e., encoders). We hypothesized that vocal portrayals of emotion would have significantly higher overall hit rates when spoken by female than by male actors. Considering emotion-specific effects, we expected that anger and disgust would have higher identification rates when spoken by male actors, whereas portrayals of happiness, fear, sadness, and neutral expressions would be better identified when spoken by female actors.


Finally, we investigated potential interactions between listeners' and speakers' gender for the identification of vocal emotions across all stimuli and for each stimulus type.

Participants

Participants were recruited through flyers distributed on the university campus and via the ORSEE database for psychological experiments. They had to meet predefined inclusion criteria; twelve participants who reported hearing disorders were excluded from the analyses.

The final sample comprised both female and male participants. To assess the performance accuracy of females and males within the different types of vocal stimuli (i.e., words and sentences), participants were split into two groups. This allowed a higher number of stimuli in each group, resulting in higher precision of the estimated gender or emotion differences within one database and, respectively, within one of the groups.

To assess whether there were any age differences in the two groups, a Wilcoxon-Mann-Whitney test was conducted. The results indicated a significant age difference between females and males in both groups. Participants' demographic characteristics are presented in Table 2.
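
For illustration, a minimal sketch of such a comparison using SciPy's Mann-Whitney U implementation is shown below; the ages and group sizes are hypothetical and not the study's data.

import scipy.stats as stats

# Hypothetical ages (in years) for female and male participants within one group.
ages_female = [22, 24, 21, 23, 25, 22, 26, 23]
ages_male = [25, 27, 24, 28, 26, 29, 27, 30]

# Two-sided Wilcoxon-Mann-Whitney (Mann-Whitney U) test on the two age distributions.
u_stat, p_value = stats.mannwhitneyu(ages_female, ages_male, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")  # p < .05 would indicate a significant age difference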

Table 2. Demographic characteristics of the study population.

Throughout the article, these two groups will be referred to as Group-Words and Group-Sentences. Participants were reimbursed with course credit or 8 Euros.

Features of the selected emotion speech databases.

To be included in the present study, the stimuli had to satisfy a set of predefined criteria: we decided to use a wide variety of stimuli representing the spectrum of materials used in emotional prosody research (i.e., both words and sentences).

For economic reasons, only a sub-set of stimuli from each database was selected. The stimuli from the remaining three databases were ordered randomly and the first 10 items per database were selected.

Acoustic Analysis

The extraction of the amplitude (dB), duration, and peak amplitude of all original stimuli was conducted using the phonetic software Praat (Boersma). As the stimuli used for this study came from different databases with different recording conditions, we controlled for acoustic parameters, including the minimum, maximum, mean, variance, and standard deviation of the amplitude.
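
The sketch below illustrates how such acoustic parameters could be computed from a WAV stimulus. It uses NumPy and SciPy rather than the authors' Praat workflow, the file name is hypothetical, and expressing the amplitude statistics in dB relative to the clip's own peak is an assumption made purely for illustration.

import numpy as np
from scipy.io import wavfile

def describe_stimulus(path):
    # Read the stimulus; wavfile.read returns the sampling rate (Hz) and the PCM samples.
    rate, samples = wavfile.read(path)
    samples = samples.astype(np.float64)
    if samples.ndim > 1:
        samples = samples.mean(axis=1)  # mix stereo down to mono

    duration = samples.size / rate                    # duration in seconds
    peak_amplitude = float(np.max(np.abs(samples)))   # peak amplitude (linear scale)

    # Amplitude statistics in dB relative to the clip's own peak (illustrative reference level).
    eps = np.finfo(float).eps
    amp_db = 20.0 * np.log10(np.abs(samples) / (peak_amplitude + eps) + eps)
    return {
        "duration_s": duration,
        "peak_amplitude": peak_amplitude,
        "amp_min_db": float(amp_db.min()),
        "amp_max_db": float(amp_db.max()),
        "amp_mean_db": float(amp_db.mean()),
        "amp_var_db": float(amp_db.var()),
        "amp_sd_db": float(amp_db.std()),
    }

print(describe_stimulus("stimulus_001.wav"))  # hypothetical file name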

The physical volume of stimulus presentation across the four PCs used in the experiment was controlled by measuring the sound volume of the practice trials with a professional sound level meter (Norsonic, Lierskogen, Norway).

Procedure

Participants were tested in groups of up to four members. After signing a consent form and completing a short demographic questionnaire concerning age, gender, and education level, participants were informed that they would be involved in a study evaluating emotional aspects of vocal stimulus materials.

Afterwards, they were told to put on headphones and carefully read the instructions presented on the computer screen. Before the main experiment, participants were familiarized with the experimental setting in a short training session comprised of 10 stimuli, which were not presented in the main experiment.

They were instructed to listen carefully to the presented stimuli, as each would be played only once, and were told that the number of emotions presented might differ from the number of categories given as possible choices (see Design and Randomization for the rationale behind this approach).


Each trial began with a white fixation cross presented on a gray screen, which remained visible until the participant's response had been recorded. The presentation of each stimulus was initiated by pressing the Enter key. After stimulus presentation, participants had to decide, as accurately as possible and in a fixed-choice response format, which of the seven emotional categories the speaker had expressed.
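
A simplified console mock-up of this fixed-choice trial structure is sketched below, only to make the sequence concrete. The real experiment used on-screen presentation with audio playback; play_stimulus is a hypothetical stand-in for that step, and the seventh response label is an assumption, since the article explicitly names only six emotions.

import random

# Illustrative response options; "surprise" as the seventh label is an assumption.
CATEGORIES = ["anger", "disgust", "fear", "happiness", "sadness", "neutral", "surprise"]

def play_stimulus(stimulus_id):
    # Hypothetical placeholder for the single audio playback of one stimulus.
    print(f"[playing stimulus {stimulus_id} ...]")

def run_trial(stimulus_id):
    input("+   (fixation; press Enter to start the stimulus) ")
    play_stimulus(stimulus_id)
    while True:
        # Fixed-choice format: keep prompting until one of the given categories is entered.
        choice = input(f"Which emotion was expressed? {CATEGORIES}: ").strip().lower()
        if choice in CATEGORIES:
            return choice

# Run three demo trials on randomly ordered stimulus IDs.
for stim in random.sample(range(10), 3):
    print("response:", run_trial(stim))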


These findings are discussed in detail below. When analyzing both boys and girls jointly, higher social communication impairment (SCDC) was associated with more frequent errors in the recognition of emotion from sad and fearful faces, and with more frequent misattributions of faces as happy (DANVA). The pattern changed when looking at each gender separately. Boys with high SCDC scores made more errors in the recognition of emotion from sad and angry faces, but no associations were observed in girls. These findings suggest that the established association between ASD and emotion recognition observed in clinical samples is also present, albeit to a lesser degree, in a general population sample of children.

These findings also support our hypothesis that facial emotion recognition ability would be relatively more impaired in boys with social communication difficulties than in girls with equivalent behavioral traits. The poorer recognition of negative emotions from facial cues that we observed in this sample, when analyzing the performance of all children or boys alone, is qualitatively similar to deficits observed in clinical groups.


Emotion recognition deficits in male adults with ASD, high-functioning individuals with autism, and the parents of children with autism are particularly marked for negative emotions, including sadness, anger, fear, and disgust, and are less marked for the recognition of happiness.

The same pattern of association was observed in girls, whereas in boys high SCDC scores predicted poorer recognition of happiness only. Our findings are consistent with evidence that autistic traits are associated with impaired happiness recognition from nonfacial stimuli such as body movement and vocal cues. This hypothesis is supported by the observation that girls in this sample were more accurate in facial emotion recognition in comparison to boys overall, regardless of SCDC scores.

The social motion task used in this study was novel, and accurate performance could not have been gained from prior exposure.

That the protective mechanism is social, rather than reflecting inherent resilience, is suggested by the observation that girls lacked any advantage over boys in their recognition of novel emotion cues based on movements of inanimate objects. Our findings imply that key features of the autism phenotype, such as impaired facial emotion recognition, which are used to support the clinical assessment of ASD, could be less prominent in girls with equivalent underlying autistic traits.

The implications of this are far-reaching with regard to the diagnosis of ASD in females, suggesting that more subtle assessment may be required to identify those individuals with difficulties. Strengths of this study are the use of a large cohort of children, prospective data collection, and the assessment of emotion recognition from two different types of stimuli.

In addition, our findings provide further validation for the Emotional Triangles Task, which is a relatively novel measure. This study does, however, have limitations. First, the ALSPAC cohort has been shown to be broadly representative of the Avon area, but not of the United Kingdom as a whole [27], limiting the generalizability of the findings. Second, mothers who brought their children to clinics were of a higher social class, older, and better educated than those who did not attend.