How Do the Socio-Cognitive Impacts of Real vs. AI-Generated Facial Expressions of Emotion Differ?

Publisher

Université d'Ottawa | University of Ottawa

Abstract

Media generated by artificial intelligence (AI) is becoming ubiquitous. As it becomes more realistic and more commonplace, it is important to know whether we can distinguish it from real media, how accurately it replicates real media, and what negative socio-emotional effects it could be perpetuating. A critical concern regarding AI-generated faces is that the datasets used to train the generators consist largely of images of White men. When the training sets are biased, the output will be biased. Questions then arise as to whether AI generators can generate faces of varying genders and races with equal verisimilitude, and whether they generate different facial expressions accurately. The three studies presented in this thesis examined these questions with regard to one AI generator, which uses an implementation of Stable Diffusion.

The first study examined how well people can distinguish AI-generated faces from real faces across genders (men and women), races (Asian, Black, and White), and emotional expressions (anger, fear, happiness, and sadness). Examination of d-prime values revealed that, although detection of AI-generated faces was high overall, participants were particularly good at identifying AI-generated faces of people of colour and of men. Examination of criterion revealed that participants were biased towards classifying all faces except those of White men as AI-generated.

The second study examined differences in emotion detection for AI-generated faces versus real face photographs. Participants viewed faces of varying genders (men and women), races (Asian, Black, and White), and emotional expressions (anger, fear, happiness, and sadness) and rated their emotions using scales for the six basic emotions plus neutral (neutral was denoted by setting all emotion scales to zero).
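The d-prime and criterion measures used in the first study come from standard signal detection theory: sensitivity is the difference of the z-transformed hit and false-alarm rates, and criterion is the negated mean of those z-scores. A minimal sketch is below, treating "AI-generated" as the signal category; the counts, function name, and the log-linear correction for extreme rates are illustrative assumptions, not the thesis's exact procedure.

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Compute d' (sensitivity) and criterion c from raw response counts.

    Here a "hit" means correctly calling an AI-generated face
    AI-generated, and a "false alarm" means calling a real face
    AI-generated. (Labels and correction are assumptions for
    illustration.)
    """
    # Log-linear correction keeps rates off 0 and 1, where z is infinite.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)          # sensitivity
    criterion = -(z(hit_rate) + z(fa_rate)) / 2  # response bias
    return d_prime, criterion
```

A positive criterion indicates a conservative bias (reluctance to say "AI-generated"); a negative criterion indicates a liberal bias toward the "AI-generated" response, the pattern the first study reports for most face categories.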
These ratings were then transformed into intensity (the degree of the target emotion rating), saliency (the ratio of the target emotion rating to the sum of all ratings), and accuracy (whether the target emotion was the highest-rated emotion) scores. Results indicated that, broadly speaking, AI-generated images scored lower than their real face photograph counterparts on all dependent variables. There were, however, some subtleties. In particular, the AI generator seemed to perpetuate the stereotype of “the angry Black woman”: AI-generated images of Black women expressing anger were more intense, more salient, and more accurately identified than their real counterparts, a pattern not seen for other gender/race conditions expressing anger.

Finally, the third study examined whether social attributions assigned to AI-generated faces would differ from those assigned to real face photographs, using a mixed-measures approach. Participants viewed AI-generated and real face photographs of varying genders (men and women), races (Asian, Black, and White), and emotional expressions (anger and happiness) and gave open-ended first impressions. Responses were then coded quantitatively on four dependent variables (emotional valence, emotional arousal, warmth, and dominance). Results again indicated that the AI generator seemingly perpetuated commonly held stereotypes, especially with regard to gender and race. The stereotype of the “angry Black woman” was again prominent, and the stereotype of the “passive Asian woman (and sometimes man)” was also apparent.

Taken together, these studies provide evidence that AI-generated media, and in turn the training sets used to create these generators, are indeed biased against generating accurate depictions of people of colour, particularly women of colour.
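The second study's transformation of raw emotion ratings into intensity, saliency, and accuracy scores can be sketched as follows. The rating scale, the dict layout, the tie-handling, and the function name are assumptions for illustration; only the three definitions (target rating, target-to-total ratio, target-rated-highest) follow the abstract.

```python
def score_ratings(ratings, target):
    """Transform one face's emotion ratings into three scores.

    ratings: dict mapping emotion name -> rating (scale is an
    assumption, e.g. 0-7; all-zero ratings denote "neutral").
    Returns (intensity, saliency, accuracy) per the definitions
    in the abstract; tie-handling here is an illustrative choice.
    """
    total = sum(ratings.values())
    intensity = ratings[target]                     # degree of target rating
    saliency = intensity / total if total else 0.0  # target's share of all ratings
    # Accurate only if the target is the (nonzero) highest-rated emotion.
    accuracy = intensity == max(ratings.values()) and intensity > 0
    return intensity, saliency, accuracy
```

For example, ratings of anger 6, fear 2, sadness 2 (others 0) for an anger-target face yield intensity 6, saliency 0.6, and an accurate classification; an all-zero (neutral) response yields zero intensity and saliency and counts as inaccurate for any target emotion.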

Keywords

AI, AI-generated, Artificial Intelligence, Cognition, Emotion, Emotion Recognition, Faces, Facial Expressions, Perception, Visual Cognition
