Challenging Negative Gender Stereotypes: A Study on the Effectiveness of Automated Counter-Stereotypes

Isar Nejadgholi,1 Kathleen C. Fraser,1 Anna Kerkhof,2 Svetlana Kiritchenko1
1National Research Council Canada, Ottawa, Canada
2ifo Institute for Economic Research and University of Munich, Munich, Germany

In Proceedings of the Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Torino, Italy, May 2024

Gender stereotypes are pervasive beliefs about individuals based on their gender that play a significant role in shaping societal attitudes, behaviours, and even opportunities. Recognizing the negative implications of gender stereotypes, particularly in online communications, this study investigates eleven strategies to automatically counteract and challenge these views. We present AI-generated gender-based counter-stereotypes to (self-identified) male and female study participants and ask them to assess their offensiveness, plausibility, and potential effectiveness. The strategies of counter-facts and broadening universals (i.e., stating that anyone can have a trait regardless of group membership) emerged as the most robust approaches, while humour, perspective-taking, counter-examples, and empathy for the speaker were perceived as less effective. Moreover, ratings varied more by the target of the stereotype than by the gender of the rater. Alarmingly, many AI-generated counter-stereotypes were perceived as offensive and/or implausible. Our analysis and the collected dataset offer foundational insight into counter-stereotype generation, guiding future efforts to develop strategies that effectively challenge gender stereotypes in online interactions.

[paper][Download the data]


A. Curation Rationale

Stereotypes involve attributing certain characteristics to a person purely on the basis of their perceived membership in a certain social category, often defined by demographic features such as race, ethnicity, age, or religious affiliation. In particular, perceived gender continues to be one of the most salient features by which these conscious and subconscious social categorizations are made, despite growing recognition that gender is not necessarily apparent from a person's appearance, is not a binary categorization, and in most cases is not relevant to the situation. Stereotypes are reinforced by repeated exposure. Conversely, stereotypical associations can be weakened by exposure to counter-stereotypes, that is, information that disrupts or challenges the stereotype. Several counter-strategies can be employed, such as providing factual information to contradict the stereotype, asking questions to motivate critical thinking, or encouraging the speaker to 'put themselves in the target group's shoes'.

In this project, we investigate the question of how to effectively generate counter-stereotypes in online spaces, such as on social media platforms, where such content is prevalent. In particular, we assess whether generative AI technology (in our case, ChatGPT) can be used to generate appropriate and plausible counter-stereotypes and which counter-strategy is judged to be most effective at countering negative gender stereotypes.

We collect twenty well-known North American stereotypes about men and women (ten stereotypes each) from existing literature and online platforms. Then, using ChatGPT, we automatically generate a one-sentence counter-statement for each stereotype, in a social-media style, according to each of the established counter-strategies for challenging stereotypes, and manually validate the generated statements. We present stereotype–counter-stereotype pairs to human annotators and ask them to assess the offensiveness, plausibility, and potential effectiveness of the counter-statements. The complete lists of stereotypes and counter-stereotype strategies are available in the paper.
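The generation step above can be sketched roughly as follows. This is an illustrative sketch only: the strategy descriptions and prompt wording are hypothetical examples, not the exact prompts or the full eleven-strategy list used in the study.

```python
# Illustrative sketch of the counter-stereotype generation step.
# The strategy descriptions and the prompt template are hypothetical;
# the study's actual prompts and full strategy list are given in the paper.

COUNTER_STRATEGIES = {
    "counter-facts": "state a fact that contradicts the stereotype",
    "broadening universals": "point out that anyone can have this trait, "
                             "regardless of gender",
    "counter-examples": "mention a well-known person who contradicts the stereotype",
}

def build_prompt(stereotype: str, strategy: str) -> str:
    """Compose a prompt asking for a one-sentence, tweet-style counter-statement."""
    instruction = COUNTER_STRATEGIES[strategy]
    return (
        f"Challenge the stereotype '{stereotype}' in one sentence, "
        f"written in an informal, tweet-like style. Strategy: {instruction}."
    )

# Each prompt would then be sent to ChatGPT (e.g., via the OpenAI
# chat-completions API) and the returned statement manually validated:
#   client.chat.completions.create(
#       model="gpt-3.5-turbo",
#       messages=[{"role": "user", "content": prompt}])

prompt = build_prompt("women are bad drivers", "broadening universals")
```

Looping `build_prompt` over all twenty stereotypes and all strategies yields one candidate counter-statement per stereotype-strategy pair, which is the unit presented to annotators.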

B. Language Variety

The stereotype and counter-stereotype statements are written in English.

C. Speaker Demographic N/A

D. Annotator Demographic

The data is annotated by 75 annotators (37 males, 38 females, self-identified) from the United States of America, fluent in English, recruited through Prolific. The mean (median) age is 40.41 (38) years.

E. Speech Situation

The counter-stereotypes are automatically generated by ChatGPT in an informal, tweet-like style.

F. Text Characteristics N/A

G. Recording Quality N/A

H. Other N/A

I. Provenance Appendix N/A