Understanding and Countering Stereotypes: We present a computational approach to interpreting stereotypes in text through the Stereotype Content Model (SCM), a comprehensive causal theory from social psychology that characterizes stereotypes along the two primary dimensions of warmth and competence. Further, we explore strategies for automatically generating counter-stereotypes to challenge stereotypical beliefs.
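As a rough illustration of how text can be mapped onto the SCM dimensions, the sketch below projects word embeddings onto warmth and competence axes defined by seed words. The seed lists, the GloVe model choice, and the scoring function are illustrative assumptions, not the exact method used in the papers listed here.

```python
# Illustrative sketch: scoring words along SCM warmth/competence axes by
# projecting word embeddings onto seed-defined direction vectors.
# Seed lists and model choice are assumptions, not the papers' exact setup.
import numpy as np
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")  # small pretrained word embeddings

WARM = ["friendly", "kind", "trustworthy", "sincere"]
COLD = ["hostile", "cruel", "dishonest", "cold"]
COMPETENT = ["intelligent", "skilled", "capable", "efficient"]
INCOMPETENT = ["incompetent", "lazy", "ignorant", "clumsy"]

def axis(pos, neg):
    """Unit direction vector pointing from the negative to the positive pole."""
    v = np.mean([model[w] for w in pos], axis=0) - np.mean([model[w] for w in neg], axis=0)
    return v / np.linalg.norm(v)

warmth_axis = axis(WARM, COLD)
competence_axis = axis(COMPETENT, INCOMPETENT)

def scm_score(word):
    """Project a word's (normalized) embedding onto the two SCM axes."""
    v = model[word] / np.linalg.norm(model[word])
    return float(v @ warmth_axis), float(v @ competence_axis)

for w in ["nurse", "scientist", "criminal"]:
    warmth, competence = scm_score(w)
    print(f"{w}: warmth={warmth:+.2f}, competence={competence:+.2f}")
```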
Kathleen C. Fraser, Svetlana Kiritchenko, and Isar Nejadgholi. (2024) How Does Stereotype Content Differ across Data Sources? In Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024), Mexico City, Mexico, June 2024. [pdf]
Isar Nejadgholi, Kathleen C. Fraser, Anna Kerkhof, and Svetlana Kiritchenko. (2024) Challenging Negative Gender Stereotypes: A Study on the Effectiveness of Automated Counter-Stereotypes. In Proceedings of the Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Torino, Italy, May 2024. [pdf] [data]
Kathleen C. Fraser, Svetlana Kiritchenko, Isar Nejadgholi, and Anna Kerkhof. (2023) What Makes a Good Counter-Stereotype? Evaluating Strategies for Automated Responses to Stereotypical Text. In Proceedings of the First Workshop on Social Influence in Conversations (SICon), Toronto, ON, Canada, July 2023. [pdf]
Kathleen C. Fraser, Svetlana Kiritchenko, and Isar Nejadgholi. (2022) Computational Modelling of Stereotype Content in Text. Frontiers in Artificial Intelligence, April 2022. [paper]
Kathleen C. Fraser, Isar Nejadgholi, and Svetlana Kiritchenko. (2021) Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model. In Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), August 2021. [pdf]
Kathleen C. Fraser, Svetlana Kiritchenko, and Isar Nejadgholi. (2022) Extracting Age-Related Stereotypes from Social Media Texts. In Proceedings of the Language Resources and Evaluation Conference (LREC-2022), Marseille, France, June 2022. [pdf] [project webpage]
Detecting and Countering Aporophobia: Aporophobia, a social bias against the poor, is common online, yet has so far been overlooked in NLP research on toxic language. We demonstrate that aporophobic attitudes are indeed present in social media and argue that existing NLP datasets and models are inadequate for effectively addressing this problem. Further, we introduce DRAX, a novel dataset manually annotated for aporophobia, to facilitate the automatic identification and tracking of harmful beliefs and discriminatory actions against poor people on social media. Based on the annotated data, we devise a taxonomy of aporophobic attitudes and actions expressed on social media.
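As a hedged illustration of how such annotated data could support automatic detection, the sketch below trains a simple TF-IDF baseline classifier. The file name drax.csv and its column names are hypothetical placeholders for however the released data is formatted; this is a baseline sketch, not the models evaluated in the papers below.

```python
# Minimal sketch: a baseline classifier for aporophobic text, trained on
# annotated data. "drax.csv" and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("drax.csv")  # hypothetical: columns "text" and "label"
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"])

# Word and bigram TF-IDF features feeding a logistic regression baseline.
vec = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(X_train), y_train)

print(classification_report(y_test, clf.predict(vec.transform(X_test))))
```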
Georgina Curto, Svetlana Kiritchenko, Muhammad Hammad Fahim Siddiqui, Isar Nejadgholi, and Kathleen C. Fraser. (2025) Tackling Social Bias against the Poor: A Dataset and Taxonomy on Aporophobia. In Findings of the Association for Computational Linguistics: NAACL 2025, Albuquerque, New Mexico, USA, April 2025. [pdf] [data]
Georgina Curto, Svetlana Kiritchenko, Kathleen C. Fraser, and Isar Nejadgholi. (2024) The Crime of Being Poor: Associations between Crime and Poverty on Social Media in Eight Countries. In Proceedings of the Sixth Workshop on NLP and Computational Social Science (NLP+CSS), Mexico City, Mexico, June 2024. [pdf]
Svetlana Kiritchenko, Georgina Curto, Isar Nejadgholi, and Kathleen C. Fraser. (2023) Aporophobia: An Overlooked Type of Toxic Language Targeting the Poor. In Proceedings of the 7th Workshop on Online Abuse and Harms (WOAH), Toronto, ON, Canada, July 2023.
Outstanding Paper Award. [pdf]
Biases in Vision-Language Systems: We investigate bias and diversity in the outputs of state-of-the-art text-to-image and large vision-language systems.
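To illustrate the counterfactual probing idea behind this line of work, the sketch below builds groups of prompts that differ only in a demographic attribute; the template and word lists are hypothetical examples, not the actual stimuli used in these papers.

```python
# Illustrative sketch: constructing counterfactual prompt groups that differ
# only in a demographic attribute, for probing text-to-image or
# vision-language models. Template and word lists are hypothetical.
from itertools import product

TEMPLATES = ["a photo of a {attr} person who works as a {occupation}"]
ATTRIBUTES = ["Black", "white", "Asian", "Hispanic"]
OCCUPATIONS = ["doctor", "janitor", "CEO", "cashier"]

def counterfactual_prompts():
    """Yield groups of prompts identical except for the demographic attribute."""
    for template, occupation in product(TEMPLATES, OCCUPATIONS):
        yield [template.format(attr=a, occupation=occupation) for a in ATTRIBUTES]

for group in counterfactual_prompts():
    # Feed each group to the same model; systematic differences in the outputs
    # across the group (e.g., in sentiment or depicted roles) indicate bias.
    print(group)
```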
Phillip Howard, Kathleen C. Fraser, Anahita Bhiwandiwalla, and Svetlana Kiritchenko. (2025) Uncovering Bias in Large Vision-Language Models at Scale with Counterfactuals. In Proceedings of the Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL 2025), Albuquerque, New Mexico, USA, April 2025. [pdf]
Kathleen C. Fraser and Svetlana Kiritchenko. (2024) Examining Gender and Racial Bias in Large Vision-Language Models Using a Novel Dataset of Parallel Images. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (EACL), Malta, March 2024. [paper]
Kathleen C. Fraser, Svetlana Kiritchenko, and Isar Nejadgholi. (2023) Diversity is Not a One-Way Street: Pilot Study on Ethical Interventions for Racial Bias in Text-to-Image Systems. In Proceedings of the 14th International Conference on Computational Creativity (ICCC), Waterloo, ON, Canada, June 2023.
Best Short Paper Award. [pdf]
Kathleen C. Fraser, Isar Nejadgholi, and Svetlana Kiritchenko. (2023) A Friendly Face: Do Text-to-Image Systems Rely on Stereotypes when the Input is Under-Specified? In Proceedings of the Creative AI Across Modalities Workshop (CreativeAI @ AAAI), Washington, DC, USA, Feb. 2023. [pdf]