Humor as a window into generative AI bias

Abstract: A preregistered audit of 600 images by generative AI across 150 different prompts explores the link between humor and discrimination in consumer-facing AI solutions. When ChatGPT updates images to make them “funnier”, the prevalence of stereotyped groups changes. While stereotyped groups for politically sensitive traits (i.e., race and gender) are less likely to be represented after making an image funnier, stereotyped groups for less politically sensitive traits (i.e., older, visually impaired, and people with high body weight groups) are more likely to be represented.

Bibliographic Details
Main Authors: Roger Saumure, Julian De Freitas, Stefano Puntoni
Format: Article
Language: English
Published: Nature Portfolio 2025-01-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-024-83384-6
author Roger Saumure; Julian De Freitas; Stefano Puntoni
affiliation Department of Marketing, The Wharton School, University of Pennsylvania (Saumure); Department of Marketing, Harvard Business School, Harvard University (De Freitas); Department of Marketing, The Wharton School, University of Pennsylvania (Puntoni)
collection DOAJ
format Article
id doaj-art-dec2927557e04031880e9d6c7ce65422
institution Kabale University
issn 2045-2322
language English
publishDate 2025-01-01
publisher Nature Portfolio
series Scientific Reports
citation Scientific Reports 15(1), pp. 1–7 (2025), doi:10.1038/s41598-024-83384-6
url https://doi.org/10.1038/s41598-024-83384-6