What social stratifications in bias blind spot can tell us about implicit social bias in both LLMs and humans
Abstract: Large language models (LLMs) are the engines behind generative Artificial Intelligence (AI) applications, the most well-known being chatbots. As conversational agents, they—much like the humans on whose data they are trained—exhibit social bias. The nature of social bias is that it unfairly...
| Main Authors: | Sarah V. Bentley, David Evans, Claire K. Naughtin |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-08-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-025-14875-3 |
Similar Items
- Perceptions and implications of implicit gender bias in the hotel sector in Aruba
  by: Madhu S. Jadnanansing, et al.
  Published: (2024-06-01)
- A framework for evaluating cultural bias and historical misconceptions in LLMs outputs
  by: Moon-Kuen Mak, et al.
  Published: (2025-09-01)
- Advancing equity in healthcare systems: understanding implicit bias and infant mortality
  by: Sophia M. Gran-Ruaz, et al.
  Published: (2025-07-01)
- Bias at the board: implicit gender stereotypes and dual-task effects in chess evaluations
  by: Remy M. J. P. Rikers, et al.
  Published: (2025-12-01)
- Understanding Social Biases in Large Language Models
  by: Ojasvi Gupta, et al.
  Published: (2025-05-01)