A framework for evaluating cultural bias and historical misconceptions in LLMs outputs
Large Language Models (LLMs), while powerful, often perpetuate cultural biases and historical inaccuracies from their training data, marginalizing underrepresented perspectives. To address these issues, we introduce a structured framework to systematically evaluate and quantify these deficiencies. O...
Saved in:

| Main Authors: | Moon-Kuen Mak, Tiejian Luo |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | KeAi Communications Co. Ltd., 2025-09-01 |
| Series: | BenchCouncil Transactions on Benchmarks, Standards and Evaluations |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2772485925000481 |
Similar Items
- Contraception: Ethical Quandaries and Misconceptions
  by: G.A. Ogunbanjo, et al.
  Published: (2004-08-01)
- A Significant Reducing Misconception on Newton's Law Under Purposive Scaffolding and Problem-Based Misconception Supported Modeling Instruction
  by: Suwasono Purbo, et al.
  Published: (2025-08-01)
- Identification of learning difficulties and misconceptions of chemical bonding material: A review
  by: Hayuni Retno Widarti, et al.
  Published: (2024-10-01)
- Who Is to Blame for the Bias in Visualizations, ChatGPT or DALL-E?
  by: Dirk H. R. Spennemann
  Published: (2025-04-01)
- A Benchmark for math misconceptions: bridging gaps in middle school algebra with AI-supported instruction
  by: Nancy Otero, et al.
  Published: (2025-08-01)