Generative AI in Medicine: Pioneering Progress or Perpetuating Historical Inaccuracies? Cross-Sectional Study Evaluating Implicit Bias
Abstract — Background: Generative artificial intelligence (gAI) models, such as DALL-E 2, are promising tools that can generate novel images or artwork based on text input. However, caution is warranted, as these tools generate information based on historical data and are thus at...
| Main Authors: | Philip Sutera, Rohini Bhatia, Timothy Lin, Leslie Chang, Andrea Brown, Reshma Jagsi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | JMIR Publications, 2025-06-01 |
| Series: | JMIR AI |
| Online Access: | https://ai.jmir.org/2025/1/e56891 |
Similar Items
- The innovation bias: Implicit preferences for innovative and historical solutions over contemporary ones
  by: Moritz Reis, et al.
  Published: (2025-05-01)
- Demographic inaccuracies and biases in the depiction of patients by artificial intelligence text-to-image generators
  by: Tim Luca Till Wiegand, et al.
  Published: (2025-07-01)
- Systematic review and meta-analysis of quotation inaccuracy in medicine
  by: Christopher Baethge, et al.
  Published: (2025-07-01)
- Do Implicit Racial Biases Have Significant Discriminatory Effects?
  by: Timothy Fuller
  Published: (2021-12-01)
- Forecasting inaccuracies: a result of unexpected events, optimism bias, technical problems, or strategic misrepresentation?
  by: Petter Naess, et al.
  Published: (2015-08-01)