Evaluating science: A comparison of human and AI reviewers
Scientists have started to explore whether novel artificial intelligence (AI) tools based on large language models, such as GPT-4, could support the scientific peer review process. We sought to understand (i) whether AI versus human reviewers are able to distinguish between made-up AI-generated and...
| Main Authors: | Anna Shcherbiak, Hooman Habibnia, Robert Böhm, Susann Fiedler |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Cambridge University Press, 2024-01-01 |
| Series: | Judgment and Decision Making |
| Online Access: | https://www.cambridge.org/core/product/identifier/S193029752400024X/type/journal_article |
Similar Items
- When AI goes wrong: Fatal errors in oncological research reviewing assistance Open AI based
  by: Marwan Al-Raeei
  Published: (2024-06-01)
- Peer Review in the Artificial Intelligence Era: A Call for Developing Responsible Integration Guidelines
  by: BaHammam AS
  Published: (2025-01-01)
- Enhancing peer assessment with artificial intelligence
  by: Keith J. Topping, et al.
  Published: (2025-01-01)
- Can AI provide useful holistic essay scoring?
  by: Tamara P. Tate, et al.
  Published: (2024-12-01)
- An overview of large AI models and their applications
  by: Xiaoguang Tu, et al.
  Published: (2024-12-01)