A unified ontological and explainable framework for decoding AI risks from news data


Bibliographic Details
Main Authors: Chuan Chen, Peng Luo, Huilin Zhao, Mengyi Wei, Puzhen Zhang, Zihan Liu, Liqiu Meng
Format: Article
Language: English
Published: Nature Portfolio 2025-07-01
Series: Scientific Reports
Subjects:
Online Access: https://doi.org/10.1038/s41598-025-10675-x
Description
Summary: Artificial intelligence (AI) is rapidly permeating various aspects of human life, raising growing concerns about its associated risks. However, existing research on AI risks often remains fragmented—either limited to specific domains or focused solely on ethical guideline development—lacking a comprehensive framework that bridges macro-level typologies and micro-level instances. To address this gap, we propose an ontological risk model that unifies AI risk representation across multiple scales. Based on this model, we construct an enriched AI risk event database by systematically extracting and structuring raw news data. We then apply a suite of visual analytics methods to extract and summarize key characteristics of AI risk events. Finally, by integrating explainable machine learning techniques, we identify potential driving factors underlying different risk attributes. This study provides a novel, quantitative framework for understanding AI risks, offering both structural insights through ontological modeling and mechanistic interpretations through explainable machine learning.
ISSN: 2045-2322