AI_TAF: A Human-Centric Trustworthiness Risk Assessment Framework for AI Systems

Bibliographic Details
Main Authors: Eleni Seralidou, Kitty Kioskli, Theofanis Fotis, Nineta Polemi
Format: Article
Language: English
Published: MDPI AG 2025-06-01
Series: Computers
Online Access: https://www.mdpi.com/2073-431X/14/7/243
Description
Summary: This paper presents the AI Trustworthiness Assessment Framework (AI_TAF), a comprehensive methodology for evaluating and mitigating trustworthiness risks across all stages of an AI system’s lifecycle. The framework accounts for the criticality of the system based on its intended application, the maturity level of the AI teams responsible for ensuring trust, and the organisation’s risk tolerance regarding trustworthiness. By integrating both technical safeguards and sociopsychological considerations, AI_TAF adopts a human-centric approach to risk management, supporting the development of trustworthy AI systems across diverse organisational contexts and at varying levels of human–AI maturity. Crucially, the framework underscores that achieving trust in AI requires rigorous assessment and advancement of the trustworthiness maturity of the human actors involved in the AI lifecycle. Only through this human-centric enhancement can AI teams be adequately prepared to provide effective oversight of AI systems.
ISSN: 2073-431X