A Red Teaming Framework for Securing AI in Maritime Autonomous Systems


Bibliographic Details
Main Authors: Mathew J. Walter, Aaron Barrett, Kimberly Tam
Format: Article
Language: English
Published: Taylor & Francis Group 2024-12-01
Series: Applied Artificial Intelligence
Online Access: https://www.tandfonline.com/doi/10.1080/08839514.2024.2395750
Collection: DOAJ
Description: Artificial intelligence (AI) is being ubiquitously adopted to automate processes in science and industry. However, due to its often intricate and opaque nature, AI has been shown to possess inherent vulnerabilities which can be maliciously exploited with adversarial AI, potentially putting AI users and developers at both cyber and physical risk. In addition, there is insufficient comprehension of the real-world effects of adversarial AI and an inadequacy of AI security examinations; therefore, the growing threat landscape is unknown for many AI solutions. To mitigate this issue, we propose one of the first red team frameworks for evaluating the AI security of maritime autonomous systems (MAS). The framework provides operators with a proactive (secure by design) and reactive (post-deployment evaluation) response to securing AI technology today and in the future. This framework is a multi-part checklist, which can be tailored to different systems and requirements. We demonstrate that this framework is highly effective for a red team to use to uncover numerous vulnerabilities within a real-world MAS AI, ranging from poisoning to adversarial patch attacks. The lessons learned from systematic AI red teaming can help prevent MAS-related catastrophic events in a world with increasing uptake of, and reliance on, mission-critical AI.
Record ID: doaj-art-b9519f6c08ab4ec88c5e260c8231f4ab
Institution: Kabale University
ISSN: 0883-9514; 1087-6545
Volume/Issue: 38(1)
DOI: 10.1080/08839514.2024.2395750
Author Affiliations: Mathew J. Walter, Aaron Barrett, Kimberly Tam: School of Engineering, Computing and Mathematics, University of Plymouth, Plymouth, UK