Evaluating Python Static Code Analysis Tools Using FAIR Principles

Bibliographic Details
Main Authors: Hassan Bapeer Hassan, Qusay Idrees Sarhan, Arpad Beszedes
Format: Article
Language: English
Published: IEEE, 2024-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10758651/
Description
Summary: The quality of modern software relies heavily on the effective use of static code analysis tools. To improve their usefulness, these tools should be evaluated using a framework that prioritizes collaboration, user-friendliness, and long-term sustainability. In this paper, we suggest applying the FAIR principles (Findability, Accessibility, Interoperability, and Reusability) as a foundation for assessing static code analysis tools. We specifically focus on Python-based tools, analyzing their features and how well they align with FAIR guidelines. Our findings indicate that it is important to expand the FAIR principles to include thorough documentation, performance assessments, and robust testing frameworks for a more complete evaluation. As Internet of Things (IoT) applications and technologies become increasingly common, these tools must adapt to meet the unique challenges posed by complex and interconnected systems. Addressing these issues is vital for ensuring security and scalability within IoT environments. By implementing this FAIR-based approach, we aim to support the development of static code analysis tools that cater to the evolving needs of the software engineering community while ensuring they remain sustainable and reliable.
ISSN: 2169-3536