A Novel Approach for Model Interpretability and Domain Aware Fine-Tuning in AdaBoost

Abstract: The success of machine learning in real-world use cases has increased its demand in mission-critical applications such as autonomous vehicles, healthcare and medical diagnosis, aviation and flight safety, natural disaster prediction, and early warning systems. Adaptive Boosting (AdaBoost) is an ensemble learning method that has gained much traction in such applications. Because AdaBoost is not an inherently interpretable model, its interpretability has been a research topic for many years, and most work to date has aimed at explaining AdaBoost with perturbation-based techniques. This paper presents a technique to interpret the AdaBoost algorithm from a data perspective using deletion diagnostics and Cook's distance. The technique achieves interpretability by detecting the most influential data instances and their impact on the model's feature importances. This interpretability enables domain experts to modify the significance of specific features in a trained AdaBoost model in a way grounded in the data instances. Unlike perturbation-based explanations, interpreting from a data perspective makes it possible to debug data-related biases and errors and to impart the knowledge of domain experts into the model through domain-aware fine-tuning. Experimental studies on diverse real-world multi-feature datasets demonstrate the interpretability and the knowledge integration achieved through domain-aware fine-tuning.
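The abstract describes a data-perspective interpretation of AdaBoost built on deletion diagnostics and Cook's distance. The sketch below illustrates that general idea only; it is not the authors' published method. The choice of scikit-learn's AdaBoostClassifier, the breast-cancer benchmark dataset, and the simple squared-difference influence score over feature importances are all illustrative assumptions.

```python
# Minimal sketch (assumptions noted above, not the paper's exact formulation):
# leave-one-out deletion diagnostics for AdaBoost, scoring each training
# instance by how much its removal shifts the model's feature importances,
# in the spirit of a Cook's-distance-style influence measure.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier

X, y = load_breast_cancer(return_X_y=True)

def fit_importances(X, y):
    """Fit an AdaBoost model and return its feature-importance vector."""
    model = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
    return model.feature_importances_

base_importance = fit_importances(X, y)

# Influence of instance i: squared change in the feature-importance vector
# when instance i is deleted from the training set and the model is refit.
influence = np.zeros(len(X))
for i in range(len(X)):
    mask = np.ones(len(X), dtype=bool)
    mask[i] = False
    loo_importance = fit_importances(X[mask], y[mask])
    influence[i] = np.sum((loo_importance - base_importance) ** 2)

# Instances with the largest scores are the natural candidates for expert
# inspection (e.g. label errors, biases) before any fine-tuning.
top = np.argsort(influence)[::-1][:10]
print("Most influential training instances:", top)
print("Influence scores:", influence[top])
```

In this sketch, the highest-scoring instances are those whose deletion most changes the learned feature importances; reviewing and adjusting for them corresponds, loosely, to the kind of domain-aware fine-tuning the paper motivates.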

Bibliographic Details
Main Authors: Raj Joseph Kiran, J. Sanil, S. Asharaf (School of Computer Science and Engineering, Kerala University of Digital Sciences, Innovation and Technology)
Format: Article
Language: English
Published: Springer Nature, 2024-09-01
Series: Human-Centric Intelligent Systems, Vol. 4, No. 4, pp. 610-632 (ISSN 2667-1336)
Subjects: AdaBoost; Deletion diagnostics; Influential instances; Interpretable machine learning
Online Access: https://doi.org/10.1007/s44230-024-00082-2