Interpretable Classifier Models for Decision Support Using High Utility Gain Patterns

Ensemble models such as gradient boosting and random forests are known to offer strong predictive performance on a wide variety of supervised learning problems. The high performance of these black-box models, however, comes at the cost of model interpretability, and they are inadequate for meeting the regulatory and explainability needs of organizations. Interpretability for high-performance black-box models is typically achieved with post-hoc explanation methods such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). This paper presents an alternative, intrinsically interpretable classifier model that extracts a class of higher-order patterns and embeds them into an interpretable learning model. More specifically, the proposed model extracts novel High Utility Gain (HUG) patterns that capture higher-order interactions, transforms the model input data into a new space, and applies interpretable classifier methods on the transformed space. We conduct rigorous experiments on forty benchmark binary and multi-class classification datasets to evaluate the proposed model against state-of-the-art ensemble and interpretable classifier models. The proposed model was comprehensively assessed on three key dimensions: 1) quality of predictions, using classifier measures such as accuracy, $F_{1}$, AUC, H-measure, and logistic loss; 2) computational performance on large and high-dimensional data; and 3) interpretability. The HUG-based learning model was found to deliver performance comparable to that of state-of-the-art ensemble models. It was also found to achieve improvements of 2-40% in prediction quality and 45% in interpretability, with significantly lower computational requirements than other interpretable classifier models. Furthermore, we present case studies in the finance and healthcare domains and generate one- and two-dimensional HUG profiles to illustrate the interpretability of our HUG models. The proposed solution offers an alternative approach to building high-performance, transparent machine learning classifiers, and we hope it helps organizations meet their growing regulatory and explainability needs.
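The abstract describes a three-step pipeline: mine higher-order patterns, project the input data into the resulting pattern space, and fit an intrinsically interpretable classifier on that space. The minimal Python sketch below illustrates only that general shape; the pair-scoring heuristic is a simple class-distribution-shift proxy, not the paper's HUG measure, and the helper names (mine_patterns, transform) are illustrative assumptions rather than the author's code.

    from itertools import combinations

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def mine_patterns(X_bin, y, top_k=20, min_support=10):
        """Score every feature pair of a binarized matrix by how far the class
        rate inside the pattern drifts from the base rate (a crude stand-in for
        the paper's utility-gain scoring), and keep the top_k pairs."""
        base_rate = y.mean()
        scored = []
        for i, j in combinations(range(X_bin.shape[1]), 2):
            hits = (X_bin[:, i] & X_bin[:, j]).astype(bool)
            if hits.sum() < min_support:  # skip patterns with too little evidence
                continue
            scored.append(((i, j), abs(y[hits].mean() - base_rate)))
        scored.sort(key=lambda t: -t[1])
        return [pair for pair, _ in scored[:top_k]]

    def transform(X_bin, patterns):
        """Map each row into the pattern space: one 0/1 column per pattern."""
        return np.column_stack([X_bin[:, i] & X_bin[:, j] for i, j in patterns])

    # Toy usage: the label is driven by a single feature pair, so the mined
    # pattern (0, 3) should surface with the dominant coefficient.
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(500, 12))
    y = (X[:, 0] & X[:, 3]).astype(int)
    patterns = mine_patterns(X, y)
    clf = LogisticRegression().fit(transform(X, patterns), y)
    print(list(zip(patterns, np.round(clf.coef_[0], 2))))

Because each coefficient attaches to a named pattern rather than an opaque ensemble component, the fitted model can be read directly, which is the interpretability property the abstract emphasizes.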

Bibliographic Details
Main Author: Srikumar Krishnamoorthy (https://orcid.org/0000-0002-2051-571X), Information Systems Area, Indian Institute of Management Ahmedabad, Ahmedabad, Gujarat, India
Format: Article
Language: English
Published: IEEE, 2024-01-01
Series: IEEE Access (vol. 12, pp. 126088-126107)
DOI: 10.1109/ACCESS.2024.3455563
ISSN: 2169-3536
Subjects: Analytics; interpretable machine learning; explainable artificial intelligence; classification; high utility patterns
Online Access: https://ieeexplore.ieee.org/document/10669017/
Record ID: doaj-art-29e73bc458af49d183e900a5ecdae717
Institution: Kabale University
Collection: DOAJ