Interpretable Classifier Models for Decision Support Using High Utility Gain Patterns
Ensemble models such as gradient boosting and random forests have been shown to offer the best predictive performance on a wide variety of supervised learning problems. The high performance of these black-box models, however, comes at the cost of model interpretability. They are also inadequate to meet reg...
| Main Author: | Srikumar Krishnamoorthy |
| --- | --- |
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10669017/ |
Similar Items
- Credit Risk Modeling Using Interpreted XGBoost
  by: Marcin Hernes, et al.
  Published: (2023-01-01)
- Towards Transparent AI in Medicine: ECG-Based Arrhythmia Detection with Explainable Deep Learning
  by: Oleksii Kovalchuk, et al.
  Published: (2025-01-01)
- Fast Single Phase Algorithm for Utility Mining in Big Data
  by: Junqiang Liu, et al.
  Published: (2015-04-01)
- CIME4R: Exploring iterative, AI-guided chemical reaction optimization campaigns in their parameter space
  by: Christina Humer, et al.
  Published: (2024-05-01)
- Closed-form interpretation of neural network classifiers with symbolic gradients
  by: Sebastian J Wetzel
  Published: (2025-01-01)