
Debiasing SHAP scores in random forests

Abstract: Black box machine learning models are currently being used for high-stakes decision making in various parts of society such as healthcare and criminal justice. While tree-based ensemble methods such as random forests typically outperform deep learning models on tabular data sets, their built-in variable importance algorithms are known to be strongly biased toward high-entropy features. It was recently shown that the increasingly popular SHAP (SHapley Additive exPlanations) values suffer from a similar bias. We propose debiased or "shrunk" SHAP scores based on sample splitting which additionally enable the detection of overfitting issues at the feature level.
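The following is a minimal, illustrative sketch of the sample-splitting idea mentioned in the abstract, assuming Python with scikit-learn and the shap package; it is not the paper's published estimator. Mean |SHAP| importances are computed both on the training split and on a held-out split, and a placeholder "shrunk" score keeps only the importance that survives out of sample, so features whose importance collapses on the held-out data stand out as likely overfitting.

# Illustrative sketch only -- not the authors' exact algorithm.
# Assumes scikit-learn and the shap package are installed.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=8, n_informative=3,
                       noise=1.0, random_state=0)
X_a, X_b, y_a, y_b = train_test_split(X, y, test_size=0.5, random_state=0)

# Fit the random forest on one half of the data only.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_a, y_a)
explainer = shap.TreeExplainer(rf)

shap_in = explainer.shap_values(X_a)   # SHAP values on the training split
shap_out = explainer.shap_values(X_b)  # SHAP values on the held-out split

imp_in = np.abs(shap_in).mean(axis=0)   # in-sample mean |SHAP| per feature
imp_out = np.abs(shap_out).mean(axis=0) # out-of-sample mean |SHAP| per feature

# Placeholder "shrunk" score for illustration: keep only the importance
# that is also present on the held-out split.
shrunk = np.minimum(imp_in, imp_out)
for j in range(X.shape[1]):
    print(f"feature {j}: in-sample={imp_in[j]:.3f}  "
          f"out-of-sample={imp_out[j]:.3f}  shrunk={shrunk[j]:.3f}")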

Metadata
Author: Markus Loecher (ORCiD)
URN: urn:nbn:de:kobv:b721-opus4-42445
DOI: https://doi.org/10.1007/s10182-023-00479-7
ISSN: 1863-8171
Parent Title (English): AStA Advances in Statistical Analysis
Publisher: Springer Science and Business Media LLC
Document Type: Article
Language: English
Year of Completion: 2023
Year of first Publication: 2023
Publishing Institution: Hochschulbibliothek HWR Berlin
Release Date: 2023/09/06
Tags: Analysis; Applied Mathematics; Economics and Econometrics; Modeling and Simulation; Social Sciences (miscellaneous); Statistics and Probability
Institutes: FB I - Wirtschaftswissenschaften
Open Access Publications (DINI-Set): open_access
Open Access publication financed by the DEAL project: hybrid
Licence: Creative Commons - CC BY - Attribution 4.0 International