Are you sure? Prediction revision in automated decision-making

  • With the rapid improvements in machine learning and deep learning, the number of decisions made by automated decision support systems (DSS) will increase. Besides the accuracy of predictions, their explainability is becoming more important. The underlying algorithms can construct complex mathematical prediction models, which creates uncertainty about the predictions and raises the need to equip the algorithms with explanations. To examine how users trust automated DSS, we conducted an experiment. Our research aim is to examine how participants supported by a DSS revise their initial prediction under four varying approaches (treatments) in a between-subject design study. The four treatments differ in the degree of explainability offered for understanding the system's predictions: first, an interpretable regression model; second, a Random Forest (considered a black box [BB]); third, the BB with a local explanation; and last, the BB with a global explanation. We observed that all participants improved their predictions after receiving advice, whether it came from a complete BB or a BB with an explanation. The major finding was that interpretable models were not incorporated into the decision process more than BB models or BB models with explanations.
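To make the four treatment conditions concrete, below is a minimal sketch in Python with scikit-learn on synthetic data. The model choices, the perturbation-based local surrogate, and the use of permutation importance as the global explanation are illustrative assumptions; the abstract does not specify the authors' exact implementation.

    # Sketch of the four treatments: interpretable model, black box (BB),
    # BB + local explanation, BB + global explanation. Illustrative only.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Treatment 1: interpretable regression model -- its coefficients
    # are directly readable as the explanation.
    interpretable = LinearRegression().fit(X_train, y_train)
    print("Linear coefficients:", interpretable.coef_)

    # Treatment 2: Random Forest as a black box -- advice is the bare
    # prediction, with no accompanying explanation.
    black_box = RandomForestRegressor(n_estimators=200, random_state=0)
    black_box.fit(X_train, y_train)
    instance = X_test[:1]
    print("BB prediction:", black_box.predict(instance))

    # Treatment 3: BB + local explanation -- a LIME-style linear surrogate
    # fitted on perturbed samples around the single instance in question
    # (an assumed stand-in for whatever local method the authors used).
    rng = np.random.default_rng(0)
    neighborhood = instance + rng.normal(scale=0.5, size=(200, X.shape[1]))
    local_surrogate = LinearRegression().fit(
        neighborhood, black_box.predict(neighborhood)
    )
    print("Local feature weights:", local_surrogate.coef_)

    # Treatment 4: BB + global explanation -- permutation importance over
    # the test set summarizes which features drive predictions overall.
    global_imp = permutation_importance(
        black_box, X_test, y_test, n_repeats=10, random_state=0
    )
    print("Global importances:", global_imp.importances_mean)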

Metadata
Author: Nadia Burkart, Sebastian Robert, Marco F. Huber
URL: https://doi.org/10.1111/exsy.12577
Parent Title (English): Expert Systems
Document Type: Article (peer reviewed)
Language: English
Publication Year: 2021
Tag: experiment; explainable ML; interpretability; prediction revision
Volume: 38
Issue: 1
First Page: e12577
Peer reviewed: Yes