
Interfaces for Explanations in Human-AI Interaction: Proposing a Design Evaluation Approach

Explanations in Human-AI Interaction are communicated to human decision makers through interfaces. Yet, it is unclear what consequences the exact representation of such explanations, as part of decision support systems (DSS) built on machine learning (ML) models, has for human decision making. We observe a need for research methods that allow for measuring the effect that different eXplainable AI (XAI) interface designs have on people's decision making. In this paper, we argue for adopting research approaches from decision theory for HCI research on XAI interface design. We outline how we used estimation tasks in human-grounded design research to introduce a method and measurement for collecting evidence on XAI interface effects. To this end, we investigated representations of LIME explanations in an online estimation-task study as a proof of concept for our proposal.
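
For readers unfamiliar with LIME output, the sketch below shows the kind of feature-weight explanation an XAI interface would have to represent to end users. It is a minimal, hypothetical illustration only: the Iris dataset and a random forest stand in for the study's actual model, task, and data, which the abstract does not specify.

# Illustrative sketch: generating a LIME explanation for one prediction
# of a tabular ML model. Dataset and model are placeholders, not the
# authors' study setup.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain a single instance; the (feature, weight) pairs returned here
# are the raw material an XAI interface design must represent visually.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")

How such weights are rendered in the interface (e.g., bar charts, highlighted text, verbal summaries) is precisely the design variable whose effect on decision making the proposed estimation-task method aims to measure.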

Metadata
Authors: Henrik Mucha, Sebastian Robert, Ruediger Breitschwerdt, Michael Fellmann
URL: https://doi.org/10.1145/3411763.3451759
Parent Title (English): Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA
Document Type: Conference Proceeding
Language: English
Publication Year: 2021
Tag: Human-AI Interaction
Page Number: 6
First Page: 327