TY - CHAP
A1 - Weitz, Katharina
A1 - Schiller, Dominik
A1 - Schlagowski, Ruben
A1 - Huber, Tobias
A1 - André, Elisabeth
T1 - "Do you trust me?": Increasing User-Trust by Integrating Virtual Agents in Explainable AI Interaction Design
T2 - IVA’19: Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents
UR - https://doi.org/10.1145/3308532.3329441
Y1 - 2019
SN - 978-1-4503-6672-4
SP - 7
EP - 9
PB - ACM
CY - New York
ER -

TY - JOUR
A1 - Weitz, Katharina
A1 - Schiller, Dominik
A1 - Schlagowski, Ruben
A1 - Huber, Tobias
A1 - André, Elisabeth
T1 - “Let me explain!”: exploring the potential of virtual agents in explainable AI interaction design
JF - Journal on Multimodal User Interfaces
N2 - While the research area of artificial intelligence has benefited from increasingly sophisticated machine learning techniques in recent years, the resulting systems suffer from a loss of transparency and comprehensibility, especially for end-users. In this paper, we explore the effects of incorporating virtual agents into explainable artificial intelligence (XAI) designs on the perceived trust of end-users. For this purpose, we conducted a user study based on a simple speech recognition system for keyword classification. As a result of this experiment, we found that the integration of virtual agents leads to increased user trust in the XAI system. Furthermore, we found that the user’s trust significantly depends on the modalities that are used within the user-agent interface design. The results of our study show a linear trend where the visual presence of an agent combined with voice output resulted in greater trust than text output or voice output alone. Additionally, we analysed the participants’ feedback regarding the presented XAI visualisations. We found that increased human-likeness of, and interaction with, the virtual agent are the two most commonly mentioned points for improving the proposed XAI interaction design. Based on these results, we discuss current limitations and interesting topics for further research in the field of XAI. Moreover, we present design recommendations for virtual agents in XAI systems for future projects.
UR - https://doi.org/10.1007/s12193-020-00332-0
UR - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:573-57188
Y1 - 2020
SN - 1783-8738
VL - 15
IS - 2
SP - 87
EP - 98
PB - Springer
CY - Berlin
ER -

TY - CHAP
A1 - Flutura, Simon
A1 - Seiderer, Andreas
A1 - Huber, Tobias
A1 - Weitz, Katharina
A1 - Aslan, Ilhan
A1 - Schlagowski, Ruben
A1 - André, Elisabeth
A1 - Rathmann, Joachim
T1 - Interactive Machine Learning and Explainability in Mobile Classification of Forest-Aesthetics
T2 - Proceedings of the 6th EAI International Conference on Smart Objects and Technologies for Social Good
UR - https://doi.org/10.1145/3411170.3411225
Y1 - 2020
SN - 978-1-4503-7559-7
SP - 90
EP - 95
PB - ACM
CY - New York
ER -

TY - CHAP
A1 - Schlagowski, Ruben
A1 - Herget, Frederick
A1 - Heimerl, Niklas
A1 - Hammerl, Maximilian
A1 - Huber, Tobias
A1 - Zwolsky, Pamina
A1 - Gruca, Jan
A1 - André, Elisabeth
ED - Smith, Gillian
ED - Whitehead, Jim
ED - Samuel, Ben
ED - Spiel, Katta
ED - van Rozen, Riemer
T1 - From a Social POV: The Impact of Point of View on Player Behavior, Engagement, and Experience in a Serious Social Simulation Game
T2 - Proceedings of the 19th International Conference on the Foundations of Digital Games, FDG 2024
N2 - Multiplayer games with social aspects vary widely regarding client design, e.g., point of view or camera perspective. While design paradigms usually arise from gold standards set by previously successful games in the industry, the impact of such paradigms is under-researched for games that serve as scientific instruments, e.g., to research social behavior. Intending to investigate how such games should be designed, we built two multiplayer clients with the same game logic, one using a first-person point of view and the other a top-down camera perspective. We then conducted an online user study in which players tested these game clients in extensive multiplayer sessions. Analyzing speech time, in-game logs, questionnaires, and qualitative feedback, we examine the perspectives’ impact on player behavior, engagement, and game experience in a scientific or "serious games" context. In addition, we have made our game UNISON and both clients available as open source to facilitate future empirical social science research.
UR - https://doi.org/10.1145/3649921.3649936
UR - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:573-65451
Y1 - 2024
SN - 979-8-4007-0955-5
PB - ACM
CY - New York
ER -

TY - CHAP
A1 - Mertes, Silvan
A1 - Huber, Tobias
A1 - Karle, Christina
A1 - Weitz, Katharina
A1 - Schlagowski, Ruben
A1 - Conati, Cristina
A1 - André, Elisabeth
ED - Larson, Kate
T1 - Relevant Irrelevance: Generating Alterfactual Explanations for Image Classifiers
T2 - Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24)
UR - https://doi.org/10.24963/ijcai.2024/52
Y1 - 2024
SN - 978-1-956792-04-1
SP - 467
EP - 475
PB - IJCAI
CY - Wien
ER -

TY - INPR
A1 - Mertes, Silvan
A1 - Karle, Christina
A1 - Huber, Tobias
A1 - Weitz, Katharina
A1 - Schlagowski, Ruben
A1 - André, Elisabeth
T1 - Alterfactual Explanations - The Relevance of Irrelevance for Explaining AI Systems
N2 - Explanation mechanisms from the field of Counterfactual Thinking are a widely used paradigm for Explainable Artificial Intelligence (XAI), as they follow a natural way of reasoning that humans are familiar with. However, all common approaches from this field are based on communicating information about features or characteristics that are especially important for an AI's decision. We argue that, in order to fully understand a decision, not only is knowledge about relevant features needed, but awareness of irrelevant information also contributes substantially to the creation of a user's mental model of an AI system. Therefore, we introduce a new way of explaining AI systems. Our approach, which we call Alterfactual Explanations, is based on showing an alternative reality where irrelevant features of an AI's input are altered. By doing so, the user directly sees which characteristics of the input data can change arbitrarily without influencing the AI's decision. We evaluate our approach in an extensive user study, revealing that it significantly contributes to the participants' understanding of an AI. We show that alterfactual explanations are suited to convey an understanding of different aspects of the AI's reasoning than established counterfactual explanation methods.
UR - https://doi.org/10.48550/arXiv.2207.09374
UR - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:573-57414
Y1 - 2022
PB - arXiv
CY - Ithaca
ER -