TY - JOUR
A1 - Höpfl, Felix
A1 - Peisl, Thomas
A1 - Greiner, Christian
T1 - Exploring stakeholder perspectives: Enhancing robot acceptance for sustainable healthcare solutions
JF - Sustainable Technology and Entrepreneurship
N2 - The pandemic has highlighted that healthcare systems around the world are under pressure. Demographic change is leading to an increasing shortage of care workers in most countries, and the demographic challenge is only just beginning in most societies. While robots are widely used in industry, robotic support in healthcare is still limited to very specialized robots in the operating theatre. The question of what type of deployment is likely to be successful in a healthcare scenario is not only a technological or economic question but also one of technology acceptance. The answer to this question supports entrepreneurial opportunities to develop sustainable healthcare solutions. In this paper, we analyze the acceptance of robots in elderly care from the perspective of patients, patient families, and geriatric care professionals. To understand the various positions and to identify the suitability of existing acceptance models, we applied stakeholder mapping to conduct qualitative interviews with 14 people with different knowledge backgrounds and levels of involvement in care situations, based on 9 videos showing different robots and application scenarios. The results confirmed that existing technology acceptance models need to be extended by factors such as robot appearance. We found that the respondents' background knowledge influences their answers to questions about, e.g., safety concerns. In addition, we found that the contribution to patients' self-determination and independence is an important factor that is not included in existing technology acceptance models. Finally, the discovery of a significant discrepancy between the self-perception and the external perception of the different stakeholders regarding the acceptance of a service robot can be explained by the stakeholder positions involved in caring for the benefit of a specific patient. These findings encourage further research, especially under the underlying assumption that technology acceptance in healthcare is not just a patient issue, but a stakeholder issue. Stakeholder mapping is a valid tool to analyze the interdependencies affecting the acceptance of robots, and we therefore suggest using it to further analyze these issues.
KW - Technology acceptance
KW - Robot acceptance
KW - Stakeholder mapping
KW - Geriatrics
Y1 - 2023
U6 - https://doi.org/10.1016/j.stae.2023.100045
VL - 2
IS - 3
SP - 100045
ER -
TY - JOUR
A1 - Greiner, Christian
A1 - Peisl, Thomas
A1 - Höpfl, Felix
A1 - Beese, Olivia
T1 - Acceptance of AI in Semi-Structured Decision-Making Situations Applying the Four-Sides Model of Communication—An Empirical Analysis Focused on Higher Education
JF - Education Sciences
N2 - This study investigates the impact of generative AI systems like ChatGPT on semi-structured decision-making, specifically in evaluating undergraduate dissertations. We propose using Davis’ technology acceptance model (TAM) and Schulz von Thun’s four-sides communication model to understand human–AI interaction and the adaptations necessary for acceptance in dissertation grading.
Utilizing an inductive research design, we conducted ten interviews with respondents having varying levels of AI and management expertise, employing four escalating-consequence scenarios mirroring higher education dissertation grading. In all scenarios, the AI functioned as a sender, based on the four-sides model. Findings reveal that technology acceptance for human–AI interaction is adaptive but requires modifications, particularly regarding AI’s transparency. Testing the four-sides model showed support for three sides, with the appeal side receiving negative feedback for AI acceptance as a sender. Respondents struggled to accept the idea of AI suggesting a grading decision through an appeal. Consequently, transparency about AI’s role emerged as vital: when AI supports instructors transparently, acceptance levels are higher. These results encourage further research on AI as a receiver and on the impartiality of AI decision-making without instructor influence. This study emphasizes communication modes in learning ecosystems, especially in semi-structured decision-making situations with AI as a sender, while highlighting the potential to enhance the acceptance of AI-based decision-making.
KW - AI as a sender
KW - higher education
KW - semi-structured decisions
KW - four-sides model
KW - technology acceptance model
Y1 - 2023
U6 - https://doi.org/10.3390/educsci13090865
VL - 13
IS - 9
SP - 1
EP - 11
PB - MDPI
CY - Basel, Switzerland
ER -
TY - GEN
A1 - Höpfl, Felix
ED - Höpfl, Felix
T1 - Ideenwerkstatt: KI in der Hochschullehre - Persona-Konzept mit bildgebender KI in der Vorlesung
N2 - Contribution to the "Ideenwerkstatt Hochschullehre" of the Virtuelle Hochschule Bayern (VHB) on the topic of "KI in der Hochschullehre" (AI in higher education teaching). The contribution covers the use of image-generating AI directly in the classroom, where a persona concept (design thinking) becomes more personal and more effective through an AI-generated portrait. The session took place online on 29 February 2024 as part of the nationwide project "Konzertierte Weiterbildungen zu künstlicher Intelligenz in der Hochschullehre" of the Netzwerk Landeseinrichtungen für digitale Hochschullehre (NeL) and was funded by the Stiftung Innovation in der Hochschullehre. The video is also available on YouTube: https://youtu.be/JCaUWryiEjs
KW - Künstliche Intelligenz
KW - Hochschullehre
KW - Persona
KW - Design-Thinking
KW - KI
Y1 - 2024
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:861-opus4-23833
SP - 1
EP - 13
ER -
TY - GEN
A1 - Höpfl, Felix
T1 - Pen & Paper AI-Training
T2 - Patternpool
N2 - "Pen & Paper – AI Training" offers the opportunity to understand how artificial intelligence works without any prior technical knowledge. In interactive group exercises, participants experience how large language models (LLMs) process information, recognize patterns, and support decisions, all with pen and paper. Ideal for grasping complex AI concepts in a simple, hands-on way.
KW - Interaktion
KW - Kritisches Denken
KW - Künstliche Intelligenz
Y1 - 2025
UR - https://www.patternpool.de/pattern/pen-paper-ai-training/
VL - 2025
PB - Patternpool e.V.
CY - Hamburg
ER -
TY - JOUR
A1 - Höpfl, Felix
A1 - Schellhorn, Marina
T1 - Balancing innovation and trust: Assessing artificial intelligence’s role in medical history taking and physician perspectives on patient care
JF - DESIGN+
N2 - This study explores the potential of artificial intelligence (AI) in medical history taking (anamnesis) and assesses its acceptance using technology acceptance models. Through nine expert interviews with physicians from diverse medical backgrounds, the study aims to understand concerns about and anticipated benefits of AI in the doctor–patient relationship. To demonstrate AI’s applications, digital anamnesis surveys were conducted with two actual patients, and the resulting data were interpreted by AI and reviewed by physicians. Findings indicate that physicians view AI as potentially beneficial, expecting that it can facilitate improvements in care quality, efficiency, and time savings. Despite initial concerns about AI’s ability to address individual patient needs and its impact on the doctor–patient relationship, there is significant interest in integrating AI tools into daily practice. Key issues include patient constitution, the effort-to-benefit ratio, and potential risks to patient trust. The study identifies six areas for further research: economic impact and cost-benefit analysis, patient acceptance and trust, stress reduction and job satisfaction, effects on doctor–patient relationships, development of verification mechanisms, and ethical and legal considerations. These findings underscore the complexities of AI integration in health care, emphasizing the need to address concerns about patient individuality, data privacy, and interpersonal relationships while harnessing AI’s potential.
KW - AI-enhanced anamnesis systems
KW - AI technology acceptance
KW - AI quality assumptions
KW - Idana AI in medical history taking
Y1 - 2025
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:861-opus4-29528
VL - 2025
SP - 1
PB - AccScience Publishing (ASP)
CY - Singapore
ER -
TY - JOUR
A1 - Höpfl, Felix
A1 - Ott, Robert
T1 - Potenziale und Herausforderungen der KI-Integration in die Hochschullehre: Eine explorative Analyse am Beispiel des Gesundheitsmanagements
JF - Die Neue Hochschule
N2 - This study examines how students in the bachelor's program "Management in der Gesundheitswirtschaft" (healthcare management) at the Technische Hochschule Rosenheim perceive the use of artificial intelligence (AI) in higher education teaching. Students were surveyed about their experience with, interest in, and familiarity with AI tools. The results show the following. Experience: contrary to the initial hypothesis, many students already have first practical experience, above all with ChatGPT, which they mainly use to structure and linguistically revise seminar papers. Interest: interest in learning more about AI is high; after a first application, however, the willingness to engage more deeply declines, an indication of overestimated self-competence and superficial understanding. Tool familiarity: most students are familiar with basic tools, while advanced applications remain largely unknown. Qualitative feedback points to risks: possible impairment of independent thinking, insufficient quality of generated content (e.g., fictitious or outdated sources), and technical malfunctions.
Overall, AI holds great potential to support learning, but it requires reflective didactic integration and in-depth training for teachers and learners. Future research should clarify how students can be enabled to use AI tools critically and with a focus on quality.
KW - Künstliche Intelligenz
KW - KI in der Lehre
KW - Gesundheitswirtschaft
KW - Hochschullehre
Y1 - 2025
U6 - https://doi.org/10.5281/zenodo.15474805
IS - 3
SP - 26
EP - 29
PB - hlb-Bundesvereinigung
ER -