Digitalisierung
For manufacturing firms, success in innovating IT-enabled services is a critical antecedent to benefiting from the digital servitization of their business models. The digital servitization literature has explored mechanisms for success in innovating IT-enabled services, indicating that the phenomenon is multifaceted and needs to be explained from multiple theoretical perspectives. We derive a conceptual model for success in innovating IT-enabled services that covers this multifaceted nature by drawing on knowledge-based theory and organizational control theory. We test this model using qualitative cases of IT-enabled service innovation initiatives in manufacturing firms and use set-theoretic analyses to account for the multifaceted nature of the phenomenon. The necessary condition analysis shows that a certain degree of service innovation capabilities is a prerequisite for success. A qualitative comparative analysis yields five solution terms as causal recipes for success in innovating IT-enabled services. Our results contribute to research by offering a theory-based approach that explains the multiplicity of success in IT-enabled service innovation. Practitioners benefit from our results by understanding prerequisites and causal recipes for success and by learning from unsuccessful IT-enabled service innovation initiatives in manufacturing firms. Our study also serves as an example of how to rigorously calibrate qualitative data using a structured approach.
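The set-theoretic analyses mentioned above (necessary condition analysis and qualitative comparative analysis) require qualitative case evidence to be calibrated into set memberships. As an illustration only, the following minimal sketch shows Ragin's widely used direct calibration method; it is not necessarily the calibration procedure of this study, and the anchors and example ratings are assumptions.

import math

def direct_calibrate(raw_value, full_non_membership, crossover, full_membership):
    # Map a raw score to a fuzzy-set membership in [0, 1] via the direct method.
    # The anchors are mapped to log-odds of about -3, 0, and +3, i.e. memberships
    # of roughly 0.05, 0.5, and 0.95.
    if raw_value >= crossover:
        scalar = 3.0 / (full_membership - crossover)
    else:
        scalar = 3.0 / (crossover - full_non_membership)
    log_odds = (raw_value - crossover) * scalar
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical example: service innovation capability rated on a 1-7 scale,
# with anchors 2 (fully out), 4 (crossover), and 6 (fully in).
for rating in (2, 3, 5, 6):
    print(rating, round(direct_calibrate(rating, 2, 4, 6), 2))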
In higher education, improving learning and learning success are general goals: lecturers teach content, and students should acquire that content efficiently. To structure content, learning element categories are evaluated from the students' point of view in higher education. The aim is to validate given definitions of ten learning element categories within a Learning Management System (LMS).
This paper evaluates a categorization of learning elements for organizing learning content in online education within LMSs. Ten categories of learning elements and corresponding definitions were defined in a previous work as the basis for this paper. The learning elements to examine are manuscript, exercise, quiz, brief overview, learning goal, summary, collaboration tool, auditory additional material, textual additional material, and visual additional material. To validate the definitions and to collect suggestions for improving each learning element, a survey was conducted. Besides demographic questions, the survey consists of two questions on the acceptance of the definitions and asks for suggested improvements. 148 students between the ages of 19 and 35 participated in the survey in the summer term of 2023. The education level of the participants ranges from undergraduates to Ph.D. students.
The results show that more than 80% of the participants accept the given definitions. Some definitions of the learning elements were changed, but the changes are restricted to additions of at most four words. This categorization of learning elements could lead to improvements in learning by giving the content more structure. With this structure, students can learn with their preferred learning elements, which could lead to more success in learning and a decreasing dropout rate at universities. In the future, the learning elements will allow content within LMSs to be classified with the goal of generating individual learning paths. Furthermore, our project will integrate these learning elements, use them to generate learning paths, and could set a new standard for personalized learning.
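For illustration, the ten categories named above could be represented in an LMS backend as a simple enumeration used to tag content and filter learning paths. This is a hedged sketch under our own assumptions; the identifiers and the tagging example are invented and are not part of the study.

from enum import Enum, auto

class LearningElement(Enum):
    # The ten categories named in the abstract; identifiers are illustrative.
    MANUSCRIPT = auto()
    EXERCISE = auto()
    QUIZ = auto()
    BRIEF_OVERVIEW = auto()
    LEARNING_GOAL = auto()
    SUMMARY = auto()
    COLLABORATION_TOOL = auto()
    AUDITORY_ADDITIONAL_MATERIAL = auto()
    TEXTUAL_ADDITIONAL_MATERIAL = auto()
    VISUAL_ADDITIONAL_MATERIAL = auto()

# Hypothetical use: tag LMS content items and filter a learning path by preference.
content = [
    ("Intro video", LearningElement.VISUAL_ADDITIONAL_MATERIAL),
    ("Chapter 1 script", LearningElement.MANUSCRIPT),
    ("Self-test 1", LearningElement.QUIZ),
]
preferred = {LearningElement.MANUSCRIPT, LearningElement.QUIZ}
path = [title for title, element in content if element in preferred]
print(path)  # ['Chapter 1 script', 'Self-test 1']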
Context:
Causal probabilistic graph-based models have gained widespread utility, enabling the modeling of cause-and-effect relationships across diverse domains. With their rising adoption in new areas, such as safety analysis of complex systems, software engineering, and machine learning, the need for an integrated lifecycle framework akin to DevOps and MLOps has emerged. Currently, such a reference for organizations interested in employing causal engineering is missing. This lack of guidance hinders the incorporation and maturation of causal methods in the context of real-life applications.
Objective:
This work contextualizes causal model usage across different stages and stakeholders and outlines a holistic view of creating and maintaining such models within the process landscape of an organization.
Method:
A novel lifecycle framework for causal model development and application, called CausalOps, is proposed. By defining key entities, dependencies, and intermediate artifacts generated during causal engineering, CausalOps establishes a consistent vocabulary and workflow model to guide organizations in adopting causal methods.
Results:
Based on an early application of the discussed methodology to a real-life problem in the automotive domain, an experience report underlining the practicability and the challenges of the proposed approach is presented.
Conclusion:
It is concluded that, besides current technical advancements in various aspects of causal engineering, an overarching lifecycle framework that integrates these methods into organizational practices is missing. Although diverse skills from adjacent disciplines are widely available, guidance on how to transfer these assets into causality-driven practices still needs to be addressed in the published literature. CausalOps' aim is to set a baseline for the adoption of causal methods in practical applications within interested organizations and the causality community.
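To make the subject of such a lifecycle concrete, the following minimal sketch shows the kind of artifact CausalOps would manage: a small causal graph built with the networkx library. The variables and edges are invented for illustration and are not taken from the paper.

import networkx as nx

# A toy causal DAG for an automotive-style example; variable names are hypothetical.
causal_graph = nx.DiGraph([
    ("SensorFault", "BrakingDelay"),
    ("RoadCondition", "BrakingDelay"),
    ("BrakingDelay", "HazardEvent"),
])

# Causal graphs must be acyclic; this is one of the checks a lifecycle would automate.
assert nx.is_directed_acyclic_graph(causal_graph)

# The graph, its parameters (e.g., conditional probability tables), and the data used
# to fit them are the intermediate artifacts such a lifecycle would version and maintain.
print(list(nx.topological_sort(causal_graph)))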
Interpretable Machine Learning for Mode Choice Modeling on Tracking-Based Revealed Preference Data
(2024)
Mode choice modeling is imperative for predicting and understanding travel behavior. For this purpose, machine learning (ML) models have increasingly been applied to stated preference and traditional self-recorded revealed preference data with promising results, particularly for extreme gradient boosting (XGBoost) and random forest (RF) models. Because of the rise in the use of tracking-based smartphone applications for recording travel behavior, we address the important and unprecedented task of testing these ML models for mode choice modeling on such data. Furthermore, as ML approaches are still criticized for leading to results that are hard to understand, we consider it essential to provide an in-depth interpretability analysis of the best-performing model. Our results show that the XGBoost and RF models far outperform a conventional multinomial logit model, both overall and for each mode. The interpretability analysis using the Shapley additive explanations approach reveals that the XGBoost model can be explained well at the overall and mode level. In addition, we demonstrate how to analyze individual predictions. Lastly, a sensitivity analysis gives insight into the relative importance of different data sources, sample size, and user involvement. We conclude that the XGBoost model performs best while also being explainable. Insights generated by such models can be used, for instance, to predict mode choice decisions for arbitrary origin–destination pairs to see what impact infrastructural changes would have on the mode share.
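As a rough illustration of the pipeline described above (not the study's actual data or configuration), the following sketch fits an XGBoost classifier on synthetic trip features and derives a SHAP-based global feature importance; all feature names, labels, and hyperparameters are assumptions.

import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for tracking-based revealed preference trips.
X = pd.DataFrame({
    "distance_km": rng.uniform(0.5, 50.0, 500),
    "has_car": rng.integers(0, 2, 500),
    "rain": rng.integers(0, 2, 500),
})
y = rng.integers(0, 4, 500)  # modes: 0 walk, 1 bike, 2 car, 3 public transport (illustrative)

model = XGBClassifier(n_estimators=100, max_depth=4, learning_rate=0.1)
model.fit(X, y)

# SHAP values attribute each prediction to the input features.
explainer = shap.TreeExplainer(model)
shap_values = np.abs(np.asarray(explainer.shap_values(X)))

# Average over everything except the feature axis to get a global importance ranking
# (the array layout of multi-class SHAP values differs between shap versions).
feature_axis = list(shap_values.shape).index(X.shape[1])
other_axes = tuple(ax for ax in range(shap_values.ndim) if ax != feature_axis)
importance = shap_values.mean(axis=other_axes)
print(dict(zip(X.columns, importance.round(3))))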
In a number of tomographic applications, data cannot be fully acquired, resulting in severely underdetermined image reconstruction. Conventional methods in such cases lead to reconstructions with significant artifacts. To overcome these artifacts, regularization methods are applied that incorporate additional information. An important example is TV reconstruction, which is known to be efficient in compensating for missing data and reducing reconstruction artifacts. On the other hand, tomographic data are also contaminated by noise, which poses an additional challenge. The use of a single regularizer must therefore account for both the missing data and the noise. A particular regularizer may not be ideal for both tasks. For example, the TV regularizer is a poor choice for noise reduction over multiple scales, in which case ℓ1 curvelet regularization methods are well suited. To address this issue, in this paper, we present a novel variational regularization framework that combines the advantages of different regularizers. The basic idea of our framework is to perform reconstruction in two stages. The first stage is mainly aimed at accurate reconstruction in the presence of noise, and the second stage is aimed at artifact reduction. Both reconstruction stages are connected by a data proximity condition. The proposed method is implemented and tested for limited-view CT using a combined curvelet–TV approach. We define and implement a curvelet transform adapted to the limited-view problem and illustrate the advantages of our approach in numerical experiments.
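Although the paper's exact formulation is not reproduced here, the two-stage idea can be sketched as follows, a plausible formalization under our own notation: A is the limited-view forward operator, y the noisy data, \Psi the curvelet transform, \alpha a regularization weight, and \delta the data proximity tolerance.

x_1 \in \operatorname*{arg\,min}_{x} \; \tfrac{1}{2}\,\| A x - y \|_2^2 + \alpha \, \| \Psi x \|_1 \qquad \text{(stage 1: noise-aware reconstruction)}

x_2 \in \operatorname*{arg\,min}_{x} \; \mathrm{TV}(x) \quad \text{s.t.} \quad \| A x - A x_1 \|_2 \le \delta \qquad \text{(stage 2: artifact reduction under data proximity)}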
Background and research question
Non-governmental organizations (NGOs) are an important part of civil society and also interact with governments, companies, and other societal actors. As the work of NGOs becomes increasingly complex, artificial intelligence (AI) appears to offer ways to cope with current and future challenges. However, little is known about how NGOs work with AI.
Methods
Five exploratory interviews with representatives of non-governmental organizations (NGOs) on knowledge, acceptance, needs, and risk assessments were conducted and analyzed; the NGOs covered different fields of activity and organizational sizes. Informal preliminary talks were held to gain an initial orientation in the research field. Based on the insights from these talks and the results of a scoping review, an interview guide was created to structure the expert interviews. The five exploratory, guideline-based expert interviews were then conducted and analyzed qualitatively.
Results
A central finding is that the topic of AI is only just arriving in NGOs and that there are not yet any established structures or concepts for its use. AI is used in individual, specific projects without being comprehensively integrated into workflows. Acceptance of AI is generally positive; the technology is seen as a potential solution to structural challenges and as support in everyday work. With the exception of large language models, however, the use of AI applications is limited to pilot projects. Younger age and affinity for technology are associated with higher acceptance. AI applications in the social or health sector that replace human interaction are viewed particularly critically. Ethical concerns and the high importance of data protection are also emphasized.
Conclusion
Artificial intelligence in non-governmental organizations is an emerging and evolving research topic. The interviews underline the need for more knowledge, ethical guidelines, and financial resources for the effective use of AI in NGOs. A comprehensive understanding of AI and a deeper, systematic integration into the workflows of these organizations still need to be developed.
Background and objective
Due to the high prevalence of dental caries, fixed dental restorations are regularly required to restore compromised teeth or replace missing teeth while retaining function and aesthetic appearance. The fabrication of dental restorations, however, remains challenging due to the complexity of the human masticatory system as well as the unique morphology of each individual dentition. Adaptation and reworking are frequently required during the insertion of fixed dental prostheses (FDPs), which increase cost and treatment time. This article proposes a data-driven approach for the partial reconstruction of occlusal surfaces based on a data set that comprises 92 3D mesh files of full dental crown restorations.
Methods
A Generative Adversarial Network (GAN) is considered for the given task in view of its ability to represent extensive data sets in an unsupervised manner and its wide variety of applications. Having demonstrated good capabilities in terms of image quality and training stability, StyleGAN-2 has been chosen as the main network for generating the occlusal surfaces. A 2D projection method is proposed in order to generate 2D representations of the provided 3D tooth data set for integration with the StyleGAN architecture. The reconstruction capabilities of the trained network are demonstrated by means of 4 common inlay types using a Bayesian Image Reconstruction method. This involves pre-processing the data in order to extract the necessary information about the tooth preparations required for the used method, as well as modifying the initial reconstruction loss.
Results
The reconstruction process yields satisfactory visual and quantitative results for all preparations, with a root mean square error (RMSE) ranging from 0.02 mm to 0.18 mm. When compared against a clinical procedure for CAD inlay fabrication, a group of dentists preferred the GAN-based restorations for 3 of the 4 inlay geometries.
Conclusions
This article shows the effectiveness of the StyleGAN architecture with a downstream optimization process for the reconstruction of 4 different inlay geometries. The independence of the reconstruction process from the initial training of the GAN enables the application of the method to arbitrary inlay geometries without time-consuming retraining of the GAN.
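The downstream optimization mentioned in the conclusion can be pictured, in a very generic form, as optimizing a latent code of a trained generator under a masked reconstruction loss. The sketch below uses a toy PyTorch generator as a stand-in for StyleGAN-2; the mask, loss weights, and all shapes are assumptions for illustration and do not reproduce the paper's Bayesian Image Reconstruction method.

import torch

# Stand-in for a trained generator mapping a latent code to a 2D depth image.
class ToyGenerator(torch.nn.Module):
    def __init__(self, latent_dim=64, image_size=32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(latent_dim, image_size * image_size),
            torch.nn.Sigmoid(),
        )
        self.image_size = image_size

    def forward(self, z):
        return self.net(z).view(-1, self.image_size, self.image_size)

generator = ToyGenerator()
target = torch.rand(1, 32, 32)   # 2D projection of the prepared tooth (toy data)
mask = torch.ones(1, 32, 32)
mask[:, 10:20, 10:20] = 0.0      # removed region that the generator should fill in

z = torch.zeros(1, 64, requires_grad=True)
optimizer = torch.optim.Adam([z], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    generated = generator(z)
    # Fit only the observed region; a small prior term keeps the latent code plausible.
    loss = ((mask * (generated - target)) ** 2).mean() + 1e-3 * z.pow(2).mean()
    loss.backward()
    optimizer.step()

reconstruction = generator(z).detach()  # generated surface fills the masked region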