Digitalization
Refine
Year of publication
Document Type
- Conference proceeding (article) (419)
- Article (201)
- Conference proceeding (presentation, abstract) (49)
- Part of a Book (48)
- Preprint (25)
- Book (14)
- Working Paper (9)
- Conference proceeding (volume) (5)
- Doctoral Thesis (4)
- Report (4)
Language
- English (608)
- German (183)
- Multiple languages (1)
Is part of the Bibliography
- no (792)
Keywords
- Offshoring (15)
- Business information system (12)
- Information technology (12)
- Digitalization (11)
- Data protection (10)
- Internet of Things (9)
- Data security (8)
- Electronic health card (8)
- Information systems (8)
- Literature review (8)
Institute
- Fakultät Informatik und Mathematik (461)
- Fakultät Elektro- und Informationstechnik (250)
- Laboratory for Safe and Secure Systems (LAS3) (232)
- Labor für Digitalisierung (LFD) (100)
- Labor Regensburg Strategic IT Management (ReSITM) (72)
- Labor eHealth (eH) (38)
- Fakultät Maschinenbau (34)
- Labor für Technikfolgenabschätzung und Angewandte Ethik (LaTe) (28)
- Fakultät Sozial- und Gesundheitswissenschaften (24)
- Labor Parallele und Verteilte Systeme (24)
Review status
- peer-reviewed (304)
- reviewed (8)
Planning modern facade systems is complex, requiring optimization across multiple domains. This paper proposes an AI-enhanced workflow for facade planning, harnessing computer vision and human input via a Large Language Model. A generative AI system then guides a parametric model to produce 3D facade designs. Automated checks provide feedback to a Reinforcement Learning system, which iteratively determines optimal solutions. These solutions are verified and finalized by human experts, ensuring improved outcomes with reduced planning time and effort. The approach illustrates how combining advanced AI methods with human expertise can address the multifactorial challenges of facade design within current industry practices.
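As a rough illustration of the generate-check-refine loop sketched in this abstract, the following minimal Python example uses a simple hill-climbing search as a stand-in for the Reinforcement Learning component; the parametric generator and the automated checks are hypothetical placeholders, not the authors' system.

    import random

    def generate_design(params):
        # Hypothetical parametric facade model: two parameters become a design description.
        return {"panel_width": params[0], "glazing_ratio": params[1]}

    def automated_checks(design):
        # Hypothetical multi-domain checks (daylight, cost) condensed into one score.
        daylight = min(design["glazing_ratio"] / 0.6, 1.0)
        cost = 1.0 - abs(design["panel_width"] - 1.5) / 1.5
        return 0.5 * daylight + 0.5 * cost

    def optimise(episodes=200, step=0.05):
        # Hill climbing as a simple stand-in for the reinforcement learning component:
        # feedback from the automated checks drives the search for better designs.
        params = [1.0, 0.4]
        best = automated_checks(generate_design(params))
        for _ in range(episodes):
            candidate = [p + random.uniform(-step, step) for p in params]
            score = automated_checks(generate_design(candidate))
            if score > best:
                params, best = candidate, score
        return generate_design(params), best

    design, score = optimise()
    print(design, score)  # the final candidate would be verified by human experts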
Model Transformations are a key element of Model-Driven Software Engineering. As soon as variability is involved, transformations become increasingly complicated. The lack of support for variability in model transformations impairs the acceptance of approaches to organized reuse such as software product lines. In this position paper, the general problem of multi-variant model transformations is formulated for MOF-based, XMI-serialized models. A simplistic case study is presented to specify the input and the expected output of such a transformation. Furthermore, requirements for tool support are defined, including a standardized representation of both multi-variant model instances and variability information, as well as an execution specification for multi-variant transformations. A literature review reveals that the problem is weakly identified and often solved using ad-hoc solutions; there exists no tool providing a general solution to the proposed problem statement. The observations presented here may serve for the future development of standards and tools.
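To make the problem statement more concrete, here is a small hypothetical Python sketch (not taken from the paper): a single-variant transformation rule is lifted to a multi-variant one by propagating each source element's presence condition to the derived element.

    # Hypothetical sketch only: a model element carries a presence condition (its
    # variability annotation), and the transformation must carry it over. Names and
    # structures are invented for illustration.
    source_model = [
        {"element": "Class Sensor", "presence": "True"},
        {"element": "Class Logger", "presence": "feature Logging"},
    ]

    def transform(element):
        # Single-variant rule: every class is mapped to a matching database table.
        name = element["element"].split()[-1]
        return {"element": f"Table {name.lower()}s"}

    def multi_variant_transform(model):
        # The multi-variant version also propagates the presence condition, so that
        # derived models stay consistent with the chosen feature configuration.
        return [dict(transform(e), presence=e["presence"]) for e in model]

    for target in multi_variant_transform(source_model):
        print(target)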
Model-driven development is a well-known practice in modern software engineering. Many tools exist which allow developers to build software in a model-based or even model-driven way, but they do not provide dedicated support for software product line development. Only recently have some approaches combined model-driven engineering and software product line engineering. In this paper, we present an approach that allows for combining feature models and Ecore-based domain models and provides extensive support to keep the mapping between the involved models consistent. Our key contribution is a declarative textual language which allows developers to phrase domain-specific consistency constraints that are preserved during the configuration process in order to ensure the context-sensitive syntactical correctness of derived domain models.
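The declarative constraint language itself is not shown in the abstract; the following Python sketch only approximates the underlying idea under assumed names: features are mapped to domain-model elements, a domain model is derived for a configuration, and one consistency rule is checked.

    # Hypothetical illustration: a feature-to-domain-element mapping plus one
    # consistency rule, approximating the declarative constraints described above.
    feature_model = {"Persistence": ["Database", "FileStore"]}  # alternative features

    domain_model = {"classes": ["Order", "OrderRepository", "FileSerializer"]}

    mapping = {
        "Database": ["OrderRepository"],
        "FileStore": ["FileSerializer"],
    }

    def derive(configuration):
        # Remove domain elements whose mapped features are deselected.
        removed = {e for f, elems in mapping.items() if f not in configuration for e in elems}
        return [c for c in domain_model["classes"] if c not in removed]

    def check_constraint(configuration, derived):
        # Example rule: if 'Database' is selected, 'OrderRepository' must survive derivation.
        return not ("Database" in configuration and "OrderRepository" not in derived)

    config = {"Database"}
    derived = derive(config)
    assert check_constraint(config, derived)
    print(derived)  # ['Order', 'OrderRepository']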
This paper considers three fundamental approaches to software development, namely manual coding, model-driven software engineering, and code generation by large language models. All of these approaches have their individual pros and cons, motivating the desire for an integrated approach. We present MoProCo, a technical solution to integrate the three approaches into a single tool chain, allowing the developer to split a software engineering task into modeling, prompting or coding sub-tasks. From a single input file consisting of static model structure, natural language prompts and/or source code fragments, Java source code is generated using a two-stage approach. A case study demonstrates that the MoProCo approach combines the desirable properties of the three development approaches by offering the appropriate level of abstraction, determinism, and dynamism for each specific software engineering sub-task.
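MoProCo's actual input format and tool chain are not detailed here; the following Python sketch merely illustrates the general two-stage idea (first resolve natural-language prompt fragments into code, then assemble the final source) using a stubbed code generator and an invented input structure.

    # Hypothetical two-stage assembly: stage 1 turns prompt fragments into code,
    # stage 2 merges static structure, generated fragments and hand-written code.
    input_spec = {
        "model":  "class TemperatureSensor { double lastValue; }",       # static structure
        "prompt": "method celsiusToFahrenheit converting lastValue",     # natural-language part
        "code":   "double reset() { lastValue = 0; return lastValue; }"  # hand-written part
    }

    def generate_from_prompt(prompt):
        # Stand-in for an LLM call; a real tool would query a code-generation model here.
        return "double celsiusToFahrenheit() { return lastValue * 9.0 / 5.0 + 32.0; }"

    def assemble(spec):
        generated = generate_from_prompt(spec["prompt"])   # stage 1
        body = "\n    ".join([generated, spec["code"]])    # stage 2
        return spec["model"].replace("}", f"\n    {body}\n}}")

    print(assemble(input_spec))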
Ensuring the security of modern automotive systems is critical due to their increasing complexity and reliance on interconnected Electronic Control Units. The Controller Area Network still serves as a key communication protocol within these systems, making it a primary target for security testing. Traditional fuzz testing approaches for Controller Area Networks often rely on random or brute-force message generation and do not leverage the system's feedback to improve the generation process. This paper introduces the Genetic Fuzz Data Generator, a fuzzing method that leverages Genetic Algorithms and side-channel analysis to enhance Controller Area Network security testing. The Genetic Fuzz Data Generator dynamically refines its fuzzing strategy by evaluating system responses through side-channel data, such as processing unit temperatures and power supply variations. By structuring Controller Area Network messages as genetic individuals and applying evolutionary principles (selection, crossover, and mutation), the Genetic Fuzz Data Generator systematically identifies active Controller Area Network IDs and generates targeted fuzz messages. Experimental validation was conducted on a real automotive electronic control unit within a controlled laboratory setup. The first results demonstrated the approach's effectiveness, revealing system anomalies, including a Denial of Service vulnerability that disrupted functions of the investigated Electronic Control Unit. The findings highlight the potential of feedback-driven fuzzing for improving the efficiency of black-box security testing in Controller Area Network-based systems. Future research could further optimize fitness functions or explore additional side-channel metrics.
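A heavily simplified Python sketch of such an evolutionary fuzzing loop is shown below; the frame encoding, the side-channel reading, and the fitness weighting are invented placeholders rather than the actual Genetic Fuzz Data Generator.

    import random

    def random_frame():
        # A CAN frame candidate: 11-bit identifier plus up to 8 data bytes.
        return {"can_id": random.randrange(0x800),
                "data": [random.randrange(256) for _ in range(8)]}

    def read_side_channels(frame):
        # Placeholder for real measurements (ECU temperature, supply current, ...);
        # faked here so the example stays self-contained and runnable.
        return {"temp_delta": random.random(), "current_delta": random.random()}

    def fitness(frame):
        # Frames that provoke stronger side-channel deviations score higher.
        m = read_side_channels(frame)
        return m["temp_delta"] + 2.0 * m["current_delta"]

    def crossover(a, b):
        cut = random.randrange(1, 8)
        return {"can_id": random.choice([a["can_id"], b["can_id"]]),
                "data": a["data"][:cut] + b["data"][cut:]}

    def mutate(frame, rate=0.1):
        data = [b ^ random.randrange(256) if random.random() < rate else b
                for b in frame["data"]]
        return {"can_id": frame["can_id"], "data": data}

    population = [random_frame() for _ in range(20)]
    for generation in range(10):
        population.sort(key=fitness, reverse=True)   # selection
        parents = population[:10]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(10)]
        population = parents + children

    print(max(population, key=fitness))  # most "interesting" frame found so far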
Towards a Taxonomy for Digital Assistant Technologies: Addressing the Jingle-Jangle Fallacies
(2025)
This study proposes a unified taxonomy for Digital Assistant Technologies (DATs) to resolve terminological inconsistencies and eliminate »Jingle-Jangle fallacies.« By employing a systematic taxonomy development method on 137 papers, the framework categorizes DATs across four meta-characteristics: AI technology, context, intelligence, and interaction. This taxonomy facilitates the clear differentiation of three primary DAT concepts: assistant, chatbot, and agent. By providing a structured framework, the study enhances conceptual clarity, fosters more focused research, and ensures better alignment of DATs.
This paper presents a novel two-stage approach for computed tomography (CT) reconstruction, focusing on sparse-angle and low-dose setups to minimize radiation exposure while maintaining high image quality. Two-stage approaches consist of an initial reconstruction followed by a neural network for image refinement. In the initial reconstruction, we apply the backprojection (BP) instead of the traditional filtered backprojection (FBP). This enhances computational speed and offers potential advantages for more complex geometries, such as fan-beam and cone-beam CT. Additionally, BP addresses noise and artifacts in sparse-angle CT by leveraging its inherent noise-smoothing effect, which reduces streaking artifacts common in FBP reconstructions. For the second stage, we fine-tune the DRUNet proposed by Zhang et al. to further improve reconstruction quality. We call our method BP-DRUNet and evaluate its performance on a synthetically generated ellipsoid dataset alongside the well-established LoDoPaB-CT dataset. Our results show that BP-DRUNet produces competitive results in terms of PSNR and SSIM metrics compared to its FBP-based counterpart, FBP-DRUNet, and delivers visually competitive results across all tested angular setups.
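The following toy Python sketch shows only the two-stage structure (plain backprojection followed by a small, untrained CNN as a stand-in for the fine-tuned DRUNet); it assumes scikit-image and PyTorch are available and is not the authors' BP-DRUNet implementation.

    import numpy as np
    import torch
    import torch.nn as nn
    from skimage.transform import radon, iradon

    phantom = np.zeros((64, 64), dtype=np.float32)
    phantom[24:40, 24:40] = 1.0                            # simple synthetic object

    angles = np.linspace(0.0, 180.0, 30, endpoint=False)   # sparse-angle setup
    sinogram = radon(phantom, theta=angles)

    # Stage 1: plain backprojection (no ramp filter), as opposed to FBP.
    bp = iradon(sinogram, theta=angles, filter_name=None, output_size=64)

    # Stage 2: a stand-in refinement network (the real method fine-tunes DRUNet).
    refiner = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )
    x = torch.from_numpy(bp.astype(np.float32))[None, None]
    refined = refiner(x)                                   # untrained; training is omitted
    print(bp.shape, tuple(refined.shape))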
Today, large language models are a very efficient tool for human-computer interaction using natural language. Chatbots like ChatGPT and their corresponding APIs can be used to solve a large variety of tasks that are provided in human-comprehensible sentences, for example generating SQL queries. Executing spoken SQL queries in a database system poses a challenge because the various syntactical details of SQL are usually not provided verbally. Here, the LLM can help to augment the recognized raw query with the syntax elements needed for successful execution. Furthermore, the correct spelling of table and column names can be derived from the database schema provided in the LLM prompt. This work showcases four use cases in which LLMs assist in querying database systems: (1) a plugin for phpMyAdmin for voice-query input in natural language, (2) a chart generator, (3) an Alexa skill, and (4) a speech-controlled action game, SQL Invaders.
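A minimal Python sketch of the schema-augmented prompting idea follows; the prompt wording, model name, and example schema are assumptions, not the setup used in this work.

    from openai import OpenAI

    schema = "CREATE TABLE employees (id INT, name VARCHAR(50), salary INT, dept VARCHAR(30));"
    spoken = "show me the names of all employees in sales earning more than fifty thousand"

    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, not necessarily the one used in the paper
        messages=[
            {"role": "system",
             "content": f"Translate the user's request into one MySQL query. Schema: {schema} "
                        "Use exact table and column names. Return only SQL."},
            {"role": "user", "content": spoken},
        ],
    )
    sql = response.choices[0].message.content.strip()
    print(sql)  # e.g. SELECT name FROM employees WHERE dept = 'sales' AND salary > 50000;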
When working with today's relational databases, there is usually a clear boundary between the database server and the application, which interfaces with the database system using the query language SQL. The concept of stored procedures allows complex parts of the business logic to be moved into the database server for various reasons, for instance to reduce the latency of ELT processes that involve several database queries building on each other, such as distributing records into tables according to their attribute values. Creating and maintaining such stored procedures can be a challenging task, however. The idea pursued in this paper is to create a programming language as well as a compilation and execution environment that allows the user to mark parts of the application code to be automatically compiled into and later executed as a stored procedure in the database instead of in the execution environment of the actual application. This blurs the border between database and application and provides a natural and maintenance-friendly way of offloading latency-sensitive parts of the code to the database system.
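The paper proposes its own language and environment; purely as an illustration of the marking idea, the following Python sketch uses a hypothetical decorator that registers a function for execution as a stored procedure (the "compilation" is faked with hand-written SQL).

    STORED_PROCEDURES = {}

    def stored_procedure(func):
        # Mark a function for offloading; a real tool would compile the function body.
        # Here the SQL is hand-written, so this is only a stand-in for that compilation step.
        sql = (
            f"CREATE PROCEDURE {func.__name__}()\n"
            "BEGIN\n"
            "  INSERT INTO archived_orders SELECT * FROM orders WHERE status = 'done';\n"
            "  DELETE FROM orders WHERE status = 'done';\n"
            "END"
        )
        STORED_PROCEDURES[func.__name__] = sql  # would be installed via CREATE PROCEDURE

        def call_in_database(*args, **kwargs):
            # A real runtime would send "CALL <name>()" to the database instead of running locally.
            return f"CALL {func.__name__}()"

        return call_in_database

    @stored_procedure
    def archive_done_orders():
        # Original application-side logic; after marking, it is meant to run inside the database.
        pass

    print(archive_done_orders())                     # CALL archive_done_orders()
    print(STORED_PROCEDURES["archive_done_orders"])  # the generated stored procedure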
Does the Tool Matter? Exploring Some Causes of Threats to Validity in Mining Software Repositories
(2025)
Software repositories are an essential source of information for software engineering research on topics such as project evolution and developer collaboration. Appropriate mining tools and analysis pipelines are therefore an indispensable precondition for many research activities. Ideally, valid results should not depend on technical details of data collection and processing. It is, however, widely acknowledged that mining pipelines are complex, with a multitude of implementation decisions made by tool authors based on their interests and assumptions. This raises the question of whether (and to what extent) tools agree on their results and are interchangeable. In this study, we use two tools to extract and analyse ten large software projects, quantitatively and qualitatively comparing results and derived data to better understand this concern. We analyse discrepancies from a technical point of view, and adjust code and parametrisation to minimise replication differences. Our results indicate that despite similar trends, even simple metrics such as the numbers of commits and developers may differ by up to 500%. We find that such substantial differences are often caused by minor technical details. We show how tool-level and data post-processing changes can overcome these issues, but find that they may require considerable effort. We summarise the identified causes in our lessons learned to help researchers and practitioners avoid common pitfalls, and reflect on implementation decisions and their influence in ensuring that obtained data meets explicit and implicit expectations. Our findings lead us to hypothesise that similar uncertainties exist in other analysis tools, which may limit the validity of conclusions drawn in tool-centric research.
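As a small illustration of how pipeline details can shift even basic metrics, the following Python sketch counts commits for the same repository with two different pipelines (PyDriller versus plain git); the repository path is a placeholder and these are not necessarily the tools compared in the study.

    # Illustrative only: differences between such counts often come down to details
    # such as merge-commit handling or branch selection, as discussed above.
    import subprocess
    from pydriller import Repository   # assumes PyDriller is installed

    repo_path = "/path/to/some/repo"   # hypothetical local clone

    pydriller_count = sum(1 for _ in Repository(repo_path).traverse_commits())

    git_output = subprocess.run(
        ["git", "-C", repo_path, "rev-list", "--count", "--all"],
        capture_output=True, text=True, check=True,
    )
    git_count = int(git_output.stdout.strip())

    print(f"PyDriller: {pydriller_count} commits, git rev-list --all: {git_count} commits")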