Institut für Informatik
The rapid evolution of wireless communication technologies, particularly the introduction
of Fifth-Generation (5G) networks and the anticipated transition to Sixth-Generation (6G)
systems, ushers in a new era of connectivity, enabling transformative applications across
industrial automation, the Internet of Everything (IoE), and the Industrial Internet of Things
(IIoT). However, the exponential growth in the number of connected devices, stringent reliability
requirements, and increasing security challenges pose significant hurdles for current network
architectures. This dissertation addresses these challenges by proposing innovative frameworks
and mechanisms that enhance reliability, optimize resource utilization, and strengthen security
and trust management in next-generation mobile networks.
The first contribution of this dissertation focuses on reliability enhancements in 5G networks.
While existing mechanisms, such as Dual Connectivity (DC) and Network Function (NF)
redundancy, provide partial solutions, they do not fully resolve application-layer reliability
and dynamic server failover. To bridge this gap, this work introduces the Make-Before-Break-
Reliability (MBBR) and enhanced Make-Before-Break-Reliability (eMBBR) mechanisms. These
frameworks proactively establish redundant communication paths, ensuring seamless failovers
with minimal latency and service disruption. By extending reliability to the application layer
and integrating adaptive path selection and dynamic failover capabilities, these mechanisms
offer robust solutions for latency-sensitive and mission-critical applications.
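The core make-before-break idea can be illustrated with a minimal sketch; the names (Path, MBBRConnection) and the failover logic below are invented for illustration and are not the thesis's protocol or implementation:

```python
class Path:
    """Stand-in for a pre-established transport session to one endpoint."""
    def __init__(self, endpoint: str, up: bool = True):
        self.endpoint, self.up = endpoint, up
    def send(self, payload: bytes) -> None:
        if not self.up:
            raise OSError(f"path via {self.endpoint} is down")
    def healthy(self) -> bool:
        return self.up        # e.g., a recent probe within the latency budget

class MBBRConnection:
    """Redundant paths exist *before* a failure, so failover is a pointer swap."""
    def __init__(self, primary: Path, standbys: list[Path]):
        self.active, self.standbys = primary, standbys
    def send(self, payload: bytes) -> None:
        try:
            self.active.send(payload)
        except OSError:
            self._failover()
            self.active.send(payload)      # retransmit on the standby path
    def _failover(self) -> None:
        for i, path in enumerate(self.standbys):
            if path.healthy():             # adaptive path selection
                self.standbys[i], self.active = self.active, path
                return
        raise RuntimeError("no healthy standby path available")

conn = MBBRConnection(Path("server-a"), [Path("server-b")])
conn.active.up = False                     # simulate a server failure
conn.send(b"sensor reading")               # transparently uses server-b
```

Because the standby session already exists when the failure occurs, the switch adds no connection-setup latency to the failover.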
The second major contribution addresses bandwidth optimization for industrial networks.
The black channel paradigm, widely adopted for industrial safety applications, relies heavily on
cyclic keep-alive messages to detect connection loss, leading to significant signaling overhead.
This dissertation proposes a novel solution leveraging 5G Channel State Information (CSI)
to replace cyclic messaging with real-time connection quality monitoring. By exposing CSI
metrics, such as Signal-to-Noise Ratio (SNR) and Channel Quality Indicator (CQI), to the
application layer, the proposed mechanism reduces bandwidth consumption while maintaining
the safety and reliability requirements of industrial networks.
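A minimal sketch of this idea follows; the thresholds are illustrative rather than normative, and the CSI feed is assumed to arrive via some network exposure interface:

```python
from dataclasses import dataclass

@dataclass
class CSISample:
    snr_db: float   # Signal-to-Noise Ratio reported for the radio link
    cqi: int        # Channel Quality Indicator (0..15 in NR)

SNR_FLOOR_DB, CQI_FLOOR = 5.0, 4   # illustrative safety thresholds

def link_usable(s: CSISample) -> bool:
    """Judge the safety channel from CSI alone, so no cyclic keep-alive
    frames need to be exchanged at the application layer."""
    return s.snr_db >= SNR_FLOOR_DB and s.cqi >= CQI_FLOOR

# A degradation in the CSI stream triggers the same reaction that a missed
# keep-alive would have triggered under the black channel paradigm:
for sample in [CSISample(17.2, 11), CSISample(3.1, 2)]:
    if not link_usable(sample):
        print("enter safe state:", sample)
```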
Addressing the growing complexity of security requirements in IIoT, the third contribution
introduces the AF-based Security Framework (AERO). This framework empowers
application providers to dynamically apply cryptographic mechanisms to the user plane,
overcoming the limitations of legacy protocols and eliminating the need for redundant security
layers. By ensuring backward compatibility and enabling both static and dynamic configuration
of user plane encryption, AERO enhances security while minimizing computational overhead
and reducing transmission delays.
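The following sketch illustrates policy-driven user-plane protection with AES-GCM; the class, the policy switch, and the framing are assumptions made for illustration, not AERO's actual design:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class UserPlaneCrypto:
    def __init__(self, key: bytes, enabled: bool = False):
        self._aead = AESGCM(key)
        self.enabled = enabled            # static or dynamic configuration

    def protect(self, pdu: bytes, header: bytes) -> bytes:
        if not self.enabled:              # backward compatible: pass-through
            return header + pdu
        nonce = os.urandom(12)
        # The header stays readable for routing but is integrity-protected
        # as associated data; the payload itself is encrypted.
        return header + nonce + self._aead.encrypt(nonce, pdu, header)

key = AESGCM.generate_key(bit_length=256)
crypto = UserPlaneCrypto(key, enabled=True)   # provider toggles at runtime
frame = crypto.protect(pdu=b"sensor=42", header=b"\x45\x00")
```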
The fourth and final contribution redefines trust management in mobile networks through
the SecUre deleGAtion of tRust (SUGAR) framework. Traditional trust models, which rely
on identity chips for each connected device, are becoming increasingly impractical in the
IoE era, where billions of devices require connectivity. The SUGAR framework introduces a
delegation-based trust model, allowing Parent Devices (PaDs) to delegate trust to multiple
Child Devices (ChDs) securely. This approach eliminates the need for individual identity chips,
significantly reducing costs and enhancing scalability. Integration with System-on-a-Chip
(SoC)-based identity enclaves further strengthens the security of trust credentials.
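The delegation principle can be sketched with Ed25519 signatures; the credential format and field names below are invented for illustration and will differ from SUGAR's actual encoding:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

pad_key = Ed25519PrivateKey.generate()   # would live in the PaD's SoC enclave
chd_key = Ed25519PrivateKey.generate()   # generated on the Child Device

chd_pub = chd_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
credential = b"delegate:" + chd_pub + b"|scope=telemetry|exp=2026-01-01"
signature = pad_key.sign(credential)

# The network verifies the PaD -> ChD chain instead of requiring an
# identity chip in every single device:
pad_key.public_key().verify(signature, credential)   # raises if forged
```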
The findings of this dissertation offer substantial contributions to both academia and
industry. The proposed frameworks effectively address critical gaps in current 5G standards
and provide valuable contributions for developing the 6G framework. By enhancing reliability,
optimizing bandwidth, and redefining security and trust management, this dissertation provides
a comprehensive foundation for the design and deployment of next-generation mobile networks.
Furthermore, the solutions presented are adaptable to a wide range of applications, including
industrial automation, autonomous systems, and smart city infrastructures.
In conclusion, this dissertation represents a significant step toward realizing the full
potential of next-generation mobile networks. By addressing key challenges in reliability,
resource optimization, security, and trust management, the proposed frameworks pave the way
for scalable, secure, and efficient mobile ecosystems that are essential for the dynamic and
interconnected world of the future.
Improving patient care is an ongoing process, evolving from early evidence-based practices to modern AI-driven approaches. This thesis explores three key research directions aimed at improving clinical decision-making through AI. Adverse events, defined as negative and harmful outcomes that occur during medical care, present major challenges for hospitals. Most data-driven research using electronic health records relies on data from tertiary referral hospitals, but their patient population differs from that of hospitals of medium level of care. The first major contribution of this thesis is a data-driven Trigger Tool for predicting adverse events, trained on data from a hospital of medium level of care. This tool uses a concise set of laboratory values measured within the first 24 hours of hospitalization. In addition to models using numerical features, we devised models using dichotomized features that indicate whether a laboratory value falls below or above a reference threshold. Our findings show that models using numerical features achieve high accuracy in predicting acute kidney injury and the COVID-19-associated adverse events of in-hospital mortality and transfer to the ICU. Models using dichotomous features perform only slightly worse but offer better interpretability.
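The dichotomization idea can be illustrated on synthetic data; the thresholds, features, and labels below are invented, whereas the real Trigger Tool is trained on first-24-hour laboratory values of a medium-level-of-care hospital:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
creatinine = rng.lognormal(0.0, 0.4, 500)     # mg/dL, synthetic
crp = rng.lognormal(1.5, 1.0, 500)            # mg/L, synthetic
y = (creatinine > 1.3).astype(int)            # synthetic adverse-event label

REF = {"creatinine": 1.2, "crp": 5.0}         # illustrative reference limits

X_numeric = np.column_stack([creatinine, crp])
X_dichotomous = np.column_stack([creatinine > REF["creatinine"],
                                 crp > REF["crp"]]).astype(int)

for name, X in [("numeric", X_numeric), ("dichotomous", X_dichotomous)]:
    model = LogisticRegression().fit(X, y)
    print(name, model.score(X, y))  # dichotomous scores slightly worse, but
                                    # each coefficient maps to "above reference"
```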
The second major contribution is the online-updateable AI model OptAB for selecting optimal antibiotics in sepsis patients. OptAB aims to minimize the sepsis-related organ failure score (SOFA-Score) while accounting for nephrotoxic and hepatotoxic side effects. OptAB relies on a hybrid neural network differential equation algorithm tailored to the special properties of patient data, including irregular measurements, missing values, and time-dependent confounding. Time-dependent confounding describes a dependence between time-varying covariates and the treatment decisions made by physicians, often leading to biased treatment effect estimates. OptAB generates disease course forecasts for (combinations of) the antibiotics vancomycin, ceftriaxone, and piperacillin/tazobactam and learns realistic treatment effects on the SOFA-Score and on laboratory values indicative of side effects. Results indicate that OptAB’s recommendations achieve faster efficacy than the administered antibiotics while reducing side effects.
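A heavily simplified sketch of the hybrid differential-equation idea follows, with a linear stand-in for the neural vector field and an irregular measurement grid; this is not OptAB's architecture:

```python
import numpy as np

def f_theta(x, treatment, theta):
    # stand-in for a neural network; here a linear field plus a
    # treatment-dependent control input
    A, B = theta
    return A @ x + B[treatment]

def forecast(x0, times, treatment, theta):
    """Euler-integrate the latent patient state over an irregular time grid."""
    x, traj = x0, [x0]
    for t0, t1 in zip(times, times[1:]):
        x = x + (t1 - t0) * f_theta(x, treatment, theta)   # dt varies
        traj.append(x)
    return np.array(traj)   # e.g., first component read out as a SOFA proxy

theta = (np.array([[-0.3, 0.0], [0.1, -0.2]]),
         {"vancomycin": np.array([-0.5, 0.2])})
times = np.array([0.0, 0.5, 1.25, 3.0])        # irregular measurement times
print(forecast(np.array([4.0, 1.0]), times, "vancomycin", theta))
```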
The third major contribution is DoseAI, an online-updateable AI model that extends OptAB to optimize dosing regimens. DoseAI mitigates time-dependent confounding in dosage selection by minimizing the absolute Spearman correlation between predicted and future treatment dosages. It forecasts disease progression under alternative dosing regimens and proposes optimal chemotherapy and radiotherapy dosing regimens for synthetic cancer patients. These regimens effectively reduce the tumor volume while adhering to varying maximum allowed weight loss constraints, used as a measure of toxicity.
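The deconfounding penalty can be sketched as follows; this is purely illustrative, and in actual training a differentiable surrogate of the Spearman correlation would be required:

```python
import numpy as np
from scipy.stats import spearmanr

def confounding_penalty(predicted: np.ndarray, future_dose: np.ndarray) -> float:
    """Absolute rank correlation between model predictions and the dosages
    actually administered later; 0 means the predictions carry no rank
    information about the physician's upcoming dosing choice."""
    rho, _ = spearmanr(predicted, future_dose)
    return abs(rho)

# scipy's version only illustrates the quantity being minimized:
penalty = confounding_penalty(np.array([1.0, 2.0, 1.5, 0.5]),
                              np.array([2.0, 2.5, 1.0, 0.8]))
print(penalty)
```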
The integration of the different stakeholder needs and environmental constraints is the key goal of requirements engineering. This demands collaboration between the involved parties in order to reach an “understandability of the system”, which is particularly challenging for collaborations across different organisations. High-quality requirements engineering is the key factor in addressing these challenges. Requirements are input to all development steps and carry the knowledge to be exchanged; requirements engineering spans the overall life cycle and is, in essence, a knowledge management task.
The main goal of the T-Reqs framework presented in this thesis is to enable semantic interoperability and to sustain knowledge through a conceptualization of the requirements engineering process applied to European space projects. T-Reqs’ objective is to formally capture the information carried by the requirements in order to provide high-quality inputs for the subsequent system and discipline-specific development tasks, in particular within model-based systems engineering. Emphasis is placed on the nature of the relationships that exist among requirements and requirement documents. The T-Reqs formalism addresses the structuring of requirements as well as their potential reuse, e.g., in product line development or even between different projects. This implies an overall System Requirements Specification that is distributed over many specification documents and involves requirements at different levels of abstraction, from abstract goals to implementation details. This thesis focuses especially on the specification and validation of such requirements documents.
The T-Reqs traceability model provides a means not only to trace individual requirements, but also to consider relations among views such as documents, taking into account the role they play for stakeholders, especially in reuse. It is shown how the formalization of dependencies, such as for the tailoring of standards, enables automated quality checks that facilitate reviews and enhance the completeness and consistency of the overall specification.
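A toy version of such an automated check might look as follows; the data model, identifiers, and rules are invented for illustration and do not reproduce the T-Reqs formalism:

```python
# Two invented rules: every non-system requirement must trace to a parent,
# and every tailored-out standard requirement needs a recorded rationale.
requirements = {
    "SYS-001": {"doc": "system",   "traces_to": []},
    "SW-010":  {"doc": "software", "traces_to": ["SYS-001"]},
    "SW-011":  {"doc": "software", "traces_to": []},          # violation
}
tailoring = {"STD-REQ-42": {"status": "not applicable", "rationale": ""}}

def check(reqs, tailoring):
    findings = []
    for rid, r in reqs.items():
        if r["doc"] != "system" and not r["traces_to"]:
            findings.append(f"{rid}: no trace to a parent requirement")
    for sid, t in tailoring.items():
        if t["status"] == "not applicable" and not t["rationale"]:
            findings.append(f"{sid}: tailoring lacks a justification")
    return findings

print(check(requirements, tailoring))   # review support: a list of findings
```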
Towards the structuring of the requirements themselves, different syntactic template systems aim to increase the quality of requirements documentation. Within this thesis, a comparative evaluation of these notations is conducted, supporting that claim and differentiating the strengths and weaknesses of the different approaches. Special emphasis is laid not only on documentation quality, but also on the usefulness of these semi-formal notations for integration with model-based development methods. This is achieved through the representation of concepts, which can be managed in special contextualised glossaries.
Overall, it is shown that the conceptualization of requirements engineering knowledge can support requirements engineering in different aspects, and that a holistic approach to integrating the different tasks lays the foundation for semantic interoperability spanning organizations and life-cycle phases.
This habilitation thesis compiles research on the challenges of complex networks in computer science and their applications. It includes case studies on interdisciplinary research in life sciences, computational social sciences, and digital humanities. In the life sciences, knowledge graph approaches are commonly used for clinical and biomedical data. This thesis focuses on context mining, algorithmic challenges, and link prediction. In the computational social sciences, network approaches aim to connect social network analysis with ontology-driven research on the labor market. Although data sets are frequently available in the social sciences, this is not always the case in the humanities. Therefore, when applying complex network approaches such as social network analysis to textual data, hermeneutical and methodological considerations are necessary. Once these considerations are addressed, data science methods such as text mining can be used to construct networks from texts. This thesis presents two case studies on social network analysis, in addition to addressing the challenges of interdisciplinary research on complex networks in computer science. By describing three different domains, it demonstrates the existence of a common toolbox that utilizes methods from data science and graph theory. Consequently, this thesis argues for more interdisciplinary exchange.
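As one concrete element of that common toolbox, a standard link-prediction score can be computed on a small social network as follows; the graph and candidate pairs are illustrative and not tied to the case studies above:

```python
import networkx as nx

G = nx.karate_club_graph()                    # stand-in social network
candidates = [(0, 33), (0, 9), (15, 16)]      # non-adjacent pairs to score
# Jaccard similarity of neighborhoods is one classic link-prediction score:
for u, v, score in nx.jaccard_coefficient(G, candidates):
    print(f"link ({u},{v}) predicted with score {score:.2f}")
```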
The proliferation of online abuse on social media platforms has emerged as a significant concern, negatively impacting users' mental health and online experiences. While the Natural Language Processing (NLP) community has developed various computational methods for abuse detection, including Support Vector Machines (SVMs) and Convolutional Neural Networks (CNNs), existing approaches predominantly focus on identifying explicit forms of abuse. This narrow focus overlooks subtle and contextual forms of online harassment, which can be equally damaging to users' wellbeing.
This thesis presents a novel approach to online abuse detection by integrating contextual embeddings with sentiment analysis features through the fine-tuning of Large Language Models (LLMs). Our methodology leverages a comprehensive dataset of 47,000 annotated tweets for training, combined with sentiment analysis capabilities developed using 50,000 IMDB movie reviews. The system employs the DistilBERT architecture to develop a sophisticated detection framework capable of identifying six distinct categories of abuse: ethnicity-based, age-based, gender-based, religion-based, other cyberbullying, and non-cyberbullying content. We established a rigorous evaluation framework employing multiple metrics, including accuracy, recall, and F1 score, to assess the model's performance in detecting both explicit and nuanced forms of online abuse.
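A minimal sketch of such a six-class setup with the Hugging Face transformers API follows; the label order and all hyperparameters are assumptions, and this is not the author's training pipeline:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["ethnicity", "age", "gender", "religion",
          "other_cyberbullying", "not_cyberbullying"]

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(LABELS))

batch = tok(["example tweet to classify"], return_tensors="pt",
            truncation=True, padding=True)
with torch.no_grad():
    logits = model(**batch).logits
print(LABELS[int(logits.argmax(dim=-1))])   # weights are untrained here:
                                            # fine-tune on the annotated
                                            # tweets before real use
```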
The integrated system achieved an overall accuracy of 85% across the six categories of the cyberbullying dataset, outperforming other methodologies applied to the same data. In direct comparison, our approach, which uniquely combines contextual embeddings with sentiment analysis, demonstrated significant improvements over traditional fine-tuning methods, such as those using only BERT or RoBERTa, particularly in detecting subtle forms of abuse. Most notably, our system was more effective at identifying passive-aggressive content and context-dependent harassment, challenges that often cause conventional detection methods to fall short. This enhanced performance can be attributed to the model's ability to capture nuanced linguistic cues through its integrated analysis of both contextual information and sentiment, thereby offering a more refined interpretation of potentially harmful content.
This research emphasizes the critical importance of incorporating subtle abuse detection into online content moderation systems. By developing more sophisticated detection methods that can identify both overt and nuanced forms of harassment, this work contributes to the creation of safer and more inclusive online spaces that facilitate constructive dialogue. The findings of this study have significant implications for the development of more effective content moderation tools and the broader goal of fostering healthier online communities.
Analysis and Evaluation of the Resilience of Companies and Business Processes from a Resource Perspective
(2025)
Companies can be affected by events that adversely impact their business operations. These events can originate from corporate environments or within companies themselves. The effects of these events may be quite diverse and, in the worst case, can threaten the survival of companies. To deal with events that can adversely impact business operations, the concept of resilience can be used. It relates to the ability of companies to handle adverse circumstances and encompasses different aspects, from measuring to restoring and strengthening the resilience of companies. This dissertation deals with the concept of resilience within the corporate context. It considers resilience from a company and a business process perspective and provides different research contributions for these contexts. From a company perspective, a concept and a model are presented that serve as the basis for analysing the resilience of companies. The concept shows essential elements that are important for considering the resilience of companies. The model outlines the range that can be used to analyse the resilience of companies. Furthermore, a corporate maturity model is introduced to assess the resilience of companies. It encompasses different attributes and resilience levels to determine and improve the resilience of companies. From a business process perspective, a lifecycle and metrics for business process resilience are presented. The lifecycle shows different phases relating to resilience considerations of business processes. The metrics are used to measure the resilience of business processes.
Assessing ChatGPT’s Performance in Analyzing Students’ Sentiments: A Case Study in Course Feedback
(2024)
The emergence of large language models (LLMs) like ChatGPT has impacted fields such as education, transforming natural language processing (NLP) tasks like sentiment analysis. Transformers form the foundation of LLMs, with BERT, XLNet, and GPT as key examples. ChatGPT, developed by OpenAI, is a state-of-the-art model, and its ability in natural language tasks makes it a potential tool for sentiment analysis. This thesis reviews current sentiment analysis methods and examines ChatGPT’s ability to analyze sentiments across three labels (Negative, Neutral, Positive) and five labels (Very Negative, Negative, Neutral, Positive, Very Positive) on a dataset of student course reviews. Its performance is compared with fine-tuned state-of-the-art models such as BERT, XLNet, bart-large-mnli, and RoBERTa-large-mnli using quantitative metrics. Using seven prompting techniques, i.e., different ways of instructing ChatGPT, this work also analyzes, with qualitative metrics, how well it understands complex linguistic nuances in the given texts. BERT and XLNet outperform ChatGPT mainly due to their bidirectional nature, which allows them to understand the full context of a sentence rather than processing it only left to right. This, combined with fine-tuning, helps them capture patterns and nuances better. ChatGPT, as a general-purpose, open-domain model, processes text unidirectionally, which can limit its context understanding. Despite this, ChatGPT performed comparably to XLNet and BERT in the three-label scenario and outperformed the remaining models. Fine-tuned models excelled in the five-label case. Moreover, ChatGPT has shown impressive knowledge of the language. Chain-of-Thought (CoT), i.e., prompting with step-by-step instructions, was the most effective prompting technique. ChatGPT showed promising performance in correctness, consistency, relevance, and robustness, except for detecting irony. As education evolves with diverse learning environments, effective feedback analysis becomes increasingly valuable. Addressing ChatGPT’s limitations and leveraging its strengths could enhance personalized learning through better sentiment analysis.
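A Chain-of-Thought prompt of the kind described might look as follows; the prompt wording and the model name are illustrative placeholders rather than the thesis's exact setup, using the openai>=1.0 client:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

review = "The lectures were dense, but the tutorials saved the course."
prompt = (
    "Classify the sentiment of the following course review as Negative, "
    "Neutral, or Positive. Think step by step: first list the positive "
    "and negative aspects mentioned, then weigh them, then answer with "
    f"exactly one label.\n\nReview: {review}"
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",                       # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```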
Exploring Academic Perspectives: Sentiments and Discourse on ChatGPT Adoption in Higher Education
(2024)
Artificial intelligence (AI) is becoming more widely used in a number of industries, including the field of education. Applications of AI are becoming crucial for schools and universities, whether for automated evaluation, smart educational systems, individualized learning, or staff support. ChatGPT, an AI-based chatbot, offers coherent and helpful replies based on analyzing large volumes of data. Integrating ChatGPT, a sophisticated Natural Language Processing (NLP) tool developed by OpenAI, into higher education has sparked significant interest and debate. Since the technology has already been adopted by many students and teachers, this study analyzes the sentiments expressed on university websites regarding the integration of ChatGPT into education by creating a comprehensive sentiment analysis framework using a Hierarchical Residual RSigELU Attention Network (HR-RAN). The proposed framework addresses several challenges in sentiment analysis, such as capturing fine-grained sentiment nuances, including contextual information, and handling complex language expressions in university review data. The methodology involves several steps, including data collection from various educational websites, blogs, and news platforms. The data is preprocessed to handle emoticons, URLs, and tags, and sarcastic text is detected and removed using the eXtreme Learning Hyperband Network (XLHN). Sentences are then grouped based on similarity, and topics are modeled using the Non-negative Term-Document Matrix Factorization (NTDMF) approach. Features such as lexico-semantic, lexico-structural, and numerical features are extracted. Dependency parsing and coreference resolution are performed to analyze grammatical structures and understand semantic relationships. Word embedding uses the Word2Vec model to capture semantic relationships between words. The preprocessed text and extracted features are input into the HR-RAN classifier to categorize sentiments as positive, negative, or neutral. The sentiment analysis results indicate that 74.8% of the sentiments towards ChatGPT in higher education are neutral, 21.5% are positive, and only 3.7% are negative. This suggests a predominant neutrality among users, with a significant portion expressing positive views and a very small percentage holding negative opinions. Additionally, the analysis reveals regional variations, with Canada showing the highest number of sentiments, predominantly neutral, followed by Germany, the UK, and the USA. The sentiment analysis results are evaluated based on various metrics, such as accuracy, precision, recall, F-measure, and specificity. Results indicate that the proposed framework outperforms conventional sentiment analysis models: the HR-RAN technique achieved a precision of 98.98%, a recall of 99.23%, an F-measure of 99.10%, an accuracy of 98.88%, and a specificity of 98.31%. Additionally, word clouds are generated to visually represent the most common terms within positive, neutral, and negative sentiments, providing a clear and immediate understanding of the key themes in the data. These findings can inform educators, administrators, and developers about the benefits and challenges of integrating ChatGPT into educational settings, guiding improvements in educational practices and AI tool development.
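Of the pipeline steps above, the Word2Vec embedding is a standard off-the-shelf component; a minimal gensim sketch, with an illustrative toy corpus and parameters, could look like this:

```python
from gensim.models import Word2Vec

sentences = [["chatgpt", "supports", "personalized", "learning"],
             ["universities", "debate", "chatgpt", "integration"]]
# vector_size/window/min_count are illustrative, not the study's settings
w2v = Word2Vec(sentences, vector_size=100, window=5, min_count=1, workers=1)
print(w2v.wv.most_similar("chatgpt", topn=2))
```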
The goal of this PhD thesis is to investigate possibilities of using symbol elimination for solving problems over complex theories and analyze the applicability of such uniform approaches in different areas of application, such as verification, knowledge representation and graph theory. In the thesis we propose an approach to symbol elimination in complex theories that follows the general idea of combining hierarchical reasoning with symbol elimination in standard theories. We analyze how this general approach can be specialized and used in different areas of application.
In the verification of parametric systems it is important to prove that certain safety properties hold. This can be done by showing that a property is an inductive invariant of the system, i.e., it holds in the initial state of the system and is invariant under updates of the system. Sometimes this is not the case for the property itself, but it is for a stronger property. In this thesis we propose a method for goal-directed invariant strengthening.
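In standard notation (independent of this thesis's formalism), a property Inv is an inductive invariant of a system with initial condition Init and transition relation Tr if both of the following entailments hold:

```latex
\begin{align*}
  \mathit{Init}(x) &\models \mathit{Inv}(x)\\
  \mathit{Inv}(x) \land \mathit{Tr}(x, x') &\models \mathit{Inv}(x')
\end{align*}
% Goal-directed strengthening: if a safety property Safe is not itself
% inductive, one searches for a stronger Inv with Inv |= Safe such that
% both conditions above hold.
```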
In knowledge representation we often have to deal with huge ontologies. Combining two ontologies usually leads to new consequences, some of which may be false or undesired. We are interested in finding explanations for such unwanted consequences. For this we propose a method for computing interpolants in the description logics EL and EL⁺, based on a translation to the theory of semilattices with monotone operators and a certain form of interpolation in this theory.
In wireless network theory one often deals with classes of geometric graphs in which the existence or non-existence of an edge between two vertices depends on properties of their distances to other nodes. One possibility to prove properties of those graphs, or to analyze relations between the graph classes, is to prove or disprove that one graph class is contained in another. In this thesis we propose a method for checking inclusions between geometric graph classes.
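Such an inclusion can also be sanity-checked numerically. The sketch below tests the well-known inclusion of the Relative Neighborhood Graph (RNG) in the Gabriel Graph (GG) on random point sets; this is a randomized check for illustration, not the deductive method proposed in the thesis:

```python
import itertools, math, random

def edges(points, keep):
    return {(i, j) for i, j in itertools.combinations(range(len(points)), 2)
            if keep(points, i, j)}

def rng_edge(p, i, j):     # no witness inside the lune of p[i], p[j]
    d = math.dist(p[i], p[j])
    return all(max(math.dist(p[i], p[k]), math.dist(p[j], p[k])) >= d
               for k in range(len(p)) if k not in (i, j))

def gg_edge(p, i, j):      # no witness inside the diametral circle
    d2 = math.dist(p[i], p[j]) ** 2
    return all(math.dist(p[i], p[k]) ** 2 + math.dist(p[j], p[k]) ** 2 >= d2
               for k in range(len(p)) if k not in (i, j))

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(60)]
assert edges(pts, rng_edge) <= edges(pts, gg_edge)   # RNG is a subgraph of GG
```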
In international business relationships, such as international railway operations, large amounts of data can be exchanged among the parties involved. For the exchange of such data, the parties expect a limited risk of being cheated by another party, e.g., by being provided with fake data, as well as reasonable costs and a foreseeable benefit. As the exchanged data can be used to make critical business decisions, there is a high incentive for one party to manipulate the data in its favor. To prevent this type of manipulation, mechanisms exist to ensure the integrity and authenticity of the data. In combination with a fair exchange protocol, it can be ensured that the integrity and authenticity of the data are maintained even when it is exchanged with another party. At the same time, such a protocol ensures that the exchange of data only takes place in conjunction with the agreed compensation, such as a payment, and that the payment is only made if the integrity and authenticity of the data are ensured as previously agreed. However, in order to guarantee fairness, a fair exchange protocol must involve a trusted third party. To avoid fraud by a single centralized party acting as the trusted third party, current research proposes decentralizing the trusted third party, e.g., by using a distributed-ledger-based fair exchange protocol. However, for assessing the fairness of such an exchange, state-of-the-art approaches neglect the costs arising for the parties conducting the fair exchange. This can result in a violation of the outlined expectation of reasonable costs, especially when distributed ledgers are involved, which are typically associated with non-negligible costs. Furthermore, the performance of typical distributed-ledger-based fair exchange protocols is limited, posing an obstacle to widespread adoption.
To overcome these challenges, in this thesis we introduce the foundation for a data exchange platform allowing for a fully decentralized fair data exchange with reasonable cost and performance. As a theoretical foundation, we introduce the concept of cost fairness, which considers costs in the fairness assessment by requiring that a party following the fair exchange protocol never suffers any unilateral disadvantage. We prove that cost fairness cannot be achieved using typical public distributed ledgers but requires customized distributed ledger instances, which usually lack complete decentralization. However, we show that the highest unilateral costs are caused by a grieving attack.
To allow fair data exchanges to be conducted with reasonable cost and performance, we introduce FairSCE, a distributed-ledger-based fair exchange protocol using distributed ledger state channels and incorporating a mechanism to protect against grieving attacks, reducing the possible unilateral costs that have to be covered to a minimum. Based on our evaluation of FairSCE, the worst-case costs for data exchange, even in the presence of malicious parties, are known, which allows an estimate of the possible benefit and, thus, a preliminary estimate of the economic utility. Furthermore, to allow for an unambiguous assessment of the correct data being transferred while still allowing sensitive parts of the data to be masked, we introduce an approach for the hashing of hierarchically structured data, which can be used to ensure the integrity and authenticity of the transferred data.
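The idea of hashing hierarchical data so that subtrees can be masked without breaking verifiability can be sketched Merkle-style; this is a simplification for illustration, and the thesis's exact construction may differ:

```python
import hashlib, json

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def tree_hash(node):
    if isinstance(node, dict):
        # hash children in a canonical key order, then combine
        return h(b"".join(k.encode() + tree_hash(v)
                          for k, v in sorted(node.items())))
    if isinstance(node, (list, tuple)):
        return h(b"".join(tree_hash(v) for v in node))
    if isinstance(node, bytes):      # already-masked subtree: just its hash
        return node
    return h(json.dumps(node).encode())

doc = {"train": {"route": "A-B", "load": [1200, 900]}, "price": 4200}
root = tree_hash(doc)

masked = dict(doc)
masked["price"] = tree_hash(doc["price"])    # hide the price, keep its hash
assert tree_hash(masked) == root             # recipient can still verify
```

Because a masked subtree contributes only its hash, the overall root hash, and hence the integrity and authenticity check, is unchanged while the sensitive values themselves are never revealed.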