The rapid evolution of wireless communication technologies, particularly the introduction
of Fifth-Generation (5G) networks and the anticipated transition to Sixth-Generation (6G)
systems, ushers in a new era of connectivity, enabling transformative applications across
industrial automation, the Internet of Everything (IoE), and the Industrial Internet of Things
(IIoT). However, the exponential growth in the number of connected devices, stringent reliability
requirements, and increasing security challenges pose significant hurdles for current network
architectures. This dissertation addresses these challenges by proposing innovative frameworks
and mechanisms that enhance reliability, optimize resource utilization, and strengthen security
and trust management in next-generation mobile networks.
The first contribution of this dissertation focuses on reliability enhancements in 5G networks.
While existing mechanisms, such as Dual Connectivity (DC) and Network Function (NF)
redundancy, provide partial solutions, they do not fully resolve application-layer reliability
and dynamic server failover. To bridge this gap, this work introduces the Make-Before-Break-
Reliability (MBBR) and enhanced Make-Before-Break-Reliability (eMBBR) mechanisms. These
frameworks proactively establish redundant communication paths, ensuring seamless failovers
with minimal latency and service disruption. By extending reliability to the application layer
and integrating adaptive path selection and dynamic failover capabilities, these mechanisms
offer robust solutions for latency-sensitive and mission-critical applications.
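To illustrate the make-before-break principle underlying MBBR and eMBBR, the following Python sketch keeps a standby path established and verified before the active path is torn down. The class names, probe logic, and endpoints are hypothetical illustrations and do not reproduce the dissertation's actual protocol or interfaces.

```python
# Minimal make-before-break failover sketch (illustrative only; `Path`, `probe()`,
# and the endpoints are hypothetical, not taken from the dissertation).

class Path:
    def __init__(self, endpoint):
        self.endpoint = endpoint
        self.established = False

    def establish(self):
        # Placeholder for session setup (e.g., opening a transport connection).
        self.established = True

    def probe(self):
        # Placeholder health check; a real system would measure latency/loss.
        return self.established

    def teardown(self):
        self.established = False


class MakeBeforeBreakSession:
    def __init__(self, primary_endpoint, backup_endpoint):
        self.active = Path(primary_endpoint)
        self.standby = Path(backup_endpoint)
        self.active.establish()
        self.standby.establish()        # redundant path is proactively kept warm

    def failover(self):
        # "Make" first: verify the standby path before "breaking" the active one.
        if not self.standby.probe():
            self.standby.establish()
        old = self.active
        self.active, self.standby = self.standby, old
        old.teardown()                  # break only after the switch has succeeded
        return self.active.endpoint


if __name__ == "__main__":
    session = MakeBeforeBreakSession("server-a.example", "server-b.example")
    print("active after failover:", session.failover())
```

The essential point is the ordering: the redundant path is made and checked first, and only then is the degraded path released, which is what bounds the application-layer failover latency.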
The second major contribution addresses bandwidth optimization for industrial networks.
The black channel paradigm, widely adopted for industrial safety applications, relies heavily on
cyclic keep-alive messages to detect connection loss, leading to significant signaling overhead.
This dissertation proposes a novel solution leveraging 5G Channel State Information (CSI)
to replace cyclic messaging with real-time connection quality monitoring. By exposing CSI
metrics, such as Signal-to-Noise Ratio (SNR) and Channel Quality Indicator (CQI), to the
application layer, the proposed mechanism reduces bandwidth consumption while maintaining
the safety and reliability requirements of industrial networks.
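The idea of replacing cyclic keep-alive messages with exposed channel metrics can be sketched roughly as follows; the thresholds, sampling window, and metric layout are assumptions chosen for illustration rather than values from the dissertation.

```python
# Illustrative sketch: deriving link health from exposed CSI metrics instead of
# cyclic keep-alive messages. Threshold values and the sample layout are assumed.

from dataclasses import dataclass

@dataclass
class CsiSample:
    snr_db: float   # Signal-to-Noise Ratio in dB
    cqi: int        # Channel Quality Indicator (0-15 in LTE/5G reporting)

def link_is_healthy(sample: CsiSample, snr_min_db: float = 10.0, cqi_min: int = 5) -> bool:
    """Declare the connection usable when both metrics exceed their thresholds."""
    return sample.snr_db >= snr_min_db and sample.cqi >= cqi_min

def safety_layer_decision(samples: list[CsiSample]) -> str:
    """A black-channel safety layer could consume a short window of CSI samples
    rather than waiting for a missed keep-alive message."""
    healthy = sum(link_is_healthy(s) for s in samples)
    return "connection assumed intact" if healthy >= len(samples) // 2 + 1 else "trigger safe state"

if __name__ == "__main__":
    window = [CsiSample(18.2, 11), CsiSample(16.9, 10), CsiSample(7.5, 3)]
    print(safety_layer_decision(window))
```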
Addressing the growing complexity of security requirements in IIoT, the third contribution
introduces the AF-based Security Framework (AERO). This framework empowers
application providers to dynamically apply cryptographic mechanisms to the user plane,
overcoming the limitations of legacy protocols and eliminating the need for redundant security
layers. By ensuring backward compatibility and enabling both static and dynamic configuration
of user plane encryption, AERO enhances security while minimizing computational overhead
and reducing transmission delays.
The fourth and final contribution redefines trust management in mobile networks through
the SecUre deleGAtion of tRust (SUGAR) framework. Traditional trust models, which rely
on identity chips for each connected device, are becoming increasingly impractical in the
IoE era, where billions of devices require connectivity. The SUGAR framework introduces a
delegation-based trust model, allowing Parent Devices (PaDs) to delegate trust to multiple
Child Devices (ChDs) securely. This approach eliminates the need for individual identity chips,
significantly reducing costs and enhancing scalability. Integration with System-on-a-Chip
(SoC)-based identity enclaves further strengthens the security of trust credentials.
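A delegation credential of the kind SUGAR envisions can be sketched conceptually as a signed, time-limited statement issued by the PaD for a ChD. The credential fields are hypothetical, and the sketch uses the third-party Python cryptography package for Ed25519 signatures purely for illustration; the framework's actual credential format and enclave integration are not reproduced here.

```python
# Conceptual sketch of delegation-based trust: a Parent Device (PaD) signs a
# credential vouching for a Child Device (ChD) instead of the ChD holding its own
# identity chip. Field names are hypothetical; signatures use the `cryptography`
# package (Ed25519) for illustration only.

import json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Key pair that would live inside the PaD's identity enclave (e.g., an SoC enclave).
pad_key = Ed25519PrivateKey.generate()

def issue_delegation(child_id: str, validity_s: int = 3600) -> dict:
    """PaD issues a signed, time-limited trust delegation for one ChD."""
    claim = {"child": child_id, "issued": int(time.time()), "expires_in": validity_s}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": pad_key.sign(payload).hex()}

def verify_delegation(credential: dict) -> bool:
    """The network verifies the credential against the PaD's public key."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    try:
        pad_key.public_key().verify(bytes.fromhex(credential["signature"]), payload)
    except Exception:
        return False
    claim = credential["claim"]
    return time.time() < claim["issued"] + claim["expires_in"]

if __name__ == "__main__":
    cred = issue_delegation("sensor-17")
    print("delegation accepted:", verify_delegation(cred))
```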
The findings of this dissertation offer substantial contributions to both academia and
industry. The proposed frameworks effectively address critical gaps in current 5G standards
and provide valuable contributions for developing the 6G framework. By enhancing reliability,
optimizing bandwidth, and redefining security and trust management, this dissertation provides
a comprehensive foundation for the design and deployment of next-generation mobile networks.
Furthermore, the solutions presented are adaptable to a wide range of applications, including
industrial automation, autonomous systems, and smart city infrastructures.
In conclusion, this dissertation represents a significant step toward realizing the full
potential of next-generation mobile networks. By addressing key challenges in reliability,
resource optimization, security, and trust management, the proposed frameworks pave the way
for scalable, secure, and efficient mobile ecosystems that are essential for the dynamic and
interconnected world of the future.
Improving patient care is an ongoing process, evolving from early evidence-based practices to modern AI-driven approaches. This thesis explores three key research directions aimed at improving clinical decision-making through AI. Adverse events, defined as negative and harmful outcomes that occur during medical care, present major challenges for hospitals. Most data-driven research using electronic health records relies on data from tertiary referral hospitals, but their patient population differs from those in hospitals of medium level of care. The first major contribution of this thesis is a data-driven Trigger Tool for predicting adverse events, trained on data from a hospital of medium level of care. This tool uses a concise set of laboratory values measured within the first 24 hours of hospitalization. In addition to models using numerical features, we devised models using dichotomized features that indicate whether a laboratory value falls below or above a reference threshold. Our findings show that models using numerical features achieve high accuracy in predicting acute kidney injury and the COVID-19-associated adverse events of in-hospital mortality and transfer to the ICU. Models using dichotomous features perform only slightly worse but offer better interpretability.
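The dichotomization of laboratory values can be illustrated with a small sketch: each value is mapped to a binary out-of-range flag and a simple classifier is trained on these flags. The reference ranges, features, and the use of scikit-learn logistic regression are illustrative assumptions, not the models developed in the thesis.

```python
# Sketch of the dichotomization idea: each laboratory value becomes a binary flag
# indicating whether it lies outside its reference range, and a simple classifier
# is trained on those flags. Thresholds, feature names, and the choice of logistic
# regression are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical reference ranges (lower, upper) for three lab values.
REFERENCE_RANGES = {"creatinine": (0.6, 1.2), "crp": (0.0, 5.0), "lactate": (0.5, 2.2)}

def dichotomize(sample: dict) -> np.ndarray:
    """Return one binary flag per lab value: 1 if outside the reference range."""
    flags = []
    for name, (lo, hi) in REFERENCE_RANGES.items():
        value = sample[name]
        flags.append(int(value < lo or value > hi))
    return np.array(flags)

# Toy training data: lab values from the first 24 h and an adverse-event label.
patients = [
    ({"creatinine": 0.9, "crp": 2.1, "lactate": 1.0}, 0),
    ({"creatinine": 2.4, "crp": 80.0, "lactate": 3.5}, 1),
    ({"creatinine": 1.0, "crp": 4.0, "lactate": 1.8}, 0),
    ({"creatinine": 1.9, "crp": 120.0, "lactate": 2.9}, 1),
]
X = np.vstack([dichotomize(s) for s, _ in patients])
y = np.array([label for _, label in patients])

model = LogisticRegression().fit(X, y)
new_patient = dichotomize({"creatinine": 2.0, "crp": 60.0, "lactate": 3.0}).reshape(1, -1)
print("predicted adverse-event risk:", model.predict_proba(new_patient)[0, 1])
```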
The second major contribution is the online-updateable AI model OptAB for selecting optimal antibiotics in sepsis patients. OptAB aims to minimize the sepsis-related organ failure score (SOFA-Score) while accounting for nephrotoxic and hepatotoxic side effects. OptAB relies on a hybrid neural network differential equation algorithm tailored to the special properties of patient data, including irregular measurements, missing values, and time-dependent confounding. Time-dependent confounding describes a dependence between time-varying covariates and treatment decisions made by physicians, often leading to biased treatment effect estimates. OptAB generates disease course forecasts for (combinations of) the antibiotics vancomycin, ceftriaxone, and piperacillin/tazobactam and learns realistic treatment effects on the SOFA-Score and side-effect-indicative laboratory values. Results indicate that OptAB's recommendations achieve faster efficacy than the administered antibiotics while reducing side effects.
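A strongly reduced sketch of the neural-differential-equation idea is given below: a small network parameterizes the derivative of the SOFA score as a function of the current state and a treatment indicator, and an Euler integrator rolls the forecast forward. The network weights are untrained placeholders, and the handling of irregular measurements, missing values, and time-dependent confounding described above is deliberately omitted.

```python
# Very reduced sketch of a neural-ODE-style forecast: a tiny network parameterizes
# d(SOFA)/dt from the current state and a treatment indicator, and an Euler
# integrator rolls the trajectory forward. Weights are random placeholders, not
# trained parameters, and the thesis's data handling is omitted.

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)) * 0.1, np.zeros(8)   # hidden layer
W2, b2 = rng.normal(size=(1, 8)) * 0.1, np.zeros(1)   # output: d(SOFA)/dt

def sofa_derivative(sofa: float, labs: np.ndarray, treatment: float) -> float:
    x = np.concatenate(([sofa], labs, [treatment]))    # state + treatment indicator
    h = np.tanh(W1 @ x + b1)
    return float(W2 @ h + b2)

def forecast(sofa0: float, labs: np.ndarray, treatment: float, hours: int = 48, dt: float = 1.0):
    """Euler integration of the learned derivative over the forecast horizon."""
    trajectory, sofa = [sofa0], sofa0
    for _ in range(int(hours / dt)):
        sofa += dt * sofa_derivative(sofa, labs, treatment)
        trajectory.append(sofa)
    return trajectory

# Compare two hypothetical treatment options by their forecast end-point.
labs = np.array([1.8, 60.0])                            # e.g., lactate, CRP (toy values)
for option in (0.0, 1.0):
    print(f"treatment={option}: SOFA after 48 h ~ {forecast(9.0, labs, option)[-1]:.2f}")
```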
The third major contribution is DoseAI, an online-updateable AI model that extends OptAB to optimize dosing regimens. DoseAI mitigates time-dependent confounding in dosage selection by minimizing the absolute Spearman correlation between predicted and future treatment dosages. It forecasts disease progression under alternative dosing regimens and proposes optimal chemotherapy and radiotherapy dosing regimens for synthetic cancer patients. These regimens effectively reduce the tumor volume while adhering to varying maximum allowed weight loss constraints, used as a measure of toxicity.
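The Spearman-based confounding penalty can be sketched as an additional loss term; the loss composition and weighting below are illustrative assumptions rather than DoseAI's actual objective.

```python
# Sketch of the confounding penalty described above: the absolute Spearman rank
# correlation between predicted dosages and the dosages administered later is
# added to the training loss. The loss composition and weight are assumptions.

import numpy as np
from scipy.stats import spearmanr

def prediction_loss(predicted_outcome: np.ndarray, observed_outcome: np.ndarray) -> float:
    return float(np.mean((predicted_outcome - observed_outcome) ** 2))

def confounding_penalty(predicted_dosages: np.ndarray, future_dosages: np.ndarray) -> float:
    rho, _ = spearmanr(predicted_dosages, future_dosages)
    return abs(float(rho))

def total_loss(pred_out, obs_out, pred_dose, future_dose, weight: float = 0.5) -> float:
    # Minimizing the penalty discourages the model from simply mirroring the
    # physicians' dosing decisions (time-dependent confounding).
    return prediction_loss(pred_out, obs_out) + weight * confounding_penalty(pred_dose, future_dose)

if __name__ == "__main__":
    pred_out = np.array([0.8, 0.6, 0.5]); obs_out = np.array([0.9, 0.55, 0.5])
    pred_dose = np.array([40.0, 55.0, 70.0]); future_dose = np.array([42.0, 50.0, 75.0])
    print("total loss:", total_loss(pred_out, obs_out, pred_dose, future_dose))
```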
The integration of the different stakeholder needs and environmental constraints is the key goal of requirements engineering. This demands collaboration between the involved parties to reach "understandability of the system", which is particularly challenging for collaborations across different organisations. High-quality requirements engineering is the key factor in addressing these challenges. Requirements are input to all development steps and carry the knowledge to be exchanged: requirements engineering spans the overall life cycle and is, in its essence, a knowledge management task.
The main goal of the T-Reqs framework presented in this thesis is to enable semantic interoperability and to sustain the knowledge by conceptualizing the requirements engineering process applied to European space projects. T-Reqs' objective is to formally capture the information carried by the requirements in order to provide top-shelf inputs for the consecutive system and discipline-specific development tasks, in particular within model-based systems engineering. Emphasis is placed on the nature of the relationships that exist among requirements and requirement documents. The T-Reqs formalism addresses the structuring of requirements as well as their potential reuse, e.g., in product line development or even between different projects. This implies an overall System Requirements Specification that is distributed across many specification documents and involves requirements at different levels of abstraction, from abstract goals to implementation details. This thesis especially focuses on the specification and validation of such requirements documents.
The T-Reqs traceability model provides a means not only to trace individual requirements, but also to consider relations among views such as documents, taking into account the role they play for stakeholders, especially in reuse. It is shown how the formalization of dependencies, such as those arising from the tailoring of standards, enables automated quality checks that facilitate reviews and enhance the completeness and consistency of the overall specification.
Towards the structuring of requirements themselves, different syntactic template systems aim to increase the quality of requirement documentation. Within this thesis, a comparative evaluation of these notations is conducted, supporting that claim and differentiating the strengths and weaknesses of the different approaches. Special emphasis is laid not only on documentation quality, but also on the usefulness of these semi-formal notations for integration with model-based development methods. This is achieved through the representation of concepts, which can be managed in special contextualised glossaries.
Overall, it is shown that the conceptualization of requirements engineering knowledge can support requirements engineering in different aspects, and that a holistic approach to integrating the different tasks lays the foundation for semantic interoperability spanning organizations and life-cycle phases.
This habilitation thesis compiles research on the challenges of complex networks in computer science and their applications. It includes case studies on interdisciplinary research in the life sciences, computational social sciences, and digital humanities. In the life sciences, knowledge graph approaches are commonly used for clinical and biomedical data; this thesis focuses on context mining, algorithmic challenges, and link prediction. In network approaches in the social sciences, the goal is to connect social network analysis with ontology-driven research on the labor market. Although data sets are frequently available in the social sciences, this is not always the case in the humanities. Therefore, when applying complex network approaches such as social network analysis to textual data, hermeneutical and methodological considerations are necessary. Once these considerations are addressed, data science methods such as text mining can be used to construct networks from texts. This thesis presents two case studies on social network analysis, in addition to addressing the challenges of interdisciplinary research on complex networks in computer science. By describing three different domains, it demonstrates the existence of a common toolbox that utilizes methods from data science and graph theory. Consequently, this thesis argues for more interdisciplinary exchange.
The proliferation of online abuse on social media platforms has emerged as a significant concern, negatively impacting users' mental health and online experiences. While the Natural Language Processing (NLP) community has developed various computational methods for abuse detection, including Support Vector Machines (SVMs) and Convolutional Neural Networks (CNNs), existing approaches predominantly focus on identifying explicit forms of abuse. This narrow focus overlooks subtle and contextual forms of online harassment, which can be equally damaging to users' wellbeing.
This thesis presents a novel approach to online abuse detection by integrating contextual embeddings with sentiment analysis features through the fine-tuning of Large Language Models (LLMs). Our methodology leverages a comprehensive dataset of 47,000 annotated tweets for training, combined with sentiment analysis capabilities developed using 50,000 IMDB movie reviews. The system employs DistilBERT architecture to develop a sophisticated detection framework capable of identifying six distinct categories of abuse: ethnicity-based, age-based, gender-based, religion-based, other cyberbullying, and non-cyberbullying content. The author established a rigorous evaluation framework employing multiple metrics, including accuracy, recall, and F1 score, to assess the model's performance in detecting both explicit and nuanced forms of online abuse.
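A condensed sketch of the fusion idea is shown below: a DistilBERT sentence representation is concatenated with an externally computed sentiment score before a six-way classification head. The checkpoint name, layer sizes, and the scalar sentiment feature are assumptions made for illustration and do not reproduce the thesis's exact architecture.

```python
# Condensed sketch of combining a DistilBERT encoding with a sentiment feature
# before the six-way classification head. Checkpoint, layer sizes, and the scalar
# sentiment score are illustrative assumptions, not the thesis's exact setup.

import torch
import torch.nn as nn
from transformers import DistilBertModel, DistilBertTokenizer

class AbuseClassifier(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.encoder = DistilBertModel.from_pretrained("distilbert-base-uncased")
        # 768-dim first-token embedding + 1 sentiment feature -> class logits
        self.head = nn.Sequential(nn.Linear(768 + 1, 128), nn.ReLU(), nn.Linear(128, num_classes))

    def forward(self, input_ids, attention_mask, sentiment_score):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        cls_vec = hidden[:, 0]                                   # first-token representation
        fused = torch.cat([cls_vec, sentiment_score.unsqueeze(-1)], dim=-1)
        return self.head(fused)

if __name__ == "__main__":
    tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
    batch = tokenizer(["you people never get anything right"], return_tensors="pt", padding=True)
    model = AbuseClassifier()
    logits = model(batch["input_ids"], batch["attention_mask"], torch.tensor([-0.7]))
    print(logits.shape)   # torch.Size([1, 6])
```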
The integrated system achieved an overall accuracy of 85% across the six categories of the cyberbullying dataset, outperforming other methodologies applied to the same data. In direct comparison, our approach, which uniquely combines contextual embeddings with sentiment analysis, demonstrated significant improvements over traditional fine-tuning methods, such as those using only BERT or RoBERTa, particularly in detecting subtle forms of abuse. Most notably, our system was more effective at identifying passive-aggressive content and context-dependent harassment, challenges that often cause conventional detection methods to fall short. This enhanced performance can be attributed to the model's ability to capture nuanced linguistic cues through its integrated analysis of both contextual information and sentiment, thereby offering a more refined interpretation of potentially harmful content.
This research emphasizes the critical importance of incorporating subtle abuse detection into online content moderation systems. By developing more sophisticated detection methods that can identify both overt and nuanced forms of harassment, this work contributes to the creation of safer and more inclusive online spaces that facilitate constructive dialogue. The findings of this study have significant implications for the development of more effective content moderation tools and the broader goal of fostering healthier online communities.
Analysis and Evaluation of the Resilience of Companies and Business Processes from a Resource Perspective
(2025)
Companies can be affected by events that adversely impact their business operations. These events can originate from the corporate environment or from within companies themselves. The effects of these events may be quite diverse and, in the worst case, can threaten the survival of companies. To deal with such events, the concept of resilience can be used. Resilience relates to the ability of companies to handle adverse circumstances and encompasses different aspects, from measuring to restoring and strengthening the resilience of companies. This dissertation deals with the concept of resilience within the corporate context. It considers resilience from a company and a business process perspective and provides different research contributions for these contexts. From a company perspective, a concept and a model are presented that serve as the basis for analysing the resilience of companies. The concept shows essential elements that are important for considering the resilience of companies, while the model outlines the range that can be used to analyse it. Furthermore, a corporate maturity model is introduced to assess the resilience of companies. It encompasses different attributes and resilience levels to determine and improve the resilience of companies. From a business process perspective, a lifecycle and metrics for business process resilience are presented. The lifecycle shows different phases relating to resilience considerations of business processes, and the metrics are used to measure the resilience of business processes.
Assessing ChatGPT’s Performance in Analyzing Students’ Sentiments: A Case Study in Course Feedback
(2024)
The emergence of large language models (LLMs) like ChatGPT has impacted fields such as education, transforming natural language processing (NLP) tasks like sentiment analysis. Transformers form the foundation of LLMs, with BERT, XLNet, and GPT as key examples. ChatGPT, developed by OpenAI, is a state-of-the-art model, and its ability in natural language tasks makes it a potential tool for sentiment analysis. This thesis reviews current sentiment analysis methods and examines ChatGPT's ability to analyze sentiments across three labels (Negative, Neutral, Positive) and five labels (Very Negative, Negative, Neutral, Positive, Very Positive) on a dataset of student course reviews. Its performance is compared with fine-tuned state-of-the-art models such as BERT, XLNet, bart-large-mnli, and RoBERTa-large-mnli using quantitative metrics. With the help of seven prompting techniques, i.e., ways of instructing ChatGPT, this work also analyzes how well it understands complex linguistic nuances in the given texts using qualitative metrics. BERT and XLNet outperform ChatGPT mainly due to their bidirectional nature, which allows them to understand the full context of a sentence rather than processing it only left to right. This, combined with fine-tuning, helps them capture patterns and nuances better. ChatGPT, as a general-purpose, open-domain model, processes text unidirectionally, which can limit its context understanding. Despite this, ChatGPT performed comparably to XLNet and BERT in the three-label scenario and outperformed the remaining models. Fine-tuned models excelled in the five-label case. Moreover, ChatGPT has shown impressive knowledge of the language. Chain-of-Thought (CoT), i.e., prompting with step-by-step instructions, was the most effective prompting technique. ChatGPT showed promising performance in correctness, consistency, relevance, and robustness, except for detecting irony. As education evolves with diverse learning environments, effective feedback analysis becomes increasingly valuable. Addressing ChatGPT's limitations and leveraging its strengths could enhance personalized learning through better sentiment analysis.
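The Chain-of-Thought prompting idea can be sketched as a prompt template plus a thin wrapper around an LLM call; the template wording, label parsing, and the call_llm stub are hypothetical and only stand in for the prompts actually evaluated in the thesis.

```python
# Sketch of a Chain-of-Thought style prompt for sentiment labelling of a course
# review. The template wording and the `call_llm` stub are illustrative; the exact
# prompts evaluated in the thesis are not reproduced here.

LABELS_3 = ["Negative", "Neutral", "Positive"]

def build_cot_prompt(review: str, labels=LABELS_3) -> str:
    return (
        "Classify the sentiment of the following course review.\n"
        f"Allowed labels: {', '.join(labels)}.\n"
        "Think step by step: first identify the aspects the student mentions, "
        "then judge the tone of each aspect, and finally give one overall label "
        "on the last line in the form 'Label: <label>'.\n\n"
        f"Review: {review}"
    )

def call_llm(prompt: str) -> str:
    # Stand-in for an actual API call (e.g., to ChatGPT); returns a canned reply here.
    return "The student praises the content but criticises the pace.\nLabel: Neutral"

def classify(review: str) -> str:
    reply = call_llm(build_cot_prompt(review))
    return reply.rsplit("Label:", 1)[-1].strip()

if __name__ == "__main__":
    print(classify("The material was interesting but the lectures moved way too fast."))
```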
Exploring Academic Perspectives: Sentiments and Discourse on ChatGPT Adoption in Higher Education
(2024)
Artificial intelligence (AI) is becoming more widely used in a number of industries, including the field of education. Applications of AI are becoming crucial for schools and universities, whether for automated evaluation, smart educational systems, individualized learning, or staff support. ChatGPT, an AI-based chatbot, offers coherent and helpful replies based on analyzing large volumes of data. Integrating ChatGPT, a sophisticated Natural Language Processing (NLP) tool developed by OpenAI, into higher education has sparked significant interest and debate. Since the technology has already been adopted by many students and teachers, this study analyzes the sentiments expressed on university websites regarding the integration of ChatGPT into education by creating a comprehensive sentiment analysis framework using a Hierarchical Residual RSigELU Attention Network (HR-RAN). The proposed framework addresses several challenges in sentiment analysis, such as capturing fine-grained sentiment nuances, including contextual information, and handling complex language expressions in university review data. The methodology involves several steps, including data collection from various educational websites, blogs, and news platforms. The data is preprocessed to handle emoticons, URLs, and tags, and sarcastic text is detected and removed using the eXtreme Learning Hyperband Network (XLHN). Sentences are then grouped based on similarity and topics are modeled using the Non-negative Term-Document Matrix Factorization (NTDMF) approach. Features such as lexico-semantic, lexico-structural, and numerical features are extracted. Dependency parsing and coreference resolution are performed to analyze grammatical structures and understand semantic relationships. Word embedding uses the Word2Vec model to capture semantic relationships between words. The preprocessed text and extracted features are input into the HR-RAN classifier to categorize sentiments as positive, negative, or neutral. The sentiment analysis results indicate that 74.8% of the sentiments towards ChatGPT in higher education are neutral, 21.5% are positive, and only 3.7% are negative. This suggests a predominant neutrality among users, with a significant portion expressing positive views and a very small percentage holding negative opinions. Additionally, the analysis reveals regional variations, with Canada showing the highest number of sentiments, predominantly neutral, followed by Germany, the UK, and the USA. The sentiment analysis results are evaluated based on various metrics, such as accuracy, precision, recall, F-measure, and specificity. The results indicate that the proposed framework outperforms conventional sentiment analysis models. The HR-RAN technique achieved a precision of 98.98%, a recall of 99.23%, an F-measure of 99.10%, an accuracy of 98.88%, and a specificity of 98.31%. Additionally, word clouds are generated to visually represent the most common terms within positive, neutral, and negative sentiments, providing a clear and immediate understanding of the key themes in the data. These findings can inform educators, administrators, and developers about the benefits and challenges of integrating ChatGPT into educational settings, guiding improvements in educational practices and AI tool development.
The goal of this PhD thesis is to investigate possibilities of using symbol elimination for solving problems over complex theories and analyze the applicability of such uniform approaches in different areas of application, such as verification, knowledge representation and graph theory. In the thesis we propose an approach to symbol elimination in complex theories that follows the general idea of combining hierarchical reasoning with symbol elimination in standard theories. We analyze how this general approach can be specialized and used in different areas of application.
In the verification of parametric systems it is important to prove that certain safety properties hold. This can be done by showing that a property is an inductive invariant of the system, i.e., it holds in the initial state of the system and is invariant under updates of the system. Sometimes this is not the case for the condition itself, but it is for a stronger condition. In this thesis we propose a method for goal-directed invariant strengthening.
In knowledge representation we often have to deal with huge ontologies. Combining two ontologies usually leads to new consequences, some of which may be false or undesired. We are interested in finding explanations for such unwanted consequences. For this we propose a method for computing interpolants in the description logics EL and EL⁺, based on a translation to the theory of semilattices with monotone operators and a certain form of interpolation in this theory.
In wireless network theory one often deals with classes of geometric graphs in which the existence or non-existence of an edge between two vertices depends on properties of their distances to other nodes. One possibility to prove properties of such graphs, or to analyze relations between the graph classes, is to prove or disprove that one graph class is contained in another. In this thesis we propose a method for checking inclusions between geometric graph classes.
In international business relationships, such as international railway operations, large amounts of data can be exchanged among the parties involved. For the exchange of such data, a limited risk of being cheated by another party, e.g., by being provided with fake data, as well as reasonable cost and a foreseeable benefit, is expected. As the exchanged data can be used to make critical business decisions, there is a high incentive for one party to manipulate the data in its favor. To prevent this type of manipulation, mechanisms exist to ensure the integrity and authenticity of the data. In combination with a fair exchange protocol, it can be ensured that the integrity and authenticity of this data is maintained even when it is exchanged with another party. At the same time, such a protocol ensures that the exchange of data only takes place in conjunction with the agreed compensation, such as a payment, and that the payment is only made if the integrity and authenticity of the data is ensured as previously agreed. However, in order to be able to guarantee fairness, a fair exchange protocol must involve a trusted third party. To avoid fraud by a single centralized party acting as a trusted third party, current research proposes decentralizing the trusted third party, e.g., by using a distributed ledger based fair exchange protocol. However, for assessing the fairness of such an exchange, state-of-the-art approaches neglect costs arising for the parties conducting the fair exchange. This can result in a violation of the outlined expectation of reasonable cost, especially when distributed ledgers are involved, which are typically associated with non-negligible costs. Furthermore, the performance of typical distributed ledger-based fair exchange protocols is limited, posing an obstacle to widespread adoption.
To overcome these challenges, in this thesis we introduce the foundation for a data exchange platform allowing for a fully decentralized fair data exchange with reasonable cost and performance. As a theoretical foundation, we introduce the concept of cost fairness, which considers cost in the fairness assessment by requiring that a party following the fair exchange protocol never suffers any unilateral disadvantage. We prove that cost fairness cannot be achieved using typical public distributed ledgers but requires customized distributed ledger instances, which usually lack complete decentralization. However, we show that the highest unilateral costs are caused by a grieving attack.
To allow fair data exchanges to be conducted with reasonable cost and performance, we introduce FairSCE, a distributed ledger-based fair exchange protocol that uses distributed ledger state channels and incorporates a mechanism to protect against grieving attacks, reducing the possible unilateral costs that have to be covered to a minimum. Based on our evaluation of FairSCE, the worst-case cost for data exchange, even in the presence of malicious parties, is known, which allows an estimate of the possible benefit and, thus, a preliminary estimate of economic utility. Furthermore, to allow for an unambiguous assessment of the correctness of the data being transferred while still allowing sensitive parts of the data to be masked, we introduce an approach for the hashing of hierarchically structured data, which can be used to ensure the integrity and authenticity of the data being transferred.
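The hashing of hierarchically structured data with maskable parts resembles a Merkle-tree construction, which the following sketch illustrates: leaves are hashed individually, inner nodes hash their children's hashes, and a sensitive subtree can be replaced by its hash without changing the root. This is a generic illustration of the idea under an assumed data layout, not the exact scheme introduced in the thesis.

```python
# Illustrative Merkle-style hashing of hierarchical data: a masked subtree is
# represented only by its hash, yet the root hash stays verifiable. Generic sketch
# with an assumed record layout; not the thesis's exact construction.

import hashlib, json

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def tree_hash(node) -> str:
    if isinstance(node, dict):
        if "_masked" in node and len(node) == 1:
            return node["_masked"]           # reuse the pre-computed hash of the hidden subtree
        # Hash children in canonical key order so the root is deterministic.
        parts = [h(key.encode()) + tree_hash(value) for key, value in sorted(node.items())]
        return h("".join(parts).encode())
    return h(json.dumps(node, sort_keys=True).encode())

def mask(node, path):
    """Replace the subtree at `path` by its hash so its content stays hidden."""
    if not path:
        return {"_masked": tree_hash(node)}
    key, rest = path[0], path[1:]
    return {k: (mask(v, rest) if k == key else v) for k, v in node.items()}

record = {"train": {"id": "ICE-123", "delay_min": 7}, "billing": {"price_eur": 1499.00}}
masked = mask(record, ["billing"])            # hide the commercially sensitive part
print("root hash unchanged after masking:", tree_hash(masked) == tree_hash(record))
```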
Empirical studies in software engineering use software repositories as data sources to understand software development. Repository data is either used to answer questions that guide decision-making in software development, or to provide tools that help with practical aspects of developers' everyday work. Such studies are classified into the field of Empirical Software Engineering (ESE), and more specifically into Mining Software Repositories (MSR). Studies working with repository data often focus on their results. Results are statements or tools, derived from the data, that help with practical aspects of software development. This thesis focuses on the methods and higher-order methods used to produce such results. In particular, we focus on incremental methods to scale the processing of repositories, declarative methods to compose a heterogeneous analysis, and higher-order methods used to reason about threats to methods operating on repositories. We summarize this as technical and methodological improvements. We contribute these improvements to methods and higher-order methods in the context of MSR/ESE to produce future empirical results more effectively. We contribute the following improvements. We propose a method to improve the scalability of functions that abstract over repositories with high revision counts in a theoretically founded way. We use insights from abstract algebra and program incrementalization to define a core interface of higher-order functions that compute scalable static abstractions of a repository with many revisions. We evaluate the scalability of our method through benchmarks, comparing a prototype with available competitors in MSR/ESE. We propose a method to improve the definition of functions that abstract over a repository with a heterogeneous technology stack, by using concepts from declarative logic programming and combining them with ideas on megamodeling and linguistic architecture. We reproduce existing ideas on declarative logic programming with languages close to Datalog, coming from architecture recovery, source code querying, and static program analysis, and transfer them from the analysis of a homogeneous to a heterogeneous technology stack. We provide a proof of concept of this method in a case study. We propose a higher-order method to improve the disambiguation of threats to methods used in MSR/ESE. We focus on a better disambiguation of threats, operationalizing reasoning about them and making the implications for a valid data analysis methodology explicit, by using simulations. We encourage researchers to accompany their work with 'fake' simulations of their MSR/ESE scenarios, to operationalize relevant insights about alternative plausible results, negative results, potential threats, and the data analysis methodologies used. We show that this kind of simulation-based testing contributes to the disambiguation of threats in published MSR/ESE research.
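The incrementalization idea can be illustrated by expressing a repository abstraction as a per-revision summary combined through an associative merge, so that new revisions only require folding in their summaries. The Revision type and the toy metric below are hypothetical placeholders for the higher-order interface described in the thesis.

```python
# Sketch of the incrementalization idea: a repository abstraction is a per-revision
# summary plus an associative `combine`, so appending new revisions only folds in
# their summaries instead of re-processing the whole history. The `Revision` type
# and the metric (lines touched per author) are hypothetical placeholders.

from dataclasses import dataclass
from collections import Counter
from functools import reduce

@dataclass
class Revision:
    author: str
    lines_changed: int

def summarize(rev: Revision) -> Counter:
    """Map one revision to a partial result (a monoid element)."""
    return Counter({rev.author: rev.lines_changed})

def combine(a: Counter, b: Counter) -> Counter:
    """Associative merge of partial results; Counter addition is the monoid operation."""
    return a + b

def analyze(revisions: list[Revision]) -> Counter:
    return reduce(combine, map(summarize, revisions), Counter())

history = [Revision("alice", 120), Revision("bob", 30)]
cached = analyze(history)                          # computed once over the old history
new_revs = [Revision("alice", 10), Revision("carol", 55)]
updated = combine(cached, analyze(new_revs))       # incremental update, no full recomputation
print(updated)
```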
The trends of Industry 4.0 and the further evolution toward an ever-changing factory lead to more mobility and flexibility on the factory floor. With that higher need for mobility and flexibility, the requirements on wireless communication rise. A key requirement in this setting is the demand for wireless Ultra-Reliable Low-Latency Communication (URLLC). Example use cases are cooperative Automated Guided Vehicles (AGVs) and mobile robotics in general. Working in this setting, this thesis provides insights regarding the whole network stack, with the focus always on industrial applications. Starting at the physical layer, extensive measurements from 2 GHz to 6 GHz on the factory floor are performed. The raw data is published and analyzed, and based on that data an improved Saleh-Valenzuela (SV) model is provided. As ad-hoc networks are highly dependent on node mobility, the mobility of AGVs is modeled. Additionally, Nodal Encounter Patterns (NEPs) are recorded and analyzed, and a method to record NEPs is illustrated. From an application perspective, latency and reliability are key performance parameters. Thus, measurements of these two parameters in factory environments are performed using Wireless Local Area Network (WLAN) (IEEE 802.11n), private Long Term Evolution (pLTE) and 5G. These measurements showed auto-correlated latency values. Hence, a method to construct confidence intervals based on auto-correlated data containing rare events is developed. Subsequently, several performance improvements for wireless networks on the factory floor are proposed. Of those optimizations, three cover ad-hoc networks, two deal with safety-relevant communication, one orchestrates the usage of two orthogonal networks, and one optimizes the usage of information within cellular networks.
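The thesis's confidence-interval construction for auto-correlated latency data is not reproduced here; as a generic stand-in, the following sketch applies a moving-block bootstrap, a standard technique for dependent samples, to a latency quantile.

```python
# Generic moving-block bootstrap for a quantile of auto-correlated latency samples.
# Shown as a standard stand-in illustration; this is not the specific
# confidence-interval construction developed in the thesis.

import numpy as np

def block_bootstrap_ci(samples, quantile=0.99, block_len=50, n_boot=2000, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples)
    n = len(samples)
    n_blocks = int(np.ceil(n / block_len))
    estimates = []
    for _ in range(n_boot):
        # Resample contiguous blocks to preserve short-range auto-correlation.
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        resample = np.concatenate([samples[s:s + block_len] for s in starts])[:n]
        estimates.append(np.quantile(resample, quantile))
    return np.quantile(estimates, [alpha / 2, 1 - alpha / 2])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy auto-correlated latency trace in milliseconds with occasional spikes.
    latency = 5 + np.cumsum(rng.normal(0, 0.05, 5000)) % 3 + (rng.random(5000) < 0.001) * 40
    lo, hi = block_bootstrap_ci(latency, quantile=0.99)
    print(f"99th-percentile latency CI: [{lo:.2f} ms, {hi:.2f} ms]")
```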
Finally, this thesis concludes with an outlook on open research questions. This includes open questions remaining in the context of Industry 4.0 as well as those around 6G. Among the 6G research topics, the two most relevant concern the idea of a network of networks and overcoming best-effort IP.
In the last decade, policy-makers around the world have turned their attention toward the creative industry as an economic engine and a significant driver of employment. Yet, the literature suggests that creative workers are one of the most vulnerable workforces of today's economy. Because of the highly deregulated and highly individuated environment, failure or success are believed to be the byproduct of individual ability and commitment, rather than a structural or collective issue. This thesis taps into the temporal, spatial, and social resolution of digital behavioural data to show that there are indeed structural and historical issues that impact individuals' and groups' careers. To this end, this thesis offers a computational social science research framework that brings together the decades-long theoretical and empirical knowledge of inequality studies and computational methods that deal with the complexity and scale of digital data. Taking the music industry and science as use cases, this thesis starts off by proposing a novel gender detection method that exploits image search and face-detection methods. By analysing the collaboration patterns and citation networks of male and female computer scientists, it sheds light on some of the historical biases and disadvantages that women face in their scientific careers. In particular, the relation between scientific success and gender-specific collaboration patterns is assessed. To elaborate further on the temporal aspect of inequalities in scientific careers, this thesis compares the degree of vertical and horizontal inequalities among cohorts of scientists who started their careers at different points in time. Furthermore, the structural inequality in the music industry is assessed by analyzing the social and cultural relations that emerge from live performances and music releases. The findings hint at the importance of community belonging at different stages of artists' careers. This thesis also quantifies some of the underlying mechanisms and processes of inequality, such as the Matthew Effect and the Hipster Paradox, in creative careers. Finally, this thesis argues that online platforms such as Wikipedia can reflect and amplify existing biases.
Currently, there are a variety of digital tools in the humanities, such
as annotation, visualization, or analysis software, which support researchers in their work and offer them new opportunities to address different research questions. However, the use of these tools falls far
short of expectations. In this thesis, twelve improvement measures are
developed within the framework of a design science theory to counteract the lack of usage acceptance. By implementing the developed design science theory, software developers can increase the acceptance of their digital tools in the humanities context.
For software engineers, conceptually understanding the tools they are using in the context of their projects is a daily challenge and a prerequisite for complex tasks. Textual explanations and code examples serve as knowledge resources for understanding software languages and software technologies. This thesis describes research on integrating and interconnecting
existing knowledge resources, which can then be used to assist with understanding and comparing software languages and software technologies on a conceptual level. We consider the following broad research questions that we later refine: What knowledge resources can be systematically reused for recovering structured knowledge and how? What vocabulary already exists in literature that is used to express conceptual knowledge? How can we reuse the
online encyclopedia Wikipedia? How can we detect and report on instances of technology usage? How can we assure reproducibility as the central quality factor of any construction process for knowledge artifacts? As qualitative research, we describe methodologies to recover knowledge resources by i.) systematically studying literature, ii.) mining Wikipedia, iii.) mining available textual explanations and code examples of technology usage. The theoretical findings are backed by case studies. As research contributions, we have recovered i.) a reference semantics of vocabulary for describing software technology usage with an emphasis on software languages, ii.) an annotated corpus of Wikipedia articles on software languages, iii.) insights into technology usage on GitHub with regard to a catalog of patterns, and iv.) megamodels of technology usage that are interconnected with existing textual explanations and code examples.
Social media provides a powerful way for people to share opinions and sentiments about a specific topic, allowing others to benefit from these thoughts and feelings. This procedure generates a huge amount of unstructured data, such as texts, images, and references, which constantly increases through daily comments in related discussions. However, the vast amount of unstructured data presents risks to the information-extraction process, and so decision-making becomes highly challenging, because data overload may cause the loss of useful data due to its inappropriate presentation and accumulation. This thesis contributes to the field of analyzing and detecting feelings in images and texts by extracting the feelings and opinions hidden in a huge collection of image data and texts on social networks. These feelings are then classified as positive, negative, or neutral, according to the features of the classified data. The process of extracting these feelings greatly helps in decision-making processes on various topics, as will be explained in the first chapter of the thesis. A system has been built that can classify the feelings inherent in images and texts on social media sites, such as people's opinions about products and companies, personal posts, and general messages. This thesis begins by introducing a new method of reducing the dimensionality of text data based on data-mining approaches and then examines sentiment using neural and deep neural network classification algorithms. Subsequently, in contrast to sentiment analysis research on text datasets, we examine sentiment expression and polarity classification within and across image datasets by building deep neural networks based on the attention mechanism.
Connected vehicles will have a tremendous impact on tomorrow's mobility solutions. Such systems will heavily rely on timely information delivery to ensure functional reliability, security and safety. However, the host-centric communication model of today's networks calls efficient data dissemination at scale into question, especially in networks characterized by a high degree of mobility. The Information-Centric Networking (ICN) paradigm has evolved as a promising candidate for the next generation of network architectures. Based on a loosely coupled communication model, the in-network processing and caching capabilities of ICNs are promising for solving the challenges posed by connected vehicular systems. In such networks, a special class of caching strategies, which act by proactively placing a consumer's anticipated content at the right network nodes in time, is promising for reducing the data delivery time. This thesis contributes to the research on active placement strategies in information-centric and computation-centric vehicle networks for providing dynamic access to content and computation results. By analyzing different vehicular applications and their requirements, novel caching strategies are developed in order to reduce the time of content retrieval. The caching strategies are compared and evaluated against the state of the art in both extensive simulations and real-world deployments. The results show performance improvements through increased content retrieval (availability of specific data increased by up to 35% compared to state-of-the-art caching strategies) and reduced delivery times (roughly doubling the number of data retrievals from neighboring nodes). However, storing content actively in connected vehicle networks raises questions regarding security and privacy. In the second part of the thesis, an access control framework for information-centric connected vehicles is presented. Finally, open security issues and research directions in executing computations at the edge of connected vehicle networks are presented.
Virtual reality is a growing field of interest as it provides a particularly intuitive way of user interaction. However, there are still open technical issues regarding latency (the delay between interaction and display reaction) and the trade-off between visual quality and frame rate of real-time graphics, especially when taking visual effects like specular and semi-transparent surfaces and volumes into account. One solution, a distributed rendering setup, is presented in this thesis, in which the image synthesis is divided into an accurate but costly physically based rendering thread with a low refresh rate and a fast reprojection thread that maintains responsive interactivity at a high frame rate. Two novel reprojection techniques are proposed that cover reflections and refractions produced by surface ray-tracing as well as volumetric light transport generated by volume ray-marching. The introduced setup can enhance the VR experience within several domains. In this thesis, three innovative training applications have been realized to investigate the added value of virtual reality for the three learning stages of observation, interaction and collaboration. For each stage, an interdisciplinary curriculum, currently taught with traditional media, was transferred to a VR setting in order to investigate how virtual reality is capable of providing a natural, flexible and efficient learning environment.
The initial goal of this dissertation was the determination of image-based biomarkers sensitive to neurodegenerative processes in the human brain. One such process is the demyelination of neural cells, characteristic of multiple sclerosis (MS), the most common neurological disease in young adults, for which there is no cure yet. Conventional MRI techniques are very effective in localizing areas of brain tissue damage and are thus a reliable tool for the initial MS diagnosis. However, a mismatch between the clinical findings and the visualized areas of damage is observed, which renders the use of standard MRI difficult for objective disease monitoring and therapy evaluation. To address this problem, a novel algorithm for the fast mapping of myelin water content using standard multi-echo gradient echo acquisitions of the human brain is developed in the current work. The method extends a previously published approach for the simultaneous measurement of brain T1, T2* and total water content. Employing the multi-exponential T2* decay signal of myelinated tissue, myelin water content is measured based on the quantification of two water pools (myelin water and rest) with different relaxation times. Whole-brain in vivo myelin water content maps are acquired in 10 healthy controls and one subject with MS. The in vivo results obtained are consistent with previous reports. The acquired quantitative data have a high potential in the context of MS. However, the parameters estimated in a multiparametric acquisition are correlated and therefore constitute an ill-posed, nontrivial data analysis problem. Motivated by this specific problem, a new data clustering approach called Nuclear Potential Clustering (NPC) is developed. It is suitable for the explorative analysis of arbitrary-dimensional and possibly correlated data without a priori assumptions about its structure. The developed algorithm is based on a concept adapted from nuclear physics: to partition the data, the dynamic behavior of equally charged nucleons interacting in a d-dimensional feature space is modeled. An adaptive nuclear potential, comprised of a short-range attractive term (strong interaction) and a long-range repulsive term (Coulomb potential), is assigned to each data point. Thus, nucleons that are densely distributed in space fuse to build nuclei (clusters), whereas single-point clusters are repelled (noise). The algorithm is optimized and tested in an extensive study with a series of synthetic datasets as well as the Iris data. The results show that it can robustly identify clusters even when complex configurations and noise are present. Finally, to address the initial goal, quantitative MRI data of 42 patients are analyzed employing NPC. A series of experiments with different sets of image-based features shows a consistent grouping tendency: younger patients with a low disease grade are recognized as cohesive clusters, while those of higher age and impairment are recognized as outliers. This allows for the definition of a reference region in a feature space associated with phenotypic data. Tracking individuals' positions therein can disclose patients at risk and be employed for therapy evaluation.
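The two-pool estimation can be sketched as a bi-exponential fit to the multi-echo magnitude signal, with the myelin water fraction given by the relative amplitude of the fast-relaxing pool. The echo times, simulated parameters, and fit bounds below are toy assumptions and do not reproduce the thesis's fitting pipeline.

```python
# Sketch of the two-pool idea: the multi-echo gradient-echo magnitude signal is
# modelled as a bi-exponential decay of a fast-relaxing myelin-water pool and a
# slower "rest" pool; the myelin water fraction is the relative amplitude of the
# fast pool. All parameters below are toy assumptions.

import numpy as np
from scipy.optimize import curve_fit

def two_pool_signal(te_ms, a_myelin, t2s_myelin, a_rest, t2s_rest):
    return a_myelin * np.exp(-te_ms / t2s_myelin) + a_rest * np.exp(-te_ms / t2s_rest)

# Simulated multi-echo acquisition (echo times in ms) with a little noise.
te = np.linspace(2, 50, 24)
rng = np.random.default_rng(0)
signal = two_pool_signal(te, 0.15, 10.0, 0.85, 55.0) + rng.normal(0, 0.003, te.size)

# Fit with loose bounds separating the fast (myelin) and slow (rest) pools.
p0 = [0.2, 8.0, 0.8, 50.0]
bounds = ([0, 3, 0, 25], [1, 25, 2, 120])
params, _ = curve_fit(two_pool_signal, te, signal, p0=p0, bounds=bounds)
a_my, t2s_my, a_rest, t2s_rest = params
mwf = a_my / (a_my + a_rest)
print(f"estimated myelin water fraction: {mwf:.2%} (T2* fast ~{t2s_my:.1f} ms, slow ~{t2s_rest:.1f} ms)")
```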