Document Type
- Doctoral Thesis (229)
- Article (26)
- Preprint (10)
- Conference Proceeding (9)
- Report (4)
- Book (3)
- Part of Periodical (3)
- Master's Thesis (1)
- Other (1)
Language
- English (286)
Has Fulltext
- yes (286)
Keywords
- Computer security (8)
- Measure theory (8)
- Graph drawing (6)
- Iterated function system (5)
- Marketing (5)
- Multimedia (5)
- Software Engineering (5)
- Information Retrieval (4)
- Cryptology (4)
- Modelling (4)
Institute
- Fakultät für Informatik und Mathematik (102)
- Wirtschaftswissenschaftliche Fakultät (53)
- Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik (47)
- Philosophische Fakultät (36)
- Mitarbeiter Lehrstuhl/Einrichtung der Wirtschaftswissenschaftlichen Fakultät (9)
- Sonstiger Autor der Fakultät für Informatik und Mathematik (9)
- Sozial- und Bildungswissenschaftliche Fakultät (8)
- Philosophische Fakultät / Südostasienkunde (6)
- Sonstiger Autor der Wirtschaftswissenschaftlichen Fakultät (5)
- Juristische Fakultät (4)
4 Essays on Sustainable Development and Organizations’ Response to Stakeholder’s Expectations
(2024)
As an integral part of society, the environment, and the economy, an organization’s survival and growth depend on legitimacy, which reflects the social support and acceptance of its stakeholders, i.e., the institutions and individuals it interacts with. Organizations interact with internal and external stakeholder groups that do not necessarily have the same legitimacy expectations. For several reasons, such as the adoption of new laws or the invention of new technologies, stakeholders’ expectations of legitimate actions can change over time. Responding to changing stakeholder expectations, as an integral part of a changing environment, is therefore complex.
The main objective of this dissertation is to answer the research question: “How do organizations respond to changing stakeholder expectations in the context of sustainable development?” Building on two empirical settings with four essays, this dissertation provides insights into how organizations respond to changing stakeholder expectations with regard to sustainable development. In particular, it takes a closer look at the interplay between internal stakeholders, such as employees, and external stakeholders, such as politics or customers. The first essay summarizes the organizational factors for the implementation of a research data management (RDM) system within higher education institutions (HEIs) and how they interact with each other. Based on Leavitt’s (1965) classical model of organizational change, the essay provides an overview of the interrelations between the individual components that make up an RDM system. The second essay investigates how early career researchers within HEIs make use of different response strategies to the state, market, professional, and community logics in the context of RDM. It provides insights into how employees deal with changing environmental conditions and with the organization’s response to them. The third essay analyses the shift from voluntary to mandatory sustainability reporting and demonstrates different strategies organizations adopt to respond to it. The fourth essay incorporates the findings of the third essay and investigates which factors influence the organization’s strategy to respond. It focuses on the interface between the top management team and the chief executive officer.
This dissertation makes at least two overall contributions to management and organization studies research. First, it emphasizes the role of different stakeholder groups and their impact on organizations’ activities. Second, it shows the value of connecting different theoretical approaches, such as institutional logics, upper echelon theory, and organizational transparency research, in the context of organizational legitimacy, and demonstrates that a nuanced view is necessary to understand the concept of organizational legitimacy. This cumulative dissertation is structured as follows. Part A is an introduction to the study of organizational legitimacy. Part B contains the four essays.
With the increasing use of digital technologies in the automotive sector, the traditional automobile is undergoing a structural transformation, requiring new technologies and enabling innovative mobility concepts.
In particular, the ability to drive automatically or even fully autonomously, to update control software, and to remain connected to the environment allows attackers, in the absence of adequate protection, to infiltrate highly critical vehicle systems and take control.
Once not only individual vehicles but entire fleets are dominated by software, cyberattacks could disrupt a significant portion of the infrastructure and expose passengers to substantial risks.
This work follows a holistic approach to protecting highly automated software-defined vehicles (SDVs) from cyberattacks by designing and implementing security concepts in the main phases of a vehicle's lifecycle.
We use SAE level 4 prototype vehicles to evaluate our proposed techniques.
We start with a systematic security requirement analysis using the ISA-62443 standard series, demonstrating how threats can be identified in a collaborative, hierarchical process and how the resulting security risks impact the software and hardware architecture of a self-driving vehicle.
We show how this analysis process results in concrete requirements whose consideration reduces the overall security risk to a tolerable level.
Subsequently, we develop technical solutions for selected requirements. We begin by securing the CAN and FlexRay legacy protocols, which we foresee being used in specific areas of SDVs during a transitional period despite technological change.
To enable vehicle-wide security management, we address the management and distribution of cryptographic keys within such networks, mainly focusing on resource-constrained devices.
We propose using lightweight implicit certificates for deriving cryptographic group keys that can be used in CAN networks.
Additionally, we demonstrate how the slot-based frame structure of the FlexRay protocol allows for efficient "multi-slot" authentication, for which we calculate cryptographic keys using hash-based key chains.
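To make the key-chain idea concrete, the following minimal Python sketch shows hash-based key-chain derivation of the kind described above; the chain length, hash function, and the way keys map to FlexRay slots are illustrative assumptions, not the thesis' actual design.

```python
# Toy sketch of a hash-based key chain: keys are derived by repeated hashing
# and later released in reverse order, so a receiver can authenticate each
# released key by hashing it and comparing with the previously known value.
import hashlib

def build_key_chain(seed: bytes, length: int) -> list[bytes]:
    chain = [hashlib.sha256(seed).digest()]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain  # chain[-1] acts as the anchor that is distributed first

chain = build_key_chain(b"vehicle-secret", length=8)
anchor = chain[-1]

# A sender discloses chain[-2], chain[-3], ... over successive slots; each
# disclosed key hashes to the previously disclosed (or anchor) value.
released = chain[-2]
print(hashlib.sha256(released).digest() == anchor)  # True -> key is authentic
```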
SDVs use Ethernet-based communication protocols and custom middleware stacks to transmit large amounts of data in real time.
We develop a three-stage security process for the novel ASOA, which enables the development and central orchestration of system-agnostic functional software components on embedded systems and HPC platforms.
After the central specification of the security architecture at the data flow level, security tokens are automatically calculated and distributed for runtime protection of the service-oriented, DDS-based data transmission.
Our process ensures the strict separation of function and system knowledge, allowing for cost-effective and adaptable security architecture management.
The evaluation in four self-driving, software-defined vehicles demonstrates an average runtime overhead of approximately 5.71%.
As the initial risk analysis and actual cyberattacks have shown, protective measures against the compromise of control units must be taken alongside communication security.
To address this, we develop a method for verifying and validating the software integrity of control units.
A governmental third party confirms a measurement through a digital certificate, proving the examined vehicle's trustworthiness and suitability for participation in automated traffic.
In the final step of this work, we present an assessment scheme that allows software-defined vehicles to evaluate security incidents during operation in terms of their maximum expected damage and initiate appropriate countermeasures.
We follow the ISO/SAE 21434 standard and model attack paths using a graph representing dependencies among internal vehicle assets to account for the propagation effects of cyberattacks.
The assessment of a security incident considers not only the probability of individual attack paths but also the vehicle context.
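As a rough illustration of such a graph-based assessment (not the thesis' actual scheme), the sketch below propagates an incident over a hypothetical asset-dependency graph and weights each attack path by its probability and a context factor; all asset names, probabilities, and damage values are invented.

```python
# Illustrative sketch: worst-case expected damage over attack paths in a
# hypothetical asset-dependency graph, scaled by a vehicle-context factor.
import networkx as nx

g = nx.DiGraph()
g.add_edge("telematics_unit", "gateway", p=0.6)   # p = exploit probability
g.add_edge("gateway", "infotainment", p=0.7)
g.add_edge("gateway", "brake_ecu", p=0.2)
damage = {"gateway": 4.0, "infotainment": 2.0, "brake_ecu": 10.0}

def worst_case_damage(graph, damage, compromised, context_factor=1.0):
    worst = 0.0
    for target, dmg in damage.items():
        for path in nx.all_simple_paths(graph, compromised, target):
            p = 1.0
            for u, v in zip(path, path[1:]):
                p *= graph[u][v]["p"]
            worst = max(worst, p * dmg * context_factor)
    return worst

# e.g. a higher context factor while the vehicle drives autonomously
print(worst_case_damage(g, damage, "telematics_unit", context_factor=1.5))
```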
Our practical evaluation demonstrates that we can detect, report, and assess security incidents below the human reaction time in the aforementioned prototype vehicles.
The collection of personal information by organizations has become increasingly essential for social interactions. At the same time, according to the GDPR (General Data Protection Regulation), organizations have to protect the data they collect. Access Control (AC) mechanisms are traditionally used to secure information systems against unauthorized access to sensitive data. The increased availability of personal sensor data, thanks to IoT-oriented applications, motivates new services that offer insights about individuals. Consequently, data mining algorithms have been proposed to infer personal insights from collected sensor data. Although they can be used for genuine purposes, attackers can leverage those outcomes, combine them with other types of data, and further breach individuals’ privacy. Thus, bypassing AC mechanisms by means of such insights is a concrete problem.
We propose an inference detection system based on the analysis of queries issued on a sensor database. The knowledge obtained through these queries, and the inference channels corresponding to the use of data mining algorithms on sensor data to infer individual information, are described using the Raw sensor data based Inference ChannEl Model (RICE-M). Detection is carried out by the RICE-M based inference detection System (RICE-Sy). At the time of a query, RICE-Sy considers the knowledge that a user obtains via the new query together with the knowledge obtained via their query history, and determines whether this is sufficient to allow that user to operate an inference channel. Privacy protection systems can thus take advantage of the inferences detected by RICE-Sy, taking into account the information about individuals that attackers obtain via a sensor database, to further protect these individuals.
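The following toy sketch illustrates the general idea of query-history-based inference detection; the attribute sets, channel definitions, and class names are hypothetical and are not taken from RICE-M or RICE-Sy.

```python
# Illustrative sketch: accumulate the attribute knowledge a user gains per
# query and flag a query when the union of past and new knowledge covers a
# known inference channel.
from collections import defaultdict

# Hypothetical inference channels: sets of sensor attributes that together
# allow a data mining algorithm to infer a sensitive insight.
INFERENCE_CHANNELS = {
    "sleep_pattern": {"heart_rate", "room_light", "motion"},
    "home_occupancy": {"power_consumption", "motion"},
}

class InferenceDetector:
    def __init__(self, channels):
        self.channels = channels
        self.history = defaultdict(set)  # user -> attributes already obtained

    def check_query(self, user, queried_attributes):
        """Return the channels the user could operate after this query."""
        knowledge = self.history[user] | set(queried_attributes)
        enabled = [name for name, required in self.channels.items()
                   if required <= knowledge]
        self.history[user] = knowledge  # update history after the decision
        return enabled

detector = InferenceDetector(INFERENCE_CHANNELS)
print(detector.check_query("alice", ["heart_rate", "room_light"]))  # []
print(detector.check_query("alice", ["motion"]))  # ['sleep_pattern']
```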
The impact of urban development on local residents in the urban periphery oscillates between processes of empowerment and processes of marginalization, as well as between state-controlled order and decentralized self-organization. Consequently, local authorities, including urban planners, and local residents need to negotiate their respective roles in the production and usage of urban space defined by transition, transformation, and ambiguity. This work therefore examines the underlying patterns, interactions, and structures that define the negotiation of state-society relations in the framework of a porous peri-urban landscape in Vietnam’s secondary cities.
In Chapter 1, peri-urban areas are defined as spaces of transformation where patterns of rural land use are intertwined with patterns of urban land use, creating an urban-rural interface with blurred boundaries. The main characteristic of the spatial pattern in Tam Kỳ and Buôn Ma Thuột is rooted in urban porosity, which leads to the spatial intertwining of rural, peri-urban, and urban spaces. The emerging urban landscape can be defined as a peri-urban city. Three main themes define this peri-urban city beyond porosity: (1) processes of transformation creating the peri-urban city, (2) networks of power and control as means to adapt to these transformations, and (3) mobility as a prerequisite for the ability to benefit from the changing landscape in peri-urban cities.
Chapter 2 describes how state authorities, urban planners, and private investors at the local level in Tam Kỳ and Buôn Ma Thuột use porous space to reproduce the city based on their aspirations. This leads to a cargo cult urbanism, in which urban planning is rooted in future aspirations for the city. Plans and construction efforts reference the image of a modern urban future and produce the image of a city that does not exist in the real urban space of Tam Kỳ and Buôn Ma Thuột. In the meantime, practices based in peri-urban space often contradict the official aspirations of local state authorities.
Chapter 3 explores this emerging divergence between the urban space to which the state aspires and the reality of urban space rooted in material and social porosity. Traditional, newly accessible, and environmental porosity provide an array of accessible space that transforms the peri-urban city into a social arena of encounters and interaction. Consequently, porosity enables local residents to maintain urban space as commons. This counters the push towards the privatization of urban space by state and private actors and creates multiple aspirations for the urban future.
Rooted in urban space as commons, porosity enables the usage of peri-urban space as spaces of resistance as discussed in Chapter 4. Mobility and interaction are means of reproduction in the porous ambiguity of urban materiality but also become means of everyday resistance. The emerging spatialities of emancipation provide opportunities for an emerging urban citizenship. Urban materiality and urban citizenship have a mutually constitutive relationship.
In summary, this cumulative dissertation investigates the application of the conjugate gradient method (CG) to the optimization of artificial neural networks (NNs) and compares this method with common first-order optimization methods, especially stochastic gradient descent (SGD).
The presented research results show that CG can effectively optimize both small and very large networks. However, the default machine precision of 32 bits can lead to problems; the best results are only achieved in 64-bit computations. The research also emphasizes the importance of the initialization of the NNs’ trainable parameters and shows that an initialization using singular value decomposition (SVD) leads to drastically lower error values. Surprisingly, shallow but wide NNs, both in Transformer and CNN architectures, often perform better than their deeper counterparts. Overall, the research results recommend a re-evaluation of the previous preference for extremely deep NNs and emphasize the potential of CG as an optimization method.
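As an illustration of one plausible reading of the SVD-based initialization mentioned above (replacing a randomly drawn weight matrix by its orthogonal SVD factors, computed in 64-bit precision), consider this minimal NumPy sketch; the layer sizes are arbitrary and the exact scheme used in the dissertation may differ.

```python
# Minimal sketch (assumption: "SVD initialization" means keeping only the
# orthogonal factors of a random weight matrix), computed in float64.
import numpy as np

def svd_init(fan_in, fan_out, rng=np.random.default_rng(0)):
    w = rng.standard_normal((fan_in, fan_out))
    u, _, vt = np.linalg.svd(w, full_matrices=False)
    return (u @ vt).astype(np.float64)  # orthonormal columns, unit singular values

W = svd_init(256, 128)
print(W.dtype, np.allclose(W.T @ W, np.eye(128), atol=1e-8))  # float64 True
```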
People-centred reforestation is one of the ways to achieve natural climate solutions. Ghana has established a people-centred reforestation programme known as the Modified Taungya System (MTS), in which local people are assigned degraded forest reserves to practice agroforestry. Given that the MTS is a people-centred initiative, socioeconomic factors are likely to have an impact on the reforestation drive. This study aims to understand the role of the translocal practices of remittances and visits by migrants in the MTS. Using multi-sited, sequential explanatory mixed methods and the lens of socioecological systems, the study shows that the social capital and socioeconomic obligations of cash remittances from, as well as visits by, migrants to their communities of origin play positive roles in reforestation under the MTS. Specifically, translocal households have access to, and use, remittances to engage relatively better in the MTS than households that do not receive remittances. This shows that translocal practices can have a positive impact on the environment in migrants’ areas of origin where people-centred environmental policies are in place.
The growing demand for electric vehicles (EVs) over the last decade and the most recent European Commission regulation to allow only EVs on the road from 2035 make it necessary to design cost-effective and sustainable EV charging stations (CSs). A crucial challenge for charging stations arises from matching fluctuating power supplies and meeting peak load demand. The overall objective of this paper is to optimize the charging scheduling of a hybrid energy storage system (HESS) for EV charging stations while maximizing photovoltaic (PV) power usage and reducing grid energy costs.
This goal is achieved by forecasting the PV power and the load demand using different deep learning (DL) algorithms, such as the recurrent neural network (RNN) and long short-term memory (LSTM). The predicted data are then used to design a scheduling algorithm that determines the optimal charging time slots for the HESS. The findings demonstrate the efficiency of the proposed approach, showcasing a root-mean-square error (RMSE) of 5.78% for real-time PV power forecasting and 9.70%
for real-time load demand forecasting. Moreover, the proposed scheduling algorithm reduces the total grid energy cost by 12.13%.
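A minimal sketch of such an LSTM-based forecaster and a normalized RMSE metric is shown below; the window length, network size, toy PV profile, and the normalization by the observed range are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's code): a small LSTM forecaster for a
# univariate PV-power series and an RMSE expressed as a percentage.
import numpy as np
import tensorflow as tf

def make_windows(series, lookback=24):
    x = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return x[..., None], y

def rmse_percent(y_true, y_pred):
    # RMSE normalized by the observed range (one possible convention)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / (y_true.max() - y_true.min())

series = np.sin(np.linspace(0, 20 * np.pi, 2000)).clip(min=0)  # toy PV profile
x, y = make_windows(series)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(x.shape[1], 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=2, batch_size=64, verbose=0)

pred = model.predict(x, verbose=0).ravel()
print(f"normalized RMSE: {rmse_percent(y, pred):.2f}%")
```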
From transportation to urbanization, energy, and digitalization, China-backed projects of infrastructural development are increasingly common throughout Southeast Asia and the global South as both a means and an outcome of development. This trend has accelerated since the launch of China’s Belt and Road Initiative (BRI) in 2013. Against this backdrop, the present ASEAS issue invites readers to rethink the roles infrastructure plays in forms of development that place connectivity at the center.
Contents:
Simon Rowedder, Phill Wilcox & Susanne Brandtstädter
Negotiating Chinese Infrastructures of Modern Mobilities: Insights from Southeast Asia
Current Research on Southeast Asia
Panitda Saiyarod
The Deviated Route: Navigating the Logistical Power Landscape of the Mekong Border Trade
Franziska S. Nicolaisen
The Politicization of Mobility Infrastructures in Vietnam — The Hanoi Metro Project at the Nexus of Urban Development, Fragmented Mobilities, and National Security
Arratee Ayuttacorn
Chinese Investor Networks and the Politics of Infrastructure Projects in the Eastern Economic Corridor in Thailand
Karin Dean
Belt and Road Initiative in Northern Myanmar: The Local World of China’s Global Investments
Mira Käkönen
Entangled Enclaves: Dams, Volatile Rivers, and Chinese Infrastructural Engagement in Cambodia
Research Workshop
Tim Oakes
Infrastructure Power, Circulation and Suspension
Susanne Brandtstädter
Infrastructural Fragility, Infra-Politics and Jianghu
Book Reviews
Michael Kleinod-Freudenberg
Book Review: Tappe, O., & Rowedder, S. (Eds.). (2022). Extracting Development: Contested Resource Frontiers in Mainland Southeast Asia
This book collects ten of Sandra Huebenthal’s most important contributions to the application of Social Memory Theory in Biblical studies. The volume consists of four parts, each devoted to a particular field of research. Part one addresses the general impact of Social Memory Theory on the New Testament. The second part analyzes how Social Memory Theory adds to exploring the phenomenon of (biblical) intertextuality as a strategy for negotiating Early Christian identity, and the third part investigates how New Testament pseudepigraphy provides a different approach for understanding the negotiation and formation of Christian identities. Finally, part four provides an outlook on how the hermeneutical approach can enhance Patristic research. The ten essays originate from discussions about Social Memory Theory and the New Testament at international conferences; three of them are translations of German contributions, and two are published for the first time in this volume.
(Publisher's description)
Technological advancements and new legal requirements are continuously changing the online data disclosure landscape in terms of both the quantity and the quality of data that firms can acquire. Nowadays, consumers are required to disclose personal data online multiple times a day and in a variety of different contexts, such as creating user profiles, paying online, or using location-based services. Despite consumers’ increasing online privacy concerns, firms rely ever more strongly on consumer data that they convert into a competitive advantage through personalized product recommendations and targeted advertising. In an effort to encourage consumer data disclosure, many firms have focused on building trust as a way to counterbalance privacy concerns and mitigate risk perceptions. Correspondingly, the marketing literature has continued to examine the interplay of trust and consumer privacy concerns. While the extant research has considerably advanced our understanding of the role of trust in privacy-related decision-making, the majority of studies have mainly focused on single-stage, dyadic disclosure settings and cognitive decision-making processes.
Against this background, the overarching goal of this thesis is to shed light on under-researched data disclosure contexts involving trust and to explore additional facets of the underlying decision-making processes. For example, considering pre- and post-disclosure stages when evaluating consumers’ data disclosure decisions allows for a more holistic picture of the decision-making process. Similarly, social media and sharing economy settings challenge the traditional assumption of purely dyadic consumer-firm data disclosure, thus extending traditional conceptualizations of trust. Across three independent essays, this thesis addresses the overarching research question of how the peculiarities of multi-stage and multi-actor settings shape consumers’ trust-based decision-making strategies.
CNT-PUFs: highly robust and heat-tolerant carbon-nanotube-based physical unclonable functions
(2023)
In this work, we explored a highly robust and unique Physical Unclonable Function (PUF) based on the stochastic assembly of single-walled Carbon NanoTubes (CNTs) integrated within a wafer-level technology. Our work demonstrated that the proposed CNT-based PUFs are exceptionally robust with an average fractional intra-device Hamming distance well below 0.01 both at room temperature and under varying temperatures in the range from 23 °C to 120 °C. We attributed the excellent heat tolerance to comparatively low activation energies of less than 40 meV extracted from an Arrhenius plot. As the number of unstable bits in the examined implementation is extremely low, our devices allow for a lightweight and simple error correction, just by selecting stable cells, thereby diminishing the need for complex error correction. Through a significant number of tests, we demonstrated the capability of novel nanomaterial devices to serve as highly efficient hardware security primitives.
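For reference, the fractional intra-device Hamming distance quoted above can be computed as in the following sketch; the response length and the number of unstable bits are invented for illustration.

```python
# Minimal sketch: fractional intra-device Hamming distance, i.e., the bitwise
# disagreement between repeated responses of the same PUF divided by the
# response length.
import numpy as np

def fractional_hd(a: np.ndarray, b: np.ndarray) -> float:
    return np.count_nonzero(a != b) / a.size

rng = np.random.default_rng(1)
reference = rng.integers(0, 2, size=256)           # enrolled CNT-PUF response
noisy = reference.copy()
flip = rng.choice(256, size=2, replace=False)       # ~0.8% unstable bits
noisy[flip] ^= 1

print(fractional_hd(reference, noisy))  # well below 0.01 -> robust device
```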
Vanadium redox-flow batteries (VRFBs) have played a significant role in hybrid energy storage systems (HESSs) over the last few decades owing to their unique characteristics and advantages. Hence, the accurate estimation of the VRFB model holds significant importance in large-scale storage applications, as it is indispensable for incorporating the distinctive features of energy storage systems and control algorithms within embedded energy architectures. In this work, we propose a novel approach that combines model-based and data-driven techniques to predict battery state variables, i.e., the state of charge (SoC), voltage, and current. Our proposal leverages enhanced deep reinforcement learning techniques, specifically deep Q-learning (DQN), by combining Q-learning with neural networks to optimize the VRFB-specific parameters, ensuring a robust fit between the real and simulated data. Our proposed method outperforms the existing approach in voltage prediction. Subsequently, we enhance the proposed approach by incorporating a second deep RL algorithm—dueling DQN—which is an improvement of DQN, resulting in a 10% improvement in the results, especially in terms of voltage prediction. The proposed approach results in an accurate VRFB model that can be generalized to several types of redox-flow batteries.
The worldwide adoption of Electric Vehicles (EVs) has brought promising advancements toward a sustainable transportation system. However, the effective charging scheduling of EVs is not a trivial task due to the increase in the load demand at the Charging Stations (CSs) and the fluctuation of electricity prices. Moreover, other issues that raise concern among EV drivers are the long waiting time and the inability to charge the battery to the desired State of Charge (SOC). In order to alleviate users’ range anxiety, we apply a Deep Reinforcement Learning (DRL) approach that provides the optimal charging time slots for an EV based on photovoltaic power prices, the current EV SOC, the charging connector type, and the history of load demand profiles collected in different locations. Our implemented approach maximizes the EV profit while giving a margin of liberty to EV drivers to select the preferred CS and the best charging time (i.e., morning, afternoon, evening, or night). The results analysis proves the effectiveness of the DRL model in minimizing the charging costs of the EV by up to 60%, providing a full charging experience to the EV with a waiting time of no more than 30 min.
Since the launch of the BRI, particular modes of movement are integral to its vision of what it means to be a modern world citizen. Nowhere is this more apparent than in Southeast Asia, where China-backed infrastructure projects expand, and at great speed. Such infrastructure projects are carriers of particular versions of modernity, promising rapid mobility to populations better connected than ever before. Yet, until now, little attention has been paid to how mobility and promises of mobility intersect with local understandings of development. In the introduction to this special issue, we argue that it is essential to think about the role infrastructure plays in forms of development that place connectivity at the center. We suggest that considering development, mobility and modernity together is enlightening because it interrogates the connections between these interlocking themes. Through an introduction to five ethnographically grounded papers and two commentaries, all of which engage with infrastructures in different contexts throughout Southeast Asia, we demonstrate that there are significant gaps between official policy and lived experience. This makes the need to interrogate what infrastructure, mobilities, and global China really mean all the more pressing.
ChatGPT and similar generative AI models have attracted hundreds of millions of users and have become part of the public discourse. Many believe that such models will disrupt society and lead to significant changes in the education system and information generation. So far, this belief is based on either colloquial evidence or benchmarks from the owners of the models—both lack scientific rigor. We systematically assess the quality of AI-generated content through a large-scale study comparing human-written versus ChatGPT-generated argumentative student essays. We use essays that were rated by a large number of human experts (teachers). We augment the analysis by considering a set of linguistic characteristics of the generated essays. Our results demonstrate that ChatGPT generates essays that are rated higher regarding quality than human-written essays. The writing style of the AI models exhibits linguistic characteristics that are different from those of the human-written essays. Since the technology is readily available, we believe that educators must act immediately. We must re-invent homework and develop teaching concepts that utilize these AI models in the same way as math utilizes the calculator: teach the general concepts first and then use AI tools to free up time for other learning objectives.
Systems-focused error prevention efforts are internationally recognized in the healthcare industry, and industry efforts to identify and correct organizational defects through the process of clinical risk management (CRM) are well established in the U.S. and Germany. However, in both countries, there is no clear corresponding liability for healthcare organizations that fail to engage in systems-based learning through CRM. Although both jurisdictions do recognize organization-based theories of liability, liability for negligent CRM has not been explicitly recognized by courts in either jurisdiction to date. German legal scholars, recognizing this gap in liability for healthcare organizations, have written in support of finding liability for negligent CRM under existing tort law; however, there is no corresponding discussion in the American legal literature. This dissertation fills that gap with a comparative analysis of medical negligence law in the U.S. and Germany through the international lens of modern medical error prevention science and policy to articulate a legal basis and sketch the evidentiary framework for tort liability based on negligent CRM.
In the constrained planarity setting, we ask whether a graph admits a crossing-free drawing that additionally satisfies a given set of constraints. These constraints are often derived from very natural problems; prominent examples are Level Planarity, where vertices have to lie on given horizontal lines indicating a hierarchy, Partially Embedded Planarity, where we extend a given drawing without modifying already-drawn parts, and Clustered Planarity, where we additionally draw the boundaries of clusters which recursively group the vertices in a crossing-free manner. In recent years, the family of constrained planarity problems has received a lot of attention in the field of graph drawing. Efficient algorithms were discovered for many of them, while a few others turned out to be NP-complete. In contrast to the extensive theoretical considerations and the direct motivation by applications, only very few of the found algorithms have been implemented and evaluated in practice.
The goal of this thesis is to advance the research on both theoretical and practical aspects of constrained planarity. On the theoretical side, we consider two types of constrained planarity problems. The first type are problems that individually constrain the rotations of vertices, that is, they restrict the counter-clockwise cyclic orders of the edges incident to vertices. We give a simple linear-time algorithm for the problem Partially Embedded Planarity, which also generalizes to further constrained planarity variants of this type.
The second type of constrained planarity problem concerns more involved planarity variants that come down to the question of whether there are embeddings of one or multiple graphs such that the rotations of certain vertices are in sync in a certain way. Clustered Planarity and a variant of the Simultaneous Embedding with Fixed Edges problem (Connected SEFE-2) are well-known problems of this type. Both are generalized by our Synchronized Planarity problem, for which we give a quadratic algorithm. Through reductions from various other problems, we provide a unified modelling framework for almost all known efficiently solvable constrained planarity variants that also directly provides a quadratic-time solution to all of them.
For both our algorithms, a key ingredient for reaching an efficient solution is the usage of the right data structure for the problem at hand. In this case, these data structures are the SPQR-tree and the PC-tree, which describe planar embedding possibilities from a global and a local perspective, respectively. More specifically, PC-trees can be used to locally describe the possible cyclic orders of edges around vertices in all planar embeddings of a graph. This makes it a key component for our algorithms, as it allows us to test planarity while also respecting further constraints, and to communicate constraints arising from the surrounding graph structure between vertices with synchronized rotation.
Bridging over to the practical side, we present the first correct implementation of PC-trees. We also describe further improvements, which allow us to outperform all implementations of alternative data structures (of which we found only very few to be fully correct) by at least a factor of 4. We show that this yields a simple and competitive planarity test that can also yield an embedding to certify planarity. We also use our PC-tree implementation to implement our quadratic algorithm for solving Synchronized Planarity. Here, we show that our algorithm greatly outperforms previous attempts at solving related problems like Clustered Planarity in practice. We also engineer its running time and show how degrees of freedom in the theoretical algorithm can be leveraged to yield an up to tenfold speed-up in practice.
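As a small illustration of what such a certifying planarity test provides (using networkx rather than the thesis' PC-tree implementation): the test returns both the yes/no answer and, for planar graphs, a combinatorial embedding, i.e., the cyclic edge order around each vertex.

```python
# Quick illustration: a planarity test that also returns an embedding
# certifying the result, the functionality the PC-tree-based test provides.
import networkx as nx

G = nx.complete_graph(4)                       # K4 is planar
is_planar, embedding = nx.check_planarity(G)
print(is_planar)                               # True
print(list(embedding.neighbors_cw_order(0)))   # cyclic edge order around vertex 0

K5 = nx.complete_graph(5)                      # K5 is not planar
print(nx.check_planarity(K5)[0])               # False
```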
The nationalism-patriotism distinction is one of the most influential distinctions in the field of political psychology. While frequently used, the distinction suffers from a number of shortcomings that have hitherto received little attention. This dissertation aims to help fill this research gap by systematically addressing these pitfalls. Notably, it does not abandon the binary distinction as such, but aims to further refine it. Thoroughly revisiting the nationalism-patriotism distinction, it synthesises the field's two predominant research traditions, i.e. the work of Kosterman and Feshbach (1989) in the U.S. and that of Blank and Schmidt (2003) in Germany, which have not previously been brought into dialogue. In so doing, and engaging with research on attachment, it calls for a more nuanced triad of attachments: nationalism, which revolves around the nation; patriotism, which refers to the homeland; and democratic patriotism, with democracy as its object of attachment. In line with this triad, it introduces a novel three-factor measurement model that has been validated in three studies in Germany. Overall, the dissertation underlines the need to approach ambiguous and complex concepts such as nationalism and patriotism in a theoretically consistent way before operationalizing them in a rigorous manner.
Data has become a necessary resource for firm operations in the modern digital world, explaining their growing data gathering efforts. Due to this development, consumers are confronted with decisions to disclose personal data on a daily basis, and have become increasingly intentional about data sharing. While this reluctance to disclose personal data poses challenges for firms, at the same time, it also creates new opportunities for improving privacy-related interactions with customers. This dissertation advocates for a more holistic perspective on consumers’ privacy-related decision-making and introduces the consumer privacy journey consisting of three subsequent phases: pre data disclosure, data disclosure, post data disclosure. In three independent essays, I stress the importance of investigating data requests (i.e., the first step of this journey) as they represent a largely neglected, yet, potentially powerful means to influence consumers’ decision-making and decision-evaluation processes. Based on dual-processing models of decision-making, this dissertation focuses on both consumers’ cognitive and affective evaluations of privacy-related information: First, Essay 1 offers novel conceptualizations and operationalizations of consumers’ perceived behavioral control over personal data (i.e., cognitive processing) in the context of Artificial Intelligence (AI)-based data disclosure processes. Next, Essay 2 examines consumers’ cognitive and affective processing of a data request that entails relevance arguments as well as relevance-illustrating game elements. Finally, Essay 3 categorizes affective cues that trigger consumers’ affective processing of a data request and proposes that such cues need to fit with a specific data disclosure situation to foster long-term decision satisfaction. Collectively, my findings provide research and practice with new insights into consumers’ privacy perceptions and behaviors, which are particularly valuable in the context of complex, new (technology-enabled) data disclosure situations.
My dissertation examines literary mythologies of privacy in the authoritarian Russia of 1953–1985. This era was marked by an expansion of “non-state spheres,” or areas of life of which the Communist state increasingly released its control after Joseph Stalin’s death in 1953. Political elites, architects, and designers, as well as ordinary citizens co-constructed and explored the rising spheres through the languages of their respective fields—by producing regulations and laws, designing and erecting new types of buildings, developing new and modernizing already familiar everyday objects as well as devising new ways of integrating these objects into private and public spaces. Alongside these voices, transformations in the cultural sphere in general, and literature in particular, were most vocal. The late Soviet era witnessed the dissolution of the ossified ideology of socialist realism that focused on the glorification of the “new Soviet man” and had held culture in its tight grips since the 1930s. Starting from the 1950s, writers increasingly focused on portraying areas of life that lay beyond one’s public commitments and experimented with new meanings, codes, and forms to give shape to novel spheres of experience of the “late Soviet man” that can be subsumed under the concept of the “private sphere.”
The analytical framework of my dissertation is built around the journey to understand the mechanisms and architecture that powered the imagination of models of distancing oneself from the state and society at large—scenarios of privacy, as we may call them today. I examine Russian prose and drama of the 1950s–1980s as a laboratory for ideas of privacy, which was increasingly sought in a society disillusioned by the Communist doctrine and thus progressively alienated from active participation in the public sphere. I analyze the meanings that writers incorporated into new and old forms of domesticity—private flats that became progressively widespread throughout the 1950s–1980s, rooms in communal apartments, individual houses—to determine the spectrum of concepts that nurtured the idea of privacy in the late Soviet literary imagination. I also examine representations of reciprocal paradigms of relations between subjects from which the state and society at large were increasingly excluded. In the examples that I analyze, forms of private withdrawal vary from establishing control over liminal spaces or escaping into the world of feelings and emotions and building a connection to a person or space (significant for the characters for private rather than public reasons) to experimenting with language and pursuing one’s idiolect despite the ubiquitous “officialese,” as well as living in temporalities asynchronous with public time.
Beyond revealing the visions of different, non-state existences in the late Soviet era, my text also advocates examining the role of official literature as a platform for subversion and change that was no less important in an authoritarian state than dissident literature. I see officially published texts as a cultural subaltern who defies the state of affairs and slowly but firmly turns the “state sphere” into a public one by pushing its own agenda through publications that test and gain ground for bolder visions of Soviet life that are not predicated on the commitment to the public sphere. While the individual mechanisms of power assertion employed by the state or by literature in the late Soviet era are well researched, a framework is still missing that would capture the shifts of the borders between the private and public spheres under the influence of these actors. In devising such a framework, I build upon sociological theories of disattendability and civil inattention that Erving Goffman conceived to describe conventions of individual behavior and social interaction in public. Extending these theories toward the study of literary politics, I argue that by envisioning scenarios of a private retreat and bringing them into officially published editions, literature normalized privacy as a late Soviet imaginary and, therefore, continuously heightened its own disattendability, thereby expanding the borders of the private sphere. On the side of the state, the border was defined by the triggers of disturbance of civil inattention: privacy was conceded in return for disattendability. Under such conditions, literature became a bizarre “private kitchen” that performed private and public functions simultaneously—similar to the kitchens in newly built individual apartments that were popular loci of socialization in the late Soviet era. It turned into a place where one could escape—one of the “niches of privacy” where it was possible to discuss and negotiate the world in which the society lived or to which it should strive. At the same time, it assumed the role of a surrogate for the public sphere within the “state” sphere by pushing its own agenda through the publication of literary texts that sought to imagine a person rather than a cog in the Communist machine and thus transformed socialist realism into a literary current “with a human face.”
In my research, privacy, literature, and politics are bound together to reveal a vibrant spectacle of the continuous interaction between the state, cultural elites, and the citizens, in which thresholds are erected and crossed incessantly. Fictional private sites were battlefields for the production and contestation of ideologies, and the exposure of these literary wars to the public eye played a fundamental role in shifting the borders between the private and the public spheres in an authoritarian late Soviet Russia. The patterns of relations between the state and culture that I uncover in my dissertation resonate in neo-authoritarian twenty-first-century Russia, making privacy an important lens for our insight into the role of culture in rising authoritarian and failing democratic systems across the globe.
Due to the increasing amount of distributed renewable energy generation and the emerging high demand at consumer connection points, e.g., from electric vehicles, the power distribution grid will reach its capacity limit at peak load times if it is not expensively enhanced. Alternatively, smart flexibility management that controls user assets can help to better utilize the existing power grid infrastructure, for example by sharing available grid capacity among connected electric vehicles or by disaggregating flexibility requests to hybrid photovoltaic battery energy storage systems in households. Besides maintaining an acceptable state of the power distribution grid, these smart grid applications also need to ensure a certain quality of service and provide fairness between the individual participants, both of which are not extensively discussed in the literature. This thesis investigates two smart grid applications, namely electric vehicle charging-as-a-service and flexibility-provision-as-a-service from distributed energy storage systems in private households.
The electric vehicle charging service allocation is modeled with distributed queuing-based allocation mechanisms which are compared to new probabilistic algorithms. Both integrate user constraints (arrival time, departure time, and energy required) to manage the quality of service and fairness. In the queuing-based allocation mechanisms, electric vehicle charging requests are packetized into logical charging current packets, representing the smallest controllable size of the charging process. These packets are queued at hierarchically distributed schedulers, which allocate the available charging capacity using the time and frequency division multiplexing technique known from the networking domain. This allows multiple electric vehicles to be charged simultaneously with variable charging currents. To achieve high quality of service and fairness among electric vehicle charging processes, dynamic weights are introduced into a weighted fair queuing scheduler that considers electric vehicle departure time and required energy for prioritization. The distributed probabilistic algorithms are inspired by medium access protocols from computer networking, such as binary exponential backoff, and control the quality of service and fairness by adjusting sampling windows and waiting periods based on user requirements.
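The following sketch illustrates the general idea of weighting charging requests by urgency and sharing a limited current among them; the weight formula, class names, and numbers are illustrative assumptions, not the scheduler implemented in the thesis.

```python
# Illustrative sketch: allocate a limited charging current among EVs with
# weights that grow with the energy still required and with approaching
# departure times.
from dataclasses import dataclass

@dataclass
class ChargingRequest:
    ev_id: str
    energy_needed_kwh: float    # energy still missing
    time_to_departure_h: float  # remaining time until departure

def dynamic_weight(req: ChargingRequest) -> float:
    # more urgent (little time, much energy) -> larger weight
    return req.energy_needed_kwh / max(req.time_to_departure_h, 0.25)

def allocate(requests, total_current_a):
    weights = {r.ev_id: dynamic_weight(r) for r in requests}
    total = sum(weights.values())
    return {ev: total_current_a * w / total for ev, w in weights.items()}

fleet = [
    ChargingRequest("EV1", energy_needed_kwh=30, time_to_departure_h=2),
    ChargingRequest("EV2", energy_needed_kwh=10, time_to_departure_h=8),
    ChargingRequest("EV3", energy_needed_kwh=20, time_to_departure_h=1),
]
print(allocate(fleet, total_current_a=96))  # EV3 gets the largest share
```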
The second smart grid application under investigation aims to provide flexibility-provision-as-a-service that disaggregates power flexibility requests to distributed battery energy storage systems in private households. Commonly, the main purpose of stationary energy storage is to store energy from a local photovoltaic system for later use, e.g., for overnight charging of an electric vehicle. This is optimized locally by a home energy management system, which also allows the scheduling of external flexibility requests defined by the deviation from the optimal power profile at the grid connection point, for example, to perform peak shaving at the transformer. This thesis discusses a linear heuristic and a metaheuristic to disaggregate a flexibility request to the single participating energy management systems that are grouped into a flexibility pool. The linear heuristic iteratively assigns portions of the power flexibility to the most appropriate energy management system for one time slot after another, minimizing the total flexibility cost or maximizing the probability of flexibility delivery. In addition, a multi-objective genetic algorithm is proposed that also takes into account power grid aspects, quality of service, and fairness among participating households. The genetic operators are tailored to the flexibility disaggregation search space, taking into account flexibility and energy management system constraints, and enable power-optimized buffering of fitness values.
Both smart grid applications are validated on a realistic power distribution grid with real driving patterns and energy profiles for photovoltaic generation and household consumption. The results of all proposed algorithms are analyzed with respect to a set of newly defined metrics on quality of service, fairness, efficiency, and utilization of the power distribution grid. One of the main findings is that none of the tested algorithms outperforms the others in all quality of service metrics, however, integration of user expectations improves the service quality compared to simpler approaches. Furthermore, smart grid control that incorporates users and their flexibility allows the integration of high-load applications such as electric vehicle charging and flexibility aggregation from distributed energy storage systems into the existing electricity distribution infrastructure. However, there is a trade-off between power grid aspects, e. g., grid losses and voltage values, and the quality of service provided. Whenever active user interaction is required, means of controlling the quality of service of users’ smart grid applications are necessary to ensure user satisfaction with the services provided.
Code injection attacks, like the one used in the high-profile 2017 Equifax breach, have become increasingly common, ranking at the top of OWASP’s list of critical web application vulnerabilities. Injection attacks can also target embedded applications running on processors like ARM and Xtensa by exploiting memory bugs and maliciously altering the program’s behavior or even taking full control over a system. In particular, ARM’s support of low power consumption without sacrificing performance is leading the industry to shift towards ARM processors, which draws the attention of injection attackers to these platforms as well.
In this thesis, we consider web applications and embedded applications (running on ARM and Xtensa processors) as the targets of injection attacks. To detect injection attacks in web applications, taint analysis is the most commonly proposed technique, but the precision, scalability, and runtime overhead of the detection depend on the analysis type (e.g., static vs. dynamic, sound vs. unsound). Moreover, among existing dynamic taint tracking approaches for Java-based applications, even the most performant can impose a slowdown of at least 10–20% and often far more. On the other hand, considering embedded applications, while some initial research has tried to detect injection attacks (i.e., ROP and JOP) on ARM, it suffers from high performance or storage overhead. Besides, Xtensa has been neglected even though it is used in most firmware-based embedded WiFi home automation devices.
This thesis aims to provide novel approaches to precisely detect injection attacks on both web and embedded applications. To that end, we evaluate JavaScript static analysis frameworks to assess the security of a hybrid app (JS & native) from an industrial partner, provide Rivulet, a tool that precisely detects injection attacks in Java-based real-world applications, and investigate the detection of injection attacks on ARM and Xtensa platforms using hardware performance counters (HPCs) and machine learning (ML) techniques.
To evaluate the security of the hybrid application, we initially compare the precision, scalability, and code coverage of two widely used static analysis frameworks, WALA and SAFE. The result of our comparison shows that SAFE provides higher precision and better code coverage at the cost of somewhat lower scalability. Based on these results, we analyze the data flows of the hybrid app by extending SAFE’s taint analysis and detect the potential for injection attacks in the hybrid application.
Similarly, to detect injection attacks in Java-based applications, we provide Rivulet which monitors the execution of developer-written functional tests using dynamic taint tracking. Rivulet uses a white-box test generation technique to re-purpose those functional tests to check if any vulnerable flow could be exploited. We compared Rivulet to the state-of-the-art static vulnerability detector Julia on benchmarks and Rivulet outperformed Julia in both false positives and false negatives. We also used Rivulet to detect new vulnerabilities.
Moreover, for applications running on ARM and Xtensa platforms, we investigate ROP attack detection by combining HPCs and ML techniques. We collect data by exploiting real-world vulnerable applications and small benchmarks to train the ML models. For ROP attack detection on ARM, we also implement an online monitor which labels a program’s execution as benign or under attack and stops its execution once the latter is detected. Evaluating our ROP attack detection approach on ARM provides a detection accuracy of 92% for the offline training and 75% for the online monitoring. Similarly, our ROP attack detection on the firmware-only Xtensa processor provides an overall average detection accuracy of 79%.
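A minimal sketch of the offline part of such a pipeline, training a classifier on hardware-performance-counter features to separate benign from ROP-like execution windows, is shown below; the counter selection, synthetic data, and the random-forest model are assumptions for illustration, not the thesis' setup.

```python
# Illustrative sketch: classify execution windows from HPC features as
# benign (0) or ROP attack (1) using synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# features per window: [branch_misses, instructions, returns, cache_misses]
benign = rng.normal([50, 10000, 120, 300], [10, 500, 20, 50], size=(500, 4))
attack = rng.normal([160, 9000, 400, 330], [30, 500, 60, 50], size=(500, 4))
X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"detection accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```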
Last but not least, this thesis shows how relevant taint analysis is for precisely detecting injection attacks on web applications, as well as the power of HPCs combined with machine learning for detecting control-flow injection attacks on ARM and Xtensa platforms.
A Comprehensive Comparison of Fuzzy Extractor Schemes Employing Different Error Correction Codes
(2023)
This thesis deals with fuzzy extractors, security primitives often used in conjunction with Physical Unclonable Functions (PUFs). A fuzzy extractor works in two stages: The generation phase and the reproduction phase. In the generation phase, an Error Correction Code (ECC) is used to compute redundant bits for a given PUF response, which are then stored as helper data, and a key is extracted from the response. Then, in the reproduction phase, another (possibly noisy) PUF response can be used in conjunction with this helper data to extract the original key.
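The two phases can be illustrated with a toy code-offset construction; the 5-bit repetition code, key length, and noise pattern below are simplifications, whereas practical fuzzy extractors use stronger ECCs such as BCH or Reed-Muller codes.

```python
# Toy sketch of the two phases with a 5-bit repetition code as the ECC.
import numpy as np

REP = 5  # each key bit is expanded into 5 code bits

def generate(puf_response, key_bits, rng=np.random.default_rng(0)):
    key = rng.integers(0, 2, size=key_bits)
    codeword = np.repeat(key, REP)            # ECC encoding of the key
    helper = codeword ^ puf_response          # public helper data
    return key, helper

def reproduce(noisy_response, helper):
    codeword = helper ^ noisy_response        # codeword plus response noise
    # majority decoding corrects up to 2 flipped bits per 5-bit block
    return (codeword.reshape(-1, REP).sum(axis=1) > REP // 2).astype(int)

rng = np.random.default_rng(1)
response = rng.integers(0, 2, size=16 * REP)          # enrolled PUF response
key, helper = generate(response, key_bits=16)

noisy = response.copy()
noisy[[0, 1, 5, 10, 77]] ^= 1                          # a few bit flips (<=2 per block)
print(np.array_equal(reproduce(noisy, helper), key))   # True: key recovered
```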
It is clear that the performance of the fuzzy extractor is strongly dependent on the underlying ECC. Therefore, a comparison of ECCs in the context of fuzzy extractors is essential in order to make them as suitable as possible for a given situation. It is important to note that due to the plethora of various PUFs with different characteristics, it is very unrealistic to propose a single metric by which the suitability of a given ECC can be measured.
First, we give a brief introduction to the topic, followed by a detailed description of the background of the ECCs and fuzzy extractors studied. Then, we summarise related work and describe an implementation of the ECCs under consideration. Finally, we carry out the actual comparison of the ECCs and the thesis concludes with a summary of the results and suggestions for future work.
In empirical research, scholars can choose between an exploratory causes-of-effects analysis, a confirmatory effects-of-causes approach, or a mechanism-of-effects analysis that can be either exploratory or confirmatory. Understanding the choice between the approaches is important for two reasons. First, the added value of each approach depends on how much is known about the phenomenon of interest at the time of the analysis. Second, because of the specializations of methods, there are benefits to a division of labor between researchers who have expertise in the application of a given method. In this preregistered study, we test two hypotheses that follow from these arguments. We theorize that exploratory research is chosen when little is known about a phenomenon and a confirmatory approach is taken when more knowledge is available. A complementary hypothesis is that quantitative researchers opt for confirmatory designs and qualitative researchers for exploration because of their academic socialization. We test the hypotheses with a survey experiment of more than 900 political scientists from the United States and Europe. The results indicate that the state of knowledge has a significant and sizeable effect on the choice of the approach. In contrast, the evidence about the effect of methods expertise is more ambivalent.
Understanding financial data has always been a point of interest for market participants seeking to make better-informed decisions. Recently, different cutting-edge technologies have been addressed in the Financial Technology (FinTech) domain, including numeracy understanding, opinion mining, and financial document processing.
In this thesis, we are interested in analyzing the arguments of financial experts with the goal of supporting investment decisions. Although various business studies confirm the crucial role of argumentation in financial communications, no work has addressed this problem as a computational argumentation task, that is, the automatic analysis of arguments. In this regard, this thesis presents contributions along the three essential axes of theory, data, and evaluation to fill the gap between argument mining and financial text.
First, we propose a method for determining the structure of the arguments stated by company representatives during the public announcement of their quarterly results and future estimations through earnings conference calls. The proposed scheme is derived from argumentation theory at the micro-structure level of discourse. We further conducted the corresponding annotation study and published the first financial dataset annotated with arguments: FinArg.
Moreover, we investigate the question of evaluating the quality of arguments in this financial genre of text. To tackle this challenge, we suggest using two levels of quality metrics, considering both the Natural Language Processing (NLP) literature on argument quality assessment and the peculiarities of the financial domain.
Hence, we have also enriched the FinArg data with our quality dimensions to produce the FinArgQuality dataset.
In terms of evaluation, we validate the principle of ensemble learning on the argument identification and argument unit classification tasks. We show that combining a traditional machine learning model with a deep learning one via an integration model (stacking) improves the overall performance, especially in small dataset settings.
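A minimal sketch of such a stacking ensemble (a traditional classifier and a small neural network combined under a logistic-regression meta-learner) is given below; the toy sentences, TF-IDF features, and model choices are illustrative assumptions, not the models used in the thesis.

```python
# Minimal stacking sketch: a linear SVM and a small MLP as base learners,
# combined by a logistic-regression meta-learner on top of TF-IDF features.
from sklearn.ensemble import StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "revenue grew strongly because demand for the new product doubled",
    "we expect margins to improve given lower input costs",
    "guidance was raised since the backlog supports further growth",
    "the quarter closed at the end of march",
    "headcount was 1,200 employees",
    "the call started with the usual safe harbor statement",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = argumentative sentence, 0 = non-argumentative

model = make_pipeline(
    TfidfVectorizer(),
    StackingClassifier(
        estimators=[
            ("svm", LinearSVC()),
            ("mlp", MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000)),
        ],
        final_estimator=LogisticRegression(),
        cv=3,  # few folds because of the tiny toy dataset
    ),
)
model.fit(texts, labels)
print(model.predict(["profit rose because the new service sold well"]))
```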
In addition, despite the fact that argument mining is mainly a domain-dependent task, the number of studies that tackle the generalization of argument mining models is, to date, still relatively small. Therefore, using our stacking approach and in comparison to a transfer learning model based on DistilBERT, we address and analyze three real-world scenarios concerning model robustness over completely unseen domains and unseen topics.
Furthermore, with the aim of automatically assessing argument strength, we have investigated and compared different (refined) versions of BERT-based models that incorporate external knowledge in the decision layer. Consequently, our method outperforms the baseline model by 13 ± 2% in terms of F1-score by integrating BERT with encoded categorical features.
Beyond our theoretical and methodological proposals, our model of argument quality assessment, annotated corpora, and evaluation approaches are publicly available, and can serve as strong baselines for future work in both FinNLP and computational argumentation domains.
Hence, building directly on this thesis, we proposed to the community a new task/challenge related to the analysis of financial arguments, FinArg-1, within the framework of the NTCIR-17 conference.
We also used our proposals to participate in the Touché challenge at the CLEF 2021 conference. Our contribution was selected among the «Best of Labs».
To answer the research question, all SPIEGEL covers from 1965 to 2021 were examined for references to history topics. The report documents how the 533 recorded covers were assigned to the categories of history narrative, politics of memory, and politics of the past.
Main article: https://doi.org/10.3167/jemms.2023.150107
Organic agriculture in Java, Indonesia, has been historically intertwined with social movements that struggled for more economically, ecologically, culturally, and socially sustainable agriculture. While these grassroots movements emerged under an authoritarian government that showed little interest in organic agriculture, the turn of the 21st century saw the rapid involvement of the Indonesian government in supporting, regulating and, arguably, commodifying organic agriculture. Institutionalization triggered diverse responses from competing organic actors, reflecting their different standpoints and knowledges. In this context, a transdisciplinary approach is deemed suitable to provide context-specific insights into organic agriculture.
This dissertation draws on anthropology and Science and Technology Studies (STS) to explore the politics of knowledge of organic agriculture in Yogyakarta, Indonesia, as a contribution to a critique of transdisciplinarity. My interest in the hierarchization of different knowledges is inspired by the work of anthropologists of knowledge who ask how the communities they study construct knowledge and how they themselves construct knowledge about these communities. Since transdisciplinary knowledge is co-produced by science and society and reflects their embedded power relations, transdisciplinary research needs to be open to different interpretations, and reflexive towards the unequal distribution of resources, accountability, and responsibility. By linking these two lines of thought, I examine the making of knowledges through reflexive transdisciplinary work. I reflect on how “epistemic living space” (Felt 2009) and “co-presence” (Chua 2015) affect research and shape the politics of knowledge of organic agriculture in Yogyakarta, Indonesia. I argue that the hierarchization of different knowledges of organic agriculture was intertwined with my shifting positionalities, as a field researcher in Indonesia and PhD student at Passau University, as I moved between these two different “field sites”.
This cumulative dissertation is divided into two parts. In Part I, “Knowledge in the making”, I present my contributions towards transdisciplinary knowledge production and politics of knowledge of organic agriculture. Part II, “Publications”, comprises the three stand-alone papers. The first contribution is my formulation of the notion of knowledge in the making. The second is my exploration of the ways that reflexive transdisciplinary work, and living and intersubjective experience shape knowledge in the making. The third is my demonstration of how an understanding of knowledge in the making sheds light on the politics of knowledge of organic agriculture. This approach serves to examine the politics involved in synthesizing the conceptualizations of organic agriculture employed by different actors into one overarching narrative, such as sustainable agriculture or alternative agriculture. My final contribution is the notion of transdisciplinary moments, a conceptualization of transdisciplinary research practice that accounts for the politics of knowledge in which both scientific and extra-scientific actors are embedded. As a conclusion, I share the lessons learned from pursuing a PhD as a cumulative dissertation in an unstructured setting within a German–Indonesian research project on Indonesian organic agriculture. Finally, I identify bodies of literature and strands of thinking for future engagement within transdisciplinary research and discuss their potential to contribute to radical change in the institutional and value structures of contemporary academia.
In the ongoing 21st century, low- and middle-income countries will face two health challenges that are thoroughly different from what these countries have been dealing with in preceding centuries. First, they are confronted with surging rates of non-communicable diseases (NCDs), and second, climate change will take its toll and is predicted to cause catastrophic health impairments and exacerbate chronic health conditions further. Both will pose a disproportionate health and economic burden on low- and middle-income countries, which are also the countries least able to cope with them. By threatening individual health and socioeconomic improvements, and by putting an immense burden on already constrained health care systems, they impede progress in poverty reduction and widen health inequities between the rich and the poor.
Against this background, this thesis investigates the potential of NCD prevention and treatment measures in the context of Southeast Asia, with case studies in Indonesia. Specifically, it seeks to understand what kind of health interventions have the potential to be (cost-)effective considering the cultural background, lifestyle, health literacy and health system capacities in the region. Further, this thesis analyzes the interplay between NCDs and climate change and assesses the financial burden that both might pose in the decades to come. Hence, this thesis contributes to a better understanding of how the two health challenges of the 21st century, NCDs and climate change, can be addressed in the context of Southeast Asia and offers insights into what type of health policies and interventions can play a supportive role.
Teaching Journalism Literacy in Schools: The Role of Media Companies as Media Educators in Germany
(2023)
German journalism is facing major challenges including declining circulation, funding, trust, and political allegations of spreading disinformation. Increased media literacy in the population is one way to counter these issues and their implications. This especially applies to the sub‐concept of journalism literacy, focusing on the ability to consume news critically and reflectively, thus enabling democratic participation. For media companies, promoting journalism literacy seems logical for economic and altruistic reasons. However, research on German initiatives is scarce. This article presents an explorative qualitative survey of experts from seven media companies offering journalistic media education projects in German schools, focusing on the initiatives’ content, structure, and motivation. Results show that initiatives primarily aim at students and teachers, offering mostly education on journalism (e.g., teaching material) and via journalism (e.g., journalistic co‐production with students). While these projects mainly provide information on the respective medium and journalistic practices, dealing with disinformation is also a central goal. Most initiatives are motivated both extrinsically (e.g., reaching new audiences) and intrinsically (e.g., democratic responsibility). Despite sometimes insufficient resources and reluctant teachers, media companies see many opportunities in their initiatives: Gaining trust and creating resilience against disinformation are just two examples within the larger goal of enabling young people to be informed and opinionated members of a democratic society.
Religion can unite and divide, it can lead to a strengthening or a weakening of identity and legitimacy. Religion can stoke conflicts but it can also pacify them – within societies and in international politics. Religion endures and it can exist independently of states, it can constitute them, and it can provide new forms of states, societies, and empires. Arguably, religion shapes or even constitutes the international society of states, an aspect so far neglected in the field of International Relations. The dissertation provides a new definition of religion for International Relations and the English School in particular. Based upon this understanding of religion, the five publications presented in the dissertation provide new analytical and theoretical concepts and approaches to fill the research gap. Religion is integrated into the theoretical framework of the English School in the form of a “prime institution” and with the help of the “quilt model”. While the former expands the theoretical framework, the latter adds an analytical layer. Based upon this definition, religion is also introduced as a concept (“hybrid actorness”) in Foreign Policy Analysis, opening it up to become less state-centric and more transnationally oriented, thereby boosting its relevance considering the evolving international (global) society. In another step, the Securitization framework of analysis is expanded to include (freedom of) religion. By revisiting the publications, the dissertation is able to identify next steps in terms of avenues of research. Finally, the dissertation reveals areas of study which contribute to increasing the pertinence of IR, particularly of the English School.
Data-driven decision-making and data-intensive research are becoming prevalent in many sectors of modern society, e.g., healthcare, politics, business, and entertainment. During the COVID-19 pandemic, huge amounts of educational data and new types of evidence were generated through various online platforms, digital tools, and communication applications. Meanwhile, it is acknowledged that education lacks the computational infrastructure and human capacity to fully exploit the potential of big data. This paper explores the use of Learning Analytics (LA) in higher education for measurement purposes. Four main LA functions in assessment are outlined: (a) monitoring and analysis, (b) automated feedback, (c) prediction, prevention, and intervention, and (d) new forms of assessment. The paper concludes by discussing the challenges of adopting and upscaling LA as well as the implications for instructors in higher education.
In the Internet of Things (IoT), Low-Power Wide-Area Networks (LPWANs) are designed to provide low energy consumption while maintaining a long communication range for End Devices (EDs). LoRa is a communication protocol that can cover a wide range with low energy consumption. To evaluate the efficiency of the LoRa Wide-Area Network (LoRaWAN), three criteria can be considered, namely, the Packet Delivery Rate (PDR), Energy Consumption (EC), and coverage area. A set of transmission parameters has to be configured to establish a communication link. These parameters can affect the data rate, noise resistance, receiver sensitivity, and EC. The Adaptive Data Rate (ADR) algorithm is a mechanism for configuring the transmission parameters of EDs with the aim of improving the PDR. Therefore, we introduce a new algorithm using the Multi-Armed Bandit (MAB) technique to configure the EDs' transmission parameters in a centralized manner on the Network Server (NS) side, while also improving the EC. The performance of the proposed algorithm, the Low-Power Multi-Armed Bandit (LP-MAB), is evaluated through simulation results and is compared with other approaches in different scenarios. The simulation results indicate that LP-MAB's EC outperforms that of other algorithms while maintaining a relatively high PDR in various circumstances.
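To give an intuition only (this is not the LP-MAB algorithm itself), a centralized controller on the network-server side could treat each transmission-parameter combination as an arm of a bandit and choose arms epsilon-greedily, with a reward that trades off delivery success against energy. The parameter values and the reward definition below are hypothetical.

```python
import random

# Hypothetical arms: (spreading factor, TX power in dBm) combinations.
ARMS = [(sf, p) for sf in range(7, 13) for p in (2, 8, 14)]

class EpsilonGreedyADR:
    """Toy epsilon-greedy bandit choosing LoRa parameters for one end device."""
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in ARMS}
        self.values = {a: 0.0 for a in ARMS}    # running mean reward per arm

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(ARMS)           # explore
        return max(ARMS, key=self.values.get)    # exploit best-known arm

    def update(self, arm, delivered, energy_mj):
        # Hypothetical reward: favor delivery, penalize energy consumption.
        reward = (1.0 if delivered else 0.0) - 0.01 * energy_mj
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n

bandit = EpsilonGreedyADR()
arm = bandit.select()
bandit.update(arm, delivered=True, energy_mj=42.0)
```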
This article presents a proposal on how the European Union’s regulatory framework on genetically modified (GM) plants should be reformed in light of recent developments in genomic plant breeding techniques. The reform involves a three-tier system reflecting the genetic changes and resulting traits of GM plants. The article is intended to contribute to the ongoing debate over how best to regulate plant gene editing techniques in the EU.
After the enactment of the GDPR in 2018, many companies were forced to rethink their privacy management in order to comply with the new legal framework. These changes mostly affect the Controller, who must achieve GDPR-compliant privacy policies and management. However, measures to give users a better understanding of privacy, which is essential to generate legitimate interest in the Controller, are often skipped. We recommend addressing this issue through the usage of privacy preference languages, whereby users define rules regarding their preferences for privacy handling. In the literature, preference languages only work with their corresponding privacy language, which limits their applicability. In this paper, we propose the ConTra preference language, which we envision to support users during privacy policy negotiation while meeting current technical and legal requirements. Therefore, ConTra preferences are defined, showing the language's expressiveness, extensibility, and applicability in resource-limited IoT scenarios. In addition, we introduce a generic approach which provides privacy language compatibility for unified preference matching.
Due to their high numbers, refugees’ labour market inclusion has become an important topic for Germany in recent years. Because of a lack of research on meso-level actors’ influences on labour market inclusion and the transcendent role of organizations in modern societies, the article focuses on the German professional chambers’ role in the process of refugee inclusion. The study shows that professional chambers are intermediaries between economic actors, the government and refugees, which all follow their own logics and ideas of labour market inclusion (the state, the market and the community logic). The measures taken by professional chambers mainly reflect a governmental logic (to reduce refugee unemployment) combined with a market logic (to provide human resources to economic actors). A community logic (altruism) only comes into play as a rather unintended consequence of measures addressing the other two logics. The measures of two types of professional chambers are compared. Close similarities between them reveal that the organization type is of theoretical relevance to explain the type of measures organizations opt for.
My study examines how the configuration of the capitalist frontier through extractivism shapes ethnicity, gender, and intersectionality in the areas surrounding a nickel mine in Sorowako (East Luwu District, South Sulawesi) and in areas of logging and coal mining along the Lalang River (Murung Raya District, Central Kalimantan), Indonesia. The colonial frontier intersects with the capitalist frontier and provides the circumstances for its formation. Colonial restrictions, religions, commodities, the imposition of labor discipline, and political changes have molded ethnic identities and relationships with nature. Furthermore, using autoethnography and Feminist Political Ecology, I combine my experiences as a woman academic-activist with the experiences of the people in my research area. I identify how communities and individuals interact with the multiple-frontier in everyday life by defining the configuration of the frontier from above and below. Thus, my dissertation contributes to understanding how I, the community, and the capitalist frontier landscape are contained and can potentially transform into multidimensional resistance. I develop a link between the body as the interior frontier and the extractive landscape to be transformed into a perspective of “Tubuh-tanah air” as a future arena of engagement and resistance to extractivism.
This dissertation examines the overarching research question of how the suppliers’ brand management in the form of brand identity, brand culture, and brand essence influences buyer-seller relationships in three independent essays.
In Essay 1, I address the structure, capabilities, and outcomes of brand identity from a supplier perspective. Through qualitative interviews with suppliers, I examine how widespread the concept of brand identity is in practice and what exactly practitioners understand by it. Going further, I look at what capabilities and conditions are necessary for brand identity to be successful and what outcomes suppliers hope to achieve. Using an Information-Display-Matrix (IDM) test and a sample of Master of Business Administration (MBA) students, I examine the relevance of brand functions in more detail.
In Essay 2, I use a dyadic dataset with matched buyer-seller dyads to examine the causes and effects of perceptual congruence and incongruence of brand culture strength on the buyer-seller relationship, while considering relationship-specific investments and interaction mechanisms as moderating effects. I show that congruence and incongruence have different effects on customer loyalty and price sensitivity and that these are strongly context-dependent.
In Essay 3, I deal with brand essence strength interactions and their effects on the buyer-seller relationship. I use a dyadic dataset with matched buyer-seller dyads to show how brand essence strength influences customer loyalty and customer profitability, and how it interacts with key customer attitudes and other important buyer-seller relationship closeness indicators.
This dissertation makes a significant contribution to the literature on brand identity, brand culture, and brand essence in buyer-seller relationships. Furthermore, my dissertation offers practical implications for managers at B2B suppliers who (re)shape their brand management with a focus on the inner parts of the brand.
1. IT-Exposure and Firm Value: We analyze the joint influence of a firm’s information technology (IT)-Exposure and investment behavior on firm value. Estimating a firm’s (partial) IT-Exposure allows for distinguishing between firms with a business model that is challenged by IT above and below market average. Hence, we estimate the annual IT-Exposure of a firm using a 3-factor Fama-French model extended by an IT-proxy. Subsequently, we analyze the relationship with Tobin’s Q in a panel data context, accounting for the relationship between IT-Exposure and investments proxied by R&D as well as CapEx. We use more than 48,000 firm-year observations for firms in the Russell 3000 Index covering the period 1990 to 2018. Although IT-Exposure has a negative impact on firm value, this discount can be overcompensated by up to 2.1 times by sufficient investments through R&D and CapEx, giving a firm with an average Tobin’s Q a premium of 14.8% to 19.2%, while controlling for endogeneity.
2. Corporate Social Responsibility, Risk, and Firm Value: An Unconditional Quantile Regression Approach: This paper examines the impact of corporate social responsibility (CSR) on firm risk, comprising total risk, idiosyncratic risk, and systematic risk, as well as firm value. We focus on analyzing the interrelationships along the entire distribution of the dependent variables, thus estimating an unconditional quantile regression (UQR). The analysis is based on CSR scores from Refinitiv and MSCI, using up to 12,013 firm-year observations over the period 2002 to 2019 for all U.S. companies listed on NYSE, NASDAQ, and AMEX. UQR reveals strongly heterogeneous effects along the unconditional quantiles of the dependent variables, which are reflected in sign changes, magnitude and significance variations. For CSR we find a risk-reducing as well as value-enhancing effect. When applying fixed effects OLS, we can just partly confirm the risk-reducing and value-enhancing effect of CSR shown in the literature.
3. Heterogeneous Effects of Religiosity on Firm Risk and Firm Value: An Unconditional Quantile Regression Approach: This paper examines the impact of religiosity on firm risk, comprising total risk, idiosyncratic risk, and systematic risk, as well as firm value. We focus on analyzing the interrelationships along the entire distribution of the dependent variables, thus estimating an unconditional quantile regression (UQR). The analysis is based on all U.S. companies listed on NYSE, NASDAQ, and AMEX for the period from 1980 through 2020. UQR reveals strongly heterogeneous effects along the unconditional quantiles of the dependent variables, which are reflected in sign changes, magnitude and significance variations. Overall, the risk-reducing effect of religiosity is more pronounced in the higher quantiles of the distribution. We further observe a value-reducing as well as value-enhancing religiosity effect. When applying fixed effects OLS, we can confirm the risk-reducing and non-existing value effect of religiosity shown in the literature. The robustness of our results is underpinned by a battery of additional tests.
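The specifications in these papers are richer (panels, fixed effects, extensive controls), but the core of an unconditional quantile regression can be sketched as an OLS regression of the recentered influence function (RIF) of the outcome on the covariates, in the spirit of Firpo, Fortin, and Lemieux (2009). The data and variable names below are purely hypothetical.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
n = 1000
csr = rng.uniform(0, 100, n)                              # hypothetical CSR score
firm_risk = 0.5 - 0.002 * csr + rng.normal(0, 0.1, n)     # hypothetical outcome

def rif_quantile(y, tau):
    """Recentered influence function of the tau-th unconditional quantile."""
    q = np.quantile(y, tau)
    f_q = gaussian_kde(y)(q)[0]          # kernel density estimate at the quantile
    return q + (tau - (y <= q)) / f_q

X = sm.add_constant(csr)
for tau in (0.1, 0.5, 0.9):
    res = sm.OLS(rif_quantile(firm_risk, tau), X).fit()
    print(f"tau={tau}: effect on the {tau}-quantile of risk = {res.params[1]:.4f}")
```

Regressing the RIF by OLS yields the marginal effect of a covariate on the unconditional quantile of the outcome, which is what distinguishes UQR from conditional quantile regression.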
Abstract 1: This paper investigates whether market quality, uncertainty, investor sentiment and attention, and macroeconomic news affect bitcoin price discovery in spot and futures markets. Over the period December 2017 – March 2019, we find significant time variation in the contribution to price discovery of the two markets. Increases in price discovery are mainly driven by relative trading costs and volume, and by uncertainty to a lesser extent. Additionally, medium-sized trades contain most information in terms of price discovery. Finally, higher news-based bitcoin sentiment increases the informational role of the futures market, while attention and macroeconomic news have no impact on price discovery.
Abstract 2: We investigate whether local religious norms affect stock liquidity for U.S. listed companies. Over the period 1997–2020, we find that firms located in more religious areas have higher liquidity, as reflected by lower bid-ask spreads. This result persists after the inclusion of additional controls, such as governance metrics, and further sensitivity and endogeneity analyses. Subsample tests indicate that the impact of religiosity on stock liquidity is particularly evident for firms operating in a poor information environment. We further show that firms located in more religious areas have lower price impact of trades and smaller probability of information-based trading. Overall, our findings are consistent with the notion that religiosity, with its antimanipulative ethos, probably fosters trust in corporate actions and information flows, especially when little is known about the firm. Finally, we conjecture an indirect firm value implication of religiosity through the channel of stock liquidity.
Abstract 3: This study shows that higher physical distance to institutional shareholders is associated with higher stock price crash risk. Since monitoring costs increase with distance, the results are consistent with the monitoring theory of local institutional investors. Cross-sectional analyses show that the effect of proximity on crash risk is more pronounced for firms with weak internal governance structures. The significant relation between distance and crash risk still holds under the implementation of the Sarbanes-Oxley Act, however, to a lower extent. Also, the existence of the channel of bad news hoarding is confirmed. Finally, I show that there is heterogeneity in distance-induced monitoring activities of different types of institutions.
Feature binding has been proven to be a common and general mechanism underlying human information processing and action control. There is strong evidence showing that when humans perform a task, stimuli (e.g., the target, the distractor) and responses are bound together into an episodic representation, called an event file or a stimulus-response (S-R) episode, which can be retrieved upon feature repetition. As compared with the target and the distractor, the context (i.e., an additional stimulus presented together with the target and the distractor, but not associated with any response keys throughout the whole course of the task), which is considered task-irrelevant, did not receive as much attention in previous studies. The current thesis aims to provide insights into the different roles the context plays in S-R binding and retrieval. Specifically, in Study One and Study Two, the role of context as an element that can be integrated into an S-R episode was investigated, with a focus on the saliency and the inter-trial variability of the contextual stimulus. Both properties were found to influence how the context is integrated into an S-R episode. More specifically, results show that both saliency and inter-trial variability determine whether the context is directly bound in a binary fashion with the response, or whether it enters into a configural binding together with another stimulus and the response. In Study Three, inspired by the role of context as an event segmentation factor in the event perception literature, it was tested whether the context can demarcate the integration window of an S-R episode. Results provide consistent evidence that sharing a common context leads to a stronger binding between a stimulus and the response, as compared with the condition in which these elements are separated by different contexts, thereby suggesting a binding principle of common context. Taken together, the current thesis specifies the role of context in S-R binding and retrieval, and sheds some light on how contextual information influences human behavior.
This dissertation uses four studies to examine the context-contingent strategic factors that are critical to the success of digital transformation strategies from the perspectives of capital markets, incumbents, and start-ups. It focuses on a better understanding of (1) digital innovations and their quantitative evaluation, (2) power disruptions in digitally servitized supply chains, (3) strategic measures and dynamics in digital B2B platform markets, and (4) strategizing by data-driven start-ups in digitalized business networks.
Digital platforms consist of technical elements such as software and hardware and associated social elements such as organizational processes and standards. When such social or technical elements seem logical individually but inconsistent when juxtaposed, they form tensions. Prior research on platforms often focused on individual elements of digital platforms but neglected possibly related and conflicting elements, which offers limited insight into underlying tensions. While some studies on platforms considered tensions, they largely assumed that centralized platform owners are responsible for responding to tensions, neglecting collective response mechanisms in blockchain-based decentralized autonomous organizations (DAOs), where decentralized participants typically respond to tensions. The examination of tensions in the context of centralized platforms and decentralized autonomous organizations offers an opportunity to surface conflicting elements that form novel types of socio-technical tensions which require collective and technology-enabled response mechanisms. This thesis explored what tensions exist in centralized and decentralized digital platform contexts and how platform participants can respond to selected tensions. For this purpose, this thesis comprises five essays that employ multiple different research methods, including interviews analyzed using techniques of grounded theory, qualitative meta-analysis of published case studies, and systematic literature reviews. The findings derived from all five essays contribute to a better understanding of tensions in digital platforms. In particular, this thesis (1) offers a lens for analyzing platforms as collective organizations in which tensions arise at the collective meta-organizational level requiring collective responses, (2) identifies new tensions and response mechanisms related to generativity and collectivity, and (3) points to a novel category of socio-technical tensions that are especially salient in digital platforms.
This thesis investigates the quality of randomly collected data by employing a framework built on information-based complexity, a field related to the numerical analysis of abstract problems. The quality or power of gathered information is measured by its radius, which is the uniform error obtainable by the best possible algorithm using it. The main aim is to present progress towards understanding the power of random information for approximation and integration problems.
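In the standard notation of information-based complexity (which the thesis may adapt), the radius of an information mapping N for a solution operator S on a class F can be written as below; this is only the textbook definition, stated here for orientation.

```latex
% Radius of the information mapping N for a solution operator S on a class F:
% the worst-case error of the best algorithm \varphi that only sees N(f).
\[
  r(N) = \inf_{\varphi}\, \sup_{f \in F}\, \bigl\| S(f) - \varphi\bigl(N(f)\bigr) \bigr\|,
  \qquad
  N(f) = \bigl( L_1(f), \dots, L_n(f) \bigr),
\]
% where L_1, ..., L_n are the (here randomly drawn) linear functionals or
% function evaluations that constitute the information.
```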
In the first problem considered, information given by linear functionals is used to recover vectors, in particular from generalized ellipsoids. This is related to the approximation of diagonal operators which are important objects of study in the theory of function spaces. We obtain upper bounds on the radius of random information both in a convex and a quasi-normed setting, which extend and, in some cases, improve existing results. We conjecture and partially establish that the power of random information is subject to a dichotomy determined by the decay of the length of the semiaxes of the generalized ellipsoid.
Second, we study multivariate approximation and integration using information given by function values at sampling point sets. We obtain an asymptotic characterization of the radius of information in terms of a geometric measure of equidistribution, the distortion, which is well known in the theory of quantization of measures. This holds for isotropic Sobolev as well as Hölder and Triebel-Lizorkin spaces on bounded convex domains. We obtain that for these spaces, depending on the parameters involved, typical point sets are either asymptotically optimal or worse by a logarithmic factor, again extending and improving existing results.
Further, we study isotropic discrepancy which is related to numerical integration using linear algorithms with equal weights. In particular, we analyze the quality of lattice point sets with respect to this criterion and obtain that they are suboptimal compared to uniform random points. This is in contrast to the approximation of Sobolev functions and resolves an open question raised in the context of a possible low discrepancy construction on the two-dimensional sphere.
Blockchain technology enables the automated recording of information and execution of contract content – utilizing so-called Smart Contracts – without relying on trusted intermediaries (Beck et al., 2016). A blockchain is best described as a decentralized digital ledger (Atzori, 2015). The decentralized data storage on the blockchain makes the recorded information tamper-proof and creates transparency along the value chain. Blockchain therefore fundamentally changes the way data and information are processed (Al-Jaroodi & Mohamed, 2019; Avital et al., 2016). This has given rise to numerous use cases for blockchain in a wide range of industries.
Fundamentally, blockchain technology can be used at any time when information needs to be stored in an automated and tamper-proof manner (Crosby et al., 2016). In the financial industry, blockchain helps to automate peer-to-peer transactions. This makes middlemen obsolete, which can reduce transaction costs (Cai, 2018). In the public health sector, blockchain is primarily used for decentralized storage of patient records. By using blockchain technology, these patient records are secured against manipulation and unauthorized access by third parties (Mettler, 2016). Another interesting use case can be seen in the electricity market. Blockchain technology makes it possible to integrate micro producers of electricity, such as private households, into the power grid in a cost-efficient way (Cheng et al., 2017). However, blockchain technology also offers several applications in the creative industries to support the daily work of professionals.
The term creative industries encompasses industries and sectors which hold intellectual property at the core of their value creation (Caves, 2000). According to DCMS (1998), creative industries include not only classic art sectors such as fine art, painting or crafting, but also areas such as marketing, game development, film and video, or music. As main drivers of innovation, the creative industries have a steadily increasing influence on the overall economy. Often, ideas, products and services from the creative industries ultimately flow into other areas, such as the automotive sector, and support them in achieving their entrepreneurial goals (Banks, 2010; Jones et al., 2004). In order to maintain this position as an innovation driver, a certain form of organization has prevailed in the creative industries.
For the creative industries to react flexibly to new requirements and a constantly changing environment, work is usually carried out on a project basis (DeFillippi, 2015). For this purpose, the teams of the project-based organization are predominantly formed from freelancers who are specialists in the required field. As a result, many recurring organizational activities arise, such as contracting, team finding or onboarding (DeFillippi, 2015; Eikhof & Haunschild, 2006). Therefore, professionals from the creative industries have to spend significant time on activities that do not serve their core task of creating intellectual property. These tasks not only reduce the efficiency of their work, but also hinder their creative flow (Foord, 2009; Hennekam & Bennett, 2016). In this regard, blockchain represents a promising technology to support professionals in the creative industries.
For the creative industries, blockchain is primarily used to automate formal processes and secure intellectual property rights (O’Dair, 2018). Blockchain technology gives artists and creatives more freedom for their own creative activities. By automating repetitive activities, professionals from the creative industries are freed from typical management tasks (Arcos, 2018; Cong & He, 2019). Furthermore, for the first time, intellectual property can be secured in a cost- and time-efficient way by utilizing blockchain technology. This is made possible by the decentralized nature of the blockchain, which makes subsequent manipulation of the contents of the intellectual property impossible (Avital et al., 2016; Beck et al., 2016; Regner et al., 2019). Ultimately, the use of so-called Non-Fungible Tokens (NFTs) creates the opportunity for artists and creatives to sell unique digital art (Regner et al., 2019). Thus, blockchain generates entirely new ways for the creative industries to organize their projects and opens new markets in which to sell their work. While several use cases of blockchain technology can be identified in the creative industries, widespread use of blockchain is still lacking.
So far, no blockchain service or application has managed to take a dominant market position in the creative industries. At first sight, this seems surprising since artists and creatives could fundamentally benefit from this technology. At the same time, professionals from the creative industries would not be dependent on middlemen or central entities. This circumstance gave the impulse for the research presented in this thesis. I was able to identify three main reasons why professionals from the creative industries are still underutilizing blockchain technology: (1) When using blockchain technology, professionals from the creative industries experience strong resistance from their stakeholders, who want to prevent the use of blockchain. (2) The constraints perceived by artists and creatives in using blockchain still deter many from using this technology extensively. (3) Several blockchain applications lack a persuasive design, resulting in many artists and creatives continuing to prefer conventional services and products.
The generalization of univariate splines to higher dimensions is not straightforward. There are different approaches, each with its own advantages and drawbacks. A promising approach using Delaunay configurations and simplex splines is due to Neamtu.
After recalling fundamentals of univariate splines, simplex splines, and the well-known multivariate DMS-splines, we address Neamtu’s DCB-splines. He defined two variants that we refer to as the nonpooled and the pooled approach, respectively. Regarding these spline spaces, we contribute the following results.
We prove that, under suitable assumptions on the knot set, both variants exhibit the local finiteness property, i.e., these spline spaces are locally finite-dimensional and at each point only a finite number of basis candidate functions have a nonzero value. Additionally, we establish a criterion guaranteeing these properties within a compact region under mitigated assumptions.
Moreover, we show that the knot insertion process known from univariate splines does not work for DCB-splines and reason why this behavior is inherent to these spline spaces. Furthermore, we provide a necessary criterion for the knot insertion property to hold true for a specific inserted knot. This criterion is also sufficient for bivariate, nonpooled DCB-splines of degrees zero and one. Numerical experiments suggest that the sufficiency also holds true for arbitrary spline degrees.
Univariate functions can be approximated in terms of splines using the Schoenberg operator, where the approximation error decreases quadratically as the maximum distance between consecutive knots is reduced. We show that the Schoenberg operator can be defined analogously for both variants of DCB-splines with a similar error bound.
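For orientation, the classical univariate Schoenberg (variation-diminishing) operator for B-splines of degree p over a knot vector (t_j) can be stated as below; conventions for degree, knot averages, and constants vary, so this is a hedged restatement of the standard definition rather than the thesis's exact notation.

```latex
% Univariate Schoenberg operator: sample f at the Greville abscissae and
% use the sampled values as B-spline coefficients.
\[
  (S_p f)(x) = \sum_j f(\xi_j)\, B_{j,p}(x),
  \qquad
  \xi_j = \frac{t_{j+1} + \dots + t_{j+p}}{p}.
\]
% For f twice continuously differentiable, the approximation error satisfies
\[
  \| f - S_p f \|_\infty \le C\, h^{2}\, \| f'' \|_\infty ,
\]
% where h denotes the maximal distance between consecutive knots.
```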
Additionally, we provide a counterexample showing that the basis candidate functions of nonpooled DCB-splines are not necessarily linearly independent, contrary to earlier statements in the literature. In particular, this implies that the corresponding functions are not a basis for the space of nonpooled DCB-splines.
Data is an important resource in our economy and society, substantially improving overall business efficiency, innovativeness and competitiveness, and shaping our everyday lives. Yet, to leverage the data's full potential, its access and availability are vital. Thus, data sharing across organizations is of particular importance. This thesis examines the role of data sharing in the digital economy and contributes to a better understanding of why data sharing matters, why it is still underutilized, and how data sharing can be encouraged. Thereby, the thesis contributes to the ongoing academic debate as well as the practical and political efforts on how to promote data sharing.
The thesis is comprised of three studies. Study 1 examines personal data sharing among (competing) online services. Particularly, it investigates the consequences of Article 20 in the General Data Protection Regulation (GDPR, May 2018), ensuring the right to data portability. This relatively new right allows online service users to transfer any personal data from one service provider to another. Focusing on a) the amount of data provided by users and b) the amount of user data disclosed to third party data brokers by service providers, the study investigates the right to data portability's effect on competitiveness and consumer surplus. Study 2 and Study 3 focus on non-personal data sharing among competing firms. Study 2 examines the literature to identify and classify barriers to non-personal, machine-generated data sharing. The study explains firms' reluctance to sharing data and discusses policy and managerial implications for overcoming the data sharing barriers. Study 3 focuses on data sharing via platforms. It investigates the Business-to-Business (B2B) data sharing platform design implications for promoting industrial data sharing. In particular, Study 3 investigates the dimensions control and transparency regarding their effect in eliciting cooperation and encouraging data sharing among firms.
In summary, this thesis examines and reveals how the access and availability of data can be increased by creating beneficial data sharing conditions in B2B relationships. In particular, the thesis contributes to the understanding of a) the implications of data sharing laws, as defined in the GDPR for personal data, and b) the challenges of and measures for non-personal data sharing, which has not yet been successfully established.
Over the last decades, ongoing advancements in information technology (i.e., Internet and mobile devices) have expanded a firm’s ability to communicate and interact with consumers and hence, create the potential of building sustainable relationships. Tailoring offerings through (1) consumer-initiated customization and (2) firm-initiated personalization is considered a key driver of long-term consumer relationships. As technologies continue to evolve, the opportunities for tailored marketing expand and enable new technology-driven business models that help to leverage customization and personalization and strengthen customer relationships in the era of the digital economy.
Across three independent essays, the purpose of this dissertation is to answer the overarching research question of how innovative technology-driven business models versus traditional business models in the domains of customization and personalization influence consumer behavior. Thereby, this dissertation contributes to an understanding of challenges and opportunities of innovative customization and personalization business models with the ultimate goal of enabling their successful diffusion in the marketplace.
Specifically, in Essay 1 and Essay 2, I investigate an innovative business model located in the realm of customization, that is, internal product upgrades (i.e., offering fee-based access to originally built-in, but deliberately restricted, optional features). Using a conceptual approach, Essay 1 provides a framework for understanding how internal product upgrades will likely influence consumers’ responses. As such, it outlines evolving challenges and opportunities of internal product upgrades and derives questions for future research. In Essay 2, I use an empirical approach to examine pitfalls of internal product upgrades in the product usage phase. Drawing on research on normative expectations and perceived ownership, this essay reveals that consumers respond less favorably to internal (vs. external) product upgrades and investigates managerially relevant boundary conditions.
Finally, Essay 3 creates novel insights into a business model in the domain of personalization. This essay examines how the increasingly prevalent data disclosure practice of firms engaging in a network with other firms to exchange consumer data, which we denote as Business Network Data Exchange (BNDE), influences consumers’ privacy-related decision-making. In particular, this essay shows that consumers are less likely to disclose personal data in BNDE (vs. traditional dyadic) data exchange settings and that immediate affective reactions are crucial in explaining consumers’ privacy-related decision-making.
Within this dissertation, I make substantial contributions at a more general level to literature on customization and personalization by comparing innovative business models to established ones. At the individual essay level, I extend existing research in the domains of product feature modifications, norm violations, and privacy-related decision making. Moreover, this dissertation provides actionable implications for managers who are facing the decision to transform their established business model into an innovative technology-driven one.
Network communication has become a part of everyday life, and the interconnection among devices and people will increase even more in the future. A new area where this development is on the rise is the field of connected vehicles. It is especially useful for automated vehicles in order to connect the vehicles with other road users or cloud services. In particular for the latter, it is beneficial to establish a mobile network connection, as it is already widely used and no additional infrastructure is needed. The use of network communication, however, brings certain requirements with it.
One of them is the reliability of the connection. Certain Quality of Service (QoS) parameters need to be met. In case of degraded QoS, according to the SAE level specification, a downgrade of the automated system can be required, which may lead to a takeover maneuver, in which control is returned to the driver. Since such a handover takes time, prediction is necessary to forecast the network quality for the next few seconds. Prediction of QoS parameters, especially in terms of Throughput (TP) and Latency (LA), is still a challenging task, as the wireless transmission properties of a moving mobile network connection are subject to fluctuation. In this thesis, a new approach for predicting Network Quality Parameters (NQPs) on the Transmission Control Protocol (TCP) level is presented. It combines knowledge of the environment with the low-level parameters of the mobile network. The aim of this work is to perform a comprehensive study of various models, including both Location Smoothing (LS) grid maps and Learning Based (LB) regression models. Moreover, the location independence of a model as well as its suitability for automated driving is evaluated.
A firm's entrepreneurial orientation (EO) is its propensity to act proactively, innovate, take risks, and engage in competitive and autonomous behaviors. Prior research shows that EO is an important factor for new ventures to overcome barriers to survival and fostering growth, measured by annual sales and employment growth rates. In particular, individual-level EO (IEO) is an important driver of a firm’s EO. The firm’s ability to exploit opportunities appearing in the market and to achieve superior performance depends on the employees’ skills and experiences to act and think entrepreneurially. The main objective of this dissertation is to investigate how and when employees engage in entrepreneurial behaviors at work. Building on three essays, this dissertation takes an interdisciplinary approach to employee entrepreneurial behaviors in new ventures, encompassing both entrepreneurship and gamification research. The first main contribution proposed in this field is a more nuanced understanding of how employee entrepreneurial behaviors help young firms cope with growth-related, organization-transforming challenges (i.e., changes in organizational culture that accompany growth, the introduction of hierarchical structures, and the formalization of processes). When new ventures grow, employees’ IEO tends to manifest in introducing technological innovations and business improvements rather than in actions related to risk-taking. Second, this dissertation reveals the relevance of self-efficacy for entrepreneurial behaviors and explores how gamification can enhance employee entrepreneurial behaviors in new ventures. Based on these findings, this dissertation contributes to EO research by highlighting the role of IEO as a building block for EO pervasiveness. This research further develops our knowledge on the use of gamification in new ventures. This cumulative dissertation is structured as follows. Part A is an introduction to the study of entrepreneurial behaviors. Part B contains the three essays.
In three essays, this dissertation examines the past, present and future of branding in an international context, contributing to the research area of global/local brands, while also offering managers valuable insights for their branding strategies.
The first essay provides scholars and practitioners with a detailed state of the art of global/local brand research and proposes promising angles for future research, especially considering major challenges for our societies.
The second essay incorporates the segment of cosmopolitan consumers into perceived brand globalness/localness research. Theoretically grounded in the concepts of social identity theory and complexity, the essay builds on perceived brand globalness/localness to analyze how cosmopolitans arrange both their global and local orientations. Besides offering scholars a new theoretical lens on consumer cosmopolitanism, the gained insights can benefit managers for whom cosmopolitans are a particular target group in their business strategy.
The third and final essay meta-analytically investigates how perceived brand globalness and localness materialize in various key outcome variables. At the heart of this essay is a comparison of perceived brand globalness and localness, offering scholars and practitioners valuable empirical insights on similarities and differences between their effects on outcomes such as brand quality.
The identification and estimation of trends in hydroclimatic time series remain an important task in applied climate research. The statistical challenge arises from the inherent nonlinearity, complex dependence structure, heterogeneity and resulting non-standard distributions of the underlying time series. Quantile regressions are considered an important modeling technique for such analyses because of their rich interpretation and their broad insensitivity to extreme distributions. This paper provides an asymptotic justification of quantile trend regression in terms of unknown heterogeneity and dependence structure and the corresponding interpretation. An empirical application sheds light on the relevance of quantile regression modeling for analyzing monthly Central England temperature anomalies and illustrates their various heterogeneous trends. Our results suggest the presence of heterogeneities across the considered seasonal cycle and an increase in the relative frequency of observing unusually high temperatures.
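As a minimal illustration of the idea (not the paper's estimator or data), one could fit quantile regressions of monthly temperature anomalies on a linear time trend at several quantile levels and compare the slopes, for example with statsmodels; the synthetic data and variable names below are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600                                         # 50 years of monthly observations
df = pd.DataFrame({"time": np.arange(n) / 12.0})  # time in years
# Hypothetical anomalies with a stronger trend in the upper tail of the distribution.
scale = (0.5 + 0.005 * df["time"]).to_numpy()
df["anomaly"] = 0.01 * df["time"] + rng.gumbel(0.0, scale, n)

for q in (0.1, 0.5, 0.9):
    res = smf.quantreg("anomaly ~ time", df).fit(q=q)
    print(f"q={q}: trend slope = {res.params['time']:.4f} per year")
```

Diverging slopes across quantile levels are exactly the kind of heterogeneous trends the abstract refers to: the upper quantiles may warm faster than the median.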
Prior to the emergence of Big Data and technologies such as Learning Analytics (LA), classroom research focused mainly on measuring learning outcomes of a small sample through tests. Research on online environments shows that learners’ engagement is a critical precondition for successful learning and lack of engagement is associated with failure and dropout. LA helps instructors to track, measure and visualize students’ online behavior and use such digital traces to improve instruction and provide individualized support, i.e., feedback. This paper examines 1) metrics or indicators of learners’ engagement as extracted and displayed by LA, 2) their relationship with academic achievement and performance, and 3) some freely available LA tools for instructors and their usability. The paper concludes with making recommendations for practice and further research by considering challenges associated with using LA in classrooms.
Germany is considered a role model for dealing with past mass atrocities. In particular, the social reappraisal of the Holocaust is emblematic of this. However, when considering the genocide on the Herero and Nama in present-day Namibia, it is puzzling that an official recognition was only pronounced after almost 120 years, in May 2021. For a long time, silence surrounded this colonial cruelty in German political discourse. Although the discourse on German responsibility toward Namibia emerged after the end of World War II, it initially appeared detached from the genocide. That silence on colonial atrocities is to be considered a cruelty itself. Studies on silence have been expanding and becoming richer. Building on these works, the paper sets two goals: First, it advances the theorization of silence by producing a new typology, which is then integrated into discourse-bound identity theory. Second, it applies this theory to the analysis of the silencing and later acknowledging of the genocide on the Herero and Nama by German political elites. To this end, Bundestag debates, official documents, and statements by relevant political actors are analyzed in the period from 1980 to 2021. The results reveal the dynamics between hegemonic and counter-hegemonic discursive formations, how those are shifting in a period of 40 years, and what role silence plays in it. Beyond our emphasis on the genocide on the Herero and Nama, our findings might benefit future studies as the approach proposed in this paper can make silence a tangible research object for global studies.
The described secondary data provide a comprehensive basis for modeling conditional mean nitrogen dioxide (NO2) concentration levels across Germany. Besides concentration levels, meta data on monitoring sites from the German air quality monitoring network, geocoordinates, altitudes, and data on land use and road lengths for different types of roads are provided. The data are based on a grid of resolution 1 × 1 km, which is also included. The underlying raw data are open access and were retrieved from different sources. The statistical software R was used for (pre-)processing the data and all codes are provided in an online repository. The data were employed for modeling mean annual NO2 concentration levels in the paper "Agglomeration and infrastructure effects in land use regression models for air pollution - Specification, estimation, and interpretations" by Fritsch and Behm (2021).
The autonomic composition of Virtual Networks (VNs) and Service Function Chains (SFCs) based on application requirements is significant for complex environments. In this paper, we use graph transformation in order to compose an Extended Virtual Network (EVN) that is based on different requirements, such as locations, low latency, redundancy, and security functions. The EVN can represent physical environment devices and virtual application and network functions. We build a generic Virtual Network Embedding (VNE) framework for transforming an Application Request (AR) to an EVN. Subsequently, we define a set of transformations that reflect preliminary topological, performance, reliability, and security policies. These transformations update the entities and demands of the VN and add SFCs that include the required Virtual Network Functions (VNFs). Additionally, we propose a greedy proactive heuristic for path-independent embedding of the composed SFCs. This heuristic is appropriate for real complex environments, such as industrial networks. Furthermore, we present an Industrial Internet of Things (IIoT) use case inspired by Industry 4.0 concepts, in which EVNs for remote asset management are deployed over three levels: manufacturing halls, edge computing, and cloud computing. We also implement the developed methods in Alevin and show exemplary mapping results from our use case. Finally, we evaluate the chain embedding heuristic using a random topology that is typical for such a use case, and show that it can improve the admission ratio and resource utilization with minimal overhead.
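To give an intuition only (this is not the paper's heuristic), a greedy, path-independent placement could map each VNF of a chain, in chain order, to the substrate node with the largest residual capacity that still satisfies its demand; the node names, capacities, and demands below are hypothetical.

```python
# Toy greedy placement of a service function chain onto substrate nodes.
substrate = {"edge1": 4.0, "edge2": 2.0, "cloud1": 16.0}       # residual CPU per node
chain = [("firewall", 1.0), ("ids", 2.0), ("analytics", 6.0)]  # (VNF, CPU demand)

def greedy_embed(chain, residual):
    mapping = {}
    for vnf, demand in chain:
        # Candidate nodes that can still host this VNF.
        candidates = [node for node, cap in residual.items() if cap >= demand]
        if not candidates:
            return None                      # request rejected
        best = max(candidates, key=residual.get)   # most residual capacity first
        residual[best] -= demand
        mapping[vnf] = best
    return mapping

print(greedy_embed(chain, dict(substrate)))
# With these toy numbers every VNF lands on 'cloud1'.
```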
Earlier research showed that religion is related to participation among adolescents. It emphasized the effects of belonging (affiliation to groups and traditions) on community service among Western populations. This article takes one step further and focuses on religiosity as a potential motivation for community problem solving during adolescence and young adulthood, in the Eastern European Orthodox cultural setting. Data comes from several semi-structured interviews with participants in a civic project conducted in the city of Timisoara (Romania). Findings indicated a low impact of the social religious component on engagement. The cognitive dimension of belief and the emotional bonding (prayer, ritual connection to the higher reality) function as indirect motivators, through the moral element of behavior. Results also showed a privatization of spiritual life among young adults (the invisible religion): estrangement from doctrines and the development of an individualistic type of morality, meant to drive volunteer activities further.
Southeast Asia is one of the most dynamic regions in the world. This volume offers a timely approach to Southeast Asian Studies, covering recent transitions in the realms of urbanism, rural development, politics, and media. While most of the contributions deal with the era of post-independence, some tackle the colonial period and the resulting developments. The volume also includes insights from Southern India.
As a tribute to the interdisciplinary project of Southeast Asian Studies, this book brings together authors from disciplines as diverse as area studies, sociology, history, geography, and journalism.
The power demand (kW) and energy consumption (kWh) of data centers have increased drastically due to the increased communication and computation needs of IT services. Leveraging demand and energy management within data centers is a necessity. Thanks to the automated ICT infrastructure empowered by IoT technology, such types of management are becoming more feasible than ever. In this paper, we look at management from two different perspectives: (1) minimization of the overall energy consumption and (2) reduction of peak power demand during demand-response periods. Both perspectives have a positive impact on the total cost of ownership for data centers. We exhaustively reviewed the potential mechanisms in data centers that provide flexibilities, together with flexible contracts such as green service level and supply-demand agreements. We extended the state of the art by introducing the methodological building blocks and foundations of management systems for the two perspectives mentioned above. We validated our results by conducting experiments on a lab-grade-scale cloud computing data center at the premises of HPE in Milano. The obtained results support the theoretical model by highlighting the excellent potential of flexible service level agreements in Green IT: 33% of overall energy savings and 50% of power demand reduction during demand-response periods in the case of data center federation.
Natural Language Processing (NLP) has an important role in Artificial Intelligence for easing human-machine interaction. Processing human language, though, poses many challenges, among which is the semantics-related phenomenon known as language variability, the fact that the same thing can be said in several ways. NLP applications' inputs and outputs can be expressed in different forms, whose equivalence can be verified through inference. The textual entailment paradigm was established to enable the creation of a unifying framework for applied inference, providing a means of freeing other NLP tasks from handling inference issues in an ad hoc manner, using instead the outputs of an inference-dedicated mechanism.
Text entailment, the task of determining whether a piece of text logically follows from another piece of text, involves different scenarios, which can range from a simple syntactic variation to more complex semantic relationships between sentences. However, most approaches try a one-size-fits-all solution that usually favors some scenario to the detriment of another. The commonsense world knowledge necessary to support more complex inferences is also usually employed in a limited way, with most approaches sticking to shallow semantic information, leaving more elaborate semantic relationships aside. Furthermore, most systems still work as a "black box", providing a yes/no answer that does not explain the underlying reasoning process.
This thesis aims at addressing these issues by proposing a composite, interpretable approach for recognizing text entailment, in which the entailment pair is analyzed so that the most relevant phenomenon is detected and a suitable method can be used to solve it. Syntactic variations are dealt with through the analysis of the sentences' syntactic structures, and semantic relationships are detected with the aid of a knowledge graph built from natural language dictionary definitions. Also, if semantic matching is involved, the answer is made interpretable through the generation of natural language justifications that explain the semantic relationship between the pieces of text. The result is XTE - Explainable Text Entailment - a system that outperforms well-established tools based on single-technique entailment algorithms, and that also takes an important step towards Explainable AI by allowing the inference model to be interpreted, making the semantic reasoning process explicit and understandable.
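The composite idea of first characterising an entailment pair and then dispatching it to a specialised solver can be sketched as follows; the overlap-based routing rule and the solver names are purely illustrative and are not the actual XTE components.

```python
# Illustrative router for a composite entailment approach: the pair is
# first characterised (here by simple lexical overlap, as a stand-in for
# phenomenon detection) and then dispatched to a specialised solver.
def token_overlap(text, hypothesis):
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    return len(t & h) / max(len(h), 1)

def recognize(text, hypothesis, threshold=0.6):
    """Return the solver chosen for the pair and its overlap score."""
    score = token_overlap(text, hypothesis)
    solver = "syntactic-matcher" if score >= threshold else "knowledge-graph-matcher"
    return solver, round(score, 2)

print(recognize("A man is playing a guitar on stage",
                "A man is playing an instrument"))   # mostly syntactic variation
print(recognize("A man is playing a guitar on stage",
                "A musician performs"))              # needs world knowledge
```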
Programming is a key skill in a world where businesses are driven by digital transformations. Although much of the programming demand can be addressed by a simple set of instructions composing libraries and services available on the web, non-technical professionals, such as domain experts and analysts, are still unable to construct their own programs due to the intrinsic complexity of coding. Among other types of end-user development, natural language programming has emerged to allow users to program without the formalism of traditional programming languages, where a tailored semantic parser translates a natural language utterance into a formal command representation that can be processed by a computational machine. Currently, semantic parsers are typically built on top of a learning method that defines its behaviour based on the patterns behind large training data, whose production is frequently costly and time-consuming. Our research is devoted to studying and proposing a semantic parser for natural language commands targeting a scenario with low availability of training data. The proposed semantic parser follows a multi-component architecture, composed of a specialised shallow parser that associates natural language commands with predicate-argument structures, integrated with a distributional ranking model that matches the command to a function signature available from an API knowledge base. Systems developed with statistical learning models and complex linguistic resources, such as the proposed semantic parser, do not natively provide an easy way to associate a single feature of the input data with its impact on system behaviour. In this scenario, end-user explanations for intelligent systems have become a strong requirement to increase user confidence and system literacy. Thus, our research designed an explanation model for the proposed semantic parser that fits the heterogeneity of its multi-component architecture. The explanation model explores a hierarchical representation with an increasing degree of technical depth, providing higher-level explanations in the initial layers and gradually moving to those that demand technical knowledge, applying different explanation strategies to better express the approach behind each component. With the support of a user-centred experiment, we compared the utility of different types of explanations and the impact of background knowledge on users' preferences.
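The ranking step described above can be illustrated with a minimal sketch in which a parsed command is matched against a toy API knowledge base by bag-of-words cosine similarity; the knowledge-base entries and the similarity measure are assumptions for illustration, not the thesis' actual distributional model.

```python
# Sketch of the ranking step: match a natural language command to the most
# similar function signature in a small API knowledge base. The entries
# and the bag-of-words similarity are purely illustrative.
from collections import Counter
import math

API_KB = {
    "send_email(recipient, subject, body)": "send an email message to a person",
    "create_event(title, date)": "create a calendar event on a date",
    "play_song(title, artist)": "play a song by an artist",
}

def cosine(a, b):
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def rank(command):
    """Return API signatures ordered by similarity to the command."""
    return sorted(API_KB, key=lambda sig: cosine(command, API_KB[sig]), reverse=True)

print(rank("please send an email to Alice about the meeting")[0])
```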
In this thesis we consider real analytic functions, i.e. functions which can be described locally as convergent power series, and ask the following: Which real analytic functions definable in R_{an,exp} have a holomorphic extension which is again definable in R_{an,exp}? Finding a holomorphic extension is of course not difficult simply by power series expansion. The difficulty is to construct it in a definable way.
We will not answer the question above completely, but introduce a large, non-trivial class of definable functions in R_{an,exp} which contains, for example, functions that are iterated compositions (from either side) of globally subanalytic functions and the global logarithm. We call these functions restricted log-exp-analytic. After giving some preliminary results such as preparation theorems and Tamm's Theorem for this class of functions, we are able to show that real analytic restricted log-exp-analytic functions have a holomorphic extension which is again restricted log-exp-analytic.
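As an illustration of the kind of function covered (this example is ours, not taken from the thesis), a globally subanalytic function composed with the global logarithm already admits a holomorphic extension on a definable strip:

```latex
% Illustrative example (not taken from the thesis): the composition of a
% semialgebraic (hence globally subanalytic) function with the global
% logarithm extends holomorphically to a definable strip, since 1 + z^2
% avoids the non-positive real axis whenever |Im z| < 1.
\[
  f(x) = \log\bigl(1 + x^{2}\bigr), \qquad
  F(z) = \log\bigl(1 + z^{2}\bigr)
  \quad \text{for } z \in \{\, w \in \mathbb{C} : |\operatorname{Im} w| < 1 \,\}.
\]
```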
Noise in schools is considered a massive stress factor for teachers for various reasons and can therefore lead not only to performance deficits on the job, but also to physical and psychological impairments. Consequently, research on school noise and its effects is essential. The three studies presented in this paper examined the immediate effects of noise on student teachers and the collateral effects on practicing teachers.
In the first and second study, two experiments were conducted to examine the effects of noise during breaks on stress experience, performance in a concentration test, and error correction of a dictation. Based on the transactional stress model (Lazarus & Folkman, 1984), it was hypothesized that noise leads to an increase in stress experience. According to the maximal adaptability theory (Hancock & Warm, 1989, 2003), noise should initially cause optimal performance, but in the long run should cause performance impairment. For this purpose, in the first study 74 and in the second study 104 student teachers of the University of Passau worked on two different concentration tests and corrected a student’s dictation while listening to short, continuous, or no noise. In both experiments, continuous noise led to an increase in the experience of stress. Neither short nor continuous noise led to a deterioration in concentration performance. Further, different findings emerged: In the first experiment, a short concentration test in combination with continuous noise led to positive effects in dictation correction, i.e., subjects showed better performance in error correction. In the second experiment, a long concentration test combined with short or continuous noise resulted in negative effects, i.e., subjects made more errors in the subsequent correction of the dictation. It can be concluded that school noise can, on the one hand, increase the experience of stress and, on the other hand, promote or limit teachers’ subsequent performance. The latter, however, seems to depend on the specific situation of the individual.
In the first part of the third study, the focus was on teachers’ coping styles and experienced stress. Since coping styles have been shown to have a major impact on mental health, it was reasonable to assume that the stress experience caused by noise would vary depending on the coping style. Based on the stress-strain model (Rudow, 2000) and the transactional stress model (Lazarus & Folkman, 1984), it was hypothesized that teachers with risky coping styles experience more stress symptoms. Therefore, an online study was conducted to investigate whether there were differences in psychological and physical symptoms between teachers with different coping styles. For this purpose, 99 Bavarian elementary and middle school teachers were surveyed. Four professional coping styles resulted from the overarching scales of professional commitment and resilience. The healthy type (high commitment, high resilience), the unambitious type (low commitment, high resilience), type A (high commitment, low resilience), and type burnout (low commitment, low resilience) differed in terms of threat appraisal, noise stress, voice and hearing problems, and noise-related burnout. The risk types − type A and type burnout − exhibited a higher stress experience and were generally more susceptible to school noise than the healthy type. This is the first study to show that school noise is particularly hazardous for teachers with risky coping styles.
The second part of the third study focused on the impact pathways of school noise. Associations between teachers’ individual characteristics and the consequences of school noise were hypothesized. Based on the simplified model of teacher stress (van Dick & Wagner, 2001), we examined 159 Bavarian elementary and middle school teachers to determine whether noise stress and vocal fatigue mediate the association between noise sensitivity and noise-related burnout. Results indicated that noise stress mediated the relationship between noise sensitivity and vocal fatigue; vocal fatigue mediated the relationship between noise stress and noise-related burnout; noise stress and vocal fatigue serially mediated the relationship between noise sensitivity and noise-related burnout. This is the first study to show links between noise-sensitive teachers and noise stress, voice problems, and noise-related burnout.
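The serial mediation structure described above can be written, in generic notation that is only illustrative of the authors' model, as a system of linear equations:

```latex
% Generic serial mediation set-up (illustrative notation): noise
% sensitivity (X) -> noise stress (M1) -> vocal fatigue (M2) ->
% noise-related burnout (Y).
\begin{align*}
  M_1 &= a_1 X + \varepsilon_1, \\
  M_2 &= a_2 X + d_{21} M_1 + \varepsilon_2, \\
  Y   &= c' X + b_1 M_1 + b_2 M_2 + \varepsilon_3,
\end{align*}
% The serial indirect effect of X on Y through both mediators is the
% product a_1 d_{21} b_2.
```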
Network virtualization provides high flexibility for deploying communication services in dense and heterogeneous environments. Two main approaches (dimensions) that are usually combined exist: Network Function Virtualization (NFV) technologies for functionality virtualization and Virtual Network Embedding (VNE) algorithms for resource virtualization. These approaches can be applied to different network levels, such as the factory and enterprise levels of industrial networks. Several objectives and constraints, which might be conflicting, must be considered when network virtualization is applied, particularly in complex topologies. This thesis proposes a network virtualization model that considers both virtualization dimensions, two network levels, and different objectives and constraints. The network levels considered are two primary levels in industrial networks. However, this consideration does not restrict the model to a particular environment or certain levels. The considered objectives/constraints are topology, reliability, security, performance, and resource usage.
Based on this model, we first build an overall combined solution for autonomic and composite virtual networking. This solution considers both virtualization dimensions, two network levels, and the target objectives. Furthermore, it combines three novel virtualization sub-approaches that address performance, reliability, and security. However, the sub-approaches apply to different combinations of levels and dimensions, and the reliability approach additionally considers the resource usage objective. After presenting all solutions, we map them to the defined model.
Regarding applicability to industrial networks, the combined approach is applied to an enterprise-level Industrial Internet of Things (IIoT) use case inspired by the smart factory concept in Industry 4.0. The sub-approaches, however, are applied to more specific use cases. The performance and reliability solutions are integrated with relevant components of the Time-Sensitive Networking (TSN) standard as a modern technology for industrial networks. The goal is to enrich the reliability and performance capabilities of TSN with the flexibility of network virtualization.
In the combined approach, we compose and embed an environment-aware Extended Virtual Network (EVN) that represents the physical devices, virtual application functions, and required Service Function Chains (SFCs). We use the graph transformation method to transform abstract application requirements (represented by an Application Request (AR)) into an EVN. Both EVN composition and embedding methods consider the Substrate Network (SN) topology and different security, reliability, performance, and resource usage policies. These policies are applied with a certain priority and depend on the properties of communicating entities such as location and type. The EVN is embedded using property-based node mapping, reliability-aware branching, and a greedy chain embedding heuristic. The chain embedding heuristic is evaluated using a random topology that represents the use case.
The performance sub-approach is NFV-based and is applied to a specific use case with Time-critical Traffic (TCT) flows. We develop and evaluate a complete framework for virtualizing Time-aware Shaper (TAS) using high-performance NFV. The reliability sub-approach is VNE-based and is applied to a specific factory level use case. We develop minimal and maximal branching heuristics based on a reliability-aware k-shortest path algorithm and compare them using a typical factory topology. We then integrate these algorithms with a Frame Replication and Elimination for Reliability (FRER) simulator to realize reliability policies by the autonomic and efficient configuration of a supporting technology.
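The reliability-aware path selection underlying the branching heuristics can be sketched as follows (an illustrative simplification using the networkx library, not the thesis' exact algorithm): the k shortest candidate paths are enumerated and re-ranked by the product of per-link reliabilities.

```python
# Sketch of reliability-aware k-shortest-path selection (illustrative):
# enumerate the k shortest candidate paths by delay and rank them by the
# product of per-link reliabilities.
from itertools import islice
import networkx as nx

def k_most_reliable_paths(G, src, dst, k=3):
    candidates = islice(nx.shortest_simple_paths(G, src, dst, weight="delay"), k)
    def reliability(path):
        r = 1.0
        for u, v in zip(path, path[1:]):
            r *= G[u][v]["reliability"]
        return r
    return sorted(candidates, key=reliability, reverse=True)

G = nx.Graph()
G.add_edge("s", "a", delay=1, reliability=0.99)
G.add_edge("a", "t", delay=1, reliability=0.95)
G.add_edge("s", "b", delay=2, reliability=0.999)
G.add_edge("b", "t", delay=2, reliability=0.999)
print(k_most_reliable_paths(G, "s", "t", k=2)[0])  # the more reliable path wins
```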
The security sub-approaches are related to both virtualization dimensions and are applied to generic enterprise-level use cases. However, the applicability of the security aspect to industrial networks is only shown in the combined (EVN) approach and its use case. We research autonomic security management in the Network Function Virtualization Infrastructure (NFVI) with the main goal of reacting early to threats through SFC reconfiguration via Virtual Network Function (VNF) live migration. This goal is approached by supporting the security measurements with a decision-making architecture that considers, on the one hand, the threats and events in the environment and, on the other hand, the Service Level Agreement (SLA) between the NFVI provider and user. For this purpose, we classify the VNF-specific attacks and define possible early detectable behavior patterns. Finally, we develop a security-aware VNE heuristic that considers the security requirements of the Virtual Network (VN) and the security capabilities of the SN. This heuristic is modified in the combined approach to consider the deployment of virtualized security VNFs.
Due to the advances of digitalization, firms are able to collect more and more personal consumer data and strive to do so. Moreover, many firms nowadays have data sharing cooperations with other firms, so consumer data is shared with third parties. Accordingly, consumers are regularly confronted with the decision whether to disclose personal data to such a data sharing cooperation (DSC). Although privacy research has become highly important, the peculiarities of disclosure settings involving a DSC between firms have been neglected until now. Addressing this gap is the first research objective of this thesis. Another underexplored aspect in privacy research is the impact of low-cognitive-effort decision-making. This is because the privacy calculus, the most dominant theory in privacy research, assumes that consumers follow a purely cognitively effortful and deliberative disclosure decision-making process. Therefore, expanding this perspective and examining the impact of low-cognitive-effort decision-making is the second research objective of this thesis. Additionally, with the third research objective, this thesis strives to unify and increase the understanding of perceived privacy risks and privacy concerns, which are the two major antecedents that reduce consumers’ disclosure willingness.
To this end, five studies are conducted: i) essay 1 examines and compares consumers’ privacy risk perception in a DSC disclosure setting with disclosure settings that include no DSC, ii) essay 2 examines whether in a DSC disclosure setting consumers rely more strongly on low-cognitive-effort processing for their disclosure decision, iii) essay 3 explores different consumer groups that vary in their perception of how a DSC affects their privacy risks, iv) essay 4 refines the understanding of privacy concerns and privacy risks and examines via meta-analysis the varying effect sizes of privacy concerns and privacy risks on privacy behavior depending on the applied measurement approach, v) essay 5 examines via autobiographical recall the effects of consumers’ feelings and arousal on disclosure willingness.
Overall, this thesis sheds light on consumers’ personal data disclosure decision-making: essay 1 shows that the perceived risk associated with a disclosure in a DSC setting is not necessarily higher than that associated with a disclosure to an identical firm without a DSC. Essay 3 indicates that a DSC has a negative impact on disclosure willingness only for the smallest share of consumers and that one third of consumers do not think intensively about the consequences a DSC has for their privacy risks. Additionally, essay 2 shows that a stronger reliance on low-cognitive-effort processing is prevalent in DSC disclosure settings. Moreover, essay 5 shows that even unrelated feelings of consumers can impact their disclosure willingness, but that the direction of the effect also depends on consumers’ arousal level.
This thesis contributes to theory in three ways: i) it sheds light on the peculiarities of DSC disclosure settings, ii) it suggests mechanisms and results of low-effort processing, and iii) it enhances the understanding of perceived privacy risks and privacy concerns as well as their resulting effect sizes.
Besides theoretical contributions, this thesis offers practical implications as well: it allows firms to adjust the disclosure setting and the communication with their consumers in a way that makes them more successful in data collection. It also shows that firms do not need to be too anxious about a reduced disclosure willingness due to being part of a DSC. At the same time, it helps consumers themselves by showing in which circumstances they are most vulnerable to disclosing personal data. Making consumers conscious of the situations in which they are especially vulnerable could serve as a countermeasure, preventing them from disclosing too much data and regretting it afterwards. Similarly, this thesis serves as a thought-provoking input for regulators, as it emphasizes the importance of low-cognitive-effort processing for consumers’ decision-making, which regulators may be able to consider in the future.
In sum, this thesis expands knowledge on how consumers decide whether to disclose personal data, especially in DSC settings and regarding low-cognitive-effort processing. It offers a more unified understanding for antecedents of disclosure willingness as well as for consumers’ disclosure decision-making processes. This thesis opens up new research avenues and serves as groundwork, in particular for more research on data disclosures in DSC settings.
This collection of three chapters responds to today’s energy challenges. It explores innovative policy aimed to equip the energy poor with access to improved cooking energy and electricity, looking both at the demand and supply side of modern energy technologies. Concretely, it discusses mechanisms to increase uptake of off-grid solar electricity in rural Rwanda based on experimental demand measurements (Chapter 1), it studies how to diffuse improved cooking technologies in rural Senegal via supply-side mechanisms (Chapter 2), and it identifies the need to target cooking technologies in consideration of the broader household context in rural Senegal and beyond (Chapter 3).
Sentences that present a complex linguistic structure act as a major stumbling block for Natural Language Processing (NLP) applications whose predictive quality deteriorates with sentence length and complexity. The task of Text Simplification (TS) may remedy this situation. It aims to modify sentences in order to make them easier to process, using a set of rewriting operations, such as reordering, deletion or splitting. These transformations are executed with the objective of converting the input into a simplified output, while preserving its main idea and keeping it grammatically sound. State-of-the-art syntactic TS approaches suffer from two major drawbacks: first, they follow a very conservative approach in that they tend to retain the input rather than transforming it, and second, they ignore the cohesive nature of texts, where context spread across clauses or sentences is needed to infer the true meaning of a statement. To address these problems, we present a discourse-aware TS framework that is able to split and rephrase complex English sentences within the semantic context in which they occur. By generating a fine-grained output with a simple canonical structure that is easy to analyze by downstream applications, we tackle the first issue. For this purpose, we decompose a source sentence into smaller units by using a linguistically grounded transformation stage. The result is a set of self-contained propositions, with each of them presenting a minimal semantic unit. To address the second concern, we suggest not only to split the input into isolated sentences, but to also incorporate the semantic context in the form of hierarchical structures and semantic relationships between the split propositions. In that way, we generate a semantic hierarchy of minimal propositions that benefits downstream Open Information Extraction (IE) tasks. To function well, the TS approach that we propose requires syntactically well-formed input sentences. It targets general-purpose texts in English, such as newswire or Wikipedia articles, which commonly contain a high proportion of complex assertions.
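A minimal illustration of the resulting representation (the class and relation names are hypothetical, not DisSim's actual output format) is a core proposition to which contextual propositions are attached via typed relations:

```python
# Illustrative representation of a semantic hierarchy of minimal
# propositions: a core proposition linked to contextual propositions
# via typed rhetorical relations. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Proposition:
    text: str
    context: list = field(default_factory=list)  # list of (relation, Proposition)

    def attach(self, relation, prop):
        self.context.append((relation, prop))

core = Proposition("The president visited the plant.")
core.attach("TEMPORAL", Proposition("This happened on Monday."))
core.attach("CAUSE", Proposition("The plant had reported record output."))

for rel, ctx in core.context:
    print(f"{core.text}  --{rel}-->  {ctx.text}")
```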
In a second step, we present a method that allows state-of-the-art Open IE systems to leverage the semantic hierarchy of simplified sentences created by our discourse-aware TS approach in constructing a lightweight semantic representation of complex assertions in the form of semantically typed predicate-argument structures. In that way, important contextual information of the extracted relations is preserved that allows for a proper interpretation of the output. Thus, we address the problem of extracting incomplete, uninformative or incoherent relational tuples that is commonly observed in existing Open IE approaches. Moreover, assuming that shorter sentences with a more regular structure are easier to process, the extraction of relational tuples is facilitated, leading to a higher coverage and accuracy of the extracted relations when operating on the simplified sentences. Aside from taking advantage of the semantic hierarchy of minimal propositions in existing Open IE approaches, we also develop an Open IE reference system, Graphene. It implements a relation extraction pattern upon the simplified sentences.
The framework we propose is evaluated within our reference TS implementation DisSim. In a comparative analysis, we demonstrate that our approach outperforms the state of the art in structural TS both in an automatic and a manual analysis. It obtains the highest score on three simplification datasets from two different domains with regard to SAMSA (0.67, 0.57, 0.54), a recently proposed metric targeted at automatically measuring the syntactic complexity of sentences which highly correlates with human judgments on structural simplicity and grammaticality. These findings are supported by the ratings from the human evaluation, which indicate that our baseline implementation DisSim returns fine-grained simplified sentences that achieve a high level of syntactic correctness and largely preserve the meaning of the input. Furthermore, a comparative analysis with the annotations contained in the RST Discourse Treebank (RST-DT) reveals that we are able to capture the contextual hierarchy between the split sentences with a precision of approximately 90% and reach an average precision of almost 70% for the classification of the rhetorical relations that hold between them. Finally, an extrinsic evaluation shows that when applying our TS framework as a pre-processing step, the performance of state-of-the-art Open IE systems can be improved by up to 32% in precision and 30% in recall of the extracted relational tuples.
Accordingly, we can conclude that our proposed discourse-aware TS approach succeeds in transforming sentences that present a complex linguistic structure into a sequence of simplified sentences that are to a large extent grammatically correct, represent atomic semantic units and preserve the meaning of the input. Moreover, the evaluation provides sufficient evidence that our framework is able to establish a semantic hierarchy between the split sentences, generating a fine-grained representation of complex assertions in the form of hierarchically ordered and semantically interconnected propositions. Finally, we demonstrate that state-of-the-art Open IE systems benefit from using our TS approach as a pre-processing step by increasing both the accuracy and coverage of the extracted relational tuples for the majority of the Open IE approaches under consideration. In addition, we outline that the semantic hierarchy of simplified sentences can be leveraged to enrich the output of existing Open IE systems with additional meta information, thus transforming the shallow semantic representation of state-of-the-art approaches into a canonical context-preserving representation of relational tuples.
This dissertation deals with geostatistical, time series, and regression analytical approaches for modelling spatio-temporal processes, using air quality data in the applications. The work is structured into four essays, the abstracts of which are given in the following.
The first essay is titled 'Spatial detrending revisited: Modelling local trend patterns in NO2-concentration in Belgium and Germany'. It is written in co-authorship by Prof. Dr. Harry Haupt and Dr. Angelika Schmid and published in 2018 in Spatial Statistics 28, pp. 331-351 (https://doi.org/10.1016/j.spasta.2018.04.004).
Abstract
Short-term predictions of air pollution require spatial modelling of trends, heterogeneities, and dependencies. Two-step methods allow real-time computations by separating spatial detrending and spatial extrapolation into two steps. Existing methods discuss trend models for specific environments and require specification search. Given more complex environments, specification search gets complicated by potential nonlinearities and heterogeneities. This research embeds a nonparametric trend modelling approach in real-time two-step methods. Form and complexity of trends are allowed to vary across heterogeneous environments. The proposed method avoids ad hoc specifications and potential generated predictor problems in previous contributions. Examining Belgian and German air quality and land use data, local trend patterns are investigated in a data driven way and are compared to results computed with existing methods and variations thereof. An important aspect of our empirical illustration is the heterogeneity and superior performance of local trend patterns for both research regions. The findings suggest that a nonparametric spatial trend modelling approach is a valuable tool for real-time predictions of pollution variables: it avoids specification search, provides useful exploratory insights and reduces computational costs.
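The two-step scheme can be illustrated schematically as follows; the linear trend fit and inverse-distance weighting below are simple stand-ins for the nonparametric trend model and the spatial extrapolation step used in the paper.

```python
# Schematic two-step prediction (illustrative only): step 1 removes a
# covariate-driven trend, step 2 spatially interpolates the detrended
# residuals by inverse-distance weighting.
import numpy as np

def idw(coords_known, values, coords_new, power=2):
    d = np.linalg.norm(coords_known - coords_new, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return np.sum(w * values) / np.sum(w)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(50, 2))            # monitoring sites
land_use = rng.uniform(0, 1, size=50)                 # e.g. traffic intensity
no2 = 20 + 15 * land_use + rng.normal(0, 2, size=50)  # synthetic pollutant data

# Step 1: covariate trend (a linear fit as a stand-in for a nonparametric
# smoother), then detrended residuals.
beta = np.polyfit(land_use, no2, deg=1)
residuals = no2 - np.polyval(beta, land_use)

# Step 2: spatial interpolation of residuals at an unmonitored site.
site, site_land_use = np.array([5.0, 5.0]), 0.6
prediction = np.polyval(beta, site_land_use) + idw(coords, residuals, site)
print(round(prediction, 2))
```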
The second essay is titled 'Predictability of hourly nitrogen dioxide concentration'. It is written in co-authorship with Prof. Dr. Harry Haupt and published in 2020 in Ecological Modelling 428, 109076 (https://doi.org/10.1016/j.ecolmodel.2020.109076).
Abstract
Temporal aggregation of air quality time series is typically used to investigate stylized facts of the underlying series such as multiple seasonal cycles. While aggregation reduces complexity, commonly used aggregates can suffer from non-representativeness or non-robustness. For example, definitions of specific events such as extremes are subjective and may be prone to data contaminations. The aim of this paper is to assess the predictability of hourly nitrogen dioxide concentrations and to explore how predictability depends on (i) level of temporal aggregation, (ii) hour of day, and (iii) concentration level. Exploratory tools are applied to identify structural patterns, problems related to commonly used aggregate statistics and suitable statistical modeling philosophies, capable of handling multiple seasonalities and non-stationarities. Hourly times series and subseries of daily measurements for each hour of day are used to investigate the predictability of pollutant levels for each hour of day, with prediction horizons ranging from one hour to one week ahead. Predictability is assessed by time series cross validation of a loss function based on out-of-sample prediction errors. Empirical evidence on hourly nitrogen dioxide measurements suggests that predictability strongly depends on conditions (i)-(iii) for all statistical models: for specific hours of day, models based on daily series outperform models based on hourly series, while in general predictability deteriorates with exposure level.
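The rolling-origin cross-validation idea can be sketched as follows, using a naive seasonal benchmark (the value 24 hours earlier) as a placeholder predictor rather than the models actually compared in the paper.

```python
# Sketch of time series cross-validation (rolling origin) for hourly data,
# with a seasonal-naive forecast as an illustrative placeholder model.
import numpy as np

def rolling_origin_mae(series, horizon=1, season=24, min_train=7 * 24):
    errors = []
    for t in range(min_train, len(series) - horizon):
        forecast = series[t + horizon - season]   # value 24 hours earlier
        errors.append(abs(series[t + horizon] - forecast))
    return float(np.mean(errors))

rng = np.random.default_rng(1)
hours = np.arange(30 * 24)
no2 = 30 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)
print(round(rolling_origin_mae(no2, horizon=1), 2))
```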
The third essay is titled 'Agglomeration and infrastructure effects in land use regression models for air pollution – Specification, estimation, and interpretations'. It is written in co-authorship with Dr. Markus Fritsch and published in 2021 in Atmospheric Environment 253, 118337 (https://doi.org/10.1016/j.atmosenv.2021.118337).
Abstract
Established land use regression (LUR) techniques such as linear regression utilize extensive selection of predictors and functional form to fit a model for every data set on a given pollutant. In this paper, an alternative to established LUR modeling is employed, which uses additive regression smoothers. Predictors and functional form are selected in a data-driven way and ambiguities resulting from specification search are mitigated. The approach is illustrated with nitrogen dioxide (NO2) data from German monitoring sites using the spatial predictors longitude, latitude, altitude and structural predictors; the latter include population density, land use classes, and road traffic intensity measures. The statistical performance of LUR modeling via additive regression smoothers is contrasted with LUR modeling based on parametric polynomials. Model evaluation is based on goodness of fit, predictive performance, and a diagnostic test for remaining spatial autocorrelation in the error terms.
Additionally, interpretation and counterfactual analysis for LUR modeling based on additive regression smoothers are discussed. Our results have three main implications for modeling air pollutant concentration levels: First, modeling via additive regression smoothers is supported by a specification test and exhibits superior in- and out-of-sample performance compared to modeling based on parametric polynomials. Second, different levels of prediction errors indicate that NO2 concentration levels observed at background and traffic/industrial monitoring sites stem from different processes. Third, accounting for agglomeration and infrastructure effects is important: NO2 concentration levels tend to increase around major cities, surrounding agglomeration areas, and their connecting road traffic network.
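A minimal sketch of LUR modelling with additive smoothers, assuming the third-party pyGAM package and synthetic data, looks as follows; the predictors are stand-ins for the spatial and structural variables used in the essay.

```python
# Sketch of a land use regression with additive smoothers (assumes the
# third-party pyGAM package; data and predictors are synthetic).
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(2)
n = 300
lon, lat = rng.uniform(6, 15, n), rng.uniform(47, 55, n)
traffic = rng.uniform(0, 1, n)
no2 = 15 + 8 * traffic + 2 * np.sin(lon) + rng.normal(0, 2, n)

X = np.column_stack([lon, lat, traffic])
gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, no2)   # one smoother per predictor
print(gam.predict(X[:5]).round(2))                # fitted NO2 at the first sites
```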
The fourth essay is titled 'Outlier detection and visualisation in multi-seasonal time series and its application to hourly nitrogen dioxide concentration'. It is single-authored and has not yet been published.
Abstract
Outlier detection in data on air pollutant recordings is conducted to uncover data points that refer to either invalid measurements or valid but unusually high concentration levels. As air pollutant data is typically characterised by multiple seasonalities, the task of outlier detection is associated with the question of how to deal with such non-stationarities. The present work proposes a method that combines time series segmentation, seasonal adjustment, and standardisation of random variables. While the former two are employed to obtain subseries of homoskedastic data, the latter ensures comparability across the subseries. Further, the standardised version of the seasonally adjusted subseries represents a scaled measure for the outlyingness of each data point in the original time series from its mean and therefore forms a suitable basis for outlier detection. In an empirical application to data on hourly NO2 concentration levels recorded at a traffic monitoring site in Cologne, Germany, over the years 2016 to 2019, the common boxplot criterion is used to examine each standardised seasonally adjusted subseries for positive outliers. The results of the analyses are put into their natural temporal order and displayed in a heatmap layout that provides information on when single and sequential outliers occur.
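The core of the procedure can be sketched as follows (illustrative only; the thesis additionally segments the series in time before adjusting and standardising):

```python
# Sketch of the outlier-detection idea: split an hourly series into
# hour-of-day subseries, seasonally adjust and standardise each, then
# apply the boxplot (1.5 * IQR) criterion to the standardised values.
import numpy as np

def positive_outliers(series, season=24):
    hours = np.arange(len(series)) % season
    flags = np.zeros(len(series), dtype=bool)
    for h in range(season):
        idx = np.where(hours == h)[0]
        sub = series[idx]
        adj = sub - sub.mean()                    # seasonal adjustment
        z = adj / sub.std(ddof=1)                 # standardisation
        q1, q3 = np.percentile(z, [25, 75])
        flags[idx] = z > q3 + 1.5 * (q3 - q1)     # boxplot criterion
    return flags

rng = np.random.default_rng(3)
t = np.arange(14 * 24)
no2 = 30 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)
no2[100] += 40                                     # inject one positive outlier
print(np.where(positive_outliers(no2))[0])         # should include index 100
```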
Poverty, underemployment, lack of infrastructure, low agricultural productivity, degradation of natural resources, climate change, and eroding social cohesion are among the biggest challenges that many low and lower-middle income countries are facing. Objectives linked to addressing these pressing challenges have been ascribed to public works programmes (PWPs). These are social protection instruments which offer remuneration (in cash or kind) for vulnerable people in exchange for temporary work on labour-intensive low-skill activities with social benefits. PWPs are being implemented in around two out of three developing countries. Given the substantial amounts spent on PWPs, it is critical to know to what extent the expectations towards them are backed by evidence. This dissertation sheds light on this overarching question with three self-contained essays. The first essay synthesises the evidence from PWPs in Sub-Saharan Africa, guided by three questions: First, what can we infer from the available impact evaluations regarding the effectiveness of PWPs as a social protection instrument? Second, what do we know about the role of the wage vector, asset vector, and skills vector in this respect? Third, what can we infer about the role of design features in explaining differences in outcomes? The other two essays use empirical evidence from Malawi to address more specific questions regarding the potential of PWPs to strengthen climate resilience and the relationship between PWPs and social cohesion.
What sets the evidence synthesis in my first essay apart from existing reviews of PWPs is that it accounts for their heterogeneity by systematically differentiating results by PWP type and outcome area (income, consumption and expenditures, labour supply, food security, nutrition, asset holdings, agricultural production and techniques, and education). Programmes that offer short-term ad-hoc employment (Type 1) are distinguished from programmes that offer more predictable employment over longer periods (Type 2). For the review of impacts, this paper relies solely on (quasi-)experimental studies, but for the analysis of the role of design factors also on other literature. In line with existing reviews, my results suggest that Type 1 programmes can effectively enable consumption smoothing in the wake of acute crises, whereas in contexts of chronic poverty, Type 2 programmes perform, on balance, better. Offering complementary access to extension services in Type 2 programmes can boost impacts further. However, in all cases, evidence is too scant and mixed to safely conclude whether the higher benefits of costlier PWP types justify the cost premium.
The second essay investigates the potential of PWPs to strengthen climate resilience. Among the main social protection instruments, the biggest potential to strengthen climate resilience is often ascribed to PWPs if they create climate-smart community assets and transfer knowledge of climate-smart practices. Yet, there is a lack of evidence whether design changes to this end can indeed enhance the contribution of an existing PWP to climate resilience. I use a difference-in-differences approach based on two-period panel data to analyse how a modified PWP model performs compared to the standard model of Malawi’s largest PWP after 24 months. The key modification is to embed public works in a communal watershed management plan with a strong emphasis on collective action and capacity building. I find that the modified approach considerably increased communal watershed management activities through voluntary labour contributions on top of the paid public works labour. While this increase was mainly driven by PWP participants, non-participants also made substantial contributions. I also find a small increase in the adoption of soil and water conservation practices on respondents’ private land, especially by non-PWP participants. These findings imply that such modest changes can make PWPs climate-smarter. In particular, they can broaden the engagement in and adoption of climate-smart activities beyond the group of PWP participants.
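For reference, the generic two-period difference-in-differences specification underlying such an analysis (in illustrative notation, not the essay's exact model) is:

```latex
% Generic two-period difference-in-differences set-up (illustrative):
\[
  Y_{it} = \alpha + \beta\, \mathrm{Treat}_i + \gamma\, \mathrm{Post}_t
           + \delta\, (\mathrm{Treat}_i \times \mathrm{Post}_t) + \varepsilon_{it},
\]
% where \delta estimates the effect of the modified PWP model under the
% parallel-trends assumption.
```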
The co-authored third essay investigates the relationship between Malawi’s MASAF PWP and social cohesion, specifically within-community cooperation for the common good. Like the existing studies, we face the challenge that neither the assignment of the programme to communities nor the selection of individual participants is randomised. We try to mitigate the endogeneity concerns by triangulating fixed effects panel analyses for a set of outcomes and sectors using two datasets with different units of analysis (households and communities). We find that public works are positively associated with coordination activities and voluntary (unpaid) contributions to public goods, along both vertical ties (between community members and local leaders) and horizontal ties (among community members). Especially for school-building activities, voluntary inputs in the form of labour and other in-kind contributions are higher in the presence of the public works programme. Our results contribute to a better understanding of the link between social protection programmes with community-driven features and social cohesion.
Overall, the findings of the three essays in this dissertation contribute to the knowledge base regarding the effectiveness and potential of PWPs across a broad range of outcome areas. Specifically, they offer new insights into how to harness the potential of PWPs to strengthen climate resilience and into the seemingly positive relationship between PWPs and social cohesion. The findings can help researchers and policy makers who are interested specifically in PWPs or in any of the many objectives that can be pursued through PWPs.
The increasing relevance of massive graph data reinforces the need for adequate graph data management. While several graph database engines have been developed, the storage of graph data in a relational database management system, and therefore the seamless integration into existing information systems remains an open challenge.
Motivated by the use case to integrate Building Information Modeling (BIM) data into the MonArch system, we propose a solution that transforms the BIM data into a property graph and stores this graph in the database system.
We present a novel approach to efficiently store property graph data in a relational database management system using JSON functionality and redundant storage of edges in adjacency lists and show how to import huge data sets into this schema. Applying this approach, we import data sets of up to nearly 1 TB of disk space within the relational database, while only having 96 GB of main memory available.
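An illustrative relational schema in this spirit (not the actual MonArch schema) stores node and edge properties as JSON documents and keeps redundant adjacency lists on the node rows:

```python
# Illustrative relational schema for a property graph: node/edge
# properties as JSON, plus redundant adjacency lists on the node rows.
# SQLite is used here only to keep the sketch self-contained.
import json, sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE nodes (
    id        INTEGER PRIMARY KEY,
    label     TEXT,
    props     TEXT,        -- JSON document with node properties
    out_edges TEXT,        -- JSON adjacency list (redundant, speeds up traversal)
    in_edges  TEXT         -- JSON adjacency list (redundant)
);
CREATE TABLE edges (
    id     INTEGER PRIMARY KEY,
    source INTEGER REFERENCES nodes(id),
    target INTEGER REFERENCES nodes(id),
    label  TEXT,
    props  TEXT             -- JSON document with edge properties
);
""")

conn.execute("INSERT INTO nodes VALUES (1, 'Wall',  ?, '[1]', '[]')",
             (json.dumps({"material": "brick"}),))
conn.execute("INSERT INTO nodes VALUES (2, 'Floor', ?, '[]', '[1]')",
             (json.dumps({"level": 2}),))
conn.execute("INSERT INTO edges VALUES (1, 1, 2, 'ADJACENT_TO', '{}')")

row = conn.execute("SELECT label, props FROM nodes WHERE id = 1").fetchone()
print(row[0], json.loads(row[1]))
```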
We also present a new approach of how to retrieve data from this database schema, translating queries written in the popular property graph query language Cypher into SQL. Hence, we provide an intuitive way to write semantically complex queries.
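As an example of the kind of translation involved, a simple Cypher pattern could be rewritten into SQL over the schema sketched above; the generated SQL below is one plausible translation, not the system's actual output.

```python
# One plausible translation of a simple Cypher pattern into SQL over the
# illustrative schema above (not the system's actual output).
cypher = "MATCH (w:Wall)-[:ADJACENT_TO]->(f:Floor) RETURN w.id, f.id"

sql = """
SELECT w.id AS w_id, f.id AS f_id
FROM nodes AS w
JOIN edges AS e ON e.source = w.id AND e.label = 'ADJACENT_TO'
JOIN nodes AS f ON f.id = e.target
WHERE w.label = 'Wall' AND f.label = 'Floor';
"""
print(cypher, "\n-->\n", sql)
```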
We also demonstrate the efficiency of our approach using the standardized Linked Data Benchmark Council Social Network Benchmark (LDBC-SNB) framework. Our approach increases the throughput for this benchmark by up to 85 times compared to existing approaches for RDBMS.
In addition, we propose a new method to transform BIM data into the property graph model and show how to apply the aforementioned property graph storage to this data. We can import IFC models of up to 300 MB within five minutes.
We show the suitability of our approach using our own use case specific benchmark, which we integrated into the previously mentioned Social Network Benchmark. For our interactive use case-specific queries, we achieve response times faster than 5 ms in 99% of all executions.
Finally, we present how the aforementioned approach to store BIM data in a relational database management system is integrated into the existing MonArch system by splitting the different functionalities of our approach into a microservice architecture.
Most Sub-Saharan African (SSA) countries experienced sound economic growth and a declining rate of poverty over the last two decades. Nevertheless, the SSA region remains by far the poorest in the world and faces tremendous political, social, and economic challenges. Moreover, due to the COVID-19 pandemic, SSA entered a recession in 2020 with a GDP growth rate of minus 5%, the lowest recorded in over 25 years. This has also induced an increase in poverty in the region, which adds to the structural challenges and further highlights the need for sound policies addressing economic growth, governance, jobs, and poverty for the region to meet the Sustainable Development Goals (SDGs) in 2030 and beyond.
This thesis examines the effects of institutional quality, political instability, and a government-targeted entrepreneurship program on the accumulation of human, physical, and financial capital by households and firms. In the literature, these factors are identified as key determinants of economic growth and job creation; this thesis addresses a knowledge gap, especially at the microeconomic level, on how households and firms accumulate these forms of capital in the presence of weak institutional quality, political instability, and government-targeted entrepreneurship programs. In particular, this thesis investigates the effects of institutional quality and political instability both across a heterogeneous set of countries and in a single-country study; it employs a randomized controlled trial (RCT) to assess the impacts of two different targeted entrepreneurship support programs; and finally, it draws on data from this field experiment to assess the performance of two different targeting mechanisms for selecting growth-oriented entrepreneurs. Each paper is self-contained, and three of the four papers were written with co-authors.
The first paper assesses the effects of institutional quality and political instability on household assets and human capital accumulation in 19 Sub-Saharan African countries for the period 2003-16. In this paper, the concept of instability is enlarged to include factual instability as measured by the number of political violence and civil unrest events, perceived instability as measured by the perceptions of the quality of institutions by households, and the interplay between factual and perceived instability. Contrary to most previous analyses, this paper takes into account household wealth distribution to show how the effects of political instability differ for poor vs. rich households. For identification, I exploit the variation of factual and perceived instability across 185 administrative regions in the 19 countries. My regressions control for a large range of confounding factors measured at the levels of households, regions, and countries. Overall, factual and perceived instability are associated with higher investments in assets, and factual instability is also associated with more investment in house improvements, yet it is negatively associated with the ownership of financial accounts. With regard to the heterogeneous effects, increased factual or perceived instability is associated with more investments in physical capital but less investments in financial and human capital among rich households, and with less investments in physical, financial and human capital among poor households. These findings suggest that political instability might enhance the accumulation of wealth by rich households and reduce that of poor households, implying that the detrimental effects of political instability have lasting consequences for poor households, especially when poor households are exposed to an actual or even just perceived deteriorating quality of the country’s institutions.
The second paper, written with Nicolas Büttner and Michael Grimm, analyzes households’ investments in assets and their consumption, and education and health expenditures when exposed to actual instability as measured by the number of political violence and protest events in Burkina Faso. There is a large, rather macroeconomic, literature that shows that political instability and social conflict are associated with poor economic outcomes including lower investment and reduced economic growth. However, there is only very little research on the impact of instability on households’ behavior, in particular their saving and investment decisions. This paper merges six rounds of household survey data and a geo-referenced time series of politically motivated events and fatalities from the Armed Conflict Location and Event Data project (ACLED) to analyze households’ decisions when exposed to instability in Burkina Faso. For identification, the paper exploits variation in the intensity of political instability across time and space while controlling for time-effects and municipality fixed effects as well as rainfall and nighttime light intensity, and many other potential confounders. The results show a negative effect of political instability on financial savings, the accumulation of durables, investment in house improvements, as well as on investment in education and health. Instability seems, in particular, to lead to a reshuffling from investment expenditures to increased food consumption, implying lower growth prospects in the future. With respect to economic growth, the sizable education and health effects seem to be particularly worrisome.
The third paper, written with Michael Grimm and Michael Weber, employs a randomized controlled trial (RCT) to assess the short-term effects of a government support program targeted at already existing and new firms located in a semi-urban area in Burkina Faso. Most support programs targeted at small firms in low- and middle-income countries fail to generate transformative effects and employment at a larger scale. Bad targeting, too little flexibility, and the limited size of the support are some of the factors that are often seen as important constraints. This paper assesses the short-term effects of a randomized, targeted government support program for a pool of small and medium-sized firms that were selected based on a rigorous business plan competition (BPC). One group received large cash grants of up to US$8,000, flexible in use. A second group received cash grants of equal size, but earmarked for business development services (BDSs), thus less flexible and with a required own contribution of 20%. A third group serves as a control group. All firms operate in agri-business or related activities in a semi-urban area in the Centre-Est and Centre-Sud regions of Burkina Faso. An assessment of the short-term impacts shows that beneficiaries of cash grants engage in better business practices, such as formalization and bookkeeping. They also invest more, though this does not yet translate into higher profits and employment. Beneficiaries of cash grants and BDSs show a higher ability to innovate. The results also show that cash grants cushioned the adverse effects of the COVID-19 pandemic for the beneficiaries. More generally, this study adds to the thin literature on support programs implemented in a fragile-state context.
The fourth paper, written with Michael Weber, examines the selection of entrepreneurs based on expert judgments for a BPC in Burkina Faso. To support job creation in developing countries, governments allocate significant funds to a typically small number of new or already existing micro, small, and medium-sized enterprises (MSMEs) that are growth oriented. Increasingly, these enterprises are picked through BPCs in which thematic experts are asked to make the selection. So far, there is limited and contrasting evidence on the effectiveness and efficiency of these expert judgments for screening growth-oriented entrepreneurs among contestants in BPCs. Alternative or complementary approaches such as evaluation and selection algorithms are discussed in the literature, but evidence on their performance is thin. This paper uses a principal component analysis (PCA) to build a metric for comparing the performance of these alternative mechanisms for targeting entrepreneurs with high potential to grow. The results show an expert subjectivity bias in judging contestant entrepreneurs. The paper finds that the scores from the expert judgment and those from the algorithm perform similarly well for picking the top-ranked or talented entrepreneurs. It also finds that both types of scores have predictive power, i.e. they are statistically significantly associated with 17 firm performance outcomes measured 10 or 34 months after the BPC started. Yet, the predictive power, as measured by the magnitude of the regression coefficients, is higher for the algorithm metric, even when it is considered jointly with the expert judgment scores. Despite the statistical superiority of the algorithm, expert assessments, at least through entrepreneurs' pitches, have proved useful in many settings where free-riding or misuse of public funds may occur. Hence, efficiency and precision could be achieved by relying on a reasoned combination of expert judgments and an algorithm for targeting growth-oriented entrepreneurs.
These four papers bring new insights into the relationship between weak institutions, political instability, and targeted government support to entrepreneurship on the one hand, and the accumulation of financial, physical, and human capital and productivity on the other. These are the key factors for spurring economic growth and creating jobs in SSA. The findings suggest that efficient institution building in SSA countries would enhance citizens' perceptions of good governance, which would reduce political instability and enable households, including the poor, to accumulate productive assets, increase their productivity, and reduce poverty. The findings also suggest that targeted government entrepreneurship support programs, e.g. in the form of cash grants with monitored disbursements yet flexible use, can enhance firms’ human capital, productive assets, and innovations, even in the short term. Moreover, the targeting mechanism of such programs could be made more effective and efficient by relying on a combination of expert judgments and an algorithm for picking growth-oriented entrepreneurs.
With respect to religious motivations for political participation and civic engagement, scholars have set their focus on social capital and the potential for recruiting volunteers inside religious communities. Less attention has been paid to individual religious impulses and to the reasons for which people step out of deliberative processes, especially in the Eastern European Orthodox cultural context. We approach this under-explored research field in this dissertation, with a focus on the Romanian city of Timișoara, and look at three particular aspects: the influence of religious perceptions on political protests (an analysis of the 1989 revolution), the manner in which religion motivates young people to volunteer (a view on a local community project), and the factors that lead people to retreat from public engagement (an analysis of citizens' local committees). The research methods are qualitative: interviews, group discussions, and analytical interpretation. Results show that non- or less religious young people are encouraged to protest by an indefinable supernatural force and are motivated by moral interests (the need for dignity and fair treatment / procedural justice) more than by material ones (distributive justice). When engaging in the community, the impulse partially comes from an intrinsic spirituality and a privatized experience of the divine. Giving up civic engagement has nothing to do with remuneration, but with the need for freedom of expression and moral appreciation.
In many cases, transitioning towards sustainable agricultural production requires farmers to change their practices. These changes can include the adoption of sustainable agricultural practices, water-saving, or the disadoption of excessive chemical input use or land burning. Policy makers interested in making agricultural production more sustainable need to understand what encourages the uptake of sustainable practices and what is effective in reducing unsustainable practices. This thesis seeks to understand whether and how information provision and endorsement can contribute to the transition towards more sustainable agricultural systems.
The thesis consists of three self-contained papers. The first paper explores the potential of religious endorsement for inducing pro-environmental behaviour and encouraging the disadoption of fire as an agricultural practice, thereby preventing forest fires. The paper analyses the impact of a fatwa (an Islamic religious ruling) on reducing fire incidence in Indonesia. Results indicate that fire incidence decreased in Muslim majority villages following the issuing of the fatwa. For the post-fatwa period from August 2016 to December 2019, the average monthly effect amounts to around 2.2 prevented fire events per village. This is a considerable effect. The paper concludes that fire prevention efforts, and potentially other environmental conservation efforts, could benefit significantly from support by religious institutions and stakeholders.
The second paper investigates the role of information provision and training for the adoption of organic farming practices in Java, Indonesia. We use a randomised controlled trial (RCT) to identify the impact of a three-day hands-on training in organic farming for smallholder farmers. We find that the training intervention increased the adoption of organic inputs and had a positive and statistically significant effect on farmers’ knowledge and perceptions of organic farming. Overall, our findings suggest that information constraints are a barrier to the adoption of organic farming, as information provision increased the use of organic farming practices.
The third paper investigates whether urban and suburban Indonesian consumers are willing to pay a price premium for organic food. We use an incentive-compatible auction based on the Becker-DeGroot-Marschak (BDM) approach to elicit consumers’ willingness to pay (WTP). We further study the effect of income and of a randomised information treatment about the benefits of organic food on respondents’ WTP. Estimates suggest that consumers are willing to pay a price premium for organic rice, on average 20 percent more than what they paid for conventional rice outside of our experiment. However, our results also indicate that raising consumers’ WTP further is complex. Showing participants a video about the health or, alternatively, the environmental benefits of organic food was not effective in further raising WTP. Exposure to the environmental-benefits video was, however, effective in raising stated organic food consumption intentions.
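For readers unfamiliar with the mechanism, the BDM procedure can be sketched in a few lines (the price range and bid below are hypothetical): the respondent states a bid, a price is drawn at random, and the purchase only takes place if the drawn price does not exceed the bid, which makes truthful bidding optimal.

```python
# Sketch of the Becker-DeGroot-Marschak (BDM) mechanism. The bid never
# influences the price actually paid, so stating the true willingness to
# pay is the optimal strategy. Price range and bid are hypothetical.
import random

def bdm_round(bid, price_range=(0.0, 2.0), seed=42):
    rng = random.Random(seed)
    price = rng.uniform(*price_range)
    buys = price <= bid
    return {"drawn_price": round(price, 2), "buys": buys,
            "pays": round(price, 2) if buys else 0.0}

print(bdm_round(bid=1.20))   # e.g. a stated WTP of 1.20 for organic rice
```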
Critical infrastructure and contemporary business organizations are experiencing an ongoing paradigm shift towards more collaboration and agility in business. On the one hand, this shift seeks to enhance business efficiency, coordinate large-scale distribution operations, and manage complex supply chains. On the other hand, it makes traditional security practices such as firewalls and other perimeter defenses insufficient. Therefore, concerns over risks like terrorism, crime, and business revenue loss increasingly impose the need for enhancing and managing security within the boundaries of these systems so that unwanted incidents (e.g., potential intrusions) can still be detected with high probability. To this end, critical infrastructure organizations step up their efforts to investigate new possibilities for actively engaging in situational awareness practices to ensure a high level of persistent monitoring as well as on-site observation.
Compliance with security standards is necessary to ensure that organizations meet regulatory requirements mostly shaped by a set of best practices. Nevertheless, it does not necessarily result in a coherent security strategy that considers the different aims and practical constraints of each organization. In this regard, there is a growing demand for risk-based security management approaches that enable critical infrastructures to focus their efforts on mitigating the risks to which they are exposed. Broadly speaking, security management involves the identification, assessment, and evaluation of long-term (or overall) objectives and interests as well as the means of achieving them.
Due to the critical role of such systems, their decision-makers tend to enhance system resilience against severe outcomes and consequences. That is, they seek to avoid decision options associated with likely extreme risks in the first place. Practically speaking, this risk attitude can significantly influence the decision-making process in such critical organizations. Towards incorporating the aversion to extreme risks into security management decisions, this thesis thoroughly investigates the capabilities of a recently emerged theory of games whose payoffs are probability distributions. Unlike traditional optimization techniques, this theory provides an alternative decision technique that is more robust to extreme risks and uncertainty. Furthermore, this thesis proposes a new method that gives a decision maker more control over the decision-making process by defining loss regions with different importance levels according to people's risk attitudes. In this way, the static decision analysis used in the distribution-valued games is transformed into a dynamic process that can adapt to different subjective risk attitudes or account for future changes in the decision caused by a learning process or other changes in the context.
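As a rough illustration of decision making that prioritizes the avoidance of extreme risks over the average outcome, the Python sketch below ranks candidate security options by the upper quantiles of simulated loss distributions. The option names, the lognormal loss samples, and the quantile levels are illustrative assumptions; this is not the distribution-valued game machinery developed in the thesis, only a hint at why tail behaviour rather than the mean can drive the choice.

    import numpy as np

    def tail_averse_rank(loss_samples, quantiles=(0.9, 0.95, 0.99)):
        """Rank decision options by their losses in the distribution tail.

        loss_samples maps an option name to a 1-D array of simulated losses.
        Options are compared lexicographically on high quantiles (worst first),
        so an option with a lighter extreme tail is preferred even if its mean
        loss is slightly higher.
        """
        keys = []
        for name, samples in loss_samples.items():
            qs = tuple(np.quantile(samples, q) for q in reversed(quantiles))
            keys.append((qs, name))
        return [name for _, name in sorted(keys)]

    rng = np.random.default_rng(0)
    options = {
        "patch_now":    rng.lognormal(mean=1.0, sigma=0.4, size=10_000),
        "monitor_only": rng.lognormal(mean=0.8, sigma=1.2, size=10_000),  # heavier tail
    }
    print(tail_averse_rank(options))  # the option with the lighter tail comes first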
Throughout its different parts, this thesis shows how theoretical models, simulation, and risk assessment models can be combined into practical solutions. In this context, it deals with three facets of security management: allocating limited security resources, prioritizing security actions, and tweaking decision making. Finally, the author discusses experiences and limitations distilled from this research and from investigating the new theory of games, which can be taken into account in future approaches.
Governments around the world currently focus on shaping the digital economy. Particular attention is paid to Internet platforms, Internet infrastructure and data as essential components of the digital economy. The three studies in this thesis contribute to the understanding of the behavior of firms in each of these domains and derive insights for future regulations and business projects.
The first study deals with the ranking of content on Internet platforms and how it affects the incentives of content providers to invest in content quality. The focus of the study is on sponsored ranking and organic ranking, but the case that a vertically integrated content provider is favored by an Internet platform is also taken into account. Using a game theoretic model, it is shown that there is no ranking design that strictly leads to more investment compared to the other designs. It is also shown that the Internet platform usually chooses the type of ranking that, from the perspective of the Internet platform and consumers, yields the best expected overall content quality. The second study deals with the incentive of Internet service providers to throttle specific Internet content. The key finding is that Internet service providers use this instrument to utilize the capacity of their telecommunications network more efficiently. This leads not only to more benefits for Internet users, but also to a higher incentive to invest in network capacity due to better monetization. The third study examines the circumstances under which firms are willing to share data with other firms. By means of an economic laboratory experiment, it is shown that more data is shared if the firms have control over who exactly they share data with. Thus, for example, data pools that grant unrestricted data access to all participating firms can be expected to perform worse than data pools that give their participating firms control over with whom their uploaded data is shared. In addition, the third study finds that established relationships are characterized by more data sharing and less volatility in the amount of shared data than new relationships. The study concludes that data sharing projects should not be expected to work optimally right away.
In summary, the studies in this thesis identify a number of costs that may arise when digital firms' choice is restricted by regulation or design. The ability of Internet service providers to throttle certain content and the ability of Internet platforms to choose the ranking design are usually used in the best interests of consumers. Data sharing also works best when firms are free to decide who gets their data.
The current electricity grid is undergoing major changes. There is increasing pressure to move away from power generation from fossil fuels, both due to ecological concerns and fear of dependencies on scarce natural resources. Increasing the share of decentralized generation from renewable sources is a widely accepted way to a more sustainable power infrastructure. However, this comes at the price of new challenges: generation from solar or wind power is not controllable and only forecastable with limited accuracy. To compensate for the increasing volatility in power generation, exerting control on the demand side is a promising approach. By providing flexibility on demand side, imbalances between power generation and demand may be mitigated.
This work is concerned with developing methods to provide grid support on the demand side while limiting the associated costs. This is done in four major steps: first, the target power curve to follow is derived, taking into account both the goals of a grid authority and the costs of the respective load. Subsequently, the special case of data centers, as an instance of significant loads inside a power grid, is examined more closely. Data center services are adapted in such a way as to follow the previously derived power curve. By means of hardware power demand models, the required adaptation of hardware utilization can be derived. The possibilities of adapting software services are investigated for the special use case of live video encoding. A method to minimize quality-of-experience loss while reducing power demand is presented. Finally, the possibility of applying probabilistic model checking to a continuous demand-response scenario is demonstrated.
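A common simplification relates server power draw linearly to utilisation between idle and peak power; the sketch below inverts such a model to translate a target power curve into utilisation set-points. The idle and peak wattages and the example curve are illustrative assumptions, not the hardware power demand models developed in this work.

    def target_utilisation(p_target_w, p_idle_w=120.0, p_max_w=350.0):
        """Invert a linear server power model P(u) = P_idle + (P_max - P_idle) * u.

        Given one point of a target power curve (in watts), return the CPU
        utilisation the server should be steered towards, clipped to [0, 1].
        """
        u = (p_target_w - p_idle_w) / (p_max_w - p_idle_w)
        return max(0.0, min(1.0, u))

    # Example: a target power curve in watts, one value per 15-minute slot.
    curve_w = [200.0, 260.0, 330.0, 140.0]
    print([round(target_utilisation(p), 2) for p in curve_w])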
Networks of regional cross-border governance (RCBG) have gained noticeably in importance in the EU, particularly over the past two decades. They constitute highly complex governance structures and offer considerable added value to the actors involved. The EUSDR and EUSALP are a further evolution of RCBG and can point to demonstrable successes in various areas, yet they do not live up to the high expectations postulated beforehand. RCBG networks, and the macro-regional strategies in particular, contribute in a certain way to a territorial differentiation of the EU; however, they are still far from fulfilling the expectations voiced in part regarding a “(macro-)regionalisation of the EU”.
Replacing fossil-fueled vehicles with Electric Vehicles (EVs) poses new challenges for power distribution networks. Specifically, the electrification of the mobility sector relies on the ability to process and analyze information on when, where, for how long, or how fast charging processes will take place. Nevertheless, this kind of information is typically difficult to acquire or insufficiently predictable due to the dynamic nature of the system. Also, the increasing adoption of renewable energy sources, specifically domestic Photovoltaic (PV) systems, and the potentially associated grid defection scenarios will significantly impact the cost and effort required to operate the grid in terms of power quality and demand-supply aspects. However, such emerging requirements were arguably not taken into account when the distribution grid was originally built. Besides, expanding distribution and transmission capacity is a very costly and lengthy process. Therefore, any proposed solution should be cost-effective as well as environment-, grid- and user-friendly. To this end, advancements in Information and Communications Technology (ICT) are increasingly adopted and applied. This thesis addresses the rapidly growing EV sector and deals with the problem of overcoming the potential power quality degradation caused by the challenges mentioned above.
Since time switches and radio ripple control, the existing solutions in Germany, are costly and neither very effective nor scalable because they require hardware retrofitting of existing public Charging Stations (CSs), the primary focus of this work is the development of an appropriate, standards-based, scalable, and smart charging solution for EVs. Such a solution can, in turn, boost the usage of renewable energy by ensuring that the existing grid infrastructure can operate within its permissible limits while maintaining acceptable levels of power quality.
This work introduces a new definition of the concept of “grid-friendly EV charging”, where the power demand of a CS is adjusted depending on the real-time status of the power grid. In this regard, the conflicting concerns of stakeholders in an EV ecosystem are considered. For example, a Distribution System Operator (DSO) does not want to reveal many technical details about the power grid or its status. Similarly, a Charging Service Provider (CSP) wants to keep its clients happy without sharing the details of its business model with others, namely DSOs. To that end, a distributed smart charging architecture is proposed in this thesis. It is event-driven and responds in near real-time to unforeseen and critical grid situations such as high/low voltage, congestion, phase unbalance, and harmonics. In that regard, the publish/subscribe messaging pattern, used as part of the architecture, enables an efficient and well-performing communication scheme among the different components. Moreover, an indication mechanism for the different issues in a power grid is developed; it adopts the traffic light model, works as a black box separating the smart controllers of the individual CSs, and is configured only by the CSP. Smart chargers enable a smooth adjustment of the charging power to avoid drastic changes in the grid state. To that end, two types of intelligent controllers are developed and tested. While the first controller is inspired by fuzzy logic, the second one is inspired by the slow-start mechanism used in TCP to control congestion in computer networks.
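The intuition behind the slow-start-inspired controller can be conveyed by a short sketch in which charging power ramps up quickly while the traffic-light grid signal is green and is cut back sharply on a red signal. The power limits, threshold, and step size are illustrative assumptions and do not reproduce the controllers developed and tested in the thesis.

    def next_charging_power(current_kw, grid_signal, max_kw=22.0, min_kw=1.4,
                            threshold_kw=11.0, step_kw=1.0):
        """One control step of a TCP-slow-start-like charging controller (sketch).

        grid_signal follows the traffic light idea: "green" means no issue was
        detected, "red" means a critical situation (e.g. under-voltage or
        congestion). Below the threshold the power doubles each step; above it,
        it only grows linearly; on a red signal it is halved, mimicking
        congestion control in TCP.
        """
        if grid_signal == "red":
            return max(min_kw, current_kw / 2.0)
        if current_kw < threshold_kw:
            return min(max_kw, current_kw * 2.0)   # "slow start" phase
        return min(max_kw, current_kw + step_kw)   # cautious linear growth

    power_kw = 1.4
    for signal in ["green", "green", "green", "green", "red", "green"]:
        power_kw = next_charging_power(power_kw, signal)
        print(signal, round(power_kw, 1))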
A simulative approach is applied to evaluate the solution, whereby a topology of a real low-voltage grid with realistic load and generation profiles is used. Furthermore, a set of metrics is defined regarding the main concerns of stakeholders: voltage, overloading, fairness, the satisfaction of EV users and the grid operator, as well as the grid-friendly behavior of a CS/EV user. The evaluation shows that the solution is able to guarantee a safe operation of the grid. The proposed system can ensure grid-friendly charging by sacrificing a small portion of user satisfaction; this sacrifice is rewarded via a points-based reward system. Last but not least, the proposed distributed controllers are compared to two other controllers: (1) a decentralized controller based only on sensing the local voltage and (2) a very strict centralized controller focusing on grid-friendliness. The latter ensures proportional fairness among users regarding the objective function of the optimization problem solved in each simulation step. The distributed controllers are superior to the decentralized controller in terms of grid-friendliness and fairness, and in general converge to the centralized one.
The segmentation of volumetric datasets, i.e., the partitioning of the data into disjoint sub-volumes with the goal of extracting information about these regions, is a difficult problem and has been discussed in medical imaging for decades.
Due to the ever-increasing imaging capabilities, in particular in X-ray computed tomography (CT) or magnetic resonance imaging, segmentation is also gaining interest in industrial applications.
Especially in industrial applications, the generated datasets are growing in size.
Hence, most applications apply well-known techniques in a 2+1-dimensional manner, i.e., they apply image segmentation procedures on each slice separately and track the progress along the axis along which the slices are stacked.
This discards the information on preceding or subsequent slices, which is often assumed to be nearly identical. However, in the industrial context this might prove wrong since industrial parts might change their appearance significantly over the course of even a few slices.
Moreover, artifacts can further distort the content of the slices.
Therefore, three-dimensional processing of voxel volumes has to be preferred, which imposes constraints on the segmentation procedures. For example, they must not rely on global information, as computing it efficiently is usually not feasible for big scans.
Yet another frequent problem is that applications focus on individual parts only and algorithms are tailored to that case. Most prominent medical segmentation procedures do so by applying methods to specifically find the liver and only the liver of a patient, for example.
The implication is that the same method cannot then be applied to find other parts of the scan, and such methods have to be designed individually for each object to be segmented.
Flexible segmentation methods are especially needed when partitioning unique scans. We define a unique scan to be a voxel dataset for which no comparable volume exists.
Classical examples include the use case of cultural heritage where not only the objects themselves are unique but also scan parameters are optimized to obtain the best image quality possible for that specific scan.
This thesis aims at introducing novel methods for voxelwise classifications based on local geometric features.
The latter are computed from local environments around each voxel and extract information in ways similar to how humans do, namely by observing their similarity to geometric or textural primitives.
These features serve as the foundation for learning the proposed voxelwise classifiers and for discriminating between segmented and unsegmented voxels.
On the one hand, they perform fully automated clustering of volumes for which a representative random sample is extracted first.
On the other hand, a set of segmenting classifiers can be trained from few seed voxels, i.e., volume elements for which a domain expert marked if they belong to the components that shall be segmented. The interactive selection offers the advantage that no completely labeled voxel volumes are necessary and hence that unique scans of objects can be segmented for which no comparable scans exist.
Overall, it will be shown that all proposed segmentation methods are effectively of linear runtime with respect to the number of voxels in the volume. Thus, voxel volumes without size restrictions can be segmented in an efficient linear pass through the volume.
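To make the voxelwise classification idea concrete, the sketch below computes simple per-voxel neighbourhood features in a linear pass over a toy volume and trains a classifier from a few seed voxels. The box-filter features, the random forest, and the seed positions are generic stand-ins chosen for illustration, not the local geometric features proposed in this thesis.

    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.ensemble import RandomForestClassifier

    def local_features(volume, size=5):
        """Simple per-voxel features from a local neighbourhood.

        Each feature map is obtained with a separable box filter or a gradient,
        so the cost stays linear in the number of voxels.
        """
        mean = uniform_filter(volume, size=size)
        sq_mean = uniform_filter(volume ** 2, size=size)
        var = np.maximum(sq_mean - mean ** 2, 0.0)
        gz, gy, gx = np.gradient(volume)
        grad_mag = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
        return np.stack([mean, var, grad_mag], axis=-1)

    # Toy volume: a bright cube embedded in noise.
    rng = np.random.default_rng(0)
    vol = rng.normal(0.0, 0.1, size=(40, 40, 40))
    vol[10:30, 10:30, 10:30] += 1.0

    feats = local_features(vol).reshape(-1, 3)

    # A handful of "seed voxels" marked by a domain expert (flat indices, illustrative).
    seed_idx = np.array([
        20 * 1600 + 20 * 40 + 20,   # inside the cube  -> label 1
        5 * 1600 + 5 * 40 + 5,      # background       -> label 0
        35 * 1600 + 35 * 40 + 35,   # background       -> label 0
    ])
    seed_lbl = np.array([1, 0, 0])

    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(feats[seed_idx], seed_lbl)
    segmentation = clf.predict(feats).reshape(vol.shape)
    print("segmented voxels:", int(segmentation.sum()))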
Finally, the segmentation performance is evaluated on selected datasets which shows that the introduced methods can achieve good results on scans from a broad variety of domains for both small and big voxel volumes.
Online social networks provide a rich source of information about millions of users worldwide. However, due to sparsity and complex structure, analyzing these networks is quite challenging and expensive. Recently, graph embedding emerged to map networked data into low-dimensional representations, i.e. vector embeddings. These representations are fed into off-the-shelf machine learning algorithms to simplify and speed up graph analytic tasks. Given the immense importance of social network analysis, in this thesis, we aim to study graph embedding for social networks in three directions.
Firstly, we focus on social networks at the microscopic level to primarily encode the structural characteristics of users' personal networks, so-called ego networks. These representations are utilized in evaluation tasks whose performance depends on relational information from direct neighbors. For example, social circle prediction and event attendance inference both need structural information from neighbors in social networks.
Secondly, we explore assessing the content of vector embeddings in terms of topological properties. This is achieved via two proposed approaches: 1) a learning-to-rank algorithm in which the model weights reveal the importance of properties at the subgraph level (ego networks), 2) a regression model for direct approximation of network statistical properties at the vertex level.
Thirdly, we propose extensions of graph embedding to capture sign or additional content of social networks. Users in social media often express their feelings and attitudes towards others which forms sentiment links besides social links. We design a joint objective function whose terms capture semantics of both social and sentiment links simultaneously. We also propose a multi-task learning framework for networks with attributes and labels by stacking autoencoders. The weights of the learning tasks are automatically assigned via an adaptive loss weighting layer.
This thesis is concerned with proposals that aimed at transforming or reforming the English-speaking world so that it could continue to dominate the world in the future. In the late 19th and early 20th centuries, these ideas emerged from the discourse of Anglo-Saxonism that represented the Anglo-Saxon ‘race’ as the most developed ‘race’ in the world which could, therefore, ‘legitimately’ rule the world. In the later 20th century, an Atlantic discourse developed, which appeared to address further nations in the group of world leaders. However, it seems to rely on similar discursive elements as Anglo-Saxonism, which only includes the English-speaking world. The construction of the respective discourses is examined in late 19th/early 20th century writings by authors broadly associated with the British Empire as well as in Union Now, a 1939 book by U.S.-American Clarence K. Streit. The latter part presents the focus of this thesis. Streit developed a new concept of a world order in which a world state – the Atlantic Union – was to be established. In a first step, it should only be founded by a nucleus of the 15 ‘leading’ democracies in the world and should subsequently be expanded. In addition to the connection between Anglo-Saxonism and Atlanticism which is investigated in Streit's writings, his network and prominence are analyzed, as are the resolutions Streit's supporters introduced into the U.S. Congress and Streit's stance on imperialism.
Fundamental changes in business-to-business (B2B) buying behavior confront B2B supplier firms with unprecedented challenges. On the one hand, a rising share of industrial buyers demands digitalized offerings and processes from suppliers. Consequently, suppliers are urged to implement digital transformations by expanding the range of both digital offerings and processes. On the other hand, B2B buyers increasingly expect suppliers to provide individually tailored solutions to their idiosyncratic needs. Hence, suppliers are also required to implement non-digital transformations by providing offerings and processes that are customized to each customer’s specific requirements.
The rise of these digital and non-digital transformations calls established knowledge into question. Thus, B2B marketing research and practice are urged to create a comprehensive understanding of digital and non-digital transformations by means of novel and empirically grounded insights and to derive actionable response strategies. In response, my dissertation addresses the overall research question of how B2B supplier firms can successfully implement both digital and non-digital transformations in three individual essays.
In Essay 1, I offer a broader perspective on both digital and non-digital transformations by investigating digital service customization (i.e., the tailoring of digital B2B services to customers’ individual needs). Through a systematic literature review and bibliometric analysis, I outline a comprehensive set of factors that favor the application of distinct digital service customization strategies. Essay 2 represents a deep dive into digital transformations of sales processes. By making use of two rich sets of qualitative interview material from supplier and buyer firms, I identify the challenges resulting for B2B salespeople from the introduction of digital sales channels into personal selling. Moreover, I uncover facilitating mechanisms that sales managers can employ to support salespeople in coping with digital sales channels. Finally, Essay 3 constitutes a deep dive into non-digital transformations. Based on qualitative interview material and survey data from matched sales manager–salesperson dyads, the essay explores how configurations of individual salespeople’s personal and procedural competencies facilitate success at selling customer solutions (i.e., highly customized, performance-oriented offerings comprising products and/or services). The essay shows that successfully selling customized offerings like solutions hinges on salespeople’s unique configurations of present and absent competencies.
In a nutshell, these essays provide three major insights on how B2B suppliers can successfully implement digital and non-digital transformations. First, they underscore that a comprehensive understanding of the origins and spillover effects of transformations is a key prerequisite to successfully implementing them. Second, they unveil that digital and non-digital transformations have an impact on multiple organizational levels. Third, they point out important resources and capabilities that help suppliers to successfully implement transformations, be they digital or non-digital.
With this dissertation, I make substantial contributions to the broader literature on digital and non-digital transformations in B2B contexts. At the same time, my dissertation provides hands-on implications for managers in B2B supplier firms that are facing fundamental transformations in the marketplace—both digital and non-digital in nature.
The concept of programmable networks is radically changing the way communication infrastructures are designed, integrated, and operated. Currently, the topic is spearheaded by concepts such as software-defined networking, forwarding and control element separation, and network function virtualization. Notably, software-defined networking has attracted significant attention in telecommunication and data centers and is thus already deployed in some production-grade networks.
Despite the prevalence of software-defined networking in these domains, industrial networks have yet to see its benefits to an extent that encourages adoption. Misconceptions around the concept itself, the role of virtualization, and the underlying algorithms pose a significant obstacle.
Furthermore, the desire to accommodate new services in the automation industry results in a pattern of constantly increasing complexity of industrial networks. This is compounded by the requirement to provide stringent deterministic service guarantees for characteristically different applications, which poses a significant challenge for management, configuration, and maintenance, as existing solutions are architecturally inflexible.
Therefore, the first contribution of this thesis addresses the misconceptions around software-defined networking by providing a comparative analysis of programmable network concepts, detailing how software-defined networking compares with other concepts and how its principles can be leveraged to evolve industrial networks.
Armed with the fundamental principles of programmable networks, the second contribution identifies virtualization technologies and proposes novel algorithms to provide varied quality of service guarantees on converged time-sensitive Ethernet networks using software-defined networking concepts.
Finally, a performance analysis of a software-defined hybrid deployment solution for the control and management of time-sensitive Ethernet networks, integrating the proposed novel algorithms, is presented as an industrial use case that enables industrial operators to harness the full potential of time-sensitive networks.
IoT is defined as a paradigm where "things" have sensing, actuating, communicating, and self-configuring abilities, and are connected to each other and to the Internet. Recent advancements in the manufacturing industry have helped to produce embedded devices with various sensors and actuators in large numbers at reduced cost. As part of the IoT revolution, everyday devices such as televisions, refrigerators, cars, and even industrial machines are now connected IoT devices. Recent studies have predicted that by 2025 there will be over 75 billion such IoT devices connected to the Internet.
The providers of IoT based services want to integrate their services to satisfy customer requirements. For example, in the mobility scenario, different mobility solution providers want to offer a multi-modal ticket to their customers jointly. In such a distributed and loosely coupled environment, each owner and stakeholder wants to secure his/her own integrity, confidentiality, and functionality goals. This means that distributed rules and conditions defined by the individual owners must be enforced on the participating entities (e.g., customers or partners using their services). The owners and stakeholders may not necessarily trust each other's actions. Therefore, a mechanism is required that guarantees the rules and conditions specified by the different owners.
Attacks on IoT devices and similar computing systems are increasing and getting more advanced. IoT devices are often constrained, i.e., they have limited processing power, memory, and energy. Security mechanisms designed for traditional computing systems, e.g., computers, servers, or mobile computing devices such as smartphones, may not fit those constrained IoT devices. Weak security mechanisms and unenforced security measures were among the main reasons for recent successful attacks on IoT devices and services. As IoT is now used in many sensitive places, including critical infrastructures, securing them becomes more critical than ever. This thesis focuses on developing mechanisms that secure IoT devices and services and enforce the rules and conditions specified by the owners on entities that want to access the owners' resources.
In classical computer systems, security automata are used for specifying security policies, and monitoring mechanisms are used for enforcing such policies. For instance, a reference monitor observes and stops the execution when the security policies are about to be violated; thus, the security policies are enforced. To restrict an adversary from using protected IoT devices or services for malicious purposes, it must be ensured that a workflow is followed to access the protected resource. In distributed IoT systems where the policies are governed by different owners, each owner would like to specify their rules and conditions in their workflows. The workflows contain tasks that must be performed in a particular order. The goal of this thesis is to develop mechanisms to specify and enforce these workflows in the distributed IoT environment.
This thesis introduces a distributed WFAC framework that restricts the entities to do only what they are allowed to do in a collaborative environment. To gain access to a service protected by the WFAC framework, every workflow participant must prove that he/she is in a particular state of an authorized workflow. Authorized means two things: (a) the owner has authorized the workflow to be executed; (b) the workflow participant is authorized to execute it. This restricts the adversary's access to the devices and their services. The security policies defined by different owners are modeled as workflows and specified using Petri Nets. The policies are then enforced with the help of the WFAC framework, which supports error handling, accountability, integration of practitioner-friendly tools, and interoperability with existing security mechanisms such as OAuth. Thus, the WFAC framework guarantees the integrity of workflows in a distributed environment.
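A minimal sketch of the underlying idea, granting an action only when the current Petri-net-style marking enables the corresponding task, is given below. The example workflow, its places, and its tasks are illustrative assumptions and do not reproduce the WFAC framework itself.

    # Places hold tokens; a task (transition) may only fire if all of its input
    # places carry a token. Access to a protected resource is granted only when
    # the task guarding it is enabled in the current marking.
    workflow = {
        # task: (input places, output places)
        "book_ticket":   (["start"],         ["ticket_booked"]),
        "pay":           (["ticket_booked"], ["paid"]),
        "board_vehicle": (["paid"],          ["done"]),
    }

    def enabled(marking, task):
        inputs, _ = workflow[task]
        return all(marking.get(p, 0) >= 1 for p in inputs)

    def fire(marking, task):
        if not enabled(marking, task):
            raise PermissionError(f"task '{task}' not allowed in current workflow state")
        inputs, outputs = workflow[task]
        for p in inputs:
            marking[p] -= 1
        for p in outputs:
            marking[p] = marking.get(p, 0) + 1
        return marking

    marking = {"start": 1}
    fire(marking, "book_ticket")
    fire(marking, "pay")
    # "board_vehicle" is now enabled; firing it before "pay" would have raised
    # a PermissionError, i.e. the access attempt would have been rejected.
    print(marking)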
Hazardous materials (hazmat) have become important goods for satisfying industrial and customer demand in our modern society. The transportation of these materials is always associated with safety, security, and environmental concerns due to the dangerous nature of the cargo. To improve the safety of the transportation process, hazmat transportation problems have become a popular research topic in the field of operations research. This thesis contributes to the ongoing research on the hazmat transportation problem. It provides an extensive overview of the existing literature on the hazardous materials transportation problem and offers a new classification extending the existing ones. With particular focus on the hazardous materials vehicle routing problem (HMVRP), this thesis compares different risk models and analyses their influence on the problem outcomes. Additionally, heuristic and meta-heuristic solution procedures are proposed for handling the NP-hard nature of the problem.
For this purpose, four different studies are conducted. Study 1 presents a state-of-the-art literature review including over 300 contributions to the hazmat transportation problem. The historical development of the research field is analyzed and the most important journals are identified. A detailed classification focusing on hazmat transportation on public roads is provided. Furthermore, the study identifies research gaps and presents new research opportunities. Studies 2 and 3 investigate the effects of path generation in a realistic urban network on the outcomes of the HMVRP. Additionally, different risk models for the HMVRP are compared and their influence on the problem solutions is analyzed. Study 2 proposes a simple but effective heuristic algorithm to solve the HMVRP with load-independent risk models. Study 3 extends the focus and includes load-dependent risk models. The influence of six different risk models on the solution outcomes of the HMVRP is compared, and the tradeoff between risk minimization and the minimization of traveled distance is investigated. For this purpose, more than 1,700 problem instances are solved to optimality using CPLEX. In Study 4, a hybrid genetic algorithm (HGA) for solving the HMVRP with a load-dependent risk model is proposed. The HGA aims to find Pareto-optimal solutions for the bi-objective HMVRP when risk and travel distance are addressed simultaneously. The structure of the HGA is explained and experimental findings are presented.
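For the bi-objective view taken in Study 4, the small sketch below reduces candidate solutions to the non-dominated (risk, distance) pairs, i.e. an empirical Pareto front. The candidate values are made up for illustration, and the filter is a generic helper rather than part of the proposed hybrid genetic algorithm.

    def pareto_front(solutions):
        """Return the non-dominated (total risk, total distance) pairs."""
        front = []
        for s in solutions:
            dominated = any(
                o != s and o[0] <= s[0] and o[1] <= s[1]
                for o in solutions
            )
            if not dominated:
                front.append(s)
        return front

    # (total risk, total distance) of candidate HMVRP solutions, illustrative numbers.
    candidates = [(12.0, 480.0), (9.5, 520.0), (9.5, 610.0), (15.0, 470.0), (11.0, 495.0)]
    print(sorted(pareto_front(candidates)))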
In conclusion, this thesis contributes to an improved understanding of the general development of the research field of hazmat logistics and the influence of different risk models on the solution outcomes of the HMVRP. Additionally, heuristic solution methods are proposed and tested for finding compromise solutions when the bi-objective case of risk and distance minimization is addressed. Furthermore, this thesis eases new researchers' access to the field of hazmat logistics, as it provides a structured overview of the research field while pointing out research gaps. To address some of the identified research gaps, the thesis provides an extensive analysis of the risk modelling approaches. Thereby, it provides new insights into the basic research on risk modelling for the HMVRP. Finally, to overcome the long computation times of large problem instances, heuristic solution approaches are proposed.
The recording of Internet activity, combined with the linking of personal data, has become a key resource for many paid and free services on the Web. These services are, on the one hand, web applications such as the maps/navigation or web search provided by Google, which are used free of charge every day. On the other hand, they are all the websites that, mostly free of charge, provide news or general information on various topics. By visiting and using these web services, all information processed within the web service is passed on to the service provider. This includes not only the profile data stored in the user account of the web service, such as name or address, but also the activity within the web service, such as clicking on links or the time spent on a page.
Beyond that, there are countless third parties, mostly embedded in web services in the background, which record and analyse user behaviour across the entire web activity, spanning multiple websites. Various techniques, usually hidden from the user, are employed to closely track users' online behaviour and to collect a great deal of sensitive data. This practice is referred to as web tracking and is mainly used by advertising companies. The collected data are often personal and a valuable resource for companies, for example to serve personalised advertising matched to the user profile. The use of this personal data, however, also has more far-reaching consequences, reflected, among other things, in price adjustments for users with particular profile attributes, such as the use of expensive devices. The goal of this thesis is to increase users' privacy on the Internet and to significantly reduce the tracking of users through web tracking. Four challenges arise, each forming a research focus of this thesis: (1) systematic analysis and classification of the tracking techniques in use, (2) examination of existing protection mechanisms and their weaknesses, (3) design of a reference architecture for protection against web tracking, and (4) design of an automated test environment under real-world conditions to examine the reduction of web tracking achieved by the developed protection measures. Each of these research foci contributes new results towards the overarching goal: the development of protection measures against the disclosure of sensitive user data on the Internet. The first scientific contribution of this dissertation is a comprehensive evaluation of the web tracking techniques and methods in use, as well as their dangers, risks, and implications for the privacy of Internet users. The evaluation additionally includes an examination of existing tracking protection mechanisms and their weaknesses. The insights gained are essential for the new approaches developed in this thesis and improve the hitherto insufficient protection against web tracking. The second scientific contribution is the development of a robust classification of web tracking, the design of an efficient architecture for long-term studies of web tracking, and an interactive visualisation of the occurrence of web tracking on the Internet. The new classification approach for identifying tracking is based on measuring the entropy of the information content of cookies. The results of the long-term web tracking studies include 1,209 identified tracking domains on the most visited websites in Germany. Within the top 25 websites, an average of 45 tracking elements per website was found. The tracker with the highest potential for creating a user profile was doubleclick.com, as it monitors 90% of the websites. The analysis of the examined tracking network further provided detailed insight into the tracking technique using redirect links. For this, we analysed 1.2 million HTTP traces from months-long crawls of the 50,000 internationally most visited websites. The results show that 11.6% of these websites use HTTP redirects, hidden in website links, for tracking.
This is used to redirect the user's browsing path after a click through a chain of (tracking) servers, which are usually not visible, before the intended link target is loaded. In this scenario, the tracker captures valuable connection metadata about the content, topic, or user interests of the website. We provide the visualisation of the tracking ecosystem in an interactive open-source web tool. The third scientific contribution of this dissertation is the design of two novel protection mechanisms against web tracking and the construction of an automated simulation environment under real-world conditions to verify the effectiveness of the implementations. The focus lies on the two most widely used tracking techniques: cookies (where a unique ID is stored on the user's device) and browser fingerprinting. The latter describes a method of collecting a multitude of device properties in order to uniquely (re-)identify the user without storing a unique ID on the device. To examine the effectiveness of the protection mechanisms against web tracking developed in this thesis, we implemented and evaluated the protection concepts directly in the Chromium browser. The result shows a successful reduction of web tracking by 44%. In addition, the “Site Isolation” concept developed in this thesis improves the privacy of the private browsing mode, enables setting a manual storage time limit for cookies, and protects the browser against various threats such as CSRF (Cross-Site Request Forgery) or CORS (Cross-Origin Resource Sharing) abuse. Site Isolation stores the state of the local website in separate containers and can thereby prevent various tracking methods such as cookies, localStorage, or redirect tracking. In the evaluation of 1.6 million websites, we showed that the tracker doubleclick.com has the highest potential to track the user and is present on 25% of the 40,000 internationally most visited websites. Finally, we demonstrate a robust browser fingerprinting protection in our extended Chromium browser. Testing our prototype with 70,000 browser sessions shows that our browser protects the user against browser fingerprinting tracking. Compared to five other browser fingerprinting tools, our prototype achieved the best results and is the first protection mechanism against Flash and Canvas fingerprinting.
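The entropy-based cookie classification mentioned above can be illustrated with a small sketch that flags long, high-entropy cookie values as likely tracking identifiers; the length and entropy thresholds used here are illustrative assumptions, not the calibrated values of the dissertation.

    import math
    from collections import Counter

    def shannon_entropy(value: str) -> float:
        """Shannon entropy (bits per character) of a cookie value."""
        counts = Counter(value)
        n = len(value)
        return -sum(c / n * math.log2(c / n) for c in counts.values()) if n else 0.0

    def looks_like_tracking_cookie(value: str, min_len=16, min_entropy=3.5) -> bool:
        """Flag values that are long and high-entropy, as unique IDs tend to be."""
        return len(value) >= min_len and shannon_entropy(value) >= min_entropy

    print(looks_like_tracking_cookie("de"))                            # short, low entropy -> False
    print(looks_like_tracking_cookie("aK9rP2xQ7vLz0TnUwY4eHs8bJdQf"))  # ID-like value -> True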
With the frequency and impact of data breaches rising, it has become essential for organizations to automate intrusion detection via machine learning solutions. This generally comes with numerous challenges, among others high class imbalance, changing target concepts, and difficulties in conducting sound evaluation. In this thesis, we adopt a user-centered anomaly detection perspective to address selected challenges of intrusion detection, through a real-world use case in the identity and access management (IAM) domain. In addition to the previous challenges, salient properties of this particular problem are the high relevance of categorical data, limited feature availability, and the total absence of ground truth.
First, we ask how to apply anomaly detection to IAM audit logs containing a restricted set of mixed (i.e. numeric and categorical) attributes. Then, we inquire how anomalous user behavior can be separated from normality, and how this separation can be evaluated without ground truth. Finally, we examine how the lack of audit data can be alleviated in two complementary settings. On the one hand, we ask how to cope with users without relevant activity history ("cold start" problem). On the other hand, we explore how to extend audit data collection with heterogeneous attributes (i.e. categorical, graph and text) to improve insider threat detection.
After aggregating IAM audit data into sessions, we introduce and compare general anomaly detection methods for mixed data with a user identification approach, designed to learn the distinction between normal and malicious user behavior. We find that user identification outperforms general anomaly detection and is effective against masquerades. An additional clustering step makes it possible to reduce false positives among similar users. However, user identification is not effective against insider threats. Furthermore, results suggest that the current scope of our audit data collection should be extended.
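The user identification idea can be sketched as follows: a classifier learns to predict which user generated a session from its features, and a session whose claimed user receives low predicted probability is flagged as a potential masquerade. The toy action-count features, user names, and Poisson-generated data are illustrative assumptions, not the IAM audit features used in this work.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)

    def sessions(action_rates, n=200):
        # Toy session features: counts of a few IAM action types per session.
        return rng.poisson(action_rates, size=(n, len(action_rates)))

    X = np.vstack([sessions([5, 1, 0, 2]), sessions([1, 6, 3, 0]), sessions([0, 2, 7, 4])])
    y = np.repeat(["alice", "bob", "carol"], 200)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    def masquerade_score(session_features, claimed_user):
        """Low probability for the claimed user suggests someone else is acting
        under that account; a threshold turns the score into an alert."""
        proba = clf.predict_proba([session_features])[0]
        return 1.0 - proba[list(clf.classes_).index(claimed_user)]

    print(masquerade_score([5, 1, 0, 2], "alice"))  # small score: looks like alice
    print(masquerade_score([0, 2, 7, 4], "alice"))  # large score: behaves like carol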
In order to tackle the "cold start" problem, we adopt a zero-shot learning approach. Focusing on the CERT insider threat use case, we extend an intrusion detection system by integrating user relations to organizational entities (like assignments to projects or teams) in order to better estimate user behavior and improve intrusion detection performance. Results show that this approach is effective in two realistic scenarios.
Finally, to support additional sources of audit data for insider threat detection, we propose a method representing audit events as graph edges with heterogeneous attributes. By performing detection at a fine-grained level, this approach improves anomaly traceability while reducing the need for aggregation and feature engineering. Our results show that this method is effective in finding intrusions in authentication and email logs.
Overall, our work suggests that masquerades and insider threats call for different detection methods. For masquerades, user identification is a promising approach. To find malicious insiders, graph features representing user context and relations to other entities can be informative. This opens the door for tighter coupling of intrusion detection with user identities, roles and privileges used in IAM solutions.
New arising phenomena in the occupational realm strongly shape contemporary work settings. These developments heavily affect how individuals work within and beyond organizational boundaries. Two phenomena associated with the changing nature of work have been especially prevalent in work settings and intensively discussed in public debates. First, organizations started to introduce mindfulness practices to their workforce. Rooted in spirituality and formerly used in clinical therapy, mindfulness is applied as a human resource development practice to train employees and managers to cope with increased work intensification. Second, digitization and the importance of individualization opened up the path for work settings beyond organizational boundaries on crowdworking online platforms. On these online platforms, workers process tasks independently and remotely. Research has only just started to address the implications and meaning of mindfulness practices in organizations and the rise of crowdworking platforms, and several questions remain unanswered. This dissertation addresses unanswered but pressing questions related to these two phenomena shaping contemporary work settings. It is structured in four essays: the first two address the application and meaning of mindfulness practices. The first essay analyzes the meaning and interpretations of these new practices within organizations. The second essay takes contextual factors of the organizational environment into account and investigates their relevance for the successful implementation of mindfulness practices. The remaining two essays are dedicated to work attitudes and behavior on crowdworking online platforms. Essay three captures individuals' motivation for working on such platforms and its effects on workers' work performance. The last essay deals with the role of professional crowdworking online communities in the work experience and assesses the effects of social support in these communities on occupational identification, work meaningfulness, and finally on work engagement. Each essay in this dissertation generates new insights on arising phenomena in contemporary work settings. They address several timely yet unanswered research questions and thereby offer a deeper and more nuanced understanding of the role mindfulness practices and crowdworking online platforms play in the context of the future of work.
The current movement towards a smart grid serves as a solution to present power grid challenges by introducing numerous monitoring and communication technologies. A dependable yet timely exchange of data is, on the one hand, an essential prerequisite for enabling Advanced Metering Infrastructure (AMI) services, but on the other hand a challenging endeavor, because the increasing complexity of the grid, fostered by the combination of Information and Communications Technology (ICT) and utility networks, inherently leads to dependability challenges.
To counter this dependability degradation, current approaches based on high-reliability hardware or physical redundancy are no longer feasible, as they lead to increased hardware costs or maintenance, if not both. The flexibility of these approaches regarding vendor and regulatory interoperability is also limited. However, a suitable solution to the AMI dependability challenges is also required to maintain certain regulatory-set performance and Quality of Service (QoS) levels.
While part of the challenge is the introduction of ICT into the power grid, it also serves as part of the solution. In this thesis, a Network Functions Virtualization (NFV) based approach is proposed, which employs virtualized ICT components serving as a replacement for physical devices. By using virtualization techniques, it is possible to enhance the performability compared to hardware-based solutions, through the usage of virtual replacements for processes that would otherwise require dedicated hardware. This approach offers higher flexibility than hardware redundancy, as a broad variety of virtual components can be spawned, adapted, and replaced in a short time. Also, as no additional hardware is necessary, the incurred costs decrease significantly. In addition, most of the virtualized components are deployed on Commercial-Off-The-Shelf (COTS) hardware solutions, further increasing the monetary benefit.
The approach is developed by first reviewing currently suggested solutions for AMIs and related services. Using this information, virtualization technologies are investigated for their performance influences, before a virtualized service infrastructure is devised, which replaces selected components by virtualized counterparts. Next, a novel model, which allows the separation of services and hosting substrates, is developed, allowing the introduction of virtualization technologies to abstract from the underlying architecture. Third, the performability as well as the monetary savings are investigated by evaluating the developed approach in several scenarios using analytical and simulative model analysis as well as proof-of-concept approaches. Last, the practical applicability and possible regulatory challenges of the approach are identified and discussed.
Results confirm that—under certain assumptions—the developed virtualized AMI is superior to the currently suggested architecture. The availability of services can be severely increased and network delays can be minimized through centralized hosting. The availability can be increased from 96.82% to 98.66% in the given scenarios, while decreasing the costs by over 60% in comparison to the currently suggested AMI architecture. Lastly, the performability analysis of a virtualized service prototype employing performance analysis and a Musa-Okumoto approach reveals that the AMI requirements are fulfilled.
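As a back-of-the-envelope illustration of why virtual redundancy raises availability, the sketch below applies the textbook parallel-redundancy formula to the baseline availability quoted above. It assumes independent failures and ignores orchestration overhead, so it does not reproduce the more detailed analytical and simulative models used in the evaluation.

    def availability_parallel(single_availability: float, replicas: int) -> float:
        """Steady-state availability of a service on n independent replicas:
        the service is down only if all replicas are down at the same time."""
        return 1.0 - (1.0 - single_availability) ** replicas

    for n in (1, 2, 3):
        print(n, round(availability_parallel(0.9682, n), 4))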
Computer vision aims at developing algorithms to extract high-level information from images and videos. In the industry, for instance, such algorithms are applied to guide manufacturing robots, to visually monitor plants, or to assist human operators in recognizing specific components. Recent progress in computer vision has been dominated by deep artificial neural networks, i.e., machine learning methods simulating the way that information flows in our biological brains, and the way that our neural networks adapt and learn from experience. For these methods to learn how to accurately perform complex visual tasks, large amounts of annotated images are needed. Collecting and labeling such domain-relevant training datasets is, however, a tedious—sometimes impossible—task. Therefore, it has become common practice to leverage pre-available three-dimensional (3D) models instead, to generate synthetic images for the recognition algorithms to be trained on. However, methods optimized over synthetic data usually suffer a significant performance drop when applied to real target images. This is due to the realism gap, i.e., the discrepancies between synthetic and real images (in terms of noise, clutter, etc.). In my work, three main directions were explored to bridge this gap.
First, an innovative end-to-end framework is proposed to render realistic depth images from 3D models, as a growing number of solutions (especially in the industry) are utilizing low-cost depth cameras (e.g., Microsoft Kinect and Intel RealSense) for recognition tasks. Based on a thorough study of these devices and the different types of noise impairing them, the proposed framework simulates their inner mechanisms, comprehensively modeling vital factors such as sensor noise, material reflectance, surface geometry, etc. Able to simulate a wide panel of depth sensors and to quickly generate large datasets, this framework is used to train algorithms for various recognition tasks, consistently and significantly enhancing their performance compared to other state-of-the-art simulation tools.
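A highly simplified flavour of depth-sensor simulation is sketched below: axial noise that grows with distance and random dropout of invalid pixels are applied to a clean synthetic depth map. The noise coefficients and dropout rate are illustrative assumptions and only hint at the comprehensive sensor models of the proposed framework.

    import numpy as np

    def add_depth_noise(depth_m, axial_a=0.0012, axial_b=0.0019, dropout_p=0.02, rng=None):
        """Apply a simplified noise model to a clean synthetic depth map (metres).

        Axial noise grows roughly quadratically with distance, a common empirical
        observation for low-cost depth sensors; a small fraction of pixels is set
        to 0 to imitate invalid measurements at edges or on dark surfaces.
        """
        if rng is None:
            rng = np.random.default_rng()
        sigma = axial_a + axial_b * depth_m ** 2      # per-pixel noise std in metres
        noisy = depth_m + rng.normal(0.0, sigma)
        invalid = rng.random(depth_m.shape) < dropout_p
        noisy[invalid] = 0.0                          # 0 marks missing depth
        return noisy

    clean = np.full((4, 4), 2.0)  # a flat synthetic surface two metres away
    print(add_depth_noise(clean, rng=np.random.default_rng(0)))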
In some cases, however, relevant 2D or 3D object representations to generate synthetic samples are not available. Considering this different case of data scarcity, a solution is then proposed to incrementally build a representation of visual scenes from partial observations. Provided observations are localized from one to another based on their content and registered in a global memory with spatial properties. Simultaneously, this memory can be queried to render novel views of the scene. Furthermore, unobserved regions can be hallucinated in memory, in consistence with previous observations, hallucinations, and global priors. The efficacy of the proposed mnemonic and generative system, trainable end-to-end, is demonstrated on various 2D and 3D use-cases.
Finally, an advanced convolutional neural network pipeline is introduced, tackling the realism gap from a novel angle. While most methods addressing this problem focus on bringing synthetic samples—or the knowledge acquired from them—closer to the real target domain, the proposed solution performs the opposite process, mapping unseen target images into controlled synthetic domains. The pre-processed samples can then be handed to downstream recognition methods, themselves purely trained on similar synthetic data, to greatly improve their accuracy.
For each approach, a variety of qualitative and quantitative studies are detailed, providing successful comparisons to state-of-the-art methods. By proposing solutions to bridge the realism gap from either side, as well as a pipeline to improve the acquisition and generation of new visual content, this thesis provides a unique perspective on the challenges of data scarcity when building robust recognition systems.
A plethora of resources made available via retrieval systems in digital libraries remains untapped in the so-called long tail of the Web. These long-tail websites get considerably fewer visits than major Web hubs.
Zero-effort queries ease the discovery of long-tail resources by proactively retrieving and presenting information based on a user’s context. However, zero-effort queries over existing digital library structures are challenging, since the underlying retrieval system is only accessible via an API. The information need must be expressed by a query, instead of optimizing the ranking between context and resources in the retrieval system directly. We address three research questions that arise from replacing the user information seeking process by zero-effort queries.
Our first question addresses the transformation of a user query to an automatic query, derived from the context. We present means to 1) identify the relevant context on different levels of granularity, 2) derive an information need from the context via keyword extraction and personalization and 3) express this information need in a query scheme that avoids over- or under-specified queries. We address the cold start problem with an approach to bootstrap user profiles from social media, even for passive users.
With the second question, we address the presentation of resources in zero-effort query scenarios, presenting guidelines for presentation interfaces in the browser and a visualization of the triadic relationship between context, query and results. QueryCrumbs, a compact query history visualization, supports recalling information found in the past and supports exploratory search by visualizing qualitative and quantitative query similarity.
Our last question addresses the gap between (simple) keyword queries and the representation of resources by rich and complex meta-data. We investigate and extend feature representation learning techniques centered around the skip-gram model with negative sampling. Finally, we present an approach to learn representations from network and text jointly that can cope with the partial absence of one modality.
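For reference, the sketch below shows one stochastic update of the skip-gram model with negative sampling on which the investigated representation learning techniques build; vocabulary size, embedding dimension, learning rate, and the example pair are illustrative assumptions, and the joint network-and-text extension is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    vocab_size, dim = 1000, 64
    W_in = rng.normal(0, 0.1, (vocab_size, dim))    # "input" (target) embeddings
    W_out = rng.normal(0, 0.1, (vocab_size, dim))   # "output" (context) embeddings

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sgns_step(center, context, num_negative=5, lr=0.025):
        """One update of skip-gram with negative sampling.

        The observed (center, context) pair is pulled together, while
        num_negative randomly drawn items are pushed away from the center.
        """
        negatives = rng.integers(0, vocab_size, size=num_negative)
        targets = np.concatenate(([context], negatives))
        labels = np.concatenate(([1.0], np.zeros(num_negative)))

        v = W_in[center]                  # (dim,)
        u = W_out[targets]                # (num_negative + 1, dim)
        scores = sigmoid(u @ v)           # predicted "true context" probabilities
        grad = (scores - labels)[:, None]

        W_in[center] -= lr * (grad * u).sum(axis=0)
        W_out[targets] -= lr * grad * v

    # Example: item 3 observed with context item 17 (e.g. from a walk over the graph).
    sgns_step(center=3, context=17)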
Experimental results show close to human performance of our zero-effort query and user profile generation approach and visualizations to be helpful in terms of transparency, efficiency and support for exploratory search. These results indicate that the proposed zero-effort query approach indeed eases the discovery of long-tail resources and the accompanying visualizations further facilitate this process. The joint representation model provides a first step to bridge the gap between query and resource representation and we plan to follow and investigate this route further in the future.
Whenever software faults can endanger human life, property, or the environment, the absence of faults must be ensured with utmost care and the best technologies available. Evidence is needed showing that all requirements are satisfied and that the risk of faults is reduced. One technique to conduct such a verification task—composed of the software to verify, the specification to check, and a model of the environment—is software model checking.
To conduct a verification task with a model checker, different models of the task are constructed. We distinguish between two types of task models: syntactic task models and semantic task models, which define the respective syntactic structure (control flow) and semantic structure (state transitions, invariants) of the verification task. When constructing such models, we can observe that similar structures and substructures reappear within and among different verification tasks. For example, the same assertions to check can appear in different functions, or the same predicate can be part of different invariants to describe sets of program states. Similarities that appear during the model construction process can be the result of solving similar reasoning problems, often solved using computationally expensive procedures (as typical for model checking), over and over again. Not reusing results of solving similar problems, not having a means for conducting repeated efforts automatically, or not trying to reduce the number of similar reasoning efforts, is a waste of precious resources.
To address these problems, we present a common conceptual and technical foundation for sharing syntactic and semantic task artifacts for reuse, within and among verification runs. Both the syntactic construction of a verification task and the construction of its semantic model—which describes all possible behaviors and states—are covered. We study how commonalities and regularities in the task models can be taken into account to facilitate the process of sharing task artifacts for reuse, and to make the overall verification process more efficient and effective. We introduce abstract transducers as the theoretical foundation of this thesis: a type of finite-state transducers with an inherent notion of abstraction for states, the input alphabet, and the output alphabet. Abstracting these transducers allows us to widen both the set of input words for which they produce output and the sets of output words. Abstract transducers are instantiated as task artifact transducers to map from program structures to the task artifacts to share. We show that the notion of abstraction provides a means for increasing the scope for which task artifacts are shared for reuse. We present two instances of task artifact transducers: Yarn transducers and precision transducers. We use Yarn transducers to provide code to weave into the control-flow structure of a computer program, and present the Loom analysis as a means for orchestrating the weaving process. Precision transducers provide a means for sharing abstraction precisions for reuse, and thus aid in defining the level of abstraction of a semantic task model. For both types of transducers, we provide empirical evidence on their practical applicability, for example, to verify Linux kernel modules, and show that they can help in increasing verification performance.
Job Sequencing and Tool Switching Problems with a Generalisation to Non-Identical Parallel Machines
(2020)
Manufacturing tools have dominated the manufacturing process since the 1960s. The job sequencing and tool switching problem is an NP-hard combinatorial optimization problem that was first introduced in the context of flexible manufacturing systems in the late 1980s. Since then, production systems have undisputedly changed and improved, but manufacturing tools still dominate manufacturing processes. Production and system operation processes are continuously adjusted and optimised to meet changing customer requirements. If the product variety requires more tools for processing than the local tool magazine capacity of the manufacturing system can hold, tool switches become necessary. Although tool changing times within a manufacturing centre or cell may nowadays be very small due to the high degree of automation, tool switching within a dynamic production environment is still a time-consuming process that should be avoided. In order to minimize the total tool setup time and thus enhance productivity, the objectives of the basic job sequencing and tool switching problem are to sequence a set of jobs and simultaneously determine the best tool loading. Therefore, job sequencing and tool switching problems are gaining considerable attention.
Several solution approaches to the standard problem and to related versions of the problem exist. The first part of this dissertation assesses the current state of the art of the job sequencing and tool switching problem and provides a classification scheme for the literature on the problem and its variations. Only a few authors consider generalisations of the problem because the level of complexity of the extended problems is high. A general variant of the job sequencing and tool switching problem with non-identical parallel machines and sequence-dependent setup times is described in this dissertation. A novel mathematical model based on time periods is presented and analysed, which can be adapted to different objective functions. The last part of this dissertation is a quantitative evaluation of fast and effective construction heuristics as well as of an iterated local search algorithm, tested on a new set of benchmark instances. As such, this dissertation provides a broad basis for future evaluations of solution approaches to the job sequencing and tool switching problem with non-identical parallel machines and sequence-dependent setup times, as well as a basis for further generalisations of the problem, such as tool availability constraints or tool-size-dependent variations.
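For the basic single-machine version of the problem, a classical building block is the Keep Tool Needed Soonest (KTNS) policy, which yields a minimal number of switches once the job sequence is fixed. The following Python sketch illustrates the idea; the job data, magazine capacity, and switch-counting convention are made-up examples and are not taken from the dissertation's benchmark set.

```python
# KTNS sketch for the basic single-machine job sequencing and tool switching
# problem: for a fixed job sequence, when the magazine overflows, evict the
# loaded tool whose next use lies furthest in the future, and count insertions.
# Assumes no single job requires more tools than the magazine can hold.

def ktns_switches(sequence, tools_per_job, capacity):
    def next_use(tool, position):
        for later, job in enumerate(sequence[position:], start=position):
            if tool in tools_per_job[job]:
                return later
        return float("inf")  # tool is never needed again

    magazine, switches = set(), 0
    for pos, job in enumerate(sequence):
        needed = tools_per_job[job]
        missing = needed - magazine
        # Evict tools needed latest until the missing tools fit.
        while len(magazine | needed) > capacity:
            evictable = magazine - needed
            victim = max(evictable, key=lambda t: next_use(t, pos))
            magazine.remove(victim)
        switches += len(missing)  # initial loading is counted as switches here
        magazine |= missing
    return switches

# Hypothetical instance: 4 jobs, magazine capacity 3.
tools_per_job = {1: {1, 2}, 2: {2, 3, 4}, 3: {1, 4}, 4: {3, 5}}
print(ktns_switches([1, 2, 3, 4], tools_per_job, capacity=3))
```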
Research on flipped classroom instruction has substantially advanced in the past ten years. Flipped classroom refers to an instructional approach in which students study educational videos at home and do homework assignments in class. Since an increasing number of teachers want to adopt the flipped classroom approach in their practice, further research, particularly in the context of secondary education, is clearly required. The two studies presented in this thesis aimed at examining the effectiveness of flipped classroom instruction in secondary education by conducting a meta-analytic synthesis of prior studies and an intervention study with a methodologically new approach. Specifically, the studies investigated whether and under which conditions the flipped classroom approach has a positive impact on student achievement and which learners benefit most from a flipped or video-based classroom.
In the first study, meta-analytic methods were used to examine whether the flipped classroom approach, after controlling for sampling error, positively affects student achievement in secondary education. Effect sizes were calculated for the research designs pre-test-post-test (Time), post-test only (PostOnly), and pre-test-post-test with control group (Treatment). Moreover, the impact of four moderator variables as boundary conditions of flipped classroom effectiveness was estimated: disciplinary field, length of the intervention, use of a quiz, and use of a learning management system. The meta-analytical findings for the effect size Treatment confirmed the effectiveness of flipped classroom on student achievement in comparison to traditional instruction (Cohen's d = 0.42). Moderator analyses on the effect size Time showed stronger effects for subjects in the STEM area (science, technology, engineering, mathematics) than for foreign languages and the humanities. The effect sizes were also higher for shorter intervention studies than for longer ones and when the quiz at home had been left out. Moderator analyses on the effect sizes PostOnly and Treatment made clear that the effect sizes for intervention studies without a learning management system were higher than for those with one.
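For orientation, Cohen's d for a treatment-control comparison is a standardized mean difference; a common formulation (notation mine, not necessarily the exact estimator used in the meta-analysis) is:

```latex
d = \frac{\bar{x}_{\mathrm{flipped}} - \bar{x}_{\mathrm{traditional}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\, s_1^2 + (n_2 - 1)\, s_2^2}{n_1 + n_2 - 2}}
```

Read this way, d = 0.42 means that the flipped groups scored, on average, 0.42 pooled standard deviations higher than the traditionally taught groups.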
The second study aimed to compare flipped classroom with other forms of video-based instruction and determine which types of students benefit most from video-based instruction. Thirty-eight school classes with 848 ninth-grade students took part in a quasi-experimental pre-post-test intervention study over the course of four weeks. Two independent variables were completely crossed resulting in four experimental conditions: video (at home vs. in class) and instructional method (student-centred vs. teacher-centred). Multilevel analyses revealed that all four experimental conditions were equally effective in promoting students’ learning gains. At-risk, average and excellent students profited least from video-based instruction. Confident and independent students had the highest learning gains from pre- to post-test. The study constitutes a first step towards a comprehensive evaluation of flipped classroom by using a better-controlled research design and may contribute to a more objective discussion about the positive effects of flipped classroom.
Abstract concepts and ideas from Computer Science Education can benefit from immersive visualizations that can be provided in virtual environments. This thesis explores the effects of the key characteristics of virtual environments, immersion and presence, on learning outcomes in Educational Virtual Environments for learning Computer Science.
Immersion is a quantifiable description of the technology to immerse the user into the virtual environment; presence describes the subjective feeling of 'being there'. While technological immersion can be seen as a strong predictor for presence, motivational traits, cognition, and the emotional state of the user also influence presence. A possible localization of these technological and person-specific variables in Helmke's pedagogical supply-use framework is introduced as the Educational Framework for Immersive Learning (EFiL). Presence is emphasized as a central criterion influencing immersive learning processes. The EFiL provides an educational understanding of immersive learning as learning activities initiated by a mediated or medially enriched environment that evokes a sense of presence.
The idea of Computer Science Unplugged is pursued by using Virtual Reality technology in order to provide interactive virtual learning experiences that can be accurately displayed, schematizing, substantiating, or metaphorical. For exploring the effects of virtual environment characteristics on learning, the idea of Computer Science Replugged focuses on 'hands-on' activities and combines them with immersive technology. By providing a perception of non-mediation, Computer Science Replugged might enable experiences that contribute additional possibilities to the real activity or enable new activities for teaching Computer Science.
Three game-based Educational Virtual Environments were developed as treatments: 'Bill's Computer Workshop' introduces the components of a computer; 'Fluxi's Cryptic Potions' uses a metaphor to teach asymmetric encryption; 'Pengu's Treasure Hunt' is an immersive visualization of finite state machines. A first study with 23 middle school students was conducted to test the instruments in terms of selectivity, the devices' induced levels of presence, and the adequacy of the selected learning objectives. The second study with 78 middle school students playing the environments on different devices (laptop, Mobile Virtual Reality, or head-mounted display) assessed motivational, cognitive, and emotional factors, as well as presence and learning outcomes.
An overall analysis showed that pre-test performance, presence, and the previous scholastic performance in Maths and German predict the learning outcomes in the virtual environments. Presence could be predicted by the student's positive emotions and by the technological immersion. The level of immersion had no significant effect on learning outcomes. While a good-fitting path analysis model indicated that the assumed relations deriving from the EFiL are largely correct for 'Bill's Computer Workshop' and 'Fluxi's Cryptic Potions', not all results of the overall path analysis were significant for the analyses of the particular environments.
Presence seems to have a small effect on learning outcomes while being influenced by technological and emotional factors. Even though the level of immersion can be used to predict the level of presence, it is not an appropriate predictor of learning outcomes. For future studies, the questionnaires have to be revised, as some of them suffered from poor scale reliabilities. While the second study could provide indications that the localization of presence and immersion in an existing educational supply-use framework is appropriate, many factors had to be left out of consideration.
The thesis contributes to existing research as it adds factors that are crucial for learning processes to the discussion on immersive learning from an educational perspective and assesses these factors in hands-on activities in Educational Virtual Environments for Computer Science Education.
Innovate with Crowds. Co-Creation and Idea Evaluation in Internal and External Crowdsourcing.
(2020)
Crowdsourcing seems to be a promising approach for organizations to overcome challenges widely discussed in innovation and organizational research. However, the extent to which an organization can leverage the benefits of crowdsourcing is contingent on which type of crowd is addressed and how crowds are used. Based on unique data from crowdsourcing contests, the dissertation provides insights into how to innovate with internal and external crowds in order to utilize their potential for co-creation and idea evaluation.
Main memory forensics and its special form, virtual machine introspection (VMI), are powerful tools for digital forensics and can be used to improve the security of computer-based systems. However, their use in production systems is often not possible. This work identifies the causes and offers practical solutions to apply these techniques in cloud computing and on mobile devices to improve digital forensics and incident analysis.
Four key challenges must be tackled. The first challenge is that many existing solutions are not reproducible, for example because the corresponding software components are unavailable, obsolete, or incompatible. The use of these tools is also often complex, and incorrect use can crash the system to be monitored. To solve this problem, this thesis describes the design and implementation of Libvmtrace, a framework for the introspection of Linux-based virtual machines. The focus of the design is to implement frequently used methods in encapsulated modules so that they are easy for developers to use, optimize, and test.
The second challenge is that many production systems do not provide an interface for main memory forensics and virtual machine introspection. To address this problem, this thesis describes how such an interface can be implemented on mobile devices and in cloud environments that are designed to protect main memory from unprivileged access. We discuss how cold boot attacks, the ARM TrustZone, and the hypervisor of cloud servers can be used to acquire main memory contents.
The third challenge is how to reconstruct information from main memory efficiently. This thesis addresses this question with two practical examples. The first example involves extracting the keys of encrypted TLS connections from the main memory of applications in order to decrypt network traffic without affecting the performance of the monitored application. The TLSKex and DroidKex architectures describe two approaches to localize the keys efficiently in the main memory of applications with the help of semantic knowledge. The second example discusses how to monitor and document SSH sessions of potential attackers from outside of a virtual machine. It is important that the monitoring routines are not noticed by an attacker. To achieve this, we evaluate how to optimize the performance of the monitoring mechanism.
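The key-localization idea can be illustrated, in very reduced form, by scanning a memory snapshot for high-entropy byte regions of master-secret length (48 bytes for TLS 1.2) and treating them as candidate keys; the sketch below is mine and greatly simplifies the semantic, structure-guided search that TLSKex and DroidKex actually perform, and the dump file name is a placeholder.

```python
# Very reduced illustration (not the TLSKex/DroidKex implementation): scan a
# raw memory snapshot for 48-byte windows with high byte entropy, which are
# plausible candidates for a TLS 1.2 master secret. The real systems narrow
# the search using semantic knowledge about the TLS library's data structures.
import math
from collections import Counter

SECRET_LEN = 48  # length of a TLS 1.2 master secret in bytes

def entropy(window: bytes) -> float:
    counts = Counter(window)
    return -sum((c / len(window)) * math.log2(c / len(window)) for c in counts.values())

def candidate_offsets(snapshot: bytes, threshold: float = 5.0, step: int = 8):
    """Yield offsets of high-entropy SECRET_LEN windows in a memory snapshot."""
    for offset in range(0, len(snapshot) - SECRET_LEN + 1, step):
        if entropy(snapshot[offset:offset + SECRET_LEN]) >= threshold:
            yield offset

# Usage with a (hypothetical) dump of the target process's heap:
# with open("heap.dump", "rb") as f:
#     for off in candidate_offsets(f.read()):
#         print(hex(off))
```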
The fourth challenge is how to deal with the performance degradation caused by introspection in production systems. This thesis discusses how this can be achieved using the example of a SIEM system. To reduce the performance overhead, we describe how to configure the monitoring routine to collect only the information needed to detect incidents. In addition, we describe two approaches that permit the monitoring routine to be adjusted dynamically at runtime to extract more information if necessary, so that incidents can be analyzed in more detail.
Investment fraud, cybercrime, inconsistencies in health care, or the emission scams at car manufacturers: economic crime (fraud) manifests itself in many facets. For Germany, the cases of FlowTex, Comroad, HRE-Bad-Bank, Holzmann, Volkswagen and the current fraud suspicions at Porsche AG are prominent examples with mostly appalling consequences (Ballwieser and Dobler 2003; Kögler 2015; Meck, Nienhaus, and von Petersdorff 2011; Peemöller and Hofmann 2005). Indeed, newspapers without reports on fraud have become scarce. Headlines such as "Corruption - the daily business" hardly impress anyone anymore, not least because of their regularity. The cases revealed publicly are, however, only the tip of the iceberg, as reported by renowned experts (Bundeskriminalamt 2018; LKA 2018). Currently, the State Criminal Police Office (Landeskriminalamt, LKA) of Baden-Württemberg and its department for economic and environmental crime and corruption is concerned with 72 major proceedings (LKA 2018). However, fraud could be avoided, or at least contained, by appropriate preventive measures (Bundeskriminalamt 2018; Bussmann 2004; Hlavica, Klapproth, and Hülsberg 2011). Consequently, the pressure on companies and employees to demonstrate compliant and ethical behavior and to meet the demands of stakeholders at all times within their business activities has grown (Buff 2000). This raises the question of which precautionary measures a company can and must implement (Weick and Sutcliffe 2015). Although corporate awareness of this issue has increased, most in-house detection of fraud is accidental, suggesting that companies still lack appropriately functioning and systematic (early) detection mechanisms (Hlavica et al. 2011). If a company is accused of fraud, this usually has serious repercussions for its corporate reputation. Prior research found that capital market reputation-based penalties for affected companies are on average 7.5 times higher than penalties imposed by the legal system (Karpoff, Lee, and Martin 2008). Furthermore, the accusation of fraud also affects the external auditor's reputation, since failing to detect manipulations in clients' (financial) reports damages public confidence not only in the accuracy of firms' financial statements but also in the reliability of the auditor's report. Therefore, it is not surprising that the demand for greater supervision and control of firms' (financial) reporting, as well as for reliable work of statutory auditors, continually increases (Herkendell 2007). Although to a lesser extent, this also applies to the determination of material (accounting) errors within a firm's financial statements, which are often difficult to distinguish from accounting fraud. According to International Accounting Standard (IAS) 8.5, published by the International Accounting Standards Board (IASB), errors are omissions and/or misstatements of items that result from the failure to use, or the misuse of, reliable information (IASB 2003). Thus, accounting errors and accounting fraud both result in incorrect information in a firm's financial reports and consequently affect stakeholders' decision-making. One attempt to counteract the broad demand for appropriate protective measures was the implementation of a two-stage enforcement system involving the German Financial Reporting Enforcement Panel (Deutsche Prüfstelle für Rechnungslegung, DPR) as part of the Financial Reporting Enforcement Act (Bilanzkontrollgesetz, BilKoG) adopted in 2004.
The primary objective of the Federal Government's implementation of this mechanism was to restore investors' lost confidence in the German capital market, in the information content of financial reporting, and in Germany as a financial center in international competition. In addition, the enforcement system serves as a sanctioning instrument for firms in the event of an error detection and subsequent adverse error disclosure via the German federal registry (elektronischer Bundesanzeiger). This adverse error disclosure not only sanctions the denounced firms but also calls into question the quality of the annual financial statement audit and thus the quality of the responsible audit firm. Hence, the often thin line between firms' unintentional accounting errors, purposive engagement in earnings management, and intentional fraud in particular presents an increasing challenge for the audit profession.
The objective of my cumulative dissertation is to provide a comprehensive overview of fraud and forensic accounting as well as insights into the distinctions among the concepts of errors, earnings management, and fraud from a German accounting perspective. I aim to achieve this objective in three steps: First (1), by providing an overview of discipline-specific education possibilities, existing forensic accounting practices, institutions, and current developments in research. Second (2), by assessing auditors' obligations and responsibilities for the detection of irregularities within the scope of the annual financial statement audit and whether including forensic services in the service portfolio of audit firms can help increase their audit quality due to spillover effects. Third (3), by examining firms' reputation (re-)building management in response to financial violations and how this process is associated with managing multiple (stakeholder) reputations. This dissertation is composed of three individual papers, each of which considers one of the focus areas outlined above.
The amount of audio, video, and image data on the Web is growing immensely, which leads to data management problems owing to the hidden character of multimedia content. Therefore, the interlinking of semantic concepts and media data with the aim of bridging the gap between the Internet of documents and the Web of Data has become common practice. However, the value of connecting media to its semantic metadata is limited by a lack of access methods and the absence of a query language adapted and specialized for media assets and fragments. This thesis aims to extend the standard query language for the Semantic Web (SPARQL) with media-specific concepts and functions. The main contributions of the work are an exhaustive survey of multimedia query languages of the last three decades, the SPARQL extension specification itself, and an approach for the efficient evaluation of the new query concepts. Additionally, I elaborate and evaluate a metadata-based media fragment similarity approach, which provides a basis for further language extensions.
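To indicate the kind of query the extension targets, the following sketch runs a plain SPARQL query over media annotations with rdflib and the W3C Ontology for Media Resources; the annotation file and the exact modelling of fragments are illustrative assumptions, and the query deliberately uses only standard SPARQL rather than the media-specific operators proposed in the thesis.

```python
# Illustrative baseline only: a plain SPARQL query over media annotations using
# rdflib and the W3C Ontology for Media Resources (ma-ont). The thesis proposes
# media-specific SPARQL extensions that go beyond what is expressible here.
from rdflib import Graph

g = Graph()
g.parse("media_annotations.ttl", format="turtle")  # hypothetical annotation graph

query = """
PREFIX ma: <http://www.w3.org/ns/ma-ont#>
SELECT ?resource ?fragment ?title WHERE {
    ?resource a ma:MediaResource ;
              ma:hasFragment ?fragment ;
              ma:title ?title .
}
"""

for resource, fragment, title in g.query(query):
    print(resource, fragment, title)
```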