We consider a number of enhancements to the standard neural network training paradigm. First, we show that carefully designed parameter update rules may replace the need for a loss function and its gradient. We introduce a parameter update rule that generalises the standard cross-entropy gradient, and allows directly controlling the relative effect of easy and hard examples on the training process. We show that the proposed update rule cannot be derived by using a loss function and yields better classification accuracy compared to training with the standard cross-entropy loss.
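To illustrate what such an update rule might look like, here is a minimal numpy sketch of a linear softmax classifier whose per-example update generalises the standard cross-entropy gradient (p - y) with an exponent gamma that re-weights easy versus hard examples. The exponent and its placement are illustrative assumptions, not the exact rule proposed in the thesis.

import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)            # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def generalized_update(W, x, y_onehot, lr=0.1, gamma=1.0):
    """One parameter update for a linear softmax classifier.

    gamma = 1 recovers the standard cross-entropy gradient (p - y).
    gamma > 1 emphasises hard examples (low probability on the true class),
    gamma < 1 emphasises easy examples. The scaling is an illustrative
    choice; it need not be the gradient of any loss function.
    """
    logits = x @ W                                   # (batch, classes)
    p = softmax(logits)
    p_true = (p * y_onehot).sum(axis=1, keepdims=True)
    scale = (1.0 - p_true) ** (gamma - 1.0)          # per-example weight
    delta = scale * (p - y_onehot)                   # generalised "gradient"
    W -= lr * x.T @ delta / x.shape[0]
    return W

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 5))
Y = np.eye(3)[rng.integers(0, 3, size=32)]
W = np.zeros((5, 3))
W = generalized_update(W, X, Y, gamma=2.0)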
In addition, we study the effect of the loss function choice on the learnt representations. We introduce the Single Logit Classification (SLC) task: classifying whether a given class is the correct class for a given example, in a computationally efficient manner, based on the appropriate class logit alone. A natural principle is proposed, the Principle of Logit Separation (PoLS), as a guideline for choosing and designing loss functions suitable for the SLC task. We mathematically analyse the alignment of eleven existing and novel loss functions with this principle. Experiment results show that using loss functions that are aligned with this principle results in a representation in the logits layer in which each logit is more informative of its class correctness, leading to a considerably better SLC accuracy.
Further, we attempt to alleviate the dependency of standard neural network models on large amounts of quality labels. The task of weakly supervised one-shot detection is considered, in which at training time the model is trained without any localisation labels, and at test time it needs to identify and localise instances of unseen classes. We propose the attention similarity networks (ASN) for this task. ASN use a Siamese neural network to compute a similarity score between an exemplar and different locations in a target example. Then, an attention mechanism performs localisation by learning to attend to the correct locations. The ASN model outperforms the relevant baselines for weakly supervised one-shot detection tasks in the audio and computer vision domains.
Finally, we consider the problem of quantifying prediction confidence in the regression setting. We propose two novel algorithms for emitting calibrated prediction intervals for neural network regressors, at any given confidence level. The two algorithms require binning of the output space and training the neural network regressor as a classifier. Then, the calibration algorithms choose the intervals in the output space, making sure they contain the amount of posterior probability mass that results in the desired confidence level.
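The following numpy sketch illustrates the general idea under simplifying assumptions: the output range is binned, a classifier provides a posterior over the bins, and an interval is grown around the most probable bin until it covers the desired probability mass. The greedy expansion and the fixed binning are illustrative choices, not necessarily the exact calibration algorithms proposed in the thesis.

import numpy as np

def prediction_interval(bin_edges, bin_posterior, confidence=0.9):
    """Pick an output interval containing at least `confidence` posterior mass.

    bin_edges     : array of length K+1 defining K bins of the output space
    bin_posterior : array of length K with the classifier's posterior per bin
    Greedily grows an interval of contiguous bins around the mode.
    """
    k = int(np.argmax(bin_posterior))
    lo, hi = k, k
    mass = bin_posterior[k]
    while mass < confidence and (lo > 0 or hi < len(bin_posterior) - 1):
        left = bin_posterior[lo - 1] if lo > 0 else -1.0
        right = bin_posterior[hi + 1] if hi < len(bin_posterior) - 1 else -1.0
        if left >= right:
            lo -= 1; mass += left
        else:
            hi += 1; mass += right
    return bin_edges[lo], bin_edges[hi + 1]

# toy usage: 10 equal-width bins on [0, 1], a peaked posterior
edges = np.linspace(0.0, 1.0, 11)
post = np.array([0.01, 0.02, 0.05, 0.15, 0.30, 0.25, 0.12, 0.06, 0.03, 0.01])
print(prediction_interval(edges, post, confidence=0.9))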
We propose a strategy for creating transaction attributes based on hidden Markov models (HMMs) that characterize a transaction from different points of view. This strategy makes it possible to integrate a broad spectrum of sequential information into the attributes of transactions. Specifically, we model the genuine and fraudulent behavior of merchants and card holders according to two univariate characteristics: the date and the amount of transactions. In addition, the HMM-based attributes are created in a supervised manner, thereby reducing the need for expert knowledge when building the fraud detection system. Ultimately, our multi-perspective HMM-based approach enables automated data preprocessing that models temporal correlations, complementing and eventually replacing transaction aggregation strategies and improving detection performance. Experiments on a large real-world credit card transaction data set (46 million transactions carried out by Belgian card holders between March and May 2015) show that the proposed HMM-based preprocessing strategy detects more fraudulent transactions when combined with the expert-knowledge-based reference preprocessing strategy for credit card fraud detection.
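To make the idea of HMM-based attributes concrete, here is a minimal numpy sketch assuming discretised transaction amounts and two hand-specified discrete HMMs (one for "genuine" and one for "fraudulent" behaviour). In the thesis the models are learned from labelled sequences and cover several perspectives (card holder/merchant, amount/date), which this sketch omits.

import numpy as np

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of an observation sequence under a discrete HMM
    (forward algorithm with scaling)."""
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik

# two illustrative 2-state HMMs over 3 amount buckets (low / medium / high)
start = np.array([0.6, 0.4])
trans_genuine = np.array([[0.8, 0.2], [0.3, 0.7]])
emit_genuine  = np.array([[0.7, 0.25, 0.05], [0.4, 0.4, 0.2]])
trans_fraud   = np.array([[0.5, 0.5], [0.5, 0.5]])
emit_fraud    = np.array([[0.1, 0.3, 0.6], [0.2, 0.3, 0.5]])

# recent amount buckets of one card holder; the log-likelihood ratio
# becomes one new attribute of the current transaction
seq = [0, 0, 1, 2, 2]
feature = (forward_loglik(seq, start, trans_genuine, emit_genuine)
           - forward_loglik(seq, start, trans_fraud, emit_fraud))
print(feature)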
Our subject of study is strong approximation of stochastic differential equations (SDEs) with respect to the supremum and the L_p error criteria, and we seek approximations that are strongly asymptotically optimal in specific classes of approximations. For the supremum error, we prove strong asymptotic optimality for specific tamed Euler schemes relating to certain adaptive and to equidistant time discretizations. For the L_p error, we prove strong asymptotic optimality for specific tamed Milstein schemes relating to certain adaptive and to equidistant time discretizations. To illustrate our findings, we numerically analyze the SDE associated with the Heston–3/2–model originating from mathematical finance.
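For readers unfamiliar with taming, the following numpy sketch shows a tamed Euler scheme on an equidistant grid for a scalar SDE dX_t = a(X_t) dt + b(X_t) dW_t, where the drift increment is damped to avoid blow-up under superlinearly growing coefficients. The concrete coefficients and the equidistant grid are illustrative assumptions.

import numpy as np

def tamed_euler(a, b, x0, T, n, rng):
    """Tamed Euler scheme on an equidistant grid with n steps.

    The drift increment a(x)*h is replaced by a(x)*h / (1 + h*|a(x)|), which
    keeps the increment bounded even if the drift grows superlinearly.
    """
    h = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = rng.normal(scale=np.sqrt(h))
        drift = a(x[k]) * h / (1.0 + h * abs(a(x[k])))
        x[k + 1] = x[k] + drift + b(x[k]) * dW
    return x

# illustrative SDE with superlinear drift: dX = -X^3 dt + X dW
rng = np.random.default_rng(42)
path = tamed_euler(a=lambda x: -x**3, b=lambda x: x, x0=1.0, T=1.0, n=1000, rng=rng)
print(path[-1])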
Nowadays, consumers are often required to disclose private data in various contexts such as while surfing the internet, downloading a mobile application, or engaging in a business relationship with a firm. Privacy-related decision-making research has so far mainly investigated data disclosure as a cognitive risk-benefit trade-off analysis. While this cognitive approach might be appropriate for situations where consumers have the opportunity for cognitive evaluations, there are many situations in the modern landscape where consumers cannot or do not want to engage in cognitive processing. Decision-making under stress or data disclosure to a business network of collaborating firms, for example, constitute challenges to purely cognitive decision-making approaches, calling for an extension of the established paradigm of cognitive privacy-related decision making. This dissertation advocates for the crucial role of affective processing in many modern data disclosure situations, where consumers do not engage in purely cognitive processing due to external hindrances or a lack of personal involvement in the data disclosure situation.
Geography, social context, time, and cultural mindset are four (out of many) cornerstones of human interaction. When building statistical models, their consideration is vital: They all cause dependency between individual observations, violating assumptions of independence and exchangeability. While this can be problematic and inhibit the unbiased inference of parameters, it can also be a fruitful source of insights and enhance prediction performance.
One class of models that serves to manage or profit from the presence of dependence is the class of latent variable models. This class of models assumes that the presence of non-explicit, unobserved causes of continuous or discrete nature can explain the observed correlations. Latent variable models explicitly take account of dependency, for example, by modeling an unobserved local source of pollution as a continuous spatial variable. Through their widespread use for information filtering, link prediction, and statistical inference, latent variable models have come to have a substantial impact on our daily life and the way we consume information.
The four articles in this thesis shed light on assumptions, usage, and potential drawbacks of latent variable models in various contexts that involve geographic and interaction data. We model unobserved sources of pollution in geophysical data, explore individual taste and mindsets in cross-cultural contexts, and predict the evolution of social relationships in software development projects. This combination of various perspectives contributes to the interdisciplinary exchange of methodological knowledge on the modeling of dependent data.
In high-performance computing, one primary objective is to exploit the performance that the given target hardware can deliver to the fullest. Compilers that have the ability to automatically optimize programs for a specific target hardware can be highly useful in this context. Iterative (or search-based) compilation requires little or no prior knowledge and can adapt more easily to concrete programs and target hardware than static cost models and heuristics. Thereby, iterative compilation helps in situations in which static heuristics do not reflect the combination of input program and target hardware well. Moreover, iterative compilation may enable the derivation of more accurate cost models and heuristics for optimizing compilers. In this context, the polyhedron model is of help as it provides not only a mathematical representation of programs but, more importantly, a uniform representation of complex sequences of program transformations by schedule functions. The latter facilitates the systematic exploration of the set of legal transformations of a given program.
Early approaches to purely iterative schedule optimization in the polyhedron model do not limit their search to schedules that preserve program semantics and, thereby, suffer from the need to explore large numbers of illegal schedules. More recent research ensures the legality of program transformations but presumes a sequential rather than a parallel execution of the transformed program. Other approaches do not perform a purely iterative optimization.
We propose an approach to iterative schedule optimization for parallelization and tiling in the polyhedron model. Our approach targets loop programs that profit from data locality optimization and coarse-grained loop parallelization. The schedule search space can be explored either randomly or by means of a genetic algorithm.
To determine a schedule's profitability, we rely primarily on measuring the transformed code's execution time. While benchmarking is accurate, it increases the time and resource consumption of program optimization tremendously and can even make it impractical. We address this limitation by proposing to learn surrogate models from schedules generated and evaluated in previous runs of the iterative optimization and to replace benchmarking by performance prediction to the extent possible.
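The sketch below illustrates the general idea in Python under simplifying assumptions: each candidate schedule is summarised by a numeric feature vector, a random-forest surrogate is trained on (features, measured runtime) pairs from earlier optimization runs, and benchmarking is then restricted to the candidates the surrogate predicts to be fastest. The feature choice and model class here are illustrative, not the exact setup of the thesis.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# training data from previous iterative optimizations:
# each row describes one schedule (e.g. tile sizes, parallel loop depth, ...)
# and y holds the measured execution time of the transformed program.
rng = np.random.default_rng(1)
X_train = rng.uniform(size=(500, 6))
y_train = 1.0 + 2.0 * X_train[:, 0] + rng.normal(scale=0.05, size=500)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_train, y_train)

# new candidate schedules generated by the genetic algorithm / random search
candidates = rng.uniform(size=(1000, 6))
predicted_time = surrogate.predict(candidates)

# benchmark only the most promising candidates instead of all of them
top_k = np.argsort(predicted_time)[:10]
print("schedules selected for actual benchmarking:", top_k)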
Our evaluation on the PolyBench 4.1 benchmark set reveals that, in a given setting, iterative schedule optimization yields significantly higher speedups in the execution of the program to be optimized. Surrogate performance models learned from training data that was generated during previous iterative optimizations can reduce the benchmarking effort without strongly impairing the optimization result. A prerequisite for this approach is a sufficient similarity between the training programs and the program to be optimized.
Algebraic solving of polynomial systems and satisfiability of propositional logic formulas are not two completely separate research areas, as it may appear at first sight. In fact, many problems coming from cryptanalysis, such as algebraic fault attacks, can be rephrased as solving a set of Boolean polynomials or as deciding the satisfiability of a propositional logic formula. Thus one can analyze the security of cryptosystems by applying standard solving methods from computer algebra and SAT solving. This doctoral thesis is dedicated to studying solvers that are based on logic and algebra separately as well as integrating them into one such that the combined solvers become more powerful tools for cryptanalysis.
This dissertation is divided into three parts. In the first part, we recall some theory and basic techniques for algebraic and logic solving. We focus mainly on DPLL-based SAT solving and on techniques related to border bases and Gröbner bases. In particular, we describe the Border Basis Algorithm in detail and discuss its specialized version for Boolean polynomials, called the Boolean Border Basis Algorithm.
In the second part of the thesis, we deal with connecting solvers based on algebra and logic. The ultimate goal is to combine the strengths of different solvers into one. Namely, we fuse the XOR reasoning from algebraic solvers with the light, efficient design of SAT solvers. As a first step in this direction, we design various conversions from sets of clauses to sets of Boolean polynomials, and vice versa, such that solutions and models are preserved by the conversions. In particular, based on a block-building mechanism, we design a new blockwise algorithm for the CNF to ANF conversion which is geared towards producing fewer and lower-degree polynomials. The above conversions allow us to integrate both solvers via a communication interface.
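As a concrete illustration of the simplest (non-blockwise) direction of such a conversion, the Python sketch below maps a CNF clause to a Boolean polynomial over GF(2): a clause is falsified exactly when the product of the "falsifying" literal polynomials equals 1, so requiring that product to be 0 preserves the solution set. Polynomials are represented as sets of monomials (sets of variables), so x^2 = x is handled automatically; this is a didactic sketch, not the blockwise algorithm from the thesis.

def poly_add(p, q):
    """Addition over GF(2): symmetric difference of monomial sets."""
    return p ^ q

def poly_mul(p, q):
    """Multiplication over GF(2) with x^2 = x (monomials are variable sets)."""
    result = set()
    for m1 in p:
        for m2 in q:
            m = frozenset(m1 | m2)
            result ^= {m}           # coefficients live in GF(2)
    return result

ONE = {frozenset()}                  # the constant polynomial 1

def clause_to_anf(clause):
    """Convert a clause (list of signed variables, e.g. ('y', False) for the
    literal !y) into a Boolean polynomial that must equal 0.

    The clause is falsified iff every literal is false, i.e. iff the product
    of (1 + x) for positive literals and x for negative literals equals 1.
    """
    product = ONE
    for var, positive in clause:
        lit_poly = {frozenset([var])}
        factor = poly_add(ONE, lit_poly) if positive else lit_poly
        product = poly_mul(product, factor)
    return product

# clause (x OR !y OR z)  ->  polynomial (1 + x) * y * (1 + z) = 0
print(clause_to_anf([("x", True), ("y", False), ("z", True)]))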
To reach an even tighter integration, we consider proof systems that combine resolution and polynomial calculus, i.e. the two proof systems most used in logic and algebraic solving. Based on such a proof system, which we call SRES, we introduce new types of solving algorithms that demonstrate the synergy between Gröbner-like and DPLL-like solving. At the end of the second part of the dissertation, we provide experiments based on a new benchmark which illustrate that our new DPLL-based method has the potential to outperform CDCL SAT solvers.
In the third part of the thesis, we focus on practical attacks on various cryptographic primitives. For instance, we apply SAT solvers to algebraic fault attacks on the symmetric cipher LED and on derivatives of the block cipher AES. The main goal there is to derive so-called fault equations automatically from the hardware description of the cryptosystem and thus to automate the attack. To give extra power to a SAT solver that inverts the hash functions SHA-1 and SHA-2, we describe how to tweak the solver via a programmatic interface such that its propagation, and thus the attack itself, is improved.
Internet browsers include Application Programming Interfaces (APIs) to support Web applications that require complex functionality, e.g., to let end users watch videos, make phone calls, and play video games. Meanwhile, many Web applications employ the browser APIs to rely on the user's hardware to execute intensive computation, access the Graphics Processing Unit (GPU), use persistent storage, and establish network connections.
However, providing access to the system's computational resources, i.e., processing, storage, and networking, through the browser creates an opportunity for attackers to abuse resources. Principally, the problem occurs when an attacker compromises a Web site and includes malicious code to abuse its visitor's computational resources. For example, an attacker can abuse the user's system networking capabilities to perform a Denial of Service (DoS) attack against third parties. What is more, computational resource abuse has not received widespread attention from the Web security community because most of the current specifications are focused on content and session properties such as isolation, confidentiality, and integrity.
Our primary goal is to study computational resource abuse and to advance the state of the art by providing a general attacker model, multiple case studies, a thorough analysis of available security mechanisms, and a new detection mechanism. To this end, we implemented and evaluated three scenarios where attackers use multiple browser APIs to abuse networking, local storage, and computation. Further, depending on the scenario, an attacker can use browsers to perform Denial of Service against third-party Web sites, create a network of browsers to store and distribute arbitrary data, or use browsers to establish anonymous connections similar to The Onion Router (Tor). Our analysis also includes a real-life resource abuse case found in the wild, i.e., CryptoJacking, where thousands of Web sites forced their visitors to perform crypto-currency mining without their consent. In the general case, attacks presented in this thesis share the attacker model and two key characteristics: 1) the browser's end user remains oblivious to the attack, and 2) an attacker has to invest few resources in comparison to the resources he obtains.
In addition to the analysis of the attacks, we present how existing and upcoming Web security enforcement mechanisms can hinder an attacker, and we discuss their drawbacks. Moreover, we propose a novel detection approach based on browser API usage patterns. Finally, we evaluate the accuracy of our detection model, after training it with the real-life crypto-mining scenario, through a large-scale analysis of the most popular Web sites.
In various fields of image analysis, determining the precise geometry of occurrent edges, e.g. the contour of an object, is a crucial task. Especially the curvature of an edge is of great practical relevance. In this thesis, we develop different methods to detect a variety of edge features, among them the curvature.
We first examine the properties of the parabolic Radon transform and show that it can be used to detect the edge curvature, as the smoothness of the parabolic Radon transform changes when the parabola is tangential to an edge and also, when additionally the curvature of the parabola coincides with the edge curvature. By subsequently introducing a parabolic Fourier transform and establishing a precise relation between the smoothness of a certain class of functions and the decay of the Fourier transform, we show that the smoothness result for the parabolic Radon transform can be translated into a change of the decay rate of the parabolic Fourier transform.
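For orientation, one common way to define a parabolic Radon transform of a function f on R^2, integrating f along the parabola with curvature parameter a, slope b, and offset c, is the following (this generic form is our notational assumption and may differ from the exact definition used in the thesis):

\mathcal{R}_{\mathrm{par}} f(a, b, c) \;=\; \int_{\mathbb{R}} f\bigl(x,\; a x^{2} + b x + c\bigr)\, \mathrm{d}x .

The smoothness of the map (a, b, c) \mapsto \mathcal{R}_{\mathrm{par}} f(a, b, c) then changes when the parabola becomes tangential to an edge, and again when a matches the edge curvature, as described above.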
Furthermore, we introduce an extension of the continuous shearlet transform which additionally utilizes shears of higher order. This extension, called the Taylorlet transform, allows for a detection of the position and orientation, as well as the curvature and other higher order geometric information of edges. We introduce novel vanishing moment conditions which enable a more robust detection of the geometric edge features and examine two different constructions for Taylorlets. Lastly, we translate the results of the Taylorlet transform in R^2 into R^3 and thereby allow for the analysis of the geometry of object surfaces.
The emergence of web and online media has substantially changed the manner in which employers and applicants interact. The development of web 1.0 applications with one-way communication and the advancement of web 2.0 technologies with interactive components have extended the spectrum of recruitment channels. Selecting among these new recruitment media channels and analysing their impact and their interaction with each other has become a new challenge in the academic literature. This dissertation addresses these issues in three separate essays.
Study 1 focuses on the impact of Facebook as a social media recruitment channel on recruitment success. Many companies embed Facebook into their recruitment strategy as an additional recruitment channel for reaching potential applicants and motivating them to apply for available positions. Study 1 analyzes these activities and addresses the question of whether different Facebook activities influence recruitment success above and beyond other undertakings on traditional and online media channels. Study 1 concludes that on Facebook, company posts with a general focus and posts containing work or recruitment information both have a positive impact on recruitment success. The results of Study 1 are validated by company interviews with human resources (HR) managers who are responsible for the overall HR strategy of the company. Study 1 is the first academic work within HR and marketing research that analyzes the impact of a company's Facebook activities.
Study 2 examines the impact of traditional media recruitment channels on recruitment success. Many companies employ traditional media channels for their recruitment marketing actions with the aim of achieving recruitment success. Study 2 uses media richness theory as a basis for analyzing the impact of a company’s activities within traditional media channels on recruitment success. Study 2 concludes that exhibition fair and online marketing activities influence recruitment success. In connection with brand equity theory, Study 2 also verifies whether the addition of Facebook activities reinforces the impact of traditional media channels on recruitment success. The results indicate that general Facebook activities have a reinforcing impact on exhibition fair and print media recruitment practices.
Finally, Study 3 focuses on both the literature overview of traditional and social media recruitment practices and social media influence from the marketing literature. It also summarizes and categorizes previous research on the influence of traditional, online, and social media recruitment practices; the effect of a multichannel mix; and the influence of social media and social networking sites on different business outcomes from the marketing literature. Additionally, Study 3 identifies the research gaps and provides recommendations for future studies.
This dissertation uses vector autoregression modelling, validated with the help of company interviews, and employs media richness, signaling, and brand equity theories, combined with a thorough analysis of the research need. The dissertation closes the research gap regarding the analysis of the impact of Facebook, online, and traditional media on recruitment success. It also adds new perspectives to the HR and marketing literature.
The processing of personal information is omnipresent in our data-driven society, enabling personalized services that are regulated by privacy policies. Although privacy policies are strictly defined by the General Data Protection Regulation (GDPR), no systematic mechanism is in place to enforce them. Especially if data from several sources with different associated privacy policies is merged into one data set, managing and complying with all privacy requirements during the processing of the data set is challenging. Privacy policies can vary because each source has its own policy or because individual users personalize their privacy policies. Thus, there is a risk of negligent or malicious processing of personal data in violation of privacy policies.
To tackle this challenge, a privacy-preserving framework is proposed. Within this framework, privacy policies are expressed in the proposed Layered Privacy Language (LPL), which makes it possible to specify legal privacy policies and privacy-preserving de-identification methods. The policies are enforced by a Policy-based De-identification (PD) process. The PD process enables efficient compliance with various privacy policies simultaneously while applying pseudonymization, personal privacy anonymization, and privacy models for de-identification of the data set. Thus, the privacy requirements of each individual privacy policy are enforced, filling the gap between legal privacy policies and their technical enforcement.
The dissertation is located in the field of quantization of certain stochastic processes, namely a solution X of a multidimensional stochastic differential equation (SDE). The quantization problem for X consists in approximating X by a random element which takes only finitely many values. Our main interest lies in the investigation of the asymptotic behavior of the Nth minimal quantization error of X as N tends to infinity, which incorporates the determination of both the sharp rate of convergence and explicit asymptotic constants. Explicit asymptotic constants, in particular, have so far been unknown in the context of multidimensional SDEs. Furthermore, as part of our analysis, we provide a method which yields a strongly asymptotically optimal sequence of N-quantizations of X. In certain special cases our method is fully constructive and the algorithm is easy to implement.
A widely used class of codes is that of stencil codes. Their general structure is very simple: data points in a large grid are repeatedly recomputed from neighboring values. This predefined neighborhood is the so-called stencil. Despite their very simple structure, stencil codes are hard to optimize since only a few computations are performed while a comparatively large number of values has to be accessed, i.e., stencil codes usually have a very low computational intensity. Moreover, the set of optimizations and their parameters also depends on the hardware on which the code is executed.
To cut a long story short, current production compilers are not able to fully optimize this class of codes, and optimizing each application by hand is not practical. As a remedy, we propose a set of optimizations and describe how they can be applied automatically by a code generator for the domain of stencil codes. A combination of space and time tiling increases data locality, which significantly reduces the memory-bandwidth requirements: a standard three-dimensional 7-point Jacobi stencil can be accelerated by a factor of 3. This optimization can target basically any stencil code, while others are more specialized. For example, support for arbitrary linear data layout transformations is especially beneficial for colored kernels, such as a Red-Black Gauss-Seidel smoother. On the one hand, an optimized data layout for such kernels reduces the bandwidth requirements while, on the other hand, it simplifies explicit vectorization.
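As a point of reference for the kind of code being optimized, here is a minimal Python/numpy sketch of a three-dimensional 7-point Jacobi sweep together with a naive spatial blocking of the grid; real generated code would of course be compiled code with time tiling, vectorization, and layout transformations, which this sketch only hints at.

import numpy as np

def jacobi_7pt_blocked(u, block=32):
    """One Jacobi sweep: every interior point becomes the average of itself
    and its six axis neighbours. The triple-nested block loops illustrate
    spatial tiling; time tiling (fusing several sweeps) is not shown."""
    n = u.shape[0]
    v = u.copy()
    for bi in range(1, n - 1, block):
        for bj in range(1, n - 1, block):
            for bk in range(1, n - 1, block):
                i0, i1 = bi, min(bi + block, n - 1)
                j0, j1 = bj, min(bj + block, n - 1)
                k0, k1 = bk, min(bk + block, n - 1)
                v[i0:i1, j0:j1, k0:k1] = (
                    u[i0:i1, j0:j1, k0:k1]
                    + u[i0-1:i1-1, j0:j1, k0:k1] + u[i0+1:i1+1, j0:j1, k0:k1]
                    + u[i0:i1, j0-1:j1-1, k0:k1] + u[i0:i1, j0+1:j1+1, k0:k1]
                    + u[i0:i1, j0:j1, k0-1:k1-1] + u[i0:i1, j0:j1, k0+1:k1+1]
                ) / 7.0
    return v

u = np.random.rand(64, 64, 64)
u = jacobi_7pt_blocked(u)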
Other noticeable optimizations described in detail are redundancy elimination techniques to eliminate common subexpressions both in a sequence of statements and across loop boundaries, arithmetic simplifications and normalizations, and the vectorization mentioned previously. In combination, these optimizations are able to increase the performance not only of the model problem given by Poisson’s equation, but also of real-world applications: an optical flow simulation and the simulation of a non-isothermal and non-Newtonian fluid flow.
Our internal clock, the circadian clock, determines at which time we have our best cognitive abilities, are physically strongest, and when we are tired. Circadian clock phase is influenced primarily through exposure to light. A direct pathway from the eyes to the suprachiasmatic nucleus, where the circadian clock resides, is used to synchronise the circadian clock to external light-dark cycles.
In modern society, with the ability to work anywhere at any time and a full social agenda, many struggle to keep internal and external clocks synchronised. Living against our circadian clock makes us less efficient and poses serious health risks, especially when sustained over a long period of time, e.g. in shift workers. Assessing circadian clock phase is a cumbersome and uncomfortable task. A common method, dim light melatonin onset testing, requires a series of eight saliva samples taken at hourly intervals while the subject stays in dim light conditions from 5 hours before until 2 hours past their habitual bedtime.
At the same time, sensor-rich smartphones have become widely available and wearable computing is on the rise. The hypothesis of this thesis is that smartphones and wearables can be used to record sensor data to monitor human circadian rhythms in free-living. To test this hypothesis, we conducted research on specialised wearable hardware and smartphones to record relevant data, and developed algorithms to monitor circadian clock phase in free-living. We first introduce our smart eyeglasses concept, which can be personalised to the wearer's head and 3D-printed. Furthermore, hardware was integrated into the eyewear to recognise typical activities of daily living (ADLs). A light sensor integrated into the eyeglasses bridge was used to detect screen use. In addition to wearables, we also investigate whether sleep-wake patterns can be revealed from smartphone context information. We introduce novel methods to detect sleep opportunity, which incorporate expert knowledge to filter and fuse classifier outputs. Furthermore, we estimate light exposure from smartphone sensor and weather information. We applied the Kronauer model to compare the phase shift resulting from head light measurements, wrist measurements, and smartphone estimations.
We found it was possible to monitor circadian phase shift from light estimation based on smartphone sensor and weather information with a weekly error of 32±17 min, which outperformed wrist measurements in 11 out of 12 participants. Sleep could be detected from smartphone use with an onset error of 40±48 min and a wake error of 42±57 min. Screen use could be detected with the smart eyeglasses with 0.9 ROC AUC for ambient light intensities below 200 lux. Nine clusters of ADLs were distinguished using Gaussian mixture models with an average accuracy of 77%. In conclusion, a combination of the proposed smartphone and smart eyeglasses applications could support users in synchronising their circadian clock to external clocks, and thus living a healthier lifestyle.
Credit card fraud has emerged as a major problem in the electronic payment sector. In this thesis, we study data-driven fraud detection and address several of its intricate challenges by means of machine learning methods, with the goal of identifying fraudulent transactions that have been issued illegitimately on behalf of the rightful card owner. In particular, we explore several means to leverage contextual information beyond a transaction's basic attributes on the transaction level, the sequence level, and the user level.
On the transaction level, we aim to identify fraudulent transactions which, in terms of their attribute values, are globally distinguishable from genuine transactions. We provide an empirical study of the influence of class imbalance and forecasting horizons on the classification performance of a random forest classifier. We augment transactions with additional features extracted from external knowledge sources and show that external information about countries and calendar events improves classification performance most noticeably on card-not-present transactions.
On the sequence level, we aim to detect frauds that are inconspicuous in the background of all transactions but peculiar with respect to the short-term sequence they appear in. We use a Long Short-term Memory network (LSTM) for modeling the sequential succession of transactions. Our results suggest that LSTM-based modeling is a promising strategy for characterizing sequences of card-present transactions but it is not adequate for card-not-present transactions.
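A minimal PyTorch sketch of this kind of sequence model follows; the feature dimension, sequence length, and network sizes are illustrative placeholders rather than the configuration used in the thesis.

import torch
import torch.nn as nn

class TransactionLSTM(nn.Module):
    """Scores the last transaction of a short sequence as fraud/genuine."""
    def __init__(self, n_features=20, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq):                  # seq: (batch, time, n_features)
        out, _ = self.lstm(seq)
        last = out[:, -1, :]                 # hidden state at the last step
        return torch.sigmoid(self.head(last)).squeeze(-1)

model = TransactionLSTM()
batch = torch.randn(8, 10, 20)               # 8 users, 10 past transactions each
fraud_probability = model(batch)
print(fraud_probability.shape)               # torch.Size([8])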
On the user level, we elaborate on feature aggregations and propose a flexible concept allowing us to define numerous features by means of a simple syntax. We provide a CUDA-based implementation for the computationally expensive extraction, with a speed-up of two orders of magnitude over a single-core implementation. Our feature selection study reveals that aggregates extracted from users' transaction sequences are more useful than those extracted from merchant sequences. Moreover, we discover multiple sets of candidate features whose performance is equivalent to that of manually engineered aggregates while being structurally different.
Regarding future work, we motivate the usage of simple and transparent machine learning methods for credit card fraud detection and we sketch a simple user-focused modeling approach.
The main research goal of this thesis is to develop a theory that provides foundations for the development of Web of Things (WoT) systems. A theory for WoT shall provide a model of the 'things' WoT agents relate to such that these relations determine what interactions take place between these agents. This thesis presents a knowledge-based approach in which the semantics of WoT systems is given by a transformation (a homomorphism) between a graph representing agent interactions and a knowledge graph describing 'things'. It focuses on three aspects of knowledge graphs in particular: the vocabulary with which assertions can be made, the rules that can be defined over this vocabulary, and its serialization to efficiently exchange pieces of a knowledge graph. Each aspect is developed in a dedicated chapter, with specific contributions to the state of the art.
The need for a unified vocabulary to describe 'things' in the WoT and the Internet of Things (IoT) was identified early on in the literature. Many proposals have consequently been published, in the form of Web ontologies. In Ch. 2, a systematic review of these proposals is developed, as well as a comparison with the data models of the principal IoT frameworks and protocols. The contribution of the thesis in that respect is an alignment between the Thing Description (TD) model and the Semantic Sensor Network (SSN) ontology, two standards of the World Wide Web Consortium (W3C). The scope of this thesis is generally limited to Web standards, especially those based on the Resource Description Framework (RDF).
Web ontologies do not only expose a vocabulary but also rules to extend a knowledge graph by means of reasoning. Starting from a set of TD documents, new relations between ‘things’ can be “discovered” this way, indicating possible interactions between the servients that relate to them. The experiments presented in Ch. 3 were done on the basis of this semantic discovery framework on two use cases: a building automation use case provided by Intel Labs and an industrial control use case developed internally at Siemens. The relations to discover often involve anonymous nodes in the knowledge graph: the chapter also introduces a novel skolemization algorithm to correctly process these nodes on a well-defined fragment of the Web Ontology Language (OWL).
Finally, because this semantic discovery framework relies on the exchange of TD documents, Ch. 4 introduces a binary format for RDF that proves efficient in serializing TD assertions such that even the smallest WoT agents, i.e. micro-controllers, can store and process them. A formalization for the semantics-preserving compaction and querying of TD documents is also introduced in this chapter, at the basis of an embedded RDF store called the µRDF store. The ability of all WoT agents to query logical assertions about themselves and their environment, as found in TD documents, is a first step towards knowledge-based intelligent systems that can operate autonomously and dynamically in a decentralized way. The µRDF store is an attempt to illustrate the practical outcomes of the theory of WoT developed throughout this thesis.
Service Provisions and Business Relationships in the Digital Era – Four Essays in the B2B Context
(2019)
Digitalization has fundamentally changed how services are provided and how service providers and their customers interact with each other in the business-to-business (B2B) context. Against the backdrop of these developments, this thesis considers – in four essays – the changes brought about by both service and sales digitalization. For one research topic each, the essays investigate which aspects of existing knowledge about non-digital services and/or sales can be transferred to digital services and sales, and which aspects must be adjusted. The aim is to support B2B firms that offer or receive services, or plan to do so in the future, in coping with the challenges of service and sales digitalization.
In doing so, the first and second essay investigate the contingency effect of service digitalization on service characteristics from the provider and customer views, respectively. Both essays aim at explaining service value as an endogenous variable. In the first essay, service modularity and service flexibility are considered as predecessors that help explain service value. The second essay investigates the effect of customer cocreation on service value. Whereas the first and second essay focus on service characteristics, the third and fourth essay focus on business relationships. Both essays explain relational conflict, one important facet of business relationships, as the endogenous variable and consider the perspectives of both providers and customers. The third essay elaborates on the diverging effect of service digitalization on relational conflict from these two perspectives. The fourth essay incorporates both service and sales digitalization and investigates the contingency effects of the two forms of digitalization on the relationship between coercive power use and relational conflict.
In conclusion, this thesis provides a more fine-grained view of the digitalization construct by differentiating explicitly between service digitalization and sales digitalization and introduces a new conceptualization of service (see all four essays) and sales digitalization (see the fourth essay) by treating digitalization as a continuum. Furthermore, this thesis investigates the opportunities and challenges brought about by digitalization. In particular, the first and second essays show the opportunities service digitalization creates for providers, who could benefit from service modularity, and customers, who could benefit from the integration of their own resources into service provisions. In addition, the third essay shows that service digitalization has a positive effect on relational conflict for providers and, conversely, a negative effect for customers. The fourth essay shows that sales and service digitalization positively moderate the effect of coercive power use on relational conflict for weaker parties in business relationships (except for weaker providers) but not for stronger parties. In sum, this thesis contributes to a better understanding of the consequences of service and sales digitalization and provides recommendations for companies facing challenges and decisions related to this development.
Blockchains and distributed ledger technology (DLT) that rely on Proof-of-Work (PoW) typically show limited performance. Several recent approaches incorporate Byzantine fault-tolerant (BFT) consensus protocols in their DLT design, as Byzantine consensus allows for increased performance and energy efficiency and offers proven liveness and safety properties. While there has been a broad variety of research on BFT consensus protocols over the last decades, those protocols were originally not intended to scale to a large number of nodes. Thus, the quest for scalable BFT consensus was initiated with the emerging research interest in DLT. In this paper, we first provide a broad analysis of various optimization techniques and approaches used in recent protocols to scale Byzantine consensus to large environments such as BFT blockchain infrastructures. We then present an overview of the efforts and assumptions made by existing protocols and compare their solutions.
In geo-replicated systems, the heterogeneous latencies of connections between replicas limit the system's ability to achieve fast consensus. State machine replication (SMR) protocols can be refined for their deployment in wide-area networks by using a weighting scheme for active replication that employs additional replicas and assigns higher voting power to faster replicas. Utilizing more variability in quorum formation allows replicas to proceed more swiftly to subsequent protocol stages, thus decreasing consensus latency. However, if network conditions vary during the system's lifespan or faults occur, the system needs a solution to autonomously adjust to new conditions. We incorporate the idea of self-optimization into geographically distributed, weighted replication by introducing AWARE, an automated and dynamic voting weight tuning and leader positioning scheme. AWARE measures replica-replica latencies and uses a prediction model, striving to minimize the system's consensus latency. In experiments using different Amazon EC2 regions, AWARE dynamically optimizes consensus latency by self-reliantly finding a fast weight configuration, yielding latency gains observed by clients located across the globe.
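To illustrate how a latency-prediction model for weighted quorums could look, here is a simplified Python sketch: given a replica-to-replica latency matrix and per-replica voting weights, it estimates how long the leader needs to collect a weighted quorum of votes. The quorum threshold and the single-round abstraction are simplifying assumptions, not AWARE's exact prediction model.

import numpy as np

def predicted_quorum_latency(latency, weights, leader, threshold):
    """Time until the leader has received votes whose weights sum to `threshold`.

    latency[i][j] : one-way latency from replica i to replica j (ms)
    weights[i]    : voting weight of replica i
    A vote from replica j arrives after a round trip leader -> j -> leader.
    """
    round_trips = latency[leader, :] + latency[:, leader]
    order = np.argsort(round_trips)                  # fastest voters first
    acc = 0.0
    for j in order:
        acc += weights[j]
        if acc >= threshold:
            return round_trips[j]
    return float("inf")                              # quorum not reachable

# 4 replicas in different regions; replica 2 gets a higher voting weight
latency = np.array([[0, 30, 80, 120],
                    [30, 0, 90, 110],
                    [80, 90, 0, 60],
                    [120, 110, 60, 0]], dtype=float)
weights = np.array([1.0, 1.0, 2.0, 1.0])
print(predicted_quorum_latency(latency, weights, leader=0, threshold=3.0))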
In the last decade, crowdsourcing has proved its ability to address large scale data collection tasks, such as labeling large data sets, at a low cost and in a short time. However, the performance and behavior variability between workers as well as the variability in task designs and contents, induce an unevenness in the quality of the produced contributions and, thus, in the final output quality. In order to maintain the effectiveness of crowdsourcing, it is crucial to control the quality of the contributions. Furthermore, maintaining the efficiency of crowdsourcing requires the time and cost overhead related to the quality control to be at its lowest. While effective, current quality control techniques such as contribution aggregation, worker selection, context-specific reputation systems, and multi-step workflows, suffer from fairly high time and budget overheads and from their dependency on prior knowledge about individual workers.
In this thesis, we address this challenge by leveraging the similarity between completed and incoming tasks as well as the correlation between workers' declarative profiles and their performance in previous tasks in order to perform an efficient task-aware worker selection. To this end, we propose the CAWS (Context-Aware Worker Selection) method, which operates in two phases. In an offline phase, completed tasks are clustered into homogeneous groups, for each of which the correlation with the workers' declarative profiles is learned. Then, in the online phase, incoming tasks are matched to one of the existing clusters, and the corresponding, previously inferred profile model is used to select the most reliable online workers for the given task. Using declarative profiles helps eliminate any probing process, which reduces the time and the budget while maintaining the crowdsourcing quality. Furthermore, the set of completed tasks, when compared to a probing task split, provides a larger corpus from which a more precise profile model can be learned. This translates into a better selection quality, especially for harder tasks.
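A condensed Python/scikit-learn sketch of this two-phase idea is shown below; the task vectorization, the clustering algorithm, and the per-cluster model are illustrative stand-ins for the concrete choices made in the thesis.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# --- offline phase: cluster completed tasks, learn one profile model per cluster
task_vectors = rng.normal(size=(300, 16))        # vectorized completed tasks
worker_profiles = rng.normal(size=(300, 8))      # declarative profile of the worker
worker_quality = rng.uniform(size=300)           # observed contribution quality

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit(task_vectors)
profile_models = {}
for c in range(5):
    mask = clusters.labels_ == c
    profile_models[c] = Ridge().fit(worker_profiles[mask], worker_quality[mask])

# --- online phase: match an incoming task to a cluster, rank available workers
new_task = rng.normal(size=(1, 16))
c = int(clusters.predict(new_task)[0])
available_workers = rng.normal(size=(50, 8))     # their declarative profiles
expected_quality = profile_models[c].predict(available_workers)
selected = np.argsort(expected_quality)[::-1][:10]
print("workers selected for the new task:", selected)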
In order to evaluate CAWS, we introduce CrowdED (Crowdsourcing Evaluation Dataset), a rich dataset to evaluate quality control methods and quality-driven task vectorization and clustering. The generation of CrowdED relies on a constrained sampling approach that makes it possible to produce a task corpus respecting both budget and type constraints. Besides helping to evaluate CAWS, CrowdED, through its generality and richness, helps close the benchmarking gap in the crowdsourcing quality control community.
Using CrowdED, we evaluate the performance of CAWS in terms of the quality of the worker selection and in terms of the achieved time and budget reduction. The results show that, first, automatic grouping achieves a learning quality similar to that of job-based grouping and, second, CAWS outperforms the state-of-the-art profile-based worker selection in terms of quality. This is especially true when strong budget and time constraints are present on the requester side.
Finally, we complement our work by a software contribution consisting of an open source framework called CREX (CReate Enrich eXtend). CREX allows the creation, the extension and the enrichment of crowdsourcing datasets. It provides the tools to vectorize, cluster and sample a task corpus to produce constrained task sets and to automatically generate custom crowdsourcing campaign sites.
In the public debate it is often assumed that communication in so-called "Echo Chambers" - online structures in which like-minded people share mostly messages that confirm their mutual, shared attitudes - can lead to negative outcomes such as increased societal polarization between groups holding opposing beliefs. This thesis aimed to examine this assumption from a psychological perspective and substantiate it empirically. First, based on existing research and psychological theories, a working definition of Echo Chambers was formulated that highlights two key factors: Selective Exposure to attitudinally congruent messages and communication in homogeneous networks. Then, three studies were conducted to test links between these factors and two individual-level outcomes that are associated with subjects' actual behavior: their False Consensus, that is, how strongly subjects perceive the public to be in agreement with their own attitudes, and their Intergroup Bias, which reflects the degree to which subjects identify as members of an in-group that is in conflict with negatively perceived out-groups. The studies employed questionnaire-based, experimental, as well as real-world data-driven approaches. Overall, they confirm that exposure to Echo Chamber-like online structures can indeed lead to a more favorably distorted perception of public opinions and to more signs of Intergroup Bias in subjects' communicational style. Thus, the thesis provides the first psychologically founded empirical evidence for effects of online Echo Chamber exposure on behavior-related individual-level outcomes. The results can serve as a basis for further research as well as for the discussion of possible strategies to counter negative effects of online Echo Chambers.
Research on teacher judgments has made considerable progress over the last three decades. The importance of teacher judgments and the variability in judgment accuracy call for closer investigation. Based on a review of previous studies, a systematic analytical framework consisting of three main studies was prepared in order to extend the understanding of the processes and characteristics of teacher judgments. The three studies presented here examined, in particular, how teacher judgments are generated from different student characteristics, which possibilities exist for improving teachers' judgment accuracy, and whether teachers' judgment accuracy can remain stable over time.
In the first study, the lens model of social judgment theory was applied to better understand teachers' judgments of student achievement and their information-processing strategies. 260 teachers from seven Chinese primary schools were asked to select and rate, from seven information sources, student characteristics on the basis of which they could judge student achievement. The teachers developed a clear hierarchy of the data sources used. The best information was obtained from students' abilities and attitudes, and the least important information from social interaction with others and from student demographics. To make more accurate judgments, teachers should be informed about valid indicators of student achievement.
The second study aimed to foster teachers' judgment accuracy and students' achievement through the use of classroom response systems ("clickers"). 20 school classes with 459 sixth-grade students and their mathematics teachers were divided into three groups for a five-week quasi-experimental intervention study with a pre- and post-test. The results show that both goals were largely achieved. Students in the clicker group gained more mathematical knowledge from the intervention than students in the diary and control groups. Teacher judgments in all three groups became more accurate from pre- to post-test. However, teachers who used clickers judged with the highest accuracy. Clickers can therefore be recommended as a valuable tool for improving teachers' judgment accuracy.
The third study examined the temporal stability of teachers' judgment accuracy regarding students' motivation, emotion, and achievement. Nine classes with 326 sixth-graders from a Chinese primary school and their mathematics teachers took part in the study. The students worked on a standardized mathematics test and a self-description questionnaire on motivation and emotion. The teachers judged each individual student's motivation, emotion, and achievement using single items. Teacher judgments and student characteristics were measured twice within four weeks. The results showed that teachers were able to judge student achievement with high accuracy, student motivation with moderate to high accuracy, and student emotion mostly with low accuracy. Teachers' judgment accuracy was very stable, with only small changes in the individual accuracy components. It can be concluded that Chinese primary school teachers are able to make fair judgments of their students' achievement and motivation at different points in time. Students' emotions, however, are difficult for teachers to assess.
Software has become an important part of our life. As a consequence, the number of different application scenarios and user requirements of software systems grows rapidly. To satisfy these requirements, software vendors build configurable software systems that can be tailored to diverse needs without rebuilding them from scratch, which reduces costs and development time.
Despite considerable advances in software engineering, which allow building high-quality configurable software systems, some challenges remain. One of these challenges is the feature interaction problem that arises when parts (features), from which a configurable system is composed, interact in unexpected ways, and inadvertently change the behavior or quality attributes (such as performance) of the system.
The goal of this dissertation is to systematically study the nature of feature interactions, their causes, their influence on performance of configurable systems, and, based on empirical results, suggest ways of improving techniques for detecting and predicting feature interactions.
More specifically, we compared and evaluated different strategies for the analysis of configurable software systems. The results of our evaluation complement empirical data from previous work about how different analysis strategies for configurable software systems compare with respect to different aspects, such as performance. These results shall be used to develop effective and scalable techniques and tools for analysis of configurable software including feature-interaction detection and prediction techniques and tools.
Technically, we used a machine-learning technique to quantify the influence of feature interactions on performance of real-world configurable systems. We studied the characteristics of interactions that have the largest influence on performance and found that interactions among few features have higher influence than interactions among many features. With a growing number of interacting features, the influence of the corresponding interactions decreases consistently. This implies that interactions involving multiple features can be ignored in practice because of their marginal influence on performance. We also investigated the causes of the interactions and were able to identify several patterns that link these interactions to the architecture of the systems: For example, we found that if a data processing system consisted of multiple features that processed the same data in sequence then these features interacted. The identified patterns can help to anticipate performance interactions already at an early development stage when a system’s architecture is designed.
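A common way to quantify such interaction influences is a performance-influence model: a sparse linear model over configuration options and their products, where the learned coefficient of a product term is read as the performance influence of that feature interaction. The following scikit-learn sketch illustrates this idea under stated assumptions (synthetic data, pairwise terms only); it is not the exact learning technique used in the dissertation.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
n_options = 6
configs = rng.integers(0, 2, size=(200, n_options))         # 0/1 feature selections

# synthetic ground truth: individual option effects plus one true interaction
runtime = (10 + 4 * configs[:, 0] + 2 * configs[:, 1]
           + 6 * configs[:, 0] * configs[:, 2]               # interaction of options 0 and 2
           + rng.normal(scale=0.3, size=200))

# expand configurations with pairwise interaction terms and fit a sparse model
expander = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
X = expander.fit_transform(configs)
model = Lasso(alpha=0.05).fit(X, runtime)

for name, coef in zip(expander.get_feature_names_out(), model.coef_):
    if abs(coef) > 0.5:
        print(f"{name}: influence = {coef:.2f}")             # interaction terms appear as 'x0 x2'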
Furthermore, considering that control-flow interactions (observable at the level of control flow among features) are easier to detect than performance interactions (externally observable through measuring performance of different combinations of features), we conducted a case study on two configurable systems. In this case study, we investigated a possible relation between control-flow feature interactions and performance feature interactions. We also discussed how this relation can be exploited by interaction detection and performance prediction techniques to make them more time-efficient and precise. Our case study on two real-world configurable systems revealed that a relation indeed exists, and we were able to show how it can be used to reduce the search space of possibly existing performance interactions. The study can serve as a blueprint for further studies that can rely on our conceptual framework for investigating relations between external and internal interactions.
Overall, the contribution of this dissertation consists of scientific and technical insights, practical tool implementations, empirical evaluations, and case studies that advance the current state of research in the area of feature interactions in configurable software systems. In particular, we provide insights into the causes of feature interactions and their influence on performance of real-world configurable systems (e.g., interaction patterns, decreasing influence of interactions with growing number of involved features). Our results also suggest ways of improving techniques for detecting and predicting feature interactions (e.g., ignoring interactions among multiple features, reducing the search space based on relations among interactions).
The Semantic Web has existed for about 20 years now, but neither its applicability nor its presence lives up to its original vision. The Semantic Web Technologies involved have an initial barrier to learning and applying them, which can discourage many potential users. This leads to less available data overall as well as decreased data quality.
This work solves parts of the aforementioned problem by supporting idiomatic entry to those Semantic Web Technologies, allowing for "easier" accessibility and usability. Anno4j is a Java library that implements a form of Object-Relational Mapping for RDF data. With its application, RDF data can be created via a mapping by simply instantiating Java objects - an object-oriented programming concept users are familiar with. Conversely, requesting persisted data is supported by path-based querying, while other features such as transactional behaviour, code generation, and automated input validation contribute to a more effective, comprehensive, and straightforward usage.
A use-case is provided by the MICO Platform, a centralized software instance that connects autonomous multimedia extractors in a workflow-driven fashion. This leads to a rich metadata background for the inserted multimedia files, enabling them to be used in diverse scenarios as well as unlocking yet hidden semantics. For this task it was necessary to design and implement a metadata model that is able to aggregate and merge the varying extractor results under a common denominator: the MICO Metadata Model.
The results of this work allow the use case to incorporate idiomatic Semantic Web Technologies, which are then usable natively by non-Semantic Web experts. Additionally, improvements have been achieved in terms of data integration, synchronisation, integrity and validity, as well as an overall more comprehensive and richer implementation of the multimedia extractors.
Analysing the security assumptions made for the WebRTC and postMessage APIs led us to find a novel attack abusing the browsers' persistent storage capabilities. The presented attack can be executed without the website visitor's knowledge, and it requires neither browser vulnerabilities nor additional software on the browser's side. To exemplify this, we study how an attacker can use browsers to create a network for persistent storage and distribution of arbitrary data.
In our proof of concept, the total storage of the network, and therefore the space used within each browser, grows linearly with the number of origins delivering the malicious JavaScript code. Further, data transfers between browsers are not restricted by the Same Origin Policy, which allows for a unified cross-origin browser network, regardless of the origin from which the script executing the functionality is loaded.
In the course of our work, we assess the feasibility of a real-life deployment of the network by running experiments using Linux containers and browser automation tools. Moreover, we show how security mechanisms against third-party tracking, cross-site scripting and click-jacking can diminish the attack's impact, or even prevent it.
We introduce a new browser abuse scenario in which an attacker uses local storage capabilities without the website visitor's knowledge to create a network of browsers for persistent storage and distribution of arbitrary data. We describe how security-aware users can use mechanisms such as the Content Security Policy (CSP), sandboxing, and third-party tracking protection, i.e., CSP & Company, to limit the network's effectiveness. From another point of view, we also show that the upcoming Suborigin standard, if adopted, can inadvertently thwart existing countermeasures.
Direct access to system resources such as the GPU, persistent storage and networking has enabled in-browser crypto-mining. As a result, there has been a massive response by rogue actors who abuse browsers for mining without the user's consent. This trend has grown steadily over recent months, to the point where this practice, i.e., CryptoJacking, has been acknowledged as the number one security threat by several antivirus companies.
Considering this, and the fact that these attacks do not behave like JavaScript malware or other Web attacks, we propose and evaluate several approaches to detect in-browser mining. To this end, we collect information from the top 330,500 Alexa sites. Mainly, we use real-life browsers to visit sites while monitoring resource-related API calls and the browser's resource consumption, e.g., CPU.
Our detection mechanisms are based on dynamic monitoring, so they are resistant to JavaScript obfuscation. Furthermore, our detection techniques can generalize well and classify previously unseen samples with up to 99.99% precision and recall for the benign class and up to 96% precision and recall for the mining class. These results demonstrate the applicability of detection mechanisms as a server-side approach, e.g., to support the enhancement of existing blacklists.
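To make the classification step concrete, the following minimal sketch, which is not the evaluation pipeline of this work, trains an off-the-shelf classifier on hypothetical resource-related features (mean CPU load and counts of monitored API calls); the feature names and the synthetic data are illustrative assumptions only.

```python
# Illustrative sketch (not this work's pipeline): classifying sites as
# "mining" vs "benign" from resource-related features such as average CPU
# load and counts of monitored API calls. Features and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical features per visited site: [mean CPU %, WebAssembly calls,
# WebWorker spawns, hash-like function calls per second]
benign = rng.normal(loc=[15, 2, 1, 5], scale=[5, 2, 1, 3], size=(500, 4))
mining = rng.normal(loc=[85, 40, 6, 200], scale=[10, 10, 2, 50], size=(50, 4))

X = np.vstack([benign, mining])
y = np.array([0] * len(benign) + [1] * len(mining))  # 1 = mining

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["benign", "mining"]))
```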
Last but not least, we evaluated the feasibility of deploying prototypical implementations of some detection mechanisms directly on the browser. Specifically, we measured the impact of in-browser API monitoring on page-loading time and performed micro-benchmarks for the execution of some classifiers directly within the browser. In this regard, we ascertain that, even though there are engineering challenges to overcome, it is feasible and beneficial for users to bring the mining detection to the browser.
Allowing users to control access to their data is paramount for the success of the Internet of Things; therefore, it is imperative to ensure such control even when data has left the users' immediate environment, e.g. when it is shared with cloud infrastructure. Consequently, we propose several state-of-the-art mechanisms from the security and privacy research fields to cope with this requirement.
To illustrate how each mechanism can be applied, we derive a data-centric architecture providing access control and privacy guarantees for the users of IoT-based applications. Moreover, we discuss the limitations and challenges of applying the selected mechanisms to ensure access control remotely. Also, we validate our architecture by showing how it empowers users to control access to their health data in a quantified-self use case.
This doctoral thesis is dedicated to improving a linear algebra attack on the so-called braid-group-based Diffie-Hellman conjugacy problem (BDHCP). The general procedure of the attack is to transform a BDHCP into the problem of solving several simultaneous matrix equations. A first improvement is achieved by reducing the solution space of the matrix equations to matrices that have a specific structure, which we call the left braid structure. Using the left braid structure, the number of matrix equations to be solved reduces to one. Based on the left braid structure, we are further able to formulate a structure-based attack on the BDHCP: the matrix equation is transformed into a system of linear equations, and the structure of the corresponding extended coefficient matrix, which is induced by the left braid structure of the solution space, is exploited. The structure-based attack then has an empirically high probability of solving the BDHCP with significantly fewer arithmetic operations than the original attack. A third improvement of the original linear algebra attack is to use an algorithm that combines Gaussian elimination with integer polynomial interpolation and the Chinese remainder theorem (CRT), instead of fast matrix multiplication as suggested by others. The major idea here is to distribute the task of solving a system of linear equations over a giant finite field to several much smaller finite fields. Based on our empirically measured bounds for the degree of the polynomials to be interpolated and the bit size of the coefficients and integers to be recovered via the CRT, we conclude an improvement of the run-time complexity of the original algorithm by a factor of n^8 bit operations in the best case, and still n^6 in the worst case.
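The CRT recombination step can be illustrated with a small, self-contained sketch; it is not the thesis' algorithm, but it shows how an integer result computed independently modulo several small primes can be recombined into the final value.

```python
# Toy illustration (not the thesis' algorithm): recovering an integer from
# its residues modulo several small primes via the Chinese remainder theorem,
# as one would recover entries of a result computed independently over
# several small finite fields.
from math import prod

def crt(residues, moduli):
    """Combine x = r_i (mod m_i) for pairwise coprime m_i into x mod prod(m_i)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m): modular inverse (Python 3.8+)
    return x % M

primes = [10007, 10009, 10037, 10039]        # the "small" finite fields
secret = 123456789012345                     # the large integer to recover
residues = [secret % p for p in primes]      # computed independently per field

recovered = crt(residues, primes)
assert recovered == secret                   # secret < prod(primes)
print(recovered)
```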
Free digital platforms constitute one of the most important phenomena of modern times; they create value by bringing together customer groups that would not have interacted without digital technology or that could have done so only by incurring increased costs. In the free digital platform model, firms pay for the interaction with end consumers that use the digital platform for free. Extant research on two-sided markets has provided rich evidence for how digital platforms can attract enough members from both customer groups to enable the interaction between the customer groups. However, this research lacks insights into how free digital platforms can create value for their customer groups once these customer groups have joined the platform, and how they can extract this value for themselves.
To address this substantial research gap, in this dissertation, I investigate the overall research question of how activities of free digital platforms affect the value creation for their customer groups and the ability of the platform to extract this value. In a first step, I examine this value creation and value extraction by focusing on concrete activities of free digital platforms. In Study 1, I investigate how offering firms the possibility of personalizing and positioning their search ads on search engines affects consumers’ search engine click behavior. In Study 2, I examine how adapting ad positions to consumers’ previous online shopping behavior on search engines influences consumers’ click and conversion behavior. In Study 3, I investigate the impact of a review platform’s policy of tagging reviews as written on either mobile or nonmobile devices on consumers’ perceptions of review helpfulness. In a second step, in Study 4, I generalize these findings by investigating the overall impact of such customer-oriented activities on value creation for customer groups and on the extraction of this value by free digital platforms.
These four studies yield three major findings. First, free digital platforms’ activities toward one customer group always affect the value creation of the other customer group as well. Second, free digital platforms should emphasize value creation activities especially for non-paying customer groups. Third, internal, operative, and macro-environments influence the value creation and value extraction of free digital platforms.
With this dissertation, I make substantial contributions to research on two-sided markets, customer orientation, search engine advertising, and online reviews. In addition, my dissertation provides numerous actionable recommendations for managers of free digital platforms and outlines promising avenues for further research.
In three studies, this dissertation contributes to the link between, on the one hand, how a firm’s supply chain is organized and, on the other, how good its financial and stock market performance is. Study I examines the relationship between the degree of vertical integration and financial performance. Study II links the degree of vertical integration, both theoretically and empirically, to long-term stock returns. Study III concentrates on the relationship between inventory efficiency and financial performance.
An increasing number of companies report that eco-sustainable initiatives have a positive impact on firms’ economic performance and concurrently allow the combination of social and commercial goals by optimizing environmental and economic decisions simultaneously. These initiatives are considered an integral part of organizational sustainability transformations, which are a special case of multilayered, complex organizational change efforts that relate to environmental, organizational, and individual factors. Institutional logics and information systems (IS) have been shown to be two important perspectives from which to explore mechanisms and processes central to organizational sustainability transformations. Institutional logics offer a unique perspective for investigating organizational change for sustainability because they provide a new approach to organizational change that incorporates macro structures, culture, and agency to explain how actions are enabled or constrained. This allows for insights into the complex and miscellaneous interplay of external and internal determinants that govern organizational transformation processes towards sustainability. By providing insights into institutional changes of practice and behaviors, an institutional logic perspective allows for a detailed analysis of organizational transformations. Within these change processes, IS have been shown to be an efficient and pervasive tool to leverage sustainability by integrating human and technological factors. Since IS have become a key resource for the encouragement of organizational sustainability transformations, adopting an IS perspective allows for an understanding of the mechanisms and processes that enable IS to foster sustainability in organizations. Thus, this dissertation draws on four studies, investigating an institutional logic perspective as well as an IS perspective, to explore organizational sustainability transformations and to facilitate an in-depth understanding of the organizational, human, and technological factors that encourage sustainability in organizational transformations.
Information is of particular importance in online purchase decision processes. As opposed to consumers in offline markets, consumers in online markets cannot inspect the physical product to evaluate it and reduce their perceived risk. Hence, consumers in online markets are dependent upon the information that they can gather about a product in which they are interested. For this, they have two primary sources of information: product descriptions and customer reviews. Both sources affect consumers’ purchase decision processes and hence have an economic impact for consumers, shop providers and manufacturers. It is important to know how these sources of information influence a customer’s online purchase decision and how to extract the relevant information.
To examine these research objectives, several studies applying different methodological approaches have been conducted. The results are presented in this dissertation.
In the age of globalization, exponential growth and digitalization, organizations need to make faster decisions, as well as continually innovate and adapt to changing customer needs to ensure their long-term competitiveness (Kammerlander et al. 2018; Magnusson and Martini 2008). Notably, large and established companies find it difficult to keep up with start-ups and smaller companies as well as with digital transformation (Christensen et al. 2015).
Thus, organizations are increasingly providing their workforce with Enterprise Social Networks (ESNs) as intra-organizational social software platforms.
Although ESNs hold great promise for organizations and their employees (Mäntymäki and Riemer 2016), most ESN initiatives fail to leverage the intended benefits (Chin et al. 2015). This dissertation seeks to open up the black box behind this seemingly paradoxical relationship.
It comprises four studies that are guided by the following research questions:
R 1: How do different types of users (posters and lurkers) differ in their motivations for participating in ESNs? (= Paper 1)
R 2: Why do employees deliberately not use the ESN? (= Paper 2)
R 3: How can ESNs be successfully implemented and improved to overcome the challenges perceived by employees? (= Paper 3)
R 4: Does ESN usage impact individual task performance? And how can ESNs be used effectively to increase performance outcomes? (= Paper 4)
Overall, the four studies contribute to a better understanding of the non-adoption phenomenon of ESNs, provide rich insights on employees’ underlying reasons and challenges regarding ESN usage, while underscoring the potential value of ESNs.
This cumulative thesis consists of six single contributions: five independent essays and an introductory chapter. All of the contributions have been published or accepted for publication. The overarching scope of these essays is to analyze different factors accounting for the stability of post-Soviet authoritarian regimes and the obstacles that Western democracy promoters can face when dealing with autocrats. The starting point for these enquiries has been the striking inability of Western democracies, and in particular of the European Union, to encourage and to assist political transformation in the majority of the post-Soviet republics.
Securitization Theory has been applied and advanced continuously since the publication of the seminal work “Security – A New Framework for Analysis” by Buzan et al. in 1998. Various extensions, clarifications and definitions have been added over the years. Ontological and epistemological debates as well as debates about the normativity of the concept have taken place, furthering the approach incrementally and adapting it to new empirical cases. This paper aims at contributing to the improvement of the still useful framework in a more general way by amending it with well-established findings from another discipline: Psychology. The exploratory article will point out what elements of Securitization Theory might benefit most from incorporating insights from Psychology and in which ways they might change our understanding of the phenomenon. Some well-studied phenomena in the field of (Social) Psychology, it is argued here, play an important role for the construction and perception of security threats and the acceptance of the audience to grant the executive branch extraordinary measures to counter these threats: availability heuristic, loss-aversion and social identity theory are central psychological concepts that can help us to better understand how securitization works, and in which situations securitizing moves have great or little chances to reverberate. The empirical cases of the 9/11 and Paris terror attacks will serve to illustrate the potential of this approach, allowing for variances in key factors, among them: (point in) time, system of government and ideological orientation. As a hypotheses-generating pilot study, the paper will conclude by discussing further research possibilities in the field of Securitization.
In this thesis I empirically assess a wide range of constraints faced by different types of entrepreneurs in developing countries and the effectiveness of different policy interventions aimed at removing them. Thereby, the thesis contributes to the growing literature on enterprises and entrepreneurial activities in developing countries by producing valuable evidence on which constraints are faced by which types of entrepreneurs in starting, surviving, and growing, as well as on which interventions work, and for whom. The thesis hence provides important insights for policy formulation aimed at inclusive growth.
It consists of three self-contained papers, which place different emphases on three areas in which important research gaps remain. The first paper, which is presented in chapter 2, focuses on interventions aimed at removing firm-level constraints. It consists of a systematic review of evaluations of targeted programs and broader policies that intend to promote micro-, small and medium-sized enterprises (MSMEs) in developing countries, and assesses which of these interventions are effective in creating jobs. The paper presented in chapter 3 considers the death of small, mainly one-person and household businesses. Together with my co-author, I collated panel data on more than 14,000 small firms from 16 firm panel surveys conducted in 12 developing countries. We use this unique panel dataset to provide answers to the following questions: What is the rate of firm death over different horizons? Which firms are more likely to die? Why do they die and what happens afterwards? In chapter 4, I take a closer look at household-level shocks and non-separability between the household and the firm, and assess whether extending health insurance to previously uninsured households leads to increased investment in productive activities, by assessing the impact of a large national health insurance scheme, the Mexican Seguro Popular, on investment in productive agricultural and non-agricultural assets and activities in rural areas. Using panel data from the Mexican rural evaluation surveys of the Oportunidades cash transfer program, I estimate the effect of the program on out-of-pocket health care expenditures and productive assets and account for possible self-selection of households into the program, using difference-in-differences estimation, as well as a propensity score matched difference-in-differences specification.
This study examines the dynamics that lead to the revitalization of everyday life in the public spaces of Cagayan de Oro, a medium-sized urban center in Northern Mindanao, the Philippines. By employing oriental philosophies together with Western thinkers such as Henri Lefebvre, Alain Touraine and Jürgen Habermas, this study elucidates that the core of perceived, lived and conceived spaces is ‘the Subject.’ Once the Subject utilizes the public sphere to instill social action, social space is ultimately produced. Hawkers, grassroots environmental activists, street readers and artists are the social Subjects who partake in the vibrancy of public spaces. The social Subjects utilize public spaces as venues of social transformation. Thus, this study argues that the social Subjects’ role in the democratic process leads to the inclusivity of the marginalized sector in the public spaces of the city.
Due to the need for fast and energy-efficient accesses to growing amounts of data, the share and number of embedded memories inside modern microchips has been continuously increasing within the last years. Since embedded memories have the highest integration density of a fabrication technology, they pose special test challenges due to complex manufacturing defects as well as strong transistor aging phenomena. This necessitates efficient methods for detecting ever more subtle defects while keeping test costs low. This work presents novel methods and techniques for improving the efficiency of embedded memory manufacturing tests. The proposed methods are demonstrated in an industrial setting based on production-proven transistor, memory and chip models, and their benefits over the current state of the art are worked out.
While in some Eastern European countries a wave of colored revolutions challenged existing political orders, Belarus has remained largely untouched by mass protests. In Minsk, the diffusion of democratic ideas leading to the mobilization of the population meets a stable authoritarian regime. Nevertheless, the stagnating democratization process cannot be attributed only to the strong authoritarian rule and abuse of power. Indeed, Belarusian president Alexander Lukashenko still enjoys popularity among a large part of the population. Although international observers report that elections in Belarus have never been free and fair, few commentators doubt that Lukashenko would have won even in democratic elections. This evidence suggests that the regime succeeded in building a strong legitimizing basis, which has not been seriously challenged during the last two decades. This paper explores authoritarian stability in Belarus by looking at the patterns of state ideology. The government has spread state ideology effectively since the early 2000s. Ideology departments have been created in almost all state institutions. The education sector has been affected by the introduction of the compulsory course "The Fundamentals of Belarusian State Ideology" at all universities and by increasing attention to patriotic education in schools. Based on document analysis, I trace the creation of the "ideological vertical" in Belarus and focus on the issue of ideology in the education and youth policy sectors.
Previous laboratory studies on the centipede game have found that subjects exhibit surprisingly high levels of cooperation. Across disciplines, it has recently been highlighted that these high levels of cooperation might be explained by “team reasoning”, the willingness to think as a team rather than as an individual. We run an experiment with a standard centipede game as a baseline. In two treatments, we seek to induce team reasoning by making a joint goal salient. First, we implement a probabilistic variant of the centipede game that makes it easy to identify a joint goal. Second, we frame the game as a situation where a team of two soccer players attempts to score a goal. This frame increases the salience even more. Compared to the baseline, our treatments induce higher levels of cooperation. In a second experiment, we obtain similar evidence in a more natural environment–a beer garden during the 2014 FIFA Soccer World Cup. Our study contributes to understanding how a salient goal can support cooperation.
This thesis distills technical requirements for an increased probative value and data protection compliance, and maps them onto cryptographic properties for which it constructs provably secure and especially private malleable signature schemes (MSS). MSS are specialised digital signature schemes that allow the signatory to authorize certain subsequent modifications, which will not negatively affect the signature verification result.
Legally, regulations such as European Regulation 910/2014 (eIDAS), the follow-up to the longstanding Directive 1999/93/EC, describe the requirements in technology-neutral language. eIDAS states that, when a digital signature meets the full requirements, it becomes a qualified electronic signature and then it “[...] shall have the equivalent legal effect of a handwritten signature [...]” [Art. 25 Regulation 910/2014]. The question of what legal effect this has with regard to the probative value that is assigned is actually not determined in EU Regulation 910/2014 but in European member state law. This thesis concentrates in its analysis on the German Code of Civil Procedure (ZPO), which is detailed in this respect. Following the ZPO, a signature awards the signed document at least a high probative value of prima facie evidence. For signed documents of official authority, the ZPO's statutory rules even award evidence with a legal presumption of authenticity. This increased probative value is also awarded to electronic documents bearing electronic signatures when those conform to the eIDAS requirements. The requirements centre around the technical security goals of integrity and accountability. Technical mechanisms use cryptographic means to detect the absence of unauthorized modifications (integrity) and allow the signed document's signatory to be authenticated (accountability).
However, the specialised malleable signature schemes’ main advantage is a cryptographic property termed privacy: An authorized subsequent modification will protect the confidentiality of the modified original. Moreover, the MSS will retain a verifiable signature if only authorized modifications were carried out. If these properties are reached with provable security the schemes are called private malleable signature schemes. This thesis analyses two forms of MSS discussed in existing literature: Redactable signature schemes (RSS) which allow subsequent deletions, and sanitizable signature schemes (SSS) which allow subsequent edits. These two forms have many application scenarios: A signatory can delegate that a later redaction might take place while retaining the integrity and authenticity protection for the still remaining parts. The verification of a signature on a redacted or sanitized document still enables the verifying entity to corroborate the signatory’s identity with the help of flanking technical and organisational mechanisms, e.g. a trusted public key infrastructure. The valid signature further corroborates the absence of unauthorized changes, because the MSS is still cryptographically protecting the signed document from undetected unauthorized changes inflicted by adversaries. Due to the confidentiality protection for the overwritten parts of the document following from cryptographic privacy the sanitization and redaction can be used to safeguard personal data to comply with data protection regulation or withhold trade-secrets.
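As a purely illustrative toy, and not one of the provably secure constructions developed in this thesis, the following sketch shows the basic redaction idea: the signature covers per-block hashes, so a block can later be removed and replaced by its hash without invalidating the signature. An HMAC with a shared key stands in for a real digital signature here; real RSS additionally randomise the hashed blocks in order to achieve the privacy property discussed above.

```python
# Toy sketch of the redaction idea behind RSS (not a provably secure scheme):
# the "signature" covers per-block hashes, so a block can be replaced by its
# hash after signing and the signature still verifies. HMAC with a shared key
# stands in for a real digital signature purely for brevity.
import hashlib, hmac, os

KEY = os.urandom(32)  # stand-in for the signatory's signing key

def h(block: bytes) -> bytes:
    return hashlib.sha256(block).digest()

def sign(blocks):
    digest = hashlib.sha256(b"".join(h(b) for b in blocks)).digest()
    return hmac.new(KEY, digest, hashlib.sha256).digest()

def verify(parts, signature):
    # each part is either ("clear", block) or ("redacted", block_hash)
    hashes = [h(b) if kind == "clear" else b for kind, b in parts]
    digest = hashlib.sha256(b"".join(hashes)).digest()
    expected = hmac.new(KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

blocks = [b"name: Alice", b"diagnosis: flu", b"date: 2016-01-01"]
sig = sign(blocks)

# Redact the sensitive middle block: keep only its hash.
redacted = [("clear", blocks[0]), ("redacted", h(blocks[1])), ("clear", blocks[2])]
assert verify(redacted, sig)   # the signature still verifies after redaction
```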
The research question is: Can a malleable signature scheme be private enough to comply with EU data protection regulation and at the same time fulfil the integrity protection legally required in the EU to achieve a high probative value for the signed data?
Answering this requires understanding the protection requirements with respect to accountability and integrity rooted in Regulation 910/2014 and related legal texts. This thesis has analysed the previous Directive 1999/93/EC as well as the German SigG and SigVO and UK and US laws. Besides that, legal texts, laws and regulations for the protection requirements of personal data (or PII) have been analysed to distill the confidentiality requirements, e.g. the German BDSG or EU Regulation 2016/679 (GDPR). Moreover, an answer to the research question entails understanding the relevant difference between regular digital signature schemes, like RSASSA-PSS from PKCS-v2.2 [422], which are legally accepted mechanisms for generating qualified electronic signatures, and MSS, for which the legal status was completely unknown before this thesis. This is especially relevant as MSS allow the authorized entity to adapt the signature, such that it is valid after the authorized modification, without the knowledge or use of the signatory's signature generation key. On verification of an MSS, the verifying entity still sees a valid signature technically appointing the legal signatory as the origin of a document, which might, however, have undergone authorized modifications after the signature was applied.
The thesis documents the results achieved in several domains:
1. Analysis of legal requirements towards integrity protection for an increased probative value and towards confidentiality protection for use as a privacy-enhancing technique to comply with data protection regulation.
2. Definition of a suitable terminology for integrity protection to capture (a) the differences between classical and malleable signature schemes, (b) the subtleties among existing MSS, as well as (c) the legal requirements.
3. Harmonisation of existing MSS and their cryptographic properties and the analysis of their shortcomings with respect to the legal requirements.
4. Design of new cryptographic properties and their provably secure cryptographic instantiations, i.e., the thesis proposes nine new cryptographic constructions accompanied by rigorous proofs of their security with respect to the formally defined cryptographic properties.
5. Final evaluation of the increased probative value and data-protection level achievable through the eight proposed cryptographic malleable signature schemes.
The thesis concludes that the detection of any subsequent modification (authorized and unauthorized) is of paramount legal importance in order to meet EU Regulation 910/2014. Further, this thesis formally defined a public form of the legally requested integrity verification which allows the verifying entity to corroborate the absence of any unauthorized modifications with a valid signature verification while simultaneously detecting the presence of an authorized modification, if at least one such authorized modification has subsequently occurred. This property, called non-interactive public accountability (PUB), has been formally defined in this thesis, was published and has already been adopted by the academic community. It was carefully conceived not to negatively impact a baseline level of privacy protection, as non-interactive public accountability had to destroy an existing strong privacy notion of transparency, which was identified as a hindrance to legal equivalence arguments. With RSS and SSS constructions that meet these properties, the thesis can give a positive answer to the research question:
Private MSS can reach a level of integrity protection and guarantee a level of accountability comparable to that of technical mechanisms that are legally accepted for generating qualified electronic signatures, giving an increased probative value to the signed document, while at the same time protecting the overwritten contents' confidentiality.
Performance optimization of stencil codes requires data locality improvements. The polyhedron model for loop transformation is well suited for such optimizations with established techniques, such as the PLuTo algorithm and diamond tiling. However, in the domain of our project ExaStencils, namely stencil codes, it fails to yield optimal results. As an alternative, we propose a new, optimized, multi-dimensional polyhedral search space exploration and demonstrate its effectiveness: we obtain better results than existing approaches in several cases. We also propose how to specialize the search for the domain of stencil codes, which dramatically reduces the exploration effort without significantly impairing performance.
Smart Grids integrate currently isolated power and communications networks, while introducing several new technologies on the hardware and software sides. One of the most important ingredients is the potential for demand-response programs, which offer the possibility of sending instructions to consumers to adapt their power consumption over a certain period of time. However, high-frequency data collection exposes consumers’ usage behaviors, leading to security and privacy challenges for Smart Grids.
In this thesis, three cryptographic schemes are constructed for different demand-response programs. In the mandatory incentive-based demand-response program, privacy preservation depends on the power consumption of consumers. An anonymous authentication scheme is constructed for overload auditing and privacy preservation. Consumers’ identities are anonymous during normal operation. The operation center defines an acceptable consumption threshold at times of power shortage. Consumers must follow the instruction and curtail their power consumption to meet the threshold. If they do so, the consumers keep their anonymity, while disobedient consumers, whose power consumption exceeds the threshold, can be identified. Security analysis demonstrates that the constructed anonymous authentication scheme is secure in the random oracle model.

In the voluntary incentive-based demand-response program, consumers are categorized as either obedient or disobedient according to their consumption curtailment. Consumers utilize a homomorphic encryption algorithm to encrypt their usage and report the ciphertexts to the operation center periodically. At a time of grid instability, the obedient consumers reduce their consumption and prove their curtailment by using a range proof. Both the usage reports and the proofs from obedient consumers concerning their consumption are reported without leaking private information. In order to achieve the real-time requirement, a security model is proposed and a batch verification algorithm is constructed, which is proved to be secure in the defined oracle model.

Apart from reward and penalty detection in demand-response programs, theft detection is also an important requirement in Smart Grids. To achieve theft detection, this thesis employs dynamic k-times anonymous authentication and blind signatures to create an efficient theft detection mechanism in the prepaid card system, where consumers pay for their consumption in advance and obtain credentials. A consumer sends the credentials anonymously and obtains corresponding credentials during times of consumption. If a thief tries to send reused credentials to steal electricity, his anonymity will be revoked. Finally, this thesis proves that the proposed mechanism finds the real identities of power thieves without sacrificing the privacy of honest consumers under the random oracle model.
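To illustrate the periodic usage reporting used in the voluntary program described above, the following toy sketch uses a Paillier-style additively homomorphic encryption with deliberately tiny, insecure parameters; it is not the scheme constructed in this thesis, and the readings are made-up values.

```python
# Toy Paillier-style additively homomorphic encryption (insecure parameter
# sizes, purely illustrative): each consumer encrypts its usage reading, and
# the operation center learns only the aggregate after decrypting the product
# of the ciphertexts. This sketches the reporting idea, not the thesis' scheme.
import random
from math import gcd

p, q = 293, 433                      # toy primes; real deployments use large ones
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                 # valid because the generator g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

readings = [42, 17, 23]              # individual consumption readings (kWh)
ciphertexts = [encrypt(m) for m in readings]

aggregate = 1
for c in ciphertexts:                # homomorphic addition = ciphertext product
    aggregate = (aggregate * c) % n2

assert decrypt(aggregate) == sum(readings)
print(decrypt(aggregate))            # 82, without decrypting any single reading
```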
The Internet of Things (IoT) is a network of computational services, devices, and people, which share information with each other. In the IoT, inter-system communication is possible and human interaction is not required. IoT devices are penetrating home and office building environments. According to current estimates, about 35 billion IoT devices will be connected by the year 2021. In the IoT business model, value comes from integrating devices into applications, e.g., home and office automation. In general, an IoT application associates different information sources with actions which can modify the environment, e.g., change the room’s temperature, inform a person, e.g., send an e-mail, or activate other services, e.g., buy milk on-line.
In this thesis, we focus on the commissioning and verification processes of IoT devices used in building automation applications. Within a building’s lifespan, new devices are added, interior spaces are refurbished, and faulty devices are replaced. All of these changes are currently made manually. Furthermore, consider that a context-aware Building Management System (BMS) is an IoT application, which measures direct-context from the building’s sensors to characterize environmental conditions, user locations, and state. Additionally, a BMS combines sensor information to derive inferred-context, such as user activity. Similar to IoT devices, inferred-context instances have to be created manually. As the number of devices and inferred-context instances increases, keeping track of all associations becomes a time-consuming and error-prone task.
The hypothesis of the thesis is that users who interact with the building create use-patterns in the data, which describe functional relations between devices and inferred-context instances, e.g., which desk-movement sensor is used to infer desk-presence and controls which overhead light; additionally, use-patterns can also provide structural relations, e.g., the relative position of spatial sensors. To test the hypothesis, this thesis presents an extension to the new IoT class rule programming paradigm, which simplifies rule creation based on classes. The proposed extension uses a semantic compiler to simplify the device and inferred-context associations. Using direct-context information and template classes, the compiler creates all possible inferred-context instances. Buildings using context-aware BMSs will have a dynamic response to user behaviour, e.g., the illumination required for computer work is provided by adjusting blinds or increasing the dim setting of overhead ceiling lamps. We propose a rule mining framework to extract use-patterns and find the functional and structural relationships between devices. The rule mining framework uses three stages: (1) event extraction, (2) rule mining, and (3) structure creation. The event extraction combines the building's data into a time series of device events. Then, in the rule mining stage, rules are mined from the time series using the established temporal interval tree association rule learner. Additionally, we propose a rule extraction algorithm for spatial sensors' data. The algorithm is based on statistical analysis of user transition times between adjacent sensors. We also introduce a new rule extraction algorithm based on increasing belief. In the last stage, structure creation uses the extracted rules to produce device association groups, a hierarchical representation of the building, or the relative location of spatial sensors. The proposed algorithms were tested using a year-long installation in a living lab consisting of a four-person office, a 12-person open office, and a meeting room. For the spatial sensors, four locations within public buildings were used: a meeting room, a hallway, a T-crossing, and a foyer. The recording times range from two weeks to two months depending on scenario complexity.
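As a simplified stand-in for the rule mining stage, and not the temporal interval tree association rule learner actually used, the following sketch counts how often one device's event is followed by another device's event within a short time window and reports high-confidence rules; the event data are hypothetical.

```python
# Simplified stand-in for the rule-mining stage (not the thesis' learner):
# count how often an event of device A is followed by an event of device B
# within a small time window and report rules above a confidence threshold.
from collections import defaultdict

# (timestamp in seconds, device id) events, sorted by time; hypothetical data
# from a desk-movement sensor and the overhead light it should control
events = [(10, "desk_motion"), (12, "overhead_light"),
          (95, "desk_motion"), (97, "overhead_light"),
          (200, "hall_motion"), (220, "desk_motion"), (223, "overhead_light")]

WINDOW = 5.0       # seconds within which B must follow A
MIN_CONF = 0.8

follows = defaultdict(int)   # (A, B) -> count of "B follows A within WINDOW"
occurs = defaultdict(int)    # A -> number of A events

for i, (t_a, dev_a) in enumerate(events):
    occurs[dev_a] += 1
    seen = set()
    for t_b, dev_b in events[i + 1:]:
        if t_b - t_a > WINDOW:
            break
        if dev_b != dev_a and dev_b not in seen:
            follows[(dev_a, dev_b)] += 1
            seen.add(dev_b)

for (a, b), count in follows.items():
    confidence = count / occurs[a]
    if confidence >= MIN_CONF:
        print(f"{a} -> {b} within {WINDOW}s (confidence {confidence:.2f})")
```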
We found that user-generated patterns appear in building data. The rule mining framework produced structures that represent functional and spatial relationships of the building's devices and provide sufficient information to automate maintenance tasks, e.g., automatic device naming. Furthermore, we found that environmental changes are also a source of device data patterns, which provide additional associations. For example, using the framework we found the façade group for exterior light sensors. The façade group can be used to automatically find an alternative signal source to replace broken outdoor light sensors. Finally, the rule mining framework successfully retrieved the relative location of spatial sensors in all locations but the foyer.
Data management is a cornerstone for any kind of information system - including the aerospace and aviation sector. In contrast to conventional domains, software development in the avionics domain must adhere to a legally binding certification process, called qualification. The success of the process depends on compliance with international standards, such as DO-178: Software Considerations in Airborne Systems and Equipment Certification. From a software developer's perspective, challenges arise in terms of methods and tools. Techniques that have a potential impact on the deterministic and predictable execution of avionics software are prohibited.
The objective of this thesis' research is to develop a scalable method to realize data-management for multi-variant avionics software under the restrictions and constraints of the domain. Since avionics software faces very long life cycles (up to 75 years), a particular focus is placed on maintenance and evolution. Based on the insights gained in a semi-structured interview at Airbus Helicopters, industrially established approaches to implementing qualified avionics software are first assessed and then compared with respect to their strengths and weaknesses for data-management. As a result, a novel development approach is proposed, combining model-based techniques and product-line technology to derive the source code of highly specific data-management variants, as well as the majority of assets required for the qualification process, from a declarative system specification.
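The generative idea can be sketched as follows; this is a hypothetical toy and not the framework applied at Airbus Helicopters: a declarative record specification is turned into the source code of a data-management artefact, here a plain C struct.

```python
# Hypothetical toy (not the industrial framework): deriving source code for a
# data-management variant from a declarative specification. A record
# specification is turned into a fixed-size C struct, mimicking the
# generative, model-based idea described above.
spec = {
    "name": "NavRecord",
    "fields": [("latitude", "double"), ("longitude", "double"), ("valid", "uint8_t")],
}

def generate_c_struct(record_spec):
    lines = ["typedef struct {"]
    for field_name, c_type in record_spec["fields"]:
        lines.append(f"    {c_type} {field_name};")
    lines.append(f"}} {record_spec['name']};")
    return "\n".join(lines)

print(generate_c_struct(spec))
```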
In order to demonstrate the practicability of the approach in industry, a framework is presented that is deployed and applied at Airbus Helicopters to generate qualifiable data-management components for the variants of the NH90 helicopter. The maintainability is shown by means of a domain-specific optimization, in which the model-based and generative approach is used to establish safe memory overlays at compile time. Key findings reveal a substantially reduced memory footprint (29.1% in the case of a real-world scenario), as well as a significantly facilitated implementation process, which would not be accomplishable using conventional methods for software development in the avionics domain.
German state elections are the focus of this work due to the decreasing importance of the "catch-all parties" and the rise of the AfD in 2013. As small parties like the AfD first reached the 5% threshold in state parliaments (e.g. in the Saxony state election of 2014), state elections can be used as barometer elections for the national ones. Further, state elections fill the gap within the four-year national election cycle and provide additional information for the national election.
The aim of this thesis is to forecast state elections based on polling data from different institutes. Despite errors occurring in polls, such as measurement or sampling errors, which are also discussed in this work, forecasting is done with aggregate models depending on short-term polling data. Irregular polling data have to be customized to generate daily data in order to apply parametric regression-based models. To forecast single vote shares in multi-party elections, the range of methods varies from basic methods like averaging over nonparametric regression-based methods to dynamic linear models.
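A minimal sketch of this preprocessing step, assuming toy polling values and not the dissertation's actual models, could look as follows: same-day polls are averaged and the resulting series is expanded to a daily index by interpolation, so that regression-based or dynamic models can then be fitted.

```python
# Minimal sketch (toy data, not the dissertation's models): turning irregular
# polls from different institutes into a daily vote-share series.
import pandas as pd

polls = pd.DataFrame({
    "date": pd.to_datetime(["2014-07-01", "2014-07-10", "2014-07-10", "2014-07-24"]),
    "party_share": [39.0, 40.5, 38.5, 41.0],   # hypothetical poll values in percent
})

daily = (polls.groupby("date")["party_share"].mean()   # average same-day polls
              .asfreq("D")                              # expand to a daily index
              .interpolate(method="time"))              # fill gaps between polls

print(daily.tail())
print("simple forecast:", daily.rolling(7, min_periods=1).mean().iloc[-1])
```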
The Linear Ordering problem consists in finding a total ordering of the vertices of a directed graph such that the number of backward arcs, i.e., arcs whose heads precede their tails in the ordering, is minimized. A minimum set of backward arcs corresponds to an optimal solution to the equivalent Feedback Arc Set problem and forms a minimum Cycle Cover.
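For illustration, a small helper (with an assumed example graph) makes the objective concrete: given a total ordering of the vertices, it returns the backward arcs, which form a feedback arc set.

```python
# Small helper illustrating the objective defined above: given a total
# ordering of the vertices, list the backward arcs, i.e., arcs whose head
# precedes its tail in the ordering. The example graph is hypothetical.
def backward_arcs(arcs, ordering):
    position = {v: i for i, v in enumerate(ordering)}
    return [(u, v) for (u, v) in arcs if position[v] < position[u]]

# Directed graph with a single cycle a -> b -> c -> a
arcs = [("a", "b"), ("b", "c"), ("c", "a")]

print(backward_arcs(arcs, ["a", "b", "c"]))   # [('c', 'a')]: one backward arc,
                                              # which is also a feedback arc set
```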
Linear Ordering and Feedback Arc Set are classic NP-hard optimization problems and have a wide range of applications. Whereas both problems have been studied intensively on dense graphs and tournaments, not much is known about their structure and properties on sparser graphs. There are also only a few approximation algorithms that give performance guarantees, especially for graphs with bounded vertex degree.
This thesis fills this gap in multiple respects: We establish necessary conditions for a linear ordering (and thereby also for a feedback arc set) to be optimal, which provide new and fine-grained insights into the combinatorial structure of the problem. From these, we derive a framework for polynomial-time algorithms that construct linear orderings which adhere to one or more of these conditions. The analysis of the linear orderings produced by these algorithms is especially tailored to graphs with bounded vertex degrees of three and four and improves on previously known upper bounds. Furthermore, the set of necessary conditions is used to implement exact and fast algorithms for the Linear Ordering problem on sparse graphs. In an experimental evaluation, we finally show that the property-enforcing algorithms produce linear orderings that are very close to the optimum and that the exact representative also delivers solutions in a timely manner in practice.
As an additional benefit, our results can be applied to the Acyclic Subgraph problem, which is the complementary problem to Feedback Arc Set, and provide insights into the dual problem of Feedback Arc Set, the Arc-Disjoint Cycles problem.
This paper aims at documenting the results of a study dealing with the relationship between in-group and out-group attitudes. The research enquires into the question whether nationalistic, patriotic, and anti-immigration attitudes are only correlated with each other or whether there is a causal relationship. The presentation of the Mplus output together with the correlation and covariance matrices will allow readers to check the results and, if they wish, to test alternative models.
Three essays on the welfare and social impacts of external shocks and public policies in Mexico
(2018)
In this thesis I examine the welfare and social consequences associated with one economic shock and two policy interventions in Mexico. I use non-experimental approaches that exploit increased data availability as well as in-depth knowledge of the institutional background and show how germane extensions in the methodology shed light on previously unaccounted consequences and help better identify affected households. In particular, I quantify the role of quantity substitution effects in the alleviation of welfare losses, I estimate the effect of quality substitution on the efficiency of a taxation policy, and I document the possible consequences of an aggressive policy intervention on organized and property crime.
Data is an important competitive resource in digital online markets. As a result, the access and availability of data can be the basis for a competitive advantage. This thesis analyzes the role and competitive effects of data in digital markets and contributes to an improved understanding of data as a (potential) basis for market power. Thereby, the thesis also contributes to the ongoing policy debate on how to safeguard a fair and open competitive environment for internet-based digital (i.e., online) services as well as traditional services.
In doing so, Study 1 surveys the literature and discusses (i) the challenges that are associated with assessing market power in digital markets, (ii) the challenges in creating a level playing field in digital markets, e.g., by harmonizing regulatory obligations for online and offline services, and (iii) the vital role of data and data protection in the context of data-driven business models. Study 2 and Study 3 focus on the competitive effects of transferring data between online competitors. The study on social logins (Study 2) highlights the strategic effects as well as welfare implications if competing online services deliberately and voluntarily decide to share user and usage data. Whereas Study 2 abstracts from the user’s decision how much data to provide to an online service, Study 3 focuses on the amount of data that firms require for their services and that is provided by users. In particular, Study 3 investigates the competitive and welfare effects of a new fundamental consumer right: The right to data portability. This right is part of the General Data Protection Regulation (GDPR) which becomes effective in May 2018 for all online services available to users in the European Union and allows users to transfer their personal data from one online service to another (competing) online service (c.f., GDPR Article 20). In this context, the study examines the firms’ strategic reactions to the introduction of such a right and identifies the ensuing market outcomes as well as policy and managerial implications.
In conclusion, this doctoral thesis contributes to an improved understanding of (i) the competitive effects that arise from data as an economic good or valuable asset for digital services, and (ii) the aspects that may constitute market power in digital markets. Moreover, from a policy perspective, the thesis can be understood as a theoretically founded research project that (i) informs which market failures may arise in the context of digital (data-driven) markets and that (ii) highlights the peculiarities that need to be considered to define appropriate legal requirements in order to establish a level playing field between online services and traditional (established) services, but also between competing online services. Therefore, the thesis also contributes to the discussion on whether and how dominant online platforms should and can be regulated.
Advanced driver assistance systems play an important role in increasing the safety on today's roads. The knowledge about the other vehicles' positions is a fundamental prerequisite for numerous safety critical applications, making it possible to foresee critical situations, warn the driver or autonomously intervene. Forward collision avoidance systems, lane change assistants or adaptive cruise control are examples of safety relevant applications that require an accurate, continuous and reliable relative position of surrounding vehicles.
Currently, the positions of surrounding vehicles are estimated by measuring the distance with, e.g., radar, laser scanners or camera systems. However, all these techniques have limitations in their perception range, as all of them can only detect objects in their line of sight. The limited perception range of today's vehicles can be extended in the future by using cooperative approaches based on Vehicle-to-Vehicle (V2V) communication.
In this thesis, the capabilities of cooperative relative positioning for vehicles are assessed in terms of accuracy, continuity and reliability. A novel approach is presented in which Global Navigation Satellite System (GNSS) raw data is exchanged between the vehicles. Vehicles use GNSS pseudorange and Doppler measurements from surrounding vehicles to estimate the relative positioning vector in a cooperative way. This approach is shown to outperform the subtraction of absolute positions, as it is able to effectively cancel out errors common to both GNSS receivers. This is modeled theoretically and demonstrated empirically using simulated signals from a GNSS constellation simulator.
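A toy numerical example, with made-up values rather than data from this work, illustrates why exchanging raw measurements helps: differencing the pseudoranges that two nearby receivers observe to the same satellite removes the errors common to both, such as the satellite clock offset and most of the atmospheric delay (receiver clock offsets are omitted here for brevity and have to be handled separately).

```python
# Toy numeric illustration (made-up values) of common-error cancellation by
# between-receiver differencing of pseudoranges to the same satellite.
true_range_A = 20_000_123.0   # metres, receiver A to satellite
true_range_B = 20_000_145.0   # metres, receiver B to the same satellite

common_error = 37.5           # satellite clock + atmospheric error, same for A and B
noise_A, noise_B = 0.4, -0.3  # receiver-specific measurement noise

pseudorange_A = true_range_A + common_error + noise_A
pseudorange_B = true_range_B + common_error + noise_B

single_difference = pseudorange_A - pseudorange_B
print(single_difference)               # about -21.3 m
print(true_range_A - true_range_B)     # -22.0 m: the common error is gone,
                                       # only the small receiver noise remains
```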
In order to cope with GNSS outages and to have a sufficiently good relative position estimate even in strong multipath environments, a sensor fusion approach is proposed. In addition to the GNSS raw data, inertial measurements from speedometers, accelerometers and turn rate sensors of each vehicle are exchanged over V2V communication links. A Bayesian approach is applied to consider the uncertainties inherent to each of the information sources. In a dynamic Bayesian network, the temporal relationship of the relative position estimate is predicted by using relative vehicle movement models.
Real-world measurements in highway, rural and urban scenarios are also performed within the scope of this work to demonstrate the performance of the cooperative relative positioning approach based on sensor fusion. The results show that the relative position of another vehicle with respect to the ego vehicle can be estimated with sub-meter accuracy in highway scenarios. Here, good reliability and 90% availability with an uncertainty of less than 2.5 m are achieved. In rural environments, drives through forests and towns are correctly bridged with the support of on-board sensors. In an urban environment, the difficult estimation of the ego vehicle's heading has a major impact on the relative position estimate, yielding large errors in its longitudinal component.
Ongoing advancements in technology have significantly shaped the marketing landscape over the course of the last decades. Consequently, new technology-driven opportunities and challenges for marketing research and practice emerge. Those urge the need to redefine firm-based value propositions, to adapt business models, to place significant emphasis on topics such as innovation, design and strategy, but also to develop new knowledge and skill sets. In response to those changes, this dissertation addresses two major developments and related opportunities and challenges – namely, digitized retail environments and innovative, complex business models – in three different essays. Thereby, this dissertation contributes to a better understanding of the evolving relationship between marketing and technology.
Essay 1 and Essay 2 address the increasing digitization of physical retail environments. Retailers embrace a plethora of retail technologies to facilitate activities and processes for creating, communicating and delivering value to consumers and to consequently improve physical retail stores.
Essay 1 provides an integrated literature review of digital retail technologies’ impact on consumer behavior at physical retail stores. Essay 1 adopts a shopping-cycle-based framework structured around distinct phases of consumer behavior to delineate and summarize findings from existing literature on the behavioral effects of retail technologies embraced by retailers. With this framework, Essay 1 identifies specific gaps in extant literature relative to currently embraced retail technology practices as well as emerging trends that have the potential to reshape the retailing environment. Subsequently, an extensive research agenda is proposed to advance the next generation of knowledge development. With its integrated literature review and research agenda, Essay 1 contributes to research in the retail area, such as research on customer experience, shopper marketing and specifically on the role of technology in retailing.
Essay 2 analyzes consumers’ response to one particular innovative retail technology and thus addresses open questions from the research agenda developed in Essay 1. Essay 2 analyzes the perceptions and consequences of attribute-based personalized advertising in physical retail stores, where other people are present and can see the personalized content. Essay 2 shows when and how the social presence of others impacts consumers’ attitudes, behavioral intentions, and emotions when they are exposed to personalized advertising. The findings of two experimental studies provide evidence that the presence of others does not influence consumer response per se, but that it interacts with personalization. Further, the results show that consumers’ negative response to personalized ads in the social presence of others is mediated by embarrassment and moderated by consumers’ congruity state (the extent to which the ad is consistent with the consumer’s self-concept). These findings offer new theoretical insights into how consumers respond to personalized advertising in the social presence of others, and thus advance marketing research on personalized advertising, digital displays, shopper marketing as well as research on customer experience. Further, the results disclose meaningful managerial implications for the application of new consumer tracking technology.
Essay 3 addresses opportunities and challenges related to innovative, complex business models resulting from technological advancements. With customer orientation in free e-services, Essay 3 analyzes a strategically highly relevant phenomenon which has thus far been neglected by prior research. Free e-services are characterized by the superiority of free customers and interdependencies between free and paying customers. Essay 3 investigates how free e-service providers respond to those particularities in their customer orientation activities. Results from one qualitative and one quantitative study uncover that only free-born providers, which from the outset strategically committed themselves to the free business model, possess customer orientation capabilities that match the particularities of free e-services. They use customer orientation toward one customer group to increase the satisfaction of the other simultaneously and thus reach their financial goals. In contrast, laggards, which started with a non-free business model before launching their free e-service, do not exploit the full potential of customer orientation, as they focus too much on the paying customer group. These findings offer new theoretical insights for research on customer orientation, research on two-sided markets as well as on stakeholder marketing. Moreover, Essay 3 provides valuable and actionable insights for managers of free e-services.
Taking a broader perspective, this dissertation advances marketing knowledge on technological innovation, new types of consumer data, strategy shifts, as well as new firm capabilities and managerial skill sets required in an age of disruption within the virtual, but also the physical world. By using a variety of methods including conceptual work, experiments, qualitative interviews as well as survey research, and furthermore by ensuring high practical relevance, this dissertation adds important perspectives to the evolving relationship between marketing and technology.
The demands of a career in competitive sports can lead to chronic stress perception among athletes if there is a non-conformity of requirements and available coping resources. The Trier Inventory for Chronic Stress (TICS) (Schulz et al., 2004) is said to be thoroughly validated. Nevertheless, it has not yet been subjected to a confirmatory factor analysis. The present study aims (1) to evaluate the factorial validity of the TICS within the context of competitive sports and (2) to adapt a short version (TICS-36). The total sample consisted of 564 athletes (age in years: M = 19.1, SD = 3.70). The factor structure of the original TICS did not adequately fit the present data, whereas the short version presented a satisfactory fit. The results indicate that the TICS-36 is an economical instrument for gathering interpretable information about chronic stress. For assessment in competitive sports with TICS-36, we generated overall and gender-specific norm values.
Located at the interface of land and sea, Caribbean mangroves frequently experience severe disturbances by hurricanes, but in most cases storm-impacted mangrove forests are able to regenerate. How exactly regeneration proceeds, however, is still a matter of debate: does regeneration, owing to the specific site conditions, follow a true auto-succession in which exactly the same set of species that was present prior to the disturbance drives regeneration, or do different trajectories of regeneration exist? Considering the fundamental ecosystem services mangroves provide, a better understanding of their recovery is crucial. The Honduran island of Guanaja offers ideal settings for the study of regeneration dynamics of storm-impacted mangrove forests. The island was hit in October 1998 by Hurricane Mitch, one of the most intense Atlantic storms of the past century. Immediately after the storm, 97% of the mangroves were classified as dead. In 2005, long-term monitoring of the regeneration dynamics of the mangroves of the island was initiated, employing permanent line transects at six different mangrove localities around the island, which were revisited in 2009 and 2016. Due to the pronounced topography of the island, different successional pathways emerge depending on the severity of the previous disturbance.
The focus of this study is a single manuscript in a Tai Nuea village near Mueang Sing in northwestern Laos, copied in 1935 and entitled Pukthanusati (Pali buddhānussati) ‘the Recollection of the Buddha.’ It is written in the Tai Nuea language and Lik Tho Ngok or ‘Bean Sprout’ script, and is in the form of a jātaka or narrative story of a former lifetime of the Buddha, the most popular genre for recitation. The thesis examines the lay manuscript culture of which it is a part and the language and orthography of its contents, and then provides a phonemic transcription and annotated English translation of the text, together with a complete glossary of terms and images of the manuscript.
The detailed study of this one manuscript is used as an entry point for a broader investigation of Lik manuscript culture as found in Mueang Sing today, including the distinct roles of the Lik and Tham orthographies, scribal vocation, manuscript production, uses and functions, and the contents of the texts. The Pukthanusati text then serves as the basis for an examination of phonological aspects of language use in the recitation of Lik manuscript literature and the historical context of the dialect’s phonemes, as well as of Burmese and Indic forms occurring as loanwords. The Lik Tho Ngok orthography is also placed in context through an overview of its historical development, including possible prototypes and phonological influences, the twentieth-century reforms of some traditional orthographies, and the question of orthographic depth. The Lik Tho Ngok orthography as found in the manuscript studied is then described in detail, with tables and accompanying notes illustrating the inventories of consonant and vowel glyphs, consonant clusters, ligatures and special orthographic forms, the use of subscripts and superscripts, numerals, punctuation, and the Tai Nuea spelling system. The phonemic transcription and annotated English translation of the text illustrate the rhyming structure and other features of specialised language use in Lik manuscript culture.
The Tai Nuea and a number of closely related Tai groups have generally been overlooked in the field of Buddhist Studies. The study of this manuscript culture therefore contributes to our understanding of local practices on the northern periphery of Theravāda Buddhist influence in mainland Southeast Asia, in addition to responding to an urgent need to examine and document this endangered scribal tradition and its specialised use of language and orthography.
An investigative study of newly released archive material representing six decades of Royal Court Theatre history, including analyses of the productions of John Osborne’s Look Back in Anger, Edward Bond’s Early Morning, Caryl Churchill’s Cloud Nine, Jim Allen’s Perdition, Sarah Kane’s Blasted, and debbie tucker green’s Stoning Mary.
Sixty years after the first season of the English Stage Company was launched at the Royal Court Theatre, there is hardly a theatre maker or theatre scholar in the world who has not heard of this first writers’ theatre in Britain: it famously put the angry young Jimmy Porter on stage, it helped put an end to stage censorship in Britain, and it has through the years been one of the most important engines for new writing in the English-speaking world. The who’s who of British playwriting started off, visited, or ended up at “the most important theatre in Europe” (New York Times).
But no matter how big the names attached to a theatre are, it is the everyday battles of budgets, politics and compromises that really are a theatre’s history. Like a detective story, this first independent study of the Royal Court delves deep into the newly opened Royal Court Archives to fully bring to light some of the most controversial decisions, struggles and compromises that shaped the Royal Court.
This work illustrates data protection reform approaches in Africa using a comparative international legal approach. The research uses Tanzania and Senegal as the primary case studies and France, the United Kingdom and Germany as secondary case studies to illustrate how Europe reformed data protection regimes through the transposition of the EU Data Protection Directive of 1995.
Chapter one introduces the work, explaining the forces pushing towards data protection regulations and their basis. Chapter two provides a ‘back-to-back’ comparison of three countries (France, Germany and the United Kingdom) against the 1995 Data Protection Directive. The idea behind this chapter is to draw a picture of how legal culture and pre-existing notions of the right to privacy inform data protection legal reforms and determine the nature, contents, context and interpretation of the adopted data protection regime. Eventually, all these aspects affect the nature and extent of protection offered, regardless of the substance of the law adopted.
Chapter three gives a narrative account of the nature and perceptions of the right to privacy in Africa and of how these may affect data protection reforms in Africa. Along the same lines, African customary legal systems and practices are explained, giving the reader a picture of the overall nature of the African systems that make up African legal culture. This overview of African privacy perceptions and legal systems is necessary for assessing the workability of any data protection regime to be adopted in Africa, which in effect answers the first research question. The chapter draws its rationale from chapter two. By understanding African perceptions of privacy and African legal culture, one can predict the content and context of the reforms and perhaps how the judiciary might interpret the laws based on local perceptions and supporting systems.
An overview of the African data protection architecture, or rather human rights architecture, is provided in chapter four, ideally to give the reader a picture of the enforcement systems in Africa as a continent. This is followed by chapter five, which discusses the two major legal systems in Africa, the civil law and the common law system. The chapter also illustrates the position of the African landscape in relation to legal harmonization/unification. This aspect is considered necessary because data protection regimes are strongly focused on legal harmonization, and hence the question of how well, or to what extent, Africa as a continent can bring about harmonization in law becomes inevitable. Eventually, the chapter offers a comparative mirror analysis of the primary case studies, i.e. Senegal and Tanzania. The analysis covers the reform approach taken, the motivation behind the reforms, and the regimes erected (through a textual analysis of the law and the draft bill, respectively).
Chapter six concludes the work by answering the research questions based on the findings and scrutiny of each chapter. It is concluded that there is a very slim chance for the African states to cling to a cultural defence against the adoption of Western frameworks for data protection. It is also concluded that, unless Africa becomes an active participant in the global process that informs data protection challenges and regulations, it faces the danger of becoming a puppet of foreign data protection regulation, which may or may not fit African legal culture. The chapter also illustrates how Africa as a continent, and the African states individually, have taken up data protection reforms blindly. The motivations for the reforms are vaguely stated and unclear. In the majority of legal instruments, the reforms are not framed as a move towards securing and protecting individual rights but rather as a purely political move influenced by economic motivations. The reforms are, to a large extent, a mere gesture of alignment with global data protection regimes and hence lack the political will to enforce the laws.
Optical Graph Recognition
(2017)
Graphs are an important model for the representation of structural information between objects. One identifies objects with nodes and a binary relation between objects with edges. Graphs have many uses, e. g., in social sciences, life sciences and engineering. There are two primary representations: abstract and visual. The abstract representation is well suited for processing graphs by computers and is given by an adjacency list, an adjacency matrix or any abstract data structure. A visual representation is used by human users who prefer a picture. Common terms are diagram, scheme, plan, or network. The objective of Graph Drawing is to transform a graph into a visual representation called the drawing of a graph. The goal is a “nice” drawing.
In this thesis we introduce Optical Graph Recognition. Optical Graph Recognition (OGR) reverses Graph Drawing and transforms a digital image of a graph into an abstract representation. Our approach consists of four phases: Preprocessing, where we determine which pixels of an image are part of the graph; Segmentation, where we recognize the nodes; Topology Recognition, where we detect the edges; and Postprocessing, where we enrich the recognized graph with additional information. We apply established digital image processing methods and make use of the special property that the image contains nodes that are connected by edges. We have focused on developing algorithms that need as few parameters as possible or that calibrate their parameters automatically. Most false recognition results are caused by crossing edges, as crossings make tracing the edges difficult and can lead to other recognition errors.
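As a purely illustrative sketch of how such a four-phase pipeline might be realized with off-the-shelf image processing primitives (assuming OpenCV and NumPy are available; the function recognize_graph, its thresholds, and the naive edge heuristic are hypothetical and not part of OGRup):

    # Illustrative OGR-style pipeline: binarize the image, treat large blobs as nodes,
    # and interpret the remaining strokes as edges between the two closest node centroids.
    import cv2
    import numpy as np

    def recognize_graph(image_path, min_node_area=50):
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        # Preprocessing: decide which pixels belong to the graph (Otsu binarization).
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        # Segmentation: large connected components are taken as node candidates.
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
        node_ids = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_node_area]
        nodes = {i: centroids[i] for i in node_ids}
        if len(node_ids) < 2:
            return nodes, set()
        # Topology recognition: erase the node blobs and treat every remaining
        # component as one edge stroke, attached to the two closest node centroids.
        strokes = binary.copy()
        for i in node_ids:
            strokes[labels == i] = 0
        m, stroke_labels = cv2.connectedComponents(strokes)
        edges = set()
        for j in range(1, m):
            ys, xs = np.where(stroke_labels == j)
            pts = np.column_stack([xs, ys]).astype(float)
            dists = {i: np.min(np.linalg.norm(pts - nodes[i], axis=1)) for i in node_ids}
            a, b = sorted(dists, key=dists.get)[:2]
            edges.add(tuple(sorted((a, b))))
        # Postprocessing would add labels, positions, etc.; here we return node centroids
        # and an abstract edge set.
        return nodes, edges

Crossing edges immediately break such a naive stroke-to-node heuristic, which illustrates why they are the main source of recognition errors.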
We have evaluated hand-drawn and computer-drawn graphs. Our algorithms have a very high recognition rate for computer-drawn graphs; e. g., of a set of 100,000 computer-drawn graphs, over 90% were correctly recognized. Most false recognition results were observed for hand-drawn graphs, as they can include drawing errors and inaccuracies. For universal usability we have implemented a prototype called OGRup for mobile devices like smartphones or tablet computers. With our software it is possible to directly take a picture of a graph via a built-in camera, recognize the graph, and then use the result for further processing. Furthermore, in order to gain more insight into the way a person draws a graph by hand, we have conducted a field study.
In three studies, this dissertation deals with success factors of digital and non-digital services. More specifically, it reveals the role of non-monetary costs for the success of free digital services. Moreover, it deals with the customer orientation of frontline employees and reveals the crucial role of the customer for frontline employees' ability to act in a customer-oriented manner.
Even though the negative outcomes of boundary spanners’ role stress are well-known to service theory and practice, insights into how to manage frontline employees so as to lower their experienced role stress are sparse. Moreover, extant research fails to include the most up-to-date challenges of new service environments and their effects on frontline employees’ role stress perceptions. This dissertation’s studies focus on three different aspects of managing boundary spanners’ role stress in service occupations affected by the challenges of new service environments: managing (a) enhanced individual role complexity by feedback (Study 1), (b) person-role conflicts by transformational leadership (Study 2), and (c) role ambiguity by problem-focused coping strategies (Study 3).
Study 1 introduces the new concept of individual role complexity as a mediator between environmental complexity (from customers and the own organization) and role stress of frontline employees in service settings. Moreover, Study 1 details different aspects of individual role complexity and considers feedback from various sources as potential moderators that mitigate the stressing effect of high individual role complexity. The results of this study provide a deeper understanding of current boundary spanner roles, their complexity, and potential approaches to handling individual role complexity. The study offers directions for further empirical research on boundary spanners’ role complexity. Moreover, it helps managers to understand today’s service environments and frontline employees’ roles, and it suggests tools to manage imminent performance declines caused by individual role complexity.
The purpose of Study 2 is to identify effective leadership behavior that reduces frontline employees’ person-role conflicts, a hitherto rather neglected sub-dimension of role conflict that differs strongly from the externally originated role conflict dimensions examined in existing research. Moreover, Study 2 aims at identifying individual cultural values as important contingency factors for the effects of transformational leadership and person-role conflicts, respectively, on a frontline employee’s job performance. Results reveal that charisma-related transformational leadership styles promote the job performance of frontline employees, even in the face of person-role conflicts, while intellectual stimulation has a negative effect on job performance. In addition, the individual cultural values of collectivism, power distance, and uncertainty avoidance moderate the effects of transformational leadership on job performance. The findings imply that service firms should train managers in the use of charisma-related leadership styles and highlight the importance of employees’ individual culture when leading frontline employees with distinct leadership styles.
Study 3 addresses which of frontline employees’ problem-focused coping strategies might effectively reduce perceived role ambiguity. The gathered evidence indicates that only action coping is effective, whereas instrumental support seeking even enhances perceived role ambiguity. An examination of intrinsic and extrinsic coping resources as drivers of coping reveals that conscientiousness and supervisor support are helpful coping resources. In contrast, neuroticism drives insufficient coping and inhibits the use of effective coping resources. Managers of service firms should provide training to ensure effective supervisor support and consider the personality traits of potential employees in recruitment procedures to reduce and prevent experienced role ambiguity among frontline employees. Altogether, the three studies contribute to existing service theory and practice by identifying management tools to counter role stress perceptions induced by the challenges of new service environments.
Software model checking is a successful technique for automated program verification. Several of the most widely used approaches for software model checking are based on solving first-order-logic formulas over predicates using SMT solvers, e.g., predicate abstraction, bounded model checking, k-induction, and lazy abstraction with interpolants. We define a configurable framework for predicate-based analyses that allows expressing each of these approaches. This unifying framework highlights the differences between the approaches, produces new insights, and facilitates research of further algorithms and their combinations, as witnessed by several research projects that have been conducted on top of this framework. In addition to this theoretical contribution, we provide a mature implementation of our framework in the software verifier that allows applying all of the mentioned approaches in practice. This implementation is used by other research groups, e.g., to find bugs in the Linux kernel, and has proven its competitiveness by winning gold medals in the International Competition on Software Verification.
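As a small illustration of the kind of query that such predicate-based approaches delegate to an SMT solver (a minimal sketch using the Z3 Python bindings, not code taken from the framework itself), one can encode a single program path in SSA form and ask whether the error location is reachable along it:

    # Illustrative only: encode one program path and check its feasibility.
    # An unsat answer means the path cannot reach the error location.
    from z3 import Ints, Solver, unsat

    x0, x1 = Ints("x0 x1")
    path = [x0 == 0,         # int x = 0;
            x1 == x0 + 1,    # x = x + 1;
            x1 < 0]          # if (x < 0) reach_error();

    s = Solver()
    s.add(path)
    print("infeasible" if s.check() == unsat else "feasible")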
Tools and approaches for software model checking like our predicate analysis are typically evaluated using performance benchmarking on large sets of verification tasks. We have identified several pitfalls that can silently arise during benchmarking, and we have found that the benchmarking techniques and tools that are used by many researchers do not guarantee valid results in practice, but may produce arbitrarily large measurement errors. Furthermore, certain hardware characteristics can also have a nondeterministic influence on the measurements. In order to be able to properly evaluate our framework for software verification, we study the effects of these hardware characteristics and define a list of the most important requirements that need to be ensured for reliable benchmarking. As a solution, we present the open-source benchmarking framework BenchExec, which, in contrast to other benchmarking tools, fulfills all our requirements and aims at making reliable benchmarking easy. BenchExec has already been adopted by several research groups and by the International Competition on Software Verification.
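One of these pitfalls, the difference between wall-clock time, the CPU time of the benchmarking script itself, and the CPU time consumed by the benchmarked tool's child processes, can be illustrated with a small sketch using only the Python standard library on a Unix system (illustrative only; BenchExec itself measures resource consumption via cgroups rather than this way):

    # Compare three time measurements for a benchmarked command: wall-clock time is
    # distorted by other system load, the parent's CPU time misses work done in child
    # processes, and only the children's CPU time (rusage) reflects what the tool used.
    import resource
    import subprocess
    import time

    start_wall = time.perf_counter()
    subprocess.run(["python3", "-c", "sum(i * i for i in range(10**7))"], check=True)
    wall = time.perf_counter() - start_wall

    usage = resource.getrusage(resource.RUSAGE_CHILDREN)
    cpu_children = usage.ru_utime + usage.ru_stime  # CPU time of all terminated children
    cpu_parent = time.process_time()                # CPU time of this script only

    print(f"wall: {wall:.2f}s  children CPU: {cpu_children:.2f}s  parent CPU: {cpu_parent:.2f}s")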
Using the power of BenchExec, we conduct an experimental evaluation of our unifying framework for predicate analysis. We study the effect of varying the SMT solver and the way program semantics are encoded in formulas across several verification algorithms, and we find that these technical choices can significantly influence the results of experimental studies of verification approaches. This is valuable information both for researchers who study verification approaches and for users who apply them in practice. Our comprehensive study of 120 different configurations would not have been possible without our highly flexible and configurable unifying framework for predicate analysis, and it shows that the latter is a valuable base for conducting experiments. Furthermore, we show, using a comparison against top-ranking verifiers from the International Competition on Software Verification, that our implementation is highly competitive and can outperform the state of the art.
In this thesis, we examine whether the probability distribution given by the Brownian Motion on a semialgebraic set is definable in an o-minimal structure and we establish asymptotic expansions for the time evolution.
We study the probability distribution as an example for the occurrence of special parameterized integrals of a globally subanalytic function and the exponential function of a globally subanalytic function. This work is motivated by the work of Comte, Lion and Rolin, which considered parameterized integrals of globally subanalytic functions, of Cluckers and Miller, which examined parameterized integrals of constructible functions, and by the work of Cluckers, Comte, Miller, Rolin and Servi, which treated oscillatory integrals of globally subanalytic functions.
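To make this concrete: for a semialgebraic set A ⊆ R^n and a starting point x, the quantity under consideration is (assuming the standard normalization of the heat kernel) the parameterized integral

    P_x(B_t \in A) \;=\; \int_A \frac{1}{(2\pi t)^{n/2}} \exp\!\Big( -\frac{\lVert y - x \rVert^2}{2t} \Big) \, dy,

which is precisely an integral, parameterized by the time t and the starting point x, of a globally subanalytic function multiplied by the exponential of a globally subanalytic function.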
In the one-dimensional case we show that the probability distribution on a family of sets definable in an o-minimal structure is itself definable in the Pfaffian closure.
In the two-dimensional case we investigate asymptotic expansions for the time evolution. As time t approaches zero, we show that the integrals behave like a Puiseux series, which is not necessarily convergent. As t tends towards infinity, we show that the probability distribution is definable in the expansion of the real ordered field by all restricted analytic functions if the semialgebraic set is bounded.
For this purpose, we apply results of Lion and Rolin for parameterized integrals of globally subanalytic functions. By establishing the asymptotic expansion of the integrals over an unbounded set, we demonstrate that this expansion has the form of a convergent Puiseux series with negative exponents and their logarithms. Subsequently, we obtain that the asymptotic expansion is definable in an o-minimal structure.
Finally, we study the three-dimensional case and give the proof that the probability distribution given by the Brownian Motion behaves like a Puiseux series as time t tends towards zero.
As t approaches infinity and if the semialgebraic set is bounded, it can be ascertained by results of Cluckers and Miller that the probability distribution has the form of a constructible function and is therefore definable in an o-minimal structure.
If the semialgebraic set is unbounded, we establish the asymptotic expansions and prove that the probability distribution given by the Brownian Motion on unbounded sets has an asymptotic expansion of the form of a constructible function. As a consequence, the asymptotic expansion is definable in an o-minimal structure.
In smart grids, managing and controlling power operations are supported by information and communication technology (ICT) and supervisory control and data acquisition (SCADA) systems. The increasing adoption of new ICT assets in smart grids is making them vulnerable to cyber threats and is raising numerous concerns about the adequacy of current security approaches.
As a single act of penetration is often not sufficient for an attacker to achieve his/her goal, multistage cyber attacks may occur. Due to the interdependence between the power grid and the communication network, a multistage cyber attack not only affects the cyber system but also impacts the physical system. This thesis investigates an application-oriented stochastic game-theoretic cyber threat assessment framework, which is strongly related to the information security risk management process as standardized in ISO/IEC 27005. The proposed cyber threat assessment framework seeks to address the specific challenges (e.g., dynamically changing attack scenarios and understanding cascading effects) that arise when performing threat assessments for multistage cyber attacks in smart grid communication networks.
The thesis looks at the stochastic and dynamic nature of multistage cyber attacks in smart grid use cases and develops a stochastic game-theoretic model to capture the interactions of the attacker and the defender in multistage attack scenarios. To provide a flexible and practical payoff formulation for the designed stochastic game-theoretic model, this thesis presents a mathematical analysis of cascading failure propagation (including both interdependency cascading failure propagation and node overloading cascading failure propagation) in smart grids. In addition, the thesis quantifies the characterizations of disruptive effects of cyber attacks on physical power grids.
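The core computational idea of assigning each game state a value by Shapley-style value iteration, where in every state a zero-sum matrix game between the attacker (maximizer) and the defender (minimizer) is solved via linear programming, can be sketched as follows (illustrative only; the payoff matrices and transition probabilities below are toy placeholders, not the cascading-failure-based payoffs developed in this thesis):

    # Toy Shapley-style value iteration for a zero-sum attacker-defender stochastic game.
    import numpy as np
    from scipy.optimize import linprog

    def matrix_game_value(M):
        """Value of the zero-sum matrix game M for the row (maximizing) player."""
        m, n = M.shape
        # Variables: x_1..x_m (attacker's mixed strategy) and v (game value); maximize v.
        c = np.concatenate([np.zeros(m), [-1.0]])
        A_ub = np.hstack([-M.T, np.ones((n, 1))])  # per defender action j: v <= sum_i x_i M[i,j]
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)  # strategy sums to 1
        b_eq = np.array([1.0])
        bounds = [(0, 1)] * m + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[-1]

    def value_iteration(R, P, gamma=0.9, iters=200):
        """R[s]: immediate payoff matrix in state s; P[s][a][d]: distribution over successors."""
        V = np.zeros(len(R))
        for _ in range(iters):
            V = np.array([matrix_game_value(R[s] + gamma * np.tensordot(P[s], V, axes=([2], [0])))
                          for s in range(len(R))])
        return V

    # Placeholder instance: 2 states, 2 attacker actions, 2 defender actions.
    R = [np.array([[2.0, 0.0], [1.0, 3.0]]), np.array([[0.0, 1.0], [4.0, 0.0]])]
    P = [np.full((2, 2, 2), 0.5), np.full((2, 2, 2), 0.5)]
    print(value_iteration(R, P))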
Furthermore, this thesis discusses, in detail, the ingredients of the developed stochastic game-theoretic model and presents the implementation steps of the investigated stochastic game-theoretic cyber threat assessment framework. An application of the proposed cyber threat assessment framework for evaluating a demonstrated multistage cyber attack scenario in smart grids is shown. The cyber threat assessment framework can be integrated into an existing risk management process, such as ISO 27000, or applied as a standalone threat assessment process in smart grid use cases.
Entity Linking is the task of mapping terms in arbitrary documents to entities in a knowledge base by identifying the correct semantic meaning. It is applied in the extraction of structured data in RDF (Resource Description Framework) from textual documents, but equally so in facilitating artificial intelligence applications, such as Semantic Search, Reasoning, and Question Answering. Most existing Entity Linking systems were optimized for specific domains (e.g., general domain, biomedical domain), knowledge base types (e.g., DBpedia, Wikipedia), or document structures (e.g., tables) and types (e.g., news articles, tweets). This led to very specialized systems that lack robustness and are only applicable to very specific tasks. In this regard, this work focuses on the research and development of a robust Entity Linking system in terms of domains, knowledge base types, and document structures and types.
To create a robust Entity Linking system, we first analyze the following three crucial components of an Entity Linking algorithm in terms of robustness criteria: (i) the underlying knowledge base, (ii) the entity relatedness measure, and (iii) the textual context matching technique. Based on the analyzed components, our scientific contributions are three-fold. First, we show that a federated approach leveraging knowledge from various knowledge base types can significantly improve robustness in Entity Linking systems. Second, we propose a new state-of-the-art, robust entity relatedness measure for topical coherence computation based on semantic entity embeddings. Third, we present the neural-network-based approach Doc2Vec as a textual context matching technique for robust Entity Linking.
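As a minimal illustration of the third ingredient (a sketch using gensim 4.x; the entity URIs and toy descriptions are made up, and this is not the DoSeR implementation), Doc2Vec can embed candidate entity descriptions and rank them against the inferred vector of a mention's textual context:

    # Embed entity descriptions with Doc2Vec, infer a vector for the mention context,
    # and rank the candidates by cosine similarity.
    import numpy as np
    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    entity_descriptions = {
        "dbpedia:Apple_Inc.": "technology company that designs smartphones and computers",
        "dbpedia:Apple": "fruit of the apple tree cultivated worldwide",
    }
    corpus = [TaggedDocument(words=desc.split(), tags=[uri])
              for uri, desc in entity_descriptions.items()]
    model = Doc2Vec(corpus, vector_size=32, min_count=1, epochs=50)

    context = "the company released a new phone".split()
    ctx_vec = model.infer_vector(context)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    ranking = sorted(entity_descriptions, key=lambda uri: cosine(ctx_vec, model.dv[uri]), reverse=True)
    print(ranking)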
Based on our previous findings and outcomes, our main contribution in this work is DoSeR (Disambiguation of Semantic Resources). DoSeR is a robust, knowledge-base-agnostic Entity Linking framework that extracts relevant entity information from multiple knowledge bases in a fully automatic way. The integrated algorithm represents a collective, graph-based approach that utilizes semantic entity and document embeddings for entity relatedness and textual context matching computation. Our evaluation shows that DoSeR achieves state-of-the-art results over a wide range of different document structures (e.g., tables), document types (e.g., news documents) and domains (e.g., general domain, biomedical domain). In this context, DoSeR outperforms all other (publicly available) Entity Linking algorithms on most data sets.
Really new products frequently receive great academic, managerial and public attention, because they have the potential to change everybody’s lives. Our economic system fundamentally relies on creative destruction, i.e. the constant process by which innovations challenge established products. This tension between the new and the already existing in the end promises competitive advantage, but makes it difficult to successfully introduce product innovations to the market. Hence, companies strive to become better all along the innovation process, ranging from idea generation, to product launch, and customers’ post-adoption product experience. This dissertation addresses challenges along the product innovation process in three independent essays.
Essay 1 focuses on members’ collaboration behavior in innovation forums. Many innovations are at first invented by users and later get developed into products ready for the consumer market within firms’ R&D departments. Users are often the first to encounter novel needs and to come up with ideas, concepts and even prototypes to satisfy them. Online forums are social spheres where hobbyists and professionals from all over the world meet to exchange their ideas about products and share their latest experiences. Hence, online forums represent incredibly rich resources to inspire companies’ ideation processes. Essay 1 investigates the social mechanism of member collaboration in innovation communities. Theory on creativity postulates that the interaction between the core and the periphery of social networks enables the communities’ innovative performance. Recent research showed that members from the periphery can boost their individual innovative performance via exchanging with the community core. However, we still do not know why core members assist peripherals. Essay 1 reveals that core members benefit from exchanging with members from the periphery through increasing their own influence as experts among peers in the core. The identified mechanism can be used to select the right candidates for co-creation workshops.
Essay 2 identifies a product launch strategy for innovations which are both radical and disruptive. The successful launch of radical and disruptive innovations is a difficult challenge, because their radical newness provokes consumers’ uncertainties. Furthermore, because of their disruptiveness, at the time of product launch those innovations are perceived to be inferior regarding their primary functionality compared to established products, although their performance in general is fully sufficient to satisfy consumers’ actual needs. A means to address the perceived inferiority is to add a feature to the innovation which offers a functional surplus by means of existing technology. Essay 2 shows that simultaneously introducing the innovation and a functionally modified version thereof increases consumers’ adoption intentions. Furthermore, consumers who opted for the modified, but more expensive, version are not dissatisfied with their choice in the long run, although they do not need the additional performance from an objective point of view. Essay 2 contributes insights regarding the causes and consequences of this new product launch strategy to the theory of radical and disruptive innovations. The findings of this essay are highly relevant to managers, because they show how this strategy can be implemented in practice to lower the failure rate of really new products at product launch.
Innovation research to date is rich in evidence regarding the determinants of consumers’ adoption decisions. However, perhaps due to questions of data accessibility, insights into consumers’ post-adoption product evaluations are still scarce. Hence, Essay 3 shifts the focus beyond product adoption to investigate the interplay of early adopters’ experienced utility, perceived aesthetic value, and attitude toward the product in the long run. Findings show that the perceived aesthetic value creates a halo effect in the post-adoption phase, such that the influence of early adopters’ experienced utility on attitude becomes weaker the more positively they evaluate the innovation’s aesthetics over time. Furthermore, the halo effect only tempers the relationship between experienced hedonic utility and attitude when consumers are characterized by innate consumer innovativeness. Managers should see the product’s form not only as a means to increase purchase intentions, but understand that aesthetic value offers the potential to shape consumers’ long-term attitude toward the product, which in turn is an important driver of word-of-mouth behavior. A high score on the trait of innate consumer innovativeness is an appropriate criterion for the selection of users for design co-creation.
Which skills may student teachers be expected to have after completing their first period of practical training at school? After three to six semesters of study and a six-week period of practical training at school, what may be expected of them in terms of the professional skills expected of professional teachers? Which skills may they be expected to have in their fifth semester, at the stage when student teachers in Bavaria are doing their subject-related periods of practical training? And what may a trainee teacher be expected to have achieved at the end of his or her teacher training? Finally, which skills may professional teachers be expected to have that may not be expected of trainee teachers?
Questions like these express a problem that a team of professors, lecturers and responsible school supervisors has dedicated its efforts to. The overall objective of the collaboration was to define skills and indicators that describe the development process between the beginning of studies and being a professional teacher. For this purpose, the team drew on current empirical findings from pedagogical and psychological research on teaching quality. Because pedagogical work focuses on achieving certain objectives, it is always normative. Empiricists usually hesitate to deductively derive normative skills from the findings of quality research, all the more so as the data concerning certain sub-dimensions are rather contradictory. But still: in the course of an individual, biographical professionalization process, teacher training aims at providing each individual with abilities that will make him or her a good teacher or allow for “high-quality” teaching. Everybody working in the field of teacher training is aware of the fact that, when it comes to the practical aspects of teacher training, general formulations require concrete action. An illustrative example is the individual counselling of students or trainee teachers: after having given a lesson, each student and his or her tutor reflect on positive and less positive aspects by drawing on quality criteria. And this is exactly what this paper is about: it offers an advisory tool for students, trainee teachers, teachers, school administrations and school supervising authorities by providing information about which concrete skills may be expected at a defined stage of professionalization and how these skills can be identified.
This paper is the result of a two-year collaboration; its release is meant to contribute to a debate on the topic among a larger public. Following the introductory remarks, in the first section of the paper Mägdefrau, Kufner and Hank present the theoretical basis: the criteria for selecting the dimensions of teaching quality as well as the spiral-curricular structure of the standards given in Section II are explained. Furthermore, there are some brief comments on possible practical applications.
In Sections II and III the standards developed by the collaborating authors are then presented. Section II presents the standards according to the chosen dimensions of teacher behavior. That is, the reader will find the respective skills and indicators for the four phases of the Bavarian teacher education system (period of practical training at school, subject-didactic period of practical training, traineeship, and professional teacher training) in sequence. The first two phases refer to the university phase, the third to the training phase at school after graduating, and the fourth describes the skills teachers should have at their disposal and continually develop through further teacher training. (For an overview, see Figure 1.) This juxtaposition makes it easy to follow the skills development expected phase by phase for each dimension and sub-dimension. Section III then once again gives an overview of the standards according to the four phases, so that, e. g., students in their first semester may see at a glance what skills they are expected to have after their first period of practical training at school.
The influence of situational interest on the appropriate use of cognitive learning strategies
(2017)
This study explores the role of two facets of situational interest, interestingness and personal significance, as predictors of the adequate use of three types of cognitive learning strategies (rehearsal strategies, organizational strategies, and elaboration strategies). In order to attain this goal, it introduces a new measure of the adequacy of the use of cognitive learning strategies by using the distance between teachers’ estimates of appropriate use of learning strategies for a specific task and students’ reported strategic behavior.
Based on a theoretical model of the use of cognitive learning strategies, the study shows, by means of structural equation modeling, that different facets of situational interest play different roles in predicting students’ surface and deep processing. In summary, it was found that experienced personal significance played a major role in predicting deep-processing strategies for a significant proportion of the 34 tasks in this study, whereas interestingness fell short of expectations.
Limitations did arise owing to some missing values, which may blur the findings at the lower interest and achievement end for the student sample. Nevertheless, suggestions have been made for future research, which can help teachers of history classes to determine components of success, namely experienced personal significance, when designing tasks and consequently provide effective learning tasks to their classes.
This thesis presents various techniques that aim at enabling more effective and more efficient approaches for automatic software verification.
After a brief motivation why automatic software verification is becoming ever more relevant, we continue with detailing the formalism used in this thesis and the concepts it is built on.
We then describe the design and implementation of the value analysis, an analysis for automatic software verification that tracks state information concretely. From a thorough evaluation based on well over 4,000 verification tasks from the latest edition of the International Competition on Software Verification (SV-COMP), we learn that this plain value analysis leads to an efficient verification process for many verification tasks, but at the same time fails to solve other verification tasks due to state-space explosion. From this insight we infer that some form of abstraction technique must be added to the value analysis in order to also allow the successful verification of large and complex verification tasks.
As a solution, we propose to incorporate counterexample-guided abstraction refinement (CEGAR) and interpolation into the value domain. To this end, we design a novel interpolation procedure that extracts interpolants for the value domain from infeasible counterexamples, allowing us to form a precision strong enough to exclude these infeasible counterexamples and to make progress in the CEGAR loop. We then describe several optimizations and extensions to these concepts, such that the value analysis with CEGAR becomes competitive for automatic software verification.
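Schematically, the resulting refinement loop can be summarized as follows (pseudocode in Python syntax; the function names are placeholders for the components described above, not an actual API):

    # Schematic CEGAR loop for the value analysis: model check with the current precision,
    # and on an infeasible counterexample derive value interpolants that are strong enough
    # to exclude it in the next iteration.
    def cegar(program, initial_precision):
        precision = initial_precision
        while True:
            result, counterexample = model_check(program, precision)  # abstract value analysis
            if result == "safe":
                return "program is safe"
            if is_feasible(counterexample):                            # replay path concretely
                return "bug found: " + str(counterexample)
            # Infeasible counterexample: interpolation yields, per program location,
            # the variables (and values) that must be tracked to rule out this path.
            precision = precision.union(compute_value_interpolants(counterexample))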
As the next step, we combine the CEGAR-based value analysis with a predicate analysis, to obtain a more precise and efficient composite analysis based on CEGAR. This composite analysis is indeed on a par with the world’s leading software verification tools, as witnessed by the results of SV-COMP’13, where this approach achieved the 2nd place in the overall ranking.
After having competitive CEGAR-based analyses available for the value domain, the predicate domain, and the combination thereof, we then turn our attention to techniques that have the goal of making all these CEGAR-based approaches more successful. Our first novel idea in this regard is based on the concept of infeasible sliced prefixes, which allow the computation of different precisions from a single infeasible counterexample. This adds choice to the CEGAR loop, whereas without this enhancement no choice of a specific precision, i. e., a specific refinement, is possible. In our evaluation we show, for both the value analysis and the predicate analysis, that choosing different infeasible sliced prefixes during the refinement step leads to major differences in verification effectiveness and verification efficiency.
Extending the concept of infeasible sliced prefixes, we define several heuristics in order to precisely select a single refinement from a set of possible refinements. We make this new concept, which we refer to as guided refinement selection, available to both the value and the predicate analysis, and in a large-scale evaluation we try to answer the question which selection technique leads to well-suited abstractions and thus to a more effective verification process. Additionally, we present the idea of inter-analysis refinement selection, where the refinement component of a composite analysis may decide which of its component analyses should best be refined, and in yet another evaluation we highlight the positive effects of this technique.
Finally, we present the results of SV-COMP’16, where the verifier we contributed, which is based on the concepts and ideas presented in this thesis, achieved the 1st place in the category DeviceDriversLinux64.
In the first article, I will focus on the career choice itself and identify the core components that make up a career choice regarding the ICT-industry.
In the second article, the thesis focuses on the Indian ICT-industry. The Indian ICT-industry has a comparatively high percentage of female employees, namely 35% at the entry level, but at the same time struggles with high attrition rates at higher levels. Therefore, the Indian ICT-industry serves as a best-practice example for attracting women to the ICT-industry, and simultaneously assists us in understanding the reasons why they tend not to stay.
In the third article, the thesis focuses on the stability of career choices by analyzing the short- and long-term persistence of females and males in computing disciplines.
In the fourth article, the thesis analyzes the motivation structure of people who plan to enter the ICT-industry. This is done to identify what kinds of people are attracted to the ICT-industry and to reveal differences in their vocational behavior.
The increasing scale and complexity of computer networks imposes a need for highly flexible management mechanisms. The concept of network virtualization promises to provide this flexibility. Multiple arbitrary virtual networks can be constructed on top of a single substrate network. This allows network operators and service providers to tailor their network topologies to the specific needs of any offered service.
However, the assignment of resources proves to be a problem. Each newly defined virtual network must be realized by assigning appropriate physical resources. For a given set of virtual networks, two questions arise: Can all virtual networks be accommodated in the given substrate network? And how should the respective resources be assigned? The underlying problem is commonly known as the Virtual Network Embedding problem. A multitude of algorithms has already been proposed, aiming to provide solutions to that problem under various constraints. For the evaluation of these algorithms, typically an empirical approach is adopted, using artificially created random problem instances. However, due to complex effects of random problem generation, the obtained results can be hard to interpret correctly. A structured evaluation methodology that can avoid these effects is currently missing.
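As a toy illustration of the problem itself (not one of the embedding algorithms discussed in this thesis), a greedy strategy might map each virtual node to the substrate node with the most remaining CPU capacity and route each virtual link along a shortest path with sufficient bandwidth. A sketch with networkx, assuming a node attribute "cpu" and an edge attribute "bw" on both graphs, could look like this:

    # Toy greedy Virtual Network Embedding: map virtual nodes by residual CPU capacity,
    # route virtual links on shortest paths with enough bandwidth, and reserve resources;
    # returns None if the request cannot be accommodated.
    import networkx as nx

    def greedy_embed(substrate, virtual):
        node_map, cpu = {}, {n: substrate.nodes[n]["cpu"] for n in substrate}
        for v in virtual.nodes:
            candidates = [n for n in substrate
                          if cpu[n] >= virtual.nodes[v]["cpu"] and n not in node_map.values()]
            if not candidates:
                return None
            best = max(candidates, key=lambda n: cpu[n])
            node_map[v], cpu[best] = best, cpu[best] - virtual.nodes[v]["cpu"]
        link_map = {}
        bw = {(u, w): substrate.edges[u, w]["bw"] for u, w in substrate.edges}
        for u, w in virtual.edges:
            demand = virtual.edges[u, w]["bw"]
            usable = nx.Graph([(a, b) for (a, b) in substrate.edges if bw[(a, b)] >= demand])
            try:
                path = nx.shortest_path(usable, node_map[u], node_map[w])
            except (nx.NetworkXNoPath, nx.NodeNotFound):
                return None
            for a, b in zip(path, path[1:]):
                key = (a, b) if (a, b) in bw else (b, a)
                bw[key] -= demand
            link_map[(u, w)] = path
        return node_map, link_map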
This thesis aims to fill that gap. Based on a thorough understanding of the problem itself, the effects of random problem generation are highlighted. A new simulation architecture is defined, increasing the flexibility for experimentation with embedding algorithms. A novel way of generating embedding problems is presented which mitigates the effects of conventional problem generation approaches. An evaluation using these newly defined concepts demonstrates how new insights on algorithm behavior can be gained. The proposed concepts support experimenters in obtaining more precise and tangible evaluation data for embedding algorithms.
Tropical dry forests and woodlands are composed of trees that are specially adapted to harsh climatic and edaphic conditions, providing important ecosystem services for communities in an environment where other types of tropical tree species would not survive. Due to cyclic droughts, which result in crop failure and the death of livestock, the inhabitants turn to charcoal production through selective logging of preferred hardwood species to support their livelihoods. This places the already fragile dryland ecosystem at risk of degradation, further impacting negatively on the lives of the inhabitants.
The main objective of the doctoral study was to evaluate the nature of degradation caused by selective logging for charcoal production and how this could be addressed to ensure that the woodlands recover without impacting negatively on the producers’ livelihoods. To achieve this objective, the author formulated four specific objectives, namely: 1) to assess the impact of selective logging for charcoal production on the dry woodlands in Mutomo District; 2) to evaluate the characteristics of the charcoal producers that reinforce their continued participation in the trade; 3) to assess the potential for adopting agroforestry to supply wood for charcoal production; and 4) to evaluate the potential for recovery of the degraded woodlands through sustainable harvesting of wood for charcoal production. The findings based on the four objectives were compiled into four scientific papers as part of a cumulative dissertation. Three of these papers have already been published in peer-reviewed journals, while the final one is under review.
The study used primary data collected in Mutomo District, Kenya, through a forest inventory and a household survey, both conducted between December 2012 and June 2013. The study confirmed that the main use of selectively harvested trees is charcoal production. Consequently, this leads to degradation of the woodlands through a reduction in tree species richness, diversity and density. Furthermore, the basal area of the preferred species is significantly smaller than that of the other species. However, the results also show that the woodlands have a high potential to recover if put under a suitable management regime, since they have a high number of saplings. The study recommends a harvesting rate of 80% of the Mean Annual Increment (MAI), which would ensure that the woodlands recover after 64 years. This is about twice the duration it would take if no harvesting were allowed, but it would be easier to implement as it allows the producers to continue earning some money for their livelihood.
The study also demonstrates that charcoal production is an important livelihood source for many poor residents of Mutomo District who have no alternative sources of income. As such, addressing this degradation would require an innovative approach that does not compromise the livelihoods of these poor people. An intervention involving a total ban on charcoal production would therefore not be acceptable or even feasible unless people are assured of alternative sources of income. The study recommends an intervention with overarching objectives geared towards: 1) diversification of the producers’ livelihood sources to gradually reduce their dependence on charcoal; 2) introduction of preferred charcoal trees in agroforestry systems, especially through Farmer Managed Natural Regeneration (FMNR), to reduce pressure on the natural woodlands; 3) controlled harvesting of hardwoods for charcoal production from the natural woodlands at a rate below the MAI; 4) promotion of efficient carbonisation technologies and practices to increase charcoal recovery; 5) promotion of efficient combustion technologies and cooking practices to reduce demand-side pressure; and 6) encouragement of fuel switching to other fuels like LPG and electricity.
Border Basis Schemes
(2017)
The basic idea of border basis theory is to describe a zero-dimensional ring P/I by an order ideal of terms whose residue classes form a K-vector space basis of P/I. The O-border basis scheme is a scheme that parametrizes all zero-dimensional ideals that have an O-border basis. In general, the O-border basis scheme is not an affine space. Subsequently, in [Huib09] it is proved that if an order ideal with d elements is defined in a two-dimensional polynomial ring and has one of certain special shapes, then the O-border basis scheme is isomorphic to the affine space of dimension 2d. This thesis is dedicated to finding a more general condition for an O-border basis scheme to be isomorphic to an affine space of dimension nd that is independent of the shape of the order ideal, where d is the number of elements of the order ideal and n is the dimension of the polynomial ring in which it is defined.
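For orientation, the standard setup is recalled here as background (following the usual border basis conventions): for an order ideal O = {t_1, ..., t_d} of terms in P = K[x_1, ..., x_n] with border

    \partial O \;=\; (x_1 O \cup \dots \cup x_n O) \setminus O \;=\; \{b_1, \dots, b_\nu\},

an O-border basis of a zero-dimensional ideal I consists of polynomials

    g_j \;=\; b_j - \sum_{i=1}^{d} c_{ij}\, t_i \;\in\; I \qquad (j = 1, \dots, \nu),

and the O-border basis scheme parametrizes those coefficient tuples (c_{ij}) in the affine space of dimension d·ν for which the residue classes of t_1, ..., t_d form a K-vector space basis of P/I.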
We accomplish this in six chapters. In Chapters 2 and 3 we develop the concepts and properties of border basis schemes. In Chapter 4 we transfer the smoothness criterion (see [Huib05]) for the point (0,...,0) in a Hilbert scheme of points to the monomial point of the border basis scheme by employing tools from border basis theory. In Chapter 5 we explain the trace and Jacobi identity syzygies of the defining equations of an O-border basis scheme and characterize them by the arrow grading. In Chapter 6 we give a criterion for the isomorphism between the 2d-dimensional affine space and the O-border basis scheme by using the results from Chapters 3 and 4. The techniques from the other chapters are applied in Section 6.1 to segment border basis schemes and in Section 6.2 to O-border basis schemes for which O is of the sawtooth form.
During the last few years, the technological progress in collecting, storing and processing a large quantity of data for a reasonable cost has raised serious privacy issues. Privacy concerns many areas, but is especially important in frequently used services like search engines (e.g., Google, Bing, Yahoo!). These services allow users to retrieve relevant content on the Internet by exploiting their personal data. In this context, developing solutions to enable users to use these services in a privacy-preserving way is becoming increasingly important.
In this thesis, we introduce SimAttack, an attack against existing protection mechanisms for querying search engines in a privacy-preserving way. This attack aims at retrieving the original user query. With this attack, we show that three representative state-of-the-art solutions do not protect user privacy in a satisfactory manner.
We therefore develop PEAS, a new protection mechanism that better protects user privacy. This solution leverages two types of protection: hiding the user's identity (with a succession of two nodes) and masking the user's queries (by combining them with several fake queries). To generate realistic fake queries, PEAS exploits previous queries sent by the users in the system.
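The masking idea can be illustrated with a minimal sketch (illustrative only; PEAS's actual fake-query generation builds realistic queries from a co-occurrence structure over past queries rather than by random term sampling as done here):

    # Bundle the real query with k fake queries assembled from terms of past queries,
    # so the receiving server cannot tell which of the k+1 queries is genuine;
    # the client remembers the index of the real query to filter the results locally.
    import random

    def mask_query(real_query, past_queries, k=3, seed=None):
        rng = random.Random(seed)
        vocabulary = [term for q in past_queries for term in q.split()]
        length = len(real_query.split())
        fakes = [" ".join(rng.choice(vocabulary) for _ in range(length)) for _ in range(k)]
        bundle = fakes + [real_query]
        rng.shuffle(bundle)
        return bundle, bundle.index(real_query)

    past = ["cheap flights berlin", "python list comprehension", "weather in passau"]
    queries, real_index = mask_query("symptoms of flu", past, k=3, seed=42)
    print(queries, real_index)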
Finally, we present mechanisms to identify sensitive queries. Our goal is to adapt existing protection mechanisms to protect sensitive queries only, and thus save user resources (e.g., CPU, RAM). We design two modules to identify sensitive queries. By deploying these modules on real protection mechanisms, we establish empirically that they dramatically improve the performance of the protection mechanisms.
Large-scale software engineering projects are often distributed among a number of sites that are geographically separated by a substantial distance. In globally distributed software projects, time zone issues, language and cultural barriers, and a lack of familiarity among members of different sites all introduce coordination complexity and present significant obstacles to achieving a coordinated effort.
For large-scale software engineering projects to satisfy their scheduling and quality goals, many developers must be capable of completing work items in parallel. A key factor in achieving this goal is to remove interdependencies among work items insofar as possible. By applying principles of modularity, work item interdependence can be reduced, but not removed entirely. As a result of uncertainty during the design and implementation phases and of incomplete or misunderstood design intents, dependencies between work items inevitably arise and lead to requirements for developers to coordinate. The capacity of a project to satisfy coordination needs depends, among other factors, on how the work items are distributed among developers and how developers are organizationally arranged. When coordination requirements fail to be recognized and appropriately managed, anecdotal evidence and prior empirical studies indicate that this results in decreased product quality and developer productivity. In essence, properties of the socio-technical environment, comprised of developers and the tasks they must complete, provide important insights concerning the project's capacity to meet product quality and scheduling goals. In this dissertation, we make contributions to support socio-technical analyses of software projects by developing approaches for abstracting and analyzing the technical and social activities of developers. More specifically, we propose a fine-grained, verifiable, and fully automated approach to obtain a proper view on developer coordination, based on commit information and source-code structure, mined from version-control systems. We apply methodology from network analysis and machine learning to identify developer communities automatically. To evaluate our approach, we analyze ten open-source projects with complex and active histories, written in various programming languages. By surveying 53 open-source developers from the ten projects, we validate the accuracy of the extracted developer network and the authenticity of the inferred community structure. Our results indicate that developers of open-source projects form statistically significant community structures and that this particular network view largely coincides with developers' perceptions.
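A strongly simplified sketch of this construction (the dissertation's networks are fine-grained and built from commit information together with source-code structure, not from shared whole files as done here) links two developers whenever they committed to the same file and then detects communities by modularity maximization with networkx:

    # Simplified developer network: developers are nodes, and an edge (weighted by the
    # number of shared files) links developers who touched the same file; communities
    # are then detected via greedy modularity maximization.
    from itertools import combinations
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # commits: (author, [changed files]) tuples, e.g. mined from a version-control system.
    commits = [
        ("alice", ["net/core.c", "net/util.c"]),
        ("bob",   ["net/core.c"]),
        ("carol", ["doc/readme.md"]),
        ("dave",  ["doc/readme.md", "net/util.c"]),
    ]

    touched = {}
    for author, files in commits:
        for f in files:
            touched.setdefault(f, set()).add(author)

    G = nx.Graph()
    G.add_nodes_from(author for author, _ in commits)
    for authors in touched.values():
        for a, b in combinations(sorted(authors), 2):
            weight = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
            G.add_edge(a, b, weight=weight)

    communities = greedy_modularity_communities(G, weight="weight")
    print([sorted(c) for c in communities])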
Equipped with a valid network view on developer coordination, we extend our approach to analyze the evolutionary nature of developer coordination. By means of a longitudinal empirical study of 18 large open-source projects, we examine and discuss the evolutionary principles that govern the coordination of developers. We found that the implicit and self-organizing structure of developer coordination is ubiquitously described by non-random organizational principles that defy conventional software-engineering wisdom. In particular, we found that: (a) developers form scale-free networks, in which the majority of coordination requirements arise among an extremely small number of developers, (b) developers tend to accumulate coordination requirements with more and more developers over time, presumably limited by an upper bound, and (c) initially developers are hierarchically arranged, but over time, form a hybrid structure, in which highly central developers are hierarchically arranged and all other developers are not. Our results suggest that the organizational structure of large software projects is constrained to evolve towards a state that balances the costs and benefits of coordination, and the mechanisms used to achieve this state depend on the project's scale.
As a final contribution, we use developer networks to establish a richer understanding of the different roles that developers play in a project. Developers of open-source projects are often classified according to core and peripheral roles. Typically, count-based operationalizations, which rely on simple counts of individual developer activities (e.g., number of commits), are used for this purpose, but there is concern regarding their validity and ability to elicit meaningful insights. To shed light on this issue, we investigate whether count-based operationalizations of developer roles produce consistent results, and we validate them with respect to developers' perceptions by surveying 166 developers. We improve over the state of the art by proposing a relational perspective on developer roles, using our fine-grained developer networks, and by examining developer roles in terms of developers' positions and stability within the developer network. In a study of 10 substantial open-source projects, we found that the primary difference between the count-based and our proposed network-based core–peripheral operationalizations is that the network-based ones agree more with developer perception than the count-based ones. Furthermore, we demonstrate that a relational perspective can reveal further meaningful insights, such as that core developers exhibit high positional stability, upper positions in the hierarchy, and high levels of coordination with other core developers, which confirms assumptions of previous work.
Overall, our research demonstrates that data stored in software repositories, paired with appropriate analysis approaches, can elicit valuable, practical, and valid insights concerning socio-technical aspects of software development.
Many communication scholars agree that political communication in the 21st century has permanently changed due to the enormous shifts in the mass media communications landscape. Today, political culture and public debate are characterized by a highly individualistic and consumption-oriented electorate conditioned by a hypermedialized public sphere spawned by the so-called “Digital Revolution”.
Confronted with a fluid and volatile communications landscape, political actors of the 21st century have had to develop new strategies to fulfill their functional role in the public arena and satisfy their audiences’ expectations. This has had a massive impact on the communication logistics of perhaps the world’s most powerful political office and the ultimate locus of collective meaning-making and national identity in the United States: that of the American Presidency.
The perception of Barack Obama’s 2008 election campaign as epitomizing such shifts in the collective meaning-making processes lies at the core of this research project. The study analyzes Barack Obama’s 2008 presidential election campaign with respect to its underlying communicational and performative concept(s), and its impact on the socio-ontological dimension of the office of the American Presidency in the 21st century.
While storytelling and performative staging have always been crucial to the American presidential office (Cornog 2004; Woodward 2007; Weiss 2008), this research project aims to show how “telling a story” and staging it properly have become the sine qua non for American presidential political communication. This case study of Barack Obama’s 2008 presidential election campaign examines the central role that imaginative processes, their narrative formation and performative staging play in 21st century political dialog. Furthermore, the dissertation suggests that these paradigmatic shifts demand a refinement of existing socio-political concepts of the institution of the American Presidency.
In our knowledge-driven society, the acquisition and the transfer of knowledge play a principal role. Web search engines serve as tools for knowledge acquisition and transfer from the web to the user. The search engine results page (SERP) consists mainly of a list of links and snippets (excerpts from the results). The snippets are used to express, as efficiently as possible, the way a web page may be relevant to the query.
As an extension of the existing web, the semantic web or “web 3.0” is designed to convert the presently available web of unstructured documents into a web of data consumable by both humans and machines. The resulting web of data and the current web of documents coexist and interconnect via multiple mechanisms, such as embedded structured data or automatic annotation.
In this thesis, we introduce a new interactive artifact for the SERP: the “Semantic Snippet”. Semantic Snippets rely on the coexistence of the two webs to facilitate the transfer of knowledge to the user thanks to a semantic contextualization of the user’s information need. They make apparent the relationships between the information need and the most relevant entities present in the web page.
The generation of semantic snippets is mainly based on the automatic annotation of LOD entities in web pages. The annotated entities have different levels of importance, usefulness, and relevance. Even with state-of-the-art solutions for the automatic annotation of LOD entities within web pages, there is still a lot of noise in the form of erroneous or off-topic annotations. Therefore, we propose a query-biased algorithm (LDRANK) for the ranking of these entities. LDRANK adopts a strategy based on the linear consensual combination of several sources of prior knowledge (any form of contextual knowledge, like the textual descriptions for the nodes of the graph) to modify a PageRank-like algorithm.
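As a loose, simplified sketch of how such a query-biased, PageRank-style ranking over annotated entities might look (not LDRANK's actual consensual combination), one can blend several prior-knowledge scores into the teleportation vector of a personalized PageRank. The graph, scores, and weight below are purely illustrative.

# A rough sketch of query-biased, PageRank-style entity ranking where the
# teleportation vector is a linear combination of two prior-knowledge sources.
import networkx as nx

# Hypothetical entity link graph extracted from LOD annotations of one page.
G = nx.DiGraph([("Paris", "France"), ("France", "Paris"),
                ("Paris", "Eiffel_Tower"), ("Eiffel_Tower", "Paris")])

# Hypothetical prior scores per entity from two knowledge sources.
query_similarity = {"Paris": 0.7, "France": 0.2, "Eiffel_Tower": 0.1}
text_relevance   = {"Paris": 0.4, "France": 0.3, "Eiffel_Tower": 0.3}

mu = 0.6  # weight of the linear combination of the two priors
prior = {e: mu * query_similarity[e] + (1 - mu) * text_relevance[e]
         for e in G.nodes}

# PageRank biased towards entities with high combined prior.
ranking = nx.pagerank(G, alpha=0.85, personalization=prior)
print(sorted(ranking.items(), key=lambda kv: -kv[1]))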
For generating semantic snippets, we use LDRANK to find the most relevant entities in the web page. Then, we use a supervised learning algorithm to link each selected entity to excerpts from the web page that highlight the relationship between the entity and the original information need.
In order to evaluate our semantic snippets, we integrate them in ENsEN (Enhanced Search Engine), a software system that enhances the SERP with semantic snippets.
Finally, we use crowdsourcing to evaluate the usefulness and the efficiency of ENsEN.
Opportunistic networks (OppNets) are human-centric mobile ad-hoc networks, in which neither the topology nor the participating nodes are known in advance. Routing is dynamically planned following the store-carry-and-forward paradigm, which takes advantage of people’s mobility. This widens the range of communication and supports indirect end-to-end data delivery. But due to individuals’ mobility, OppNets are characterized by frequent communication disruptions and uncertain data delivery. Hence, these networks are mostly used for exchanging small messages like disaster alarms or traffic notifications. Other scenarios that require the exchange of larger data (e.g. video) are still challenging due to the characteristics of this kind of network. However, there are still multimedia sharing scenarios where a user might need to switch from infrastructural communications to an ad-hoc alternative. Examples are the cases of 1) absence of infrastructural networks in remote rural areas, 2) high costs due to roaming or limited data volumes, or 3) undesirable censorship by third parties while exchanging sensitive content. Consequently, in this thesis we target a video dissemination scheme for OppNets.
For the video delivery problem in sparse opportunistic networks, we propose a solution with the objective of reducing the video playout delay, enabling the recipient to start playing the video content as soon as possible, even if at low quality. The received video later reaches a higher quality level, ensuring a better viewing experience.
The proposed solution comprises three contributions. The first one granulates the video at the source node into smaller parts and associates them with unequal redundancy degrees. This is technically based on Scalable Video Coding (SVC), which encodes a video into several layers of unequal importance for viewing the content at different quality levels. Layers are routed using the Spray-and-Wait routing protocol, with different redundancy factors for the different layers depending on their importance. In this context, a video viewing QoE metric is also proposed, which takes the perceived video quality, delivery delay, and network overhead into consideration on a scalable basis.
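As a toy illustration (not the thesis's actual parameterisation) of unequal redundancy per SVC layer, one might give the base layer the largest Spray-and-Wait copy budget and halve it for each enhancement layer; the budget values below are invented.

# Toy assignment of Spray-and-Wait copy budgets per SVC layer: the base layer
# gets the most sprayed copies, higher enhancement layers fewer.
def copy_budget(layer_index, base_copies=16):
    """Halve the number of sprayed copies for each enhancement layer."""
    return max(1, base_copies >> layer_index)

svc_layers = ["base", "enhancement-1", "enhancement-2"]
for i, layer in enumerate(svc_layers):
    print(f"{layer}: spray {copy_budget(i)} copies")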
Second, we take advantage of the small units of the Network Abstraction Layer (NAL), which compose SVC layers. NAL units are packetized together under specific size constraints to optimize granularity. Packet sizes are tuned in an adaptive way, with regard to the dynamic network conditions. Each node records a history of environmental information regarding contacts and forwarding opportunities, and uses this history to predict future opportunities and optimize the packet sizes accordingly.
Lastly, the receiver (destination) node is pushed into action by reacting to missing data parts with a composite “backward” loss concealment mechanism. First, the receiver requests the missing data from other nodes in the network in a request-response manner. Then, since the transmission concerns video content, video frame loss error concealment techniques are also exploited at the receiver side. Consequently, we combine the two techniques in the loss concealment mechanism, which is then able to react to missing data parts.
To study the feasibility and applicability of the proposed solutions, simulation-driven experiments are performed, and statistical results are collected and analyzed. The promising results show the applicability of video dissemination in opportunistic delay-tolerant networks and open the door to a range of possible future work.
In order to end poverty by 2030, the declared goal of the United Nations, a better understanding is needed of which policies help poor households to escape poverty and how to end its inter-generational transmission.
Since the Millennium Declaration in September 2000, and the adoption of the Millennium Development Goals (MDGs), the delivery of basic social services, such as education, health, water supply and sanitation, has become the central focus of international development assistance.
However, the provision of basic social services is not necessarily sufficient to lead to an accumulation of human and productive capital, which would allow households to escape poverty and interrupt its inter-generational transmission.
To understand why people are poor, we need to understand what productive decisions poor households take, and to identify what constraints households face in their attempt to accumulate human as well as productive capital. A better understanding of such constraints could guide policies that have a long-term impact on poverty reduction and on development.
A number of factors could explain why poor households operate at unprofitable levels and why they are constrained in their investment decisions. Empirical evidence points to different explanations: cost of learning and access to information, insufficient education, risk, credit constraints, non-convex production technologies, and behavioral patterns that are inconsistent with standard neoclassical models. Currently, one of the major challenges in formulating policies that foster productive investments among the poor seems to be to disentangle the effects of scale, credit constraints, and the lack of insurance mechanisms.
This thesis seeks to shed further light on the relative role of these three constraints. In the context of rural India, it analyzes what production and investment decisions households take and how important risk and credit constraints as well as scale effects are in these decisions. Finally, it evaluates potential policy tools that could support households in overcoming these constraints.
Today, 33% of the world's poor live in India, the vast majority of them (80.5%) in rural areas. The economic structure of rural India is still dominated by agricultural production, and consequently, this thesis concentrates on agricultural production decisions and employment in agriculture.
In particular, this thesis addresses three questions in three individual papers: First, are farm households constrained in their crop choices by agricultural production risk and to which extent can India's public works program support households in overcoming this constraint? Second, how profitable is cattle farming in rural India at different levels of investment and which barriers do households face in reaching optimal investment levels? And third, can risk in agricultural wages explain limited investment in girls' education in the presence of intra-household substitution in household chores?
The first paper focuses on the crop choice of farm households. It reassesses the stylized fact that households have to trade off between returns and risk in their crop choice in the context of Andhra Pradesh, a state in the south of India. It then explores the effect of India's flagship anti-poverty program, the National Rural Employment Guarantee Scheme (NREGS), on households' crop choice using a representative panel data set. The NREGS guarantees each household living in rural India up to a hundred days of employment per year, at state minimum wages.
The paper shows theoretically, and empirically, that the introduction of the NREGS reduces households' uncertainty about future income streams because it provides reliable employment opportunities in rural areas independently of weather shocks and crop failure. With access to the NREGS, households can compensate income losses emanating from shocks to agricultural production. Households with access to the NREGS can therefore shift their production towards riskier but also more profitable crops. These shifts in agricultural production have the potential to considerably raise the incomes of smallholder farmers.
The paper concludes that employment guarantees can, similarly to crop insurance, help households in managing agricultural production risks. It also argues that accounting for the effects of the NREGS on crop choice and profits from agricultural production affects the cost-benefit analysis of such a program considerably.
The second paper concentrates on the profitability of farming cattle in Andhra Pradesh.
The paper also uses a representative panel dataset, and examines average and marginal returns to cattle at different levels of cattle investment. It finds average returns in the order of -8% at the mean of cattle value. These returns vary across the cattle value distribution between negative 53% (in the lowest quintile) and positive 2% (in the highest). While marginal returns are positive on average, they also vary considerably with cattle value and breed. The paper shows that average and marginal returns are considerably higher for modern variety cows, i.e. European breeds and their crossbreeds, than for traditional varieties of cows or for buffaloes. It also shows that cattle farming becomes most profitable at minimum herd sizes of five animals, due to decreasing average labor costs with increasing herd sizes.
The results of this paper suggest that cattle farming is associated with sizable non-convexities in the production technology and that substantial economies of scale, as well as high upfront expenses of acquiring and feeding high-productivity animals, might trap poorer households in low-productivity asset levels. The fact that wealthier households and households with lower costs of accessing veterinary services are more likely to overcome these barriers supports this idea.
The second paper concludes that cattle farming might well generate positive returns for households in rural India, but that most households seem to operate at unprofitable levels. This could also explain the apparent paradox between widespread support of cattle farming through agricultural policy interventions and negative returns to cattle, as stressed in recent works. It argues that policy interventions that target productive assets will only be beneficial if transfers are high enough to allow households to overcome these entry barriers.
The third paper concentrates on the effect of risk on the productive decisions of households, and analyzes the effect of wage risk in agricultural employment on women's labor supply and time allocated to home production. It seeks to understand the extent to which risk raises labor supply of women to levels that can become harmful for other members of the household. The hypothesis is that in the presence of intra-household substitution effects -- for instance in the performance of household chores -- increased female labor supply might have negative effects on the time allocation of girls. If women have less time available for home production and childcare, and such activities can only be foregone at high cost, they might be forced to take older girls out of school or to cut down on the time these girls study at home in order for them to fill in for these tasks.
The paper uses cross-sectional data on the time allocation of different household members and predicts wage risk at the village level as a function of the historical rainfall distribution and a village's share of land that is under irrigation. The results show that wage risk affects the time allocation of women, increasing their labor supply and reducing the time they allocate to home production. Wage risk also increases the time girls spend on household chores and reduces their time in school. Because the observed effect of wage risk on girls' time allocated to household chores corresponds very closely to the effect observed for women, it seems plausible to attribute it to intra-household substitution effects. The observed effect of risk on girls' school time, however, is greater than the observed effect of risk on the home-production time of girls. This can be due to two reasons: First, in the presence of intra-household substitution effects, shocks in wages will not only increase female labor supply but also girls' time on household chores. And the model predicts that risk-averse households invest less in education when future school time becomes uncertain, because future school time affects the returns to current schooling. Second, if school attendance is indivisible, then girls might be forced to drop out of school temporarily or even permanently.
The paper then simulates the effect of the NREGS on the time-allocation decisions of working women and school-age girls. The results suggest that the NREGS could increase the time working women spend on household duties, because it reduces uncertainty regarding future earnings, and alleviates the need to accumulate savings. Thereby, the NREGS would reduce the pressure on girls to perform household tasks and allow them to increase the time they spend in school or studying by 6 minutes daily.
With these findings, this thesis contributes to a better understanding of the choices poor households in rural India face in their day-to-day decision making, and offers insights into what policies could support households in escaping poverty and interrupting its inter-generational transmission.
In modern CMOS technology, process variations have a significantly increased impact on circuit behavior as transistor sizes continue to scale down. Manufactured devices tend to differ in performance due to parameter variations during manufacturing and in the operating context. Conventional tests generated without regard to variations could fail to rule out devices with low performance, or even functional failures caused by extreme variations, which in turn raises the unreliability of shipped products. To tackle the problem, many existing test approaches have focused on identifying and testing a number of critical paths in the circuit and have aimed at the efficiency of the search process. However, the statistical circuit model, which better describes the circuit timing behavior under variations, has not yet been sufficiently investigated and employed by existing testing methodologies.
This thesis proposes Opt-KLPG and MIRID, which can be utilized by a statistical delay testing flow. Opt-KLPG, a K Longest Paths Generation (KLPG) algorithm for optimal solutions under memory constraints, generates targeted tests for small delay defects, which are common small timing deviations under process variations, based on the traditional KLPG algorithm. In contrast to KLPG, Opt-KLPG guarantees the optimality of the solution (the K longest sensitizable paths indeed). MIRID is a mixed-mode timing-aware simulator, incorporating effects of power-supply noise and combining an event-driven logic simulation engine with interfaces to provided electrical models. MIRID aims at evaluating delay tests in the presence of process variations efficiently yet accurately, by performing logic simulation at the gate level while determining the gate delays using simplified electrical models. The electrical models applied by the simulator focus on the IR-drop effect; electrical parameters mainly contributing to this effect are incorporated into the model. The simulator is generic and flexible and can be adapted by modifying the interfaces with minor effort. Both applications were verified in various aspects by experiments on academic and industrial circuits, and turned out to have satisfactory effectiveness and performance.
Employment of a very large number of antennas is seen as the key technology to provide future users with very high data rates. At the same time, the implementation complexity will rise due to large memories required and sophisticated signal processing algorithms employed. Continuous technology downscaling allows implementation of such complex digital designs. At the same time, its inherent variability and vulnerability to physical disturbances violate the assumption of perfectly reliable hardware operation.
This work considers Unique Word OFDM, which represents an alternative to the standard Cyclic Prefix OFDM and provides superior detection quality. The generalization of Unique Word OFDM to a MIMO system is performed, which allows its interpretation as a virtual massive MIMO system with only a few physical antennas. Detection methods for the introduced generalization are discussed and their performance is quantified.
Because of the large memory size required, linear detection represents a cost- and performance-effective solution. Possible memory errors due to radiation effects or voltage scaling are addressed and a nonlinear MMSE detection algorithm is proposed. This algorithm keeps track of the memory errors and is able to significantly mitigate their effect on the quality of the estimated data.
Apart from memory issues, the reliability of the actual computational hardware which constitutes the receiver is of concern in this work. Our own implementation of the MMSE Sorted Givens Rotations is subjected to transient fault injection. The impact of faults in various parts of the implemented circuit on the detection performance is quantified. The most vulnerable components of the implemented circuit in terms of reliability are identified.
Security is another major focus of this work, since most current implementations include cryptographic devices.
Fault-based attacks on such systems are known to be able to extract the secret key in feasible time.
The remaining part of this work addresses such fault injection-based malicious attacks. Countermeasures based on a combination of information and hardware redundancy are considered. Recently introduced robust codes target such attacks by providing guaranteed detection capability. The performance of these codes is assessed by application to actual cryptographic and general purpose circuits. The work introduces metrics that help to identify fault locations in the circuit which could escape detection with high probability. These locations are targeted by transistor resizing that renders fault injection unfeasible.
Recently, non- and paraverbal properties of literary texts at the level of documentary inscription (i.e. materiality), seen individually or as aspects of a so-called ‘material text’, that is, the union of materiality and verbal sign systems, have received an increasing amount of attention in textual scholarship and literary studies. Here, ‘meaning’ or at least ‘semantic potentiality’ has been attributed to both or either, and physical features of texts have been construed as hitherto neglected aspects of literary communication and literary aesthetics. In what follows, I will present a brief conspectus of the current debate and then try to provide a reconstruction of underlying ideas by answering the question ‘how does a material text mean?’. Taking a descriptive meta-perspective and focusing on conceptual and methodological clarification, I try to clarify the somewhat blurry expressions ‘meaning’, ‘to mean’ and the like by translating them into the distinct terminology of semiotics and transferring them into the theoretical framework of an instrumentalist notion of signs.
Three Essays on Understanding Mobile Consumer Behavior: Business Models, Perceptions, and Features
(2016)
For about a decade, consumers have been carrying the Internet in their pockets. The rapid penetration of modern smartphones has meant that more than two-thirds of the people in the West can access and use online resources, anytime and anywhere. Consumers also can communicate and share their consumption experiences instantaneously. Platforms reach users for time-critical events through highly personal communication channels, in the sense that smartphones serve as constant companions. Many mobile applications and their basic services and contents also are available for free. The digital and mobile worlds thus are changing the very means of communication, suggesting the powerful need for marketing research and practice to find the opportunities and meet the challenges of the mobile Internet. In particular, scientific investigations are required to describe new business models in the free e-service industry and the consumer behavior affected by mobile features. This thesis examines these topics in three essays.
Study 1 considers business models that offer their services without charge. Offering services for free is symptomatic not only of mobile apps (90% of all apps are available for free) but of the digital economy in general. For companies offering free e-services, this situation raises several important questions, irrespective of the access device: How do customers of free e-services contribute value without paying? What are the nature and dynamics of nonmonetary value contributions by nonpaying customers? With a literature review and interviews with senior executives of free e-service providers, Study 1 presents a comprehensive overview of nonmonetary value contributions in the free e-service sector, including word of mouth, co-production, and network effects. Moreover, adding attention and data to this framework reveals two further aspects that have not been addressed in prior customer value research. By putting the findings in the context of the existing literature on customer value and customer engagement, this study sheds light on the complex processes of value creation in the emerging e-service sector, while advancing marketing and service research in general.
Study 2 deepens the findings from the first study; specifically, the focus is on the way that mobile users co-produce content and how this contribution is perceived by recipients in the network. With field data and a scenario experiment, this study demonstrates that recipients appreciate mobile-generated customer reviews fundamentally differently from other reviews. In particular, they discount the helpfulness of mobile reviews, due to their text-specific content and style particularities. The very fact that a review has been identified as written on a mobile device also lowers recipients’ perceptions of its value. Recipients use information about the device as a source cue to assess their compatibility with the review contribution channel. If they perceive themselves as compatible with the method used to generate the review (mobile or non-mobile), recipients regard the review as more helpful, because they attribute the review to the quality of the reviewed subject. If they perceive it as incompatible though, recipients assume that the review reflects the personal dispositions of the reviewer and discount its helpfulness.
Finally, Study 3 takes up the attention and cross-market network effects in a mobile setting; these were two nonmonetary dimensions identified by Study 1. Platform providers should develop measures to draw the attention of nonpaying customers to the offers of their paying customers. One attention-grabbing mobile-specific feature is push notifications to the device, which provide information about temporally or spatially relevant events. More concretely, Study 3 investigates how mobile push notifications remind users of upcoming deadlines in online auctions and therefore improve late bidding success. Late bidding is a prevalent strategy, in which bidders submit their bids at the very end of an online auction. This research uses field data about an online auction platform to demonstrate that late bidders use these mobile push notifications more frequently than do bidders with different bidding patterns. Within the group of late bidders, the chance to win an auction increases with their use of push notifications. After a mobile push notification, late bidders submit bids through mobile devices but also through non-mobile channels. Less experienced late bidders also benefit from push notifications, which increase their chances of success.
In summary, this dissertation contributes to an enhanced understanding of mobile consumer behavior by using various methods, including qualitative interviews, field observations, and online experiments. From a theoretical perspective, it contributes to current knowledge about nonmonetary customer value contributions in general and their role in mobile settings in particular. This thesis highlights the role of mobile devices in co-production and perceptions of co-produced content. It also reveals how mobile-specific interactive features, like push notifications, affect late bidding efficiency. Therefore, it specifies the role of mobile devices in cross-market effects, in that they enable the platform to direct the relationship between buyers and sellers. The insights presented herein encourage managers to reevaluate their current practices, think about whether they should label co-produced content as generated through a mobile channel or not, and contemplate whether to develop mobile push notifications as helpful features for users (not as intrusive marketing messages).
IT outsourcing to clouds bears new challenges to the technical implementation of legally compliant clouds. On the one hand, outsourcing companies have to comply with legal requirements. On the other hand, cloud providers have to support their customers in achieving compliance with these legal requirements when processing data in the cloud. Consequently, the questions arise when IT outsourcing to clouds is lawful, which legal requirements apply to data processing in clouds, and how cloud providers can support their customers on achieving legal compliance.
In this thesis, answers to these questions are given by performing a legal analysis identifying the legal requirements and a technical analysis identifying how legal requirements can be addressed in the context of cloud computing. Further, an information flow analysis is done, resulting in a system theoretical model that is able to describe information flow control in clouds based on the security classification of virtual resources and hardware resources. In a proof-of-concept implementation which is based on the OpenStack open-source cloud platform, it is shown that information flow control can be implemented as a part of cloud management and that legal compliance can be monitored and reported based on the actual assignment of virtual resources to hardware resources. Thereby, cloud providers are able to provide cloud customers with cloud resources, which are automatically assigned to hardware resources that comply with the legal requirements of the cloud customers. This consequently empowers cloud customers to utilise cloud resources according to their legal requirements and to keep control of managing the legal compliance of their data processing in clouds.
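As a rough conceptual sketch, loosely inspired by the idea of classifying resources (and not the thesis's actual OpenStack implementation), a placement check could admit only hardware resources whose security classification satisfies all legal requirements attached to a virtual resource. All labels, hosts, and requirement names below are invented for illustration.

# Minimal placement check: a VM may only be assigned to hosts whose
# classification covers the VM's legal requirements (illustrative labels).
hosts = {
    "host-eu-1": {"location": "EU", "certifications": {"ISO27001"}},
    "host-us-1": {"location": "US", "certifications": set()},
}

def compliant_hosts(vm_requirements, hosts):
    """Filter hosts whose classification covers the VM's legal requirements."""
    result = []
    for name, cls in hosts.items():
        if cls["location"] not in vm_requirements["allowed_locations"]:
            continue  # data must stay in an allowed jurisdiction
        if not vm_requirements["required_certifications"] <= cls["certifications"]:
            continue  # host lacks a required certification
        result.append(name)
    return result

vm = {"allowed_locations": {"EU"}, "required_certifications": {"ISO27001"}}
print(compliant_hosts(vm, hosts))  # -> ['host-eu-1']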
Entity disambiguation is the task of mapping ambiguous terms in natural-language text to their corresponding entities in a knowledge base. It finds its application in the extraction of structured data in RDF (Resource Description Framework) from textual documents, but equally so in facilitating artificial intelligence applications, such as Semantic Search, Reasoning, and Question Answering. We propose a new collective, graph-based disambiguation algorithm utilizing semantic entity and document embeddings for robust entity disambiguation. Robust thereby refers to the property of achieving better than state-of-the-art results over a wide range of very different data sets. Our approach is also able to abstain if no appropriate entity can be found for a specific surface form. Our evaluation shows that our approach achieves significantly (>5%) better results than all other publicly available disambiguation algorithms on 7 of 9 datasets without data-set-specific tuning. Moreover, we discuss the influence of the quality of the knowledge base on the disambiguation accuracy and indicate that our algorithm achieves better results than non-publicly available state-of-the-art algorithms.
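To make the embedding-based ranking idea concrete, here is a heavily simplified, hypothetical sketch: each surface form is mapped to the candidate entity whose embedding is most similar to the document embedding, and the procedure abstains when the best similarity falls below a threshold. The vectors and the threshold are invented; the actual algorithm is collective and graph-based.

# Simplified candidate ranking with abstention (illustrative embeddings only).
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

entity_embeddings = {
    "Paris_(France)": np.array([0.9, 0.1, 0.0]),
    "Paris_Hilton":   np.array([0.1, 0.9, 0.2]),
}
document_embedding = np.array([0.8, 0.2, 0.1])

def disambiguate(candidates, doc_emb, threshold=0.7):
    scored = [(cosine(entity_embeddings[c], doc_emb), c) for c in candidates]
    best_score, best_entity = max(scored)
    return best_entity if best_score >= threshold else None  # abstain

print(disambiguate(["Paris_(France)", "Paris_Hilton"], document_embedding))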
A configurable system enables users to derive individual system variants based on a selection of configuration options. To cope with the often huge number of possible configurations, several analysis approaches (e.g., for verification of configurable systems) implement different strategies to account for configurability. One popular strategy, often applied in practice, is to use sampling (i.e., analyzing only a subset of all system variants). While sampling reduces the analysis effort significantly, the information obtained is necessarily incomplete, as some variants are not analyzed. A second strategy is to identify the common parts and the variable parts of a configurable system and analyze each part separately (called the feature-based strategy). As a third strategy, researchers have begun to develop family-based analyses. Family-based approaches analyze the code base of a configurable system as a whole, rather than the individual variants or parts of the system, this way exploiting similarities among individual variants to reduce analysis effort. Each of these three strategies has advantages and disadvantages, which might even prevent its application (e.g., the family-based strategy typically needs much main memory). The goal of this thesis is to enable the efficient analysis of configurable systems, even if existing strategies fail (e.g., the family-based strategy, because of memory limitations). To this end, we designed a framework that models the key aspects of configurable-system analysis strategies, independent of their implementation and of the analysis techniques (e.g., type checking or model checking). Guided by our model, we developed a number of analysis strategies for configurable systems. To learn about advantages and disadvantages of individual strategies, we compared these in a series of empirical studies. In particular, we developed and evaluated a model-checking analysis and a data-flow analysis for configurable systems. One of our key findings is that family-based analysis outperforms most sampling heuristics with respect to analysis time, while being able to make definite statements about all variants of a configurable system. Furthermore, we identified advantages and disadvantages of analysis strategies and how to mitigate them by combining strategies. In our endeavor, we identified two key problems that are common to configurable-system analyses, and we developed supporting techniques to solve them. These techniques are general and are applicable beyond our research. In particular, we developed presence-condition simplification and variability encoding. Presence-condition simplification provides a simple method to reduce the size of the output or the internal data structure of configurable-system analyses. Variability encoding provides a means for transforming compile-time variability to run-time variability, which enables many family-based analyses.
Our key contributions are the model of analysis strategies for configurable systems and the corresponding empirical comparisons of strategies. Our findings are backed by empirical studies, which helped broaden the community knowledge on analyses of configurable systems (indicated by citations). For these evaluations, we prepared several subject systems, which have also been used already by other researchers. Furthermore, we developed several analysis tools and demonstrated their feasibility in practical application scenarios based on code from, for example, the Linux kernel. Our tools are based on variability-aware optimizations that enable levels of scalability on configurable systems that were not possible with other tools before.
This dissertation sets out to deepen our understanding of the causes and manifestations of digital inequality through the lens of technology adoption research. To this day, digital inequality remains an important and relevant societal issue. With the rapid proliferation of digital ICT over the last 20 years, the nature of the phenomenon may have evolved from an access-based to an appropriation-based issue, yet its significance has not diminished. Against this backdrop, this dissertation aims to explore which mechanisms and factors influence why and how individuals use ICT in the context of digital inequality, and, in particular, what role social influence, socio-cognitive processes, and socio-economic determinants play. In a series of essays, this dissertation first develops a theoretical understanding of the concept of social influence, which plays an important role in determining whether and how individuals use a technology. Next, a process lens is adopted to explore the underlying mechanisms that drive individuals to disengage from a new technology and may lead to digital exclusion. Building on that, this dissertation examines how digital inequality manifests itself in the specific realm of e-commerce. This dissertation concludes with a practical perspective on the issue of digital inequality aimed at policy makers seeking to bridge the gap between digitally advantaged and disadvantaged users.
Ever since the inception of the Internet, researchers have been both enthusiastic and concerned about the social implications of Internet-enabled digitization (DiMaggio, Hargittai, Neuman, & Robinson, 2001). In particular, the issue of unequal access to digital opportunities has garnered substantial research attention and has been termed ‘digital inequality’ (Hargittai & Hinnant, 2008; Hsieh, Rai, & Keil, 2008; Kvasny & Keil, 2006; Riggins & Dewan, 2005). Generally, digital inequality refers to the unequal opportunity and ability of individuals to profit from information and communication technologies (ICT) (DiMaggio & Hargittai, 2001). The phenomenon of digital inequality has also been at the heart of public debates because, in the light of the ongoing digitization, using ICT actively is more and more becoming a prerequisite to fully participate in society. This thesis seeks to expand research on the complex and societally relevant phenomenon of digital inequality. Specifically, I aim to explore the following focal research question:
How do individuals use ICTs and which mechanisms and factors influence individual use and non-use of ICTs in the context of digital inequality?
Advancing our knowledge in this field is particularly relevant for the following reasons. First, while understanding all stages of digital inequality is essential for both assessing the true severity of the phenomenon and developing measures to bridge it, a large part of research has so far focused on ICT access and adoption. Yet, whether digital inequality eventually translates into inequality in ‘real life’ is determined by whether individuals can use ICTs to their advantage and benefit from digital opportunities. This thesis seeks to address this research gap by shifting the attention to the factors and mechanisms that drive individual differences in ICT appropriation as opposed to ICT access and adoption. Second, digital inequality research still stands to profit from a broader methodological foundation. In fact, most of what we know about digital inequality is based on the quantitative analysis of surveys and statistical data, which might limit research in exploring and better understanding the more complex and multi-layered forms of digital inequality as evident in ICT appropriation. This thesis aims at strengthening the methodological foundation of digital inequality research and at generating new and rich insights by adopting so far underrepresented research methods, in particular qualitative and internet-enabled data tracking methodology. Third, digital inequality is an interdisciplinary research field and different insights have been gained in a diverse range of academic disciplines. In this thesis, I also seek to lay a sound theoretical foundation for my own research and the research of others by integrating these otherwise separate perspectives on digital inequality. Fourth, better understanding digital inequality and potential means to bridge it is of high societal relevance. Therefore, this thesis also aims at inferring implications not only for academic research but also for practitioners, in particular public policy makers. The thesis comprises four papers that seek to address the points outlined above.
This doctoral thesis is devoted to generalize border bases to the module setting and to apply them in various ways.
First, we generalize the theory of border bases to finitely generated modules over a polynomial ring. We characterize these generalized border bases and show that we can compute them. As an application, we are able to characterize subideal border bases in various new ways and give a new algorithm for their computation. Moreover, we prove Schreyer's Theorem for border bases of submodules of free modules of finite rank over a polynomial ring.
In the second part of this thesis, we study the effect of homogenization on border bases of zero-dimensional ideals. This yields the new concept of projective border bases of homogeneous one-dimensional ideals. We show that there is a one-to-one correspondence between projective border bases and zero-dimensional closed subschemes of weighted projective spaces that have no point on the hyperplane at infinity. Applying that correspondence, we can characterize uniform zero-dimensional closed subschemes of weighted projective spaces that have a rational support over the base field in various ways. Finally, we introduce projective border basis schemes as specific subschemes of border basis schemes. We show that these projective border basis schemes parametrize all zero-dimensional closed subschemes of a weighted projective space whose defining ideals possess a projective border basis. Assuming that the base field is algebraically closed, we are able to prove that the set of all closed points of a projective border basis scheme that correspond to a uniform subscheme is a constructible set with respect to the Zariski topology.
The well-known Riemann Mapping Theorem states the existence of a conformal map of a simply connected proper domain of the complex plane onto the upper half plane. One of the main topics in geometric function theory is to investigate the behaviour of the mapping functions at the boundary of such domains. In this work, we always assume that a piecewise analytic boundary is given. Here, we have to distinguish between regular and singular boundary points. While the asymptotic behaviour for regular boundary points can be investigated by using the Schwarz Reflection at analytic arcs, the situation for singular boundary points is far more complicated. In the latter scenario, two cases have to be differentiated: analytic corners and analytic cusps. The first part of the thesis deals with the asymptotic behaviour at analytic corners where the opening angle is greater than 0. The results of Lichtenstein and Warschawski on the asymptotic behaviour of the Riemann map and its derivatives at an analytic corner are presented, as well as the much stronger result of Lehman that the mapping function can be developed in a certain generalised power series, which in turn enables us to examine the o-minimal content of the Riemann Mapping Theorem. To obtain a similar statement for domains with analytic cusps, it is necessary to investigate the asymptotic behaviour of a Riemann map at the cusp and, based on this result, to determine the asymptotic power series expansion. Therefore, the aim of the second part of this work is to investigate the asymptotic behaviour of a Riemann map at an analytic cusp. A simply connected domain has an analytic cusp if the boundary is locally given by two analytic arcs such that the interior angle vanishes. Besides the asymptotic behaviour of the mapping function, the behaviour of its derivatives, its inverse, and the derivatives of the inverse are analysed. Finally, we present a conjecture on the asymptotic power series expansion of the mapping function at an analytic cusp.
Web search engines have become an indispensable online service to retrieve content on the Internet. However, using search engines raises serious privacy issues, as the latter gather large amounts of data about individuals through their search queries. Two main techniques have been proposed to privately query search engines. A first category of approaches, called unlinkability, aims at disassociating the query from the identity of its requester. A second category of approaches, called indistinguishability, aims at hiding the user’s queries or the user’s interests by either obfuscating the user’s queries or forging new fake queries. This paper presents a study of the level of protection offered by three popular solutions: Tor-based, TrackMeNot, and GooPIR. For this purpose, we present an efficient and scalable attack, SimAttack, leveraging a similarity metric that captures the distance between preliminary information about the users (i.e., their query histories) and a new query. SimAttack de-anonymizes up to 36.7 % of queries protected by an unlinkability solution (i.e., Tor-based), and identifies up to 45.3 % and 51.6 % of queries protected by indistinguishability solutions (i.e., TrackMeNot and GooPIR, respectively). In addition, SimAttack de-anonymizes 6.7 % more queries than state-of-the-art attacks and dramatically improves the performance of the attack on TrackMeNot by 23.6 %, while retaining an execution time that is two orders of magnitude faster.
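As a toy sketch of the core idea behind such a similarity-based attack (not SimAttack's actual metric or smoothing), an adversary can link an incoming query to the user whose query history is most similar to it; the bag-of-words profiles below are invented.

# Linking a query to the most similar user profile via cosine similarity.
from collections import Counter
from math import sqrt

def cosine(c1, c2):
    common = set(c1) & set(c2)
    num = sum(c1[t] * c2[t] for t in common)
    den = sqrt(sum(v * v for v in c1.values())) * sqrt(sum(v * v for v in c2.values()))
    return num / den if den else 0.0

# Hypothetical per-user query histories known to the adversary.
histories = {
    "user-a": Counter("cheap flights paris hotel paris".split()),
    "user-b": Counter("python numpy tutorial pandas".split()),
}

query = Counter("paris weekend flights".split())
guess = max(histories, key=lambda u: cosine(histories[u], query))
print(guess)  # -> 'user-a'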
Following an “agency-oriented Urban Theory” as advanced by Smith (2001), this study takes the urban landscape of Vinh City in Central Vietnam as a starting point for an investigation of multiple visions of modernity (Eisenstadt, 2000) put forward by social actors, as well as of urban change resulting from the implementation of such visions. Focusing on the period from 1973 to 2011, it traces the application of three different visions for urban development in Vinh: The Socialist City, The Modern and Civilized City, and the Participatory City. The projects presented in this study, which aim at implementing these visions in Vinh, have one thing in common: they are informed by a specific view of what a city is and what it should be, and their implementation aims at changing the city in the desired direction. This goal involves not only physical change of the city, but also institutional change in the urban society. To grasp the interplay between visions of a modern city, their application through concrete projects, and the results of these implementations, the study operates with two specific terms: modern projects and urban change. After introducing Vinh and its history, the thesis presents the period of the vision of The Socialist City and its application in Vinh through cooperation between Vietnam and the German Democratic Republic in the 1970s. It then moves on to the contemporary period starting in the 1990s, during which varying and conflicting modern projects for the city were put forward by different social actors cooperating in joint projects on urban development: the Modern and Civilized City and the Participatory City. While the modes of cooperation differed between the two periods, the study concludes with the argument that the impact of these transnational projects has led to path-dependent, as well as ambivalent, urban change in Vinh.
This thesis attempts to investigate the Noether, Dedekind, and Kähler differents for a 0-dimensional scheme X in the projective n-space P^n_K over an arbitrary field K. In particular, we focus on studying the relations between the algebraic structure of these differents and geometric properties of the scheme X.
In Chapter 1 we give an outline of the problems this thesis is concerned with, a brief literature review for each problem, and the main results regarding these problems. Chapter 2 contains background results that we will need in the subsequent chapters. We introduce the concept of maximal p_j-subschemes of a 0-dimensional scheme X and give some descriptions of them and their Hilbert functions. Furthermore, we generalize the notion of a separator of a subscheme of X of degree deg(X)-1 to a set of separators of a maximal p_j-subscheme of X. In Chapter 3 we explore the Noether, Dedekind, and Kähler differents for 0-dimensional schemes X. First we define these differents for X, take a look at how to compute them, and examine their relations. Then we give an answer to the question "What are the Hilbert functions of these differents?" in some cases.
In Chapter 4 we use the differents to investigate the Cayley-Bacharach property of 0-dimensional schemes over an arbitrary field K. The principal results of this chapter are characterizations of CB-schemes and of arithmetically Gorenstein schemes in terms of their Dedekind differents and a criterion for a 0-dimensional smooth scheme to be a complete intersection. We also generalize some results such as Dedekind's formula and the characterization of the Cayley-Bacharach property by using Liaison theory. In addition, several propositions on the uniformities are proven. In Chapter 5 we are interested in studying the Noether, Dedekind, and Kähler differents for finite special classes of schemes and finding out some applications of these differents. First, we investigate these differents for reduced 0-dimensional almost complete intersections X in P^n_K over a perfect field K. Then we investigate the relationships between these differents and the i-th Fitting ideals of the module of Kähler differentials of the homogeneous coordinate ring of X. Finally, we look more closely at the Hilbert functions and the regularity indices of these differents for fat point schemes.
The Will to Play. Performance and Construction of Royal Masculinity in Early Modern History Plays
(2015)
This thesis examines concepts of masculinity in the early modern period, with a particular focus on the dramatic construction of the figure of the king. On the basis of ten history plays of the 1590s, it first investigates the discursive complexity of royal masculinity in the Renaissance and, building on this, analyses its performative representation. The theoretical part discusses masculinity and rulership in Elizabethan England with the help of contemporary texts and extends this discussion through the discourse on gender and the performativity of gender. The subsequent methodological part develops from these findings a semiotics of royal masculinity, which is then evaluated in the analytical part on the basis of the selected history plays.
This doctoral thesis is dedicated to the analysis and the design of symmetric cryptographic algorithms.
In the first part of the dissertation, we deal with fault-based attacks on cryptographic circuits, which belong to the field of active implementation attacks and aim to retrieve secret keys stored on such chips. Our main focus lies on the cryptanalytic aspects of those attacks. In particular, we target block ciphers with a lightweight and (often) non-bijective key schedule, where the derived subkeys are (almost) independent from each other. An attacker who is able to reconstruct one of the subkeys is thus not necessarily able to directly retrieve other subkeys or even the secret master key by simply reversing the key schedule. We introduce a framework based on differential fault analysis that allows attacking block ciphers that have an arbitrary number of independent subkeys and rely on a substitution-permutation network. These methods are then applied to the lightweight block ciphers LED and PRINCE, and we show in both cases how to recover the secret master key requiring only a small number of fault injections. Moreover, we investigate approaches that utilize algebraic instead of differential techniques for the fault analysis and discuss their advantages and drawbacks. At the end of the first part of the dissertation, we explore fault-based attacks on the block cipher Bel-T, which also has a lightweight key schedule but is based not on a substitution-permutation network but on the so-called Lai-Massey scheme. The framework mentioned above is thus not usable against Bel-T. Nevertheless, we also present techniques for the case of Bel-T that enable full recovery of the secret key in a very efficient way using differential fault analysis.
In the second part of the thesis, we focus on authenticated encryption schemes. While regular ciphers only protect the privacy of processed data, authenticated encryption schemes also secure its authenticity and integrity. Many of these ciphers are additionally able to protect the authenticity and integrity of so-called associated data. This type of data is transmitted unencrypted but nevertheless must be protected from being tampered with during transmission. Authenticated encryption is nowadays the standard technique to protect in-transit data. However, most of the currently deployed schemes have deficits, and there are many leverage points for improvements. With NORX, we introduce a novel authenticated encryption scheme supporting associated data. This algorithm was designed with high security, efficiency in both hardware and software, simplicity, and robustness against side-channel attacks in mind. Next to its specification, we present special features, security goals, implementation details, and extensive performance measurements, and discuss advantages over currently deployed standards. Finally, we describe our preliminary security analysis, in which we investigate differential and rotational properties of NORX. Noteworthy are in particular the newly developed techniques for differential cryptanalysis of NORX, which exploit the power of SAT and SMT solvers and have the potential to be easily adaptable to other encryption schemes as well.
The aim of this dissertation is to investigate Kaehler differential algebras and their Hilbert functions for 0-dimensional schemes in P^n. First we give relations between the Kaehler differential 1-forms of a fat point scheme and those of other fat point schemes. Then we determine the Hilbert polynomial and give a sharp bound for the regularity index of the module of Kaehler differential m-forms, for 0<m<n+2. Next, we examine the Kaehler differential algebras for fat point schemes whose supports lie on non-singular conics in P^2. Finally, we prove the Segre bounds for equimultiple fat point schemes in P^4; this result allows us to determine the regularity index of the module of Kaehler differential 1-forms, and a sharp bound for the regularity index of the module of Kaehler differential m-forms, for 1<m<6.
This thesis is divided into two parts. The first part is devoted to the curvature estimation of piecewise smooth curves using variation diminishing splines. The variation diminishing property combined with the ability to reconstruct linear functions leads to a convexity preserving approximation that is crucial if additional sign changes in the curvature estimation have to be avoided. To this end, we will first establish the foundations of variation diminishing transforms and introduce the Bernstein and the Schoenberg operator on the space of continuous functions and its generalization to the Lp-spaces. In order to be able to detect C2-singularities in piecewise smooth curves, we establish lower estimates for the approximation error in terms of the second order modulus of smoothness for Schoenberg’s variation diminishing operator. Afterwards, we consider smooth curve approximations using only finitely many samples of the curve, where the approximation, its first, and its second derivative converge uniformly to the corresponding parts of the curve to be approximated. In this case, we can show that the estimated curvature converges uniformly to the real curvature if the number of samples goes to infinity. Based on the lower estimates that relate the decay rate of the approximation error to smoothness, we propose a multi-scale algorithm to estimate the curvature and to detect C2-singularities. We numerically evaluate our algorithm and compare it to others to show that our algorithm achieves competitive accuracy while our curvature estimations are significantly faster to compute.
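To recall why uniform convergence of the approximation together with its first and second derivatives controls the curvature estimate, note the standard expression for the signed curvature of a plane curve \gamma(t) = (x(t), y(t)):

\kappa(t) = \frac{x'(t)\,y''(t) - y'(t)\,x''(t)}{\bigl(x'(t)^{2} + y'(t)^{2}\bigr)^{3/2}},

which depends continuously on the first and second derivatives wherever the curve is regular, i.e. wherever x'(t)^{2} + y'(t)^{2} > 0.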
The second part deals with generalizations of the established lower estimates for the Schoenberg operator. We will show that such estimates can be obtained for linear operators on a general Banach function space with smooth range, provided that the iterates of the operator converge uniformly and a semi-norm defined on the range of the operator annihilates the fixed points of the operator. To this end, we will prove by spectral properties that the iterates of every positive finite-rank operator converge uniformly. As a highlight of this thesis, we show a constructive way, using a Gramian matrix in which the dual fixed points operate on the fixed points of an operator, to derive the limit of the iterates for an arbitrary quasi-compact operator defined on a general Banach space.
Most major airports collect recordings of aircraft positions at specific times. These data typically require extensive smoothing and correction before they can be used for later analysis. Conventional smoothing approaches fail to model the movement in a physically correct way, i.e. they do not take standstills of aircraft into account.
In this thesis we develop a method to detect standstills, employ robust smoothing splines for data fitting, add adequate boundary conditions for the detected standstill periods (i.e. we force the function to be constant during a standstill and the entry and exit directions of the standstill to be identical), and give an algorithm to solve these approximation problems efficiently.
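A plausible sketch of the standstill-detection step (the speed threshold, the minimum duration, and the finite-difference speed estimate are illustrative assumptions; the thesis' actual criterion may differ):

```python
# Sketch: detect standstill periods from timestamped 2-D positions by
# thresholding a finite-difference speed estimate.  The threshold and the
# minimum duration are illustrative assumptions, not the thesis' values.
import numpy as np

def detect_standstills(t, x, y, speed_thresh=0.5, min_duration=10.0):
    """Return (t_start, t_end) intervals during which the aircraft stands still."""
    v = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)   # speed between samples
    still = v < speed_thresh
    intervals, start = [], None
    for k, s in enumerate(still):
        if s and start is None:
            start = t[k]
        elif not s and start is not None:
            if t[k] - start >= min_duration:
                intervals.append((start, t[k]))
            start = None
    if start is not None and t[-1] - start >= min_duration:
        intervals.append((start, t[-1]))
    return intervals
```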
In the process we give an explicit proof of the convergence of the IRLS algorithm proposed by Huber for computing M-type estimates in non-linear approximation problems. Furthermore, we derive a blueprint for a method to solve separable, quadratic least squares problems with very few quadratic variables.
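For orientation, an IRLS iteration with Huber weights for a linear model (the thesis treats the non-linear spline setting and proves convergence; this linear toy version only illustrates the re-weighting principle, and the tuning constant is an assumed default):

```python
# Sketch: iteratively re-weighted least squares (IRLS) with Huber weights for a
# linear model A x ~ b.  The Huber tuning constant c = 1.345 is a common default
# and an assumption here; residual scaling by a robust scale estimate is omitted.
import numpy as np

def irls_huber(A, b, c=1.345, iterations=50):
    x = np.linalg.lstsq(A, b, rcond=None)[0]                     # ordinary LS start
    for _ in range(iterations):
        r = b - A @ x                                            # residuals
        w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))    # Huber weights
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

# Usage: a line fit with two gross outliers.
t = np.linspace(0.0, 1.0, 20)
b = 2.0 * t + 1.0
b[3] += 10.0
b[15] -= 8.0
A = np.column_stack([t, np.ones_like(t)])
print(irls_huber(A, b))          # close to [2, 1] despite the outliers
```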
Top-k Semantic Caching
(2015)
The subject of this thesis is the intelligent caching of top-k queries in an environment with high latency and low throughput. In such an environment, caching can be used to reduce network traffic and improve response time. Slow database connections of mobile devices and connections to offshored databases are practical use cases.
A semantic cache is a query-based cache that caches query results and maintains their semantic description. It reuses partial matches of previous query results. Each query that is processed by the semantic cache is split into two disjoint parts: one that can be answered completely with tuples from the cache (the probe query), and one that requires tuples to be transferred from the server (the remainder query).
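To make the probe/remainder split concrete, a toy example for a single one-dimensional range predicate (the cache itself handles general predicates; the interval representation here is an illustrative simplification):

```python
# Sketch: splitting a new range query against a cached range into a probe query
# (answerable from the cache) and a remainder query (sent to the server).
# Ranges are half-open intervals [lo, hi); general predicates are not modelled.
def split_query(query, cached):
    q_lo, q_hi = query
    c_lo, c_hi = cached
    overlap = (max(q_lo, c_lo), min(q_hi, c_hi))
    probe = overlap if overlap[0] < overlap[1] else None
    remainder = []
    if q_lo < min(q_hi, c_lo):
        remainder.append((q_lo, min(q_hi, c_lo)))        # part left of the cache
    if max(q_lo, c_hi) < q_hi:
        remainder.append((max(q_lo, c_hi), q_hi))        # part right of the cache
    return probe, remainder

# Example: cache holds price in [10, 50), the new query asks for price in [30, 80).
print(split_query((30, 80), (10, 50)))   # -> ((30, 50), [(50, 80)])
```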
Existing semantic caches do not support top-k queries, i.e., ordered and limited queries. In this thesis, we present an innovative semantic cache that naturally supports top-k queries. The support of top-k queries in a semantic cache has considerable effects on cache elements, operations on cache elements -- like creation, difference, intersection, and union -- and query answering. Hence, we introduce new techniques for cache management and query processing. They enable the semantic cache to become a true top-k semantic cache.
In addition, we have developed a new algorithm that estimates lower bounds on the results of sorted queries using multidimensional histograms. Using this algorithm, our top-k semantic cache is able to pipeline partial results of top-k queries, which can significantly increase query execution performance.
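One plausible reading of this idea, reduced to a one-dimensional histogram (the thesis uses multidimensional histograms; this stand-in only shows why a histogram yields a guaranteed lower bound):

```python
# Sketch: a guaranteed lower bound on the smallest sort-attribute value the
# remainder query can return, read off a histogram over the server's data.
# One-dimensional buckets are a simplification of the thesis' multidimensional
# histograms.
def remainder_lower_bound(buckets, remainder_ranges):
    """buckets: (lo, hi, count) over the sort attribute; ranges: [lo, hi) intervals."""
    bound = None
    for b_lo, b_hi, count in buckets:
        if count == 0:
            continue                                      # bucket holds no tuples at all
        for r_lo, r_hi in remainder_ranges:
            if b_lo < r_hi and r_lo < b_hi:               # bucket may contain remainder tuples
                candidate = max(b_lo, r_lo)               # nothing in it sorts below this value
                bound = candidate if bound is None else min(bound, candidate)
    return bound

# Example: the remainder query covers price in [50, 80).
hist = [(0, 20, 40), (20, 60, 0), (60, 100, 25)]
print(remainder_lower_bound(hist, [(50, 80)]))            # -> 60
```

Cached tuples whose sort value lies below such a bound can be returned to the application before the remainder query has been answered.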
We have implemented a prototype of a top-k semantic cache called IQCache (Intelligent Query Cache). An extensive and thorough evaluation with various benchmarks using our prototype demonstrates the applicability and performance of top-k semantic caching in practice. The experiments prove that the top-k semantic cache invariably outperforms simple hash-based caching strategies and scales very well.
The World Wide Web today serves as a distributed application platform. Its origins, however, go back to a simple delivery network for static hypertexts. The legacy of those days can still be observed in the communication protocol used by increasingly sophisticated clients and applications. This thesis identifies the actual security requirements of modern web applications and shows that HTTP does not meet them: user and application authentication, message integrity and confidentiality, control-flow integrity, and application-to-application authorization. We explore the other protocols in the web stack and work out why they cannot fill the gap. Our analysis shows that the underlying problem is the connectionless property of HTTP. However, history shows that a fresh start for web communication is far from realistic. As a consequence, we propose approaches that contribute to meeting the identified requirements.
We first present impersonation attack vectors that begin before the actual user authentication, i.e. when secure web interaction and authentication seem to be unnecessary. Session fixation attacks exploit a responsibility mismatch between the web developer and the web application framework in use. We describe and compare three countermeasures on different implementation levels: on the source code level, on the framework level, and on the network level as a reverse proxy.
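On the source code level, the standard countermeasure is to issue a fresh session identifier when the user logs in, so that an identifier planted by the attacker never becomes authenticated; a framework-agnostic sketch (the session store and names are illustrative, not taken from the thesis):

```python
# Sketch: renewing the session identifier on login defeats session fixation,
# because an identifier planted by the attacker never becomes authenticated.
# The in-memory store and the function names are illustrative only.
import secrets

sessions = {}                                   # session_id -> session data

def login(old_session_id, user):
    data = sessions.pop(old_session_id, {})     # invalidate the pre-login identifier
    data["user"] = user
    new_session_id = secrets.token_urlsafe(32)  # fresh, unguessable identifier
    sessions[new_session_id] = data
    return new_session_id                       # to be set as the new session cookie
```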
Then, we explain how the authentication credentials that are transmitted for the user login, i.e. the password, and for session tracking, i.e. the session cookie, can be complemented by browser-stored and user-based secrets respectively. This way, an attacker cannot hijack user accounts merely by phishing the user's password, because an additional browser-based secret is required for login. Also, the class of well-known session hijacking attacks is mitigated because a secret known only to the user must be provided in order to perform critical actions.
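The login check can thus be thought of as requiring two independent proofs; a toy sketch under assumed names (not the thesis' concrete protocol):

```python
# Sketch: login succeeds only if the password matches *and* the browser presents
# its stored secret, so phishing the password alone does not suffice.  Names and
# storage layout are illustrative; a real system would use a password KDF
# (e.g. scrypt or argon2) rather than a single salted hash.
import hashlib
import hmac

def verify_login(user_record, password, browser_secret):
    password_ok = hmac.compare_digest(
        hashlib.sha256((user_record["salt"] + password).encode()).hexdigest(),
        user_record["password_hash"],
    )
    browser_ok = hmac.compare_digest(browser_secret, user_record["browser_secret"])
    return password_ok and browser_ok
```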
In the next step, we explore alternative approaches to static authentication credentials. Our approach implements a trusted UI and a mutually authenticated session, using signatures as a means to authenticate requests. This way, it establishes a trusted path between the user and the web application without exchanging reusable authentication credentials. As a downside, this approach requires support on both the client side and the server side in order to provide maximum protection. Another approach avoids client-side support but cannot implement a trusted UI and is thus susceptible to phishing and clickjacking attacks.
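The per-request authentication idea can be sketched as follows (HMAC over a canonical request string is used here as a stand-in; the thesis' scheme and all field names may differ):

```python
# Sketch: authenticating each request with a MAC over its security-relevant parts
# instead of sending a reusable credential.  HMAC, the canonical string and the
# timestamp-based replay limitation are illustrative stand-ins.
import hashlib
import hmac

def sign_request(session_key, method, path, timestamp, body=b""):
    canonical = f"{method}\n{path}\n{timestamp}\n".encode() + body
    return hmac.new(session_key, canonical, hashlib.sha256).hexdigest()

def verify_request(session_key, method, path, timestamp, body, mac):
    expected = sign_request(session_key, method, path, timestamp, body)
    return hmac.compare_digest(expected, mac)
```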
The approaches described so far increase the security level of all web communication at all times. This is why we investigate adaptive security policies that fit the actual risk instead of permanently restricting all kinds of communication, including non-critical requests. We develop a smart browser extension that detects when the user is authenticated on a website, meaning that she can be impersonated because all requests carry her identity proof. Non-critical communication, however, is released from restrictions to enable all intended web features.
Finally, we focus on attacks targeting a web application's control-flow integrity. We explain them thoroughly, check whether current web application frameworks provide means of protection, and implement two approaches to protect web applications: the first approach is an extension for a web application framework and provides protection based on its configuration by checking all requests for policy conformity. The second approach generates its own policies ad hoc based on the observed web traffic, assuming that regular users only click on links and buttons and fill in forms but do not craft requests to protected resources.
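The second, traffic-derived approach can be pictured as a per-session whitelist of the requests the last response actually offered; a much simplified sketch (the policy representation and names are illustrative assumptions):

```python
# Sketch: control-flow integrity as a per-session whitelist of allowed next
# requests, derived from the links, buttons and form actions the last response
# actually offered.  The policy representation is an illustrative simplification.
ENTRY_POINTS = {"/", "/login"}      # requests that are always allowed (assumption)
allowed_next = {}                   # session_id -> set of offered request targets

def record_response(session_id, offered_targets):
    """Remember which targets the response just sent to this session offers."""
    allowed_next[session_id] = set(offered_targets)

def check_request(session_id, target):
    """Accept only requests that were offered before or are public entry points."""
    return target in ENTRY_POINTS or target in allowed_next.get(session_id, set())
```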