Top-k Semantic Caching
(2015)
The subject of this thesis is the intelligent caching of top-k queries in an environment with high latency and low throughput. In such an environment, caching can be used to reduce network traffic and improve response time. Practical use cases include slow database connections of mobile devices and connections to offshored databases.
A semantic cache is a query-based cache that caches query results and maintains their semantic descriptions. It reuses partial matches of previous query results. Each query that is processed by the semantic cache is split into two disjoint parts: one that can be completely answered with tuples of the cache (the probe query), and another that requires tuples to be transferred from the server (the remainder query).
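To illustrate the probe/remainder split, the following minimal Python sketch treats cached and incoming query predicates as half-open numeric ranges over a single attribute. The range-based simplification and all names are ours for illustration; they are not the thesis' implementation.

```python
# Toy illustration of a semantic-cache probe/remainder split.
# A query predicate is simplified to a half-open numeric range over one
# attribute, e.g. "price >= 10 AND price < 50" becomes (10, 50).

def split_query(query_range, cached_range):
    """Split `query_range` into the part answerable from the cache
    (probe query) and the part that must be fetched from the server
    (remainder query). Ranges are (low, high) tuples; None means empty."""
    q_lo, q_hi = query_range
    c_lo, c_hi = cached_range

    # Probe query: overlap of the query with the cached region.
    probe = (max(q_lo, c_lo), min(q_hi, c_hi))
    if probe[0] >= probe[1]:
        probe = None

    # Remainder query: parts of the query outside the cached region
    # (up to two disjoint ranges for a one-dimensional predicate).
    remainder = []
    if q_lo < c_lo:
        remainder.append((q_lo, min(q_hi, c_lo)))
    if q_hi > c_hi:
        remainder.append((max(q_lo, c_hi), q_hi))
    return probe, remainder

if __name__ == "__main__":
    # Cache holds tuples with 20 <= price < 60; query asks for 10 <= price < 50.
    probe, remainder = split_query((10, 50), (20, 60))
    print("probe query:", probe)          # (20, 50) -> answered from the cache
    print("remainder query:", remainder)  # [(10, 20)] -> fetched from the server
```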
Existing semantic caches do not support top-k queries, i.e., ordered and limited queries. In this thesis, we present an innovative semantic cache that naturally supports top-k queries. The support of top-k queries in a semantic cache has considerable effects on cache elements, operations on cache elements -- like creation, difference, intersection, and union -- and query answering. Hence, we introduce new techniques for cache management and query processing. They enable the semantic cache to become a true top-k semantic cache.
In addition, we have developed a new algorithm that can estimate the lower bounds of query results of sorted queries using multidimensional histograms. Using this algorithm, our top-k semantic cache is able to pipeline partial query results of top-k queries. Thereby, query execution performance can be significantly increased.
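The following hedged sketch illustrates the pipelining idea with a one-dimensional histogram over the remainder region: cached tuples that are guaranteed to sort before any possible server tuple can be emitted immediately. The one-dimensional simplification and all names are ours; the thesis uses multidimensional histograms.

```python
# Toy sketch: pipelining cached tuples of a top-k query ordered ascending
# on one attribute, using a histogram over the remainder region to derive
# a lower bound on any tuple the server could still deliver.

def remainder_lower_bound(histogram):
    """`histogram` is a list of (bucket_low, bucket_high, count) over the
    order-by attribute, restricted to the remainder region. The smallest
    possible server value is the lower edge of the first non-empty bucket."""
    for low, _high, count in sorted(histogram):
        if count > 0:
            return low
    return float("inf")  # remainder region is (estimated to be) empty

def pipeline_cached(cached_values, histogram, k):
    """Yield cached values (ascending) that surely belong to the top-k
    result because no server tuple can sort before them."""
    bound = remainder_lower_bound(histogram)
    emitted = 0
    for value in sorted(cached_values):
        if value < bound and emitted < k:
            emitted += 1
            yield value
        else:
            break  # remaining positions depend on the server's answer

if __name__ == "__main__":
    cache = [3, 7, 12, 25]
    hist = [(10, 20, 0), (20, 30, 4), (30, 40, 1)]  # no remainder tuple below 20
    print(list(pipeline_cached(cache, hist, k=3)))   # [3, 7, 12]
```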
We have implemented a prototype of a top-k semantic cache called IQCache (Intelligent Query Cache). An extensive and thorough evaluation with various benchmarks using our prototype demonstrates the applicability and performance of top-k semantic caching in practice. The experiments prove that the top-k semantic cache invariably outperforms simple hash-based caching strategies and scales very well.
The World Wide Web today serves as a distributed application platform. Its origins, however, go back to a simple delivery network for static hypertexts. The legacy from these days can still be observed in the communication protocol used by increasingly sophisticated clients and applications. This thesis identifies the actual security requirements of modern web applications and shows that HTTP does not fit them: user and application authentication, message integrity and confidentiality, control-flow integrity, and application-to-application authorization. We explore the other protocols in the web stack and work out why they cannot fill the gap. Our analysis shows that the underlying problem is the connectionless property of HTTP. However, history shows that a fresh start with web communication is far from realistic. As a consequence, we come up with approaches that contribute to meeting the identified requirements.
We first present impersonation attack vectors that begin before the actual user authentication, i.e., when secure web interaction and authentication seem to be unnecessary. Session fixation attacks exploit a responsibility mismatch between the web developer and the web application framework in use. We describe and compare three countermeasures on different implementation levels: on the source code level, on the framework level, and on the network level as a reverse proxy.
Then, we explain how the authentication credentials that are transmitted for the user login, i.e., the password, and for session tracking, i.e., the session cookie, can be complemented by browser-stored and user-based secrets, respectively. This way, an attacker cannot hijack user accounts merely by phishing the user's password, because an additional browser-based secret is required for login. Also, the class of well-known session hijacking attacks is mitigated because a secret only known by the user must be provided in order to perform critical actions.
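As a rough illustration of this idea, the hypothetical Python sketch below only accepts a login when both the (phishable) password and a browser-stored token are presented, and requires a user-only secret for critical actions. All names, the storage choices, and the hash/PIN details are assumptions for illustration, not the thesis' implementation.

```python
# Hypothetical sketch: a login only succeeds if BOTH the phishable password
# and a secret previously stored in the user's browser are presented.
import hashlib
import hmac
import secrets

USERS = {
    "alice": {
        "password_hash": hashlib.sha256(b"correct horse").hexdigest(),
        "browser_token": secrets.token_hex(16),   # provisioned into the browser
        "action_pin": "4711",                     # known only to the user
    }
}

def login(username, password, presented_browser_token):
    user = USERS.get(username)
    if user is None:
        return False
    password_ok = hmac.compare_digest(
        hashlib.sha256(password.encode()).hexdigest(), user["password_hash"])
    # Phishing the password alone is not enough: the browser-stored token
    # must also accompany the login request.
    token_ok = hmac.compare_digest(presented_browser_token, user["browser_token"])
    return password_ok and token_ok

def perform_critical_action(username, session_authenticated, presented_pin):
    # Even with a hijacked session cookie, critical actions additionally
    # require a secret only the user knows.
    user = USERS.get(username)
    return bool(user) and session_authenticated and presented_pin == user["action_pin"]
```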
In the next step, we explore alternative approaches to static authentication credentials. Our approach implements a trusted UI and a mutually authenticated session using signatures as a means to authenticate requests. This way, it establishes a trusted path between the user and the web application without exchanging reusable authentication credentials. As a downside, this approach requires support on the client side and on the server side in order to provide maximum protection. Another approach avoids client-side support but cannot implement a trusted UI and is thus susceptible to phishing and clickjacking attacks.
Our approaches described so far increase the security level of all web communication at all times. This is why we investigate adaptive security policies that fit the actual risk instead of permanently restricting all kinds of communication, including non-critical requests. We develop a smart browser extension that detects when the user is authenticated on a website, meaning that she can be impersonated because all requests carry her identity proof. Uncritical communication, however, is released from restrictions to enable all intended web features.
Finally, we focus on attacks targeting a web application's control-flow integrity. We explain them thoroughly, check whether current web application frameworks provide means for protection, and implement two approaches to protect web applications: The first approach is an extension for a web application framework and provides protection based on its configuration by checking all requests for policy conformity. The second approach generates its own policies ad hoc based on the observed web traffic and assuming that regular users only click on links and buttons and fill forms but do not craft requests to protected resources.
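A minimal sketch of the second, traffic-derived approach might look as follows: the policy is learned as "which requests were offered as links or forms on the previously served page", and any other request to a protected resource is rejected. The page/link model and all names are hypothetical simplifications of ours.

```python
# Toy control-flow integrity check: a request to a protected resource is only
# allowed if the previously delivered page actually offered it as a link,
# button, or form target (regular users do not craft requests by hand).

class ControlFlowPolicy:
    def __init__(self, protected_prefix="/app/"):
        self.protected_prefix = protected_prefix
        self.offered = {}  # session_id -> set of request targets offered last

    def record_response(self, session_id, offered_targets):
        """Called when a page is served: remember which follow-up requests
        (hrefs, form actions) the page legitimately offers."""
        self.offered[session_id] = set(offered_targets)

    def check_request(self, session_id, target):
        """Return True if the request conforms to the learned policy."""
        if not target.startswith(self.protected_prefix):
            return True  # uncritical resources stay unrestricted
        return target in self.offered.get(session_id, set())

if __name__ == "__main__":
    policy = ControlFlowPolicy()
    policy.record_response("s1", {"/app/cart", "/app/checkout"})
    print(policy.check_request("s1", "/app/checkout"))    # True: offered by the page
    print(policy.check_request("s1", "/app/admin/drop"))  # False: crafted request
```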
This thesis addresses a problem related to usage analysis in information retrieval systems. We exploit the history of search queries as a basis of analysis to extract a profile model. The objective is to characterize the users and the data sources that interact in a system, in order to allow different types of comparison (user-to-user, source-to-source, user-to-source). According to the study we conducted of existing work on profile models, we concluded that the large majority of contributions are strongly tied to the applications within which they are proposed. As a result, the proposed profile models are not reusable and suffer from several weaknesses. For instance, these models do not consider the data source, they lack semantic mechanisms, and they do not deal with scalability (in terms of complexity). Therefore, we propose a generic model of user and data source profiles. The characteristics of this model are the following. First, it is generic, being able to represent both the user and the data source. Second, it enables the construction of profiles in an implicit way based on histories of search queries. Third, it defines the profile as a set of topics of interest, each topic corresponding to a semantic cluster of keywords extracted by a specific clustering algorithm. Finally, the profile is represented according to the vector space model. The model is composed of several components organized in the form of a framework, in which we assessed the complexity of each component.
The main components of the framework are:
• a method for keyword query disambiguation;
• a method for semantically representing search query logs in the form of a taxonomy;
• a clustering algorithm that allows fast and efficient identification of topics of interest as semantic clusters of keywords;
• a method to identify user and data source profiles according to the generic model.
This framework makes it possible, in particular, to perform various tasks related to the usage-based structuring of a distributed environment. As examples of application, the framework is used for the discovery of user communities and for the categorization of data sources. To validate the proposed framework, we conduct a series of experiments on real logs from the search engine AOL Search, which demonstrate the efficiency of the disambiguation method on short queries and show the relation between quality-based clustering and structure-based clustering.
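To make the profile model more concrete, here is a small hedged sketch: a profile (of a user or of a data source) is built from a query log as a weight vector over topics of interest, and profiles are compared in the vector space model. The keyword-to-topic map stands in for the clustering step; all names and data are illustrative assumptions.

```python
# Toy sketch of the generic profile model: a profile is a vector over topics
# of interest (each topic a cluster of keywords), compared via cosine
# similarity in the vector space model.
import math
from collections import Counter

# Hypothetical keyword -> topic assignment, as produced by a clustering step.
KEYWORD_TO_TOPIC = {
    "jaguar": "cars", "bmw": "cars", "engine": "cars",
    "python": "programming", "compiler": "programming",
    "hotel": "travel", "flight": "travel",
}

def build_profile(query_log):
    """Aggregate a history of search queries into a topic-weight vector."""
    topics = Counter()
    for query in query_log:
        for keyword in query.lower().split():
            topic = KEYWORD_TO_TOPIC.get(keyword)
            if topic:
                topics[topic] += 1
    return dict(topics)

def cosine(p, q):
    """Similarity between two profiles (user-user, source-source, user-source)."""
    dot = sum(p.get(t, 0) * q.get(t, 0) for t in set(p) | set(q))
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

if __name__ == "__main__":
    user = build_profile(["bmw engine tuning", "cheap flight berlin"])
    source = build_profile(["jaguar engine parts", "bmw dealership"])
    print(round(cosine(user, source), 3))
```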
Static analysis tools and transformation engines for source code belong to the standard equipment of a software developer. Their use significantly simplifies a developer's everyday work of maintaining and evolving software systems and, hence, accounts for much of a developer's programming efficiency and productivity. This is also beneficial from a financial point of view, as programming errors are detected and avoided early in the development process; thus, the use of static analysis tools reduces the overall software-development costs considerably.
In practice, software systems are often developed as configurable systems to account for different requirements of application scenarios and use cases. To implement configurable systems, developers often use compile-time implementation techniques, such as preprocessors, by using #ifdef directives. Configuration options control the inclusion and exclusion of #ifdef-annotated source code, and their selection/deselection serves as input for generating tailor-made system variants on demand. Existing configurable systems, such as the Linux kernel, often provide thousands of configuration options, forming a huge configuration space with billions of system variants.
Unfortunately, existing tool support cannot handle the myriads of system variants that can typically be derived from a configurable system. Analysis and transformation tools are not prepared for variability in source code and, hence, they may process it incorrectly, resulting in incomplete and often broken tool support.
We challenge the way configurable systems are analyzed and transformed by introducing variability-aware static analysis tools and a variability-aware transformation engine for configurable systems' development. The main idea of such tool support is to exploit commonalities between system variants, reducing the effort of analyzing and transforming a configurable system. In particular, we develop novel analysis approaches for analyzing the myriads of system variants and compare them to state-of-the-art analysis approaches (namely sampling). The comparison shows that variability-aware analysis is complete (with respect to covering the whole configuration space), efficient (it outperforms some of the sampling heuristics), and scales even to large software systems. We demonstrate that variability-aware analysis is practical even for non-trivial case studies, such as the Linux kernel.
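The contrast between per-variant analysis and variability-aware analysis can be sketched as follows. This is a deliberately simplified toy of ours, with presence conditions reduced to sets of required options; it is not the analysis infrastructure of the thesis.

```python
# Toy contrast between brute-force (per-variant) analysis and a
# variability-aware pass: fragments carry presence conditions over
# configuration options, and the variability-aware analysis visits each
# fragment once instead of once per variant.
from itertools import product

OPTIONS = ["DEBUG", "NET", "CRYPTO"]

# (fragment name, set of options that must be enabled for it to be compiled)
FRAGMENTS = [
    ("init_logging", {"DEBUG"}),
    ("open_socket", {"NET"}),
    ("tls_handshake", {"NET", "CRYPTO"}),
]

def analyze_fragment(name, condition):
    """Stand-in for a real static analysis of one code fragment."""
    return f"analyzed {name} under condition {sorted(condition)}"

def brute_force():
    """Analyze every included fragment in every one of the 2^n variants."""
    runs = []
    for values in product([False, True], repeat=len(OPTIONS)):
        config = dict(zip(OPTIONS, values))
        for name, required in FRAGMENTS:
            if all(config[o] for o in required):
                runs.append(analyze_fragment(name, required))
    return runs

def variability_aware():
    """Analyze every fragment exactly once under its presence condition;
    the annotated result holds for all variants satisfying the condition."""
    return [analyze_fragment(name, required) for name, required in FRAGMENTS]

if __name__ == "__main__":
    print(len(brute_force()), "fragment analyses with brute force")      # grows with 2^n
    print(len(variability_aware()), "fragment analyses variability-aware")  # == #fragments
```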
On top of variability-aware analysis, we develop a transformation engine for C, which respects variability induced by the preprocessor. The engine provides three common refactorings (rename identifier, extract function, and inline function) and overcomes shortcomings (completeness, use of heuristics, and scalability issues) of existing engines, while still being semantics-preserving with respect to all variants and being fast, providing an instantaneous user experience. To validate semantics preservation, we extend a standard testing approach for refactoring engines with variability and show in real-world case studies the effectiveness and scalability of our engine.
In the end, our analysis and transformation techniques show that configurable systems can efficiently be analyzed and transformed (even for large-scale systems), providing the same guarantees for configurable systems as for standard systems in terms of detecting and avoiding programming errors.
”Volkstümliche Musik” is a significant phenomenon (at least) of the early 1990s. The analysis of this phenomenon can paradigmatically represent the discursive practices and sub-thought systems of large parts of the population of the Federal Republic of Germany at this time. ”Volkstümliche Musik” depends on the political situation and indirectly deals with the needs and problems of individuals which result from such a situation and which lie in the deep structure of their mentalities. It thus takes on a cultural function.
In study 1, an introduction to the research on moral self-regulation is provided, alongside an explanation of the two manifestations of moral self-regulation: moral licensing and moral cleansing. At the core of the first study is an experiment designed to identify moral licensing and cleansing in the domain of honesty. The experiment merges relevant studies from social psychology and experimental economics. It assesses the question of whether moral self-regulation exists within the domain of honesty or, more precisely, whether the truth and lies are told in such a way as to balance each other out. After manipulating participants' moral balances (either positively or negatively), rates of truth-telling are compared to a neutral baseline scenario. Since neither moral licensing nor moral cleansing is observed, the results provide no support for the initial hypothesis that moral self-regulation exists within the domain of honesty.
Study 2 builds on these results and discusses possible reasons for the absence of moral self-regulation. The research on moral hypocrisy and self-concept maintenance is presented and discussed as possible explanations. In order to shed more light on participants' behavior, a coding procedure is presented that was applied to the dataset from study 1. This approach makes it possible to quantify participants' handwritten stories that resulted from the moral manipulation in study 1 and to gain more insight into how truth-telling and lying affect the moral balance. By analyzing (dis)honesty on a more detailed level, the results show that participants tend to act consistently with what they revealed about themselves in their stories.
Study 3 links together aspects of moral self-regulation, moral hypocrisy, and impression management. The "looting game" is presented, which lets participants loot money from a charity box while being subject to altruistic punishment from observers. For their punishment decision, observers are provided with a history of participants' past actions. This design makes it possible to assess how misconduct, punishment, and the creation of a favorable impression interact and ultimately impact profits. The results indicate that moral cleansing, and not the desire to trick observers, is the reason for manipulation. Participants who loot money from the charity box do not expect to receive less punishment; rather, they simply want to present a more favorable picture of themselves. On the other hand, observers fully account for the possibility of manipulation and tend to disregard a manipulated history. The looting game therefore calls into question the hypothesis that impressions are managed and manipulated to increase profits.
Quantum computing is an emerging technology that has the potential to change the perspectives and applications of computing in general. A wide range of applications are enabled: from faster algorithmic solutions of classically still difficult problems to theoretically more secure communication protocols. A quantum computer uses the quantum mechanical effects of particles or particle-like systems, and a major similarity between quantum and classical computers consists of both being abstracted as information processing machines. Whereas a classical computer operates on classical digital information, the quantum computer processes quantum information, which shares similarities with analog signals. One of the central differences between the two types of information is that classical information is more fault-tolerant when compared to its quantum counterpart.
Faults are the result of quantum systems being interfered with by external noise, but during the last decades quantum error correction codes (QECCs) have been proposed as methods to reduce the effect of noise. Reliable quantum circuits are the result of designing circuits that operate directly on encoded quantum information, but a circuit's reliability is also increased by supplemental redundancies, such as sub-circuit repetitions.
Reliable quantum circuits have not been widely used, and one of the major obstacles is their vast associated resource overhead, but recent quantum computing architectures show promising scalability. Consequently, the number of particles used for computing can be increased more easily, and the classical control hardware (inherent to quantum computation) is also more reliable. Reliable quantum circuits have been investigated for almost as long as general quantum computing, but their limited adoption (until recently) has not generated enough interest in their systematic design.
The continuously increasing practical relevance of reliability motivates the present thesis to investigate some of the first answers to questions related to the background and the methods forming a reliable quantum circuit design stack.
The specifics of quantum circuits are analysed from two perspectives: their probabilistic behaviour and their topological properties when a particular class of QECCs is used. Quantum phenomena, such as entanglement and superposition, are the computational resources used for designing quantum circuits. The discrete nature of classical information is missing for quantum information. An arbitrary quantum system can be in an infinite number of states, which are linear combinations of an exponential number of basis states. Any nontrivial linear combination of more than one basis state is called a state superposition. The effect of superpositions becomes evident when the state of the system is inferred (measured), as measurements are probabilistic with respect to their output: a nontrivial state superposition will collapse to one of the component basis states, and the measurement result is known exactly only after the measurement.
A quantum system is, in general, composed of identical subsystems, meaning that a quantum computer (the complete system) operates on multiple similar particles (subsystems). Entanglement expresses the impossibility of separating the state of the subsystems from the state of the complete system: the nontrivial interactions between the subsystems result in a single indivisible state. Entanglement is an additional source of probabilistic behaviour: by measuring the state of a subsystem, the states of the unmeasured subsystems will probabilistically collapse to states from a well-defined set of possible states. Superposition and entanglement are the building blocks of quantum information teleportation protocols, which in turn are used in state-of-the-art fault-tolerant quantum computing architectures. Information teleportation implies that the state of a subsystem is moved to a second subsystem without copying any information during the process.
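These two effects can be made tangible with a tiny statevector toy: amplitudes encode a superposition, measurement outcomes are sampled from the squared amplitudes, and measuring one half of an entangled pair collapses the other half consistently. This sketch is purely illustrative and is not the simulation technique developed in the thesis.

```python
# Toy statevector sketch: superposition, probabilistic measurement, and the
# collapse of an entangled two-qubit state.
import math
import random

# Two-qubit state as amplitudes over the basis |00>, |01>, |10>, |11>.
# A Bell state (maximally entangled): (|00> + |11>) / sqrt(2).
state = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]

def measure_first_qubit(amplitudes):
    """Measure qubit 0: outcome probabilities follow |amplitude|^2, and the
    unmeasured qubit collapses consistently with the outcome."""
    p0 = abs(amplitudes[0]) ** 2 + abs(amplitudes[1]) ** 2  # first qubit = 0
    outcome = 0 if random.random() < p0 else 1
    if outcome == 0:
        kept = [amplitudes[0], amplitudes[1], 0.0, 0.0]
    else:
        kept = [0.0, 0.0, amplitudes[2], amplitudes[3]]
    norm = math.sqrt(sum(abs(a) ** 2 for a in kept))
    return outcome, [a / norm for a in kept]

if __name__ == "__main__":
    outcome, collapsed = measure_first_qubit(state)
    # For the Bell state, measuring qubit 0 fixes qubit 1 to the same value.
    print("measured qubit 0 =", outcome)
    print("post-measurement state =", collapsed)  # |00> or |11>, each with prob. 1/2
```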
The probabilistic approach towards the design of quantum circuits is initiated by the extension of classical test and diagnosis methods. Quantum circuits are modelled similarly to classical circuits by defining gate-lists, and missing quantum gates are modelled by the single missing gate fault. The probabilistic approaches towards quantum circuits are facilitated by comparing these to stochastic circuits, which are a particular type of classical digital circuits. Stochastic circuits can be considered an emulation of analogue computing using digital components.
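The principle behind stochastic circuits can be shown in a few lines: a probability is encoded as the fraction of 1s in a random bitstream, and a plain AND gate over two independent streams multiplies the encoded values. This is the textbook stochastic-computing example, given here only as background; the mapping of quantum gates used in the thesis is more involved.

```python
# Toy illustration of stochastic computing: a probability p is encoded as the
# fraction of 1s in a random bitstream, and an AND gate over two independent
# streams multiplies the encoded probabilities (digital components emulating
# an analogue quantity).
import random

def encode(p, length=10_000):
    """Encode probability p as a bitstream with roughly p*length ones."""
    return [1 if random.random() < p else 0 for _ in range(length)]

def decode(stream):
    return sum(stream) / len(stream)

def and_gate(a, b):
    return [x & y for x, y in zip(a, b)]

if __name__ == "__main__":
    s1, s2 = encode(0.8), encode(0.5)
    product_stream = and_gate(s1, s2)
    print(round(decode(product_stream), 2))  # approximately 0.8 * 0.5 = 0.4
```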
A first proposed design method, based on the direct comparison, is the simulation of quantum circuits using stochastic circuits by mapping each quantum gate to a stochastic computing sub-circuit. The resulting stochastic circuit is compiled and simulated on FPGAs. The obtained results are encouraging and illustrate the capabilities of the proposed simulation technique. However, the exponential number of possible quantum basis states was translated into an exponential number of stochastic computing elements.
A second contribution of the thesis is the proposal of test and diagnosis methods for both stochastic and quantum circuits. Existing verification (tomographic) methods for quantum circuits targeted the reconstruction of the gate-lists. The repeated execution of the quantum circuit was followed by different but specific measurements at the circuit outputs. The similarities between stochastic and quantum circuits motivated the proposal of test and diagnosis methods that use a restricted set of measurement types, which minimises the number of circuit executions. The obtained simulation results show that the proposed validation methods improve the feasibility of quantum circuit tomography for small and medium size circuits.
A third contribution of the thesis is the algorithmic formalisation of a problem encountered in teleportation-based quantum computing architectures. The teleportation results are probabilistic and require corrections represented as quantum gates from a particular set. However, there are known commutation properties of these gates with the gates used in the circuit. The corrections are not applied as dynamic gate insertions (during the circuit’s execution) into the gate-lists, but their effect is tracked through the circuit, and the corrections are applied only at circuit outputs. The simulation results show that the algorithmic solution is applicable for very large quantum circuits.
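The tracking idea can be sketched with a minimal Pauli-frame example: pending X/Z corrections are not inserted as gates but pushed through the circuit using the known commutation rules for H and CNOT, and only the accumulated frame is applied at the outputs. The data structures and names below are simplifications of ours, not the thesis' algorithm.

```python
# Toy Pauli-frame tracking: teleportation corrections (X/Z) are not inserted
# as gates during execution; their effect is pushed through the circuit using
# known commutation rules and applied only at the outputs.

def track_corrections(num_qubits, gates, initial_corrections):
    """`gates` is a list like ("H", q) or ("CNOT", control, target).
    Corrections are per-qubit dicts {"X": bool, "Z": bool} (a Pauli frame,
    ignoring global phases)."""
    frame = {q: dict(c) for q, c in initial_corrections.items()}
    for q in range(num_qubits):
        frame.setdefault(q, {"X": False, "Z": False})

    for gate in gates:
        if gate[0] == "H":                      # H swaps X- and Z-type corrections
            q = gate[1]
            frame[q]["X"], frame[q]["Z"] = frame[q]["Z"], frame[q]["X"]
        elif gate[0] == "CNOT":                 # X propagates control -> target,
            c, t = gate[1], gate[2]             # Z propagates target -> control
            frame[t]["X"] ^= frame[c]["X"]
            frame[c]["Z"] ^= frame[t]["Z"]
    return frame

if __name__ == "__main__":
    # An X correction pending on qubit 0, pushed through H(0) then CNOT(0,1).
    out = track_corrections(2, [("H", 0), ("CNOT", 0, 1)],
                            {0: {"X": True, "Z": False}})
    print(out)  # the X became a Z on qubit 0, which stays on the control
```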
Topological quantum computing (TQC) is based on a class of fault-tolerant quantum circuits that use the surface code as the underlying QECC. Quantum information is encoded in lattice-like structures and error protection is enabled by the topological properties of the lattice. The 3D structure of the lattice allows TQC computations to be visualised similarly to knot diagrams. Logical information is abstracted as strands and strand interactions (braids) represent logical quantum gates. Therefore, TQC circuits are abstracted using a geometrical description, which allows circuit input-output transformations (correlations) to be represented as geometric sub-structures.
TQC design methods were not investigated prior to this work, and the thesis introduces the topological computational model by first analysing the necessary concepts. The proposed TQC design stack follows a top-down approach: an arbitrary quantum circuit is decomposed into the TQC supported gate set; the resulting circuit is mapped to a lattice of appropriate dimensions; relevant resulting topological properties are extracted and expressed using graphs and Boolean formulas. Both circuit representations are novel and applicable to TQC circuit synthesis and validation. Moreover, the Boolean formalism is broadened into a formal mechanism for proving circuit correctness.
The thesis introduces TQC circuit synthesis, which is based on a novel logical gate geometric description, whose formal correctness is demonstrated. Two synthesis methods are designed, and both use a general planar representation of the circuit. Initial simulation results demonstrate the practicality and performance of the methods.
An additional group of proposed design methods solves the problem of automatic correlation construction. The methods use validity criteria which were introduced and analysed beforehand in the thesis. Input-output correlations existing in the circuit are inferred using both the graph and the Boolean representation.
The thesis extends the TQC state-of-the-art by recognising the importance of correlations in the validation process: correlation construction is used as a sub-routine for TQC circuit validation. The presented cross-layer validation procedure is useful when investigating both the QECC and the circuit, while a second proposed method is QECC-independent. Both methods are scalable and applicable even to very large circuits.
The thesis concludes with the analysis of TQC circuit identities, where the developed Boolean formalism is used. The proofs of previously known circuit identities were either missing or complex; the presented approach reduces the length of the proofs and represents a first step towards standardising them. A new identity is developed and detailed during the process of illustrating the known circuit identities.
Reliable quantum circuits are a necessity for quantum computing to become reality, and specialised design methods are required to support the quest for scalable quantum computers. This thesis used a twofold approach towards this target: firstly by focusing on the probabilistic behaviour of quantum circuits, and secondly by considering the requirements of a promising quantum computing architecture, namely TQC. Both approaches resulted in a set of design methods enabling the investigation of reliable quantum circuits.
The thesis contributes with the proposal of a new quantum simulation technique, novel and practical test and diagnosis methods for general quantum circuits, the proposal of the TQC design stack and the set of design methods that form the stack. The mapping, synthesis and validation of TQC circuits were developed and evaluated based on a novel and promising formalism that enabled checking circuit correctness.
Future work will focus on improving the understanding of TQC circuit identities as it is hoped that these are the key for circuit compaction and optimisation. Improvements to the stochastic circuit simulation technique have the potential of spawning new insights about quantum circuits in general.
This study is an urban history of Cagayan de Oro (Northern Mindanao, Philippines), a city in a developing region, which faced urban transformation from the end of WWII in 1945 to 1980. It employed a multidisciplinary approach wherein the city's demographic, economic, and infrastructural changes were analyzed. The study revealed that Cagayan had grown and transformed from a traditional town into an urban centre: its demography continuously expanded with high rates of natural increase and large streams of in-migrants, while its economic activities shifted from agricultural production to commerce and industrialization. Its economic transition was due to the promotion of Cagayan as the "Gateway to the South" or "Gateway to Northern Mindanao" by the deciding elites, who were part of the transnational economic system that supplied resources to developed countries such as the United States and Japan. As a result, the growth of Cagayan was directed not towards the masses but towards the elites and their foreign allies. Cagayan experienced inertia in terms of infrastructural development. Therefore, the absorption of foreign structures resulted in an artificial form of urban transformation in Cagayan.
Precise, content-rich and well-structured document models are required for applications like verifying the consistency of documents. Creating such models for common documents is currently an expensive and error-prone process. In this thesis we present a novel approach to modelling and processing digital documents that uses semantic technologies. In contrast to other modelling approaches, we model the structure of documents as indicated by the content, not as defined by technical attributes like the file format. Additionally, our meta-model can be applied to a wide range of different documents, not just to a small set of documents with a predefined set of features. The models include semantic data and content relationships, which can be further extended with domain knowledge. Our new separation of technical and semantic document models fuels a standardised method for obtaining semantic models. This method is effective, suitable for live processing, and easily transferable to other document types and other domains. As it makes extensive use of background knowledge, we also present techniques for obtaining such knowledge, and for representing complex forms of knowledge with multiple meta-layers. A flexible technique for obtaining relevant data from our document models completes the approach. This includes the ability to obtain various verification models, suitable for different types of consistency criteria and for different validation formalisms. We conclude this thesis with an evaluation that shows the viability and effectiveness of the proposed approach. We present runtime results for an implementation based on RDF/OWL and the rule language JBoss Drools that are adequate for live processing. We also provide and successfully apply techniques for measuring the quality of both document models and background knowledge.
The Internet is a global system of interconnected computers and computer networks where semi-structured data has been successfully applied for exchanging information. In today's Internet, the huge range of actors, the large diversity of associated device classes and domains, and the enormous number of resource-restricted controllers in this system have created new requirements and also coined a new term: the Internet of Things (IoT), in this regard, refers to identifiable objects (things) and their virtual representations in an Internet-like structure. The fundamental question this thesis tries to answer is whether and how the same semi-structured data can also be applied to the IoT and the embedded domain in spite of resource-limited controllers. In order to discuss this question, properties and requirements of embedded networks with regard to the IoT domain have been collected and evaluated. Thereafter, the omnipresent semi-structured data exchange format of the Web, the Extensible Markup Language (XML), has been evaluated against these requirements. The result was a list of missing properties, such as a compact representation that can be generated and consumed fast and that also allows a small-footprint implementation. To address the compiled requirements, a binary representation of XML, nowadays known as the W3C's Efficient XML Interchange (EXI) format, has been developed, which simultaneously optimizes performance and the utilization of computational resources and is designed to be compatible with XML. Moreover, in this work the format has been practically validated and tested. Addressing the needs of the embedded domain, one result of this analysis was a set of optimizations to constrain runtime memory usage and to predict memory growth at runtime. A concept introduced in this thesis is LazyDOM, which reduces memory requirements when processing and querying data. By means of a newly proposed code generation technique, processing of EXI on ultra-constrained device classes has been enabled, and the resulting format modifications have been adopted by the W3C standardization. The research work described in this thesis on efficiently exchanging and processing semi-structured data on constrained embedded devices has not only triggered modifications of the W3C EXI format but has already been adopted in domain-specific application standards and implementations. The above-mentioned optimizations, such as predictably limiting memory growth at runtime, have been contributed, discussed, and evaluated with the W3C experts and have become a core part of the EXI specification. Even more significantly from the IoT perspective, these optimizations provide the basis for the adoption of this technology in ISO and IEC standardization, marking the first time the automotive and power industries use IoT in the control plane. The implementation of EXI developed to conduct the evaluation as part of this thesis has become the de-facto open-source reference implementation of EXI and the basis of a number of other reference implementations, such as the OpenV2G project that provides the reference implementation of the communication interface in ISO/IEC 15118. In summary, the conducted research work has evaluated the options to adapt semi-structured data for the constrained embedded domain, proposed modifications, and evaluated those under realistic conditions. This made it relevant for technology as well as application standardization despite the short period of this work.
As such, the research can now serve as a basis for further challenges in the IoT field, namely adopting concepts of the Semantic Web and adapting them to stimulate the quickly expanding ecosystem of embedded devices.
Over the last two decades, the Internet has fundamentally changed the ways firms and consumers interact. The ongoing evolution of the Internet-enabled market environment entails new challenges for marketing research and practice, including the emergence of innovative business models, a proliferation of marketing channels, and an unknown wealth of data. This dissertation addresses these issues in three individual essays. Study 1 focuses on business models offering services for free, which have become increasingly prevalent in the online sector. Offering services for free raises new questions for service providers as well as marketing researchers: How do customers of free e-services contribute value without paying? What are the nature and dynamics of nonmonetary value contributions by nonpaying customers? Based on a literature review and in-depth interviews with senior executives of free e-service providers, Study 1 presents a comprehensive overview of nonmonetary value contributions in the free e-service sector, including not only word of mouth, co-production, and network effects but also attention and data as two new dimensions, which have been disregarded in marketing research. By putting their findings in the context of existing literature on customer value and customer engagement, the authors not only shed light on the complex processes of value creation in the emerging e-service industry but also advance marketing and service research in general. Studies 2 and 3 investigate the analysis of online multichannel consumer behavior in times of big data. Firms can choose from a plethora of channels to reach consumers on the Internet, such that consumers often use a number of different channels along the customer journey. While the unprecedented availability of individual-level data enables new insights into multichannel consumer behavior, it also makes high demands on the efficiency and scalability of research approaches. Study 2 addresses the challenge of attributing credit to different channels along the customer journey. Because advertisers often do not know to what degree each channel actually contributes to their marketing success, this attribution challenge is of great managerial interest, yet academic approaches to it have not found wide application in practice. To increase practical acceptance, Study 2 introduces a graph-based framework to analyze multichannel online customer path data as first- and higher-order Markov walks. According to a comprehensive set of criteria for attribution models, embracing both scientific rigor and practical applicability, four model variations are evaluated on four large real-world data sets from different industries. Results indicate substantial differences from existing heuristics such as "last click wins" and demonstrate that insights into channel effectiveness cannot be generalized from single data sets. The proposed framework offers support to practitioners by facilitating objective budget allocation and improving team decisions, and allows for future applications such as real-time bidding. Study 3 investigates how channel usage along the customer journey facilitates inferences on underlying purchase decision processes. To handle increasing complexity and sparse data in online multichannel environments, the author presents a new categorization of online channels and tests the approach on two large clickstream data sets using a proportional hazard model with time-varying covariates.
By categorizing channels along the dimensions of contact origin and branded versus generic usage, Study 3 finds meaningful interaction effects between contacts across channel types, corresponding to the theory of choice sets. Including interactions based on the proposed categorization significantly improves model fit and outperforms alternative specifications. The results will help retailers gain a better understanding of customers' decision-making progress in an online multichannel environment and help them develop individualized targeting approaches for real-time bidding. Using a variety of methods including qualitative interviews, Markov graphs, and survival models, this dissertation not only advances knowledge on analyzing and managing online consumer behavior but also adds new perspectives to marketing and service research in general.
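To give a feel for Markov-based attribution as used in Study 2, the hedged sketch below estimates first-order transition probabilities from customer journeys, computes the chain's conversion probability, and attributes credit via each channel's removal effect. This is a generic illustration under our own simplifications, not the dissertation's exact framework or data.

```python
# Generic first-order Markov attribution sketch: transition probabilities are
# estimated from journeys, the chain's conversion probability is computed,
# and channels are credited by their removal effect.
from collections import defaultdict

START, CONV, NULL = "start", "conversion", "null"

def transition_probs(journeys):
    counts = defaultdict(lambda: defaultdict(int))
    for path, converted in journeys:
        states = [START] + list(path) + [CONV if converted else NULL]
        for a, b in zip(states, states[1:]):
            counts[a][b] += 1
    probs = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        probs[a] = {b: n / total for b, n in nxt.items()}
    return probs

def conversion_probability(probs, iterations=200):
    """Probability of eventually reaching CONV from START (value iteration)."""
    p = defaultdict(float)
    p[CONV] = 1.0
    for _ in range(iterations):
        for state, nxt in probs.items():
            p[state] = sum(w * p[b] for b, w in nxt.items())
    return p[START]

def removal_effects(journeys):
    probs = transition_probs(journeys)
    base = conversion_probability(probs)
    effects = {}
    for channel in {c for path, _ in journeys for c in path}:
        modified = dict(probs)
        modified[channel] = {NULL: 1.0}   # every visit to the channel now fails
        effects[channel] = 1 - conversion_probability(modified) / base
    return effects

if __name__ == "__main__":
    journeys = [(["display", "search"], True),
                (["search"], True),
                (["display"], False),
                (["social", "display", "search"], False)]
    print(removal_effects(journeys))  # higher value = larger contribution
```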
Following an "agency-oriented Urban Theory" as advanced by Smith (2001), this study takes the urban landscape of Vinh City in Central Vietnam as a starting point for an investigation of multiple visions of modernity (Eisenstadt, 2000) put forward by social actors, as well as of urban change resulting from the implementation of such visions. Focusing on the period from 1973 to 2011, it traces the application of three different visions for urban development in Vinh: the Socialist City, the Modern and Civilized City, and the Participatory City. The projects presented in this study that aim at implementing these visions in Vinh have one thing in common: they are informed by a specific view of what a city is and what it should be, and their implementation aims at changing the city in the desired direction. This goal involves not only physical change of the city, but also institutional change in the urban society. To grasp the interplay between visions of a modern city, their application through concrete projects, and the results of these implementations, the study operates with two specific terms: modern projects and urban change. After introducing Vinh and its history, the thesis presents the period of the vision of the Socialist City and its application in Vinh through cooperation between Vietnam and the German Democratic Republic in the 1970s. It then moves on to the contemporary period starting in the 1990s, during which varying and conflicting modern projects for the city were put forward by different social actors cooperating in joint projects on urban development: the Modern and Civilized City and the Participatory City. While the modes of cooperation differed between the two periods, the study concludes with the argument that the impact of these transnational projects has led to path-dependent, as well as ambivalent, urban change in Vinh.
In this thesis, we investigate plane drawings of undirected and directed graphs on cylinder surfaces. In the case of undirected graphs, the vertices are positioned on a line that is parallel to the cylinder's axis, and the edge curves must not intersect this line. We show that a plane drawing is possible if and only if the graph is a double-ended queue (deque) graph, i.e., the vertices of the graph can be processed according to a linear order and the edges correspond to items in the deque inserted and removed at their end vertices. A surprising consequence of these observations is that the deque characterizes planar graphs with a Hamiltonian path. This result extends the known characterization of planar graphs with a Hamiltonian cycle by two stacks. From these insights, we also obtain a new characterization of queue graphs and their duals. We also consider the complexity of deciding whether a graph is a deque graph and prove that it is NP-complete. By introducing a split operation, we obtain the splittable deque and show that it characterizes planarity. For the proof, we devise an algorithm that uses the splittable deque to test whether a rotation system is planar. In the case of directed graphs, we study upward plane drawings where the edge curves follow the direction of the cylinder's axis (standing upward planarity; SUP) or wind around the axis (rolling upward planarity; RUP). We characterize RUP graphs by means of their duals and show that RUP and SUP swap their roles when considering a graph and its dual. There is a physical interpretation underlying this characterization: a SUP graph is to its RUP dual graph as electric current passing through a conductor is to the magnetic field surrounding the conductor. Whereas testing whether a graph is RUP is NP-hard in general [Bra14], for directed graphs without sources and sinks we develop a linear-time recognition algorithm that is based on our dual graph characterization of RUP graphs.
This thesis examines how retail investors use early redemption rights in structured interest-rate products. The analysis is based on a novel, non-publicly available data set that covers, over a period of roughly 13 years, the decisions of more than 800,000 retail investors on whether to continue holding or to redeem putable bonds (Bundesschatzbriefe) early. The aim of the thesis is to extend the theoretical and empirical understanding of the financial decision strategies of retail investors and to identify possible differences within this investor group and in comparison to other capital market participants. In addition, the thesis is intended to point out possible fields of action for issuers and banks that offer comparable financial products.
Modern Web technology makes the dream of fully interactive and enriched video come true. Nowadays it is possible to organize videos in a non-linear way, playing in a sequence unknown in advance. Furthermore, additional information can be added to the video, ranging from short descriptions to animated images and further videos. This calls for an easy and efficient-to-use authoring tool that is capable of managing the single media objects as well as clearly arranging the links between the parts. Tools of this kind are rare and mostly do not provide the full range of needed functions. While providing an interactive experience to the viewer in the Web player, parallel plot sequences and additional information lead to an increased download volume. This may cause pauses during playback while elements that are displayed with the video still have to be downloaded. A good quality of experience for these videos, with small waiting times and a playback without interruptions, is desired. This work presents the SIVA Suite to create the previously described annotated interactive non-linear videos. We propose a video model for interactivity, non-linearity, and annotations, which is implemented in an XML format, an authoring tool, and a player. Video is the main medium, whereby different scenes are linked to a scene graph. Time-controlled additional content called annotations, like text, images, audio files, or videos, is added to the scenes. The user is able to navigate in the scene graph by selecting a button on a button panel. Furthermore, other navigational elements like a table of contents or a keyword search are provided. Besides the SIVA Suite, this thesis presents algorithms and strategies for download and cache management to provide a good quality of experience while watching the annotated interactive non-linear videos. To this end, we implemented a standard-independent player framework. Integrated into a simulation environment, the framework allows evaluating algorithms and strategies for the calculation of start-up times and the selection of elements to pre-fetch into and delete from the cache. Their interaction during the playback of non-linear video contents can be analyzed. The algorithms and strategies can be used to minimize interruptions in the video flow after user interactions. Our extensive evaluation showed that our techniques result in faster start-up times and fewer interruptions in the video flow than those of other players. Knowledge of the structure of an interactive non-linear video can be used to minimize the start-up time at the beginning of a video while minimizing an increase in the overall download volume.
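A simplified pre-fetch selection that exploits the scene-graph structure might look as follows: annotations of scenes reachable from the current scene are ranked by graph distance and fetched within a bandwidth budget. The graph model, budget, and names are assumptions for illustration and do not reproduce the SIVA algorithms.

```python
# Simplified pre-fetch selection: from the currently playing scene, rank the
# annotations of reachable scenes by how soon they may be needed (scene-graph
# distance) and pre-fetch as many as the bandwidth budget allows.
from collections import deque

def reachable_by_distance(scene_graph, current_scene):
    """Breadth-first distances in the scene graph (dict: scene -> next scenes)."""
    dist, queue = {current_scene: 0}, deque([current_scene])
    while queue:
        scene = queue.popleft()
        for nxt in scene_graph.get(scene, []):
            if nxt not in dist:
                dist[nxt] = dist[scene] + 1
                queue.append(nxt)
    return dist

def prefetch_plan(scene_graph, annotations, current_scene, budget_bytes):
    """annotations: dict scene -> list of (name, size_bytes)."""
    dist = reachable_by_distance(scene_graph, current_scene)
    candidates = [(dist[s], size, name)
                  for s, items in annotations.items() if s in dist
                  for name, size in items]
    plan, used = [], 0
    for _d, size, name in sorted(candidates):   # nearest scenes first
        if used + size <= budget_bytes:
            plan.append(name)
            used += size
    return plan

if __name__ == "__main__":
    graph = {"intro": ["menu"], "menu": ["tour", "quiz"]}
    ann = {"menu": [("map.png", 200_000)],
           "tour": [("clip.mp4", 900_000)],
           "quiz": [("hints.txt", 10_000)]}
    print(prefetch_plan(graph, ann, "intro", budget_bytes=1_000_000))
```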
In the Web 2.0 era, platforms for sharing and collaboratively annotating images with keywords, called tags, became very popular. Tags are a powerful means for organizing and retrieving photos. However, manual tagging is time consuming. Recently, the sheer amount of user-tagged photos available on the Web encouraged researchers to explore new techniques for automatic image annotation. The idea is to annotate an unlabeled image by propagating the labels of community photos that are visually similar to it. Most recently, an ever-increasing number of community photos is also associated with location information, i.e., geotagged. In this thesis, we aim at exploiting the location context and propose an approach for automatically annotating geotagged photos. Our objective is to address the main limitations of state-of-the-art approaches in terms of the quality of the produced tags and the speed of the complete annotation process. To achieve these goals, we first deal with the problem of collecting images with the associated metadata from online repositories. Accordingly, we introduce a strategy for data crawling that takes advantage of location information and the social relationships among the contributors of the photos. To improve the quality of the collected user tags, we present a method for resolving their ambiguity based on tag relatedness information. In this respect, we propose an approach for representing tags as probability distributions based on the algorithm of Laplacian score feature selection. Furthermore, we propose a new metric for calculating the distance between tag probability distributions by extending Jensen-Shannon Divergence to account for statistical fluctuations. To efficiently identify the visual neighbors, the thesis introduces two extensions to the state-of-the-art image matching algorithm known as Speeded Up Robust Features (SURF). To speed up the matching, we present a solution for reducing the number of compared SURF descriptors based on classification techniques, while the accuracy of SURF is improved through an efficient method for iterative image matching. Furthermore, we propose a statistical model for ranking the mined annotations according to their relevance to the target image. This is achieved by combining multi-modal information in a statistical framework based on Bayes' rule. Finally, the effectiveness of each of the mentioned contributions, as well as the complete automatic annotation process, is evaluated experimentally.
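For reference, the standard Jensen-Shannon divergence between two tag probability distributions can be computed as below; the thesis extends this base measure to account for statistical fluctuations, which is not reproduced here, and the example distributions are invented for illustration.

```python
# Standard Jensen-Shannon divergence between two tag probability
# distributions (the thesis extends this further to account for statistical
# fluctuations; only the base measure is shown here).
import math

def kl(p, q):
    """Kullback-Leibler divergence over a shared key set (0*log 0 := 0)."""
    return sum(pv * math.log2(pv / q[k]) for k, pv in p.items() if pv > 0)

def jensen_shannon(p, q):
    keys = set(p) | set(q)
    p = {k: p.get(k, 0.0) for k in keys}
    q = {k: q.get(k, 0.0) for k in keys}
    m = {k: 0.5 * (p[k] + q[k]) for k in keys}   # mixture distribution
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

if __name__ == "__main__":
    eiffel = {"paris": 0.6, "tower": 0.3, "night": 0.1}
    louvre = {"paris": 0.5, "museum": 0.4, "art": 0.1}
    print(round(jensen_shannon(eiffel, louvre), 3))  # 0 = identical, 1 = disjoint
```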
Making multimedia data available online becomes less expensive and more convenient on a daily basis. This development promotes web phenomena such as Facebook, Twitter, and Flickr. These phenomena and their increased acceptance in society in turn lead to a multiplication of the number of images available online. This vast amount of frequently public, and therefore searchable, images already exceeds the zettabyte bound. Executing a similarity search over the magnitude of images that are publicly available on the Internet, and receiving a top-quality result, is a challenge that the scientific community has recently attempted to rise to. One approach to cope with this problem assumes the use of distributed heterogeneous Content Based Image Retrieval systems (CBIRs). Following from this anticipation, the problems that emerge from a distributed query scenario must be dealt with, for example the involved CBIRs' usage of distinct metadata formats for describing their content, as well as their unequal technical and structural information. An additional issue is the individual metrics that are used by the CBIRs to calculate the similarity between pictures, as well as their specific way of being combined. Overall, receiving good results in this environment is a very labor-intensive task which has been explored scientifically but not yet comprehensively. The problem primarily addressed in this work is the collection of pictures from CBIRs that are similar to a given picture, as a response to a distributed multimedia query. The main contribution of this thesis is the construction of a network of Content Based Image Retrieval systems that are able to extract and exploit the information about an input image's semantic concept. This so-called semantic CBIRn is mainly composed of CBIRs that are configured by the semantic CBIRn itself. Complementarily, specialized external sources can be integrated. The semantic CBIRn is able to collect and merge results of all of these attached CBIRs. In order to be able to integrate external sources that are willing to join the network but are not willing to disclose their configuration, an algorithm was developed that approximates these configurations. By categorizing existing as well as external CBIRs and analyzing incoming queries, image queries are exclusively forwarded to the most suitable CBIRs. In this way, images that are not of any use for the user can be omitted beforehand. The returned images are then rendered comparable in order to merge them into one single result list of images that are similar to the input image. The feasibility of the approach and the obtained improvement of the search process are demonstrated by a prototypical implementation and its evaluation using classified images of ImageNet. Using this prototypical implementation, an increase in the number of returned images that are of the same semantic concept as the input images is achieved by a factor of 4.75 with respect to a predefined non-semantic CBIRn.
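One way to "render results comparable" before merging is per-engine score normalization, sketched below with simple min-max scaling. The normalization choice, the merge rule, and the scores are our own illustrative assumptions, not the thesis' method.

```python
# Toy sketch of making result lists from heterogeneous CBIRs comparable:
# each engine reports similarity scores on its own scale, so scores are
# min-max normalized per engine before merging into one ranked list.

def normalize(results):
    """results: list of (image_id, raw_score); higher raw score = more similar."""
    scores = [s for _, s in results]
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [(img, 1.0) for img, _ in results]
    return [(img, (s - lo) / (hi - lo)) for img, s in results]

def merge(result_lists, top_k=5):
    best = {}
    for results in result_lists:
        for img, score in normalize(results):
            best[img] = max(best.get(img, 0.0), score)  # keep best normalized score
    return sorted(best.items(), key=lambda x: x[1], reverse=True)[:top_k]

if __name__ == "__main__":
    engine_a = [("img1", 0.91), ("img2", 0.40), ("img3", 0.13)]     # scores in [0, 1]
    engine_b = [("img2", 870.0), ("img4", 640.0), ("img5", 110.0)]  # arbitrary scale
    print(merge([engine_a, engine_b], top_k=3))
```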
UME is the notion that a user should receive informative, adapted content anytime and anywhere. Personalization of videos, which adapts their content according to user preferences, is a vital aspect of achieving the UME vision. User preferences can be translated into several types of constraints that must be considered by the adaptation process, including semantic constraints directly related to the content of the video. To deal with these semantic constraints, a fine-grained adaptation, which can go down to the level of video objects, is necessary. The overall goal of this adaptation process is to provide users with adapted content that maximizes their Quality of Experience (QoE). This QoE depends at the same time on the level of the user's satisfaction in perceiving the adapted content, the amount of knowledge assimilated by the user, and the adaptation execution time. In video adaptation frameworks, the Adaptation Decision Taking Engine (ADTE), which can be considered the "brain" of the adaptation engine, is responsible for achieving this goal. The task of the ADTE is challenging as many adaptation operations can satisfy the same semantic constraint, thus giving rise to several feasible adaptation plans. Indeed, for each entity undergoing the adaptation process, the ADTE must decide on the adequate adaptation operator that satisfies the user's preferences while maximizing his/her quality of experience. The first challenge is to objectively measure the quality of the adapted video, taking into consideration the multiple aspects of the QoE. The second challenge is to assess this quality beforehand in order to choose the most appropriate adaptation plan among all possible plans. The third challenge is to resolve conflicting or overlapping semantic constraints, in particular conflicts arising from constraints expressed by the owner's intellectual property rights concerning the modification of the content. In this thesis, we tackled the aforementioned challenges by proposing a Utility Function (UF), which integrates semantic concerns with the user's perceptual considerations. This UF models the relationships among adaptation operations, user preferences, and the quality of the video content. We integrated this UF into an ADTE. This ADTE performs a multi-level piecewise reasoning to choose the adaptation plan that maximizes the user-perceived quality. Furthermore, we included intellectual property rights in the adaptation process and thereby modeled content owner constraints. We dealt with the problem of conflicting user and owner constraints by mapping it to a known optimization problem. Moreover, we developed the SVCAT, which produces structural and high-level semantic annotations according to an original object-based video content model. We also modeled the user's preferences, proposing extensions to MPEG-7 and MPEG-21. All the developed contributions were carried out as part of a coherent framework called PIAF. PIAF is a complete, modular, MPEG-standard-compliant framework that covers the whole process of semantic video adaptation. We validated this research with qualitative and quantitative evaluations, which assess the performance and the efficiency of the proposed adaptation decision-taking engine within PIAF. The experimental results show that the proposed UF has a high correlation with subjective video quality evaluation.
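The decision-taking step can be pictured with a minimal utility-function sketch: each feasible adaptation plan is scored by a weighted utility over perceptual quality, knowledge conveyed, and execution time, and plans violating owner constraints are discarded before choosing. The weights, attributes, operations, and names below are illustrative assumptions, not the thesis' actual UF.

```python
# Minimal sketch of utility-driven adaptation decision taking: plans violating
# the owner's constraints are filtered out, and the remaining plan with the
# highest weighted utility is chosen.

WEIGHTS = {"perceived_quality": 0.5, "knowledge": 0.3, "speed": 0.2}

def utility(plan):
    return sum(WEIGHTS[attr] * plan[attr] for attr in WEIGHTS)

def choose_plan(plans, owner_forbidden_ops):
    allowed = [p for p in plans
               if not (set(p["operations"]) & set(owner_forbidden_ops))]
    return max(allowed, key=utility) if allowed else None

if __name__ == "__main__":
    plans = [
        {"name": "drop_background_objects", "operations": ["remove_object"],
         "perceived_quality": 0.7, "knowledge": 0.9, "speed": 0.6},
        {"name": "reduce_resolution", "operations": ["transcode"],
         "perceived_quality": 0.6, "knowledge": 0.8, "speed": 0.9},
    ]
    # The owner forbids removing objects from the content.
    best = choose_plan(plans, owner_forbidden_ops=["remove_object"])
    print(best["name"])  # "reduce_resolution"
```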
The present thesis is based on four articles in the areas of labor economics, regional science, and international trade. I make use of different micro-level data sets to evaluate reasons for performance disparities between firms and between workers and to evaluate the interrelation of these disparities with characteristics of local labor markets. Chapter 1 of this thesis provides a discrimination between the effects of several agglomeration externalities on firms' total factor productivity. The identification of TFP is not trivial, however; I therefore correct for biases due to unobserved output prices and the endogeneity of agglomeration economies. Traditional reasons, such as specialization, diversity, and size of the county, as well as the more detailed Marshallian agglomeration economies, namely knowledge spillovers and labor market pooling, are jointly tested. It turns out that labor market pooling is the quantitatively most important agglomeration mechanism. It is captured by the correlation of the occupational composition between one county-industry and the rest of the county. The intuition behind it is that a plant readily finds suitable staff if sectors that employ similar workers are strongly represented in the same region. Labor market pooling is still the dominant agglomeration force if the spatial boundaries of regions are changed. In general, the data demonstrate that the strength of agglomeration economies varies largely between sectors. Only for a subset of industries is some positive evidence detected for knowledge spillovers. Chapter 2 analyzes labor market pooling in greater detail, but with a slightly different modeling than in chapter 1. Here, the central aspect of labor market pooling is based on the quality of workers and firms. The main questions are whether there is systematic matching in the labor market and whether this matching pattern creates advantages for both parties. I devote attention to the identification of accurate quality measures: plants' total factor productivity and workers' fixed effects. Two different methods then yield evidence in favor of positive assortative matching. The correlation between both quality measures is positive. Wage gains amount to up to 4% when both quality levels are equal. In a fairly general matching model, this shape of the wage curve arises due to complementarities of qualities in the production function. When generally higher productivities and wages in dense regions (caused by agglomeration economies and sorting) are not controlled for, the strength of matching and wage gains are overestimated. I also find that regional differences in matching quality cannot be attributed to the local density and unemployment rate. Chapter 3 applies several regression-based decomposition methods to analyze the impact of region-, worker-, firm- and sector-specific determinants on the wage level and the continuous increase in wage inequality between 1995 and 2007 in Germany. In contrast to prior studies, more than 50% of the wage dispersion and almost the entire increase in wage inequality are explained in this approach. Altogether, the entire growth of wage dispersion occurs within regions, and changes in the composition of wage determinants are minor compared to changes in their returns. I find that occupational attributes are the most important wage determinant. Changes in the firm size premium in combination with assortative matching also depress wages at the bottom of the distribution while they increase wages at the top.
Workers with an unemployment record or an occupation in the service, construction and logistics sectors particularly experience falling wages. Chapter 4 studies the effect of an expansion of imported intermediate inputs on establishments' average task intensities and employment size in a middle-income country. I use confidential matched employer-employee data and information on trade transactions for the universe of Brazilian firms. Propensity Score Matching indicates that import expansion leads to overall employment growth, higher intensities in routine and non-routine manual tasks, and an increased share of intermediate exports. Thus, my findings indicate that intermediate imports represent onshored rather than offshored tasks. This result remains unchanged regardless of whether imports from high- or low-wage countries are considered.
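For readers unfamiliar with the method used in chapter 4, the following is a minimal Python sketch of the general propensity-score-matching idea on synthetic data (the thesis works with confidential Brazilian data and different software; the covariates, effect size, and one-to-one nearest-neighbour matching here are illustrative assumptions only):

```python
# Minimal propensity score matching sketch on synthetic data (illustration of the method only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=(n, 3))                                   # plant covariates (e.g. size, wages, exports)
p_true = 1 / (1 + np.exp(-(x @ [0.8, -0.5, 0.3])))
treated = rng.random(n) < p_true                              # "importer" indicator, selection on x
y = x @ [1.0, 0.5, 0.2] + 2.0 * treated + rng.normal(size=n)  # outcome; true treatment effect = 2

# 1) Estimate propensity scores P(treated | x).
ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

# 2) Match each treated plant to the control plant with the closest score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))

# 3) Average treatment effect on the treated: mean outcome difference over matched pairs.
att = (y[treated] - y[~treated][idx.ravel()]).mean()
print(f"estimated ATT: {att:.2f}")
```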
Embedded networks are fundamental infrastructures of many different kinds of domains, such as home or industrial automation, the automotive industry, and future smart grids. Yet they can be very heterogeneous, containing wired and wireless nodes with different kinds of resources and service capabilities, such as sensing, acting, and processing. Driven by new opportunities and business models, embedded networks will play an ever more important role in the future, interconnecting more and more devices, even from other network domains. Realizing applications for such networks, however, is a highly challenging task, since various aspects have to be considered, including communication between a diverse assortment of resource-constrained nodes, such as microcontrollers, as well as a flexible node infrastructure. A Service Oriented Architecture (SOA) with Web services would perfectly meet these unique characteristics of embedded networks and ease the development of applications. Standardized Web services, however, are based on plain-text XML, which is not suitable for microcontroller-based devices with their very limited resources due to XML's verbosity, its memory and bandwidth usage, and its significant processing overhead. This thesis presents methods and strategies for realizing efficient XML-based Web service communication in embedded networks by means of binary XML using the EXI format. We present a code generation approach to create optimized and dedicated service applications in resource-constrained embedded networks. In doing so, we demonstrate how EXI grammars can be optimally constructed and applied to the Web service and service requester context. In addition, to realize an optimized service interaction in embedded networks, we design and develop an optimized filter-enabled service data dissemination that takes into account the individual resource capabilities of the nodes and the connection quality within embedded networks. We show different approaches for efficiently evaluating binary XML data and applying it to resource-constrained devices, such as microcontrollers. Furthermore, we present the effective placement of binary XML filters in embedded networks with the aim of reducing both the computational load of constrained nodes and the network traffic. Various evaluation results for V2G applications demonstrate the efficiency of our approach compared to existing solutions, as well as the seamless and successful applicability of SOA-based technologies in microcontroller-based environments.
Multimedia retrieval is an essential part of today's world. This can be observed in industrial domains, e.g., medical imaging, as well as in the private sector, visible in the activities on manifold social media platforms. This trend has led to the creation of a huge environment of multimedia information retrieval services offering multimedia resources for almost any user request. The encompassed data is in general retrievable via (proprietary) APIs and query languages, but unified access is not available due to interoperability issues between those services. In this regard, this thesis focuses on two application scenarios, namely a medical retrieval system supporting a radiologist's workflow and an interoperable image retrieval service interconnecting diverse data silos. The scientific contribution of this dissertation is split into three parts: the first part addresses the metadata interoperability issue. Here, major contributions to a community-driven, international standardization effort have been proposed, leading to the specification of an API and an ontology that enable unified annotation and retrieval of media resources. The second part presents a metasearch engine especially designed for unified retrieval in distributed and heterogeneous multimedia retrieval environments. This metasearch engine can be operated in a federated as well as an autonomous manner inside the aforementioned application scenarios. The third part ensures efficient retrieval through the integration of optimization techniques for multimedia retrieval into the overall query execution process of the metasearch engine.
This thesis addresses some of the algorithmic and numerical challenges associated with the computation of approximate border bases, a generalisation of border bases, in the context of the oil and gas industry. The concept of approximate border bases was introduced by D. Heldt, M. Kreuzer, S. Pokutta and H. Poulisse in "Approximate computation of zero-dimensional polynomial ideals" as an effective means to derive physically relevant polynomial models from measured data. The main advantages of this approach compared to alternative techniques currently in use in the (hydrocarbon) industry are its power to derive polynomial models without additional a priori knowledge about the underlying physical system and its robustness with respect to noise in the measured input data. The so-called Approximate Vanishing Ideal (AVI) algorithm, which can be used to compute approximate border bases and which was also introduced by D. Heldt et al. in the paper mentioned above, served as a starting point for the research conducted in this thesis. A central aim of this work is to broaden the applicability of the AVI algorithm to additional areas in the oil and gas industry, like seismic imaging and the compact representation of unconventional geological structures. For this purpose several new algorithms are developed, among others the so-called Approximate Buchberger-Möller (ABM) algorithm and the Extended-ABM algorithm. The numerical aspects and the runtime of the methods are analysed in detail, based on a solid foundation of the underlying mathematical and algorithmic concepts that is also provided in this thesis. It is shown that the worst-case runtime of the ABM algorithm is cubic in the number of input points, which is a significant improvement over the biquadratic worst-case runtime of the AVI algorithm. Furthermore, we show that the ABM algorithm allows us to exercise more direct control over the essential properties of the computed approximate border basis than the AVI algorithm. The improved runtime and the additional control turn out to be the key enablers for the new industrial applications that are proposed here. As a conclusion to the work on the computation of approximate border bases, a detailed comparison between the approach in this thesis and some other state-of-the-art algorithms is given. Furthermore, this work also addresses one important shortcoming of approximate border bases, namely that central concepts from exact algebra, such as syzygies, could so far not be translated to the setting of approximate border bases. One way to mitigate this problem is to construct a "close-by" exact border basis for a given approximate one. Here we present and discuss two new algorithmic approaches that allow us to compute such close-by exact border bases. In the first one, we establish a link between this task, referred to as the rational recovery problem, and the problem of simultaneously quasi-diagonalising a set of complex matrices. As simultaneous quasi-diagonalisation is not a standard topic in numerical linear algebra, there are hardly any off-the-shelf algorithms and implementations available that are both fast and numerically adequate for our purposes. To bridge this gap we introduce and study a new algorithm that is based on a variant of the classical Jacobi eigenvalue algorithm, which also works for non-symmetric matrices.
As a second solution to the rational recovery problem, we motivate and discuss how to compute a close-by exact border basis via the minimisation of a sum-of-squares expression that is formed from the polynomials in the given approximate border basis. Finally, several applications of the newly developed algorithms are presented. These include production modelling of oil and gas fields, reconstruction of the subsurface velocities for simple subsurface geometries, the compact representation of unconventional oil and gas bodies via algebraic surfaces, and the stable numerical approximation of the roots of zero-dimensional polynomial ideals.
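The thesis builds on a variant of the classical Jacobi eigenvalue algorithm extended to non-symmetric matrices; the sketch below shows only the standard symmetric case, as an illustration of the rotation-based idea rather than the algorithm developed in the thesis:

```python
# Classical Jacobi eigenvalue iteration for a real symmetric matrix (illustration only;
# the thesis extends this rotation idea to the non-symmetric/quasi-diagonalisation setting).
import numpy as np

def jacobi_eigen(A, tol=1e-12, max_rot=100):
    A = np.array(A, dtype=float)
    n = A.shape[0]
    V = np.eye(n)                                    # accumulates the eigenvectors
    for _ in range(max_rot):
        off = np.abs(A - np.diag(np.diag(A)))        # off-diagonal magnitudes
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:
            break
        theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])  # angle zeroing A[p, q]
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n)
        J[p, p], J[q, q], J[p, q], J[q, p] = c, c, s, -s
        A = J.T @ A @ J
        V = V @ J
    return np.diag(A), V

eigvals, eigvecs = jacobi_eigen([[2.0, 1.0], [1.0, 3.0]])
print(np.sort(eigvals))   # close to the eigenvalues (5 ± sqrt(5)) / 2
```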
Contributing to the still scarce European evidence, this thesis examines in detail different aspects of equity styles and systematic liquidity in Europe and their role with respect to European stocks and mutual funds. First, a consistent set of European style indices is outlined, from which risk factors like the market excess return, size, valuation and momentum, but also novel idiosyncratic risk and systematic liquidity factors are derived. The daily data from 2002 to 2009 examined here cover the recent financial crisis. Based on a stochastic discount factor GMM analysis, liquidity is found to help price European stocks, and a decrease in common liquidity during the recent period of market stress reveals the role of liquidity as a state variable of hedging concern to investors. Moreover, the risk factors including liquidity and idiosyncratic risk are found to be relevant in mutual fund performance evaluation, as indicated by significant risk exposures of a set of mutual funds with a European investment focus. However, across different models the risk-adjusted net performance of these funds is mostly indistinguishable from zero, in line with equilibrium models of fund performance. Furthermore, the dynamic abilities of fund managers with respect to liquidity and risk factor timing are examined by conducting unconditional as well as time-varying analyses based on a Kalman filter approach. The results reveal dynamics in the risk exposures of mutual funds, but evidence on daily risk factor timing is weak with respect to established risk factors as well as liquidity. Finally, the evidence that both liquidity and idiosyncratic risk affect the cross-section of asset returns suggests that the two risk factors capture different return characteristics. As motivated by models of price discovery processes, liquidity might capture transaction costs, while idiosyncratic risk seems to capture effects of price discovery.
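For reference, a stochastic discount factor GMM analysis of this kind typically rests on moment conditions of the following textbook form (the factor labels below are illustrative; the exact specification used in the thesis may differ):

```latex
% Linear SDF moment conditions (illustrative notation, not taken from the thesis).
% Pricing kernel linear in the risk factors, here including a liquidity factor LIQ:
m_{t+1} = 1 - b_{MKT}\,MKT_{t+1} - b_{SIZE}\,SIZE_{t+1} - b_{VAL}\,VAL_{t+1}
            - b_{MOM}\,MOM_{t+1} - b_{LIQ}\,LIQ_{t+1},
\qquad
\mathbb{E}\!\left[\, m_{t+1}\, R^{e}_{i,t+1} \,\right] = 0
\quad \text{for all test assets } i .
```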
The dissertation consists of three self-contained essays with a focus on empirical capital market research. The first essay, "Time-Varying Conditional Market Returns: Is Variance or Tail-Risk Priced", empirically investigates whether there is a positive relationship between aggregate market tail risk and expected returns. Based on the classical risk-return trade-off, intuition suggests a statistically positive relationship between aggregate market tail risk and expected returns. The paper contributes to the previous literature in several ways. First, it offers a statistically well-founded method for aggregate tail risk estimation by relying on extreme value theory (EVT). Second, it empirically establishes a positive relationship between lagged aggregate market tail risk and market returns, based on a time-series approach, which can be further characterized by a non-linear dependence structure. The second essay, "Credit Cycle Dependent Spread Determinants in Emerging Sovereign Debt Markets", empirically estimates non-linear dependence structures of determinants of changes in sovereign bond spreads. The empirical results of the paper clearly identify a non-linear influence of the determinants of changes in emerging markets sovereign credit spreads. The statistical and economic significance as well as the sign of some determinants change with respect to the underlying sovereign credit cycle. The third paper, "Modeling the Dependence Structure between Aggregate Market Tail Risk and Expected Returns", builds on the results of the first paper regarding the non-linear positive dependence structure between aggregate market tail risk and expected returns. In order to provide a more profound analysis of the non-linear pattern, the paper employs several copula functions to model the conditional bivariate time-series dependence structure. The inspection of the empirical results clearly indicates that the Clayton copula provides the best description of the conditional dependence structure.
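For reference, the standard bivariate Clayton copula selected above has the following well-known form (a textbook formula, not taken from the dissertation); its non-zero lower tail dependence is what makes it a natural candidate for tail-risk applications:

```latex
% Bivariate Clayton copula with dependence parameter \theta > 0 (standard definition):
C_{\theta}(u, v) = \left( u^{-\theta} + v^{-\theta} - 1 \right)^{-1/\theta},
\qquad
\lambda_L = 2^{-1/\theta} \ \text{(lower tail dependence)},
\qquad
\lambda_U = 0 .
```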
This thesis investigates the suitability of state-of-the-art protocols for large-scale and long-term environmental event monitoring using wireless sensor networks, based on the application scenario of early forest fire detection. By suitably combining energy-efficient protocol mechanisms, a novel communication protocol, referred to as the cross-layer message-merging protocol (XLMMP), is developed. Qualitative and quantitative protocol analyses are carried out to confirm that XLMMP is particularly suitable for this application area. The quantitative analysis is mainly based on finite-source retrial queues with multiple unreliable servers. While this queueing model is widely applicable in various research areas even beyond communication networks, this thesis is the first to determine the distribution of the response time in this model. The model evaluation is mainly carried out using Markovian analysis and the method of phases. The obtained quantitative results show that XLMMP is a feasible basis for designing scalable wireless sensor networks that (1) may comprise hundreds of thousands of tiny sensor nodes with reduced node complexity, (2) are suitable to monitor an area of tens of square kilometers, and (3) achieve a lifetime of several years. The deduced quantifiable relationships between key network parameters — e.g., node size, node density, size of the monitored area, aspired lifetime, and the maximum end-to-end communication delay — enable application-specific optimization of the protocol.
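The basic computational step behind such a Markovian analysis is solving for the steady-state distribution of a continuous-time Markov chain from its generator matrix; the sketch below illustrates that generic step on a tiny made-up chain (it is not the retrial-queue model analysed in the thesis):

```python
# Generic steady-state computation for a continuous-time Markov chain (CTMC):
# solve pi Q = 0 with sum(pi) = 1. The 3-state generator is a made-up example,
# not the finite-source retrial-queue model from the thesis.
import numpy as np

Q = np.array([
    [-1.0,  1.0,  0.0],   # state 0: e.g. node idle
    [ 0.5, -1.5,  1.0],   # state 1: e.g. node transmitting
    [ 0.0,  2.0, -2.0],   # state 2: e.g. node retrying
])

# Append the normalisation constraint sum(pi) = 1 to the balance equations Q^T pi = 0.
A = np.vstack([Q.T, np.ones(Q.shape[0])])
b = np.zeros(Q.shape[0] + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)          # steady-state probabilities, sum to 1
print(pi @ Q)      # approximately the zero vector: global balance holds
```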
Decentralised Service Location, i.e. finding an application communication endpoint based on a Distributed Hash Table (DHT), is a fairly new concept. The precise security implications of this approach have not been studied in detail. More importantly, a detailed analysis regarding the applicability of existing security solutions to this concept has not been conducted. In many cases existing client-server approaches to security may not be feasible. In addition, to understand the necessity for such an analysis, it is key to acknowledge that Decentralised Service Location has some unique security requirements compared to other P2P applications such as filesharing or live streaming. This thesis concerns the security challenges for Decentralised Service Location. The goals of our work are, on the one hand, to precisely understand the security requirements and research challenges for Decentralised Service Location, and, on the other hand, to develop and evaluate corresponding security mechanisms. The thesis is organised as follows. First, fundamentals are explained and the scope of the thesis is defined. Decentralised Service Location is defined, and P2PSIP is explained technically as a prototypical example. Then, a security analysis for P2PSIP is presented. Based on this security analysis, security requirements for Decentralised Service Location and the corresponding research challenges -- i.e. security concerns not suitably mitigated by existing solutions -- are derived. Second, several decentralised solutions are presented and evaluated to tackle the security challenges for Decentralised Service Location. We present decentralised algorithms to ensure the availability of the DHT's lookup service in the presence of adversarial nodes. These algorithms are evaluated via simulation and compared to analytical bounds. Further, a cryptographic approach based on self-certifying identities is illustrated and discussed. This approach enables decentralised integrity protection of location bindings. Finally, a decentralised approach to assess unknown identities is introduced. The approach is based on a Web-of-Trust model and is evaluated via a prototypical implementation. The thesis closes with a summary of the main contributions and a discussion of open issues.
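The following is a minimal sketch of the self-certifying-identity idea mentioned above, not the thesis's implementation: the node ID is derived as the hash of the node's public key, so a signed location binding can be verified against the ID alone, without a central authority. The use of Ed25519 via the third-party cryptography package is an assumption made here for illustration.

```python
# Sketch of a self-certifying identity for decentralised service location.
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey, Ed25519PublicKey

private_key = Ed25519PrivateKey.generate()
public_bytes = private_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)

node_id = hashlib.sha256(public_bytes).hexdigest()        # self-certifying identifier

binding = b"sip:alice@example.org -> 192.0.2.7:5060"      # location binding stored in the DHT
signature = private_key.sign(binding)

# A verifier that fetched (binding, signature, public_bytes) under the key node_id checks:
assert hashlib.sha256(public_bytes).hexdigest() == node_id          # the key matches the ID
Ed25519PublicKey.from_public_bytes(public_bytes).verify(signature, binding)  # binding is authentic
print("binding accepted for node", node_id[:16], "...")
```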
Up to a few years ago, the typical operation of a distributed architecture was modelled as the enactment of a collaborative protocol by networked nodes. In this context, all nodes were under the system designer's control, faithfully executing the programmed behaviour. However, today's networks are often characterized by a free aggregation of nodes. Thus, the possibility increases that a selfish party operates a node and violates the collaborative protocol in order to increase a personal benefit. If such violations conflict with the system goals, they can even be considered an attack. Current fault-tolerance techniques may weaken the harmful impact to some degree, but they cannot necessarily prevent it. Furthermore, different architectures differ in their fault-tolerance capabilities. This emphasizes the need for a systematic approach to achieving collaboration in distributed systems. In this PhD thesis we consider the problem of attaining a targeted level of collaboration in a distributed architecture deployed over rational, selfishly driven nodes, which have an interest in deviating from the communication protocol to increase a personal benefit. In order to reach this goal and to cover a broad spectrum of systems, we do not modify the architecture or communication protocol itself. Instead, we add a monitoring logic to inspect a node's behaviour in terms of its correct interaction with the system. With this approach, the system designer needs to weigh several aspects, such as the specific environmental circumstances, the inspection effort, and the nodes' individual preferences. Furthermore, he should consider the fact that each agent could be aware of the other agents' preferences and selfishness, and make strategic choices accordingly. The natural frame for modelling such a complex, interdependent and possibly interactive decision landscape is Game Theory (GT). In this context, the monitoring setup proposed in this thesis corresponds to a class of GT models known as Inspection Games (IG). Such games were introduced in 1962, in their simplest formulation, by Dresher in the context of non-proliferation treaties and arms control. They model the general situation in which one inspector verifies through inspections the correct behaviour of another party, called the inspectee. However, inspections are costly and the inspector's resources are limited. Hence, complete surveillance is not possible and an inspector will try to minimize the inspections. Finally, a strategy combination (violating/inspecting or not) that is considered optimal by the parties represents a Nash equilibrium of the game. In this thesis, the initial IG model is enriched by the possibility of false negatives, i.e. the probability that a violation is not detected during an inspection. Both the initial and the enriched model remain abstract and can thus easily find interdisciplinary application. As the solution approach of this thesis in the context of distributed systems, the model captures the network participants' strategy choices. As an outcome, the IG model makes it possible to calculate system parameters that shift the Nash equilibrium to the desired target collaboration. The approach is designed as a framework. It can therefore be applied to any architecture, any selfish goal, and any reliability technique. For the sake of concreteness, we discuss the IG approach by means of the illustrative case of a Publish/Subscribe (pub/sub) architecture.
In this way, messages over the communication infrastructure have a specific associated semantics. The Inspection Game approach of this thesis secures the whole collaborative protocol in order to attain a correctly working system up to a specific degree (in the sense of collaboration). This represents a completely new approach in terms of reliability mechanisms. Hence, this thesis can be considered fundamental research. In order to enable broad application, the generality of this approach is supported by further contributions. These include, among others, the software library RCourse for practical robustness evaluations of overlay networks and a simulation environment for further research on the abstract IG model. All developments will finally be published as open-source software.
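A minimal numeric sketch of the classical 2x2 inspection game (payoffs invented here; the thesis's enriched model additionally allows false negatives) shows the lever described above: the mixed-strategy Nash equilibrium follows from the indifference conditions, and changing the payoff parameters, e.g. the penalty, shifts that equilibrium.

```python
# 2x2 inspection game sketch with made-up payoffs. Rows: inspectee (comply, violate);
# columns: inspector (no inspection, inspection). Entries: (inspectee, inspector) payoffs.
g, p = 3.0, 5.0        # inspectee's gain from an undetected violation, penalty if caught
c, l = 1.0, 4.0        # inspector's cost per inspection, damage from an undetected violation

payoff_inspectee = [[0.0, 0.0],      # comply
                    [g,   -p]]       # violate: gain if not inspected, penalty if inspected
payoff_inspector = [[0.0, -c],       # comply: inspection only costs c
                    [-l,  -c]]       # violate: damage l if missed, only cost c if caught

# Mixed-strategy Nash equilibrium from the indifference conditions:
q_star = g / (g + p)   # inspection probability that makes the inspectee indifferent
v_star = c / l         # violation probability that makes the inspector indifferent
print(f"equilibrium: inspect with prob {q_star:.2f}, violate with prob {v_star:.2f}")

# Sanity check: at (v*, q*) both players are indifferent between their pure strategies.
ev_violate = (1 - q_star) * payoff_inspectee[1][0] + q_star * payoff_inspectee[1][1]
ev_inspect = (1 - v_star) * payoff_inspector[0][1] + v_star * payoff_inspector[1][1]
ev_no_insp = (1 - v_star) * payoff_inspector[0][0] + v_star * payoff_inspector[1][0]
print(round(ev_violate, 6), round(ev_inspect - ev_no_insp, 6))   # both approximately 0
```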
In his famous paper, Gersho stressed that the codecells of optimal quantizers asymptotically make an equal contribution to the distortion of the quantizer. Motivated by this fact, we investigate in this paper quantizers in the scalar case where each codecell contributes exactly the same portion to the quantization error. We show that such quantizers of Gersho type - or Gersho quantizers for short - exist for non-atomic scalar distributions. As a main result we prove that Gersho quantizers are asymptotically optimal.
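In notation chosen here for illustration (the paper's own setup may use a different distortion exponent), the equal-contribution property for an n-level scalar quantizer with codecells C_1, ..., C_n, codepoints c_1, ..., c_n and source distribution mu reads:

```latex
% Equal-contribution (Gersho-type) condition, squared-error distortion as an example:
D_i \;=\; \int_{C_i} \lvert x - c_i \rvert^{2} \, d\mu(x),
\qquad
D_1 = D_2 = \dots = D_n = \frac{D}{n},
\qquad
D = \sum_{i=1}^{n} D_i .
```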
The dissertation "Three Essays on Credit Risk with a Special Focus on the Subprime Financial Crisis" consists of three self-contained essays. At the core of the dissertation is the market for credit risk and its role during and after the recent subprime financial crisis. In particular, it is dedicated to the following research questions: - What are the causes of the subprime financial crisis? - Which role did credit markets and credit derivatives play during the crisis? - How might the crisis be resolved? - What is the impact of the crisis on market participants perception of credit risk? - How can complex credit derivatives be modeled in a way that allows an understanding of their inherent risk?
In this thesis a new approach to building product recommender systems is introduced. By using a customer-centric dialogue, the customers' preferences are elicited. These are the basis for inferring utility estimations about the desired technical properties of the products in question. Systems built this way can both operate autonomously, e.g., in an online store, and support a salesperson directly at the point-of-sale. The core of the approach is formed by a layered domain description that models customer stereotypes and needs, product attributes, the products themselves, and the causal interrelations between customer and product properties. Maintenance of the domain description, i.e., keeping the model up-to-date in the face of frequent changes, is facilitated by the clear separation of concerns provided by the layered structure. In fact, the most frequently used class of updates can be handled in an entirely automated way if some constraints are satisfied. On a high level of abstraction, the system behavior is described by State Charts that are parameterized according to the domain description. Those parts of the system description where State Charts would be too imprecise are implemented by separate components realizing the required complex semantics. From the domain description, a Bayesian network is generated that forms the core of the inference engine of the recommender system. The network essentially controls the system-initiated dialogue flow and the recommendation process. Due to the characteristics of Bayesian networks, it is possible to respond to user-initiated dialogue steps in a natural way. Moreover, an explanation of the current recommendation can be generated without having to explicitly encode additional information in the modeling layer. Finally, a database structure and the SQL queries necessary to obtain recommendations can be inferred from the corresponding parts of the domain description. Instantiation of the system for a specific business domain is supported by a dedicated maintenance application that hides the complexities of the underlying algorithms. Thus, day-to-day system updates by non-technical domain experts, e.g., product managers, are facilitated. The developed concepts were implemented in cooperation with a local industry partner who intends to apply the recommender system in the field of mobile communications.
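To illustrate the kind of computation such a generated Bayesian network performs, here is a tiny hand-coded sketch with invented variables and probabilities (in the thesis the network is generated from the layered domain description): inference by enumeration over a chain Stereotype -> Need -> Attribute yields the posterior over a product attribute given what the dialogue has revealed so far.

```python
# Tiny Bayesian network sketch (invented numbers): Stereotype -> Need -> Attribute,
# with posterior inference by full enumeration of the joint distribution.
from itertools import product

P_S = {"budget": 0.6, "power_user": 0.4}
P_N_given_S = {
    "budget":     {"heavy_data": 0.2, "light_data": 0.8},
    "power_user": {"heavy_data": 0.9, "light_data": 0.1},
}
P_A_given_N = {
    "heavy_data": {"large_plan": 0.85, "small_plan": 0.15},
    "light_data": {"large_plan": 0.10, "small_plan": 0.90},
}

def posterior_attribute(evidence):
    """P(attribute | evidence) by summing the joint over all consistent assignments."""
    scores = {"large_plan": 0.0, "small_plan": 0.0}
    for s, n, a in product(P_S, P_A_given_N, scores):
        joint = P_S[s] * P_N_given_S[s][n] * P_A_given_N[n][a]
        if all(evidence.get(var) in (None, val) for var, val in
               (("stereotype", s), ("need", n))):
            scores[a] += joint
    total = sum(scores.values())
    return {a: v / total for a, v in scores.items()}

# Dialogue step: the user revealed a power-user stereotype; which tariff attribute to recommend?
print(posterior_attribute({"stereotype": "power_user"}))   # large_plan clearly preferred
```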
This dissertation builds on the economic research about M&A and its effects on the merging plants' performance. In particular, the objective of this thesis is to shed some light on questions about the causal effects of M&A on plants' performance, taking firm heterogeneity into account. Since there is no typical merger (Tichy, 2001), it distinguishes between acquiring and target plants, and between horizontal and non-horizontal mergers. The thesis focuses on two major research questions: do plants with specific characteristics self-select into merger activity, and is there a causal effect of M&A on the merging plants' performance parameters, in particular on labor productivity, employment, and skill-intensity? The results allow drawing some conclusions about the reasons why plants merge. The thesis consists of four chapters. All contributions have in common that they focus on questions about the effects of M&A on plant performance. That is, the thesis does not discuss questions about the effects of M&A on industry and aggregate concentration levels, or the effects of M&A on social welfare. Each chapter in this thesis can be read separately, because they are based on stand-alone papers. Hence, all chapters have their own introduction and conclusion. The structure and storyline of this thesis and the interaction of the chapters are as follows: the first chapter is a survey of M&A and acts as an introduction to this research field. The second chapter describes propensity score matching as a newer microeconometric evaluation method and explains its implementation in the econometric software STATA. In a certain sense, the second chapter serves as a preparation for a better understanding of the econometric analysis performed in chapters three and four, which form the heart of the thesis. They both discuss questions about the self-selection of plants into merger activity and questions about the causal effects on plants' performance. In particular, the third chapter focuses on the effects on the merging plants' labor productivity, while the fourth chapter focuses on the effects on both employment and skill-intensity. Even though both chapters discuss the effects on different performance parameters, they are similar with respect to motivation, structure, and estimation strategy. Hence, there is some inevitable overlap between these two chapters, which, as mentioned above, are based on stand-alone papers.
IMPACT 2013 in Berlin, Germany (in conjunction with HiPEAC 2013) is the third workshop in a series of international workshops on polyhedral compilation techniques. The previous workshops were held in Chamonix, France (2011), in conjunction with CGO 2011, and in Paris, France (2012), in conjunction with HiPEAC 2012.
This work investigates the intertemporal portfolio optimization of professional portfolio managers. It analyzes whether the special conditions of delegation in which portfolio managers make investment decisions - compensation depending on assets under management, capital flow depending on past performance, and influence on prices - can explain the observable investment patterns of portfolio managers. It further evaluates optimal portfolio policies from the primary investor’s perspective.
The introduction of the economic reform program, Doi Moi, in 1986 opened the door for private sector development in Vietnam, as well as the country's integration into the world economy. As Hanoi is the national political centre, it occupies an important role in Vietnam's transformation process. The capital has become a major hub for socio-economic development. Whereas "Hanoi was renowned for its quiet streets in the 1980s" (Drummond 2000: 2382), nowadays it is characterised by its bustling street life. Public spaces, such as streets and sidewalks, are appropriated by private individuals for mostly small-scale economic activities. Existing green parks are privatised in order to cater to the growing demand for leisure space. At the same time, official spaces like Ba Dinh Square or Ly Thai To Square are occupied by Hanoi's residents for sports and gatherings. Thomas (2002: 1621) regards this as a contestation of the state-defined landscape by local people. In other words, the state's defining power over the urban image is challenged by a multiplicity of spatial producers. This thesis focuses on the development of public spaces in Hanoi. Using the frame of an emic reconstruction, major themes such as changes in power relations between state and society and their reflection in the city's physical environment are explored. Embedded in the discipline of urban sociology, this thesis aims to contribute to the discussion about the correlation between the public sphere as a sociological/political category and the materiality and practices of public space. According to Sennett (2008), the public sphere is a crucial element of urbanism. Furthermore, as Eisenstadt and Schluchter (20021: 12) indicate, there is a strong relationship between the public sphere and civil society, along with political and economic liberalisation. Therefore, the question remains whether public spaces in Hanoi are an expression of a public sphere or a prerequisite for its development.
The capacity of a society to adapt to and cope with disasters is generally discussed under the term resilience. It is a concept that has entered vulnerability research only recently and that still needs further theoretical underpinning and empirical grounding. The vulnerability context of the urban poor can be approached from different angles, such as urbanism or development studies. My thesis aims to contribute to a better understanding of resilience within vulnerability research. I decided to follow recent approaches in vulnerability research for two reasons. First, integrated vulnerability concepts highlight the value of interdisciplinary approaches. Second, the integrated vulnerability discourse offers an analytical framework that can be modified and applied to the empirical study of vulnerability in Jakarta's kampungs. In chapter two I will introduce the theoretical embedding of the analytical framework. This is necessary, as disaster research in social science is a relatively young research stream. Terms such as disasters, catastrophes and hazards are often used synonymously. Moreover, different definitions of vulnerability and resilience lead to diverse applications of the concepts. Thus, I would like to outline some aspects of disaster research in social science and present the definitions I will apply in this thesis. Moreover, at the end of chapter two, I will refine my research question and present the methodological framework I use for the analysis of my empirical data. The integrated vulnerability framework points out the importance of the temporal and spatial scale in vulnerability analysis. In chapter three, I will therefore provide the temporal and spatial embedding of my research by presenting a historical perspective on Jakarta. Since it is not possible to present almost 500 years of city history in one chapter, I will concentrate on two aspects. The first focus will be on the origin and development of Jakarta's kampungs. The second focus will be on the flood situation. Accordingly, I will briefly present the different phases of Jakarta's history. In each phase I will point out the aspects relevant to kampungs and the aspects relevant with regard to the flood situation. Finally, I will conclude how the situation of the kampungs and their dwellers changed over time. In addition, I will discuss the historical development of flood risk in a separate section in order to contribute to a better understanding of recent flooding. Since the integrated vulnerability framework is a place-based approach, I would like to introduce the theoretical discussion on urbanism and space in chapter four. I will show that the analytical value of the concept of the slum in approaching the local level in Jakarta's kampungs is limited. I will also present the concept of informality, as it is often discussed in the context of slums and urban poverty. Moreover, I will argue that informality is an integral part of megacities in developing countries and that it reflects processes of self-organisation on a local level. Aspects of social organisation are, again, also important for a vulnerability analysis. I will then link the concepts of institutions and social capital in order to show how social organisation can be approached as a resource for communities. I will conclude the chapter with models of social space, particularly the concept of locality, which will provide a framework for structuring and analysing the empirical field data.
In chapters five and six, I will leave the theoretical discussion and present the data I collected during my empirical research in two kampungs. Chapter five will begin with a short introduction to the research approach and the methods I used in the research process. Then I will present the field data following the structure provided by the concept of locality, which basically refers to the categories of material space and social organisation. I will add two categories to this structure. In order to link the previous discussion on informality with the empirical research, I will present data on the informal-formal continuum in both research locations. In addition to this, I will sum up my empirical findings on how people adapt to and cope with hazards, particularly flooding. I decided to separate the description and the analysis of my research findings because I would like to first provide a general understanding of the local situation in both research locations in a descriptive way before I analyse and interpret the data. Accordingly, I will analyse and interpret the data in chapter six. The main focus will be on aspects of space and social organisation so that local communities can be conceptualised. In chapter seven I will finally analyse aspects of vulnerability and resilience by applying the integrated vulnerability framework to my research findings. Lastly, I will briefly summarize the thesis in chapter eight and provide an outlook.
Since any encryption map may be viewed as a polynomial map between finite-dimensional vector spaces over finite fields, the security of a cryptosystem can be examined by studying the difficulty of solving large systems of multivariate polynomial equations. Therefore, algebraic attacks lead to the task of solving polynomial systems over finite fields. In this thesis, we study several new algebraic techniques for polynomial system solving over finite fields, especially over the finite field with two elements. Instead of using traditional Gröbner basis techniques we focus on highly developed methods from several other areas like linear algebra, discrete optimization, numerical analysis and number theory. We study techniques from combinatorial optimization to transform a polynomial system solving problem into a (sparse) linear algebra problem. We highlight two new kinds of hybrid techniques. The first kind combines the concept of transforming combinatorial infeasibility proofs into large systems of linear equations with the concept of mutants (finding special lower-degree polynomials). The second kind uses the concept of mutants to optimize the Border Basis Algorithm. We study recent suggestions for translating a system of polynomial equations over the finite field with two elements into a system of polynomial equalities and inequalities over the set of integers (respectively over the set of reals). In particular, we develop several techniques and strategies for converting the polynomial system of equations over the field with two elements into a polynomial system of equalities and inequalities over the reals (respectively over the set of integers). This enables us to make use of several algorithms from the fields of discrete optimization and number theory. Furthermore, it also enables us to investigate the use of numerical analysis techniques such as homotopy continuation methods and Newton's method. In each case several conversion techniques have been developed, optimized and implemented. Finally, the efficiency of the developed techniques and strategies is examined using standard cryptographic examples such as CTC and HFE. Our experimental results show that most of the techniques developed are highly competitive with state-of-the-art algebraic techniques.
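One standard conversion of this kind (a well-known encoding, shown here only as an example of the general technique, not as one of the specific strategies developed in the thesis) replaces an equation over F_2 by linear inequalities over 0/1 integers:

```latex
% Converting z = x XOR y, i.e. x + y + z \equiv 0 \pmod 2 with x, y, z \in \{0, 1\},
% into equivalent 0/1-integer linear inequalities:
z \;\ge\; x - y, \qquad
z \;\ge\; y - x, \qquad
z \;\le\; x + y, \qquad
z \;\le\; 2 - x - y .
```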
Commutative Gröbner bases have a lot of applications in theory and practice, because they have many nice properties, they are computable, and there exist many efficient improvements of their computation. Non-commutative Gröbner bases also have many useful properties. However, applications of non-commutative Gröbner bases are rarely considered due to the high complexity of their computation. The purpose of this study was to improve the computation of non-commutative Gröbner bases and to investigate applications of non-commutative Gröbner bases. Gröbner basis theory in free monoid rings was carefully revised and Gröbner bases were precisely characterized in great detail. For the computation of Gröbner bases, the Buchberger Procedure was formulated. Three methods, namely interreduction on obstructions, the Gebauer-Möller criteria, and the detection of redundant generators, were developed to improve the efficiency of the Buchberger Procedure. Further, the same approach was applied to study Gröbner basis theory in free bimodules over free monoid rings. The Buchberger Procedure was also formulated and improved in this setting. Moreover, J.-C. Faugère's F4 algorithm was generalized to this setting. Finally, many meaningful applications of non-commutative Gröbner bases were developed. Enumerating procedures were proposed to semi-decide some interesting undecidable problems. All the examples in the thesis were computed using the package gbmr of the computer algebra system ApCoCoA. The package was developed by the author. It contains dozens of functions for Gröbner basis computations and many concrete applications. The package gbmr and a collection of interesting examples are available at http://www.apcocoa.org/.
This qualitative approach to research develops a case-oriented design in order to examine risks of corruption in public procurement. The method involves expert interviews as the most important data collection tool and explains how to examine the information by means of a qualitative content analysis. In order to develop rigorous results, the concepts of external validity, construct validity, internal validity and reliability are applied. The research design was applied in two field investigations: a first project focuses on the challenges and chances for anticorruption when awarding contracts in a competitive dialogue. For this purpose, data was collected in an investigation of the German construction market. The results are presented in the form of 16 propositions, also including policy recommendations and approaches for reform. In the framework of a further case-based research project, the paper analyzes the organizational structure and working process of "China's Tangible Construction Market" (TCM). The TCM is an administrative institution where a bid inviter can register in order to announce a public need and conduct a procurement procedure at a fixed location. The analysis of expert interviews conducted during an investigation of the Chinese construction market shows that the TCM offers strong institutional support that can be helpful to curb corruption in public procurement.
Thomas Bernhard and the air crash at Tettelham in 1944. "It was a spectacle of unrelieved tragedy".
(2012)
In his autobiographical essay "Ein Kind" (A Child), the Austrian writer Thomas Bernhard (1931-1998) describes the downing of a heavy World War II US Air Force bomber at Tettelham near Traunstein in Upper Bavaria in 1944. The scene of the crash is now marked by a chapel and some reminiscences of the crew's fate. Although the event is still vivid in local history, it was not known that Thomas Bernhard had been a keen eyewitness.
This thesis investigates how placement variations of electronic devices influence the possibility of using sensors integrated in those devices for context recognition. The vast majority of context recognition research assumes well-defined, fixed sensor locations. Although this might be acceptable for some application domains (e.g. in an industrial setting), users, in general, will have a hard time coping with these limitations. If one needs to remember to carry dedicated sensors and to adjust their orientation from time to time, the activity recognition system is more distracting than helpful. How can we deal with device location and orientation changes to make context sensing mainstream? This thesis presents a systematic evaluation of device placement effects in context recognition. We first deal with detecting if a device is carried on the body or placed somewhere in the environment. If the device is placed on the body, it is useful to know on which body part. We also address how to deal with sensors changing their position and their orientation during use. For each of these topics some highlights are given in the following. Regarding environmental placement, we introduce an active sampling approach to infer symbolic object location. This approach requires only simple sensors (acceleration, sound) and no infrastructure setup. The method works for specific placements such as "on the couch" or "in the desk drawer" as well as for general location classes, such as "closed wood compartment" or "open iron surface". In the experimental evaluation we reach a recognition accuracy of 90% and above over a total of over 1200 measurements from 35 specific locations (taken from 3 different rooms) and 12 abstract location classes. To derive the coarse device placement on the body, we present a method solely based on rotation and acceleration signals from the device. It works independently of the device orientation. The on-body placement recognition rate is around 80% over 4 min. of unconstrained motion data for the worst scenario and up to 90% over a 2 min. interval for the best scenario. We use over 30 hours of motion data for the analysis. Two special issues of device placement are orientation and displacement. This thesis proposes a set of heuristics that significantly increase the robustness of motion sensor-based activity recognition with respect to sensor displacement. We show how, within certain limits and with modest quality degradation, motion sensor-based activity recognition can be implemented in a displacement-tolerant way. We evaluate our heuristics first on a set of synthetic lower arm motions which are well suited to illustrate the strengths and limits of our approach, then on an extended modes-of-locomotion problem (sensors on the upper leg) and finally on a set of exercises performed on various gym machines (sensors placed on the lower arm). In this example our heuristics raise the displaced recognition rate from 24% for a displaced accelerometer, which had 96% recognition when not displaced, to 82%.
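A minimal sketch of one placement-robust idea in this spirit, not the thesis's specific heuristics: using the magnitude of the 3-axis acceleration vector instead of the raw axes makes windowed features invariant to how the sensor is rotated on the body.

```python
# Orientation-independent motion features: the acceleration magnitude is unchanged under
# rotation, so window statistics over it survive orientation changes.
# (Illustration of the general idea only, not the thesis's heuristics.)
import numpy as np

def window_features(acc_xyz):
    """acc_xyz: array of shape (n_samples, 3) for one analysis window."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)      # rotation-invariant signal
    return {
        "mean": magnitude.mean(),
        "std": magnitude.std(),
        "energy": float(np.sum(magnitude ** 2) / len(magnitude)),
    }

rng = np.random.default_rng(1)
window = rng.normal(0.0, 1.0, size=(128, 3)) + [0.0, 0.0, 9.81]   # sensor roughly "upright"

# Rotate the same window by 90 degrees around the x-axis (device worn differently):
R = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
rotated = window @ R.T

print(window_features(window))
print(window_features(rotated))   # nearly identical features despite the new orientation
```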
Real Closed * Rings
(2010)
In this dissertation I examine a definition of the real closure of commutative unitary reduced rings. I also give a characterization of rings that are real closed in this context and show how one can arrive at such a real closure. There are sufficient examples to help the reader get a feel for real closed * rings and the real closure * of commutative unitary rings.
Landscapes and societies in Xishuangbanna Dai Autonomous Prefecture in Southwest China have undergone unprecedented changes over the last 60 years. These transformations within the landscapes manifest themselves as land cover change, for example the intensification of traditional land use systems and the introduction of monocultures, leading to deforestation, loss of biodiversity and other forms of environmental degradation. At the same time, communities and societies within these landscapes have experienced a certain degree of economic development, mainly through the exploitation of natural resources. Through changing political and economic frameworks, they have also undergone profound transformations in their social and socio-cultural configurations. Based on the outcomes of field work in Xishuangbanna between 2006 and 2010, this study examines institutions, institutional voids and institutional change concerning land use in the Naban River Watershed National Nature Reserve. Combining socio-economic and ecological data, patterns of land-use change and their interrelation with local communities are explored. With the emergence of the rubber-line, a socio-ecological frontier, disparities between upland and lowland landscapes and communities are intensifying.
Database systems have been vital in all forms of data processing for a long time. In recent years, the amount of processed data has been growing dramatically, even in small projects. Nevertheless, database management systems tend to be static in terms of size and performance, which makes scaling a difficult and expensive task. Because of performance and especially cost advantages, more and more installed systems have a shared-nothing cluster architecture. Due to the massive parallelism of the hardware, programming paradigms from high-performance computing are translated into data processing. Database research struggles to keep up with this trend. A key feature of traditional database systems is to provide transparent access to the stored data. This introduces data dependencies and increases system complexity and inter-process communication. Therefore, many developers are exchanging this feature for better scalability. However, explicitly managing the data distribution and data flow requires a deep understanding of the distributed system and reduces the possibilities for automatic and autonomic optimization. In this thesis we present an approach to database system scaling and allocation that features good scalability although it keeps the data distribution transparent. The first part of this thesis analyzes the challenges and opportunities for self-scaling database management systems in cluster environments. Scalability is a major concern of Internet-based applications. Access peaks that overload the application are a financial risk. Therefore, systems are usually configured to be able to process peaks at any given moment. As a result, server systems often have a very low utilization. In distributed systems the efficiency can be increased by adapting the number of nodes to the current workload. We propose a processing model and an architecture that allow efficient self-scaling of cluster database systems. In the second part we consider different allocation approaches. To increase the efficiency we present a workload-aware, query-centric model. The approach is formalized; optimal and heuristic algorithms are presented. The algorithms optimize the data distribution for local query execution and balance the workload according to the query history. We present different query classification schemes for different forms of partitioning. The approach is evaluated for OLTP- and OLAP-style workloads. It is shown that variants of the approach scale well for both fields of application. The third part of the thesis considers benchmarks for large, adaptive systems. First, we present a data generator for cloud-sized applications. Due to its architecture the data generator can easily be extended and configured. A key feature is the high degree of parallelism that makes linear speedup for arbitrary numbers of nodes possible. To simulate systems with user interaction, we have analyzed a productive online e-learning management system. Based on our findings, we present a model for workload generation that considers the temporal dependency of user interaction.
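A toy sketch of the query-centric idea (invented workload and greedy heuristic, not the thesis's formal model or its optimal algorithms): partitions that appear together in the query history are co-located on the same node, subject to a per-node capacity, so that more queries can run locally.

```python
# Toy greedy allocation in the query-centric spirit (invented example only).
from collections import defaultdict

query_history = [                       # each query lists the partitions it touches
    {"customers_1", "orders_1"},
    {"customers_1", "orders_1"},
    {"customers_2", "orders_2"},
    {"orders_2", "lineitem_2"},
    {"lineitem_1", "orders_1"},
]
nodes = {"node_a": 3, "node_b": 3}      # capacity: max partitions per node

affinity = defaultdict(int)             # how often two partitions are queried together
for q in query_history:
    for a in q:
        for b in q:
            if a < b:
                affinity[(a, b)] += 1

placement = {}
for (a, b), _ in sorted(affinity.items(), key=lambda kv: -kv[1]):
    for part in (a, b):
        if part in placement:
            continue
        partner = b if part == a else a
        preferred = placement.get(partner)          # try to join the partner's node first
        for node in ([preferred] if preferred else []) + list(nodes):
            if sum(p == node for p in placement.values()) < nodes[node]:
                placement[part] = node
                break

print(placement)
local = sum(len({placement.get(p) for p in q}) == 1 for q in query_history)
print(f"{local}/{len(query_history)} queries run on a single node")
```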
Consumers interact with each other and within their social networks. Influentials have a disproportionately large influence on other consumers' preferences and choices, with relevant implications for product development, marketing planning and strategic marketing. An important question that previous research has not yet analyzed is whether and how to capture their influence on other consumers in preference-based market forecasts. This study analyzes these aspects for a representative sample of the German mobile phone market. It finds that assigning higher weights to the preferences of influentials significantly increases forecast accuracy. Other chapters of this thesis analyze the role of brokers in consumer networks and the decision process for seeding points in viral marketing campaigns.
Today, the world of multimedia is almost completely device- and content-centered. It focuses its energy nearly exclusively on technical issues such as computing power, network specifics, or content and device characteristics and capabilities. In most multimedia systems, the presentation of multimedia content and the basic controls for playback are the main issues. Because of this, a very passive user experience, comparable to that of traditional TV, is most often provided. In the face of recent developments and changes in the realm of multimedia and mass media, this "traditional" focus seems outdated. The increasing use of multimedia content on mobile devices, along with the continuous growth in the amount and variety of content available, makes an urgent re-orientation of this domain necessary. Given the increasingly difficult situation faced by users of such systems, it is only logical that these individuals be brought to the center of attention. In this thesis we consider these trends and developments by applying concepts and mechanisms to multimedia systems that were first introduced in the domain of user-centrism. Central to the concept of user-centrism is that devices should provide users with an easy way to access services and applications. Thus, the current challenge is to combine mobility, additional services and easy access in a single, user-centric approach. This thesis presents a framework for introducing and supporting several of the key concepts of user-centrism in multimedia systems. Additionally, a new definition of a user-centric multimedia framework has been developed and implemented. To satisfy the user's need for mobility and flexibility, our framework makes seamless media and service consumption possible. The main aim of session mobility is to help people cope with the increasing number of different devices in use. Using a mobile agent system, multimedia sessions can be transferred between different devices in a context-sensitive way. The use of the international standard MPEG-21 guarantees extensibility and the integration of content adaptation mechanisms. Furthermore, a concept is presented that allows for individualized and personalized selection and addresses the need to find appropriate content. Using this approach, all of this can be done in an easy and intuitive way. Especially in the realm of television, the demand that such systems cater to the needs of the audience is constantly growing. Our approach combines content-filtering methods, state-of-the-art classification techniques, and mechanisms well known from the areas of information retrieval and text mining. These are all utilized for the generation of recommendations in a promising new way. Additionally, concepts from the area of collaborative tagging systems are also used. An extensive experimental evaluation resulted in several interesting findings and proves the applicability of our approach. In contrast to the "lean-back" experience of traditional media consumption, interactive media services offer a solution that makes active participation of the audience possible. Thus, we present a concept which enables the use of interactive media services on mobile devices in a personalized way. Finally, a use case for enriching TV with additional content and services demonstrates the feasibility of this concept.
The majority of all security problems in today's Web applications is caused by string-based code injection, with Cross-site Scripting (XSS) being the dominant representative of this vulnerability class. This thesis discusses XSS and suggests defense mechanisms. We do so in three stages: First, we conduct a thorough analysis of JavaScript's capabilities and explain how these capabilities are utilized in XSS attacks. We subsequently design a systematic, hierarchical classification of XSS payloads. In addition, we present a comprehensive survey of publicly documented XSS payloads which is structured according to our proposed classification scheme. Second, we explore defensive mechanisms which dynamically prevent the execution of some payload types without eliminating the actual vulnerability. More specifically, we discuss the design and implementation of countermeasures against the XSS payloads "Session Hijacking", "Cross-site Request Forgery", and attacks that target intranet resources. We build upon this and introduce a general methodology for developing such countermeasures: Through an analysis of the targeted payload type, we determine a necessary set of basic capabilities an adversary needs for successfully executing an attack. The resulting countermeasure relies on revoking one of these capabilities, which in turn renders the payload infeasible. Finally, we present two language-based approaches that prevent XSS and related vulnerabilities: We identify the implicit mixing of data and code during string-based syntax assembly as the root cause of string-based code injection attacks. Consequently, we explore data/code separation in web applications. For this purpose, we propose a novel methodology for token-level data/code partitioning of a computer language's syntactical elements. This forms the basis for our two distinct techniques: For one, we present an approach to detect data/code confusion at run-time and demonstrate how this can be used for attack prevention. Furthermore, we show how vulnerabilities can be avoided by altering the underlying programming language. We introduce a dedicated datatype for syntax assembly instead of using string datatypes themselves for this purpose. We develop a formal, type-theoretical model of the proposed datatype and prove that it provides reliable separation between data and code, hence preventing code injection vulnerabilities. We verify our approach's applicability by means of a practical implementation for the J2EE application server.
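A minimal sketch of the dedicated-datatype idea, written here in Python rather than the thesis's J2EE setting: markup may only enter the document as trusted literals written by the developer, while everything appended as data is escaped, so user input can never become part of the code side of the syntax.

```python
# Minimal illustration of data/code separation via a dedicated markup datatype
# (a Python stand-in for the idea; the thesis develops and formalises this for J2EE).
import html

class Markup:
    def __init__(self, trusted_fragment: str = ""):
        self._parts = [trusted_fragment]                     # code: developer-written literal

    def code(self, trusted_fragment: str) -> "Markup":
        self._parts.append(trusted_fragment)                 # stays unescaped: part of the syntax
        return self

    def data(self, value: str) -> "Markup":
        self._parts.append(html.escape(value, quote=True))   # escaped: can never break the syntax
        return self

    def render(self) -> str:
        return "".join(self._parts)

user_input = '<script>alert("xss")</script>'
page = (Markup("<p>Hello, ")
        .data(user_input)                                    # attacker-controlled, rendered inert
        .code("</p>"))
print(page.render())
# <p>Hello, &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```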
In this thesis, we shall consider a certain class of algebraic cryptosystems called Gröbner Basis Cryptosystems. In 1994, Koblitz introduced the Polly Cracker cryptosystem, which is based on the theory of Gröbner bases in commutative polynomial rings. The security of this cryptosystem relies on the fact that the computation of Gröbner bases is, in general, EXPSPACE-hard. Cryptanalysis of these commutative Polly Cracker type cryptosystems is possible by using attacks that do not require the computation of a Gröbner basis for breaking the system, for example attacks based on linear algebra. To secure these (commutative) Gröbner basis cryptosystems against various attacks, among others, Ackermann and Kreuzer introduced a general class of Gröbner Basis Cryptosystems that are based on the difficulty of computing module Gröbner bases over general non-commutative rings. The objective of this research is to describe a special class of such cryptosystems by introducing the Weyl Gröbner Basis Cryptosystems. We divide this class of cryptosystems into two parts, namely the (left) Weyl Gröbner Basis Cryptosystems and the Two-Sided Weyl Gröbner Basis Cryptosystems. We suggest using Gröbner bases for left and two-sided ideals in Weyl algebras to construct specific instances of such cryptosystems. We analyse the resistance of these cryptosystems to the standard attacks and provide computational evidence that secure Weyl Gröbner Basis Cryptosystems can be built using left (resp. two-sided) Gröbner bases in Weyl algebras.
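For reference, the underlying n-th Weyl algebra is the non-commutative K-algebra generated by x_1, ..., x_n and the partial derivative operators, subject to the following standard defining relations (a textbook definition, not specific to the thesis):

```latex
% Defining relations of the n-th Weyl algebra
% A_n(K) = K\langle x_1,\dots,x_n,\partial_1,\dots,\partial_n\rangle:
x_i x_j = x_j x_i, \qquad
\partial_i \partial_j = \partial_j \partial_i, \qquad
\partial_i x_j - x_j \partial_i = \delta_{ij}
\quad \text{for } 1 \le i, j \le n .
```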
Prediction of economic variables is a basic component not only of economic models, but also of many business decisions. Nevertheless, it is difficult to produce accurate predictions in times of economic crises, which cause nonlinear effects in the data. In this dissertation, a nonlinear model for the analysis of time series with nonlinear effects is introduced. Linear autoregressive processes are extended by neural networks to overcome the problem of nonlinearity. This idea is based on the universal approximation property of single hidden layer feedforward neural networks of Hornik (1993). Univariate Autoregressive Neural Network Processes (AR-NN) as well as Vector Autoregressive Neural Network Processes (VAR-NN) and Neural Network Vector Error Correction Models (NN-VEC) are introduced. Various methods for variable selection, parameter estimation and inference are discussed. AR-NNs as well as an NN-VEC are used for prediction and analysis of the relationships between four variables related to the German automobile industry: the US dollar to euro exchange rate, the industrial output of the German automobile industry, the sales of imported cars in the USA and an index of shares of German automobile manufacturing companies. Prediction results are compared to those of various linear and nonlinear univariate and multivariate models.
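As a rough illustration of the model class (one common way of writing such a hybrid process; the concrete specification used in the dissertation may differ), a univariate AR-NN of order p with q hidden units combines a linear autoregression with a single hidden layer feedforward network:

    x_t = \nu + \sum_{i=1}^{p} \phi_i x_{t-i} + \sum_{j=1}^{q} \beta_j \, \psi\Bigl(\gamma_{j0} + \sum_{i=1}^{p} \gamma_{ji} x_{t-i}\Bigr) + \varepsilon_t,

where ψ is a bounded activation function such as the logistic function and ε_t is white noise. By Hornik's universal approximation property, the network term can capture a wide class of nonlinearities in the lags, while the linear part remains interpretable as an ordinary autoregression.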
The increasing cost of energy and the worldwide desire to reduce CO2 emissions have raised concern about the energy efficiency of information and communication technology. Whilst research has recently focused on data centres, this thesis identifies office computing environments as significant consumers of energy. Office computing environments offer great potential for energy savings: on the one hand, such environments consist of a large number of hosts; on the other hand, these hosts often remain turned on 24 hours per day while being underutilised or even idle. This thesis analyzes the energy consumption within office computing environments and suggests an energy-efficient virtualized office environment. The office environment is virtualized to achieve flexible virtualized office resources that enable an energy-based resource management. This resource management stops idle services and idle hosts from consuming resources within the office and consolidates utilised office services on office hosts. This increases the utilisation of some hosts while other hosts are turned off to save energy. The suggested architecture is based on a decentralized approach that can be applied to all kinds of office computing environments, even if no centralized data centre infrastructure is available. The thesis develops the architecture of the virtualized office environment together with an energy consumption model that is able to estimate the energy consumption of hosts and network within office environments. The model enables the energy-related comparison of ordinary and virtualized office environments, considering the energy-efficient management of services. Furthermore, this thesis evaluates the energy efficiency and overhead of the suggested approach. First, it theoretically proves the energy efficiency of the virtualized office environment with respect to the energy consumption model. Second, it uses Markov processes to evaluate the impact of user behaviour on the suggested architecture. Finally, the thesis develops a discrete-event simulation that enables the simulation and evaluation of office computing environments with respect to varying virtualization approaches, resource management parameters, user behaviour, and office equipment. The evaluation shows that the virtualized office environment saves more than half of the energy consumed within office computing environments, depending on user behaviour and office equipment.
The optimal quantizer in memory-size constrained vector quantization induces a quantization error which is equal to a Wasserstein distortion. However, for the optimal (Shannon-)entropy constrained quantization error, a proof of a similar identity is still missing. Relying on principal results of optimal mass transportation theory, we prove that the optimal quantization error is equal to a Wasserstein distance. Since we state the quantization problem in a very general setting, our approach covers the Rényi-α-entropy as a complexity constraint, which contains the special cases of (Shannon-)entropy constrained (α = 1) and memory-size constrained (α = 0) quantization. Additionally, we derive, for certain distance functions, codecell convexity for quantizers with a finite codebook. Using other methods, this regularity in codecell geometry has already been proved earlier by György and Linder.
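For reference, the quantities involved can be stated as follows (standard definitions from the quantization literature, given here only for orientation). For a quantizer q with codebook {a_i} and a source distribution P on R^d with norm exponent r, the distortion and the Rényi-α-entropy of the output distribution p_i = P(q^{-1}(a_i)) are

    D(q) = \int \lVert x - q(x) \rVert^{r} \, dP(x), \qquad H_\alpha(q) = \frac{1}{1-\alpha} \log \sum_i p_i^{\alpha} \quad (\alpha \neq 1),

with the limiting cases H_1(q) = -\sum_i p_i \log p_i (Shannon entropy) and H_0(q) = \log |\{ i : p_i > 0 \}| (memory size). The optimal quantization error under an entropy bound R is then inf { D(q) : H_α(q) ≤ R }, and the Wasserstein distortion of order r between two distributions P and Q is

    W_r(P, Q)^r = \inf_{\pi} \int \lVert x - y \rVert^{r} \, d\pi(x, y),

the infimum being taken over all couplings π of P and Q.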
Due to the immense advance of widely accessible information systems in industrial applications, science, education and everyday use, it becomes more and more difficult for users of those information systems to keep track of new and updated information. An approach to cope with this problem is to go beyond traditional search facilities and instead use the users' profiles to monitor data changes and to actively inform them about these updates - an aspect that has to be explicitly developed and integrated into a variety of information systems. This is traditionally done in an individual way, depending on the application and its platform. In this dissertation, we present a novel approach to model the semantic interrelations that specify which users to inform about which updates, based on the underlying model of the respective information system. For the first time, a meta-model is presented that allows information system designers to tag an arbitrary data model and thus specify the event-handling semantics. A formal specification of how to interpret such meta-models to determine the receivers of the events completes the presented concept. For the practical realization of this new concept, model driven architecture (MDA) proves to be an ideal technical means. Using our newly developed UML profile based on data-modelling standards, an implementation of the event-handling specification can be generated automatically for a variety of different target platforms, such as relational databases using triggers. This meta-approach makes the proposed solution ideal with respect to maintainability and genericity. Our solution significantly reduces the overall development effort for an event-handling facility. In addition, the enhanced model of the information system can be used to generate an implementation that also fulfils non-functional requirements like high performance and extensibility. The overall framework, consisting of the domain-specific language (i.e. the meta-model), formal and technical transformations of how to interpret the enhanced information system model, and a cost-based optimizing strategy, constitutes an integrated approach offering several advantages over traditional implementation techniques: our framework can be applied to new information systems as well as to legacy applications without having to modify existing systems; it offers an extensible, easy-to-use, generic and thus re-usable solution; and it can be tailored to and optimized for many use cases, as the practical evaluation presented in this dissertation verifies.
The results summarized in this thesis deal with the mapping and scheduling of workflow applications on heterogeneous platforms. In this context, we focus on three different types of streaming applications:
- Replica placement in tree networks: In this kind of application, clients are issuing requests to some servers and the question is where to place replicas in the network such that all requests can be processed. We discuss and compare several policies to place replicas in tree networks, subject to server capacity, Quality of Service (QoS) and bandwidth constraints. The client requests are known beforehand, while the number and location of the servers have to be determined. The standard approach in the literature is to enforce that all requests of a client be served by the closest server in the tree. We introduce and study two new policies. One major contribution of this work is to assess the impact of these new policies on the total replication cost. Another important goal is to assess the impact of server heterogeneity, both from a theoretical and a practical perspective. We establish several new complexity results, and provide several efficient polynomial heuristics for NP-complete instances of the problem.
- Pipeline workflow applications: We consider workflow applications that can be expressed as linear pipeline graphs. An example of this application type is digital image processing, where images are treated in steady-state mode. Several antagonistic criteria should be optimized, such as throughput and latency (or a combination of both), as well as latency and reliability (i.e., the probability that the computation will be successful) of the application. While simple polynomial algorithms can be found for fully homogeneous platforms, the problem becomes NP-hard when tackling heterogeneous platforms. We present an integer linear programming formulation for this latter problem. Furthermore, we provide several efficient polynomial bi-criteria heuristics, whose relative performances are evaluated through extensive simulation. As a case study, we provide simulations and MPI experimental results for the JPEG encoder application pipeline on a cluster of workstations.
- Complex streaming applications: We consider the execution of applications structured as trees of operators, i.e., the application of one or several trees of operators in steady state to multiple data objects that are continuously updated at various locations in a network. A first goal is to provide the user with a set of processors that should be bought or rented in order to ensure that the application achieves a minimum steady-state throughput, with the objective of minimizing platform cost. We then extend our model to multiple applications: several concurrent applications are executed at the same time in a network, and one has to ensure that all applications can reach their required throughput. Another contribution of this work is to provide complexity results for different instances of the basic problem, as well as integer linear program formulations of various problem instances. The third contribution is the design of several polynomial-time heuristics for both application models. One of the primary objectives of the heuristics for concurrent applications is to reuse intermediate results shared by multiple applications.
This thesis analyses the social impact of Payung Keluarga, an obligatory enhanced credit life microinsurance product launched by Allianz in Indonesia in 2006. Payung Keluarga automatically insures micro-borrowers who take out microcredits from microfinance institutions. In case of death, the outstanding credit balance is canceled and the beneficiary receives twice the original loan as an additional payout. Payung Keluarga was conceived to ameliorate the assumed post-mortem financial crisis of low-asset families. Through qualitative-explorative field research from 2006 until 2008, I investigated whether this developmental intention was realized. It is the first impact analysis of microinsurance in Indonesia. In the research process, I took the position of an observing participant. As operational project leader for Allianz in Indonesia, I was virtually doing research on my own work. The resulting challenge to research neutrality is primarily mitigated by the sobering to discerning social impact which was eventually revealed. The majority of the insured were married female Muslim petty traders in urban and semi-urban areas around Jakarta. Socio-economically, these women stand at the upper end of the low-asset stratum. Their husbands were generally the main breadwinners of the family, and it was mostly they who received the insurance payouts. It could therefore be said that Payung Keluarga benefited the main breadwinner instead of insuring him. The study found that norms of a moral economy still exert significant clout on the insured. The moral economy aims at providing “subsistence insurance” for all community members through an intricate collective system of balanced exchanges. The corresponding “premium” is a renunciation of self-interested material asset accumulation. Next to structural reasons, it was this moral restriction that saw the businesses of the women stagnate at low and socially inconspicuous levels. Payung Keluarga did not help to overcome the assumed post-mortem financial crisis. In reality, such a crisis did not exist, since community and family support among low-asset Muslim Indonesians is normally strong enough to largely provide for the bereaved family. This support is driven by the perception of death as a collective risk in the light of the moral economy and hinged on principles of balanced reciprocity. For cultural and religious reasons, the beneficiaries used most of the insurance payouts for funeral ceremonies and the repayment of informal debt. With the advent of Payung Keluarga, familial post-mortem assistance has been reduced. Funeral costs also seem to have been inflated by the product. It has thereby promoted a long-term societal shift from equality-seeking balanced reciprocity towards status-seeking and socially diversifying general reciprocity. In effect, Payung Keluarga has attacked cooperative social cohesion head-on where it is still strongest in a rapidly modernizing Indonesian society. This discerning and unintended impact of Payung Keluarga is hardly offset by a positive increase in financial literacy among the insured. Furthermore, the effect on the “peace of mind” of the insured is ambivalent: while most of the insured stated that they felt safer, some declared that they felt less secure with their obligatory coverage for fear of interference with divine predetermination. Its overall developmental impact can literally be described as “micro”. Instead of protecting the status quo of the family, Payung Keluarga has assumed the role of an actor of social change.
Not only because it has changed the funeral pattern of the beneficiaries, but also because it promotes a far-reaching conceptual paradigm shift from balanced reciprocity, which forms a core pillar of the insured’s social structure, towards general reciprocity. The thesis hypothesizes that, with sufficient insurance coverage provided, the insured will increasingly opt out of the coercively egalitarian “subsistence insurance” system. Such an opt-out will allow the insured to pursue a more aggressive economic asset accumulation strategy, particularly in combination with micro-credit. For the individual, this can be seen as a “liberating fortune” that would induce more women to grow their businesses to significant sizes. In parallel, it would deal a blow to cooperative social cohesion. I propose to call this the “double fortune / double blow” dilemma of microfinance. Although this thesis is exemplary, some of its findings can be generalized: The impact of microinsurance is highly dependent on the cultural, religious and socio-demographic context. Any microinsurance intervention concerned with social impact should be preceded by a thick contextualization going beyond the usual demand assessments. In turn, microinsurance likewise impacts context as an actor of ambivalent social change. The complex influence of context and the role of microinsurance as an actor of social change have so far hardly been discussed in the development discourse.
The recent development of a plethora of new wireless technologies, such as IEEE 802.11, IEEE 802.15, IEEE 802.16, UMTS, and more recently LTE, has triggered several efforts to integrate these technologies in a converged world of transparent and ubiquitous wireless connectivity. Most of these technologies have evolved around a certain use case and with some assumed user behaviour; however, a holistic solution that adapts access to user needs in an automatic and transparent manner is still lacking. One major problem that has to be addressed first is mobility management between heterogeneous wireless networks. Current mobility management solutions mostly originate from cellular networking systems, which are operator-specific, centralised, and focused on a single link technology. In order to deal with the wireless diversity of the future wireless and mobile Internet, a new approach is needed. Adaptive wireless connectivity that is tailored to the user's needs and capabilities is called context-aware mobility management. Context refers to the information describing the surroundings of the user as well as his or her behaviour, together with additional semantic information that could optimise the adaptation process. Context management normally entails discovering and tracking context, reasoning based on the discovered information, and then adapting the context-aware application or system (or acting upon it). This context management chain is adapted throughout the thesis to the task of context-aware mobility management. The added complexity is necessary to adapt ubiquitous access to the condition of both the user and the surrounding networks, while assuming that overlapping wireless networks may still be managed in separate management domains. Linking these management domains and aggregating this composite information in the form of a network context is one of the major contributions of this work. An overlay-based solution takes into account the scattered nature of the context management system, which is modelled as a decentralised dynamic location-based service. The proposed architecture is generalised to support ubiquitous location-based services, and a design methodology is proposed to ensure the localised impact of mobility-led context retrieval overhead.
The Sugiyama framework, proposed in the seminal paper of 1981, is one of the most important algorithms in graph drawing and is widely used for visualizing directed graphs. In its common version, it draws graphs hierarchically and hence maps the topological direction to a geometric direction. However, such a hierarchical layout is not possible if the graph contains cycles, which have to be destroyed in a preceding step. In certain application and problem settings, e.g., the bio sciences or periodic scheduling problems, it is important that the cyclic structure of the input graph is preserved and clearly visible in drawings. Apart from the nowadays standard horizontal algorithm, Sugiyama et al. also suggested a cyclic version, which they called recurrent hierarchies. However, this cyclic drawing style has received little attention since. In this thesis, we consider such cyclic drawings and investigate the Sugiyama framework for this new scenario. As our goal is to visualize cycles directly, the first phase of the Sugiyama framework, which is concerned with removing such cycles, can be omitted. The cyclic structure of the graph, however, leads to new problems in the remaining phases, for which solutions are proposed in this thesis. The aim is a complete adaptation of the Sugiyama framework for cyclic drawings. To complement our adaptation of the Sugiyama framework, we also treat the problem of cyclic level planarity and present a linear-time cyclic level planarity testing and embedding algorithm for strongly connected graphs.
Let d ≥ 1 be an integer and E a self-similar fractal set which is the attractor of a uniform contracting iterated function system (UIFS) on R^d. Denote by D the Hausdorff dimension, by H^D(E) the Hausdorff measure and by diam(E) the diameter of E. If the UIFS is parametrised by its contraction factor c, while the set ω of fixed points of the UIFS does not depend on c, we show the existence of a positive constant depending only on ω such that, if c is smaller than this constant, the Hausdorff dimension is smaller than one and H^D(E) = diam(E)^D. We apply our result to modified versions of various classical fractals. Moreover, we present a parametrised UIFS where ω depends on c and H^D(E) < diam(E)^D if c is small enough.
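For orientation, a standard fact about the self-similar setting (not a result of this paper): if the UIFS consists of N contractions with common factor c and satisfies the open set condition, the Hausdorff dimension of E coincides with the similarity dimension D determined by the Moran equation

    N c^D = 1,  i.e.  D = \frac{\log N}{\log(1/c)},

so D < 1 holds precisely when c < 1/N.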
After two decades of almost complete isolation, Cambodia was rather suddenly integrated into the global ‘free world’ in 1991. As a result, the function and the role of the administration had to be modified. These transformations in the name of modernization and development have led to a significant rise of professionals, as they provide the necessary knowledge to manage the ongoing socio-economic and political transformations. Due to their educational socialization in urban centers and overseas, they consider themselves the most modern section of their society. Their common objective is to make the organizations they work for less rigid and to reform them along the lines of technical-pragmatic concepts. Especially for the professionals working within the Cambodian state, the movement towards more efficiency and effectiveness in the administration and the strengthening of the rule of law is seen as a means to increase the credibility of the Government on the one hand and to attract foreign investment on the other. However, due to their functional specialization and expertise, professionals in Cambodia are marginalized from the overall socio-political changes in their environment. This has reduced their possibilities to communicate and organize effectively. Instead, their knowledge is applied to serve the requirements of their organizations and patron-client networks rather than as a resource to form alliances on a national or transnational level. A potential exists, though, for the professionals working in the state administration either to form a strategic group through processes of hybridization, or to form a ‘neutral’ rational administration after the ceding of the ‘old guard’ and thus their ‘disentanglement’ from old clientele structures.
With the rise of manycore processors, parallelism is becoming a mainstream necessity. Unfortunately, parallel programming is inherently more difficult than sequential programming; therefore, techniques for automatic parallelisation will become indispensable. We aim at extending the well-known polyhedron model, which promises this automation, beyond some of its current restrictions. Up to now, loop bounds and array subscripts in the modelled codes must be expressions that are linear in both the variables and the parameters. We lift this restriction and allow certain polynomial expressions instead of linear ones. With our extensions, we are able to handle more programs in all phases of the parallelisation process (dependence analysis, transformation of the program model, code generation). We extend Banerjee's classical dependence analysis to handle one non-linear parameter p, i.e., we are able to determine precisely the solutions of the system of conflict equalities for input programs with non-linear array accesses like A[p*i], depending on the residue class of p. We make contributions to three transformations desirable in automatic parallelisation. First, we show that, using a generalised Simplex algorithm which we have developed, schedules with non-linear parameters like θ(i) = ⌊i/n⌋ can be computed. In addition, such schedules can easily be expressed as a quantifier elimination problem, but this approach turns out to be computationally less efficient with the available implementation. As a second transformation, we study parametric tiling, which is used to adapt a parallelised program to the number of processors available at run time. Third, we present a localisation technique to exploit scratchpad memories on architectures on which data caching has to be handled by software. We transform a given code such that it keeps values which are reused in successive iterations of a sequential loop in the scratchpad. An access to a value written in an earlier iteration is then served from the scratchpad, which accelerates the access. In general, this transformation introduces non-linear loop bounds in the transformed model. Finally, we present an algorithm for generating code for arbitrary semi-algebraic iteration sets, i.e., for iteration sets described by polynomial inequalities in the variables and parameters. This is a vast generalisation of existing polyhedral code generation techniques. Although our algorithm is less efficient than polyhedral code generators, it paves the way for a code generator that can handle arbitrary parametric tilings and other transformations which introduce non-linear parameters (like non-linear schedules and the localisation we present) or even non-linear variables.
We consider the problem of optimal quantization with norm exponent r > 0 for Borel probabilities on R<sup>d</sup> under constrained Rényi-α-entropy of the quantizers. If the bound on the entropy becomes large, then sharp asymptotics for the optimal quantization error are well-known in the special cases α = 0 (memory-constrained quantization) and α = 1 (Shannon-entropy-constrained quantization). In this paper we determine sharp asymptotics for the optimal quantization error under large entropy bound with entropy parameter α ∈ [1+r/d, ∞]. For α ∈ [0,1+r/d[ we specify the asymptotical order of the optimal quantization error under large entropy bound. The optimal quantization error decays exponentially fast with the entropy bound and the exact decay rate is determined for all α ∈ [0, ∞].
Monetary policy is commonly assumed to affect commodity demand via relative prices. The bank lending channel (BLC) proposes an additional effect via the quantity of loans. This idea has found its way into economic textbooks, although it remains empirically controversial. I present various theoretical criticisms of the BLC and its building block, the formal model by Bernanke and Blinder (1988). This model operates with lopsided loan demand, money demand and money supply functions. The logic of the BLC is valid for individual investors who are affected by a cut in bank loans. For a whole sector with a given level of interest rates, however, a reduction of loans does not dry up investment, but only the holding of money. Since 1988, academics have been using the model by Bernanke and Blinder as a workhorse to empirically address the question of the quantitative relevance of the BLC. Cecchetti (1995) and Hubbard (1995) summarize the overall evolution of the controversial debate up to then. The data used for this research is mainly from the United States. In this literature review, I focus mainly on the next and more recent cohort of empirical investigations of the BLC in Europe that follow the papers by Kashyap and Stein (1995, 2000) and Kishan and Opiela (2000) on U.S. transmission mechanisms. It is crucial that these authors are the first to address the question using individual bank balance sheet data for the U.S. Until now, empirical research has produced largely inconsistent results. This is all the more revealing as many of these investigations have deficiencies in controlling for other transmission channels that relate to relative prices. The debate on how monetary policy works has not ended: the BLC, which stresses the importance of potential changes in the supply of loans as a result of monetary policy, and their subsequent impact on aggregate demand, has become prominent recently, but conclusive empirical evidence is absent. I attempt to contribute to this debate by conducting a cross-section and panel data analysis of developed and developing countries and by choosing the availability of bank loans as the dependent variable. The latter circumvents identification problems that appear when analyzing the response of aggregated bank loans to monetary policy changes. The evidence finds no support for the prediction of the BLC that there is an additional channel of the monetary transmission mechanism.
The thesis proposes a new formal framework for checking the content of web documents along individual reading paths. It is vital for the readability of web documents that their content is consistent and coherent along the possible browsing paths through the document. Manually ensuring the coherence of content along the possibly huge number of different browsing paths in a web document is time-consuming and error-prone. Existing methods for document validation and verification are not sufficiently expressive and efficient. The innovative core idea of this thesis is to combine the temporal logic CTL and the description logic ALC for the representation of consistency criteria. The resulting new temporal description logic ALCCTL can - in contrast to existing specification formalisms - compactly represent coherence criteria on documents. Verification of web documents is modelled as a model checking problem for ALCCTL. The decidability and polynomial complexity of the ALCCTL model checking problem are proven, and a sound, complete, and optimal model checking algorithm is presented. Case studies on real and realistic web documents demonstrate the performance and adequacy of the proposed methods. Existing methods such as symbolic model checking or XML-based document validation are outperformed in both expressiveness and speed.
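As a schematic illustration of the kind of criterion such a combined logic can express (the notation here is only indicative and need not match the concrete syntax used in the thesis), consider a coherence requirement stating that every topic introduced at some point of a browsing path is eventually illustrated by an example further along the path:

    AG ( IntroducedTopic ⊑ EF ∃illustratedBy.Example )

Here the concept inclusion plays the role of an atomic state formula, while the CTL operators AG and EF quantify over positions on the browsing paths through the document.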
A safe basis for automatic loop parallelization is the polyhedron model, which represents the iteration domain of a loop nest as a polyhedron in $\mathbb{Z}^n$. However, turning the parallel loop program in the model into efficient code meets with several obstacles, due to which performance may deteriorate seriously, especially on distributed memory architectures. We introduce a fine-grained model of the computation performed and show how this model can be applied to create efficient code.
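To illustrate the basic modelling step (a generic textbook example, not taken from this paper): a triangular loop nest such as "for i = 0 to n, for j = 0 to i" has the parameterized iteration domain

    D(n) = { (i, j) ∈ Z² | 0 ≤ j ≤ i ≤ n },

a polyhedron whose facets are given by the affine loop bounds; dependence analysis, scheduling and code generation then operate on such sets rather than on the program text.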
In the past, the Moken and the hill people held little attraction for external actors like the state because they lived in peripheral areas. Thereby, they could maintain their common way of life. However, as a result of the emergence of national security as a relevant issue, resulting from communist insurgencies, opium cultivation and migration, and of a growing conservation awareness due to the rapid degradation of natural resources in connection with shifting cultivation, deforestation and overexploitation, the Moken and the hill people came into the focus of the state. One of the first measures a state takes when integrating new regions is registering the population and trying to control their activities. Registration provides the possibility to become a Thai citizen and thereby receive the civil rights common to Thais. But integration into Thai society implies integration into the state administration as well as into the economy. The Moken and the hill people face new circumstances that have a far-reaching impact on their livelihood. The perspectives on proper practices as seen by the state contrast with the views of the ethnic minorities. For example, the policies of environmental protection prohibit traditional practices like gathering sea snails in the case of the Moken, or shifting cultivation as practiced by most hill people. Consequently, the people have to find ways to cope with new situations. One means is to readily adapt to the administrative regulations. More common, however, is to negotiate, which means to find and define arenas and spaces. This study focuses mainly on how the Moken and the hill people apply the issues of citizenship, space and ethnic identity to negotiate with the state in order to maintain their livelihood. It is found that, in negotiation with the state, the hill people want to be accepted as Thai and try to establish a positive image instead of being seen as destroyers of the environment, drug producers, uneducated, etc. This attempt to be seen as Thai is accompanied by the demand for recognition of cultural differences. This might be due to the fact that for the hill people it is important to distinguish themselves from other minorities within the same region. In the case of the Moken, this interest in establishing a positive image is far less pronounced. This is possibly because they are not seen as a threat, as the hill people are, but only as stupid and primitive. They do not attempt to obliterate the stereotypes of being poor and stupid people, because these facilitate the avoidance of regulations and provide exceptional treatment. At the same time, they also want to be recognized as Thai in order to have equal rights with the majority.
In this paper, the problem of optimal quantization is solved for uniform distributions on some higher-dimensional, not necessarily self-similar N-adic Cantor-like sets. The optimal codebooks are determined and the optimal quantization error is calculated. The existence of the quantization dimension is characterized and it is shown that the quantization coefficient does not exist. The special case of self-similarity is also discussed. The conditions imposed are a separation property of the distribution and strict monotonicity of the first N quantization error differences. Criteria for these conditions are proved and, as special examples, modified versions of classical fractal distributions are discussed.
This thesis develops an equilibrium framework for the strategic exercise of a geographical market entry option. The theoretical model analyses the impact of asymmetries between the competing firms, such as follower entry barriers and asymmetric profitability, on the optimal market entry timing and firm values. The duopoly model shows the existence of three types of equilibrium strategies and expresses the critical level of asymmetry which separates the equilibrium regions. The analysis proves that softer competition does not force the stronger firm to enter the market at its preemption point, and as a consequence rent equalisation between the firms does not occur. However, it is also shown that the critical level of asymmetry is mitigated or strengthened by common economic factors such as the host market's profit volatility and the interest rate. Extending the duopoly model to the oligopoly case, the results show that each additional competitor delays the first market entrance compared to the duopolist leader's preemption point. Hence, one additional competitor accelerates the first market entry if the number of competing firms excluding him is odd and has the reverse impact if it is even. It is further observed that continuation may disappear in some subgames of the market entry game in an oligopoly, as a result of which no closed-loop market entry strategy set exists. The equilibrium results of the theoretical models are tested empirically by applying the Cox proportional hazard model to the entry behaviour of 61 retailers into 6 Eastern European countries from 1989 until 2005. The results explain why retailers entered certain markets earlier and why some firms succeeded more in seizing the entry opportunity. The results show that, driven by the development of demand potential on the host market and by the intensity of competition, foreign retailers had a limited period of time - defined as the “window of opportunity” - to carry out their market entry.
This work comprises three essays that attempt to contribute to the task of reviewing the prevailing (solely market-based) contractual approach to sovereign debt restructuring. These essays particularly focus on aspects of intra-creditor coordination. Although the content of these essays is interconnected, each unit is a stand-alone entity. Essay I: The latest Argentinean debt restructuring was the first time the resolution of a modern financial crisis was completely handed over to the private financial markets without official intervention by public institutions. This essay argues that the resulting haircut for private creditors, the harshest in history, can be at least partially related to an assurance game played by the creditors. It shows that incentive schemes provided by the Argentinean government were factors facilitating this haircut. The analysis suggests that, contrary to the view held in the literature, the effects of Collective Action Clauses and Exit Consents within a restructuring process are not equal. In the case of Argentina, the inclusion of Collective Action Clauses in the defaulted bonds could have benefited the holdout creditors. Essay II: Experience from events of sovereign debt restructuring over the last decade shows that the prevailing process is mainly shaped by exchange offers launched by the debtor. This suggests that negotiations for changing the repayment terms of the debt take place in an ultimatum game which concentrates virtually all bargaining power on the debtor side. Creditors vote according to reservation values that might be influenced by fairness considerations both vis-à-vis the debtor and their fellow creditors. As fairness is usually a highly subjective influence, this can result in a heterogeneity of reservation values which might impede effective intra-creditor coordination, to the benefit of the debtor. Essay III: Mitigating intra-creditor coordination failures has always been crucial in any proposal for an institutionalized process of restructuring sovereign bonds. However, one source of failure in creditor coordination has not been taken into consideration. The current process of sovereign debt restructuring enables the debtor to launch an exchange offer which provides incentives to inter-temporally discriminate among creditors with different reservation values. Only a creditor representation that can effectively bind in all different creditor types will mitigate this failure and thereby prevent potential conflicts of interest among creditors. Enhancing the current proposal of creditor groups so that creditors can effectively pre-commit can shield the process from this kind of coordination failure. This essay concludes with a proposal for the creation of a creditor representation body which exhibits a mode of operation similar to that of a celebrated institutionalized creditor representation body of the penultimate century. To summarize the conclusions drawn from these essays: the contractual approach is not yet able to guarantee effective creditor coordination due to the lack of a comprehensive and forceful permanent creditor representation. Establishing such a permanent representation body would replicate the institutional development experienced during the last heyday of bonds as a source of emerging market financing. This would lead to a significant improvement in creditor coordination.
Moreover, since the result of a potential debt restructuring feeds back into the ex-ante lending decision of the individual investor, this improvement could contribute to the welfare-enhancing effects of external financing by private creditors for developing economies.
For homogeneous one-dimensional Cantor sets, which are not necessarily self-similar, we show under some restrictions that the Euler exponent equals the quantization dimension of the uniform distribution on these Cantor sets. Moreover, for a special subclass of these sets, we present a link between the Hausdorff and the packing measure of these sets and the high-rate asymptotics of the quantization error.
For a large class of dyadic homogeneous Cantor distributions in \mathbb{R}, which are not necessarily self-similar, we determine the optimal quantizers, give a characterization for the existence of the quantization dimension, and show the non-existence of the quantization coefficient. The class contains all self-similar dyadic Cantor distributions, with contraction factor less than or equal to \frac{1}{3}. For these distributions we calculate the quantization errors explicitly.
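For reference, the quantities discussed here can be stated as follows (standard definitions from the quantization literature, given only for orientation). The n-th quantization error of order r of a probability measure μ is

    e_{n,r}(μ) = \inf \Bigl\{ \Bigl( \int \min_{a \in \alpha} |x - a|^{r} \, dμ(x) \Bigr)^{1/r} : \alpha \subset \mathbb{R}, \; |\alpha| \le n \Bigr\},

the quantization dimension of order r is the limit D_r(μ) = \lim_{n \to \infty} \log n / (-\log e_{n,r}(μ)), provided it exists, and the quantization coefficient is the limit of n^{r/D_r} e_{n,r}(μ)^r. Non-existence of the latter means that the quantization error follows the power law n^{-1/D_r} only up to bounded oscillation, not with an exact constant.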
Aspect-Oriented Programming (AOP) has been promoted as a solution for modularization problems known in the literature as the tyranny of the dominant decomposition. However, when analyzing AOP languages it can be doubted that uncontrolled AOP is indeed a silver bullet. The contributions of the work presented in this thesis are twofold. First, we critically analyze AOP language constructs and their effects on program semantics to sensitize programmers and researchers to the resulting problems. We further demonstrate that AOP, as available in AspectJ and similar languages, can easily result in less understandable, less evolvable, and thus error-prone code - quite contrary to its claims. Second, we examine how tools relying on both static and dynamic program analysis can help to detect problematic usage of aspect-oriented constructs. We propose to use change impact analysis techniques both to automatically determine the impact of aspects and to deal with AOP system evolution. We further introduce an analysis technique to detect potential semantic issues related to undefined advice precedence. The thesis concludes with an overview of available open source AspectJ systems and an assessment of aspect-oriented programming considering both the fundamentals of software engineering and the contents of this thesis.
Quantifier elimination (QE) is a powerful tool for problem solving. Once a problem is expressed as a formula, such a method converts it to a simpler, quantifier-free equivalent, thus solving the problem. Particularly many problems live in the domain of the real numbers, which makes real QE very interesting. Among the methods implemented so far, QE by cylindrical algebraic decomposition (CAD) is the most important complete method. The aim of this thesis is to develop CAD-based algorithms which can solve more problems in practice and/or provide more interesting information as output. An algorithm that satisfies these standards would concentrate on generic cases and postpone special and degenerate ones to be treated separately or to be abandoned completely. It would give a solution which is locally correct for a region the user is interested in. It would give answers that can provide much valuable information, in particular for decision problems. It would combine these methods with more specialized ones for subcases that allow for it. It would exploit degrees of freedom in the algorithms by deciding to proceed in a way that promises to be efficient. Treating these challenges is the focus of this dissertation. The algorithms described here are implemented in the computer logic system REDLOG and ship with the computer algebra system REDUCE.
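A textbook example conveys what real quantifier elimination produces (a classic identity, stated here only for orientation): asking whether a monic quadratic has a real root, the quantifier can be eliminated in favour of a condition on the coefficients alone,

    ∃x (x² + b·x + c = 0)   ⟺   b² − 4c ≥ 0.

CAD-based QE computes such quantifier-free equivalents systematically, at the price of a worst-case running time that is doubly exponential in the number of variables, which is why strategies that concentrate on generic cases and on the regions of interest matter in practice.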
We discuss several aspects of Peano-differentiable functions which are definable in an o-minimal structure expanding a real closed field. After recalling some known results about o-minimal structures, we develop techniques for the intrinsic study of differentiable functions in these structures. We then study (ordinary) differentiable functions definable in an o-minimal structure and their continuity properties along curves of different differentiability classes. Then we generalise (ordinary) differentiability to Peano-differentiability. We study the differentiability of certain Peano-derivatives of definable functions and characterise the sets of non-continuity of these derivatives. Finally, we study the extendability of these functions defined on closed sets and give sufficient conditions under which we can extend them as Peano-differentiable functions.
Visual navigation of hierarchically structured graphs is a technique for interactively exploring large graphs that possess an additional hierarchical structure. This structure is expressed in the form of a recursive clustering of the nodes: in call graphs of telephone networks, for instance, the nodes are identified with phone numbers; they are clustered recursively through the implicit structure of the numbers, e. g., nodes with the same area code belong to a cluster. In order to reduce the complexity and the size of the graph, only those subgraphs that are currently needed are shown in detail, while the others are collapsed, i. e., represented by meta nodes. In such a graph view the subgraphs in the areas of interest are expanded furthest, whereas those on the periphery are abstracted. As the areas of interest change over time, clusters in a view need to be expanded or contracted. First and foremost, there is a need for an efficient data structure for this graph view maintenance problem. Depending on the admissible modifications of the graph and its hierarchical clustering, three variants have been discussed in the literature: in the static case, everything is fixed; in the dynamic graph variant, only edges of the graph can be inserted and deleted; finally, in the dynamic graph and tree variant the graph additionally is subject to node insertions and deletions and the clustering may change through splitting and merging of clusters. We introduce a new variant, dynamic leaves, which is based on the dynamic graph variant, but additionally allows insertion and deletion of graph nodes, i. e., leaves of the hierarchy. So far, efficient data structures have been known only for the static and the dynamic graph variant, i. e., neither the nodes of the graph nor the clustering could be modified. As this is unsatisfactory in an interactive editor for hierarchically structured graphs, we first generalize the approach of Buchsbaum et al. (Proc. 8th ESA, vol. 1879 of LNCS, pp. 120–131, 2000), in which graph view maintenance is formulated as a special case of range searching over tree cross products, to the new dynamic leaves variant. This generalization builds on a novel technique of superimposing a search tree over an ordered list maintenance structure. With an additional factor of roughly O(log n/log log n), this is the first data structure for the problem of graph view maintenance where the node set is dynamic. Visualizing the expansion and contraction appropriately is the second challenge. We propose a local update scheme for the algorithm of Sugiyama and Misue (IEEE Trans. on Systems, Man, and Cybernetics 21 (1991) 876–892) for drawing compound digraphs. The layered drawings that it produces have many applications ranging from biochemical pathways to UML diagrams. Modifying the intermediate results of every step of the original algorithm locally, the update scheme is more efficient than re-applying the entire algorithm after expansion or contraction. As our experimental results on randomly generated graphs show, the average time for updating the drawing is around 50 % of the time for redrawing for dense graphs and below 20 % for sparse graphs. Also, the performance gain is not at the expense of quality as regards the area of the drawing, which increases only insignificantly, and the number of crossings, which is reduced.
At the same time, the locality of the updates preserves the user's mental map of the graph: nodes that are not affected stay on the same level in the same relative order, and expanded edges take the same course as the corresponding contracted edge; furthermore, expansion and contraction are visually inverse. Finally, our new data structure and the update scheme are combined into an interactive editor and viewer for compound (di-)graphs. A flexible and extensible software architecture is introduced that lays the ground for future research. It employs the well-known Model-View-Controller (MVC) paradigm to separate the abstract data from its presentation. As a consequence, the purely combinatorial parts, i. e., the compound (di-)graph and its views, are reusable without the editor front-end. A proof-of-concept implementation based on the proposed architecture shows its feasibility and suitability.
Refactoring is a well-known technique to enhance various aspects of an object-oriented program. It has become very popular during recent years, as it allows developers to overcome deficits present in many programs. Doing refactoring by hand is almost impossible due to the size and complexity of modern software systems. Automated tools provide support for the application of refactorings, but do not give hints as to which refactorings to apply and why. The Snelting/Tip analysis is a program analysis which creates a refactoring proposal for a class hierarchy by analyzing how class members are used inside a program. KABA is an adaptation and extension of the Snelting/Tip analysis for Java. It has been implemented and expanded to become a semantics-preserving, interactive refactoring system. Case studies of real-world programs show the usefulness of the system and its practical value.
Scheduling methodologies for real-time applications have been of keen interest to diverse research communities for several decades. Depending on the application area, algorithms have been developed that are tailored to specific requirements with respect to both the individual components of which an application is made up and the computational platform on which it is to be executed. Many real-time scheduling algorithms base their decisions solely or partly on timing constraints expressed by deadlines which must be met even under worst-case conditions. The increasing complexity of computing hardware means that worst-case execution time analysis becomes increasingly pessimistic. Scheduling hard real-time computations according to their worst-case execution times (which is common practice) will thus result, on average, in an increasing amount of spare capacity. The main goal of flexible real-time scheduling is to exploit this otherwise wasted capacity. Flexible scheduling schemes have been proposed to increase the ability of a real-time system to adapt to changing requirements and to nondeterminism in the application behaviour. These models can be categorised as those whose source of flexibility is the quality of computations and those which are flexible regarding their timing constraints. This work describes a novel model which allows the specification of both flexible timing constraints and quality profiles for an application. Furthermore, it demonstrates the applicability of this specification method to real-world examples and suggests a set of feasible scheduling algorithms for the proposed problem class.
The collection of various texts on D. H. Lawrence (1885-1930) deals with the English writer's first journey abroad, which led the young and receptive teacher - already deeply influenced by German philosophy - to Bavaria and the Tyrol. Vividly featured in his novel "Mr Noon", which remained unpublished during his lifetime, the stay in Germany and Bavaria in the years 1912 and 1913 and the people he met there were to shape the plots of Lawrence's main works. In Munich, Lawrence and his later German wife Frieda von Richthofen (1879-1956) were part of the so-called Schwabing Bohème. In these circles of artists, poets, social reformers, as well as heroines of free love, anarchists and early fascists, the author received his ideas about sex and eroticism, which found expression in his famous novel "Lady Chatterley’s Lover" of 1927/1928. Especially remarkable is the impact on Lawrence's work of the Austrian doctor Otto Gross (1877-1920), a former lover of Frieda Lawrence, who tried to connect Friedrich Nietzsche's "Will to Power" with Sigmund Freud's psychoanalysis. The studies also follow Lawrence's tracks into the Tyrol and his and Frieda's wandering across the Alps to Northern Italy (1912-1913), an adventure that provided the real setting of his novel "Women in Love" of 1920 and is described in his essays "Twilight in Italy" (1916).
Clustered graphs are an enhanced graph model with a recursive clustering of the vertices according to a given nesting relation. This prime technique for expressing coherence of certain parts of the graph is used in many applications, such as biochemical pathways and UML class diagrams. For directed clustered graphs usually level drawings are used, leading to clustered level graphs. In this thesis we analyze the interrelation of clusters and levels and their influence on edge crossings and cluster/edge crossings.
A parallelising compilation consists of many translation and optimisation stages. The programmer may steer the compiler through these stages by supplying directives with the source code or setting compiler switches. However, for an evaluation of the effects of individual stages, their selection and their best order, this approach is not optimal. To solve this problem, we propose the following method. The compilation is cast as a sequence of program transformations. Each intermediate program runs on an Abstract Parallel Machine (APM), while the program generated by the final transformation runs on the target architecture. Our intermediate programs are all in the same language, Haskell. Thus, each program is executable and still abstract enough to be legible, which enables the evaluation of the transformation that generated it. This evaluation is supported by a cost model, which makes a performance prediction of the abstract program for a real machine. Our project, PolyAPM, provides an acyclic directed graph -- usually a tree -- of APMs whose traversal specifies different combinations and orders of transformations. From one source program, several target programs can be constructed. Their run time characteristics can be evaluated and compared. The goal of PolyAPM is not to support the one-off construction of parallel application programs. For the method's overhead to pay off, the project aims rather at supporting the construction and comparison of many similar variations of a parallel program and a comparative evaluation of parallelisation techniques. With the automation of transformations, PolyAPM can also be used to construct semi-automatic compilation systems.
Program slicing is a technique to identify statements that may influence the computations in other statements. Despite almost 25 years of ongoing research, program slicing still has problems that prevent its widespread use: sometimes, slices are too big to understand and too expensive and complicated to compute for real-life programs. This thesis presents solutions to these problems: it contains various approaches which help the user to understand a slice more easily by making it more focused on the user's problem. All of these approaches have been implemented in the VALSOFT system, and thorough evaluations of the proposed algorithms are presented. The underlying data structures used for slicing are program dependence graphs. They can also be used for different purposes: a new approach to clone detection based on identifying similar subgraphs in program dependence graphs is presented; it is able to detect modified clones better than other tools. In the theoretical part, this thesis presents a high-precision approach to slicing concurrent procedural programs, even though optimal slicing is known to be undecidable. It is the first approach to slicing concurrent programs that does not rely on inlining of called procedures.
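A minimal, self-contained Java example (invented for illustration, not taken from the thesis) shows the idea of a backward slice: only the statements that may influence the value printed at the slicing criterion belong to the slice.

    public class SliceDemo {
        public static void main(String[] args) {
            int n = 5;
            int sum = 0;   // in the slice: flows into the printed value
            int prod = 1;  // not in the slice: never influences sum
            for (int i = 1; i <= n; i++) {
                sum += i;   // in the slice
                prod *= i;  // not in the slice
            }
            System.out.println(sum);  // slicing criterion: the value of sum here
        }
    }

A backward slice with respect to the final println therefore contains the declarations of n, sum and i, the loop header and the statement sum += i, but not the computations involving prod; a slicer based on program dependence graphs derives this from the data and control dependence edges.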
In this work we present novel query evaluation techniques for data integration systems in different environments, ranging from a central data warehouse approach, over distributed virtual market places, to peer-to-peer (P2P) systems. Based on a new distributed evaluation technique, the so-called HyperQueries, we present a reference architecture for distributed virtual market places. These HyperQueries enable us to dynamically construct query evaluation plans by referencing sub-plans in the Internet. Furthermore, the process of data integration is structured. Subsequently, we investigate P2P data integration systems without central instances. We introduce so-called Super-Peers which structure a P2P network. Using this Super-Peer based network, we "unroll" queries. This allows us to execute even user-defined operators close to the data sources. Finally, we propose novel, efficient join algorithms for decision support queries in central data warehouse systems. The proposed order-preserving hash joins and generalized hash teams are based on early sorting and early partitioning of the inputs and can speed up query evaluation by up to orders of magnitude.
In this dissertation we generalise the notion of level planar graphs in two directions: track planarity and radial planarity. Our main results are linear time algorithms both for the planarity test and for the computation of an embedding, and thus a drawing. Our algorithms use and generalise PQ-trees, which are a data structure for efficient planarity tests.
This work presents techniques for the construction of a global data integration system. Similar to distributed databases, this system allows declarative queries in order to express user-specific information needs. Scalability towards global data integration systems and openness were major design goals for the architecture and techniques developed in this work. It is shown how service composition, extensibility and quality of service can be supported in an open system of providers for data, functionality for query processing operations, and computing power.
Deduction-based software component retrieval is a software reuse technique that uses formal specifications as component descriptors and as search keys; matching components are identified using an automated theorem prover. This dissertation contains a detailed theoretical investigation of the concept as well as the first substantial experimental evaluation of its technical feasibility.
The well-founded semantics has been accepted as the most relevant semantics for logic-based information systems. In this dissertation, a framework based on a set of program transformations is presented that generalizes all major computation approaches for the well-founded semantics, using a common data structure and providing a common language to describe their evaluation strategies. This rewriting system gives the formal background to analyze and combine different evaluation strategies in a common framework, or to design new algorithms and prove the correctness of their implementations at a high level just by changing the order of program transformations.
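A toy example illustrates the transformation-based style of computation (transformation names as commonly used in the literature on the well-founded semantics; the exact set of transformations in the dissertation may differ). Consider the program

    p ← not q.      q ← s.

Since s has no rule, the rule q ← s can be deleted (failure). Now q has no rule either, so the negative literal not q can be removed from the first rule (positive reduction), leaving the fact p. The resulting program directly exhibits the well-founded model: p is true, while q and s are false. Computation approaches differ mainly in the order and granularity in which such transformation steps are applied.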
One of the most important algorithms for real quantifier elimination is the quantifier elimination by virtual substitution introduced by Weispfenning in 1988. In this thesis, we present numerous algorithmic approaches for optimizing this quantifier elimination algorithm. Optimization goals are the actual running time of the implementation of the algorithm and the size of the output formula. Strategies for achieving these goals include the simplification of first-order formulas, the reduction of the size of the computed elimination set, and condensing, a new replacement for the virtual substitution. Local quantifier elimination computes formulas that are equivalent to the input formula only in the neighbourhood of a given point. We can make use of this restriction to further optimize the quantifier elimination by virtual substitution. Finally, we discuss how to solve a large class of scheduling problems by real quantifier elimination. To optimize our algorithm for solving scheduling problems, we make use of the special form of the input formula and of additional information given by the description of the scheduling problem.
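A tiny example conveys the flavour of the method (a standard illustration, not taken from the thesis): to eliminate x from

    ∃x (b ≤ x ∧ x ≤ c),

virtual substitution collects a finite elimination set of test points from the bounds on x - here essentially the lower bound b (and, in general, points like −∞ or bounds shifted by an infinitesimal ε) - and substitutes them into the formula. Substituting x = b yields b ≤ b ∧ b ≤ c, so the quantifier-free equivalent is b ≤ c. The strategies developed in the thesis aim at keeping such elimination sets, and thus the resulting output formulas, small.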