Top-k Semantic Caching
(2015)
The subject of this thesis is the intelligent caching of top-k queries in an environment with high latency and low throughput. In such an environment, caching can be used to reduce network traffic and improve response time. Practical use cases include the slow database connections of mobile devices and connections to offshored databases.
A semantic cache is a query-based cache that stores query results and maintains their semantic descriptions, allowing it to reuse partial matches of previous query results. Each query processed by the semantic cache is split into two disjoint parts: one that can be answered completely with tuples from the cache (probe query), and another that requires tuples to be transferred from the server (remainder query).
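As a toy illustration of this split (our own simplification with one-dimensional range predicates; the thesis's actual query model is richer), the following sketch divides a range query against a cached range into a probe part and a remainder part:

```python
def split_query(query, cached):
    """Split a half-open range query [lo, hi) against a cached range.

    Returns (probe, remainder): the part answerable from the cache
    and the part(s) that must be fetched from the server.
    """
    q_lo, q_hi = query
    c_lo, c_hi = cached
    # Overlap of query and cache is answerable locally (probe query).
    probe_lo, probe_hi = max(q_lo, c_lo), min(q_hi, c_hi)
    probe = (probe_lo, probe_hi) if probe_lo < probe_hi else None
    # Whatever the cache does not cover must come from the server.
    remainder = []
    if q_lo < c_lo:
        remainder.append((q_lo, min(q_hi, c_lo)))
    if q_hi > c_hi:
        remainder.append((max(q_lo, c_hi), q_hi))
    return probe, remainder

# Query [20, 80) against cached region [0, 50):
# probe = (20, 50), remainder = [(50, 80)]
print(split_query((20, 80), (0, 50)))
```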
Existing semantic caches do not support top-k queries, i.e., ordered and limited queries. In this thesis, we present an innovative semantic cache that naturally supports top-k queries. The support of top-k queries in a semantic cache has considerable effects on cache elements, operations on cache elements -- like creation, difference, intersection, and union -- and query answering. Hence, we introduce new techniques for cache management and query processing. They enable the semantic cache to become a true top-k semantic cache.
In addition, we have developed a new algorithm that can estimate the lower bounds of query results of sorted queries using multidimensional histograms. Using this algorithm, our top-k semantic cache is able to pipeline partial query results of top-k queries. Thereby, query execution performance can be significantly increased.
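The guarantee behind such a bound can be sketched in one dimension (our simplification; the thesis works with multidimensional histograms): accumulating bucket counts in order of decreasing bucket lower bound yields a value that at least k tuples are guaranteed to reach or exceed.

```python
def lower_bound_of_kth(buckets, k):
    """Estimate a guaranteed lower bound for the k-th largest value
    of the sort attribute from a histogram.

    buckets: list of (lo, hi, count) with attribute values in [lo, hi).
    Every tuple in a bucket is >= that bucket's lo, so once the
    accumulated count reaches k, the current bucket's lo is a valid bound.
    """
    total = 0
    for lo, hi, count in sorted(buckets, key=lambda b: b[0], reverse=True):
        total += count
        if total >= k:
            return lo
    return None  # fewer than k tuples overall

# Three buckets; the 10 largest values are guaranteed to be >= 50.
print(lower_bound_of_kth([(0, 50, 100), (50, 80, 8), (80, 100, 4)], 10))
```

Cached tuples whose sort value meets such a bound can be emitted to the application before the remainder query returns, which is what enables the pipelining described above.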
We have implemented a prototype of a top-k semantic cache called IQCache (Intelligent Query Cache). An extensive and thorough evaluation with various benchmarks using our prototype demonstrates the applicability and performance of top-k semantic caching in practice. The experiments prove that the top-k semantic cache invariably outperforms simple hash-based caching strategies and scales very well.
The world wide web today serves as a distributed application platform. Its origins, however, go back to a simple delivery network for static hypertexts. The legacy of these days can still be observed in the communication protocol used by increasingly sophisticated clients and applications. This thesis identifies the actual security requirements of modern web applications and shows that HTTP does not fit them: user and application authentication, message integrity and confidentiality, control-flow integrity, and application-to-application authorization. We explore the other protocols in the web stack and work out why they cannot fill the gap. Our analysis shows that the underlying problem is the connectionless nature of HTTP. However, history shows that a fresh start with web communication is far from realistic. As a consequence, we come up with approaches that contribute to meeting the identified requirements.
We first present impersonation attack vectors that begin before the actual user authentication, i.e., when secure web interaction and authentication seem unnecessary. Session fixation attacks exploit a responsibility mismatch between the web developer and the web application framework in use. We describe and compare three countermeasures on different implementation levels: on the source code level, on the framework level, and on the network level as a reverse proxy.
Then, we explain how the authentication credentials that are transmitted for the user login (the password) and for session tracking (the session cookie) can be complemented by browser-stored and user-based secrets, respectively. This way, an attacker cannot hijack user accounts merely by phishing the user's password, because an additional browser-based secret is required for login. The class of well-known session hijacking attacks is also mitigated, because a secret known only to the user must be provided in order to perform critical actions.
In the next step, we explore alternative approaches to static authentication credentials. Our approach implements a trusted UI and a mutually authenticated session, using signatures as a means to authenticate requests. This way, it establishes a trusted path between the user and the web application without exchanging reusable authentication credentials. As a downside, this approach requires support on both the client side and the server side in order to provide maximum protection. Another approach avoids client-side support but cannot implement a trusted UI and is thus susceptible to phishing and clickjacking attacks.
The approaches we have described so far increase the security level of all web communication at all times. For this reason, we investigate adaptive security policies that fit the actual risk instead of permanently restricting all kinds of communication, including non-critical requests. We develop a smart browser extension that detects when the user is authenticated on a website, meaning that she can be impersonated because all requests carry her identity proof. Non-critical communication, however, is released from restrictions to enable all intended web features.
Finally, we focus on attacks targeting a web application's control-flow integrity. We explain them thoroughly, check whether current web application frameworks provide means for protection, and implement two approaches to protect web applications: the first is an extension for a web application framework that provides protection based on its configuration by checking all requests for policy conformity; the second generates its own policies ad hoc based on the observed web traffic, assuming that regular users only click on links and buttons and fill in forms but do not craft requests to protected resources.
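A minimal sketch of the policy-conformity idea (our illustration with a hypothetical checkout workflow, not the thesis's implementation): the allowed control flow is a state machine over pages, and any request that is not an allowed transition from the user's current state is rejected.

```python
# Allowed control-flow transitions of a hypothetical checkout workflow.
POLICY = {
    "cart":     {"checkout"},
    "checkout": {"payment", "cart"},
    "payment":  {"confirm"},
    "confirm":  set(),
}

def check_request(session_state, requested_page):
    """Reject requests that deviate from the configured control flow."""
    if requested_page not in POLICY.get(session_state, set()):
        raise PermissionError(
            f"control-flow violation: {session_state} -> {requested_page}")
    return requested_page  # the new session state

state = "cart"
state = check_request(state, "checkout")   # ok: cart -> checkout
try:
    check_request(state, "confirm")        # not allowed from checkout
except PermissionError as e:
    print(e)
```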
This thesis addresses a problem related to usage analysis in information retrieval systems. We exploit the history of search queries as a basis of analysis to extract a profile model. The objective is to characterize the user and the data source that interact in a system so as to allow different types of comparison (user-to-user, source-to-source, user-to-source). From our study of previous work on profile models, we concluded that the large majority of contributions are strongly tied to the applications within which they are proposed. As a result, the proposed profile models are not reusable and suffer from several weaknesses: for instance, they do not consider the data source, they lack semantic mechanisms, and they do not address scalability (in terms of complexity). Therefore, we propose a generic model of user and data source profiles with the following characteristics. First, it is generic, being able to represent both the user and the data source. Second, it enables the construction of profiles in an implicit way based on histories of search queries. Third, it defines the profile as a set of topics of interest, each topic corresponding to a semantic cluster of keywords extracted by a specific clustering algorithm. Finally, the profile is represented according to the vector space model. The model is composed of several components organized in the form of a framework, in which we assessed the complexity of each component.
The main components of the framework are:
• a method for disambiguating keyword queries;
• a method for semantically representing search query logs in the form of a taxonomy;
• a clustering algorithm that allows fast and efficient identification of topics of interest as semantic clusters of keywords;
• a method to identify user and data source profiles according to the generic model.
In particular, this framework enables various tasks related to the usage-based structuring of a distributed environment. As example applications, the framework is used for the discovery of user communities and for the categorization of data sources. To validate the proposed framework, we conducted a series of experiments on real logs from the search engine AOL Search, which demonstrate the efficiency of the disambiguation method on short queries and show the relation between quality-based clustering and structure-based clustering.
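Because profiles are vectors in the vector space model, all three comparison types (user-to-user, source-to-source, user-to-source) reduce to a vector similarity. A minimal sketch with made-up topic weights (our illustration, not the thesis's exact weighting scheme):

```python
import math

def cosine(p, q):
    """Cosine similarity of two sparse profiles (topic -> weight)."""
    dot = sum(w * q.get(t, 0.0) for t, w in p.items())
    norm = math.sqrt(sum(w * w for w in p.values())) * \
           math.sqrt(sum(w * w for w in q.values()))
    return dot / norm if norm else 0.0

# Hypothetical profiles: topics of interest with weights.
user   = {"soccer": 0.8, "travel": 0.4}
source = {"soccer": 0.6, "cooking": 0.7}
print(cosine(user, source))  # user-to-source comparison
```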
Static analysis tools and transformation engines for source code belong to the standard equipment of a software developer. Their use significantly simplifies a developer's everyday work of maintaining and evolving software systems and, hence, accounts for much of a developer's programming efficiency and productivity. This is also beneficial from a financial point of view, as programming errors are detected and avoided early in the development process; thus, the use of static analysis tools reduces the overall software-development costs considerably.
In practice, software systems are often developed as configurable systems to account for different requirements of application scenarios and use cases. To implement configurable systems, developers often use compile-time implementation techniques such as preprocessors, typically via #ifdef directives. Configuration options control the inclusion and exclusion of #ifdef-annotated source code, and their selection/deselection serves as input for generating tailor-made system variants on demand. Existing configurable systems, such as the Linux kernel, often provide thousands of configuration options, forming a huge configuration space with billions of system variants.
Unfortunately, existing tool support cannot handle the myriads of system variants that can typically be derived from a configurable system. Analysis and transformation tools are not prepared for variability in source code and, hence, may process it incorrectly, resulting in incomplete and often broken tool support.
We challenge the way configurable systems are analyzed and transformed by introducing variability-aware static analysis tools and a variability-aware transformation engine for the development of configurable systems. The main idea of such tool support is to exploit commonalities between system variants, reducing the effort of analyzing and transforming a configurable system. In particular, we develop novel approaches for analyzing the myriads of system variants and compare them to state-of-the-art analysis approaches (namely sampling). The comparison shows that variability-aware analysis is complete (with respect to covering the whole configuration space), efficient (it outperforms some of the sampling heuristics), and scales even to large software systems. We demonstrate that variability-aware analysis is practical even for non-trivial case studies, such as the Linux kernel.
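To illustrate the core idea (a heavily simplified sketch of our own, where presence conditions are conjunctions of required options rather than arbitrary propositional formulas): each #ifdef-annotated fragment carries a presence condition, and the analysis reasons over the conditions once instead of enumerating all configurations.

```python
# Each fragment: (presence condition as the set of required options,
#                 variables it defines, variables it uses).
fragments = [
    (frozenset(),           {"buf"}, set()),     # always compiled
    (frozenset({"CRYPTO"}), {"key"}, {"buf"}),   # #ifdef CRYPTO
    (frozenset({"DEBUG"}),  set(),   {"key"}),   # #ifdef DEBUG
]

def variability_aware_check(fragments):
    """Report uses of variables whose defining fragment may be absent.

    A use under condition C is safe only if some definition's condition
    D is implied by C (here: D is a subset of C). One pass over the
    conditions covers all 2^n variants without enumerating them.
    """
    definitions = {}
    for cond, defined, _ in fragments:
        for var in defined:
            definitions.setdefault(var, []).append(cond)
    for cond, _, used in fragments:
        for var in used:
            if not any(d <= cond for d in definitions.get(var, [])):
                print(f"'{var}' may be undefined under {sorted(cond)}")

variability_aware_check(fragments)
# -> 'key' may be undefined under ['DEBUG'] (variant with DEBUG, no CRYPTO)
```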
On top of variability-aware analysis, we develop a transformation engine for C which respects variability induced by the preprocessor. The engine provides three common refactorings (rename identifier, extract function, and inline function) and overcomes the shortcomings of existing engines regarding completeness, use of heuristics, and scalability, while still being semantics-preserving with respect to all variants and fast enough to provide an instantaneous user experience. To validate semantics preservation, we extend a standard testing approach for refactoring engines with variability and show the effectiveness and scalability of our engine in real-world case studies.
In the end, our analysis and transformation techniques show that configurable systems can be analyzed and transformed efficiently (even at large scale), providing the same guarantees for configurable systems as for standard systems in terms of detecting and avoiding programming errors.
"Volkstümliche Musik" is a significant phenomenon (at least) of the early 1990s. The analysis of this phenomenon can paradigmatically represent the discursive practices and sub-thought systems of large parts of the population of the Federal Republic of Germany at this time. "Volkstümliche Musik" depends on the political situation and indirectly deals with the needs and problems of individuals which result from such a situation and which lie in the deep structure of their mentalities. It thus takes on a cultural function.
In study 1, an introduction to the research on moral self-regulation is provided, along with an explanation of the two manifestations of moral self-regulation: moral licensing and moral cleansing. At the core of the first study is an experiment designed to identify moral licensing and cleansing in the domain of honesty. The experiment merges relevant studies from social psychology and experimental economics. It assesses the question of whether moral self-regulation exists within the domain of honesty or, more precisely, whether the truth and lies are told in such a way as to balance each other out. After manipulating participants' moral balances (either positively or negatively), rates of truth-telling are compared to a neutral baseline scenario. Since neither moral licensing nor moral cleansing is observed, the results provide no support for the initial hypothesis that moral self-regulation exists within the domain of honesty.
Study 2 builds on these results and discusses possible reasons for the absence of moral self-regulation. Research on moral hypocrisy and self-concept maintenance is presented and discussed as a possible explanation. In order to shed more light on participants' behavior, a coding procedure is presented that was applied to the dataset from study 1. This approach makes it possible to quantify participants' handwritten stories that resulted from the moral manipulation in study 1 and to gain more insight into how truth-telling and lying affect the moral balance. By analyzing (dis)honesty on a more detailed level, results show that participants tend to act consistently with what they revealed about themselves in their stories.
Study 3 links together aspects of moral self-regulation, moral hypocrisy, and impression management. The "looting game" is presented, which lets participants loot money from a charity box while being subject to altruistic punishment from observers. For their punishment decision, observers are provided with a history of participants' past actions. This design makes it possible to assess how misconduct, punishment, and the creation of a favorable impression interact and ultimately impact profits. The results indicate that moral cleansing, and not the desire to trick observers, is the reason for manipulation. Participants who loot money from the charity box do not expect to receive less punishment; rather, they simply want to present a more favorable picture of themselves. Observers, on the other hand, fully account for the possibility of manipulation and tend to disregard a manipulated history. The looting game therefore calls into question the hypothesis that impressions are managed and manipulated to increase profits.
Quantum computing is an emerging technology that has the potential to change the perspectives and applications of computing in general. It enables a wide range of applications: from faster algorithmic solutions of classically still difficult problems to theoretically more secure communication protocols. A quantum computer uses the quantum mechanical effects of particles or particle-like systems, and a major similarity between quantum and classical computers is that both are abstracted as information processing machines. Whereas a classical computer operates on classical digital information, a quantum computer processes quantum information, which shares similarities with analog signals. One of the central differences between the two types of information is that classical information is more fault-tolerant than its quantum counterpart.
Faults are the result of quantum systems being subject to interference from external noise, but during the last decades quantum error correction codes (QECCs) were proposed as methods to reduce the effect of noise. Reliable quantum circuits are obtained by designing circuits that operate directly on encoded quantum information, but a circuit's reliability can also be increased by supplemental redundancies, such as sub-circuit repetitions.
Reliable quantum circuits have not been widely used, and one of the major obstacles is their vast associated resource overhead, but recent quantum computing architectures show promising scalability. Consequently, the number of particles used for computing can be increased more easily, and the classical control hardware (inherent to quantum computation) is also more reliable. Reliable quantum circuits have been investigated for almost as long as general quantum computing, but their limited adoption (until recently) has not generated enough interest in their systematic design.
The continuously increasing practical relevance of reliability motivates the present thesis to investigate some of the first answers to questions related to the background and the methods forming a reliable quantum circuit design stack.
The specifics of quantum circuits are analysed from two perspectives: their probabilistic behaviour, and their topological properties when a particular class of QECCs is used. Quantum phenomena, such as entanglement and superposition, are the computational resources used for designing quantum circuits. The discrete nature of classical information is missing for quantum information: an arbitrary quantum system can be in an infinite number of states, which are linear combinations of an exponential number of basis states. Any nontrivial linear combination of more than one basis state is called a state superposition. The effect of superpositions becomes evident when the state of the system is inferred (measured), as measurements are probabilistic with respect to their output: a nontrivial state superposition will collapse to one of the component basis states, and the measurement result is known exactly only after the measurement.
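For illustration, the standard single-qubit case (textbook material, not specific to this thesis): a qubit can be in the superposition

\[ |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1, \]

and a measurement yields 0 with probability \(|\alpha|^2\) and 1 with probability \(|\beta|^2\), after which the state has collapsed to the observed basis state. An n-qubit system likewise ranges over linear combinations of \(2^n\) basis states.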
A quantum system is, in general, composed of identical subsystems, meaning that a quantum computer (the complete system) operates on multiple similar particles (subsystems). Entanglement expresses the impossibility of separating the state of the subsystems from the state of the complete system: the nontrivial interactions between the subsystems result in a single indivisible state. Entanglement is an additional source of probabilistic behaviour: by measuring the state of a subsystem, the states of the unmeasured subsystems will probabilistically collapse to states from a well-defined set of possible states. Superposition and entanglement are the building blocks of quantum information teleportation protocols, which in turn are used in state-of-the-art fault-tolerant quantum computing architectures. Information teleportation means that the state of a subsystem is moved to a second subsystem without copying any information during the process.
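The canonical example of such inseparability (again standard textbook material) is the Bell state

\[ |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr): \]

measuring the first qubit yields 0 or 1 with probability 1/2 each, and the unmeasured second qubit collapses to the same value, even though neither qubit has a definite state of its own beforehand.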
The probabilistic approach towards the design of quantum circuits starts from the extension of classical test and diagnosis methods. Quantum circuits are modelled similarly to classical circuits by defining gate-lists, and missing quantum gates are modelled by the single-missing-gate fault. The probabilistic treatment of quantum circuits is facilitated by comparing them to stochastic circuits, a particular type of classical digital circuit. Stochastic circuits can be considered an emulation of analogue computing using digital components.
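In stochastic computing, a value p in [0, 1] is represented by a random bitstream whose bits are 1 with probability p; multiplication then reduces to a bitwise AND of independent streams. A minimal sketch of this (generic stochastic computing, not a circuit from the thesis):

```python
import random

def bitstream(p, n):
    """A stochastic bitstream: each bit is 1 with probability p."""
    return [1 if random.random() < p else 0 for _ in range(n)]

def value(stream):
    """Decode a bitstream back to the probability it encodes."""
    return sum(stream) / len(stream)

n = 100_000
a, b = bitstream(0.8, n), bitstream(0.5, n)
# An AND gate multiplies the encoded values: P(a_i & b_i) = 0.8 * 0.5.
product = [x & y for x, y in zip(a, b)]
print(value(product))  # ~0.4, up to statistical fluctuation
```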
A first proposed design method, based on this direct comparison, is the simulation of quantum circuits using stochastic circuits, mapping each quantum gate to a stochastic computing sub-circuit. The resulting stochastic circuit is compiled and simulated on FPGAs. The obtained results are encouraging and illustrate the capabilities of the proposed simulation technique. However, the exponential number of possible quantum basis states translates into an exponential number of stochastic computing elements.
A second contribution of the thesis is the proposal of test and diagnosis methods for both stochastic and quantum circuits. Existing verification (tomographic) methods for quantum circuits targeted the reconstruction of gate-lists: the repeated execution of the quantum circuit was followed by different but specific measurements at the circuit outputs. The similarities between stochastic and quantum circuits motivated test and diagnosis methods that use a restricted set of measurement types, which minimises the number of circuit executions. The obtained simulation results show that the proposed validation methods improve the feasibility of quantum circuit tomography for small and medium-sized circuits.
A third contribution of the thesis is the algorithmic formalisation of a problem encountered in teleportation-based quantum computing architectures. Teleportation results are probabilistic and require corrections, represented as quantum gates from a particular set. However, these gates have known commutation properties with the gates used in the circuit. The corrections are therefore not applied as dynamic gate insertions into the gate-list during the circuit's execution; instead, their effect is tracked through the circuit, and the corrections are applied only at the circuit outputs. The simulation results show that the algorithmic solution is applicable to very large quantum circuits.
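The commutation rules involved are the standard Pauli-tracking rules (a generic sketch of our own, not the thesis's algorithm): a Hadamard swaps a pending X correction with a Z correction, and a CNOT propagates X from control to target and Z from target to control.

```python
def track(corrections, gates):
    """Track pending Pauli corrections through Clifford gates.

    corrections: dict qubit -> [x_bit, z_bit] (pending X/Z correction).
    gates: list like ("H", q) or ("CNOT", control, target).
    Instead of inserting correction gates mid-circuit, the frame is
    updated per gate and applied once at the circuit outputs.
    """
    for gate in gates:
        if gate[0] == "H":                  # H X = Z H and H Z = X H
            q = gate[1]
            corrections[q][0], corrections[q][1] = \
                corrections[q][1], corrections[q][0]
        elif gate[0] == "CNOT":             # X spreads c->t, Z spreads t->c
            _, c, t = gate
            corrections[t][0] ^= corrections[c][0]
            corrections[c][1] ^= corrections[t][1]
    return corrections

# Pending X on qubit 0 (e.g., from a teleportation outcome):
frame = {0: [1, 0], 1: [0, 0]}
print(track(frame, [("H", 0), ("CNOT", 0, 1)]))
# -> the X became a Z on qubit 0; nothing propagated to qubit 1
```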
Topological quantum computing (TQC) is based on a class of fault-tolerant quantum circuits that use the surface code as the underlying QECC. Quantum information is encoded in lattice-like structures and error protection is enabled by the topological properties of the lattice. The 3D structure of the lattice allows TQC computations to be visualised similarly to knot diagrams. Logical information is abstracted as strands and strand interactions (braids) represent logical quantum gates. Therefore, TQC circuits are abstracted using a geometrical description, which allows circuit input-output transformations (correlations) to be represented as geometric sub-structures.
TQC design methods had not been investigated prior to this work, and the thesis introduces the topological computational model by first analysing the necessary concepts. The proposed TQC design stack follows a top-down approach: an arbitrary quantum circuit is decomposed into the TQC-supported gate set; the resulting circuit is mapped to a lattice of appropriate dimensions; and the relevant resulting topological properties are extracted and expressed using graphs and Boolean formulas. Both circuit representations are novel and applicable to TQC circuit synthesis and validation. Moreover, the Boolean formalism is broadened into a formal mechanism for proving circuit correctness.
The thesis introduces TQC circuit synthesis based on a novel geometric description of logical gates, whose formal correctness is demonstrated. Two synthesis methods are designed, both using a general planar representation of the circuit. Initial simulation results demonstrate the practicality and performance of the methods.
An additional group of proposed design methods solves the problem of automatic correlation construction. The methods use validity criteria which were introduced and analysed beforehand in the thesis. Input-output correlations existing in the circuit are inferred using both the graph and the Boolean representation.
The thesis extends the TQC state-of-the-art by recognising the importance of correlations in the validation process: correlation construction is used as a sub-routine for TQC circuit validation. The presented cross-layer validation procedure is useful when investigating both the QECC and the circuit, while a second proposed method is QECC-independent. Both methods are scalable and applicable even to very large circuits.
The thesis concludes with the analysis of TQC circuit identities, for which the developed Boolean formalism is used. The proofs of previously known circuit identities were either missing or complex; the presented approach reduces the length of the proofs and represents a first step towards standardising them. A new identity is developed and detailed in the process of illustrating the known circuit identities.
Reliable quantum circuits are a necessity for quantum computing to become reality, and specialised design methods are required to support the quest for scalable quantum computers. This thesis used a twofold approach towards this target: firstly by focusing on the probabilistic behaviour of quantum circuits, and secondly by considering the requirements of a promising quantum computing architecture, namely TQC. Both approaches resulted in a set of design methods enabling the investigation of reliable quantum circuits.
The thesis contributes a new quantum simulation technique, novel and practical test and diagnosis methods for general quantum circuits, the TQC design stack, and the set of design methods that form the stack. The mapping, synthesis, and validation of TQC circuits were developed and evaluated based on a novel and promising formalism that enables checking circuit correctness.
Future work will focus on improving the understanding of TQC circuit identities as it is hoped that these are the key for circuit compaction and optimisation. Improvements to the stochastic circuit simulation technique have the potential of spawning new insights about quantum circuits in general.
This study is an urban history of Cagayan de Oro (Northern Mindanao, Philippines), a city in a developing region, which faced urban transformation from the end of WWII in 1945 to 1980. It employs a multidisciplinary approach in which the city's demographic, economic, and infrastructural changes are analyzed. The study reveals that Cagayan had grown and transformed from a traditional town into an urban centre: its demography had continuously expanded with high rates of natural increase and large streams of in-migrants, while its economic activities had shifted from agricultural production to commerce and industrialization. Its economic transition was due to the promotion of Cagayan as the “Gateway to the South” or “Gateway to Northern Mindanao” by the deciding elites, who were part of the transnational economic system that supplied resources to developed countries such as the United States and Japan. As a result, the growth of Cagayan was directed not towards the masses but towards the elites and their foreign allies. Cagayan experienced inertia in terms of infrastructural development. The absorption of foreign structures therefore resulted in an artificial form of urban transformation in Cagayan.
Precise, content-rich, and well-structured document models are required for applications like verifying the consistency of documents. Creating such models for common documents is currently an expensive and error-prone process. In this thesis we present a novel approach to modelling and processing digital documents that uses semantic technologies. In contrast to other modelling approaches, we model the structure of documents as indicated by their content, not as defined by technical attributes like the file format. Additionally, our meta-model can be applied to a wide range of different documents, not just to a small set of documents with a predefined set of features. The models include semantic data and content relationships, which can be further extended with domain knowledge. Our new separation of technical and semantic document models fuels a standardised method for obtaining semantic models. This method is effective, suitable for live processing, and easily transferable to other document types and other domains. As it makes extensive use of background knowledge, we also present techniques for obtaining such knowledge and for representing complex forms of knowledge with multiple meta-layers. A flexible technique for obtaining relevant data from our document models completes the approach. This includes the ability to obtain various verification models, suitable for different types of consistency criteria and for different validation formalisms. We conclude this thesis with an evaluation that shows the viability and effectiveness of the proposed approach. We present runtime results for an implementation based on RDF/OWL and the rule language JBoss Drools that are adequate for live processing. We also provide and successfully apply techniques for measuring the quality of both document models and background knowledge.
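To give a flavour of such verification models (a minimal sketch of our own using rdflib and a made-up vocabulary; the thesis's models and Drools rules are considerably richer): a structural consistency criterion can be phrased as a query over a document's RDF model.

```python
from rdflib import Graph, Namespace, RDF

# Hypothetical vocabulary for a semantic document model.
DOC = Namespace("http://example.org/doc#")

g = Graph()
g.add((DOC.fig1, RDF.type, DOC.Figure))
g.add((DOC.fig1, DOC.caption, DOC.cap1))
g.add((DOC.fig2, RDF.type, DOC.Figure))   # fig2 lacks a caption

# Consistency criterion: every figure must have a caption.
violations = g.query("""
    SELECT ?fig WHERE {
        ?fig a <http://example.org/doc#Figure> .
        FILTER NOT EXISTS { ?fig <http://example.org/doc#caption> ?c . }
    }""")
for row in violations:
    print("missing caption:", row.fig)
```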
The Internet is a global system of interconnected computers and computer networks in which semi-structured data has been successfully applied for exchanging information. In today's Internet, the huge range of actors, the large diversity of associated device classes and domains, and the enormous number of resource-restricted controllers in this system have created new requirements and also coined a new term: the Internet of Things (IoT), which refers to identifiable objects (things) and their virtual representations in an Internet-like structure. The fundamental question this thesis tries to answer is whether and how the same semi-structured data can also be applied to the IoT and the embedded domain in spite of resource-limited controllers. In order to discuss this question, properties and requirements of embedded networks with regard to the IoT domain were collected and evaluated. Thereafter, the omnipresent semi-structured data exchange format of the Web, the Extensible Markup Language (XML), was assessed. The result was a list of missing requirements, such as a compact representation that can be generated and consumed fast and that also allows a small-footprint implementation. To address the compiled requirements, a binary representation of XML, nowadays known as the W3C's Efficient XML Interchange (EXI) format, was developed, which simultaneously optimizes performance and the utilization of computational resources and is designed to be compatible with XML. Moreover, in this work the format has been practically validated and tested. Addressing the needs of the embedded domain, one result of this analysis was a set of optimizations to constrain runtime memory usage and to predict memory growth at runtime. A concept introduced in this thesis is LazyDOM, which reduces memory requirements when processing and querying data. By means of a newly proposed code generation technique, processing of EXI on ultra-constrained device classes has been enabled, and the resulting format modifications have been adopted by the W3C standardization. The research work described in this thesis on efficiently exchanging and processing semi-structured data on constrained embedded devices has not only triggered modifications to the W3C EXI format but has already been adopted in domain-specific application standards and implementations. The above-mentioned optimizations, such as predictably limiting memory growth at runtime, have been contributed, discussed, and evaluated with the W3C experts and have become a core part of the EXI specification. Even more significantly from the IoT perspective, these optimizations provide the basis for the adoption of this technology in ISO and IEC standardization, marking the first time the automotive and power industries use IoT in the control plane. The implementation of EXI developed to conduct the evaluation as part of this thesis has become the de-facto open-source reference implementation of EXI and the basis of a number of other reference implementations, such as the OpenV2G project that provides the reference implementation of the communication interface in ISO/IEC 15118. In summary, the conducted research work has evaluated the options for adapting semi-structured data to the constrained embedded domain, proposed modifications, and evaluated them under realistic conditions. This made the work relevant for technology as well as application standardization despite its short duration.
As such, the research can now serve as a basis for further challenges in the IoT field, namely adopting concepts of the Semantic Web and adapting them to stimulate the quickly expanding ecosystem of embedded devices.
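To convey the intuition behind grammar-driven binary XML (a deliberately tiny toy of our own, not the actual EXI coding scheme): when encoder and decoder share a grammar, tag names need not be transmitted as text, and small integer event codes suffice.

```python
import struct

# Shared toy grammar: the possible XML events at each state and the
# state to move to next. Both encoder and decoder know this table.
GRAMMAR = {
    "doc":  [("startElement:temperature", "temp"),
             ("startElement:humidity", "hum")],
    "temp": [("characters", "end")],
    "hum":  [("characters", "end")],
    "end":  [("endElement", "doc")],
}

def encode(events, state="doc"):
    """Encode an XML event stream: one byte per event code plus values."""
    out = bytearray()
    for event, value in events:
        options = [e for e, _ in GRAMMAR[state]]
        out.append(options.index(event))     # small integer event code
        if value is not None:                # character data still costs bytes
            out += struct.pack("B", len(value)) + value.encode()
        state = dict(GRAMMAR[state])[event]
    return bytes(out)

# <temperature>21</temperature> shrinks to a handful of bytes.
print(encode([("startElement:temperature", None),
              ("characters", "21"),
              ("endElement", None)]))
```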
Over the last two decades, the Internet has fundamentally changed the ways firms and consumers interact. The ongoing evolution of the Internet-enabled market environment entails new challenges for marketing research and practice, including the emergence of innovative business models, a proliferation of marketing channels, and an unknown wealth of data. This dissertation addresses these issues in three individual essays. Study 1 focuses on business models offering services for free, which have become increasingly prevalent in the online sector. Offering services for free raises new questions for service providers as well as marketing researchers: How do customers of free e-services contribute value without paying? What are the nature and dynamics of nonmonetary value contributions by nonpaying customers? Based on a literature review and in-depth interviews with senior executives of free e-service providers, Study 1 presents a comprehensive overview of nonmonetary value contributions in the free e-service sector, including not only word of mouth, co-production, and network effects but also attention and data as two new dimensions that have so far been disregarded in marketing research. By putting their findings in the context of existing literature on customer value and customer engagement, the authors not only shed light on the complex processes of value creation in the emerging e-service industry but also advance marketing and service research in general. Studies 2 and 3 investigate the analysis of online multichannel consumer behavior in times of big data. Firms can choose from a plethora of channels to reach consumers on the Internet, such that consumers often use a number of different channels along the customer journey. While the unprecedented availability of individual-level data enables new insights into multichannel consumer behavior, it also makes high demands on the efficiency and scalability of research approaches. Study 2 addresses the challenge of attributing credit to the different channels along the customer journey. Because advertisers often do not know to what degree each channel actually contributes to their marketing success, this attribution challenge is of great managerial interest, yet academic approaches to it have not found wide application in practice. To increase practical acceptance, Study 2 introduces a graph-based framework to analyze multichannel online customer path data as first- and higher-order Markov walks. According to a comprehensive set of criteria for attribution models, embracing both scientific rigor and practical applicability, four model variations are evaluated on four large, real-world data sets from different industries. Results indicate substantial differences to existing heuristics such as "last click wins" and demonstrate that insights into channel effectiveness cannot be generalized from single data sets. The proposed framework offers support to practitioners by facilitating objective budget allocation and improving team decisions, and it allows for future applications such as real-time bidding. Study 3 investigates how channel usage along the customer journey facilitates inferences about underlying purchase decision processes. To handle increasing complexity and sparse data in online multichannel environments, the author presents a new categorization of online channels and tests the approach on two large clickstream data sets using a proportional hazard model with time-varying covariates.
By categorizing channels along the dimensions of contact origin and branded versus generic usage, Study 3 finds meaningful interaction effects between contacts across channel types, corresponding to the theory of choice sets. Including interactions based on the proposed categorization significantly improves model fit and outperforms alternative specifications. The results will help retailers gain a better understanding of customers' decision-making progress in an online multichannel environment and help them develop individualized targeting approaches for real-time bidding. Using a variety of methods, including qualitative interviews, Markov graphs, and survival models, this dissertation not only advances knowledge on analyzing and managing online consumer behavior but also adds new perspectives to marketing and service research in general.
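A common way to operationalize first-order Markov attribution (a generic sketch consistent with the graph-based framing above, not necessarily Study 2's exact model) is the removal effect: each channel's credit is the relative drop in total conversion probability when that channel is taken out of the walk.

```python
START, CONV, NULL = "start", "conversion", "null"

# Hypothetical first-order transition probabilities estimated from paths.
T = {
    START:     {"display": 0.5, "search": 0.5},
    "display": {"search": 0.5, NULL: 0.5},
    "search":  {CONV: 0.4, NULL: 0.6},
}

def p_conversion(T, removed=None):
    """Conversion probability of the walk, optionally without one channel.

    Fixed-point iteration on p(conversion | state); removing a channel
    pins its value to 0, i.e., walks entering it are lost (null).
    """
    p = {s: 0.0 for s in T}
    p[CONV], p[NULL] = 1.0, 0.0
    for _ in range(100):  # converges quickly for this small chain
        for s in T:
            if s != removed:
                p[s] = sum(w * p[t] for t, w in T[s].items())
    return p[START]

total = p_conversion(T)
for channel in ("display", "search"):
    removal_effect = (total - p_conversion(T, removed=channel)) / total
    print(channel, round(removal_effect, 3))
# -> display 0.333, search 1.0 on this toy chain
```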
Following an “agency-oriented Urban Theory” as advanced by Smith (2001), this study takes the urban landscape of Vinh City in Central Vietnam as a starting point for an investigation of multiple visions of modernity (Eisenstadt, 2000) put forward by social actors, as well as of the urban change resulting from the implementation of such visions. Focusing on the period from 1973 to 2011, it traces the application of three different visions for urban development in Vinh: the Socialist City, the Modern and Civilized City, and the Participatory City. The projects presented in this study that aim at implementing these visions in Vinh have one thing in common: they are informed by a specific view of what a city is and what it should be, and their implementation aims at changing the city in the desired direction. This goal involves not only physical change of the city but also institutional change in the urban society. To grasp the interplay between visions of a modern city, their application through concrete projects, and the results of these implementations, the study operates with two specific terms: modern projects and urban change. After introducing Vinh and its history, the thesis presents the period of the vision of the Socialist City and its application in Vinh through cooperation between Vietnam and the German Democratic Republic in the 1970s. It then moves on to the contemporary period starting in the 1990s, during which varying and conflicting modern projects for the city were put forward by different social actors cooperating in joint projects on urban development: the Modern and Civilized City and the Participatory City. While the modes of cooperation differed between the two periods, the study concludes with the argument that the impact of these transnational projects has led to path-dependent, as well as ambivalent, urban change in Vinh.
In this thesis, we investigate plane drawings of undirected and directed graphs on cylinder surfaces. In the case of undirected graphs, the vertices are positioned on a line that is parallel to the cylinder's axis, and the edge curves must not intersect this line. We show that a plane drawing is possible if and only if the graph is a double-ended queue (deque) graph, i.e., the vertices of the graph can be processed according to a linear order and the edges correspond to items in the deque inserted and removed at their end vertices. A surprising consequence of these observations is that the deque characterizes the planar graphs with a Hamiltonian path. This result extends the known characterization of planar graphs with a Hamiltonian cycle by two stacks. From these insights, we also obtain a new characterization of queue graphs and their duals. We also consider the complexity of deciding whether a graph is a deque graph and prove that this problem is NP-complete. By introducing a split operation, we obtain the splittable deque and show that it characterizes planarity. For the proof, we devise an algorithm that uses the splittable deque to test whether a rotation system is planar. In the case of directed graphs, we study upward plane drawings where the edge curves follow the direction of the cylinder's axis (standing upward planarity; SUP) or wind around the axis (rolling upward planarity; RUP). We characterize RUP graphs by means of their duals and show that RUP and SUP swap their roles when considering a graph and its dual. There is a physical interpretation underlying this characterization: a SUP graph is to its RUP dual graph as electric current passing through a conductor is to the magnetic field surrounding the conductor. Whereas testing whether a graph is RUP is NP-hard in general [Bra14], for directed graphs without sources and sinks we develop a linear-time recognition algorithm that is based on our dual-graph characterization of RUP graphs.
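To make the deque characterization concrete (an illustrative checker of our own for a fixed linear order and fixed end assignment, not the NP-complete recognition algorithm): process the vertices in order, appending each edge to a chosen deque end at its first endpoint; the layout is consistent only if every edge is again at one of the two ends when its second endpoint is reached.

```python
from collections import deque

def simulate(order, inserts, closes):
    """Check one concrete deque layout of a graph.

    order:   vertices in the linear order.
    inserts: vertex -> [(edge, side)], edges starting there ('L'/'R').
    closes:  vertex -> [edge], edges ending there.
    """
    d = deque()
    for v in order:
        for edge in closes.get(v, []):
            if d and d[0] == edge:
                d.popleft()                  # edge surfaces at the left end
            elif d and d[-1] == edge:
                d.pop()                      # ... or at the right end
            else:
                return False                 # edge is buried: layout invalid
        for edge, side in inserts.get(v, []):
            if side == "L":
                d.appendleft(edge)
            else:
                d.append(edge)
    return not d

# The 4-cycle a-b-c-d-a, processed along its Hamiltonian path a, b, c, d:
order = ["a", "b", "c", "d"]
inserts = {"a": [("ab", "L"), ("ad", "R")],
           "b": [("bc", "L")],
           "c": [("cd", "L")]}
closes = {"b": ["ab"], "c": ["bc"], "d": ["cd", "ad"]}
print(simulate(order, inserts, closes))     # True: C4 is a deque graph
```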
This thesis investigates how retail investors use early redemption rights in structured interest-rate products. The analysis is based on a novel, not publicly available dataset that covers, over a period of roughly 13 years, the decisions of more than 800,000 retail investors on whether to continue holding or to redeem putable bonds (German federal savings notes, Bundesschatzbriefe) early. The goal of the thesis is to extend the theoretical and empirical understanding of retail investors' financial decision strategies and to identify possible differences within this investor group and in comparison to other capital market actors. In addition, the thesis aims to point out possible fields of action for issuers and banks that offer comparable financial products.
Modern Web technology makes the dream of fully interactive and enriched video come true. Nowadays it is possible to organize videos in a non-linear way, playing in a sequence that is not known in advance. Furthermore, additional information can be added to the video, ranging from short descriptions to animated images and further videos. This calls for an easy-to-use and efficient authoring tool that is capable of managing the single media objects as well as clearly arranging the links between the parts. Tools of this kind are rare and mostly do not provide the full range of needed functions. While providing an interactive experience to the viewer in the Web player, parallel plot sequences and additional information lead to an increased download volume. This may cause pauses during playback while elements that are displayed with the video have to be downloaded. A good quality of experience for these videos, with small waiting times and playback without interruptions, is desired. This work presents the SIVA Suite for creating the previously described annotated interactive non-linear videos. We propose a video model for interactivity, non-linearity, and annotations, which is implemented in an XML format, an authoring tool, and a player. Video is the main medium, whereby different scenes are linked in a scene graph. Time-controlled additional content, called annotations, such as text, images, audio files, or videos, is added to the scenes. The user is able to navigate in the scene graph by selecting a button on a button panel. Furthermore, other navigational elements like a table of contents or a keyword search are provided. Besides the SIVA Suite, this thesis presents algorithms and strategies for download and cache management to provide a good quality of experience while watching the annotated interactive non-linear videos. For this purpose, we implemented a standard-independent player framework. Integrated into a simulation environment, the framework allows the evaluation of algorithms and strategies for the calculation of start-up times and for the selection of elements to pre-fetch into and delete from the cache. Their interaction during the playback of non-linear video content can be analyzed. The algorithms and strategies can be used to minimize interruptions in the video flow after user interactions. Our extensive evaluation showed that our techniques result in faster start-up times and fewer interruptions in the video flow than those of other players. Knowledge of the structure of an interactive non-linear video can be used to minimize the start-up time at the beginning of a video while minimizing the increase in the overall download volume.
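A minimal sketch of structure-aware pre-fetching (our illustration, not the SIVA Suite's actual strategy): since the viewer can only reach scenes along scene-graph edges, a bounded breadth-first pre-fetch from the current scene fills the cache with exactly the content that may be needed next.

```python
from collections import deque

# Hypothetical scene graph: scene -> scenes selectable from it.
SCENE_GRAPH = {
    "intro":   ["story_a", "story_b"],
    "story_a": ["ending"],
    "story_b": ["ending"],
    "ending":  [],
}

def prefetch_order(current, budget):
    """Scenes to pre-fetch, nearest reachable scenes first."""
    order, seen, queue = [], {current}, deque([current])
    while queue and len(order) < budget:
        for nxt in SCENE_GRAPH[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                order.append(nxt)
                queue.append(nxt)
    return order[:budget]

# While "intro" plays, both possible successors are fetched first.
print(prefetch_order("intro", budget=3))  # ['story_a', 'story_b', 'ending']
```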
In the Web 2.0 era, platforms for sharing and collaboratively annotating images with keywords, called tags, have become very popular. Tags are a powerful means for organizing and retrieving photos. However, manual tagging is time-consuming. Recently, the sheer amount of user-tagged photos available on the Web has encouraged researchers to explore new techniques for automatic image annotation. The idea is to annotate an unlabeled image by propagating the labels of community photos that are visually similar to it. Most recently, an ever-increasing amount of community photos is also associated with location information, i.e., geotagged. In this thesis, we aim at exploiting the location context and propose an approach for automatically annotating geotagged photos. Our objective is to address the main limitations of state-of-the-art approaches in terms of the quality of the produced tags and the speed of the complete annotation process. To achieve these goals, we first deal with the problem of collecting images with the associated metadata from online repositories. Accordingly, we introduce a strategy for data crawling that takes advantage of location information and the social relationships among the contributors of the photos. To improve the quality of the collected user tags, we present a method for resolving their ambiguity based on tag relatedness information. In this respect, we propose an approach for representing tags as probability distributions based on the algorithm of Laplacian score feature selection. Furthermore, we propose a new metric for calculating the distance between tag probability distributions by extending the Jensen-Shannon divergence to account for statistical fluctuations. To efficiently identify the visual neighbors, the thesis introduces two extensions to the state-of-the-art image matching algorithm known as Speeded Up Robust Features (SURF). To speed up the matching, we present a solution for reducing the number of compared SURF descriptors based on classification techniques, while the accuracy of SURF is improved through an efficient method for iterative image matching. Furthermore, we propose a statistical model for ranking the mined annotations according to their relevance to the target image. This is achieved by combining multi-modal information in a statistical framework based on Bayes' rule. Finally, the effectiveness of each of the mentioned contributions, as well as the complete automatic annotation process, is evaluated experimentally.
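The starting point for that distance is the standard Jensen-Shannon divergence between two tag distributions P and Q (the thesis's fluctuation-aware extension is not reproduced here):

\[ \mathrm{JSD}(P \,\|\, Q) = \tfrac{1}{2}\,D_{\mathrm{KL}}(P \,\|\, M) + \tfrac{1}{2}\,D_{\mathrm{KL}}(Q \,\|\, M), \qquad M = \tfrac{1}{2}(P + Q), \]

where \(D_{\mathrm{KL}}\) denotes the Kullback-Leibler divergence.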
Making multimedia data available online becomes less expensive and more convenient on a daily basis. This development promotes web phenomena such as Facebook, Twitter, and Flickr. These phenomena and their increased acceptance in society in turn lead to a multiplication of the amount of images available online. This vast amount of frequently public, and therefore searchable, images already exceeds the zettabyte bound. Executing a similarity search over the number of images that are publicly available on the Internet and receiving a top-quality result is a challenge that the scientific community has recently attempted to rise to. One approach to coping with this problem assumes the use of distributed heterogeneous Content Based Image Retrieval systems (CBIRs). Following from this, the problems that emerge from a distributed query scenario must be dealt with, for example the involved CBIRs' usage of distinct metadata formats for describing their content, as well as their unequal technical and structural information. An additional issue is the individual metrics that are used by the CBIRs to calculate the similarity between pictures, as well as the specific ways in which they are combined. Overall, receiving good results in this environment is a very labor-intensive task which has been scientifically but not yet comprehensively explored. The problem primarily addressed in this work is the collection of pictures from CBIRs that are similar to a given picture, as a response to a distributed multimedia query. The main contribution of this thesis is the construction of a network of Content Based Image Retrieval systems that are able to extract and exploit information about an input image's semantic concept. This so-called semantic CBIRn is mainly composed of CBIRs that are configured by the semantic CBIRn itself. Complementarily, it is possible to integrate specialized external sources. The semantic CBIRn is able to collect and merge the results of all of these attached CBIRs. In order to be able to integrate external sources that are willing to join the network but are not willing to disclose their configuration, an algorithm was developed that approximates these configurations. By categorizing existing as well as external CBIRs and analyzing incoming queries, image queries are forwarded exclusively to the most suitable CBIRs. In this way, images that are not of any use for the user can be omitted beforehand. The images returned thereafter are rendered comparable in order to merge them into one single result list of images that are similar to the input image. The feasibility of the approach and the improvement of the search process obtained with it are demonstrated by a prototypical implementation and its evaluation using classified images of ImageNet. Using this prototypical implementation, the number of returned images that are of the same semantic concept as the input image is increased by a factor of 4.75 with respect to a predefined non-semantic CBIRn.
UME is the notion that a user should receive informative, adapted content anytime and anywhere. Personalization of videos, which adapts their content according to user preferences, is a vital aspect of achieving the UME vision. User preferences can be translated into several types of constraints that must be considered by the adaptation process, including semantic constraints directly related to the content of the video. To deal with these semantic constraints, a fine-grained adaptation, which can go down to the level of video objects, is necessary. The overall goal of this adaptation process is to provide users with adapted content that maximizes their Quality of Experience (QoE). This QoE depends at the same time on the user's satisfaction in perceiving the adapted content, the amount of knowledge assimilated by the user, and the adaptation execution time. In video adaptation frameworks, the Adaptation Decision Taking Engine (ADTE), which can be considered the "brain" of the adaptation engine, is responsible for achieving this goal. The task of the ADTE is challenging, as many adaptation operations can satisfy the same semantic constraint, thus giving rise to several feasible adaptation plans. Indeed, for each entity undergoing the adaptation process, the ADTE must decide on the adequate adaptation operator that satisfies the user's preferences while maximizing his/her quality of experience. The first challenge is to objectively measure the quality of the adapted video, taking into consideration the multiple aspects of the QoE. The second challenge is to assess this quality beforehand in order to choose the most appropriate adaptation plan among all possible plans. The third challenge is to resolve conflicting or overlapping semantic constraints, in particular conflicts arising from constraints expressed by the owner's intellectual property rights regarding the modification of the content. In this thesis, we tackled the aforementioned challenges by proposing a Utility Function (UF), which integrates semantic concerns with the user's perceptual considerations. This UF models the relationships among adaptation operations, user preferences, and the quality of the video content. We integrated this UF into an ADTE that performs multi-level piecewise reasoning to choose the adaptation plan that maximizes the user-perceived quality. Furthermore, we included intellectual property rights in the adaptation process and thereby modeled content owner constraints. We dealt with the problem of conflicting user and owner constraints by mapping it to a known optimization problem. Moreover, we developed the SVCAT, which produces structural and high-level semantic annotations according to an original object-based video content model. We also modeled the user's preferences, proposing extensions to MPEG-7 and MPEG-21. All the developed contributions were carried out as part of a coherent framework called PIAF. PIAF is a complete, modular, MPEG-standard-compliant framework that covers the whole process of semantic video adaptation. We validated this research with qualitative and quantitative evaluations, which assess the performance and the efficiency of the proposed adaptation decision-taking engine within PIAF. The experimental results show that the proposed UF has a high correlation with subjective video quality evaluation.
The present thesis is based on four articles in the areas of labor economics, regional science, and international trade. I make use of different micro-level data sets to evaluate reasons for performance disparities between firms and between workers, and to evaluate the interrelation of these disparities with characteristics of local labor markets. Chapter 1 of this thesis discriminates between the effects of several agglomeration externalities on firms' total factor productivity (TFP). The identification of TFP is not trivial, however; I correct for biases due to unobserved output prices and the endogeneity of agglomeration economies. Traditional factors, such as specialization, diversity, and size of the county, as well as the more detailed Marshallian agglomeration economies, namely knowledge spillovers and labor market pooling, are jointly tested. It turns out that labor market pooling is the quantitatively most important agglomeration mechanism. It is captured by the correlation of the occupational composition between one county-industry and the rest of the county. The intuition behind it is that a plant readily finds suitable staff if sectors which employ similar workers are strongly represented in the same region. Labor market pooling remains the dominant agglomeration force if the spatial boundaries of regions are changed. In general, the data demonstrate that the strength of agglomeration economies varies largely between sectors. Only for a subset of industries is some positive evidence detected for knowledge spillovers. Chapter 2 analyzes labor market pooling in greater detail, but with a slightly different modeling than chapter 1. Here, the central aspect of labor market pooling is the quality of workers and firms. The main questions are whether there is systematic matching in the labor market and whether this matching pattern creates advantages for both parties. I devote attention to the identification of accurate quality measures: plants' total factor productivity and workers' fixed effects. Two different methods then yield evidence in favor of positive assortative matching. The correlation between both quality measures is positive. Wage gains amount to up to 4% when both quality levels are equal. In a fairly general matching model, this shape of the wage curve arises due to complementarities of qualities in the production function. When generally higher productivities and wages in dense regions (caused by agglomeration economies and sorting) are not controlled for, the strength of matching and the wage gains are overestimated. I also find that regional differences in matching quality cannot be attributed to local density and the unemployment rate. Chapter 3 applies several regression-based decomposition methods to analyze the impact of region-, worker-, firm- and sector-specific determinants on the wage level and on the continuous increase in wage inequality between 1995 and 2007 in Germany. In contrast to prior studies, more than 50% of the wage dispersion and almost the entire increase in wage inequality are explained by this approach. Altogether, the entire growth of wage dispersion occurs within regions, and changes in the composition of wage determinants are minor compared to changes in their returns. I find that occupational attributes are the most important wage determinant. Changes in the firm size premium in combination with assortative matching also depress wages at the bottom of the distribution while increasing wages at the top.
Workers with an unemployment record or an occupation in the service, construction, and logistics sectors particularly experience falling wages. Chapter 4 studies the effect of an expansion of imported intermediate inputs on establishments' average task intensities and employment size in a middle-income country. I use confidential matched employer-employee data and information on trade transactions for the universe of Brazilian firms. Propensity score matching indicates that import expansion leads to overall employment growth, higher intensities in routine and non-routine manual tasks, and an increased share of intermediates exports. Thus my findings indicate that intermediate imports represent onshored rather than offshored tasks. This result remains unchanged regardless of whether imports from high- or low-wage countries are considered.
Embedded networks are fundamental infrastructures in many different kinds of domains, such as home and industrial automation, the automotive industry, and future smart grids. Yet they can be very heterogeneous, containing wired and wireless nodes with different kinds of resources and service capabilities, such as sensing, acting, and processing. Driven by new opportunities and business models, embedded networks will play an ever more important role in the future, interconnecting more and more devices, even from other network domains. Realizing applications for such networks, however, is a highly challenging task, since various aspects have to be considered, including communication between a diverse assortment of resource-constrained nodes, such as microcontrollers, as well as a flexible node infrastructure. Service Oriented Architecture (SOA) with Web services would perfectly meet these unique characteristics of embedded networks and ease the development of applications. Standardized Web services, however, are based on plain-text XML, which is not suitable for microcontroller-based devices with their very limited resources, due to XML's verbosity, its memory and bandwidth usage, and its significant processing overhead. This thesis presents methods and strategies for realizing efficient XML-based Web service communication in embedded networks by means of binary XML, using the EXI format. We present a code generation approach to create optimized and dedicated service applications in resource-constrained embedded networks. In doing so, we demonstrate how EXI grammars can be optimally constructed and applied to the Web service and service requester context. In addition, to realize optimized service interaction in embedded networks, we design and develop an optimized, filter-enabled service data dissemination that takes into account the individual resource capabilities of the nodes and the connection quality within embedded networks. We show different approaches for efficiently evaluating binary XML data and applying it to resource-constrained devices, such as microcontrollers. Furthermore, we present the effective placement of binary XML filters in embedded networks with the aim of reducing both the computational load of constrained nodes and the network traffic. Various evaluation results of V2G applications prove the efficiency of our approach compared to existing solutions and also prove the seamless and successful applicability of SOA-based technologies in microcontroller-based environments.