In order to end poverty by 2030, the declared goal of the United Nations, a better understanding is needed of which policies help poor households escape poverty and how to end its inter-generational transmission.
Since the Millennium Declaration in September 2000, and the adoption of the Millennium Development Goals (MDGs), the delivery of basic social services, such as education, health, water supply and sanitation, has become the central focus of international development assistance.
However, the provision of basic social services is not necessarily sufficient to lead to an accumulation of human and productive capital, which would allow households to escape poverty and interrupt its inter-generational transmission.
To understand why people are poor, we need to understand what productive decisions poor households take, and to identify what constraints households face in their attempt to accumulate human as well as productive capital. A better understanding of such constraints could guide policies that have a long-term impact on poverty reduction and on development.
A number of factors could explain why poor households operate at unprofitable levels and why they are constrained in their investment decisions. Empirical evidence points to different explanations: the cost of learning and access to information, insufficient education, risk, credit constraints, non-convex production technologies, and behavioral patterns that are inconsistent with standard neoclassical models. Currently, one of the major challenges in formulating policies that foster productive investments among the poor seems to be to disentangle the effects of scale, credit constraints, and the lack of insurance mechanisms.
This thesis seeks to shed further light on the relative role of these three constraints. In the context of rural India, it analyzes what production and investment decisions households take and how important risk and credit constraints as well as scale effects are in these decisions. Finally, it evaluates potential policy tools that could support households in overcoming these constraints.
Today, 33% of the world's poor live in India, the vast majority of them (80.5%) in rural areas. The economic structure of rural India is still dominated by agricultural production, and consequently, this thesis concentrates on agricultural production decisions and employment in agriculture.
In particular, this thesis addresses three questions in three individual papers: First, are farm households constrained in their crop choices by agricultural production risk, and to what extent can India's public works program support households in overcoming this constraint? Second, how profitable is cattle farming in rural India at different levels of investment, and which barriers do households face in reaching optimal investment levels? And third, can risk in agricultural wages explain limited investment in girls' education in the presence of intra-household substitution in household chores?
The first paper focuses on the crop choice of farm households. It reassesses the stylized fact that households have to trade off returns against risk in their crop choice in the context of Andhra Pradesh, a state in the south of India. It then explores the effect of India's flagship anti-poverty program, the National Rural Employment Guarantee Scheme (NREGS), on households' crop choice using a representative panel data set. The NREGS guarantees each household living in rural India up to a hundred days of employment per year at state minimum wages.
The paper shows theoretically, and empirically, that the introduction of the NREGS reduces households' uncertainty about future income streams because it provides reliable employment opportunities in rural areas independently of weather shocks and crop failure. With access to the NREGS, households can compensate income losses emanating from shocks to agricultural production. Households with access to the NREGS can therefore shift their production towards riskier but also more profitable crops. These shifts in agricultural production have the potential to considerably raise the incomes of smallholder farmers.
The paper concludes that employment guarantees can, similarly to crop insurance, help households in managing agricultural production risks. It also argues that accounting for the effects of the NREGS on crop choice and profits from agricultural production affects the cost-benefit analysis of such a program considerably.
The second paper concentrates on the profitability of farming cattle in Andhra Pradesh.
The paper also uses a representative panel dataset and examines average and marginal returns to cattle at different levels of cattle investment. It finds average returns on the order of -8% at the mean of cattle value. These returns vary across the cattle value distribution between -53% (in the lowest quintile) and +2% (in the highest). While marginal returns are positive on average, they also vary considerably with cattle value and breed. The paper shows that average and marginal returns are considerably higher for modern-variety cows, i.e., European breeds and their crossbreeds, than for traditional varieties of cows or for buffaloes. It also shows that cattle farming becomes most profitable at minimum herd sizes of five animals, because average labor costs decrease with increasing herd size.
The results of this paper suggest that cattle farming is associated with sizable non-convexities in the production technology and that substantial economies of scale, as well as high upfront expenses for acquiring and feeding high-productivity animals, might trap poorer households in low-productivity asset levels. The fact that wealthier households and households with lower costs of access to veterinary services are more likely to overcome these barriers supports this idea.
The second paper concludes that cattle farming might well generate positive returns for households in rural India, but that most households seem to operate at unprofitable levels. This could also explain the apparent paradox between widespread support of cattle farming through agricultural policy interventions and negative returns to cattle, as stressed in recent works. It argues that policy interventions that target productive assets will only be beneficial if transfers are high enough to allow households to overcome these entry barriers.
The third paper concentrates on the effect of risk on the productive decisions of households, and analyzes the effect of wage risk in agricultural employment on women's labor supply and the time they allocate to home production. It seeks to understand the extent to which risk raises the labor supply of women to levels that can become harmful for other members of the household. The hypothesis is that in the presence of intra-household substitution effects, for instance in the performance of household chores, increased female labor supply might have negative effects on the time allocation of girls. If women have less time available for home production and childcare, and such activities can only be foregone at high cost, they might be forced to take older girls out of school or to cut down on the time these girls study at home in order for them to fill in for these tasks.
The paper uses cross-sectional data on the time allocation of different household members and predicts wage risk at the village level as a function of the historical rainfall distribution and a village's share of land under irrigation. The results show that wage risk affects the time allocation of women, increasing their labor supply and reducing the time they allocate to home production. Wage risk also increases the time girls spend on household chores and reduces their time in school. Because the observed effect of wage risk on girls' time allocated to household chores corresponds very closely to the effect observed for women, it seems plausible to attribute it to intra-household substitution effects. The observed effect of risk on girls' school time, however, is greater than the observed effect of risk on the home-production time of girls. This can be due to two reasons: First, in the presence of intra-household substitution effects, shocks in wages will not only increase female labor supply but also girls' time on household chores; moreover, the model predicts that risk-averse households invest less in education when future school time becomes uncertain, because future school time affects the returns to current schooling. Second, if school attendance is indivisible, then girls might be forced to drop out of school temporarily or even permanently.
The paper then simulates the effect of the NREGS on the time-allocation decisions of working women and school-age girls. The results suggest that the NREGS could increase the time working women spend on household duties, because it reduces uncertainty regarding future earnings, and alleviates the need to accumulate savings. Thereby, the NREGS would reduce the pressure on girls to perform household tasks and allow them to increase the time they spend in school or studying by 6 minutes daily.
With these findings, this thesis contributes to a better understanding of the choices poor households in rural India face in their day-to-day decision making, and offers insights into which policies could support households in escaping poverty and interrupting its inter-generational transmission.
In modern CMOS technology, continuously scaled transistor sizes have significantly increased the impact of process variations on circuit behavior. Manufactured devices tend to differ in performance due to parameter variations during manufacturing and in the operating context. Conventional tests generated regardless of variations can fail to rule out devices with low performance, or even functional failure, caused by extreme variations; this in turn raises the unreliability of shipped products. To tackle the problem, many existing test approaches have focused on identifying and testing a number of critical paths in the circuit, aiming at the efficiency of the search process. However, the statistical circuit model, which better describes circuit timing behavior under variations, has not yet been sufficiently investigated and employed by existing testing methodologies.
This thesis work proposes Opt-KLPG and MIRID, which can be utilized in a statistical delay testing flow. Opt-KLPG, a K Longest Paths Generation (KLPG) algorithm for optimal solutions under memory constraints, builds on the traditional KLPG algorithm to generate targeted tests for small delay defects, which are common small timing deviations under process variations. In contrast to KLPG, Opt-KLPG guarantees the optimality of the solution (the paths found are indeed the K longest sensitizable paths). MIRID is a mixed-mode timing-aware simulator that incorporates the effects of power-supply noise and combines an event-driven logic simulation engine with interfaces to provided electrical models. MIRID aims at evaluating delay tests in the presence of process variations efficiently yet accurately, by performing logic simulation at the gate level while determining gate delays using simplified electrical models. The electrical models applied by the simulator focus on the IR drop effect; the electrical parameters that contribute most to this effect are incorporated into the model. The simulator is generic and flexible, and can be adapted by modifying the interfaces with minor effort. Both applications were verified in various respects by experiments on academic and industrial circuits, and turned out to have satisfactory effectiveness and performance.
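To make the path-generation idea concrete, the following sketch shows the core best-first "path growth" loop behind KLPG-style K-longest-path generation on a combinational circuit modeled as a DAG. It deliberately omits the sensitization checks, delay models, and memory bounds that distinguish Opt-KLPG; the graph encoding and all names are illustrative assumptions, not the thesis' implementation.

```python
# Illustrative sketch only: best-first enumeration of the K longest
# source->sink paths of a DAG, the core idea behind KLPG-style algorithms.
import heapq
from collections import defaultdict

def k_longest_paths(edges, source, sink, k):
    """edges: list of (u, v, delay). Partial paths are grown in best-first
    order of their 'esperance': length so far plus an upper bound on the
    remaining length, so complete paths pop in descending length order."""
    succ = defaultdict(list)
    for u, v, w in edges:
        succ[u].append((v, w))

    bound = {}  # longest remaining path from each node (memoized)
    def longest_from(v):
        if v == sink:
            return 0
        if v not in bound:
            bound[v] = max((w + longest_from(u) for u, w in succ[v]),
                           default=float('-inf'))  # -inf: sink unreachable
        return bound[v]

    heap = [(-longest_from(source), 0, [source])]  # max-heap via negation
    result = []
    while heap and len(result) < k:
        neg_esp, length, path = heapq.heappop(heap)
        node = path[-1]
        if node == sink:
            result.append((length, path))
            continue
        for v, w in succ[node]:
            rest = longest_from(v)
            if rest > float('-inf'):
                heapq.heappush(heap,
                               (-(length + w + rest), length + w, path + [v]))
    return result

# Tiny example: two reconvergent paths of different length.
edges = [('s', 'a', 2), ('s', 'b', 1), ('a', 't', 3), ('b', 't', 5)]
print(k_longest_paths(edges, 's', 't', 2))
# -> [(6, ['s', 'b', 't']), (5, ['s', 'a', 't'])]
```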
Employment of a very large number of antennas is seen as the key technology for providing future users with very high data rates. At the same time, implementation complexity rises due to the large memories required and the sophisticated signal processing algorithms employed. Continuous technology downscaling allows the implementation of such complex digital designs, yet its inherent variability and vulnerability to physical disturbances violate the assumption of perfectly reliable hardware operation.
This work considers Unique Word OFDM, an alternative to standard Cyclic Prefix OFDM that provides superior detection quality. Unique Word OFDM is generalized to a MIMO system, which allows its interpretation as a virtual massive MIMO system with only a few physical antennas. Detection methods for the introduced generalization are discussed and their performance is quantified.
Because of the large memory size required, linear detection represents the cost- and performance-effective solution. Possible memory errors due to radiation effects or voltage scaling are addressed and a nonlinear MMSE detection algorithm is proposed. This algorithm keeps track of the memory errors and is able to significantly mitigate their effect on the quality of the estimated data.
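For context, a textbook reminder of the baseline that this nonlinear variant extends (a generic formula, not the thesis' modified algorithm): for the linear model y = Hx + n with unit-power symbols and noise variance sigma_n^2, the linear MMSE estimate is

```latex
% Textbook linear MMSE detector for y = Hx + n (unit-power symbols, noise
% variance \sigma_n^2); the thesis' nonlinear variant additionally tracks
% memory errors affecting the stored quantities.
\[
  \hat{\mathbf{x}}_{\mathrm{MMSE}}
    = \bigl(\mathbf{H}^{\mathsf{H}}\mathbf{H} + \sigma_n^2\,\mathbf{I}\bigr)^{-1}
      \mathbf{H}^{\mathsf{H}}\,\mathbf{y} .
\]
```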
Apart from memory issues, the reliability of the actual computational hardware constituting the receiver is also of concern in this work. A custom implementation of MMSE detection based on Sorted Givens Rotations is subjected to transient fault injection. The impact of faults in various parts of the implemented circuit on detection performance is quantified, and the most vulnerable components of the circuit in terms of reliability are identified.
Security is another major focus of this work, since most current implementations include cryptographic devices.
Fault-based attacks on such systems are known to be able to extract the secret key in feasible time.
The remaining part of this work addresses such fault-injection-based malicious attacks. Countermeasures based on a combination of information and hardware redundancy are considered. Recently introduced robust codes target such attacks by providing guaranteed detection capability. The performance of these codes is assessed through application to actual cryptographic and general-purpose circuits. The work introduces metrics that help identify fault locations in the circuit that could escape detection with high probability. These locations are targeted by transistor resizing that renders fault injection infeasible.
Recently, non- and paraverbal properties of literary texts at the level of documentary inscription (i.e., materiality), seen individually or as aspects of a so-called ‘material text’, that is, the union of materiality and verbal sign systems, have received increasing attention in textual scholarship and literary studies. Here, ‘meaning’, or at least ‘semantic potentiality’, has been attributed to both or either, and physical features of texts have been construed as hitherto neglected aspects of literary communication and literary aesthetics. In what follows, I present a brief conspectus of the current debate and then try to provide a reconstruction of the underlying ideas by answering the question ‘how does a material text mean?’. Taking a descriptive meta-perspective and focusing on conceptual and methodological clarification, I try to clarify the somewhat blurry expressions ‘meaning’, ‘to mean’, and the like by translating them into the distinct terminology of semiotics and transferring them into the theoretical framework of an instrumentalist notion of signs.
Three Essays on Understanding Mobile Consumer Behavior: Business Models, Perceptions, and Features
(2016)
For about a decade, consumers have been carrying the Internet in their pockets. The rapid penetration of modern smartphones has meant that more than two-thirds of the people in the West can access and use online resources, anytime and anywhere. Consumers also can communicate and share their consumption experiences instantaneously. Platforms reach users for time-critical events through highly personal communication channels, in the sense that smartphones serve as constant companions. Many mobile applications and their basic services and contents also are available for free. The digital and mobile worlds thus are changing the very means of communication, suggesting the powerful need for marketing research and practice to find the opportunities and meet the challenges of the mobile Internet. In particular, scientific investigations are required to describe new business models in the free e-service industry and the consumer behavior affected by mobile features. This thesis examines these topics in three essays.
Study 1 considers business models that offer their services without charge. Offering services for free is symptomatic not only of mobile apps (90% of all apps are available for free) but of the digital economy in general. For companies offering free e-services, this situation raises several important questions: Without any access device restrictions, how do customers of free e-services contribute value without paying? What are the nature and dynamics of nonmonetary value contributions by nonpaying customers? With a literature review and interviews with senior executives of free e-service providers, Study 1 presents a comprehensive overview of nonmonetary value contributions in the free e-service sector, including word of mouth, co-production, and network effects. Moreover, adding attention and data to this framework reveals two further aspects that have not been addressed in prior customer value research. By putting the findings in the context of the existing literature on customer value and customer engagement, this study sheds light on the complex processes of value creation in the emerging e-service sector, while advancing marketing and service research in general.
Study 2 deepens the findings from the first study; specifically, the focus is on the way that mobile users co-produce content and how this contribution is perceived by recipients in the network. With field data and a scenario experiment, this study demonstrates that recipients evaluate mobile-generated customer reviews fundamentally differently from other reviews. In particular, they discount the helpfulness of mobile reviews, due to their text-specific content and style particularities. The very fact that a review has been identified as written on a mobile device also lowers recipients’ perceptions of its value. Recipients use information about the device as a source cue to assess their compatibility with the review contribution channel. If they perceive themselves as compatible with the method used to generate the review (mobile or non-mobile), recipients regard the review as more helpful, because they attribute the review to the quality of the reviewed subject. If they perceive it as incompatible though, recipients assume that the review reflects the personal dispositions of the reviewer and discount its helpfulness.
Finally, Study 3 takes up the attention and cross-market network effects in a mobile setting; these were two nonmonetary dimensions identified by Study 1. Platform providers should develop measures to draw the attention of nonpaying customers to the offers of their paying customers. One attention-grabbing mobile-specific feature is push notifications to the device, which provide information about temporally or spatially relevant events. More concretely, Study 3 investigates how mobile push notifications remind users of upcoming deadlines in online auctions and therefore improve late bidding success. Late bidding is a prevalent strategy, in which bidders submit their bids at the very end of an online auction. This research uses field data about an online auction platform to demonstrate that late bidders use these mobile push notifications more frequently than do bidders with different bidding patterns. Within the group of late bidders, the chance to win an auction increases with their use of push notifications. After a mobile push notification, late bidders submit bids through mobile devices but also through non-mobile channels. Less experienced late bidders also benefit from push notifications, which increase their chances of success.
In summary, this dissertation contributes to an enhanced understanding of mobile consumer behavior by using various methods, including qualitative interviews, field observations, and online experiments. From a theoretical perspective, it contributes to current knowledge about nonmonetary customer value contributions in general and their role in mobile settings in particular. This thesis highlights the role of mobile devices in co-production and in perceptions of co-produced content. It also reveals how mobile-specific interactive features, like push notifications, affect late bidding efficiency. Thereby, it specifies the role of mobile devices in cross-market effects, in that they enable the platform to direct the relationship between buyers and sellers. The insights presented herein encourage managers to reevaluate their current practices, think about whether they should label co-produced content as generated through a mobile channel or not, and contemplate whether to develop mobile push notifications as helpful features for users (not as intrusive marketing messages).
IT outsourcing to clouds poses new challenges for the technical implementation of legally compliant clouds. On the one hand, outsourcing companies have to comply with legal requirements. On the other hand, cloud providers have to support their customers in achieving compliance with these legal requirements when processing data in the cloud. Consequently, the questions arise of when IT outsourcing to clouds is lawful, which legal requirements apply to data processing in clouds, and how cloud providers can support their customers in achieving legal compliance.
In this thesis, answers to these questions are given by performing a legal analysis identifying the legal requirements and a technical analysis identifying how those requirements can be addressed in the context of cloud computing. Further, an information flow analysis is performed, resulting in a system-theoretical model that can describe information flow control in clouds based on the security classification of virtual resources and hardware resources. A proof-of-concept implementation based on the OpenStack open-source cloud platform shows that information flow control can be implemented as part of cloud management and that legal compliance can be monitored and reported based on the actual assignment of virtual resources to hardware resources. Thereby, cloud providers are able to provide cloud customers with cloud resources that are automatically assigned to hardware resources complying with the customers' legal requirements. This consequently empowers cloud customers to utilise cloud resources according to their legal requirements and to keep control of managing the legal compliance of their data processing in clouds.
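As a toy illustration of the kind of classification-based placement check such a model enables (the names and the simple ordered lattice are assumptions, not the thesis' OpenStack implementation):

```python
# Toy sketch of label-based information flow control for resource
# assignment: a virtual resource may only be placed on hardware whose
# security class dominates the resource's classification.
LEVELS = {'public': 0, 'internal': 1, 'confidential': 2}

def may_assign(virtual_class, hardware_class):
    """Allow placement only if the hardware's clearance dominates."""
    return LEVELS[hardware_class] >= LEVELS[virtual_class]

def compliant_hosts(vm_class, hosts):
    """hosts: dict host -> security class; keep only legally usable hosts."""
    return [h for h, c in hosts.items() if may_assign(vm_class, c)]

print(compliant_hosts('confidential',
                      {'host1': 'public', 'host2': 'confidential'}))
# -> ['host2']
```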
Entity disambiguation is the task of mapping ambiguous terms in natural-language text to their corresponding entities in a knowledge base. It finds application in the extraction of structured data in RDF (Resource Description Framework) from textual documents, but equally in facilitating artificial intelligence applications such as semantic search, reasoning, and question answering. We propose a new collective, graph-based disambiguation algorithm utilizing semantic entity and document embeddings for robust entity disambiguation. Robust thereby refers to the property of achieving better than state-of-the-art results over a wide range of very different data sets. Our approach is also able to abstain if no appropriate entity can be found for a specific surface form. Our evaluation shows that our approach achieves significantly (>5%) better results than all other publicly available disambiguation algorithms on 7 of 9 datasets without data-set-specific tuning. Moreover, we discuss the influence of the quality of the knowledge base on disambiguation accuracy and indicate that our algorithm achieves better results than non-publicly available state-of-the-art algorithms.
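A minimal sketch of the embedding-similarity step with abstention follows; the full algorithm is collective and graph-based, and all names and the threshold below are assumptions rather than the published method.

```python
# Sketch: rank candidate entities for a surface form by cosine similarity
# between the document context embedding and each entity embedding, and
# abstain when no candidate is similar enough.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def disambiguate(doc_vec, candidates, threshold=0.4):
    """candidates: dict entity id -> embedding vector.
    Returns the best-matching entity, or None (abstain)."""
    best, best_score = None, -1.0
    for entity, vec in candidates.items():
        score = cosine(doc_vec, vec)
        if score > best_score:
            best, best_score = entity, score
    return best if best_score >= threshold else None
```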
A configurable system enables users to derive individual system variants based on a selection of configuration options. To cope with the often huge number of possible configurations, several analysis approaches (e.g., for verification of configurable systems) implement different strategies to account for configurability. One popular strategy, often applied in practice, is to use sampling (i.e., analyzing only a subset of all system variants). While sampling reduces the analysis effort significantly, the information obtained is necessarily incomplete, as some variants are not analyzed. A second strategy is to identify the common parts and the variable parts of a configurable system and analyze each part separately (the feature-based strategy). As a third strategy, researchers have begun to develop family-based analyses. Family-based approaches analyze the code base of a configurable system as a whole, rather than the individual variants or parts of the system, this way exploiting similarities among individual variants to reduce analysis effort. Each of these three strategies has advantages and disadvantages, which might even prevent its application (e.g., the family-based strategy typically needs much main memory). The goal of this thesis is to enable the efficient analysis of configurable systems, even if existing strategies fail (e.g., the family-based strategy, because of memory limitations). To this end, we designed a framework that models the key aspects of configurable-system analysis strategies, independent of their implementation and of the analysis techniques (e.g., type checking or model checking). Guided by our model, we developed a number of analysis strategies for configurable systems. To learn about advantages and disadvantages of individual strategies, we compared these in a series of empirical studies. In particular, we developed and evaluated a model-checking analysis and a data-flow analysis for configurable systems. One of our key findings is that family-based analysis outperforms most sampling heuristics with respect to analysis time, while being able to make definite statements about all variants of a configurable system. Furthermore, we identified advantages and disadvantages of analysis strategies and how to mitigate them by combining strategies. In our endeavor, we identified two key problems that are common to configurable-system analyses, and we developed supporting techniques to solve them. These techniques are general and applicable beyond our research. In particular, we developed presence-condition simplification and variability encoding. Presence-condition simplification provides a simple method to reduce the size of the output or the internal data structures of configurable-system analyses. Variability encoding provides a means of transforming compile-time variability to run-time variability, which enables many family-based analyses.
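A toy illustration of presence-condition simplification, assuming presence conditions are Boolean formulas over configuration options; it uses sympy's logic module, whereas the thesis' own technique and tooling may differ:

```python
# Presence conditions accumulated during a family-based analysis are often
# representable far more compactly after Boolean simplification.
from sympy import symbols
from sympy.logic.boolalg import Or, And, Not, simplify_logic

A, B, C = symbols('FEATURE_A FEATURE_B FEATURE_C')

# A condition as it might accumulate during analysis of variant code:
pc = Or(And(A, B), And(A, Not(B)), And(A, C))
print(simplify_logic(pc))  # -> FEATURE_A
```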
Our key contributions are the model of analysis strategies for configurable systems and the corresponding empirical comparisons of strategies. Our findings are backed by empirical studies, which have helped broaden the community's knowledge on analyses of configurable systems (as indicated by citations). For these evaluations, we prepared several subject systems, which have also been used by other researchers. Furthermore, we developed several analysis tools and demonstrated their feasibility in practical application scenarios based on code from, for example, the Linux kernel. Our tools are based on variability-aware optimizations that enable levels of scalability on configurable systems that were not possible with other tools before.
This dissertation sets out to deepen our understanding of the causes and manifestations of digital inequality through the lens of technology adoption research. To this day, digital inequality remains an important and relevant societal issue. With the rapid proliferation of digital ICT over the last 20 years, the nature of the phenomenon may have evolved from an access-based to an appropriation-based issue, yet its significance has not diminished. Against this backdrop, this dissertation aims to explore which mechanisms and factors influence why and how individuals use ICT in the context of digital inequality, and, in particular, what role social influence, socio-cognitive processes, and socio-economic determinants play. In a series of essays, this dissertation first develops a theoretical understanding of the concept of social influence, which plays an important role in determining whether and how individuals use a technology. Next, a process lens is adopted to explore the underlying mechanisms that drive individuals to disengage from a new technology and may lead to digital exclusion. Building on that, this dissertation examines how digital inequality manifests itself in the specific realm of e-commerce. It concludes with a practical perspective on the issue of digital inequality aimed at policy makers seeking to bridge the gap between digitally advantaged and disadvantaged users.
Ever since the inception of the Internet, researchers have been both enthusiastic and concerned about the social implications of Internet-enabled digitization (DiMaggio, Hargittai, Neuman, & Robinson, 2001). In particular, the issue of unequal access to digital opportunities has garnered substantial research attention and has been termed ‘digital inequality’ (Hargittai & Hinnant, 2008; Hsieh, Rai, & Keil, 2008; Kvasny & Keil, 2006; Riggins & Dewan, 2005). Generally, digital inequality refers to the unequal opportunity and ability of individuals to profit from information and communication technologies (ICT) (DiMaggio & Hargittai, 2001). The phenomenon of digital inequality has also been at the heart of public debates because, in light of ongoing digitization, actively using ICT is increasingly becoming a prerequisite for full participation in society. This thesis seeks to expand research on this complex and societally relevant phenomenon. Specifically, I aim to explore the following focal research question:
How do individuals use ICTs and which mechanisms and factors influence individual use and non-use of ICTs in the context of digital inequality?
Advancing our knowledge in this field is particularly relevant for the following reasons. First, while understanding all stages of digital inequality is essential both for assessing the true severity of the phenomenon and for developing measures to bridge it, a large part of research has so far focused on ICT access and adoption. Yet whether digital inequality eventually translates into inequality in ‘real life’ is determined by whether individuals can use ICTs to their advantage and benefit from digital opportunities. This thesis seeks to address this research gap by shifting the attention to the factors and mechanisms that drive individual differences in ICT appropriation, as opposed to ICT access and adoption. Second, digital inequality research still stands to profit from a broader methodological foundation. In fact, most of what we know about digital inequality is based on the quantitative analysis of surveys and statistical data, which might limit research in exploring and better understanding the more complex and multi-layered forms of digital inequality as evident in ICT appropriation. This thesis aims at strengthening the methodological foundation of digital inequality research and at generating new and rich insights by adopting so far underrepresented research methods, in particular qualitative and Internet-enabled data tracking methodology. Third, digital inequality is an interdisciplinary research field, and different insights have been gained across a diverse range of academic disciplines. In this thesis I also seek to lay a sound theoretical foundation for my own research and the research of others by integrating these otherwise separate perspectives on digital inequality. Fourth, better understanding digital inequality and potential means to bridge it is of high societal relevance. Therefore, this thesis also aims at inferring implications not only for academic research but also for practitioners, in particular public policy makers. The thesis comprises four papers that seek to address the points outlined above.
This doctoral thesis is devoted to generalizing border bases to the module setting and to applying them in various ways.
First, we generalize the theory of border bases to finitely generated modules over a polynomial ring. We characterize these generalized border bases and show that we can compute them. As an application, we are able to characterize subideal border bases in various new ways and give a new algorithm for their computation. Moreover, we prove Schreyer's Theorem for border bases of submodules of free modules of finite rank over a polynomial ring.
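For readers unfamiliar with the classical notion being generalized here, a hedged sketch of the ideal case (in the style of Kreuzer and Robbiano) reads as follows:

```latex
% Hedged sketch of the classical ideal case that the thesis generalizes to
% submodules of free modules: let O = \{t_1, \dots, t_\mu\} be an order
% ideal of terms in P = K[x_1, \dots, x_n] with border
\[
  \partial O = (x_1 O \cup \dots \cup x_n O) \setminus O
             = \{ b_1, \dots, b_\nu \}.
\]
% A set G = \{g_1, \dots, g_\nu\} of polynomials of the form
\[
  g_j = b_j - \sum_{i=1}^{\mu} c_{ij}\, t_i , \qquad c_{ij} \in K,
\]
% is an O-border basis of a zero-dimensional ideal I \subseteq P if
% G \subseteq I and the residue classes of t_1, \dots, t_\mu form a
% K-vector-space basis of P/I.
```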
In the second part of this thesis, we study the effect of homogenization on border bases of zero-dimensional ideals. This yields the new concept of projective border bases of homogeneous one-dimensional ideals. We show that there is a one-to-one correspondence between projective border bases and zero-dimensional closed subschemes of weighted projective spaces that have no point on the hyperplane at infinity. Applying that correspondence, we can characterize uniform zero-dimensional closed subschemes of weighted projective spaces that have a rational support over the base field in various ways. Finally, we introduce projective border basis schemes as specific subschemes of border basis schemes. We show that these projective border basis schemes parametrize all zero-dimensional closed subschemes of a weighted projective space whose defining ideals possess a projective border basis. Assuming that the base field is algebraically closed, we are able to prove that the set of all closed points of a projective border basis scheme that correspond to a uniform subscheme is a constructible set with respect to the Zariski topology.
The well-known Riemann Mapping Theorem states the existence of a conformal map of a simply connected proper domain of the complex plane onto the upper half plane. One of the main topics in geometric function theory is to investigate the behaviour of the mapping functions at the boundary of such domains. In this work, we always assume that a piecewise analytic boundary is given. Here, we have to distinguish between regular and singular boundary points. While the asymptotic behaviour at regular boundary points can be investigated by using the Schwarz reflection at analytic arcs, the situation at singular boundary points is far more complicated. In the latter scenario two cases have to be differentiated: analytic corners and analytic cusps. The first part of the thesis deals with the asymptotic behaviour at analytic corners where the opening angle is greater than 0. The results of Lichtenstein and Warschawski on the asymptotic behaviour of the Riemann map and its derivatives at an analytic corner are presented, as well as the much stronger result of Lehman that the mapping function can be developed in a certain generalised power series, which in turn makes it possible to examine the o-minimal content of the Riemann Mapping Theorem. To obtain a similar statement for domains with analytic cusps, it is necessary to investigate the asymptotic behaviour of a Riemann map at the cusp and, based on this result, to determine the asymptotic power series expansion. Therefore, the aim of the second part of this work is to investigate the asymptotic behaviour of a Riemann map at an analytic cusp. A simply connected domain has an analytic cusp if the boundary is locally given by two analytic arcs such that the interior angle vanishes. Besides the asymptotic behaviour of the mapping function, the behaviour of its derivatives, its inverse, and the derivatives of the inverse are analysed. Finally, we present a conjecture on the asymptotic power series expansion of the mapping function at an analytic cusp.
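For orientation, the model case behind these asymptotic results can be summarized as follows (a hedged sketch, not the thesis' precise statements):

```latex
% Model behaviour at an analytic corner: if the boundary has an analytic
% corner at z_0 with interior angle \alpha\pi, \alpha > 0, a Riemann map
% \varphi behaves to first order like the power map straightening the corner:
\[
  \varphi(z) = C\,(z - z_0)^{1/\alpha}\,\bigl(1 + o(1)\bigr),
  \qquad z \to z_0 .
\]
% Lehman's theorem refines this to a generalised asymptotic series with
% exponents k + m/\alpha (k, m nonnegative integers), in which logarithmic
% terms may additionally occur when \alpha is rational.
```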
Web search engines have become an indispensable online service to retrieve content on the Internet. However, using search engines raises serious privacy issues, as the latter gather large amounts of data about individuals through their search queries. Two main techniques have been proposed to privately query search engines. A first category of approaches, called unlinkability, aims at disassociating the query and the identity of its requester. A second category of approaches, called indistinguishability, aims at hiding users’ queries or interests by either obfuscating users’ queries or forging new fake queries. This paper presents a study of the level of protection offered by three popular solutions: Tor-based, TrackMeNot, and GooPIR. For this purpose, we present an efficient and scalable attack, SimAttack, leveraging a similarity metric to capture the distance between preliminary information about the users (i.e., their query histories) and a new query. SimAttack de-anonymizes up to 36.7 % of queries protected by an unlinkability solution (i.e., Tor-based), and identifies up to 45.3 and 51.6 % of queries protected by indistinguishability solutions (i.e., TrackMeNot and GooPIR, respectively). In addition, SimAttack de-anonymizes 6.7 % more queries than state-of-the-art attacks and dramatically improves the performance of the attack on TrackMeNot by 23.6 %, while retaining an execution time faster by two orders of magnitude.
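The following is a schematic reconstruction of the attack idea, not the published SimAttack implementation: user histories and the incoming query are represented as bag-of-words vectors, and the query is attributed to the most similar profile.

```python
# Sketch: attribute a query to the user whose history profile is most
# similar to it; all names and the threshold are illustrative assumptions.
import math
from collections import Counter

def cosine(c1, c2):
    num = sum(c1[w] * c2[w] for w in set(c1) & set(c2))
    den = math.sqrt(sum(v * v for v in c1.values())) * \
          math.sqrt(sum(v * v for v in c2.values()))
    return num / den if den else 0.0

def deanonymize(query, histories, threshold=0.1):
    """histories: dict user -> list of past queries (strings).
    Returns the most similar user, or None if no profile passes the bar."""
    q = Counter(query.lower().split())
    best, score = None, threshold
    for user, qs in histories.items():
        profile = Counter(w for s in qs for w in s.lower().split())
        s = cosine(q, profile)
        if s > score:
            best, score = user, s
    return best
```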
Following an “agency-oriented Urban Theory” as advanced by Smith (2001), this study takes the urban landscape of Vinh City in Central Vietnam as a starting point for an investigation of multiple visions of modernity (Eisenstadt, 2000) put forward by social actors, as well as of the urban change resulting from the implementation of such visions. Focusing on the period from 1973 to 2011, it traces the application of three different visions for urban development in Vinh: the Socialist City, the Modern and Civilized City, and the Participatory City. The projects presented in this study that aim at implementing these visions in Vinh have one thing in common: they are informed by a specific view of what a city is and what it should be, and their implementation aims at changing the city in the desired direction. This goal involves not only physical change of the city, but also institutional change in the urban society. To grasp the interplay between visions of a modern city, their application through concrete projects, and the results of these implementations, the study operates with two specific terms: modern projects and urban change. After introducing Vinh and its history, the thesis presents the period of the vision of the Socialist City and its application in Vinh through cooperation between Vietnam and the German Democratic Republic in the 1970s. It then moves on to the contemporary period starting in the 1990s, during which varying and conflicting modern projects for the city were put forward by different social actors cooperating in joint projects on urban development: the Modern and Civilized City and the Participatory City. While the modes of cooperation differed between the two periods, the study concludes with the argument that the impact of these transnational projects has led to path-dependent, as well as ambivalent, urban change in Vinh.
This thesis attempts to investigate the Noether, Dedekind, and Kähler differents for a 0-dimensional scheme X in the projective n-space P^n_K over an arbitrary field K. In particular, we focus on studying the relations between the algebraic structure of these differents and geometric properties of the scheme X.
In Chapter 1 we give an outline of the problems this thesis is concerned with, a brief literature review for each problem, and the main results regarding these problems. Chapter 2 contains background results that we need in the subsequent chapters. We introduce the concept of maximal p_j-subschemes of a 0-dimensional scheme X and give some descriptions of them and their Hilbert functions. Furthermore, we generalize the notion of a separator of a subscheme of X of degree deg(X)-1 to a set of separators of a maximal p_j-subscheme of X. In Chapter 3 we explore the Noether, Dedekind, and Kähler differents for 0-dimensional schemes X. First we define these differents for X, then look at how to compute them and examine their relations. We also answer the question "What are the Hilbert functions of these differents?" in some cases.
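For orientation, a hedged sketch of the classical affine definitions of two of these differents (following Kunz; the thesis works with graded analogues for homogeneous coordinate rings):

```latex
% Let \mu : R \otimes_K R \to R be the multiplication map of an algebra
% R/K and I = \ker(\mu). The Noether different is
\[
  \vartheta_N(R/K) = \mu\bigl(\operatorname{Ann}_{R \otimes_K R}(I)\bigr),
\]
% and the Kaehler different is the 0-th Fitting ideal of the module of
% Kaehler differentials:
\[
  \vartheta_K(R/K) = F_0\bigl(\Omega^1_{R/K}\bigr).
\]
```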
In Chapter 4 we use the differents to investigate the Cayley-Bacharach property of 0-dimensional schemes over an arbitrary field K. The principal results of this chapter are characterizations of CB-schemes and of arithmetically Gorenstein schemes in terms of their Dedekind differents, and a criterion for a 0-dimensional smooth scheme to be a complete intersection. We also generalize some results, such as Dedekind's formula and the characterization of the Cayley-Bacharach property, by using liaison theory. In addition, several propositions on uniformities are proven. In Chapter 5 we study the Noether, Dedekind, and Kähler differents for special classes of schemes and identify some applications of these differents. First, we investigate these differents for reduced 0-dimensional almost complete intersections X in P^n_K over a perfect field K. Then we investigate the relationships between these differents and the i-th Fitting ideals of the module of Kähler differentials of the homogeneous coordinate ring of X. Finally, we look more closely at the Hilbert functions and the regularity indices of these differents for fat point schemes.
The Will to Play. Performance and Construction of Royal Masculinity in Early Modern History Plays
(2015)
This thesis examines concepts of masculinity in the early modern period, with the main focus on the dramatic construction of the figure of the king. On the basis of ten history plays of the 1590s, it first investigates the discursive complexity of royal masculinity in the Renaissance and then, building on this, analyzes its performative representation. The theoretical part discusses masculinity and rule in Elizabethan England with the help of contemporary texts, and extends this discussion with gender discourse and the performativity of gender. The subsequent methodological part develops from these findings a semiotics of royal masculinity, which is then evaluated in the analytical part on the basis of the selected history plays.
This doctoral thesis is dedicated to the analysis and the design of symmetric cryptographic algorithms.
In the first part of the dissertation, we deal with fault-based attacks on cryptographic circuits, which belong to the field of active implementation attacks and aim to retrieve secret keys stored on such chips. Our main focus lies on the cryptanalytic aspects of those attacks. In particular, we target block ciphers with a lightweight and (often) non-bijective key schedule where the derived subkeys are (almost) independent from each other. An attacker who is able to reconstruct one of the subkeys is thus not necessarily able to directly retrieve other subkeys, or even the secret master key, by simply reversing the key schedule. We introduce a framework based on differential fault analysis that makes it possible to attack block ciphers that have an arbitrary number of independent subkeys and rely on a substitution-permutation network. These methods are then applied to the lightweight block ciphers LED and PRINCE, and we show in both cases how to recover the secret master key requiring only a small number of fault injections. Moreover, we investigate approaches that utilize algebraic instead of differential techniques for the fault analysis and discuss their advantages and drawbacks. At the end of the first part of the dissertation, we explore fault-based attacks on the block cipher Bel-T, which also has a lightweight key schedule but is based not on a substitution-permutation network but on the so-called Lai-Massey scheme. The framework mentioned above is thus not usable against Bel-T. Nevertheless, we also present techniques for the case of Bel-T that enable full recovery of the secret key in a very efficient way using differential fault analysis.
In the second part of the thesis, we focus on authenticated encryption schemes. While regular ciphers only protect the privacy of processed data, authenticated encryption schemes also secure its authenticity and integrity. Many of these ciphers are additionally able to protect the authenticity and integrity of so-called associated data. This type of data is transmitted unencrypted but nevertheless must be protected from being tampered with during transmission. Authenticated encryption is nowadays the standard technique to protect in-transit data. However, most of the currently deployed schemes have deficits and there are many leverage points for improvements. With NORX we introduce a novel authenticated encryption scheme supporting associated data. This algorithm was designed with high security, efficiency in both hardware and software, simplicity, and robustness against side-channel attacks in mind. Alongside its specification, we present special features, security goals, implementation details, and extensive performance measurements, and discuss advantages over currently deployed standards. Finally, we describe our preliminary security analysis, in which we investigate differential and rotational properties of NORX. Noteworthy are in particular the newly developed techniques for differential cryptanalysis of NORX, which exploit the power of SAT and SMT solvers and have the potential to be easily adaptable to other encryption schemes as well.
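To illustrate how SAT and SMT solvers enter such an analysis, the sketch below encodes a single differential transition through NORX's nonlinear operation H(a, b) = (a XOR b) XOR ((a AND b) << 1) as a satisfiability query in Z3's Python API; the word size and the query are illustrative assumptions, not the thesis' actual toolchain.

```python
# Check whether a chosen input difference (da, db) can propagate to an
# output difference dc through NORX's nonlinear operation H.
from z3 import BitVec, BitVecVal, Solver, sat

W = 32  # word size (illustrative; NORX uses 32- or 64-bit words)

def H(a, b):
    return (a ^ b) ^ ((a & b) << 1)

def differential_possible(da, db, dc):
    a, b = BitVec('a', W), BitVec('b', W)
    s = Solver()
    s.add(H(a, b) ^ H(a ^ da, b ^ db) == BitVecVal(dc, W))
    return s.check() == sat

# The zero differential trivially propagates; a single-bit flip in `a`
# can pass straight through to the same output bit (take b = 0):
print(differential_possible(0, 0, 0))  # True
print(differential_possible(1, 0, 1))  # True
```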
The aim of this dissertation is to investigate Kähler differential algebras and their Hilbert functions for 0-dimensional schemes in P^n. First we give relations between the Kähler differential 1-forms of a fat point scheme and those of other fat point schemes. Then we determine the Hilbert polynomial and give a sharp bound for the regularity index of the module of Kähler differential m-forms, for 0<m<n+2. Next, we examine the Kähler differential algebras of fat point schemes whose supports lie on non-singular conics in P^2. Finally, we prove the Segre bounds for equimultiple fat point schemes in P^4; this result allows us to determine the regularity index of the module of Kähler differential 1-forms, and a sharp bound for the regularity index of the module of Kähler differential m-forms, for 1<m<6.
This thesis is divided into two parts. The first part is devoted to the curvature estimation of piecewise smooth curves using variation diminishing splines. The variation diminishing property, combined with the ability to reconstruct linear functions, leads to a convexity-preserving approximation, which is crucial if additional sign changes in the curvature estimation have to be avoided. To this end, we first establish the foundations of variation diminishing transforms and introduce the Bernstein and the Schoenberg operators on the space of continuous functions, along with their generalizations to the Lp-spaces. In order to be able to detect C2-singularities in piecewise smooth curves, we establish lower estimates for the approximation error in terms of the second-order modulus of smoothness for Schoenberg's variation diminishing operator. Afterwards, we consider smooth curve approximations using only finitely many samples of the curve, where the approximation and its first and second derivatives converge uniformly to the corresponding parts of the curve to be approximated. In this case, we can show that the estimated curvature converges uniformly to the true curvature as the number of samples goes to infinity. Based on the lower estimates that relate the decay rate of the approximation error to smoothness, we propose a multi-scale algorithm to estimate the curvature and to detect C2-singularities. We numerically evaluate our algorithm and compare it to others to show that it achieves competitive accuracy while our curvature estimations are significantly faster to compute.
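As a small illustration of the operator at the heart of this approach, the sketch below evaluates Schoenberg's variation diminishing operator, V f = sum_j f(xi_j) B_j with Greville abscissae xi_j, and derives a curvature estimate from its derivatives; the knot choice and scales are illustrative assumptions, not the thesis' multi-scale algorithm.

```python
# Schoenberg's variation diminishing spline and a curvature estimate for
# the graph of f, using scipy's BSpline.
import numpy as np
from scipy.interpolate import BSpline

def schoenberg(f, knots, degree):
    """Variation diminishing spline approximation of f on the given knots."""
    t = np.asarray(knots, dtype=float)
    n = len(t) - degree - 1          # number of B-spline basis functions
    # Greville abscissae: averages of `degree` consecutive interior knots.
    greville = np.array([t[j + 1: j + degree + 1].mean() for j in range(n)])
    return BSpline(t, f(greville), degree)

# Example: estimate the curvature of the graph of f(x) = sin(x).
f = np.sin
k = 3
interior = np.linspace(0, np.pi, 30)
t = np.r_[[0.0] * k, interior, [np.pi] * k]   # clamped knot vector
s = schoenberg(f, t, k)
x = np.linspace(0.1, np.pi - 0.1, 7)
curvature = np.abs(s(x, 2)) / (1 + s(x, 1) ** 2) ** 1.5
print(np.c_[x, curvature])  # compare with |sin x| / (1 + cos^2 x)^{3/2}
```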
The second part deals with generalizations of the established lower estimates for the Schoenberg operator. We show that such estimates can be obtained for linear operators on a general Banach function space with smooth range, provided that the iterates of the operator converge uniformly and that a semi-norm defined on the range of the operator annihilates the fixed points of the operator. To this end, we prove by spectral arguments that the iterates of every positive finite-rank operator converge uniformly. As a highlight of this thesis, we show a constructive way, using a Gramian matrix in which the dual fixed points operate on the fixed points of an operator, to derive the limit of the iterates of an arbitrary quasi-compact operator defined on a general Banach space.
Most major airports collect recordings of the positions of aircraft at specific times. These data typically require extensive smoothing and correction before they can be used for later analysis. Conventional smoothing approaches fail to model the movement in a physically correct way, i.e., they do not take standstills of aircraft into account.
In this thesis we develop a method to detect standstills, employ robust smoothing splines for data fitting, add adequate boundary conditions for the detected standstill periods (i.e., force the function to be constant during a standstill and the entry and exit directions of the standstill to be identical), and give an algorithm to solve those approximation problems efficiently.
In the process we give an explicit proof of the convergence of the IRLS algorithm proposed by Huber for solving M-type estimates in non-linear approximation problems. Furthermore, we derive a blueprint for a method to solve separable quadratic least squares problems with very few quadratic variables.
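For reference, a minimal sketch of Huber's IRLS iteration in the linear case (the thesis' convergence proof concerns the non-linear setting, which this toy version does not cover); all names and constants are illustrative assumptions:

```python
# Iteratively reweighted least squares (IRLS) for a Huber M-estimate of a
# linear regression: residuals beyond the cutoff c get down-weighted.
import numpy as np

def huber_irls(A, y, c=1.345, iters=50, eps=1e-8):
    """Minimize sum_i rho_c(y_i - (A beta)_i) for Huber's loss rho_c."""
    beta = np.linalg.lstsq(A, y, rcond=None)[0]    # ordinary LS start
    for _ in range(iters):
        r = y - A @ beta
        w = np.where(np.abs(r) <= c, 1.0, c / np.maximum(np.abs(r), eps))
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
        if np.linalg.norm(beta_new - beta) < eps:
            break
        beta = beta_new
    return beta

# Gross outliers barely move the Huber fit, unlike ordinary least squares:
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2 * x + 1 + 0.05 * rng.standard_normal(50)
y[::10] += 5                                       # contaminate 5 points
A = np.c_[x, np.ones_like(x)]
print(huber_irls(A, y))   # noticeably closer to [2, 1] than the plain LS fit
```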