TY - THES
A1 - März, Armin
T1 - Three Essays on Understanding Mobile Consumer Behavior: Business Models, Perceptions, and Features
N2 - For about a decade, consumers have been carrying the Internet in their pockets. The rapid penetration of modern smartphones has meant that more than two-thirds of the people in the West can access and use online resources, anytime and anywhere. Consumers can also communicate and share their consumption experiences instantaneously. Because smartphones serve as constant companions, platforms can reach users through highly personal communication channels, even for time-critical events. Many mobile applications and their basic services and content are also available for free. The digital and mobile worlds thus are changing the very means of communication, creating a pressing need for marketing research and practice to find the opportunities and meet the challenges of the mobile Internet. In particular, scientific investigations are required to describe new business models in the free e-service industry and the consumer behavior affected by mobile features. This thesis examines these topics in three essays. Study 1 considers business models that offer their services without charge. Offering services for free is symptomatic not only of mobile apps (90% of all apps are available for free) but of the digital economy in general. For companies offering free e-services, this situation raises several important questions: Irrespective of the access device, how do customers of free e-services contribute value without paying? What are the nature and dynamics of nonmonetary value contributions by nonpaying customers? With a literature review and interviews with senior executives of free e-service providers, Study 1 presents a comprehensive overview of nonmonetary value contributions in the free e-service sector, including word of mouth, co-production, and network effects. Moreover, adding attention and data to this framework reveals two further aspects that have not been addressed in prior customer value research. By putting the findings in the context of the existing literature on customer value and customer engagement, this study sheds light on the complex processes of value creation in the emerging e-service sector, while advancing marketing and service research in general. Study 2 deepens the findings from the first study; specifically, it focuses on the way mobile users co-produce content and how this contribution is perceived by recipients in the network. With field data and a scenario experiment, this study demonstrates that recipients evaluate mobile-generated customer reviews fundamentally differently from other reviews. In particular, they discount the helpfulness of mobile reviews, due to their text-specific content and style particularities. The very fact that a review has been identified as written on a mobile device also lowers recipients' perceptions of its value. Recipients use information about the device as a source cue to assess their compatibility with the review contribution channel. If they perceive themselves as compatible with the method used to generate the review (mobile or non-mobile), recipients regard the review as more helpful, because they attribute the review to the quality of the reviewed subject. If they perceive it as incompatible, though, recipients assume that the review reflects the personal dispositions of the reviewer and discount its helpfulness.
Finally, Study 3 takes up attention and cross-market network effects in a mobile setting, two nonmonetary dimensions identified by Study 1. Platform providers should develop measures to draw the attention of nonpaying customers to the offers of their paying customers. One attention-grabbing, mobile-specific feature is the push notification to the device, which provides information about temporally or spatially relevant events. More concretely, Study 3 investigates how mobile push notifications remind users of upcoming deadlines in online auctions and thereby improve late bidding success. Late bidding is a prevalent strategy in which bidders submit their bids at the very end of an online auction. This research uses field data from an online auction platform to demonstrate that late bidders use these mobile push notifications more frequently than do bidders with different bidding patterns. Within the group of late bidders, the chance to win an auction increases with their use of push notifications. After a mobile push notification, late bidders submit bids through mobile devices but also through non-mobile channels. Less experienced late bidders also benefit from push notifications, which increase their chances of success. In summary, this dissertation contributes to an enhanced understanding of mobile consumer behavior by using various methods, including qualitative interviews, field observations, and online experiments. From a theoretical perspective, it contributes to current knowledge about nonmonetary customer value contributions in general and their role in mobile settings in particular. This thesis highlights the role of mobile devices in co-production and perceptions of co-produced content. It also reveals how mobile-specific interactive features, like push notifications, affect late bidding efficiency. It thereby specifies the role of mobile devices in cross-market effects, in that they enable the platform to direct the relationship between buyers and sellers. The insights presented herein encourage managers to reevaluate their current practices, think about whether they should label co-produced content as generated through a mobile channel or not, and contemplate whether to develop mobile push notifications as helpful features for users (not as intrusive marketing messages).
KW - E-Services
KW - Mobile
KW - Internet
KW - Consumer Behavior
KW - Marketing
KW - Mobiles Internet
KW - Verbraucherverhalten
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-3948
ER -
TY - THES
A1 - Alsarem, Mazen
T1 - Semantic Snippets via Query-Biased Ranking of Linked Data Entities
N2 - In our knowledge-driven society, the acquisition and the transfer of knowledge play a principal role. Web search engines are, in a sense, tools for knowledge acquisition and transfer from the web to the user. The search engine results page (SERP) consists mainly of a list of links and snippets (excerpts from the results). The snippets are used to express, as efficiently as possible, the way a web page may be relevant to the query. As an extension of the existing web, the semantic web or "web 3.0" is designed to convert the presently available web of unstructured documents into a web of data consumable by both humans and machines. The resulting web of data and the current web of documents coexist and interconnect via multiple mechanisms, such as embedded structured data or automatic annotation.
In this thesis, we introduce a new interactive artifact for the SERP: the "Semantic Snippet". Semantic Snippets rely on the coexistence of the two webs to facilitate the transfer of knowledge to the user thanks to a semantic contextualization of the user's information need. They make apparent the relationships between the information need and the most relevant entities present in the web page. The generation of semantic snippets is mainly based on the automatic annotation of Linked Open Data (LOD) entities in web pages. The annotated entities have different levels of importance, usefulness, and relevance. Even with state-of-the-art solutions for the automatic annotation of LOD entities within web pages, there is still a lot of noise in the form of erroneous or off-topic annotations. Therefore, we propose a query-biased algorithm (LDRANK) for the ranking of these entities. LDRANK adopts a strategy based on the linear consensual combination of several sources of prior knowledge (any form of contextual knowledge, like the textual descriptions of the nodes of the graph) to modify a PageRank-like algorithm. For generating semantic snippets, we use LDRANK to find the most relevant entities in the web page. Then, we use a supervised learning algorithm to link each selected entity to excerpts from the web page that highlight the relationship between the entity and the original information need. In order to evaluate our semantic snippets, we integrate them in ENsEN (Enhanced Search Engine), a software system that enhances the SERP with semantic snippets. Finally, we use crowdsourcing to evaluate the usefulness and the efficiency of ENsEN.
N2 - In unserer heutigen Wissensgesellschaft spielen der Erwerb und die Weitergabe von Wissen eine zentrale Rolle. Internetsuchmaschinen fungieren als Werkzeuge für den Erwerb und die Weitergabe von Wissen aus dem Web an den Nutzer. Die Ergebnisliste einer Suchmaschine (SERP) besteht grundsätzlich aus einer Liste von Links und Textauszügen (Snippets). Diese Snippets sollen auf möglichst effiziente Weise ausdrücken, inwiefern eine Webseite für die Suchanfrage relevant ist. Als Erweiterung des bestehenden Internets überführt das semantische Web - auch genannt "Web 3.0" - das momentan vorhandene Internet der unstrukturierten Dokumente in ein Internet der Daten, das sowohl von Menschen als auch Maschinen verwendet werden kann. Das neu geschaffene Internet der Daten und das derzeitige Internet der Dokumente existieren gleichzeitig und sie sind über eine Vielzahl von Mechanismen miteinander verbunden, wie beispielsweise über eingebettete strukturierte Daten oder eine automatische Annotation. In dieser Arbeit stellen wir ein neues interaktives Artefakt für das SERP vor: Das "Semantische Snippet". Semantische Snippets stützen sich auf die Koexistenz der beiden Arten des Internets, um mit Hilfe der Kontextualisierung des Informationsbedürfnisses eines Nutzers die Weitergabe von Wissen zu erleichtern. Sie stellen die Verbindung zwischen dem Informationsbedürfnis und den besonders relevanten Entitäten einer Webseite heraus. Die Erzeugung semantischer Snippets basiert überwiegend auf der automatisierten Annotation von Webseiten mit Entitäten aus der Linking Open Data Cloud (LOD). Die annotierten Entitäten besitzen unterschiedliche Ebenen hinsichtlich Wichtigkeit, Nützlichkeit und Relevanz. Selbst bei state-of-the-art Lösungen zur automatisierten Annotation von LOD-Entitäten in Webseiten gibt es stets ein großes Maß an Rauschen in Form von fehlerhaften oder themenfremden Annotationen.
Wir stellen deshalb einen anfragegetriebenen Algorithmus (LDRANK) für das Ranking dieser Entitäten vor. LDRANK setzt eine Strategie ein, die auf der linearen Konsensus-Kombination (engl. linear consensual combination) mehrerer a-priori Wissensquellen (jedwede Art von Kontextwissen, wie beispielsweise die textuelle Beschreibung der Knoten des Graphen) basiert, um damit den PageRank-Algorithmus zu modifizieren. Zur Generierung semantischer Snippets finden wir zunächst mit Hilfe von LDRANK die relevantesten Entitäten in einer Webseite. Anschließend verwenden wir ein überwachtes Lernverfahren, um jede ausgewählte Entität denjenigen Abschnitten der Webseite zuzuordnen, die die Beziehung zwischen der Entität und dem ursprünglichen Informationsbedarf am besten herausstellen. Um unsere semantischen Snippets zu evaluieren, integrieren wir sie in ENsEN (Enhanced Search Engine), ein Softwaresystem, das die SERP um semantische Snippets erweitert. Zum Abschluss bewerten wir die Nützlichkeit und die Effizienz von ENsEN mittels Crowdsourcing.
KW - Semantic Snippets
KW - Entity Ranking
KW - Web of Data
KW - World Wide Web 3.0
KW - Suchmaschine
KW - Suchmaschinenoptimierung
KW - Ranking
Y1 - 2017
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-3959
ER -
TY - THES
A1 - Tomashevich, Victor
T1 - Fault Tolerance Aspects of Virtual Massive MIMO Systems
N2 - Employment of a very large number of antennas is seen as the key technology to provide future users with very high data rates. At the same time, the implementation complexity will rise due to the large memories required and the sophisticated signal processing algorithms employed. Continuous technology downscaling allows implementation of such complex digital designs. At the same time, its inherent variability and vulnerability to physical disturbances violate the assumption of perfectly reliable hardware operation. This work considers Unique Word OFDM, which represents an alternative to standard Cyclic Prefix OFDM and provides superior detection quality. The generalization of Unique Word OFDM to a MIMO system is performed, which allows its interpretation as a virtual massive MIMO system with only a few physical antennas. Detection methods for the introduced generalization are discussed and their performance is quantified. Because of the large memory size required, linear detection represents the cost- and performance-effective solution. Possible memory errors due to radiation effects or voltage scaling are addressed, and a nonlinear MMSE detection algorithm is proposed. This algorithm keeps track of the memory errors and is able to significantly mitigate their effect on the quality of the estimated data. Apart from memory issues, the reliability of the actual computational hardware that constitutes the receiver is also of concern in this work. A custom implementation of the MMSE Sorted Givens Rotations is subjected to transient fault injection. The impact of faults in various parts of the implemented circuit on the detection performance is quantified. The most vulnerable components of the implemented circuit in terms of reliability are identified. Security is another major focus of this work, since most current implementations include cryptographic devices. Fault-based attacks on such systems are known to be able to extract the secret key in feasible time. The remaining part of this work addresses such fault injection-based malicious attacks. Countermeasures based on a combination of information and hardware redundancy are considered.
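As background to the linear detection stage discussed above: the standard linear MMSE detector, which the proposed nonlinear variant extends, has the closed form
\[ \hat{\mathbf{x}} = \left( \mathbf{H}^{H}\mathbf{H} + \sigma^{2}\mathbf{I} \right)^{-1} \mathbf{H}^{H}\mathbf{y}, \]
where \(\mathbf{y}\) is the receive vector, \(\mathbf{H}\) the channel matrix, \(\sigma^{2}\) the noise variance, and \((\cdot)^{H}\) the conjugate transpose. This is the textbook formulation (assuming unit-power symbols) and serves only as a reference point; it is not the error-tracking algorithm developed in the thesis itself.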
Recently introduced robust codes target such attacks by providing guaranteed detection capability. The performance of these codes is assessed by application to actual cryptographic and general-purpose circuits. The work introduces metrics that help to identify fault locations in the circuit that could escape detection with high probability. These locations are targeted by transistor resizing, which renders fault injection unfeasible.
KW - MIMO Systems
KW - MIMO
KW - Fehlertoleranz
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4047
ER -
TY - THES
A1 - Wölfl, Andreas
T1 - Data Management in Certified Avionics Systems
N2 - Data management is a cornerstone for any kind of information system - including the aerospace and aviation sector. In contrast to conventional domains, software development in the avionics domain must adhere to a legally binding certification process, called qualification. The success of the process depends on compliance with international standards, such as DO-178: Software Considerations in Airborne Systems and Equipment Certification. From a software developer's perspective, challenges arise in terms of methods and tools. Techniques that have a potential impact on the deterministic and predictable execution of avionics software are prohibited. The objective of this thesis' research is to develop a scalable method to realize data-management for multi-variant avionics software under the restrictions and constraints of the domain. Since avionics software faces very long-term life-cycles (up to 75 years), a particular focus is placed on maintenance and evolution. Based on the insights gained in a semi-structured interview at Airbus Helicopters, industrially established approaches to implementing qualified avionics software are first assessed and then compared with respect to their strengths and weaknesses for data-management. As a result, a novel development approach is proposed, combining model-based techniques and product-line technology to derive the source code of highly specific data-management variants, as well as the majority of assets required for the qualification process, from a declarative system specification. In order to demonstrate the practicability of the approach in industry, a framework is presented that is deployed and applied at Airbus Helicopters to generate qualifiable data-management components for the variants of the NH90 helicopter. The maintainability is shown by means of a domain-specific optimization, in which the model-based and generative approach is used to establish safe memory overlays at compile-time. Key findings reveal a substantially reduced memory footprint (29.1% in a real-world scenario), as well as a significantly facilitated implementation process, which would not be accomplishable using conventional methods for software development in the avionics domain.
N2 - Datenverwaltung ist der Grundstein für jegliche Art von Informationssystemen – so auch in der Luft- und Raumfahrt. Im Unterschied zu konventionellen Domänen unterliegt die Software für Avionik-Systeme einem gesetzlich vorgeschriebenen Zertifizierungsprozess, genannt Qualifizierung. Der Erfolg dieses Vorgangs hängt primär von der Einhaltung internationaler Sicherheitsnormen ab. Aus der Sicht eines Software-Entwicklers ergeben sich hier Herausforderungen hinsichtlich erlaubter Methoden und Werkzeuge, denn Techniken, die den deterministischen Ablauf eines Programms gefährden können, sind verboten.
Das Ziel dieser Arbeit ist es, ein skalierbares Verfahren zu entwickeln, um eine Datenmanagement-Komponente unter Einhaltung der Sicherheitsnormen für Avionik-Systeme in mehreren Varianten nicht nur zu realisieren, sondern auch langfristig warten zu können (bis zu 75 Jahre). Basierend auf den Erkenntnissen, die durch ein semi-strukturiertes Entwickler-Interview bei Airbus Helicopters gewonnen wurden, werden industriell etablierte Methoden zur Implementierung und Qualifizierung von Avionik-Software zunächst analysiert und anschließend hinsichtlich ihrer Stärken und Schwächen für Datenmanagement bewertet. Als Resultat wird ein neuartiger Ansatz vorgestellt, der durch eine Kombination aus modellbasierten Methoden und Produktlinien-Technologie sowohl den Quellcode einer spezifischen Datenmanagement-Variante als auch weitere Erzeugnisse, die für die Qualifizierung zu erbringen sind, aus einer deklarativen Systemspezifikation ableitet. Als Beispiel für die Praktikabilität des Verfahrens wird die Architektur einer Werkzeugkette vorgestellt, die bei Airbus eingesetzt wird, um qualifizierbare Datenmanagement-Varianten für die eingebettete Software des NH90 Helikopters zu generieren. Die Wartbarkeit wird durch eine domänenspezifische Optimierung demonstriert, die durch den modellbasierten und generativen Ansatz eine sichere Überlappung von Speicherbereichen zur Compile-Zeit ermöglicht. Zu den Ergebnissen zählen nicht nur verringerter Speicherverbrauch (29,1% in einem realen Szenario), sondern auch eine effiziente Umsetzung, die mit den etablierten Methoden zur Software-Entwicklung in der Avionik-Domäne nicht zu erreichen wäre.
KW - Avionik
KW - Datenverarbeitung
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5758
ER -
TY - THES
A1 - Jovanovic, Philipp
T1 - Analysis and Design of Symmetric Cryptographic Algorithms
N2 - This doctoral thesis is dedicated to the analysis and the design of symmetric cryptographic algorithms. In the first part of the dissertation, we deal with fault-based attacks on cryptographic circuits, which belong to the field of active implementation attacks and aim to retrieve secret keys stored on such chips. Our main focus lies on the cryptanalytic aspects of those attacks. In particular, we target block ciphers with a lightweight and (often) non-bijective key schedule, where the derived subkeys are (almost) independent of each other. An attacker who is able to reconstruct one of the subkeys is thus not necessarily able to directly retrieve other subkeys, or even the secret master key, by simply reversing the key schedule. We introduce a framework based on differential fault analysis that allows us to attack block ciphers that employ an arbitrary number of independent subkeys and rely on a substitution-permutation network. These methods are then applied to the lightweight block ciphers LED and PRINCE, and we show in both cases how to recover the secret master key with only a small number of fault injections. Moreover, we investigate approaches that utilize algebraic instead of differential techniques for the fault analysis and discuss their advantages and drawbacks. At the end of the first part of the dissertation, we explore fault-based attacks on the block cipher Bel-T, which also has a lightweight key schedule but is based not on a substitution-permutation network but on the so-called Lai-Massey scheme. The framework mentioned above is thus not usable against Bel-T.
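To illustrate the general principle of differential fault analysis used in this part, the following is a minimal sketch on a toy one-round cipher; the 4-bit S-box, the single-bit fault model, and all names are invented for illustration and are unrelated to LED, PRINCE, or Bel-T.

import random

# Toy 4-bit S-box (invented for illustration, not from any real cipher)
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def encrypt(x, k):
    # One toy "round": substitute the state x, then XOR the secret key k.
    return SBOX[x] ^ k

def encrypt_faulty(x, k, e):
    # Same round, but a fault flips the bit pattern e at the S-box input.
    return SBOX[x ^ e] ^ k

def dfa_recover_key(pairs):
    # Intersect key candidates over several (correct, faulty) output pairs:
    # c ^ c* = SBOX[x] ^ SBOX[x ^ e] constrains the state x, and k = c ^ SBOX[x].
    candidates = set(range(16))
    for c, c_star in pairs:
        pair_candidates = set()
        for x in range(16):
            for e in (1, 2, 4, 8):  # assumed single-bit faults
                if SBOX[x] ^ SBOX[x ^ e] == c ^ c_star:
                    pair_candidates.add(c ^ SBOX[x])
        candidates &= pair_candidates
    return candidates

key = 0xA  # secret to be recovered
pairs = []
for _ in range(8):  # a handful of fault injections
    x = random.randrange(16)           # unknown internal state
    e = random.choice((1, 2, 4, 8))    # unknown single-bit fault
    pairs.append((encrypt(x, key), encrypt_faulty(x, key, e)))
print(dfa_recover_key(pairs))  # candidate set shrinks towards {10}

Real attacks target the last rounds of a full cipher and must handle wider states and richer fault models, but the candidate-intersection pattern is the same.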
Nevertheless, we also present techniques for the case of Bel-T that enable full recovery of the secret key in a very efficient way using differential fault analysis. In the second part of the thesis, we focus on authenticated encryption schemes. While regular ciphers only protect the privacy of processed data, authenticated encryption schemes also secure its authenticity and integrity. Many of these ciphers are additionally able to protect the authenticity and integrity of so-called associated data. This type of data is transmitted unencrypted but nevertheless must be protected from being tampered with during transmission. Authenticated encryption is nowadays the standard technique to protect in-transit data. However, most of the currently deployed schemes have deficits, and there are many leverage points for improvements. With NORX, we introduce a novel authenticated encryption scheme supporting associated data. This algorithm was designed with high security, efficiency in both hardware and software, simplicity, and robustness against side-channel attacks in mind. Alongside its specification, we present special features, security goals, implementation details, and extensive performance measurements, and discuss advantages over currently deployed standards. Finally, we describe our preliminary security analysis, in which we investigate differential and rotational properties of NORX. Particularly noteworthy are the newly developed techniques for differential cryptanalysis of NORX, which exploit the power of SAT and SMT solvers and have the potential to be easily adaptable to other encryption schemes as well.
N2 - Diese Doktorarbeit beschäftigt sich mit der Analyse und dem Entwurf von symmetrischen kryptographischen Algorithmen. Im ersten Teil der Dissertation befassen wir uns mit fehlerbasierten Angriffen auf kryptographische Schaltungen, welche dem Gebiet der aktiven Seitenkanalangriffe zugeordnet werden und auf die Rekonstruktion geheimer Schlüssel abzielen, die auf diesen Chips gespeichert sind. Unser Hauptaugenmerk liegt dabei auf den kryptoanalytischen Aspekten dieser Angriffe. Insbesondere beschäftigen wir uns dabei mit Blockchiffren, die eine leichtgewichtige und (oft) nicht-bijektive Schlüsselexpansion besitzen, bei denen die erzeugten Teilschlüssel voneinander (nahezu) unabhängig sind. Ein Angreifer, dem es gelingt, einen Teilschlüssel zu rekonstruieren, ist dadurch nicht in der Lage, direkt weitere Teilschlüssel oder sogar den Hauptschlüssel abzuleiten, indem er einfach die Schlüsselexpansion umkehrt. Wir stellen Techniken basierend auf differenzieller Fehleranalyse vor, die es ermöglichen, Blockchiffren zu analysieren, welche eine beliebige Anzahl unabhängiger Teilschlüssel einsetzen und auf Substitutions-Permutations-Netzwerken basieren. Diese Methoden werden im Anschluss auf die leichtgewichtigen Blockchiffren LED und PRINCE angewandt, und wir zeigen in beiden Fällen, wie der komplette geheime Schlüssel mit einigen wenigen Fehlerinjektionen rekonstruiert werden kann. Darüber hinaus untersuchen wir Methoden, die algebraische statt differenzielle Techniken der Fehleranalyse einsetzen, und diskutieren deren Vor- und Nachteile. Am Ende des ersten Teils der Dissertation befassen wir uns mit fehlerbasierten Angriffen auf die Blockchiffre Bel-T, welche ebenfalls eine leichtgewichtige Schlüsselexpansion besitzt, jedoch nicht auf einem Substitutions-Permutations-Netzwerk, sondern auf dem sogenannten Lai-Massey-Schema basiert. Die oben genannten Techniken können daher bei Bel-T nicht angewandt werden.
Nichtsdestotrotz werden wir auch für den Fall von Bel-T Verfahren vorstellen, die in der Lage sind, den vollständigen geheimen Schlüssel sehr effizient mit Hilfe von differenzieller Fehleranalyse zu rekonstruieren. Im zweiten Teil der Doktorarbeit beschäftigen wir uns mit authentifizierenden Verschlüsselungsverfahren. Während gewöhnliche Chiffren nur die Vertraulichkeit der verarbeiteten Daten sicherstellen, gewährleisten authentifizierende Verschlüsselungsverfahren auch deren Authentizität und Integrität. Viele dieser Chiffren sind darüber hinaus in der Lage, auch die Authentizität und Integrität von sogenannten assoziierten Daten zu gewährleisten. Daten dieses Typs werden in nicht-verschlüsselter Form übertragen, müssen aber dennoch gegen unbefugte Veränderungen auf dem Transportweg geschützt sein. Authentifizierende Verschlüsselungsverfahren bilden heutzutage die Standardtechnologie, um Daten während der Übertragung zu beschützen. Aktuell eingesetzte Verfahren weisen jedoch oftmals Defizite auf, und es existieren vielfältige Ansatzpunkte für Verbesserungen. Mit NORX stellen wir ein neuartiges authentifizierendes Verschlüsselungsverfahren vor, welches assoziierte Daten unterstützt. Dieser Algorithmus wurde vor allem im Hinblick auf Einsatzgebiete mit hohen Sicherheitsanforderungen, Effizienz in Hardware und Software, Einfachheit und Robustheit gegenüber Seitenkanalangriffen entwickelt. Neben der Spezifikation präsentieren wir besondere Eigenschaften, angestrebte Sicherheitsziele, Details zur Implementierung und umfassende Performanz-Messungen und diskutieren Vorteile gegenüber aktuellen Standards. Schließlich stellen wir Ergebnisse unserer vorläufigen Sicherheitsanalyse vor, bei der wir uns vor allem auf differenzielle Merkmale und Rotationseigenschaften von NORX konzentrieren. Erwähnenswert sind dabei vor allem die für die differenzielle Kryptoanalyse von NORX entwickelten Techniken, die auf die Effizienz von SAT- und SMT-Solvern zurückgreifen und das Potential besitzen, relativ einfach auch auf andere Verschlüsselungsverfahren übertragen werden zu können.
KW - cryptography
KW - cryptanalysis
KW - authenticated encryption
KW - NORX
KW - fault-based attacks
KW - Kryptologie
KW - Computersicherheit
Y1 - 2015
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-3319
ER -
TY - THES
A1 - Ruppert, Julia
T1 - Asymptotic Expansion for the Time Evolution of the Probability Distribution Given by the Brownian Motion on Semialgebraic Sets
N2 - In this thesis, we examine whether the probability distribution given by the Brownian Motion on a semialgebraic set is definable in an o-minimal structure, and we establish asymptotic expansions for the time evolution. We study the probability distribution as an example for the occurrence of special parameterized integrals of a globally subanalytic function and of the exponential function of a globally subanalytic function. This work is motivated by the work of Comte, Lion and Rolin, who considered parameterized integrals of globally subanalytic functions, of Cluckers and Miller, who examined parameterized integrals of constructible functions, and by the work of Cluckers, Comte, Miller, Rolin and Servi, who treated oscillatory integrals of globally subanalytic functions. In the one-dimensional case, we show that the probability distribution on a family of sets definable in an o-minimal structure is definable in the Pfaffian closure. In the two-dimensional case, we investigate asymptotic expansions for the time evolution.
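Concretely, the studied quantity can be written as the heat-kernel integral over the semialgebraic set A; the normalization below is the standard one for Brownian Motion started at x_0 and is given here only to fix ideas:
\[ P_t(A) = \int_{A} \frac{1}{(2\pi t)^{n/2}} \exp\left( -\frac{|x - x_0|^{2}}{2t} \right) dx. \]
For semialgebraic A, the integrand is the exponential of a globally subanalytic function, which is exactly the class of parameterized integrals referred to above.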
As time t approaches zero, we show that the integrals behave like a Puiseux series, which is not necessarily convergent. As t tends towards infinity, we show that the probability distribution is definable in the expansion of the real ordered field by all restricted analytic functions, provided the semialgebraic set is bounded. For this purpose, we apply results for parameterized integrals of globally subanalytic functions by Lion and Rolin. By establishing the asymptotic expansion of the integrals over an unbounded set, we demonstrate that this expansion has the form of a convergent Puiseux series with negative exponents and their logarithms. Subsequently, we obtain that the asymptotic expansion is definable in an o-minimal structure. Finally, we study the three-dimensional case and prove that the probability distribution given by the Brownian Motion behaves like a Puiseux series as time t tends towards zero. As t approaches infinity and the semialgebraic set is bounded, it can be ascertained from results of Cluckers and Miller that the probability distribution has the form of a constructible function and is therefore definable in an o-minimal structure. If the semialgebraic set is unbounded, we establish the asymptotic expansions and prove that the probability distribution given by the Brownian Motion on unbounded sets has an asymptotic expansion of the form of a constructible function. In consequence, the asymptotic expansion is definable in an o-minimal structure.
KW - o-minimality
KW - asymptotic expansions
KW - globally subanalytic sets
KW - exponential parameterized integrals
KW - Brownian Motion
KW - Brownsche Bewegung
KW - O-Minimalität
KW - Asymptotische Entwicklung
Y1 - 2017
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5069
ER -
TY - THES
A1 - Boshe, Patricia
T1 - Data Protection Legal Reforms in Africa
N2 - This work illustrates reform approaches in Africa using an international comparative legal approach. The research uses Tanzania and Senegal as primary case studies and France, the United Kingdom and Germany as secondary case studies to illustrate how Europe reformed data protection regimes through the transposition of the EU Data Protection Directive of 1995. Chapter one introduces the work, explaining the forces behind data protection regulation and their basis. Chapter two provides a 'back-to-back' comparison of three countries (France, Germany and the United Kingdom) against the 1995 Data Protection Directive. The idea behind this chapter is to show how legal culture and pre-existing notions of the right to privacy inform data protection legal reforms and determine the nature, contents, context and interpretation of the regime adopted for data protection. Eventually, all these aspects affect the nature and extent of protection offered, regardless of the substance of the law adopted. Chapter three gives a narrative explanation of the nature and perceptions of the right to privacy in Africa and how these may affect data protection reforms in Africa. In the same vein, African customary legal systems and practices are explained, providing the reader with a picture of the overall nature of the African systems that make up an African legal culture. The overview of African privacy perceptions and legal systems is necessary for assessing the workability of any data protection regime to be adopted in Africa, which in effect answers the first research question. The chapter draws its rationale from chapter two.
In understanding African perceptions of privacy and the African legal culture, one can predict the content and context of the reforms, and perhaps how the judiciary might interpret the laws based on local perceptions and supporting systems. An overview of the African data protection architecture, or rather the human rights architecture, is provided in chapter four, ideally to give the reader a picture of the enforcement systems in Africa as a continent. This is followed by chapter five, which discusses the two major legal systems in Africa: the civil law and the common law system. The chapter also illustrates the position of the African landscape in relation to legal harmonization/unification. This aspect is considered necessary because data protection regimes are strongly focused on legal harmonization, and hence the question of how well or to what extent Africa as a continent can bring about harmonization in law becomes inevitable. Eventually, the chapter offers a comparative mirror analysis of the primary case studies, i.e. Senegal and Tanzania. The analysis covers the reform approach taken, the motivation behind the reforms, and the regime erected (through a textual analysis of the law and the draft bill, respectively). Chapter six concludes the work by answering the research questions based on the findings and scrutiny from each chapter. It is concluded that there is a very slim chance for the African States to cling to a cultural defence against the adoption of Western frameworks for data protection. It is also concluded that, unless Africa becomes an active participant in the global process that shapes data protection challenges and regulations, it faces the danger of becoming a puppet of foreign data protection regulation, which may or may not fit African legal culture. The chapter also illustrates how Africa as a continent and the African States individually have taken up data protection reforms blindly. The motivations for the reforms are vaguely stated and unclear. In the majority of legal instruments, the reforms are not undertaken as a move towards securing and protecting individual rights but rather as a purely political move influenced by economic motivations. The reforms are, to a large extent, a mere impression of alignment with global data protection regimes and hence lack the political will to enforce the laws.
KW - Data Protection Reforms
KW - African Privacy
KW - African Legal Reforms
KW - Data Protection Africa
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5147
ER -
TY - THES
A1 - Reislhuber, Josef
T1 - Optical Graph Recognition
N2 - Graphs are an important model for the representation of structural information between objects. One identifies objects with nodes and binary relations between objects with edges. Graphs have many uses, e.g., in social sciences, life sciences and engineering. There are two primary representations: abstract and visual. The abstract representation is well suited for processing graphs by computers and is given by an adjacency list, an adjacency matrix or any abstract data structure. A visual representation is used by human users who prefer a picture. Common terms are diagram, scheme, plan, or network. The objective of Graph Drawing is to transform a graph into a visual representation called the drawing of a graph. The goal is a "nice" drawing. In this thesis we introduce Optical Graph Recognition.
Optical Graph Recognition (OGR) reverses Graph Drawing and transforms a digital image of a graph into an abstract representation. Our approach consists of four phases: Preprocessing, where we determine which pixels of an image are part of the graph; Segmentation, where we recognize the nodes; Topology Recognition, where we detect the edges; and Postprocessing, where we enrich the recognized graph with additional information. We apply established digital image processing methods and make use of the special property that the image contains nodes that are connected by edges. We have focused on developing algorithms that need as few parameters as possible or that calibrate their parameters automatically. Most false recognition results are caused by crossing edges, as crossings make tracing the edges difficult and can lead to other recognition errors. We have evaluated hand-drawn and computer-drawn graphs. Our algorithms have a very high recognition rate for computer-drawn graphs, e.g., from a set of 100000 computer-drawn graphs over 90% were correctly recognized. Most false recognition results were observed for hand-drawn graphs, as they can include drawing errors and inaccuracies. For universal usability we have implemented a prototype called OGRup for mobile devices like smartphones or tablet computers. With our software it is possible to directly take a picture of a graph via a built-in camera, recognize the graph, and then use the result for further processing. Furthermore, in order to gain more insight into the way a person draws a graph by hand, we have conducted a field study.
KW - Bildverarbeitung
KW - Graphenzeichnen
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5159
ER -
TY - THES
A1 - de Ponte Müller, Fabian
T1 - Cooperative Relative Positioning for Vehicular Environments
N2 - Fahrerassistenzsysteme sind ein wesentlicher Baustein zur Steigerung der Sicherheit im Straßenverkehr. Vor allem sicherheitsrelevante Applikationen benötigen eine genaue Information über den Ort und die Geschwindigkeit der Fahrzeuge in der unmittelbaren Umgebung, um mögliche Gefahrensituationen vorherzusehen, den Fahrer zu warnen oder eigenständig einzugreifen. Repräsentative Beispiele für Assistenzsysteme, die auf eine genaue, kontinuierliche und zuverlässige Relativpositionierung anderer Verkehrsteilnehmer angewiesen sind, sind Notbremsassistenten, Spurwechselassistenten und Abstandsregeltempomaten. Moderne Lösungsansätze benutzen Umfeldsensorik wie zum Beispiel Radar, Laserscanner oder Kameras, um die Position benachbarter Fahrzeuge zu schätzen. Gemeinsame Nachteile dieser Sensorsysteme sind deren limitierte Erfassungsreichweite und die Notwendigkeit einer direkten, nicht blockierten Sichtlinie zum Nachbarfahrzeug. Kooperative Lösungen basierend auf einer Fahrzeug-zu-Fahrzeug-Kommunikation können die eigene Wahrnehmungsreichweite erhöhen, indem Positionsinformationen zwischen den Verkehrsteilnehmern ausgetauscht werden. In dieser Dissertation soll die Möglichkeit der kooperativen Relativpositionierung von Straßenfahrzeugen mittels Fahrzeug-zu-Fahrzeug-Kommunikation auf ihre Genauigkeit, Kontinuität und Robustheit untersucht werden. Anstatt die in jedem Fahrzeug unabhängig ermittelte Position zu übertragen, werden in einem neuartigen Ansatz GNSS-Rohdaten wie Pseudoranges und Doppler-Messungen ausgetauscht. Dies hat den Vorteil, dass sich korrelierte Fehler in beiden Fahrzeugen potentiell herauskürzen.
Dies wird in dieser Dissertation mathematisch untersucht, simulativ modelliert und experimentell verifiziert. Um die Zuverlässigkeit und Kontinuität auch in "gestörten" Umgebungen zu erhöhen, werden in einem Bayesischen Filter die GNSS-Rohdaten mit Inertialsensormessungen aus zwei Fahrzeugen fusioniert. Die Validierung des Sensorfusionsansatzes wurde im Rahmen dieser Dissertation in einem Verkehrs- sowie in einem GNSS-Simulator durchgeführt. Zur experimentellen Untersuchung wurden zwei Testfahrzeuge mit den verschiedenen Sensoren ausgestattet und Messungen in diversen Umgebungen gefahren. In dieser Arbeit wird gezeigt, dass auf Autobahnen die Relativposition eines anderen Fahrzeugs mit einer Genauigkeit von unter einem Meter kontinuierlich geschätzt werden kann. Eine hohe Zuverlässigkeit kann in longitudinaler und lateraler Richtung erzielt werden, und das System weist 90% der Zeit eine Unsicherheit unter 2,5 m auf. In ländlichen Umgebungen wächst die Unsicherheit in der relativen Position. Mit Hilfe der On-Board-Sensoren können GNSS-Ausfälle bei der Fahrt durch Wälder und Dörfer korrekt überbrückt werden. In städtischen Umgebungen werden die Limitierungen des Systems deutlich. Durch die erschwerte Schätzung der Fahrtrichtung des Ego-Fahrzeugs ist vor allem die longitudinale Komponente der relativen Position in städtischen Umgebungen stark verfälscht.
N2 - Advanced driver assistance systems play an important role in increasing the safety on today's roads. The knowledge about the other vehicles' positions is a fundamental prerequisite for numerous safety-critical applications, making it possible to foresee critical situations, warn the driver or autonomously intervene. Forward collision avoidance systems, lane change assistants or adaptive cruise control are examples of safety-relevant applications that require an accurate, continuous and reliable relative position of surrounding vehicles. Currently, the positions of surrounding vehicles are estimated by measuring the distance with e.g. radar, laser scanners or camera systems. However, all these techniques have limitations in their perception range, as all of them can only detect objects in their line-of-sight. The limited perception range of today's vehicles can be extended in the future by using cooperative approaches based on Vehicle-to-Vehicle (V2V) communication. In this thesis, the capabilities of cooperative relative positioning for vehicles will be assessed in terms of its accuracy, continuity and reliability. A novel approach where Global Navigation Satellite System (GNSS) raw data is exchanged between the vehicles is presented. Vehicles use GNSS pseudorange and Doppler measurements from surrounding vehicles to estimate the relative positioning vector in a cooperative way. In this thesis, this approach is shown to outperform the subtraction of absolute positions, as it is able to effectively cancel out errors common to both GNSS receivers. This is modeled theoretically and demonstrated empirically using simulated signals from a GNSS constellation simulator. In order to cope with GNSS outages and to have a sufficiently good relative position estimate even in strong multipath environments, a sensor fusion approach is proposed. In addition to the GNSS raw data, inertial measurements from speedometers, accelerometers and turn rate sensors of each vehicle are exchanged over V2V communication links. A Bayesian approach is applied to consider the uncertainties inherent to each of the information sources.
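The error cancellation that motivates exchanging raw data instead of absolute positions can be sketched with the usual pseudorange model (textbook notation, not taken from the thesis): receiver A observes satellite s as
\[ \rho_A^{s} = r_A^{s} + c\,(\delta t_A - \delta t^{s}) + I^{s} + T^{s} + \varepsilon_A^{s}, \]
with geometric range r_A^s, receiver and satellite clock offsets \delta t_A and \delta t^s, ionospheric and tropospheric delays I^s and T^s, and receiver noise \varepsilon_A^s. The between-receiver single difference for two nearby vehicles,
\[ \Delta\rho_{AB}^{s} = \rho_A^{s} - \rho_B^{s} = r_A^{s} - r_B^{s} + c\,\delta t_{AB} + \Delta\varepsilon_{AB}^{s}, \]
eliminates the satellite clock error entirely and cancels the atmospheric delays almost completely for short baselines, which is why differencing raw measurements outperforms subtracting two independently computed absolute positions.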
In a dynamic Bayesian network, the temporal evolution of the relative position estimate is predicted using relative vehicle movement models. Real-world measurements in highway, rural and urban scenarios are also performed in the scope of this work to demonstrate the performance of the cooperative relative positioning approach based on sensor fusion. The results show that the relative position of another vehicle towards the ego vehicle can be estimated with sub-meter accuracy in highway scenarios. Here, good reliability and 90% availability with an uncertainty of less than 2.5 m are achieved. In rural environments, drives through forests and towns are correctly bridged with the support of on-board sensors. In an urban environment, the difficult estimation of the ego vehicle heading has a major impact on the relative position estimate, yielding large errors in its longitudinal component.
KW - Fahrerassistenzsystem
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5411
ER -
TY - THES
A1 - Hanauer, Kathrin
T1 - Linear Orderings of Sparse Graphs
N2 - The Linear Ordering problem consists in finding a total ordering of the vertices of a directed graph such that the number of backward arcs, i.e., arcs whose heads precede their tails in the ordering, is minimized. A minimum set of backward arcs corresponds to an optimal solution of the equivalent Feedback Arc Set problem and forms a minimum Cycle Cover. Linear Ordering and Feedback Arc Set are classic NP-hard optimization problems and have a wide range of applications. Whereas both problems have been studied intensively on dense graphs and tournaments, not much is known about their structure and properties on sparser graphs. There are also only few approximation algorithms that give performance guarantees, especially for graphs with bounded vertex degree. This thesis fills this gap in multiple respects: We establish necessary conditions for a linear ordering (and thereby also for a feedback arc set) to be optimal, which provide new and fine-grained insights into the combinatorial structure of the problem. From these, we derive a framework for polynomial-time algorithms that construct linear orderings which adhere to one or more of these conditions. The analysis of the linear orderings produced by these algorithms is especially tailored to graphs with bounded vertex degrees of three and four and improves on previously known upper bounds. Furthermore, the set of necessary conditions is used to implement exact and fast algorithms for the Linear Ordering problem on sparse graphs. In an experimental evaluation, we finally show that the property-enforcing algorithms produce linear orderings that are very close to the optimum and that the exact representative delivers solutions in a timely manner also in practice. As an additional benefit, our results can be applied to the Acyclic Subgraph problem, which is complementary to Feedback Arc Set, and provide insights into the dual problem of Feedback Arc Set, the Arc-Disjoint Cycles problem.
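As a minimal illustration of the objective these algorithms optimize, the following sketch counts the backward arcs of a given vertex ordering; the graph and the ordering are invented examples:

def backward_arcs(arcs, order):
    # Arcs whose head precedes its tail in the ordering. An ordering
    # minimizing the size of this set yields an optimal feedback arc set.
    position = {v: i for i, v in enumerate(order)}
    return [(u, v) for (u, v) in arcs if position[v] < position[u]]

# A 3-cycle forces at least one backward arc in every ordering.
arcs = [("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]
print(backward_arcs(arcs, ["a", "b", "c"]))  # [('c', 'a')]

Trying all orderings and keeping the one with the fewest backward arcs is exact but exponential, which is where the structural conditions and heuristics of the thesis come in.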
KW - directed graph
KW - graph algorithms
KW - feedback arc set
KW - linear ordering
KW - cycle cover
KW - Graphentheorie
KW - Lineares Ordnungsproblem
KW - Schwach besetzte Matrix
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5524
ER -
TY - INPR
A1 - Lucke, Robin Jan
T1 - How Securitization Theory Can Benefit from Psychology Findings
N2 - Securitization Theory has been applied and advanced continuously since the publication of the seminal work "Security - A New Framework for Analysis" by Buzan et al. in 1998. Various extensions, clarifications and definitions have been added over the years. Ontological and epistemological debates as well as debates about the normativity of the concept have taken place, furthering the approach incrementally and adapting it to new empirical cases. This paper aims at contributing to the improvement of the still useful framework in a more general way by amending it with well-established findings from another discipline: Psychology. The exploratory article will point out which elements of Securitization Theory might benefit most from incorporating insights from Psychology and in which ways they might change our understanding of the phenomenon. Some well-studied phenomena in the field of (Social) Psychology, it is argued here, play an important role in the construction and perception of security threats and in the audience's acceptance of granting the executive branch extraordinary measures to counter these threats: the availability heuristic, loss aversion and social identity theory are central psychological concepts that can help us better understand how securitization works, and in which situations securitizing moves have great or little chances to reverberate. The empirical cases of the 9/11 and Paris terror attacks will serve to illustrate the potential of this approach, allowing for variance in key factors, among them: (point in) time, system of government and ideological orientation. As a hypotheses-generating pilot study, the paper will conclude by discussing further research possibilities in the field of Securitization.
KW - Securitization
KW - Psychology
KW - Copenhagen School
KW - Security Studies
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6216
ER -
TY - THES
A1 - Seidler, Anna-Raissa
T1 - Changing for the Better? Essays on the Role of Institutional Logics and Information System in Organizational Sustainability Transformations
N2 - An increasing number of companies report that eco-sustainable initiatives have a positive impact on firms' economic performance and concurrently allow the combination of social and commercial goals by optimizing environmental and economic decisions simultaneously. These initiatives are considered an integral part of organizational sustainability transformations, which are a special case of multilayered, complex organizational change efforts that relate to environmental, organizational, and individual factors. Institutional logics and information systems (IS) have proven to be two important perspectives from which to explore mechanisms and processes central to organizational sustainability transformations. Institutional logics offer a unique perspective from which to investigate organizational change for sustainability because they provide a new approach to organizational change that incorporates macro structures, culture, and agency to explain how actions are enabled or constrained.
It thus allows for insights into the complex and multifaceted interplay of external and internal determinants that govern organizational transformation processes towards sustainability. By providing insights into institutional changes of practices and behaviors, an institutional logics perspective allows for a detailed analysis of organizational transformations. Within these change processes, IS have proven to be an efficient and pervasive tool to leverage sustainability by integrating human and technological factors. Since IS have become a key resource for the encouragement of organizational sustainability transformations, adopting an IS perspective allows for an understanding of the mechanisms and processes that enable IS to foster sustainability in organizations. Thus, this dissertation draws on four studies, investigating both an institutional logics perspective and an IS perspective to explore organizational sustainability transformations and facilitate an in-depth understanding of the organizational, human, and technological factors that encourage sustainability in organizational transformations.
KW - Institutional Logics
KW - Information Systems
KW - Organizational Sustainability Transformations
KW - Organizational Change
KW - Informationssystem
KW - Organisationswandel
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6330
ER -
TY - THES
A1 - Kell, Christian
T1 - A Structure-based Attack on the Linearized Braid Group-based Diffie-Hellman Conjugacy Problem in Combination with an Attack using Polynomial Interpolation and the Chinese Remainder Theorem
N2 - This doctoral thesis is dedicated to improving a linear algebra attack on the so-called braid group-based Diffie-Hellman conjugacy problem (BDHCP). The general procedure of the attack is to transform a BDHCP into the problem of solving several simultaneous matrix equations. A first improvement is achieved by reducing the solution space of the matrix equations to matrices that have a specific structure, which we call here the left braid structure. Using the left braid structure, the number of matrix equations to be solved reduces to one. Based on the left braid structure, we are further able to formulate a structure-based attack on the BDHCP, that is, to transform the matrix equation into a system of linear equations and exploit the structure of the corresponding extended coefficient matrix, which is induced by the left braid structure of the solution space. The structure-based attack then has an empirically high probability to solve the BDHCP with significantly fewer arithmetic operations than the original attack. A third improvement of the original linear algebra attack is to use an algorithm that combines Gaussian elimination with integer polynomial interpolation and the Chinese remainder theorem (CRT), instead of fast matrix multiplication as suggested by others. The major idea here is to distribute the task of solving a system of linear equations over a giant finite field to several much smaller finite fields. Based on our empirically measured bounds for the degree of the polynomials to be interpolated and the bit size of the coefficients and integers to be recovered via the CRT, we conclude an improvement of the run time complexity of the original algorithm by a factor of n^8 bit operations in the best case, and still n^6 in the worst case.
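The distribution idea behind the third improvement can be illustrated in miniature: an integer quantity arising during elimination (here, for brevity, a determinant) is computed modulo several small primes and recovered with the CRT; the primes, the matrix, and the bound check are illustrative only, not the parameters of the actual attack.

from math import prod

def det_mod_p(matrix, p):
    # Determinant of an integer matrix modulo the prime p,
    # via Gaussian elimination over GF(p).
    a = [[x % p for x in row] for row in matrix]
    n, det = len(a), 1
    for i in range(n):
        pivot = next((r for r in range(i, n) if a[r][i]), None)
        if pivot is None:
            return 0
        if pivot != i:
            a[i], a[pivot] = a[pivot], a[i]
            det = -det
        det = det * a[i][i] % p
        inv = pow(a[i][i], -1, p)
        for r in range(i + 1, n):
            f = a[r][i] * inv % p
            for j in range(i, n):
                a[r][j] = (a[r][j] - f * a[i][j]) % p
    return det % p

def crt(residues, moduli):
    # Combine residues into the unique residue modulo prod(moduli).
    m, x = prod(moduli), 0
    for r, p in zip(residues, moduli):
        n_i = m // p
        x = (x + r * n_i * pow(n_i, -1, p)) % m
    return x

primes = [10007, 10009, 10037, 10039]
a = [[3, -1, 4], [1, 5, -9], [2, 6, 5]]
x = crt([det_mod_p(a, p) for p in primes], primes)
m = prod(primes)
print(x - m if x > m // 2 else x)  # 244; valid because prod(primes) > 2*|det|

The same pattern applies entry-wise to a row echelon form: each entry is reconstructed from its images over the small prime fields once enough primes (and interpolation points) are used.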
KW - Linear algebra attack
KW - Braid group-based cryptography
KW - Row echelon form calculation using polynomial interpolation and the Chinese remainder theorem
KW - Diffie-Hellman conjugacy problem
KW - Kryptologie
KW - Zopfgruppe
KW - Diffie-Hellman-Algorithmus
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6476
ER -
TY - THES
A1 - Mosch, Philipp
T1 - Four Essays on Digital Transformation Strategies from the Perspectives of Capital Markets, Incumbents and Start-ups
N2 - This dissertation uses four studies to examine the context-contingent strategic factors that are critical to the success of digital transformation strategies from the perspectives of capital markets, incumbents, and start-ups. It focuses on a better understanding of (1) digital innovations and their quantitative evaluation, (2) power disruptions in digitally servitized supply chains, (3) strategic measures and dynamics in digital B2B platform markets, and (4) strategizing by data-driven start-ups in digitalized business networks.
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-11337
ER -
TY - THES
A1 - Grösbrink, Carl-Friederich
T1 - Three Essays on Firm Value and Firm Risk and their Relation to IT-Exposure, Corporate Social Responsibility, and Religiosity
N2 - 1. IT-Exposure and Firm Value: We analyze the joint influence of a firm's information technology (IT) exposure and investment behavior on firm value. Estimating a firm's (partial) IT-Exposure allows for distinguishing between firms whose business model is challenged by IT above versus below the market average. Hence, we estimate the annual IT-Exposure of a firm using a 3-factor Fama-French model extended by an IT proxy. Subsequently, we analyze the relationship with Tobin's Q in a panel data context, accounting for the relationship between IT-Exposure and investments proxied by R&D as well as CapEx. We use more than 48,000 firm-year observations for firms in the Russell 3000 Index covering the period 1990 to 2018. Although IT-Exposure has a negative impact on firm value, this discount can be overcompensated by up to 2.1 times through sufficient investments in R&D and CapEx, giving a firm with an average Tobin's Q a premium of 14.8% to 19.2%, while controlling for endogeneity. 2. Corporate Social Responsibility, Risk, and Firm Value: An Unconditional Quantile Regression Approach: This paper examines the impact of corporate social responsibility (CSR) on firm risk, comprising total risk, idiosyncratic risk, and systematic risk, as well as firm value. We focus on analyzing the interrelationships along the entire distribution of the dependent variables, thus estimating an unconditional quantile regression (UQR). The analysis is based on CSR scores from Refinitiv and MSCI, using up to 12,013 firm-year observations over the period 2002 to 2019 for all U.S. companies listed on NYSE, NASDAQ, and AMEX. UQR reveals strongly heterogeneous effects along the unconditional quantiles of the dependent variables, which are reflected in sign changes as well as magnitude and significance variations. For CSR we find a risk-reducing as well as value-enhancing effect. When applying fixed-effects OLS, we can only partly confirm the risk-reducing and value-enhancing effect of CSR shown in the literature.
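For readers unfamiliar with the method: UQR in the sense of Firpo, Fortin, and Lemieux (2009) replaces the outcome by the recentered influence function of its unconditional τ-quantile and runs OLS on it; the formula below is the standard definition, not notation taken from the dissertation:
\[ \mathrm{RIF}(y;\, q_\tau) = q_\tau + \frac{\tau - \mathbb{1}\{ y \le q_\tau \}}{f_Y(q_\tau)}, \]
where q_τ is the unconditional τ-quantile of the outcome and f_Y its density at that point. Regressing RIF(y; q_τ) on the covariates yields the unconditional quantile partial effects examined in Abstracts 2 and 3.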
3. Heterogeneous Effects of Religiosity on Firm Risk and Firm Value: An Unconditional Quantile Regression Approach: This paper examines the impact of religiosity on firm risk, comprising total risk, idiosyncratic risk, and systematic risk, as well as firm value. We focus on analyzing the interrelationships along the entire distribution of the dependent variables, thus estimating an unconditional quantile regression (UQR). The analysis is based on all U.S. companies listed on NYSE, NASDAQ, and AMEX for the period from 1980 through 2020. UQR reveals strongly heterogeneous effects along the unconditional quantiles of the dependent variables, which are reflected in sign changes as well as magnitude and significance variations. Overall, the risk-reducing effect of religiosity is more pronounced in the higher quantiles of the distribution. We further observe both value-reducing and value-enhancing religiosity effects. When applying fixed-effects OLS, we can confirm the risk-reducing and non-existent value effect of religiosity shown in the literature. The robustness of our results is underpinned by a battery of additional tests.
KW - Firm Value
KW - Firm Risk
KW - IT-Exposure
KW - Corporate Social Responsibility
KW - Religiosity
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-11558
ER -
TY - THES
A1 - Seruset, Marco
T1 - Three Essays on Price Discovery, Stock Liquidity, and Crash Risk
N2 - Abstract 1: This paper investigates whether market quality, uncertainty, investor sentiment and attention, and macroeconomic news affect bitcoin price discovery in spot and futures markets. Over the period December 2017 - March 2019, we find significant time variation in the contribution to price discovery of the two markets. Increases in price discovery are mainly driven by relative trading costs and volume, and by uncertainty to a lesser extent. Additionally, medium-sized trades contain the most information in terms of price discovery. Finally, higher news-based bitcoin sentiment increases the informational role of the futures market, while attention and macroeconomic news have no impact on price discovery. Abstract 2: We investigate whether local religious norms affect stock liquidity for U.S. listed companies. Over the period 1997-2020, we find that firms located in more religious areas have higher liquidity, as reflected by lower bid-ask spreads. This result persists after the inclusion of additional controls, such as governance metrics, and further sensitivity and endogeneity analyses. Subsample tests indicate that the impact of religiosity on stock liquidity is particularly evident for firms operating in a poor information environment. We further show that firms located in more religious areas have a lower price impact of trades and a smaller probability of information-based trading. Overall, our findings are consistent with the notion that religiosity, with its antimanipulative ethos, probably fosters trust in corporate actions and information flows, especially when little is known about the firm. Finally, we conjecture an indirect firm value implication of religiosity through the channel of stock liquidity. Abstract 3: This study shows that higher physical distance to institutional shareholders is associated with higher stock price crash risk. Since monitoring costs increase with distance, the results are consistent with the monitoring theory of local institutional investors.
Cross-sectional analyses show that the effect of proximity on crash risk is more pronounced for firms with weak internal governance structures. The significant relation between distance and crash risk still holds after the implementation of the Sarbanes-Oxley Act, albeit to a lesser extent. The existence of a bad-news-hoarding channel is also confirmed. Finally, I show that there is heterogeneity in the distance-induced monitoring activities of different types of institutions. Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-11563 ER - TY - THES A1 - Lehner, Sabrina T1 - The Asymptotic Behaviour of the Riemann Mapping Function at Analytic Cusps N2 - The well-known Riemann Mapping Theorem states the existence of a conformal map of a simply connected proper domain of the complex plane onto the upper half plane. One of the main topics in geometric function theory is to investigate the behaviour of the mapping functions at the boundary of such domains. In this work, we always assume that a piecewise analytic boundary is given. Here, we have to distinguish between regular and singular boundary points. While the asymptotic behaviour at regular boundary points can be investigated by using the Schwarz reflection at analytic arcs, the situation for singular boundary points is far more complicated. In the latter scenario, two cases have to be differentiated: analytic corners and analytic cusps. The first part of the thesis deals with the asymptotic behaviour at analytic corners where the opening angle is greater than 0. The results of Lichtenstein and Warschawski on the asymptotic behaviour of the Riemann map and its derivatives at an analytic corner are presented, as well as the much stronger result of Lehman that the mapping function can be developed in a certain generalised power series, which in turn makes it possible to examine the o-minimal content of the Riemann Mapping Theorem. To obtain a similar statement for domains with analytic cusps, it is necessary to investigate the asymptotic behaviour of a Riemann map at the cusp and, based on this result, to determine the asymptotic power series expansion. Therefore, the aim of the second part of this work is to investigate the asymptotic behaviour of a Riemann map at an analytic cusp. A simply connected domain has an analytic cusp if the boundary is locally given by two analytic arcs such that the interior angle vanishes. Besides the asymptotic behaviour of the mapping function, the behaviour of its derivatives, its inverse, and the derivatives of the inverse are analysed. Finally, we present a conjecture on the asymptotic power series expansion of the mapping function at an analytic cusp. N2 - Der wohlbekannte Riemannsche Abbildungssatz liefert die Existenz einer konformen Abbildung eines einfach zusammenhängenden, echten Teilgebietes der komplexen Ebene auf die obere Halbebene. Die Untersuchung des Verhaltens solcher Abbildungen am Rand der Gebiete ist ein zentrales Thema der geometrischen Funktionentheorie. In der vorliegenden Arbeit gehen wir stets von einem stückweise analytischen Rand aus. Dabei müssen wir reguläre und singuläre Randpunkte unterscheiden. Während das Verhalten einer Riemann-Abbildung an regulären Randpunkten mit Hilfe des Schwarzschen Spiegelungsprinzips an analytischen Kurvenbögen einfach zu bestimmen ist, gestaltet sich dies in der Situation von singulären Randpunkten sehr viel komplizierter.
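As background for the Lehner record above, the classical corner case can be summarized in one model formula; the cusp case, where this formula degenerates, is the subject of the thesis. A hedged sketch, assuming the standard normalization:

```latex
% Model case at an analytic corner: if the two boundary arcs meet at p with
% interior angle alpha*pi (0 < alpha <= 2), a Riemann map phi of the domain
% onto the upper half plane satisfies, to leading order,
\[
\varphi(z) \;=\; a\,(z-p)^{1/\alpha}\,\bigl(1 + o(1)\bigr),
\qquad z \to p, \quad a \neq 0,
\]
% and Lehman's theorem refines this to a generalized power series with terms
% of the form (z-p)^{j + k/\alpha} (logarithmic factors may occur for rational
% alpha). At a cusp the angle vanishes, alpha -> 0, so the exponent 1/alpha
% degenerates, which is why the separate analysis carried out in this thesis
% is needed.
```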
In diesem Fall müssen wir erneut eine Unterscheidung treffen, nämlich ob es sich um eine analytische Ecke oder eine analytische Spitze handelt. Der erste Teil der Dissertation beschäftigt sich mit dem asymptotischen Verhalten einer Riemann-Abbildung an analytischen Ecken. Eine solche liegt vor, falls der Öffnungswinkel zwischen den analytischen Kurvenstücken an dem singulären Randpunkt größer als 0 ist. Es werden die Ergebnisse von Lichtenstein und Warschawski präsentiert, die sich mit dem Verhalten der Riemann-Abbildung und deren Ableitungen beschäftigt haben, sowie das stärkere Ergebnis von Lehman, welches besagt, dass die Abbildung in eine verallgemeinerte Potenzreihe entwickelt werden kann. Unter Verwendung dieses Resultats konnte bereits der o-minimale Gehalt des Riemannschen Abbildungssatzes untersucht werden. Um ein ähnliches Ergebnis für den Fall, dass das Gebiet analytische Spitzen hat, zu erhalten, benötigen wir zunächst das asymptotische Verhalten der Riemann-Abbildung an einer Spitze. Darauf aufbauend kann die asymptotische Reihenentwicklung untersucht werden. Daher zielt der zweite Abschnitt dieser Arbeit darauf ab, das Verhalten der Abbildung an einer Spitze zu bestimmen. Dabei sprechen wir von einer analytischen Spitze, wenn der Rand des Gebietes lokal durch zwei reguläre analytische Bögen gegeben ist, deren Öffnungswinkel verschwindet. Neben dem asymptotischen Verhalten der Abbildung wird auch das Verhalten der Ableitungen, ihrer Umkehrfunktion und deren Ableitungen untersucht. Abschließend präsentieren wir eine Vermutung über die asymptotische Reihenentwicklung der Abbildung an einer analytischen Spitze. KW - Riemann mapping theorem KW - analytic cusp KW - asymptotic behaviour KW - boundary behaviour KW - Geometrische Funktionentheorie KW - Randverhalten KW - Riemannscher Abbildungssatz Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-3587 ER - TY - THES A1 - Petit, Albin T1 - Introducing Privacy in Current Web Search Engines N2 - During the last few years, the technological progress in collecting, storing, and processing a large quantity of data for a reasonable cost has raised serious privacy issues. Privacy concerns many areas, but is especially important in frequently used services like search engines (e.g., Google, Bing, Yahoo!). These services allow users to retrieve relevant content on the Internet by exploiting their personal data. In this context, developing solutions that enable users to use these services in a privacy-preserving way is becoming increasingly important. In this thesis, we introduce SimAttack, an attack against existing protection mechanisms for querying search engines in a privacy-preserving way. This attack aims at retrieving the original user query. We show with this attack that three representative state-of-the-art solutions do not protect user privacy in a satisfactory manner. We therefore develop PEAS, a new protection mechanism that better protects user privacy. This solution leverages two types of protection: hiding the user identity (with a succession of two nodes) and masking users' queries (by combining them with several fake queries). To generate realistic fake queries, PEAS exploits previous queries sent by the users in the system. Finally, we present mechanisms to identify sensitive queries. Our goal is to adapt existing protection mechanisms to protect sensitive queries only, and thus save user resources (e.g., CPU, RAM). We design two modules to identify sensitive queries.
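A minimal sketch of the query-masking half of PEAS as described above: the genuine query is hidden among fake queries sampled from past user queries. Function and variable names are illustrative assumptions, not the actual PEAS implementation:

```python
# Bundle a real query with k plausible fake queries drawn from a log of past
# queries, so an adversary observing the bundle cannot tell which is genuine.
import random

PAST_QUERIES = [
    "cheap flights lisbon", "how to bake sourdough", "python list comprehension",
    "weather passau", "used cars bavaria", "symptoms of flu",
]

def mask_query(real_query: str, k: int = 2, log=PAST_QUERIES) -> list[str]:
    """Return the real query hidden among k fake queries, in random order."""
    bundle = random.sample(log, k) + [real_query]
    random.shuffle(bundle)          # the receiver cannot rely on position
    return bundle

print(mask_query("privacy preserving search"))
```

The client later discards the results belonging to the fake queries; the identity-hiding half (the succession of two relay nodes) is orthogonal and omitted here.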
By deploying these modules on real protection mechanisms, we establish empirically that they dramatically improve the mechanisms' performance. KW - Privacy KW - Search Engine KW - Unlinkability KW - Indistinguishability KW - Suchmaschine KW - Datensicherung KW - Computersicherheit KW - Privatsphäre Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4652 ER - TY - THES A1 - Kriegl, Markus T1 - Generalizations and Applications of Border Bases N2 - This doctoral thesis is devoted to generalizing border bases to the module setting and applying them in various ways. First, we generalize the theory of border bases to finitely generated modules over a polynomial ring. We characterize these generalized border bases and show that we can compute them. As an application, we are able to characterize subideal border bases in various new ways and give a new algorithm for their computation. Moreover, we prove Schreyer's Theorem for border bases of submodules of free modules of finite rank over a polynomial ring. In the second part of this thesis, we study the effect of homogenization on border bases of zero-dimensional ideals. This yields the new concept of projective border bases of homogeneous one-dimensional ideals. We show that there is a one-to-one correspondence between projective border bases and zero-dimensional closed subschemes of weighted projective spaces that have no point on the hyperplane at infinity. Applying that correspondence, we can characterize, in various ways, uniform zero-dimensional closed subschemes of weighted projective spaces that have rational support over the base field. Finally, we introduce projective border basis schemes as specific subschemes of border basis schemes. We show that these projective border basis schemes parametrize all zero-dimensional closed subschemes of a weighted projective space whose defining ideals possess a projective border basis. Assuming that the base field is algebraically closed, we are able to prove that the set of all closed points of a projective border basis scheme that correspond to a uniform subscheme is a constructible set with respect to the Zariski topology. KW - (generalized) border bases KW - Schreyer's theorem KW - weighted projective spaces KW - zero-dimensional projective schemes KW - uniformity conditions Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-3628 ER - TY - THES A1 - Liebig, Jörg T1 - Analysis and Transformation of Configurable Systems N2 - Static analysis tools and transformation engines for source code belong to the standard equipment of a software developer. Their use significantly simplifies a developer's everyday work of maintaining and evolving software systems and, hence, accounts for much of a developer's programming efficiency and productivity. This is also beneficial from a financial point of view, as programming errors are detected early and avoided in the development process; the use of static analysis tools thus reduces overall software-development costs considerably. In practice, software systems are often developed as configurable systems to account for different requirements of application scenarios and use cases. To implement configurable systems, developers often use compile-time implementation techniques, such as preprocessors with #ifdef directives.
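A toy model of the configuration-space problem that the Liebig record above addresses: with n independent #ifdef options there are 2^n configurations, which brute-force analysis must enumerate one by one, while variability-aware analysis shares results across them. All option names and presence conditions below are invented for illustration:

```python
# Each #ifdef-annotated code block has a presence condition over configuration
# options; enumerating all configurations is exponential in the option count.
from itertools import product

OPTIONS = ["DEBUG", "NET", "CRYPTO"]
BLOCKS = {                                   # block name -> presence condition
    "log_call": lambda cfg: cfg["DEBUG"],
    "send_pkt": lambda cfg: cfg["NET"],
    "encrypt":  lambda cfg: cfg["NET"] and cfg["CRYPTO"],
}

variants = set()
for values in product([False, True], repeat=len(OPTIONS)):   # 2^n configurations
    cfg = dict(zip(OPTIONS, values))
    variants.add(frozenset(b for b, cond in BLOCKS.items() if cond(cfg)))

print(f"{2 ** len(OPTIONS)} configurations, {len(variants)} distinct variants")
```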
Configuration options control the inclusion and exclusion of #ifdef-annotated source code, and their selection/deselection serves as input for generating tailor-made system variants on demand. Existing configurable systems, such as the Linux kernel, often provide thousands of configuration options, forming a huge configuration space with billions of system variants. Unfortunately, existing tool support cannot handle the myriads of system variants that can typically be derived from a configurable system. Analysis and transformation tools are not prepared for variability in source code and, hence, may process it incorrectly, resulting in incomplete and often broken tool support. We challenge the way configurable systems are analyzed and transformed by introducing variability-aware static analysis tools and a variability-aware transformation engine for configurable systems' development. The main idea of such tool support is to exploit commonalities between system variants, reducing the effort of analyzing and transforming a configurable system. In particular, we develop novel approaches for analyzing the myriads of system variants and compare them to state-of-the-art analysis approaches (namely sampling). The comparison shows that variability-aware analysis is complete (with respect to covering the whole configuration space), efficient (it outperforms some of the sampling heuristics), and scales even to large software systems. We demonstrate that variability-aware analysis is practical even for non-trivial case studies, such as the Linux kernel. On top of variability-aware analysis, we develop a transformation engine for C which respects variability induced by the preprocessor. The engine provides three common refactorings (rename identifier, extract function, and inline function) and overcomes shortcomings (completeness, use of heuristics, and scalability issues) of existing engines, while still being semantics-preserving with respect to all variants and being fast, providing an instantaneous user experience. To validate semantics preservation, we extend a standard testing approach for refactoring engines with variability and show the effectiveness and scalability of our engine in real-world case studies. In the end, our analysis and transformation techniques show that configurable systems can be analyzed and transformed efficiently (even for large-scale systems), providing the same guarantees for configurable systems as for standard systems in terms of detecting and avoiding programming errors. KW - Configurable Systems KW - Variability-aware Analysis KW - Variability-aware Refactoring KW - Software Product Lines KW - Refactoring KW - Statische Analyse KW - Präprozessor KW - Softwarewartung KW - C Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-2996 ER - TY - THES A1 - Braun, Bastian T1 - Web-based Secure Application Control N2 - The World Wide Web today serves as a distributed application platform. Its origins, however, go back to a simple delivery network for static hypertexts. The legacy from these days can still be observed in the communication protocol used by increasingly sophisticated clients and applications. This thesis identifies the actual security requirements of modern web applications and shows that HTTP does not meet them: user and application authentication, message integrity and confidentiality, control-flow integrity, and application-to-application authorization.
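As a hedged illustration of the session-handling fragility this record analyzes (the session fixation attacks and their countermeasures are described below), the following framework-neutral sketch shows the classic mitigation of renewing the session identifier at login; the storage model and names are assumptions, not the thesis's implementations:

```python
# Issue a fresh session identifier whenever the privilege level changes, so a
# session id planted by an attacker before login becomes useless afterwards.
import secrets

SESSIONS: dict[str, dict] = {}   # session id -> session data

def new_session() -> str:
    sid = secrets.token_urlsafe(32)
    SESSIONS[sid] = {"user": None}
    return sid

def login(old_sid: str, user: str, password_ok: bool) -> str:
    if not password_ok:
        return old_sid
    data = SESSIONS.pop(old_sid, {})     # invalidate the pre-login identifier
    data["user"] = user
    fresh_sid = secrets.token_urlsafe(32)
    SESSIONS[fresh_sid] = data
    return fresh_sid                     # set as the new session cookie

sid = new_session()                      # possibly fixated by an attacker
sid = login(sid, "alice", password_ok=True)
```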
We explore the other protocols in the web stack and work out why they cannot fill the gap. Our analysis shows that the underlying problem is the connectionless property of HTTP. However, history shows that a fresh start with web communication is far from realistic. As a consequence, we come up with approaches that contribute to meeting the identified requirements. We first present impersonation attack vectors that begin before the actual user authentication, i.e., when secure web interaction and authentication seem unnecessary. Session fixation attacks exploit a responsibility mismatch between the web developer and the web application framework in use. We describe and compare three countermeasures on different implementation levels: on the source code level, on the framework level, and on the network level as a reverse proxy. Then, we explain how the authentication credentials that are transmitted for the user login (the password) and for session tracking (the session cookie) can be complemented by browser-stored and user-based secrets, respectively. This way, an attacker cannot hijack user accounts merely by phishing the user's password, because an additional browser-based secret is required for login. Also, the class of well-known session hijacking attacks is mitigated, because a secret known only to the user must be provided in order to perform critical actions. In the next step, we explore alternative approaches to static authentication credentials. Our approach implements a trusted UI and a mutually authenticated session using signatures as a means to authenticate requests. This way, it establishes a trusted path between the user and the web application without exchanging reusable authentication credentials. As a downside, this approach requires support on the client side and on the server side in order to provide maximum protection. Another approach avoids client-side support but cannot implement a trusted UI and is thus susceptible to phishing and clickjacking attacks. Our approaches described so far increase the security level of all web communication at all times. This is why we investigate adaptive security policies that fit the actual risk instead of permanently restricting all kinds of communication, including non-critical requests. We develop a smart browser extension that detects when the user is authenticated on a website, meaning that she can be impersonated because all requests carry her identity proof. Uncritical communication, however, is released from restrictions to enable all intended web features. Finally, we focus on attacks targeting a web application's control-flow integrity. We explain them thoroughly, check whether current web application frameworks provide means for protection, and implement two approaches to protect web applications: The first approach is an extension for a web application framework and provides protection based on its configuration by checking all requests for policy conformity. The second approach generates its own policies ad hoc, based on the observed web traffic and assuming that regular users only click on links and buttons and fill forms but do not craft requests to protected resources. N2 - Das heutige World Wide Web ist eine verteilte Plattform für Anwendungen aller Art: von einfachen Webseiten über Online Banking, E-Mail, multimediale Unterhaltung bis hin zu intelligenten vernetzten Häusern und Städten. Seine Ursprünge liegen allerdings in einem einfachen Netzwerk zur Übermittlung statischer Inhalte auf der Basis von Hypertexten.
Diese Ursprünge lassen sich noch immer im verwendeten Kommunikationsprotokoll HTTP identifizieren. In dieser Arbeit untersuchen wir die Sicherheitsanforderungen moderner Web-Anwendungen und zeigen, dass HTTP diese Anforderungen nicht erfüllen kann. Zu diesen Anforderungen gehören die Authentifikation von Benutzern und Anwendungen, die Integrität und Vertraulichkeit von Nachrichten, Kontrollflussintegrität und die gegenseitige Autorisierung von Anwendungen. Wir untersuchen die Web-Protokolle auf den unteren Netzwerk-Schichten und zeigen, dass auch sie nicht die Sicherheitsanforderungen erfüllen können. Unsere Analyse zeigt, dass das grundlegende Problem in der Verbindungslosigkeit von HTTP zu finden ist. Allerdings hat die Geschichte gezeigt, dass ein Neustart mit einem verbesserten Protokoll keine Option für ein gewachsenes System wie das World Wide Web ist. Aus diesem Grund beschäftigt sich diese Arbeit mit unseren Beiträgen zu sicherer Web-Kommunikation auf der Basis des existierenden verbindungslosen HTTP. Wir beginnen mit der Beschreibung von Session Fixation-Angriffen, die bereits vor der eigentlichen Anmeldung des Benutzers an der Web-Anwendung beginnen und im Erfolgsfall die temporäre Übernahme des Benutzerkontos erlauben. Wir präsentieren drei Gegenmaßnahmen, die je nach Eingriffsmöglichkeiten in die Web-Anwendung umgesetzt werden können. Als nächstes gehen wir auf das Problem ein, dass Zugangsdaten im WWW sowohl zwischen den Teilnehmern zu Authentifikationszwecken kommuniziert werden als auch für jeden, der Kenntnis dieser Daten erlangt, wiederverwendbar sind. Unsere Ansätze binden das Benutzerpasswort an ein im Browser gespeichertes Authentifikationsmerkmal und das sog. Session-Cookie an ein Geheimnis, das nur dem Benutzer und der Web-Anwendung bekannt ist. Auf diese Weise kann ein Angreifer weder ein gestohlenes Passwort noch ein Session-Cookie allein zum Zugriff auf das Benutzerkonto verwenden. Darauffolgend beschreiben wir ein Authentifikationsprotokoll, das vollständig auf die Übermittlung geheimer Zugangsdaten verzichtet. Unser Ansatz implementiert eine vertrauenswürdige Benutzeroberfläche und wirkt so gegen die Manipulation derselben in herkömmlichen Browsern. Während die bisherigen Ansätze die Sicherheit jeglicher Web-Kommunikation erhöhen, widmen wir uns der Frage, inwiefern ein intelligenter Browser den Benutzer - wenn nötig - vor Angriffen bewahren kann und - wenn möglich - eine ungehinderte Kommunikation ermöglichen kann. Damit trägt unser Ansatz zur Akzeptanz von Sicherheitslösungen bei, die ansonsten regelmäßig als lästige Einschränkungen empfunden werden. Schließlich legen wir den Fokus auf die Kontrollflussintegrität von Web-Anwendungen. Bösartige Benutzer können den Zustand von Anwendungen durch speziell präparierte Folgen von Anfragen in ihrem Sinne manipulieren. Unsere Ansätze filtern Benutzeranfragen, die von der Anwendung nicht erwartet wurden, und lassen nur solche Anfragen passieren, die von der Anwendung ordnungsgemäß verarbeitet werden können. KW - Computersicherheit KW - Datensicherung KW - Internet Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-3048 ER - TY - THES A1 - Sipal, Bilge T1 - Border Basis Schemes N2 - The basic idea of border basis theory is to describe a zero-dimensional ring P/I by an order ideal of terms whose residue classes form a K-vector space basis of P/I. The O-border basis scheme is a scheme that parametrizes all zero-dimensional ideals that have an O-border basis. 
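For orientation, the standard definitions behind this record and the earlier Kriegl record, stated in the notation common in the border basis literature (Kehrein and Kreuzer); this is a sketch for the reader, not a verbatim quotation from either thesis:

```latex
% O = {t_1,...,t_mu} is an order ideal of terms in P = K[x_1,...,x_n],
% i.e., a finite set of terms closed under forming divisors. Its border is
\[
\partial\mathcal{O} \;=\; \bigl(x_1\mathcal{O} \cup \dots \cup x_n\mathcal{O}\bigr)
\setminus \mathcal{O} \;=\; \{b_1,\dots,b_\nu\}.
\]
% An O-border basis of a zero-dimensional ideal I consists of polynomials
\[
g_j \;=\; b_j - \sum_{i=1}^{\mu} \alpha_{ij}\, t_i \;\in\; I,
\qquad \alpha_{ij} \in K, \quad j = 1,\dots,\nu,
\]
% whose existence is equivalent to the residue classes of t_1,...,t_mu forming
% a K-vector space basis of P/I. The border basis scheme parametrizes the
% admissible coefficient families (alpha_ij), with defining equations coming
% from the commutativity of the associated multiplication matrices.
```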
In general, the O-border basis scheme is not an affine space. However, it is proved in [Huib09] that if an order ideal with d elements is defined in a two-dimensional polynomial ring and has one of several special shapes, then the O-border basis scheme is isomorphic to the affine space of dimension 2d. This thesis is dedicated to finding a more general condition, independent of the shape of the order ideal, for an O-border basis scheme to be isomorphic to an affine space of dimension nd, where d is the number of elements of the order ideal and n is the dimension of the polynomial ring in which it is defined. We accomplish this in six chapters. In Chapters 2 and 3 we develop the concepts and properties of border basis schemes. In Chapter 4 we transfer the smoothness criterion (see [Huib05]) for the point (0,...,0) in a Hilbert scheme of points to the monomial point of the border basis scheme by employing the tools from border basis theory. In Chapter 5 we explain the trace and Jacobi identity syzygies of the defining equations of an O-border basis scheme and characterize them by the arrow grading. In Chapter 6 we give a criterion for the isomorphism between the 2d-dimensional affine space and the O-border basis scheme by using the results from Chapters 3 and 4. The techniques from the other chapters are applied in Section 6.1 to segment border basis schemes and in Section 6.2 to O-border basis schemes for which O is of the sawtooth form. KW - Border Bases, Border Basis Scheme, Monomial point, Cotangent Space, Hilbert Schemes KW - Polynomring KW - Basis (Mathematik) KW - Kommutative Algebra, Randbasen, Randbasen Schema Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4702 ER - TY - THES A1 - Fischer, Andreas T1 - An Evaluation Methodology for Virtual Network Embedding N2 - The increasing scale and complexity of computer networks impose a need for highly flexible management mechanisms. The concept of network virtualization promises to provide this flexibility. Multiple arbitrary virtual networks can be constructed on top of a single substrate network. This allows network operators and service providers to tailor their network topologies to the specific needs of any offered service. However, the assignment of resources proves to be a problem. Each newly defined virtual network must be realized by assigning appropriate physical resources. For a given set of virtual networks, two questions arise: Can all virtual networks be accommodated in the given substrate network? And how should the respective resources be assigned? The underlying problem is commonly known as the Virtual Network Embedding problem. A multitude of algorithms has already been proposed, aiming to provide solutions to that problem under various constraints. For the evaluation of these algorithms, an empirical approach is typically adopted, using artificially created random problem instances. However, due to complex effects of random problem generation, the obtained results can be hard to interpret correctly. A structured evaluation methodology that can avoid these effects is currently missing. This thesis aims to fill that gap. Based on a thorough understanding of the problem itself, the effects of random problem generation are highlighted. A new simulation architecture is defined, increasing the flexibility for experimentation with embedding algorithms. A novel way of generating embedding problems is presented which mitigates the effects of conventional problem generation approaches.
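A minimal sketch of the conventional random problem-instance generation for Virtual Network Embedding experiments, the practice whose pitfalls the Fischer record above examines. The graph model, parameters, and resource types are arbitrary examples, not the generator proposed in the thesis:

```python
# Generate a substrate network plus a batch of virtual network requests as
# Erdos-Renyi-style random graphs with random node (CPU) and link (bandwidth)
# capacities/demands.
import random

def random_network(n_nodes, edge_prob, cap_range):
    nodes = {v: random.uniform(*cap_range) for v in range(n_nodes)}
    edges = {}
    for u in range(n_nodes):
        for v in range(u + 1, n_nodes):
            if random.random() < edge_prob:
                edges[(u, v)] = random.uniform(*cap_range)
    return nodes, edges

substrate = random_network(50, 0.1, (50, 100))
requests = [random_network(random.randint(2, 10), 0.5, (1, 20)) for _ in range(25)]
print(len(substrate[1]), "substrate links,", len(requests), "virtual requests")
```

Already here, seemingly innocent parameter choices (edge probability, capacity ranges) silently determine how hard the instances are, which is exactly why results obtained this way can be hard to interpret.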
An evaluation using these newly defined concepts demonstrates how new insights into algorithm behavior can be gained. The proposed concepts support experimenters in obtaining more precise and tangible evaluation data for embedding algorithms. KW - Virtual Network Embedding KW - Empirical Evaluation KW - Network Virtualization KW - Experimental Algorithmics KW - Virtuelles Netz KW - Virtualisierung KW - Algorithmus KW - Kombinatorische Einbettung Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4793 ER - TY - THES A1 - Löwe, Stefan T1 - Effective Approaches to Abstraction Refinement for Automatic Software Verification N2 - This thesis presents various techniques that aim at enabling more effective and more efficient approaches for automatic software verification. After briefly motivating why automatic software verification is becoming ever more relevant, we detail the formalism used in this thesis and the concepts it is built on. We then describe the design and implementation of the value analysis, an analysis for automatic software verification that tracks state information concretely. From a thorough evaluation based on well over 4,000 verification tasks from the latest edition of the International Competition on Software Verification (SV-COMP), we learn that this plain value analysis leads to an efficient verification process for many verification tasks but, at the same time, fails to solve other verification tasks due to state-space explosion. From this insight we infer that some form of abstraction technique must be added to the value analysis in order to also allow the successful verification of large and complex verification tasks. As a solution, we propose to incorporate counterexample-guided abstraction refinement (CEGAR) and interpolation into the value domain. To this end, we design a novel interpolation procedure that extracts interpolants for the value domain from infeasible counterexamples, allowing us to form a precision strong enough to exclude these infeasible counterexamples and to make progress in the CEGAR loop. We then describe several optimizations and extensions to these concepts, such that the value analysis with CEGAR becomes competitive for automatic software verification. As the next step, we combine the CEGAR-based value analysis with a predicate analysis to obtain a more precise and efficient composite analysis based on CEGAR. This composite analysis is indeed on a par with the world’s leading software verification tools, as witnessed by the results of SV-COMP’13, where this approach achieved 2nd place in the overall ranking. With competitive CEGAR-based analyses available for the value domain, the predicate domain, and their combination, we then turn our attention to techniques whose goal is to make all these CEGAR-based approaches more successful. Our first novel idea in this regard is based on the concept of infeasible sliced prefixes, which allow the computation of different precisions from a single infeasible counterexample. This adds choice to the CEGAR loop; without this enhancement, no choice of a specific precision, i.e., a specific refinement, is possible. In our evaluation we show, for both the value analysis and the predicate analysis, that choosing different infeasible sliced prefixes during the refinement step leads to major differences in verification effectiveness and verification efficiency.
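The CEGAR scheme underlying the Löwe record above can be sketched as a small driver loop; the concrete analysis, feasibility check, and interpolation-based refinement are deliberately left abstract here, since they are the thesis's actual subject. A schematic, not the CPAchecker implementation:

```python
# Counterexample-guided abstraction refinement (CEGAR), schematically:
# analyze under the current precision, check counterexamples for feasibility,
# and refine the precision to exclude spurious ones.
def cegar(analyze, is_feasible, refine, precision):
    """analyze(precision) -> counterexample or None;
    is_feasible(cex) -> bool; refine(cex, precision) -> stronger precision."""
    while True:
        cex = analyze(precision)
        if cex is None:
            return "program safe (w.r.t. the property)"
        if is_feasible(cex):
            return f"real bug found: {cex}"
        precision = refine(cex, precision)  # exclude the spurious counterexample

# Toy use: the precision is a set of tracked variables; one spurious
# counterexample forces the variable "x" to be tracked.
def analyze(prec):
    return None if "x" in prec else ("path", "x")

def is_feasible(cex):
    return False

def refine(cex, prec):
    return prec | {cex[1]}

print(cegar(analyze, is_feasible, refine, frozenset()))
```

Refinement selection, the thesis's contribution, amounts to making refine() choose among several candidate precisions instead of accepting the first one.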
Building on the concept of infeasible sliced prefixes, we define several heuristics in order to precisely select a single refinement from a set of possible refinements. We make this new concept, which we refer to as guided refinement selection, available to both the value and the predicate analysis, and in a large-scale evaluation we try to answer the question of which selection technique leads to well-suited abstractions and, thus, to a more effective verification process. Additionally, we present the idea of inter-analysis refinement selection, where the refinement component of a composite analysis may decide which of its component analyses is best refined, and in yet another evaluation we highlight the positive effects of this technique. Finally, we present the results of SV-COMP’16, where the verifier we contributed, which is based on the concepts and ideas presented in this thesis, achieved 1st place in the category DeviceDriversLinux64. KW - software verification, model checking, counterexample guided abstraction refinement, CEGAR, interpolation, sliced prefixes, refinement selection, value analysis, predicate analysis, CPAchecker, automatic, automated KW - Programmverifikation Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4815 ER - TY - THES A1 - Nagel, Volker T1 - Three Essays on Moral Self-Regulation of Honesty and Impression Management N2 - In Study 1, an introduction to the research on moral self-regulation is provided, along with an explanation of its two manifestations: moral licensing and moral cleansing. At the core of the first study is an experiment designed to identify moral licensing and cleansing in the domain of honesty. The experiment merges relevant studies from social psychology and experimental economics. It assesses whether moral self-regulation exists within the domain of honesty or, more precisely, whether truths and lies are told in such a way as to balance each other out. After manipulating participants’ moral balances (either positively or negatively), rates of truth-telling are compared to a neutral baseline scenario. Since neither moral licensing nor moral cleansing is observed, the results provide no support for the initial hypothesis that moral self-regulation exists within the domain of honesty. Study 2 builds on these results and discusses possible reasons for the absence of moral self-regulation. Research on moral hypocrisy and self-concept maintenance is presented and discussed as a possible explanation. In order to shed more light on participants’ behavior, a coding procedure is presented that was used on the dataset from Study 1. This approach makes it possible to quantify participants’ handwritten stories that resulted from the moral manipulation in Study 1 and to gain more insight into how truth-telling and lying affect the moral balance. By analyzing (dis)honesty on a more detailed level, the results show that participants tend to act consistently with what they revealed about themselves in their stories. Study 3 links together aspects of moral self-regulation, moral hypocrisy, and impression management. The "looting game" is presented, which lets participants loot money from a charity box while being subject to altruistic punishment from observers. For their punishment decision, observers are provided with a history of participants’ past actions. This design makes it possible to assess how misconduct, punishment, and the creation of a favorable impression interact and ultimately impact profits.
The results indicate that moral cleansing, and not the desire to trick observers, is the reason for manipulation. Participants who loot money from the charity box do not expect to receive less punishment; rather, they simply want to present a more favorable picture of themselves. On the other hand, observers fully account for the possibility of manipulation and tend to disregard a manipulated history. The looting game therefore calls into question the hypothesis that impressions are managed and manipulated to increase profits. KW - moral licensing KW - moral cleansing KW - dishonesty KW - self-image KW - altruistic punishment KW - Selbstevaluation KW - Ethik KW - Verhaltensökonomie KW - Theorie Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-2896 ER - TY - JOUR A1 - Krah, Hans T1 - German "volkstümliche Musik" of the Early Nineties and "Modern Society": Strategies of De-Individualisation as a Contribution to a Collective Re-Organisation N2 - ”Volkstümliche Musik” is a significant phenomenon of (at least) the early 1990s. The analysis of this phenomenon can paradigmatically represent the discursive practices and sub-thought systems of large parts of the population of the Federal Republic of Germany at that time. ”Volkstümliche Musik” depends on the political situation and indirectly deals with the needs and problems of individuals which result from such a situation and which lie in the deep structure of their mentalities. It thus takes on a cultural function. T3 - Schriften zur Kultur- und Mediensemiotik | Online - 2015.1.2 Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-2968 ER - TY - JOUR A1 - Rockenberger, Annika T1 - Materiality and Meaning in Literary Studies N2 - Non- and paraverbal properties of literary texts at the level of documentary inscription (i.e., materiality), seen individually or as aspects of a so-called ‘material text’, that is, the union of materiality and verbal sign systems, have recently received increasing attention in textual scholarship and literary studies. Here, ‘meaning’, or at least ‘semantic potentiality’, has been attributed to either or both, and physical features of texts have been construed as hitherto neglected aspects of literary communication and literary aesthetics. In what follows, I will present a brief conspectus of the current debate and then try to provide a reconstruction of the underlying ideas by answering the question ‘how does a material text mean?’. Taking a descriptive meta-perspective and focusing on conceptual and methodological clarification, I try to clarify the somewhat blurry expressions ‘meaning’, ‘to mean’, and the like by translating them into the distinct terminology of semiotics and transferring them into the theoretical framework of an instrumentalist notion of signs. T3 - Schriften zur Kultur- und Mediensemiotik | Online - 2016.2.2 KW - Behinderter Mensch KW - Gesellschaft Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4085 ER - TY - THES A1 - Kurz, Thomas T1 - Adapting Semantic Web Information Retrieval to Multimedia N2 - The amount of audio, video, and image data on the Web is growing immensely, which leads to data management problems rooted in the hidden semantics of multimedia content. Therefore, interlinking semantic concepts and media data, with the aim of bridging the gap between the Internet of documents and the Web of Data, has become a common practice.
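A hedged sketch of a metadata-based media fragment similarity, the idea elaborated later in this record: combine the overlap of two fragments' semantic annotations with the spatial overlap of their regions. The weighting, the IoU component, and all names are illustrative assumptions, not the thesis's actual measure:

```python
# Similarity of two annotated media fragments: Jaccard overlap of annotation
# labels, blended with intersection-over-union of the fragments' regions.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def bbox_iou(r, s):  # boxes as (x, y, w, h)
    x1, y1 = max(r[0], s[0]), max(r[1], s[1])
    x2 = min(r[0] + r[2], s[0] + s[2])
    y2 = min(r[1] + r[3], s[1] + s[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = r[2] * r[3] + s[2] * s[3] - inter
    return inter / union if union else 0.0

def fragment_similarity(labels_a, box_a, labels_b, box_b, w=0.7):
    return w * jaccard(labels_a, labels_b) + (1 - w) * bbox_iou(box_a, box_b)

print(fragment_similarity({"dog", "grass"}, (10, 10, 50, 40),
                          {"dog", "ball"}, (20, 15, 45, 40)))
```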
However, the value of connecting media to its semantic metadata is limited by missing access methods and the absence of an adapted query language specialized for media assets and fragments. This thesis aims to extend the standard query language of the Semantic Web (SPARQL) with media-specific concepts and functions. The main contributions of the work are an exhaustive survey of multimedia query languages of the last three decades, the SPARQL extension specification itself, and an approach for the efficient evaluation of the new query concepts. Additionally, I elaborate and evaluate a metadata-based media fragment similarity approach, which provides a basis for further language extensions. N2 - Das Wachstum multimedialer Daten wie Audio, Video und Bilder war in den letzten Jahren immens. Das Besondere an dieser Art der Daten ist die versteckte Semantik, die sich nur schwer mit herkömmlichen Information-Retrieval-Funktionen verbinden lässt und dadurch zu Problemen im Management der Multimedia-Daten führt. Konzepte des Semantic Web eignen sich allerdings sehr gut, diese Lücke zu schließen, was sich in vielen Szenarien bereits positiv etabliert hat. Nichtsdestotrotz fehlen mit geeigneten Zugriffsmethoden und einer adaptierten Anfragesprache wichtige Teile, um dieses Konzept der verlinkten Multimedia-Daten abzurunden und voll in einem End-to-End-Prozess zu verwenden. In dieser Arbeit stelle ich eine Erweiterung der Standard-Anfragesprache im Semantic Web (SPARQL) um multimedia-spezifische Funktionen vor. Der wissenschaftliche Beitrag lässt sich dabei in drei Teile gliedern: ein umfassendes Survey zu Multimedia-Anfragesprachen der letzten 30 Jahre, die Erweiterung von SPARQL inklusive einer geeigneten Methodik zur Anfrageoptimierung sowie ein Ansatz zur fragment-basierten Ähnlichkeitsberechnung von Bildern mit zugehöriger Evaluierung. KW - SPARQL KW - Semantic Web KW - Multimedia KW - Web of Data KW - SPARQL-MM Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8276 ER - TY - THES A1 - Wimbauer, Lisa Kristina T1 - Innovate with Crowds. Co-Creation and Idea Evaluation in Internal and External Crowdsourcing. N2 - Crowdsourcing seems to be a promising approach for organizations to overcome challenges widely discussed in innovation and organizational research. However, the extent to which an organization can leverage the benefits of crowdsourcing is contingent on which type of crowd is addressed and how crowds are used. Based on unique data from crowdsourcing contests, the dissertation provides insights into how to innovate with internal and external crowds in order to utilize their potential for co-creation and idea evaluation. KW - crowdsourcing KW - idea evaluation KW - collaboration KW - co-creation KW - crowd-based project allocation Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8359 ER - TY - THES A1 - Planche, Benjamin T1 - Bridging the Realism Gap for CAD-Based Visual Recognition N2 - Computer vision aims at developing algorithms to extract high-level information from images and videos. In industry, for instance, such algorithms are applied to guide manufacturing robots, to visually monitor plants, or to assist human operators in recognizing specific components.
Recent progress in computer vision has been dominated by deep artificial neural networks, i.e., machine learning methods simulating the way information flows in our biological brains and the way our neural networks adapt and learn from experience. For these methods to learn how to accurately perform complex visual tasks, large amounts of annotated images are needed. Collecting and labeling such domain-relevant training datasets is, however, a tedious, sometimes impossible, task. Therefore, it has become common practice to leverage pre-available three-dimensional (3D) models instead, to generate synthetic images for the recognition algorithms to be trained on. However, methods optimized over synthetic data usually suffer a significant performance drop when applied to real target images. This is due to the realism gap, i.e., the discrepancies between synthetic and real images (in terms of noise, clutter, etc.). In my work, three main directions were explored to bridge this gap. First, an innovative end-to-end framework is proposed to render realistic depth images from 3D models, as a growing number of solutions (especially in industry) are utilizing low-cost depth cameras (e.g., Microsoft Kinect and Intel RealSense) for recognition tasks. Based on a thorough study of these devices and the different types of noise impairing them, the proposed framework simulates their inner mechanisms, comprehensively modeling vital factors such as sensor noise, material reflectance, surface geometry, etc. Able to simulate a wide range of depth sensors and to quickly generate large datasets, this framework is used to train algorithms for various recognition tasks, consistently and significantly enhancing their performance compared to other state-of-the-art simulation tools. In some cases, however, relevant 2D or 3D object representations to generate synthetic samples are not available. Considering this different case of data scarcity, a solution is then proposed to incrementally build a representation of visual scenes from partial observations. Provided observations are localized relative to one another based on their content and registered in a global memory with spatial properties. Simultaneously, this memory can be queried to render novel views of the scene. Furthermore, unobserved regions can be hallucinated in memory, consistently with previous observations, hallucinations, and global priors. The efficacy of the proposed mnemonic and generative system, trainable end-to-end, is demonstrated on various 2D and 3D use cases. Finally, an advanced convolutional neural network pipeline is introduced, tackling the realism gap from a novel angle. While most methods addressing this problem focus on bringing synthetic samples (or the knowledge acquired from them) closer to the real target domain, the proposed solution performs the opposite process, mapping unseen target images into controlled synthetic domains. The pre-processed samples can then be handed to downstream recognition methods, themselves purely trained on similar synthetic data, to greatly improve their accuracy. For each approach, a variety of qualitative and quantitative studies are detailed, providing successful comparisons to state-of-the-art methods. By proposing solutions to bridge the realism gap from either side, as well as a pipeline to improve the acquisition and generation of new visual content, this thesis provides a unique perspective on the challenges of data scarcity when building robust recognition systems.
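A toy stand-in for the depth-image degradation idea described above: depth-dependent axial noise plus random dropout, two effects typical of low-cost depth sensors. The noise model and parameters are crude assumptions for illustration, not the comprehensive simulation developed in the thesis:

```python
# Degrade a clean synthetic depth map: Gaussian axial noise whose standard
# deviation grows with distance, plus random missing measurements.
import numpy as np

rng = np.random.default_rng(42)

def degrade_depth(depth, sigma_base=0.002, sigma_quad=0.003, dropout=0.05):
    """depth: 2-D array of metric depth values (meters)."""
    sigma = sigma_base + sigma_quad * depth ** 2   # axial noise grows with range
    noisy = depth + rng.normal(0.0, sigma)
    mask = rng.random(depth.shape) < dropout       # missing measurements
    noisy[mask] = 0.0                              # 0 encodes "no reading"
    return noisy

clean = np.full((240, 320), 1.5)                   # flat wall at 1.5 m
print(degrade_depth(clean).std().round(4))
```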
N2 - Die Computer Vision strebt an, Algorithmen zum Extrahieren hochwertiger Informationen aus Bildern und Videos zu entwickeln. In der Industrie werden solche Algorithmen beispielsweise angewendet, um Fertigungsroboter zu steuern, um Betriebe visuell zu überwachen oder um Mitarbeiter bei der Erkennung bestimmter Komponenten zu unterstützen. Die kürzlichen Fortschritte im Bereich Computer Vision wurden von tiefen künstlichen neuronalen Netzen dominiert. Diese Methoden des maschinellen Lernens (Machine Learning) simulieren die Art und Weise, in der die Information in unseren biologischen Gehirnen verarbeitet wird und in der unsere neuronalen Netze sich anpassen und aus Erfahrung lernen. Damit diese Methoden zur genauen Ausführung komplexer visueller Aufgaben befähigt werden, müssen sie mit einer großen Anzahl von annotierten Bildern trainiert werden. Die Erhebung und Kennzeichnung entsprechender Trainingsdatensätze ist jedoch eine langwierige und manchmal sogar unmögliche Aufgabe. Deswegen ist es zur gängigen Praxis geworden, stattdessen die vorhandenen 3D-Modelle zur Generierung synthetischer Bilder einzusetzen, damit die Erkennungsalgorithmen mit Hilfe dieser Bilder trainiert werden. Allerdings erleiden die Methoden, die durch synthetische Daten angepasst wurden, bei der Anwendung auf die realen Zielbilder einen erheblichen Leistungsabfall. Dies geschieht aufgrund der Realismuslücke (Realism Gap), das heißt durch die Diskrepanzen zwischen synthetischen und realen Bildern (hinsichtlich Rauschen, Störungen usw.). In meiner Arbeit wurden drei Hauptrichtungen untersucht, um diese Lücke zu schließen. Zuerst wird ein innovatives End-to-End-Framework vorgeschlagen, um realistische Tiefenbilder von 3D-Modellen zu rendern, denn immer mehr Lösungen (insbesondere in der Industrie) verwenden kostengünstige Tiefen-Kameras (z. B. Microsoft Kinect und Intel RealSense) für die Erkennungsaufgaben. Aufgrund einer gründlichen Untersuchung dieser Geräte und der verschiedenen Arten von Rauschen, die die Aufnahmen beeinträchtigen, simuliert das vorgeschlagene Framework deren innere Mechanismen, indem Schlüsselfaktoren wie Sensorrauschen, Reflexionsgrade der Materialien, Oberflächengeometrie usw. umfassend modelliert werden. Dieses Framework ist in der Lage, eine breite Palette von Tiefensensoren zu simulieren und schnell große Datensätze zu generieren. Dies wird eingesetzt, um die Algorithmen für verschiedene Erkennungsaufgaben zu trainieren und deren Leistung im Vergleich zu anderen hochmodernen Simulationsmethoden konsistent und erheblich zu verbessern. In manchen Fällen sind jedoch keine relevanten 2D- oder 3D-Objektdarstellungen zur Erzeugung von synthetischen Bildern verfügbar. Ausgehend von dieser Problematik des Datenmangels wurde eine Lösung vorgeschlagen, in der die Rekonstruktion von visuellen Szenen aus Teilbeobachtungen schrittweise durchgeführt wird. Die Bilder werden anhand ihres Inhalts in Bezug zueinander lokalisiert und in einer globalen Gedächtnisstruktur mit räumlichen Eigenschaften registriert. Gleichzeitig kann dieses Gedächtnis abgerufen werden, um neue Ansichten der Szene zu rendern. Darüber hinaus können bisher unbeobachtete Regionen in Übereinstimmung mit früheren Beobachtungen, Halluzinationen und globalem Vorwissen im Gedächtnis halluziniert werden. Die Wirksamkeit des vorgeschlagenen, durchgehend trainierbaren mnemonischen und generativen Systems wird anhand verschiedener 2D- und 3D-Anwendungsfälle demonstriert.
Schließlich wird eine auf Convolutional Neural Networks (CNNs) basierende weiterentwickelte Pipeline vorgestellt, die die Realismuslücke aus einem neuen Blickwinkel angeht. Während die meisten Methoden, die sich mit diesem Problem befassen, sich darauf konzentrieren, synthetische Datenproben (bzw. daraus erworbenes Wissen) näher an die echte/reale Zieldomäne zu bringen, führt die vorgeschlagene Lösung den umgekehrten Prozess durch, indem ungesehene Zielbilder in die kontrollierten synthetischen Domänen abgebildet werden. Die vorbehandelten Datenproben können dann an die nachgeschalteten Erkennungsalgorithmen übergeben werden, die selbst anhand ähnlicher synthetischer Daten trainiert wurden, um deren Genauigkeit deutlich zu verbessern. Für jeden Ansatz werden verschiedene qualitative und quantitative Studien durchgeführt, um sie mit den neuesten Methoden zu vergleichen. Insgesamt werden in dieser Arbeit Methoden zur Überbrückung der Realismuslücke auf beiden Seiten sowie eine Lösung zur Verbesserung der Erfassung und Generierung neuer visueller Inhalte beschrieben. Daher bietet diese Dissertation eine neuartige Perspektive auf die Herausforderungen der Datenknappheit bei der Entwicklung robuster Erkennungssysteme. KW - computer vision KW - machine learning KW - domain adaptation KW - realism gap KW - visual understanding Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8361 ER - TY - THES A1 - Fink, Thomas T1 - Curvature Detection by Integral Transforms N2 - In various fields of image analysis, determining the precise geometry of the edges present in an image, e.g., the contour of an object, is a crucial task. Especially the curvature of an edge is of great practical relevance. In this thesis, we develop different methods to detect a variety of edge features, among them the curvature. We first examine the properties of the parabolic Radon transform and show that it can be used to detect the edge curvature, as the smoothness of the parabolic Radon transform changes when the parabola is tangential to an edge, and again when, additionally, the curvature of the parabola coincides with the edge curvature. By subsequently introducing a parabolic Fourier transform and establishing a precise relation between the smoothness of a certain class of functions and the decay of the Fourier transform, we show that the smoothness result for the parabolic Radon transform can be translated into a change of the decay rate of the parabolic Fourier transform. Furthermore, we introduce an extension of the continuous shearlet transform which additionally utilizes shears of higher order. This extension, called the Taylorlet transform, allows for the detection of the position and orientation of edges, as well as their curvature and other higher-order geometric information. We introduce novel vanishing moment conditions which enable a more robust detection of the geometric edge features and examine two different constructions for Taylorlets. Lastly, we translate the results of the Taylorlet transform in R^2 into R^3 and thereby allow for the analysis of the geometry of object surfaces.
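One possible parametrization of the parabolic Radon transform consistent with the description above; the axes and normalization are assumptions made for illustration:

```latex
% Integrate f along the parabola y = t + s x + q x^2:
\[
\mathcal{R}_p f(t, s, q) \;=\; \int_{\mathbb{R}} f\bigl(x,\; t + s x + q x^2\bigr)\,dx .
\]
% The map (t, s, q) -> R_p f(t, s, q) loses smoothness where the parabola is
% tangent to an edge of f, and loses further smoothness where, in addition,
% the parabola's curvature (2q at its apex) matches the curvature of the edge
% at the tangency point; this second drop is what makes curvature detection
% possible.
```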
KW - Curvature KW - Wavelet KW - Shearlet KW - Parabolic Radon transform KW - Edge classification KW - Krümmung KW - Wavelet-Transformation KW - Shearlet KW - Radon-Transformation KW - Konturfindung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7684 ER - TY - THES A1 - Lucas, Yvan T1 - Credit card fraud detection using machine learning with integration of contextual knowledge N2 - We propose a strategy for creating attributes based on hidden Markov models (HMMs) that characterize a transaction from different points of view. This strategy makes it possible to integrate a broad spectrum of sequential information into the attributes of transactions. In fact, we model the genuine and fraudulent behavior of merchants and cardholders according to two univariate characteristics: the date and the amount of transactions. In addition, the HMM-based attributes are created in a supervised manner, thereby reducing the need for expert knowledge in building the fraud detection system. Ultimately, our multi-perspective HMM-based approach enables automated data preprocessing that models temporal correlations, complementing and eventually replacing transaction aggregation strategies in order to improve detection performance. Experiments carried out on a large set of real-world credit card transaction data (46 million transactions carried out by Belgian cardholders between March and May 2015) show that the proposed HMM-based preprocessing strategy detects more fraudulent transactions when combined with the expert-knowledge-based preprocessing strategy commonly used for credit card fraud detection. KW - Kreditkartenmissbrauch KW - Maschinelles Lernen Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7713 ER - TY - THES A1 - Ansah, Frimpong T1 - Performance and optimization technologies for software defined industrial networks N2 - The concept of programmable networks is radically changing the way communication infrastructures are designed, integrated, and operated. Currently, the topic is spearheaded by concepts such as software-defined networking, forwarding and control element separation, and network function virtualization. Notably, software-defined networking has attracted significant attention in telecommunication and data-center networks and is thus already used in some production-grade networks. Despite the prevalence of software-defined networking in these domains, industrial networks have yet to see benefits that would encourage its adoption. However, misconceptions around the concept itself, the role of virtualization, and algorithms pose a significant obstacle. Furthermore, the desire to accommodate new services in the automation industry leads to constantly increasing complexity of industrial networks. This is compounded by the requirement to provide stringent deterministic service guarantees for characteristically different applications, posing a significant challenge for management, configuration, and maintenance, as existing solutions are architecturally inflexible. Therefore, the first contribution of this thesis addresses the misconceptions around software-defined networking by providing a comparative analysis of programmable network concepts, detailing how software-defined networks compare with other concepts and how their principles can be leveraged to evolve industrial networks.
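Returning to the Lucas record above: the HMM-based attribute idea can be sketched with a discrete HMM and the forward algorithm, using the log-likelihood of a cardholder's recent amount sequence as a classifier feature. All parameters below are invented for illustration; in the thesis they are learned from genuine and fraudulent histories separately:

```python
# Log-likelihood of a symbolized transaction-amount sequence under a small
# discrete HMM, computed with the scaled forward algorithm.
import numpy as np

pi = np.array([0.6, 0.4])                   # initial state distribution
A  = np.array([[0.8, 0.2], [0.3, 0.7]])     # state transition matrix
B  = np.array([[0.7, 0.2, 0.1],             # P(symbol | state); symbols are
               [0.1, 0.3, 0.6]])            # {low, medium, high} amounts

def log_likelihood(obs):
    """obs: sequence of symbol indices; returns log P(obs | model)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll

def symbolize(amounts, edges=(20.0, 100.0)):
    return [sum(a > e for e in edges) for a in amounts]

history = [12.5, 8.0, 250.0, 300.0, 280.0]  # recent transaction amounts
print(log_likelihood(symbolize(history)))   # low value = unusual sequence
```

Contrasting the likelihoods under a "genuine-behavior" model and a "fraudulent-behavior" model yields one attribute per modeled perspective.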
Armed with the fundamental principles of programmable networks, the second contribution identifies virtualization technologies and proposes novel algorithms to provide varied quality-of-service guarantees on converged time-sensitive Ethernet networks using software-defined networking concepts. Finally, a performance analysis of a software-defined hybrid deployment solution for the control and management of time-sensitive Ethernet networks, integrating the proposed novel algorithms, is presented as an industrial use case that enables industrial operators to harness the full potential of time-sensitive networks. KW - Performance KW - Software Defined Industrial Networks KW - Virtual Network Embedding KW - Schedulability Analysis KW - Worst-case Delay Analysis KW - Deterministic Petri-net and Queuing networks KW - Virtual Network Embedding and Worst-case Delay Analysis Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9002 PB - Universität Passau CY - Passau ER - TY - THES A1 - Charpenay, Victor T1 - Semantics for the Web of Things: Modeling the Physical World as a Collection of Things and Reasoning with their Descriptions N2 - The main research question of this thesis is to develop a theory that would provide foundations for the development of Web of Things (WoT) systems. A theory for WoT shall provide a model of the ‘things’ WoT agents relate to, such that these relations determine what interactions take place between the agents. This thesis presents a knowledge-based approach in which the semantics of WoT systems is given by a transformation (a homomorphism) between a graph representing agent interactions and a knowledge graph describing ‘things’. It focuses on three aspects of knowledge graphs in particular: the vocabulary with which assertions can be made, the rules that can be defined over this vocabulary, and the serialization used to efficiently exchange pieces of a knowledge graph. Each aspect is developed in a dedicated chapter, with specific contributions to the state of the art. The need for a unified vocabulary to describe ‘things’ in WoT and the Internet of Things (IoT) has been identified early on in the literature. Many proposals have consequently been published in the form of Web ontologies. In Ch. 2, a systematic review of these proposals is developed, together with a comparison with the data models of the principal IoT frameworks and protocols. The contribution of the thesis in that respect is an alignment between the Thing Description (TD) model and the Semantic Sensor Network (SSN) ontology, two standards of the World Wide Web Consortium (W3C). The scope of this thesis is generally limited to Web standards, especially those defined by the Resource Description Framework (RDF). Web ontologies expose not only a vocabulary but also rules to extend a knowledge graph by means of reasoning. Starting from a set of TD documents, new relations between ‘things’ can be “discovered” this way, indicating possible interactions between the servients that relate to them. The experiments presented in Ch. 3 were carried out on the basis of this semantic discovery framework on two use cases: a building automation use case provided by Intel Labs and an industrial control use case developed internally at Siemens.
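A toy illustration of the knowledge-based discovery described above: things are reduced to plain (subject, predicate, object) triples, and an interaction is derived when a consumer's required property matches a provider's exposed property in the same location. The vocabulary and data are invented for the example; real Thing Descriptions use RDF and ontologies such as SSN:

```python
# Derive agent interactions from a tiny triple-based description of things.
TRIPLES = {
    ("sensor1", "exposes", "Temperature"), ("sensor1", "locatedIn", "room101"),
    ("hvac1", "consumes", "Temperature"),  ("hvac1", "locatedIn", "room101"),
    ("lamp1", "consumes", "Luminance"),    ("lamp1", "locatedIn", "room101"),
}

def discover_interactions(triples):
    exposes  = {(s, o) for s, p, o in triples if p == "exposes"}
    consumes = {(s, o) for s, p, o in triples if p == "consumes"}
    located  = {s: o for s, p, o in triples if p == "locatedIn"}
    return [(src, dst, prop)
            for src, prop in exposes
            for dst, wanted in consumes
            if prop == wanted and located.get(src) == located.get(dst)]

print(discover_interactions(TRIPLES))  # -> [('sensor1', 'hvac1', 'Temperature')]
```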
The relations to discover often involve anonymous nodes in the knowledge graph: the chapter also introduces a novel skolemization algorithm to correctly process these nodes on a well-defined fragment of the Web Ontology Language (OWL). Finally, because this semantic discovery framework relies on the exchange of TD documents, Ch. 4 introduces a binary format for RDF that proves efficient in serializing TD assertions, such that even the smallest WoT agents, i.e., microcontrollers, can store and process them. A formalization for the semantics-preserving compaction and querying of TD documents is also introduced in this chapter; it forms the basis of an embedded RDF store called the µRDF store. The ability of all WoT agents to query logical assertions about themselves and their environment, as found in TD documents, is a first step towards knowledge-based intelligent systems that can operate autonomously and dynamically in a decentralized way. The µRDF store is an attempt to illustrate the practical outcomes of the theory of WoT developed throughout this thesis. N2 - Die Dissertation entwickelt eine theoretische Grundlage für die Spezifikation von Web of Things (WoT)-Systemen. Die Spezifikation der WoT-Systeme basiert auf einem Modell für die Dinge, oder Things, mit denen WoT-Agenten Beziehungen schaffen, welche Interaktionen zwischen Agenten erlauben. Diese Dissertation stellt einen wissensbasierten Ansatz vor, in dem die Semantik von WoT-Systemen als eine Transformation von einem Graphen von Agenten-Interaktionen nach einem Knowledge Graph definiert ist. Diese Arbeit deckt genau drei Aspekte von Knowledge Graphs ab: das Vokabular, mit dem logische Schlüsse formuliert werden, die Regeln, die auf einem Vokabular basieren können, und die Serialisierung, um den effizienten Austausch von Teilen eines Knowledge Graphs zu ermöglichen. Alle drei Aspekte werden mit ihrem jeweiligen wissenschaftlichen Beitrag in einem eigenen Kapitel adressiert. Der Bedarf an einem vereinigten Vokabular, um im WoT und dem Internet of Things (IoT) Things zu beschreiben, wurde in der Literatur frühzeitig identifiziert. Viele Ansätze wurden diesbezüglich vor allem als Web Ontologien veröffentlicht. Im Kapitel 2 werden diese Ansätze miteinander sowie mit Datenmodellen dominierender IoT-Frameworks und Protokolle verglichen. Der Beitrag der Dissertation diesbezüglich ist die Verschmelzung des WoT Thing Description (TD) Modells und der Semantic Sensor Network (SSN) Ontologie, zwei vom World Wide Web Consortium (W3C) veröffentlichte Standards, in eine einzige Ontologie. Der Rahmen dieser Dissertation wird auf Web Standards begrenzt, insbesondere auf die im Resource Description Framework (RDF) enthaltenen Standards. Web Ontologien bestehen nicht nur aus einem Vokabular, sondern auch aus Regeln, um einen Knowledge Graphen durch Inferenz zu erweitern. Anhand einer Menge von TD-Dokumenten können neue Beziehungen zwischen Things abgeleitet werden und dadurch neue Interaktionen zwischen denjenigen Agenten, die sich auf diese Things beziehen, eingeführt werden. Die im Kapitel 3 beschriebenen Experimente setzen dieses semantische Framework in zwei Domänen um: Gebäudeautomatisierung und industrielle Kontrollsysteme. Das Erkennen impliziter Beziehungen zwischen Things hängt in bestimmten Fällen von sogenannten anonymen Knoten im Graphen ab: Das Kapitel führt einen neuen Skolemisierungs-Algorithmus ein, um diese Knoten für einen bestimmten Teil der Web Ontology Language (OWL) korrekt zu verarbeiten.
Zum Schluss, da die Umsetzung dieses semantischen Frameworks den Austausch von TD-Dokumenten erfordert, wird im Kapitel 4 ein binäres Format für RDF eingeführt, welches sich als sehr effizient für die Serialisierung erweist, damit auch kleine WoT-Agenten, nämlich Mikrocontroller, TD-Dokumente speichern und verarbeiten können. Eine formale Definition für die Verdichtung und die Abfrage von TD-Dokumenten wird in diesem Kapitel ebenfalls eingeführt. Das Kapitel beschreibt auch die Implementierung einer eingebetteten RDF-Datenbank, die µRDF Store genannt wird. Die Fähigkeit von WoT-Agenten, logische Schlüsse über sich selbst und ihre Umgebung zu ziehen, ist der erste Schritt in Richtung eines wissensbasierten intelligenten Systems, das autonom, dynamisch und dezentral agieren kann. Der µRDF Store zeigt die praktischen Vorteile der in dieser Dissertation entwickelten Theorie für WoT auf. KW - Semantic Web KW - Web of Things KW - Internet of Things KW - Thing Description KW - Web Ontologies KW - Internet der Dinge Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7578 ER - TY - THES A1 - Golovko, Dimitri T1 - Three Essays on the influence of company Facebook and traditional channel activities on recruitment success N2 - The emergence of web and online media has substantially changed the manner in which employers and applicants interact. The development of web 1.0 applications with one-way communication and the advancement of web 2.0 technologies with interactive components have extended the spectrum of recruitment channels. Selecting among these new recruitment media channels and analyzing their impact and interaction with one another has become a new challenge in the academic literature. This dissertation addresses these issues in three separate essays. Study 1 focuses on the impact of Facebook as a social media recruitment channel on recruitment success. Many companies embed Facebook into their recruitment strategy as an additional recruitment channel for reaching potential applicants and motivating them to apply for available positions. Study 1 analyzes these activities and addresses the question of whether different Facebook activities influence recruitment success above and beyond other undertakings on traditional and online media channels. Study 1 concludes that on Facebook, company posts with a general focus and posts containing work or recruitment information both have a positive impact on recruitment success. The results of Study 1 are validated by company interviews with human resources (HR) managers who are responsible for the overall HR strategy of the company. Study 1 is the first academic work within HR and marketing research to analyze the impact of a company's Facebook activities. Study 2 examines the impact of traditional media recruitment channels on recruitment success. Many companies employ traditional media channels for their recruitment marketing actions with the aim of achieving recruitment success. Study 2 uses media richness theory as a basis for analyzing the impact of a company's activities within traditional media channels on recruitment success. Study 2 concludes that exhibition fair and online marketing activities influence recruitment success. In connection with brand equity theory, Study 2 also tests whether the addition of Facebook activities reinforces the impact of traditional media channels on recruitment success.
The results indicate that general Facebook activities have a reinforcing impact on exhibition fair and print media recruitment practices. Finally, Study 3 provides a literature overview of traditional and social media recruitment practices and of research on social media influence from the marketing literature. It also summarizes and categorizes previous research on the influence of traditional, online, and social media recruitment practices; the effect of a multichannel mix; and the influence of social media and social networking sites on different business outcomes from the marketing literature. Additionally, Study 3 identifies the research gaps and provides recommendations for future studies. This dissertation uses vector autoregression modelling, validated with the help of company interviews, and employs media richness, signaling, and brand equity theories, combined with a thorough analysis of the research need. The dissertation closes the research gap regarding the analysis of the impact of Facebook, online, and traditional media on recruitment success. It also adds new perspectives to the HR and marketing literature. KW - Social Media KW - Recruitment Success Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-7645 ER - TY - THES A1 - Berndl, Emanuel T1 - Embedding a Multimedia Metadata Model into a Workflow-driven Environment Using Idiomatic Semantic Web Technologies N2 - The Semantic Web has existed for about 20 years now, but neither its applicability nor its presence lives up to the standards of its original idea. The Semantic Web Technologies it incorporates come with an initial barrier to learning and applying them, which can discourage many potential users. This leads to less available data overall, in addition to decreased data quality. This work solves parts of the aforementioned problem by supporting idiomatic entry to those Semantic Web Technologies, allowing for "easier" accessibility and usability. Anno4j is a Java library that implements a form of Object-Relational Mapping for RDF data. With it, RDF data can be created via a mapping by simply instantiating Java objects - an object-oriented programming concept the user is familiar with. On the other hand, requesting persisted data is supported by path-based querying, while other features like transactional behaviour, code generation, and automated validation of input contribute to a more effective, comprehensive, and straightforward usage. A use case is provided by the MICO Platform, a centralized software instance that connects autonomous multimedia extractors in a workflow-driven fashion. This leads to a rich metadata background for the inserted multimedia files, enabling them to be used in diverse scenarios as well as unlocking yet hidden semantics. For this task it was necessary to design and implement a metadata model that is able to aggregate and merge the varying extractor results under a common denominator: the MICO Metadata Model. The results of this work allow the use case to incorporate idiomatic Semantic Web Technologies which are then usable natively by non-Semantic Web experts. Additionally, an increase has been achieved in terms of data integration, synchronisation, integrity, and validity, as well as an overall more comprehensive and rich implementation of the multimedia extractors.
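The object mapping idea behind Anno4j can be sketched briefly. Anno4j itself is a Java library; the following Python fragment is only a conceptual illustration of object-to-RDF mapping (the class, property, and vocabulary names are hypothetical, and this is not Anno4j's actual API):

    # Conceptual sketch of object-to-RDF mapping: a mapped class declares an
    # RDF type and per-field predicate IRIs; saving an instance emits one
    # triple per populated field. Not Anno4j's API (which is Java).
    from rdflib import Graph, Literal, Namespace, RDF, URIRef

    EX = Namespace("http://example.org/")  # hypothetical vocabulary

    class Annotation:
        rdf_type = EX.Annotation
        predicates = {"body": EX.hasBody, "target": EX.hasTarget}

        def __init__(self, iri: str, body: str, target: str):
            self.iri, self.body, self.target = URIRef(iri), body, target

        def save(self, g: Graph) -> None:
            g.add((self.iri, RDF.type, self.rdf_type))
            for field, predicate in self.predicates.items():
                g.add((self.iri, predicate, Literal(getattr(self, field))))

    g = Graph()
    Annotation("http://example.org/anno1", "a comment", "image42").save(g)
    print(g.serialize(format="turtle"))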
KW - Semantic Web KW - Multimedia KW - Workflows KW - Metadaten KW - RDF Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6708 ER - TY - THES A1 - Kolesnikov, Sergiy T1 - Feature Interactions in Configurable Software Systems N2 - Software has become an important part of our lives, and the number of different application scenarios and user requirements for software systems grows rapidly. To satisfy these requirements, software vendors build configurable software systems that can be tailored to diverse needs without rebuilding them from scratch, which reduces costs and development time. Despite considerable advances in software engineering, which allow building high-quality configurable software systems, some challenges remain. One of these challenges is the feature interaction problem, which arises when the parts (features) from which a configurable system is composed interact in unexpected ways and inadvertently change the behavior or quality attributes (such as performance) of the system. The goal of this dissertation is to systematically study the nature of feature interactions, their causes, their influence on the performance of configurable systems, and, based on empirical results, to suggest ways of improving techniques for detecting and predicting feature interactions. More specifically, we compared and evaluated different strategies for the analysis of configurable software systems. The results of our evaluation complement empirical data from previous work about how different analysis strategies for configurable software systems compare with respect to different aspects, such as performance. These results shall be used to develop effective and scalable techniques and tools for the analysis of configurable software, including feature-interaction detection and prediction techniques and tools. Technically, we used a machine-learning technique to quantify the influence of feature interactions on the performance of real-world configurable systems. We studied the characteristics of interactions that have the largest influence on performance and found that interactions among few features have a higher influence than interactions among many features. With a growing number of interacting features, the influence of the corresponding interactions decreases consistently. This implies that interactions involving many features can often be ignored in practice because of their marginal influence on performance. We also investigated the causes of the interactions and were able to identify several patterns that link these interactions to the architecture of the systems: For example, we found that if a data processing system consists of multiple features that process the same data in sequence, then these features interact. The identified patterns can help to anticipate performance interactions already at an early development stage, when a system's architecture is designed. Furthermore, considering that control-flow interactions (observable at the level of control flow among features) are easier to detect than performance interactions (externally observable through measuring the performance of different combinations of features), we conducted a case study on two configurable systems. In this case study, we investigated a possible relation among control-flow feature interactions and performance feature interactions.
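The machine-learning quantification described in this record corresponds to learning a performance-influence model: a regression over configuration options whose interaction terms capture the joint effect of features. A minimal sketch, with made-up option names and measurements:

    # Minimal sketch of a performance-influence model: linear regression over
    # binary configuration options plus pairwise interaction terms. The learned
    # coefficient of a product term x_i * x_j quantifies the performance
    # influence of the interaction between features i and j. The options and
    # measurements are made up for illustration.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    # Each row: one configuration (compression, encryption, caching); y: runtime.
    X = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
    y = np.array([10.0, 12.0, 13.0, 9.0, 19.0, 11.0, 12.0, 18.0])

    poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
    X_int = poly.fit_transform(X)          # adds x_i * x_j columns
    model = LinearRegression().fit(X_int, y)

    for name, coef in zip(poly.get_feature_names_out(
            ["compression", "encryption", "caching"]), model.coef_):
        print(f"{name}: {coef:+.2f}")      # interaction terms reveal joint effects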
We also discussed how this relation can be exploited by interaction detection and performance prediction techniques to make them more time-efficient and precise. Our case study on two real-world configurable systems revealed that such a relation indeed exists, and we were able to show how it can be used to reduce the search space of possibly existing performance interactions. The study can serve as a blueprint for further studies that can rely on our conceptual framework for investigating relations among external and internal interactions. Overall, the contribution of this dissertation consists of scientific and technical insights, practical tool implementations, empirical evaluations, and case studies that advance the current state of research in the area of feature interactions in configurable software systems. In particular, we provide insights into the causes of feature interactions and their influence on the performance of real-world configurable systems (e.g., interaction patterns, decreasing influence of interactions with a growing number of involved features). Our results also suggest ways of improving techniques for detecting and predicting feature interactions (e.g., ignoring interactions among many features, reducing the search space based on relations among interactions). KW - Configurable software system KW - Feature interaction KW - Performance influence model KW - Software product line KW - Variability-aware software analysis strategy KW - Softwareentwicklung KW - Qualitätssicherung Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6739 ER - TY - THES A1 - Stahlbauer, Andreas T1 - Abstract Transducers for Software Analysis and Verification N2 - Whenever software faults can endanger human life, property, or the environment, the absence of faults must be ensured with utmost care and the best technologies available. Evidence is needed showing that all requirements are satisfied and that the risk of faults is reduced. One technique to conduct such a verification task—composed of the software to verify, the specification to check, and a model of the environment—is software model checking. To conduct a verification task with a model checker, different models of the task are constructed. We distinguish between two types of task models: syntactic task models and semantic task models, which define the respective syntactic structure (control flow) and semantic structure (state transitions, invariants) of the verification task. When constructing such models, we can observe that similar structures and substructures reappear within and among different verification tasks. For example, the same assertions to check can appear in different functions, or the same predicate can be part of different invariants that describe sets of program states. Similarities that appear during the model construction process can be the result of solving similar reasoning problems, often solved using computationally expensive procedures (as is typical for model checking), over and over again. Not reusing the results of solving similar problems, not having a means for conducting repeated efforts automatically, or not trying to reduce the number of similar reasoning efforts is a waste of precious resources. To address these problems, we present a common conceptual and technical foundation for sharing syntactic and semantic task artifacts for reuse, within and among verification runs.
Both the syntactic construction of a verification task and the construction of its semantic model—which describes all possible behaviors and states—are covered. We study how commonalities and regularities in the task models can be taken into account to facilitate the process of sharing task artifacts for reuse, and to make the overall verification process more efficient and effective. We introduce abstract transducers as the theoretical foundation of this thesis: a type of finite-state transducers with an inherent notion of abstraction for states, the input alphabet, and the output alphabet. Abstracting these transducers allows us to widen both the set of input words for which they produce output and the sets of output words. Abstract transducers are instantiated as task artifact transducers to map from program structures to task artifacts to share. We show that the notion of abstraction provides a means for increasing the scope within which task artifacts are shared for reuse. We present two instances of task artifact transducers: Yarn transducers and precision transducers. We use Yarn transducers to provide code to weave into the control-flow structure of a computer program, and present the Loom analysis as a means for orchestrating the weaving process. Precision transducers provide a means for sharing abstraction precisions for reuse and thus aid in defining the level of abstraction of a semantic task model. For both types of transducers, we provide empirical evidence on their practical applicability, for example, to verify Linux kernel modules, and show that they can help increase verification performance. KW - Program Analysis KW - Software Model Checking KW - Automata Theory KW - Transduktor KW - Formale Beschreibungstechnik KW - Modellgetriebene Entwicklung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8468 ER - TY - THES A1 - Ihl, Andreas T1 - Four investigations of arising phenomena in contemporary work settings: the cases of mindfulness practices and crowdworking online platforms N2 - Newly arising phenomena in the occupational realm strongly shape contemporary work settings. These developments heavily affect how individuals work within and beyond organizational boundaries. Two phenomena associated with the changing nature of work have been especially prevalent in work settings and intensively discussed in public debates. First, organizations started to introduce mindfulness practices to their workforce. Rooted in spirituality and formerly used in clinical therapy, mindfulness is applied as a human resource development practice to train employees and managers to cope with increased work intensification. Second, digitization and the importance of individualization have opened the path for work settings beyond organizational boundaries on crowdworking online platforms. On these online platforms, workers process tasks independently and remotely. Research has only just started to address the implications and meaning of mindfulness practices in organizations and the rise of crowdworking platforms. Several questions remain unanswered. This dissertation addresses unanswered but pressing questions related to these two phenomena shaping contemporary work settings. The dissertation is structured in four essays: the first two address the application and meaning of mindfulness practices. The first essay analyzes the meaning and interpretations of these new practices within organizations.
The second essay takes contextual factors of the organizational environment into account and investigates their relevance for the successful implementation of mindfulness practices. The latter two essays are dedicated to work attitudes and behavior on crowdworking online platforms. Essay three captures individuals' motivation for working on such platforms and its effects on workers' work performance. The last essay deals with the role of professional crowdworking online communities in the work experience and assesses the effects of social support in these communities on occupational identification, work meaningfulness and, finally, work engagement. Each essay in this dissertation generates new insights on arising phenomena in contemporary work settings. They address several timely yet unanswered research questions for these rising phenomena and thereby offer a deeper and more nuanced understanding of the role mindfulness practices and crowdworking online platforms play in the context of the future of work. KW - Future of work KW - Work settings KW - Arising phenomena KW - Mindfulness practices KW - Crowdworking online platforms KW - Organisation KW - Organisationsverhalten KW - Arbeitsbeziehungen KW - Crowdworking KW - Digitalisierung KW - Achtsamkeit Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8696 ER - TY - THES A1 - Garchery, Mathieu T1 - User-centered intrusion detection using heterogeneous data N2 - With the frequency and impact of data breaches rising, it has become essential for organizations to automate intrusion detection via machine learning solutions. This generally comes with numerous challenges, among others high class imbalance, changing target concepts, and difficulties in conducting sound evaluations. In this thesis, we adopt a user-centered anomaly detection perspective to address selected challenges of intrusion detection, through a real-world use case in the identity and access management (IAM) domain. In addition to the previous challenges, salient properties of this particular problem are the high relevance of categorical data, limited feature availability, and a total absence of ground truth. First, we ask how to apply anomaly detection to IAM audit logs containing a restricted set of mixed (i.e. numeric and categorical) attributes. Then, we inquire how anomalous user behavior can be separated from normality, and how this separation can be evaluated without ground truth. Finally, we examine how the lack of audit data can be alleviated in two complementary settings. On the one hand, we ask how to cope with users without relevant activity history ("cold start" problem). On the other hand, we seek how to extend audit data collection with heterogeneous attributes (i.e. categorical, graph and text) to improve insider threat detection. After aggregating IAM audit data into sessions, we introduce general anomaly detection methods for mixed data and compare them to a user identification approach designed to learn the distinction between normal and malicious user behavior. We find that user identification outperforms general anomaly detection and is effective against masquerades. An additional clustering step allows false positives among similar users to be reduced. However, user identification is not effective against insider threats. Furthermore, results suggest that the current scope of our audit data collection should be extended. In order to tackle the "cold start" problem, we adopt a zero-shot learning approach.
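The user identification approach can be sketched generically: a classifier learns to predict the acting user from session features, and a session whose claimed user receives a low predicted probability is flagged as a possible masquerade. The features, data, and threshold below are illustrative, not the thesis's actual pipeline:

    # Generic sketch of user identification for masquerade detection: predict
    # the acting user from session features; a session whose claimed user gets
    # a low predicted probability is suspicious.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    # Toy sessions: [actions_per_minute, distinct_resources, off_hours_ratio]
    X_train = np.vstack([rng.normal(loc=m, scale=0.3, size=(50, 3))
                         for m in ([1, 2, 0], [3, 1, 1], [2, 4, 0.5])])
    y_train = np.repeat(["alice", "bob", "carol"], 50)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    def masquerade_score(session: np.ndarray, claimed_user: str) -> float:
        """Return 1 - P(claimed user | session); high values suggest a masquerade."""
        proba = clf.predict_proba(session.reshape(1, -1))[0]
        idx = list(clf.classes_).index(claimed_user)
        return 1.0 - proba[idx]

    print(masquerade_score(np.array([3.1, 1.0, 0.9]), "alice"))  # likely high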
Focusing on the CERT insider threat use case, we extend an intrusion detection system by integrating user relations to organizational entities (like assignments to projects or teams) in order to better estimate user behavior and improve intrusion detection performance. Results show that this approach is effective in two realistic scenarios. Finally, to support additional sources of audit data for insider threat detection, we propose a method representing audit events as graph edges with heterogeneous attributes. By performing detection at a fine-grained level, this approach advantageously improves anomaly traceability while reducing the need for aggregation and feature engineering. Our results show that this method is effective in finding intrusions in authentication and email logs. Overall, our work suggests that masquerades and insider threats call for different detection methods. For masquerades, user identification is a promising approach. To find malicious insiders, graph features representing user context and relations to other entities can be informative. This opens the door for tighter coupling of intrusion detection with the user identities, roles and privileges used in IAM solutions. KW - Anomalie KW - Authentifikation KW - Computersicherheit Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8704 ER - TY - THES A1 - Koop, Martin T1 - Preventing the Leakage of Privacy Sensitive User Data on the Web N2 - Das Aufzeichnen der Internetaktivität ist zusammen mit der Verknüpfung persönlicher Daten zu einer Schlüsselressource für viele kostenpflichtige und kostenfreie Dienste im Web geworden. Diese Dienste sind zum einen Webanwendungen, wie beispielsweise die von Google bereitgestellten Karten/Navigation oder Websuche, die täglich kostenlos verwendet werden. Zum anderen sind es alle Webseiten, die meist kostenlos Nachrichten oder allgemeine Informationen zu verschiedenen Themen bereitstellen. Durch das Aufrufen und die Nutzung dieser Webdienste werden alle Informationen, die im Webdienst verarbeitet werden, an den Dienstanbieter weitergegeben. Dies umfasst nicht nur die im Benutzerkonto des Webdienstes gespeicherten Profildaten wie Name oder Adresse, sondern auch die Aktivität mit dem Webdienst wie das Anklicken von Links oder die Verweildauer. Darüber hinaus gibt es jedoch auch unzählige Drittparteien, welche zumeist im Hintergrund in die Webdienste eingebunden sind und das Benutzerverhalten der kompletten Webaktivität - webseitenübergreifend - mitspeichern sowie auswerten. Der Einsatz verschiedener, in der Regel für den Benutzer verborgener Techniken dient dazu, das Online-Verhalten der Benutzer genau zu verfolgen und viele sensible Daten zu sammeln. Dieses Verhalten wird als Web-Tracking bezeichnet und wird hauptsächlich von Werbeunternehmen genutzt. Die gesammelten Daten sind oft personenbezogen und eine wertvolle Ressource der Unternehmen, um beispielsweise passend zum Benutzerprofil personalisierte Werbung schalten zu können. Mit der Nutzung dieser personenbezogenen Daten entstehen aber auch weitreichendere Auswirkungen, welche sich unter anderem in Preisanpassungen für Benutzer mit speziellen Profilattributen, wie der Nutzung von teuren Endgeräten, widerspiegeln. Ziel dieser Arbeit ist es, die Privatsphäre der Nutzer im Internet zu steigern und die Nutzerverfolgung durch Web-Tracking signifikant zu reduzieren.
Dabei stellen sich vier Herausforderungen, die jeweils einen Forschungsschwerpunkt dieser Arbeit bilden: (1) Systematische Analyse und Einordnung eingesetzter Tracking-Techniken, (2) Untersuchung vorhandener Schutzmechanismen und deren Schwachstellen, (3) Konzeption einer Referenzarchitektur zum Schutz vor Web-Tracking und (4) Entwurf einer automatisierten Testumgebung unter Realbedingungen, um die Reduzierung von Web-Tracking durch die entwickelten Schutzmaßnahmen zu untersuchen. Jeder dieser Forschungsschwerpunkte stellt neue Beiträge bereit, um einheitlich das übergeordnete Ziel zu erreichen: die Entwicklung von Schutzmaßnahmen gegen die Preisgabe sensibler Benutzerdaten im Internet. Der erste wissenschaftliche Beitrag dieser Dissertation ist eine umfassende Evaluation eingesetzter Web-Tracking-Techniken und -Methoden sowie deren Gefahren, Risiken und Implikationen für die Privatsphäre der Internetnutzer. Die Evaluation beinhaltet zusätzlich die Untersuchung vorhandener Tracking-Schutzmechanismen und deren Schwachstellen. Die gewonnenen Erkenntnisse sind maßgeblich für die in dieser Arbeit neu entwickelten Ansätze und verbessern den bisherigen, nicht hinreichend gewährleisteten Schutz vor Web-Tracking. Der zweite wissenschaftliche Beitrag ist die Entwicklung einer robusten Klassifizierung von Web-Tracking, der Entwurf einer effizienten Architektur zur Langzeituntersuchung von Web-Tracking sowie einer interaktiven Visualisierung des Auftretens von Web-Tracking im Internet. Dabei basiert der neue Klassifizierungsansatz zur Identifikation von Tracking auf der Entropiemessung des Informationsgehalts von Cookies. Die Resultate der Web-Tracking-Langzeitstudien sind unter anderem 1.209 identifizierte Tracking-Domains auf den meistbesuchten Webseiten in Deutschland. Hierbei wurden innerhalb der Top 25 Webseiten im Durchschnitt 45 Tracking-Elemente pro Webseite gefunden. Der Tracker mit dem höchsten Potenzial zum Erstellen eines Benutzerprofils war doubleclick.com, da er 90% der Webseiten überwacht. Die Auswertung des untersuchten Tracking-Netzwerks ergab weiterhin einen detaillierten Einblick in die Tracking-Technik mithilfe von Weiterleitungslinks. Dabei haben wir 1,2 Millionen HTTP-Traces von monatelangen Crawls der 50.000 international meistbesuchten Webseiten analysiert. Die Ergebnisse zeigen, dass 11,6% dieser Webseiten HTTP-Redirects, verborgen in Webseiten-Links, zum Tracken verwenden. Dies wird eingesetzt, um den Webseitenverlauf des Benutzers nach dem Klick durch eine Kette von (Tracking-)Servern umzuleiten, welche in der Regel nicht sichtbar sind, bevor das beabsichtigte Link-Ziel geladen wird. In diesem Szenario erfasst der Tracker wertvolle Verbindungs-Metadaten zu Inhalt, Thema oder Benutzerinteressen der Website. Die Visualisierung des Tracking-Ökosystems stellen wir in einem interaktiven Open-Source-Web-Tool bereit. Der dritte wissenschaftliche Beitrag dieser Dissertation ist die Konzeption von zwei neuartigen Schutzmechanismen gegen Web-Tracking und der Aufbau einer automatisierten Simulationsumgebung unter Realbedingungen, um die Effektivität der Umsetzungen zu verifizieren. Der Fokus liegt auf den beiden meistverwendeten Tracking-Verfahren: Cookies (hierbei wird eine eindeutige ID auf dem Gerät des Benutzers gespeichert) sowie Browser-Fingerprinting. Letzteres beschreibt eine Methode zum Sammeln einer Vielzahl an Geräteeigenschaften, um den Benutzer eindeutig zu (re-)identifizieren, ohne eine eindeutige ID auf dem Gerät zu speichern.
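The entropy-based cookie classification mentioned above rests on a simple observation: a cookie can only identify users if its values, observed across many clients, carry enough information, i.e. high Shannon entropy; constant or near-constant cookies cannot. A minimal sketch, with an illustrative threshold and made-up values:

    # Minimal sketch of entropy-based tracking-cookie detection: a cookie
    # whose values across many clients have high Shannon entropy can serve as
    # a unique identifier; low-entropy cookies (constants, small enums)
    # cannot. The threshold and sample values are illustrative only.
    from collections import Counter
    from math import log2

    def shannon_entropy(values: list[str]) -> float:
        counts = Counter(values)
        n = len(values)
        return -sum((c / n) * log2(c / n) for c in counts.values())

    # Observed values of two cookies across eight different clients.
    session_id = ["a91f", "03bc", "77de", "c2a0", "5e11", "beef", "1234", "f00d"]
    consent_flag = ["yes", "yes", "no", "yes", "yes", "no", "yes", "yes"]

    for name, values in [("session_id", session_id),
                         ("consent_flag", consent_flag)]:
        h = shannon_entropy(values)
        label = "likely identifier" if h > 2.0 else "likely harmless"
        print(f"{name}: H = {h:.2f} bits -> {label}")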
Um die Effektivität der in dieser Arbeit entwickelten Schutzmechanismen vor Web-Tracking zu untersuchen, implementierten und evaluierten wir die Schutzkonzepte direkt im Chromium-Browser. Das Ergebnis zeigt eine erfolgreiche Reduzierung von Web-Tracking um 44%. Zusätzlich verbessert das in dieser Arbeit entwickelte Konzept “Site Isolation” den Datenschutz des privaten Browsing-Modus, ermöglicht das Setzen eines manuellen Speicher-Zeitlimits für Cookies und schützt den Browser gegen verschiedene Bedrohungen wie CSRF (Cross-Site Request Forgery) oder den Missbrauch von CORS (Cross-Origin Resource Sharing). Site Isolation speichert dabei den Status der lokalen Website in separaten Containern und kann dadurch diverse Tracking-Methoden wie Cookies, localStorage oder Redirect-Tracking verhindern. Bei der Auswertung von 1,6 Millionen Webseiten haben wir gezeigt, dass der Tracker doubleclick.com das höchste Potenzial besitzt, den Nutzer zu verfolgen, und auf 25% der 40.000 international meistbesuchten Webseiten vertreten ist. Schließlich demonstrieren wir in unserem erweiterten Chromium-Browser einen robusten Browser-Fingerprinting-Schutz. Der Test unseres Prototyps mittels 70.000 Browsersitzungen zeigt, dass unser Browser den Nutzer vor sogenanntem Browser-Fingerprinting-Tracking schützt. Im Vergleich zu fünf anderen Browser-Fingerprint-Tools erzielte unser Prototyp die besten Ergebnisse und ist der erste Schutzmechanismus gegen Flash- sowie Canvas-Fingerprinting. KW - Web Tracking, Cookies, Browser Fingerprinting, Redirects, Site Isolation KW - Datenschutz KW - Computersicherheit KW - Objektverfolgung Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8717 ER - TY - THES A1 - Alshawish, Ali T1 - Risk-based Security Management in Critical Infrastructure Organizations N2 - Critical infrastructure and contemporary business organizations are experiencing an ongoing paradigm shift of business towards more collaboration and agility. On the one hand, this shift seeks to enhance business efficiency, coordinate large-scale distribution operations, and manage complex supply chains. But, on the other hand, it makes traditional security practices such as firewalls and other perimeter defenses insufficient. Therefore, concerns over risks like terrorism, crime, and business revenue loss increasingly impose the need for enhancing and managing security within the boundaries of these systems so that unwanted incidents (e.g., potential intrusions) can still be detected with higher probabilities. To this end, critical infrastructure organizations step up their efforts to investigate new possibilities for actively engaging in situational awareness practices to ensure a high level of persistent monitoring as well as on-site observation. Compliance with security standards is necessary to ensure that organizations meet regulatory requirements, mostly shaped by a set of best practices. Nevertheless, it does not necessarily result in a coherent security strategy that considers the different aims and practical constraints of each organization. In this regard, there is a growing demand for risk-based security management approaches that enable critical infrastructures to focus their efforts on mitigating the risks to which they are exposed. Broadly speaking, security management involves the identification, assessment, and evaluation of long-term (or overall) objectives and interests as well as the means of achieving them.
Due to the critical role of such systems, their decision-makers tend to enhance the system's resilience against very unpleasant outcomes and severe consequences. That is, they seek to avoid decision options associated with likely extreme risks in the first place. Practically speaking, this risk attitude can significantly influence the decision-making process in such critical organizations. Towards incorporating the aversion to extreme risks into security management decisions, this thesis thoroughly investigates the capabilities of a recently emerged theory of games with payoffs that are probability distributions. Unlike traditional optimization techniques, this theory provides an alternative decision technique that is more robust to extreme risks and uncertainty. Furthermore, this thesis proposes a new method that gives a decision maker more control over the decision-making process through defining loss regions with different importance levels according to people's risk attitudes. In this way, the static decision analysis used in the distribution-valued games is transformed into a dynamic process to adapt to different subjective risk attitudes or account for future changes in the decision caused by a learning process or other changes in the context. Throughout its different parts, this thesis shows how theoretical models, simulation, and risk assessment models can be combined into practical solutions. In this context, it deals with three facets of security management: allocating limited security resources, prioritizing security actions, and tweaking decision making. Finally, the author discusses experiences and limitations distilled from this research and from investigating the new theory of games, which can be taken into account in future approaches. KW - Security Management KW - Game Theory KW - Critical Infrastructures KW - Risk Attitude KW - Uncertainty KW - Spieltheorie KW - Risikomanagement Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10026 ER - TY - THES A1 - Lang, Thomas T1 - AI-Supported Interactive Segmentation of 3D Volumes N2 - The segmentation of volumetric datasets, i.e., the partitioning of the data into disjoint sub-volumes with the goal of extracting information about these regions, is a difficult problem and has been discussed in medical imaging for decades. Due to ever-increasing imaging capabilities, in particular in X-ray computed tomography (CT) or magnetic resonance imaging, segmentation in industrial applications is also gaining interest. Especially in industrial applications, the generated datasets increase in size. Hence, most applications apply well-known techniques in a 2+1-dimensional manner, i.e., they apply image segmentation procedures on each slice separately and track the progress along the axis of the volume on which the slices are stacked. This discards the information on preceding or subsequent slices, which is often assumed to be nearly identical. However, in the industrial context this might prove wrong, since industrial parts might change their appearance significantly over the course of even a few slices. Therefore, three-dimensional processing of voxel volumes has to be preferred, which induces constraints upon the segmentation procedures. For example, they must not consider global information, as it is usually not feasible in big scans to compute it efficiently.
Yet another frequent problem is that applications focus on individual parts only and algorithms are tailored to that case. Most prominent medical segmentation procedures do so by applying methods to specifically find the liver, and only the liver, of a patient, for example. The implication is that the same method then cannot be applied to find other parts of the scan, and such methods have to be designed individually for any object to be segmented. Flexible segmentation methods are also needed, specifically when partitioning unique scans. We define a unique scan to be a voxel dataset for which no comparable volume exists. Classical examples include the use case of cultural heritage, where not only the objects themselves are unique but also the scan parameters are optimized to obtain the best image quality possible for that specific scan. This thesis aims at introducing novel methods for voxelwise classifications based on local geometric features. The latter are computed from local environments around each voxel and extract information in similar ways as humans do, namely by observing their similarity to geometric or textural primitives. These features serve as the foundation for learning the proposed voxelwise classifiers and for discriminating between segmented and unsegmented voxels. On the one hand, they perform fully automated clustering of volumes, for which a representative random sample is extracted first. On the other hand, a set of segmenting classifiers can be trained from few seed voxels, i.e., volume elements for which a domain expert marked whether they belong to the components that shall be segmented. The interactive selection offers the advantage that no completely labeled voxel volumes are necessary and hence that unique scans of objects can be segmented for which no comparable scans exist. Overall, it will be shown that all proposed segmentation methods are effectively of linear runtime with respect to the number of voxels in the volume. Thus, voxel volumes without size restrictions can be segmented in an efficient linear pass through the volume. Finally, the segmentation performance is evaluated on selected datasets, which shows that the introduced methods can achieve good results on scans from a broad variety of domains for both small and big voxel volumes. N2 - Die Segmentierung von Volumendaten, also die Partitionierung der Daten in disjunkte Teilvolumen zur weiteren Informationsextraktion, ist ein Problem, welches in der medizinischen Bildverarbeitung seit Jahrzehnten behandelt wird. Bedingt durch die sich ständig verbessernden Bilderfassungsmethoden, speziell im Bereich der Röntgen-Computertomographie (CT) oder der Magnetresonanztomographie, gewinnt die Segmentierung von industriellen Volumendaten auch an Wichtigkeit. Insbesondere im industriellen Kontext steigt die Größe der zu segmentierenden Daten jedoch rasant an, so dass sich die meisten Segmentierungsapplikationen auf den 2+1-dimensionalen Fall beschränken, also Bilder verarbeiten und die Ergebnisse über mehrere Bilder hinweg verfolgen. Jedoch werden somit beispielsweise geometrische Informationen über benachbarte Schichten ignoriert. Diese können sich aber gerade im industriellen Bereich signifikant ändern. Aus diesem Grund ist hier die dreidimensionale Bildverarbeitung vorzuziehen. Dadurch ergeben sich neue Einschränkungen, beispielsweise können keine globalen Informationen zur Segmentierung herangezogen werden, da diese typischerweise nicht effizient berechenbar sind.
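The interactive, seed-based voxel classification described in the English abstract above can be sketched as follows; the local features (neighborhood mean and variance) and the classifier are illustrative stand-ins for the richer local geometric features used in the thesis:

    # Minimal sketch of seed-based voxelwise segmentation: compute a local
    # feature vector per voxel, train a classifier on few expert-labeled seed
    # voxels, then label all voxels in one linear pass over the volume.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def local_features(volume, xyz, r=2):
        x, y, z = xyz
        patch = volume[x - r:x + r + 1, y - r:y + r + 1, z - r:z + r + 1]
        return [patch.mean(), patch.var()]

    rng = np.random.default_rng(0)
    volume = rng.random((32, 32, 32))
    volume[8:24, 8:24, 8:24] += 1.0   # a brighter "object" to segment

    # A domain expert marks a handful of seed voxels: inside (1) / outside (0).
    seeds = {(16, 16, 16): 1, (12, 20, 14): 1, (3, 3, 3): 0, (28, 5, 29): 0}
    X = [local_features(volume, p) for p in seeds]
    y = list(seeds.values())
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    # Classify every interior voxel in a single linear pass.
    r = 2
    coords = [(x, y, z) for x in range(r, 32 - r)
                        for y in range(r, 32 - r)
                        for z in range(r, 32 - r)]
    pred = clf.predict([local_features(volume, c) for c in coords])
    mask = np.zeros(volume.shape, dtype=np.uint8)
    for (x, y, z), label in zip(coords, pred):
        mask[x, y, z] = label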
Ferner fokussieren sich dreidimensionale Methoden aus medizinischen Bereichen zumeist auf bestimmte Bestandteile der Daten, wie einzelne Organe. Dies schränkt die Generalität dieser Methoden signifikant ein, und somit sind separate Verfahren für jedes zu segmentierende Objekt notwendig. Flexible Methoden sind darüber hinaus bei Anwendung auf einzigartige Scans erforderlich. Ein einzigartiger Scan ist ein Voxelvolumen, für welches kein vergleichbares Datum existiert. Klassische Beispiele sind Kulturgutdigitalisate, da dort nicht nur die Objekte einzigartig sind, sondern auch die Aufnahmeparameter spezifisch für diesen einen Scan optimiert wurden. Die vorliegende Dissertation führt neuartige Methoden zur voxelweisen dreidimensionalen Segmentierung von Volumendaten auf Basis lokaler geometrischer Informationen ein. Die Bewertung dieser Informationen imitiert die menschliche Objektwahrnehmung, indem lokale Regionen mit geometrischen oder strukturellen Primitiven verglichen werden. Mit Hilfe dieser Bewertungen werden voxelweise anzuwendende Klassifikatoren trainiert, welche zwischen erwünschten und unerwünschten Voxeln unterscheiden sollen. Ein Teil dieser Klassifikatoren führt eine vollautomatische Clustering-Analyse durch, nachdem eine repräsentative und zufällig ausgewählte Teilmenge fester Größe an Voxeln selektiert wurde. Die verbliebenen Segmentierungsalgorithmen erhalten Trainingsdaten in Form von Seed-Voxeln, also wenigen Volumenelementen, die von einem Domänenexperten markiert wurden. Diese interaktive Herangehensweise ermöglicht das Einbringen von Expertenwissen ohne die Notwendigkeit vollständig annotierter Trainingsvolumen, wodurch auch einzigartige Scans segmentiert werden können. Für alle Verfahren wird dargelegt, dass die eingeführten Algorithmen von asymptotisch linearer Laufzeit in der Anzahl der Voxel im Volumen sind. Somit können Voxeldaten ohne Größenbeschränkungen in einem effizienten linearen Durchgang verarbeitet werden. Abschließend wird die Performanz der vorgestellten Verfahren auf ausgewählten Daten evaluiert und aufgezeigt, dass mit denselben wenigen Verfahren gute Ergebnisse auf vielen unterschiedlichen Domänen und gleichfalls auf kleinen und großen Volumen erzielt werden können. KW - Segmentation KW - Computed Tomography KW - Artificial Intelligence KW - Active Learning KW - Interactive KW - Machine learning KW - Image processing Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9221 ER - TY - THES A1 - Niedermeier, Michael T1 - Towards High Performability in Advanced Metering Infrastructures N2 - The current movement towards a smart grid serves as a solution to present power grid challenges by introducing numerous monitoring and communication technologies. A dependable yet timely exchange of data is, on the one hand, an essential prerequisite for enabling Advanced Metering Infrastructure (AMI) services and, on the other hand, a challenging endeavor, because the increasing complexity of the grid, fostered by the combination of Information and Communications Technology (ICT) and utility networks, inherently leads to dependability challenges. Current approaches to countering this dependability degradation, based on high-reliability hardware or physical redundancy, are no longer feasible, as they lead to increased hardware costs or maintenance effort, if not both. The flexibility of these approaches regarding vendor and regulatory interoperability is also limited.
However, a suitable solution to the AMI dependability challenges is also required to maintain certain regulatory-set performance and Quality of Service (QoS) levels. While part of the challenge is the introduction of ICT into the power grid, ICT also serves as part of the solution. In this thesis, a Network Functions Virtualization (NFV) based approach is proposed, which employs virtualized ICT components as a replacement for physical devices. By using virtualization techniques, it is possible to enhance performability in contrast to hardware-based solutions through the usage of virtual replacements for processes that would otherwise require dedicated hardware. This approach offers higher flexibility compared to hardware redundancy, as a broad variety of virtual components can be spawned, adapted, and replaced in a short time. Also, as no additional hardware is necessary, the incurred costs decrease significantly. In addition to that, most of the virtualized components are deployed on Commercial-Off-The-Shelf (COTS) hardware solutions, further increasing the monetary benefit. The approach is developed by first reviewing currently suggested solutions for AMIs and related services. Using this information, virtualization technologies are investigated for their performance influences, before a virtualized service infrastructure is devised, which replaces selected components by virtualized counterparts. Next, a novel model, which allows the separation of services and hosting substrates, is developed, allowing the introduction of virtualization technologies to abstract from the underlying architecture. Third, the performability as well as the monetary savings are investigated by evaluating the developed approach in several scenarios using analytical and simulative model analysis as well as proof-of-concept approaches. Last, the practical applicability and possible regulatory challenges of the approach are identified and discussed. Results confirm that—under certain assumptions—the developed virtualized AMI is superior to the currently suggested architecture. The availability of services can be significantly increased and network delays can be minimized through centralized hosting. The availability can be increased from 96.82% to 98.66% in the given scenarios, while decreasing the costs by over 60% in comparison to the currently suggested AMI architecture. Lastly, the performability analysis of a virtualized service prototype, employing performance analysis and a Musa-Okumoto approach, reveals that the AMI requirements are fulfilled. KW - Advanced Metering Infrastructure KW - Virtualization KW - Performability KW - Energieversorgung KW - Virtualisierung Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8597 ER - TY - THES A1 - Hatzesberger, Simon T1 - Strongly Asymptotically Optimal Methods for the Pathwise Global Approximation of Stochastic Differential Equations with Coefficients of Super-linear Growth N2 - Our subject of study is the strong approximation of stochastic differential equations (SDEs) with respect to the supremum and the L_p error criteria, and we seek approximations that are strongly asymptotically optimal in specific classes of approximations. For the supremum error, we prove strong asymptotic optimality for specific tamed Euler schemes relating to certain adaptive and to equidistant time discretizations.
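For orientation, a tamed Euler scheme of the kind referred to here, in the form introduced by Hutzenthaler, Jentzen, and Kloeden for SDEs with super-linearly growing drift, reads as follows (the thesis's specific schemes and adaptive discretizations may differ):

    % Tamed Euler scheme on an equidistant grid t_n = nh for
    %   dX_t = mu(X_t) dt + sigma(X_t) dW_t,  X_0 = x_0,
    % with super-linearly growing drift mu. The taming factor
    % 1/(1 + h|mu(Y_n)|) prevents the moment explosion that the classical
    % Euler scheme suffers for such drifts.
    \[
      Y_{n+1} = Y_n + \frac{\mu(Y_n)\,h}{1 + h\,\lVert \mu(Y_n) \rVert}
                + \sigma(Y_n)\bigl(W_{t_{n+1}} - W_{t_n}\bigr),
      \qquad Y_0 = x_0.
    \]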
For the L_p error, we prove strong asymptotic optimality for specific tamed Milstein schemes relating to certain adaptive and to equidistant time discretizations. To illustrate our findings, we numerically analyze the SDE associated with the Heston 3/2-model originating from mathematical finance. KW - Stochastic differential equation KW - Strong approximation KW - Strong asymptotic optimality KW - Asymptotic lower error bounds KW - Asymptotic upper error bounds KW - Stochastische Differentialgleichung KW - Approximation Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8100 ER - TY - THES A1 - Nguyen, Van Nghia T1 - Internationale Standards für die Vollstreckung von Zivilurteilen: Aktuelle Situation und mögliche Lösungen für Vietnam N2 - Das Ziel der vorliegenden Dissertation ist es, Erkenntnisse für eine mögliche Verbesserung des vietnamesischen Zwangsvollstreckungsrechts zu gewinnen, um in Vietnam ein effektives und effizientes Vollstreckungsverfahren zu erreichen, das im Einklang mit den internationalen Standards steht. Das erste Kapitel untersucht die neuen internationalen Standards im Bereich der Vollstreckung von Zivilurteilen und behandelt sie im Vergleich mit wichtigen Grundsätzen der Vollstreckung von Gerichtsurteilen in Vietnam. Das zweite Kapitel widmet sich dem Aufbau und der Organisation der Vollstreckungsbehörden und dabei insbesondere dem folgenden Thema: den Vorteilen des Aufbaus eines Berufsverbands der Gerichtsvollzieher, welcher alle Mitglieder des Berufsstandes umfasst. Das dritte Kapitel zeigt unter anderem, dass wirksame Mechanismen zur Vollstreckung von Entscheidungen den Grundsatz der Verhältnismäßigkeit einhalten müssen. Das vierte Kapitel stellt die internationalen Normen über den einstweiligen Rechtsschutz dar, der ein unverzichtbares Mittel ist, um die Durchsetzung von Zivilurteilen zu gewährleisten. Basierend auf den Ergebnissen der vier Kapitel ergibt sich eine Reihe von wertvollen Erkenntnissen für die Verbesserung des vietnamesischen Rechtssystems und die Steigerung der Effizienz der Vollstreckung zivilgerichtlicher Urteile. KW - Internationale Standards KW - Vollstreckung von Zivilurteilen KW - mögliche Lösungen Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8191 ER - TY - THES A1 - Kopp, Katrina T1 - Essays on Fraud and Forensic Accounting - Research from a German Accounting Perspective N2 - Investment fraud, cybercrime, inconsistencies in health care, or the emission scams at car manufacturers: economic crime (fraud) manifests itself in many facets. For Germany, the cases of FlowTex, Comroad, HRE-Bad-Bank, Holzmann, Volkswagen and the current fraud suspicions at Porsche AG are prominent examples with mostly appalling consequences (Ballwieser and Dobler 2003; Kögler 2015; Meck, Nienhaus, and von Petersdorff 2011; Peemöller and Hofmann 2005). Nevertheless, newspapers without reports on fraud have become scarce. Headlines such as "Corruption - the daily business" hardly impress anyone anymore, not least because of their regularity. The cases revealed publicly are, however, only the tip of the iceberg, as reported by renowned experts (Bundeskriminalamt 2018; LKA 2018). Currently, the State Criminal Police Office (Landeskriminalamt (LKA)) of Baden-Württemberg and its department for economic and environmental crime and corruption is concerned with 72 major proceedings (LKA 2018).
However, fraud could be avoided, or at least contained, by appropriate preventive measures (Bundeskriminalamt 2018; Bussmann 2004; Hlavica, Klapproth, and Hülsberg 2011). Consequently, the pressure on companies and employees to demonstrate compliant and ethical behavior and to meet the demands of stakeholders at all times within their business activities has grown (Buff 2000). This raises the question of which precautionary measures a company can and must implement (Weick and Sutcliffe 2015). Although corporate awareness of this issue has increased, most in-house detection of fraud is accidental, suggesting that companies are still lacking appropriately functioning and systematic (early) detection mechanisms (Hlavica et al. 2011). If a company is accused of fraud, this usually has serious repercussions on its corporate reputation. Prior research found that capital market reputation-based penalties for affected companies are on average 7.5 times higher than penalties imposed by the legal system (Karpoff, Lee, and Martin 2008). Furthermore, the accusation of fraud also affects the external auditor's reputation, since failing to detect manipulations in clients' (financial) reports not only damages public confidence in the accuracy of firms' financial statements but also in the reliability of the auditor's report. Therefore, it is not surprising that the demand for greater supervision and control of firms' (financial) reporting as well as for reliable work of statutory auditors continually increases (Herkendell 2007). Although to a lesser extent, this is also the case for the determination of material (accounting) errors within a firm's financial statements, which are often difficult to distinguish from accounting fraud. According to International Accounting Standard (IAS) 8.5, published by the International Accounting Standards Board (IASB), errors are omissions and/or misstatements of items that result from the nonapplication or misapplication of trusted information (IASB 2003). Thus, accounting errors and accounting fraud both result in incorrect information in a firm's financial reports and consequently affect stakeholders' decision-making. One attempt at meeting the broad demand for appropriate protective measures was the implementation of a two-stage enforcement system involving the German Financial Reporting Enforcement Panel (Deutsche Prüfstelle für Rechnungslegung (DPR)) as part of the Financial Reporting Enforcement Act (Bilanzkontrollgesetz (BilKoG)) adopted in 2004. The primary objective of the Federal Government's implementation of this mechanism was to restore investors' lost confidence in the German capital market and in the information content of financial reporting, and to strengthen Germany as a financial center in international competition. In addition, the enforcement system serves as a sanctioning instrument for firms in the event of an error detection and subsequent adverse error disclosure via the German federal registry (elektronischer Bundesanzeiger). This adverse error disclosure not only sanctions denounced firms but also questions the quality of the annual financial statement audit and thus the quality of the responsible audit firm. Hence, the often thin line between firms' unintentional accounting errors, purposive engagement in earnings management, and intentional fraud in particular presents an increasing challenge for the audit profession.
The objective of my cumulative dissertation is to provide a comprehensive overview of fraud and forensic accounting as well as insights into the distinct dimensions among the concepts of errors, earnings management, and fraud from a German accounting perspective. I aim at achieving this objective in three steps: First (1), by providing an overview of discipline-specific education possibilities, existing forensic accounting practices, institutions, and current developments in research. Second (2), by assessing auditors' obligations and responsibilities for the detection of irregularities within the scope of the annual financial statement audit, and whether including forensic services in the service portfolio of audit firms can help increase their audit quality due to spillover effects. Third (3), by examining firms' reputation (re-)building management in response to financial violations and how this process is associated with managing multiple (stakeholder) reputations. This dissertation is composed of three individual papers, each of which considers one of the focus areas outlined above. KW - Wirtschaftskriminalität Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-8264 ER - TY - INPR A1 - Yakouchyk, Katsiaryna T1 - Belarusian State Ideology: A Strategy of Flexible Adaptation N2 - While in some Eastern European countries a wave of colored revolutions challenged existing political orders, Belarus has remained largely untouched by mass protests. In Minsk, the diffusion of democratic ideas leading to the mobilization of the population meets a stable authoritarian regime. Nevertheless, the stagnating democratization process cannot be attributed only to the strong authoritarian rule and abuse of power. Indeed, Belarusian president Alexander Lukashenko still enjoys popularity among a large part of the population. Although international observers report that elections in Belarus have never been free and fair, few commentators doubt that Lukashenko would have won even in democratic elections. This evidence suggests that the regime has succeeded in building a strong legitimizing basis, which has not been seriously challenged during the last two decades. This paper explores authoritarian stability in Belarus by looking at the patterns of state ideology. The government has effectively spread state ideology since the early 2000s. Ideology departments have been created in almost all state institutions. The education sector has been affected by the introduction of the compulsory course "The Fundamentals of Belarusian State Ideology" at all universities, and by increasing attention to patriotic education at schools. Based on document analysis, I trace the creation of the "ideological vertical" in Belarus and focus on the issue of ideology in the education and youth policy sectors. Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6028 ER - TY - THES A1 - Bähne, Katharina T1 - The Will to Play. Performance and Construction of Royal Masculinity in Early Modern History Plays N2 - Die vorliegende Arbeit untersucht Männlichkeitskonzepte in der Frühen Neuzeit, wobei das Hauptaugenmerk auf die dramatische Konstruktion der Figur des Königs gerichtet wird. Anhand von zehn Historiendramen der 1590er Jahre wird zunächst die diskursive Komplexität königlicher Männlichkeit in der Renaissance untersucht, um darauf aufbauend deren performative Darstellung zu analysieren.
Im Theorieteil werden Männlichkeit und Herrschaft im elisabethanischen England mithilfe zeitgenössischer Texte diskutiert und durch den Genderdiskurs und die Performativität von Gender erweitert. Der darauf folgende Methodikteil entwickelt aus den gewonnenen Erkenntnissen eine Semiotik königlicher Männlichkeit, die anschließend im Analyseteil anhand der ausgewählten Historiendramen evaluiert wird. KW - masculinity KW - royalty KW - history plays KW - gender KW - early modern period KW - Männlichkeit KW - König KW - Historisches Drama KW - Genderforschung KW - Geschichte 1592-1599 KW - Englisch Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-3329 ER - TY - THES A1 - Milisavljevic, Maria T1 - Looking Behind the Scenes. The History of the Royal Court Theatre Through the Lens of Prominent Productions N2 - An investigative study of newly released archive material representing six decades of Royal Court Theatre history, including analyses of the productions of John Osborne's Look Back in Anger, Edward Bond's Early Morning, Caryl Churchill's Cloud Nine, Jim Allen's Perdition, Sarah Kane's Blasted, and debbie tucker green's Stoning Mary. Sixty years after the first season of the English Stage Company was launched at the Royal Court Theatre, there is hardly a theatre maker or theatre scholar in the world who has not heard of this first writers' theatre in Britain: it famously put the angry young Jimmy Porter on stage, it helped put an end to stage censorship in Britain, and it has through the years been one of the most important engines for new writing in the English-speaking world. The who's who of British playwriting started off, visited, or ended up at "the most important theatre in Europe" (New York Times). But no matter how big the names attached to a theatre are, it is the everyday battles of budgets, politics and compromises that really are a theatre's history. Like a detective story, this first independent study of the Royal Court delves deep into the newly opened Royal Court Archives to fully bring to light some of the most controversial decisions, struggles and compromises that shaped the Royal Court. KW - Contemporary British Drama KW - Royal Court Theatre KW - New Writing KW - Theatre History KW - Production Analysis KW - Royal Court Theatre, London KW - Geschichte 1955 - 2015 Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5172 ER - TY - THES A1 - Böhm, Matthias T1 - The influence of situational interest on the appropriate use of cognitive learning strategies N2 - This study explores the role of two facets of situational interest, interestingness and personal significance, as predictors of the adequate use of three types of cognitive learning strategies (rehearsal strategies, organizational strategies, and elaboration strategies). In order to attain this goal, it introduces a new measure of the adequacy of the use of cognitive learning strategies, using the distance between teachers' estimates of the appropriate use of learning strategies for a specific task and students' reported strategic behavior. Based on a theoretical model of the use of cognitive learning strategies, the study shows, by means of structural equation modeling, that different facets of situational interest play different roles in predicting students' surface and deep processing.
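One plausible formalization of the adequacy measure described in this record (the study's exact operationalization may differ): for student s, task t, and a given strategy type, let e_t be the teachers' estimate of appropriate strategy use and r_{s,t} the student's self-reported use on the same scale; then

    \[
      d_{s,t} = \lvert\, r_{s,t} - e_t \,\rvert ,
    \]

so that smaller distances d_{s,t} indicate more adequate strategy use.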
In summary, it was found that experienced personal significance played a major role in predicting the use of deep-processing strategies for a significant proportion of the 34 tasks in this study, whereas interestingness fell short of expectations. Limitations arose owing to missing values, which may blur the findings at the lower interest and achievement end of the student sample. Nevertheless, suggestions are made for future research that can help teachers of history classes take account of components of success, namely experienced personal significance, when designing tasks, and consequently provide effective learning tasks to their classes. KW - situational interest KW - learning strategies KW - history education KW - ESEM KW - Lernpsychologie KW - Lerntechnik KW - Kognition KW - Kognitives Lernen KW - Lerntechnik KW - Interesse KW - Geschichtsunterricht Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4919 ER - TY - THES A1 - Ndegwa, Geoffrey T1 - Evaluating dry woodlands degradation and on-farm tree management in Kenyan drylands N2 - Tropical dry forests and woodlands comprise tree species that are specially adapted to harsh climatic and edaphic conditions, providing important ecosystem services for communities in an environment where other types of tropical tree species would not survive. Due to cyclic droughts, which result in crop failure and the death of livestock, the inhabitants turn to charcoal production, based on the selective logging of preferred hardwood species, to support their livelihoods. This places the already fragile dryland ecosystem at risk of degradation, further impacting negatively on the lives of the inhabitants. The main objective of the doctoral study was to evaluate the nature of the degradation caused by selective logging for charcoal production and how it could be addressed to ensure that the woodlands recover without harming the producers' livelihoods. To achieve this objective, the author formulated four specific objectives, namely: 1) to assess the impact of selective logging for charcoal production on the dry woodlands in Mutomo District; 2) to evaluate the characteristics of the charcoal producers that reinforce their continued participation in the trade; 3) to assess the potential for the adoption of agroforestry to supply wood for charcoal production; and 4) to evaluate the potential for recovery of the degraded woodlands through sustainable harvesting of wood for charcoal production. The findings based on these four objectives were compiled into four scientific papers as part of a cumulative dissertation. Three of these papers have already been published in peer-reviewed journals, while the final one is under review. The study used primary data collected in Mutomo District, Kenya, through a forest inventory and a household survey, both conducted between December 2012 and June 2013. The study confirmed that the main use of selectively harvested trees is charcoal production. This leads to degradation of the woodlands through a reduction in tree species richness, diversity, and density. Furthermore, the basal area of the preferred species is significantly smaller than that of the other species. However, the results also show that the woodlands have a high potential to recover if put under a suitable management regime, since they contain a high number of saplings. The study recommends a harvesting rate of 80% of the Mean Annual Increment (MAI), under which the woodlands would recover after 64 years.
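As a toy illustration of the trade-off behind this recommendation, the sketch below computes recovery times when a share of the annual increment is harvested. It assumes simple logistic stock growth with invented parameters; it is not the study's inventory model and does not reproduce the 64-year figure:

```python
# Toy model (invented parameters, not the study's inventory data): years until
# a degraded woodland stock recovers when a fixed share of each year's
# increment is removed for charcoal production.

def recovery_years(stock, capacity, growth_rate, harvest_share, target):
    years = 0
    while stock < target and years < 500:
        increment = growth_rate * stock * (1 - stock / capacity)  # logistic growth
        stock += (1 - harvest_share) * increment  # the rest is harvested
        years += 1
    return years

K, r, start, target = 100.0, 0.08, 30.0, 90.0
for share in (0.0, 0.4, 0.8):
    print(f"harvest {share:.0%} of increment -> "
          f"{recovery_years(start, K, r, share, target)} years")
```

The qualitative message matches the study's: the larger the share of the increment that is harvested, the longer the woodland takes to recover, while moderate harvesting still allows recovery within a finite horizon.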
This recovery period is about twice as long as it would take if no harvesting were allowed, but such a regime would be easier to implement, as it allows the producers to continue earning some money for their livelihoods. The study also demonstrates that charcoal production is an important livelihood source for many poor residents of Mutomo District who have no alternative sources of income. As such, addressing the problem of this degradation requires an innovative approach that does not compromise the livelihoods of these poor people. An intervention that involves a total ban on charcoal production would therefore not be acceptable or even feasible unless people are assured of alternative sources of income. The study recommends an intervention with overarching objectives geared towards: 1) diversification of the livelihood sources of the producers, to gradually reduce their dependence on charcoal; 2) introduction of preferred charcoal trees into agroforestry systems, especially through Farmer Managed Natural Regeneration (FMNR), to reduce pressure on the natural woodlands; 3) controlled harvesting of hardwoods for charcoal production from the natural woodlands at a rate below the MAI; 4) promotion of efficient carbonisation technologies and practices to increase charcoal recovery; 5) promotion of efficient combustion technology and cooking practices to reduce demand-side pressure; and 6) encouragement of fuel switching to other fuels such as LPG and electricity. KW - Tropical dry woodlands and forests; Charcoal production; Forest degradation; Farmer Managed Natural Regeneration (FMNR); Sustainable biomass; Forest Mean Annual Increment KW - Kenia KW - Trockenwald KW - Holzkohle KW - Gewinnung KW - Forstwirtschaft KW - Nachhaltigkeit Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4761 N1 - This is a paper-based dissertation. The abstracts of the individual papers are included in this document; the full texts of the papers can be reached via a link in the text. ER - TY - THES A1 - Satur, Luzile Mae T1 - Vibrancy of Public Spaces: Inclusivity and Participation Amidst the Challenges in Transformative Process in the City of Cagayan de Oro, Philippines N2 - This study examines the dynamics which lead to the revitalization of everyday life in the public spaces of Cagayan de Oro, a medium-sized urban center in Northern Mindanao, the Philippines. By employing Eastern philosophies together with Western thinkers such as Henri Lefebvre, Alain Touraine, and Jürgen Habermas, this study elucidates that the core of perceived, lived, and conceived spaces is 'the Subject.' Once the Subject utilizes the public sphere to instill social action, social space is ultimately produced. Hawkers, grassroots environmental activists, street readers, and artists are the social Subjects who partake in the vibrancy of public spaces. The social Subjects utilize public spaces as venues of social transformation. Thus, this study argues that the social Subjects' role in the democratic process leads to the inclusivity of the marginalized sector in the public spaces of the city.
KW - Public space KW - Öffentlicher Raum KW - Cagayan de Oro KW - Philippines KW - Southeast Asia KW - Cagayan de Oro KW - Sociology Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6136 ER - TY - THES A1 - Yakouchyk, Katsiaryna T1 - Post-Soviet Region, Authoritarian Stability, and Autocracy Promotion: Limits to the European Union's External Governance N2 - This cumulative thesis consists of six single contributions: five independent essays and an introductory chapter. All of the contributions have been published or accepted for publication. The overarching scope of these essays is to analyze the different factors accounting for the stability of post-Soviet authoritarian regimes and the obstacles that Western democracy promoters can face when dealing with autocrats. The starting point for these enquiries has been the striking inability of Western democracies, and in particular of the European Union, to encourage and to assist political transformation in the majority of the post-Soviet republics. Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-6146 ER - TY - THES A1 - Czech, Susanne T1 - Clarence Streit's Union Now and the Idea of an Anglo-American Union: A Movement Away From Imperialism to a World State? N2 - This thesis is concerned with proposals that aimed to transform or reform the English-speaking world so that it could continue to dominate the world in the future. In the late 19th and early 20th centuries, these ideas emerged from the discourse of Anglo-Saxonism, which represented the Anglo-Saxon 'race' as the most developed 'race' in the world, one which could, therefore, 'legitimately' rule the world. In the later 20th century, an Atlantic discourse developed, which appeared to admit further nations into the group of world leaders. However, it seems to rely on discursive elements similar to those of Anglo-Saxonism, which includes only the English-speaking world. The construction of the respective discourses is examined in late 19th- and early 20th-century writings by authors broadly associated with the British Empire, as well as in Union Now, a 1939 book by the U.S.-American Clarence K. Streit. The latter forms the focus of this thesis. Streit developed a new concept of a world order in which a world state – the Atlantic Union – was to be established. In a first step, it was to be founded by a nucleus of the 15 'leading' democracies in the world and subsequently expanded. In addition to the connection between Anglo-Saxonism and Atlanticism, which is investigated in Streit's writings, his network and prominence are analyzed, as are the resolutions Streit's supporters introduced into the U.S. Congress and Streit's stance on imperialism. KW - Clarence Streit KW - Union Now KW - Atlantic Union KW - Anglo-Saxonism KW - Atlanticism Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9202 ER - TY - THES A1 - Maimunah, Siti T1 - Reclaiming "Tubuh-Tanah Air": A Life Project on Doing Feminist Political Ecology at the Capitalist Frontier N2 - My study examines how the configuration of the capitalist frontier through extractivism shapes ethnicity, gender, and intersectionality in the areas surrounding a nickel mine in Sorowako (East Luwu District, South Sulawesi) and logging and coal mining operations along the Lalang River (Murung Raya District, Central Kalimantan), Indonesia. The colonial frontier intersects with the capitalist frontier and provides the circumstances for its formation.
Colonial restrictions, religions, commodities, the imposition of labor discipline, and political changes have molded ethnic identities and relationships with nature. Furthermore, using autoethnography and Feminist Political Ecology, I combine my experiences as a woman academic-activist with the experiences of the people in my research area. I identify how communities and individuals interact with the multiple frontiers in everyday life by defining the configuration of the frontier from above and from below. Thus, my dissertation contributes to understanding how I, the community, and the capitalist frontier landscape are contained and can potentially transform into multidimensional resistance. I develop a link between the body as the interior frontier and the extractive landscape, to be transformed into a perspective of "Tubuh-tanah air" as a future arena of engagement and resistance to extractivism. KW - Extractivism KW - Capitalist frontier KW - Feminist Political Ecology KW - Resistance KW - Tubuh-tanah air Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-11656 ER - TY - THES A1 - Qiu, Ruyi T1 - The Role of Context in Stimulus-Response Binding and Retrieval N2 - Feature binding has been proven to be a common and general mechanism underlying human information processing and action control. There is strong evidence showing that when humans perform a task, stimuli (e.g., the target, the distractor) and responses are bound together into an episodic representation, called an event file or a stimulus-response (S-R) episode, which can be retrieved upon feature repetition. Compared with the target and the distractor, the context (i.e., an additional stimulus presented together with the target and the distractor, but not associated with any response keys throughout the whole course of the task), which is considered task-irrelevant, has not received as much attention in previous studies. The current thesis aims to provide insights into the different roles the context plays in S-R binding and retrieval. Specifically, in Studies One and Two, the role of context as an element that can be integrated into an S-R episode was investigated, with a focus on the saliency and the inter-trial variability of the contextual stimulus. Both properties were found to influence how the context is integrated into an S-R episode. More specifically, the results show that both saliency and inter-trial variability determine whether the context is directly bound in a binary fashion with the response, or whether it enters into a configural binding together with another stimulus and the response. In Study Three, inspired by the role of context as an event segmentation factor in the event perception literature, it was tested whether the context can demarcate the integration window of an S-R episode. The results provide consistent evidence that sharing a common context leads to a stronger binding between a stimulus and the response, compared with the condition in which these elements are separated by different contexts, thereby suggesting a binding principle of common context. Taken together, the current thesis specifies the role of context in S-R binding and retrieval, and sheds some light on how contextual information influences human behavior.
Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-11390 ER - TY - THES A1 - Schiffbeck, Adrian T1 - The influence of religious motivations on young people's civic engagement N2 - With respect to religious motivations for political participation and civic engagement, scholars have set their focus on social capital and the recruiting potential for volunteers inside religious communities. Less attention has been paid to individual religious impulses, as well as to the reasons for which people withdraw from deliberative processes, especially with regard to the Eastern European Orthodox cultural context. This dissertation approaches this under-explored research field, with a focus on the Romanian city of Timișoara, and looks at three particular aspects: the influence of religious perceptions on political protests (an analysis of the 1989 revolution), the manner in which religion motivates young people to volunteer (a look at a local community project), and the factors determining people to retreat from public engagement (an analysis of citizens' local committees). The research methods are qualitative: interviews, group discussions, and analytical interpretation. Results show that non- or less religious young people are encouraged to protest by an indefinable supernatural force and are motivated more by moral interests (the need for dignity and fair treatment, i.e., procedural justice) than by material ones (distributive justice). When engaging in the community, the impulse partially comes from an intrinsic spirituality and a privatized experience with the divine. Giving up civic engagement has nothing to do with remuneration, but rather with the need for freedom of expression and moral appreciation. KW - Civic engagement, Community, Motivation, Religion, Protest, Young people KW - Rumänien KW - Soziales Engagement KW - Religion KW - Jugend KW - Bürgerbeteiligung, Jugend, Motivation, Protest, Unsichtbare Religion Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10291 ER - TY - THES A1 - Csizmadia, Bence T1 - The EUrope of Differentiated Territorial Integration? Regional Cross-Border Governance in the Multi-Level Governance System of the EU. A Case Study of the EUSALP and the EUSDR N2 - Regional cross-border governance (RCBG) networks have gained noticeably in importance in the EU, particularly over the past two decades. They constitute highly complex governance structures and offer considerable added value to the actors involved. The EUSDR and the EUSALP represent a further evolution of RCBG and can point to demonstrable successes in several areas, yet they fall short of the high expectations postulated in advance. RCBG networks, and the macro-regional strategies in particular, contribute in some measure to a territorial differentiation of the EU, but they are still far from fulfilling the expectation, postulated in parts of the literature, of a "(macro-)regionalization of the EU".
KW - Multi-Level-Governance KW - Regional Cross-Border Governance KW - EUSALP KW - EUSDR KW - Macro-regional strategies KW - EUSDR KW - EUSALP KW - Regionalentwicklung Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9959 ER - TY - THES A1 - Fritz, Manuela T1 - Health challenges of the 21st century - Empirical essays on the health and economic burden of non-communicable diseases and climate change in Southeast Asia N2 - In the 21st century, low- and middle-income countries will face two health challenges that are thoroughly different from those these countries have been dealing with in preceding centuries. First, they are confronted with surging rates of non-communicable diseases (NCDs), and second, climate change will take its toll and is predicted to cause catastrophic health impairments and to exacerbate chronic health conditions further. Both will pose a disproportionate health and economic burden on low- and middle-income countries, which are also the countries least able to cope with them. By threatening individual health and socioeconomic improvements, and by putting an immense burden on already constrained health care systems, they impede progress in poverty reduction and widen health inequities between the rich and the poor. Against this background, this thesis investigates the potential of NCD prevention and treatment measures in the context of Southeast Asia, with case studies in Indonesia. Specifically, it seeks to understand what kinds of health interventions have the potential to be (cost-)effective considering the cultural background, lifestyle, health literacy, and health system capacities in the region. Further, this thesis analyzes the interplay between NCDs and climate change and assesses the financial burden that both might pose in the decades to come. Hence, this thesis contributes to a better understanding of how the two health challenges of the 21st century, NCDs and climate change, can be addressed in the context of Southeast Asia and offers insights into what types of health policies and interventions can play a supportive role. KW - Health KW - Climate change KW - Non-communicable diseases KW - Southeast Asia Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-12649 ER - TY - THES A1 - Dwi Laksmana, Dimas T1 - Knowledge in the making : embodying transdisciplinary moments on organic agriculture in Yogyakarta, Indonesia N2 - Organic agriculture in Java, Indonesia, has been historically intertwined with social movements that struggled for more economically, ecologically, culturally, and socially sustainable agriculture. While these grassroots movements emerged under an authoritarian government that showed little interest in organic agriculture, the turn of the 21st century saw the rapid involvement of the Indonesian government in supporting, regulating and, arguably, commodifying organic agriculture. Institutionalization triggered diverse responses from competing organic actors, reflecting their different standpoints and knowledges. In this context, a transdisciplinary approach is deemed suitable to provide context-specific insights into organic agriculture. This dissertation draws on anthropology and Science and Technology Studies (STS) to explore the politics of knowledge of organic agriculture in Yogyakarta, Indonesia, as a contribution to a critique of transdisciplinarity.
My interest in the hierarchization of different knowledges is inspired by the work of anthropologists of knowledge who ask how the communities they study construct knowledge and how they themselves construct knowledge about these communities. Since transdisciplinary knowledge is co-produced by science and society and reflects their embedded power relations, transdisciplinary research needs to be open to different interpretations and reflexive towards the unequal distribution of resources, accountability, and responsibility. By linking these two lines of thought, I examine the making of knowledges through reflexive transdisciplinary work. I reflect on how "epistemic living space" (Felt 2009) and "co-presence" (Chua 2015) affect research and shape the politics of knowledge of organic agriculture in Yogyakarta, Indonesia. I argue that the hierarchization of different knowledges of organic agriculture was intertwined with my shifting positionalities, as a field researcher in Indonesia and as a PhD student at Passau University, as I moved between these two different "field sites". This cumulative dissertation is divided into two parts. In Part I, "Knowledge in the making", I present my contributions towards transdisciplinary knowledge production and the politics of knowledge of organic agriculture. Part II, "Publications", comprises the three stand-alone papers. The first contribution is my formulation of the notion of knowledge in the making. The second is my exploration of the ways in which reflexive transdisciplinary work and lived, intersubjective experience shape knowledge in the making. The third is my demonstration of how an understanding of knowledge in the making sheds light on the politics of knowledge of organic agriculture. This approach serves to examine the politics involved in synthesizing the conceptualizations of organic agriculture employed by different actors into one overarching narrative, such as sustainable agriculture or alternative agriculture. My final contribution is the notion of transdisciplinary moments, a conceptualization of transdisciplinary research practice that accounts for the politics of knowledge in which both scientific and extra-scientific actors are embedded. In conclusion, I share the lessons learned from pursuing a PhD as a cumulative dissertation in an unstructured setting within a German–Indonesian research project on Indonesian organic agriculture. Finally, I identify bodies of literature and strands of thinking for future engagement within transdisciplinary research and discuss their potential to contribute to radical change in the institutional and value structures of contemporary academia. KW - politics of knowledge KW - epistemic living space KW - STS Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-12667 CY - Passau ER - TY - THES A1 - Alhamzeh, Alaa T1 - Language Reasoning by means of Argument Mining and Argument Quality N2 - Understanding financial data has always been a point of interest for market participants seeking to make better-informed decisions. Recently, different cutting-edge technologies have been addressed in the Financial Technology (FinTech) domain, including numeracy understanding, opinion mining, and financial document processing. In this thesis, we are interested in analyzing the arguments of financial experts with the goal of supporting investment decisions.
Although various business studies confirm the crucial role of argumentation in financial communications, no prior work has addressed this problem as a computational argumentation task, that is, as the automatic analysis of arguments. In this regard, this thesis presents contributions along the three essential axes of theory, data, and evaluation to fill the gap between argument mining and financial text. First, we propose a method for determining the structure of the arguments stated by company representatives during the public announcement of their quarterly results and future estimations through earnings conference calls. The proposed scheme is derived from argumentation theory at the micro-structure level of discourse. We further conducted the corresponding annotation study and published the first financial dataset annotated with arguments: FinArg. Moreover, we investigate the question of evaluating the quality of arguments in this financial genre of text. To tackle this challenge, we suggest using two levels of quality metrics, considering both the Natural Language Processing (NLP) literature on argument quality assessment and the peculiarities of the financial domain. Hence, we have also enriched the FinArg data with our quality dimensions to produce the FinArgQuality dataset. In terms of evaluation, we validate the principle of ensemble learning on the argument identification and argument unit classification tasks. We show that combining a traditional machine learning model with a deep learning one, via an integration model (stacking), improves the overall performance, especially in small dataset settings. In addition, although argument mining is mainly a domain-dependent task, to date, the number of studies that tackle the generalization of argument mining models is still relatively small. Therefore, using our stacking approach and comparing it to the transfer learning model DistilBert, we address and analyze three real-world scenarios concerning model robustness on completely unseen domains and unseen topics. Furthermore, with the aim of automatically assessing argument strength, we have investigated and compared different (refined) versions of Bert-based models that incorporate external knowledge in the decision layer. Consequently, our method outperforms the baseline model by 13 ± 2% in terms of F1-score by integrating Bert with encoded categorical features. Beyond our theoretical and methodological proposals, our model of argument quality assessment, annotated corpora, and evaluation approaches are publicly available and can serve as strong baselines for future work in both the FinNLP and computational argumentation domains. Hence, directly exploiting this thesis, we proposed to the community a new task/challenge related to the analysis of financial arguments, FinArg-1, within the framework of the NTCIR-17 conference. We also applied our proposals in the Touché challenge at the CLEF 2021 conference, where our contribution was selected among the «Best of Labs».
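The stacking scheme validated above can be pictured with a minimal, self-contained sketch; the toy sentences, the TF-IDF features, and the small MLP standing in for the deep (transformer-based) component are illustrative assumptions, not the thesis pipeline:

```python
# Minimal sketch of stacking a traditional classifier with a neural one
# (illustrative only; toy data and the MLP stand-in for the transformer
# component are assumptions, not the thesis implementation).
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import StackingClassifier

sentences = [
    "Revenue grew 12% because demand for cloud services was strong.",   # argument
    "We expect margins to improve since input costs are falling.",      # argument
    "Guidance was raised as the order backlog doubled.",                # argument
    "Good morning and welcome to the call.",                            # none
    "The next slide shows the regional breakdown.",                     # none
    "We will now open the line for questions.",                         # none
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = argumentative, 0 = non-argumentative

model = make_pipeline(
    TfidfVectorizer(),
    StackingClassifier(
        estimators=[("svm", LinearSVC()),
                    ("mlp", MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000))],
        final_estimator=LogisticRegression(),  # meta-classifier on base outputs
        cv=3,  # out-of-fold base predictions feed the meta-classifier
    ),
)
model.fit(sentences, labels)
print(model.predict(["Profit rose because volumes doubled."]))
```

The out-of-fold predictions (the cv argument) are what keep the meta-classifier from simply memorizing the base models' fit on the training data, which matters most in exactly the small-dataset settings mentioned above.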
KW - NLP, Argument Mining, Argument Quality Assessment, Financial Argumentation, Earnings Conference Calls Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-12699 ER - TY - THES A1 - Gehrke, Esther T1 - Three essays on the production and investment decisions of households living in rural India N2 - In order to end poverty by 2030, the declared goal of the United Nations, a better understanding is needed of which policies help poor households escape poverty and of how to end its inter-generational transmission. Since the Millennium Declaration in September 2000, and the adoption of the Millennium Development Goals (MDGs), the delivery of basic social services, such as education, health, water supply and sanitation, has become the central focus of international development assistance. However, the provision of basic social services is not necessarily sufficient to lead to an accumulation of human and productive capital, which would allow households to escape poverty and interrupt its inter-generational transmission. To understand why people are poor, we need to understand what productive decisions poor households take, and to identify what constraints households face in their attempts to accumulate human as well as productive capital. A better understanding of such constraints could guide policies that have a long-term impact on poverty reduction and on development. A number of factors could explain why poor households operate at unprofitable levels and why they are constrained in their investment decisions. Empirical evidence points to different explanations: costs of learning and access to information, insufficient education, risk, credit constraints, non-convex production technologies, and behavioral patterns that are inconsistent with standard neoclassical models. Currently, one of the major challenges in formulating policies that foster productive investments among the poor seems to be to disentangle the effects of scale, credit constraints, and the lack of insurance mechanisms. This thesis seeks to shed further light on the relative role of these three constraints. In the context of rural India, it analyzes what production and investment decisions households take and how important risk and credit constraints as well as scale effects are in these decisions. Finally, it evaluates potential policy tools that could support households in overcoming these constraints. Today, 33% of the world's poor live in India, the vast majority of them (80.5%) in rural areas. The economic structure of rural India is still dominated by agricultural production, and consequently, this thesis concentrates on agricultural production decisions and employment in agriculture. In particular, this thesis addresses three questions in three individual papers: First, are farm households constrained in their crop choices by agricultural production risk, and to what extent can India's public works program support households in overcoming this constraint? Second, how profitable is cattle farming in rural India at different levels of investment, and which barriers do households face in reaching optimal investment levels? And third, can risk in agricultural wages explain limited investment in girls' education in the presence of intra-household substitution in household chores? The first paper focuses on the crop choice of farm households.
It reassesses the stylized fact that households have to trade off between returns and risk in their crop choice in the context of Andhra Pradesh, a state in the south of India. It then explores the effect of India's flagship anti-poverty program, the National Rural Employment Guarantee Scheme (NREGS), on households' crop choice using a representative panel data set. The NREGS guarantees each household living in rural India up to a hundred days of employment per year, at state minimum wages. The paper shows theoretically and empirically that the introduction of the NREGS reduces households' uncertainty about future income streams because it provides reliable employment opportunities in rural areas, independently of weather shocks and crop failure. With access to the NREGS, households can compensate for income losses emanating from shocks to agricultural production. Households with access to the NREGS can therefore shift their production towards riskier but also more profitable crops. These shifts in agricultural production have the potential to considerably raise the incomes of smallholder farmers. The paper concludes that employment guarantees can, similarly to crop insurance, help households in managing agricultural production risks. It also argues that accounting for the effects of the NREGS on crop choice and profits from agricultural production affects the cost-benefit analysis of such a program considerably. The second paper concentrates on the profitability of farming cattle in Andhra Pradesh. The paper also uses a representative panel dataset and examines average and marginal returns to cattle at different levels of cattle investment. It finds average returns on the order of -8% at the mean of cattle value. These returns vary across the cattle value distribution between negative 53% (in the lowest quintile) and positive 2% (in the highest). While marginal returns are positive on average, they also vary considerably with cattle value and breed. The paper shows that average and marginal returns are considerably higher for modern-variety cows, i.e. European breeds and their crossbreeds, than for traditional varieties of cows or for buffaloes. It also shows that cattle farming becomes most profitable at minimum herd sizes of five animals, owing to average labor costs that decrease with increasing herd size. The results of this paper suggest that cattle farming is associated with sizable non-convexities in the production technology and that substantial economies of scale, as well as the high upfront expenses of acquiring and feeding high-productivity animals, might trap poorer households in low-productivity asset levels. The fact that wealthier households and households with lower costs of access to veterinary services are more likely to overcome these barriers supports this idea. The second paper concludes that cattle farming might well generate positive returns for households in rural India, but that most households seem to operate at unprofitable levels. This could also explain the apparent paradox between widespread support of cattle farming through agricultural policy interventions and negative returns to cattle, as stressed in recent works. It argues that policy interventions that target productive assets will only be beneficial if transfers are high enough to allow households to overcome these entry barriers.
The third paper concentrates on the effect of risk on the productive decisions of households and analyzes the effect of wage risk in agricultural employment on women's labor supply and the time they allocate to home production. It seeks to understand the extent to which risk raises the labor supply of women to levels that can become harmful for other members of the household. The hypothesis is that in the presence of intra-household substitution effects – for instance in the performance of household chores – increased female labor supply might have negative effects on the time allocation of girls. If women have less time available for home production and childcare, and such activities can only be forgone at high cost, they might be forced to take older girls out of school or to cut down on the time these girls study at home in order for them to fill in for these tasks. The paper uses cross-sectional data on the time allocation of different household members and predicts wage risk at the village level as a function of the historical rainfall distribution and a village's share of land that is under irrigation. The results show that wage risk affects the time allocation of women, increasing their labor supply and reducing the time they allocate to home production. Wage risk also increases the time girls spend on household chores and reduces their time in school. Because the observed effect of wage risk on girls' time allocated to household chores corresponds very closely to the effect observed for women, it seems plausible to attribute it to intra-household substitution effects. The observed effect of risk on girls' school time, however, is greater than the observed effect of risk on the home-production time of girls. This can be due to two reasons: First, in the presence of intra-household substitution effects, shocks in wages will not only increase female labor supply but also girls' time spent on household chores. Moreover, the model predicts that risk-averse households invest less in education when future school time becomes uncertain, because future school time affects the returns to current schooling. Second, if school attendance is indivisible, then girls might be forced to drop out of school temporarily or even permanently. The paper then simulates the effect of the NREGS on the time-allocation decisions of working women and school-age girls. The results suggest that the NREGS could increase the time working women spend on household duties, because it reduces uncertainty regarding future earnings and alleviates the need to accumulate savings. Thereby, the NREGS would reduce the pressure on girls to perform household tasks and allow them to increase the time they spend in school or studying by 6 minutes daily. With these findings, this thesis contributes to a better understanding of the choices poor households in rural India face in their day-to-day decision making, and offers insights into which policies could support households in escaping poverty and in interrupting its inter-generational transmission. KW - Indien KW - Ländlicher Raum KW - Armut KW - India, Risk, Human capital, Investment, Livestock KW - Sozioökonomischer Wandel Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4234 ER - TY - THES A1 - Klaghstan, Merza T1 - Multimedia data dissemination in opportunistic systems N2 - Opportunistic networks (OppNets) are human-centric mobile ad-hoc networks, in which neither the topology nor the participating nodes are known in advance.
Routing is planned dynamically following the store-carry-and-forward paradigm, which takes advantage of human mobility. This widens the range of communication and supports indirect end-to-end data delivery. But due to individuals' mobility, OppNets are characterized by frequent communication disruptions and uncertain data delivery. Hence, these networks are mostly used for exchanging small messages, such as disaster alarms or traffic notifications. Other scenarios that require the exchange of larger data volumes (e.g. video) remain challenging due to the characteristics of this kind of network. However, there are still multimedia-sharing scenarios in which a user might need to switch from infrastructural communications to an ad-hoc alternative. Examples are the cases of 1) the absence of infrastructural networks in remote rural areas, 2) high costs due to roaming or limited data volumes, or 3) undesirable censorship by third parties while exchanging sensitive content. Consequently, we target in this thesis a video dissemination scheme for OppNets. For the video delivery problem in sparse opportunistic networks, we propose a solution with the objective of reducing the video playout delay, enabling the recipient to start playing the video content as soon as possible, even if at a low quality. The received video then reaches a higher quality level later, ensuring a better viewing experience. The proposed solution comprises three contributions. The first consists in granulating the videos at the source node into smaller parts and associating them with unequal redundancy degrees. This is technically based on Scalable Video Coding (SVC), which encodes a video into several layers of unequal importance, so that the content can be viewed at different quality levels. Layers are routed using the Spray-and-Wait routing protocol, with different redundancy factors for the different layers depending on their degree of importance. In this context, a video viewing QoE metric is also proposed, which takes the perceived video quality, the delivery delay, and the network overhead into consideration on a scalable basis. Second, we take advantage of the small units of the Network Abstraction Layer (NAL), which compose SVC layers. NAL units are packetized together under specific size constraints to optimize granularity. Packet sizes are tuned adaptively with regard to the dynamic network conditions. Each node records a history of environmental information regarding contacts and forwarding opportunities, and uses this history to predict future opportunities and to optimize the packet sizes accordingly. Lastly, the receiver (destination) node is pushed into action by reacting to missing data parts in a composite "backward" loss concealment mechanism. The receiver first asks other nodes in the network for the missing data in a request-response fashion. Then, since the transmission concerns video content, video frame loss error concealment techniques are also exploited at the receiver side. Consequently, we propose to combine the two techniques in the loss concealment mechanism, which is then able to react to missing data parts. To study the feasibility and the applicability of the proposed solutions, simulation-driven experiments are performed, and statistical results are collected and analyzed.
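The unequal-redundancy scheme from the first contribution can be sketched as follows; the copy budget and the halving rule are hypothetical illustration values, not the redundancy factors evaluated in the thesis:

```python
# Sketch (hypothetical parameters): Spray-and-Wait copy budgets per SVC layer,
# decreasing with layer rank so the base layer is replicated the most.
from dataclasses import dataclass

@dataclass
class SvcLayer:
    name: str
    rank: int  # 0 = base layer (most important); higher = enhancement layers

def copy_budget(layer: SvcLayer, base_copies: int = 16, decay: float = 0.5) -> int:
    """Halve the number of sprayed copies for each further enhancement layer."""
    return max(1, int(base_copies * decay ** layer.rank))

for layer in [SvcLayer("base", 0), SvcLayer("enhancement-1", 1), SvcLayer("enhancement-2", 2)]:
    print(f"{layer.name}: spray {copy_budget(layer)} copies")
# base: 16, enhancement-1: 8, enhancement-2: 4
```

Under binary Spray-and-Wait, each relay hands over half of its remaining copies, so a larger initial budget for the base layer translates into a higher delivery probability and a shorter playout delay for the minimal quality level.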
The results are promising: they show the applicability of video dissemination in opportunistic delay-tolerant networks and open the door to a range of possible future works. KW - Opportunistisches Netzwerk KW - Multimedia KW - Videoübertragung Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4438 ER - TY - THES A1 - He, Xiaobing T1 - Threat Assessment for Multistage Cyber Attacks in Smart Grid Communication Networks N2 - In smart grids, managing and controlling power operations are supported by information and communication technology (ICT) and supervisory control and data acquisition (SCADA) systems. The increasing adoption of new ICT assets in smart grids is making smart grids vulnerable to cyber threats, as well as raising numerous concerns about the adequacy of current security approaches. As a single act of penetration is often not sufficient for an attacker to achieve his/her goal, multistage cyber attacks may occur. Due to the interdependence between the power grid and the communication network, a multistage cyber attack not only affects the cyber system but also impacts the physical system. This thesis investigates an application-oriented stochastic game-theoretic cyber threat assessment framework, which is strongly related to the information security risk management process as standardized in ISO/IEC 27005. The proposed cyber threat assessment framework seeks to address the specific challenges (e.g., dynamically changing attack scenarios and understanding cascading effects) that arise when performing threat assessments for multistage cyber attacks in smart grid communication networks. The thesis looks at the stochastic and dynamic nature of multistage cyber attacks in smart grid use cases and develops a stochastic game-theoretic model to capture the interactions of the attacker and the defender in multistage attack scenarios. To provide a flexible and practical payoff formulation for the designed stochastic game-theoretic model, this thesis presents a mathematical analysis of cascading failure propagation (including both interdependency cascading failure propagation and node-overloading cascading failure propagation) in smart grids. In addition, the thesis quantifies the disruptive effects of cyber attacks on physical power grids. Furthermore, this thesis discusses, in detail, the ingredients of the developed stochastic game-theoretic model and presents the implementation steps of the investigated stochastic game-theoretic cyber threat assessment framework. An application of the proposed cyber threat assessment framework for evaluating a demonstrated multistage cyber attack scenario in smart grids is shown. The cyber threat assessment framework can be integrated into an existing risk management process, such as ISO 27000, or applied as a standalone threat assessment process in smart grid use cases.
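To make the game-theoretic core tangible, the following toy sketch runs Shapley-style value iteration on a two-state, zero-sum attacker–defender stochastic game; every state, payoff, and transition probability here is invented for illustration and is unrelated to the thesis' payoff formulation or its cascading-failure analysis:

```python
# Toy Shapley value iteration for a zero-sum attacker/defender stochastic game
# (invented numbers; illustrative only, not the thesis model).
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value of a zero-sum matrix game for the row (attacker) maximizer."""
    m, n = A.shape
    c = np.r_[np.zeros(m), -1.0]            # variables: mixed strategy x, value v
    A_ub = np.c_[-A.T, np.ones(n)]          # v - x^T A[:, j] <= 0 for each column j
    A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return -res.fun

# Two grid states: 0 = normal, 1 = degraded. payoff[s][a, d] = attacker gain.
payoff = [np.array([[2.0, 0.5], [1.0, 1.5]]),
          np.array([[4.0, 2.0], [3.0, 1.0]])]
# trans[s][a][d] = probability distribution over the next state.
trans = [[[np.array([0.9, 0.1]), np.array([0.7, 0.3])],
          [np.array([0.6, 0.4]), np.array([0.8, 0.2])]],
         [[np.array([0.5, 0.5]), np.array([0.9, 0.1])],
          [np.array([0.4, 0.6]), np.array([0.7, 0.3])]]]

gamma, V = 0.9, np.zeros(2)
for _ in range(200):  # fixed-point iteration on the state values
    V = np.array([matrix_game_value(
            payoff[s] + gamma * np.array([[trans[s][a][d] @ V for d in range(2)]
                                          for a in range(2)]))
         for s in range(2)])
print(V)  # long-run expected damage per state under optimal play
```

The fixed point V ranks the states by long-run exposure when both sides play optimally, which is the kind of quantity a threat assessment can feed back into a risk management process.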
KW - Smart Grids KW - Game Theory KW - Cascading Failures KW - Threat Assessment KW - Communication Networks KW - Intelligentes Stromnetz KW - Sicherheit KW - Spieltheorie Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-5051 ER - TY - RPRT A1 - Mägdefrau, Jutta A1 - Kufner, Sabrina A1 - Hank, Barbara A1 - Kainz, Hubert A1 - Michler, Andreas A1 - Mendl, Hans A1 - Lengdobler, Bettina A1 - Karpfinger, Franz ED - Mägdefrau, Jutta T1 - Standards and Indicators for the Development of Skills in Teacher Education N2 - Which skills may student teachers be expected to have after completing their first period of practical training at school? After three to six semesters of study and a six-week period of practical training at school, what may be expected of them in terms of the professional skills expected of professional teachers? Which skills may they be expected to have in their fifth semester, at the stage when, in the context of their studies, student teachers in Bavaria are doing their subject-related periods of practical training? And what may a trainee teacher be expected to have achieved at the end of his or her teacher training? Finally, which skills may professional teachers be expected to have that may not yet be expected of trainee teachers? Questions like these express a problem to which a team of professors, lecturers, and responsible school supervisors has dedicated its efforts. The overall objective of the collaboration was to define skills and indicators that describe development processes between the beginning of studies and professional teaching. For this purpose, the team drew on current empirical findings from the field of pedagogical and psychological research on teaching quality. Because pedagogical work focuses on achieving certain objectives, it is always normative. Empiricists usually hesitate to derive normative skills deductively from the findings of quality research, all the more so as the data concerning certain sub-dimensions are rather contradictory. And yet, in the course of an individual, biographical professionalization process, teacher training aims at providing each individual with abilities that will make him or her a good teacher or allow for "high-quality" teaching. Everybody working in the field of teacher training is aware that, when it comes to its practical aspects, general formulations must be translated into concrete action. An illustrative example is the individual counselling of students or trainee teachers: after having given a lesson, each student and his or her tutor reflect on positive and less positive aspects by referring to quality criteria. And this is exactly what this paper is about: it offers an advisory tool for students, trainee teachers, teachers, school administrations, and school supervising authorities by providing information about which concrete skills may be expected at a defined stage of professionalization and how these skills can be identified. This paper is the result of a two-year collaboration; its release is meant to contribute to a debate on the topic among a larger public. Following the introductory remarks, in the first section of the paper Mägdefrau, Kufner and Hank present the theoretical basis: the criteria for selecting the dimensions of teaching quality as well as the spiral-curricular structure of the standards given in Section II are explained.
Furthermore, there are some brief comments on possible practical applications. Sections II and III then present the standards of the collaborating authors. Section II presents the standards according to the chosen dimensions of teacher behavior. That is, the reader will find the respective skills and indicators for the four phases of the Bavarian teacher education system (period of practical training at school, subject-didactic period of practical training, traineeship, and professional teacher training) in sequence. The first two phases refer to the university phase, the third to the training phase at school after graduating, and the fourth describes the skills teachers should have at their disposal and continually develop through in-service teacher training. (For an overview, see Figure 1.) This juxtaposition makes it easy to follow, for each dimension and sub-dimension, the skills development that is supposed to be achieved phase by phase. Section III then once again gives an overview of the standards according to the four phases, so that, e.g., students in their first semester may see at first sight what skills they are expected to have after their first period of practical training at school. KW - teacher education KW - Bildungsstandards KW - Lehrerbildung KW - Unterrichtsqualität Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-4957 N1 - For the German version, see: http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-27278 ER - TY - THES A1 - Niklaus, Christina T1 - From Complex Sentences to a Formal Semantic Representation using Syntactic Text Simplification and Open Information Extraction N2 - Sentences that present a complex linguistic structure act as a major stumbling block for Natural Language Processing (NLP) applications, whose predictive quality deteriorates with sentence length and complexity. The task of Text Simplification (TS) may remedy this situation. It aims to modify sentences in order to make them easier to process, using a set of rewriting operations such as reordering, deletion, or splitting. These transformations are executed with the objective of converting the input into a simplified output, while preserving its main idea and keeping it grammatically sound. State-of-the-art syntactic TS approaches suffer from two major drawbacks: first, they follow a very conservative approach in that they tend to retain the input rather than transforming it; and second, they ignore the cohesive nature of texts, where context spread across clauses or sentences is needed to infer the true meaning of a statement. To address these problems, we present a discourse-aware TS framework that is able to split and rephrase complex English sentences within the semantic context in which they occur. By generating a fine-grained output with a simple canonical structure that is easy to analyze by downstream applications, we tackle the first issue. For this purpose, we decompose a source sentence into smaller units by using a linguistically grounded transformation stage. The result is a set of self-contained propositions, each of them presenting a minimal semantic unit. To address the second concern, we suggest not only splitting the input into isolated sentences, but also incorporating the semantic context in the form of hierarchical structures and semantic relationships between the split propositions. In that way, we generate a semantic hierarchy of minimal propositions that benefits downstream Open Information Extraction (IE) tasks.
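A constructed example (not taken from the thesis or its datasets) illustrates the intended output format: a complex sentence is split into minimal, self-contained propositions, with one core and subordinated context linked by rhetorical relations:

```python
# Constructed illustration of discourse-aware splitting: the complex input is
# decomposed into minimal propositions; context sentences are subordinated to
# the core and labeled with a rhetorical relation.
source = ("Although the company's profits fell in 2008, "
          "it opened three new plants, which employ 2,000 workers.")

core = "The company opened three new plants."
contexts = [
    ("Contrast",    "The company's profits fell in 2008."),
    ("Elaboration", "The new plants employ 2,000 workers."),
]

for relation, proposition in contexts:
    print(f"{core}  <-{relation}-  {proposition}")
```

Each proposition is grammatically complete on its own, while the hierarchy and the relation labels preserve the contextual information that plain sentence splitting would lose.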
To function well, the TS approach that we propose requires syntactically well-formed input sentences. It targets general-purpose texts in English, such as newswire or Wikipedia articles, which commonly contain a high proportion of complex assertions. In a second step, we present a method that allows state-of-the-art Open IE systems to leverage the semantic hierarchy of simplified sentences created by our discourse-aware TS approach in constructing a lightweight semantic representation of complex assertions in the form of semantically typed predicate-argument structures. In that way, important contextual information of the extracted relations is preserved, which allows for a proper interpretation of the output. Thus, we address the problem of extracting incomplete, uninformative, or incoherent relational tuples that is commonly observed in existing Open IE approaches. Moreover, assuming that shorter sentences with a more regular structure are easier to process, the extraction of relational tuples is facilitated, leading to a higher coverage and accuracy of the extracted relations when operating on the simplified sentences. Aside from taking advantage of the semantic hierarchy of minimal propositions in existing Open IE approaches, we also develop an Open IE reference system, Graphene. It implements a relation extraction pattern upon the simplified sentences. The framework we propose is evaluated within our reference TS implementation DisSim. In a comparative analysis, we demonstrate that our approach outperforms the state of the art in structural TS both in an automatic and a manual analysis. It obtains the highest score on three simplification datasets from two different domains with regard to SAMSA (0.67, 0.57, 0.54), a recently proposed metric targeted at automatically measuring the syntactic complexity of sentences, which highly correlates with human judgments on structural simplicity and grammaticality. These findings are supported by the ratings from the human evaluation, which indicate that our baseline implementation DisSim returns fine-grained simplified sentences that achieve a high level of syntactic correctness and largely preserve the meaning of the input. Furthermore, a comparative analysis with the annotations contained in the RST Discourse Treebank (RST-DT) reveals that we are able to capture the contextual hierarchy between the split sentences with a precision of approximately 90% and reach an average precision of almost 70% for the classification of the rhetorical relations that hold between them. Finally, an extrinsic evaluation shows that when applying our TS framework as a pre-processing step, the performance of state-of-the-art Open IE systems can be improved by up to 32% in precision and 30% in recall of the extracted relational tuples. Accordingly, we can conclude that our proposed discourse-aware TS approach succeeds in transforming sentences that present a complex linguistic structure into a sequence of simplified sentences that are to a large extent grammatically correct, represent atomic semantic units, and preserve the meaning of the input. Moreover, the evaluation provides sufficient evidence that our framework is able to establish a semantic hierarchy between the split sentences, generating a fine-grained representation of complex assertions in the form of hierarchically ordered and semantically interconnected propositions.
Finally, we demonstrate that state-of-the-art Open IE systems benefit from using our TS approach as a pre-processing step, which increases both the accuracy and the coverage of the extracted relational tuples for the majority of the Open IE approaches under consideration. In addition, we outline how the semantic hierarchy of simplified sentences can be leveraged to enrich the output of existing Open IE systems with additional meta-information, thus transforming the shallow semantic representation of state-of-the-art approaches into a canonical, context-preserving representation of relational tuples. KW - Text Simplification KW - Syntactic Simplification KW - Open Information Extraction KW - Semantic Representation KW - Complex Sentences Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10540 ER - TY - THES A1 - Mandarawi, Waseem T1 - Multi-objective Network Virtualization and its Applicability to Industrial Networks N2 - Network virtualization provides high flexibility for deploying communication services in dense and heterogeneous environments. Two main approaches (dimensions), which are usually combined, exist: Network Function Virtualization (NFV) technologies for functionality virtualization and Virtual Network Embedding (VNE) algorithms for resource virtualization. These approaches can be applied to different network levels, such as the factory and enterprise levels of industrial networks. Several objectives and constraints, which might be conflicting, must be considered when network virtualization is applied, mainly in complex topologies. This thesis proposes a network virtualization model that considers both virtualization dimensions, two network levels, and different objectives and constraints. The network levels considered are the two primary levels of industrial networks; however, this consideration does not restrict the model to a particular environment or certain levels. The considered objectives/constraints are topology, reliability, security, performance, and resource usage. Based on this model, we first build an overall combined solution for autonomic and composite virtual networking. This solution considers both virtualization dimensions, the two network levels, and the target objectives. Furthermore, this solution combines three novel virtualization sub-approaches that consider performance, reliability, and security. However, the sub-approaches apply to different combinations of levels and dimensions, and the reliability approach additionally considers the resource usage objective. After presenting all solutions, we map them to the defined model. Regarding applicability to industrial networks, the combined approach is applied to an enterprise-level Industrial Internet of Things (IIoT) use case inspired by the smart factory concept of Industry 4.0. However, the sub-approaches are applied to more specific use cases. The performance and reliability solutions are integrated with relevant components of the Time-Sensitive Networking (TSN) standard as a modern technology for industrial networks. The goal is to enrich the reliability and performance capabilities of TSN with the flexibility of network virtualization. In the combined approach, we compose and embed an environment-aware Extended Virtual Network (EVN) that represents the physical devices, virtual application functions, and required Service Function Chains (SFCs). We use the graph transformation method to transform abstract application requirements (represented by an Application Request (AR)) into an EVN.
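This transformation step can be sketched as a small graph rewrite; the zone attribute, the firewall-insertion policy, and all names are hypothetical stand-ins for the thesis' transformation rules:

```python
# Sketch (hypothetical policy): expanding one Application Request (AR) edge
# into an EVN path. Cross-zone traffic gets a security VNF inserted into the
# chain; every hop inherits the original edge requirement.
import networkx as nx

ar = nx.DiGraph()
ar.add_node("sensor", zone="factory")
ar.add_node("analytics", zone="enterprise")
ar.add_edge("sensor", "analytics", bandwidth=10)  # required Mbit/s

evn = nx.DiGraph()
for u, v, req in ar.edges(data=True):
    chain = [u]
    if ar.nodes[u]["zone"] != ar.nodes[v]["zone"]:
        chain.append(f"fw_{u}_{v}")  # policy: firewall VNF on cross-zone edges
    chain.append(v)
    nx.add_path(evn, chain, **req)

print(list(evn.edges(data=True)))
# [('sensor', 'fw_sensor_analytics', {'bandwidth': 10}),
#  ('fw_sensor_analytics', 'analytics', {'bandwidth': 10})]
```

The resulting EVN, enriched with such policy-derived VNFs, is what the embedding stage then maps onto the substrate network.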
Both EVN composition and embedding methods consider the Substrate Network (SN) topology and different security, reliability, performance, and resource usage policies. These policies are applied with a certain priority and depend on the properties of the communicating entities, such as location and type. The EVN is embedded using property-based node mapping, reliability-aware branching, and a greedy chain embedding heuristic. The chain embedding heuristic is evaluated using a random topology that represents the use case. The performance sub-approach is NFV-based and is applied to a specific use case with Time-critical Traffic (TCT) flows. We develop and evaluate a complete framework for virtualizing the Time-aware Shaper (TAS) using high-performance NFV. The reliability sub-approach is VNE-based and is applied to a specific factory-level use case. We develop minimal and maximal branching heuristics based on a reliability-aware k-shortest-path algorithm and compare them using a typical factory topology. We then integrate these algorithms with a Frame Replication and Elimination for Reliability (FRER) simulator to realize reliability policies through the autonomic and efficient configuration of a supporting technology. The security sub-approaches are related to both virtualization dimensions and are applied to generic enterprise-level use cases. However, the applicability of the security aspect to industrial networks is only shown in the combined (EVN) approach and its use case. We investigate autonomic security management in the Network Function Virtualization Infrastructure (NFVI), with the main goal of reacting early to threats by reconfiguring SFCs through Virtual Network Function (VNF) live migration. This goal is approached by supporting the security measurements with a decision-making architecture that considers, on the one hand, the threats and events in the environment and, on the other hand, the Service Level Agreement (SLA) between the NFVI provider and user. For this purpose, we classify VNF-specific attacks and define behavior patterns that can be detected early. Finally, we develop a security-aware VNE heuristic that considers the security requirements of the Virtual Network (VN) and the security capabilities of the SN. This approach is modified in the combined approach to consider the deployment of virtualized security VNFs. KW - Network Virtualization KW - Industrial Networks KW - Virtual Network Embedding KW - Network Function Virtualization KW - Time Sensitive Networking Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10606 ER - TY - THES A1 - Alyousef, Ammar T1 - E-Mobility Management: Towards a Grid-friendly Smart Charging Solution N2 - Replacing fossil-fueled vehicles with Electric Vehicles (EVs) poses new challenges for power distribution networks. Specifically, the electrification of the mobility sector relies on the ability to process and analyze information on when, where, for how long, or how fast charging processes will take place. Nevertheless, such information is typically difficult to acquire or insufficiently predictable due to the dynamic nature of the system. Also, the increasing adoption rate of renewable energy sources, specifically domestic Photovoltaic (PV) systems, and the potentially associated grid defection scenarios will significantly impact the cost and effort required to operate the grid in terms of power quality and demand-supply aspects.
However, such emerging requirements have arguably not been taken into account when the distribution grid was originally built. Besides, expanding the distribution and transmission capacity is a very costly and lengthy process. Therefore, any proposed solution should be cost-effective as well as environment-, grid- and user-friendly. To this end, the advancements in Information and Communications Technology (ICT) are increasingly adopted and applied. This thesis addresses the rapidly growing EV sector and deals with the problem of overcoming potential power quality degradation caused by the challenges mentioned above. Since time switches and radio ripple control, the existing solutions in Germany, are costly and neither very effective nor scalable, as they require hardware retrofitting of existing public Charging Stations (CSs), the primary focus of this work is the development of an appropriate, standards-based, scalable, and smart charging solution for EVs. Such a solution can, in turn, boost the usage of renewable energy by ensuring that the existing grid infrastructure can operate within its permissible limits while maintaining acceptable levels of power quality. This work introduces a new definition of the concept of "grid-friendly EV charging", where the power demand of a CS is adjusted depending on the real-time status of the power grid. In this regard, the conflicting concerns of the stakeholders in an EV ecosystem are considered. For example, a Distribution System Operator (DSO) does not want to reveal many technical details about the power grid or its status. Similarly, a Charging Service Provider (CSP) wants to keep its clients happy without sharing the details of its business model with others, namely DSOs. To that end, a distributed smart charging architecture is proposed in this thesis. It is event-driven and responds in near real-time to unforeseen and critical grid situations such as high/low voltage, congestion, phase unbalance, and harmonics. In that regard, the publish/subscribe messaging pattern, used as part of the architecture, enables an efficient and well-performing communication scheme among the different components. Moreover, an indication mechanism for the different issues in a power grid is developed; it adopts the traffic light model and works as a black box towards the separate smart controllers of each CS, being configured only by the CSP. Smart chargers enable a smooth adjustment of the charging power to avoid drastic changes in the grid state. To that end, two types of intelligent controllers are developed and tested. While the first controller is inspired by fuzzy logic, the second one is inspired by the slow-start mechanism used in TCP to control congestion in computer networks. A simulative approach is applied to evaluate the solution, using the topology of a real low-voltage grid with realistic load and generation profiles. Furthermore, a set of metrics is defined regarding the main concerns of the stakeholders: voltage, overloading, fairness, the satisfaction of EV users and the grid operator, as well as the grid-friendly behavior of a CS/EV user. The evaluation shows that the solution is able to guarantee a safe operation of the grid. The proposed system can ensure grid-friendly charging at the cost of a small portion of user satisfaction; this sacrifice is compensated via a points-based reward system.
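The TCP-inspired controller can be sketched as follows (illustrative parameter values, not the evaluated implementation): the charging power ramps up multiplicatively while the grid's traffic light is green, grows only cautiously on amber, and backs off sharply on red, mirroring TCP's slow-start and congestion-avoidance phases.

    def control_step(light, power, threshold, max_power):
        """One step of a TCP-slow-start-inspired charging controller.
        light is the grid signal: 'green', 'amber' or 'red'."""
        if light == "red":
            # Multiplicative back-off and a lowered slow-start threshold.
            return max(max_power * 0.1, power / 2), power / 2
        if light == "amber":
            return min(max_power, power + 0.5), threshold  # cautious growth
        if power < threshold:
            return min(max_power, power * 2), threshold    # slow-start phase
        return min(max_power, power + 1.0), threshold      # congestion avoidance

    power, threshold = 1.0, 8.0
    for light in ["green", "green", "green", "amber", "red", "green"]:
        power, threshold = control_step(light, power, threshold, 11.0)
        print(light, round(power, 2), "kW")

The gradual ramp-up avoids the drastic load steps mentioned above, while the halving on red quickly relieves a stressed feeder.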
Last but not least, the proposed distributed controllers are compared to two other controllers: (1) a decentralized controller based only on sensing the local voltage and (2) a very strict centralized controller focusing on grid-friendliness. The latter ensures proportional fairness among users with regard to the objective function of the optimization problem solved in each simulation step. The distributed controllers are superior to the decentralized controller in terms of grid-friendliness and fairness, and in general they converge to the centralized one. KW - E-Mobility KW - Smart Charging KW - Grid-Friendliness KW - Elektromobilität KW - Lademanagement KW - Netzstabilität Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-9302 ER - TY - THES A1 - Soubeiga, Sidiki T1 - Four empirical essays on the role of institutional quality, political instability and entrepreneurship for economic growth and jobs in Sub-Saharan Africa N2 - Most Sub-Saharan African (SSA) countries experienced sound economic growth and a declining rate of poverty over the last two decades. Still, the SSA region remains by far the poorest in the world and faces tremendous political, social, and economic challenges. Moreover, due to the COVID-19 pandemic, SSA entered a recession, with a GDP growth rate of minus 5% in 2020, the lowest ever recorded over 25 years. This has also induced an increase in poverty in the region, which adds to the structural challenges and further highlights the need for sound policies addressing economic growth, governance, jobs, and poverty if the region is to meet the Sustainable Development Goals (SDGs) in 2030 and beyond. This thesis examines the effects of institutional quality, political instability, and government-targeted entrepreneurship programs on the accumulation of human, physical, and financial capital by households and firms. In the literature, these factors are identified as key determinants of economic growth and job creation, yet this thesis addresses a knowledge gap, especially at the microeconomic level, on how households and firms accumulate these factors in the presence of weak institutional quality, political instability, and government-targeted entrepreneurship programs. In particular, this thesis investigates heterogeneity in the effects of institutional quality and political instability as well as a single-country study of these effects; it also employs a randomized controlled trial (RCT) to assess the impacts of two different targeted entrepreneurship support programs; and finally, it draws on data from this field experiment to assess the performance of two different targeting mechanisms for selecting growth-oriented entrepreneurs. Each paper is self-contained, and three of the four papers were written with co-authors. The first paper assesses the effects of institutional quality and political instability on household assets and human capital accumulation in 19 Sub-Saharan African countries for the period 2003-16. In this paper, the concept of instability is enlarged to include factual instability, as measured by the number of political violence and civil unrest events; perceived instability, as measured by households' perceptions of the quality of institutions; and the interplay between factual and perceived instability. Contrary to most previous analyses, this paper takes into account the household wealth distribution to show how the effects of political instability differ for poor vs. rich households.
For identification, I exploit the variation of factual and perceived instability across 185 administrative regions in the 19 countries. My regressions control for a large range of confounding factors measured at the levels of households, regions, and countries. Overall, factual and perceived instability are associated with higher investments in assets, and factual instability is also associated with more investment in house improvements, yet it is negatively associated with the ownership of financial accounts. With regard to the heterogeneous effects, increased factual or perceived instability is associated with more investment in physical capital but less investment in financial and human capital among rich households, and with less investment in physical, financial, and human capital among poor households. These findings suggest that political instability might enhance the accumulation of wealth by rich households and reduce that of poor households, implying that the detrimental effects of political instability have lasting consequences for poor households, especially when they are exposed to an actual, or even just perceived, deterioration in the quality of the country's institutions. The second paper, written with Nicolas Büttner and Michael Grimm, analyzes households' investments in assets as well as their consumption and their education and health expenditures when exposed to factual instability, as measured by the number of political violence and protest events in Burkina Faso. There is a large, rather macroeconomic, literature showing that political instability and social conflict are associated with poor economic outcomes, including lower investment and reduced economic growth. However, there is only very little research on the impact of instability on households' behavior, in particular their saving and investment decisions. This paper merges six rounds of household survey data with a geo-referenced time series of politically motivated events and fatalities from the Armed Conflict Location and Event Data project (ACLED) to analyze households' decisions when exposed to instability in Burkina Faso. For identification, the paper exploits variation in the intensity of political instability across time and space while controlling for time effects and municipality fixed effects as well as rainfall, nighttime light intensity, and many other potential confounders. The results show a negative effect of political instability on financial savings, the accumulation of durables, investment in house improvements, as well as on investment in education and health. Instability seems, in particular, to lead to a reshuffling from investment expenditures to increased food consumption, implying lower growth prospects in the future. With respect to economic growth, the sizable education and health effects seem particularly worrisome. The third paper, written with Michael Grimm and Michael Weber, employs a randomized controlled trial (RCT) to assess the short-term effects of a government support program targeted at already existing and new firms located in a semi-urban area in Burkina Faso. Most support programs targeted at small firms in low- and middle-income countries fail to generate transformative effects and employment at a larger scale. Poor targeting, too little flexibility, and the limited size of the support are some of the factors often seen as important constraints.
This paper assesses the short-term effects of a randomized, targeted government support program for a pool of small and medium-sized firms that were selected through a rigorous business plan competition (BPC). One group received large cash grants of up to US$8,000 that were flexible in use. A second group received cash grants of a similar size, but earmarked for business development services (BDSs), thus less flexible, and with a required own contribution of 20%. A third group serves as a control group. All firms operate in agri-business or related activities in a semi-urban area in the Centre-Est and Centre-Sud regions of Burkina Faso. An assessment of the short-term impacts shows that beneficiaries of cash grants engage in better business practices, such as formalization and bookkeeping. They also invest more, though this does not yet translate into higher profits and employment. Beneficiaries of cash grants and BDSs show a higher ability to innovate. The results also show that cash grants cushioned the adverse effects of the COVID-19 pandemic for the beneficiaries. More generally, this study adds to the thin literature on support programs implemented in a fragile-state context. The fourth paper, written with Michael Weber, examines the selection of entrepreneurs based on expert judgments in a BPC in Burkina Faso. To support job creation in developing countries, governments allocate significant funds to a typically small number of new or already existing micro, small, and medium-sized enterprises (MSMEs) that are growth-oriented. Increasingly, these enterprises are picked through BPCs, where thematic experts are asked to make the selection. So far, there is limited and contrasting evidence on the effectiveness and efficiency of these expert judgments for screening growth-oriented entrepreneurs among the contestants of BPCs. Alternative or complementary approaches, such as evaluation and selection algorithms, are discussed in the literature, but evidence on their performance is thin. This paper uses a principal component analysis (PCA) to build a metric for comparing the performance of these alternative mechanisms for targeting entrepreneurs with a high potential to grow. The results reveal a subjectivity bias in the experts' judgments of contestant entrepreneurs. The paper finds that the scores from the expert judgments and those from the algorithm perform similarly well in picking the top-ranked or talented entrepreneurs. It also finds that both types of scores have predictive power, i.e., they are statistically significantly associated with 17 firm performance outcomes measured 10 or 34 months after the start of the BPC. Yet the predictive power, as measured by the magnitude of the regression coefficients, is higher for the algorithm metric, even when it is considered jointly with the expert judgment scores. Despite the statistical superiority of the algorithm, expert assessments, at least through pitches of entrepreneurs, have proved useful in many settings where free-riding or misuse of public funds may occur. Hence, efficiency and precision could be achieved by relying on a reasoned combination of expert judgments and an algorithm for targeting growth-oriented entrepreneurs. These four papers bring new insights on the relationship between weak institutions, political instability, and targeted government support to entrepreneurship on the one hand and the accumulation of financial, physical, and human capital and productivity on the other, which are the key factors for spurring economic growth and creating jobs in SSA.
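The PCA-based targeting metric of the fourth paper can be sketched as follows (hypothetical firm indicators and made-up numbers; scikit-learn and SciPy are assumed to be available): the first principal component of the standardized indicators serves as a composite growth-potential score, which can then be rank-correlated with the expert scores.

    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Hypothetical baseline indicators for six contestant firms:
    # columns = sales, employees, bookkeeping practice score.
    X = np.array([[10, 2, 1], [50, 5, 3], [20, 3, 2],
                  [80, 9, 4], [15, 2, 1], [60, 6, 5]], dtype=float)
    expert_scores = np.array([55, 70, 60, 90, 50, 75])  # jury panel scores

    # First principal component of the standardized indicators as a
    # composite metric (note: the sign of a principal component is arbitrary).
    pc1 = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(X))
    algo_score = pc1.ravel()

    rho, _ = spearmanr(algo_score, expert_scores)
    print("rank correlation between algorithm and experts:", round(abs(rho), 2))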
These findings suggest that efficient institution building in SSA countries would enhance citizens' perceptions of good governance, which would reduce political instability and enable households, including poor ones, to accumulate productive assets, increase their productivity, and reduce poverty. The findings also suggest that targeted government entrepreneurship support programs, e.g., in the form of cash grants with monitored disbursements yet flexible use, can enhance firms' human capital, productive assets, and innovations, even in the short term. Moreover, the targeting mechanism of such programs could be made more effective and efficient by relying on a combination of expert judgments and an algorithm for picking growth-oriented entrepreneurs. KW - Subsaharisches Afrika KW - Wirtschaftsentwicklung KW - Wirtschaftswachstum Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10326 ER - TY - THES A1 - Schmid, Matthias T1 - Towards Storing 3D Model Graphs in Relational Databases N2 - The increasing relevance of massive graph data reinforces the need for adequate graph data management. While several graph database engines have been developed, the storage of graph data in a relational database management system, and therefore the seamless integration into existing information systems, remains an open challenge. Motivated by the use case of integrating Building Information Modeling (BIM) data into the MonArch system, we propose a solution that transforms the BIM data into a property graph and stores this graph in the database system. We present a novel approach to efficiently store property graph data in a relational database management system using JSON functionality and the redundant storage of edges in adjacency lists, and we show how to import huge data sets into this schema. Applying this approach, we import data sets occupying nearly 1 TB of disk space into the relational database while having only 96 GB of main memory available. We also present a new approach to retrieving data from this database schema by translating queries written in the popular property graph query language Cypher into SQL. Hence, we provide an intuitive way to write semantically complex queries. We also demonstrate the efficiency of our approach using the standardized Linked Data Benchmark Council – Social Network Benchmark (LDBC - SNB) framework. Our approach increases the throughput for this benchmark by a factor of up to 85 compared to existing approaches for RDBMSs. In addition, we propose a new method to transform BIM data into the property graph model and show how to apply the aforementioned property graph storage to this data. We can import IFC models of up to 300 MB within five minutes. We show the suitability of our approach using our own use-case-specific benchmark, which we integrated into the previously mentioned Social Network Benchmark. For our interactive use-case-specific queries, we achieve response times below 5 ms in 99% of all executions. Finally, we present how the aforementioned approach to store BIM data in a relational database management system is integrated into the existing MonArch system by splitting the different functionalities of our approach into a microservice architecture.
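A toy version of the storage idea described above (an assumed table layout, not the thesis schema) keeps node properties as JSON and stores each node's outgoing edges redundantly as a JSON adjacency list, so that neighborhood lookups need no join:

    import json
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, label TEXT,"
                " props TEXT, adj TEXT)")   # props and adj hold JSON text

    def add_node(nid, label, **props):
        con.execute("INSERT INTO nodes VALUES (?, ?, ?, '[]')",
                    (nid, label, json.dumps(props)))

    def add_edge(src, dst, etype):
        # Redundant adjacency list: append the edge to the source node's row.
        (adj,) = con.execute("SELECT adj FROM nodes WHERE id = ?", (src,)).fetchone()
        edges = json.loads(adj)
        edges.append({"to": dst, "type": etype})
        con.execute("UPDATE nodes SET adj = ? WHERE id = ?",
                    (json.dumps(edges), src))

    add_node(1, "Person", name="Ada")
    add_node(2, "Person", name="Bob")
    add_edge(1, 2, "KNOWS")

    # Neighborhood lookup without a join, akin to MATCH (p)-[:KNOWS]->(q).
    (adj,) = con.execute("SELECT adj FROM nodes WHERE id = 1").fetchone()
    print([e["to"] for e in json.loads(adj) if e["type"] == "KNOWS"])

A Cypher-to-SQL translation layer, as described above, would then compile graph patterns into lookups of exactly this kind.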
N2 - The increasing relevance of massive graph data reinforces the need for adequate graph data management. While several graph database engines have already been developed, the storage of graph data in relational databases, and the associated seamless integration into existing information systems, remains an unsolved challenge. Motivated by our own use case of integrating Building Information Modeling (BIM) data into the MonArch information system, we propose an approach that transforms BIM data into a property graph and stores it in the database. To achieve this, we present a novel approach for storing property graphs in a relational database system by combining functionality such as JSON with the redundant storage of edges in adjacency lists, and we show how large amounts of such data can be imported into this schema. Applying our approach, we can import data sets of up to 1 TB into the database system while having only 96 GB of main memory available. We also present a new approach for querying data from the aforementioned schema by translating the popular graph query language Cypher into SQL. This provides an intuitive way to write semantically complex queries. We also demonstrate the efficiency of our approach using the standardized evaluation framework Social Network Benchmark of the Linked Data Benchmark Council (LDBC - SNB). Compared to existing approaches for relational database systems, our approach increases the throughput of this benchmark by a factor of up to 85. In addition, we propose a new method for transferring BIM data into the property graph model and show how the previously presented storage model can be used to store these data. With this, we can import IFC models of up to 300 MB into our system in under five minutes. Finally, we show the suitability of our approach using our own use-case-specific benchmark, which we integrated into the aforementioned Social Network Benchmark. For our use-case-specific queries, we achieve response times below 5 ms in 99% of all executions. KW - Graph-based database models KW - Relational database model KW - Property graph KW - Industry Foundation Classes KW - IFC Store Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10353 ER - TY - THES A1 - Schlenker, Florian T1 - Delaunay Configuration B-Splines N2 - The generalization of univariate splines to higher dimensions is not straightforward. There are different approaches, each with its own advantages and drawbacks. A promising approach using Delaunay configurations and simplex splines is due to Neamtu. After recalling fundamentals of univariate splines, simplex splines, and the well-known multivariate DMS-splines, we address Neamtu's DCB-splines. He defined two variants that we refer to as the nonpooled and the pooled approach, respectively. Regarding these spline spaces, we contribute the following results. We prove that, under suitable assumptions on the knot set, both variants exhibit the local finiteness property, i.e., these spline spaces are locally finite-dimensional and at each point only a finite number of basis candidate functions have a nonzero value. Additionally, we establish a criterion guaranteeing these properties within a compact region under weakened assumptions.
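Stated formally (with notation chosen here purely for illustration; the thesis fixes its own), the local finiteness property of a spline space spanned by basis candidates can be written as follows:

    Let $S = \operatorname{span}\{B_\xi : \xi \in \Xi\}$ be a spline space on a domain
    $\Omega \subseteq \mathbb{R}^s$. Local finiteness requires that
    \[
        \forall x \in \Omega \;\; \exists\, U \subseteq \Omega \text{ open with } x \in U :
        \quad \#\{\xi \in \Xi : B_\xi|_U \not\equiv 0\} < \infty ,
    \]
    which in particular yields $\dim(S|_U) < \infty$ on every such neighborhood $U$.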
Moreover, we show that the knot insertion process known from univariate splines does not work for DCB-splines, and we explain why this behavior is inherent to these spline spaces. Furthermore, we provide a necessary criterion for the knot insertion property to hold for a specific inserted knot. This criterion is also sufficient for bivariate, nonpooled DCB-splines of degrees zero and one. Numerical experiments suggest that the sufficiency also holds for arbitrary spline degrees. Univariate functions can be approximated in terms of splines using the Schoenberg operator, where the approximation error decreases quadratically as the maximum distance between consecutive knots is reduced. We show that the Schoenberg operator can be defined analogously for both variants of DCB-splines with a similar error bound. Additionally, we provide a counterexample showing that the basis candidate functions of nonpooled DCB-splines are not necessarily linearly independent, contrary to earlier statements in the literature. In particular, this implies that the corresponding functions are not a basis for the space of nonpooled DCB-splines. N2 - Univariate splines cannot be generalized directly to higher dimensions. However, there are different approaches, each with its own advantages and drawbacks. A promising approach using Delaunay configurations and simplex splines is due to Neamtu. After recalling the fundamentals of univariate splines, simplex splines, and the well-known multivariate DMS-splines, we address Neamtu's DCB-splines. He introduced two different variants, referred to as the nonpooled and the pooled approach, respectively. Regarding these spline spaces, we present the following results. First, we show that, under suitable assumptions on the knot set, both variants possess the so-called local finiteness property. This means that the spline spaces are locally finite-dimensional and that at each point only a finite number of the basis candidate functions take a nonzero value. In addition, we establish a criterion guaranteeing these properties on a compact region even under weaker assumptions. Moreover, we show that the knot insertion process known from univariate splines does not work for DCB-splines, and we explain why this behavior is inherent to these spline spaces. Furthermore, we give a necessary criterion for the knot insertion property to be able to hold for a specific inserted knot. For bivariate nonpooled DCB-splines of degrees zero and one, this criterion is also sufficient. Numerical experiments further suggest that this is the case independently of the spline degree. Univariate functions can be approximated by splines using the Schoenberg operator, where reducing the maximum distance between consecutive knots results in a quadratic reduction of the approximation error. We show that the Schoenberg operator can be defined analogously for both variants of DCB-splines, with a similar error bound. In addition, we give a counterexample showing that the basis candidate functions of the nonpooled DCB-splines are not necessarily linearly independent, in contrast to earlier claims in the literature.
In particular, this implies that the corresponding functions do not form a basis for the space of nonpooled DCB-splines. KW - Multivariate splines KW - Simplex splines KW - Delaunay triangulations KW - Delaunay configurations KW - Neamtu, Marian KW - Spline KW - Bivariater Spline KW - Spline-Raum KW - Simplexspline KW - Neamtu, Marian Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-11225 ER - TY - THES A1 - Sonnleitner, Mathias T1 - The power of random information for numerical approximation and integration N2 - This thesis investigates the quality of randomly collected data by employing a framework built on information-based complexity, a field related to the numerical analysis of abstract problems. The quality or power of gathered information is measured by its radius, which is the uniform error obtainable by the best possible algorithm using it. The main aim is to present progress towards understanding the power of random information for approximation and integration problems. In the first problem considered, information given by linear functionals is used to recover vectors, in particular from generalized ellipsoids. This is related to the approximation of diagonal operators, which are important objects of study in the theory of function spaces. We obtain upper bounds on the radius of random information, both in a convex and a quasi-normed setting, which extend and, in some cases, improve existing results. We conjecture and partially establish that the power of random information is subject to a dichotomy determined by the decay of the lengths of the semiaxes of the generalized ellipsoid. Second, we study multivariate approximation and integration using information given by function values at sampling point sets. We obtain an asymptotic characterization of the radius of information in terms of a geometric measure of equidistribution, the distortion, which is well known in the theory of quantization of measures. This holds for isotropic Sobolev as well as Hölder and Triebel-Lizorkin spaces on bounded convex domains. We find that for these spaces, depending on the parameters involved, typical point sets are either asymptotically optimal or worse by a logarithmic factor, again extending and improving existing results. Further, we study isotropic discrepancy, which is related to numerical integration using linear algorithms with equal weights. In particular, we analyze the quality of lattice point sets with respect to this criterion and find that they are suboptimal compared to uniform random points. This is in contrast to the approximation of Sobolev functions and resolves an open question raised in the context of a possible low-discrepancy construction on the two-dimensional sphere. KW - information-based complexity KW - cubature KW - numerical approximation KW - discrepancy KW - Monte Carlo integration KW - Komplexität / Algorithmus KW - Numerische Integration KW - Monte-Carlo-Integration KW - Gewichteter Funktionenraum KW - Scattered-Data-Interpolation Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-11305 ER - TY - THES A1 - Silva, Vivian dos Santos T1 - A Composite Syntactic-Semantic Interpretable Text Entailment Approach Exploring Commonsense Knowledge Graphs N2 - Natural Language Processing has an important role in Artificial Intelligence for easing human-machine interaction.
Processing human language, though, poses many challenges, among which is the semantics-related phenomenon known as language variability: the fact that the same thing can be said in several ways. The inputs and outputs of NLP applications can be expressed in different forms, whose equivalence can be verified through inference. The textual entailment paradigm was established to enable the creation of a unifying framework for applied inference, providing a means of delivering other NLP tasks from handling inference issues in an ad-hoc manner, using instead the outputs of an inference-dedicated mechanism. Text entailment, the task of determining whether a piece of text logically follows from another piece of text, involves different scenarios, which can range from a simple syntactic variation to more complex semantic relationships between sentences. However, most approaches try a one-size-fits-all solution that usually favors some scenarios to the detriment of others. The commonsense world knowledge necessary to support more complex inferences is also usually employed in a limited way, with most approaches sticking to shallow semantic information and leaving more elaborate semantic relationships aside. Furthermore, most systems still work as a "black box", providing a yes/no answer that does not explain the underlying reasoning process. This thesis aims at addressing these issues by proposing a composite interpretable approach for recognizing text entailment, in which the entailment pair is analyzed so that the most relevant phenomenon is detected and a suitable method can be used to solve it. Syntactic variations are dealt with through the analysis of the sentences' syntactic structures, and semantic relationships are detected with the aid of a knowledge graph built from natural language dictionary definitions. Also, if semantic matching is involved, the answer is made interpretable through the generation of natural language justifications that explain the semantic relationship between the pieces of text. The result is XTE (Explainable Text Entailment), a system that outperforms well-established tools based on single-technique entailment algorithms and takes an important step towards Explainable AI, allowing the interpretation of the inference model and making the semantic reasoning process explicit and understandable. KW - Textual Entailment KW - Knowledge Graph KW - Semantic Interpretability Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10706 ER - TY - THES A1 - Schmid, Josef T1 - Learning-Based Quality of Service Prediction in Cellular Vehicle Communication N2 - Network communication has become a part of everyday life, and the interconnection among devices and people will increase even more in the future. A new area where this development is on the rise is the field of connected vehicles. It is especially useful for automated vehicles to be connected with other road users or cloud services. In particular for the latter, it is beneficial to establish a mobile network connection, as it is already widely available and no additional infrastructure is needed. The use of network communication comes with certain requirements. One of them is the reliability of the connection: certain Quality of Service (QoS) parameters need to be met. In case of degraded QoS, according to the SAE level specification, a downgrade of the automated system can be required, which may lead to a takeover maneuver, in which control is returned to the driver.
Since such a handover takes time, prediction is necessary to forecast the network quality for the next few seconds. The prediction of QoS parameters, especially in terms of Throughput (TP) and Latency (LA), is still a challenging task, as the wireless transmission properties of a moving mobile network connection are subject to fluctuation. In this thesis, a new approach for predicting Network Quality Parameters (NQPs) on the Transmission Control Protocol (TCP) level is presented. It combines knowledge of the environment with the low-level parameters of the mobile network. The aim of this work is to perform a comprehensive study of various models, including both Location Smoothing (LS) grid maps and Learning-Based (LB) regression models. Moreover, the location independence of the models as well as their suitability for automated driving is evaluated. N2 - Network communication has become a part of everyday life, and the interconnection of devices and people will increase even further in the future. A new area in which this development is advancing is that of connected vehicles. It is advantageous to connect automated vehicles with other road users or cloud services. In particular for the latter, the use of a mobile network connection is expedient, as it is already widespread and does not require any additional infrastructure. The use of the network also entails some requirements. The reliability of the connection is crucial. If a sufficient connection quality cannot be ensured, a downgrade of the automation level may be required according to the SAE specification. As a last consequence, this can lead to a takeover maneuver, in which control is returned to the driver. Since such a transition takes time, a prediction is required in order to forecast the network quality for the next few seconds. Such a prediction of the Quality of Service (QoS), especially with regard to throughput and latency, remains a rather demanding task, as the wireless transmission properties of a moving mobile network connection are subject to large fluctuations. This dissertation presents a new approach for the prediction of Network Quality Parameters (NQPs) at the level of the Transmission Control Protocol (TCP). It combines knowledge of the environment with the parameters of the mobile network. The aim of this work is an extensive study of different models, including both location-smoothing grid maps and regression methods from the field of machine learning. In addition, the location independence of a model is discussed and its suitability for automated driving is evaluated. Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-10772 ER - TY - THES A1 - Mexis, Nico T1 - A Comprehensive Comparison of Fuzzy Extractor Schemes Employing Different Error Correction Codes N2 - This thesis deals with fuzzy extractors, security primitives often used in conjunction with Physical Unclonable Functions (PUFs). A fuzzy extractor works in two stages: the generation phase and the reproduction phase. In the generation phase, an Error Correction Code (ECC) is used to compute redundant bits for a given PUF response, which are then stored as helper data, and a key is extracted from the response.
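Both phases, the generation phase just described and the reproduction phase the abstract turns to next, can be made concrete with a toy code-offset construction (a sketch using a 3-fold repetition code and XOR masking; real designs use far stronger ECCs and a proper key derivation step):

    import secrets

    def ecc_encode(bits):
        """3-fold repetition code: each key bit becomes three code bits."""
        return [b for bit in bits for b in (bit, bit, bit)]

    def ecc_decode(bits):
        """Majority vote over each group of three bits."""
        return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

    def generate(response):
        """Generation: draw a key, store helper = response XOR encode(key)."""
        key = [secrets.randbelow(2) for _ in range(len(response) // 3)]
        helper = [r ^ c for r, c in zip(response, ecc_encode(key))]
        return key, helper

    def reproduce(noisy_response, helper):
        """Reproduction: unmask and decode; corrects one flip per group."""
        return ecc_decode([r ^ h for r, h in zip(noisy_response, helper)])

    response = [1, 0, 1, 1, 0, 0, 1, 1, 0]   # toy 9-bit PUF response
    key, helper = generate(response)
    noisy = response[:]
    noisy[4] ^= 1                             # one bit flips on re-measurement
    assert reproduce(noisy, helper) == key
    print("reproduced key:", key)

The error tolerance of the whole construction is exactly that of the embedded ECC, which is why the choice of code is the central question studied here.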
Then, in the reproduction phase, another (possibly noisy) PUF response can be used in conjunction with this helper data to extract the original key. It is clear that the performance of the fuzzy extractor is strongly dependent on the underlying ECC. Therefore, a comparison of ECCs in the context of fuzzy extractors is essential in order to make them as suitable as possible for a given situation. It is important to note that, due to the plethora of PUFs with different characteristics, it is very unrealistic to propose a single metric by which the suitability of a given ECC can be measured. First, we give a brief introduction to the topic, followed by a detailed description of the background of the ECCs and fuzzy extractors studied. Then, we summarise related work and describe an implementation of the ECCs under consideration. Finally, we carry out the actual comparison of the ECCs, and the thesis concludes with a summary of the results and suggestions for future work. N2 - This thesis deals with fuzzy extractors, security primitives that are often used in conjunction with Physical Unclonable Functions (PUFs). A fuzzy extractor works in two phases: the generation phase and the reproduction phase. In the generation phase, an error correction code (ECC) is used to compute redundant bits for a given PUF response, which are then stored as helper data, and a key is extracted from the response. In the reproduction phase, another (possibly noisy) PUF response can then be used together with this helper data to extract the original key. It is clear that the performance of the fuzzy extractor depends strongly on the performance of the underlying ECC. It is therefore essential to compare ECCs in conjunction with fuzzy extractors in order to make them as suitable as possible for a given situation. It is important to point out that, due to the multitude of different PUFs with different characteristics, it is very unrealistic to propose a single metric by which the suitability of a particular ECC can be measured. We begin with a brief introduction to the topic, followed by a detailed description of the background of the ECCs and fuzzy extractors examined. We then summarise related work and describe an implementation of the ECCs under consideration. Finally, we carry out the actual comparison of the ECCs and conclude the thesis with a summary of the results and suggestions for future work. T2 - Ein umfassender Vergleich von Fuzzy Extractors unter Verwendung verschiedener Fehlerkorrekturverfahren KW - Fuzzy extractor KW - Error correction KW - Vorwärtsfehlerkorrektur KW - Codierungstheorie KW - Physical unclonable function KW - Biometrie Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-12914 VL - 2023 ER - TY - THES A1 - Welearegai, Gebrehiwet Biyane T1 - Precise Detection of Injection Attacks in Real-world Applications N2 - Code injection attacks, like the one used in the high-profile 2017 Equifax breach, have become increasingly common, ranking at the top of OWASP's list of critical web application vulnerabilities. Injection attacks can also target embedded applications running on processors like ARM and Xtensa by exploiting memory bugs and maliciously altering the program's behavior or even taking full control over a system.
In particular, ARM's support of low power consumption without sacrificing performance is leading the industry to shift towards ARM processors, which draws the attention of injection attacks as well. In this thesis, we consider web applications and embedded applications (running on ARM and Xtensa processors) as the targets of injection attacks. To detect injection attacks in web applications, taint analysis is mostly proposed, but the precision, scalability, and runtime overhead of the detection depend on the analysis types (e.g., static vs. dynamic, sound vs. unsound). Moreover, among the existing dynamic taint tracking approaches for Java-based applications, even the most performant can impose a slowdown of at least 10–20% and often far more. On the other hand, considering embedded applications, while some initial research has tried to detect injection attacks (i.e., ROP and JOP) on ARM, it suffers from high performance or storage overhead. Besides, the Xtensa platform has been neglected, although it is used in most firmware-based embedded WiFi home automation devices. This thesis aims to provide novel approaches to precisely detect injection attacks on both web and embedded applications. To that end, we compare JavaScript static analysis frameworks in order to evaluate the security of a hybrid app (JS & native) from an industrial partner, provide RIVULET, a tool that precisely detects injection attacks in Java-based real-world applications, and investigate the detection of injection attacks on the ARM and Xtensa platforms using hardware performance counters (HPCs) and machine learning (ML) techniques. To evaluate the security of the hybrid application, we initially compare the precision, scalability, and code coverage of two widely used static analysis frameworks, WALA and SAFE. The result of our comparison shows that SAFE provides higher precision and better code coverage at the cost of somewhat lower scalability. Based on these results, we analyze the data flows of the hybrid app by extending SAFE's taint analysis, and we detect potential injection attacks in the hybrid application. Similarly, to detect injection attacks in Java-based applications, we provide Rivulet, which monitors the execution of developer-written functional tests using dynamic taint tracking. Rivulet uses a white-box test generation technique to re-purpose those functional tests to check if any vulnerable flow could be exploited. We compared Rivulet to the state-of-the-art static vulnerability detector Julia on benchmarks, and Rivulet outperformed Julia with respect to both false positives and false negatives. We also used Rivulet to detect new vulnerabilities. Moreover, for applications running on the ARM and Xtensa platforms, we investigate ROP attack detection by combining HPCs and ML techniques. We collect data by exploiting real-world vulnerable applications and small benchmarks to train the ML models. For ROP attack detection on ARM, we also implement an online monitor which labels a program's execution as benign or under attack and stops its execution once the latter is detected. Evaluating our ROP attack detection approach on ARM yields a detection accuracy of 92% for the offline training and 75% for the online monitoring. Similarly, our ROP attack detection on the firmware-only Xtensa processor yields an overall average detection accuracy of 79%.
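The combination of HPCs and ML can be sketched roughly as follows (synthetic counter values and an illustrative feature choice; scikit-learn is assumed): windows of hardware performance counter readings, labeled benign or under attack, are used to train a classifier such as an SVM.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n = 200
    # Synthetic per-window HPC features: return instructions, branch
    # mispredictions and instruction cache misses (normalized counts).
    benign = rng.normal([10.0, 5.0, 3.0], 1.0, size=(n, 3))
    # ROP gadget chains execute many returns and mispredict heavily.
    attack = rng.normal([25.0, 15.0, 6.0], 1.5, size=(n, 3))

    X = np.vstack([benign, attack])
    y = np.array([0] * n + [1] * n)   # 0 = benign, 1 = under attack

    scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
    print("cross-validated detection accuracy:", round(scores.mean(), 2))

An online monitor of the kind described above would apply the trained classifier to a sliding window of live counter readings and halt the program when the attack class is predicted.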
Last but not least, this thesis shows how relevant taint analysis is for precisely detecting injection attacks on web applications, and it demonstrates the power of HPCs combined with machine learning for detecting control-flow injection attacks on the ARM and Xtensa platforms. Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-12926 ER - TY - THES A1 - Danner, Dominik T1 - Towards Quality of Service and Fairness in Smart Grid Applications N2 - Due to the increasing amount of distributed renewable energy generation and the emerging high demand at consumer connection points, e.g., electric vehicles, the power distribution grid will reach its capacity limit at peak load times if it is not expensively enhanced. Alternatively, smart flexibility management that controls user assets can help to better utilize the existing power grid infrastructure, for example by sharing the available grid capacity among connected electric vehicles or by disaggregating flexibility requests to hybrid photovoltaic battery energy storage systems in households. Besides maintaining an acceptable state of the power distribution grid, these smart grid applications also need to ensure a certain quality of service and provide fairness between the individual participants, both of which are not extensively discussed in the literature. This thesis investigates two smart grid applications, namely electric vehicle charging-as-a-service and flexibility-provision-as-a-service from distributed energy storage systems in private households. The allocation of the electric vehicle charging service is modeled with distributed queuing-based allocation mechanisms, which are compared to new probabilistic algorithms. Both integrate user constraints (arrival time, departure time, and energy required) to manage quality of service and fairness. In the queuing-based allocation mechanisms, electric vehicle charging requests are packetized into logical charging current packets, representing the smallest controllable unit of the charging process. These packets are queued at hierarchically distributed schedulers, which allocate the available charging capacity using the time and frequency division multiplexing technique known from the networking domain. This allows multiple electric vehicles to be charged simultaneously with variable charging currents. To achieve high quality of service and fairness among electric vehicle charging processes, dynamic weights are introduced into a weighted fair queuing scheduler that considers the electric vehicle departure time and required energy for prioritization. The distributed probabilistic algorithms are inspired by medium access protocols from computer networking, such as binary exponential backoff, and control quality of service and fairness by adjusting sampling windows and waiting periods based on the user requirements. The second smart grid application under investigation aims to provide flexibility-provision-as-a-service that disaggregates power flexibility requests to distributed battery energy storage systems in private households. Commonly, the main purpose of stationary energy storage is to store energy from a local photovoltaic system for later use, e.g., for overnight charging of an electric vehicle. This is optimized locally by a home energy management system, which also allows the scheduling of external flexibility requests defined by the deviation from the optimal power profile at the grid connection point, for example, to perform peak shaving at the transformer.
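Returning to the queuing-based charging mechanism described above, the dynamic-weight idea can be illustrated with a small sketch (an illustrative weight formula, not the scheduler evaluated in the thesis): each vehicle's weight grows with its remaining energy demand and shrinks with its remaining parking time, and the available capacity is shared in proportion to these weights.

    def dynamic_weights(vehicles, now):
        """Weight as urgency: remaining energy over remaining time (hours)."""
        weights = {}
        for vid, v in vehicles.items():
            remaining_h = max((v["departure"] - now) / 3600.0, 0.25)
            weights[vid] = v["energy_needed_kwh"] / remaining_h
        return weights

    def allocate(vehicles, capacity_kw, now):
        """Share the charging capacity in proportion to the weights,
        capped at each vehicle's maximum charging power."""
        w = dynamic_weights(vehicles, now)
        total = sum(w.values())
        return {vid: min(vehicles[vid]["max_kw"], capacity_kw * w[vid] / total)
                for vid in vehicles}

    vehicles = {
        "ev1": {"departure": 3600, "energy_needed_kwh": 10, "max_kw": 11},
        "ev2": {"departure": 4 * 3600, "energy_needed_kwh": 10, "max_kw": 11},
    }
    print(allocate(vehicles, capacity_kw=14, now=0))  # ev1 is more urgent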
This thesis discusses a linear heuristic and a metaheuristic to disaggregate a flexibility request to the individual participating energy management systems, which are grouped into a flexibility pool. The linear heuristic iteratively assigns portions of the power flexibility to the most appropriate energy management system, one time slot after another, minimizing the total flexibility cost or maximizing the probability of flexibility delivery. In addition, a multi-objective genetic algorithm is proposed that also takes into account power grid aspects, quality of service, and fairness among participating households. The genetic operators are tailored to the flexibility disaggregation search space, taking into account flexibility and energy management system constraints, and enable power-optimized buffering of fitness values. Both smart grid applications are validated on a realistic power distribution grid with real driving patterns and energy profiles for photovoltaic generation and household consumption. The results of all proposed algorithms are analyzed with respect to a set of newly defined metrics on quality of service, fairness, efficiency, and utilization of the power distribution grid. One of the main findings is that none of the tested algorithms outperforms the others in all quality of service metrics; however, the integration of user expectations improves the service quality compared to simpler approaches. Furthermore, smart grid control that incorporates users and their flexibility allows the integration of high-load applications, such as electric vehicle charging and flexibility aggregation from distributed energy storage systems, into the existing electricity distribution infrastructure. However, there is a trade-off between power grid aspects, e.g., grid losses and voltage values, and the quality of service provided. Whenever active user interaction is required, means of controlling the quality of service of users' smart grid applications are necessary to ensure user satisfaction with the services provided. Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus4-13731 ER - TY - THES A1 - Wilhelm, Mario T1 - Approaching Disaster Vulnerability in a Megacity: Community Resilience to Flooding in two Kampungs in Jakarta T1 - Vulnerabilität gegenüber Katastrophen in Megastädten: Die Resilienz zweier Kampung-Gemeinschaften in Jakarta N2 - The capacity of a society to adapt to and cope with disasters is generally discussed under the term resilience. It is a concept that entered vulnerability research recently and that still needs further theoretical underpinning and empirical grounding. The vulnerability context of the urban poor can be approached from different angles, such as urbanism or development studies. My thesis aims to contribute to a better understanding of resilience within vulnerability research. I decided to follow recent approaches in vulnerability research for two reasons. First, integrated vulnerability concepts highlight the value of interdisciplinary approaches. Second, the integrated vulnerability discourse offers an analytical framework that can be modified and applied to the empirical study of vulnerability in Jakarta's kampungs. In chapter two, I will introduce the theoretical embedding of the analytical framework. This is necessary, as disaster research in social science is a relatively young research stream. Terms such as disasters, catastrophes and hazards are often used synonymously.
Moreover, different definitions of vulnerability and resilience lead to diverse applications of the concepts. Thus, I would like to outline some aspects of disaster research in social science and present the definitions I will apply in this thesis. Moreover, at the end of chapter two, I will refine my research question and present the methodological framework I use for the analysis of my empirical data. The integrated vulnerability framework points out the importance of the temporal and spatial scale in vulnerability analysis. In chapter three, I will therefore provide the temporal and spatial embedding of my research by presenting a historical perspective on Jakarta. Since it is not possible to present almost 500 years of city history in one chapter, I will concentrate on two aspects: the first focus will be put on the origin and development of Jakarta's kampungs, the second on the flood situation. Accordingly, I will briefly present the different phases of Jakarta's history. In each phase, I will point out the aspects relevant to kampungs and the aspects relevant to the flood situation. Finally, I will conclude how the situation of the kampungs and their dwellers changed over time. In addition to that, I will discuss the historical development of flood risk in a separate section in order to contribute to a better understanding of recent flooding. Since the integrated vulnerability framework is a place-based approach, I would like to introduce the theoretical discussion on urbanism and space in chapter four. I will show that the analytical value of the concept of the slum in approaching the local level in Jakarta's kampungs is limited. I will also present the concept of informality, as it is often discussed in the context of slums and urban poverty. Moreover, I will argue that informality is an integral part of megacities in developing countries and that it reflects processes of self-organisation on a local level. Aspects of social organisation are, again, also important for a vulnerability analysis. I will then link the concepts of institutions and social capital in order to show how social organisation can be approached as a resource for communities. I will conclude the chapter with models of social space, particularly the concept of locality, which will provide a framework for structuring and analysing the empirical field data. In chapters five and six, I will leave the theoretical discussion and present the data I collected during my empirical research in two kampungs. Chapter five will begin with a short introduction to the research approach and the methods I used in the research process. Then I will present the field data following the structure provided by the concept of locality, which basically refers to the categories of material space and social organisation. I will add two categories to this structure. In order to link the previous discussion on informality with the empirical research, I will present data on the informal-formal continuum in both research locations. In addition to this, I will sum up my empirical findings on how people adapt to and cope with hazards, particularly flooding. I decided to separate the description and the analysis of my research findings because I would like to first provide a general understanding of the local situation in both research locations in a descriptive way before I analyse and interpret the data. Accordingly, I will analyse and interpret the data in chapter six.
The main focus will be put on aspects of space and social organisation so that local communities can be conceptualised. In chapter seven, I will finally analyse aspects of vulnerability and resilience by applying the integrated vulnerability framework to my research findings. Lastly, I will briefly summarise the thesis in chapter eight and provide an outlook. N2 - The capacity of a society to adapt to and cope with disasters is described by the term resilience. It is also a concept that has only recently established itself in vulnerability research and therefore requires further empirical grounding. The vulnerability context of poor population groups in megacities can be approached with the help of various approaches, for example from urban studies or development studies. My research aims to contribute to a better understanding of the resilience concept in vulnerability research. I decided to base my empirical research on recent approaches in vulnerability research for two reasons. First, the vulnerability discourse underlines the value of interdisciplinary approaches. Second, the integrated vulnerability discourse offers an analytical framework that can be drawn upon for an empirical study of the vulnerability of the kampung communities in Jakarta. In chapter two, I will present the theoretical embedding of the methodological framework that was used to collect the empirical data. This is important because disaster research within the social sciences is a relatively young field of research. The integrated vulnerability approach emphasizes the importance of temporal and spatial scales. In chapter three, I will therefore present the temporal and spatial embedding of my research and give a historical account of urban development in Jakarta. In doing so, I will focus on two aspects: first, the origin and development of the kampungs in Jakarta, and second, the flood situation in Jakarta. Chapter four presents the theoretical discussion on urbanism and space. I will show that the concept of the 'slum' has only limited analytical value. In this context, I will argue that informality is an integral part of megacities and reflects processes of self-organisation at the local level. I will then connect the notion of institutions with the discourse on social capital in order to show that social organisation can constitute a resource for local communities. In chapters five and six, I will present the empirical data of my field research, introducing the research approach and the methods used. The empirical data are structured with the help of the concept of locality and are subsequently interpreted. In chapter seven, the integrated vulnerability approach is used to work out aspects of vulnerability and resilience. In chapter eight, the results are summarised and I give a brief outlook.
KW - Resilience KW - Kampung KW - Jakarta KW - Flood KW - Vulnerability KW - Megastadt KW - Katastrophe KW - Reaktion KW - Jakarta Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-26889 ER - TY - THES A1 - Ullah, Ehsan T1 - New Techniques for Polynomial System Solving N2 - Since any encryption map may be viewed as a polynomial map between finite dimensional vector spaces over finite fields, the security of a cryptosystem can be examined by studying the difficulty of solving large systems of multivariate polynomial equations. Therefore, algebraic attacks lead to the task of solving polynomial systems over finite fields. In this thesis, we study several new algebraic techniques for polynomial system solving over finite fields, especially over the finite field with two elements. Instead of using traditional Gröbner basis techniques, we focus on highly developed methods from several other areas, like linear algebra, discrete optimization, numerical analysis and number theory. We study some techniques from combinatorial optimization to transform a polynomial system solving problem into a (sparse) linear algebra problem. We highlight two new kinds of hybrid techniques. The first kind combines the concept of transforming combinatorial infeasibility proofs to large systems of linear equations with the concept of mutants (finding special lower-degree polynomials). The second kind uses the concept of mutants to optimize the Border Basis Algorithm. We study recent suggestions for transferring a system of polynomial equations over the finite field with two elements into a system of polynomial equalities and inequalities over the set of integers (respectively over the set of reals). In particular, we develop several techniques and strategies for converting the polynomial system of equations over the field with two elements to a polynomial system of equalities and inequalities over the reals (respectively over the set of integers); a toy instance of such a conversion is sketched below. This enables us to make use of several algorithms in the fields of discrete optimization and number theory. Furthermore, this also enables us to investigate the use of numerical analysis techniques such as homotopy continuation methods and Newton's method. In each case several conversion techniques have been developed, optimized and implemented. Finally, the efficiency of the developed techniques and strategies is examined using standard cryptographic examples such as CTC and HFE. Our experimental results show that most of the techniques developed are highly competitive with state-of-the-art algebraic techniques. KW - Polynomlösung KW - Algebra KW - Lineare Algebra KW - Algorithmus KW - polynomial system solving KW - techniques KW - linear algebra KW - border bases KW - mutant strategies Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-26815 ER - TY - THES A1 - Xiu, Xingqiang T1 - Non-commutative Gröbner Bases and Applications N2 - Commutative Gröbner bases have a lot of applications in theory and practice, because they have many nice properties, they are computable, and there exist many efficient improvements of their computations. Non-commutative Gröbner bases also have many useful properties. However, applications of non-commutative Gröbner bases are rarely considered, due to the high complexity of the computations. The purpose of this study was to improve the computation of non-commutative Gröbner bases and to investigate the applications of non-commutative Gröbner bases.
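A minimal sketch, assuming the sympy package, of the textbook Boolean-to-real conversion underlying the techniques described in Ullah's abstract above; the thesis develops several of its own conversion strategies, so this is only a generic illustration, not the author's method. Each GF(2) variable x receives the real constraint x^2 - x = 0, and addition in GF(2) (XOR) becomes a + b - 2ab over the reals:

    # Toy conversion of a GF(2) polynomial system into real polynomial
    # equations (hypothetical illustration, not the thesis's algorithms).
    from sympy import symbols, solve

    x, y = symbols('x y', real=True)

    # GF(2) system: x + y = 1 and x*y = 0 (exactly one of x, y equals 1)
    real_system = [
        x**2 - x,            # forces x to take a value in {0, 1}
        y**2 - y,            # forces y to take a value in {0, 1}
        x + y - 2*x*y - 1,   # x XOR y = 1, translated to the reals
        x*y,                 # x AND y = 0
    ]

    print(solve(real_system, [x, y]))  # -> [(0, 1), (1, 0)]

The real system can then be handed to solvers from discrete optimization or numerical analysis, which is exactly the door such conversions open.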
Gröbner basis theory in free monoid rings was carefully revised and Gröbner bases were precisely characterized in great detail. For the computation of Gröbner bases, the Buchberger Procedure was formulated. Three methods, namely interreduction on obstructions, the Gebauer-Möller criteria, and the detection of redundant generators, were developed for efficiently improving the Buchberger Procedure. Further, the same approach was applied to study Gröbner basis theory in free bimodules over free monoid rings. The Buchberger Procedure was also formulated and improved in this setting. Moreover, J.-C. Faugère's F4 algorithm was generalized to this setting. Finally, many meaningful applications of non-commutative Gröbner bases were developed. Enumerating procedures were proposed to semi-decide some interesting undecidable problems. All the examples in the thesis were computed using the package gbmr of the computer algebra system ApCoCoA. The package was developed by the author. It contains dozens of functions for Gröbner basis computations and many concrete applications. The package gbmr and a collection of interesting examples are available at http://www.apcocoa.org/. KW - Gröbner-Basis KW - Assoziativer Ring KW - Endlich erzeugter Modul KW - Gruppentheorie KW - non-commutative polynomial KW - non-commutative Gröbner basis KW - applications KW - Gebauer-Möller KW - optimization Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-26827 ER - TY - THES A1 - Beck, Lotte T1 - Anticorruption in Public Procurement - A Qualitative Research Design T1 - Antikorruption in öffentlichen Ausschreibungen. Ein qualitativer Forschungsansatz N2 - This qualitative approach to research develops a case-oriented design in order to examine risks of corruption in public procurement. The method involves expert interviews as the most important data collection tool and explains how to examine the information by means of a qualitative content analysis. In order to produce rigorous results, the concepts of external validity, construct validity, internal validity and reliability are applied. The research design was applied in two field investigations: A first project focuses on the challenges and chances for anticorruption when awarding contracts in a competitive dialogue. For this purpose, data was collected in an investigation of the German construction market. The results are presented in the form of 16 propositions, also including policy recommendations and approaches for reform. In the framework of a further case-based research project, the paper analyzes the organizational structure and working process of "China's Tangible Construction Market" (TCM). The TCM is an administrative institution where a bid inviter can register in order to announce a public need and conduct a procurement procedure at a fixed location. The analysis of expert interviews conducted during an investigation of the Chinese construction market shows that the TCM offers strong institutional support that can be helpful to curb corruption in public procurement. N2 - This thesis uses qualitative studies to examine corruption risks in the awarding of public contracts. The work essentially consists of three parts: In the first part, a qualitative research approach based on expert interviews and a qualitative content analysis is developed. The second part centres on the analysis of the competitive dialogue.
For this purpose, 23 expert interviews were conducted with representatives of the German construction sector in order to work out the chances and risks for corruption prevention when applying this most recent European tendering procedure. The third part of the thesis deals with the Chinese institution "Tangible Construction Market" (TCM), which supports the organisation of public procurement procedures in the Chinese construction sector. Based on the analysis of 20 expert interviews, the working process of the TCM is described; it is examined how the TCM can prevent corruption and where potential for improvement can be exploited. KW - Öffentliche Ausschreibung KW - Korruption KW - Prävention KW - Bauwesen KW - China KW - Antikorruption KW - Wettbewerblicher Dialog KW - Experteninterview KW - Inhaltsanalyse KW - Fallstudien KW - Anticorruption KW - Procurement KW - construction KW - interview KW - competitive dialogue KW - Tangible Construction Market Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-26801 ER - TY - RPRT A1 - Weithmann, Michael T1 - Thomas Bernhard and the air crash on Tettelham in 1944. "It was a spectacle of unrelieved tragedy". N2 - In his autobiographical essay "Ein Kind" (A Child), the Austrian writer Thomas Bernhard (1931-1998) describes the downing of a heavy World War II US Air Force bomber at Tettelham near Traunstein in Upper Bavaria in 1944. The scene of the crash is now marked by a chapel and some reminiscences of the crew's fate. Although the event is still vivid in the local history, it was not known that Thomas Bernhard had been a keen eyewitness. KW - Bernhard KW - Thomas Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-26783 ER - TY - THES A1 - Wehner, Stefanie T1 - Transformation of rural space from an institutional perspective. Socio-economic development and land use change in Xishuangbanna, Southwest China T1 - Transformation ländlicher Räume aus institutioneller Perspektive. Sozio-ökonomische Entwicklung und Landnutzungswandel in Xishuangbanna, Südwest-China. N2 - Landscapes and societies in Xishuangbanna Dai Autonomous Prefecture in Southwest China have undergone unprecedented changes over the last 60 years. These transformations within the landscapes manifest themselves as land cover change, for example the intensification of traditional land use systems and the introduction of monocultures, leading to deforestation, loss of biodiversity and other forms of environmental degradation. At the same time, communities and societies within these landscapes have experienced a certain degree of economic development, mainly through the exploitation of natural resources. Through changing political and economic frameworks, they have also undergone profound transformations within their social and socio-cultural configurations. Based on the outcomes of field work in Xishuangbanna between 2006 and 2010, this study examines institutions, institutional voids and institutional change concerning land use in the Naban River Watershed National Nature Reserve. Combining socio-economic and ecological data, patterns of land-use change and their interrelation with local communities are explored. With the emergence of the rubber-line, a socio-ecological frontier, disparities between upland and lowland landscapes and communities are intensifying. N2 - Like many other regions worldwide, the landscapes and societies of Xishuangbanna, a prefecture in Southwest China, have undergone rapid changes over the last 60 years.
At the landscape level, these transformations manifest themselves as land use change, for example through the intensification of traditional land use practices and the modernisation of agriculture. The introduction of monocultures in particular leads to deforestation, loss of biological diversity and other forms of environmental degradation and destruction. At the same time, village communities and societies have, to a certain degree, benefited from economic development through the exploitation of natural resources, accompanied by profound changes in their social and socio-economic structures. Little is known so far, however, about the interrelations between ecological and social change in particular. The opportunity to conduct field research in and on a remote yet diverse and dynamic region such as Xishuangbanna offered a valuable chance to study human-environment relations more closely. The research for this doctoral thesis was carried out within a multidisciplinary research project entitled "Erhaltung von Kulturlandschaften durch Diversifizierung der Ressourcen-Nutzung, Strategien und Technologien für Agrar-Ökosystemen im bergigen Südwesten Chinas", also known as "Living Landscapes China" (LILAC). KW - Sozialökologie KW - Landnutzungswandel KW - Ländliches Südwest China KW - Institutioneller Wandel KW - Socio-ecology KW - Land use change KW - Rural Southwest China KW - Institutional change Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-25883 N1 - Also available as paperback from: Der Andere Verlag. ISBN 978-3-86247-228-4 ER - TY - THES A1 - Kunze, Kai T1 - Compensating for On-Body Placement Effects in Activity Recognition T1 - Kompensation positionsbezogener Artefakte in Aktivitätserkennung N2 - This thesis investigates how placement variations of electronic devices influence the possibility of using sensors integrated in those devices for context recognition. The vast majority of context recognition research assumes well-defined, fixed sensor locations. Although this might be acceptable for some application domains (e.g. in an industrial setting), users, in general, will have a hard time coping with these limitations. If one needs to remember to carry dedicated sensors and to adjust their orientation from time to time, the activity recognition system is more distracting than helpful. How can we deal with device location and orientation changes to make context sensing mainstream? This thesis presents a systematic evaluation of device placement effects in context recognition. We first deal with detecting whether a device is carried on the body or placed somewhere in the environment. If the device is placed on the body, it is useful to know on which body part. We also address how to deal with sensors changing their position and their orientation during use. For each of these topics some highlights are given in the following. Regarding environmental placement, we introduce an active sampling approach to infer symbolic object location. This approach requires only simple sensors (acceleration, sound) and no infrastructure setup. The method works for specific placements such as "on the couch" or "in the desk drawer" as well as for general location classes, such as "closed wood compartment" or "open iron surface". In the experimental evaluation we reach a recognition accuracy of 90% and above over a total of over 1200 measurements from 35 specific locations (taken from 3 different rooms) and 12 abstract location classes.
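A minimal sketch, interposed here for illustration, of the kind of orientation-independent motion feature that on-body placement recognition of the sort described in this abstract can build on: the Euclidean norm of the 3-axis acceleration vector is invariant under device rotation, so window statistics of the norm survive arbitrary device orientations. This is a generic stand-in assuming numpy, not the thesis's actual pipeline:

    # Hypothetical orientation-invariant features for placement recognition.
    import numpy as np

    def placement_features(acc_xyz, win=100):
        """acc_xyz: (n, 3) accelerometer samples; returns per-window features."""
        mag = np.linalg.norm(acc_xyz, axis=1)          # rotation-invariant signal
        wins = mag[: len(mag) // win * win].reshape(-1, win)
        return np.column_stack([wins.mean(axis=1),     # gravity + posture offset
                                wins.std(axis=1),      # motion intensity
                                np.ptp(wins, axis=1)]) # peak-to-peak range

Such features would then feed a standard classifier trained per body location; the thesis evaluates its own, more elaborate method.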
To derive the coarse device placement on the body, we present a method solely based on rotation and acceleration signals from the device. It works independently of the device orientation. The on-body placement recognition rate is around 80% over 4 min. of unconstrained motion data for the worst scenario and up to 90% over a 2 min. interval for the best scenario. We use over 30 hours of motion data for the analysis. Two special issues of device placement are orientation and displacement. This thesis proposes a set of heuristics that significantly increase the robustness of motion sensor-based activity recognition with respect to sensor displacement. We show how, within certain limits and with modest quality degradation, motion sensor-based activity recognition can be implemented in a displacement-tolerant way. We evaluate our heuristics first on a set of synthetic lower arm motions, which are well suited to illustrate the strengths and limits of our approach, then on an extended modes-of-locomotion problem (sensors on the upper leg) and finally on a set of exercises performed on various gym machines (sensors placed on the lower arm). In this example our heuristics raise the displaced recognition rate from 24% for a displaced accelerometer, which had 96% recognition when not displaced, to 82%. KW - Kontextbezogenes System KW - Mustererkennung KW - Aktivitätserkennung KW - positionsbezogener Effekte KW - sensor displacement KW - on-body KW - context recognition Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-26114 ER - TY - THES A1 - Capco, Jose T1 - Real Closed * Rings T1 - Reelle abgeschlossene * Ringe N2 - In this dissertation I examine a definition of the real closure of commutative unitary reduced rings. I also give a characterization of rings that are real closed in this context and show how one is able to arrive at such a real closure. There are sufficient examples to help the reader get a feel for real closed * rings and the real closure * of commutative unitary rings. KW - real closed rings KW - real closure Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-25915 ER - TY - THES A1 - Karoly, Andrea T1 - Investment Strategies under Uncertainty: Theory and evidence of preemption in case of geographical market entrance T1 - Investition unter Unsicherheit N2 - This thesis develops an equilibrium framework for the strategic exercise of a geographical market entry option. The theoretical model analyses the impact of asymmetries between the competing firms, such as follower entry barriers and asymmetric profitability, on the optimal market entry timing and on firm values. The duopoly model shows the existence of three types of equilibrium strategies and expresses the critical level of asymmetry which separates the equilibrium regions. The analysis proves that softer competition does not force the stronger firm to enter the market at its preemption point, and as a consequence rent equalisation between the firms does not occur. However, it is also shown that the critical level of asymmetry is mitigated or strengthened by common economic factors such as the host market's profit volatility and the interest rate. Extending the duopoly model to the oligopoly case, the results show that each additional competitor delays the first market entrance compared to the duopolist leader's preemption point. Hence, one additional competitor accelerates the first market entry if the number of competing firms excluding it is odd and has the reverse impact if it is even.
It is further observed that continuation may disappear in some subgames of the market entry game in an oligopoly, as a result of which no closed-loop market entry strategy set exists. The equilibrium results of the theoretical models are tested empirically by applying the Cox proportional hazards model to the entry behaviour of 61 retailers into 6 Eastern European countries from 1989 until 2005 (a minimal sketch of such a fit is given below). The results explain why retailers entered certain markets earlier and why some firms were more successful in seizing the entry opportunity. The results show that, driven by the development of demand potential on the host market and by the intensity of competition, foreign retailers had a limited period of time, defined as the "window of opportunity", to carry out their market entry. KW - Auslandsinvestition KW - Nash-Gleichgewicht KW - Duopol KW - Oligopol KW - Markteintritt KW - dynamische Strategien KW - Preemption KW - Nash-equilibrium KW - Cox proportional hazard model KW - closed loop strategy KW - geographical market entrance Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-12000 ER - TY - THES A1 - Engelen, Christian T1 - Three Essays on Intra-Creditor Coordination Failures in Sovereign Debt Restructuring T1 - Drei Aufsätze zu Koordinationsfehlern unter Gläubigern im Rahmen der Umschuldung von Staatsschulden N2 - This work comprises three essays that attempt to contribute to the task of reviewing the prevailing (solely market-based) contractual approach to sovereign debt restructuring. These essays particularly focus on aspects of intra-creditor coordination. Although the content of these essays is interconnected, each unit is a stand-alone entity. Essay I: The latest Argentinean debt restructuring was the first time the resolution of a modern financial crisis was completely handed over to the private financial markets, without official intervention by public institutions. This essay argues that the resulting harshest haircut for private creditors in history can be at least partially related to an assurance game played by creditors. It shows that incentive schemes provided by the Argentinean government were factors facilitating this haircut. The analysis suggests that, contrary to the recognition in the literature, the effects of Collective Action Clauses and Exit Consents within a restructuring process are not equal. In the case of Argentina, the inclusion of Collective Action Clauses in the defaulted bonds could have benefited the holdout creditors. Essay II: Experience from events of sovereign debt restructuring over the last decade shows that the prevailing process is mainly shaped by exchange offers launched by the debtor. This suggests that negotiations for changing the repayment terms of the debt take place in an ultimatum game which centers virtually all bargaining power on the debtor side. Creditors vote according to reservation values that might be influenced by fairness considerations both vis-à-vis the debtor and vis-à-vis their fellow creditors. As fairness is usually a highly subjective influence, this can result in a heterogeneity of reservation values which might impede effective intra-creditor coordination, to the benefit of the debtor. Essay III: Mitigating intra-creditor coordination failures has always been crucial in any proposal for an institutionalized process of restructuring sovereign bonds. However, one source of failure in creditor coordination has not yet been taken into consideration.
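The sketch referenced in Karoly's abstract above: a minimal Cox proportional hazards fit, assuming the Python package lifelines is available; the column names and toy data are invented for illustration and do not reproduce the study's data set:

    # Hypothetical Cox proportional hazards fit on market entry durations.
    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.DataFrame({
        "years_to_entry":   [3, 7, 11, 16, 5, 9],   # duration until entry
        "entered":          [1, 1, 0, 1, 0, 1],     # 1 = observed, 0 = censored
        "demand_potential": [0.8, 0.5, 0.2, 0.9, 0.4, 0.6],
        "competitors":      [2, 4, 6, 1, 5, 3],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="years_to_entry", event_col="entered")
    cph.print_summary()  # hazard ratios for the covariates

A positive coefficient on a covariate would indicate that it accelerates market entry, which is the kind of question the empirical part above asks of demand potential and competition intensity.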
The current process of sovereign debt restructuring enables the debtor to launch an exchange offer, which provides incentives to discriminate inter-temporally among creditors with different reservation values. Only a creditor representation that can effectively bind all different creditor types will mitigate this failure and thereby prevent potential conflicts of interest among creditors. Enhancing the current proposal of creditor groups so that creditors can effectively pre-commit can shield the process from this kind of coordination failure. This essay concludes with a proposal for the creation of a creditor representation body whose mode of operation is similar to that of a celebrated institutionalized creditor representation body of the century before last. To summarize the conclusions drawn from these essays: the contractual approach is not yet able to guarantee effective creditor coordination, due to the lack of a comprehensive and forceful permanent creditor representation. Establishing such a permanent representation body would replicate the institutional development experienced during the last heyday of bonds as a source of emerging market financing. This would lead to a significant improvement in creditor coordination. Moreover, since the result of a potential debt restructuring feeds back into the ex-ante lending decision of the individual investor, this improvement could contribute to the welfare-enhancing effects of external financing by private creditors for developing economies. N2 - This work comprises three essays that attempt to contribute to the debate on the design of a (purely market-based) contractual approach to the restructuring of sovereign bonds of emerging economies. One focus of the work is the problem of deficient coordination among bondholders. Although there is a substantive connection between them, each of these essays is to be regarded as a stand-alone unit. Essay I: The recent restructuring of Argentinean sovereign bonds was the first time in the modern history of financial crises that crisis resolution was a completely market-based process without intervention by the public sector. This essay shows how the resulting haircut, the largest imposed on private investors in the history of sovereign bond restructuring, can at least partially be traced back to deficient coordination among creditors in the context of an assurance game. Moreover, within such a game the debtor has incentives to exploit the resulting coordination problems among bondholders to its advantage by means of certain contractual elements. The analysis shows that, contrary to the perception in the literature, the effects of so-called "Exit Consents" and "Collective Action Clauses" are not identical. Had Argentina's bonds contained such majority clauses, coordination among the creditors would have benefited from them. Essay II: Restructuring negotiations of the last decade have shown that offers by the debtor to exchange old bonds for new ones constitute the hitherto prevailing procedure for adjusting the contractual repayment terms.
The negotiations between the creditor and debtor sides about the details of this adjustment take place within an ultimatum game in which the debtor holds virtually all of the bargaining power. Creditors decide on the acceptance of such an offer on the basis of a reservation value, which can be influenced by perceptions of fairness towards the debtor as well as towards the other creditors. The subjectivity of such perceptions can lead to heterogeneity of reservation values, which in turn can have a negative effect on the effectiveness of coordination among bondholders. The debtor would then be in a position to exploit this deficient effectiveness to its advantage. Essay III: A central aspect of various proposals for an institutionalized process for restructuring sovereign bonds has always been the avoidance of coordination failures among bondholders. One circumstance, however, has not yet received sufficient attention: the currently prevailing process of restructuring negotiations enables the debtor to present creditors with offers to exchange the respective bonds. For the debtor country, the freedom in designing such offers provides an incentive to discriminate over time between different types of creditors. Although this is advantageous for the debtor, such discrimination leads to a prolonged and thus inefficient restructuring process. Only an effective creditor representation that can bind all creditors into a common vote would be able to prevent this. Extending the current proposals for the formation of "Creditor Groups" could help to shield the restructuring process from such failures of creditor coordination. This essay therefore sketches such a creditor representation, whose mode of operation resembles that of a comparable institution in the century before last. In conclusion, the statements of the essays can be summarized as follows: in its current state, the purely market-based approach to restructuring sovereign bonds of emerging economies is not yet able to guarantee effective coordination among bondholders, owing to the lack of a comprehensive and forceful creditor representation. Establishing such a creditor representation would retrace the institutional development during the heyday of the bond markets in the century before last, which led to a significant improvement in creditor coordination. And since the outcome of a potential restructuring process also influences the ex-ante investment decision of the individual bondholder, this could contribute to the welfare-enhancing effects of external financing of emerging economies by private bondholders.
KW - Öffentliche Schulden KW - Politische Institution KW - Umschuldung KW - Umschuldungsverhandlungen KW - Koordinationsfehler KW - Vertraglicher Ansatz KW - Internationale Finanzarchitektur KW - Emerging Market Staatsanleihen KW - Sovereign Debt Restructuring KW - Coordination Failures KW - Contractual Approach KW - International Financial Architecture KW - Emerging Market Bonds Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-11960 ER - TY - INPR A1 - Kreitmeier, Wolfgang T1 - Optimal quantization for uniform distributions on Cantor-like sets N2 - In this paper, the problem of optimal quantization is solved for uniform distributions on some higher-dimensional, not necessarily self-similar $N$-adic Cantor-like sets. The optimal codebooks are determined and the optimal quantization error is calculated. The existence of the quantization dimension is characterized, and it is shown that the quantization coefficient does not exist. The special case of self-similarity is also discussed. The conditions imposed are a separation property of the distribution and strict monotonicity of the first $N$ quantization error differences. Criteria for these conditions are proved and, as special examples, modified versions of classical fractal distributions are discussed. KW - Maßtheorie KW - Quantisierung KW - Iteriertes Funktionensystem KW - Fraktale Dimension KW - optimal quantization KW - quantization dimension KW - quantization coefficient KW - self-similar probabilities Y1 - 2008 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-12449 ER - TY - INPR A1 - Kreitmeier, Wolfgang T1 - Optimal Quantization for Dyadic Homogeneous Cantor Distributions N2 - For a large class of dyadic homogeneous Cantor distributions in $\mathbb{R}$, which are not necessarily self-similar, we determine the optimal quantizers, give a characterization for the existence of the quantization dimension, and show the non-existence of the quantization coefficient. The class contains all self-similar dyadic Cantor distributions with contraction factor less than or equal to $\frac{1}{3}$. For these distributions we calculate the quantization errors explicitly. KW - Maßtheorie KW - Fraktale Dimension KW - Iteriertes Funktionensystem KW - Cantor-Menge KW - Hausdorff-Dimension KW - Hausdorff-Maß KW - Quantization KW - homogeneous Cantor measures KW - Quantization dimension KW - Quantization coefficient Y1 - 2005 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-3845 N1 - The final version of the article can be requested from the author. Contact: opus@uni-passau.de ER - TY - INPR A1 - Kreitmeier, Wolfgang T1 - Asymptotic order of quantization for Cantor distributions in terms of Euler characteristic, Hausdorff and Packing measure N2 - For homogeneous one-dimensional Cantor sets, which are not necessarily self-similar, we show under some restrictions that the Euler exponent equals the quantization dimension of the uniform distribution on these Cantor sets. Moreover, for a special subclass of these sets, we present a link between the Hausdorff and the packing measure of these sets and the high-rate asymptotics of the quantization error.
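For orientation, the standard notions behind the quantization preprints above can be stated in a few lines of LaTeX; these are the textbook definitions in the sense of Graf and Luschgy, not results specific to these papers:

    % n-th quantization error of order r of a probability P on R^d:
    e_{n,r}(P) \;=\; \inf_{\alpha \subset \mathbb{R}^d,\; |\alpha| \le n}
       \Bigl( \int \min_{a \in \alpha} \lVert x - a \rVert^{r} \, dP(x) \Bigr)^{1/r},
    % quantization dimension, whose existence the abstracts characterize:
    D_r(P) \;=\; \lim_{n \to \infty} \frac{\log n}{-\log e_{n,r}(P)},
    % quantization coefficient, shown above not to exist in these cases:
    Q_r(P) \;=\; \lim_{n \to \infty} n^{\,r/D_r(P)} \, e_{n,r}(P)^{r}.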
KW - Maßtheorie KW - Fraktale Dimension KW - Iteriertes Funktionensystem KW - Cantor-Menge KW - Hausdorff-Dimension KW - Hausdorff-Maß KW - Homogeneous Cantor set KW - Euler characteristic KW - Euler exponent KW - quantization dimension KW - quantization coefficient KW - Hausdorff dimension Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-7374 N1 - The final version of the article can be requested from the author. Contact: opus@uni-passau.de ER - TY - INPR A1 - Kreitmeier, Wolfgang T1 - Optimal quantization of probabilities concentrated on small balls N2 - We consider probability distributions which are uniformly distributed on a disjoint union of balls with equal radius. For a small enough radius, the optimal quantization error is calculated explicitly in terms of the ball centroids. We apply the results to special self-similar measures. KW - Maßtheorie KW - Quantisierung KW - Iteriertes Funktionensystem KW - Schwerpunkt KW - optimal quantization KW - centroid KW - self-similar probabilities Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-12010 ER - TY - THES A1 - Guppenberger, Michael T1 - Enhancing Information Systems with Event-Handling - A Non-Invasive Approach T1 - Erweiterung von Informationssystemen um Event-Handling - Ein Nicht-Invasiver Ansatz N2 - Due to the immense advance of widely accessible information systems in industrial applications, science, education and everyday use, it becomes more and more difficult for the users of those information systems to keep track of new and updated information. An approach to cope with this problem is to go beyond traditional search facilities and instead use the users' profiles to monitor data changes and to actively inform them about these updates - an aspect that has to be explicitly developed and integrated into a variety of information systems. This is traditionally done in an individual way, depending on the application and its platform. In this dissertation, we present a novel approach to model the semantic interrelations that specify which users to inform about which updates, based on the underlying model of the respective information system. For the first time, a meta-model is presented that allows information system designers to tag an arbitrary data model and thus specify the event-handling semantics. A formal specification of how to interpret meta-models to determine the receivers of the events completes the presented concept. For the practical realization of this new concept, model-driven architecture (MDA) proves to be an ideal technical means. Using our newly developed UML profile based on data-modelling standards, an implementation of the event-handling specification can automatically be generated for a variety of different target platforms, e.g. relational databases using triggers. This meta-approach makes the proposed solution ideal with respect to maintainability and genericity. Our solution significantly reduces the overall development effort for an event-handling facility. In addition, the enhanced model of the information system can be used to generate an implementation that also fulfils non-functional requirements like high performance and extensibility. The overall framework, consisting of the domain-specific language (i.e.
the meta-model), formal and technical transformations of how to interpret the enhanced information system model, and a cost-based optimizing strategy, constitutes an integrated approach offering several advantages over traditional implementation techniques: our framework can be applied to new information systems as well as to legacy applications without having to modify existing systems; it offers an extensible, easy-to-use, generic and thus reusable solution; and it can be tailored to and optimized for many use cases, as the practical evaluation presented in this dissertation verifies. N2 - Due to the ever stronger penetration of computer-based information systems in industry, research, education and other areas of everyday life, it is becoming increasingly difficult for users to keep track of the changes to the stored data that are relevant to them. This is often countered by going beyond the capabilities of traditional search facilities and using user profiles to actively inform users about relevant changes. This aspect has to be explicitly developed and integrated for the most diverse information systems, usually depending on the application domain and its platform. In this dissertation, we present a novel approach with whose help the semantic specifications of which users are to be informed about which changes can be modelled, starting from the underlying data model of the respective system's application. For the first time, a meta-model is presented that enables developers and architects to tag an arbitrary model of an information system with additional information and thereby prescribe the semantics of the event-handling component. In addition, a formal concept is presented that specifies how these annotations are to be interpreted for determining the recipients of the information. With regard to realising this concept, Model Driven Architecture (MDA) proves to be an ideal technical means. With the help of a specially developed UML profile, which builds on existing data modelling standards, an implementation of the event-handling components can be generated automatically for a large variety of target platforms; one example is the use of relational databases together with database triggers. This approach constitutes an ideal solution with respect to maintainability and genericity, which also minimises the development effort. Moreover, our approach also makes it possible to fulfil non-functional requirements, such as optimal performance and extensibility, in the implementation of this component. The framework presented here, consisting of the domain-specific language (in the form of the meta-model), the formal and technical transformation rules for interpreting the specification, and a cost-based optimization strategy, constitutes an integrated approach that offers several advantages over traditional approaches: it can be used without modifying existing systems, it provides an extensible, easy-to-use and at the same time reusable solution, and it can be tailored to and optimized for arbitrary use cases, as the evaluation of our solution on real scenarios in this dissertation shows.
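The tag-and-interpret idea in the abstracts above can be hinted at with a hedged miniature: a data model is annotated with notification paths, and a generic interpreter walks these annotations to determine the receivers of an event. All names here are invented, and the thesis realises the idea with a UML profile and MDA-generated code rather than the Python dictionary used in this sketch:

    # Hypothetical miniature of meta-model-driven receiver resolution.
    NOTIFY_PATHS = {                      # "tags" attached to the data model
        "Seminar": ["participants", "lecturer"],  # whom to inform on a change
    }

    def receivers(entity_type, instance):
        """Collect all users reachable via the tagged relations."""
        users = set()
        for relation in NOTIFY_PATHS.get(entity_type, []):
            target = instance.get(relation)
            users.update(target if isinstance(target, list) else [target])
        return users

    seminar = {"title": "DB2", "participants": ["alice", "bob"], "lecturer": "carol"}
    print(receivers("Seminar", seminar))   # {'alice', 'bob', 'carol'}

In the generated-trigger setting the same interpretation step would be compiled into the target platform instead of evaluated at run-time.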
KW - Notifikation KW - Abonnement KW - Modellierung KW - Generierung KW - Relationale Datenbank KW - MDA KW - Trigger KW - Aktive Datenbanken KW - Metamodellierung KW - Publish/Subscribe KW - Publish/Subscribe KW - Notification KW - Active Database Systems KW - MDA KW - Trigger Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-22485 ER - TY - THES A1 - Dietz, Sebastian T1 - Autoregressive Neural Network Processes - Univariate, Multivariate and Cointegrated Models with Application to the German Automobile Industry T1 - Autoregressive Neuronale Netze - Univariate, Multivariate und Kointegrierte Modelle mit einer Anwendung aus dem Bereich der deutschen Automobilindustrie N2 - Prediction of economic variables is a basic component not only of economic models, but also of many business decisions. Nevertheless, it is difficult to produce accurate predictions in times of economic crises, which cause nonlinear effects in the data. In this dissertation, a nonlinear model for the analysis of time series with nonlinear effects is introduced. Linear autoregressive processes are extended by neural networks to overcome the problem of nonlinearity. This idea is based on the universal approximation property of single hidden layer feedforward neural networks of Hornik (1993). Univariate Autoregressive Neural Network Processes (AR-NN) as well as Vector Autoregressive Neural Network Processes (VAR-NN) and Neural Network Vector Error Correction Models (NN-VEC) are introduced (a simplified AR-NN sketch is given below). Various methods for variable selection, parameter estimation and inference are discussed. AR-NNs as well as an NN-VEC are used for prediction and analysis of the relationships between 4 variables related to the German automobile industry: the US dollar to euro exchange rate, the industrial output of the German automobile industry, the sales of imported cars in the USA and an index of shares of German automobile manufacturing companies. Prediction results are compared to various linear and nonlinear univariate and multivariate models. KW - Nichtlineare Zeitreihenanalyse KW - Zeitreihenanalyse KW - Ökonometrie KW - Nichtlineare Optimierung KW - Nonlinear Time Series Analysis KW - Neural Networks KW - Nonlinear Optimization KW - Econometrics Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-22524 ER - TY - THES A1 - Berl, Andreas T1 - Energy Efficiency in Office Computing Environments T1 - Energieeffizienz in Büroumgebungen N2 - The increasing cost of energy and the worldwide desire to reduce CO2 emissions have raised concern about the energy efficiency of information and communication technology. Whilst research has focused on data centres recently, this thesis identifies office computing environments as significant consumers of energy. Office computing environments offer great potential for energy savings: on one hand, such environments consist of a large number of hosts; on the other hand, these hosts often remain turned on 24 hours per day while being underutilised or even idle. This thesis analyzes the energy consumption within office computing environments and suggests an energy-efficient virtualized office environment. The office environment is virtualized to achieve flexible virtualized office resources that enable an energy-based resource management. This resource management stops idle services and idle hosts from consuming resources within the office and consolidates utilised office services on office hosts.
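A simplified sketch of an autoregressive neural network in the spirit of Dietz's AR-NN processes above, assuming numpy and scikit-learn are available; the explicit linear autoregressive part of the thesis model is omitted here, so this is a generic stand-in rather than the thesis model itself:

    # Hypothetical AR-NN stand-in: a single-hidden-layer feedforward net
    # fed with lagged values of the series.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def make_lags(x, p):
        """Design matrix of p lagged values and the aligned targets."""
        X = np.column_stack([x[i : len(x) - p + i] for i in range(p)])
        return X, x[p:]

    rng = np.random.default_rng(0)
    x = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.standard_normal(400)

    X, y = make_lags(x, p=4)
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    model.fit(X[:300], y[:300])
    print("one-step-ahead predictions:", model.predict(X[300:305]))

The universal approximation property referenced above is what justifies letting the hidden layer absorb the nonlinear dynamics that a purely linear AR fit would miss.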
This increases the utilisation of some hosts while other hosts are turned off to save energy. The suggested architecture is based on a decentralized approach that can be applied to all kinds of office computing environments, even if no centralized data centre infrastructure is available. The thesis develops the architecture of the virtualized office environment together with an energy consumption model that is able to estimate the energy consumption of hosts and network within office environments. The model enables the energy-related comparison of ordinary and virtualized office environments, considering the energy-efficient management of services. Furthermore, this thesis evaluates the energy efficiency and overhead of the suggested approach. First, it theoretically proves the energy efficiency of the virtualized office environment with respect to the energy consumption model. Second, it uses Markov processes to evaluate the impact of user behaviour on the suggested architecture. Finally, the thesis develops a discrete-event simulation that enables the simulation and evaluation of office computing environments with respect to varying virtualization approaches, resource management parameters, user behaviour, and office equipment. The evaluation shows that the virtualized office environment saves more than half of the energy consumption within office computing environments, depending on user behaviour and office equipment. N2 - The rising costs of energy and the worldwide efforts to reduce CO2 emissions are currently leading to intensive investigation of the energy efficiency of information and communication technologies. While a large part of current research focuses on data centres, this work considers office environments with their computers and the connecting network. An energy-efficient architecture is proposed that relies on the virtualization and consolidation of services without depending on centralized data centre hardware or thin clients. KW - Energieeffizienz KW - Virtualisierung KW - Konsolidierung KW - Büro KW - Energy efficiency KW - virtualization KW - consolidation KW - office computing environments Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-22516 ER - TY - INPR A1 - Kreitmeier, Wolfgang T1 - Optimal vector quantization in terms of Wasserstein distance N2 - The optimal quantizer in memory-size constrained vector quantization induces a quantization error which is equal to a Wasserstein distortion. However, for the optimal (Shannon-)entropy constrained quantization error, a proof of a similar identity is still missing. Relying on principal results of optimal mass transportation theory, we will prove that the optimal quantization error is equal to a Wasserstein distance. Since we will state the quantization problem in a very general setting, our approach includes the Rényi-$\alpha$-entropy as a complexity constraint, which covers the special cases of (Shannon-)entropy constrained $(\alpha = 1)$ and memory-size constrained $(\alpha = 0)$ quantization. Additionally, for certain distance functions, we will derive codecell convexity for quantizers with a finite codebook. Using other methods, this regularity in codecell geometry has already been proved earlier by György and Linder.
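In standard notation, the identity discussed in the Wasserstein preprint above reads as follows, where Pi(P,Q) denotes the set of couplings of P and Q; the preprint itself works in a more general setting with a Rényi-alpha-entropy constraint, so this is only the memory-size constrained special case:

    % Wasserstein distance of order r between probabilities P and Q:
    W_r(P, Q) \;=\; \Bigl( \inf_{\pi \in \Pi(P,Q)}
       \int \lVert x - y \rVert^{r} \, d\pi(x, y) \Bigr)^{1/r},
    % the identity: the optimal n-point quantization error is a minimal
    % Wasserstein distance to measures with at most n support points,
    e_{n,r}(P) \;=\; \inf \bigl\{ W_r(P, Q) \;:\; |\mathrm{supp}(Q)| \le n \bigr\}.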
KW - Maßtheorie KW - Transporttheorie KW - Quantisierung KW - Entropie KW - Wasserstein distance KW - optimal quantization error KW - codecell convexity KW - Rényi-$\alpha$-entropy Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-22502 N1 - This is a preprint of an article accepted for publication in the Journal of Multivariate Analysis, ISSN 0047-259X. The original publication is available at http://www.elsevier.com/. The digital object identifier (DOI) of the definitive article is 10.1016/j.jmva.2011.04.005. ER - TY - THES A1 - Hölbling, Günther T1 - Personalized Means of Interacting with Multimedia Content T1 - Persönliche Wege der Interaktion mit multimedialen Inhalten N2 - Today the world of multimedia is almost completely device- and content-centered. It focuses its energy nearly exclusively on technical issues such as computing power, network specifics or content and device characteristics and capabilities. In most multimedia systems, the presentation of multimedia content and the basic controls for playback are the main issues. Because of this, a very passive user experience, comparable to that of traditional TV, is most often provided. In the face of recent developments and changes in the realm of multimedia and mass media, this "traditional" focus seems outdated. The increasing use of multimedia content on mobile devices, along with the continuous growth in the amount and variety of content available, makes an urgent re-orientation of this domain necessary. Given the increasingly difficult situation faced by users of such systems, it is only logical that these individuals be brought to the center of attention. In this thesis we consider these trends and developments by applying concepts and mechanisms to multimedia systems that were first introduced in the domain of user-centrism. Central to the concept of user-centrism is that devices should provide users with an easy way to access services and applications. Thus, the current challenge is to combine mobility, additional services and easy access in a single, user-centric approach. This thesis presents a framework for introducing and supporting several of the key concepts of user-centrism in multimedia systems. Additionally, a new definition of a user-centric multimedia framework has been developed and implemented. To satisfy the user's need for mobility and flexibility, our framework makes seamless media and service consumption possible. The main aim of session mobility is to help people cope with the increasing number of different devices in use. Using a mobile agent system, multimedia sessions can be transferred between different devices in a context-sensitive way. The use of the international standard MPEG-21 guarantees extensibility and the integration of content adaptation mechanisms. Furthermore, a concept is presented that allows for individualized and personalized selection and addresses the need to find appropriate content, all in an easy and intuitive way. Especially in the realm of television, the demand that such systems cater to the needs of the audience is constantly growing. Our approach combines content-filtering methods, state-of-the-art classification techniques and mechanisms well known from the areas of information retrieval and text mining. These are all utilized for the generation of recommendations in a promising new way. Additionally, concepts from the area of collaborative tagging systems are also used.
An extensive experimental evaluation resulted in several interesting findings and proves the applicability of our approach. In contrast to the "lean-back" experience of traditional media consumption, interactive media services offer a way to enable the active participation of the audience. Thus, we present a concept which enables the use of interactive media services on mobile devices in a personalized way. Finally, a use case for enriching TV with additional content and services demonstrates the feasibility of this concept. N2 - Today's world of media and multimedia content is almost exclusively content- and device-oriented. Various systems and developments often focus primarily on the manner of content presentation and on technical specifics, which are usually device-dependent. The growing amount and variety of multimedia content and the increasing use of mobile devices make a rethinking of the design of multimedia systems and frameworks urgently necessary. Instead of holding on to rather rigid and passive concepts, as known from the TV domain, the user should move into the focus of multimedia concepts. To help the user deal with this ever more complex and difficult situation, a rethinking of the basic paradigm of media consumption is required; focusing on the user can counteract the situation described. The following work draws on concepts from the area of user-centrism in order to transfer them to the media domain and to employ them for a more user-specific and user-oriented alignment. The focus here is on the TV domain, although most of the concepts can also be transferred to media use in general. In the following, a framework for supporting the most important concepts of user-centrism in the multimedia domain is presented. To accommodate the trend towards mobile media consumption, the presented framework enables the use of multimedia services and content on and across the boundaries of different devices and networks (session mobility). By using a mobile agent platform in combination with the MPEG-21 standard, a new and flexibly extensible approach to the mobility of usage sessions was realised. In view of the steadily growing amount of content and services, this work presents a concept for the simple and individualized selection and discovery of interesting content and services in a context-specific way. Here, concepts and methods of content-based filtering, current classification mechanisms and methods from the area of text mining are employed in a new way in a multimedia recommender system. Additionally, Web 2.0 methods are integrated into a tag-based collaborative component. A comprehensive evaluation demonstrated both the feasibility and the added value of this component. Our iTV component enables more active participation in media consumption: it supports the provision and use of interactive services, accompanying media consumption, on mobile devices. Based on a scenario for enriching TV programmes with interactive services, the feasibility of this concept was demonstrated.
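The content-filtering ingredient of the recommendation component described above can be illustrated with a hedged retrieval core, assuming scikit-learn is available; the thesis combines such filtering with classification and collaborative tagging, and the toy data here are invented:

    # Hypothetical content-based retrieval core: TF-IDF vectors over
    # programme descriptions, cosine similarity to a user profile.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    programmes = [
        "champions league football live",
        "cooking show italian pasta",
        "documentary on deep sea wildlife",
    ]
    user_profile = "football sports live match"   # e.g. from watched programmes

    vec = TfidfVectorizer()
    doc_vectors = vec.fit_transform(programmes)
    scores = cosine_similarity(vec.transform([user_profile]), doc_vectors)[0]

    ranked = sorted(zip(scores, programmes), reverse=True)
    print(ranked[0])  # the football programme ranks first

In a full recommender, such similarity scores would be one signal among several, blended with classifier output and tag-based collaborative evidence.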
KW - Empfehlungssystem KW - Personalisierung KW - Interaktives Fernsehen KW - Künstliche Intelligenz KW - Kollaborative Filterung KW - Recommender system KW - personalization KW - interactive TV KW - session mobility Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-24210 ER - TY - THES A1 - Johns, Martin T1 - Code Injection Vulnerabilities in Web Applications - Exemplified at Cross-site Scripting T1 - Code-injection Verwundbarkeiten in Web Anwendungen am Beispiel von Cross-site Scripting N2 - The majority of all security problems in today's web applications is caused by string-based code injection, with Cross-site Scripting (XSS) being the dominant representative of this vulnerability class. This thesis discusses XSS and suggests defense mechanisms. We do so in three stages: First, we conduct a thorough analysis of JavaScript's capabilities and explain how these capabilities are utilized in XSS attacks. We subsequently design a systematic, hierarchical classification of XSS payloads. In addition, we present a comprehensive survey of publicly documented XSS payloads which is structured according to our proposed classification scheme. Second, we explore defensive mechanisms which dynamically prevent the execution of some payload types without eliminating the actual vulnerability. More specifically, we discuss the design and implementation of countermeasures against the XSS payloads "Session Hijacking" and "Cross-site Request Forgery", and against attacks that target intranet resources. We build upon this and introduce a general methodology for developing such countermeasures: through an analysis of the targeted payload type, we determine a necessary set of basic capabilities an adversary needs for successfully executing an attack. The resulting countermeasure relies on revoking one of these capabilities, which in turn renders the payload infeasible. Finally, we present two language-based approaches that prevent XSS and related vulnerabilities: We identify the implicit mixing of data and code during string-based syntax assembly as the root cause of string-based code injection attacks. Consequently, we explore data/code separation in web applications. For this purpose, we propose a novel methodology for token-level data/code partitioning of a computer language's syntactical elements. This forms the basis for our two distinct techniques: For one, we present an approach to detect data/code confusion at run-time and demonstrate how this can be used for attack prevention. Furthermore, we show how vulnerabilities can be avoided through altering the underlying programming language. We introduce a dedicated datatype for syntax assembly instead of using string datatypes themselves for this purpose. We develop a formal, type-theoretical model of the proposed datatype and prove that it provides reliable separation between data and code, hence preventing code injection vulnerabilities. We verify our approach's applicability utilizing a practical implementation for the J2EE application server. N2 - Cross-site Scripting (XSS) is one of the most common vulnerability types in the area of web applications. The dissertation treats the XSS problem holistically: based on a systematic elaboration of the causes and potential consequences of XSS, as well as a comprehensive classification of documented attack types, a methodology is first presented that allows the design of dynamic countermeasures for attack containment.
Using this methodology, the design and evaluation of three countermeasures for the attack subclasses "Session Hijacking", "Cross-site Request Forgery" and "attacks on the intranet" are presented. Furthermore, to address the underlying problem fundamentally, a type-based approach to the secure programming of web applications is described that guarantees reliable protection against XSS vulnerabilities (a toy illustration of the underlying data/code separation idea appears below). KW - Computersicherheit KW - World Wide Web KW - XSS KW - SQL Injection KW - Security KW - Web KW - XSS KW - SQL Injection Y1 - 2009 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-23626 ER - TY - THES A1 - Ali, Rashid T1 - Weyl Gröbner Basis Cryptosystems N2 - In this thesis, we shall consider a certain class of algebraic cryptosystems called Gröbner Basis Cryptosystems. In 1994, Koblitz introduced the Polly Cracker cryptosystem, which is based on the theory of Gröbner bases in commutative polynomial rings. The security of this cryptosystem relies on the fact that the computation of Gröbner bases is, in general, EXPSPACE-hard. Cryptanalysis of these commutative Polly Cracker type cryptosystems is possible by using attacks that do not require the computation of Gröbner bases for breaking the system, for example attacks based on linear algebra. To secure these (commutative) Gröbner basis cryptosystems against various attacks, Ackermann and Kreuzer, among others, introduced a general class of Gröbner Basis Cryptosystems that are based on the difficulty of computing module Gröbner bases over general non-commutative rings. The objective of this research is to describe a special class of such cryptosystems by introducing the Weyl Gröbner Basis Cryptosystems. We divide this class of cryptosystems into two parts, namely the (left) Weyl Gröbner Basis Cryptosystems and the Two-Sided Weyl Gröbner Basis Cryptosystems. We suggest using Gröbner bases for left and two-sided ideals in Weyl algebras to construct specific instances of such cryptosystems. We analyse the resistance of these cryptosystems to the standard attacks and provide computational evidence that secure Weyl Gröbner Basis Cryptosystems can be built using left (resp. two-sided) Gröbner bases in Weyl algebras. KW - Gröbner-Basis KW - Weyl-Algebra KW - Kryptologie KW - Public Key Cryptosystem KW - Non commutative Gröbner Basis Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-23195 ER - TY - THES A1 - Rabl, Tilmann T1 - Efficiency in Cluster Database Systems - Dynamic and Workload-Aware Scaling and Allocation T1 - Effizienz in Cluster-Datenbanksystemen - Dynamische und Arbeitslastberücksichtigende Skalierung und Allokation N2 - Database systems have been vital in all forms of data processing for a long time. In recent years, the amount of processed data has been growing dramatically, even in small projects. Nevertheless, database management systems tend to be static in terms of size and performance, which makes scaling a difficult and expensive task. Because of performance and especially cost advantages, more and more installed systems have a shared-nothing cluster architecture. Due to the massive parallelism of the hardware, programming paradigms from high-performance computing are translated into data processing. Database research struggles to keep up with this trend. A key feature of traditional database systems is to provide transparent access to the stored data. This introduces data dependencies and increases system complexity and inter-process communication.
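The toy illustration referenced in the Johns abstract above: a dedicated datatype keeps developer-written markup (code) apart from interpolated values (data) and escapes the data on insertion, so string concatenation can never smuggle script into the output. The thesis formalises and implements this idea for J2EE; this hedged Python miniature only mirrors the concept:

    # Hypothetical data/code-separating HTML builder (not the thesis's API).
    import html

    class SafeHtml:
        def __init__(self, template):
            self.template = template          # markup written by the developer

        def render(self, **data):
            # every value is treated as data and escaped, never parsed as code
            escaped = {k: html.escape(str(v)) for k, v in data.items()}
            return self.template.format(**escaped)

    page = SafeHtml("<p>Hello, {name}!</p>")
    print(page.render(name="<script>alert(1)</script>"))
    # -> <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;!</p>

The type-theoretical point made above is that, with such a datatype, well-typed programs simply cannot express the unsafe mixing of data and code that string-based assembly permits.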
TY - THES A1 - Rabl, Tilmann T1 - Efficiency in Cluster Database Systems - Dynamic and Workload-Aware Scaling and Allocation T1 - Effizienz in Cluster-Datenbanksystemen - Dynamische und Arbeitslastberücksichtigende Skalierung und Allokation N2 - Database systems have been vital in all forms of data processing for a long time. In recent years, the amount of processed data has been growing dramatically, even in small projects. Nevertheless, database management systems tend to be static in terms of size and performance, which makes scaling a difficult and expensive task. Because of performance and especially cost advantages, more and more installed systems have a shared-nothing cluster architecture. Due to the massive parallelism of the hardware, programming paradigms from high-performance computing are being adopted for data processing. Database research struggles to keep up with this trend. A key feature of traditional database systems is to provide transparent access to the stored data. This introduces data dependencies and increases system complexity and inter-process communication. Therefore, many developers trade this feature for better scalability. However, explicitly managing the data distribution and data flow requires a deep understanding of the distributed system and reduces the possibilities for automatic and autonomic optimization. In this thesis we present an approach to database system scaling and allocation that achieves good scalability while keeping the data distribution transparent. The first part of this thesis analyzes the challenges and opportunities for self-scaling database management systems in cluster environments. Scalability is a major concern of Internet-based applications. Access peaks that overload the application are a financial risk. Therefore, systems are usually configured to be able to process peaks at any given moment. As a result, server systems often have a very low average utilization. In distributed systems, efficiency can be increased by adapting the number of nodes to the current workload. We propose a processing model and an architecture that allow efficient self-scaling of cluster database systems. In the second part we consider different allocation approaches. To increase efficiency, we present a workload-aware, query-centric model. The approach is formalized, and optimal and heuristic algorithms are presented. The algorithms optimize the data distribution for local query execution and balance the workload according to the query history. We present different query classification schemes for different forms of partitioning. The approach is evaluated for OLTP- and OLAP-style workloads, and variants of it are shown to scale well for both fields of application. The third part of the thesis considers benchmarks for large, adaptive systems. First, we present a data generator for cloud-sized applications. Thanks to its architecture, the data generator can easily be extended and configured. A key feature is its high degree of parallelism, which enables linear speedup for arbitrary numbers of nodes. To simulate systems with user interaction, we analyzed a productive online e-learning management system. Based on our findings, we present a model for workload generation that considers the temporal dependency of user interaction.
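Linear speedup for arbitrary node counts typically hinges on coordination-free, deterministic generation. The Python sketch below illustrates that general principle under this assumption; it is not the generator built in the thesis, and all names (gen_partition, seed_for, the table name) are hypothetical. Each partition seeds its own RNG, so any number of workers can produce disjoint chunks independently and reproducibly:

    # Minimal sketch of deterministic, embarrassingly parallel data generation.
    import hashlib
    import random
    from multiprocessing import Pool

    NUM_ROWS = 1_000_000
    NUM_PARTITIONS = 8

    def seed_for(table, partition):
        # A stable per-partition seed: workers need no coordination.
        digest = hashlib.sha256(f"{table}:{partition}".encode()).digest()
        return int.from_bytes(digest[:8], "big")

    def gen_partition(partition):
        # Each worker generates exactly its own rows, independently of the
        # others, so adding nodes divides the work without communication.
        rng = random.Random(seed_for("orders", partition))
        lo = partition * NUM_ROWS // NUM_PARTITIONS
        hi = (partition + 1) * NUM_ROWS // NUM_PARTITIONS
        return [(row_id, rng.randint(1, 10_000)) for row_id in range(lo, hi)]

    if __name__ == "__main__":
        with Pool(NUM_PARTITIONS) as pool:
            chunks = pool.map(gen_partition, range(NUM_PARTITIONS))
        print(sum(len(c) for c in chunks))  # 1000000, identical on every run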
N2 - Database systems have long been the foundation of all kinds of information processing. In recent years, the amount of data has grown dramatically, even in small projects. Nevertheless, many database systems are static with respect to capacity and processing speed, which makes scaling laborious and expensive. Because of their good performance and, above all, for cost reasons, more and more systems have a shared-nothing architecture, i.e., they consist of independent, loosely coupled compute nodes. Since this design principle exhibits a very high degree of parallelism, programming paradigms from classical high-performance computing are increasingly being used for information processing. This trend confronts database research with major challenges. One of the fundamental properties of traditional database systems is transparent access to the stored data, which allows users to access the data independently of its internal organization. The resulting independence leads to dependencies in the data and increases the complexity of the systems and of the communication between individual processes. Many developers therefore sacrifice transparency for better scalability. As a consequence, data organization and data flow must be handled explicitly, which limits the possibilities for automatic and autonomic optimization of the system. The scaling and allocation approach presented in this thesis preserves transparent access and is distinguished by its full automatability and very good scalability. The first part of this dissertation addresses the challenges and opportunities for self-scaling database management systems operated on compute clusters. Good scalability is a necessary property of applications that are accessible over the Internet. Access peaks that overload the application pose a financial risk. Systems are therefore configured so that they can handle potential load peaks at any time, which usually results in a very low average utilization of the underlying systems. One way to counter this inefficiency is to adapt the number of compute nodes in use to the current load. This dissertation presents a model and an architecture for query processing that make it possible to scale database systems on cluster computers simply and efficiently. The second part of the thesis deals with different options for data allocation. To increase efficiency, a model is used that takes the load distribution in the query stream into account. The approach is formalized, and optimal and heuristic solutions are presented. The proposed algorithms optimize the data distribution for local execution of all queries and balance the load across the compute nodes. Different kinds of query classification are presented, which lead to different kinds of partitioning. The approach is evaluated for both online transaction processing and online analytical processing, and the evaluation shows that it scales very well for both fields. The last part of the thesis presents various techniques for benchmarking large, adaptive systems. First, a data generation approach is shown that makes it possible to produce very large data volumes fully in parallel. To simulate the user interaction of online systems, a productive e-learning system was analyzed. Based on this analysis, a model for generating workloads was created that takes the temporal dependencies of user interaction into account. KW - Verteiltes Datenbanksystem KW - Cluster KW - Allokation KW - Skalierung KW - Effizienzsteigerung KW - Clusterdatenbanksystem KW - Dynamic Allocation KW - Autonomic Scaling KW - Cluster Database System Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:739-opus-25821 ER -