Document Type
- Doctoral Thesis (445)
- Part of Periodical (139)
- Article (56)
- Book (49)
- Report (21)
- Preprint (16)
- Conference Proceeding (10)
- Master's Thesis (10)
- Other (10)
- Bachelor Thesis (7)
- Review (6)
- Part of a Book (4)
- Study Thesis (1)
Language
- German (479)
- English (286)
- Multiple languages (5)
- Spanish (3)
- French (1)
Has Fulltext
- yes (774)
Keywords
- Teacher education (30)
- Germany (22)
- Austria (18)
- Universität Passau (13)
- Cultural semiotics (12)
- Measure theory (12)
- Media semiotics (12)
- Graph drawing (9)
- Computer security (8)
- Didactics (7)
Institute
- Philosophische Fakultät (206)
- Fakultät für Informatik und Mathematik (112)
- Mitarbeiter Lehrstuhl/Einrichtung der Fakultät für Informatik und Mathematik (70)
- Wirtschaftswissenschaftliche Fakultät (65)
- Mitarbeiter Lehrstuhl/Einrichtung der Wirtschaftswissenschaftlichen Fakultät (40)
- Juristische Fakultät (39)
- Zentrum für Lehrerbildung und Fachdidaktik (32)
- Department für Katholische Theologie (28)
- Universitätsbibliothek (28)
- Philosophische Fakultät / Pädagogik (23)
Raum und Grenze
(2019)
Space and border, and their relation to individuals as well as collectives, constitute central figures of thought and operators of the generation of 'meaning'. The following remarks attempt to give, from a semiotic perspective, an overview of the different meanings, forms of articulation, and fields of application of these parameters. They follow the outline presented in Krah 1999 and supplement, modify, and readjust it from the perspective and state of knowledge of 2018.
Borders are thus one of the most prominent objects of study of cultural semiotics, if not the most prominent: that subdiscipline of the science of signs which describes cultural processes in their sign-based manifestations. Accordingly, Gräf/Schmöller describe the border itself as a "semiotic entity", since it is always anchored in cultural communicative contexts. On the basis of medially materialized texts, cultural semiotics examines how cultures construct borders and orders in their communicative acts and sign-based practices, and thereby reconstructs the historical and discursive contexts of thought and knowledge in which these are situated.
Schriften zur Kultur- und Mediensemiotik | Online is an open access journal of the Virtuelles Zentrum für kultursemiotische Forschung / Virtual Centre for Cultural Semiotics (www.kultursemiotik.com) and is edited by Martin Nies.
Contents of the fourth issue of Schriften zur Kultur- und Mediensemiotik | Online
Martin Nies
Einleitung : Raumsemiotik - Zur Kodierung von Räumen und Grenzen
Martin Nies
B/Orders - Schwellen - Horizonte : Modelle der Beschreibung von Räumen und Grenzen in ästhetischer Kommunikation - Eine raumsemiotische Positionsbestimmung
Hans Krah
Raum und Grenze : eine semiotische Bestandsaufnahme – Mit dem Beispiel des Bunkers im ästhetischen Diskurs globaler Katastrophenszenarien
Stephanie Großmann & Stefan Halft
Raum-Poetik : Dimensionen raumsemiotischer Textanalyse am Beispiel von E.T.A. Hoffmanns Die Bergwerke zu Falun (1819)
Matthias Bauer
Sozialer und semiotischer Raum : Querbezüge, Übergänge und Grenzverschiebungen nebst einigen Anmerkungen zur kulturwissenschaftlichen Relevanz des Ähnlichkeitsdenkens
Damaris Nübel
Zur Funktion von Nicht-Orten und Heterotopien in Identitätsnarrativen der Kinder- und Jugendliteratur
Lesley Penné
Gedächtnis-Semiosphären und Vergangenheitsbewältigung in der deutschsprachigen Literatur Belgiens
Claudia Gremler
Queer Space? Der nicht-binäre Raum in der deutschen Gegenwartsliteratur am Beispiel Antje Rávic Strubels
Maren Conrad & Franziska Trapp
Zirkus und Raum : eine Semiotik der Performanz
Lisa Gaupp
Symbolische Räume kultureller Diversität : Verhandlungen, Grenzen und Überschreitungen in den performativen Künsten
Lil Helle Thomas
Architektonische Grenzgänge zwischen Raum und Wahrnehmung : das Sanatorium Purkersdorf
Christian Zolles
Der Wald des Mittelalters : Konstituierung eines alteritären Kulturraums im 11. bis 13. Jahrhundert
Hedwig Wagner
Daten – Räume – Datenräume : zum Verhältnis von Orten, digitalisierten Ortsinformationen und deren Speicher- und Nutzungsorten
Theodoros Konstantakopoulos
Inklusionen, Exklusionen, Implikationen : die semantischen Grenzbereiche der Teilhabe
Age and radial growth rate are key data for understanding several aspects of tropical forest dynamics and ecology. In species that produce annual tree rings, tree-ring analysis allows the most precise estimate of these two parameters. The present study assessed the age and radial growth rate of three Hymenaea species inhabiting four of the six biomes found in Brazil. Of these four biomes, two harbor the largest rainforests in South America, the Amazon Forest in the west and the Atlantic Forest in the east. The Cerrado biome is an open and seasonally drier vegetation found between them, and the Pantanal is a wetland in the west. H. courbaril inhabits almost the entire Neotropical lowlands, while H. parvifolia and H. stigonocarpa are restricted to the Amazon and Cerrado biomes, respectively. To investigate these species' dynamics within different biomes, age and radial growth rate were calculated for 217 trees through tree-ring analyses. The oldest H. courbaril and H. parvifolia trees were 316 and 371 years old, respectively, while H. stigonocarpa trees were considerably younger, up to 144 years old. Hymenaea courbaril trees showed the widest variation in average growth rate, from 1.00 to 6.63 mm per year, while the other two species showed a narrower variation, from 0.89 to 2.81 mm per year. The studied populations presented distinct trends in lifetime growth pattern that seem to be related to the biome of provenance. Overall, trees from the Amazon Forest showed a trend of increasing growth rate up to about 100 years followed by a decrease, while trees growing in the Pantanal and Atlantic Forest showed only decreasing growth rates. In the Cerrado, trees showed a constant growth rate up to 50 years followed by a clear decline. It is important to highlight that different species of Hymenaea showed similar growth trends within the same biome. In larger trees, the average growth rate is lower in the Cerrado, which is characterized by deeper water tables and more dystrophic soils, while growth rates in the Amazon and Atlantic Forests are 60% and 79% higher, respectively. This study represents one of the most comprehensive datasets of tree age and growth rate for tropical congeneric species over such a large geographical range.
A tropical tree-ring study is presented using 36 specimens of Cariniana estrellensis from two sites in the Mata Atlantica biome within the State of São Paulo: Caetetus and Carlos Botelho. We aimed to assess the suitability of this species for chronology building, as well as for dendroclimatological studies, with the help of its lifetime growth trajectories. Cariniana estrellensis forms visible tree rings with a dense sequence of parenchyma bands at the end of the latewood, followed by a relatively distant sequence of parenchyma bands in the subsequent earlywood of a tree ring. However, it was impossible to establish a chronology solely by tree-ring width measurements and crossdating, for a number of reasons, including sequences of problematic wood anatomy, an abundance of wedging rings, and probably missing rings. Building a robust chronology for this species therefore requires a multi-parameter approach, for which, however, no experience is currently available. Therefore, to reveal possible climate-growth relationships for Cariniana estrellensis at both sites, we applied correlation analyses of microclimatic conditions with tree growth and investigated patterns of lifetime growth trajectories. Annual precipitation is over 1300 mm at both sites, with the dry season primarily between June and August. Both sites showed clear differences in their microclimatic regime and topography. Overall, light availability is the most likely crucial factor for the studied species. Significantly lower photosynthetically active radiation and daily photoperiod were found at Carlos Botelho, owing to the strong influence of orographic rainfall, foggy conditions, and shade caused by the adjacent mountain chain. Consequently, trees at this site generally showed a lower average annual growth rate compared to the Caetetus site, with differences between juvenile and mature growth phases remaining nearly constant throughout their lives. In contrast, trees from Caetetus had less consistent growth phases, with increased growth after the juvenile growth phase. Thus, it can be concluded that dendroclimatological studies using growth characteristics have the potential to clarify the generally complex stand dynamics of Cariniana estrellensis. However, the development of tree-ring chronologies based on tree-ring width analyses of cores or discs is nearly impossible.
Deforestation in tropical regions is raising fragmentation to alarming levels. Not only does it lead to losses of forest area, but the abiotic and biotic changes at forest edges also alter the development of the remaining trees. We aimed to assess the impacts of forest fragmentation on the growth of tropical emergent trees. We sampled the endangered species Aspidosperma polyneuron (Apocynaceae) at the forest edge and interior in the highly fragmented Brazilian Atlantic Forest. We obtained increment cores from each tree along with data on tree and surrounding canopy heights, plus their current levels of liana infestation. We used tree-ring analyses to estimate the age and growth rate of the trees. Sampled trees and the surrounding canopy were taller in the forest interior than at the edge, even though both sampled populations have similar ages. Overall, trees in the forest interior show a lifetime growth pattern common to shade-tolerant species, with a peak in growth rate at 120 years. Indeed, all sampled trees exhibited this pattern before fragmentation. However, trees at the forest edge presented constantly slow growth rates for all diameter classes after the fragmentation event. The strong presence of lianas at the forest edge prevents trees from experiencing the expected growth releases throughout their lifetime, probably by keeping the leaves of A. polyneuron under shaded conditions. Therefore, the management of lianas at the forest edge is likely the most effective procedure to ensure the growth of emergent trees, guaranteeing their role in forest structure, carbon storage, and ecosystem functioning.
In three essays, this dissertation examines the past, present and future of branding in an international context, contributing to the research area of global/local brands, while also offering managers valuable insights for their branding strategies.
The first essay provides scholars and practitioners with a detailed state of the art of global/local brand research and proposes promising angles for future research, especially considering major challenges for our societies.
The second essay incorporates the segment of cosmopolitan consumers into perceived brand globalness/localness research. Theoretically grounded in the concepts of social identity theory and complexity, the essay builds on perceived brand globalness/localness to analyze how cosmopolitans arrange both their global and local orientations. Aside from offering scholars a new theoretical lens on consumer cosmopolitanism, the insights gained can benefit managers if cosmopolitans are a particular target group in their business strategy.
The third and final essay meta-analytically investigates how the variables perceived brand globalness and localness affect various key outcome variables. At the heart of this essay is a comparison of perceived brand globalness and localness, offering scholars and practitioners valuable empirical insights into the similarities and differences between their effects on outcomes such as brand quality.
The identification and estimation of trends in hydroclimatic time series remains an important task in applied climate research. The statistical challenge arises from the inherent nonlinearity, complex dependence structure, heterogeneity, and resulting non-standard distributions of the underlying time series. Quantile regressions are considered an important modeling technique for such analyses because of their rich interpretation and their broad insensitivity to extreme observations. This paper provides an asymptotic justification of quantile trend regression under unknown heterogeneity and dependence structures, together with the corresponding interpretation. An empirical application sheds light on the relevance of quantile regression modeling for analyzing monthly Central England temperature anomalies and illustrates their heterogeneous trends. Our results suggest the presence of heterogeneities across the considered seasonal cycle and an increase in the relative frequency of observing unusually high temperatures.
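As a rough illustration of the idea, the following sketch fits separate linear time trends at several quantile levels of a synthetic anomaly series; the variable names, the synthetic data, and the use of statsmodels' quantreg are illustrative assumptions, not the estimation setup of the paper.

```python
# Minimal sketch: linear trend estimated at several quantile levels of a
# synthetic monthly anomaly series (columns "t" and "anom" are placeholders).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"t": np.arange(600)})
df["anom"] = 0.002 * df["t"] + rng.normal(scale=1.0, size=len(df))  # toy data

for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("anom ~ t", df).fit(q=q)   # trend at quantile level q
    print(q, round(fit.params["t"], 4))           # estimated slope per month
```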
To this day, German commercial jurisdiction is characterized by the participation of expert lay judges. The study first traces in detail the historical roots and development of commercial judges in Italy, France, and the German legal territories, paying particular attention to the genesis of the so-called German system. The author then analyzes the essential features of the modern commercial chambers (Kammern für Handelssachen, KfH). In a third step, the historical findings are applied as possible solutions for current reform efforts concerning the future of the KfH.
Despite digital mediation and physical distance, perceptions of closeness or shared atmospheres arise during video conferences. Such interactions can be theoretically derived and empirically examined with an extended neo-phenomenological sociology following Hermann Schmitz. A grounded theory approach with interviews as the central data collection method allows an analysis of the possibilities and limits of this being-together at physically separate locations. The video conference shows that in digital communication it is not technology alone that enables interaction; rather, only the passionate constructive work of the participants can create a "Leiberspace" in which people can come closer to one another in digitally mediated ways.
Prior to the emergence of Big Data and technologies such as Learning Analytics (LA), classroom research focused mainly on measuring the learning outcomes of small samples through tests. Research on online environments shows that learners' engagement is a critical precondition for successful learning, and that lack of engagement is associated with failure and dropout. LA helps instructors track, measure, and visualize students' online behavior and use such digital traces to improve instruction and provide individualized support, i.e., feedback. This paper examines 1) metrics or indicators of learners' engagement as extracted and displayed by LA, 2) their relationship with academic achievement and performance, and 3) some freely available LA tools for instructors and their usability. The paper concludes by making recommendations for practice and further research, considering the challenges associated with using LA in classrooms.
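To make the notion of engagement indicators concrete, the hedged sketch below derives a few simple indicators (event counts, active days, forum posts) from a toy clickstream log with pandas; the column names and event labels are invented for illustration and do not correspond to any particular LA tool discussed in the paper.

```python
# Toy engagement indicators computed from an invented clickstream log.
import pandas as pd

log = pd.DataFrame({
    "student": ["a", "a", "b", "b", "b"],
    "timestamp": pd.to_datetime(
        ["2024-01-01 10:00", "2024-01-01 10:30",
         "2024-01-02 09:00", "2024-01-02 09:05", "2024-01-03 11:00"]),
    "event": ["login", "quiz_submit", "login", "video_play", "forum_post"],
})

indicators = log.groupby("student").agg(
    n_events=("event", "size"),                                     # activity volume
    active_days=("timestamp", lambda s: s.dt.normalize().nunique()),  # regularity
    forum_posts=("event", lambda s: (s == "forum_post").sum()),       # participation
)
print(indicators)
```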
Germany is considered a role model for dealing with past mass atrocities. In particular, the social reappraisal of the Holocaust is emblematic of this. However, when considering the genocide of the Herero and Nama in present-day Namibia, it is puzzling that official recognition was pronounced only after almost 120 years, in May 2021. For a long time, silence surrounded this colonial cruelty in German political discourse. Although the discourse on German responsibility toward Namibia emerged after the end of World War II, it initially appeared detached from the genocide. That silence on colonial atrocities is to be considered a cruelty itself. Studies on silence have been expanding and becoming richer. Building on these works, the paper sets two goals: First, it advances the theorization of silence by producing a new typology, which is then integrated into discourse-bound identity theory. Second, it applies this theory to the analysis of the silencing and later acknowledgment of the genocide of the Herero and Nama by German political elites. To this end, Bundestag debates, official documents, and statements by relevant political actors from 1980 to 2021 are analyzed. The results reveal the dynamics between hegemonic and counter-hegemonic discursive formations, how these shift over a period of 40 years, and what role silence plays in this. Beyond our emphasis on the genocide of the Herero and Nama, our findings might benefit future studies, as the approach proposed in this paper can make silence a tangible research object for global studies.
Binging Family
(2021)
In this open access book, Jakob Kelsch shows how the myth of the patriarchally structured nuclear family as the ideal of family life took shape in US television series of the 1950s and 1960s. Despite phases of deconstruction and the increasing representation of problematic as well as ethnically and socially diverse family constellations, this myth has proved extremely persistent to this day. The rise of streaming services driven by digitalization and the triumph of their serial productions brought with them a diversification of the content of the family-series genre and an increasing narrative complexity. Yet even this can only scratch the surface of the myth of the heteronormative nuclear family, which is deeply anchored in cultural knowledge.
Hispanos en el mundo
(2021)
This volume examines the hitherto little-studied interweaving of emotion and the displacement of Hispanic people around the world, as a function of socio-political contexts and the life stages of those concerned. It brings together contributions in Spanish and English from a wide spectrum of disciplines (from literary, cultural, and gender studies to anthropology and sociology) that analyze this interweaving, its functions, and its modalities on the basis of a broad understanding of William Reddy's concept of the emotive. The book thus covers a wide range of media, such as literature, film, websites, vlogs, and interviews, in which Hispanic people express their emotions regarding their journeys or migratory experiences, but also regarding the long-term effects of historical displacements on subjects who feel displaced or out of place in the present. In this way, Hispanos en el mundo addresses a highly topical and relevant subject and fills a research gap, not only in migration studies, where existing work tends to focus on the family and care work, but also in Hispanic studies, where it stands as a pioneering study.
The described secondary data provide a comprehensive basis for modeling conditional mean nitrogen dioxide (NO2) concentration levels across Germany. Besides concentration levels, metadata on monitoring sites from the German air quality monitoring network, geocoordinates, altitudes, and data on land use and road lengths for different types of roads are provided. The data are based on a grid of resolution 1 × 1 km, which is also included. The underlying raw data are open access and were retrieved from different sources. The statistical software R was used for (pre-)processing the data, and all code is provided in an online repository. The data were employed for modeling mean annual NO2 concentration levels in the paper "Agglomeration and infrastructure effects in land use regression models for air pollution - Specification, estimation, and interpretations" by Fritsch and Behm (2021).
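A minimal sketch of the kind of land use regression such data support is given below; the predictor names and synthetic values are placeholders, and since the original processing was done in R, this Python/statsmodels example only illustrates the modeling idea, not the published specification.

```python
# Sketch: ordinary least squares land use regression for mean annual NO2
# on synthetic grid-cell predictors (all names and values are invented).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # number of 1 x 1 km grid cells
grid = pd.DataFrame({
    "road_km": rng.uniform(0, 5, n),         # road length within the cell
    "industry_share": rng.uniform(0, 1, n),  # industrial land use share
    "altitude": rng.uniform(0, 800, n),      # metres above sea level
})
grid["no2"] = (12 + 3.0 * grid["road_km"] + 8.0 * grid["industry_share"]
               - 0.005 * grid["altitude"] + rng.normal(0, 2, n))

model = smf.ols("no2 ~ road_km + industry_share + altitude", data=grid).fit()
print(model.params)  # estimated effects of the land use predictors
```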
The constitutional amendment excluding parties hostile to the constitution from state party financing created a new instrument of militant democracy. Within narrow limits, this principle allows the state to act against parties that are declared opponents of the free democratic basic order. The unequal treatment of the affected party compared with other parties that is inherent in such an exclusion, however, causes a considerable distortion of political competition, which raises questions about its compatibility with the Basic Law, in particular Art. 79 Abs. 3 GG. These questions, and the consequences that exclusion from state party financing has for the affected party, are the subject of this study.
The study first addresses the constitutional permissibility of the binding effect under § 613 Abs. 1 S. 1 ZPO, in particular with regard to the right of registered consumers to be heard. Against this background, it then examines the applicability of the procedural instruments of amendment of claims and counterclaims in model declaratory proceedings, which in the author's view must be applied very restrictively in proceedings brought by a qualified entity. Finally, it discusses the practically highly significant question of the qualified entity's liability for inadequate conduct of the model declaratory proceedings, an issue the legislature has treated only cursorily. The thesis was awarded the doctoral prize of the Rechtsanwaltskammer München and the doctoral prize of the Freunde und Förderer der Rechtswissenschaft at the University of Passau.
The autonomic composition of Virtual Networks (VNs) and Service Function Chains (SFCs) based on application requirements is significant for complex environments. In this paper, we use graph transformation in order to compose an Extended Virtual Network (EVN) that is based on different requirements, such as locations, low latency, redundancy, and security functions. The EVN can represent physical environment devices as well as virtual application and network functions. We build a generic Virtual Network Embedding (VNE) framework for transforming an Application Request (AR) into an EVN. Subsequently, we define a set of transformations that reflect preliminary topological, performance, reliability, and security policies. These transformations update the entities and demands of the VN and add SFCs that include the required Virtual Network Functions (VNFs). Additionally, we propose a greedy proactive heuristic for path-independent embedding of the composed SFCs. This heuristic is appropriate for real complex environments, such as industrial networks. Furthermore, we present an Industrial Internet of Things (IIoT) use case inspired by Industry 4.0 concepts, in which EVNs for remote asset management are deployed over three levels: manufacturing halls, edge computing, and cloud computing. We also implement the developed methods in Alevin and show exemplary mapping results from our use case. Finally, we evaluate the chain embedding heuristic using a random topology that is typical for such a use case, and show that it can improve the admission ratio and resource utilization with minimal overhead.
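The following toy sketch illustrates what a greedy, path-independent chain embedding can look like: each VNF of an SFC is placed on a feasible substrate node closest (in hops) to the host of the previous VNF. Function names, node attributes, and the CPU-only resource model are assumptions for illustration and do not reproduce the heuristic evaluated in Alevin.

```python
# Greedy, path-independent SFC placement sketch (illustrative only).
import networkx as nx

def embed_chain(sn, chain, cpu_demand):
    """sn: substrate graph with node attribute 'cpu'; chain: ordered VNF ids."""
    mapping, last = {}, None
    for vnf in chain:
        candidates = [n for n, d in sn.nodes(data=True) if d["cpu"] >= cpu_demand]
        if not candidates:
            return None  # request rejected: no node has enough residual CPU
        if last is None:
            host = max(candidates, key=lambda n: sn.nodes[n]["cpu"])
        else:
            host = min(candidates,
                       key=lambda n: nx.shortest_path_length(sn, last, n))
        sn.nodes[host]["cpu"] -= cpu_demand  # reserve resources on the host
        mapping[vnf], last = host, host
    return mapping

sn = nx.cycle_graph(6)                                  # tiny substrate topology
nx.set_node_attributes(sn, {n: 4 for n in sn}, "cpu")
print(embed_chain(sn, ["firewall", "ids", "nat"], cpu_demand=2))
```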
This work is devoted to work in Industry 4.0, a development in which all real processes in the factory are to be captured digitally and thus automated even further. For employees this brings opportunities, but it also threatens deskilling and comprehensive surveillance.
In labor law, the study therefore examines the legally binding force of the requirement of humane work design and human-machine collaboration. In data protection law, the central question is to what extent the efficient organization of work justifies intensive data processing. For this purpose, a new approach is presented that delimits the two fields of law from each other and is derived from the interplay of German and European fundamental rights.
Earlier research showed that religion is related to participation among adolescents. It emphasized the effects of belonging (affiliation to groups and traditions) on community service among Western populations. This article takes one step further and focuses on religiosity as a potential motivation for community problem solving during adolescence and young adulthood, in the Eastern European Orthodox cultural setting. The data come from several semi-structured interviews with participants in a civic project conducted in the city of Timisoara (Romania). Findings indicated a low impact of the social religious component on engagement. The cognitive dimension of belief and the emotional bonding (prayer, ritual connection to the higher reality) function as indirect motivators, through the moral element of behavior. Results also showed a privatization of spiritual life among young adults (the invisible religion): estrangement from doctrines and the development of an individualistic type of morality, meant to drive volunteer activities further.
Southeast Asia is one of the most dynamic regions in the world. This volume offers a timely approach to Southeast Asian Studies, covering recent transitions in the realms of urbanism, rural development, politics, and media. While most of the contributions deal with the era of post-independence, some tackle the colonial period and the resulting developments. The volume also includes insights from Southern India.
As a tribute to the interdisciplinary project of Southeast Asian Studies, this book brings together authors from disciplines as diverse as area studies, sociology, history, geography, and journalism.
The power demand (kW) and energy consumption (kWh) of data centers have increased drastically due to the growing communication and computation needs of IT services. Leveraging demand and energy management within data centers is a necessity. Thanks to automated ICT infrastructure empowered by IoT technology, such types of management are becoming more feasible than ever. In this paper, we look at management from two different perspectives: (1) minimization of the overall energy consumption and (2) reduction of peak power demand during demand-response periods. Both perspectives have a positive impact on the total cost of ownership for data centers. We exhaustively reviewed the potential mechanisms in data centers that provide flexibilities, together with flexible contracts such as green service level and supply-demand agreements. We extended the state of the art by introducing the methodological building blocks and foundations of management systems for the two perspectives mentioned above. We validated our results by conducting experiments on a lab-grade scale cloud computing data center at the premises of HPE in Milano. The obtained results support the theoretical model by highlighting the excellent potential of flexible service level agreements in Green IT: 33% overall energy savings and a 50% reduction of power demand during demand-response periods in the case of data center federation.
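As a purely illustrative numerical sketch of the demand-response perspective, the snippet below defers flexible batch load out of a demand-response window to cap peak power; the figures and the greedy shifting rule are invented and are not the management mechanisms studied in the paper.

```python
# Toy peak shaving during a demand-response window (all numbers invented).
baseline_kw = [40, 42, 60, 80, 78, 45]   # hourly power demand of a small DC
deferrable_kw = [0, 0, 20, 30, 28, 0]    # portion contributed by batch jobs
dr_window = {2, 3, 4}                    # hours covered by a demand-response event
cap_kw = 50                              # contracted power cap during the event

shaped, shifted = [], 0.0
for hour, power in enumerate(baseline_kw):
    if hour in dr_window and power > cap_kw:
        cut = min(power - cap_kw, deferrable_kw[hour])  # defer only flexible load
        shifted += cut
        power -= cut
    shaped.append(power)

print(shaped, f"deferred {shifted} kWh to off-peak hours")
```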
Inclusion is a highly topical issue in many areas of contemporary life. In a wide range of sectors, a considerable increase in social sensitivity to the circumstances and conditions under which it takes place can be observed. This book has two central interests: to analyze the relationship between people with and without functional diversity in cultural texts through the three key terms inclusion, integration, and differentiation; and to study in specific texts how inclusion, integration, and differentiation are treated or even practiced. The field of study is oriented toward a corpus of literary, theatrical, and audiovisual texts.
¿Discapacidad?
(2021)
Functional diversity, understood as a bodily, cognitive, or psychological condition, often seriously disrupts people's participation in collective life. At times, functional diversity can also be accompanied by a strong social stigma. This volume offers one of the first approaches to the representation of functional diversity in the Hispanic sphere. The works included in this book set out to show possible theoretical and practical approaches to the representation of functional diversity, through a varied corpus of literary, theatrical, cinematographic, and audiovisual works drawn from contemporary Hispanic creation.
Natural Language Processing (NLP) has an important role in Artificial Intelligence for easing human-machine interaction. Processing human language, though, poses many challenges, among which is the semantics-related phenomenon known as language variability: the fact that the same thing can be said in several ways. The inputs and outputs of NLP applications can be expressed in different forms, whose equivalence can be verified through inference. The textual entailment paradigm was established to enable the creation of a unifying framework for applied inference, providing a means of relieving other NLP tasks from handling inference issues in an ad hoc manner and of using instead the outputs of an inference-dedicated mechanism.
Text entailment, the task of determining whether a piece of text logically follows from another piece of text, involves different scenarios, which can range from a simple syntactic variation to more complex semantic relationships between sentences. However, most approaches try a one-size-fits-all solution that usually favors some scenario to the detriment of another. The commonsense world knowledge necessary to support more complex inferences is also usually employed in a limited way, with most approaches sticking to shallow semantic information, leaving more elaborate semantic relationships aside. Furthermore, most systems still work as a "black box", providing a yes/no answer that does not explain the underlying reasoning process.
This thesis aims at addressing these issues by proposing a composite, interpretable approach for recognizing text entailment in which the entailment pair is analyzed so that the most relevant phenomenon is detected and a suitable method can be used to solve it. Syntactic variations are dealt with through the analysis of the sentences' syntactic structures, and semantic relationships are detected with the aid of a knowledge graph built from natural language dictionary definitions. Also, if semantic matching is involved, the answer is made interpretable through the generation of natural language justifications that explain the semantic relationship between the pieces of text. The result is XTE - Explainable Text Entailment - a system that outperforms well-established tools based on single-technique entailment algorithms and that also takes an important step towards Explainable AI, allowing the inference model to be interpreted and making the semantic reasoning process explicit and understandable.
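The composite idea (detect the dominant phenomenon, route the pair to a suitable method, and return a human-readable justification) can be caricatured in a few lines. The overlap threshold, the toy knowledge base, and the routing rule below are assumptions for illustration only and are far simpler than the actual XTE components.

```python
# Caricature of a composite, interpretable entailment decision (illustrative).
def lexical_overlap(text, hypothesis):
    t_set, h_set = set(text.lower().split()), set(hypothesis.lower().split())
    return len(t_set & h_set) / len(h_set)

def entails(text, hypothesis, kb):
    if lexical_overlap(text, hypothesis) > 0.8:
        # mostly a syntactic variation: a structural match would suffice
        return True, "syntactic match"
    # otherwise look for a connecting semantic relation in the knowledge graph
    for word in text.lower().split():
        hypernym = kb.get((word, "is-a"))
        if hypernym and hypernym in hypothesis.lower():
            return True, f"semantic justification: {word} is-a {hypernym}"
    return False, "no supporting evidence found"

kb = {("dog", "is-a"): "animal"}  # toy dictionary-derived knowledge graph
print(entails("a man walks his dog", "a man walks his animal", kb))
```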
Programming is a key skill in a world where businesses are driven by digital transformation. Although much of the demand for programming can be addressed by simple sets of instructions composing libraries and services available on the web, non-technical professionals, such as domain experts and analysts, are still unable to construct their own programs due to the intrinsic complexity of coding. Among other types of end-user development, natural language programming has emerged to allow users to program without the formalism of traditional programming languages, where a tailored semantic parser translates a natural language utterance into a formal command representation that can be processed by a computational machine. Currently, semantic parsers are typically built on top of a learning method whose behaviour is defined by the patterns in a large body of training data, the production of which is frequently costly and time-consuming. Our research is devoted to studying and proposing a semantic parser for natural language commands targeting a scenario with low availability of training data. The proposed semantic parser follows a multi-component architecture, composed of a specialised shallow parser that associates natural language commands with predicate-argument structures, integrated with a distributional ranking model that matches the command to a function signature available from an API knowledge base. Systems developed with statistical learning models and complex linguistic resources, such as the proposed semantic parser, do not natively provide an easy way to associate a single feature of the input data with its impact on system behaviour. In this scenario, end-user explanations for intelligent systems have become a strong requirement to increase user confidence and system literacy. Thus, our research designed an explanation model for the proposed semantic parser that fits the heterogeneity of its multi-component architecture. The explanation model explores a hierarchical representation with an increasing degree of technical depth, providing higher-level explanations in the initial layers and moving gradually to those that demand technical knowledge, applying different explanation strategies to best express the approach behind each component. With the support of a user-centred experiment, we compared the utility of different types of explanations and the impact of background knowledge on user preferences.
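A hedged sketch of the two-stage pipeline (shallow predicate-argument parsing followed by ranking against an API knowledge base) is shown below; the API entries, keyword sets, and scoring rule are invented stand-ins for the statistical components described in the thesis.

```python
# Toy two-stage command interpretation: shallow parse, then rank API signatures.
API = {
    "send_email(recipient, subject)": {"send", "email", "mail", "message"},
    "create_folder(name)": {"create", "folder", "directory"},
}

def shallow_parse(command):
    # Crude predicate-argument structure: first token as predicate, rest as args.
    tokens = command.lower().split()
    return {"predicate": tokens[0], "arguments": tokens[1:]}

def rank(structure):
    words = {structure["predicate"], *structure["arguments"]}
    # Score each signature by keyword overlap (stand-in for distributional ranking).
    return max(API.items(), key=lambda item: len(words & item[1]))[0]

parsed = shallow_parse("send an email to alice about the meeting")
print(parsed["predicate"], "->", rank(parsed))  # send -> send_email(recipient, subject)
```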
In this thesis we consider real analytic functions, i.e. functions which can be described locally as convergent power series, and ask the following: Which real analytic functions definable in R_{an,exp} have a holomorphic extension which is again definable in R_{an,exp}? Finding a holomorphic extension is of course not difficult, simply by power series expansion. The difficulty is to construct it in a definable way.
We will not answer the question above completely, but we introduce a large, non-trivial class of functions definable in R_{an,exp} which contains, for example, all functions that are iterated compositions, from either side, of globally subanalytic functions and the global logarithm. We call these functions restricted log-exp-analytic. After giving some preliminary results, such as preparation theorems and Tamm's theorem for this class of functions, we are able to show that real analytic restricted log-exp-analytic functions have a holomorphic extension which is again restricted log-exp-analytic.
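For orientation, a function of the kind this class is meant to contain might look as follows; this is an illustrative example, not taken from the thesis. It is a finite composition, from either side, of globally subanalytic (here even semialgebraic) functions and the global logarithm, and it is defined and real analytic on all of R.

```latex
% Illustrative example of an iterated composition of globally subanalytic
% functions and the global logarithm (not an example from the thesis itself):
f(x) \;=\; \log\!\bigl(1 + \sqrt{1 + x^{2}}\bigr)
      \;-\; \frac{x}{1 + \log\!\bigl(2 + x^{2}\bigr)},
\qquad x \in \mathbb{R}.
```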
Statutory form requirements constitute a practically relevant area of law that has received little discussion in the past. The introduction of text form and electronic form in 2001 and the adoption of the DiRUG make clear that digitalization does not stop at the form requirements of the German legal system.
This dissertation traces the development of form requirements in the course of digitalization. Paper-based and digital forms are examined according to the same scheme and under the same substantive criteria, and are presented chronologically. In this way, their commonalities and differences are worked out systematically. Particular focus lies on the effects of form requirements and on the equivalence of effect between digital forms and their respective paper-based counterparts. In addition, the digital notarial forms under the DiRUG are placed within the long-established system of statutory forms.
The analysis shows that the electronic form is by now sufficiently equivalent in effect to the written form, including with respect to its warning function, which the German legislature has so far assessed differently. Digital public certification and digital notarial recording under the DiRUG are likewise sufficiently equivalent to their respective paper-based counterparts. For this reason, this dissertation argues that the scope of application of digital forms should be extended in the future in line with their equivalence of effect.
Overall, the digitalization of statutory forms is to be welcomed and is a step in the right direction.
For various reasons, noise in schools is considered a massive stress factor for teachers and can therefore lead not only to performance deficits on the job but also to physical and psychological impairments. Consequently, research on school noise and its effects is essential. The three studies presented in this paper examined the immediate effects of noise on student teachers and the collateral effects on practicing teachers.
In the first and second study, two experiments were conducted to examine the effects of noise during breaks on stress experience, performance in a concentration test, and error correction of a dictation. Based on the transactional stress model (Lazarus & Folkman, 1984), it was hypothesized that noise leads to an increase in stress experience. According to the maximal adaptability theory (Hancock & Warm, 1989, 2003), noise should initially allow optimal performance but in the long run cause performance impairment. For this purpose, 74 student teachers in the first study and 104 in the second study, all from the University of Passau, worked on two different concentration tests and corrected a student's dictation while listening to short, continuous, or no noise. In both experiments, continuous noise led to an increase in the experience of stress. Neither short nor continuous noise led to a deterioration in concentration performance. Further, different findings emerged: In the first experiment, a short concentration test in combination with continuous noise led to positive effects in dictation correction, i.e., subjects showed better performance in error correction. In the second experiment, a long concentration test combined with short or continuous noise resulted in negative effects, i.e., subjects made more errors in the subsequent correction of the dictation. It can be concluded that school noise can, on the one hand, increase the experience of stress and, on the other hand, promote or limit teachers' subsequent performance. The latter, however, seems to depend on the specific situation of the individual.
In the first part of the third study, the focus was on teachers' coping styles and experienced stress. Since coping styles have been shown to have a major impact on mental health, it was reasonable to assume that the stress experience caused by noise varies depending on the coping style. Therefore, an online study was conducted to investigate whether there were differences in psychological and physical symptoms between teachers with different coping styles. For this purpose, 99 Bavarian elementary and middle school teachers were surveyed. Four professional coping styles resulted from the overarching scales of professional commitment and resilience. The healthy type (high commitment, high resilience), the unambitious type (low commitment, high resilience), type A (high commitment, low resilience), and the burnout type (low commitment, low resilience) differed in terms of threat appraisal, noise stress, voice and hearing problems, and noise-related burnout. Compared to the healthy type, the risk types (type A and the burnout type) exhibited higher stress experience and were generally more susceptible to school noise. This is the first study to show that school noise is particularly hazardous for teachers with risky coping styles.
The second part of the third study focused on the impact pathways of school noise. Associations between teachers’ individual characteristics and the consequences of school noise were hypothesized. Based on the simplified model of teacher stress (van Dick & Wagner, 2001), we examined 159 Bavarian elementary and middle school teachers to determine whether noise stress and vocal fatigue mediate the association between noise sensitivity and noise-related burnout. Results indicated that noise stress mediated the relationship between noise sensitivity and vocal fatigue; vocal fatigue mediated the relationship between noise stress and noise-related burnout; noise stress and vocal fatigue serially mediated the relationship between noise sensitivity and noise-related burnout. This is the first study to show links between noise-sensitive teachers and noise stress, voice problems, and noise-related burnout.
The digitalization of money through the introduction of electronic payment transactions in this century forms the basis of today's non-physical monetary transactions. The emergence of new, purely digital means of payment such as cryptocurrencies continues this trend toward the dematerialization of monetary transactions.
Enforcement law is therefore confronted with the question of the extent to which execution against such assets is possible in order to satisfy creditors. This dissertation pursues this question on the basis of German enforcement law, using the cryptocurrency Bitcoin as an example.
Network virtualization provides high flexibility for deploying communication services in dense and heterogeneous environments. Two main approaches (dimensions) exist and are usually combined: Network Function Virtualization (NFV) technologies for functionality virtualization and Virtual Network Embedding (VNE) algorithms for resource virtualization. These approaches can be applied to different network levels, such as the factory and enterprise levels of industrial networks. Several objectives and constraints, which might be conflicting, must be considered when network virtualization is applied, especially in complex topologies. This thesis proposes a network virtualization model that considers both virtualization dimensions, two network levels, and different objectives and constraints. The network levels considered are the two primary levels of industrial networks; however, this does not restrict the model to a particular environment or to certain levels. The considered objectives/constraints are topology, reliability, security, performance, and resource usage.
Based on this model, we first build an overall combined solution for autonomic and composite virtual networking. This solution considers both virtualization dimensions, two network levels, and the target objectives. Furthermore, it combines three novel virtualization sub-approaches that address performance, reliability, and security. The sub-approaches, however, apply to different combinations of levels and dimensions, and the reliability approach additionally considers the resource usage objective. After presenting all solutions, we map them to the defined model.
Regarding applicability to industrial networks, the combined approach is applied to an enterprise-level Industrial Internet of Things (IIoT) use case inspired by the smart factory concept of Industry 4.0, whereas the sub-approaches are applied to more specific use cases. The performance and reliability solutions are integrated with relevant components of the Time-Sensitive Networking (TSN) standards as a modern technology for industrial networks. The goal is to enrich the reliability and performance capabilities of TSN with the flexibility of network virtualization.
In the combined approach, we compose and embed an environment-aware Extended Virtual Network (EVN) that represents the physical devices, virtual application functions, and required Service Function Chains (SFCs). We use the graph transformation method to transform abstract application requirements (represented by an Application Request (AR)) into an EVN. Both EVN composition and embedding methods consider the Substrate Network (SN) topology and different security, reliability, performance, and resource usage policies. These policies are applied with a certain priority and depend on the properties of communicating entities such as location and type. The EVN is embedded using property-based node mapping, reliability-aware branching, and a greedy chain embedding heuristic. The chain embedding heuristic is evaluated using a random topology that represents the use case.
The performance sub-approach is NFV-based and is applied to a specific use case with Time-critical Traffic (TCT) flows. We develop and evaluate a complete framework for virtualizing Time-aware Shaper (TAS) using high-performance NFV. The reliability sub-approach is VNE-based and is applied to a specific factory level use case. We develop minimal and maximal branching heuristics based on a reliability-aware k-shortest path algorithm and compare them using a typical factory topology. We then integrate these algorithms with a Frame Replication and Elimination for Reliability (FRER) simulator to realize reliability policies by the autonomic and efficient configuration of a supporting technology.
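As a small illustration of the reliability-aware path selection underlying such branching heuristics, the sketch below enumerates candidate paths with networkx and keeps the k best by end-to-end reliability (the product of link reliabilities); the node names, the "rel" attribute, and the fixed candidate pool size are illustrative assumptions, not the algorithms developed in the thesis.

```python
# Reliability-aware k-best path selection sketch (illustrative only).
from itertools import islice
import networkx as nx

def k_most_reliable_paths(g, src, dst, k=2, pool=10):
    def reliability(path):
        r = 1.0
        for u, v in zip(path, path[1:]):
            r *= g[u][v]["rel"]   # end-to-end reliability as product of links
        return r
    candidates = islice(nx.shortest_simple_paths(g, src, dst), pool)
    return sorted(candidates, key=reliability, reverse=True)[:k]

g = nx.Graph()
g.add_edge("plc", "sw1", rel=0.99)
g.add_edge("sw1", "ctrl", rel=0.97)
g.add_edge("plc", "sw2", rel=0.95)
g.add_edge("sw2", "ctrl", rel=0.999)
print(k_most_reliable_paths(g, "plc", "ctrl", k=2))  # disjoint branches for FRER
```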
The security sub-approaches relate to both virtualization dimensions and are applied to generic enterprise-level use cases; the applicability of the security aspect to industrial networks is shown only in the combined (EVN) approach and its use case. We research autonomic security management in the Network Function Virtualization Infrastructure (NFVI) with the main goal of reacting early to threats through SFC reconfiguration by means of Virtual Network Function (VNF) live migration. This goal is approached by supporting the security measurements with a decision-making architecture that considers, on the one hand, the threats and events in the environment and, on the other hand, the Service Level Agreement (SLA) between the NFVI provider and user. For this purpose, we classify VNF-specific attacks and define possible early-detectable behavior patterns. Finally, we develop a security-aware VNE heuristic that considers the security requirements of the Virtual Network (VN) and the security capabilities of the SN. This approach is modified in the combined approach to consider the deployment of virtualized security VNFs.
Due to the advances of digitalization, firms are able to collect more and more personal consumer data and strive to do so. Moreover, many firms nowadays have data sharing cooperations with other firms, so consumer data is shared with third parties. Accordingly, consumers are regularly confronted with the decision whether to disclose personal data to such a data sharing cooperation (DSC). Although privacy research has become highly important, the peculiarities of disclosure settings involving a DSC between firms have been neglected until now. Addressing this gap is the first research objective of this thesis. Another underexplored aspect of privacy research is the impact of low-cognitive-effort decision-making, because the privacy calculus, the dominant theory in privacy research, assumes a purely cognitively effortful and deliberative disclosure decision-making process on the part of consumers. Expanding this perspective and examining the impact of low-cognitive-effort decision-making is therefore the second research objective of this thesis. Additionally, with the third research objective, this thesis strives to unify and deepen the understanding of perceived privacy risks and privacy concerns, the two major antecedents that reduce consumers' disclosure willingness.
To this end, five studies are conducted: i) essay 1 examines and compares consumers’ privacy risk perception in a DSC disclosure setting with disclosure settings that include no DSC, ii) essay 2 examines whether in a DSC disclosure setting consumers rely more strongly on low-cognitive-effort processing for their disclosure decision, iii) essay 3 explores different consumer groups that vary in their perception of how a DSC affects their privacy risks, iv) essay 4 refines the understanding of privacy concerns and privacy risks and examines via meta-analysis the varying effect sizes of privacy concerns and privacy risks on privacy behavior depending on the applied measurement approach, v) essay 5 examines via autobiographical recall the effects of consumers’ feelings and arousal on disclosure willingness.
Overall, this thesis sheds light on consumers' personal data disclosure decision-making: essay 1 shows that the perceived risk associated with disclosure in a DSC setting is not necessarily higher than with disclosure to an identical firm without a DSC. Essay 3 indicates that a DSC has a negative impact on disclosure willingness only for the smallest share of consumers and that one third of consumers do not think intensively about the consequences a DSC has for their privacy risks. Additionally, essay 2 shows that a stronger reliance on low-cognitive-effort processing is prevalent in DSC disclosure settings. Moreover, essay 5 shows that even unrelated feelings of consumers can impact their disclosure willingness, but that the direction of the effect also depends on consumers' arousal level.
This thesis contributes to theory in three ways: i) it sheds light on the peculiarities of DSC disclosure settings, ii) it suggests mechanisms and results of low-effort processing, and iii) it enhances the understanding of perceived privacy risks and privacy concerns as well as their resulting effect sizes.
Besides its theoretical contributions, this thesis offers practical implications as well: it allows firms to adjust the disclosure setting and the communication with their consumers in a way that makes them more successful in data collection. It also shows that firms need not be overly anxious about reduced disclosure willingness due to being part of a DSC. At the same time, it helps consumers themselves by showing the circumstances in which they are most vulnerable to disclosing personal data. Making consumers conscious of situations in which they are especially vulnerable could serve as a countermeasure, preventing them from disclosing too much data and regretting it afterwards. Similarly, this thesis serves as thought-provoking input for regulators, as it emphasizes the importance of low-cognitive-effort processing for consumers' decision-making, which regulators may be able to take into account in the future.
In sum, this thesis expands knowledge on how consumers decide whether to disclose personal data, especially in DSC settings and regarding low-cognitive-effort processing. It offers a more unified understanding for antecedents of disclosure willingness as well as for consumers’ disclosure decision-making processes. This thesis opens up new research avenues and serves as groundwork, in particular for more research on data disclosures in DSC settings.
This collection of three chapters responds to today's energy challenges. It explores innovative policies aimed at equipping the energy poor with access to improved cooking energy and electricity, looking at both the demand and supply side of modern energy technologies. Concretely, it discusses mechanisms to increase the uptake of off-grid solar electricity in rural Rwanda based on experimental demand measurements (Chapter 1), it studies how to diffuse improved cooking technologies in rural Senegal via supply-side mechanisms (Chapter 2), and it identifies the need to target cooking technologies in consideration of the broader household context in rural Senegal and beyond (Chapter 3).
Sentences that present a complex linguistic structure act as a major stumbling block for Natural Language Processing (NLP) applications whose predictive quality deteriorates with sentence length and complexity. The task of Text Simplification (TS) may remedy this situation. It aims to modify sentences in order to make them easier to process, using a set of rewriting operations, such as reordering, deletion or splitting. These transformations are executed with the objective of converting the input into a simplified output, while preserving its main idea and keeping it grammatically sound. State-of-the-art syntactic TS approaches suffer from two major drawbacks: first, they follow a very conservative approach in that they tend to retain the input rather than transforming it, and second, they ignore the cohesive nature of texts, where context spread across clauses or sentences is needed to infer the true meaning of a statement. To address these problems, we present a discourse-aware TS framework that is able to split and rephrase complex English sentences within the semantic context in which they occur. By generating a fine-grained output with a simple canonical structure that is easy to analyze by downstream applications, we tackle the first issue. For this purpose, we decompose a source sentence into smaller units by using a linguistically grounded transformation stage. The result is a set of self-contained propositions, each of them presenting a minimal semantic unit. To address the second concern, we suggest not only splitting the input into isolated sentences, but also incorporating the semantic context in the form of hierarchical structures and semantic relationships between the split propositions. In that way, we generate a semantic hierarchy of minimal propositions that benefits downstream Open Information Extraction (IE) tasks. To function well, the TS approach that we propose requires syntactically well-formed input sentences. It targets general-purpose texts in English, such as newswire or Wikipedia articles, which commonly contain a high proportion of complex assertions.
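The splitting idea can be illustrated with a deliberately tiny, hand-written rule that breaks a sentence at a contrastive connective and records the rhetorical relation between the resulting minimal propositions; the single regex below is an assumption for illustration and stands in for the full set of linguistically grounded transformation rules.

```python
# Toy split-and-rephrase rule with a recorded rhetorical relation (illustrative).
import re

def split_contrast(sentence):
    m = re.match(r"(.+?), (?:but|although) (.+)", sentence)
    if not m:
        return [sentence], None  # rule does not apply; keep the sentence intact
    core, contrast = m.group(1).strip(), m.group(2).strip()
    props = [core[0].upper() + core[1:] + ".",
             contrast[0].upper() + contrast[1:].rstrip(".") + "."]
    return props, ("CONTRAST", 0, 1)   # relation linking proposition 0 and 1

sent = "The plant was profitable, but regulators ordered an audit."
print(split_contrast(sent))
# (['The plant was profitable.', 'Regulators ordered an audit.'], ('CONTRAST', 0, 1))
```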
In a second step, we present a method that allows state-of-the-art Open IE systems to leverage the semantic hierarchy of simplified sentences created by our discourse-aware TS approach in constructing a lightweight semantic representation of complex assertions in the form of semantically typed predicate-argument structures. In that way, important contextual information of the extracted relations is preserved that allows for a proper interpretation of the output. Thus, we address the problem of extracting incomplete, uninformative or incoherent relational tuples that is commonly observed in existing Open IE approaches. Moreover, assuming that shorter sentences with a more regular structure are easier to process, the extraction of relational tuples is facilitated, leading to a higher coverage and accuracy of the extracted relations when operating on the simplified sentences. Aside from taking advantage of the semantic hierarchy of minimal propositions in existing Open IE approaches, we also develop an Open IE reference system, Graphene. It implements a relation extraction pattern upon the simplified sentences.
The framework we propose is evaluated within our reference TS implementation DisSim. In a comparative analysis, we demonstrate that our approach outperforms the state of the art in structural TS both in an automatic and a manual analysis. It obtains the highest score on three simplification datasets from two different domains with regard to SAMSA (0.67, 0.57, 0.54), a recently proposed metric targeted at automatically measuring the syntactic complexity of sentences which highly correlates with human judgments on structural simplicity and grammaticality. These findings are supported by the ratings from the human evaluation, which indicate that our baseline implementation DisSim returns fine-grained simplified sentences that achieve a high level of syntactic correctness and largely preserve the meaning of the input. Furthermore, a comparative analysis with the annotations contained in the RST Discourse Treebank (RST-DT) reveals that we are able to capture the contextual hierarchy between the split sentences with a precision of approximately 90% and reach an average precision of almost 70% for the classification of the rhetorical relations that hold between them. Finally, an extrinsic evaluation shows that when applying our TS framework as a pre-processing step, the performance of state-of-the-art Open IE systems can be improved by up to 32% in precision and 30% in recall of the extracted relational tuples.
Accordingly, we can conclude that our proposed discourse-aware TS approach succeeds in transforming sentences that present a complex linguistic structure into a sequence of simplified sentences that are to a large extent grammatically correct, represent atomic semantic units and preserve the meaning of the input. Moreover, the evaluation provides sufficient evidence that our framework is able to establish a semantic hierarchy between the split sentences, generating a fine-grained representation of complex assertions in the form of hierarchically ordered and semantically interconnected propositions. Finally, we demonstrate that state-of-the-art Open IE systems benefit from using our TS approach as a pre-processing step by increasing both the accuracy and coverage of the extracted relational tuples for the majority of the Open IE approaches under consideration. In addition, we outline that the semantic hierarchy of simplified sentences can be leveraged to enrich the output of existing Open IE systems with additional meta information, thus transforming the shallow semantic representation of state-of-the-art approaches into a canonical context-preserving representation of relational tuples.
The study reads the Gospel of Matthew as a coherent, carefully composed narrative in order to characterise the figure of Judas by means of a narrative analysis. This approach takes into account how the figure fits into the overall conception of the text and what narrative interest is pursued with it. This synchronic, coherent reading makes clear that Judas is not a purely historical figure in the Gospel of Matthew; rather, his portrayal serves a specific function connected to the historical context, the pragmatics and the theological conception of the text as a whole. The Gospel of Matthew deals with the theme of discipleship and following Jesus, in which the two poles of 'being with Jesus' and 'little faith' play a central role. Against this horizon, Judas does not stand for the 'evil one' from whom readers should distance themselves; he is part of the community of disciples. His failure could, in principle, happen to anyone, or put differently: anyone could be Judas.
At the same time, the text offers strategies for dealing with such failure: failure must not lead to self-isolation; rather, the narrative context makes clear that there is a way back to the group and forgiveness for a repentant sinner.
This dissertation deals with geostatistical, time series, and regression analytical approaches for modelling spatio-temporal processes, using air quality data in the applications. The work is structured into four essays, the abstracts of which are given below.
The first essay is titled 'Spatial detrending revisited: Modelling local trend patterns in NO2-concentration in Belgium and Germany'. It is written in co-authorship with Prof. Dr. Harry Haupt and Dr. Angelika Schmid and published in 2018 in Spatial Statistics 28, pp. 331-351 (https://doi.org/10.1016/j.spasta.2018.04.004).
Abstract
Short-term predictions of air pollution require spatial modelling of trends, heterogeneities, and dependencies. Two-step methods allow real-time computations by separating spatial detrending and spatial extrapolation into two steps. Existing methods discuss trend models for specific environments and require specification search. Given more complex environments, specification search gets complicated by potential nonlinearities and heterogeneities. This research embeds a nonparametric trend modelling approach in real-time two-step methods. Form and complexity of trends are allowed to vary across heterogeneous environments. The proposed method avoids ad hoc specifications and potential generated predictor problems in previous contributions. Examining Belgian and German air quality and land use data, local trend patterns are investigated in a data driven way and are compared to results computed with existing methods and variations thereof. An important aspect of our empirical illustration is the heterogeneity and superior performance of local trend patterns for both research regions. The findings suggest that a nonparametric spatial trend modelling approach is a valuable tool for real-time predictions of pollution variables: it avoids specification search, provides useful exploratory insights and reduces computational costs.
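The following is a minimal Python sketch of the general two-step idea (a nonparametric trend fitted on a covariate, followed by spatial interpolation of the detrended residuals), using synthetic data, a simple local-averaging smoother and scikit-learn; the trend model, predictors and data are illustrative assumptions and differ from those used in the essay.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
# Synthetic monitoring sites: coordinates and one land-use covariate (e.g. traffic intensity).
coords = rng.uniform(0, 100, size=(200, 2))
traffic = rng.uniform(0, 1, size=(200, 1))
no2 = 20 + 15 * traffic[:, 0] ** 2 + rng.normal(0, 2, 200)     # nonlinear trend plus noise

# Step 1: nonparametric trend in the covariate (here a simple local-averaging smoother).
trend = KNeighborsRegressor(n_neighbors=15).fit(traffic, no2)
residuals = no2 - trend.predict(traffic)

# Step 2: spatial extrapolation of the detrended residuals (kriging-like GP regression).
gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0), alpha=1.0).fit(coords, residuals)

# A prediction at a new site combines both steps.
new_coords, new_traffic = np.array([[50.0, 50.0]]), np.array([[0.7]])
prediction = trend.predict(new_traffic) + gp.predict(new_coords)
print(prediction)
```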
The second essay is titled 'Predictability of hourly nitrogen dioxide concentration'. It is written in co-authorship with Prof. Dr. Harry Haupt and published in 2020 in Ecological Modelling 428, 109076 (https://doi.org/10.1016/j.ecolmodel.2020.109076).
Abstract
Temporal aggregation of air quality time series is typically used to investigate stylized facts of the underlying series such as multiple seasonal cycles. While aggregation reduces complexity, commonly used aggregates can suffer from non-representativeness or non-robustness. For example, definitions of specific events such as extremes are subjective and may be prone to data contaminations. The aim of this paper is to assess the predictability of hourly nitrogen dioxide concentrations and to explore how predictability depends on (i) level of temporal aggregation, (ii) hour of day, and (iii) concentration level. Exploratory tools are applied to identify structural patterns, problems related to commonly used aggregate statistics and suitable statistical modeling philosophies, capable of handling multiple seasonalities and non-stationarities. Hourly time series and subseries of daily measurements for each hour of day are used to investigate the predictability of pollutant levels for each hour of day, with prediction horizons ranging from one hour to one week ahead. Predictability is assessed by time series cross validation of a loss function based on out-of-sample prediction errors. Empirical evidence on hourly nitrogen dioxide measurements suggests that predictability strongly depends on conditions (i)-(iii) for all statistical models: for specific hours of day, models based on daily series outperform models based on hourly series, while in general predictability deteriorates with exposure level.
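As a rough illustration of assessing predictability by time series cross validation, the following Python sketch evaluates a weekly seasonal-naive predictor on a synthetic hourly series for several horizons; the data, the predictor and the loss function are assumptions for illustration, not those used in the essay.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
idx = pd.date_range("2019-01-01", periods=24 * 365, freq="h")
hours = idx.hour.to_numpy()
weekday = (idx.dayofweek < 5).astype(float)
# Synthetic hourly NO2-like series with daily and weekly cycles plus skewed noise.
no2 = 30 + 15 * np.sin(2 * np.pi * hours / 24) + 5 * weekday + rng.gamma(2.0, 3.0, len(idx))
series = pd.Series(no2, index=idx)

def rolling_cv_mae(series, horizon, n_origins=100):
    """Out-of-sample MAE of a weekly seasonal-naive predictor over rolling forecast origins."""
    errors = []
    last_origin = len(series) - horizon - 1
    for origin in range(last_origin - n_origins + 1, last_origin + 1):
        target = origin + horizon
        forecast = series.iloc[target - 168]   # same hour, same weekday, one week earlier
        errors.append(abs(series.iloc[target] - forecast))
    return float(np.mean(errors))

for h in (1, 24, 168):   # one hour, one day, one week ahead
    print(f"horizon {h:>3} h: MAE = {rolling_cv_mae(series, h):.2f}")
```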
The third essay is titled 'Agglomeration and infrastructure effects in land use regression models for air pollution – Specification, estimation, and interpretations'. It is written in co-authorship with Dr. Markus Fritsch and published in 2021 in Atmospheric Environment 253, 118337 (https://doi.org/10.1016/j.atmosenv.2021.118337).
Abstract
Established land use regression (LUR) techniques such as linear regression utilize extensive selection of predictors and functional form to fit a model for every data set on a given pollutant. In this paper, an alternative to established LUR modeling is employed, which uses additive regression smoothers. Predictors and functional form are selected in a data-driven way and ambiguities resulting from specification search are mitigated. The approach is illustrated with nitrogen dioxide (NO2) data from German monitoring sites using the spatial predictors longitude, latitude, altitude and structural predictors; the latter include population density, land use classes, and road traffic intensity measures. The statistical performance of LUR modeling via additive regression smoothers is contrasted with LUR modeling based on parametric polynomials. Model evaluation is based on goodness of fit, predictive performance, and a diagnostic test for remaining spatial autocorrelation in the error terms.
Additionally, interpretation and counterfactual analysis for LUR modeling based on additive regression smoothers are discussed. Our results have three main implications for modeling air pollutant concentration levels: First, modeling via additive regression smoothers is supported by a specification test and exhibits superior in- and out-of-sample performance compared to modeling based on parametric polynomials. Second, different levels of prediction errors indicate that NO2 concentration levels observed at background and traffic/industrial monitoring sites stem from different processes. Third, accounting for agglomeration and infrastructure effects is important: NO2 concentration levels tend to increase around major cities, surrounding agglomeration areas, and their connecting road traffic network.
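A minimal Python sketch of contrasting an additive spline model with a parametric polynomial benchmark on synthetic site-level data, using scikit-learn; the predictors, data and evaluation are illustrative assumptions, not the models or results of the essay.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer, PolynomialFeatures
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 400
# Synthetic predictors: longitude, latitude, population density, traffic intensity.
X = np.column_stack([rng.uniform(6, 15, n), rng.uniform(47, 55, n),
                     rng.uniform(0, 5000, n), rng.uniform(0, 1, n)])
y = 10 + 0.003 * X[:, 2] + 25 * X[:, 3] ** 2 + rng.normal(0, 3, n)   # nonlinear in traffic

# Additive model: each predictor enters via its own spline basis, combined linearly
# (a simple stand-in for additive regression smoothers).
gam_like = make_pipeline(SplineTransformer(n_knots=6, degree=3), RidgeCV())
poly_like = make_pipeline(PolynomialFeatures(degree=2), RidgeCV())   # parametric benchmark

for name, model in [("additive smoothers", gam_like), ("parametric polynomial", poly_like)]:
    score = cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error")
    print(f"{name:>22}: CV RMSE = {-score.mean():.2f}")
```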
The fourth essay is titled 'Outlier detection and visualisation in multi-seasonal time series and its application to hourly nitrogen dioxide concentration'. It is written in single authorship and has not been published yet.
Abstract
Outlier detection in data on air pollutant recordings is conducted to uncover data points that refer to either invalid measurements or valid but unusually high concentration levels. As air pollutant data is typically characterised by multiple seasonalities, the task of outlier detection is associated with the question of how to deal with such non-stationarities. The present work proposes a method that combines time series segmentation, seasonal adjustment, and standardisation of random variables. While the former two are employed to obtain subseries of homoskedastic data, the latter ensures comparability across the subseries. Further, the standardised version of the seasonally adjusted subseries represents a scaled measure for the outlyingness of each data point in the original time series from its mean and therefore forms a suitable basis for outlier detection. In an empirical application to data on hourly NO2 concentration levels recorded at a traffic monitoring site in Cologne, Germany, over the years 2016 to 2019, the common boxplot criterion is used to examine each standardised seasonally adjusted subseries for positive outliers. The results of the analyses are put into their natural temporal order and displayed in a heatmap layout that provides information on when single and sequential outliers occur.
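The following Python sketch illustrates the general recipe of segmentation, seasonal adjustment, standardisation and a boxplot criterion on a synthetic hourly series; the segmentation by month and hour of day and the injected outliers are simplified assumptions rather than the exact procedure and data of the essay.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
idx = pd.date_range("2016-01-01", "2019-12-31 23:00", freq="h")
hours = idx.hour.to_numpy()
no2 = 25 + 15 * np.sin(2 * np.pi * hours / 24) + rng.gamma(2.0, 4.0, len(idx))
no2[rng.choice(len(idx), 30, replace=False)] += 120        # inject some unusually high values
series = pd.Series(no2, index=idx)

# Segment into subseries by month and hour of day (a simple stand-in for the segmentation
# used in the essay), then seasonally adjust and standardise each subseries.
groups = series.groupby([series.index.month, series.index.hour])
standardised = groups.transform(lambda s: (s - s.mean()) / s.std())

# Boxplot criterion on the standardised values: positive outliers above Q3 + 1.5 * IQR.
q1, q3 = standardised.quantile(0.25), standardised.quantile(0.75)
outliers = series[standardised > q3 + 1.5 * (q3 - q1)]
print(f"{len(outliers)} potential positive outliers detected")
print(outliers.head())
```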
The enforcement system of the General Data Protection Regulation (GDPR) regulates the enforcement of data subjects' rights by third parties only rudimentarily in Art. 77 et seq. GDPR. This thesis therefore sets out to examine in more detail the legal enforcement options available to associations and competitors in Germany. To this end, it first analyses the leeway that the GDPR grants third parties in enforcing the law, in other words whether its system of remedies has a blocking effect on the Member States. Building on this, it then examines whether the German Act against Unfair Competition (UWG) and the Act on Injunctive Relief for Consumer Rights and Other Infringements (UKlaG) remain within the leeway thus identified.
Poverty, underemployment, lack of infrastructure, low agricultural productivity, degradation of natural resources, climate change, and eroding social cohesion are among the biggest challenges that many low and lower-middle income countries are facing. Objectives linked to addressing these pressing challenges have been ascribed to public works programmes (PWPs). These are social protection instruments which offer remuneration (in cash or kind) for vulnerable people in exchange for temporary work on labour-intensive low-skill activities with social benefits. PWPs are being implemented in around two out of three developing countries. Given the substantial amounts spent on PWPs, it is critical to know to what extent the expectations towards them are backed by evidence. This dissertation sheds light on this overarching question with three self-contained essays. The first essay synthesises the evidence from PWPs in Sub-Saharan Africa, guided by three questions: First, what can we infer from the available impact evaluations regarding the effectiveness of PWPs as a social protection instrument? Second, what do we know about the role of the wage vector, asset vector, and skills vector in this respect? Third, what can we infer about the role of design features in explaining differences in outcomes? The other two essays use empirical evidence from Malawi to address more specific questions regarding the potential of PWPs to strengthen climate resilience and the relationship between PWPs and social cohesion.
What sets the evidence synthesis in my first essay apart from existing reviews of PWPs is that it accounts for their heterogeneity by systematically differentiating results by PWP type and outcome area (income, consumption and expenditures, labour supply, food security, nutrition, asset holdings, agricultural production and techniques, and education). Programmes that offer short-term ad-hoc employment (Type 1) are distinguished from programmes that offer more predictable employment over longer periods (Type 2). For the review of impacts, this paper relies solely on (quasi-)experimental studies; for the analysis of the role of design factors, it also draws on other literature. In line with existing reviews, my results suggest that Type 1 programmes can effectively enable consumption smoothing in the wake of acute crises, whereas in contexts of chronic poverty, Type 2 programmes perform, on balance, better. Offering complementary access to extension services in Type 2 programmes can boost impacts further. However, in all cases, evidence is too scant and mixed to safely conclude whether the higher benefits of costlier PWP types justify the cost premium.
The second essay investigates the potential of PWPs to strengthen climate resilience. Among the main social protection instruments, the biggest potential to strengthen climate resilience is often ascribed to PWPs if they create climate-smart community assets and transfer knowledge of climate-smart practices. Yet, there is a lack of evidence whether design changes to this end can indeed enhance the contribution of an existing PWP to climate resilience. I use a difference-in-differences approach based on two-period panel data to analyse how a modified PWP model performs compared to the standard model of Malawi’s largest PWP after 24 months. The key modification is to embed public works in a communal watershed management plan with a strong emphasis on collective action and capacity building. I find that the modified approach considerably increased communal watershed management activities through voluntary labour contributions on top of the paid public works labour. While this increase was mainly driven by PWP participants, non-participants also made substantial contributions. I also find a small increase in the adoption of soil and water conservation practices on respondents’ private land, especially by non-PWP participants. These findings imply that such modest changes can make PWPs climate-smarter. In particular, they can broaden the engagement in and adoption of climate-smart activities beyond the group of PWP participants.
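As a minimal illustration of the difference-in-differences logic on a two-period panel, the following Python sketch estimates the interaction coefficient on synthetic household data; the variable names, the outcome and the assumed effect size are hypothetical and unrelated to the essay's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 500
# Synthetic two-period household panel: 'modified' marks communities receiving the
# modified PWP model; the outcome is hours of voluntary conservation labour.
households = pd.DataFrame({
    "hh": np.repeat(np.arange(n), 2),
    "post": np.tile([0, 1], n),
    "modified": np.repeat(rng.integers(0, 2, n), 2),
})
effect = 6.0                                                    # assumed true effect
households["hours"] = (5 + 2 * households["modified"] + 3 * households["post"]
                       + effect * households["modified"] * households["post"]
                       + rng.normal(0, 2, 2 * n))

# Difference-in-differences: the coefficient on modified:post estimates the effect
# of the modified model relative to the standard model.
did = smf.ols("hours ~ modified * post", data=households).fit(
    cov_type="cluster", cov_kwds={"groups": households["hh"]})
print(did.params["modified:post"], did.bse["modified:post"])
```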
The co-authored third essay investigates the relationship between Malawi’s MASAF PWP and social cohesion, specifically within-community cooperation for the common good. Like the existing studies, we face the challenge that neither the assignment of the programme to communities nor the selection of individual participants is randomised. We try to mitigate the endogeneity concerns by triangulating fixed effects panel analyses for a set of outcomes and sectors using two datasets with different units of analysis (households and communities). We find that public works are positively associated with coordination activities and voluntary (unpaid) contributions to public goods, along both vertical ties (between community members and local leaders) and horizontal ties (among community members). Especially for school-building activities, voluntary inputs in the form of labour and other in-kind contributions are higher in the presence of the public works programme. Our results contribute to a better understanding of the link between social protection programmes with community-driven features and social cohesion.
Overall, the findings of the three essays in this dissertation contribute to the knowledge base regarding the effectiveness and potential of PWPs across a broad range of outcome areas. Specifically, they offer new insights into how to harness the potential of PWPs to strengthen climate resilience and into the seemingly positive relationship between PWPs and social cohesion. The findings can help researchers and policy makers who are interested specifically in PWPs or in any of the many objectives that can be pursued through PWPs.
In my dissertation I propose to read and decipher social interaction processes at the micro level of the late socialist society of the GDR through the analytical lens of the concept of personality and its dimensions. I assume that these dimensions, in the form of biographical motives, desires and ideologemes as well as normative demands and evaluations, stood at the centre of these interactions and became formative within the late socialist world of meaning (on the term cf. chapter 1.2.1; Sabrow 2007).
This approach is supported by the fact that personality was a central intra-systemic discursive motif of late socialism and shaped the discursively constructed official model of society (chapter 2). As part of the "educational dictatorship" (Dietrich 2018: p. XXIX), the figure of the socialist personality shaped the ideological discourses of the GDR.
Against this background, I turn to four case studies (chapter 3) in which I read narrative-biographical interviews that I conducted with four contemporary witnesses, together with the files kept on these persons by the MfS, as interwoven, competing and interacting narratives (on the method cf. chapters 1.2.1–1.2.4). These analyses clearly show that in each case specific conflicts and interaction processes, embedded in worlds of meaning and everyday life, condensed, which in turn gave meaning to dimensions of personality or were conducted on their basis. Within these entanglements of meaning and lifeworld, the various actors (my interview partners, their friends, state organs) received the different reference systems and discursive influences and drew from them consequences for their own actions, ideas, desires and norms. They articulated these processings of meaning in varying forms and degrees and thus entered into communicative and performative interaction processes which in turn influenced the fabric of meaning, since they were received or indeed were meant to be received. At the centre of these processes stood the ideal of collectivist-minded, cooperative and committed socialist personalities. All four men, each in his own specific way, worked at deconstructing this ideal and were in the process linguistically evaluated, criminalised, excluded or co-opted by the state organs.
The study of the four case histories shows that the state actors, and above all their communicative strategies and their use of language, were not capable of providing a genuinely qualitative account. Instead, prefabricated linguistic building blocks were used that stigmatised every deviation, every non-socialist behaviour of personalities as a deficit, and precisely thereby revealed their own inadequacy and thus the overextension of the state organs. They were therefore no longer able to capture the dynamics of late socialism linguistically. The present work thus offers a further insight into the essential core of late socialist dictatorial rule in the GDR.
The increasing relevance of massive graph data reinforces the need for adequate graph data management. While several graph database engines have been developed, the storage of graph data in a relational database management system, and therefore the seamless integration into existing information systems remains an open challenge.
Motivated by the use case to integrate Building Information Modeling (BIM) data into the MonArch system, we propose a solution that transforms the BIM data into a property graph and stores this graph in the database system.
We present a novel approach to efficiently store property graph data in a relational database management system using JSON functionality and redundant storage of edges in adjacency lists and show how to import huge data sets into this schema. Applying this approach, we import data sets of up to nearly 1 TB of disk space within the relational database, while only having 96 GB of main memory available.
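A minimal Python/SQLite sketch of the underlying storage idea (node properties as JSON documents plus a redundant adjacency list per node, alongside a conventional edge table); the schema, identifiers and example data are illustrative assumptions, not the actual MonArch schema, and the JSON access presupposes an SQLite build with the JSON1 functions.

```python
import json
import sqlite3

# Nodes keep their properties as a JSON document; each node also stores its outgoing
# edges redundantly in an adjacency list, in addition to a conventional edge table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nodes (id INTEGER PRIMARY KEY, label TEXT,
                        properties TEXT, adjacency TEXT);
    CREATE TABLE edges (source INTEGER, target INTEGER, label TEXT, properties TEXT);
""")

def add_node(node_id, label, properties):
    conn.execute("INSERT INTO nodes VALUES (?, ?, ?, '[]')",
                 (node_id, label, json.dumps(properties)))

def add_edge(source, target, label, properties=None):
    conn.execute("INSERT INTO edges VALUES (?, ?, ?, ?)",
                 (source, target, label, json.dumps(properties or {})))
    # Maintain the redundant adjacency list on the source node.
    (adj,) = conn.execute("SELECT adjacency FROM nodes WHERE id = ?", (source,)).fetchone()
    adjacency = json.loads(adj) + [{"target": target, "label": label}]
    conn.execute("UPDATE nodes SET adjacency = ? WHERE id = ?",
                 (json.dumps(adjacency), source))

add_node(1, "Wall", {"ifc_type": "IfcWall", "height_m": 2.8})
add_node(2, "Storey", {"ifc_type": "IfcBuildingStorey", "name": "EG"})
add_edge(1, 2, "CONTAINED_IN")

# Property access via SQLite's JSON functions, neighbourhood access via the adjacency list.
print(conn.execute("SELECT json_extract(properties, '$.height_m') FROM nodes WHERE id = 1").fetchone())
print(conn.execute("SELECT adjacency FROM nodes WHERE id = 1").fetchone())
```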
We also present a new approach of how to retrieve data from this database schema, translating queries written in the popular property graph query language Cypher into SQL. Hence, we provide an intuitive way to write semantically complex queries.
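The following hand-written pair merely illustrates the kind of correspondence such a translation produces for one simple pattern over the node/edge schema sketched above; it is a hypothetical example, not the thesis's actual Cypher-to-SQL translation rules.

```python
# Hypothetical correspondence between a simple Cypher pattern and SQL over the
# node/edge schema of the previous sketch.
cypher = "MATCH (w:Wall)-[:CONTAINED_IN]->(s:Storey) RETURN w.height_m, s.name"

sql = """
SELECT json_extract(w.properties, '$.height_m') AS height_m,
       json_extract(s.properties, '$.name')     AS name
FROM nodes AS w
JOIN edges AS e ON e.source = w.id AND e.label = 'CONTAINED_IN'
JOIN nodes AS s ON s.id = e.target
WHERE w.label = 'Wall' AND s.label = 'Storey';
"""
print(cypher, "\n-- could be rewritten as --\n", sql)
```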
We also demonstrate the efficiency of our approach using the standardized Linked Data Benchmark Council – Social Network Benchmark (LDBC - SNB) framework. Our approach increases the throughput for this benchmark by up to 85 times, compared to existing approaches for RDBMS.
In addition, we propose a new method to transform BIM data into the property graph model and show how to apply the aforementioned property graph storage to this data. We can import IFC models of up to 300 MB within five minutes.
We show the suitability of our approach using our own use case specific benchmark, which we integrated into the previously mentioned Social Network Benchmark. For our interactive use case-specific queries, we achieve response times faster than 5 ms in 99% of all executions.
Finally, we present how the aforementioned approach to store BIM data in a relational database management system is integrated into the existing MonArch system by splitting the different functionalities of our approach into a microservice architecture.
Most Sub-Saharan African (SSA) countries experienced sound economic growth and a declining rate of poverty over the last two decades. However, the SSA region remains by far the poorest in the world and faces tremendous political, social, and economic challenges. Moreover, due to the COVID-19 pandemic, SSA entered its deepest recession recorded in 25 years, with a GDP growth rate of minus 5% in 2020. This has also induced an increase in poverty in the region, which adds to the structural challenges and further highlights the need for sound policies to address economic growth, governance, jobs, and poverty for the region to meet the Sustainable Development Goals (SDGs) in 2030 and beyond.
This thesis examines the effects of institutional quality, political instability, and a government targeted entrepreneurship program on the accumulation of human, physical, and financial capital by households and firms. In the literature, these factors are identified as key determinants of economic growth and job creation, yet this thesis addresses a knowledge gap, especially at the microeconomic level, on how households and firms accumulate these factors in the presence of weak institutional quality, political instability, and government targeted entrepreneurship programs. In particular, this thesis investigates the effects of institutional quality and political instability both across heterogeneous households in a multi-country setting and in a single country study; it also employs a randomized controlled trial (RCT) to assess the impacts of two different targeted entrepreneurship support programs; and finally, it draws on data from this field experiment to assess the performance of two different targeting mechanisms for selecting growth-oriented entrepreneurs. Each paper is self-contained, and three of the four papers were written with co-authors.
The first paper assesses the effects of institutional quality and political instability on household assets and human capital accumulation in 19 Sub-Saharan African countries for the period 2003-16. In this paper, the concept of instability is enlarged to include factual instability as measured by the number of political violence and civil unrest events, perceived instability as measured by the perceptions of the quality of institutions by households, and the interplay between factual and perceived instability. Contrary to most previous analyses, this paper takes into account household wealth distribution to show how the effects of political instability differ for poor vs. rich households. For identification, I exploit the variation of factual and perceived instability across 185 administrative regions in the 19 countries. My regressions control for a large range of confounding factors measured at the levels of households, regions, and countries. Overall, factual and perceived instability are associated with higher investments in assets, and factual instability is also associated with more investment in house improvements, yet it is negatively associated with the ownership of financial accounts. With regard to the heterogeneous effects, increased factual or perceived instability is associated with more investments in physical capital but less investments in financial and human capital among rich households, and with less investments in physical, financial and human capital among poor households. These findings suggest that political instability might enhance the accumulation of wealth by rich households and reduce that of poor households, implying that the detrimental effects of political instability have lasting consequences for poor households, especially when poor households are exposed to an actual or even just perceived deteriorating quality of the country’s institutions.
The second paper, written with Nicolas Büttner and Michael Grimm, analyzes households’ investments in assets and their consumption, and education and health expenditures when exposed to actual instability as measured by the number of political violence and protest events in Burkina Faso. There is a large, rather macroeconomic, literature that shows that political instability and social conflict are associated with poor economic outcomes including lower investment and reduced economic growth. However, there is only very little research on the impact of instability on households’ behavior, in particular their saving and investment decisions. This paper merges six rounds of household survey data and a geo-referenced time series of politically motivated events and fatalities from the Armed Conflict Location and Event Data project (ACLED) to analyze households’ decisions when exposed to instability in Burkina Faso. For identification, the paper exploits variation in the intensity of political instability across time and space while controlling for time-effects and municipality fixed effects as well as rainfall and nighttime light intensity, and many other potential confounders. The results show a negative effect of political instability on financial savings, the accumulation of durables, investment in house improvements, as well as on investment in education and health. Instability seems, in particular, to lead to a reshuffling from investment expenditures to increased food consumption, implying lower growth prospects in the future. With respect to economic growth, the sizable education and health effects seem to be particularly worrisome.
The third paper, written with Michael Grimm and Michael Weber, employs a randomized controlled trial (RCT) to assess the short-term effects of a government support program targeted at already existing and new firms located in a semi-urban area in Burkina Faso. Most support programs targeted at small firms in low- and middle-income countries fail to generate transformative effects and employment at a larger scale. Bad targeting, too little flexibility and the limited size of the support are some of the factors that are often seen as important constraints. This paper assesses the short-term effects of a randomized targeted government support program for a pool of small and medium-sized firms that have been selected based on a rigorous business plan competition (BPC). One group received large cash grants of up to US$8,000, flexible in use. A second group received cash grants of a similar size, but earmarked for business development services (BDSs) and thus less flexible, and with a required own contribution of 20%. A third group serves as a control group. All firms operate in agri-business or related activities in a semi-urban area in the Centre-Est and Centre-Sud regions of Burkina Faso. An assessment of the short-term impacts shows that beneficiaries of cash grants engage in better business practices, such as formalization and bookkeeping. They also invest more, though this does not yet translate into higher profits and employment. Beneficiaries of cash grants and BDSs show a higher ability to innovate. The results also show that cash grants cushioned the adverse effects of the COVID-19 pandemic for the beneficiaries. More generally, this study adds to the thin literature on support programs implemented in a fragile-state context.
The fourth paper, written with Michael Weber, examines the selection of entrepreneurs based on expert judgments for a BPC in Burkina Faso. To support job creation in developing countries, governments allocate significant funds to a typically small number of new or already existing micro, small, and medium-sized enterprises (MSMEs) that are growth oriented. Increasingly, these enterprises are picked through BPCs in which thematic experts are asked to make the selection. So far, the evidence on the effectiveness and efficiency of these expert judgments for screening growth-oriented entrepreneurs among contestants in BPCs is limited and mixed. Alternative or complementary approaches such as evaluation and selection algorithms are discussed in the literature, but evidence on their performance is thin. This paper uses a principal component analysis (PCA) to build a metric for comparing the performance of these alternative mechanisms for targeting entrepreneurs with high potential to grow. The results show expert subjectivity bias in judging contestant entrepreneurs. The paper finds that the scores from the expert judgment and those from the algorithm perform similarly well for picking the top-ranked or talented entrepreneurs. It also finds that both types of scores have predictive power, i.e. they are statistically significantly associated with 17 firm performance outcomes measured 10 or 34 months after the BPC started. Yet, the predictive power, as measured by the magnitude of the regression coefficients, is higher for the algorithm metric, even when it is considered jointly with expert judgment scores. Despite the statistical superiority of the algorithm, expert assessments, at least through pitches of entrepreneurs, have proved useful in many settings where free-riding or misuse of public funds may occur. Hence, efficiency and precision could be achieved by relying on a reasoned combination of expert judgments and an algorithm for targeting growth-oriented entrepreneurs.
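A minimal Python sketch of building a PCA-based composite score from baseline characteristics and comparing its predictive association with later performance against a noisy expert score, using synthetic data; the variables, scores and effect sizes are assumptions, not the paper's data or results.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n = 300
talent = rng.normal(size=n)                                   # latent growth potential
# Synthetic baseline characteristics of BPC contestants, all noisily related to talent.
baseline = np.column_stack([talent + rng.normal(0, 0.8, n) for _ in range(4)])
expert_score = talent + rng.normal(0, 1.5, n)                 # noisy, partly subjective judgment

# Algorithmic metric: first principal component of the standardised baseline data.
algorithm_score = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(baseline)).ravel()
algorithm_score *= np.sign(np.corrcoef(algorithm_score, baseline.sum(axis=1))[0, 1])  # fix arbitrary PC sign

# Synthetic later firm performance and the predictive association of both scores.
performance = 2.0 * talent + rng.normal(0, 1, n)
for name, score in [("expert", expert_score), ("algorithm", algorithm_score)]:
    rho, p = spearmanr(score, performance)
    print(f"{name:>9}: Spearman rho with later performance = {rho:.2f} (p = {p:.3f})")
```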
These four papers bring new insights into the relationship between weak institutions, political instability, and targeted government support to entrepreneurship on the one hand, and the accumulation of financial, physical, and human capital and productivity on the other, which are key factors for spurring economic growth and creating jobs in SSA. The findings suggest that building efficient institutions in SSA countries would enhance citizens' perceptions of good governance, which would reduce political instability and enable households, including the poor, to accumulate productive assets, increase their productivity, and reduce poverty. The findings also suggest that targeted government entrepreneurship support programs, e.g. in the form of cash grants with monitored disbursements yet flexible in use, can enhance firms' human capital, productive assets, and innovations, even in the short term. Moreover, the targeting mechanism of such programs could be made more effective and efficient by relying on a combination of expert judgments and an algorithm for picking growth-oriented entrepreneurs.
The thesis undertakes a conceptual-historical classification of the notion of the 'economic constitution' (Wirtschaftsverfassung) from 1924 to 2017 from the perspective of the German, above all legal, literature and the persistent difficulties of interpretation associated with it.
The main part examines the EU economic order in the form of primary law, secondary law and the most significant (EU) Commission practice regarding regulation and the allocation of funds with respect to its economic-systemic content, that is, to what extent EU policy is to be regarded, from an ordering-policy perspective, as liberal/liberalising, bureaucratically neutral or interventionist. Particular emphasis is placed on monetary policy, which has so far received little attention from an economic-systems perspective, using the example of the European Monetary Union (EMU).
Finally, the development of the member states' economies is presented in direct comparison on the basis of common national accounts indicators for the period from 1997 to 2018. Selected problem areas of the economic analysis are examined in greater depth, including the distribution of money creation in the euro area (by euro-zone member state) and the significance of persistent balance-of-payments imbalances.
With respect to religious motivations for political participation and civic engagement, scholars have focused on social capital and the potential for recruiting volunteers inside religious communities. Less attention has been paid to individual religious impulses, but also to the reasons for which people step back from deliberative processes, especially in the Eastern European Orthodox cultural context. We approach this under-explored research field in this dissertation, focusing on the Romanian city of Timișoara, and look at three particular aspects: the influence of religious perceptions on political protests (analysis of the 1989 revolution), the manner in which religion motivates young people to volunteer (a view on a local community project) and the factors leading people to withdraw from public engagement (analysis of citizens' local committees). The research methods are qualitative: interviews, group discussions and analytical interpretation. Results show that non- or less religious young people are encouraged to protest by an indefinable supernatural force and motivated by moral interests (the need for dignity and fair treatment / procedural justice) more than material ones (distributive justice). When engaging in the community, the impulse partially comes from an intrinsic spirituality and a privatized experience of the divine. Giving up civic engagement has nothing to do with remuneration, but with the need for freedom of expression and moral appreciation.
Demographic change and the 'ageing' of society are leading to an immense increase in the care needs of old people. A large proportion of these people are cared for at home by family caregivers. The 'caring relative' has recently been increasingly acknowledged as the most important provider of care, yet caregivers remain largely isolated in the private sphere and make only limited use of support services. The vast majority of family caregivers are women.
While research has repeatedly addressed the challenges faced by mothers, above all in the tension between work and family, the situation of family caregivers has so far been insufficiently studied. The thesis focuses on caring women who, once again in the course of their biography, face the difficulty of realising their own career and life plans against the family care duties entrusted to them.
Using guided, partly narrative biographical interviews, the qualitative-reconstructive empirical study investigates how daughters organise the care of their parents in need of support and under which conditions 'successful' care arrangements emerge that, despite the challenges, are bearable and stable for everyone involved and can be described as 'working well'. Relationships, motivations, experienced strain and coping strategies all play a role here.
The study finds that the nature and quality of care provided for elderly parents is decisively influenced by intergenerational relationships cultivated over a lifetime. Four ideal-typical modes of shaping these relationships can be identified, which can be placed within the Konstanz model of managing intergenerational ambivalence (following Lüscher). If family members persist in rigid patterns of expectation, making it difficult to imagine delegating care, the risk of overburdening for the caring daughters rises considerably, regardless of how 'good' or 'bad' the relationships between the generations are. The better ambivalences can be raised and worked through, the higher the resilience of the caring daughters and the more effective their coping strategies.
In her contribution, Julia Maria Mönig formulates an interim assessment of privacy research to date, diagnosing an 'ethical turn'. Using the example of the Covid-19 pandemic, she presents Rössler's three dimensions of privacy and shows how they immediately raise ethical and fundamental questions about the kind of society we want to live in and the values that should apply in it. The extension of privacy research by this ethical moment is particularly evident in technology and AI research, where ethical guidelines will have to be drawn up and implemented in the future.
Birgitt Riegraf attempts a sociological 'redefinition of privacy', which has become necessary in the course of the observable processes of digitalisation and the societal transformations connected with them. Digitalisation processes shift and blur, if not dissolve, the boundaries between the private and the public sphere, even though the drawing of boundaries between the spheres of the 'private' and the 'public' constitutes a fundamental pillar of the conception of liberal societies.
Was bedeutet es, in der digitalen Gesellschaft zu leben? Zur digitalen Transformation des Menschen
(2021)
Human actions are increasingly taken over by digital technologies, which are integrated ever more deeply into social practices. It thereby becomes virtually impossible to understand persons, relationships and social structures independently of digital technologies. From a philosophical perspective, Beate Rössler addresses the fundamental consequences of these technological and digital upheavals for human life and examines whether they merely affect human behaviour or even human nature – a concept that is itself discussed in this context.
Petra Grimm pursues the question of how the Corona pandemic has changed the meaning of privacy in everyday life. To this end, she first derives new societal narratives of privacy under pandemic conditions from public discourse. Building on this, she examines models of privacy on the basis of fictional media texts produced during the lockdown in spring 2020 (including the drama series Liebe jetzt! and the comedy series Drinnen. Im Internet sind alle gleich). She shows to what extent the confinement to private space is negotiated as a crisis of identity or of relationships, and also makes gender effects visible insofar as the crisis is narrated differently depending on the gender of the protagonists.
Kai Erik Trost takes a social perspective on privacy. In his dissertation project at the research training group he examines the semantics of the concept of friendship within digitally communicating circles of adolescent friends in an empirical interview study. In his magazine contribution he presents partial results of his work on the basis of an interview example. It becomes apparent that privacy must always be thought of as dynamic and context-specific. From a socio-spatial point of view, privacy creates a frame in which one can appear as a person in different guises and thereby highlight different aspects of one's identity.
Anyone attempting to explain political influencers with the concepts that media and communication studies have provided so far is bound to fail, because these disciplines formulated their theories with reference to the mass media. Where formerly private actors attain public speaker roles via social media, those strands of theory, which until now have dealt with influential persons in the two spheres independently of one another, reveal gaps. Marcel Schlegel approaches this deficit. His proposal: only by combining theories of opinion leadership and of the public sphere can the communicative role of these new online influencers be described.
In his contribution, Carsten Ochs attempts a socio-historical systematisation that traces the transformation of informational privacy in categorical terms. He shows how practices of informational privacy first emerged in the 18th century as 'bourgeois privacy techniques' under a 'principle of honour protection', later transformed into 'techniques of retreat', and how in a digitally networked society techniques of individual information control tend to predominate. It remains to be seen which principle informational privacy will follow in the 21st century.
Tobias Keber explains why data protection and the protection of minors in the media must be thought of together. Using the example of developmentally harmful content on TikTok, he identifies lines of conflict between data protection law and the freedoms of the media and of information, and makes clear that an appropriate balance requires both a balancing instrument and an exchange across disciplinary boundaries.
What do liberty, equality and security have to do with sex, drugs and rock 'n' roll? This is one of the questions Alexander Krafka pursues in his contribution. Using the elements of three trinities, the legal scholar examines what it could mean when a social order commits itself to the security paradigm. In doing so, he uncovers the almost paradoxical consequence that security can always also endanger the very thing it claims to protect: namely liberty. And with it privacy.
Stephanie Schiedermair deals with the 'right to be forgotten' from an international perspective, taking into account its worldwide reception, the differences between national legal systems and the manifold challenges involved. She discusses the global repercussions of the Google Spain judgment and asks how subsequent court decisions and legislative processes attempt to balance the tension between remembering and forgetting on the Internet.
Because private matters and sensitive information on the Internet always also enter public contexts, where they can potentially remain available permanently, the net has from the beginning confronted the courts with difficult balancing decisions: between the right to privacy and the right to information and communication. Ralf Müller-Terpitz has examined in which direction the pendulum of case law has swung in recent years and identifies a clear tendency.
Kai von Lewinski takes a legal perspective and critically reconstructs the limits of current legislation in the fields of information law and data protection, which proceeds in a highly individualistic manner. Data protection law focuses on the individual data processing step with respect to a specific 'data subject' and can therefore hardly grasp the current concentrations of informational power. The article argues for pursuing further the interdisciplinary perspective on the protection of privacy developed in the research training group.
1 TRANSFORMATIONEN DES PRIVATEN
Kai von Lewinski:
Die Borkenstruktur des Datenschutzes am Baum der Privatheit im Wald der Datenmacht (S. 6)
Julia Maria Mönig:
Von der Privatheit(-sforschung) zur (Werte-)Ethik (S. 11)
Birgitt Riegraf:
Die Sphäre der Privatheit in Zeiten der Digitalisierung (S. 17)
Beate Rössler:
Was bedeutet es, in der digitalen Gesellschaft zu leben? Zur digitalen Transformation des Menschen (S. 20)
2 MEDIEN UND KULTUREN DES PRIVATEN
Petra Grimm:
Mediatisierte Privatheit in der Corona-Pandemie (S. 27)
Kai Erik Trost:
Person(en) sein können – die heutige Privatheit aus einer sozialräumlichen Perspektive (S. 32)
Marcel Schlegel:
Aufenthaltsstatus: ungeklärt – Was Polit-Influencer:innen für Meinungsführer und Öffentlichkeitskonzepte bedeuten (S. 36)
Carsten Ochs:
Lost in Transformation? Einige Hypothesen zur Systematik der Strukturtransformation informationeller Privatheit vom 18. Jh. bis heute (S. 48)
3 SCHUTZ(-RÄUME) DES PRIVATEN
Tobias Keber:
Datenschutz und Mediensystem – Altersverifikation und Uploadfilter aus intradisziplinärer Perspektive (S. 56)
Alexander Krafka:
Einigkeit und Recht und Sicherheit – Das Sicherheitsdispositiv als aktuelles Paradigma der Privatheitskultur (S. 62)
Stephanie Schiedermair:
Das Recht auf Vergessenwerden zwischen Luxemburg, Straßburg, Karlsruhe und der Welt (S. 66)
Ralf Müller-Terpitz:
Mediale Öffentlichkeit vs. Schutz der Privatheit – Juristische Grenzverschiebungen durch die Digitalisierung? (S. 71)
The thesis examines the use of algorithmic data processing in sports science cardiopulmonary exercise testing (spiroergometry) from practical and theoretical points of view. The current prevalence of algorithmic processing of breath-by-breath data is documented through the results of a questionnaire and a systematic literature review. In addition, the variance in measured oxygen uptake caused by such algorithms is analysed for discontinuous exercise tests, for adolescents, and in the submaximal exercise range.
In many cases, transitioning towards sustainable agricultural production requires farmers to change their practices. These changes can include the adoption of sustainable agricultural practices, water-saving, or the disadoption of excessive chemical input use or land burning. Policy makers interested in making agricultural production more sustainable need to understand what encourages the uptake of sustainable practices and what is effective in reducing unsustainable practices. This thesis seeks to understand whether and how information provision and endorsement can contribute to the transition towards more sustainable agricultural systems.
The thesis consists of three self-contained papers. The first paper explores the potential of religious endorsement for inducing pro-environmental behaviour and encouraging the disadoption of fire as an agricultural practice, thereby preventing forest fires. The paper analyses the impact of a fatwa (an Islamic religious ruling) on reducing fire incidence in Indonesia. Results indicate that fire incidence decreased in Muslim majority villages following the issuing of the fatwa. For the post-fatwa period from August 2016 to December 2019, the average monthly effect amounts to around 2.2 prevented fire events per village. This is a considerable effect. The paper concludes that fire prevention efforts, and potentially other environmental conservation efforts, could benefit significantly from support by religious institutions and stakeholders.
The second paper investigates the role of information provision and training for the adoption of organic farming practices in Java, Indonesia. We use a randomised controlled trial (RCT) to identify the impact of a three-day hands-on training in organic farming for smallholder farmers. We find that the training intervention increased the adoption of organic inputs and had a positive and statistically significant effect on farmers’ knowledge and perceptions of organic farming. Overall, our findings suggest that information constraints are a barrier to the adoption of organic farming, as information provision increased the use of organic farming practices.
The third paper investigates whether urban and suburban Indonesian consumers are willing to pay a price premium for organic food. We use an incentive-compatible auction based on the Becker-DeGroot-Marschak (BDM) approach to elicit consumers’ willingness to pay (WTP). We further study the effect of income and a randomised information treatment about the benefits of organic food on respondents’ WTP. Estimates suggest that consumers are willing to pay a price premium for organic rice, on average 20 percent more than what they paid for conventional rice outside of our experiment. However, our results also indicate that raising consumers’ WTP further is complex. Showing participants a video about the health or, alternatively, environmental benefits of organic food was not effective in further raising WTP. Exposure to the environmental benefits video was, however, effective in raising stated organic food consumption intentions.
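For illustration, a minimal Python sketch of the BDM purchase rule under assumed prices and bids; the numbers are hypothetical and unrelated to the experiment's design parameters.

```python
import random

def bdm_purchase(bid, price_range=(0.0, 2.0), seed=0):
    """Becker-DeGroot-Marschak mechanism: a random price is drawn; the participant buys
    at that price if and only if the stated bid is at least the drawn price. Truthful
    bidding (bid equal to the true WTP) is therefore the optimal strategy."""
    price = random.Random(seed).uniform(*price_range)
    return {"price": price, "buys": bid >= price, "pays": price if bid >= price else 0.0}

# Hypothetical participant willing to pay a 20% premium over a conventional rice price of 1.0.
print(bdm_purchase(bid=1.2))
```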
Critical infrastructure and contemporary business organizations are experiencing an ongoing paradigm shift of business towards more collaboration and agility. On the one hand, this shift seeks to enhance business efficiency, coordinate large-scale distribution operations, and manage complex supply chains. But, on the other hand, it makes traditional security practices such as firewalls and other perimeter defenses insufficient. Therefore, concerns over risks like terrorism, crime, and business revenue loss increasingly impose the need for enhancing and managing security within the boundaries of these systems so that unwanted incidents (e.g., potential intrusions) can still be detected with higher probabilities. To this end, critical infrastructure organizations step up their efforts to investigate new possibilities for actively engaging in situational awareness practices to ensure a high level of persistent monitoring as well as on-site observation.
Compliance with security standards is necessary to ensure that organizations meet regulatory requirements mostly shaped by a set of best practices. Nevertheless, it does not necessarily result in a coherent security strategy that considers the different aims and practical constraints of each organization. In this regard, there is an increasingly growing demand for risk-based security management approaches that enable critical infrastructures to focus their efforts on mitigating the risks to which they are exposed. Broadly speaking, security management involves the identification, assessment, and evaluation of long-term (or overall) objectives and interests as well as the means of achieving them.
Due to the critical role of such systems, their decision-makers tend to enhance the system resilience against very unpleasant outcomes and severe consequences. That is, they seek to avoid decision options associated with likely extreme risks in the first place. Practically speaking, this risk attitude can significantly influence the decision-making process in such critical organizations. To incorporate this aversion to extreme risks into security management decisions, this thesis thoroughly investigates the capabilities of a recently emerged theory of games whose payoffs are probability distributions. Unlike traditional optimization techniques, this theory provides an alternative decision technique that is more robust to extreme risks and uncertainty. Furthermore, this thesis proposes a new method that gives a decision maker more control over the decision-making process by defining loss regions with different importance levels according to people's risk attitudes. In this way, the static decision analysis used in the distribution-valued games is transformed into a dynamic process that can adapt to different subjective risk attitudes or account for future changes in the decision caused by a learning process or other changes in the context.
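The following Python sketch only illustrates the underlying intuition of preferring options whose loss distributions put less mass in extreme-loss regions; the distributions, regions and weights are hypothetical, and the comparison is a strong simplification of the actual distribution-valued game-theoretic framework.

```python
import numpy as np

rng = np.random.default_rng(6)
# Empirical loss distributions of two defence options (hypothetical simulation output).
option_a = rng.gamma(2.0, 1.0, 10_000)            # moderate losses, light tail
option_b = rng.lognormal(0.0, 1.2, 10_000)        # similar median, heavier tail

# Loss regions with increasing importance, reflecting aversion to extreme outcomes.
regions = {"minor (<2)": (0, 2), "serious (2-6)": (2, 6), "extreme (>6)": (6, np.inf)}
weights = {"minor (<2)": 1.0, "serious (2-6)": 5.0, "extreme (>6)": 50.0}

def weighted_risk(losses):
    """Probability mass per loss region, weighted by the region's importance."""
    return sum(w * np.mean((losses >= lo) & (losses < hi))
               for (_, (lo, hi)), w in zip(regions.items(), weights.values()))

for name, losses in [("option A", option_a), ("option B", option_b)]:
    tail = np.mean(losses > 6)
    print(f"{name}: P(extreme loss) = {tail:.3f}, weighted risk = {weighted_risk(losses):.2f}")
```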
Throughout its different parts, this thesis shows how theoretical models, simulation, and risk assessment models can be combined into practical solutions. In this context, it deals with three facets of security management: allocating limited security resources, prioritizing security actions, and tweaking decision making. Finally, the author discusses experiences and limitations distilled from this research and from investigating the new theory of games, which can be taken into account in future approaches.
Governments around the world currently focus on shaping the digital economy. Particular attention is paid to Internet platforms, Internet infrastructure and data as essential components of the digital economy. The three studies in this thesis contribute to the understanding of the behavior of firms in each of these domains and derive insights for future regulations and business projects.
The first study deals with the ranking of content on Internet platforms and how it affects the incentives of content providers to invest in content quality. The focus of the study is on sponsored ranking and organic ranking, but the case that a vertically integrated content provider is favored by an Internet platform is also taken into account. Using a game theoretic model, it is shown that there is no ranking design that strictly leads to more investment compared to the other designs. It is also shown that the Internet platform usually chooses the type of ranking that, from the perspective of the Internet platform and consumers, yields the best expected overall content quality. The second study deals with the incentive of Internet service providers to throttle specific Internet content. The key finding is that Internet service providers use this instrument to utilize the capacity of their telecommunications network more efficiently. This leads not only to more benefits for Internet users, but also to a higher incentive to invest in network capacity due to better monetization. The third study examines the circumstances under which firms are willing to share data with other firms. By means of an economic laboratory experiment, it is shown that more data is shared if the firms have control over who exactly they share data with. Thus, for example, data pools that grant unrestricted data access to all participating firms can be expected to perform worse than data pools that give their participating firms control over with whom their uploaded data is shared. In addition, the third study finds that established relationships are characterized by more data sharing and less volatility in the amount of shared data than new relationships. The study concludes that data sharing projects should not be expected to work optimally right away.
In summary, the studies in this thesis identify a number of costs that may arise when digital firms' choice is restricted by regulation or design. The ability of Internet service providers to throttle certain content and the ability of Internet platforms to choose the ranking design are usually used in the best interests of consumers. Data sharing also works best when firms are free to decide who gets their data.
The current electricity grid is undergoing major changes. There is increasing pressure to move away from power generation from fossil fuels, both due to ecological concerns and fear of dependencies on scarce natural resources. Increasing the share of decentralized generation from renewable sources is a widely accepted way to a more sustainable power infrastructure. However, this comes at the price of new challenges: generation from solar or wind power is not controllable and only forecastable with limited accuracy. To compensate for the increasing volatility in power generation, exerting control on the demand side is a promising approach. By providing flexibility on demand side, imbalances between power generation and demand may be mitigated.
This work is concerned with developing methods to provide grid support on demand side while limiting the associated costs. This is done in four major steps: first, the target power curve to follow is derived taking both goals of a grid authority and costs of the respective load into account. In the following, the special case of data centers as an instance of significant loads inside a power grid are focused on more closely. Data center services are adapted in a way such as to achieve the previously derived power curve. By means of hardware power demand models, the required adaptation of hardware utilization can be derived. The possibilities of adapting software services are investigated for the special use case of live video encoding. A method to minimize quality of experience loss while reducing power demand is presented. Finally, the possibility of applying probabilistic model checking to a continuous demand-response scenario is demonstrated.
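As a minimal illustration of deriving a utilisation plan from a target power curve via a hardware power demand model, the following Python sketch inverts a simple linear power model; the model form, parameters and target curve are assumptions for illustration only.

```python
import numpy as np

# Simple linear power model of a server cluster: P(u) = P_IDLE + u * (P_PEAK - P_IDLE),
# with utilisation u in [0, 1] (model form and numbers are illustrative assumptions).
P_IDLE, P_PEAK = 40.0, 100.0      # kW

def required_utilisation(target_power_kw):
    """Invert the power model to get the utilisation needed to hit a target power value."""
    u = (target_power_kw - P_IDLE) / (P_PEAK - P_IDLE)
    return float(np.clip(u, 0.0, 1.0))

# Hypothetical target power curve (kW) derived from grid-support goals for the next hours.
target_curve = [95, 80, 60, 45, 70]
utilisation_plan = [required_utilisation(p) for p in target_curve]
print([round(u, 2) for u in utilisation_plan])
```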
Networks of regional cross-border governance (RCBG) have gained noticeably in importance in the EU, particularly over the past two decades. They constitute highly complex governance structures and offer considerable added value for the actors involved. The EUSDR and EUSALP are a further development of RCBG and can point to demonstrable successes in various areas, but they fall short of the high expectations postulated in advance. RCBG networks, and the macro-regional strategies in particular, contribute in a certain way to a territorial differentiation of the EU, yet they are still far from fulfilling the expectations sometimes voiced of a '(macro-)regionalisation of the EU'.
Replacing fossil-fueled vehicles with Electric Vehicles (EVs) poses new challenges for power distribution networks. Specifically, the electrification of the mobility sector relies on the ability to process and analyze information on when, where, for how long, or how fast charging processes will take place. Nevertheless, such information is typically difficult to acquire or insufficiently predictable due to the dynamic nature of the system. Also, the increasing adoption of renewable energy sources, specifically domestic Photovoltaic (PV) systems, and the potentially associated grid defection scenarios will significantly impact the cost and effort required to operate the grid in terms of power quality and demand-supply aspects. However, such emerging requirements were arguably not taken into account when the distribution grid was originally built. Besides, expanding the distribution and transmission capacity is a very costly and lengthy process. Therefore, any proposed solution should be cost-effective as well as environment-, grid- and user-friendly. To this end, advancements in Information and Communications Technology (ICT) are increasingly adopted and applied. This thesis addresses the rapidly growing EV sector and deals with the problem of overcoming the potential power quality degradation caused by the challenges mentioned above.
Since time switch and radio ripple control as existing solutions in Germany are costly and neither very effective nor scalable as it requires hardware retrofitting of existing public Charging Stations (CSs), the primary focus of this work is the development of an appropriate, standards-based, scalable, and smart charging solution of EVs. Such a solution can, in turn, boost the usage of renewable energy by ensuring that the existing grid infrastructure can operate within its permissible limits while maintaining acceptable levels of power quality.
This work introduces a new definition of the concept of "grid-friendly EV charging", in which the power demand of a CS is adjusted depending on the real-time status of the power grid. In this regard, the conflicting concerns of the stakeholders in an EV ecosystem are considered. For example, a Distribution System Operator (DSO) does not want to reveal many technical details about the power grid or its status. Similarly, a Charging Service Provider (CSP) wants to keep its clients happy without sharing the details of its business model with others, namely the DSOs. For that purpose, a distributed smart charging architecture is proposed in this thesis. It is event-driven and responds in nearly real time to unforeseen and critical grid situations such as high/low voltage, congestion, phase unbalance, and harmonics. The publish/subscribe messaging pattern, used as part of the architecture, enables an efficient and well-performing communication scheme among the different components. Moreover, an indication mechanism for the different issues in a power grid is developed, adopting the traffic light model; it acts as a black box towards the separate smart controllers of the individual CSs and is configured only by the CSP. Smart chargers enable a smooth adjustment of the charging power to avoid drastic changes in the grid state. To that end, two types of intelligent controllers are developed and tested. While the first controller is based on fuzzy logic, the second is inspired by the slow-start mechanism that TCP uses to control congestion in computer networks.
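To illustrate the general idea of a slow-start-inspired charging controller reacting to a traffic-light grid signal, the following sketch ramps charging power up multiplicatively while the grid reports no issues and backs off when it does; the thresholds, step sizes, and class names are assumptions for demonstration, not the controller developed in the thesis.

```python
# Illustrative sketch of a slow-start-inspired charging controller, assuming a
# traffic-light grid signal ("green"/"yellow"/"red"); thresholds, step sizes and
# names are assumptions for demonstration, not the controller from the thesis.

class SlowStartCharger:
    def __init__(self, min_kw=1.4, max_kw=11.0):
        self.min_kw = min_kw
        self.max_kw = max_kw
        self.power_kw = min_kw

    def step(self, grid_signal):
        """Update the charging power based on the current grid signal."""
        if grid_signal == "green":
            # Ramp up multiplicatively while the grid reports no issues.
            self.power_kw = min(self.power_kw * 2, self.max_kw)
        elif grid_signal == "yellow":
            # Hold the current power when the grid approaches its limits.
            pass
        else:  # "red": back off sharply to relieve the grid.
            self.power_kw = max(self.power_kw / 2, self.min_kw)
        return self.power_kw

charger = SlowStartCharger()
for signal in ["green", "green", "green", "yellow", "red", "green"]:
    print(signal, round(charger.step(signal), 1))
```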
A simulative approach is applied to evaluate the solution, in which the topology of a real low-voltage grid with realistic load and generation profiles is used. Furthermore, a set of metrics is defined that reflects the main concerns of the stakeholders: voltage, overloading, fairness, the satisfaction of EV users and the grid operator, as well as the grid-friendly behavior of a CS/EV user. The evaluation shows that the solution is able to guarantee safe operation of the grid. The proposed system can ensure grid-friendly charging by sacrificing a small portion of user satisfaction; this sacrifice is compensated via a points-based reward system. Last but not least, the proposed distributed controllers are compared to two other controllers: (1) a decentralized controller based only on sensing the local voltage and (2) a very strict centralized controller focusing on grid-friendliness. The latter ensures proportional fairness among users with respect to the objective function of the optimization problem solved in each simulation step. The distributed controllers are superior to the decentralized controller in terms of grid-friendliness and fairness, and in general converge to the centralized one.
The publication vividly traces how state administration and management of mining operations took shape and expanded over time, also against the background of changing territorial rule, up to the so-called Direktionsprinzip (principle of state direction). With the Mining Act of 1865, state influence was reduced to a minimum.
The mining law reforms of the mid-19th century in Prussia had far-reaching consequences for the mining workforce. In North Rhine-Westphalia, a similar paradigm shift took place in 1994 with the transition from trade supervision (Gewerbeaufsicht) to the occupational safety administration. The treatise examines and compares the effects of these legal changes.
The revision of mining law in Austria in the middle of the 19th century initially intended to remove the smelting industry from mining law and to assign it to general trade law. This met with fierce resistance from the owners of mines and smelting works. The treatise traces the course of events up to the enactment of the Mining Act.
Habsburg versus Preußen
(2016)
The purpose of this work is to take a closer look at the relationship between the two great states Austria and Prussia within 'Germany', particularly with regard to the personal animosities of the acting protagonists, and to derive from this why differing views, expectations, mentalities, and prejudices, coupled with the respective striving of Habsburg and Prussia to achieve dominance among the German states or to prevent the supremacy of the other state, could not lead to the establishment of a common German empire in Central Europe. The account of the political events has therefore been reduced to the central occurrences, since the historical developments have already been exhaustively treated in numerous other publications.
This fundamental study is embedded in a more extensive work that compares Prussia and Austria with respect to the development of mining and the partly differing design of mining law.
Arme sterben früher
(2016)
In the course of an investigation of miners' working conditions in the 19th century, it was found that the onset of invalidity occurred at ever younger ages. Mortality, too, rose over the years. This led to the question of whether not only the working conditions were the cause of early invalidity and higher morbidity, but also the mere fact of belonging to a particular social stratum. Since this was difficult to reconstruct ex post for the period in question, it was examined whether illness and morbidity might depend on social position in the present day as well.
Bilingual subject teaching is considered one of the most significant changes in the German school system. Secondary schools such as Gymnasien and Realschulen in particular use this didactic innovation to further develop their school profile and to foster the linguistic and cognitive development of their students. Yet this form of teaching can also offer genuine added value at the Mittelschule, especially in the M-Zug. This thesis aims to investigate and further develop the didactic effectiveness of bilingual religious education scientifically. Through the practical implementation of theories of religious education and religious didactics as well as of bilingual teaching, a prototype of bilingual religious education is to be created that is capable of decoding religion as a 'foreign language'. In the iterative implementation of bilingual religious education units, the aim is then to find out to what extent the use of a foreign language can enable a cognitive penetration and appropriation of religious knowledge. The goal, however, is not to produce an empirical study. Rather, the present field study is intended to generate hypotheses and theories that can then be subjected to empirical testing in a next step.
The segmentation of volumetric datasets, i.e., the partitioning of the data into disjoint sub-volumes with the goal of extracting information about these regions, is a difficult problem and has been discussed in medical imaging for decades.
Due to ever-increasing imaging capabilities, in particular in X-ray computed tomography (CT) and magnetic resonance imaging, segmentation is also gaining interest in industrial applications.
Especially in industrial applications, the generated datasets are growing in size.
Hence, most applications apply well-known techniques in a 2+1-dimensional manner, i.e., they run image segmentation procedures on each slice separately and track the progress along the axis in which the slices are stacked.
This discards the information on preceding or subsequent slices, which is often assumed to be nearly identical. However, in the industrial context this assumption may prove wrong, since industrial parts can change their appearance significantly over the course of even a few slices.
Moreover, artifacts can further distort the content of the slices.
Therefore, three-dimensional processing of voxel volumes is preferable, which imposes constraints on the segmentation procedures. For example, they cannot rely on global information, since computing it efficiently is usually not feasible for big scans.
Yet another frequent problem is that applications focus on individual parts only and algorithms are tailored to that case. Most prominent medical segmentation procedures do so by applying methods to specifically find the liver and only the liver of a patient, for example.
The implication is that the same method then cannot be applied to find other parts of the scan and such methods have to be designed individually for any object to be segmented.
Flexible segmentation methods are also needed, specifically when partitioning unique scans. We define a unique scan to be a voxel dataset for which no comparable volume exists.
Classical examples include the use case of cultural heritage where not only the objects themselves are unique but also scan parameters are optimized to obtain the best image quality possible for that specific scan.
This thesis aims at introducing novel methods for voxelwise classifications based on local geometric features.
The latter are computed from local environments around each voxel and extract information in a similar way as humans do, namely by assessing the similarity of the local neighborhood to geometric or textural primitives.
These features serve as the foundation for learning the proposed voxelwise classifiers and for discriminating between segmented and unsegmented voxels.
On the one hand, they perform fully automated clustering of volumes for which a representative random sample is extracted first.
On the other hand, a set of segmenting classifiers can be trained from few seed voxels, i.e., volume elements for which a domain expert marked if they belong to the components that shall be segmented. The interactive selection offers the advantage that no completely labeled voxel volumes are necessary and hence that unique scans of objects can be segmented for which no comparable scans exist.
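As a minimal sketch of the seed-voxel workflow, the following example computes simple per-voxel features from local environments, trains a classifier on a handful of expert-marked voxels, and predicts a label for every voxel in one pass. The features (local mean, standard deviation, gradient magnitude), parameters, and data are illustrative stand-ins, not the geometric features or classifiers proposed in the thesis.

```python
# Minimal sketch of voxelwise classification from local features and seed voxels.
# The features here (local mean, standard deviation, gradient magnitude) are simple
# stand-ins for the geometric features of the thesis; names and parameters are assumed.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def local_features(volume, sigma=1.0):
    """Compute one feature vector per voxel from its local environment."""
    mean = ndimage.uniform_filter(volume, size=3)
    sq_mean = ndimage.uniform_filter(volume ** 2, size=3)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    grad = ndimage.gaussian_gradient_magnitude(volume, sigma=sigma)
    return np.stack([mean, std, grad], axis=-1).reshape(-1, 3)

volume = np.random.rand(32, 32, 32)               # placeholder CT volume
features = local_features(volume)

# Seed voxels: a domain expert marks a few voxels as foreground (1) or background (0).
seed_idx = np.array([100, 2000, 15000, 30000])    # flat voxel indices (assumed)
seed_lbl = np.array([1, 0, 1, 0])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features[seed_idx], seed_lbl)

# One linear pass over all voxels yields the segmentation mask.
mask = clf.predict(features).reshape(volume.shape)
```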
Overall, it will be shown that all proposed segmentation methods are effectively of linear runtime with respect to the number of voxels in the volume. Thus, voxel volumes without size restrictions can be segmented in an efficient linear pass through the volume.
Finally, the segmentation performance is evaluated on selected datasets which shows that the introduced methods can achieve good results on scans from a broad variety of domains for both small and big voxel volumes.
Online social networks provide a rich source of information about millions of users worldwide. However, due to their sparsity and complex structure, analyzing these networks is quite challenging and expensive. Recently, graph embedding has emerged to map networked data into low-dimensional representations, i.e., vector embeddings. These representations are fed into off-the-shelf machine learning algorithms to simplify and speed up graph analytic tasks. Given the immense importance of social network analysis, this thesis studies graph embedding for social networks in three directions.
Firstly, we focus on social networks at the microscopic level to primarily encode the structural characteristics of users' personal networks, so-called ego networks. These representations are utilized in evaluation tasks whose performance depends on relational information from direct neighbors. For example, social circle prediction and event attendance inference both need structural information from neighbors in social networks.
Secondly, we explore assessing the content of vector embeddings in terms of topological properties. This is addressed via two proposed approaches: 1) a learning-to-rank algorithm in which the model weights reveal the importance of properties at the subgraph level (ego networks), and 2) a regression model for the direct approximation of network statistical properties at the vertex level.
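The second approach can be illustrated with a small sketch that regresses a vertex-level statistical property (here, degree) from node embeddings; a plain SVD of the adjacency matrix stands in for the learned embeddings, and all parameters are illustrative assumptions rather than the models used in the thesis.

```python
# Minimal sketch of approximating a vertex-level statistical property (here: degree)
# by a regression model on node embeddings. A simple SVD of the adjacency matrix
# stands in for the learned embeddings; all parameters are illustrative assumptions.
import networkx as nx
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LinearRegression

G = nx.karate_club_graph()                     # small stand-in social network
A = nx.to_numpy_array(G)

embeddings = TruncatedSVD(n_components=8, random_state=0).fit_transform(A)
degrees = A.sum(axis=1)                        # the property to approximate

reg = LinearRegression().fit(embeddings, degrees)
print("R^2 of the approximation:", reg.score(embeddings, degrees))
```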
Thirdly, we propose extensions of graph embedding to capture sign or additional content of social networks. Users in social media often express their feelings and attitudes towards others which forms sentiment links besides social links. We design a joint objective function whose terms capture semantics of both social and sentiment links simultaneously. We also propose a multi-task learning framework for networks with attributes and labels by stacking autoencoders. The weights of the learning tasks are automatically assigned via an adaptive loss weighting layer.
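One common way to realize an adaptive loss weighting layer for such a multi-task setup is uncertainty-based weighting with learnable log-variances (Kendall et al., 2018); the sketch below shows that pattern as an assumption for illustration, not necessarily the layer developed in the thesis.

```python
# Illustrative sketch of an adaptive loss weighting layer for multi-task learning,
# using learnable log-variances (Kendall et al., 2018) as one common realization;
# this is an assumption for demonstration, not necessarily the layer of the thesis.
import torch
import torch.nn as nn

class AdaptiveLossWeighting(nn.Module):
    def __init__(self, num_tasks):
        super().__init__()
        # One learnable log-variance per task; optimized jointly with the model.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for i, loss in enumerate(task_losses):
            precision = torch.exp(-self.log_vars[i])
            total = total + precision * loss + self.log_vars[i]
        return total

weighting = AdaptiveLossWeighting(num_tasks=2)
reconstruction_loss = torch.tensor(0.8)   # e.g. attribute autoencoder loss
label_loss = torch.tensor(1.3)            # e.g. node classification loss
combined = weighting([reconstruction_loss, label_loss])
```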
This thesis is concerned with proposals that aimed at transforming or reforming the English-speaking world so that it could continue to dominate the world in the future. In the late 19th and early 20th centuries, these ideas emerged from the discourse of Anglo-Saxonism that represented the Anglo-Saxon ‘race’ as the most developed ‘race’ in the world which could, therefore, ‘legitimately’ rule the world. In the later 20th century, an Atlantic discourse developed, which appeared to address further nations in the group of world leaders. However, it seems to rely on similar discursive elements as Anglo-Saxonism, which only includes the English-speaking world. The construction of the respective discourses is examined in late 19th/early 20th century writings by authors broadly associated with the British Empire as well as in Union Now, a 1939 book by U.S.-American Clarence K. Streit. The latter part presents the focus of this thesis. Streit developed a new concept of a world order in which a world state – the Atlantic Union – was to be established. In a first step, it should only be founded by a nucleus of the 15 ‘leading’ democracies in the world and should subsequently be expanded. In addition to the connection between Anglo-Saxonism and Atlanticism which is investigated in Streit's writings, his network and prominence are analyzed, as are the resolutions Streit's supporters introduced into the U.S. Congress and Streit's stance on imperialism.
Forschungsdokumentation
(2021)
The "Projekt zur Rettung der Flussperlmuschel in Niederbayern" (project to save the freshwater pearl mussel in Lower Bavaria) is dedicated to breeding and reintroducing the freshwater pearl mussel in waters in which the formerly native mussel has meanwhile become nearly extinct. The implementation of the project is accompanied by a scientific evaluation that identifies relevant actors contributing to the rescue of the freshwater pearl mussel and reveals fields of tension or willingness to compromise between them (Heinrich / Karlstetter 2021). The present research documentation comprises supporting material for the project report: the interview transcripts on which a content analysis was based, the results of that content analysis, and the cover letter with which the interviewees were contacted.
Fundamental changes in business-to-business (B2B) buying behavior confront B2B supplier firms with unprecedented challenges. On the one hand, a rising share of industrial buyers demands digitalized offerings and processes from suppliers. Consequently, suppliers are urged to implement digital transformations by expanding the range of both digital offerings and processes. On the other hand, B2B buyers increasingly expect suppliers to provide individually tailored solutions to their idiosyncratic needs. Hence, suppliers are also required to implement non-digital transformations by providing offerings and processes that are customized to each customer's specific requirements.
The rise of these digital and non-digital transformations calls established knowledge into question. Thus, B2B marketing research and practice are urged to create a comprehensive understanding of digital and non-digital transformations by means of novel and empirically grounded insights and to derive actionable response strategies. In response, my dissertation addresses the overall research question of how B2B supplier firms can successfully implement both digital and non-digital transformations in three individual essays.
In Essay 1, I offer a broader perspective on both digital and non-digital transformations by investigating digital service customization (i.e., the tailoring of digital B2B services to customers’ individual needs). Through a systematic literature review and bibliometric analysis, I outline a comprehensive set of factors that favor the application of distinct digital service customization strategies. Essay 2 represents a deep dive into digital transformations of sales processes. By making use of two rich sets of qualitative interview material from supplier and buyer firms, I identify the challenges resulting for B2B salespeople from the introduction of digital sales channels into personal selling. Moreover, I uncover facilitating mechanisms that sales managers can employ to support salespeople in coping with digital sales channels. Finally, Essay 3 constitutes a deep dive into non-digital transformations. Based on qualitative interview material and survey data from matched sales manager–salesperson dyads, the essay explores how configurations of individual salespeople’s personal and procedural competencies facilitate success at selling customer solutions (i.e., highly customized, performance-oriented offerings comprising products and/or services). The essay shows that successfully selling customized offerings like solutions hinges on salespeople’s unique configurations of present and absent competencies.
In a nutshell, these essays provide three major insights on how B2B suppliers can successfully implement digital and non-digital transformations. First, they underscore that a comprehensive understanding of the origins and spillover effects of transformations is a key prerequisite to successfully implementing them. Second, they unveil that digital and non-digital transformations have an impact on multiple organizational levels. Third, they point out important resources and capabilities that help suppliers to successfully implement transformations, be they digital or non-digital.
With this dissertation, I make substantial contributions to the broader literature on digital and non-digital transformations in B2B contexts. At the same time, my dissertation provides hands-on implications for managers in B2B supplier firms that are facing fundamental transformations in the marketplace—both digital and non-digital in nature.
When the first peasant settlers came to Central Asia, Russia's newly conquered territory, at the end of the 19th and the beginning of the 20th century, among them, formatively, many German Mennonites, they decisively shaped the further development of the region from then on. Nevertheless, in the shadow of the successes of the military conquest and in the shadow of the Soviet period, they have become the veritable "dead souls" of the history of colonization.
Viewed as a multidimensional social phenomenon, the migration to Turkestan reveals a cross-cultural goal of peasant community-building in which culture-specific processes of interpretation and meaning-making took place. Numerous peasant projections of what was regarded as the right way of life, within the framework of the intended settling-in, converged into a peasant space of imagination. The Mennonite emigration, hitherto explained in the literature one-sidedly by religious motives, is likewise a consequence of these phenomena. By no means could all of the peasants' expectations be fulfilled, but they exerted a real effect on the new homeland: they certainly formed one background to the uprising of the indigenous population in 1916, yet at the same time they turned Turkestan into a space of possibility, a stable narrative that continued to have an effect well into the 1930s of the Soviet era.
The concept of programmable networks is radically changing the way communication infrastructures are designed, integrated, and operated. Currently, the topic is spearheaded by concepts such as software-defined networking, forwarding and control element separation, and network function virtualization. Notably, software-defined networking has attracted significant attention in telecommunication networks and data centers and is thus already deployed in some production-grade networks.
Despite the prevalence of software-defined networking in these domains, industrial networks have yet to see its benefits, which hampers adoption. Misconceptions around the concept itself, the role of virtualization, and the associated algorithms pose a significant obstacle.
Furthermore, the desire to accommodate new services in the automation industry results in a pattern of constantly increasing complexity in industrial networks. This is compounded by the requirement to provide stringent deterministic service guarantees for characteristically different applications, which poses a significant challenge for management, configuration, and maintenance, as existing solutions are architecturally inflexible.
Therefore, the first contribution of this thesis addresses these misconceptions by providing a comparative analysis of programmable network concepts, detailing how software-defined networking compares with other concepts and how its principles can be leveraged to evolve industrial networks.
Armed with the fundamental principles of programmable networks, the second contribution identifies virtualization technologies and proposes novel algorithms to provide varied quality of service guarantees on converged time-sensitive Ethernet networks using software-defined networking concepts.
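As a simplified illustration of one kind of quality-of-service check on a converged time-sensitive Ethernet link, the following sketch performs a utilization-based admission test for periodic streams; the stream model, reserved share, and numbers are assumptions for demonstration and do not reproduce the algorithms proposed in the thesis.

```python
# Simplified utilization-based admission test for periodic streams on one
# time-sensitive Ethernet link; stream model and threshold are assumed for
# illustration and do not reproduce the algorithms proposed in the thesis.

LINK_CAPACITY_BPS = 1_000_000_000          # 1 Gbit/s link

def admit(streams, reserved_share=0.75):
    """Admit streams as long as their combined rate stays below the reserved share."""
    admitted, used_bps = [], 0.0
    for name, frame_bytes, period_s in streams:
        rate_bps = frame_bytes * 8 / period_s
        if used_bps + rate_bps <= reserved_share * LINK_CAPACITY_BPS:
            used_bps += rate_bps
            admitted.append(name)
    return admitted, used_bps

streams = [("control_loop", 128, 0.001), ("camera", 1400, 0.0005), ("telemetry", 256, 0.01)]
print(admit(streams))
```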
Finally, a performance analysis of a software-defined hybrid deployment solution for control and management of time-sensitive Ethernet networks that integrates proposed novel algorithms is presented as an industrial use-case that enables industrial operators to harness the full potential of time-sensitive networks.
IoT is defined as a paradigm where "things" have sensing, actuating, communicating, and self-configuring abilities and are connected to each other and to the Internet. Recent advancements in the manufacturing industry have made it possible to produce embedded devices with various sensors and actuators in large numbers at reduced cost. As part of the IoT revolution, everyday devices such as televisions, refrigerators, cars, and even industrial machines are now connected IoT devices. Recent studies have predicted that by 2025 there will be over 75 billion such IoT devices connected to the Internet.
The providers of IoT based services want to integrate their services to satisfy customer requirements. For example, in the mobility scenario, different mobility solution providers want to offer a multi-modal ticket to their customers jointly. In such a distributed and loosely coupled environment, each owner and stakeholder wants to secure his/her own integrity, confidentiality, and functionality goals. This means that distributed rules and conditions defined by the individual owners must be enforced on the participating entities (e.g., customers or partners using their services). The owners and stakeholders may not necessarily trust each other's actions. Therefore, a mechanism is required that guarantees the rules and conditions specified by the different owners.
Attacks on IoT devices and similar computing systems are increasing and getting more advanced. IoT devices are often constrained, i.e., they have limited processing power, memory, and energy. Security mechanisms designed for traditional computing systems, e.g., computers, servers, or mobile computing devices such as smartphones, may not fit in those constrained IoT devices. Weak security mechanisms and unenforced security measures were one of the main reasons for recent successful attacks on IoT devices and services. As IoT is now used in many sensitive places, including critical infrastructures, securing them becomes more critical than ever. This thesis focuses on developing mechanisms that secure IoT devices and services and enforcing the rules and conditions specified by the owners on entities that want to access owners' resources.
In classical computer systems, security automata are used for specifying security policies and monitoring mechanisms are used for enforcing such policies. For instance, a reference monitor observes and stops the execution when the security policies are about to be violated, thus, the security policies are enforced. To restrict the adversary from using protected IoT devices or services for malicious purposes, it is required to ensure that a workflow must be followed to access the protected resource. In distributed IoT systems where the policies are governed by different owners, each owner would like to specify their rules and conditions in their workflows. The workflows contain tasks that must be performed in a particular order. The goal of this thesis is to develop mechanisms to specify and enforce these workflows in the distributed IoT environment.
This thesis introduces a distributed WFAC framework that restricts entities to doing only what they are allowed to do in a collaborative environment. To gain access to a service protected by the WFAC framework, every workflow participant must prove that he/she is in a particular state of an authorized workflow. Authorized means two things: (a) the owner has authorized the workflow to be executed, and (b) the workflow participant is authorized to execute it. This restricts the adversary's access to the devices and their services. The security policies defined by different owners are modeled as workflows and specified using Petri Nets. The policies are then enforced with the help of the WFAC framework, which supports error handling, accountability, integration of practitioner-friendly tools, and interoperability with existing security mechanisms such as OAuth. Thus, the WFAC framework guarantees the integrity of workflows in a distributed environment.
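The enforcement idea can be sketched with a tiny Petri-net-style workflow: a transition may only fire when all of its input places hold tokens, and access is granted only once a token has reached the designated place. Place and transition names below are illustrative assumptions, not the policies or implementation of the thesis.

```python
# Minimal sketch of enforcing a workflow specified as a Petri net: a transition may
# only fire when all its input places hold tokens, and access to the protected
# resource is granted only once a token reaches the designated "granted" place.
# Place and transition names are illustrative assumptions, not from the thesis.

class PetriNetWorkflow:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)            # place -> token count
        self.transitions = transitions          # name -> (input places, output places)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if any(self.marking.get(p, 0) < 1 for p in inputs):
            raise PermissionError(f"transition '{name}' not enabled")
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

    def access_allowed(self):
        return self.marking.get("granted", 0) > 0

wf = PetriNetWorkflow(
    marking={"start": 1},
    transitions={
        "authenticate": (["start"], ["authenticated"]),
        "approve_order": (["authenticated"], ["granted"]),
    },
)
wf.fire("authenticate")
wf.fire("approve_order")
print(wf.access_allowed())   # True only after the prescribed task order
```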
Hazardous materials (hazmat) have become important goods for satisfying industrial and customer demand in our modern society. The transportation of these materials is always associated with safety, security, and environmental concerns due to the dangerous nature of the cargo. In order to improve the safety of the transportation process, hazmat transportation problems have become a popular research topic in the field of operations research. This thesis contributes to the ongoing research on the hazmat transportation problem. It provides an extensive overview of the existing literature on the hazardous materials transportation problem and offers a new classification extending the existing ones. With particular focus on the hazardous materials vehicle routing problem (HMVRP), this thesis compares different risk models and analyses their influence on the problem outcomes. Additionally, heuristic and meta-heuristic solution procedures are proposed for handling the NP-hard nature of the problem.
For this purpose, four different studies are conducted. Study 1 presents a state-of-the-art literature review covering over 300 contributions to the hazmat transportation problem. The historical development of the research field is analyzed and the most important journals are identified. A detailed classification focusing on hazmat transportation on public roads is provided. Furthermore, the study identifies research gaps and presents new research opportunities. Studies 2 and 3 investigate the effects of path generation in a realistic urban network on the outcomes of the HMVRP. Additionally, different risk models for the HMVRP are compared and their influence on the problem solutions is analyzed. Study 2 proposes a simple but effective heuristic algorithm to solve the HMVRP with load-independent risk models. Study 3 extends the focus and includes load-dependent risk models. The influence of six different risk models on the solution outcomes of the HMVRP is compared and the trade-off between risk minimization and the minimization of traveled distance is investigated. For this purpose, more than 1,700 problem instances are solved to optimality using CPLEX. In Study 4, a hybrid genetic algorithm (HGA) for solving the HMVRP with a load-dependent risk model is proposed. The HGA aims to find Pareto-optimal solutions for the bi-objective HMVRP when risk and travel distance are addressed simultaneously. The structure of the HGA is explained and experimental findings are presented.
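To make the notion of a load-dependent risk model concrete, the following sketch evaluates risk and distance for a single route, letting each arc's risk contribution scale with the load still on board; the concrete formula and all numbers are illustrative assumptions, not one of the six models compared in the thesis.

```python
# Illustrative sketch of a load-dependent risk evaluation for a single hazmat route:
# the risk contribution of each arc scales with the load still on board, modeled here
# as accident probability x exposed population x remaining load. All numbers and the
# concrete risk formula are assumptions, not one of the six models from the thesis.

def route_risk_and_distance(arcs, deliveries, initial_load):
    """arcs: list of (length_km, accident_prob, population); deliveries: load dropped after each arc."""
    load, risk, distance = initial_load, 0.0, 0.0
    for (length_km, accident_prob, population), dropped in zip(arcs, deliveries):
        distance += length_km
        risk += accident_prob * population * load
        load -= dropped                      # vehicle gets lighter after each customer
    return risk, distance

arcs = [(12.0, 1e-6, 5000), (8.0, 2e-6, 12000), (20.0, 5e-7, 800)]
deliveries = [3.0, 4.0, 3.0]                 # tons delivered after each arc
print(route_risk_and_distance(arcs, deliveries, initial_load=10.0))
```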
In conclusion, this thesis contributes to an improved understanding of the general development of the research field of hazmat logistics and of the influence of different risk models on the solution outcomes of the HMVRP. Additionally, heuristic solution methods are proposed and tested for finding compromise solutions when the bi-objective case of risk and distance minimization is addressed. Furthermore, this thesis helps new researchers access the field of hazmat logistics, as it provides a structured overview of the research area while pointing out research gaps. To address some of the identified research gaps, the thesis provides an extensive analysis of risk modelling approaches, thereby offering new insights into basic research on risk modelling for the HMVRP. Finally, to overcome the long computation times for large problem instances, heuristic solution approaches are proposed.
Although Pompey held the highest office in Rome three times with the consulship, he has, since Mommsen, carried the image of an extremely successful general and organizer but a moderately gifted, indeed incompetent, politician. As a result, the political actions of this powerful figure have so far received little attention. This study examines in detail the political processes of the years 54 to 49, a period in which Pompey, during his third consulship, which he held without a colleague, possessed the greatest scope for action and shaping policy. It asks about his political goals, their implementation and communication, and the reactions of Rome's ruling class to his measures. His policy in the year 52 reveals a clear political concept: the coordinated and interlocking measures aimed at stabilizing the existing political system and strengthening the power of the leading men of the Senate, the heirs of the elite installed by Sulla. In doing so he avoided the problems of Sulla's reforms by keeping the collateral damage of his measures low and by not insisting on anchoring a pre-eminent position of power for himself institutionally. He intended instead to establish such a position in the form of legitimate relationships of service and obligation towards Senate and people.
In this he relied on his own undeniably outstanding services to the res publica, which, according to tradition, obliged Senate and people to reciprocate and thus also to acknowledge a corresponding position of power. The examination of relationships of rivalry shows that Pompey moreover aimed, by resolving internal crises, to monopolize future services to Rome in his own person. To this end, his five-year extraordinary imperium guaranteed him access to the necessary instruments of power: Pompey intended, by continuously rendering services, to be recognized as the patron of Rome. By acting through loyal magistrates, by skillfully exploiting monarchical tendencies within the ruling class, with the help of Caesar as a strong counterweight, and finally through successful crisis management, he repeatedly managed to break the determined resistance of leading senators who opposed any development towards sole rule, so that by the spring of 50 he had come very close to the position of power he sought. How unstable this position was, however, became apparent when Pompey had to stay away from politics in the city of Rome for a few months. In the power vacuum that then arose, the constellations changed. Pompey thereby fell into a prolonged phase of political weakness from which, after his return, he was unable to free himself until the end of the period under investigation.
The reasons for out-migration from rural areas are as varied as its consequences. In order to take appropriate countermeasures and to respond adequately to the needs of the inhabitants, a shared discourse about the future of rural areas is necessary. Through corresponding citizen participation and the active shaping of living conditions, quality of life can be increased and living there can remain possible.
So that the wishes and needs of the citizens of the ILE national park municipalities can be addressed optimally and in good time, five of the six municipalities took part in a research project on the topic of internal development (Innenentwicklung). The municipalities of Bayerisch-Eisenstein, Frauenau, Neuschönau, Spiegelau, and St. Oswald-Riedlhütte received support from the Chair of Regional Geography at the University of Passau and the Amt für ländliche Entwicklung Niederbayern (Office for Rural Development of Lower Bavaria). The aim of the project is to use extensive citizen participation to derive recommendations for action on internal development in the areas of local supply, housing, public space, social participation, and climate protection, and thus to make the municipalities sustainable and age-appropriate.
§ 626 para. 1 BGB requires two conditions for extraordinary dismissal: the suitability of the facts as an important reason and a comprehensive balancing of interests in the individual case. Depending on its significance and quality, the breach of an ancillary duty in the employment relationship is to be regarded as an important reason under § 626 para. 1 BGB. A property offence constitutes a breach of such an ancillary duty. Thus a property offence as such, according to its significance and quality in the individual case, is also to be regarded as an important reason for extraordinary dismissal. The balancing of interests under § 626 para. 1 BGB determines from which degree of loss of trust a dismissal without notice is justified. The amount of the damage is taken into account in assessing the loss of trust relevant for dismissal without notice. The analysis of the case law of the Federal Labour Court (BAG) has shown that only some decisions after the "Emmely" case take the length of service into account as a criterion in the balancing of interests under § 626 para. 1 BGB. In most cases, however, the BAG has departed from its previous case law: dismissal without notice continued to be effective in cases of property offences, without weighing the interests appropriately against each other and without considering milder means. For these reasons, the BAG should revise its view and select the criteria of the interests deliberately and weigh them against each other.
The aim of this work is to investigate the narratives and discursive strategies of politically right-wing actors and to assess them from a pedagogical perspective. The investigation focuses on communication within the social web. A discourse analysis is used as the method for carrying out the research project. The analysis concludes that both the narratives produced and the discursive strategies employed show strong similarities across all the discursive arenas examined, regardless of whether the arena has predominantly right-wing extremist, right-wing radical, or right-wing populist tendencies. This suggests that a large proportion of politically right-wing actors spread recurring narratives and underpin them with the same discursive strategies; there appears to be no clear discursive boundary between right-wing extremism, right-wing radicalism, and right-wing populism. As a result, there is a danger that right-wing extremist ideas can increasingly penetrate the mainstream of society.
Recording Internet activity and linking it with personal data has become a key resource for many paid and free services on the web. These services are, on the one hand, web applications such as the maps/navigation or web search provided by Google, which are used free of charge every day. On the other hand, they are all those websites that, mostly free of charge, provide news or general information on various topics. By visiting and using these web services, all information processed within the web service is passed on to the service provider. This comprises not only the profile data stored in the user account of the web service, such as name or address, but also the activity with the web service, such as the clicking of links or the time spent on a page.
In addition, however, there are countless third parties, mostly embedded in the background of these web services, that record and analyze user behavior across the entire web activity, spanning multiple websites. The use of various techniques, generally hidden from the user, serves to track users' online behavior precisely and to collect large amounts of sensitive data. This practice is referred to as web tracking and is mainly used by advertising companies. The collected data are often personal and a valuable resource for companies, for example to serve personalized advertising matched to the user profile. The use of this personal data, however, also has more far-reaching consequences, reflected among other things in price adjustments for users with particular profile attributes, such as the use of expensive end devices. The goal of this work is to increase users' privacy on the Internet and to significantly reduce user tracking through web tracking. Four challenges arise, each forming a research focus of this work: (1) a systematic analysis and classification of the tracking techniques in use, (2) an examination of existing protection mechanisms and their weaknesses, (3) the design of a reference architecture for protection against web tracking, and (4) the design of an automated test environment under real-world conditions in order to examine the reduction of web tracking achieved by the developed protective measures. Each of these research foci provides new contributions towards the overarching goal: the development of protective measures against the disclosure of sensitive user data on the Internet. The first scientific contribution of this dissertation is a comprehensive evaluation of the web tracking techniques and methods in use, as well as their dangers, risks, and implications for the privacy of Internet users. The evaluation additionally includes an examination of existing tracking protection mechanisms and their weaknesses. The insights gained are decisive for the new approaches developed in this work and improve the hitherto insufficient protection against web tracking. The second scientific contribution is the development of a robust classification of web tracking, the design of an efficient architecture for long-term studies of web tracking, and an interactive visualization of the occurrence of web tracking on the Internet. The new classification approach for identifying tracking is based on measuring the entropy of the information content of cookies. The results of the long-term web tracking studies include 1,209 identified tracking domains on the most visited websites in Germany. Within the top 25 websites, an average of 45 tracking elements per website was found. The tracker with the highest potential for creating a user profile was doubleclick.com, as it monitors 90% of the websites. The evaluation of the examined tracking network furthermore provided a detailed insight into the tracking technique based on redirect links. For this, we analyzed 1.2 million HTTP traces from months of crawls of the 50,000 internationally most visited websites. The results show that 11.6% of these websites use HTTP redirects, hidden in website links, for tracking.
This technique is used to redirect the user's browsing path after a click through a chain of (tracking) servers, which are usually not visible, before the intended link target is loaded. In this scenario the tracker captures valuable connection metadata about the content, topic, or user interests of the website. We provide the visualization of the tracking ecosystem as an interactive open-source web tool. The third scientific contribution of this dissertation is the design of two novel protection mechanisms against web tracking and the construction of an automated simulation environment under real-world conditions in order to verify the effectiveness of the implementations. The focus lies on the two most widely used tracking techniques: cookies (where a unique ID is stored on the user's device) and browser fingerprinting. The latter describes a method of collecting a multitude of device properties in order to uniquely (re-)identify the user without storing a unique ID on the device. To examine the effectiveness of the protection mechanisms against web tracking developed in this work, we implemented and evaluated the protection concepts directly in the Chromium browser. The result shows a successful reduction of web tracking by 44%. In addition, the "Site Isolation" concept developed in this work improves the privacy of the private browsing mode, enables a manual storage time limit for cookies, and protects the browser against various threats such as CSRF (Cross-Site Request Forgery) or CORS (Cross-Origin Resource Sharing). Site Isolation stores the state of the local website in separate containers and can thereby prevent various tracking methods such as cookies, localStorage, or redirect tracking. In the evaluation of 1.6 million websites, we showed that the tracker doubleclick.com has the highest potential to track users and is present on 25% of the 40,000 internationally most visited websites. Finally, we demonstrate a robust browser fingerprinting protection in our extended Chromium browser. Testing our prototype with 70,000 browser sessions shows that our browser protects the user against so-called browser fingerprinting tracking. Compared to five other browser fingerprinting tools, our prototype achieved the best results and is the first protection mechanism against both Flash and Canvas fingerprinting.
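As a small illustration of the entropy-based classification idea mentioned above, the following sketch computes the Shannon entropy of cookie values and treats high-entropy values as likely unique identifiers, and therefore as tracking candidates. The threshold, length cutoff, and example cookies are assumptions for demonstration only, not the classifier of the dissertation.

```python
# Illustrative sketch of entropy-based cookie classification: the Shannon entropy of
# a cookie value is used as an indicator of whether it carries a unique identifier.
# The threshold and the example cookies are assumptions, not values from the thesis.
import math
from collections import Counter

def shannon_entropy(value: str) -> float:
    counts = Counter(value)
    total = len(value)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_tracking(value: str, threshold_bits_per_char=3.0) -> bool:
    return len(value) > 8 and shannon_entropy(value) >= threshold_bits_per_char

cookies = {
    "lang": "de-DE",
    "consent": "true",
    "uid": "f47ac10b58cc4372a5670e02b2c3d479",   # random-looking identifier
}
for name, value in cookies.items():
    print(name, round(shannon_entropy(value), 2), looks_like_tracking(value))
```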
With the frequency and impact of data breaches rising, it has become essential for organizations to automate intrusion detection via machine learning solutions. This generally comes with numerous challenges, among others high class imbalance, changing target concepts, and difficulties in conducting sound evaluations. In this thesis, we adopt a user-centered anomaly detection perspective to address selected challenges of intrusion detection through a real-world use case in the identity and access management (IAM) domain. In addition to the previous challenges, salient properties of this particular problem are the high relevance of categorical data, limited feature availability, and a total absence of ground truth.
First, we ask how to apply anomaly detection to IAM audit logs containing a restricted set of mixed (i.e. numeric and categorical) attributes. Then, we inquire how anomalous user behavior can be separated from normality, and this separation evaluated without ground truth. Finally, we examine how the lack of audit data can be alleviated in two complementary settings. On the one hand, we ask how to cope with users without relevant activity history ("cold start" problem). On the other hand, we seek how to extend audit data collection with heterogeneous attributes (i.e. categorical, graph and text) to improve insider threat detection.
After aggregating IAM audit data into sessions, we introduce and compare general anomaly detection methods for mixed data to a user identification approach designed to learn the distinction between normal and malicious user behavior. We find that user identification outperforms general anomaly detection and is effective against masquerades. An additional clustering step helps reduce false positives among similar users. However, user identification is not effective against insider threats. Furthermore, the results suggest that the current scope of our audit data collection should be extended.
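The user identification idea can be sketched as follows: a classifier learns to predict which user produced a session, and a session attributed to someone other than the claimed user is flagged as a potential masquerade. The features and data below are synthetic placeholders, not the IAM audit features or model used in the thesis.

```python
# Minimal sketch of the user identification idea: a classifier learns to predict which
# user produced a session; sessions whose predicted user disagrees with the claimed
# user are flagged as potential masquerades. Features and data here are synthetic
# placeholders, not the IAM audit features used in the thesis.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_sessions, n_features, n_users = 500, 12, 10

X = rng.normal(size=(n_sessions, n_features))          # per-session feature vectors
y = rng.integers(0, n_users, size=n_sessions)           # user who produced each session
X += y[:, None] * 0.5                                    # give each user a distinct "style"

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def is_masquerade(session_features, claimed_user):
    """Flag a session if the model attributes it to someone other than the claimed user."""
    return clf.predict(session_features.reshape(1, -1))[0] != claimed_user

print(is_masquerade(X[0], claimed_user=y[0]))   # expected: False for a genuine session
```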
In order to tackle the "cold start" problem, we adopt a zero-shot learning approach. Focusing on the CERT insider threat use case, we extend an intrusion detection system by integrating user relations to organizational entities (like assignments to projects or teams) in order to better estimate user behavior and improve intrusion detection performance. Results show that this approach is effective in two realistic scenarios.
Finally, to support additional sources of audit data for insider threat detection, we propose a method representing audit events as graph edges with heterogeneous attributes. By performing detection at fine-grained level, this approach advantageously improves anomaly traceability while reducing the need for aggregation and feature engineering. Our results show that this method is effective to find intrusions in authentication and email logs.
Overall, our work suggests that masquerades and insider threats call for different detection methods. For masquerades, user identification is a promising approach. To find malicious insiders, graph features representing user context and relations to other entities can be informative. This opens the door for tighter coupling of intrusion detection with user identities, roles and privileges used in IAM solutions.
Newly arising phenomena in the occupational realm strongly shape contemporary work settings. These developments heavily affect how individuals work within and beyond organizational boundaries. Two phenomena associated with the changing nature of work have been especially prevalent in work settings and intensively discussed in public debates. First, organizations started to introduce mindfulness practices to their workforce. Rooted in spirituality and formerly used in clinical therapy, mindfulness is applied as a human resource development practice to train employees and managers to cope with increased work intensification. Second, digitization and the importance of individualization opened up the path for work settings beyond organizational boundaries on crowdworking online platforms. On these online platforms, workers process tasks independently and remotely. Research has only just started to address the implications and meaning of mindfulness practices in organizations and the rise of crowdworking platforms, and several questions remain unanswered. This dissertation addresses unanswered but pressing questions related to these two phenomena shaping contemporary work settings. Structured in four essays, the first two essays address the application and meaning of mindfulness practices. The first essay analyzes the meaning and interpretations of these new practices within organizations. The second essay takes contextual factors of the organizational environment into account and investigates their relevance for the successful implementation of mindfulness practices. The latter two essays are dedicated to work attitudes and behavior on crowdworking online platforms. Essay three captures individuals' motivation for working on such platforms and its effects on workers' work performance. The last essay deals with the role of professional crowdworking online communities in the work experience and assesses the effects of social support in these communities on occupational identification, work meaningfulness, and ultimately on work engagement. Each essay in this dissertation generates new insights into arising phenomena in contemporary work settings. They address several timely yet unanswered research questions regarding these rising phenomena and thereby offer a deeper and more nuanced understanding of the role that mindfulness practices and crowdworking online platforms play in the context of the future of work.
Comparative sport pedagogy, as a subdiscipline of and at the same time an intersection between comparative education and general sport pedagogy, takes on the task of comparing sport-related characteristics of two or more countries or cultures and of making the resulting gain in knowledge usable in the most diverse spheres; it is a scientific discipline as well as a research method whose possibilities can be put to use not only for physical education itself but also for the (university) education of the future arrangers of physical education lessons. Within the framework of a melioristically motivated comparison, this research project sets itself the goal of identifying differences and similarities in university physical education teacher training in Germany and the USA, analyzing their causes, and on this basis pointing out potentials, development perspectives, and options for action for the positive advancement of both university training systems. Both extra-systemic and intra-systemic aspects of investigation are taken into account.
Primary school teachers' subjective theories about parents were explored in a qualitative-empirical study in order to contribute to school-parent cooperation. Luhmann's systems theory and Fend's theory of the school form the theoretical basis. The relationship between the two sides was investigated by means of three fields of interaction that mark decisive spaces of encounter between primary school teachers and parents (Luhmann, 2014; Fend, 2008). The subjective theories of the study participants were captured using a semi-standardized guided interview and an adapted structure-laying technique (Scheele & Groeben, 1988). From the teachers' perspective, the individual and general subjective theories were worked out. The results reveal beliefs that are factually incorrect and in part highly inhibiting for cooperation, even though the desire for a cooperative relationship was consistently emphasized.
This thesis investigates the use of ekphrasis in audiovisual texts. The term, which originates from ancient rhetoric, denotes the literary description of visual art and is transferred to film within the framework of this study. The aim of analyzing filmic descriptions of art, painting in particular, is to determine how artworks are semantically charged or, respectively, functionalized for the purpose of conveying meaning. The central objects of research are accordingly the media of image and film; the entire work is thus embedded in the context of the discourse on intermediality. The corpus consists of a conglomerate of texts that, alongside central key works, comprises in particular more recent films, produced between 2011 and 2016, that have received little or no scholarly attention.
The current movement towards a smart grid serves as a solution to present power grid challenges by introducing numerous monitoring and communication technologies. A dependable, yet timely exchange of data is on the one hand an existential prerequisite to enable Advanced Metering Infrastructure (AMI) services, yet on the other a challenging endeavor, because the increasing complexity of the grid fostered by the combination of Information and Communications Technology (ICT) and utility networks inherently leads to dependability challenges.
To be able to counter this dependability degradation, current approaches based on high-reliability hardware or physical redundancy are no longer feasible, as they lead to increased hardware costs or maintenance, if not both. The flexibility of these approaches regarding vendor and regulatory interoperability is also limited. However, a suitable solution to the AMI dependability challenges is also required to maintain certain regulatory-set performance and Quality of Service (QoS) levels.
While a part of the challenge is the introduction of ICT into the power grid, it also serves as part of the solution. In this thesis a Network Functions Virtualization (NFV) based approach is proposed, which employs virtualized ICT components serving as a replacement for physical devices. By using virtualization techniques, it is possible to enhance the performability in contrast to hardware based solutions through the usage of virtual replacements of processes that would otherwise require dedicated hardware. This approach offers higher flexibility compared to hardware redundancy, as a broad variety of virtual components can be spawned, adapted and replaced in a short time. Also, as no additional hardware is necessary, the incurred costs decrease significantly. In addition to that, most of the virtualized components are deployed on Commercial-Off-The-Shelf (COTS) hardware solutions, further increasing the monetary benefit.
The approach is developed by first reviewing currently suggested solutions for AMIs and related services. Using this information, virtualization technologies are investigated for their performance influences, before a virtualized service infrastructure is devised, which replaces selected components by virtualized counterparts. Next, a novel model, which allows the separation of services and hosting substrates is developed, allowing the introduction of virtualization technologies to abstract from the underlying architecture. Third, the performability as well as monetary savings are investigated by evaluating the developed approach in several scenarios using analytical and simulative model analysis as well as proof-of-concept approaches. Last, the practical applicability and possible regulatory challenges of the approach are identified and discussed.
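The availability figures discussed below can be made concrete with a small steady-state availability calculation; the MTTF/MTTR values and the independence assumption for replicas below are illustrative assumptions, not the parameters of the evaluated scenarios.

```python
# Illustrative steady-state availability calculation: a single component is available
# A = MTTF / (MTTF + MTTR); n independent redundant virtual replicas fail together only
# if all replicas are down, i.e. A_n = 1 - (1 - A)^n. Numbers are assumptions only.

def availability(mttf_h, mttr_h):
    return mttf_h / (mttf_h + mttr_h)

def redundant_availability(a_single, n_replicas):
    return 1 - (1 - a_single) ** n_replicas

a_hw = availability(mttf_h=8760, mttr_h=72)           # dedicated hardware device
a_vm = availability(mttf_h=4380, mttr_h=1)             # single virtual instance, fast respawn
print(f"hardware: {a_hw:.4f}")
print(f"1 VM:     {a_vm:.4f}")
print(f"2 VMs:    {redundant_availability(a_vm, 2):.6f}")
```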
Results confirm that—under certain assumptions—the developed virtualized AMI is superior to the currently suggested architecture. The availability of services can be severely increased and network delays can be minimized through centralized hosting. The availability can be increased from 96.82% to 98.66% in the given scenarios, while decreasing the costs by over 60% in comparison to the currently suggested AMI architecture. Lastly, the performability analysis of a virtualized service prototype employing performance analysis and a Musa-Okumoto approach reveals that the AMI requirements are fulfilled.
Computer vision aims at developing algorithms to extract high-level information from images and videos. In industry, for instance, such algorithms are applied to guide manufacturing robots, to visually monitor plants, or to assist human operators in recognizing specific components. Recent progress in computer vision has been dominated by deep artificial neural networks, i.e., machine learning methods simulating the way that information flows in our biological brains and the way that our neural networks adapt and learn from experience. For these methods to learn how to accurately perform complex visual tasks, large amounts of annotated images are needed. Collecting and labeling such domain-relevant training datasets is, however, a tedious and sometimes impossible task. Therefore, it has become common practice to leverage pre-available three-dimensional (3D) models instead, to generate synthetic images for the recognition algorithms to be trained on. However, methods optimized over synthetic data usually suffer a significant performance drop when applied to real target images. This is due to the realism gap, i.e., the discrepancies between synthetic and real images (in terms of noise, clutter, etc.). In my work, three main directions were explored to bridge this gap.
First, an innovative end-to-end framework is proposed to render realistic depth images from 3D models, as a growing number of solutions (especially in the industry) are utilizing low-cost depth cameras (e.g., Microsoft Kinect and Intel RealSense) for recognition tasks. Based on a thorough study of these devices and the different types of noise impairing them, the proposed framework simulates their inner mechanisms, comprehensively modeling vital factors such as sensor noise, material reflectance, surface geometry, etc. Able to simulate a wide panel of depth sensors and to quickly generate large datasets, this framework is used to train algorithms for various recognition tasks, consistently and significantly enhancing their performance compared to other state-of-the-art simulation tools.
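To give a feel for the kind of effects such a simulation has to reproduce, the following sketch degrades a clean synthetic depth map with distance-dependent axial noise and random dropout; the noise model and magnitudes are simple assumptions and far more basic than the comprehensive sensor simulation developed in the thesis.

```python
# Minimal sketch of degrading a clean synthetic depth map with sensor-like noise,
# illustrating the kind of effects such a simulation framework models (axial noise
# growing with distance, missing measurements). Noise magnitudes are assumptions and
# far simpler than the comprehensive sensor model developed in the thesis.
import numpy as np

def degrade_depth(depth_m, axial_coeff=0.0019, dropout_prob=0.02, seed=0):
    """Add distance-dependent Gaussian axial noise and random dropout (invalid pixels)."""
    rng = np.random.default_rng(seed)
    sigma = axial_coeff * depth_m ** 2              # noise grows quadratically with depth
    noisy = depth_m + rng.normal(0.0, sigma)
    noisy[rng.random(depth_m.shape) < dropout_prob] = 0.0   # 0 marks missing depth
    return noisy

clean = np.full((480, 640), 1.5)                    # flat wall at 1.5 m, as a placeholder
noisy = degrade_depth(clean)
print(noisy.mean(), (noisy == 0).mean())
```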
In some cases, however, relevant 2D or 3D object representations for generating synthetic samples are not available. Considering this different case of data scarcity, a solution is then proposed to incrementally build a representation of visual scenes from partial observations. Provided observations are localized relative to one another based on their content and registered in a global memory with spatial properties. Simultaneously, this memory can be queried to render novel views of the scene. Furthermore, unobserved regions can be hallucinated in memory, consistently with previous observations, hallucinations, and global priors. The efficacy of the proposed mnemonic and generative system, trainable end-to-end, is demonstrated on various 2D and 3D use cases.
Finally, an advanced convolutional neural network pipeline is introduced, tackling the realism gap from a novel angle. While most methods addressing this problem focus on bringing synthetic samples (or the knowledge acquired from them) closer to the real target domain, the proposed solution performs the opposite process, mapping unseen target images into controlled synthetic domains. The pre-processed samples can then be handed to downstream recognition methods, themselves trained purely on similar synthetic data, greatly improving their accuracy.
For each approach, a variety of qualitative and quantitative studies are detailed, providing successful comparisons to state-of-the-art methods. By proposing solutions to bridge the realism gap from either side, as well as a pipeline to improve the acquisition and generation of new visual content, this thesis provides a unique perspective on the challenges of data scarcity when building robust recognition systems.
A plethora of resources made available via retrieval systems in digital libraries remains untapped in the so-called long tail of the Web. These long-tail websites receive considerably fewer visits than major Web hubs.
Zero-effort queries ease the discovery of long-tail resources by proactively retrieving and presenting information based on a user's context. However, zero-effort queries over existing digital library structures are challenging, since the underlying retrieval system is only accessible via an API: the information need must be expressed as a query rather than by directly optimizing the ranking between context and resources inside the retrieval system. We address three research questions that arise from replacing the user's information-seeking process with zero-effort queries.
Our first question addresses the transformation of a user query into an automatic query derived from the context. We present means to 1) identify the relevant context on different levels of granularity, 2) derive an information need from the context via keyword extraction and personalization, and 3) express this information need in a query scheme that avoids over- or under-specified queries. We address the cold-start problem with an approach that bootstraps user profiles from social media, even for passive users.
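As a minimal sketch only, and not the pipeline actually developed in the thesis, deriving such an automatic keyword query from the current context could look as follows; the function name, the use of TF-IDF, and the cap on query terms are illustrative assumptions:

    from sklearn.feature_extraction.text import TfidfVectorizer

    def build_zero_effort_query(context_text, background_corpus, max_terms=5):
        # Rank the context's terms by TF-IDF against a background corpus and
        # keep only the top few, so the resulting keyword query is neither
        # over- nor under-specified. Personalization is omitted here.
        vectorizer = TfidfVectorizer(stop_words="english")
        vectorizer.fit(background_corpus + [context_text])
        scores = vectorizer.transform([context_text]).toarray()[0]
        terms = vectorizer.get_feature_names_out()
        ranked = sorted(zip(scores, terms), reverse=True)
        keywords = [term for score, term in ranked[:max_terms] if score > 0]
        return " ".join(keywords)   # keyword query for the library's search API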
With the second question, we address the presentation of resources in zero-effort query scenarios, presenting guidelines for presentation interfaces in the browser and a visualization of the triadic relationship between context, query and results. QueryCrumbs, a compact query-history visualization, supports recalling information found in the past as well as exploratory search by visualizing qualitative and quantitative query similarity.
Our last question addresses the gap between (simple) keyword queries and the representation of resources by rich and complex metadata. We investigate and extend feature representation learning techniques centered around the skip-gram model with negative sampling. Finally, we present an approach to learn representations from network and text jointly that can cope with the partial absence of one modality.
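For reference, the standard skip-gram objective with negative sampling (Mikolov et al., 2013), on which these techniques build, maximizes for a centre item \(c\) with observed context item \(o\) and \(k\) negative samples drawn from a noise distribution \(P_n\)
\[ \log \sigma\!\left(\mathbf{u}_o^{\top}\mathbf{v}_c\right) + \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}\!\left[\log \sigma\!\left(-\mathbf{u}_{w_i}^{\top}\mathbf{v}_c\right)\right], \]
where \(\mathbf{v}_c\) and \(\mathbf{u}_o\) are the input and output embedding vectors and \(\sigma\) is the logistic function; how the thesis extends this objective to joint network and text representations is not detailed in the abstract.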
Experimental results show that our zero-effort query and user-profile generation approach performs close to human level, and that the visualizations are helpful in terms of transparency, efficiency and support for exploratory search. These results indicate that the proposed zero-effort query approach indeed eases the discovery of long-tail resources and that the accompanying visualizations further facilitate this process. The joint representation model provides a first step toward bridging the gap between query and resource representation, and we plan to investigate this route further in the future.